Unnamed: 0 | text_prompt | code_prompt
---|---|---|
int64, 0 to 16k | stringlengths 110 to 62.1k | stringlengths 37 to 152k
5,800 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Saving models
It is possible to save fitted Prophet models so that they can be loaded and used later.
In R, this is done with saveRDS and readRDS
Step1: In Python, models should not be saved with pickle; the Stan backend attached to the model object will not pickle well, and will produce issues under certain versions of Python. Instead, you should use the built-in serialization functions to serialize the model to json
Step2: The json file will be portable across systems, and deserialization is backwards compatible with older versions of prophet.
Flat trend and custom trends
For time series that exhibit strong seasonality patterns rather than trend changes, it may be useful to force the trend growth rate to be flat. This can be achieved simply by passing growth='flat' when creating the model
Step4: Note that if this is used on a time series that doesn't have a constant trend, any trend will be fit with the noise term and so there will be high predictive uncertainty in the forecast.
To use a trend besides these three built-in trend functions (piecewise linear, piecewise logistic growth, and flat), you can download the source code from github, modify the trend function as desired in a local branch, and then install that local version. This PR provides a good illustration of what must be done to implement a custom trend, as does this one that implements a step function trend and this one for a new trend in R.
Updating fitted models
A common setting for forecasting is fitting models that need to be updated as additional data come in. Prophet models can only be fit once, and a new model must be re-fit when new data become available. In most settings, model fitting is fast enough that there isn't any issue with re-fitting from scratch. However, it is possible to speed things up a little by warm-starting the fit from the model parameters of the earlier model. This code example shows how this can be done in Python | Python Code:
%%R
saveRDS(m, file="model.RDS") # Save model
m <- readRDS(file="model.RDS") # Load model
Explanation: Saving models
It is possible to save fitted Prophet models so that they can be loaded and used later.
In R, this is done with saveRDS and readRDS:
End of explanation
import json
from prophet.serialize import model_to_json, model_from_json
with open('serialized_model.json', 'w') as fout:
    json.dump(model_to_json(m), fout)  # Save model

with open('serialized_model.json', 'r') as fin:
    m = model_from_json(json.load(fin))  # Load model
Explanation: In Python, models should not be saved with pickle; the Stan backend attached to the model object will not pickle well, and will produce issues under certain versions of Python. Instead, you should use the built-in serialization functions to serialize the model to json:
End of explanation
%%R
m <- prophet(df, growth='flat')
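# The next line is the Python equivalent of the R call above: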
m = Prophet(growth='flat')
Explanation: The json file will be portable across systems, and deserialization is backwards compatible with older versions of prophet.
Flat trend and custom trends
For time series that exhibit strong seasonality patterns rather than trend changes, it may be useful to force the trend growth rate to be flat. This can be achieved simply by passing growth='flat' when creating the model:
End of explanation
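As a usage sketch (not from the original notebook): fitting and forecasting with the flat-growth model follows the usual Prophet workflow. Here df stands for a dataframe with 'ds' and 'y' columns, and the 30-day horizon is an arbitrary choice.
from prophet import Prophet

m = Prophet(growth='flat')
m.fit(df)
future = m.make_future_dataframe(periods=30)
forecast = m.predict(future)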
def stan_init(m):
    """Retrieve parameters from a trained model.

    Retrieve parameters from a trained model in the format
    used to initialize a new Stan model.

    Parameters
    ----------
    m: A trained model of the Prophet class.

    Returns
    -------
    A Dictionary containing retrieved parameters of m.
    """
    res = {}
    for pname in ['k', 'm', 'sigma_obs']:
        res[pname] = m.params[pname][0][0]
    for pname in ['delta', 'beta']:
        res[pname] = m.params[pname][0]
    return res
df = pd.read_csv('../examples/example_wp_log_peyton_manning.csv')
df1 = df.loc[df['ds'] < '2016-01-19', :] # All data except the last day
m1 = Prophet().fit(df1) # A model fit to all data except the last day
%timeit m2 = Prophet().fit(df) # Adding the last day, fitting from scratch
%timeit m2 = Prophet().fit(df, init=stan_init(m1)) # Adding the last day, warm-starting from m1
Explanation: Note that if this is used on a time series that doesn't have a constant trend, any trend will be fit with the noise term and so there will be high predictive uncertainty in the forecast.
To use a trend besides these three built-in trend functions (piecewise linear, piecewise logistic growth, and flat), you can download the source code from github, modify the trend function as desired in a local branch, and then install that local version. This PR provides a good illustration of what must be done to implement a custom trend, as does this one that implements a step function trend and this one for a new trend in R.
Updating fitted models
A common setting for forecasting is fitting models that need to be updated as additional data come in. Prophet models can only be fit once, and a new model must be re-fit when new data become available. In most settings, model fitting is fast enough that there isn't any issue with re-fitting from scratch. However, it is possible to speed things up a little by warm-starting the fit from the model parameters of the earlier model. This code example shows how this can be done in Python:
End of explanation |
5,801 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploring the H-2B Visa Programme
H-2B visas are nonimmigrant visas, which allow foreign nationals to enter the U.S. temporarily and engage in nonagricultural employment which is seasonal, intermittent, a peak load need, or a one-time occurrence.
Summary: It turns out that Texas has the highest need for foreign unskilled employees. However, it is a salmon farm in Alaska that has requested the most workers, while offering them a wage of only $10 an hour.
Step1: 1. How many requests did the Office of Foreign Labor Certification (OFLC) receive in 2015?
Step2: 2. How many jobs did that regard in total? And how many full time positions?
Step3: 3. How many jobs did the ETA National Processing Center actually certify?
Step4: 4. What was the average pay?
Step5: The majority of the jobs are paid hourly, at an average rate of $12.65 an hour.
5. Who earned the least? And what are these people actually doing?
Step6: This table displays the lowest-paid jobs, for which no workers were certified.
Step7: And this table shows that landscape laborers are the ones that are earning the least.
Step8: 6. What was the most common unit of pay (daily, weekly, monthly)?
Step9: 7. Work out the total pay amount paid to H-2B laborers.
Step10: Approx. count * mean per pay unit (Year, Week, Month, Hour assuming 8 hours: about 33 million; Bi-Weekly: about 180,000), i.e. roughly 40 million $ in total.
8. Were there any foreign companies hiring foreign workers in the US? If yes, work out averages by nation.
Step11: 9. Most common job title. Graph this.
Step12: 10. Which US states have the largest need for unskilled workers? Make a graph of this.
Step13: 11. Which industries had the largest need?
Step14: Importing the NAIC_Codes from here.
Step15: 12. Which companies had the largest need? Compare acceptance/denials of each company.
Step16: BONUS: Looking into Silver Bay Seafoods and UK International Soccer Camps.
Silver Bay's claim | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_excel("H-2B_Disclosure_Data_FY15_Q4.xlsx")
df.head()
#df.info()
Explanation: Exploring the H-2B Visa Programme
H-2B visas are nonimmigrant visas, which allow foreign nationals to enter the U.S. temporarily and engage in nonagricultural employment which is seasonal, intermittent, a peak load need, or a one-time occurrence.
Summary: It turns out that Texas has the highest need for foreign unskilled employees. However, it is a salmon farm in Alaska that has requested the most workers, while offering them a wage of only $10 an hour.
End of explanation
df['CASE_NUMBER'].count()
Explanation: 1. How many requests did the Office of Foreign Labor Certification (OFLC) receive in 2015?
End of explanation
df['NBR_WORKERS_REQUESTED'].sum()
df.groupby('FULL_TIME_POSITION')['NBR_WORKERS_REQUESTED'].sum()
Explanation: 2. How many jobs did that regard in total? And how many full time positions?
End of explanation
df['NBR_WORKERS_CERTIFIED'].sum()
df.groupby('FULL_TIME_POSITION')['NBR_WORKERS_CERTIFIED'].sum()
Explanation: 3. How many jobs did the ETA National Processing Center actually certify?
End of explanation
df.groupby('BASIC_UNIT_OF_PAY')['PREVAILING_WAGE'].mean()
df.groupby('BASIC_UNIT_OF_PAY')['BASIC_UNIT_OF_PAY'].count()
Explanation: 4. What was the average pay?
End of explanation
worst_wage = df[df['BASIC_UNIT_OF_PAY'] == 'Hour'].sort_values(by='PREVAILING_WAGE', ascending=True).head()
Explanation: The majority of the jobs are paid hourly, at an average rate of $12.65 an hour.
5. Who earned the least? And what are these people actually doing?
End of explanation
worst_wage[['BASIC_UNIT_OF_PAY', 'PREVAILING_WAGE', 'EMPLOYER_NAME', 'JOB_TITLE', 'WORKSITE_CITY', 'NBR_WORKERS_REQUESTED', 'NBR_WORKERS_CERTIFIED']]
lowest_wages_accepted = df[df['NBR_WORKERS_CERTIFIED'] != 0].sort_values(by='PREVAILING_WAGE', ascending=True).head()
Explanation: This table displays the lowest-paid jobs, for which no workers were certified.
End of explanation
lowest_wages_accepted[['BASIC_UNIT_OF_PAY', 'PREVAILING_WAGE', 'EMPLOYER_NAME', 'JOB_TITLE', 'WORKSITE_CITY', 'NBR_WORKERS_REQUESTED', 'NBR_WORKERS_CERTIFIED']]
Explanation: And this table shows that landscape laborers are the ones that are earning the least.
End of explanation
df.groupby('BASIC_UNIT_OF_PAY')['BASIC_UNIT_OF_PAY'].count()
Explanation: 6. What was the most common unit of pay (daily, weekly, monthly)?
End of explanation
#df.groupby('BASIC_UNIT_OF_PAY')['PREVAILING_WAGE'].describe()
#df.groupby('PREVAILING_WAGE').count()
Explanation: 7. Work out the total pay amount paid to H-2B laborers.
End of explanation
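A rough way to answer question 7 (a sketch, not the author's exact calculation): multiply the mean prevailing wage per pay unit by the number of certified workers in that unit. The 8-hours-per-day assumption for hourly wages mirrors the author's note further down.
totals = df.groupby('BASIC_UNIT_OF_PAY').apply(
    lambda g: g['PREVAILING_WAGE'].mean() * g['NBR_WORKERS_CERTIFIED'].sum())
totals.loc['Hour'] = totals.loc['Hour'] * 8  # assume an 8-hour working day for hourly wages
totals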
df.groupby('EMPLOYER_COUNTRY')['EMPLOYER_COUNTRY'].count()
Explanation: Approx. count * mean per pay unit (Year, Week, Month, Hour assuming 8 hours: about 33 million; Bi-Weekly: about 180,000), i.e. roughly 40 million $ in total.
8. Were there any foreign companies hiring foreign workers in the US? If yes, work out averages by nation.
End of explanation
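The cell above only counts filings per employer country; a sketch of the requested averages by nation, using the same columns, could look like this:
df.groupby('EMPLOYER_COUNTRY').agg({'PREVAILING_WAGE': 'mean',
                                    'NBR_WORKERS_REQUESTED': 'sum'})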
#x = df.groupby('JOB_TITLE')['JOB_TITLE'].value_counts()
df['JOB_TITLE'].value_counts().head(10)
plt.style.use('ggplot')
df['JOB_TITLE'].value_counts(ascending=True).tail(10).plot(kind='barh')
plt.savefig("Top_Jobs.svg")
##Is there an efficient way for Pandas to clean the data? Merge "Landscape Laborer" with "LANDSCAPE LABORER" etc.?
Explanation: 9. Most common job title. Graph this.
End of explanation
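Regarding the question in the comment above: one simple way to merge case variants such as 'Landscape Laborer' and 'LANDSCAPE LABORER' (a sketch) is to normalise the strings before counting.
df['JOB_TITLE'].str.strip().str.title().value_counts().head(10)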
#x = df['EMPLOYER_STATE'].value_counts(ascending=False).head(10) * df['NBR_WORKERS_REQUESTED'].sum()
df['EMPLOYER_STATE'].value_counts(ascending=False).head(10).plot(kind='bar')
plt.savefig("semand_in_states.svg")
#x = df['EMPLOYER_STATE'].value_counts(ascending=False).head(10) * df['NBR_WORKERS_REQUESTED'].sum()
df['EMPLOYER_STATE'].value_counts(ascending=True).head(10).plot(kind='bar')
plt.savefig("demand_in_states.svg")
Workers_in_state_count = df.groupby('EMPLOYER_STATE')['NBR_WORKERS_REQUESTED'].sum()
Workers_in_state_count.sort_values(ascending=True).tail(10).plot(kind='barh', legend='NBR_WORKERS_REQUESTED')
plt.savefig("workers_requestet_in_states.svg")
Explanation: 10. Which US states have the largest need for unskilled workers? Make a graph of this.
End of explanation
#changing df['NAIC_CODE'] from non_null object into int
#This .fillna(0.0) is magic. I found it here:
#http://stackoverflow.com/questions/21291259/convert-floats-to-ints-in-pandas
#df['NAIC_CODE'] = df['NAIC_CODE'].fillna(0.0).astype(int)
#But it turns out, it only works for my one fill. Not on the other. Why?
Explanation: 11. Which industries had the largest need?
End of explanation
NAIC_CODEs = pd.read_excel("6-digit_2012_Code.xls")
NAIC_CODEs.info()
#Changing the NAIC_Codes from non-null object into float64
#NAIC_CODEs['NAICS12'] = df['NAIC_CODE'].fillna(0.0).astype(int)
NAIC_CODEs.head()
#And now reimporting the original file.
df = pd.read_excel("H-2B_Disclosure_Data_FY15_Q4.xlsx")
#now in the NAIC_CODE is a Float64 in the cells we want to merge.
df_merged = df.merge(NAIC_CODEs, how = 'left', left_on = 'NAIC_CODE', right_on ='NAICS2012')
#df_merged.info()
df_merged['Industry'].value_counts().head(10)
workers_by_industry = df_merged.groupby('Industry')['NBR_WORKERS_REQUESTED'].sum()
workers_by_industry.sort_values(ascending=True).tail(10).plot(kind='barh', legend='NBR_WORKERS_REQUESTED')
plt.savefig("workers_by_industry.svg")
Explanation: Importing the NAIC_Codes from here.
End of explanation
df['EMPLOYER_NAME'].value_counts().head(5)
company_workers_demand = df.groupby('EMPLOYER_NAME')['NBR_WORKERS_REQUESTED'].sum()
company_workers_demand.sort_values(ascending=True).tail(10).plot(kind='barh')
plt.savefig("company_workers_demand.svg")
company_workers_demand = df.groupby('EMPLOYER_NAME')['NBR_WORKERS_CERTIFIED'].sum()
company_workers_demand.sort_values(ascending=True).tail(10).plot(kind='barh')
plt.savefig("company_workers_demand.svg")
Explanation: 12. Which companies had the largest need? Compare acceptance/denials of each company.
End of explanation
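To compare acceptances and denials per company in one table (a sketch; 'not certified' is approximated here as requested minus certified, which also absorbs withdrawn or partially certified cases):
by_company = df.groupby('EMPLOYER_NAME')[['NBR_WORKERS_REQUESTED', 'NBR_WORKERS_CERTIFIED']].sum()
by_company['NBR_NOT_CERTIFIED'] = by_company['NBR_WORKERS_REQUESTED'] - by_company['NBR_WORKERS_CERTIFIED']
by_company.sort_values('NBR_WORKERS_REQUESTED', ascending=False).head(10)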
SILVER_BAY_SEAFOODS = df[df['EMPLOYER_NAME'] == 'SILVER BAY SEAFOODS, LLC']
SILVER_BAY_SEAFOODS[['JOB_TITLE', 'PREVAILING_WAGE', 'HOURLY_WORK_SCHEDULE_AM', 'HOURLY_WORK_SCHEDULE_PM', 'OVERTIME_RATE_FROM', 'OVERTIME_RATE_TO', 'NATURE_OF_TEMPORARY_NEED', 'NBR_WORKERS_REQUESTED', 'NBR_WORKERS_CERTIFIED']]
SOCCER_CAMPS = df[df['EMPLOYER_NAME'] == 'UK International Soccer Camps']
SOCCER_CAMPS[['JOB_TITLE', 'PREVAILING_WAGE', 'HOURLY_WORK_SCHEDULE_AM', 'HOURLY_WORK_SCHEDULE_PM', 'OVERTIME_RATE_FROM', 'OVERTIME_RATE_TO', 'NATURE_OF_TEMPORARY_NEED', 'NBR_WORKERS_REQUESTED', 'NBR_WORKERS_CERTIFIED']]
Explanation: BONUS: Looking into Silver Bay Seafoods and UK International Soccer Camps.
Silver Bay's claim: [Silver Bay's "...primary strength is in its combination of having a state of the art processing plant and favorable logistics to support its operations; competent management and key personnel; an established fish buying system; and ownership by fishermen who represent over 80% of the committed fishing effort."] How much does the company pay its workers on average?
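A one-line sketch that answers the average-pay question, using the frame filtered above:
SILVER_BAY_SEAFOODS['PREVAILING_WAGE'].mean()  # average prevailing wage offered by Silver Bay Seafoods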
End of explanation |
5,802 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'fio-ronm', 'sandbox-1', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: FIO-RONM
Source ID: SANDBOX-1
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:01
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
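Purely as an illustration (the value is just the example quoted above, not an actual entry for this model), a completed cell would look like:
# DOC.set_value("CICE 4.2")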
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
Which simulations had tuning applied, e.g. all, not historical, only pi-control?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involved flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD, but where there is an assumed distribution and fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
5,803 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow Basics & Convolution
TensorFlow does not do computation immediately but constructs a graph. We define everything that we want to compute in a graph, and running it requires a session.
Intro
Step1: LinSpace
Step2: We can get the elements of the graph by doing as follows. We can also get the output of a certain node in the graph
Step3: Session
In order to run a TF program, we need a session. The session computes the graph we construct. Here's an example.
Step4: We can ask TF to create a new graph and have it be connected to another session. We are allowed to have multiple sessions running at the same time.
Step5: Interactive Session - This is a way to run session in environments like notebooks where you don't want to pass around a session variable. But it's just like a session. Here's how to create one. Also this behaves more like a normal python program. You have to recompute the formula if you want updates. For example, z is defined below in the gaussian curve example. You have to rerun the formula after changing x to get new z. Just running z.eval() won't do it. However, in a normal session, it will.
Without interactive session, whenever you call x.eval(), you have to pass session as x.eval(session = sess)
Step6: Example - Creating a Gaussian Curve
Step7: Making it into a 2D Gaussian
Step8: Convolution
Loading 'camera' images from sklearn
Step9: Convolution operation in TF takes in a 4d tensor for images. The dimensions are (Batch x Height x Width x Channel). Our image is grayscale, so we reshape it into 4d as shown below.
Tensors must be float16, float32 or float64.
Step10: For the convolution operation we need to provide the specifics of the kernels - Height x Width x Channel x Number of kernels. Let's now convert our gaussian kernel in this format and convolve our image.
Step11: Gabor Kernel
We can take a sin wave and modulate it with the gaussian kernel to get a gabor kernel.
Step12: We need to convert this sine wave into a matrix and multiply with the gaussian kernel. That will be the gabor filter.
Step13: Convolution using Placeholders
We can specify parameters that we expect to fit in the graph later on, now, by using placeholders. Convolution using placeholders is presented below. | Python Code:
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
x = np.linspace(-3, 3, 5) # Computes values immediately
print x
Explanation: TensorFlow Basics & Convolution
TensorFlow does not do computation immediately but constructs a graph. We define everything that we want to compute in a graph, and running it requires a session.
Intro
End of explanation
x = tf.linspace(-3.0, 3.0, 100) # Doesn't compute immediately
# Note that tf.linspace(-3, 3, 5) gives an error because datatypes are
# mismatched
print (x)
Explanation: LinSpace:0 means output of LinSpace. TensorFlow doesn't compute the values immediately. It only specifies the nature of the output of a TF operation, also called an Op node.
End of explanation
g = tf.get_default_graph()
print [op.name for op in g.get_operations()] # List of ops
# The following would fail because no op named 'LinSpace_1' exists in this graph (only 'LinSpace');
# get_tensor_by_name looks tensors up by name -- the values do not need to have been computed first.
### print g.get_tensor_by_name('LinSpace_1:0')
# Note that LinSpace has a :0 at the end of it. Without :0, it refers to the Node itself, with :0 it refers to the
# tensor.
Explanation: We can get the elements of the graph by doing as follows. We can also get the output of a certain node in the graph
End of explanation
sess = tf.Session()
# We can ask a session to compute the value of a node
computed_x = sess.run(x)
# print (computed_x)
# Or we can ask the node to compute itself using the session
computed_x = x.eval(session = sess)
# print computed_x
# We can close the session by doing this
sess.close()
Explanation: Session
In order to run a TF program, we need a session. The session computes the graph we construct. Here's an example.
End of explanation
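A with-block is a common alternative that closes the session automatically; a minimal sketch reusing the x defined above:
with tf.Session() as sess:
    print(sess.run(x))   # the session is closed automatically when the block exits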
g = tf.get_default_graph() # Fetch the default graph
g2 = tf.Graph()
print g2
sess2 = tf.Session(graph = g2)
print sess2
sess2.close()
Explanation: We can ask TF to create a new graph and have it be connected to another session. We are allowed to have multiple sessions running at the same time.
End of explanation
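To actually place operations on a non-default graph, ops can be built inside its as_default() context; a small sketch (the names g3, sess3 and y are illustrative):
g3 = tf.Graph()
with g3.as_default():
    y = tf.linspace(-1.0, 1.0, 10)    # this op is added to g3, not the default graph
with tf.Session(graph = g3) as sess3:
    print(sess3.run(y))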
sess = tf.InteractiveSession()
# print x.eval()
print x.get_shape() # x.shape
print x.get_shape().as_list() # x.shape.tolist()
Explanation: Interactive Session - This is a way to run session in environments like notebooks where you don't want to pass around a session variable. But it's just like a session. Here's how to create one. Also this behaves more like a normal python program. You have to recompute the formula if you want updates. For example, z is defined below in the gaussian curve example. You have to rerun the formula after changing x to get new z. Just running z.eval() won't do it. However, in a normal session, it will.
Without interactive session, whenever you call x.eval(), you have to pass session as x.eval(session = sess)
End of explanation
mean = 0
sigma = 1.0
z = 1.0/(tf.sqrt(2*3.14)*sigma) * (tf.exp(-1*(tf.pow(x-mean, 2)/(2*tf.pow(sigma, 2)))))
res = z.eval() # Note that x is already defined from above
plt.plot(res)
plt.show()
Explanation: Example - Creating a Gaussian Curve
End of explanation
l = z.get_shape().as_list()[0]
res2d = tf.matmul(tf.reshape(z, [l, 1]), tf.reshape(z, [1, l])).eval()
plt.imshow(res2d)
plt.show()
Explanation: Making it into a 2D Gaussian
End of explanation
from skimage import data
img = data.camera().astype(np.float32)
plt.imshow(img, cmap='gray')
plt.show()
Explanation: Convolution
Loading 'camera' images from sklearn
End of explanation
# Image shape is 512x512
img4d = tf.reshape(img, [1, img.shape[0], img.shape[1], 1])
print img4d.get_shape()
Explanation: Convolution operation in TF takes in a 4d tensor for images. The dimensions are (Batch x Height x Width x Channel). Our image is grayscale, so we reshape it into 4d as shown below (the code uses tf.reshape; an equivalent NumPy reshape is sketched after this cell).
Tensors must be float16, float32 or float64.
End of explanation
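The same 4d shape can also be produced directly with NumPy indexing before handing the array to TensorFlow; an equivalent sketch:
img4d_np = img[np.newaxis, :, :, np.newaxis]   # shape (1, 512, 512, 1)
print img4d_np.shape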
l = res2d.shape[0]
kernel = tf.reshape(res2d, [l, l, 1, 1])
print kernel.get_shape()
# Convolution operation
convolved = tf.nn.conv2d(img4d, kernel, strides = [1, 1, 1, 1],
padding = 'SAME')
plt.imshow(convolved.eval()[0, :, :, 0], cmap = 'gray')
plt.show()
Explanation: For the convolution operation we need to provide the specifics of the kernels - Height x Width x Channel x Number of kernels. Let's now convert our gaussian kernel in this format and convolve our image.
End of explanation
ksize = 100
xs = tf.linspace(-3.0, 3.0, ksize)
ys = tf.sin(xs+2)
# The following two statements are equivalent to
# plt.plot(xs.eval(), ys.eval())
plt.figure()
plt.plot(ys.eval())
plt.show()
Explanation: Gabor Kernel
We can take a sin wave and modulate it with the gaussian kernel to get a gabor kernel.
End of explanation
ys = tf.reshape(ys, [ksize, 1])
ones = tf.ones([1, ksize])
mat = tf.matmul(ys, ones)
plt.imshow(mat.eval(), cmap = 'gray')
plt.show()
# Multiply with the gaussian kernel
# kernel is 4 dimensional, res2d is the 2d version
gabor = tf.matmul(mat, res2d)
plt.imshow(gabor.eval(), cmap = 'gray')
plt.show()
Explanation: We need to convert this sine wave into a matrix and multiply with the gaussian kernel. That will be the gabor filter.
End of explanation
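Note that a classic Gabor filter applies the Gaussian as an element-wise envelope on the sine grating rather than as a matrix product; if that is the intent, the multiplication would look like this (illustrative):
gabor_ew = mat * res2d          # element-wise: sine grating modulated by the Gaussian envelope
plt.imshow(gabor_ew.eval(), cmap = 'gray')
plt.show()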
img = tf.placeholder(tf.float32, shape = [None, None], name = 'img')
# Reshaping inbuilt function
img3d = tf.expand_dims(img, 2)
print img3d.get_shape()
img4d = tf.expand_dims(img3d, 0)
print img4d.get_shape()
mean = tf.placeholder(tf.float32, name = 'mean')
sigma = tf.placeholder(tf.float32, name = 'sigma')
ksize = tf.placeholder(tf.int32, name = 'ksize')
# Giving formula for x, gaussian kernel, gabor kernel etc..
x = tf.linspace(-3.0, 3.0, ksize)
z = 1.0/(tf.sqrt(2*3.14)*sigma) * (tf.exp(-1*(tf.pow(x-mean, 2)/(2*tf.pow(sigma, 2)))))
z2d = tf.matmul(tf.reshape(z, [ksize, 1]), tf.reshape(z, [1, ksize]))
xs = tf.linspace(-3.0, 3.0, ksize)
ys = tf.sin(xs)
ys = tf.reshape(ys, [ksize, 1])
ones = tf.ones([1, ksize])
mat = tf.matmul(ys, ones)
gabor = tf.matmul(mat, z2d)
gabor4d = tf.reshape(gabor, [ksize, ksize, 1, 1])
convolved = tf.nn.conv2d(img4d, gabor4d, strides = [1, 1, 1, 1],
padding = 'SAME')
# We defined the graph above, now we are going to evaluate it.
result = convolved.eval(feed_dict = {
img: data.camera(),
mean: 0.0,
sigma: 1.0,
ksize: 5
})
plt.imshow(result[0, :, :, 0], cmap = 'gray')
plt.title('Gabor filter output')
plt.show()
Explanation: Convolution using Placeholders
We can specify parameters that we expect to fit in the graph later on, now, by using placeholders. Convolution using placeholders is presented below.
End of explanation |
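The same graph can be evaluated again with different parameter values simply by changing the feed_dict; the numbers below are arbitrary illustrations:
result_wide = convolved.eval(feed_dict = {
    img: data.camera(),
    mean: 0.0,
    sigma: 2.0,   # wider Gaussian envelope (arbitrary choice)
    ksize: 31     # larger kernel (arbitrary choice)
})
plt.imshow(result_wide[0, :, :, 0], cmap = 'gray')
plt.show()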
5,804 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ipython
ipython is an interactive version of the python interpreter. It provides a number of extras which are helpful when writing code. ipython code is almost always python code, and the differences are generally only important when editing a code in a live (interactive) environment.
The jupyter notebook is a fine example of an interactive environment - you are changing the code as it runs and checking answers as you go. Because you may have a lot of half-completed results in an interactive script, you probably want to make as few mistakes as you can. This is the purpose of ipython.
ipython provides access to the help / documentation system, provides tab completion of variable and function names, and allows you to see what methods live inside a module ...
Step1: It works on modules to list the available methods and variables. Take the math module, for example
Step2: It works on functions that take special arguments and tells you what you need to supply.
Try this and try tabbing in the parenthesis when you use this function yourself
Step3: It also provides special operations that allow you to drill down into the underlying shell / filesystem (but these are not standard python code any more).
Step4: Another way to do this is to use the cell magic functionality to direct the notebook to change the cell to something different (here everything in the cell is interpreted as a unix shell )
Step5: I don't advise using this too often as the code becomes more difficult to convert to python.
A % is a one-line magic function that can go anywhere in the cell.
A %% is a cell-wide function | Python Code:
## Try the autocomplete ... it works on functions that are in scope
# pr
# it also works on variables
# long_but_helpful_variable_name = 1
# long_b
Explanation: ipython
ipython is an interactive version of the python interpreter. It provides a number of extras which are helpful when writing code. ipython code is almost always python code, and the differences are generally only important when editing a code in a live (interactive) environment.
The jupyter notebook is a fine example of an interactive environment - you are changing the code as it runs and checking answers as you go. Because you may have a lot of half-completed results in an interactive script, you probably want to make as few mistakes as you can. This is the purpose of ipython.
ipython provides access to the help / documentation system, provides tab completion of variable and function names, and allows you to see what methods live inside a module ...
End of explanation
import math
# math.is # Try completion on this
help(math.isinf)
# try math.isinf() and hit shift-tab while the cursor is between the parentheses
# you should see the same help pop up.
# math.isinf()
Explanation: It works on modules to list the available methods and variables. Take the math module, for example:
End of explanation
import string
string.capwords("the quality of mercy is not strained")
# string.capwords()
Explanation: It works on functions that take special arguments and tells you what you need to supply.
Try this and try tabbing in the parenthesis when you use this function yourself:
End of explanation
# execute simple unix shell commands
!ls
!echo ""
!pwd
Explanation: It also provides special operations that allow you to drill down into the underlying shell / filesystem (but these are not standard python code any more).
End of explanation
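The output of a shell command can also be captured into a Python variable for later use (illustrative):
files = !ls
print(files[:3])   # an IPython SList that behaves like a list of strings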
%%sh
ls -l
echo ""
pwd
Explanation: Another way to do this is to use the cell magic functionality to direct the notebook to change the cell to something different (here everything in the cell is interpreted as a unix shell )
End of explanation
%magic # to see EVERYTHING in the magic system !
Explanation: I don't advise using this too often as the code becomes more difficult to convert to python.
A % is a one-line magic function that can go anywhere in the cell.
A %% is a cell-wide function
End of explanation |
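Two other magics worth knowing are %timeit (time a single line) and %%timeit (time a whole cell); for example:
%timeit sum(range(1000))   # line magic: repeatedly times one expression
# %%timeit placed on the first line of a cell times the entire cell instead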
5,805 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Moving from Python 2 to Python 3
Python 2 has a limited lifetime, and by 2020, there will no longer be any active development on Python 2.
http
Step1: A (non-exhaustive) list of differences between Python 2 and Python 3
print is now a function, no longer a keyword
exec is now a function, no longer a keyword
division, /, no longer truncates! (no more 2/3 == 0)
all strings are unicode (this is... controversial)
the functions range(), zip(), map(), filter(), dict.keys(), dict.items(), dict.values(), all return an iterator instead of a list
exceptions are handled using a slightly different syntax
strict comparisons, so 'a' < 1 will fail with an error
from the standard library, urllib is reorganized
For a more complete list, see
http
Step2: New string formatting
The old string formatting (with %) is deprecated in favor of str.format(). A good comparison of the two can be found here
Step3: Writing code for both Python 2 and Python 3
Ever wonder what those from __future__ import foo statements were doing?
http | Python Code:
import sys
print(sys.version)
Explanation: Moving from Python 2 to Python 3
Python 2 has a limited lifetime, and by 2020, there will no longer be any active development on Python 2.
http://legacy.python.org/dev/peps/pep-0373/
Why? Apparently it was easier to make a shiny new python by breaking backwards compatibility. The good news is it's relatively painless to switch small projects over to Python 3, and most major Python packages already support Python 3 (including most of the scientific stack: numpy, scipy, pandas, astropy).
End of explanation
# python2 has list comprehensions
[x ** 2 for x in range(5)]
# python3 has dict comprehensions!
{str(x): x ** 2 for x in range(5)}
# and set comprehensions
{x ** 2 for x in range(5)}
# magic dictionary concatenation
some_kwargs = {'do': 'this',
'not': 'that'}
other_kwargs = {'use': 'something',
'when': 'sometime'}
{**some_kwargs, **other_kwargs}
# unpacking magic
a, *stuff, b = range(5)
print(a)
print(stuff)
print(b)
# native support for unicode
s = 'Το Ζεν του Πύθωνα'
print(s)
# unicode variable names!
import numpy as np
π = np.pi
np.cos(2 * π)
# infix matrix multiplication
A = np.random.choice(list(range(-9, 10)), size=(3, 3))
B = np.random.choice(list(range(-9, 10)), size=(3, 3))
print("A = \n", A)
print("B = \n", B)
print("A B = \n", A @ B)
print("A B = \n", np.dot(A, B))
Explanation: A (non-exhaustive) list of differences between Python 2 and Python 3
print is now a function, no longer a keyword
exec is now a function, no longer a keyword
division, /, no longer truncates! (no more 2/3 == 0)
all strings are unicode (this is... controversial)
the functions range(), zip(), map(), filter(), dict.keys(), dict.items(), dict.values(), all return an iterator instead of a list
exceptions are handled using a slightly different syntax (a short example follows this list)
strict comparisons, so 'a' < 1 will fail with an error
from the standard library, urllib is reorganized
For a more complete list, see
http://ptgmedia.pearsoncmg.com/imprint_downloads/informit/promotions/python/python2python3.pdf
Cool things in Python 3
Some of these have been back-ported to Python 2.7
End of explanation
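A small illustration of two items from the list above (Python 3 syntax shown; the old Python 2 spellings are noted in comments):
# Python 2 allowed "except ValueError, e:"; Python 3 requires "as"
try:
    int('not a number')
except ValueError as e:
    print('caught:', e)

print(2 / 3)     # true division in Python 3; use 2 // 3 for truncating division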
s = 'asdf'
b = s.encode('utf-8')
b
b.decode('utf-8')
# this will be problematic if other encodings are used...
s = 'asdf'
b = s.encode('utf-32')
b
b.decode('utf-8')
Explanation: New string formatting
The old string formatting (with %) is deprecated in favor of str.format(). A good comparison of the two can be found here:
https://pyformat.info/
Unicode
Dealing with unicode can be a pain when *nix doesn't give or expect unicode. Sometimes importing data in python3 will give you strings with a weird b in front. These are bytestrings, and they can usually be converted to unicode strings with bytestring.decode('utf-8').
End of explanation
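For a quick side-by-side of the two formatting styles mentioned above (values are arbitrary):
name, count = 'python', 3
print('old: %s appears %d times' % (name, count))
print('new: {} appears {} times'.format(name, count))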
# shouldn't change anything in python3
from __future__ import print_function, division
print('non-truncated division in a print function: 2/3 =', 2/3)
Explanation: Writing code for both Python 2 and Python 3
Ever wonder what those from __future__ import foo statements were doing?
http://python-future.org/quickstart.html
Using the future package, you can write code that works for either Python 2 or Python 3. You'll still have to avoid using some Python 3 specific syntax.
End of explanation |
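A commonly used compatibility header pulls in a few more __future__ features; all of them are no-ops when run under Python 3:
from __future__ import absolute_import, division, print_function, unicode_literals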
5,806 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Symca
Symca is used to perform symbolic metabolic control analysis [1] on metabolic pathway models in order to dissect the control properties of these pathways in terms of the different chains of local effects (or control patterns) that make up the total control coefficient values. Symbolic/algebraic expressions are generated for each control coefficient in a pathway which can be subjected to further analysis.
Features
Generates symbolic expressions for each control coefficient of a metabolic pathway model.
Splits control coefficients into control patterns that indicate the contribution of different chains of local effects.
Control coefficient and control pattern expressions can be manipulated using standard SymPy functionality.
Values of control coefficient and control pattern values are determined automatically and updated automatically following the calculation of standard (non-symbolic) control coefficient values subsequent to a parameter alteration.
Analysis sessions (raw expression data) can be saved to disk for later use.
The effect of parameter scans on control coefficients and control patterns can be generated and displayed using ScanFig.
Visualisation of control patterns by using ModelGraph functionality.
Saving/loading of Symca sessions.
Saving of control pattern results.
Usage and feature walkthrough
Workflow
Performing symbolic control analysis with Symca usually requires the following steps
Step1: Additionally Symca has the following arguments
Step2: do_symca has the following arguments
Step3: Inspecting an individual control coefficient yields a symbolic expression together with a value
Step4: In the above example, the expression of the control coefficient consists of two numerator terms and a common denominator shared by all the control coefficient expression signified by $\Sigma$.
Various properties of this control coefficient can be accessed such as the
Step5: Numerator expression (as a SymPy expression)
Step6: Denominator expression (as a SymPy expression)
Step7: Value (as a float64)
Step8: Additional, less pertinent, attributes are abs_value, latex_expression, latex_expression_full, latex_numerator, latex_name, name and denominator_object.
The individual control coefficient numerator terms, otherwise known as control patterns, may also be accessed as follows
Step9: Each control pattern is numbered arbitrarily starting from 001 and has similar properties as the control coefficient object (i.e., their expression, numerator, value etc. can also be accessed).
Control pattern percentage contribution
Additionally control patterns have a percentage field which indicates the degree to which a particular control pattern contributes towards the overall control coefficient value
Step10: Unlike conventional percentages, however, these values are calculated as percentage contribution towards the sum of the absolute values of all the control patterns (rather than as the percentage of the total control coefficient value). This is done to account for situations where control pattern values have different signs.
A particularly problematic example of where the above method is necessary is a hypothetical control coefficient with a value of zero, but with two control patterns of equal magnitude and opposite sign. In this case a conventional percentage calculation would lead to an undefined (NaN) result, whereas our methodology would indicate that each control pattern is equally ($50\%$) responsible for the observed control coefficient value.
Dynamic value updating
The values of the control coefficients and their control patterns are automatically updated when new steady-state
elasticity coefficients are calculated for the model. Thus changing a parameter of lin4_fb, such as the $V_{f}$ value of reaction 4, will lead to new control coefficient and control pattern values
Step11: Control pattern graphs
As described under Basic Usage, Symca has the functionality to display the chains of local effects represented by control patterns on a scheme of a metabolic model. This functionality can be accessed via the highlight_patterns method
Step12: highlight_patterns has the following optional arguments
Step13: Parameter scans
Parameter scans can be performed in order to determine the effect of a parameter change on either the control coefficient and control pattern values or of the effect of a parameter change on the contribution of the control patterns towards the control coefficient (as discussed above). The procedure for both the "value" and "percentage" scans are very much the same and rely on the same principles as described under basic_usage#plotting-and-displaying-results and RateChar#plotting-results.
To perform a parameter scan the do_par_scan method is called. This method has the following arguments
Step14: As previously described, these data can be displayed using ScanFig by calling the plot method of percentage_scan_data. Furthermore, lines can be enabled/disabled using the toggle_category method of ScanFig or by clicking on the appropriate buttons
Step15: A value plot can similarly be generated and displayed. In this case, however, an additional line indicating $C^{J}_{4}$ will also be present
Step16: Fixed internal metabolites
In the case where the concentration of an internal intermediate is fixed (such as in the case of a GSDA) the internal_fixed argument must be set to True in either the do_symca method, or when instantiating the Symca object. This will typically result in the creation of a cc_results_N object for each separate reaction block, where N is a number starting at 0. Results can then be accessed via these objects as with normal free internal intermediate models.
Thus for a variant of the lin4_fb model where the intermediate S3 is fixed at its steady-state value the procedure is as follows
Step17: The normal sc_fixed_S3.cc_results object is still generated, but will be invalid for the fixed model. Each additional cc_results_N contains control coefficient expressions that have the same common denominator and corresponds to a specific reaction block. These cc_results_N objects are numbered arbitrarily, but consistantly accross different sessions. Each results object accessed and utilised in the same way as the normal cc_results object.
For the mod_fixed_S3 model two additional results objects (cc_results_0 and cc_results_1) are generated
Step18: cc_results_0 contains the control coefficients describing the sensitivity of flux and concentrations of either reaction block towards reactions in the other reaction block (i.e., all control coefficients here should be zero). Due to the fact that the S3 demand block consists of a single reaction, this object also contains the control coefficient of R4 on J_R4, which is equal to one. This results object is useful confirming that the results were generated as expected.
Step19: If the demand block of S3 in this pathway consisted of multiple reactions, rather than a single reaction, there would have been an additional cc_results_N object containing the control coefficients of that reaction block.
Saving results
In addition to being able to save parameter scan results (as previously described), a summary of the control coefficient and control pattern results can be saved using the save_results method. This saves a csv file (by default) to disk to any specified location. If no location is specified, a file named cc_summary_N is saved to the ~/Pysces/$modelname/symca/ directory, where N is a number starting at 0
Step20: save_results has the following optional arguments
Step21: Saving/loading sessions
Saving and loading Symca sessions is very simple and works similar to RateChar. Saving a session takes place with the save_session method, whereas the load_session method loads the saved expressions. As with the save_results method and most other saving and loading functionality, if no file_name argument is provided, files will be saved to the default directory (see also basic_usage.html#saving-and-default-directories). As previously described, expressions can also automatically be loaded/saved by do_symca by using the auto_save_load argument which saves and loads using the default path. Models with internal fixed metabolites are handled automatically. | Python Code:
mod = pysces.model('lin4_fb')
mod.doLoad() # this method call is necessary to ensure that future `doLoad` method calls are executed correctly
sc = psctb.Symca(mod)
Explanation: Symca
Symca is used to perform symbolic metabolic control analysis [1] on metabolic pathway models in order to dissect the control properties of these pathways in terms of the different chains of local effects (or control patterns) that make up the total control coefficient values. Symbolic/algebraic expressions are generated for each control coefficient in a pathway which can be subjected to further analysis.
Features
Generates symbolic expressions for each control coefficient of a metabolic pathway model.
Splits control coefficients into control patterns that indicate the contribution of different chains of local effects.
Control coefficient and control pattern expressions can be manipulated using standard SymPy functionality.
Values of control coefficient and control pattern values are determined automatically and updated automatically following the calculation of standard (non-symbolic) control coefficient values subsequent to a parameter alteration.
Analysis sessions (raw expression data) can be saved to disk for later use.
The effect of parameter scans on control coefficients and control patterns can be generated and displayed using ScanFig.
Visualisation of control patterns by using ModelGraph functionality.
Saving/loading of Symca sessions.
Saving of control pattern results.
Usage and feature walkthrough
Workflow
Performing symbolic control analysis with Symca usually requires the following steps:
Instantiation of a Symca object using a PySCeS model object.
Generation of symbolic control coefficient expressions.
Access generated control coefficient expression results via cc_results and the corresponding control coefficient name (see basic_usage)
Inspection of control coefficient values.
Inspection of control pattern values and their contributions towards the total control coefficient values.
Inspection of the effect of parameter changes (parameter scans) on the values of control coefficients and control patterns and the contribution of control patterns towards control coefficients.
Session/result saving if required
Further analysis.
Object instantiation
Instantiation of a Symca analysis object requires PySCeS model object (PysMod) as an argument. Using the included lin4_fb.psc model a Symca session is instantiated as follows:
End of explanation
sc.do_symca()
Explanation: Additionally Symca has the following arguments:
internal_fixed: This must be set to True in the case where an internal metabolite has a fixed concentration (default: False)
auto_load: If True Symca will try to load a previously saved session. Saved data is unaffected by the internal_fixed argument above (default: False).
.. note:: For the case where an internal metabolite is fixed see Fixed internal metabolites below.
Generating symbolic control coefficient expressions
Control coefficient expressions can be generated as soon as a Symca object has been instantiated using the do_symca method. This process can potentially take quite some time to complete, therefore we recommend saving the generated expressions for later loading (see Saving/Loading Sessions below). In the case of lin4_fb.psc expressions should be generated within a few seconds.
End of explanation
sc.cc_results
Explanation: do_symca has the following arguments:
internal_fixed: This must be set to True in the case where an internal metabolite has a fixed concentration (default: False)
auto_save_load: If set to True Symca will attempt to load a previously saved session and only generate new expressions in case of a failure. After generation of new results, these results will be saved instead. Setting internal_fixed to True does not affect previously saved results that were generated with this argument set to False (default: False).
Accessing control coefficient expressions
Generated results may be accessed via a dictionary-like cc_results object (see basic_usage#tables). Inspecting this cc_results object in an IPython/Jupyter notebook yields a table of control coefficient values:
End of explanation
sc.cc_results.ccJR1_R4
Explanation: Inspecting an individual control coefficient yields a symbolic expression together with a value:
End of explanation
sc.cc_results.ccJR1_R4.expression
Explanation: In the above example, the expression of the control coefficient consists of two numerator terms and a common denominator shared by all the control coefficient expression signified by $\Sigma$.
Various properties of this control coefficient can be accessed such as the:
* Expression (as a SymPy expression)
End of explanation
sc.cc_results.ccJR1_R4.numerator
Explanation: Numerator expression (as a SymPy expression)
End of explanation
sc.cc_results.ccJR1_R4.denominator
Explanation: Denominator expression (as a SymPy expression)
End of explanation
sc.cc_results.ccJR1_R4.value
Explanation: Value (as a float64)
End of explanation
sc.cc_results.ccJR1_R4.CP001
sc.cc_results.ccJR1_R4.CP002
Explanation: Additional, less pertinent, attributes are abs_value, latex_expression, latex_expression_full, latex_numerator, latex_name, name and denominator_object.
The individual control coefficient numerator terms, otherwise known as control patterns, may also be accessed as follows:
End of explanation
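Any of the attribute names listed above can be read in the same way; for example (output values are not shown here):
sc.cc_results.ccJR1_R4.abs_value
sc.cc_results.ccJR1_R4.latex_name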
sc.cc_results.ccJR1_R4.CP001.percentage
sc.cc_results.ccJR1_R4.CP002.percentage
Explanation: Each control pattern is numbered arbitrarily starting from 001 and has similar properties as the control coefficient object (i.e., their expression, numerator, value etc. can also be accessed).
Control pattern percentage contribution
Additionally control patterns have a percentage field which indicates the degree to which a particular control pattern contributes towards the overall control coefficient value:
End of explanation
mod.doLoad()
# mod.Vf_4 has a default value of 50
mod.Vf_4 = 0.1
# calculating new steady state
mod.doMca()
# now ccJR1_R4 and its two control patterns should have new values
sc.cc_results.ccJR1_R4
# original value was 0.000
sc.cc_results.ccJR1_R4.CP001
# original value was 0.964
sc.cc_results.ccJR1_R4.CP002
# resetting to default Vf_4 value and recalculating
mod.doLoad()
mod.doMca()
Explanation: Unlike conventional percentages, however, these values are calculated as percentage contribution towards the sum of the absolute values of all the control patterns (rather than as the percentage of the total control coefficient value). This is done to account for situations where control pattern values have different signs.
A particularly problematic example of where the above method is necessary is a hypothetical control coefficient with a value of zero, but with two control patterns of equal magnitude and opposite sign. In this case a conventional percentage calculation would lead to an undefined (NaN) result, whereas our methodology would indicate that each control pattern is equally ($50\%$) responsible for the observed control coefficient value.
Dynamic value updating
The values of the control coefficients and their control patterns are automatically updated when new steady-state
elasticity coefficients are calculated for the model. Thus changing a parameter of lin4_fb, such as the $V_{f}$ value of reaction 4, will lead to new control coefficient and control pattern values:
End of explanation
# This path leads to the provided layout file
path_to_layout = '~/Pysces/psc/lin4_fb.dict'
# Correct path depending on platform - necessary for platform independent scripts
if platform == 'win32':
path_to_layout = psctb.utils.misc.unix_to_windows_path(path_to_layout)
else:
path_to_layout = path.expanduser(path_to_layout)
sc.cc_results.ccJR1_R4.highlight_patterns(height = 350, pos_dic=path_to_layout)
# To avoid duplication - do not run #ex
display(Image(path.join(notebook_dir,'images','sc_model_graph_1.png'))) #ex
Explanation: Control pattern graphs
As described under Basic Usage, Symca has the functionality to display the chains of local effects represented by control patterns on a scheme of a metabolic model. This functionality can be accessed via the highlight_patterns method:
End of explanation
# clicking on CP002 shows that this control pattern representing
# the chain of effects passing through the feedback loop
# is totally responsible for the observed control coefficient value.
sc.cc_results.ccJR1_R4.highlight_patterns(height = 350, pos_dic=path_to_layout)
# To avoid duplication - do not run #ex
display(Image(path.join(notebook_dir,'images','sc_model_graph_2.png'))) #ex
# clicking on CP001 shows that this control pattern representing
# the chain of effects of the main pathway does not contribute
# at all to the control coefficient value.
sc.cc_results.ccJR1_R4.highlight_patterns(height = 350, pos_dic=path_to_layout)
# To avoid duplication - do not run #ex
display(Image(path.join(notebook_dir,'images','sc_model_graph_3.png'))) #ex
Explanation: highlight_patterns has the following optional arguments:
width: Sets the width of the graph (default: 900).
height:Sets the height of the graph (default: 500).
show_dummy_sinks: If True reactants with the "dummy" or "sink" will not be displayed (default: False).
show_external_modifier_links: If True edges representing the interaction of external effectors with reactions will be shown (default: False).
Clicking either of the two buttons representing the control patterns highlights these patterns according according to their percentage contribution (as discussed above) towards the total control coefficient.
End of explanation
percentage_scan_data = sc.cc_results.ccJR1_R4.do_par_scan(parameter='Vf_4',
scan_range=numpy.logspace(-1,3,200),
scan_type='percentage')
Explanation: Parameter scans
Parameter scans can be performed in order to determine the effect of a parameter change on either the control coefficient and control pattern values or of the effect of a parameter change on the contribution of the control patterns towards the control coefficient (as discussed above). The procedure for both the "value" and "percentage" scans are very much the same and rely on the same principles as described under basic_usage#plotting-and-displaying-results and RateChar#plotting-results.
To perform a parameter scan the do_par_scan method is called. This method has the following arguments:
parameter: A String representing the parameter which should be varied.
scan_range: Any iterable representing the range of values over which to vary the parameter (typically a NumPy ndarray generated by numpy.linspace or numpy.logspace).
scan_type: Either "percentage" or "value" as described above (default: "percentage").
init_return: If True the parameter value will be reset to its initial value after performing the parameter scan (default: True).
par_scan: If True, the parameter scan will be performed by multiple parallel processes rather than a single process, thus speeding performance (default: False).
par_engine: Specifies the engine to be used for the parallel scanning processes. Can either be "multiproc" or "ipcluster". A discussion of the differences between these methods are beyond the scope of this document, see here for a brief overview of Multiprocessing in Python. (default: "multiproc").
force_legacy: If True do_par_scan will use a older and slower algorithm for performing the parameter scan. This is mostly used for debugging purposes. (default: False)
Below we will perform a percentage scan of $V_{f^4}$ for 200 points between 0.01 and 1000 in log space:
End of explanation
percentage_scan_plot = percentage_scan_data.plot()
# set the x-axis to a log scale
percentage_scan_plot.ax.semilogx()
# enable all the lines
percentage_scan_plot.toggle_category('Control Patterns', True)
percentage_scan_plot.toggle_category('CP001', True)
percentage_scan_plot.toggle_category('CP002', True)
# display the plot
percentage_scan_plot.interact()
#remove_next
# To avoid duplication - do not run #ex
display(Image(path.join(notebook_dir,'images','sc_perscan.png'))) #ex
Explanation: As previously described, these data can be displayed using ScanFig by calling the plot method of percentage_scan_data. Furthermore, lines can be enabled/disabled using the toggle_category method of ScanFig or by clicking on the appropriate buttons:
End of explanation
value_scan_data = sc.cc_results.ccJR1_R4.do_par_scan(parameter='Vf_4',
scan_range=numpy.logspace(-1,3,200),
scan_type='value')
value_scan_plot = value_scan_data.plot()
# set the x-axis to a log scale
value_scan_plot.ax.semilogx()
# enable all the lines
value_scan_plot.toggle_category('Control Coefficients', True)
value_scan_plot.toggle_category('ccJR1_R4', True)
value_scan_plot.toggle_category('Control Patterns', True)
value_scan_plot.toggle_category('CP001', True)
value_scan_plot.toggle_category('CP002', True)
# display the plot
value_scan_plot.interact()
#remove_next
# To avoid duplication - do not run #ex
display(Image(path.join(notebook_dir,'images','sc_valscan.png'))) #ex
Explanation: A value plot can similarly be generated and displayed. In this case, however, an additional line indicating $C^{J}_{4}$ will also be present:
End of explanation
# Create a variant of mod with 'S3' fixed at its steady-state value
mod_fixed_S3 = psctb.modeltools.fix_metabolite_ss(mod, 'S3')
# Instantiate Symca object the 'internal_fixed' argument set to 'True'
sc_fixed_S3 = psctb.Symca(mod_fixed_S3,internal_fixed=True)
# Run the 'do_symca' method (internal_fixed can also be set to 'True' here)
sc_fixed_S3.do_symca()
Explanation: Fixed internal metabolites
In the case where the concentration of an internal intermediate is fixed (such as in the case of a GSDA) the internal_fixed argument must be set to True in either the do_symca method, or when instantiating the Symca object. This will typically result in the creation of a cc_results_N object for each separate reaction block, where N is a number starting at 0. Results can then be accessed via these objects as with normal free internal intermediate models.
Thus for a variant of the lin4_fb model where the intermediate S3 is fixed at its steady-state value the procedure is as follows:
End of explanation
sc_fixed_S3.cc_results_1
Explanation: The normal sc_fixed_S3.cc_results object is still generated, but will be invalid for the fixed model. Each additional cc_results_N contains control coefficient expressions that have the same common denominator and corresponds to a specific reaction block. These cc_results_N objects are numbered arbitrarily, but consistently across different sessions. Each results object is accessed and utilised in the same way as the normal cc_results object.
For the mod_fixed_S3 model two additional results objects (cc_results_0 and cc_results_1) are generated:
cc_results_1 contains the control coefficients describing the sensitivity of flux and concentrations within the supply block of S3 towards reactions within the supply block.
End of explanation
sc_fixed_S3.cc_results_0
Explanation: cc_results_0 contains the control coefficients describing the sensitivity of flux and concentrations of either reaction block towards reactions in the other reaction block (i.e., all control coefficients here should be zero). Due to the fact that the S3 demand block consists of a single reaction, this object also contains the control coefficient of R4 on J_R4, which is equal to one. This results object is useful confirming that the results were generated as expected.
End of explanation
sc.save_results()
Explanation: If the demand block of S3 in this pathway consisted of multiple reactions, rather than a single reaction, there would have been an additional cc_results_N object containing the control coefficients of that reaction block.
Saving results
In addition to being able to save parameter scan results (as previously described), a summary of the control coefficient and control pattern results can be saved using the save_results method. This saves a csv file (by default) to disk to any specified location. If no location is specified, a file named cc_summary_N is saved to the ~/Pysces/$modelname/symca/ directory, where N is a number starting at 0:
End of explanation
# the following code requires `pandas` to run
import pandas as pd
# load csv file at default path
results_path = '~/Pysces/lin4_fb/symca/cc_summary_0.csv'
# Correct path depending on platform - necessary for platform independent scripts
if platform == 'win32':
results_path = psctb.utils.misc.unix_to_windows_path(results_path)
else:
results_path = path.expanduser(results_path)
saved_results = pd.read_csv(results_path)
# show first 20 lines
saved_results.head(n=20)
Explanation: save_results has the following optional arguments:
file_name: Specifies a path to save the results to. If None, the path defaults as described above.
separator: The separator between fields (default: ",")
The contents of the saved data file is as follows:
End of explanation
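Both optional arguments can be combined, e.g. to write a tab-separated summary to a chosen file name (the path shown is illustrative):
sc.save_results(file_name='cc_summary_tab.csv', separator='\t')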
# saving session
sc.save_session()
# create new Symca object and load saved results
new_sc = psctb.Symca(mod)
new_sc.load_session()
# display saved results
new_sc.cc_results
Explanation: Saving/loading sessions
Saving and loading Symca sessions is very simple and works similarly to RateChar. Saving a session takes place with the save_session method, whereas the load_session method loads the saved expressions. As with the save_results method and most other saving and loading functionality, if no file_name argument is provided, files will be saved to the default directory (see also basic_usage.html#saving-and-default-directories). As previously described, expressions can also automatically be loaded/saved by do_symca by using the auto_save_load argument, which saves and loads using the default path. Models with internal fixed metabolites are handled automatically.
End of explanation |
5,807 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">Simple Conditional Statements</h1>
<h2>01.Excellent Result</h2>
The first task of this topic is to write a console program that introduces an estimate (decimal number) and prints "Excellent!" if the score is 5.50 or higher.
Step1: <h2>02.Excellent or Not</h2>
The next task of this topic is to write a console program that introduces an estimate (decimal number)
and prints "Excellent!" if the score is 5.50 or higher, or "Not Excellent." in the opposite case.
Step2: <h1>03.Even or Odd</h1>
Write a program that enters an integer and print whether it is even or odd.
Step3: <h2>04.Greater Number</h2>
Write a program that introduces two integers and prints the larger one.
num_1 = int(input())
num_2 = int(input())
if num_1 >= num_2
Step4: <h2>06.Bonus Score</h2>
An integer number is given. Bonus points based on the rules described below are charged.
Write a program that calculates the bonus points for that number and the total number of points including the bonuses.
01.If the number is up to 100 inclusive, the bonus points are 5.
02.If the number is greater than 100, the bonus points are 20% of the number.
03.If the number is greater than 1000, the bonus points are 10% of the number.
04.Additional bonus points (charged separately from the previous ones)
Step5: <h2>07.Sum Seconds</h2>
Three athletes finish for some seconds (between 1 and 50).
To write a program,which sets the times of the contestants and calculates their cumulative time in the "minutes
Step6: <h2>08.Metric Converter</h2>
Write a program that translates a distance between the following 8 units
Step7: <h2>09.Password Guess</h2>
Write a program that enters a password (one line with any text) and
checks if it is entered matches the phrase "s3cr3t! P @ ssw0rd".
In case of a collision, bring "Welcome".
In case of inconsistency "Wrong Password!"
Step8: <h2>10.Number 100...200</h2>
Write a program that enters an integer and checks if it is below 100, between 100 and 200 or more 200.
Print relevant messages as in the examples below
Step9: <h2>11.Equal Words</h2>
Write a program that introduces two words and checks whether they are the same.
Do not make a difference between headwords and small words. Show "yes" or "no".
Step10: <h2>12.Speed Info</h2>
Write a program that introduces a speed (decimal number) and prints speed information.
At speeds of up to 10 (inclusive), print "slow". At speeds over 10 and up to 50, print "average".
At speeds over 50 and up to 150, print "fast". At speed above 150 and up to 1000, print "ultra fast".
At higher speed, print "extremely fast".
Step11: <h2>13.Area of Figures</h2>
Write a program that introduces the dimensions of a geometric figure and calculates its face.
The figures are four types
Step12: <h2>14.Time + 15 Minutes</h2>
Write a program that introduces hours and minutes of 24 hours a dayand calculates how much time it will take after 15 minutes. The result is printed in hh
Step13: <h2>15.3 Equal Numbers</h2>
Enter 3 numbers and print whether they are the same (yes / no) | Python Code:
num = float(input())
if num >= 5.50:
print("Excellent!")
Explanation: <h1 align="center">Simple Conditional Statements</h1>
<h2>01.Excellent Result</h2>
The first task of this topic is to write a console program that reads a grade (decimal number) and prints "Excellent!" if the grade is 5.50 or higher.
End of explanation
grade = float(input())
if grade >= 5.50:
print("Excellent!")
else:
print("Not excellent.")
Explanation: <h2>02.Excellent or Not</h2>
The next task of this topic is to write a console program that reads a grade (decimal number)
and prints "Excellent!" if the grade is 5.50 or higher, or "Not excellent." in the opposite case.
End of explanation
num = int(input())
if num % 2 == 0:
print("even")
else:
print("odd")
Explanation: <h1>03.Even or Odd</h1>
Write a program that reads an integer and prints whether it is even or odd.
End of explanation
num = int(input())
if num == 0:
print("zero")
elif num == 1:
print("one")
elif num == 2:
print("two")
elif num == 3:
print("three")
elif num == 4:
print("four")
elif num == 5:
print("five")
elif num == 6:
print("six")
elif num == 7:
print("seven")
elif num == 8:
print("eight")
elif num == 9:
print("nine")
else:
print("number too big")
Explanation: <h2>04.Greater Number</h2>
Write a program that introduces two integers and prints the larger one.
num_1 = int(input())
num_2 = int(input())
if num_1 >= num_2:
print(num_1)
else:
print(num_2)
<h2>05.Number 0...9 to Text</h2>
Write a program that enters an integer in the range [0 ... 10] and writes it in English language.
If the number is out of range, it says "number too big"
End of explanation
num = int(input())
bonus = 0
if num <= 100:
bonus += 5
elif num > 100 and num <= 1000:  # greater than 100 and up to 1000 -> 20% bonus
bonus += (num * 0.2)
elif num > 1000:  # greater than 1000 -> 10% bonus
bonus += (num * 0.1)
if num % 2 == 0:
bonus += 1
if num % 10 == 5:
bonus += 2
print(bonus)
print(bonus + num)
Explanation: <h2>06.Bonus Score</h2>
An integer number is given. Bonus points are awarded based on the rules described below.
Write a program that calculates the bonus points for that number and the total number of points including the bonuses.
01.If the number is up to 100 inclusive, the bonus points are 5.
02.If the number is greater than 100, the bonus points are 20% of the number.
03.If the number is greater than 1000, the bonus points are 10% of the number.
04.Additional bonus points (charged separately from the previous ones):
o For even number - + 1 p.
o For a number that ends at 5 - + 2 points.
End of explanation
first_Time = int(input())
second_Time = int(input())
third_Time = int(input())
total_Time = first_Time + second_Time + third_Time
minutes = int(total_Time / 60)
seconds = total_Time % 60
if total_Time < 60:
if total_Time <= 9:
print(f'0:0{seconds}')
else:
print(f'0:{seconds}')
elif total_Time >= 60:
if seconds <= 9:
print(f'{minutes}:0{seconds}')
else:
print(f'{minutes}:{seconds}')
Explanation: <h2>07.Sum Seconds</h2>
Each of three athletes finishes in some number of seconds (between 1 and 50).
Write a program that reads the contestants' times and calculates their combined time in the "minutes:seconds" format.
Print the seconds with a leading zero when needed.
End of explanation
input_num = float(input())
input_unit = input()
output_unit = input()
if input_unit == "mm":
if output_unit == "mm":
print(input_num * 1,"mm")
elif output_unit == "cm":
print(input_num / 10,"cm")
elif output_unit == "m":
print(input_num / 1000,"m")
elif output_unit == "mi":
print((input_num / 1000) * 0.000621371192,"mi")
elif output_unit == "in":
print((input_num / 1000) * 39.3700787,"in")
elif output_unit == "km":
print((input_num / 1000) * 0.001,"km")
elif output_unit == "ft":
print((input_num / 1000) * 3.2808399,"ft")
elif output_unit == "yd":
print((input_num / 1000) * 1.0936133,"yd")
elif input_unit == "cm":
if output_unit == "mm":
print(input_num * 10,"mm")
elif output_unit == "cm":
print(input_num * 1,"cm")
elif output_unit == "m":
print(input_num / 100,"m")
elif output_unit == "mi":
print((input_num / 100) * 0.000621371192,"mi")
elif output_unit == "in":
print((input_num / 100) * 39.3700787,"in")
elif output_unit == "km":
print((input_num / 100) * 0.001,"km")
elif output_unit == "ft":
print((input_num / 100) * 3.2808399,"ft")
elif output_unit == "yd":
print((input_num / 100) * 1.0936133,"yd")
elif input_unit == "mi":
if output_unit == "mm":
print((input_num * 1609.344)* 1000,"mm")
elif output_unit == "cm":
print((input_num * 1609.344) * 100,"cm")
elif output_unit == "m":
print(input_num * 1609.344,"m")
elif output_unit == "mi":
print(input_num * 1,"mi")
elif output_unit == "in":
print((input_num * 1609.344) * 39.3700787,"in")
elif output_unit == "km":
print((input_num * 1609.344) * 0.001,"km")
elif output_unit == "ft":
print((input_num * 1609.344) * 3.2808399,"ft")
elif output_unit == "yd":
print((input_num * 1609.344) * 1.0936133,"yd")
elif input_unit == "in":
if output_unit == "mm":
print((input_num * 0.0254)* 1000,"mm")
elif output_unit == "cm":
print((input_num * 0.0254) * 100,"cm")
elif output_unit == "m":
print(input_num * 0.0254,"m")
elif output_unit == "mi":
print((input_num * 0.0254) * 0.000621371192,"mi")
elif output_unit == "in":
print((input_num * 1),"in")
elif output_unit == "km":
print((input_num * 0.0254) * 0.001,"km")
elif output_unit == "ft":
print((input_num * 0.0254) * 3.2808399,"ft")
elif output_unit == "yd":
print((input_num * 0.0254) * 1.0936133,"yd")
elif input_unit == "km":
if output_unit == "mm":
print((input_num * 1000)* 1000,"mm")
elif output_unit == "cm":
print((input_num * 1000) * 100,"cm")
elif output_unit == "m":
print(input_num * 1000,"m")
elif output_unit == "mi":
print((input_num * 1000) * 0.000621371192,"mi")
elif output_unit == "in":
print((input_num * 1000) * 39.3700787,"in")
elif output_unit == "km":
print((input_num * 1),"km")
elif output_unit == "ft":
print((input_num * 1000) * 3.2808399,"ft")
elif output_unit == "yd":
print((input_num * 1000) * 1.0936133,"yd")
elif input_unit == "ft":
if output_unit == "mm":
print((input_num / 3.2808399)* 1000,"mm")
elif output_unit == "cm":
print((input_num / 3.2808399) * 100,"cm")
elif output_unit == "m":
print(input_num / 3.2808399,"m")
elif output_unit == "mi":
print((input_num / 3.2808399) * 0.000621371192,"mi")
elif output_unit == "in":
print((input_num / 3.2808399) * 39.3700787,"in")
elif output_unit == "km":
print((input_num / 3.2808399) * 0.001,"km")
elif output_unit == "ft":
print((input_num * 1),"ft")
elif output_unit == "yd":
print((input_num / 3.2808399) * 1.0936133,"yd")
elif input_unit == "yd":
if output_unit == "mm":
print((input_num / 1.0936133)* 1000,"mm")
elif output_unit == "cm":
print((input_num / 1.0936133) * 100,"cm")
elif output_unit == "m":
print(input_num / 1.0936133,"m")
elif output_unit == "mi":
print((input_num / 1.0936133) * 0.000621371192,"mi")
elif output_unit == "in":
print((input_num / 1.0936133) * 39.3700787,"in")
elif output_unit == "km":
print((input_num / 1.0936133) * 0.001,"km")
elif output_unit == "ft":
print((input_num / 1.0936133) * 3.2808399,"ft")
elif output_unit == "yd":
print((input_num / 1),"yd")
elif input_unit == "m":
if output_unit == "mm":
print(input_num * 1000,"mm")
elif output_unit == "cm":
print(input_num * 100,"cm")
elif output_unit == "m":
print(input_num * 1,"m")
elif output_unit == "mi":
print(input_num * 0.000621371192,"mi")
elif output_unit == "in":
print(input_num * 39.3700787,"in")
elif output_unit == "km":
print(input_num * 0.001,"km")
elif output_unit == "ft":
print(input_num * 3.2808399,"ft")
elif output_unit == "yd":
print(input_num * 1.0936133,"yd")
Explanation: <h2>08.Metric Converter</h2>
Write a program that translates a distance between the following 8 units: m, mm, cm, mi, in, km, ft, yd.
End of explanation
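The long if/elif chain above can be collapsed by converting the input to metres first and then dividing by the factor of the target unit; a compact sketch using the same conversion factors as the code above:
# metres per unit (same factors as above)
to_metres = {'m': 1.0, 'mm': 0.001, 'cm': 0.01, 'mi': 1609.344,
             'in': 0.0254, 'km': 1000.0, 'ft': 1 / 3.2808399, 'yd': 1 / 1.0936133}

value = float(input())
unit_in = input()
unit_out = input()
result = value * to_metres[unit_in] / to_metres[unit_out]
print(result, unit_out)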
password = input()
if password == "s3cr3t!P@ssw0rd":
print("Welcome")
else:
print("Wrong password!")
Explanation: <h2>09.Password Guess</h2>
Write a program that reads a password (one line with any text) and
checks whether the entered text matches the phrase "s3cr3t!P@ssw0rd".
If it matches, print "Welcome".
Otherwise, print "Wrong password!"
End of explanation
num = int(input())
if num < 100:
print("Less than 100")
elif num >= 100 and num <= 200:
print("Between 100 and 200")
elif num > 200:
print("Greater than 200")
Explanation: <h2>10.Number 100...200</h2>
Write a program that enters an integer and checks if it is below 100, between 100 and 200, or above 200.
Print relevant messages as in the examples below:
End of explanation
first_Word = input().lower()
second_Word = input().lower()
if first_Word == second_Word:
print("yes")
else:
print("no")
Explanation: <h2>11.Equal Words</h2>
Write a program that reads two words and checks whether they are the same.
Ignore the difference between uppercase and lowercase letters. Print "yes" or "no".
End of explanation
speed = float(input())
if speed <= 10:
print("slow")
elif speed > 10 and speed <= 50:
print("average")
elif speed > 50 and speed <= 150:
print("fast")
elif speed > 150 and speed <= 1000:
print("ultra fast")
else:
print("extremely fast")
Explanation: <h2>12.Speed Info</h2>
Write a program that introduces a speed (decimal number) and prints speed information.
At speeds of up to 10 (inclusive), print "slow". At speeds over 10 and up to 50, print "average".
At speeds over 50 and up to 150, print "fast". At speed above 150 and up to 1000, print "ultra fast".
At higher speed, print "extremely fast".
End of explanation
import math
figure = input()
if figure == "square":
side = float(input())
area = side ** 2
print(format(area,'.3f'))
elif figure == "rectangle":
side_a = float(input())
side_b = float(input())
area = side_a * side_b
print(format(area,'.3f'))
elif figure == "circle":
radius = float(input())
area = radius ** 2 * math.pi
print(format(area,'.3f'))
elif figure == "triangle":
side = float(input())
height = float(input())
area = (side * height) / 2
print(format(area,'.3f'))
Explanation: <h2>13.Area of Figures</h2>
Write a program that reads the dimensions of a geometric figure and calculates its area.
There are four types of figures: a square, a rectangle, a circle, and a triangle.
The first line of the input holds the shape of the figure (square, rectangle, circle or triangle).
If the figure is a square, the next line holds one number - the length of its side.
If the figure is a rectangle, the next two lines hold two numbers - the lengths of its sides.
If the figure is a circle, the next line holds one number - the radius of the circle.
If the figure is a triangle, the next two lines hold two numbers - the length of one of its sides
and the length of the height towards that side. Round the result to 3 digits after the decimal point.
End of explanation
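As an optional design note (not part of the original solution), the same branching can be expressed with a dictionary that maps each figure name to a small function that reads its own inputs; the structure below is only a sketch.
# Hedged alternative sketch: dispatch on the figure name instead of an if/elif chain.
import math
area_of = {
    "square":    lambda: float(input()) ** 2,
    "rectangle": lambda: float(input()) * float(input()),
    "circle":    lambda: math.pi * float(input()) ** 2,
    "triangle":  lambda: float(input()) * float(input()) / 2,
}
figure = input()
print(format(area_of[figure](), '.3f'))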
hours = int(input())
minutes = int(input())
minutes += 15
if minutes >= 60:
minutes %= 60
hours += 1
if hours >= 24:
hours -= 24
if minutes <= 9:
print(f'{hours}:0{minutes}')
else:
print(f'{hours}:{minutes}')
else:
if minutes <= 9:
print(f'{hours}:0{minutes}')
else:
print(f'{hours}:{minutes}')
else:
print(f'{hours}:{minutes}')
Explanation: <h2>14.Time + 15 Minutes</h2>
Write a program that reads hours and minutes from a 24-hour day and calculates what the time will be 15 minutes later. The result is printed in hh:mm format.
Hours are always between 0 and 23; minutes are always between 0 and 59.
Hours are written with one or two digits. Minutes are always written with two digits, with a leading zero when needed.
End of explanation
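An alternative sketch of the same calculation works with the total number of minutes and divmod, which removes the nested special cases; it is an illustration only, not the original solution.
# Hedged alternative sketch: add 15 minutes using total-minute arithmetic.
hours = int(input())
minutes = int(input())
total = (hours * 60 + minutes + 15) % (24 * 60)
hours, minutes = divmod(total, 60)
print(f'{hours}:{minutes:02d}')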
first_num = int(input())
second_num = int(input())
third_num = int(input())
# Comparing the average with the first number is not enough (e.g. 1, 0, 2 would pass),
# so compare all three values directly.
if first_num == second_num == third_num:
    print("yes")
else:
    print("no")
Explanation: <h2>15.3 Equal Numbers</h2>
Read 3 numbers and print whether they are all the same ("yes" / "no").
End of explanation |
5,808 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convergence
Description of the UCI protocol
Step1: The Speed of Search
The number of nodes searched depends linearly on time
Step2: So nodes per second is roughly constant
Step3: The hashtable usage is at full capacity
Step4: Number of nodes needed for a given depth grows exponentially, except for forced moves, which require very few nodes to search (those show up as horizontal plateaus)
Step5: Convergence wrt. Depth
Step6: Convergence of the variations | Python Code:
%pylab inline
! grep "multipv 1" log4.txt | grep -v lowerbound | grep -v upperbound > log4_g.txt
def parse_info(l):
D = {}
k = l.split()
i = 0
assert k[i] == "info"
i += 1
while i < len(k):
if k[i] == "depth":
D[k[i]] = int(k[i+1])
i += 2
elif k[i] == "seldepth":
D[k[i]] = int(k[i+1])
i += 2
elif k[i] == "multipv":
D[k[i]] = int(k[i+1])
i += 2
elif k[i] == "score":
if k[i+1] == "cp":
D["score_p"] = int(k[i+2]) / 100. # score in pawns
i += 3
elif k[i] == "nodes":
D[k[i]] = int(k[i+1])
i += 2
elif k[i] == "nps":
D[k[i]] = int(k[i+1])
i += 2
elif k[i] == "hashfull":
D[k[i]] = int(k[i+1]) / 1000. # between 0 and 1
i += 2
elif k[i] == "tbhits":
D[k[i]] = int(k[i+1])
i += 2
elif k[i] == "time":
D[k[i]] = int(k[i+1]) / 1000. # elapsed time in [s]
i += 2
elif k[i] == "pv":
D[k[i]] = k[i+1:]
return D
else:
raise Exception("Unknown kw")
# Parse every line of the filtered log into a list of dicts
D = []
for l in open("log4_g.txt").readlines():
D.append(parse_info(l))
# Reorganize into a dict of arrays, one entry per parsed field
data = {}
for key in D[-1].keys():
d = []
for x in D:
if key in x:
d.append(x[key])
else:
d.append(-1)
if key != "pv":
d = array(d)
data[key] = d
Explanation: Convergence
Description of the UCI protocol: https://ucichessengine.wordpress.com/2011/03/16/description-of-uci-protocol/
Let us parse the logs first:
End of explanation
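To make the parser concrete, here is a single shortened engine line run through parse_info; the line and its numbers are made up for illustration and do not come from log4.txt.
# Illustration only: a hand-written UCI "info" line with made-up numbers.
sample = ("info depth 20 seldepth 30 multipv 1 score cp 34 "
          "nodes 52000000 nps 1500000 hashfull 999 tbhits 0 time 35000 pv e2e4 e7e5 g1f3")
print(parse_info(sample))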
title("Number of nodes searched in time")
plot(data["time"] / 60., data["nodes"], "o")
xlabel("Time [min]")
ylabel("Nodes")
grid()
show()
Explanation: The Speed of Search
The number of nodes searched depends linearly on time:
End of explanation
title("Positions per second in time")
plot(data["time"] / 60., data["nps"], "o")
xlabel("Time [min]")
ylabel("Positions / s")
grid()
show()
Explanation: So nodes per second is roughly constant:
End of explanation
title("Hashtable usage")
hashfull = data["hashfull"]
hashfull[hashfull == -1] = 0
plot(data["time"] / 60., hashfull * 100, "o")
xlabel("Time [min]")
ylabel("Hashtable filled [%]")
grid()
show()
Explanation: The hashtable usage is at full capacity:
End of explanation
title("Number of nodes vs. depth")
semilogy(data["depth"], data["nodes"], "o")
x = data["depth"]
y = exp(x/2.2)
y = y / y[-1] * data["nodes"][-1]
semilogy(x, y, "-")
xlabel("Depth [half moves]")
ylabel("Nodes")
grid()
show()
title("Number of time vs. depth")
semilogy(data["depth"], data["time"]/60., "o")
xlabel("Depth [half moves]")
ylabel("Time [min]")
grid()
show()
Explanation: Number of nodes needed for a given depth grows exponentially, except for forced moves, which require very few nodes to search (those show up as horizontal plateaus):
End of explanation
title("Score")
plot(data["depth"], data["score_p"], "o")
xlabel("Depth [half moves]")
ylabel("Score [pawns]")
grid()
show()
Explanation: Convergence wrt. Depth
End of explanation
for i in range(len(data["depth"])):
    print("%2i %s" % (data["depth"][i], " ".join(data["pv"][i])[:100]))
Explanation: Convergence of the variations:
End of explanation |
5,809 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Part 1
Step1: The following assumes that the folder containing the 'dat' files is in a directory called 'fixtures' in the same directory as this script. You can also enter a full path to the files.
Step2: This import class extracts all the information contained in the 'Statoil' files, such as sizes, locations and connectivity. Note that the io classes return a project object, and the network itself can be accessed using the network attribute. The following printout displays which information was contained in the file
Step3: At this point, the network can be visualized in Paraview. A suitable '.vtp' file can be created with
Step4: The resulting network is shown below
Step5: Dealing with Inlet and Outlet Pores
When importing Statoil networks, OpenPNM must perform some 'optimizations' to make the network compatible. The main problem is that the original network contains a large number of throats connecting actual internal pores to fictitious 'reservoir' pores. OpenPNM strips away all these throats since 'headless throats' break the graph theory representation. OpenPNM then labels the real internal pores as either 'inlet' or 'outlet' if they were connected to one of these fictitious reservoirs.
It is fairly simple to add new pores to each end of the domain and stitch them to the internal pores labelled 'inlet' and 'outlet', but this introduces a further complication: the new pores don't have any geometry properties. For this example, we will not add boundary pores, but just use the pores on the inlet and outlet faces.
Part 2
Step6: Apply Pore-Scale Models
We must add a conductance model for calculating the hydraulic conductance (the Valvatne-Blunt model is used below). In OpenPNM 2+ it is possible to add Physics models to Phase objects, which is often simpler than applying the same model to multiple Physics.
Step7: Recall that boundary pores and throats had no geometrical properties associated with them, so the hydraulic conductances of boundary throats will be undefined (filled with NaNs)
Step8: Run StokesFlow Algorithm
Finally, we can create a StokesFlow object to run some fluid flow simulations
Step9: The resulting pressure field can be visualized in Paraview, giving the following | Python Code:
import warnings
import scipy as sp
import numpy as np
import openpnm as op
np.set_printoptions(precision=4)
np.random.seed(10)
%matplotlib inline
Explanation: Part 1: Import Networks from Statoil Files
This example explains how to use the OpenPNM.Utilies.IO.Statoil class to import a network produced by the Maximal Ball network extraction code developed by Martin Blunt's group at Imperial College London. The code is available from him upon request, but they offer a small library of pre-extracted networks on their website.
End of explanation
from pathlib import Path
path = Path('../fixtures/ICL-Sandstone(Berea)/')
project = op.io.Statoil.load(path=path, prefix='Berea')
pn = project.network
pn.name = 'berea'
Explanation: The following assumes that the folder containing the 'dat' files is in a directory called 'fixtures' in the same directory as this script. You can also enter a full path to the files.
End of explanation
print(pn)
Explanation: This import class extracts all the information contained in the 'Statoil' files, such as sizes, locations and connectivity. Note that the io classes return a project object, and the network itself can be accessed using the network attribute. The following printout displays which information was contained in the file:
End of explanation
project.export_data(filename='imported_statoil')
Explanation: At this point, the network can be visualized in Paraview. A suitable '.vtp' file can be created with:
End of explanation
print('Number of pores before trimming: ', pn.Np)
h = pn.check_network_health()
op.topotools.trim(network=pn, pores=h['trim_pores'])
print('Number of pores after trimming: ', pn.Np)
Explanation: The resulting network is shown below:
<img src="http://i.imgur.com/771T36M.png" style="width: 60%" align="left"/>
Clean up network topology
Although it's not clear in the network image, there are a number of isolated and disconnected pores in the network. These are either naturally part of the sandstone, or artifacts of the Maximal Ball algorithm. In any event, these must be removed before proceeding since they cause problems for the matrix solver. The easiest way to find these is to use the check_network_health function on the network object. This will return a dictionary with several key attributes, including a list of which pores are isolated. These can then be trimmed using the trim function in the topotools module.
End of explanation
water = op.phases.Water(network=pn)
Explanation: Dealing with Inlet and Outlet Pores
When importing Statoil networks, OpenPNM must perform some 'optimizations' to make the network compatible. The main problem is that the original network contains a large number of throats connecting actual internal pores to fictitious 'reservoir' pores. OpenPNM strips away all these throats since 'headless throats' break the graph theory representation. OpenPNM then labels the real internal pores as either 'inlet' or 'outlet' if they were connected to one of these fictitious reservoirs.
It is fairly simple to add new pores to each end of the domain and stitch them to the internal pores labelled 'inlet' and 'outlet', but this introduces a further complication: the new pores don't have any geometry properties. For this example, we will not add boundary pores, but just use the pores on the inlet and outlet faces.
Part 2: Calculating Permeability of the Network
Setup Geometry, Phase, and Physics Objects
In OpenPNM 2+ it is optional to define Geometry and Physics objects (These are really only necessary for simulations with diverse geometrical properties in different regions, resulting in different physical processes in each region, such as multiscale networks for instance). It is still necessary to define Phase objects:
End of explanation
water.add_model(propname='throat.hydraulic_conductance',
model=op.models.physics.hydraulic_conductance.valvatne_blunt)
Explanation: Apply Pore-Scale Models
We must add a conductance model for calculating the hydraulic conductance (the Valvatne-Blunt model is used here). In OpenPNM 2+ it is possible to add Physics models to Phase objects, which is often simpler than applying the same model to multiple Physics.
End of explanation
print(water['throat.hydraulic_conductance'])
Explanation: Recall that boundary pores and throats had no geometrical properties associated with them, so the hydraulic conductances of boundary throats will be undefined (filled with NaNs):
End of explanation
flow = op.algorithms.StokesFlow(network=pn, phase=water)
flow.set_value_BC(pores=pn.pores('inlets'), values=200000)
flow.set_value_BC(pores=pn.pores('outlets'), values=100000)
flow.run()
Explanation: Run StokesFlow Algorithm
Finally, we can create a StokesFlow object to run some fluid flow simulations:
End of explanation
# Get the average value of the fluid viscosity
mu = np.mean(water['pore.viscosity'])
# Specify a pressure difference (in Pa)
delta_P = 100000
# Using the rate method of the StokesFlow algorithm
Q = np.absolute(flow.rate(pores=pn.pores('inlets')))
# Because we know the inlets and outlets are at x=0 and x=X
Lx = np.amax(pn['pore.coords'][:, 0]) - np.amin(pn['pore.coords'][:, 0])
A = Lx*Lx # Since the network is cubic Lx = Ly = Lz
K = Q*mu*Lx/(delta_P*A)
print(K)
Explanation: The resulting pressure field can be visualized in Paraview, giving the following:
<img src="https://i.imgur.com/AIK6FbJ.png" style="width: 60%" align="left"/>
Determination of Permeability Coefficient
The way to calculate K is to determine each of the values in Darcy's law manually and solve for K, such that $$ K = \frac{Q\mu L}{\Delta P A} $$
End of explanation |
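As a small follow-up (not in the original example), the permeability computed above is in m² and can be restated in Darcy units using the standard conversion factor of roughly 9.87e-13 m² per Darcy.
# Hedged follow-up: express K (m^2) in Darcy and millidarcy.
DARCY = 9.869233e-13              # m^2 per Darcy
K_val = np.asarray(K).item()      # K from the cell above is a one-element array
print('K = %.3e m^2 = %.4f D = %.1f mD' % (K_val, K_val / DARCY, K_val / DARCY * 1000))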
5,810 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The
Step1: Continuous data is stored in objects of type
Step2: <div class="alert alert-info"><h4>Note</h4><p>Accessing the `._data` attribute is done here for educational
purposes. However this is a private attribute as its name starts
with an `_`. This suggests that you should **not** access this
variable directly but rely on indexing syntax detailed just below.</p></div>
Information about the channels contained in the
Step3: Selecting subsets of channels and samples
It is possible to use more intelligent indexing to extract data, using
channel names, types or time ranges.
Step4: Notice the different scalings of these types
Step5: You can restrict the data to a specific time range
Step6: And drop channels by name
Step7: Concatenating | Python Code:
from __future__ import print_function
import mne
import os.path as op
from matplotlib import pyplot as plt
Explanation: The :class:Raw <mne.io.Raw> data structure: continuous data
End of explanation
# Load an example dataset, the preload flag loads the data into memory now
data_path = op.join(mne.datasets.sample.data_path(), 'MEG',
'sample', 'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(data_path, preload=True, add_eeg_ref=False)
raw.set_eeg_reference() # set EEG average reference
# Give the sample rate
print('sample rate:', raw.info['sfreq'], 'Hz')
# Give the size of the data matrix
print('channels x samples:', raw._data.shape)
Explanation: Continuous data is stored in objects of type :class:Raw <mne.io.Raw>.
The core data structure is simply a 2D numpy array (channels × samples,
stored in a private attribute called ._data) combined with an
:class:Info <mne.Info> object (.info attribute)
(see tut_info_objects).
The most common way to load continuous data is from a .fif file. For more
information on loading data from other formats <ch_convert>, or
creating it from scratch <tut_creating_data_structures>.
Loading continuous data
End of explanation
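A few other commonly used attributes of the Raw object (an optional aside, not from the original tutorial) give the same information without touching the private array:
# Optional aside: basic properties exposed by the public API.
print('number of channels:', len(raw.ch_names))
print('number of samples :', raw.n_times)
print('duration (s)      :', raw.times[-1])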
# Extract data from the first 5 channels, from 1 s to 3 s.
sfreq = raw.info['sfreq']
data, times = raw[:5, int(sfreq * 1):int(sfreq * 3)]
_ = plt.plot(times, data.T)
_ = plt.title('Sample channels')
Explanation: <div class="alert alert-info"><h4>Note</h4><p>Accessing the `._data` attribute is done here for educational
purposes. However this is a private attribute as its name starts
with an `_`. This suggests that you should **not** access this
variable directly but rely on indexing syntax detailed just below.</p></div>
Information about the channels contained in the :class:Raw <mne.io.Raw>
object is contained in the :class:Info <mne.Info> attribute.
This is essentially a dictionary with a number of relevant fields (see
tut_info_objects).
Indexing data
To access the data stored within :class:Raw <mne.io.Raw> objects,
it is possible to index the :class:Raw <mne.io.Raw> object.
Indexing a :class:Raw <mne.io.Raw> object will return two arrays: an array
of times, as well as the data representing those timepoints. This works
even if the data is not preloaded, in which case the data will be read from
disk when indexing. The syntax is as follows:
End of explanation
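If you prefer not to compute sample indices by hand, Raw objects also provide a time_as_index helper; this snippet is an optional sketch assuming that method, equivalent to the int(sfreq * t) arithmetic used above.
# Optional sketch: convert seconds to sample indices, then slice as before.
start_idx, stop_idx = raw.time_as_index([1., 3.])
segment, segment_times = raw[:5, start_idx:stop_idx]
print(segment.shape, segment_times[0], segment_times[-1])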
# Pull all MEG gradiometer channels:
# Make sure to use .copy() or it will overwrite the data
meg_only = raw.copy().pick_types(meg=True)
eeg_only = raw.copy().pick_types(meg=False, eeg=True)
# The MEG flag in particular lets you specify a string for more specificity
grad_only = raw.copy().pick_types(meg='grad')
# Or you can use custom channel names
pick_chans = ['MEG 0112', 'MEG 0111', 'MEG 0122', 'MEG 0123']
specific_chans = raw.copy().pick_channels(pick_chans)
print(meg_only, eeg_only, grad_only, specific_chans, sep='\n')
Explanation: Selecting subsets of channels and samples
It is possible to use more intelligent indexing to extract data, using
channel names, types or time ranges.
End of explanation
f, (a1, a2) = plt.subplots(2, 1)
eeg, times = eeg_only[0, :int(sfreq * 2)]
meg, times = meg_only[0, :int(sfreq * 2)]
a1.plot(times, meg[0])
a2.plot(times, eeg[0])
del eeg, meg, meg_only, grad_only, eeg_only, data, specific_chans
Explanation: Notice the different scalings of these types
End of explanation
raw = raw.crop(0, 50) # in seconds
print('New time range from', raw.times.min(), 's to', raw.times.max(), 's')
Explanation: You can restrict the data to a specific time range
End of explanation
nchan = raw.info['nchan']
raw = raw.drop_channels(['MEG 0241', 'EEG 001'])
print('Number of channels reduced from', nchan, 'to', raw.info['nchan'])
Explanation: And drop channels by name
End of explanation
# Create multiple :class:`Raw <mne.io.RawFIF>` objects
raw1 = raw.copy().crop(0, 10)
raw2 = raw.copy().crop(10, 20)
raw3 = raw.copy().crop(20, 40)
# Concatenate in time (also works without preloading)
raw1.append([raw2, raw3])
print('Time extends from', raw1.times.min(), 's to', raw1.times.max(), 's')
Explanation: Concatenating :class:Raw <mne.io.Raw> objects
:class:Raw <mne.io.Raw> objects can be concatenated in time by using the
:func:append <mne.io.Raw.append> function. For this to work, they must
have the same number of channels and their :class:Info
<mne.Info> structures should be compatible.
End of explanation |
5,811 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sheet Copy
Copy tab from a sheet to a sheet.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step1: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project
Step2: 3. Enter Sheet Copy Recipe Parameters
Provide the full edit URL for both sheets.
Provide the tab name for both sheets.
The tab will only be copied if it does not already exist.
Modify the values below for your use case, can be done multiple times, then click play.
Step3: 4. Execute Sheet Copy
This does NOT need to be modified unless you are changing the recipe, click play. | Python Code:
!pip install git+https://github.com/google/starthinker
Explanation: Sheet Copy
Copy tab from a sheet to a sheet.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Disclaimer
This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.
This code generated (see starthinker/scripts for possible source):
- Command: "python starthinker_ui/manage.py colab"
- Command: "python starthinker/tools/colab.py [JSON RECIPE]"
1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
End of explanation
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
Explanation: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project:
Set the configuration project value to the project identifier from these instructions.
If the recipe has auth set to user:
If you have user credentials:
Set the configuration user value to your user credentials JSON.
If you DO NOT have user credentials:
Set the configuration client value to downloaded client credentials.
If the recipe has auth set to service:
Set the configuration service value to downloaded service credentials.
End of explanation
FIELDS = {
'auth_read':'user', # Credentials used for reading data.
'from_sheet':'',
'from_tab':'',
'to_sheet':'',
'to_tab':'',
}
print("Parameters Set To: %s" % FIELDS)
Explanation: 3. Enter Sheet Copy Recipe Parameters
Provide the full edit URL for both sheets.
Provide the tab name for both sheets.
The tab will only be copied if it does not already exist.
Modify the values below for your use case, can be done multiple times, then click play.
End of explanation
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'sheets':{
'auth':{'field':{'name':'auth_read','kind':'authentication','order':1,'default':'user','description':'Credentials used for reading data.'}},
'template':{
'sheet':{'field':{'name':'from_sheet','kind':'string','order':1,'default':''}},
'tab':{'field':{'name':'from_tab','kind':'string','order':2,'default':''}}
},
'sheet':{'field':{'name':'to_sheet','kind':'string','order':3,'default':''}},
'tab':{'field':{'name':'to_tab','kind':'string','order':4,'default':''}}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
Explanation: 4. Execute Sheet Copy
This does NOT need to be modified unless you are changing the recipe, click play.
End of explanation |
5,812 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
Step1: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
Step2: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
Step3: And we can see the characters encoded as integers.
Step4: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
Step6: Making training and validation batches
Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.
The idea here is to make a 2D matrix where the number of rows is equal to the batch size. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
Step7: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
Step8: Looking at the size of this array, we see that we have rows equal to the batch size. When we want to get a batch out of here, we can grab a subset of this array that contains all the rows but has a width equal to the number of steps in the sequence. The first batch looks like this
Step9: I'll write another function to grab batches out of the arrays made by split_data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
Step10: Building the model
Below is a function where I build the graph for the network.
Step11: Hyperparameters
Here I'm defining the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to write it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular
Step12: Training
Time for training which is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}_v{validation loss}.ckpt
Step13: Saved checkpoints
Read up on saving and loading checkpoints here
Step14: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
Step15: Here, pass in the path to a checkpoint and sample from the network. | Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
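As a quick optional sanity check (not in the original notebook), decoding the encoded array with int_to_vocab should reproduce the opening text exactly:
# Optional sanity check: the two lookup tables should round-trip the text.
print(''.join(int_to_vocab[i] for i in chars[:50]))
print(len(vocab), 'unique characters in the vocabulary')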
text[:100]
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
chars[:100]
Explanation: And we can see the characters encoded as integers.
End of explanation
np.max(chars)+1
Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
End of explanation
def split_data(chars, batch_size, num_steps, split_frac=0.9):
Split character data into training and validation sets, inputs and targets for each set.
Arguments
---------
chars: character array
batch_size: Size of examples in each of batch
num_steps: Number of sequence steps to keep in the input and pass to the network
split_frac: Fraction of batches to keep in the training set
Returns train_x, train_y, val_x, val_y
slice_size = batch_size * num_steps
n_batches = int(len(chars) / slice_size)
# Drop the last few characters to make only full batches
x = chars[: n_batches*slice_size]
y = chars[1: n_batches*slice_size + 1]
# Split the data into batch_size slices, then stack them into a 2D matrix
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))
# Now x and y are arrays with dimensions batch_size x n_batches*num_steps
# Split into training and validation sets, keep the first split_frac batches for training
split_idx = int(n_batches*split_frac)
train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
return train_x, train_y, val_x, val_y
Explanation: Making training and validation batches
Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.
The idea here is to make a 2D matrix where the number of rows is equal to the batch size. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
End of explanation
train_x, train_y, val_x, val_y = split_data(chars, 10, 50)
train_x.shape
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
train_x[:,:50]
Explanation: Looking at the size of this array, we see that we have rows equal to the batch size. When we want to get a batch out of here, we can grab a subset of this array that contains all the rows but has a width equal to the number of steps in the sequence. The first batch looks like this:
End of explanation
def get_batch(arrs, num_steps):
batch_size, slice_size = arrs[0].shape
n_batches = int(slice_size/num_steps)
for b in range(n_batches):
yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
Explanation: I'll write another function to grab batches out of the arrays made by split_data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
End of explanation
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
learning_rate=0.001, grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# One-hot encoding the input and target characters
x_one_hot = tf.one_hot(inputs, num_classes)
y_one_hot = tf.one_hot(targets, num_classes)
### Build the RNN layers
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
### Run the data through the RNN layers
# This makes a list where each element is one step in the sequence
rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(x_one_hot, num_steps, 1)]
# Run each sequence step through the RNN and collect the outputs
outputs, state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=initial_state)
final_state = state
# Reshape output so it's a bunch of rows, one output row for each step for each batch
seq_output = tf.concat(outputs, axis=1)
output = tf.reshape(seq_output, [-1, lstm_size])
# Now connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(num_classes))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and batch
logits = tf.matmul(output, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
preds = tf.nn.softmax(logits, name='predictions')
# Reshape the targets to match the logits
y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
cost = tf.reduce_mean(loss)
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
# Export the nodes
# NOTE: I'm using a namedtuple here because I think they are cool
export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
'keep_prob', 'cost', 'preds', 'optimizer']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
return graph
Explanation: Building the model
Below is a function where I build the graph for the network.
End of explanation
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
keep_prob = 0.5
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to write it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
Best models strategy
The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
End of explanation
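To connect the parameter-counting advice above to the settings chosen here, a rough estimate can be computed by hand. This helper is an added sketch that uses the usual LSTM formula of 4*h*(h + input + 1) weights per layer plus the softmax layer, so treat the exact number as approximate.
# Added sketch: rough parameter count for the stacked LSTM + softmax model.
def approx_param_count(num_classes, lstm_size, num_layers):
    total = 4 * lstm_size * (lstm_size + num_classes + 1)            # first LSTM layer
    total += (num_layers - 1) * 4 * lstm_size * (2 * lstm_size + 1)  # deeper LSTM layers
    total += lstm_size * num_classes + num_classes                   # softmax layer
    return total

print('approx. parameters:', approx_param_count(len(vocab), lstm_size, num_layers))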
epochs = 20
# Save every N iterations
save_every_n = 200
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
n_batches = int(train_x.shape[1]/num_steps)
iterations = n_batches * epochs
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
iteration = e*n_batches + b
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer],
feed_dict=feed)
loss += batch_loss
end = time.time()
print('Epoch {}/{} '.format(e+1, epochs),
'Iteration {}/{}'.format(iteration, iterations),
'Training loss: {:.4f}'.format(loss/b),
'{:.4f} sec/batch'.format((end-start)))
if (iteration%save_every_n == 0) or (iteration == iterations):
# Check performance, notice dropout has been set to 1
val_loss = []
new_state = sess.run(model.initial_state)
for x, y in get_batch([val_x, val_y], num_steps):
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 1.,
model.initial_state: new_state}
batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)
val_loss.append(batch_loss)
print('Validation loss:', np.mean(val_loss),
'Saving checkpoint!')
saver.save(sess, "checkpoints/i{}_l{}_v{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
Explanation: Training
Time for training which is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}_v{validation loss}.ckpt
End of explanation
tf.train.get_checkpoint_state('checkpoints')
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
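To see the top-N filtering in isolation, here is a tiny made-up probability vector run through pick_top_n; the values are for illustration only.
# Illustration only: pick_top_n keeps the 3 largest probabilities and renormalizes.
fake_preds = np.array([[0.40, 0.25, 0.15, 0.10, 0.06, 0.04]])
print(pick_top_n(fake_preds, vocab_size=6, top_n=3))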
checkpoint = "checkpoints/i3560_l512_v1.124.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation |
5,813 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Python programming
This crash course on Python is taken from two sources
Step1: built in magic commands start with
A good list of the commands are found in
Step2: Character encoding
The standard character encoding is ASCII, but we can use any other encoding, for example UTF-8. To specify that UTF-8 is used we include the special line
# -*- coding
Step3: Other than these two optional lines in the beginning of a Python code file, no additional code is required for initializing a program.
Jupyter notebooks ( or old ipython notebooks)
This file - an IPython notebook - does not follow the standard pattern with Python code in a text file. Instead, an IPython notebook is stored as a file in the JSON format. The advantage is that we can mix formatted text, Python code and code output. It requires the IPython notebook server to run it though, and therefore isn't a stand-alone Python program as described above. Other than that, there is no difference between the Python code that goes into a program file or an IPython notebook.
Modules
Most of the functionality in Python is provided by modules. The Python Standard Library is a large collection of modules that provides cross-platform implementations of common facilities such as access to the operating system, file I/O, string management, network communication, and much more.
References
The Python Language Reference
Step4: This includes the whole module and makes it available for use later in the program. For example, we can do
Step5: Alternatively, we can chose to import all symbols (functions and variables) in a module to the current namespace (so that we don't need to use the prefix "math." every time we use something from the math module
Step6: This pattern can be very convenient, but in large programs that include many modules it is often a good idea to keep the symbols from each module in their own namespaces, by using the import math pattern. This would elminate potentially confusing problems with name space collisions.
As a third alternative, we can chose to import only a few selected symbols from a module by explicitly listing which ones we want to import instead of using the wildcard character *
Step7: Looking at what a module contains, and its documentation
Once a module is imported, we can list the symbols it provides using the dir function
Step8: And using the function help we can get a description of each function (almost .. not all functions have docstrings, as they are technically called, but the vast majority of functions are documented this way).
Step9: We can also use the help function directly on modules
Step10: White spacing is ignored inside parenteses and brackets
Step11: So use that to make you code easier to read
Step14: This may make it hard to cut and paste code since the indentation may have to adjusted to the block
Variables and types
Symbol names
Variable names in Python can contain alphanumerical characters a-z, A-Z, 0-9 and some special characters such as _. Normal variable names must start with a letter.
By convention, variable names start with a lower-case letter, and Class names start with a capital letter.
In addition, there are a number of Python keywords that cannot be used as variable names. These keywords are
Step15: You can create short anonymous fuctions or lambdas or even assign lambdas to variables but better to use a def
Step16: Assignment
The assignment operator in Python is =. Python is a dynamically typed language, so we do not need to specify the type of a variable when we create one.
Assigning a value to a new variable creates the variable
Step17: Although not explicitly specified, a variable does have a type associated with it. The type is derived from the value that was assigned to it.
Step18: If we assign a new value to a variable, its type can change.
Step19: If we try to use a variable that has not yet been defined we get an NameError
Step20: Fundamental types
Step21: Type utility functions
The module types contains a number of type name definitions that can be used to test if variables are of certain types
Step22: We can also use the isinstance method for testing types of variables
Step23: Type casting
Step24: Complex variables cannot be cast to floats or integers. We need to use z.real or z.imag to extract the part of the complex number we want
Step25: Operators and comparisons
Most operators and comparisons in Python work as one would expect
Step26: Note
Step27: Comparison operators >, <, >= (greater or equal), <= (less or equal), == equality, is identical.
Step28: Compound types
Step29: We can index a character in a string using []
Step30: Heads up MATLAB users
Step31: If we omit either (or both) of start or stop from [start
Step32: We can also define the step size using the syntax [start
Step33: This technique is called slicing. Read more about the syntax here
Step34: List
Lists are very similar to strings, except that each element can be of any type.
The syntax for creating lists in Python is [...]
Step35: We can use the same slicing techniques to manipulate lists as we could use on strings
Step36: Heads up MATLAB users
Step37: Elements in a list do not all have to be of the same type
Step38: Python lists can be inhomogeneous and arbitrarily nested
Step39: Lists play a very important role in Python. For example they are used in loops and other flow control structures (discussed below). There are a number of convenient functions for generating lists of various types, for example the range function
Step40: Adding, inserting, modifying, and removing elements from lists
Step41: We can modify lists by assigning new values to elements in the list. In technical jargon, lists are mutable.
Step42: Insert an element at an specific index using insert
Step43: Remove first element with specific value using 'remove'
Step44: Remove an element at a specific location using del
Step45: See help(list) for more details, or read the online documentation
Tuples
Tuples are like lists, except that they cannot be modified once created, that is they are immutable.
In Python, tuples are created using the syntax (..., ..., ...), or even ..., ...
Step46: We can unpack a tuple by assigning it to a comma-separated list of variables
Step47: If we try to assign a new value to an element in a tuple we get an error
Step48: Dictionaries
Dictionaries are also like lists, except that each element is a key-value pair. The syntax for dictionaries is {key1
Step49: Control Flow
Conditional statements
Step50: For the first time, here we encounted a peculiar and unusual aspect of the Python programming language
Step51: Loops
In Python, loops can be programmed in a number of different ways. The most common is the for loop, which is used together with iterable objects, such as lists. The basic syntax is
Step52: The for loop iterates over the elements of the supplied list, and executes the containing block once for each element. Any kind of list can be used in the for loop. For example
Step53: Note
Step54: To iterate over key-value pairs of a dictionary
Step55: Sometimes it is useful to have access to the indices of the values when iterating over a list. We can use the enumerate function for this
Step56: List comprehensions
Step57: while loops
Step58: Note that the print("done") statement is not part of the while loop body because of the difference in indentation.
Functions
A function in Python is defined using the keyword def, followed by a function name, a signature within parentheses (), and a colon
Step60: Optionally, but highly recommended, we can define a so called "docstring", which is a description of the functions purpose and behaivor. The docstring should follow directly after the function definition, before the code in the function body.
Step62: Functions that returns a value use the return keyword
Step64: We can return multiple values from a function using tuples (see above)
Step65: Default argument and keyword arguments
In a definition of a function, we can give default values to the arguments the function takes
Step66: If we don't provide a value of the debug argument when calling the the function myfunc it defaults to the value provided in the function definition
Step67: If we explicitly list the name of the arguments in the function calls, they do not need to come in the same order as in the function definition. This is called keyword arguments, and is often very useful in functions that takes a lot of optional arguments.
Step68: Unnamed functions (lambda function)
In Python we can also create unnamed functions, using the lambda keyword
Step69: This technique is useful for example when we want to pass a simple function as an argument to another function, like this
Step73: Classes
Classes are the key features of object-oriented programming. A class is a structure for representing an object and the operations that can be performed on the object.
In Python a class can contain attributes (variables) and methods (functions).
A class is defined almost like a function, but using the class keyword, and the class definition usually contains a number of class method definitions (a function in a class).
Each class method should have an argument self as its first argument. This object is a self-reference.
Some class method names have special meaning, for example
Step74: To create a new instance of a class
Step75: To invoke a class method in the class instance p
Step80: Note that calling class methods can modifiy the state of that particular class instance, but does not effect other class instances or any global variables.
That is one of the nice things about object-oriented design
Step81: We can import the module mymodule into our Python program using import
Step82: Use help(module) to get a summary of what the module provides
Step83: If we make changes to the code in mymodule.py, we need to reload it using reload
Step84: Exceptions
In Python errors are managed with a special language construct called "Exceptions". When errors occur exceptions can be raised, which interrupts the normal program flow and fallback to somewhere else in the code where the closest try-except statement is defined.
To generate an exception we can use the raise statement, which takes an argument that must be an instance of the class BaseException or a class derived from it.
Step85: A typical use of exceptions is to abort functions when some error condition occurs, for example
Step86: To get information about the error, we can access the Exception class instance that describes the exception by using for example
Step87: Counter
Turns a sequence of values into a defaultdict(int)-like object mapping keys to counts (good for histograms)
Step88: Sorting
Step89: List Comprehensions
Step91: Generators and Iterators
Step92: You need to recreate the lazy generator to use it a second time or use a list | Python Code:
ls ..\..\Scripts\hello-world*.py
Explanation: Introduction to Python programming
This crash course on Python is taken from two sources:
http://github.com/jrjohansson/scientific-python-lectures.
and
Chapter 2 of the Datascience from scratch: First principles with python
Python program files
Python code is usually stored in text files with the file ending ".py":
myprogram.py
Every line in a Python program file is assumed to be a Python statement, or part thereof.
The only exception is comment lines, which start with the character # (optionally preceded by an arbitrary number of white-space characters, i.e., tabs or spaces). Comment lines are usually ignored by the Python interpreter.
To run our Python program from the command line we use:
$ python myprogram.py
On UNIX systems it is common to define the path to the interpreter on the first line of the program (note that this is a comment line as far as the Python interpreter is concerned):
#!/usr/bin/env python
If we do, and if we additionally set the file script to be executable, we can run the program like this:
$ myprogram.py
Example:
a. use some command line functions:
in Windows:
ls ..\Scripts\hello-world.py
Mac or linux:
ls ../Scripts/hello-world.py
End of explanation
%%sh
cat ../../Scripts/hello-world.py
!python ..\..\Scripts\hello-world.py
Explanation: Built-in magic commands start with a percent sign: % for line magics and %% for cell magics.
A good list of the commands can be found at:
https://ipython.org/ipython-doc/3/interactive/magics.html
End of explanation
%%sh
cat ../../Scripts/hello-world-in-swedish.py
!python ../../Scripts/hello-world-in-swedish.py
Explanation: Character encoding
The standard character encoding is ASCII, but we can use any other encoding, for example UTF-8. To specify that UTF-8 is used we include the special line
# -*- coding: UTF-8 -*-
at the top of the file.
End of explanation
import math
Explanation: Other than these two optional lines in the beginning of a Python code file, no additional code is required for initializing a program.
Jupyter notebooks ( or old ipython notebooks)
This file - an IPython notebook - does not follow the standard pattern with Python code in a text file. Instead, an IPython notebook is stored as a file in the JSON format. The advantage is that we can mix formatted text, Python code and code output. It requires the IPython notebook server to run it though, and therefore isn't a stand-alone Python program as described above. Other than that, there is no difference between the Python code that goes into a program file or an IPython notebook.
Modules
Most of the functionality in Python is provided by modules. The Python Standard Library is a large collection of modules that provides cross-platform implementations of common facilities such as access to the operating system, file I/O, string management, network communication, and much more.
References
The Python Language Reference: http://docs.python.org/2/reference/index.html
The Python Standard Library: http://docs.python.org/2/library/
To use a module in a Python program it first has to be imported. A module can be imported using the import statement. For example, to import the module math, which contains many standard mathematical functions, we can do:
End of explanation
import math
x = math.cos(2 * math.pi)
print(x)
Explanation: This includes the whole module and makes it available for use later in the program. For example, we can do:
End of explanation
from math import *
x = cos(2 * pi)
print(x)
Explanation: Alternatively, we can choose to import all symbols (functions and variables) in a module into the current namespace (so that we don't need to use the prefix "math." every time we use something from the math module):
End of explanation
from math import cos, pi
x = cos(2 * pi)
print(x)
Explanation: This pattern can be very convenient, but in large programs that include many modules it is often a good idea to keep the symbols from each module in their own namespaces, by using the import math pattern. This would eliminate potentially confusing problems with namespace collisions.
As a third alternative, we can choose to import only a few selected symbols from a module by explicitly listing which ones we want to import instead of using the wildcard character *:
End of explanation
import math
print(dir(math))
Explanation: Looking at what a module contains, and its documentation
Once a module is imported, we can list the symbols it provides using the dir function:
End of explanation
help(math.log)
log(10)
log(10, 2)
Explanation: And using the function help we can get a description of each function (almost .. not all functions have docstrings, as they are technically called, but the vast majority of functions are documented this way).
End of explanation
for i in [1,2,3,4]:
    print(i)
print("done looping")
Explanation: We can also use the help function directly on modules: Try
help(math)
Some very useful modules from the Python standard library are os, sys, math, shutil, re, subprocess, multiprocessing, threading.
Complete lists of the standard modules for Python 2 and Python 3 are available at http://docs.python.org/2/library/ and http://docs.python.org/3/library/, respectively.
Whitespacing formatting
Python uses indents and not bracing to delimit blocks of code
Makes code readable but it means you must be careful about your formatting
End of explanation
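As a small supplementary sketch of the standard-library modules mentioned above (the exact output depends on your system and Python installation):
import os
import sys

print(os.getcwd())         # current working directory
print(sys.version_info)    # version of the running Python interpreter
print(os.path.join("Scripts", "hello-world.py"))  # portable path construction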
long_winded_computation = (1+2+ 3 + 4 + 5+ 6
+ 7 + 8 + 9 + 10 + 11+
13 + 14 + 15)
long_winded_computation
Explanation: White spacing is ignored inside parenteses and brackets:
End of explanation
list_of_lists = [[1,2,3],[4,5,6],[7,8,9]]
easier_to_read_list_of_lists = [[1,2,3],
[4,5,6],
[7,8,9]]
Explanation: So use that to make you code easier to read
End of explanation
def double(x):
    this is where you put an optional docstring that explains what the function does:
    for example, this function multiplies its input by 2
return x * 2
def apply_to_one(f):
    calls the function f with 1 as its argument
return f(1)
apply_to_one(double)
Explanation: This may make it hard to cut and paste code since the indentation may have to adjusted to the block
Variables and types
Symbol names
Variable names in Python can contain alphanumerical characters a-z, A-Z, 0-9 and some special characters such as _. Normal variable names must start with a letter or an underscore, not a digit.
By convention, variable names start with a lower-case letter, and Class names start with a capital letter.
In addition, there are a number of Python keywords that cannot be used as variable names. These keywords are:
and, as, assert, break, class, continue, def, del, elif, else, except,
exec, finally, for, from, global, if, import, in, is, lambda, not, or,
pass, print, raise, return, try, while, with, yield
Note: Be aware of the keyword lambda, which could easily be a natural variable name in a scientific program. But being a keyword, it cannot be used as a variable name.
Functions
A function is a rule for taking zero or more inputs and returning a corresponding output.
Functions are first-class, which means we can assign them to variables and pass them into other functions just like any other argument.
End of explanation
apply_to_one(lambda x: x + 4 )
another_double = lambda x: 2 * x
def yet_another_double(x): return 2 * x
Explanation: You can create short anonymous functions (lambdas) and even assign lambdas to variables, but it is usually better to use def:
End of explanation
# variable assignments
x = 1.0
my_variable = 12.2
Explanation: Assignment
The assignment operator in Python is =. Python is a dynamically typed language, so we do not need to specify the type of a variable when we create one.
Assigning a value to a new variable creates the variable:
End of explanation
type(x)
Explanation: Although not explicitly specified, a variable does have a type associated with it. The type is derived from the value that was assigned to it.
End of explanation
x = 1
type(x)
Explanation: If we assign a new value to a variable, its type can change.
End of explanation
print(y)
Explanation: If we try to use a variable that has not yet been defined we get a NameError:
End of explanation
# integers
x = 1
type(x)
# float
x = 1.0
type(x)
# boolean
b1 = True
b2 = False
type(b1)
# complex numbers: note the use of `j` to specify the imaginary part
x = 1.0 - 1.0j
type(x)
print(x)
print(x.real, x.imag)
Explanation: Fundamental types
End of explanation
import types
# print all types defined in the `types` module
print(dir(types))
x = 1.0
# check if the variable x is a float
type(x) is float
# check if the variable x is an int
type(x) is int
Explanation: Type utility functions
The module types contains a number of type name definitions that can be used to test if variables are of certain types:
End of explanation
isinstance(x, float)
Explanation: We can also use the isinstance method for testing types of variables:
End of explanation
x = 1.5
print(x, type(x))
x = int(x)
print(x, type(x))
z = complex(x)
print(z, type(z))
x = float(z)
Explanation: Type casting
End of explanation
y = bool(z.real)
print(z.real, " -> ", y, type(y))
y = bool(z.imag)
print(z.imag, " -> ", y, type(y))
Explanation: Complex variables cannot be cast to floats or integers. We need to use z.real or z.imag to extract the part of the complex number we want:
End of explanation
1 + 2, 1 - 2, 1 * 2, 1 / 2
1.0 + 2.0, 1.0 - 2.0, 1.0 * 2.0, 1.0 / 2.0
# Integer division of float numbers
3.0 // 2.0
# Note! The power operators in python isn't ^, but **
2 ** 2
Explanation: Operators and comparisons
Most operators and comparisons in Python work as one would expect:
Arithmetic operators +, -, *, /, // (integer division), '**' power
End of explanation
True and False
not False
True or False
Explanation: Note: The / operator always performs a floating point division in Python 3.x.
This is not true in Python 2.x, where the result of / is always an integer if the operands are integers.
to be more specific, 1/2 = 0.5 (float) in Python 3.x, and 1/2 = 0 (int) in Python 2.x (but 1.0/2 = 0.5 in Python 2.x).
The boolean operators are spelled out as the words and, not, or.
End of explanation
2 > 1, 2 < 1
2 > 2, 2 < 2
2 >= 2, 2 <= 2
# equality
[1,2] == [1,2]
# objects identical?
l1 = l2 = [1,2]
l1 is l2
Explanation: Comparison operators >, <, >= (greater or equal), <= (less or equal), == equality, is identical.
End of explanation
s = "Hello world"
type(s)
# length of the string: the number of characters
len(s)
# replace a substring in a string with something else
s2 = s.replace("world", "test")
print(s2)
Explanation: Compound types: Strings, List and dictionaries
Strings
Strings are the variable type that is used for storing text messages.
End of explanation
s[0]
Explanation: We can index a character in a string using []:
End of explanation
s[0:5]
s[4:5]
Explanation: Heads up MATLAB users: Indexing starts at 0!
We can extract a part of a string using the syntax [start:stop], which extracts characters between index start and stop -1 (the character at index stop is not included):
End of explanation
s[:5]
s[6:]
s[:]
Explanation: If we omit either (or both) of start or stop from [start:stop], the default is the beginning and the end of the string, respectively:
End of explanation
s[::1]
s[::2]
Explanation: We can also define the step size using the syntax [start:end:step] (the default value for step is 1, as we saw above):
End of explanation
print("str1", "str2", "str3") # The print statement concatenates strings with a space
print("str1", 1.0, False, -1j) # The print statements converts all arguments to strings
print("str1" + "str2" + "str3") # strings added with + are concatenated without space
print("value = %f" % 1.0) # we can use C-style string formatting
# this formatting creates a string
s2 = "value1 = %.2f. value2 = %d" % (3.1415, 1.5)
print(s2)
# alternative, more intuitive way of formatting a string
s3 = 'value1 = {0}, value2 = {1}'.format(3.1415, 1.5)
print(s3)
Explanation: This technique is called slicing. Read more about the syntax here: http://docs.python.org/release/2.7.3/library/functions.html?highlight=slice#slice
Python has a very rich set of functions for text processing. See for example http://docs.python.org/2/library/string.html for more information.
String formatting examples
End of explanation
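Beyond slicing and formatting, a few of the many built-in string methods are shown below (a brief supplementary sketch, not part of the original lecture):
s = "Hello world"
print(s.upper())                   # 'HELLO WORLD'
print(s.split(" "))                # ['Hello', 'world'], split on a delimiter
print("-".join(["a", "b", "c"]))   # 'a-b-c', join a list of strings
print("  padded  ".strip())        # 'padded', remove surrounding whitespace
print(s.find("world"))             # 6, index of the first occurrence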
l = [1,2,3,4]
print(type(l))
print(l)
Explanation: List
Lists are very similar to strings, except that each element can be of any type.
The syntax for creating lists in Python is [...]:
End of explanation
print(l)
print(l[1:3])
print(l[::2])
Explanation: We can use the same slicing techniques to manipulate lists as we could use on strings:
End of explanation
l[0]
Explanation: Heads up MATLAB users: Indexing starts at 0!
End of explanation
l = [1, 'a', 1.0, 1-1j]
print(l)
Explanation: Elements in a list do not all have to be of the same type:
End of explanation
nested_list = [1, [2, [3, [4, [5]]]]]
nested_list
Explanation: Python lists can be inhomogeneous and arbitrarily nested:
End of explanation
start = 10
stop = 30
step = 2
range(start, stop, step)
# in python 3 range generates an iterator, which can be converted to a list using 'list(...)'.
# It has no effect in python 2
list(range(start, stop, step))
list(range(-10, 10))
s
# convert a string to a list by type casting:
s2 = list(s)
s2
# sorting lists
s2.sort()
print(s2)
Explanation: Lists play a very important role in Python. For example they are used in loops and other flow control structures (discussed below). There are a number of convenient functions for generating lists of various types, for example the range function:
End of explanation
# create a new empty list
l = []
# add an elements using `append`
l.append("A")
l.append("d")
l.append("d")
print(l)
Explanation: Adding, inserting, modifying, and removing elements from lists
End of explanation
l[1] = "p"
l[2] = "p"
print(l)
l[1:3] = ["d", "d"]
print(l)
Explanation: We can modify lists by assigning new values to elements in the list. In technical jargon, lists are mutable.
End of explanation
l.insert(0, "i")
l.insert(1, "n")
l.insert(2, "s")
l.insert(3, "e")
l.insert(4, "r")
l.insert(5, "t")
print(l)
Explanation: Insert an element at a specific index using insert
End of explanation
l.remove("A")
print(l)
Explanation: Remove first element with specific value using 'remove'
End of explanation
del l[7]
del l[6]
print(l)
Explanation: Remove an element at a specific location using del:
End of explanation
point = (10, 20)
print(point, type(point))
point = 10, 20
print(point, type(point))
Explanation: See help(list) for more details, or read the online documentation
Tuples
Tuples are like lists, except that they cannot be modified once created, that is they are immutable.
In Python, tuples are created using the syntax (..., ..., ...), or even ..., ...:
End of explanation
x, y = point
print("x =", x)
print("y =", y)
Explanation: We can unpack a tuple by assigning it to a comma-separated list of variables:
End of explanation
point[0] = 20
Explanation: If we try to assign a new value to an element in a tuple we get an error:
End of explanation
params = {"parameter1" : 1.0,
"parameter2" : 2.0,
"parameter3" : 3.0,}
print(type(params))
print(params)
print("parameter1 = " + str(params["parameter1"]))
print("parameter2 = " + str(params["parameter2"]))
print("parameter3 = " + str(params["parameter3"]))
params["parameter1"] = "A"
params["parameter2"] = "B"
# add a new entry
params["parameter4"] = "D"
print("parameter1 = " + str(params["parameter1"]))
print("parameter2 = " + str(params["parameter2"]))
print("parameter3 = " + str(params["parameter3"]))
print("parameter4 = " + str(params["parameter4"]))
Explanation: Dictionaries
Dictionaries are also like lists, except that each element is a key-value pair. The syntax for dictionaries is {key1 : value1, ...}:
End of explanation
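A few other common dictionary operations, shown here as a supplementary sketch:
params = {"parameter1": 1.0, "parameter2": 2.0}
print("parameter1" in params)          # membership test on the keys -> True
print(params.get("parameter9", 0.0))   # .get returns a default instead of raising KeyError
print(list(params.keys()))             # list of keys
print(list(params.values()))           # list of values
del params["parameter2"]               # remove an entry
print(params)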
statement1 = False
statement2 = False
if statement1:
print("statement1 is True")
elif statement2:
print("statement2 is True")
else:
print("statement1 and statement2 are False")
Explanation: Control Flow
Conditional statements: if, elif, else
The Python syntax for conditional execution of code uses the keywords if, elif (else if), else:
End of explanation
statement1 = statement2 = True
if statement1:
if statement2:
print("both statement1 and statement2 are True")
# Bad indentation!
if statement1:
if statement2:
print("both statement1 and statement2 are True") # this line is not properly indented
statement1 = False
if statement1:
print("printed if statement1 is True")
print("still inside the if block")
if statement1:
print("printed if statement1 is True")
print("now outside the if block")
Explanation: For the first time, here we encounted a peculiar and unusual aspect of the Python programming language: Program blocks are defined by their indentation level.
Compare to the equivalent C code:
if (statement1)
{
printf("statement1 is True\n");
}
else if (statement2)
{
printf("statement2 is True\n");
}
else
{
printf("statement1 and statement2 are False\n");
}
In C, blocks are defined by the enclosing curly brackets { and }, and the level of indentation (white space before the code statements) does not matter (it is completely optional).
But in Python, the extent of a code block is defined by the indentation level (usually a tab or say four white spaces). This means that we have to be careful to indent our code correctly, or else we will get syntax errors.
Examples:
End of explanation
for x in [1,2,3]:
print(x)
Explanation: Loops
In Python, loops can be programmed in a number of different ways. The most common is the for loop, which is used together with iterable objects, such as lists. The basic syntax is:
for loops:
End of explanation
for x in range(4): # by default range start at 0
print(x)
Explanation: The for loop iterates over the elements of the supplied list, and executes the containing block once for each element. Any kind of list can be used in the for loop. For example:
End of explanation
for x in range(-3,3):
print(x)
for word in ["scientific", "computing", "with", "python"]:
print(word)
Explanation: Note: range(4) does not include 4 !
End of explanation
for key, value in params.items():
print(key + " = " + str(value))
Explanation: To iterate over key-value pairs of a dictionary:
End of explanation
for idx, x in enumerate(range(-3,3)):
print(idx, x)
Explanation: Sometimes it is useful to have access to the indices of the values when iterating over a list. We can use the enumerate function for this:
End of explanation
l1 = [x**2 for x in range(0,5)]
print(l1)
Explanation: List comprehensions: Creating lists using for loops:
A convenient and compact way to initialize lists:
End of explanation
i = 0
while i < 5:
print(i)
i = i + 1
print("done")
Explanation: while loops:
End of explanation
def func0():
print("test")
func0()
Explanation: Note that the print("done") statement is not part of the while loop body because of the difference in indentation.
Functions
A function in Python is defined using the keyword def, followed by a function name, a signature within parentheses (), and a colon :. The following code, with one additional level of indentation, is the function body.
End of explanation
def func1(s):
Print a string 's' and tell how many characters it has
print(s + " has " + str(len(s)) + " characters")
help(func1)
func1("test")
Explanation: Optionally, but highly recommended, we can define a so-called "docstring", which is a description of the function's purpose and behavior. The docstring should follow directly after the function definition, before the code in the function body.
End of explanation
def square(x):
Return the square of x.
return x ** 2
square(4)
Explanation: Functions that returns a value use the return keyword:
End of explanation
def powers(x):
Return a few powers of x.
return x ** 2, x ** 3, x ** 4
powers(3)
x2, x3, x4 = powers(3)
print(x3)
Explanation: We can return multiple values from a function using tuples (see above):
End of explanation
def myfunc(x, p=2, debug=False):
if debug:
print("evaluating myfunc for x = " + str(x) + " using exponent p = " + str(p))
return x**p
Explanation: Default argument and keyword arguments
In a definition of a function, we can give default values to the arguments the function takes:
End of explanation
myfunc(5)
myfunc(5, debug=True)
Explanation: If we don't provide a value of the debug argument when calling the the function myfunc it defaults to the value provided in the function definition:
End of explanation
myfunc(p=3, debug=True, x=7)
Explanation: If we explicitly list the names of the arguments in the function calls, they do not need to come in the same order as in the function definition. These are called keyword arguments, and they are often very useful in functions that take a lot of optional arguments.
End of explanation
f1 = lambda x: x**2
# is equivalent to
def f2(x):
return x**2
f1(2), f2(2)
Explanation: Unnamed functions (lambda function)
In Python we can also create unnamed functions, using the lambda keyword:
End of explanation
# map is a built-in python function
map(lambda x: x**2, range(-3,4))
# in python 3 we can use `list(...)` to convert the iterator to an explicit list
list(map(lambda x: x**2, range(-3,4)))
Explanation: This technique is useful for example when we want to pass a simple function as an argument to another function, like this:
End of explanation
class Point:
Simple class for representing a point in a Cartesian coordinate system.
def __init__(self, x, y):
Create a new Point at x, y.
self.x = x
self.y = y
def translate(self, dx, dy):
Translate the point by dx and dy in the x and y direction.
self.x += dx
self.y += dy
def __str__(self):
return("Point at [%f, %f]" % (self.x, self.y))
Explanation: Classes
Classes are the key features of object-oriented programming. A class is a structure for representing an object and the operations that can be performed on the object.
In Python a class can contain attributes (variables) and methods (functions).
A class is defined almost like a function, but using the class keyword, and the class definition usually contains a number of class method definitions (a function in a class).
Each class method should have an argument self as its first argument. This object is a self-reference.
Some class method names have special meaning, for example:
__init__: The name of the method that is invoked when the object is first created.
__str__ : A method that is invoked when a simple string representation of the class is needed, as for example when printed.
There are many more, see http://docs.python.org/2/reference/datamodel.html#special-method-names
End of explanation
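As a supplementary sketch (not from the original lecture), other special method names work the same way; for example, __add__ defines the behaviour of the + operator and __repr__ the developer-oriented string representation:
class Vector:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __add__(self, other):
        # invoked by the + operator
        return Vector(self.x + other.x, self.y + other.y)

    def __repr__(self):
        # invoked when the object is echoed or passed to repr()
        return "Vector(%r, %r)" % (self.x, self.y)

v = Vector(1, 2) + Vector(3, 4)
print(v)   # Vector(4, 6)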
p1 = Point(0, 0) # this will invoke the __init__ method in the Point class
print(p1) # this will invoke the __str__ method
Explanation: To create a new instance of a class:
End of explanation
p2 = Point(1, 1)
p1.translate(0.25, 1.5)
print(p1)
print(p2)
Explanation: To invoke a class method in the class instance p:
End of explanation
%%file mymodule.py
Example of a python module. Contains a variable called my_variable,
a function called my_function, and a class called MyClass.
my_variable = 0
def my_function():
Example function
return my_variable
class MyClass:
Example class.
def __init__(self):
self.variable = my_variable
def set_variable(self, new_value):
Set self.variable to a new value
self.variable = new_value
def get_variable(self):
return self.variable
Explanation: Note that calling class methods can modify the state of that particular class instance, but does not affect other class instances or any global variables.
That is one of the nice things about object-oriented design: code such as functions and related variables are grouped into separate and independent entities.
Modules
One of the most important concepts in good programming is to reuse code and avoid repetitions.
The idea is to write functions and classes with a well-defined purpose and scope, and reuse these instead of repeating similar code in different part of a program (modular programming). The result is usually that readability and maintainability of a program is greatly improved. What this means in practice is that our programs have fewer bugs, are easier to extend and debug/troubleshoot.
Python supports modular programming at different levels. Functions and classes are examples of tools for low-level modular programming. Python modules are a higher-level modular programming construct, where we can collect related variables, functions and classes in a module. A python module is defined in a python file (with file-ending .py), and it can be made accessible to other Python modules and programs using the import statement.
Consider the following example: the file mymodule.py contains simple example implementations of a variable, function and a class:
End of explanation
import mymodule
Explanation: We can import the module mymodule into our Python program using import:
End of explanation
help(mymodule)
mymodule.my_variable
mymodule.my_function()
my_class = mymodule.MyClass()
my_class.set_variable(10)
my_class.get_variable()
Explanation: Use help(module) to get a summary of what the module provides:
End of explanation
reload(mymodule) # works only in python 2; in Python 3 use importlib.reload(mymodule)
Explanation: If we make changes to the code in mymodule.py, we need to reload it using reload:
End of explanation
raise Exception("description of the error")
Explanation: Exceptions
In Python errors are managed with a special language construct called "Exceptions". When errors occur, exceptions can be raised, which interrupts the normal program flow and falls back to somewhere else in the code where the closest try-except statement is defined.
To generate an exception we can use the raise statement, which takes an argument that must be an instance of the class BaseException or a class derived from it.
End of explanation
try:
print("test")
# generate an error: the variable test is not defined
print(test)
except:
print("Caught an exception")
Explanation: A typical use of exceptions is to abort functions when some error condition occurs, for example:
def my_function(arguments):
if not verify(arguments):
raise Exception("Invalid arguments")
# rest of the code goes here
To gracefully catch errors that are generated by functions and class methods, or by the Python interpreter itself, use the try and except statements:
try:
# normal code goes here
except:
# code for error handling goes here
# this code is not executed unless the code
# above generated an error
For example:
End of explanation
try:
print("test")
# generate an error: the variable test is not defined
print(test)
except Exception as e:
print("Caught an exception:" + str(e))
Explanation: To get information about the error, we can access the Exception class instance that describes the exception by using for example:
except Exception as e:
End of explanation
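As a supplementary sketch (not part of the original lecture), the try statement also supports an else block (runs only if no exception was raised) and a finally block (always runs, typically used for clean-up):
try:
    value = int("123")
except ValueError as e:
    print("Conversion failed:", e)
else:
    print("Conversion succeeded:", value)
finally:
    print("This clean-up code runs no matter what")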
import collections as coll
c = coll.Counter([0,1,2,0])
print(c)
document = "This is a test document with a lot of different words but at least one duplicate".split(" ")
word_counts = coll.Counter(document)
print(word_counts)
Explanation: Counter
Turns a sequence of values into a defaultdict(int)-like object mapping keys to counts (good for histograms)
End of explanation
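A Counter also provides most_common, which is handy for the histogram use case mentioned above; a small supplementary sketch:
from collections import Counter

word_counts = Counter("the quick brown fox jumps over the lazy dog the end".split())
for word, count in word_counts.most_common(3):
    print(word, count)   # the 3 most frequent words; 'the' (count 3) comes first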
x = [4,1,2,3]
y = sorted(x)   # returns a new sorted list; x is unchanged
x.sort()        # sorts x in place
print("X = " + str(x))
print("y = " + str(y))
Explanation: Sorting
End of explanation
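sorted and list.sort also accept a key function and a reverse flag; for example (supplementary sketch):
x = [-4, 1, -2, 3]
print(sorted(x, key=abs, reverse=True))   # [-4, 3, -2, 1], largest absolute value first
word_counts = {"data": 5, "science": 2, "python": 7}
# sort (word, count) pairs from highest to lowest count
print(sorted(word_counts.items(), key=lambda pair: pair[1], reverse=True))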
even_numbers = [x for x in range(5) if x %2 == 0]
squares = [x * x for x in range(5)]
even_squared = [x*x for x in even_numbers]
print(even_squared)
square_dict = { x: x*x for x in range(5) }
print(square_dict)
Explanation: List Comprehensions
End of explanation
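List comprehensions can also use multiple for clauses, and the same syntax builds sets (supplementary sketch):
pairs = [(x, y) for x in range(3) for y in range(3)]                  # all 9 (x, y) combinations
print(pairs)
increasing_pairs = [(x, y) for x in range(3) for y in range(x + 1, 3)]  # only pairs with x < y
print(increasing_pairs)
squares_set = {x * x for x in [-2, -1, 0, 1, 2]}                      # set comprehension: {0, 1, 4}
print(squares_set)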
range(10) # works, but it builds the whole sequence at once; sometimes we only want one value at a time
def lazy_range(n):
    a lazy version of the range function that only creates each value when it is needed - important when the range gets really big
i = 0
    while i < n:
yield i
i += 1
for i in lazy_range(10):
    print(double(i))
Explanation: Generators and Iterators
End of explanation
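A generator can also be written inline as a generator expression, using parentheses instead of square brackets (supplementary sketch):
lazy_squares = (x * x for x in range(10))   # nothing is computed yet
print(next(lazy_squares))                   # 0, values are produced on demand
print(sum(lazy_squares))                    # consumes the rest: 1 + 4 + ... + 81 = 285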
### randomness
import random as rand
[rand.random() for _ in range(4)]
### regular expressions
### Object-oriented programming
### Functional Tools
### enumerate
### Zip and argument unpacking
### args and kwargs
Explanation: You need to recreate the lazy generator to use it a second time, or use a list
End of explanation |
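The section headings left empty in the cell above (randomness, enumerate, zip and argument unpacking, args and kwargs) could be filled in along these lines; this is a supplementary sketch, not the original material:
import random

random.seed(10)                          # make the "randomness" reproducible
print(random.choice(["a", "b", "c"]))    # pick one element at random

# enumerate: indices and values at the same time
for i, name in enumerate(["alice", "bob"]):
    print(i, name)

# zip and argument unpacking
letters = ["a", "b", "c"]
numbers = [1, 2, 3]
pairs = list(zip(letters, numbers))                 # [('a', 1), ('b', 2), ('c', 3)]
unzipped_letters, unzipped_numbers = zip(*pairs)    # * unpacks the list of pairs
print(pairs, unzipped_letters, unzipped_numbers)

# *args collects extra positional arguments, **kwargs extra keyword arguments
def magic(*args, **kwargs):
    print("unnamed args:", args)
    print("keyword args:", kwargs)

magic(1, 2, key="word", key2="word2")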
5,814 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning Engineer Nanodegree
Supervised Learning
Project 2
Step1: Implementation
Step2: Preparing the Data
In this section, we will prepare the data for modeling, training and testing.
Identify feature and target columns
It is often the case that the data you obtain contains non-numeric features. This can be a problem, as most machine learning algorithms expect numeric data to perform computations with.
Run the code cell below to separate the student data into feature and target columns to see if any features are non-numeric.
Step3: Preprocess Feature Columns
As you can see, there are several non-numeric columns that need to be converted! Many of them are simply yes/no, e.g. internet. These can be reasonably converted into 1/0 (binary) values.
Other columns, like Mjob and Fjob, have more than two values, and are known as categorical variables. The recommended way to handle such a column is to create as many columns as possible values (e.g. Fjob_teacher, Fjob_other, Fjob_services, etc.), and assign a 1 to one of them and 0 to all others.
These generated columns are sometimes called dummy variables, and we will use the pandas.get_dummies() function to perform this transformation. Run the code cell below to perform the preprocessing routine discussed in this section.
Step4: Implementation
Step5: Training and Evaluating Models
In this section, you will choose 3 supervised learning models that are appropriate for this problem and available in scikit-learn. You will first discuss the reasoning behind choosing these three models by considering what you know about the data and each model's strengths and weaknesses. You will then fit the model to varying sizes of training data (100 data points, 200 data points, and 300 data points) and measure the F<sub>1</sub> score. You will need to produce three tables (one for each model) that shows the training set size, training time, prediction time, F<sub>1</sub> score on the training set, and F<sub>1</sub> score on the testing set.
Question 2 - Model Application
List three supervised learning models that are appropriate for this problem. What are the general applications of each model? What are their strengths and weaknesses? Given what you know about the data, why did you choose these models to be applied?
Answer
Step6: Implementation
Step7: Tabular Results
Edit the cell below to see how a table can be designed in Markdown. You can record your results from above in the tables provided.
Classifer 1 - Decission Tree
| Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |
| | Python Code:
# Import libraries
import numpy as np
import pandas as pd
from time import time
from sklearn.metrics import f1_score
# Read student data
student_data = pd.read_csv("student-data.csv")
print "Student data read successfully!"
Explanation: Machine Learning Engineer Nanodegree
Supervised Learning
Project 2: Building a Student Intervention System
Welcome to the second project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.
Question 1 - Classification vs. Regression
Your goal for this project is to identify students who might need early intervention before they fail to graduate. Which type of supervised learning problem is this, classification or regression? Why?
Answer: It's a classification problem, since students are basically categorized into "needs early intervention" or "doesn't need early intervention" subgroups.
Exploring the Data
Run the code cell below to load necessary Python libraries and load the student data. Note that the last column from this dataset, 'passed', will be our target label (whether the student graduated or didn't graduate). All other columns are features about each student.
End of explanation
# TODO: Calculate number of students
n_students = len(student_data)
# TODO: Calculate number of features
n_features = len(student_data.columns)-1
# TODO: Calculate passing students
n_passed = student_data['passed'].value_counts()['yes']
# TODO: Calculate failing students
n_failed = student_data['passed'].value_counts()['no']
# TODO: Calculate graduation rate
grad_rate = float(n_passed)*100/n_students
# Print the results
print "Total number of students: {}".format(n_students)
print "Number of features: {}".format(n_features)
print "Number of students who passed: {}".format(n_passed)
print "Number of students who failed: {}".format(n_failed)
print "Graduation rate of the class: {:.2f}%".format(grad_rate)
Explanation: Implementation: Data Exploration
Let's begin by investigating the dataset to determine how many students we have information on, and learn about the graduation rate among these students. In the code cell below, you will need to compute the following:
- The total number of students, n_students.
- The total number of features for each student, n_features.
- The number of those students who passed, n_passed.
- The number of those students who failed, n_failed.
- The graduation rate of the class, grad_rate, in percent (%).
End of explanation
# Extract feature columns
feature_cols = list(student_data.columns[:-1])
# Extract target column 'passed'
target_col = student_data.columns[-1]
# Show the list of columns
print "Feature columns:\n{}".format(feature_cols)
print "\nTarget column: {}".format(target_col)
# Separate the data into feature data and target data (X_all and y_all, respectively)
X_all = student_data[feature_cols]
y_all = student_data[target_col]
# Show the feature information by printing the first five rows
print "\nFeature values:"
print X_all.head()
Explanation: Preparing the Data
In this section, we will prepare the data for modeling, training and testing.
Identify feature and target columns
It is often the case that the data you obtain contains non-numeric features. This can be a problem, as most machine learning algorithms expect numeric data to perform computations with.
Run the code cell below to separate the student data into feature and target columns to see if any features are non-numeric.
End of explanation
def preprocess_features(X):
''' Preprocesses the student data and converts non-numeric binary variables into
binary (0/1) variables. Converts categorical variables into dummy variables. '''
# Initialize new output DataFrame
output = pd.DataFrame(index = X.index)
# Investigate each feature column for the data
for col, col_data in X.iteritems():
# If data type is non-numeric, replace all yes/no values with 1/0
if col_data.dtype == object:
col_data = col_data.replace(['yes', 'no'], [1, 0])
# If data type is categorical, convert to dummy variables
if col_data.dtype == object:
# Example: 'school' => 'school_GP' and 'school_MS'
col_data = pd.get_dummies(col_data, prefix = col)
# Collect the revised columns
output = output.join(col_data)
return output
X_all = preprocess_features(X_all)
print "Processed feature columns ({} total features):\n{}".format(len(X_all.columns), list(X_all.columns))
Explanation: Preprocess Feature Columns
As you can see, there are several non-numeric columns that need to be converted! Many of them are simply yes/no, e.g. internet. These can be reasonably converted into 1/0 (binary) values.
Other columns, like Mjob and Fjob, have more than two values, and are known as categorical variables. The recommended way to handle such a column is to create as many columns as possible values (e.g. Fjob_teacher, Fjob_other, Fjob_services, etc.), and assign a 1 to one of them and 0 to all others.
These generated columns are sometimes called dummy variables, and we will use the pandas.get_dummies() function to perform this transformation. Run the code cell below to perform the preprocessing routine discussed in this section.
End of explanation
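As a small illustration of what get_dummies does to a single categorical column (a toy sketch, not part of the project code):
import pandas as pd

toy = pd.Series(['teacher', 'other', 'services'], name='Fjob')
print(pd.get_dummies(toy, prefix='Fjob'))
# Produces one 0/1 column per category: Fjob_other, Fjob_services, Fjob_teacher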
# TODO: Import any additional functionality you may need here
from sklearn import cross_validation
# TODO: Set the number of training points
num_train = 300
# Set the number of testing points
num_test = X_all.shape[0] - num_train
# TODO: Shuffle and split the dataset into the number of training and testing points above
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X_all, y_all, stratify = y_all, test_size= 95,
random_state = 42 )
# Show the results of the split
print "Training set has {} samples.".format(X_train.shape[0])
print "Testing set has {} samples.".format(X_test.shape[0])
Explanation: Implementation: Training and Testing Data Split
So far, we have converted all categorical features into numeric values. For the next step, we split the data (both features and corresponding labels) into training and test sets. In the following code cell below, you will need to implement the following:
- Randomly shuffle and split the data (X_all, y_all) into training and testing subsets.
- Use 300 training points (approximately 75%) and 95 testing points (approximately 25%).
- Set a random_state for the function(s) you use, if provided.
- Store the results in X_train, X_test, y_train, and y_test.
End of explanation
def train_classifier(clf, X_train, y_train):
''' Fits a classifier to the training data. '''
# Start the clock, train the classifier, then stop the clock
start = time()
clf.fit(X_train, y_train)
end = time()
# Print the results
print "Trained model in {:.4f} seconds".format(end - start)
def predict_labels(clf, features, target):
''' Makes predictions using a fit classifier based on F1 score. '''
# Start the clock, make predictions, then stop the clock
start = time()
y_pred = clf.predict(features)
end = time()
# Print and return results
print "Made predictions in {:.4f} seconds.".format(end - start)
return f1_score(target.values, y_pred, pos_label='yes')
def train_predict(clf, X_train, y_train, X_test, y_test):
''' Train and predict using a classifer based on F1 score. '''
# Indicate the classifier and the training set size
print "Training a {} using a training set size of {}. . .".format(clf.__class__.__name__, len(X_train))
# Train the classifier
train_classifier(clf, X_train, y_train)
# Print the results of prediction for both training and testing
print "F1 score for training set: {:.4f}.".format(predict_labels(clf, X_train, y_train))
print "F1 score for test set: {:.4f}.".format(predict_labels(clf, X_test, y_test))
Explanation: Training and Evaluating Models
In this section, you will choose 3 supervised learning models that are appropriate for this problem and available in scikit-learn. You will first discuss the reasoning behind choosing these three models by considering what you know about the data and each model's strengths and weaknesses. You will then fit the model to varying sizes of training data (100 data points, 200 data points, and 300 data points) and measure the F<sub>1</sub> score. You will need to produce three tables (one for each model) that shows the training set size, training time, prediction time, F<sub>1</sub> score on the training set, and F<sub>1</sub> score on the testing set.
Question 2 - Model Application
List three supervised learning models that are appropriate for this problem. What are the general applications of each model? What are their strengths and weaknesses? Given what you know about the data, why did you choose these models to be applied?
Answer:
Why following models?"
Since our problem is a supervised classification problem, I have chosen these three models, which work relatively well on this kind of problem.
Decission Tree
There are many applications of the decision tree algorithm in industry; for example, "Project Risk Management" software built by companies such as Salesforce uses this algorithm as part of their SWOT (Strengths, Weaknesses, Opportunities, and Threats) analysis packages. In general, when a set of connected rules or conditions (connected nodes and branches) leads to a conclusion (a target value), it is a good idea to consider decision trees.
Pros:
Decision trees are easy to interpret, and even a non-expert user is able to understand the flow if the tree has a reasonable number of nodes.
Both nominal and numerical inputs are accepted.
Decision trees can handle datasets that may have errors.
They can handle datasets that have missing values.
Cons:
Most decision tree algorithms require that the target have discrete values.
Since decision trees use the “divide and conquer” method they work poorly if many complex interactions are present.
Decision trees are susceptible to overfitting.
Reason to choose:
Because a decision tree is easy to interpret and explain to non-technical users, I tried this algorithm first.
SVM
SVM is a supervised machine learning algorithm which can be used for classification or regression problems. It uses a technique called the kernel trick to transform the data and then, based on these transformations, it finds an optimal boundary between the possible outputs. SVM has significant discriminative power for classification, especially in cases where sample sizes are small and a large number of variables is involved (a high-dimensional space). The support vector machine algorithm has demonstrated high performance in solving classification problems in many biomedical fields, especially in bioinformatics.
Pros:
Finds the optimal separating hyperplane
Can work with high-dimensional data
Cons:
Needs both negative and positive examples
An appropriate kernel has to be chosen
Can require a lot of CPU time and memory
Reason to choose:
Since there is a relatively high number of features compared to the number of records, and SVM works well with high-dimensional data, I tried this algorithm too.
AdaBoost
AdaBoost, or adaptive boosting, is a machine learning meta-algorithm which basically tries to improve the performance of other algorithms. AdaBoost fits the data with a chosen base algorithm (a decision tree by default) and repeats this process over and over again, but in each round it identifies the hard, misclassified examples and puts more focus on them by increasing their weights. In general, if training/evaluation time is not an issue and higher accuracy is desired, boosting often outperforms other algorithms; however, if time matters, the boosting approach is usually not an ideal choice. One of the first successful applications of this algorithm was the optical character recognition (OCR) problem on digitized handwritten digits.
Pros:
Overall better performance and accuracy compared to non-boosting models
Combines different weak learners into a stronger classifier
Cons:
Has more complexity and is time consuming
Not ideal for applications that require instant results
Reason to choose:
I got a better result with SVM above, but since higher accuracy might be more important than speed in this case, I tried AdaBoost to compare it with the SVM.
Setup
Run the code cell below to initialize three helper functions which you can use for training and testing the three supervised learning models you've chosen above. The functions are as follows:
- train_classifier - takes as input a classifier and training data and fits the classifier to the data.
- predict_labels - takes as input a fit classifier, features, and a target labeling and makes predictions using the F<sub>1</sub> score.
- train_predict - takes as input a classifier, and the training and testing data, and performs train_classifier and predict_labels.
- This function will report the F<sub>1</sub> score for both the training and testing data separately.
End of explanation
# TODO: Import the three supervised learning models from sklearn
# from sklearn import model_A
# from sklearn import model_B
# from skearln import model_C
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier
# TODO: Initialize the three models
clf_A = DecisionTreeClassifier(random_state = 42)
clf_B = SVC(random_state = 42)
clf_C = AdaBoostClassifier(random_state=42)
# TODO: Execute the 'train_predict' function for each classifier and each training set size
# train_predict(clf, X_train, y_train, X_test, y_test)
for clf in [clf_A, clf_B, clf_C]:
print "\n{}: \n".format(clf.__class__.__name__)
for n in [100, 200, 300]:
train_predict(clf, X_train[:n], y_train[:n], X_test, y_test)
Explanation: Implementation: Model Performance Metrics
With the predefined functions above, you will now import the three supervised learning models of your choice and run the train_predict function for each one. Remember that you will need to train and predict on each classifier for three different training set sizes: 100, 200, and 300. Hence, you should expect to have 9 different outputs below — 3 for each model using the varying training set sizes. In the following code cell, you will need to implement the following:
- Import the three supervised learning models you've discussed in the previous section.
- Initialize the three models and store them in clf_A, clf_B, and clf_C.
- Use a random_state for each model you use, if provided.
- Note: Use the default settings for each model — you will tune one specific model in a later section.
- Create the different training set sizes to be used to train each model.
- Do not reshuffle and resplit the data! The new training points should be drawn from X_train and y_train.
- Fit each model with each training set size and make predictions on the test set (9 in total).
Note: Three tables are provided after the following code cell which can be used to store your results.
End of explanation
# TODO: Import 'GridSearchCV' and 'make_scorer'
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import make_scorer, f1_score
# TODO: Create the parameters list you wish to tune
parameters = {'C': range(1,10), 'random_state': [42], 'gamma': np.arange(0,1,0.1)}
# TODO: Initialize the classifier
clf = SVC()
# TODO: Make an f1 scoring function using 'make_scorer'
f1_scorer = make_scorer(f1_score, pos_label='yes')
# TODO: Perform grid search on the classifier using the f1_scorer as the scoring method
grid_obj = GridSearchCV(estimator=clf, scoring= f1_scorer, param_grid=parameters )
# TODO: Fit the grid search object to the training data and find the optimal parameters
grid_obj = grid_obj.fit(X_train, y_train)
# Get the estimator
clf = grid_obj.best_estimator_
# Report the final F1 score for training and testing after parameter tuning
print "Tuned model has a training F1 score of {:.4f}.".format(predict_labels(clf, X_train, y_train))
print "Tuned model has a testing F1 score of {:.4f}.".format(predict_labels(clf, X_test, y_test))
Explanation: Tabular Results
Edit the cell below to see how a table can be designed in Markdown. You can record your results from above in the tables provided.
Classifer 1 - Decission Tree
| Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |
| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |
| 100 | 0.0010 | 0.0000 | 1.0 | 0.6452 |
| 200 | 0.0010 | 0.0000 | 1.0 | 0.7258 |
| 300 | 0.0020 | 0.0000 | 1.0 | 0.6838 |
Classifer 2 -SVM
| Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |
| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |
| 100 | 0.0010 | 0.0000 | 0.8354 | 0.8025 |
| 200 | 0.0050 | 0.0010 | 0.8431 | 0.8105 |
| 300 | 0.0080 | 0.0020 | 0.8664 | 0.8052 |
Classifer 3 - Boosting
| Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |
| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |
| 100 | 0.0960 | 0.0050 | 0.9778 | 0.6880 |
| 200 | 0.1260 | 0.0040 | 0.8905 |0.7445 |
| 300 | 0.1220 | 0.0080 | 0.8565 | 0.7328 |
Choosing the Best Model
In this final section, you will choose from the three supervised learning models the best model to use on the student data. You will then perform a grid search optimization for the model over the entire training set (X_train and y_train) by tuning at least one parameter to improve upon the untuned model's F<sub>1</sub> score.
Question 3 - Choosing the Best Model
Based on the experiments you performed earlier, in one to two paragraphs, explain to the board of supervisors what single model you chose as the best model. Which model is generally the most appropriate based on the available data, limited resources, cost, and performance?
Answer:
Based on the experiments done and the results table above SVM model is giving the most accurate prediction in a reasonable trainig and testing time. Since this data set was small the time diffrents may not seem an issue for comparison but if we get more and more data then for sure it can cause some problems. By coparing Decision Tree model and SVM which both has almost same running time but SVM turn out to be more accurate. And also it is obvious from the results that Decision Tree is highly dependant on data and in terms of machine learning is overfitted because it gives a 100% accurate result on the trained data it shows that it was not able to generelize the prediction. Therefore my choice is SVM model.
Question 4 - Model in Layman's Terms
In one to two paragraphs, explain to the board of directors in layman's terms how the final model chosen is supposed to work. For example if you've chosen to use a decision tree or a support vector machine, how does the model go about making a prediction?
Answer:
Support Vector Machine is a classification algorithm which simply means it classify data into diffrent categories (in our case we have two categories "passed" and "not passed"). But how does this algorithm work? Well, the main goal of this algorithm is to find a seperating vector which will group all the data. For simplicity lets assume that we only have 2 features and when we draw the scatter plot we see that the data are divided to 2 diffrent groups which we can seperate using a line, so in this case our goal is to find the maximum margin between these two group of data. As a result in the future when we have a new data point we simply predict the associated labele to it based on it's location regarding to line. Its good to mention that when we have more features the line changes to a plane which seperates data exactly the same way but in higher dimention.
Implementation: Model Tuning
Fine tune the chosen model. Use grid search (GridSearchCV) with at least one important parameter tuned with at least 3 different values. You will need to use the entire training set for this. In the code cell below, you will need to implement the following:
- Import sklearn.grid_search.gridSearchCV and sklearn.metrics.make_scorer.
- Create a dictionary of parameters you wish to tune for the chosen model.
- Example: parameters = {'parameter' : [list of values]}.
- Initialize the classifier you've chosen and store it in clf.
- Create the F<sub>1</sub> scoring function using make_scorer and store it in f1_scorer.
- Set the pos_label parameter to the correct value!
- Perform grid search on the classifier clf using f1_scorer as the scoring method, and store it in grid_obj.
- Fit the grid search object to the training data (X_train, y_train), and store it in grid_obj.
End of explanation |
5,815 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll be build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
Step1: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
Step2: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise
Step3: Training
Step4: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightfoward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
Step5: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts. | Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
Explanation: A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
End of explanation
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
Explanation: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
End of explanation
learning_rate = 0.01
image_size = 784
# Size of the encoding layer (the hidden layer)
encoding_dim = 128 # feel free to change this value
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, (None, image_size), name="inputs")
targets_ = tf.placeholder(tf.float32, (None, image_size), name="targets")
# Output of hidden layer, single fully connected layer here with ReLU activation
encoded = tf.layers.dense(inputs=inputs_, units=encoding_dim, activation=tf.nn.relu)
# Output layer logits, fully connected layer with no activation
logits = tf.layers.dense(inputs=encoded, units=image_size, activation=None)
# Sigmoid output from logits
decoded = tf.nn.sigmoid(logits, name="outputs")
# Sigmoid cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits, name="loss")
# Mean of the loss
cost = tf.reduce_mean(loss)
# Adam optimizer
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
Explanation: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. Feel free to use TensorFlow's higher level API, tf.layers. For instance, you would use tf.layers.dense(inputs, units, activation=tf.nn.relu) to create a fully connected layer with a ReLU activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.
End of explanation
# Create the session
sess = tf.Session()
Explanation: Training
End of explanation
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
Explanation: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here; we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
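If you also want a rough check on generalization (not part of the original notebook), the same cost op can be evaluated on the held-out test images after training, for example:
val_cost = sess.run(cost, feed_dict={inputs_: mnist.test.images, targets_: mnist.test.images})  # reconstruction loss on test images
print("Validation loss: {:.4f}".format(val_cost))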
End of explanation
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
Explanation: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good, aside from some blurriness in places.
End of explanation |
5,816 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 6
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
Step1: Exploring the Fermi distribution
In quantum statistics, the Fermi-Dirac distribution is related to the probability that a particle will be in a quantum state with energy $\epsilon$. The equation for the distribution $F(\epsilon)$ is
Step3: In this equation
Step4: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT.
Use energies over the range $[0,10.0]$ and a suitable number of points.
Choose an appropriate x and y limit for your visualization.
Label your x and y axis and the overall visualization.
Customize your plot in 3 other ways to make it effective and beautiful.
Step5: Use interact with plot_fermidist to explore the distribution | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import Image
from IPython.html.widgets import interact, interactive, fixed
Explanation: Interact Exercise 6
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
End of explanation
Image('fermidist.png')
Explanation: Exploring the Fermi distribution
In quantum statistics, the Fermi-Dirac distribution is related to the probability that a particle will be in a quantum state with energy $\epsilon$. The equation for the distribution $F(\epsilon)$ is:
End of explanation
def fermidist(energy, mu, kT):
    """Compute the Fermi distribution at energy, mu and kT."""
return 1/(np.exp((energy-mu)/kT)+1)
assert np.allclose(fermidist(0.5, 1.0, 10.0), 0.51249739648421033)
assert np.allclose(fermidist(np.linspace(0.0,1.0,10), 1.0, 10.0),
np.array([ 0.52497919, 0.5222076 , 0.51943465, 0.5166605 , 0.51388532,
0.51110928, 0.50833256, 0.50555533, 0.50277775, 0.5 ]))
Explanation: In this equation:
$\epsilon$ is the single particle energy.
$\mu$ is the chemical potential, which is related to the total number of particles.
$k$ is the Boltzmann constant.
$T$ is the temperature in Kelvin.
In the cell below, typeset this equation using LaTeX:
The Fermi-Dirac equation is given by:
$$\large F(\epsilon)=\frac{1}{e^{\frac{(\epsilon-\mu)}{kT}}+1}$$
Where:
$\epsilon $ is the single particle energy.
$\mu $ is the chemical potential, which is related to the total number of particles.
$k $ is the Boltzmann constant.
$T $ is the temperature in Kelvin.
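A quick sanity check on this formula (not part of the original exercise): at $\epsilon = \mu$ the exponent vanishes, so
$$F(\mu)=\frac{1}{e^{0}+1}=\frac{1}{2}$$
independent of $kT$, which is consistent with the asserted test values near 0.5 when $\epsilon$ is close to $\mu$.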
Define a function fermidist(energy, mu, kT) that computes the distribution function for a given value of energy, chemical potential mu and temperature kT. Note here, kT is a single variable with units of energy. Make sure your function works with an array and don't use any for or while loops in your code.
End of explanation
def plot_fermidist(mu, kT):
fermi = fermidist(np.linspace(0,10.0, 100), mu, kT)
f = plt.figure(figsize=(9,6))
ax = plt.subplot(111)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
plt.plot(np.linspace(0,10.0, 100), fermi)
plt.xlabel(r"Energy $\epsilon$")
plt.ylabel(r"Probability $F(\epsilon)$")
plt.ylim(0,1.0)
plt.title(r"Probability that a particle will have energy $\epsilon$")
plot_fermidist(4.0, 1.0)
assert True # leave this for grading the plot_fermidist function
Explanation: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT.
Use energies over the range $[0,10.0]$ and a suitable number of points.
Choose an appropriate x and y limit for your visualization.
Label your x and y axis and the overall visualization.
Customize your plot in 3 other ways to make it effective and beautiful.
End of explanation
interact(plot_fermidist, mu=(0,5.0, .1), kT=(0.1,10.0, .1));
Explanation: Use interact with plot_fermidist to explore the distribution:
For mu use a floating point slider over the range $[0.0,5.0]$.
For kT use a floating point slider over the range $[0.1,10.0]$.
End of explanation |
5,817 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Spicing-It-Up!-(Sorry)" data-toc-modified-id="Spicing-It-Up!-(Sorry)-1"><span class="toc-item-num">1 </span>Spicing It Up! (Sorry)</a></span><ul class="toc-item"><li><span><a href="#Why?" data-toc-modified-id="Why?-1.1"><span class="toc-item-num">1.1 </span>Why?</a></span></li><li><span><a href="#Installation" data-toc-modified-id="Installation-1.2"><span class="toc-item-num">1.2 </span>Installation</a></span></li><li><span><a href="#Examples" data-toc-modified-id="Examples-1.3"><span class="toc-item-num">1.3 </span>Examples</a></span><ul class="toc-item"><li><span><a href="#Current-Through-a-Resistor" data-toc-modified-id="Current-Through-a-Resistor-1.3.1"><span class="toc-item-num">1.3.1 </span>Current Through a Resistor</a></span></li><li><span><a href="#Transient-Response-of-an-R-C-Filter" data-toc-modified-id="Transient-Response-of-an-R-C-Filter-1.3.2"><span class="toc-item-num">1.3.2 </span>Transient Response of an R-C Filter</a></span></li><li><span><a href="#A-Voltage-Controlled-Voltage-Source" data-toc-modified-id="A-Voltage-Controlled-Voltage-Source-1.3.3"><span class="toc-item-num">1.3.3 </span>A Voltage-Controlled Voltage Source</a></span></li><li><span><a href="#A-Current-Controlled-Current-Source" data-toc-modified-id="A-Current-Controlled-Current-Source-1.3.4"><span class="toc-item-num">1.3.4 </span>A Current-Controlled Current Source</a></span></li><li><span><a href="#A-Transmission-Line" data-toc-modified-id="A-Transmission-Line-1.3.5"><span class="toc-item-num">1.3.5 </span>A Transmission Line</a></span></li><li><span><a href="#A-Transformer" data-toc-modified-id="A-Transformer-1.3.6"><span class="toc-item-num">1.3.6 </span>A Transformer</a></span></li><li><span><a href="#A-Transistor-Amplifier" data-toc-modified-id="A-Transistor-Amplifier-1.3.7"><span class="toc-item-num">1.3.7 </span>A Transistor Amplifier</a></span></li><li><span><a href="#XSPICE-Parts" data-toc-modified-id="XSPICE-Parts-1.3.8"><span class="toc-item-num">1.3.8 </span>XSPICE Parts</a></span></li><li><span><a href="#A-Hierarchical-Circuit" data-toc-modified-id="A-Hierarchical-Circuit-1.3.9"><span class="toc-item-num">1.3.9 </span>A Hierarchical Circuit</a></span></li><li><span><a href="#Using-SPICE-Subcircuits" data-toc-modified-id="Using-SPICE-Subcircuits-1.3.10"><span class="toc-item-num">1.3.10 </span>Using SPICE Subcircuits</a></span></li></ul></li><li><span><a href="#The-Details" data-toc-modified-id="The-Details-1.4"><span class="toc-item-num">1.4 </span>The Details</a></span><ul class="toc-item"><li><span><a href="#Units" data-toc-modified-id="Units-1.4.1"><span class="toc-item-num">1.4.1 </span>Units</a></span></li><li><span><a href="#Available-Parts" data-toc-modified-id="Available-Parts-1.4.2"><span class="toc-item-num">1.4.2 </span>Available Parts</a></span></li><li><span><a href="#Startup" data-toc-modified-id="Startup-1.4.3"><span class="toc-item-num">1.4.3 </span>Startup</a></span></li><li><span><a href="#Miscellaneous" data-toc-modified-id="Miscellaneous-1.4.4"><span class="toc-item-num">1.4.4 </span>Miscellaneous</a></span></li></ul></li><li><span><a href="#Future-Work" data-toc-modified-id="Future-Work-1.5"><span class="toc-item-num">1.5 </span>Future Work</a></span></li><li><span><a href="#Acknowledgements" data-toc-modified-id="Acknowledgements-1.6"><span class="toc-item-num">1.6 </span>Acknowledgements</a></span></li></ul></li></ul></div>
Step1: Spicing It Up! (Sorry)
SKiDL is a Python package for describing the interconnection of electronic devices using text (instead of schematics). PySpice is an interface for controlling an external SPICE circuit simulator from Python. This document demonstrates how circuits described using SKiDL can be simulated under a variety of conditions using PySpice with the results displayed in an easily-shared Jupyter notebook.
Why?
There are existing SPICE simulators that analyze schematics created by CAD packages like KiCad.
There are also versions with their own GUI like LTSpice.
What advantages does a combination of SKiDL, PySpice and ngspice offer?
The circuit description is completely textual, so it's easy to share with others who may not have a schematic editor or GUI.
It can be archived in a Git repository for the purpose of tracking versions as modifications are made.
The documentation of the circuitry is embedded with the circuitry, so it's more likely to be kept current.
It makes the entire Python ecosystem of tools available for optimizing, analyzing, and visualizing the behavior of a circuit under a variety of conditions.
Installation
This notebook assumes you're using ngspice version 30.
To install ngspice for linux, do
Step2: Current Through a Resistor
The following example connects a 1 K$\Omega$ resistor to a voltage source whose value is ramped from 0 to 1 volts.
The current through the resistor is plotted versus the applied voltage.
Step3: Transient Response of an R-C Filter
This example shows the time-varying voltage of a capacitor charged through a resistor by a pulsed voltage source.
Step4: A Voltage-Controlled Voltage Source
A voltage source whose output is controlled by another voltage source is demonstrated in this example.
Step5: A Current-Controlled Current Source
This example shows a current source controlled by the current driven through a resistor by a voltage source.
Step6: A Transmission Line
The voltages at the beginning and end of an ideal transmission line are shown in this example.
Step7: A Transformer
This example demonstrates a transformer composed of two coupled inductors.
Step8: A Transistor Amplifier
The use of SPICE models is demonstrated in this example of a common-emitter transistor amplifier.
For this example, a subdirectory called SpiceLib was created with a single file 2N2222A.lib that holds
the .MODEL statement for that particular type of transistor.
Step9: XSPICE Parts
XSPICE parts can model a variety of functions (ADCs, DACs, etc.) having different I/O requirements, so SKiDL handles them a bit differently
Step10: A Hierarchical Circuit
SKiDL lets you describe a circuit inside a function, and then call that function to create hierarchical designs that can be analyzed with SPICE. This example defines a simple transistor inverter and then cascades several of them.
Step11: Using SPICE Subcircuits
Using @subcircuit lets you do hierarchical design directly in SKiDL, but SPICE has long had another option
Step12: SKiDL can work with SPICE subcircuits intended for PSPICE and LTSpice. All you need to do is add the top-level directories where the subcircuit libraries are stored and SKiDL will recursively search for the library files. When it reads a subcircuit library file (indicated by a .lib file extension), SKiDL will also look for a symbol file that provides names for the subcircuit I/O signals. For PSPICE, the symbol file has a .slb extension while the .asy extension is used for LTSpice.
WARNING
Step13: The following units are the ones you'll probably use most
Step14: Startup
When you import the PySpice functions into SKiDL | Python Code:
from IPython.core.display import HTML
HTML(open('custom.css', 'r').read())
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Spicing-It-Up!-(Sorry)" data-toc-modified-id="Spicing-It-Up!-(Sorry)-1"><span class="toc-item-num">1 </span>Spicing It Up! (Sorry)</a></span><ul class="toc-item"><li><span><a href="#Why?" data-toc-modified-id="Why?-1.1"><span class="toc-item-num">1.1 </span>Why?</a></span></li><li><span><a href="#Installation" data-toc-modified-id="Installation-1.2"><span class="toc-item-num">1.2 </span>Installation</a></span></li><li><span><a href="#Examples" data-toc-modified-id="Examples-1.3"><span class="toc-item-num">1.3 </span>Examples</a></span><ul class="toc-item"><li><span><a href="#Current-Through-a-Resistor" data-toc-modified-id="Current-Through-a-Resistor-1.3.1"><span class="toc-item-num">1.3.1 </span>Current Through a Resistor</a></span></li><li><span><a href="#Transient-Response-of-an-R-C-Filter" data-toc-modified-id="Transient-Response-of-an-R-C-Filter-1.3.2"><span class="toc-item-num">1.3.2 </span>Transient Response of an R-C Filter</a></span></li><li><span><a href="#A-Voltage-Controlled-Voltage-Source" data-toc-modified-id="A-Voltage-Controlled-Voltage-Source-1.3.3"><span class="toc-item-num">1.3.3 </span>A Voltage-Controlled Voltage Source</a></span></li><li><span><a href="#A-Current-Controlled-Current-Source" data-toc-modified-id="A-Current-Controlled-Current-Source-1.3.4"><span class="toc-item-num">1.3.4 </span>A Current-Controlled Current Source</a></span></li><li><span><a href="#A-Transmission-Line" data-toc-modified-id="A-Transmission-Line-1.3.5"><span class="toc-item-num">1.3.5 </span>A Transmission Line</a></span></li><li><span><a href="#A-Transformer" data-toc-modified-id="A-Transformer-1.3.6"><span class="toc-item-num">1.3.6 </span>A Transformer</a></span></li><li><span><a href="#A-Transistor-Amplifier" data-toc-modified-id="A-Transistor-Amplifier-1.3.7"><span class="toc-item-num">1.3.7 </span>A Transistor Amplifier</a></span></li><li><span><a href="#XSPICE-Parts" data-toc-modified-id="XSPICE-Parts-1.3.8"><span class="toc-item-num">1.3.8 </span>XSPICE Parts</a></span></li><li><span><a href="#A-Hierarchical-Circuit" data-toc-modified-id="A-Hierarchical-Circuit-1.3.9"><span class="toc-item-num">1.3.9 </span>A Hierarchical Circuit</a></span></li><li><span><a href="#Using-SPICE-Subcircuits" data-toc-modified-id="Using-SPICE-Subcircuits-1.3.10"><span class="toc-item-num">1.3.10 </span>Using SPICE Subcircuits</a></span></li></ul></li><li><span><a href="#The-Details" data-toc-modified-id="The-Details-1.4"><span class="toc-item-num">1.4 </span>The Details</a></span><ul class="toc-item"><li><span><a href="#Units" data-toc-modified-id="Units-1.4.1"><span class="toc-item-num">1.4.1 </span>Units</a></span></li><li><span><a href="#Available-Parts" data-toc-modified-id="Available-Parts-1.4.2"><span class="toc-item-num">1.4.2 </span>Available Parts</a></span></li><li><span><a href="#Startup" data-toc-modified-id="Startup-1.4.3"><span class="toc-item-num">1.4.3 </span>Startup</a></span></li><li><span><a href="#Miscellaneous" data-toc-modified-id="Miscellaneous-1.4.4"><span class="toc-item-num">1.4.4 </span>Miscellaneous</a></span></li></ul></li><li><span><a href="#Future-Work" data-toc-modified-id="Future-Work-1.5"><span class="toc-item-num">1.5 </span>Future Work</a></span></li><li><span><a href="#Acknowledgements" data-toc-modified-id="Acknowledgements-1.6"><span class="toc-item-num">1.6 </span>Acknowledgements</a></span></li></ul></li></ul></div>
End of explanation
from skidl import *
print(lib_search_paths)
# Load the package for drawing graphs.
import matplotlib.pyplot as plt
# Omit the following line if you're not using a Jupyter notebook.
%matplotlib inline
# Load the SKiDL + PySpice packages and initialize them for doing circuit simulations.
from skidl.pyspice import *
print(lib_search_paths)
Explanation: Spicing It Up! (Sorry)
SKiDL is a Python package for describing the interconnection of electronic devices using text (instead of schematics). PySpice is an interface for controlling an external SPICE circuit simulator from Python. This document demonstrates how circuits described using SKiDL can be simulated under a variety of conditions using PySpice with the results displayed in an easily-shared Jupyter notebook.
Why?
There are existing SPICE simulators that analyze schematics created by CAD packages like KiCad.
There are also versions with their own GUI like LTSpice.
What advantages does a combination of SKiDL, PySpice and ngspice offer?
The circuit description is completely textual, so it's easy to share with others who may not have a schematic editor or GUI.
It can be archived in a Git repository for the purpose of tracking versions as modifications are made.
The documentation of the circuitry is embedded with the circuitry, so it's more likely to be kept current.
It makes the entire Python ecosystem of tools available for optimizing, analyzing, and visualizing the behavior of a circuit under a variety of conditions.
Installation
This notebook assumes you're using ngspice version 30.
To install ngspice for linux, do:
sudo apt-get update
sudo apt-get install ngspice
For Windows:
Download ngspice-30_dll_64.zip
Unpack the zip file into C:\Program Files. The top-level folder should be named Spice64_dll so PySpice can find it.
Change subdirectory dll-vs to bin_dll.
Make sure to run 64-bit Python. Otherwise, it will be unable to run the 64-bit DLLs.
Next, for either OS, install SKiDL:
pip install skidl
After that, you'll have to manually install PySpice (it must be version 1.3.2 or higher):
pip install "PySpice>=1.3.2"
Finally, place an spinit file in the same folder as the notebook you're trying to run.
This contains the ngspice initialization commands as discussed in the ngspice manual.
Typically, I just enable the use of Pspice models and set the number of processing threads as follows:
set ngbehavior=ps
set num_thread=4
Examples
The following examples demonstrate some of the ways of using SKiDL and PySpice to simulate electronics. While shown using the Jupyter notebook, these examples will also work by placing the Python code into a file and executing it with a Python interpreter.
The following code snippet is needed at the beginning of every example.
It loads the matplotlib package for generating graphs, and SKiDL + PySpice packages for describing and simulating circuitry.
End of explanation
reset() # This will clear any previously defined circuitry.
# Create and interconnect the components.
vs = V(ref='VS', dc_value = 1 @ u_V) # Create a voltage source named "VS" with an initial value of 1 volt.
r1 = R(value = 1 @ u_kOhm) # Create a 1 Kohm resistor.
vs['p'] += r1[1] # Connect one end of the resistor to the positive terminal of the voltage source.
gnd += vs['n'], r1[2] # Connect the other end of the resistor and the negative terminal of the source to ground.
# Simulate the circuit.
circ = generate_netlist() # Translate the SKiDL code into a PySpice Circuit object.
sim = circ.simulator() # Create a simulator for the Circuit object.
dc_vals = sim.dc(VS=slice(0, 1, 0.1)) # Run a DC simulation where the voltage ramps from 0 to 1V by 0.1V increments.
# Get the voltage applied to the resistor and the current coming out of the voltage source.
voltage = dc_vals[node(vs['p'])] # Get the voltage applied by the positive terminal of the source.
current = -dc_vals['VS'] # Get the current coming out of the positive terminal of the voltage source.
# Print a table showing the current through the resistor for the various applied voltages.
print('{:^7s}{:^7s}'.format('V', ' I (mA)'))
print('='*15)
for v, i in zip(voltage.as_ndarray(), current.as_ndarray()*1000):
print('{:6.2f} {:6.2f}'.format(v, i))
# Create a plot of the current (Y coord) versus the applied voltage (X coord).
figure = plt.figure(1)
plt.title('Resistor Current vs. Applied Voltage')
plt.xlabel('Voltage (V)')
plt.ylabel('Current (mA)')
plt.plot(voltage, current*1000) # Plot X=voltage and Y=current (in milliamps, so multiply it by 1000).
plt.show()
Explanation: Current Through a Resistor
The following example connects a 1 K$\Omega$ resistor to a voltage source whose value is ramped from 0 to 1 volts.
The current through the resistor is plotted versus the applied voltage.
End of explanation
reset() # Clear out the existing circuitry from the previous example.
# Create a pulsed voltage source, a resistor, and a capacitor.
vs = PULSEV(initial_value=0, pulsed_value=5@u_V, pulse_width=1@u_ms, period=2@u_ms) # 1ms ON, 1ms OFF pulses.
r = R(value=1@u_kOhm) # 1 Kohm resistor.
c = C(value=1@u_uF) # 1 uF capacitor.
r['+', '-'] += vs['p'], c['+'] # Connect the resistor between the positive source terminal and one of the capacitor terminals.
gnd += vs['n'], c['-'] # Connect the negative battery terminal and the other capacitor terminal to ground.
# Simulate the circuit.
circ = generate_netlist() # Create the PySpice Circuit object from the SKiDL code.
sim = circ.simulator() # Get a simulator for the Circuit object.
waveforms = sim.transient(step_time=0.01@u_ms, end_time=10@u_ms) # Run a transient simulation from 0 to 10 msec.
# Get the simulation data.
time = waveforms.time # Time values for each point on the waveforms.
pulses = waveforms[node(vs['p'])] # Voltage on the positive terminal of the pulsed voltage source.
cap_voltage = waveforms[node(c['+'])] # Voltage on the capacitor.
# Plot the pulsed source and capacitor voltage values versus time.
figure = plt.figure(1)
plt.title('Capacitor Voltage vs. Source Pulses')
plt.xlabel('Time (ms)')
plt.ylabel('Voltage (V)')
plt.plot(time*1000, pulses) # Plot pulsed source waveform.
plt.plot(time*1000, cap_voltage) # Plot capacitor charging waveform.
plt.legend(('Source Pulses', 'Capacitor Voltage'), loc=(1.1, 0.5))
plt.show()
Explanation: Transient Response of an R-C Filter
This example shows the time-varying voltage of a capacitor charged through a resistor by a pulsed voltage source.
End of explanation
reset() # Clear out the existing circuitry from the previous example.
# Connect a sine wave to the control input of a voltage-controlled voltage source.
vs = SINEV(amplitude=1@u_V, frequency=100@u_Hz) # 1V sine wave source at 100 Hz.
vs['n'] += gnd # Connect the negative terminal of the sine wave to ground.
vc = VCVS(gain=2.5) # Voltage-controlled voltage source with a gain of 2.5.
vc['ip', 'in'] += vs['p'], gnd # Connect the sine wave to the input port of the controlled source.
vc['op', 'on'] += Net(), gnd # Connect the output port of the controlled source to a net and ground.
rl = R(value=1@u_kOhm)  # Load resistor on the output of the controlled source.
rl[1,2] += vc['op'], gnd
r = R(value=1@u_kOhm)  # Load resistor across the sine wave source.
r[1,2] += vs['p'], gnd
# Simulate the circuit.
circ = generate_netlist() # Create the PySpice Circuit object from the SKiDL code.
print(circ)
sim = circ.simulator() # Get a simulator for the Circuit object.
waveforms = sim.transient(step_time=0.01@u_ms, end_time=20@u_ms) # Run a transient simulation from 0 to 20 msec.
# Get the time-varying waveforms of the sine wave source and the voltage-controlled source.
time = waveforms.time
vin = waveforms[node(vs['p'])]
vout = waveforms[node(vc['op'])]
# Plot the input and output waveforms. Note that the output voltage is 2.5x the input voltage.
figure = plt.figure(1)
plt.title('Input and Output Sine Waves')
plt.xlabel('Time (ms)')
plt.ylabel('Voltage (V)')
plt.plot(time*1000, vin)
plt.plot(time*1000, vout)
plt.legend(('Input Voltage', 'Output Voltage'), loc=(1.1, 0.5))
plt.show()
Explanation: A Voltage-Controlled Voltage Source
A voltage source whose output is controlled by another voltage source is demonstrated in this example.
End of explanation
reset() # Clear out the existing circuitry from the previous example.
# Use the current driven through a resistor to control another current source.
vs = SINEV(amplitude=1@u_V, frequency=100@u_Hz) # 100 Hz sine wave voltage source.
rs = R(value=1@u_kOhm) # Resistor connected to the voltage source.
rs[1,2] += vs['p'], gnd # Connect resistor from positive terminal of voltage source to ground.
vs['n'] += gnd # Connect the negative terminal of the voltage source to ground.
vc = CCCS(control=vs, gain=2.5) # Current source controlled by the current entering the vs voltage source.
rc = R(value=1@u_Ohm) # Resistor connected to the current source.
rc[1,2] += vc['p'], gnd # Connect resistor from the positive terminal of the current source to ground.
vc['n'] += gnd # Connect the negative terminal of the current source to ground.
# Simulate the circuit.
circ = generate_netlist()
sim = circ.simulator()
waveforms = sim.transient(step_time=0.01@u_ms, end_time=20@u_ms)
# Get the time-varying waveforms of the currents from the sine wave source and the current-controlled current source.
time = waveforms.time
i_vs = waveforms[vs.ref]
i_vc = waveforms[node(rc[1])] / rc.value # Current-source current is the voltage across the resistor / resistance.
# Plot the waveforms. Note the input and output currents are out of phase since the output current is calculated
# based on the current *leaving* the positive terminal of the controlled current source and entering the resistor,
# whereas the current in the controlling voltage source is calculated based on what is *entering* the positive terminal.
figure = plt.figure(1)
plt.title('Control Current vs. Output Current')
plt.xlabel('Time (ms)')
plt.ylabel('Current (mA)')
plt.plot(time*1000, i_vs)
plt.plot(time*1000, i_vc)
plt.legend(('Control Current', 'Output Current'), loc=(1.1, 0.5))
plt.show()
Explanation: A Current-Controlled Current Source
This example shows a current source controlled by the current driven through a resistor by a voltage source.
End of explanation
reset() # Clear out the existing circuitry from the previous example.
# Create a 1 GHz sine wave source, drive it through a 70 ohm ideal transmission line, and terminate it with a 140 ohm resistor.
vs = SINEV(amplitude=1@u_V, frequency=1@u_GHz)
t1 = T(impedance=70@u_Ohm, frequency=1@u_GHz, normalized_length=10.0) # Trans. line is 10 wavelengths long.
rload = R(value=140@u_Ohm)
vs['p'] += t1['ip'] # Connect source to positive input terminal of trans. line.
rload[1] += t1['op'] # Connect resistor to positive output terminal of trans. line.
gnd += vs['n'], t1['in','on'], rload[2] # Connect remaining terminals to ground.
# Simulate the transmission line.
circ = generate_netlist()
sim = circ.simulator()
waveforms = sim.transient(step_time=0.01@u_ns, end_time=20@u_ns)
# Get the waveforms at the beginning and end of the trans. line.
time = waveforms.time * 10**9
vin = waveforms[node(vs['p'])] # Input voltage at the beginning of the trans. line.
vout = waveforms[node(rload['1'])] # Output voltage at the terminating resistor of the trans. line.
# Plot the input and output waveforms. Note that it takes 10 nsec for the input to reach the end of the
# transmission line, and there is a 33% "bump" in the output voltage due to the mismatch between the
# 140 ohm load resistor and the 70 ohm transmission line impedances.
figure = plt.figure(1)
plt.title('Output Voltage vs. Input Voltage')
plt.xlabel('Time (ns)')
plt.ylabel('Voltage (V)')
plt.plot(time, vin)
plt.plot(time, vout)
plt.legend(('Input Voltage', 'Output Voltage'), loc=(1.1, 0.5))
plt.show()
Explanation: A Transmission Line
The voltages at the beginning and end of an ideal transmission line are shown in this example.
End of explanation
reset() # Clear out existing circuitry from previous example.
turns_ratio = 10 # Voltage gain from primary to secondary.
primary_inductance = 1 @ u_uH
secondary_inductance = primary_inductance * turns_ratio**2
# Create a transformer from two coupled inductors.
vs = SINEV(amplitude=1@u_V, frequency=100@u_Hz) # AC input voltage.
rs = R(value=0 @ u_Ohm) # Source resistor.
primary = L(value=primary_inductance) # Inductor for transformer primary.
secondary = L(value=secondary_inductance) # Inductor for transformer secondary.
rload = R(value=100 @ u_Ohm) # Load resistor.
# This is the coupler between the inductors that transfers the
# voltage from the primary to the secondary.
coupler_prim_sec = K(ind1=primary, ind2=secondary, coupling=0.99)
# Connect the voltage source to the primary through the source resistor.
gnd & vs['n,p'] & rs & primary[1,2] & gnd
# Connect the secondary to the load resistor.
gnd & secondary[2,1] & rload & gnd
# Simulate the transformer.
sim=generate_netlist().simulator()
waveforms = sim.transient(step_time=0.1 @ u_ms, end_time=100@u_ms)
# Get the waveforms from the primary and secondary.
time = waveforms.time * 10**3
v_pri = waveforms[node(primary[1])] # Input voltage at the transformer primary.
v_sec = waveforms[node(secondary[1])] # Output voltage at transformer secondary.
# Plot the input and output waveforms.
figure = plt.figure(1)
plt.title('Primary Voltage vs. Secondary Voltage')
plt.xlabel('Time (ms)')
plt.ylabel('Voltage (V)')
plt.plot(time, v_pri)
plt.plot(time, v_sec)
plt.legend(('Primary Voltage', 'Secondary Voltage'), loc=(1.1, 0.5))
plt.show()
Explanation: A Transformer
This example demonstrates a transformer composed of two coupled inductors.
End of explanation
reset() # Clear out the existing circuitry from the previous example.
# Create a transistor, power supply, bias resistors, collector resistor, and an input sine wave source.
q = BJT(model='2n2222a') # 2N2222A NPN transistor. The model is stored in a directory of SPICE .lib files.
vdc = V(dc_value=5@u_V) # 5V power supply.
rs = R(value=5@u_kOhm) # Source resistor in series with sine wave input voltage.
rb = R(value=25@u_kOhm) # Bias resistor from 5V to base of transistor.
rc = R(value=1@u_kOhm) # Load resistor connected to collector of transistor.
vs = SINEV(amplitude=0.01@u_V, frequency=1@u_kHz) # 1 KHz sine wave input source.
q['c', 'b', 'e'] += rc[1], rb[1], gnd # Connect transistor CBE pins to load & bias resistors and ground.
vdc['p'] += rc[2], rb[2] # Connect other end of load and bias resistors to power supply's positive terminal.
vdc['n'] += gnd # Connect negative terminal of power supply to ground.
rs[1,2] += vs['p'], q['b'] # Connect source resistor from input source to base of transistor.
vs['n'] += gnd # Connect negative terminal of input source to ground.
# Simulate the transistor amplifier. This requires a SPICE library containing a model of the 2N2222A transistor.
circ = generate_netlist() # Create the PySpice Circuit object (the 2N2222A model comes from the SpiceLib directory of .lib files).
print(circ)
sim = circ.simulator()
waveforms = sim.transient(step_time=0.01@u_ms, end_time=5@u_ms)
# Get the input source and amplified output waveforms.
time = waveforms.time
vin = waveforms[node(vs['p'])] # Input source voltage.
vout = waveforms[node(q['c'])] # Amplified output voltage at collector of the transistor.
# Plot the input and output waveforms.
figure = plt.figure(1)
plt.title('Output Voltage vs. Input Voltage')
plt.xlabel('Time (ms)')
plt.ylabel('Voltage (V)')
plt.plot(time*1000, vin)
plt.plot(time*1000, vout)
plt.legend(('Input Voltage', 'Output Voltage'), loc=(1.1, 0.5))
plt.show()
Explanation: A Transistor Amplifier
The use of SPICE models is demonstrated in this example of a common-emitter transistor amplifier.
For this example, a subdirectory called SpiceLib was created with a single file 2N2222A.lib that holds
the .MODEL statement for that particular type of transistor.
End of explanation
reset() # Clear out the existing circuitry from the previous example.
# Sinusoidal voltage source.
vin = sinev(offset=1.65 @ u_V, amplitude=1.65 @ u_V, frequency=100e6)
# Component declarations showing various XSPICE styles.
# Creating an XSPICE part from the pyspice library.
adc = Part(
"pyspice",
"A",
io="anlg_in[], dig_out[]", # Two vector I/O ports in a string.
model=XspiceModel(
"adc", # The name assigned to this particular model instance.
"adc_bridge", # The name of the XSPICE part associated with this model.
# The rest of the arguments are keyword parameters for the model.
in_low=0.05 @ u_V,
in_high=0.1 @ u_V,
rise_delay=1e-9 @ u_s,
fall_delay=1e-9 @ u_s,
),
tool=SKIDL
)
# Creating an XSPICE part using the SPICE abbreviation 'A'.
buf = A(
io="buf_in, buf_out", # Two scalar I/O ports in a string.
model=XspiceModel(
"buf",
"d_buffer",
rise_delay=1e-9 @ u_s,
fall_delay=1e-9 @ u_s,
input_load=1e-12 @ u_s,
),
)
# Creating an XSPICE part using the XSPICE alias.
dac = XSPICE(
io=["dig_in[]", "anlg_out[]"], # Two vector ports in an array.
model=XspiceModel("dac", "dac_bridge", out_low=1.0 @ u_V, out_high=3.3 @ u_V),
)
r = R(value=1 @ u_kOhm) # Load resistor.
# Connections: sine wave -> ADC -> buffer -> DAC.
vin["p, n"] += adc["anlg_in"][0], gnd # Attach to first pin in ADC anlg_in vector of pins.
adc["dig_out"][0] += buf["buf_in"] # Attach first pin of ADC dig_out vector to buffer.
buf["buf_out"] += dac["dig_in"][0] # Attach buffer output to first pin of DAC dig_in vector of pins.
r["p,n"] += dac["anlg_out"][0], gnd # Attach first pin of DAC anlg_out vector to load resistor.
circ = generate_netlist(libs="SpiceLib")
print(circ)
sim = circ.simulator()
waveforms = sim.transient(step_time=0.1 @ u_ns, end_time=50 @ u_ns)
time = waveforms.time
vin = waveforms[node(vin["p"])]
vout = waveforms[node(r["p"])]
print('{:^7s}{:^7s}'.format('vin', 'vout'))
print('='*15)
for v1, v2 in zip(vin.as_ndarray(), vout.as_ndarray()):
print('{:6.2f} {:6.2f}'.format(v1, v2))
Explanation: XSPICE Parts
XSPICE parts can model a variety of functions (ADCs, DACs, etc.) having different I/O requirements, so SKiDL handles them a bit differently:
1. An io parameter is needed to specify the input and output pins. This parameter is either a comma-separated string or an array of strings listing the pin names in the order required for the particular XSPICE part. XSPICE I/O ports can be scalar (indicated by names which are simple strings) or vectors (indicated by names ending with "[]").
2. A model parameter is required that specifies the parameters affecting the behavior of the given XSPICE part. This is passed as an XspiceModel object.
End of explanation
reset() # You know what this does by now, right?
# Create a power supply for all the following circuitry.
pwr = V(dc_value=5@u_V)
pwr['n'] += gnd
vcc = pwr['p']
# Create a logic inverter using a transistor and a few resistors.
@subcircuit
def inverter(inp, outp):
'''When inp is driven high, outp is pulled low by transistor. When inp is driven low, outp is pulled high by resistor.'''
q = BJT(model='2n2222a') # NPN transistor.
rc = R(value=1@u_kOhm) # Resistor attached between transistor collector and VCC.
rc[1,2] += vcc, q['c']
rb = R(value=10@u_kOhm) # Resistor attached between transistor base and input.
rb[1,2] += inp, q['b']
q['e'] += gnd # Transistor emitter attached to ground.
outp += q['c'] # Inverted output comes from junction of the transistor collector and collector resistor.
# Create a pulsed voltage source to drive the input of the inverters. I set the rise and fall times to make
# it easier to distinguish the input and output waveforms in the plot.
vs = PULSEV(initial_value=0, pulsed_value=5@u_V, pulse_width=0.8@u_ms, period=2@u_ms, rise_time=0.2@u_ms, fall_time=0.2@u_ms) # 1ms ON, 1ms OFF pulses.
vs['n'] += gnd
# Create three inverters and cascade the output of one to the input of the next.
outp = Net() * 3 # Create three nets to attach to the outputs of each inverter.
inverter(vs['p'], outp[0]) # First inverter is driven by the pulsed voltage source.
inverter(outp[0], outp[1]) # Second inverter is driven by the output of the first.
inverter(outp[1], outp[2]) # Third inverter is driven by the output of the second.
# Simulate the cascaded inverters.
circ = generate_netlist() # Create the PySpice Circuit object (the 2N2222A transistor model must be available on the SPICE library search path).
sim = circ.simulator()
waveforms = sim.transient(step_time=0.01@u_ms, end_time=5@u_ms)
# Get the waveforms for the input and output.
time = waveforms.time
inp = waveforms[node(vs['p'])]
outp = waveforms[node(outp[2])] # Get the output waveform for the final inverter in the cascade.
# Plot the input and output waveforms. The output will be the inverse of the input since it passed
# through three inverters.
figure = plt.figure(1)
plt.title('Output Voltage vs. Input Voltage')
plt.xlabel('Time (ms)')
plt.ylabel('Voltage (V)')
plt.plot(time*1000, inp)
plt.plot(time*1000, outp)
plt.legend(('Input Voltage', 'Output Voltage'), loc=(1.1, 0.5))
plt.show()
Explanation: A Hierarchical Circuit
SKiDL lets you describe a circuit inside a function, and then call that function to create hierarchical designs that can be analyzed with SPICE. This example defines a simple transistor inverter and then cascades several of them.
End of explanation
reset()
lib_search_paths[SPICE].append('SpiceLib')
vin = V(ref='VIN', dc_value=8@u_V) # Input power supply.
vreg = Part('NCP1117', 'ncp1117_33-x') # Voltage regulator from ON Semi part lib.
print(vreg) # Print vreg pin names.
r = R(value=470 @ u_Ohm) # Load resistor on regulator output.
vreg['IN', 'OUT'] += vin['p'], r[1] # Connect vreg input to vin and output to load resistor.
gnd += vin['n'], r[2], vreg['GND'] # Ground connections for everybody.
# Simulate the voltage regulator subcircuit.
circ = generate_netlist() # Create the PySpice Circuit object (the regulator subcircuit comes from the SpiceLib path added above).
sim = circ.simulator()
dc_vals = sim.dc(VIN=slice(0,10,0.1)) # Ramp vin from 0->10V and observe regulator output voltage.
# Get the input and output voltages.
inp = dc_vals[node(vin['p'])]
outp = dc_vals[node(vreg['OUT'])]
# Plot the regulator output voltage vs. the input supply voltage. Note that the regulator
# starts to operate once the input exceeds 4V and the output voltage clamps at 3.3V.
figure = plt.figure(1)
plt.title('NCP1117-3.3 Regulator Output Voltage vs. Input Voltage')
plt.xlabel('Input Voltage (V)')
plt.ylabel('Output Voltage (V)')
plt.plot(inp, outp)
plt.show()
Explanation: Using SPICE Subcircuits
Using @subcircuit lets you do hierarchical design directly in SKiDL, but SPICE has long had another option: subcircuits. These are encapsulations of device behavior stored in SPICE library files.
Many thousands of these have been created over the years, both by SPICE users and semiconductor companies.
A simulation of the NCP1117 voltage regulator
from ON Semiconductor is shown below.
End of explanation
import PySpice.Unit
', '.join(dir(PySpice.Unit))
Explanation: SKiDL can work with SPICE subcircuits intended for PSPICE and LTSpice. All you need to do is add the top-level directories where the subcircuit libraries are stored and SKiDL will recursively search for the library files. When it reads a subcircuit library file (indicated by a .lib file extension), SKiDL will also look for a symbol file that provides names for the subcircuit I/O signals. For PSPICE, the symbol file has a .slb extension while the .asy extension is used for LTSpice.
WARNING: Even though SKiDL will read the PSPICE and LTSpice library files, that doesn't mean that ngspice can process them. Each SPICE simulator seems to support a different set of optional parameters for the various circuit elements (the nonlinear current source, G, for example). You will probably have to modify the library file to satisfy ngspice. PSPICE libraries seem to need the least modification. I wish it was easier, but it's not.
The Details
The examples section gave you some idea of what the combination of SKiDL, PySpice, ngspice, and matplotlib can do.
You should read the stand-alone documentation for each of those packages to get the full extent of their capabilities.
This section will address the features and functions that come into play at their intersection.
Units
You may have noticed the use of units in the examples above, such as 10 @ u_kOhm to denote a resistance of 10 K$\Omega$.
This is a feature of the PySpice package. If you want to see all the available units, just do this:
End of explanation
from skidl.libs.pyspice_sklib import pyspice_lib
for part in pyspice_lib.parts:
print('{name} ({aliases}): {desc}\n\tPins: {pins}\n\tAttributes: {attributes}\n'.format(
name=getattr(part, 'name', ''),
aliases=', '.join(getattr(part, 'aliases','')),
desc=getattr(part, 'description'),
pins=', '.join([p.name for p in part.pins]),
attributes=', '.join([a for a in list(part.pyspice.get('pos',[])) + list(part.pyspice.get('kw',[]))]),
))
Explanation: The following units are the ones you'll probably use most:
Potential: u_TV (teravolt), u_GV (gigavolt), u_MV (megavolt), u_kV (kilovolt), u_V (volt), u_mV (millivolt), u_uV (microvolt), u_nV (nanovolt), u_pV (picovolt).
Current: u_TA (teraamp), u_GA (gigaamp), u_MA (megaamp), u_kA (kiloamp), u_A (amp), u_mA (milliamp), u_uA (microamp), u_nA (nanoamp), u_pA (picoamp).
Resistance: u_TOhm (teraohm), u_GOhm (gigaohm), u_MOhm (megaohm), u_kOhm (kiloohm), u_Ohm (ohm), u_mOhm (milliohm), u_uOhm (microohm), u_nOhm (nanoohm), u_pOhm (picoohm).
Capacitance: u_TF (terafarad), u_GF (gigafarad), u_MF (megafarad), u_kF (kilofarad), u_F (farad), u_mF (millifarad), u_uF (microfarad), u_nF (nanofarad), u_pF (picofarad).
Inductance: u_TH (terahenry), u_GH (gigahenry), u_MH (megahenry), u_kH (kilohenry), u_H (henry), u_mH (millihenry), u_uH (microhenry), u_nH (nanohenry), u_pH (picohenry).
Time: u_Ts (terasecond), u_Gs (gigasecond), u_Ms (megasecond), u_ks (kilosecond), u_s (second), u_ms (millisecond), u_us (microsecond), u_ns (nanosecond), u_ps (picosecond).
Frequency: u_THz (terahertz), u_GHz (gigahertz), u_MHz (megahertz), u_kHz (kilohertz), u_Hz (hertz), u_mHz (millihertz), u_uHz (microhertz), u_nHz (nanohertz), u_pHz (picohertz).
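For example (the variable names here are just illustrative), a unit object is attached to a value with the @ operator when setting a part's value, as in the R and C parts used throughout this notebook:
r_load = R(value=470 @ u_Ohm)   # a 470 ohm resistor
c_bypass = C(value=100 @ u_nF)  # a 100 nanofarad capacitor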
Available Parts
The following is a list of parts (and their aliases) that are available for use in a SPICE simulation.
Many parts (like resistors) have only two pins denoted p and n, while some parts (like transmission lines)
have two ports composed of pins ip, in (the input port) and op, on (the output port).
The parts also have attributes that modify their characteristics.
These attributes can be set when a part is instantiated:
r = R(value=1@u_kOhm)
or they can be set after instantiation like this:
r.value = 1 @ u_kOhm
You can go here
for more information about what device characteristics the attributes control.
End of explanation
reset()
# Create and interconnect the components.
vs = V(ref='VS', dc_value = 1 @ u_V) # Create a voltage source named "VS" with an initial value of 1 volt.
r1 = R(value = 1 @ u_kOhm) # Create a 1 Kohm resistor.
vs['p'] += r1[1] # Connect one end of the resistor to the positive terminal of the voltage source.
gnd += vs['n'], r1[2] # Connect the other end of the resistor and the negative terminal of the source to ground.
# Output the SPICE deck for the circuit.
circ = generate_netlist() # Translate the SKiDL code into a PySpice Circuit object.
print(circ) # Print the SPICE deck for the circuit.
Explanation: Startup
When you import the PySpice functions into SKiDL:
from skidl.pyspice import *
several things occur:
The parts and utilities defined in skidl.libs.pyspice.py are imported.
The default CAD tool is set to SKIDL.
A ground net named gnd or GND is created.
In addition, when the parts are imported, their names and aliases are instantiated in the namespace of the
calling module to make it easier to create parts. So, rather than using the following standard SKiDL method
of creating a part:
c = Part('pyspice', 'C', value=1@u_uF)
you can just do:
c = C(value=1@u_uF)
Miscellaneous
You can use the node() function if you need the name of a circuit node in order to extract its data
from the results returned by a simulation. The argument to node() can be either a pin of a part or a net:
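For instance, using the vs and r1 parts from the resistor circuit defined just above, both of these should return the same node name, since the two pins are connected:
vs_node = node(vs['p'])   # node name taken from a part pin
r1_node = node(r1[1])     # same node, referenced through the attached resistor pin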
If you need the actual SPICE deck for a circuit so you can simulate it outside of Python, just print the
Circuit object returned by generate_netlist() or store it in a file:
End of explanation |
5,818 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fashion MNIST with Keras and TPUs
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step1: Defining our model
We will use a standard conv-net for this example. We have 3 layers with drop-out and batch normalization between each layer.
Step2: Training on the TPU
We're ready to train! We first construct our model on the TPU, and compile it.
Here we demonstrate that we can use a generator function and fit_generator to train the model. You can also pass in x_train and y_train to tpu_model.fit() instead.
Step3: Checking our results (inference)
Now that we're done training, let's see how well we can predict fashion categories! Keras/TPU prediction isn't working due to a small bug (fixed in TF 1.12!), but we can predict on the CPU to see how our results look. | Python Code:
import tensorflow as tf
import numpy as np
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
# add empty color dimension
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)
Explanation: Fashion MNIST with Keras and TPUs
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tpu/blob/master/tools/colab/fashion_mnist.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tpu/blob/master/tools/colab/fashion_mnist.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Let's try out using tf.keras and Cloud TPUs to train a model on the fashion MNIST dataset.
First, let's grab our dataset using tf.keras.datasets.
End of explanation
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.BatchNormalization(input_shape=x_train.shape[1:]))
model.add(tf.keras.layers.Conv2D(64, (5, 5), padding='same', activation='elu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2,2)))
model.add(tf.keras.layers.Dropout(0.25))
model.add(tf.keras.layers.BatchNormalization(input_shape=x_train.shape[1:]))
model.add(tf.keras.layers.Conv2D(128, (5, 5), padding='same', activation='elu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(tf.keras.layers.Dropout(0.25))
model.add(tf.keras.layers.BatchNormalization(input_shape=x_train.shape[1:]))
model.add(tf.keras.layers.Conv2D(256, (5, 5), padding='same', activation='elu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2,2)))
model.add(tf.keras.layers.Dropout(0.25))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(256))
model.add(tf.keras.layers.Activation('elu'))
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Dense(10))
model.add(tf.keras.layers.Activation('softmax'))
model.summary()
Explanation: Defining our model
We will use a standard conv-net for this example. We have 3 layers with drop-out and batch normalization between each layer.
End of explanation
import os
tpu_model = tf.contrib.tpu.keras_to_tpu_model(
model,
strategy=tf.contrib.tpu.TPUDistributionStrategy(
tf.contrib.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
)
)
tpu_model.compile(
optimizer=tf.train.AdamOptimizer(learning_rate=1e-3, ),
loss=tf.keras.losses.sparse_categorical_crossentropy,
metrics=['sparse_categorical_accuracy']
)
def train_gen(batch_size):
while True:
offset = np.random.randint(0, x_train.shape[0] - batch_size)
yield x_train[offset:offset+batch_size], y_train[offset:offset + batch_size]
tpu_model.fit_generator(
train_gen(1024),
epochs=10,
steps_per_epoch=100,
validation_data=(x_test, y_test),
)
Explanation: Training on the TPU
We're ready to train! We first construct our model on the TPU, and compile it.
Here we demonstrate that we can use a generator function and fit_generator to train the model. You can also pass in x_train and y_train to tpu_model.fit() instead.
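A minimal sketch of that alternative (the argument values simply mirror the generator settings above; this is not code from the original notebook):
tpu_model.fit(x_train, y_train,
              batch_size=1024,
              epochs=10,
              validation_data=(x_test, y_test))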
End of explanation
LABEL_NAMES = ['t_shirt', 'trouser', 'pullover', 'dress', 'coat', 'sandal', 'shirt', 'sneaker', 'bag', 'ankle_boots']
cpu_model = tpu_model.sync_to_cpu()
from matplotlib import pyplot
%matplotlib inline
def plot_predictions(images, predictions):
n = images.shape[0]
nc = int(np.ceil(n / 4))
f, axes = pyplot.subplots(nc, 4)
for i in range(nc * 4):
y = i // 4
x = i % 4
axes[x, y].axis('off')
label = LABEL_NAMES[np.argmax(predictions[i])]
confidence = np.max(predictions[i])
if i > n:
continue
axes[x, y].imshow(images[i])
axes[x, y].text(0.5, 0.5, label + '\n%.3f' % confidence, fontsize=14)
pyplot.gcf().set_size_inches(8, 8)
plot_predictions(np.squeeze(x_test[:16]),
cpu_model.predict(x_test[:16]))
Explanation: Checking our results (inference)
Now that we're done training, let's see how well we can predict fashion categories! Keras/TPU prediction isn't working due to a small bug (fixed in TF 1.12!), but we can predict on the CPU to see how our results look.
End of explanation |
5,819 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 Google LLC.
Step1: Introduction to ML Fairness
Disclaimer
This exercise explores just a small subset of ideas and techniques relevant to fairness in machine learning; it is not the whole story!
Learning Objectives
Increase awareness of different types of biases that can manifest in model data.
Explore feature data to proactively identify potential sources of bias before training a model.
Evaluate model performance by subgroup rather than in aggregate.
Overview
In this exercise, you'll explore datasets and evaluate classifiers with fairness in mind, noting the ways undesirable biases can creep into machine learning (ML).
Throughout, you will see FairAware tasks, which provide opportunities to contextualize ML processes with respect to fairness. In performing these tasks, you'll identify biases and consider the long-term impact of model predictions if these biases are not addressed.
About the Dataset and Prediction Task
In this exercise, you'll work with the Adult Census Income dataset, which is commonly used in machine learning literature. This data was extracted from the 1994 Census bureau database by Ronny Kohavi and Barry Becker.
Each example in the dataset contains the following demographic data for a set of individuals who took part in the 1994 Census
Step2: Load the Adult Dataset
With the modules now imported, we can load the Adult dataset into a pandas DataFrame data structure.
Step4: Analyzing the Adult Dataset with Facets
As mentioned in MLCC, it is important to understand your dataset before diving straight into the prediction task.
Some important questions to investigate when auditing a dataset for fairness
Step6: FairAware Task #1
Review the descriptive statistics and histograms for each numerical and continuous feature. Click the Show Raw Data button above the histograms for categorical features to see the distribution of values per category.
Then, try to answer the following questions from earlier
Step10: FairAware Task #2
Use the menus on the left panel of the visualization to change how the data is organized
Step11: Prediction Using TensorFlow Estimators
Now that we have a better sense of the Adult dataset, we can begin creating a neural network to predict income. In this section, we will be using TensorFlow's Estimator API to access the DNNClassifier class
Convert Adult Dataset into Tensors
We first have to define our input function, which takes the Adult dataset that is in a pandas DataFrame and converts it into tensors using the tf.estimator.inputs.pandas_input_fn() function.
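A rough sketch of what such an input function could look like (the function name, defaults, and label handling here are illustrative, not necessarily the notebook's exact code; income_bracket is the label column):
def pandas_to_input_fn(dataframe, num_epochs=1, shuffle=False, batch_size=128):
    # Features are everything except the label; the label is 1 for income >50K, else 0.
    return tf.estimator.inputs.pandas_input_fn(
        x=dataframe.drop('income_bracket', axis=1),
        y=dataframe['income_bracket'].apply(lambda label: '>50K' in label).astype(int),
        batch_size=batch_size,
        num_epochs=num_epochs,
        shuffle=shuffle)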
Step12: Represent Features in TensorFlow
TensorFlow requires that data maps to a model. To accomplish this, you have to use tf.feature_columns to ingest and represent features in TensorFlow.
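For instance, a couple of hedged examples of such columns (mirroring two of the Adult dataset features used in this exercise):
hours_per_week = tf.feature_column.numeric_column("hours_per_week")
workclass = tf.feature_column.categorical_column_with_hash_bucket("workclass", hash_bucket_size=100)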
Step13: Make Age a Categorical Feature
If you chose age when completing FairAware Task #3, you noticed that we suggested that age might benefit from bucketing (also known as binning), grouping together similar ages into different groups. This might help the model generalize better across age. As such, we will convert age from a numeric feature (technically, an ordinal feature) to a categorical feature.
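A small sketch of that conversion (the exact bucket boundaries here are illustrative):
age = tf.feature_column.numeric_column("age")
age_buckets = tf.feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])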
Step14: Consider Key Subgroups
When performing feature engineering, it's important to keep in mind that you may be working with data drawn from individuals belonging to subgroups, for which you'll want to evaluate model performance separately.
NOTE
Step15: Train a Deep Neural Net Model on Adult Dataset
With the features now ready to go, we can try predicting income using deep learning.
For the sake of simplicity, we are going to keep the neural network architecture light by simply defining a feed-forward neural network with two hidden layers.
But first, we have to convert our high-dimensional categorical features into a low-dimensional and dense real-valued vector, which we call an embedding vector. Luckily, indicator_column (think of it as one-hot encoding) and embedding_column (which converts sparse features into dense features) help us streamline the process.
The following cell creates the deep columns needed to move forward with defining the model.
Step16: With all our data preprocessing taken care of, we can now define the deep neural net model. Start by using the parameters defined below. (Later on, after you've defined evaluation metrics and evaluated the model, you can come back and tweak these parameters to compare results.)
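A hedged sketch of such a model definition (the deep_columns name refers to the columns built in the previous step; the hidden-unit sizes and optimizer settings are placeholders, not necessarily the notebook's values; tempfile is imported in the setup cell):
classifier = tf.estimator.DNNClassifier(
    feature_columns=deep_columns,
    hidden_units=[1024, 512],
    optimizer=tf.train.AdagradOptimizer(learning_rate=0.1),
    model_dir=tempfile.mkdtemp())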
Step17: To keep things simple, we will train for 1000 steps—but feel free to play around with this parameter.
Step18: We can now evaluate the overall model's performance using the held-out test set.
Step19: You can try retraining the model using different parameters. In the end, you will find that a deep neural net does a decent job in predicting income.
But what is missing here is evaluation metrics with respect to subgroups. We will cover some of the ways you can evaluate at the subgroup level in the next section.
Evaluating for Fairness Using a Confusion Matrix
While evaluating the overall performance of the model gives us some insight into its quality, it doesn't give us much insight into how well our model performs for different subgroups.
When evaluating a model for fairness, it's important to determine whether prediction errors are uniform across subgroups or whether certain subgroups are more susceptible to certain prediction errors than others.
A key tool for comparing the prevalence of different types of model errors is a confusion matrix. Recall from the Classification module of Machine Learning Crash Course that a confusion matrix is a grid that plots predictions vs. ground truth for your model, and tabulates statistics summarizing how often your model made the correct prediction and how often it made the wrong prediction.
Let's start by creating a binary confusion matrix for our income-prediction model—binary because our label (income_bracket) has only two possible values (<50K or >50K). We'll define an income of >50K as our positive label, and an income of <50k as our negative label.
NOTE
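As a toy illustration of what such a matrix holds (using the sklearn confusion_matrix import from the setup cell; the label vectors here are made up), the 2x2 result is laid out as [[TN, FP], [FN, TP]] when the labels are 0 and 1:
confusion_matrix(y_true=[0, 1, 1, 0, 1], y_pred=[0, 1, 0, 0, 1])  # rows = actual label, columns = predicted label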
Step20: We will also need help plotting the binary confusion matrix. The function below combines various third-party modules (pandas DataFrame, Matplotlib, Seaborn) to draw the confusion matrix.
Step21: Now that we have all the necessary functions defined, we can now compute the binary confusion matrix and evaluation metrics using the outcomes from our deep neural net model. The output of this cell is a tabbed view, which allows us to toggle between the confusion matrix and evaluation metrics table.
FairAware Task #4
Use the form below to generate confusion matrices for the two gender subgroups | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 Google LLC.
End of explanation
import os
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import tensorflow as tf
import tempfile
!pip install seaborn==0.8.1
import seaborn as sns
import itertools
from sklearn.metrics import confusion_matrix
from sklearn.metrics import roc_curve, roc_auc_score
from sklearn.metrics import precision_recall_curve
from google.colab import widgets
# For facets
from IPython.core.display import display, HTML
import base64
!pip install -q hopsfacets
import hopsfacets as facets
from hopsfacets.feature_statistics_generator import FeatureStatisticsGenerator
print('Modules are imported.')
Explanation: Introduction to ML Fairness
Disclaimer
This exercise explores just a small subset of ideas and techniques relevant to fairness in machine learning; it is not the whole story!
Learning Objectives
Increase awareness of different types of biases that can manifest in model data.
Explore feature data to proactively identify potential sources of bias before training a model
Evaluate model performance by subgroup rather than in aggregate
Overview
In this exercise, you'll explore datasets and evaluate classifiers with fairness in mind, noting the ways undesirable biases can creep into machine learning (ML).
Throughout, you will see FairAware tasks, which provide opportunities to contextualize ML processes with respect to fairness. In performing these tasks, you'll identify biases and consider the long-term impact of model predictions if these biases are not addressed.
About the Dataset and Prediction Task
In this exercise, you'll work with the Adult Census Income dataset, which is commonly used in machine learning literature. This data was extracted from the 1994 Census bureau database by Ronny Kohavi and Barry Becker.
Each example in the dataset contains the following demographic data for a set of individuals who took part in the 1994 Census:
Numeric Features
age: The age of the individual in years.
fnlwgt: The number of individuals the Census Organizations believes that set of observations represents.
education_num: An enumeration of the categorical representation of education. The higher the number, the higher the education that individual achieved. For example, an education_num of 11 represents Assoc_voc (associate degree at a vocational school), an education_num of 13 represents Bachelors, and an education_num of 9 represents HS-grad (high school graduate).
capital_gain: Capital gain made by the individual, represented in US Dollars.
capital_loss: Capital loss made by the individual, represented in US Dollars.
hours_per_week: Hours worked per week.
Categorical Features
workclass: The individual's type of employer. Examples include: Private, Self-emp-not-inc, Self-emp-inc, Federal-gov, Local-gov, State-gov, Without-pay, and Never-worked.
education: The highest level of education achieved for that individual.
marital_status: Marital status of the individual. Examples include: Married-civ-spouse, Divorced, Never-married, Separated, Widowed, Married-spouse-absent, and Married-AF-spouse.
occupation: The occupation of the individual. Examples include: tech-support, Craft-repair, Other-service, Sales, Exec-managerial, and more.
relationship: The relationship of each individual in a household. Examples include: Wife, Own-child, Husband, Not-in-family, Other-relative, and Unmarried.
gender: Gender of the individual available only in binary choices: Female or Male.
race: White, Asian-Pac-Islander, Amer-Indian-Eskimo, Black, and Other.
native_country: Country of origin of the individual. Examples include: United-States, Cambodia, England, Puerto-Rico, Canada, Germany, Outlying-US(Guam-USVI-etc), India, Japan, and more.
Prediction Task
The prediction task is to determine whether a person makes over $50,000 US Dollars a year.
Label
income_bracket: Whether the person makes more than $50,000 US Dollars annually.
Notes on Data Collection
All the examples extracted for this dataset meet the following conditions:
* age is 16 years or older.
* The adjusted gross income (used to calculate income_bracket) is greater than $100 USD annually.
* fnlwgt is greater than 0.
* hours_per_week is greater than 0.
Setup
First, import some modules that will be used throughout this notebook.
End of explanation
COLUMNS = ["age", "workclass", "fnlwgt", "education", "education_num",
"marital_status", "occupation", "relationship", "race", "gender",
"capital_gain", "capital_loss", "hours_per_week", "native_country",
"income_bracket"]
train_df = pd.read_csv(
"https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data",
names=COLUMNS,
sep=r'\s*,\s*',
engine='python',
na_values="?")
test_df = pd.read_csv(
"https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test",
names=COLUMNS,
sep=r'\s*,\s*',
skiprows=[0],
engine='python',
na_values="?")
# Drop rows with missing values
train_df = train_df.dropna(how="any", axis=0)
test_df = test_df.dropna(how="any", axis=0)
print('UCI Adult Census Income dataset loaded.')
Explanation: Load the Adult Dataset
With the modules now imported, we can load the Adult dataset into a pandas DataFrame data structure.
End of explanation
#@title Visualize the Data in Facets
fsg = FeatureStatisticsGenerator()
dataframes = [
{'table': train_df, 'name': 'trainData'}]
censusProto = fsg.ProtoFromDataFrames(dataframes)
protostr = base64.b64encode(censusProto.SerializeToString()).decode("utf-8")
HTML_TEMPLATE = """<link rel="import" href="https://raw.githubusercontent.com/PAIR-code/facets/master/facets-dist/facets-jupyter.html">
<facets-overview id="elem"></facets-overview>
<script>
document.querySelector("#elem").protoInput = "{protostr}";
</script>"""
html = HTML_TEMPLATE.format(protostr=protostr)
display(HTML(html))
Explanation: Analyzing the Adult Dataset with Facets
As mentioned in MLCC, it is important to understand your dataset before diving straight into the prediction task.
Some important questions to investigate when auditing a dataset for fairness:
Are there missing feature values for a large number of observations?
Are there features that are missing that might affect other features?
Are there any unexpected feature values?
What signs of data skew do you see?
To start, we can use Facets Overview, an interactive visualization tool that can help us explore the dataset. With Facets Overview, we can quickly analyze the distribution of values across the Adult dataset.
End of explanation
#@title Set the Number of Data Points to Visualize in Facets Dive
SAMPLE_SIZE = 2500 #@param
train_dive = train_df.sample(SAMPLE_SIZE).to_json(orient='records')
HTML_TEMPLATE = """<link rel="import" href="https://raw.githubusercontent.com/PAIR-code/facets/master/facets-dist/facets-jupyter.html">
<facets-dive id="elem" height="600"></facets-dive>
<script>
var data = {jsonstr};
document.querySelector("#elem").data = data;
</script>"""
html = HTML_TEMPLATE.format(jsonstr=train_dive)
display(HTML(html))
Explanation: FairAware Task #1
Review the descriptive statistics and histograms for each numerical and continuous feature. Click the Show Raw Data button above the histograms for categorical features to see the distribution of values per category.
Then, try to answer the following questions from earlier:
Are there missing feature values for a large number of observations?
Are there features that are missing that might affect other features?
Are there any unexpected feature values?
What signs of data skew do you see?
Solution
Click below for some insights we uncovered.
We can see from reviewing the missing columns for both numeric and categorical features that there are no missing feature values, so that is not a concern here.
By looking at the min/max values and histograms for each numeric feature, we can pinpoint any extreme outliers in our data set. For hours_per_week, we can see that the minimum is 1, which might be a bit surprising, given that most jobs typically require multiple hours of work per week. For capital_gain and capital_loss, we can see that over 90% of values are 0. Given that capital gains/losses are only registered by individuals who make investments, it's certainly plausible that less than 10% of examples would have nonzero values for these features, but we may want to take a closer look to verify the values for these features are valid.
In looking at the histogram for gender, we see that over two-thirds (approximately 67%) of examples represent males. This strongly suggests data skew, as we would expect the breakdown between genders to be closer to 50/50.
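If you want to double-check these observations outside of Facets, a quick pandas sketch along the following lines (using the train_df loaded earlier) reproduces the same numbers:

```python
# Optional sanity checks of the observations above (sketch).
print((train_df['capital_gain'] == 0).mean())            # fraction of zero capital_gain values
print((train_df['capital_loss'] == 0).mean())            # fraction of zero capital_loss values
print(train_df['gender'].value_counts(normalize=True))   # share of each gender value
print(train_df['hours_per_week'].min())                  # minimum hours_per_week
```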
A Deeper Dive
To further explore the dataset, we can use Facets Dive, a tool that provides an interactive interface where each individual item in the visualization represents a data point. But to use Facets Dive, we need to convert our data to a JSON array.
Thankfully the DataFrame method to_json() takes care of this for us.
Run the cell below to perform the data transform to JSON and also load Facets Dive.
End of explanation
feature = 'capital_gain / capital_loss' #@param ["", "hours_per_week", "fnlwgt", "gender", "capital_gain / capital_loss", "age"] {allow-input: false}
if feature == "hours_per_week":
print(
'''It does seem a little strange to see 'hours_per_week' max out at 99 hours,
which could lead to data misrepresentation. One way to address this is by
representing 'hours_per_week' as a binary "working 40 hours/not working 40
hours" feature. Also keep in mind that data was extracted based on work hours
being greater than 0. In other words, this feature representation excludes a
subpopulation of the US that is not working. This could skew the outcomes of the
model.''')
if feature == "fnlwgt":
  print(
'''\'fnlwgt\' represents the weight of the observations. After fitting the model
to this data set, if a certain group of individuals end up performing poorly
compared to other groups, then we could explore ways of reweighting each data
point using this feature.''')
if feature == "gender":
  print(
'''Looking at the ratio between men and women shows how disproportionate the data
is compared to the real world where the ratio (at least in the US) is closer to
1:1. This could pose a huge problem in performance across gender. Considerable
measures may need to be taken to upsample the underrepresented group (in this
case, women).''')
if feature == "capital_gain / capital_loss":
  print(
'''Both \'capital_gain\' and \'capital_loss\' have very low variance, which might
suggest they don't contribute a whole lot of information for predicting income. It
may be okay to omit these features rather than giving the model more noise.''')
if feature == "age":
  print(
'''"age" has a lot of variance, so it might benefit from bucketing to learn
fine-grained correlations between income and age, as well as to prevent
overfitting.''')
Explanation: FairAware Task #2
Use the menus on the left panel of the visualization to change how the data is organized:
In the Faceting | X-Axis menu, select education, and in the Display | Color and Display | Type menus, select income_bracket. How would you describe the relationship between education level and income bracket?
Next, in the Faceting | X-Axis menu, select marital_status, and in the Display | Color and Display | Type menus, select gender. What noteworthy observations can you make about the gender distributions for each marital-status category?
As you perform the above tasks, keep the following fairness-related questions in mind:
What's missing?
What's being overgeneralized?
What's being underrepresented?
How do the variables, and their values, reflect the real world?
What might we be leaving out?
Solution
Click below for some insights we uncovered.
In our data set, higher education levels generally tend to correlate with a higher income bracket. An income level of greater than $50,000 is more heavily represented in examples where education level is Bachelor's degree or higher.
In most marital-status categories, the distribution of male vs. female values is close to 1:1. The one notable exception is "married-civ-spouse", where male outnumbers female by more than 5:1. Given that we already discovered in Task #1 that there is a disproportionately high representation of men in our data set, we can now infer that it's married women specifically that are underrepresented in our data.
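If you'd like to confirm the second observation numerically, one possible cross-tabulation (a sketch using the train_df loaded earlier) is:

```python
# Share of each gender within every marital-status category (sketch).
print(pd.crosstab(train_df['marital_status'], train_df['gender'], normalize='index'))
```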
Summary
Plotting histograms, ranking most-to-least common examples, identifying duplicate or missing examples, making sure the training and test sets are similar, computing feature quantiles—these are all critical analyses to perform on your data.
The better you know what's going on in your data, the more insight you'll have as to where unfairness might creep in!
FairAware Task #3
Now that you've explored the dataset using Facets, see if you can identify some of the problems that may arise with regard to fairness based on what you've learned about its features.
Which of the following features might pose a problem with regard to fairness?
Choose a feature from the drop-down options in the cell below, and then run the cell to check your answer. Then explore the rest of the options to get more insight about how each influences the model's predictions.
End of explanation
def csv_to_pandas_input_fn(data, batch_size=100, num_epochs=1, shuffle=False):
return tf.estimator.inputs.pandas_input_fn(
x=data.drop('income_bracket', axis=1),
y=data['income_bracket'].apply(lambda x: ">50K" in x).astype(int),
batch_size=batch_size,
num_epochs=num_epochs,
shuffle=shuffle,
num_threads=1)
print('csv_to_pandas_input_fn() defined.')
Explanation: Prediction Using TensorFlow Estimators
Now that we have a better sense of the Adult dataset, we can now begin with creating a neural network to predict income. In this section, we will be using TensorFlow's Estimator API to access the DNNClassifier class
Convert Adult Dataset into Tensors
We first have to define our input function, which will take the Adult dataset that is in a pandas DataFrame and convert it into tensors using the tf.estimator.inputs.pandas_input_fn() function.
End of explanation
#@title Categorical Feature Columns
# Since we don't know the full range of possible values with occupation and
# native_country, we'll use categorical_column_with_hash_bucket() to help map
# each feature string into an integer ID.
occupation = tf.feature_column.categorical_column_with_hash_bucket(
"occupation", hash_bucket_size=1000)
native_country = tf.feature_column.categorical_column_with_hash_bucket(
"native_country", hash_bucket_size=1000)
# For the remaining categorical features, since we know what the possible values
# are, we can be more explicit and use categorical_column_with_vocabulary_list()
gender = tf.feature_column.categorical_column_with_vocabulary_list(
"gender", ["Female", "Male"])
race = tf.feature_column.categorical_column_with_vocabulary_list(
"race", [
"White", "Asian-Pac-Islander", "Amer-Indian-Eskimo", "Other", "Black"
])
education = tf.feature_column.categorical_column_with_vocabulary_list(
"education", [
"Bachelors", "HS-grad", "11th", "Masters", "9th",
"Some-college", "Assoc-acdm", "Assoc-voc", "7th-8th",
"Doctorate", "Prof-school", "5th-6th", "10th", "1st-4th",
"Preschool", "12th"
])
marital_status = tf.feature_column.categorical_column_with_vocabulary_list(
"marital_status", [
"Married-civ-spouse", "Divorced", "Married-spouse-absent",
"Never-married", "Separated", "Married-AF-spouse", "Widowed"
])
relationship = tf.feature_column.categorical_column_with_vocabulary_list(
"relationship", [
"Husband", "Not-in-family", "Wife", "Own-child", "Unmarried",
"Other-relative"
])
workclass = tf.feature_column.categorical_column_with_vocabulary_list(
"workclass", [
"Self-emp-not-inc", "Private", "State-gov", "Federal-gov",
"Local-gov", "?", "Self-emp-inc", "Without-pay", "Never-worked"
])
print('Categorical feature columns defined.')
#@title Numeric Feature Columns
# For Numeric features, we can just call on feature_column.numeric_column()
# to use its raw value instead of having to create a map between value and ID.
age = tf.feature_column.numeric_column("age")
fnlwgt = tf.feature_column.numeric_column("fnlwgt")
education_num = tf.feature_column.numeric_column("education_num")
capital_gain = tf.feature_column.numeric_column("capital_gain")
capital_loss = tf.feature_column.numeric_column("capital_loss")
hours_per_week = tf.feature_column.numeric_column("hours_per_week")
print('Numeric feature columns defined.')
Explanation: Represent Features in TensorFlow
TensorFlow requires that data maps to a model. To accomplish this, you have to use tf.feature_columns to ingest and represent features in TensorFlow.
End of explanation
age_buckets = tf.feature_column.bucketized_column(
age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
Explanation: Make Age a Categorical Feature
If you chose age when completing FairAware Task #3, you noticed that we suggested that age might benefit from bucketing (also known as binning), grouping together similar ages into different groups. This might help the model generalize better across age. As such, we will convert age from a numeric feature (technically, an ordinal feature) to a categorical feature.
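As a rough illustration of what the age_buckets column above does, the following sketch (with made-up example ages) shows which bucket index a few raw ages would map to, roughly mirroring how bucketized_column assigns values relative to its boundaries:

```python
# Illustration only (sketch): bucket index for a few hypothetical ages.
import numpy as np
boundaries = [18, 25, 30, 35, 40, 45, 50, 55, 60, 65]
for example_age in [17, 22, 37, 64, 70]:
    print(example_age, '-> bucket index', np.digitize(example_age, boundaries))
```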
End of explanation
# List of variables, with special handling for gender subgroup.
variables = [native_country, education, occupation, workclass,
relationship, age_buckets]
subgroup_variables = [gender]
feature_columns = variables + subgroup_variables
Explanation: Consider Key Subgroups
When performing feature engineering, it's important to keep in mind that you may be working with data drawn from individuals belonging to subgroups, for which you'll want to evaluate model performance separately.
NOTE: In this context, a subgroup is defined as a group of individuals who share a given characteristic—such as race, gender, or sexual orientation—that merits special consideration when evaluating a model with fairness in mind.
When we want our models to mitigate, or leverage, the learned signal of a characteristic pertaining to a subgroup, we will want to use different kinds of tools and techniques—most of which are still open research at this point.
As you work with different variables and define tasks for them, it can be useful to think about what comes next. For example, where are the places where the interaction of the variable and the task could be a concern?
Define the Model Features
Now we can explicitly define which feature we will include in our model.
We'll consider gender a subgroup and save it in a separate subgroup_variables list, so we can add special handling for it as needed.
End of explanation
deep_columns = [
tf.feature_column.indicator_column(workclass),
tf.feature_column.indicator_column(education),
tf.feature_column.indicator_column(age_buckets),
tf.feature_column.indicator_column(gender),
tf.feature_column.indicator_column(relationship),
tf.feature_column.embedding_column(native_country, dimension=8),
tf.feature_column.embedding_column(occupation, dimension=8),
]
print(deep_columns)
print('Deep columns created.')
Explanation: Train a Deep Neural Net Model on Adult Dataset
With the features now ready to go, we can try predicting income using deep learning.
For the sake of simplicity, we are going to keep the neural network architecture light by simply defining a feed-forward neural network with two hidden layers.
But first, we have to convert our high-dimensional categorical features into a low-dimensional and dense real-valued vector, which we call an embedding vector. Luckily, indicator_column (think of it as one-hot encoding) and embedding_column (that converts sparse features into dense features) help us streamline the process.
The following cell creates the deep columns needed to move forward with defining the model.
End of explanation
#@title Define Deep Neural Net Model
HIDDEN_UNITS = [1024, 512] #@param
LEARNING_RATE = 0.1 #@param
L1_REGULARIZATION_STRENGTH = 0.0001 #@param
L2_REGULARIZATION_STRENGTH = 0.0001 #@param
model_dir = tempfile.mkdtemp()
single_task_deep_model = tf.estimator.DNNClassifier(
feature_columns=deep_columns,
hidden_units=HIDDEN_UNITS,
optimizer=tf.train.ProximalAdagradOptimizer(
learning_rate=LEARNING_RATE,
l1_regularization_strength=L1_REGULARIZATION_STRENGTH,
l2_regularization_strength=L2_REGULARIZATION_STRENGTH),
model_dir=model_dir)
print('Deep neural net model defined.')
Explanation: With all our data preprocessing taken care of, we can now define the deep neural net model. Start by using the parameters defined below. (Later on, after you've defined evaluation metrics and evaluated the model, you can come back and tweak these parameters to compare results.)
End of explanation
#@title Fit Deep Neural Net Model to the Adult Training Dataset
STEPS = 1000 #@param
single_task_deep_model.train(
input_fn=csv_to_pandas_input_fn(train_df, num_epochs=None, shuffle=True),
steps=STEPS);
print "Deep neural net model is done fitting."
Explanation: To keep things simple, we will train for 1000 steps—but feel free to play around with this parameter.
End of explanation
#@title Evaluate Deep Neural Net Performance
results = single_task_deep_model.evaluate(
input_fn=csv_to_pandas_input_fn(test_df, num_epochs=1, shuffle=False),
steps=None)
print("model directory = %s" % model_dir)
print("---- Results ----")
for key in sorted(results):
print("%s: %s" % (key, results[key]))
Explanation: We can now evaluate the overall model's performance using the held-out test set.
End of explanation
#@test {"output": "ignore"}
#@title Define Function to Compute Binary Confusion Matrix Evaluation Metrics
def compute_eval_metrics(references, predictions):
tn, fp, fn, tp = confusion_matrix(references, predictions).ravel()
precision = tp / float(tp + fp)
recall = tp / float(tp + fn)
false_positive_rate = fp / float(fp + tn)
false_omission_rate = fn / float(tn + fn)
return precision, recall, false_positive_rate, false_omission_rate
print('Binary confusion matrix and evaluation metrics defined.')
Explanation: You can try retraining the model using different parameters. In the end, you will find that a deep neural net does a decent job in predicting income.
But what is missing here is evaluation metrics with respect to subgroups. We will cover some of the ways you can evaluate at the subgroup level in the next section.
Evaluating for Fairness Using a Confusion Matrix
While evaluating the overall performance of the model gives us some insight into its quality, it doesn't give us much insight into how well our model performs for different subgroups.
When evaluating a model for fairness, it's important to determine whether prediction errors are uniform across subgroups or whether certain subgroups are more susceptible to certain prediction errors than others.
A key tool for comparing the prevalence of different types of model errors is a confusion matrix. Recall from the Classification module of Machine Learning Crash Course that a confusion matrix is a grid that plots predictions vs. ground truth for your model, and tabulates statistics summarizing how often your model made the correct prediction and how often it made the wrong prediction.
Let's start by creating a binary confusion matrix for our income-prediction model—binary because our label (income_bracket) has only two possible values (<50K or >50K). We'll define an income of >50K as our positive label, and an income of <50k as our negative label.
NOTE: Positive and negative in this context should not be interpreted as value judgments (we are not suggesting that someone who earns more than 50k a year is a better person than someone who earns less than 50k). They are just standard terms used to distinguish between the two possible predictions the model can make.
Cases where the model makes the correct prediction (the prediction matches the ground truth) are classified as true, and cases where the model makes the wrong prediction are classified as false.
Our confusion matrix thus represents four possible states:
true positive: Model predicts >50K, and that is the ground truth.
true negative: Model predicts <50K, and that is the ground truth.
false positive: Model predicts >50K, but the ground truth is <50K.
false negative: Model predicts <50K, but the ground truth is >50K.
NOTE: If desired, we can use the number of outcomes in each of these states to calculate secondary evaluation metrics, such as precision and recall.
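For instance, with made-up counts (hypothetical numbers, purely to illustrate the formulas used in compute_eval_metrics above):

```python
# Hypothetical counts, just to show how the secondary metrics are derived.
tp, fp, fn, tn = 100, 20, 30, 850
print('precision:', tp / float(tp + fp))              # 100/120 ~ 0.833
print('recall:', tp / float(tp + fn))                 # 100/130 ~ 0.769
print('false positive rate:', fp / float(fp + tn))    # 20/870 ~ 0.023
print('false omission rate:', fn / float(tn + fn))    # 30/880 ~ 0.034
```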
Plot the Confusion Matrix
The following cell defines a function that uses the sklearn.metrics.confusion_matrix module to calculate all the instances (true positive, true negative, false positive, and false negative) needed to compute our binary confusion matrix and evaluation metrics.
End of explanation
#@title Define Function to Visualize Binary Confusion Matrix
def plot_confusion_matrix(confusion_matrix, class_names, figsize = (8,6)):
# We're taking our calculated binary confusion matrix that's already in form
# of an array and turning it into a Pandas DataFrame because it's a lot
# easier to work with when visualizing a heat map in Seaborn.
df_cm = pd.DataFrame(
confusion_matrix, index=class_names, columns=class_names,
)
fig = plt.figure(figsize=figsize)
# Combine the instance (numerical value) with its description
strings = np.asarray([['True Positives', 'False Negatives'],
['False Positives', 'True Negatives']])
labels = (np.asarray(
["{0:d}\n{1}".format(value, string) for string, value in zip(
strings.flatten(), confusion_matrix.flatten())])).reshape(2, 2)
heatmap = sns.heatmap(df_cm, annot=labels, fmt="");
heatmap.yaxis.set_ticklabels(
heatmap.yaxis.get_ticklabels(), rotation=0, ha='right')
heatmap.xaxis.set_ticklabels(
heatmap.xaxis.get_ticklabels(), rotation=45, ha='right')
plt.ylabel('References')
plt.xlabel('Predictions')
return fig
print "Binary confusion matrix visualization defined."
Explanation: We will also need help plotting the binary confusion matrix. The function below combines various third-party modules (pandas DataFrame, Matplotlib, Seaborn) to draw the confusion matrix.
End of explanation
#@title Visualize Binary Confusion Matrix and Compute Evaluation Metrics Per Subgroup
CATEGORY = "gender" #@param {type:"string"}
SUBGROUP = "Male" #@param {type:"string"}
# Given the defined subgroup, generate predictions and obtain its corresponding
# ground truth.
predictions_dict = single_task_deep_model.predict(input_fn=csv_to_pandas_input_fn(
test_df.loc[test_df[CATEGORY] == SUBGROUP], num_epochs=1, shuffle=False))
predictions = []
for prediction_item in predictions_dict:
predictions.append(prediction_item['class_ids'][0])
actuals = list(
test_df.loc[test_df[CATEGORY] == SUBGROUP]['income_bracket'].apply(
lambda x: '>50K' in x).astype(int))
classes = ['Over $50K', 'Less than $50K']
# To stay consistent, we have to flip the confusion
# matrix around on both axes because sklearn's confusion matrix module by
# default is rotated.
rotated_confusion_matrix = np.fliplr(confusion_matrix(actuals, predictions))
rotated_confusion_matrix = np.flipud(rotated_confusion_matrix)
tb = widgets.TabBar(['Confusion Matrix', 'Evaluation Metrics'], location='top')
with tb.output_to('Confusion Matrix'):
  plot_confusion_matrix(rotated_confusion_matrix, classes);
with tb.output_to('Evaluation Metrics'):
  grid = widgets.Grid(2,4)
  p, r, fpr, fomr = compute_eval_metrics(actuals, predictions)
  with grid.output_to(0, 0):
    print(" Precision ")
  with grid.output_to(1, 0):
    print(" %.4f " % p)
  with grid.output_to(0, 1):
    print(" Recall ")
  with grid.output_to(1, 1):
    print(" %.4f " % r)
  with grid.output_to(0, 2):
    print(" False Positive Rate ")
  with grid.output_to(1, 2):
    print(" %.4f " % fpr)
  with grid.output_to(0, 3):
    print(" False Omission Rate ")
  with grid.output_to(1, 3):
    print(" %.4f " % fomr)
Explanation: Now that we have all the necessary functions defined, we can now compute the binary confusion matrix and evaluation metrics using the outcomes from our deep neural net model. The output of this cell is a tabbed view, which allows us to toggle between the confusion matrix and evaluation metrics table.
FairAware Task #4
Use the form below to generate confusion matrices for the two gender subgroups: Female and Male. Compare the number of False Positives and False Negatives for each subgroup. Are there any significant disparities in error rates that suggest the model performs better for one subgroup than another?
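If you prefer to compare both subgroups programmatically rather than re-running the form twice, one possible sketch that reuses the helpers defined above is:

```python
# Sketch: evaluation metrics for both gender subgroups, side by side.
for subgroup in ['Female', 'Male']:
    subgroup_df = test_df.loc[test_df['gender'] == subgroup]
    subgroup_predictions = [
        p['class_ids'][0] for p in single_task_deep_model.predict(
            input_fn=csv_to_pandas_input_fn(subgroup_df, num_epochs=1, shuffle=False))]
    subgroup_actuals = list(
        subgroup_df['income_bracket'].apply(lambda x: '>50K' in x).astype(int))
    precision, recall, fpr, fomr = compute_eval_metrics(
        subgroup_actuals, subgroup_predictions)
    print('%s: precision=%.4f recall=%.4f fpr=%.4f fomr=%.4f'
          % (subgroup, precision, recall, fpr, fomr))
```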
End of explanation |
5,820 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simulating with FBA
Simulations using flux balance analysis can be solved using Model.optimize(). This will maximize or minimize (maximizing is the default) flux through the objective reactions.
Step1: Running FBA
Step2: The Model.optimize() function will return a Solution object, which will also be stored at model.solution. A solution object has several attributes
Step3: Changing the Objectives
The objective function is determined from the objective_coefficient attribute of the objective reaction(s). Currently in the model, there is only one objective reaction, with an objective coefficient of 1.
Step4: The objective function can be changed by assigning Model.objective, which can be a reaction object (or just its name), or a dict of {Reaction
Step5: The objective function can also be changed by setting Reaction.objective_coefficient directly.
Step6: Running FVA
FBA will not always give a unique solution, because multiple flux states can achieve the same optimum. FVA (or flux variability analysis) finds the ranges of each metabolic flux at the optimum.
Step7: Setting parameter fraction_of_optimium=0.90 would give the flux ranges for reactions at 90% optimality. | Python Code:
import pandas
pandas.options.display.max_rows = 100
import cobra.test
model = cobra.test.create_test_model("textbook")
Explanation: Simulating with FBA
Simulations using flux balance analysis can be solved using Model.optimize(). This will maximize or minimize (maximizing is the default) flux through the objective reactions.
End of explanation
model.optimize()
Explanation: Running FBA
End of explanation
model.solution.status
model.solution.f
Explanation: The Model.optimize() function will return a Solution object, which will also be stored at model.solution. A solution object has several attributes:
f: the objective value
status: the status from the linear programming solver
x_dict: a dictionary of {reaction_id: flux_value} (also called "primal")
x: a list for x_dict
y_dict: a dictionary of {metabolite_id: dual_value}.
y: a list for y_dict
For example, after the last call to model.optimize(), the status should be 'optimal' if the solver returned no errors, and f should be the objective value
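As a short sketch of checking these attributes (the reaction id comes from the textbook model loaded above):

```python
# Inspect the stored solution after the last optimize() call (sketch).
print(model.solution.status)                        # should be 'optimal'
print(model.solution.f)                             # objective value
print(model.solution.x_dict['Biomass_Ecoli_core'])  # flux through one reaction at the optimum
```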
End of explanation
model.objective
Explanation: Changing the Objectives
The objective function is determined from the objective_coefficient attribute of the objective reaction(s). Currently in the model, there is only one objective reaction, with an objective coefficient of 1.
End of explanation
# change the objective to ATPM
# the upper bound should be 1000 so we get the actual optimal value
model.reactions.get_by_id("ATPM").upper_bound = 1000.
model.objective = "ATPM"
model.objective
model.optimize()
Explanation: The objective function can be changed by assigning Model.objective, which can be a reaction object (or just its name), or a dict of {Reaction: objective_coefficient}.
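A few equivalent ways of writing that assignment (sketch):

```python
model.objective = "ATPM"                                  # by reaction name
model.objective = model.reactions.get_by_id("ATPM")       # by reaction object
model.objective = {model.reactions.get_by_id("ATPM"): 1}  # {Reaction: objective_coefficient}
```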
End of explanation
model.reactions.get_by_id("ATPM").objective_coefficient = 0.
model.reactions.get_by_id("Biomass_Ecoli_core").objective_coefficient = 1.
model.objective
Explanation: The objective function can also be changed by setting Reaction.objective_coefficient directly.
End of explanation
fva_result = cobra.flux_analysis.flux_variability_analysis(model, model.reactions[:20])
pandas.DataFrame.from_dict(fva_result).T
Explanation: Running FVA
FBA will not always give a unique solution, because multiple flux states can achieve the same optimum. FVA (or flux variability analysis) finds the ranges of each metabolic flux at the optimum.
End of explanation
fva_result = cobra.flux_analysis.flux_variability_analysis(model, model.reactions[:20], fraction_of_optimum=0.9)
pandas.DataFrame.from_dict(fva_result).T
Explanation: Setting parameter fraction_of_optimum=0.90 would give the flux ranges for reactions at 90% optimality.
End of explanation |
5,821 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Composing a pipeline from reusable, pre-built, and lightweight components
This tutorial describes how to build a Kubeflow pipeline from reusable, pre-built, and lightweight components. The following provides a summary of the steps involved in creating and using a reusable component
Step1: Create client
If you run this notebook outside of a Kubeflow cluster, run the following command
Step2: Build reusable components
Writing the program code
The following cell creates a file app.py that contains a Python script. The script downloads the MNIST dataset, trains a neural-network-based classification model, writes the training log, and exports the trained model to Google Cloud Storage.
Your component can create outputs that the downstream components can use as inputs. Each output must be a string and the container image must write each output to a separate local text file. For example, if a training component needs to output the path of the trained model, the component writes the path into a local file, such as /output.txt.
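For instance (a sketch; the placeholder value is hypothetical), a training component that exports a model could expose the model path to downstream steps like this:

```python
gcs_model_path = 'gs://<your-bucket>/mnist_model'  # placeholder, set by your training code
with open('/output.txt', 'w') as f:
    f.write(gcs_model_path)
```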
Step3: Create a Docker container
Create your own container image that includes your program.
Creating a Dockerfile
Now create a container that runs the script. Start by creating a Dockerfile. A Dockerfile contains the instructions to assemble a Docker image. The FROM statement specifies the Base Image from which you are building. WORKDIR sets the working directory. When you assemble the Docker image, COPY copies the required files and directories (for example, app.py) to the file system of the container. RUN executes a command (for example, install the dependencies) and commits the results.
Step4: Build docker image
Now that we have created our Dockerfile, we need to build the image and push it to a registry that will host it. There are three possible options
Step5: If you want to use docker to build the image
Run the following in a cell
```bash
%%bash -s "{PROJECT_ID}"
IMAGE_NAME="mnist_training_kf_pipeline"
TAG="latest" # "v_$(date +%Y%m%d_%H%M%S)"
Create script to build docker image and push it.
cat > ./tmp/components/mnist_training/build_image.sh <<HERE
PROJECT_ID="${1}"
IMAGE_NAME="${IMAGE_NAME}"
TAG="${TAG}"
GCR_IMAGE="gcr.io/\${PROJECT_ID}/\${IMAGE_NAME}
Step6: Writing your component definition file
To create a component from your containerized program, you must write a component specification in YAML that describes the component for the Kubeflow Pipelines system.
For the complete definition of a Kubeflow Pipelines component, see the component specification. However, for this tutorial you don’t need to know the full schema of the component specification. The notebook provides enough information to complete the tutorial.
Start writing the component definition (component.yaml) by specifying your container image in the component’s implementation section
Step7: Define deployment operation on AI Platform
Step9: Kubeflow serving deployment component as an option. Note that the deployed endpoint URI is not available as an output of this component.
```python
kubeflow_deploy_op = comp.load_component_from_url(
'https
Step10: Create your workflow as a Python function
Define your pipeline as a Python function. @kfp.dsl.pipeline is a required decorator, and must include name and description properties. Then compile the pipeline function. After the compilation is completed, a pipeline file is created.
Step11: Submit a pipeline run | Python Code:
import kfp
import kfp.gcp as gcp
import kfp.dsl as dsl
import kfp.compiler as compiler
import kfp.components as comp
import datetime
import kubernetes as k8s
# Required Parameters
PROJECT_ID='<ADD GCP PROJECT HERE>'
GCS_BUCKET='gs://<ADD STORAGE LOCATION HERE>'
Explanation: Composing a pipeline from reusable, pre-built, and lightweight components
This tutorial describes how to build a Kubeflow pipeline from reusable, pre-built, and lightweight components. The following provides a summary of the steps involved in creating and using a reusable component:
Write the program that contains your component’s logic. The program must use files and command-line arguments to pass data to and from the component.
Containerize the program.
Write a component specification in YAML format that describes the component for the Kubeflow Pipelines system.
Use the Kubeflow Pipelines SDK to load your component, use it in a pipeline and run that pipeline.
Then, we will compose a pipeline from a reusable component, a pre-built component, and a lightweight component. The pipeline will perform the following steps:
- Train an MNIST model and export it to Google Cloud Storage.
- Deploy the exported TensorFlow model on AI Platform Prediction service.
- Test the deployment by calling the endpoint with test data.
Note: Ensure that you have Docker installed, if you want to build the image locally, by running the following command:
which docker
The result should be something like:
/usr/bin/docker
End of explanation
# Optional Parameters, but required for running outside Kubeflow cluster
# The host for 'AI Platform Pipelines' ends with 'pipelines.googleusercontent.com'
# The host for pipeline endpoint of 'full Kubeflow deployment' ends with '/pipeline'
# Examples are:
# https://7c021d0340d296aa-dot-us-central2.pipelines.googleusercontent.com
# https://kubeflow.endpoints.kubeflow-pipeline.cloud.goog/pipeline
HOST = '<ADD HOST NAME TO TALK TO KUBEFLOW PIPELINE HERE>'
# For 'full Kubeflow deployment' on GCP, the endpoint is usually protected through IAP, therefore the following
# will be needed to access the endpoint.
CLIENT_ID = '<ADD OAuth CLIENT ID USED BY IAP HERE>'
OTHER_CLIENT_ID = '<ADD OAuth CLIENT ID USED TO OBTAIN AUTH CODES HERE>'
OTHER_CLIENT_SECRET = '<ADD OAuth CLIENT SECRET USED TO OBTAIN AUTH CODES HERE>'
# This is to ensure the proper access token is present to reach the end point for 'AI Platform Pipelines'
# If you are not working with 'AI Platform Pipelines', this step is not necessary
! gcloud auth print-access-token
# Create kfp client
in_cluster = True
try:
k8s.config.load_incluster_config()
except:
in_cluster = False
pass
if in_cluster:
client = kfp.Client()
else:
if HOST.endswith('googleusercontent.com'):
CLIENT_ID = None
OTHER_CLIENT_ID = None
OTHER_CLIENT_SECRET = None
client = kfp.Client(host=HOST,
client_id=CLIENT_ID,
other_client_id=OTHER_CLIENT_ID,
other_client_secret=OTHER_CLIENT_SECRET)
Explanation: Create client
If you run this notebook outside of a Kubeflow cluster, run the following command:
- host: The URL of your Kubeflow Pipelines instance, for example "https://<your-deployment>.endpoints.<your-project>.cloud.goog/pipeline"
- client_id: The client ID used by Identity-Aware Proxy
- other_client_id: The client ID used to obtain the auth codes and refresh tokens.
- other_client_secret: The client secret used to obtain the auth codes and refresh tokens.
python
client = kfp.Client(host, client_id, other_client_id, other_client_secret)
If you run this notebook within a Kubeflow cluster, run the following command:
python
client = kfp.Client()
You'll need to create OAuth client ID credentials of type Other to get other_client_id and other_client_secret. Learn more about creating OAuth credentials
End of explanation
%%bash
# Create folders if they don't exist.
mkdir -p tmp/reuse_components_pipeline/mnist_training
# Create the Python file that lists GCS blobs.
cat > ./tmp/reuse_components_pipeline/mnist_training/app.py <<HERE
import argparse
from datetime import datetime
import tensorflow as tf
parser = argparse.ArgumentParser()
parser.add_argument(
'--model_path', type=str, required=True, help='Name of the model file.')
parser.add_argument(
'--bucket', type=str, required=True, help='GCS bucket name.')
args = parser.parse_args()
bucket=args.bucket
model_path=args.model_path
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(512, activation=tf.nn.relu),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
print(model.summary())
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
callbacks = [
tf.keras.callbacks.TensorBoard(log_dir=bucket + '/logs/' + datetime.now().date().__str__()),
# Interrupt training if val_loss stops improving for over 2 epochs
tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
]
model.fit(x_train, y_train, batch_size=32, epochs=5, callbacks=callbacks,
validation_data=(x_test, y_test))
from tensorflow import gfile
gcs_path = bucket + "/" + model_path
# The export require the folder is new
if gfile.Exists(gcs_path):
gfile.DeleteRecursively(gcs_path)
tf.keras.experimental.export_saved_model(model, gcs_path)
with open('/output.txt', 'w') as f:
f.write(gcs_path)
HERE
Explanation: Build reusable components
Writing the program code
The following cell creates a file app.py that contains a Python script. The script downloads the MNIST dataset, trains a neural-network-based classification model, writes the training log, and exports the trained model to Google Cloud Storage.
Your component can create outputs that the downstream components can use as inputs. Each output must be a string and the container image must write each output to a separate local text file. For example, if a training component needs to output the path of the trained model, the component writes the path into a local file, such as /output.txt.
End of explanation
%%bash
# Create Dockerfile.
# AI platform only support tensorflow 1.14
cat > ./tmp/reuse_components_pipeline/mnist_training/Dockerfile <<EOF
FROM tensorflow/tensorflow:1.14.0-py3
WORKDIR /app
COPY . /app
EOF
Explanation: Create a Docker container
Create your own container image that includes your program.
Creating a Dockerfile
Now create a container that runs the script. Start by creating a Dockerfile. A Dockerfile contains the instructions to assemble a Docker image. The FROM statement specifies the Base Image from which you are building. WORKDIR sets the working directory. When you assemble the Docker image, COPY copies the required files and directories (for example, app.py) to the file system of the container. RUN executes a command (for example, install the dependencies) and commits the results.
End of explanation
IMAGE_NAME="mnist_training_kf_pipeline"
TAG="latest" # "v_$(date +%Y%m%d_%H%M%S)"
GCR_IMAGE="gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{TAG}".format(
PROJECT_ID=PROJECT_ID,
IMAGE_NAME=IMAGE_NAME,
TAG=TAG
)
APP_FOLDER='./tmp/reuse_components_pipeline/mnist_training/'
# In the following, for the purpose of demonstration
# Cloud Build is chosen for 'AI Platform Pipelines'
# kaniko is chosen for 'full Kubeflow deployment'
if HOST.endswith('googleusercontent.com'):
# kaniko is not pre-installed with 'AI Platform Pipelines'
import subprocess
# ! gcloud builds submit --tag ${IMAGE_NAME} ${APP_FOLDER}
cmd = ['gcloud', 'builds', 'submit', '--tag', GCR_IMAGE, APP_FOLDER]
build_log = (subprocess.run(cmd, stdout=subprocess.PIPE).stdout[:-1].decode('utf-8'))
print(build_log)
else:
    if kfp.__version__ <= '0.1.36':
        # kfp with version 0.1.36+ introduces a breaking change that makes the following code not work
        import subprocess
        builder = kfp.containers._container_builder.ContainerBuilder(
            gcs_staging=GCS_BUCKET + "/kfp_container_build_staging"
        )
        kfp.containers.build_image_from_working_dir(
            image_name=GCR_IMAGE,
            working_dir=APP_FOLDER,
            builder=builder
        )
    else:
        raise RuntimeError("Please build the docker image using either [Docker] or [Cloud Build]")
Explanation: Build docker image
Now that we have created our Dockerfile, we need to build the image and push it to a registry that will host it. There are three possible options:
- Use the kfp.containers.build_image_from_working_dir to build the image and push to the Container Registry (GCR). This requires kaniko, which will be auto-installed with 'full Kubeflow deployment' but not 'AI Platform Pipelines'.
- Use Cloud Build, which would require the setup of GCP project and enablement of corresponding API. If you are working with GCP 'AI Platform Pipelines' with GCP project running, it is recommended to use Cloud Build.
- Use Docker installed locally and push to e.g. GCR.
Note:
If you run this notebook within Kubeflow cluster, with Kubeflow version >= 0.7 and exploring kaniko option, you need to ensure that valid credentials are created within your notebook's namespace.
- With Kubeflow version >= 0.7, the credential is supposed to be copied automatically while creating notebook through Configurations, which doesn't work properly at the time of creating this notebook.
- You can also add credentials to the new namespace by either copying credentials from an existing Kubeflow namespace, or by creating a new service account.
- The following cell demonstrates how to copy the default secret to your own namespace.
```bash
%%bash
NAMESPACE=<your notebook name space>
SOURCE=kubeflow
NAME=user-gcp-sa
SECRET=$(kubectl get secrets \${NAME} -n \${SOURCE} -o jsonpath="{.data.\${NAME}.json}" | base64 -D)
kubectl create -n \${NAMESPACE} secret generic \${NAME} --from-literal="\${NAME}.json=\${SECRET}"
```
End of explanation
image_name = GCR_IMAGE
Explanation: If you want to use docker to build the image
Run the following in a cell
```bash
%%bash -s "{PROJECT_ID}"
IMAGE_NAME="mnist_training_kf_pipeline"
TAG="latest" # "v_$(date +%Y%m%d_%H%M%S)"
Create script to build docker image and push it.
cat > ./tmp/components/mnist_training/build_image.sh <<HERE
PROJECT_ID="${1}"
IMAGE_NAME="${IMAGE_NAME}"
TAG="${TAG}"
GCR_IMAGE="gcr.io/\${PROJECT_ID}/\${IMAGE_NAME}:\${TAG}"
docker build -t \${IMAGE_NAME} .
docker tag \${IMAGE_NAME} \${GCR_IMAGE}
docker push \${GCR_IMAGE}
docker image rm \${IMAGE_NAME}
docker image rm \${GCR_IMAGE}
HERE
cd tmp/components/mnist_training
bash build_image.sh
```
End of explanation
%%bash -s "{image_name}"
GCR_IMAGE="${1}"
echo ${GCR_IMAGE}
# Create Yaml
# the image uri should be changed according to the above docker image push output
cat > mnist_pipeline_component.yaml <<HERE
name: Mnist training
description: Train a mnist model and save to GCS
inputs:
- name: model_path
description: 'Path of the tf model.'
type: String
- name: bucket
description: 'GCS bucket name.'
type: String
outputs:
- name: gcs_model_path
description: 'Trained model path.'
type: GCSPath
implementation:
container:
image: ${GCR_IMAGE}
command: [
python, /app/app.py,
--model_path, {inputValue: model_path},
--bucket, {inputValue: bucket},
]
fileOutputs:
gcs_model_path: /output.txt
HERE
import os
mnist_train_op = kfp.components.load_component_from_file(os.path.join('./', 'mnist_pipeline_component.yaml'))
mnist_train_op.component_spec
Explanation: Writing your component definition file
To create a component from your containerized program, you must write a component specification in YAML that describes the component for the Kubeflow Pipelines system.
For the complete definition of a Kubeflow Pipelines component, see the component specification. However, for this tutorial you don’t need to know the full schema of the component specification. The notebook provides enough information to complete the tutorial.
Start writing the component definition (component.yaml) by specifying your container image in the component’s implementation section:
End of explanation
mlengine_deploy_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/01a23ae8672d3b18e88adf3036071496aca3552d/components/gcp/ml_engine/deploy/component.yaml')
def deploy(
project_id,
model_uri,
model_id,
runtime_version,
python_version):
return mlengine_deploy_op(
model_uri=model_uri,
project_id=project_id,
model_id=model_id,
runtime_version=runtime_version,
python_version=python_version,
replace_existing_version=True,
set_default=True)
Explanation: Define deployment operation on AI Platform
End of explanation
def deployment_test(project_id: str, model_name: str, version: str) -> str:
  model_name = model_name.split("/")[-1]
  version = version.split("/")[-1]
  import googleapiclient.discovery
  def predict(project, model, data, version=None):
    """Run predictions on a list of instances.
    Args:
      project: (str), project where the Cloud ML Engine Model is deployed.
      model: (str), model name.
      data: ([[any]]), list of input instances, where each input instance is a
        list of attributes.
      version: str, version of the model to target.
    Returns:
      Mapping[str: any]: dictionary of prediction results defined by the model.
    """
    service = googleapiclient.discovery.build('ml', 'v1')
    name = 'projects/{}/models/{}'.format(project, model)
    if version is not None:
      name += '/versions/{}'.format(version)
    response = service.projects().predict(
        name=name, body={
            'instances': data
        }).execute()
    if 'error' in response:
      raise RuntimeError(response['error'])
    return response['predictions']
  import tensorflow as tf
  import json
  mnist = tf.keras.datasets.mnist
  (x_train, y_train),(x_test, y_test) = mnist.load_data()
  x_train, x_test = x_train / 255.0, x_test / 255.0
  result = predict(
      project=project_id,
      model=model_name,
      data=x_test[0:2].tolist(),
      version=version)
  print(result)
  return json.dumps(result)
# # Test the function with already deployed version
# deployment_test(
# project_id=PROJECT_ID,
# model_name="mnist",
# version='ver_bb1ebd2a06ab7f321ad3db6b3b3d83e6' # previous deployed version for testing
# )
deployment_test_op = comp.func_to_container_op(
func=deployment_test,
base_image="tensorflow/tensorflow:1.15.0-py3",
packages_to_install=["google-api-python-client==1.7.8"])
Explanation: Kubeflow serving deployment component as an option. Note that the deployed endpoint URI is not available as an output of this component.
```python
kubeflow_deploy_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/01a23ae8672d3b18e88adf3036071496aca3552d/components/gcp/ml_engine/deploy/component.yaml')
def deploy_kubeflow(
model_dir,
tf_server_name):
return kubeflow_deploy_op(
model_dir=model_dir,
server_name=tf_server_name,
cluster_name='kubeflow',
namespace='kubeflow',
pvc_name='',
service_type='ClusterIP')
```
Create a lightweight component for testing the deployment
End of explanation
# Define the pipeline
@dsl.pipeline(
name='Mnist pipeline',
description='A toy pipeline that performs mnist model training.'
)
def mnist_reuse_component_deploy_pipeline(
project_id: str = PROJECT_ID,
model_path: str = 'mnist_model',
bucket: str = GCS_BUCKET
):
train_task = mnist_train_op(
model_path=model_path,
bucket=bucket
).apply(gcp.use_gcp_secret('user-gcp-sa'))
deploy_task = deploy(
project_id=project_id,
model_uri=train_task.outputs['gcs_model_path'],
model_id="mnist",
runtime_version="1.14",
python_version="3.5"
).apply(gcp.use_gcp_secret('user-gcp-sa'))
deploy_test_task = deployment_test_op(
project_id=project_id,
model_name=deploy_task.outputs["model_name"],
version=deploy_task.outputs["version_name"],
).apply(gcp.use_gcp_secret('user-gcp-sa'))
return True
Explanation: Create your workflow as a Python function
Define your pipeline as a Python function. @kfp.dsl.pipeline is a required decorator, and must include name and description properties. Then compile the pipeline function. After the compilation is completed, a pipeline file is created.
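If you also want the compiled package as an artifact, a sketch is shown below; note that create_run_from_pipeline_func used later compiles and submits in one step, so this is optional:

```python
pipeline_filename = mnist_reuse_component_deploy_pipeline.__name__ + '.pipeline.zip'
compiler.Compiler().compile(mnist_reuse_component_deploy_pipeline, pipeline_filename)
```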
End of explanation
pipeline_func = mnist_reuse_component_deploy_pipeline
experiment_name = 'mnist_kubeflow'
arguments = {"model_path":"mnist_model",
"bucket":GCS_BUCKET}
run_name = pipeline_func.__name__ + ' run'
# Submit pipeline directly from pipeline function
run_result = client.create_run_from_pipeline_func(pipeline_func,
experiment_name=experiment_name,
run_name=run_name,
arguments=arguments)
Explanation: Submit a pipeline run
End of explanation |
5,822 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparing Russell Westbrook and Oscar Robertson's Triple Double Seasons
Author
Step1: After adjusting Westbrook and Robertson's per game stats to a per minute basis, Westbrook has the edge. He averages about 8 more points, 1 more rebound, and 1.5 more assists per 36 minutes than Robertson did during the 1962 season. Now I will look into their respective team seasons to see if there are any other adjustments that should be made when comparing the two seasons
Step2: PACE
There is a noticeable difference in the 'pace' stat between the 2017 Thunder and the 1962 Royals. The pace stat measures how many possessions a team uses per 48 minutes, so the higher the pace total, the more possessions per game that the team plays. The 1962 Cincinnati Royals played about 125 possessions per game while the 2017 Oklahoma City Thunder played about 98 possessions per game. The number of possessions in a game would seem to have an impact on the stat totals of players: the more possessions a team plays with, the higher the totals of stats such as points, rebounds, and assists should be. I am going to see how the pace of teams has changed over time and how well that correlates with the number of points, rebounds, and assists that have been accumulated over time, to see if Westbrook and Robertson's stats should be adjusted for the number of possessions played.
Step3: It seems pretty clear that the more possessions that a team plays with, the more stat totals they will accumulate. Pace seems to predict the number of shot attempts and rebounds very well just by looking at the scatterplots. Assists and points also increase as pace increases, but it seems to dip off towards the higher paces. Robertson played with a very high pace. I will perform a linear regression for assists and points.
Step4: The p-value shows that pace is a statistically significant predictor of points, and R-squared shows that about 74% of the variation in points comes from the variation in possessions, which is pretty significant. It makes sense to adjust Robertson and Westbrook's points for pace.
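A sketch of how that regression can be run, assuming the statsmodels package is available and using the nba_avgs DataFrame (loaded later in the notebook) with its 'Pace' and 'PTS' columns:

```python
# Points per game regressed on pace (sketch).
import statsmodels.api as sm
X = sm.add_constant(nba_avgs['Pace'])
points_model = sm.OLS(nba_avgs['PTS'], X).fit()
print(points_model.summary())  # check the Pace coefficient's p-value and the R-squared
```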
Step5: Possessions also significant in predicting the number of assists | Python Code:
# importing packages
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
# all data is obtained through basketball-reference.com
# http://www.basketball-reference.com/teams/OKC/2017.html
# http://www.basketball-reference.com/teams/CIN/1962.html
# http://www.basketball-reference.com/leagues/NBA_stats.html
# all 2017 okc thunder player per game stats
okc = pd.read_csv('/Users/rohanpatel/Downloads/Per_Game_OKC_2017.csv')
okc.head()
# all 1962 cincinatti royals player per game stats
cin = pd.read_csv('/Users/rohanpatel/Downloads/Per_Game_CincRoy_1962.csv')
cin.head()
# only russell westbrook's points, rebounds, assists, and minutes per game
RW = okc.loc[:0]
RW = RW[['PTS/G', 'TRB', 'AST', 'MP']]
RW = RW.rename(columns={'PTS/G': 'PTS'})
RW
# only oscar robertson's points, rebounds, assists, and minutes per game
OR = cin.loc[:0]
OR = OR[['PTS', 'TRB', 'AST', 'MP']]
OR
# robertson played a considerable amount of more minutes than westbrook
# adjusting per game stats by 36 minutes played
rw_min_factor = 36/RW['MP']
or_min_factor = 36/OR['MP']
RW[['PTS', 'TRB', 'AST']] = RW[['PTS', 'TRB', 'AST']].apply(lambda x: x*rw_min_factor)
RW_36 = RW[['PTS', 'TRB', 'AST']]
print(RW_36)
OR[['PTS', 'TRB', 'AST']] = OR[['PTS', 'TRB', 'AST']].apply(lambda x: x*or_min_factor)
OR_36 = OR[['PTS', 'TRB', 'AST']]
print(OR_36)
# difference between Westbrook and Robertson's per 36 minute stats
RW_36 - OR_36
Explanation: Comparing Russell Westbrook and Oscar Robertson's Triple Double Seasons
Author: Rohan Patel
NBA player Russell Westbrook, who plays for the Oklahoma City Thunder, just finished a historic NBA season, becoming the second player in NBA history to average a triple double for an entire season. A triple double entails having at least 3 of the stat totals of points, assists, rebounds, steals, and blocks in double figures; it is most commonly obtained through points, rebounds, and assists. During the 2017 NBA regular season, Westbrook averaged 31.6 points per game, 10.4 assists per game, and 10.7 rebounds per game.
Former NBA player Oscar Robertson, who played for the Cincinnati Royals, is the only other player to average a triple double for an entire regular season, which he did 55 years ago. During the 1962 NBA regular season, Robertson averaged 30.8 points per game, 11.4 assists per game, and 12.5 rebounds per game. Many thought no one would ever average a triple double for an entire season again.
My project compares the two seasons. Since 55 years separate them, much has changed about the NBA and how basketball is played. I want to account for those differences by examining their respective seasons in order to get a better sense of who had the more impressive season.
End of explanation
# 2017 NBA stats
df_2017 = pd.read_csv('/Users/rohanpatel/Downloads/2017_NBA_Stats.csv')
df_2017
# 2017 okc thunder stats
okc_2017 = df_2017.loc[9]
okc_2017
# 1962 NBA stats
df_1962 = pd.read_csv('/Users/rohanpatel/Downloads/1962_NBA_Stats.csv')
df_1962
# 1962 cincinatti royal stats
cin_1962 = df_1962.loc[4]
cin_1962
Explanation: After adjusting Westbrook and Robertson's per game stats to a per minute basis, Westbrook has the edge. He averages about 8 more points, 1 more rebound, and 1.5 more assists per 36 minutes than Robertson did during the 1962 season. Now I will look into their respective team seasons to see if there are any other adjustments that should be made when comparing the two seasons
End of explanation
# nba averages per game for every season
nba_avgs = pd.read_csv('/Users/rohanpatel/Downloads/NBA_Averages_Over_Time.csv')
nba_avgs = nba_avgs[['Pace', 'PTS', 'AST', 'TRB', 'FGA']]
# pace values after the 44th row are missing
nba_avgs = nba_avgs.iloc[:44]
print(nba_avgs)
# scatterplots of stats against number of possessions
fig, ax = plt.subplots(nrows = 4, ncols = 1, sharex = True, figsize=(10, 20))
ax[0].scatter(nba_avgs['Pace'], nba_avgs['PTS'], color = 'green')
ax[1].scatter(nba_avgs['Pace'], nba_avgs['TRB'], color = 'blue')
ax[2].scatter(nba_avgs['Pace'], nba_avgs['AST'], color = 'red')
ax[3].scatter(nba_avgs['Pace'], nba_avgs['FGA'], color = 'orange')
ax[0].set_ylabel('POINTS', fontsize = 18)
ax[1].set_ylabel('REBOUNDS', fontsize = 18)
ax[2].set_ylabel('ASSISTS', fontsize = 18)
ax[3].set_ylabel('SHOT ATTEMPTS', fontsize = 18)
ax[3].set_xlabel('NUMBER OF POSSESSIONS', fontsize = 18)
plt.suptitle('STAT TOTALS VS NUMBER OF POSSESSIONS (PER GAME)', fontsize = 22)
plt.show()
Explanation: PACE
There is a noticeable difference in the 'pace' stat between the 2017 Thunder and the 1962 Royals. The pace stat measures how many possessions a team uses per 48 minutes, so a higher pace means more possessions per game. The 1962 Cincinnati Royals played about 125 possessions per game, while the 2017 Oklahoma City Thunder played about 98. The number of possessions in a game should have an impact on players' stat totals: the more possessions a team plays with, the more points, rebounds, and assists should accumulate. I am going to look at how the pace of teams has changed over time and how well it correlates with the points, rebounds, and assists accumulated over time, to see if Westbrook and Robertson's stats should be adjusted for the number of possessions played.
End of explanation
import statsmodels.api as sm
from pandas.plotting import scatter_matrix  # pandas.tools.plotting was removed in newer pandas
y = np.matrix(nba_avgs['PTS']).transpose()
x1 = np.matrix(nba_avgs['Pace']).transpose()
X = sm.add_constant(x1)
model = sm.OLS(y,X)
f = model.fit()
print(f.summary())
Explanation: It seems pretty clear that the more possessions a team plays with, the higher its stat totals. Judging from the scatterplots, pace predicts the number of shot attempts and rebounds very well. Assists and points also increase as pace increases, but the relationship appears to flatten at the highest paces. Robertson played at a very high pace. I will perform a linear regression for assists and points.
End of explanation
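# As a quick supplementary check (added), the correlation of each league-average stat
# with pace backs up what the scatterplots suggest.
nba_avgs.corr()['Pace']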
y = np.matrix(nba_avgs['AST']).transpose()
x1 = np.matrix(nba_avgs['Pace']).transpose()
X = sm.add_constant(x1)
model = sm.OLS(y,X)
f = model.fit()
print(f.summary())
Explanation: The p-value shows that pace is a statistically significant predictor of points, and the R-squared shows that about 74% of the variation in points is explained by the variation in possessions, which is substantial. It makes sense to adjust Robertson's and Westbrook's points for pace.
End of explanation
# adjusting both player's per 36 minute points, rebounds, and assists per 100 team possessions
rw_pace_factor = 100/okc_2017['Pace']
or_pace_factor = 100/cin_1962['Pace']
RW_36_100 = RW_36.apply(lambda x: x*rw_pace_factor)
print(RW_36_100)
OR_36_100 = OR_36.apply(lambda x: x*or_pace_factor)
print(OR_36_100)
print(RW_36_100 - OR_36_100)
# westbrook's per 36 minute stats adjusted for 1962 Cincinatti Royals pace
RW_36_1962 = RW_36 * (cin_1962['Pace']/okc_2017['Pace'])
print(RW_36_1962)
# robertson's per 36 minute stats adjusted for 2017 OKC Thunder Pace
OR_36_2017 = OR_36 * (okc_2017['Pace']/cin_1962['Pace'])
print(OR_36_2017)
# difference between the two if westbrook played at 1962 robertson's pace per 36 minutes
print(RW_36_1962 - OR_36)
# difference between the two if robertson played at 2017 westbrook's pace per 36 minutes
print(RW_36 - OR_36_2017)
# huge advantages for westbrook after adjusting for possessions
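# A compact side-by-side view of the pace-adjusted per-36 numbers (a convenience
# summary added for readability; it only reuses the variables computed above).
comparison = pd.concat([RW_36_100, OR_36_100], keys=['Westbrook 2017', 'Robertson 1962'])
print(comparison)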
Explanation: Possessions are also a significant predictor of the number of assists.
End of explanation |
5,823 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 3 - Morphological image processing <a class="tocSkip">
Import dependencies
Step1: Erosion / dilation steps | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib.image as mpimg
import cv2
%%bash
ls -l | grep .tiff
img = mpimg.imread('Lab_3_DIP.tiff')
plt.figure(figsize=(15,10))
plt.imshow(img)
Explanation: Lab 3 - Morphological image processing <a class="tocSkip">
Import dependencies
End of explanation
plt.figure(figsize=(20,20))
kernel = np.ones((5,5),np.uint8)
erosion = cv2.erode(img,kernel,iterations = 1)
dilation = cv2.dilate(erosion,kernel,iterations = 1)
plt.subplot(1,3,1),
plt.imshow(img),
plt.title('img')
plt.subplot(1,3,2),
plt.imshow(erosion),
plt.title('erosion(img, 1)')
plt.subplot(1,3,3),
plt.imshow(dilation),
plt.title('dilate(erosion(img, 1), 1)')
plt.figure(figsize=(20,30))
kernel = np.ones((5,5),np.uint8)
erosion = cv2.erode(img,kernel,iterations = 1)
dilation = cv2.dilate(erosion,kernel,iterations = 1)
plt.subplot(4,3,1),
plt.imshow(img),
plt.title('img')
plt.subplot(4,3,2),
plt.imshow(erosion),
plt.title('erosion(img, 1)')
plt.subplot(4,3,3),
plt.imshow(dilation),
plt.title('dilate(erosion(img, 1), 1)')
erosion2 = cv2.erode(img,kernel,iterations = 2)
erosion3 = cv2.erode(img,kernel,iterations = 3)
erosion4 = cv2.erode(img,kernel,iterations = 4)
plt.subplot(4,3,4),
plt.imshow(erosion2),
plt.title('erosion(img, 2)')
plt.subplot(4,3,5),
plt.imshow(erosion3),
plt.title('erosion(img, 3)')
plt.subplot(4,3,6),
plt.imshow(erosion4),
plt.title('erosion(img, 4)')
dilation2 = cv2.dilate(img,kernel,iterations = 2)
dilation3 = cv2.dilate(img,kernel,iterations = 3)
dilation4 = cv2.dilate(img,kernel,iterations = 4)
plt.subplot(4,3,7),
plt.imshow(dilation2),
plt.title('dilate(img, 2)')
plt.subplot(4,3,8),
plt.imshow(dilation3),
plt.title('dilate(img, 3)')
plt.subplot(4,3,9),
plt.imshow(dilation4),
plt.title('dilate(img, 4)')
dil_1_ero_2 = cv2.dilate(
cv2.erode(img,kernel,iterations = 2)
,kernel,iterations = 1
)
dil_2_ero_1 = cv2.dilate(
cv2.erode(img,kernel,iterations = 1)
,kernel,iterations = 2
)
dil_2_ero_2 = cv2.dilate(
cv2.erode(img,kernel,iterations = 2)
,kernel,iterations = 2
)
plt.subplot(4,3,10),
plt.imshow(dil_1_ero_2),
plt.title('dilate(erosion(img, 2), 1)')
plt.subplot(4,3,11),
plt.imshow(dil_2_ero_1),
plt.title('dilate(erosion(img, 1), 2)')
plt.subplot(4,3,12),
plt.imshow(dil_2_ero_2),
plt.title('dilate(erosion(img, 2), 2)')
# plt.tight_layout()
plt.subplots_adjust(wspace=0, hspace=0.1)
plt.show()
plt.figure(figsize=(70,40))
plt.subplot(1,3,1),
plt.imshow(dilation),
plt.title('dilate(erosion(img, 1), 1)')
plt.subplot(1,3,2),
plt.imshow(dil_2_ero_1),
plt.title('dilate(erosion(img, 1), 2)')
plt.subplot(1,3,3),
plt.imshow(dil_2_ero_2),
plt.title('dilate(erosion(img, 2), 2)')
Explanation: Erosion / dilation steps
End of explanation |
5,824 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear regression using Batch Gradient Descent
Building linear regression from the ground up
Step1: Our Linear Regression Model
This Python class contains our linear regression model. We use OOP concepts to encapsulate the model's behaviour in a class, which provides methods to train, predict, and plot the cost curve.
Step2: Ingesting Sample Data
We are using the MPG dataset from UCI Datasets to test our implementation. http
Step3: Visualizing the Cost per Step
Plotting a cost curve shows us what is happening during gradient descent, in particular whether our model converges or not. It also helps identify the point after which further training is not worth the extra compute.
Step4: Predicting Values
Here we are using our input dataset to predict the Y with our final model. In actual scenarios there is proper process to do this. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
Explanation: Linear regression using Batch Gradient Descent
Building linear regression from the ground up
End of explanation
class linear_regression():
def __init__(self):
self.weights = None
self.learning_rate = None
self.epochs = None
self.trainx = None
self.trainy = None
self.costcurve = []
print('Status: Model Initialized')
def train(self, trainx, trainy, learning_rate, epochs):
# np.random.seed(10)
self.trainx = trainx
self.trainy = trainy
self.learning_rate = learning_rate
self.epochs = epochs
# self.weights = np.random.randn(self.trainx.shape[1]+1)
self.weights = np.random.uniform(low=0.0, high=self.trainx.shape[1]**0.5, size=self.trainx.shape[1]+1)  # initialise from the training data passed in, not the global inputx
self.trainx = np.append(self.trainx,np.ones((self.trainx.shape[0],1)), axis=1)
for epoch in range(epochs):
output = np.dot(self.trainx, self.weights)
output = np.reshape(output, (output.shape[0],1))
error = np.subtract(self.trainy, output)
total_error = np.sum(error)
cost = np.mean(error**2)
self.costcurve.append([epoch+1, cost])
gradients = (self.learning_rate / self.trainx.shape[0]) * np.dot(error.T, self.trainx)
gradients = np.reshape(gradients, (gradients.T.shape[0],))
self.weights += gradients
print('step:{0} \n cost:{1}'.format(epoch+1, cost))
# return self.weights
def plotCostCurve(self):
costcurvearray = np.array(self.costcurve)
plt.plot(costcurvearray[:,0],costcurvearray[:,1])
plt.xlabel('Epochs')
plt.ylabel('Cost')
plt.show()
def predict(self, validatex):
validatex_new = np.append(validatex,np.ones((validatex.shape[0],1)), axis=1)
predict = np.dot(validatex_new, self.weights)
return np.reshape(predict, (predict.shape[0],1))
Explanation: Our Linear Regression Model
This Python class contains our linear regression model. We use OOP concepts to encapsulate the model's behaviour in a class, which provides methods to train, predict, and plot the cost curve.
End of explanation
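# A quick sanity check on synthetic data (added; it assumes the weight initialisation
# above uses self.trainx rather than the global inputx): fitting y = 2*x + 1 plus a
# little noise should recover weights close to [2, 1].
np.random.seed(0)
toy_x = np.random.rand(100, 1)
toy_y = 2*toy_x + 1 + 0.01*np.random.randn(100, 1)
toy_model = linear_regression()
toy_model.train(toy_x, toy_y, 0.5, 100)
print(toy_model.weights)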
mpg_data = np.genfromtxt('mpg.txt', delimiter=',', dtype='float')
print(mpg_data.shape)
mpg_data = mpg_data[~np.isnan(mpg_data).any(axis=1)]
inputx = mpg_data[:,1:8]
for i in range(inputx.shape[1]):
inputx[:,i] = (inputx[:,i]-np.min(inputx[:,i]))/(np.max(inputx[:,i])-np.min(inputx[:,i]))
inputy = np.reshape(mpg_data[:,0],(mpg_data.shape[0],1))
model = linear_regression()
model.train(inputx, inputy, 0.01, 1000)
print(model.weights)
Explanation: Ingesting Sample Data
We are using the MPG dataset from UCI Datasets to test our implementation. http://archive.ics.uci.edu/ml/datasets/Auto+MPG
End of explanation
model.plotCostCurve()
Explanation: Visualizing the Cost per Step
Plotting a cost curve shows us what is happening during gradient descent, in particular whether our model converges or not. It also helps identify the point after which further training is not worth the extra compute.
End of explanation
model.predict(inputx)
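# A simple evaluation of the fit (added for illustration): mean squared error and R^2 of
# the in-sample predictions. In practice this would be computed on a held-out test set.
preds = model.predict(inputx)
mse = np.mean((inputy - preds)**2)
r2 = 1 - np.sum((inputy - preds)**2) / np.sum((inputy - np.mean(inputy))**2)
print('MSE: {0:.3f}, R^2: {1:.3f}'.format(mse, r2))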
Explanation: Predicting Values
Here we use our input dataset to predict Y with the final model. In practice, predictions would be evaluated on a held-out test set rather than on the training data.
End of explanation |
5,825 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a id="top"></a>
Composites
<hr>
Background
Composites are 2-dimensional representations of 3-dimensional data.
There are many cases in which this is desired. Sometimes composites are used in visualization - such as showing an RGB image of an area. Other times they are used for convenience, such as reducing the run time of an analysis by reducing the amount of data to be processed in a task by working with composites instead of full datasets. Other times they are required by an algorithm.
There are several kinds of composites that can be made. This notebook provides an overview of several of them and shows how to create them in the context of Open Data Cube.
<hr>
Index
Import Dependencies and Connect to the Data Cube
Load Data from the Data Cube
Most Common Composites
Mean composites
Median composites
Geometric median (geomedian) composites
Geometric medoid (geomedoid) composites
Other Composites
Most-recent composites
Least-recent composites
<span id="Composites_import">Import Dependencies and Connect to the Data Cube ▴</span>
Step1: <span id="Composites_retrieve_data">Load Data from the Data Cube ▴</span>
Step2: <span id="Composites_most_common">Most Common Composites ▴</span>
Mean composites
A mean composite is obtained by finding the mean (average) value of each band for each pixel. To create mean composites, we use the built-in mean() method of xarray objects.
Step3: Median composites
A median composite is obtained by finding the median value of each band for each pixel. Median composites are quick to obtain and are usually fairly representative of their data, so they are acceptable for visualization as images. To create median composites, we use the built-in median() method of xarray objects.
Step4: Geometric median (geomedian) composites
Geometric median (or "geomedian") composites are the best composites to use for most applications for which a representative, synthetic (calculated, not selected from the data) time slice is desired. They are essentially median composites, but instead of finding the median on a per-band basis, they find the median for all bands together. If a composite will be used for analysis - not just visualization - it should be a geomedian composite. The only downside of this composite type is that it takes much longer to obtain than other composite types. For more information, see the Geomedians_and_Geomedoids notebook.
Step5: Geometric medoid (geomedoid) composites
Geometric medoid (or "geomedoid") composites are the best composites to use for most applications for which a representative, non-synthetic (selected from the data, not calculated) time slice is desired. For more information, see the Geomedians_and_Geomedoids notebook.
Step6: <span id="Composites_other_composites">Other Composites ▴</span>
Most-recent composites
Most-recent composites use the most recent cloud-free pixels in an image. To create a most-recent composite, we use the create_mosaic utility function.
Step7: Least-recent composites
Least-recent composites are simply the opposite of most-recent composites. To create a least-recent composite, we use the create_mosaic utility function, specifying reverse_time=True. | Python Code:
import sys
import os
sys.path.append(os.environ.get('NOTEBOOK_ROOT'))
from utils.data_cube_utilities.clean_mask import landsat_clean_mask_full
# landsat_qa_clean_mask, landsat_clean_mask_invalid
from utils.data_cube_utilities.dc_mosaic import create_hdmedians_multiple_band_mosaic
from utils.data_cube_utilities.dc_mosaic import create_mosaic
from datacube.utils.aws import configure_s3_access
configure_s3_access(requester_pays=True)
import datacube
dc = datacube.Datacube()
Explanation: <a id="top"></a>
Composites
<hr>
Background
Composites are 2-dimensional representations of 3-dimensional data.
There are many cases in which this is desired. Sometimes composites are used in visualization - such as showing an RGB image of an area. Other times they are used for convenience, such as reducing the run time of an analysis by reducing the amount of data to be processed in a task by working with composites instead of full datasets. Other times they are required by an algorithm.
There are several kinds of composites that can be made. This notebook provides an overview of several of them and shows how to create them in the context of Open Data Cube.
<hr>
Index
Import Dependencies and Connect to the Data Cube
Load Data from the Data Cube
Most Common Composites
Mean composites
Median composites
Geometric median (geomedian) composites
Geometric medoid (geomedoid) composites
Other Composites
Most-recent composites
Least-recent composites
<span id="Composites_import">Import Dependencies and Connect to the Data Cube ▴</span>
End of explanation
product = 'ls8_usgs_sr_scene'
platform = 'LANDSAT_8'
collection = 'c1'
level = 'l2'
landsat_ds = dc.load(platform=platform, product=product,
time=("2017-01-01", "2017-12-31"),
lat=(-1.395447, -1.172343),
lon=(36.621306, 37.033980),
group_by='solar_day',
dask_chunks={'latitude':500, 'longitude':500,
'time':5})
# clean_mask = (landsat_qa_clean_mask(landsat_ds, platform) &
# (landsat_ds != -9999).to_array().all('variable') &
# landsat_clean_mask_invalid(landsat_ds))
clean_mask = landsat_clean_mask_full(dc, landsat_ds, product=product, platform=platform,
collection=collection, level=level)
landsat_ds = landsat_ds.where(clean_mask)
Explanation: <span id="Composites_retrieve_data">Load Data from the Data Cube ▴</span>
End of explanation
mean_composite = landsat_ds.mean('time', skipna=True)
Explanation: <span id="Composites_most_common">Most Common Composites ▴</span>
Mean composites
A mean composite is obtained by finding the mean (average) value of each band for each pixel. To create mean composites, we use the built-in mean() method of xarray objects.
End of explanation
median_composite = landsat_ds.median('time', skipna=True)
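# A quick check (added): compositing collapses the time dimension, so only the
# spatial dimensions remain in the result.
print(dict(landsat_ds.dims))
print(dict(median_composite.dims))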
Explanation: Median composites
A median composite is obtained by finding the median value of each band for each pixel. Median composites are quick to obtain and are usually fairly representative of their data, so they are acceptable for visualization as images. To create median composites, we use the built-in median() method of xarray objects.
End of explanation
geomedian_composite = create_hdmedians_multiple_band_mosaic(landsat_ds)
Explanation: Geometric median (geomedian) composites
Geometric median (or "geomedian") composites are the best composites to use for most applications for which a representative, synthetic (calculated, not selected from the data) time slice is desired. They are essentially median composites, but instead of finding the median on a per-band basis, they find the median for all bands together. If a composite will be used for analysis - not just visualization - it should be a geomedian composite. The only downside of this composite type is that it takes much longer to obtain than other composite types. For more information, see the Geomedians_and_Geomedoids notebook.
End of explanation
geomedoid_composite = create_hdmedians_multiple_band_mosaic(landsat_ds, operation='medoid')
Explanation: Geometric medoid (geomedoid) composites
Geometric medoid (or "geomedoid") composites are the best composites to use for most applications for which a representative, non-synthetic (selected from the data, not calculated) time slice is desired. For more information, see the Geomedians_and_Geomedoids notebook.
End of explanation
most_recent_composite = create_mosaic(landsat_ds)
Explanation: <span id="Composites_other_composites">Other Composites ▴</span>
Most-recent composites
Most-recent composites use the most recent cloud-free pixels in an image. To create a most-recent composite, we use the create_mosaic utility function.
End of explanation
least_recent_composite = create_mosaic(landsat_ds, reverse_time=True)
Explanation: Least-recent composites
Least-recent composites are simply the opposite of most-recent composites. To create a least-recent composite, we use the create_mosaic utility function, specifying reverse_time=True.
End of explanation |
5,826 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DirectLiNGAM
Import and settings
In this example, we need to import numpy, pandas, and graphviz in addition to lingam.
Step1: Test data
We create test data consisting of 6 variables.
Step2: Causal Discovery
To run causal discovery, we create a DirectLiNGAM object and call the fit method.
Step3: Using the causal_order_ properties, we can see the causal ordering as a result of the causal discovery.
Step4: Also, using the adjacency_matrix_ properties, we can see the adjacency matrix as a result of the causal discovery.
Step5: We can draw a causal graph using a utility function.
Step6: Independence between error variables
To check if the LiNGAM assumption is broken, we can get p-values of independence between error variables. The value in the i-th row and j-th column of the obtained matrix shows the p-value of the independence of the error variables $e_i$ and $e_j$. | Python Code:
import numpy as np
import pandas as pd
import graphviz
import lingam
from lingam.utils import make_dot
print([np.__version__, pd.__version__, graphviz.__version__, lingam.__version__])
np.set_printoptions(precision=3, suppress=True)
np.random.seed(100)
Explanation: DirectLiNGAM
Import and settings
In this example, we need to import numpy, pandas, and graphviz in addition to lingam.
End of explanation
x3 = np.random.uniform(size=1000)
x0 = 3.0*x3 + np.random.uniform(size=1000)
x2 = 6.0*x3 + np.random.uniform(size=1000)
x1 = 3.0*x0 + 2.0*x2 + np.random.uniform(size=1000)
x5 = 4.0*x0 + np.random.uniform(size=1000)
x4 = 8.0*x0 - 1.0*x2 + np.random.uniform(size=1000)
X = pd.DataFrame(np.array([x0, x1, x2, x3, x4, x5]).T ,columns=['x0', 'x1', 'x2', 'x3', 'x4', 'x5'])
X.head()
m = np.array([[0.0, 0.0, 0.0, 3.0, 0.0, 0.0],
[3.0, 0.0, 2.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 6.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[8.0, 0.0,-1.0, 0.0, 0.0, 0.0],
[4.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
dot = make_dot(m)
# Save pdf
dot.render('dag')
# Save png
dot.format = 'png'
dot.render('dag')
dot
Explanation: Test data
We create test data consisting of 6 variables.
End of explanation
model = lingam.DirectLiNGAM()
model.fit(X)
Explanation: Causal Discovery
To run causal discovery, we create a DirectLiNGAM object and call the fit method.
End of explanation
model.causal_order_
Explanation: Using the causal_order_ properties, we can see the causal ordering as a result of the causal discovery.
End of explanation
model.adjacency_matrix_
Explanation: Also, using the adjacency_matrix_ properties, we can see the adjacency matrix as a result of the causal discovery.
End of explanation
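# Because the data were simulated, we can also compare the estimate with the true
# coefficient matrix m defined above (a quick check added for illustration).
print(np.abs(model.adjacency_matrix_ - m).max())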
make_dot(model.adjacency_matrix_)
Explanation: We can draw a causal graph using a utility function.
End of explanation
p_values = model.get_error_independence_p_values(X)
print(p_values)
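# For example (added), we can flag off-diagonal error pairs whose independence is
# rejected at the 1% level; many such pairs would suggest the LiNGAM assumptions are violated.
mask = ~np.eye(p_values.shape[0], dtype=bool)
print(np.argwhere((p_values < 0.01) & mask))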
Explanation: Independence between error variables
To check if the LiNGAM assumption is broken, we can get p-values of independence between error variables. The value in the i-th row and j-th column of the obtained matrix shows the p-value of the independence of the error variables $e_i$ and $e_j$.
End of explanation |
5,827 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
KMeans
Step1: 2. Scikit
Scikit is a machine learning library for Python built upon numpy and matplotlib. It provides functions for classification, regression, clustering and other common analytics tasks.
Step2: In the following we evaluate the resulting fit (commonly referred to as the model), using the sum of squared errors and a pair plot. The following pair plot shows the scatter-plot between each of the four features. Clusters for the different species are indicated by different colors.
Step3: 3. Pilot Approach
We will now use RADICAL-Pilot to compute the distance function, as a simple representation of how the above example can be executed as a task-parallel application.
Step4: In the following, we will partition the data and distribute it to a set of CUs for fast processing
Step5: Helper Function for computing new centroids as mean of all points assigned to a cluster
Step6: Running Mapper Function as an External Process
Step7: Running Mapper Function inside RADICAL-Pilot
Helper function to read output from completed compute units after it has been executed inside the Pilot.
Step8: This is the main application loop. The distance computation is executed inside a ComputeUnit. See mapper.py for code. Data is read from files and written to stdout. We execute 10 iterations of KMeans.
Step9: Print out final centroids computed
Step10: Spark MLLib
In the following we utilize the Spark MLlib KMeans implementation. See http
Step11: Load and parse the data in a Spark DataFrame.
Step12: Convert DataFrame to Tuple for MLlib
Step13: Stop Pilot-Job | Python Code:
%matplotlib inline
import pandas as pd
import seaborn as sns
import numpy as np
data = pd.read_csv("https://raw.githubusercontent.com/pydata/pandas/master/pandas/tests/data/iris.csv")
data.head()
Explanation: KMeans: Scikit, Pilot and Spark/MLlib
This is perhaps the best known database to be found in the pattern recognition literature. The data set contains 3 classes of 50 instances each, where each class refers to a type of iris plant (see https://archive.ics.uci.edu/ml/datasets/Iris).
Source: R. A. Fisher, The Use of Multiple Measurements in Taxonomic Problems, 1936, http://rcs.chemometrics.ru/Tutorials/classification/Fisher.pdf
Pictures (Source Wikipedia)
<table>
<tr><td>
Setosa
</td><td>
Versicolor
</td><td>
Virginica
</td></tr>
<tr><td>
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/5/56/Kosaciec_szczecinkowaty_Iris_setosa.jpg/180px-Kosaciec_szczecinkowaty_Iris_setosa.jpg"/>
</td><td>
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/4/41/Iris_versicolor_3.jpg/320px-Iris_versicolor_3.jpg"/>
</td><td>
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/9/9f/Iris_virginica.jpg/295px-Iris_virginica.jpg"/>
</td></tr></table>
1. Data Overview
We will begin by loading the data into a Pandas dataframe.
End of explanation
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=3)
results = kmeans.fit_predict(data[['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth']])
data_kmeans=pd.concat([data, pd.Series(results, name="ClusterId")], axis=1)
data_kmeans.head()
Explanation: 2. Scikit
Scikit-learn is a machine learning library for Python built on top of NumPy and SciPy. It provides functions for classification, regression, clustering and other common analytics tasks.
End of explanation
print "Sum of squared error: %.1f"%kmeans.inertia_
sns.pairplot(data_kmeans, vars=["SepalLength", "SepalWidth", "PetalLength", "PetalWidth"], hue="ClusterId");
Explanation: In the following we evaluate the resulting fit (commonly referred to as the model), using the sum of squared errors and a pair plot. The following pair plot shows the scatter-plot between each of the four features. Clusters for the different species are indicated by different colors.
End of explanation
import os, sys
import ast  # used by print_details below
import commands
import radical.pilot as rp
os.environ["RADICAL_PILOT_DBURL"]="mongodb://ec2-54-221-194-147.compute-1.amazonaws.com:24242/giannis"
def print_details(detail_object):
if type(detail_object)==str:
detail_object = ast.literal_eval(detail_object)
for i in detail_object:
detail_object[i]=str(detail_object[i])
#print str(detail_object)
return pd.DataFrame(detail_object.values(),
index=detail_object.keys(),
columns=["Value"])
session = rp.Session()
c = rp.Context('ssh')
c.user_id = "radical"
session.add_context(c)
pmgr = rp.PilotManager(session=session)
umgr = rp.UnitManager (session=session,
scheduler=rp.SCHED_DIRECT_SUBMISSION)
print "Session id: %s Pilot Manager: %s" % (session.uid, str(pmgr.as_dict()))
pdesc = rp.ComputePilotDescription ()
pdesc.resource = "local.localhost_anaconda"
pdesc.runtime = 10
pdesc.cores = 16
pdesc.cleanup = False
pilot = pmgr.submit_pilots(pdesc)
umgr = rp.UnitManager (session=session,
scheduler=rp.SCHED_DIRECT_SUBMISSION)
umgr.add_pilots(pilot)
Explanation: 3. Pilot Approach
We will now use RADICAL-Pilot to compute the distance function, as a simple representation of how the above example can be executed as a task-parallel application.
End of explanation
number_clusters = 3
clusters = data.sample(number_clusters)
clusters
clusters.to_csv("clusters.csv")
data.to_csv("points.csv")
Explanation: In the following, we will partition the data and distribute it to a set of CUs for fast processing
End of explanation
def compute_new_centroids(distances):
df = pd.DataFrame(distances)
df[4] = df[4].astype(int)
df = df.groupby(4)[[0, 1, 2, 3]].mean()
centroids_np = df.values  # .as_matrix() was removed in newer pandas
return centroids_np
Explanation: Helper Function for computing new centroids as mean of all points assigned to a cluster
End of explanation
for i in range(10):
distances =!/opt/anaconda/bin/python mapper.py points.csv clusters.csv
distances_np = np.array(eval(" ".join(distances)))
new_centroids = compute_new_centroids(distances_np)
new_centroids_df = pd.DataFrame(new_centroids, columns=["SepalLength", "SepalWidth", "PetalLength", "PetalWidth"])
new_centroids_df.to_csv("clusters.csv")
Explanation: Running Mapper Function as an External Process
End of explanation
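# mapper.py itself is not shown in this notebook. A rough sketch of what such a script
# could look like (an assumption, not the actual file): it reads the points and current
# centroids, assigns each point to its nearest centroid, and prints rows of
# [feature1..feature4, cluster_id] as a Python literal so the caller can eval() it.
#
# import sys
# import pandas as pd
# import numpy as np
#
# cols = ['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth']
# points = pd.read_csv(sys.argv[1], index_col=0)[cols].values
# centroids = pd.read_csv(sys.argv[2], index_col=0)[cols].values
# rows = []
# for p in points:
#     cluster_id = int(np.argmin(((centroids - p) ** 2).sum(axis=1)))
#     rows.append(list(p) + [cluster_id])
# print(rows)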
import urlparse
def get_output(compute_unit):
working_directory=compute_unit.as_dict()['working_directory']
path = urlparse.urlparse(working_directory).path
output=open(os.path.join(path, "STDOUT")).read()
return output
Explanation: Running Mapper Function inside RADICAL-Pilot
Helper function to read output from completed compute units after it has been executed inside the Pilot.
End of explanation
for i in range(10):
cudesc = rp.ComputeUnitDescription()
cudesc.executable = "/opt/anaconda/bin/python"
cudesc.arguments = [os.path.join(os.getcwd(), "mapper.py"),
os.path.join(os.getcwd(), "points.csv"),
os.path.join(os.getcwd(), "clusters.csv")]
cu_set = umgr.submit_units([cudesc])
umgr.wait_units()
output = get_output(cu_set[0])
distances_np = np.array(eval(output))
new_centroids = compute_new_centroids(distances_np)
new_centroids_df = pd.DataFrame(new_centroids, columns=["SepalLength", "SepalWidth", "PetalLength", "PetalWidth"])
new_centroids_df.to_csv("clusters.csv")
print "Finished iteration: %d"%(i)
Explanation: This is the main application loop. The distance computation is executed inside a ComputeUnit. See mapper.py for code. Data is read from files and written to stdout. We execute 10 iterations of KMeans.
End of explanation
new_centroids_df
session.close()
Explanation: Print out final centroids computed
End of explanation
from numpy import array
from math import sqrt
%run ../env.py
%run ../util/init_spark.py
from pilot_hadoop import PilotComputeService as PilotSparkComputeService
try:
sc
except:
pilotcompute_description = {
"service_url": "yarn-client://sc15.radical-cybertools.org",
"number_of_processes": 5
}
pilot_spark = PilotSparkComputeService.create_pilot(pilotcompute_description=pilotcompute_description)
sc = pilot_spark.get_spark_context()
sqlCtx=SQLContext(sc)
Explanation: Spark MLLib
In the following we utilize the Spark MLlib KMeans implementation. See http://spark.apache.org/docs/latest/mllib-clustering.html#k-means
We use Pilot-Spark to start up Spark.
End of explanation
data_spark=sqlCtx.createDataFrame(data)
data_spark_without_class = data_spark.select('SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth')
data_spark_without_class.show()
Explanation: Load and parse the data in a Spark DataFrame.
End of explanation
data_spark_tuple = data_spark.map(lambda a: (a[0],a[1],a[2],a[3]))
from pyspark.mllib.clustering import KMeans, KMeansModel
clusters = KMeans.train(data_spark_tuple, 3, maxIterations=10,
runs=10, initializationMode="random")
# Evaluate clustering by computing Within Set Sum of Squared Errors
def error(point):
center = clusters.centers[clusters.predict(point)]
return sqrt(sum([x**2 for x in (point - center)]))
WSSSE = data_spark_tuple.map(lambda point: error(point)).reduce(lambda x, y: x + y)
print("Within Set Sum of Squared Error = " + str(WSSSE))
Explanation: Convert DataFrame to Tuple for MLlib
End of explanation
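# A quick look at the resulting cluster sizes (added for illustration); run this before
# the Pilot-Spark context is cancelled below.
print(data_spark_tuple.map(lambda p: clusters.predict(p)).countByValue())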
pilot_spark.cancel()
Explanation: Stop Pilot-Job
End of explanation |
5,828 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Executing a python file
Step1: Executing a python function
Step2: Executing a complete notebook
Step3: Executing it with large #CPUs and huge Memory
You Kubernetes cluster should have a node pool that can satisfy these resource requests. For example, to schedule a job with 90 cpus and 600GB memory you need a nodepool created using n1-hihmem-624 in GCP. | Python Code:
%%writefile train.py
print("hello world!")
job = TrainJob("train.py", backend=KubeflowGKEBackend())
job.submit()
Explanation: Executing a python file
End of explanation
def train():
print("simple train job!")
job = TrainJob(train, backend=KubeflowGKEBackend())
job.submit()
Explanation: Executing a python function
End of explanation
%%writefile requirements.txt
papermill
jupyter
job = TrainJob("train.ipynb", backend=KubeflowGKEBackend(), input_files=["requirements.txt"])
job.submit()
Explanation: Executing a complete notebook
End of explanation
import multiprocessing
import os
def train():
print("CPU count: {}".format(multiprocessing.cpu_count()))
print("Memory: {}", os.sysconf('SC_PAGE_SIZE') * os.sysconf('SC_PHYS_PAGES')/(1024.**3))
train()
job = TrainJob(train, base_docker_image=None, docker_registry=None, backend=KubeflowGKEBackend(),
pod_spec_mutators=[get_resource_mutator(cpu=90, memory=600)])
job.submit()
Explanation: Executing it with large #CPUs and huge Memory
Your Kubernetes cluster should have a node pool that can satisfy these resource requests. For example, to schedule a job with 90 CPUs and 600GB of memory you need a node pool created using n1-highmem-624 machines in GCP.
End of explanation |
5,829 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Aprendizaje out-of-core
Problemas de escalabilidad
Las clases sklearn.feature_extraction.text.CountVectorizer y sklearn.feature_extraction.text.TfidfVectorizer tienen una serie de problemas de escalabilidad que provienen de la forma en que se utiliza, a nivel interno, el atributo vocabulary_ (que es un diccionario Python) para convertir los nombres de las características (cadenas) a índices enteros de características.
Los principales problemas de escalabilidad son
Step1: El vocabulario se utiliza en la fase transform para construir la matriz de ocurrencias
Step2: Vamos a realizar un nuevo fit con un corpus algo más grande
Step3: El atributo vocabulary_ crece (en escala logarítmica) con respecto al tamaño del conjunto de entrenamiento. Observa que no podemos construir los vocabularios en paralelo para cada documento de texto ya que hay algunas palabras que son comunes y necesitaríamos alguna estructura compartida o barrera de sincronización (aumentando la complejidad de implementar el entrenamiento, sobre todo si queremos distribuirlo en un cluster).
Con este nuevo vocabulario, la dimensionalidad del espacio de salida es mayor
Step4: El dataset de películas IMDb
Para mostrar los problemas de escalabilidad con los vocabularios basados en vectorizadores, vamos a cargar un dataset realista que proviene de una tarea típica de clasificación de textos
Step5: Ahora, vamos a cargarlos en nuestra sesión activa usando la función load_files de scikit-learn
Step6: <div class="alert alert-warning">
<b>NOTA</b>
Step7: En particular, solo estamos interesados en los arrays data y target.
Step8: Como puedes comprobar, el array 'target' consiste en valores 0 y 1, donde el 0 es una revisión negativa y el 1 representa una positiva.
El truco del hashing
Recuerda la representación bag-of-words que se obtenía usando un vectorizador basado en vocabulario
Step9: La conversión no tiene estado y la dimensionalidad del espacio de salida se fija a priori (aquí usamos módulo 2 ** 20, que significa aproximadamente que tenemos un millón de dimensiones, $2^{20}$). Esto hace posible evitar las limitaciones del vectorizador de vocabulario, tanto a nivel de paralelización como de poder aplicar aprendizaje online.
La clase HashingVectorizer es una alternativa a CountVectorizer (o a TfidfVectorizer si consideramos use_idf=False) que aplica internamente la función de hash llamada murmurhash
Step10: Comparte la misma estructura de preprocesamiento, generación de tokens y análisis
Step11: Podemos vectorizar nuestros datasets en matriz dispersa de scipy de la misma forma que hubiéramos hecho con CountVectorizer o TfidfVectorizer, excepto que podemos llamar directamente al método transform. No hay necesidad de llamar a fit porque el HashingVectorizer no se entrena, las transformaciones están prefijadas.
Step12: La dimensión de salida se fija de antemano a n_features=2 ** 20 (valor por defecto) para minimizar la probabilidad de colisión en la mayoría de problemas de clasificación (1M de pesos en el atributo coef_)
Step13: Ahora vamos a comparar la eficiencia computacional de HashingVectorizer con respecto a CountVectorizer
Step14: Como puedes observar, HashingVectorizer es mucho más rápido que Countvectorizer.
Por último, vamos a entrenar un clasificador LogisticRegression en los datos de entrenamiento de IMDb
Step15: Aprendizaje Out-of-Core
El aprendizaje Out-of-Core consiste en entrenar un modelo de aprendizaje automático usando un dataset que no cabe en memoria RAM. Requiere las siguientes condiciones
Step16: Ahora vamos a crear el array de etiquetas
Step17: Ahora vamos a implementar la función batch_train function
Step18: Ahora vamos a utilizar la clase un SGDClassifier con un coste logístico en lugar de LogisticRegression. SGD proviene de stochastic gradient descent, un algoritmo de optimización que optimiza los pesos de forma iterativa ejemplo a ejemplo, lo que nos permite pasarle los datos en grupos.
Como empleamos el SGDClassifier con la configuración por defecto, entrenará el clasificador en 25*1000=25000 documentos (lo que puede llevar algo de tiempo).
Step19: Al terminar, evaluemos el rendimiento | Python Code:
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(min_df=1)
vectorizer.fit([
"The cat sat on the mat.",
])
vectorizer.vocabulary_
Explanation: Aprendizaje out-of-core
Problemas de escalabilidad
Las clases sklearn.feature_extraction.text.CountVectorizer y sklearn.feature_extraction.text.TfidfVectorizer tienen una serie de problemas de escalabilidad que provienen de la forma en que se utiliza, a nivel interno, el atributo vocabulary_ (que es un diccionario Python) para convertir los nombres de las características (cadenas) a índices enteros de características.
Los principales problemas de escalabilidad son:
Uso de memoria del vectorizador de texto: todas las representaciones textuales de características se cargan en memoria.
Problemas de paralelización para extracción de características: el atributo vocabulary_ es compartido, lo que conlleva que sea difícil la sincronización y por tanto que se produzca una sobrecarga.
Imposibilidad de realizar aprendizaje online, out-of-core o streaming: el atributo vocabulary_ tiene que obtenerse a partir de los datos y su tamaño no se puede conocer hasta que no realizamos una pasada completa por toda la base de datos de entrenamiento.
Para entender mejor estos problemas, analicemos como trabaja el atributo vocabulary_. En la fase de fit se identifican los tokens del corpus de forma unívoca, mediante un índice entero, y esta correspondencia se guarda en el vocabulario:
End of explanation
X = vectorizer.transform([
"The cat sat on the mat.",
"This cat is a nice cat.",
]).toarray()
print(len(vectorizer.vocabulary_))
print(vectorizer.get_feature_names())
print(X)
Explanation: El vocabulario se utiliza en la fase transform para construir la matriz de ocurrencias:
End of explanation
vectorizer = CountVectorizer(min_df=1)
vectorizer.fit([
"The cat sat on the mat.",
"The quick brown fox jumps over the lazy dog.",
])
vectorizer.vocabulary_
Explanation: Vamos a realizar un nuevo fit con un corpus algo más grande:
End of explanation
X = vectorizer.transform([
"The cat sat on the mat.",
"This cat is a nice cat.",
]).toarray()
print(len(vectorizer.vocabulary_))
print(vectorizer.get_feature_names())
print(X)
Explanation: El atributo vocabulary_ crece (en escala logarítmica) con respecto al tamaño del conjunto de entrenamiento. Observa que no podemos construir los vocabularios en paralelo para cada documento de texto ya que hay algunas palabras que son comunes y necesitaríamos alguna estructura compartida o barrera de sincronización (aumentando la complejidad de implementar el entrenamiento, sobre todo si queremos distribuirlo en un cluster).
Con este nuevo vocabulario, la dimensionalidad del espacio de salida es mayor:
End of explanation
import os
train_path = os.path.join('datasets', 'IMDb', 'aclImdb', 'train')
test_path = os.path.join('datasets', 'IMDb', 'aclImdb', 'test')
Explanation: El dataset de películas IMDb
Para mostrar los problemas de escalabilidad con los vocabularios basados en vectorizadores, vamos a cargar un dataset realista que proviene de una tarea típica de clasificación de textos: análisis de sentimientos en texto. El objetivo es discernir entre revisiones positivas y negativas a partir de la base de datos de Internet Movie Database (IMDb).
En las siguientes secciones, vamos a usar el siguiente dataset subset de revisiones de películas de IMDb, que has sido recolectado por Maas et al.
A. L. Maas, R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, and C. Potts. Learning Word Vectors for Sentiment Analysis. In the proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA, June 2011. Association for Computational Linguistics.
Este dataset contiene 50,000 revisiones de películas, divididas en 25,000 ejemplos de entrenamiento y 25,000 ejemplos de test. Las revisiones se etiquetan como negativas (neg) o positivas (pos). De hecho, las negativas recibieron $\le 4$ estrellas en IMDb; las positivas recibieron $\ge 7$ estrellas. Las revisiones neutrales no se incluyeron en el dataset.
Asumiendo que ya habéis ejecutado el script fetch_data.py, deberías tener disponibles los siguientes ficheros:
End of explanation
from sklearn.datasets import load_files
train = load_files(container_path=(train_path),
categories=['pos', 'neg'])
test = load_files(container_path=(test_path),
categories=['pos', 'neg'])
Explanation: Now, let us load them into our active session using scikit-learn's load_files function:
End of explanation
train.keys()
Explanation: <div class="alert alert-warning">
<b>NOTA</b>:
<ul>
<li>
Since the movie dataset contains 50,000 individual text files, running the code above can take quite a while.
</li>
</ul>
</div>
The load_files function has loaded the datasets into sklearn.datasets.base.Bunch objects, which are Python dictionaries:
End of explanation
import numpy as np
for label, data in zip(('ENTRENAMIENTO', 'TEST'), (train, test)):
print('\n\n%s' % label)
print('Número de documentos:', len(data['data']))
print('\n1er documento:\n', data['data'][0])
print('\n1era etiqueta:', data['target'][0])
print('\nNombre de las clases:', data['target_names'])
print('Conteo de las clases:',
np.unique(data['target']), ' -> ',
np.bincount(data['target']))
Explanation: In particular, we are only interested in the data and target arrays.
End of explanation
from sklearn.utils.murmurhash import murmurhash3_bytes_u32
# Encoded for Python 3 compatibility
for word in "the cat sat on the mat".encode("utf-8").split():
print("{0} => {1}".format(
word, murmurhash3_bytes_u32(word, 0) % 2 ** 20))
Explanation: As you can see, the 'target' array consists of 0s and 1s, where 0 is a negative review and 1 represents a positive one.
The hashing trick
Recall the bag-of-words representation obtained with a vocabulary-based vectorizer:
<img src="figures/bag_of_words.svg" width="100%">
To work around the limitations of vocabulary-based vectorizers, we can use the hashing trick. Instead of building and storing an explicit mapping from feature names to feature indices in a Python dictionary, we can apply a hash function and the modulo operator:
<img src="figures/hashing_vectorizer.svg" width="100%">
More information and references to the original papers can be found on the following website, and a simpler description on this other site.
End of explanation
from sklearn.feature_extraction.text import HashingVectorizer
h_vectorizer = HashingVectorizer(encoding='latin-1')
h_vectorizer
Explanation: The mapping is completely stateless and the dimensionality of the output space is fixed in advance (here we use modulo 2 ** 20, which means roughly one million dimensions, $2^{20}$). This makes it possible to avoid the limitations of the vocabulary vectorizer, both for parallelization and for online learning.
The HashingVectorizer class is an alternative to CountVectorizer (or to TfidfVectorizer if we set use_idf=False) that internally applies the hash function called murmurhash:
End of explanation
analyzer = h_vectorizer.build_analyzer()
analyzer('Esta es una frase de prueba.')
Explanation: It shares the same preprocessing, tokenization and analysis structure:
End of explanation
docs_train, y_train = train['data'], train['target']
docs_valid, y_valid = test['data'][:12500], test['target'][:12500]
docs_test, y_test = test['data'][12500:], test['target'][12500:]
Explanation: We can vectorize our datasets into a scipy sparse matrix just as we would have done with CountVectorizer or TfidfVectorizer, except that we can call the transform method directly. There is no need to call fit, because HashingVectorizer is not trained: its transformations are fixed in advance.
End of explanation
h_vectorizer.transform(docs_train)
Explanation: The output dimensionality is fixed in advance to n_features=2 ** 20 (the default value) to minimize the probability of collisions in most classification problems (1M weights in the coef_ attribute):
End of explanation
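# To make the collision risk concrete (an illustrative addition): with only 3 buckets,
# at least two of these five distinct words are forced to share an index.
for word in "the cat sat on the mat".encode("utf-8").split():
    print("{0} => {1}".format(word, murmurhash3_bytes_u32(word, 0) % 3))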
h_vec = HashingVectorizer(encoding='latin-1')
%timeit -n 1 -r 3 h_vec.fit(docs_train, y_train)
count_vec = CountVectorizer(encoding='latin-1')
%timeit -n 1 -r 3 count_vec.fit(docs_train, y_train)
Explanation: Let us now compare the computational efficiency of HashingVectorizer with respect to CountVectorizer:
End of explanation
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
h_pipeline = Pipeline([
('vec', HashingVectorizer(encoding='latin-1')),
('clf', LogisticRegression(random_state=1)),
])
h_pipeline.fit(docs_train, y_train)
print('Training accuracy', h_pipeline.score(docs_train, y_train))
print('Validation accuracy', h_pipeline.score(docs_valid, y_valid))
import gc
del count_vec
del h_pipeline
gc.collect()
Explanation: As you can see, HashingVectorizer is much faster than CountVectorizer.
Finally, let us train a LogisticRegression classifier on the IMDb training data:
End of explanation
train_path = os.path.join('datasets', 'IMDb', 'aclImdb', 'train')
train_pos = os.path.join(train_path, 'pos')
train_neg = os.path.join(train_path, 'neg')
fnames = [os.path.join(train_pos, f) for f in os.listdir(train_pos)] +\
[os.path.join(train_neg, f) for f in os.listdir(train_neg)]
fnames[:3]
Explanation: Out-of-Core learning
Out-of-core learning consists of training a machine learning model on a dataset that does not fit into RAM. It requires the following conditions:
A feature extraction layer with a fixed output dimensionality.
Knowing the list of classes in advance (in this case, we know there are positive and negative reviews).
A machine learning algorithm that supports incremental learning (the partial_fit method in scikit-learn).
In the following sections we will set up a simple function to iteratively train an SGDClassifier.
But first, let us load the file names into a Python list:
End of explanation
y_train = np.zeros((len(fnames), ), dtype=int)
y_train[:12500] = 1
np.bincount(y_train)
Explanation: Now let us create the label array:
End of explanation
from sklearn.base import clone
def batch_train(clf, fnames, labels, iterations=25, batchsize=1000, random_seed=1):
vec = HashingVectorizer(encoding='latin-1')
idx = np.arange(labels.shape[0])
c_clf = clone(clf)
rng = np.random.RandomState(seed=random_seed)
for i in range(iterations):
rnd_idx = rng.choice(idx, size=batchsize)
documents = []
for i in rnd_idx:
with open(fnames[i], 'r') as f:
documents.append(f.read())
X_batch = vec.transform(documents)
batch_labels = labels[rnd_idx]
c_clf.partial_fit(X=X_batch,
y=batch_labels,
classes=[0, 1])
return c_clf
Explanation: Now let us implement the batch_train function:
End of explanation
from sklearn.linear_model import SGDClassifier
sgd = SGDClassifier(loss='log', random_state=1)
sgd = batch_train(clf=sgd,
fnames=fnames,
labels=y_train)
Explanation: We will now use an SGDClassifier with a logistic loss instead of LogisticRegression. SGD stands for stochastic gradient descent, an optimization algorithm that updates the weights iteratively, one example at a time, which allows us to feed it the data in batches.
Since we use the SGDClassifier with its default settings, it will train the classifier on 25*1000=25000 documents (which can take some time).
End of explanation
vec = HashingVectorizer(encoding='latin-1')
sgd.score(vec.transform(docs_test), y_test)
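# As a final illustration (added), the incrementally trained model can score new text directly:
new_reviews = ["This movie was absolutely wonderful!", "Dull, predictable and far too long."]
print(sgd.predict(vec.transform(new_reviews)))
print(sgd.predict_proba(vec.transform(new_reviews)))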
Explanation: When it finishes, let us evaluate the performance
End of explanation |
5,830 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Portuguese Bank Marketing Stratergy- TPOT Tutorial
The data is related with direct marketing campaigns of a Portuguese banking institution. The marketing campaigns were based on phone calls. Often, more than one contact to the same client was required, in order to access if the product (bank term deposit) would be ('yes') or not ('no') subscribed.
https
Step1: Data Exploration
Step2: Data Munging
The first and most important step in using TPOT on any data set is to rename the target class/response variable to class.
Step3: At present, TPOT requires all the data to be in numerical format. As we can see below, our data set has 11 categorical variables
which contain non-numerical values
Step4: We then check the number of levels that each of the five categorical variables have.
Step5: As we can see, contact and poutcome have few levels. Let's find out what they are.
Step6: We then code these levels manually into numerical values. For nan i.e. the missing values, we simply replace them with a placeholder value (-999). In fact, we perform this replacement for the entire data set.
Step7: For other categorical variables, we encode the levels as digits using Scikit-learn's MultiLabelBinarizer and treat them as new features.
Step8: Drop the unused features from the dataset.
Step9: We then add the encoded features to form the final dataset to be used with TPOT.
Step10: Keeping in mind that the final dataset is in the form of a numpy array, we can check the number of features in the final dataset as follows.
Step11: Finally we store the class labels, which we need to predict, in a separate variable.
Step12: Data Analysis using TPOT
To begin our analysis, we need to divide our training data into training and validation sets. The validation set is just to give us an idea of the test set error. The model selection and tuning is entirely taken care of by TPOT, so if we want to, we can skip creating this validation set.
Step13: After that, we proceed to calling the fit(), score() and export() functions on our training dataset.
An important TPOT parameter to set is the number of generations (via the generations kwarg). Since our aim is to just illustrate the use of TPOT, we assume the default setting of 100 generations, whilst bounding the total running time via the max_time_mins kwarg (which may, essentially, override the former setting). Further, we enable control for the maximum amount of time allowed for optimization of a single pipeline, via max_eval_time_mins.
On a standard laptop with 4GB RAM, each generation takes approximately 5 minutes to run. Thus, for the default value of 100, without the explicit duration bound, the total run time could be roughly around 8 hours.
Step14: In the above, 4 generations were computed, each giving the training efficiency of fitting model on the training set. As evident, the best pipeline is the one that has the CV score of 91.373%. The model that produces this result is one that fits a decision tree algorithm on the data set. Next, the test error is computed for validation purposes. | Python Code:
# Import required libraries
from tpot import TPOTClassifier
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer scikit-learn
import pandas as pd
import numpy as np
#Load the data
Marketing=pd.read_csv('Data_FinalProject.csv')
Marketing.head(5)
Explanation: Portuguese Bank Marketing Strategy - TPOT Tutorial
The data is related to direct marketing campaigns of a Portuguese banking institution. The marketing campaigns were based on phone calls. Often, more than one contact with the same client was required in order to assess whether the product (bank term deposit) would be ('yes') or not ('no') subscribed.
https://archive.ics.uci.edu/ml/datasets/Bank+Marketing
End of explanation
Marketing.groupby('loan').y.value_counts()
Marketing.groupby(['loan','marital']).y.value_counts()
Explanation: Data Exploration
End of explanation
Marketing.rename(columns={'y': 'class'}, inplace=True)
Explanation: Data Munging
The first and most important step in using TPOT on any data set is to rename the target class/response variable to class.
End of explanation
Marketing.dtypes
Explanation: At present, TPOT requires all the data to be in numerical format. As we can see below, our data set has 11 categorical variables
which contain non-numerical values: job, marital, education, default, housing, loan, contact, month, day_of_week, poutcome, class.
End of explanation
for cat in ['job', 'marital', 'education', 'default', 'housing', 'loan', 'contact', 'month', 'day_of_week', 'poutcome' ,'class']:
print("Number of levels in category '{0}': \b {1:2.2f} ".format(cat, Marketing[cat].unique().size))
Explanation: We then check the number of levels that each of the categorical variables has.
End of explanation
for cat in ['contact', 'poutcome','class', 'marital', 'default', 'housing', 'loan']:
print("Levels for catgeory '{0}': {1}".format(cat, Marketing[cat].unique()))
Explanation: As we can see, contact and poutcome have few levels. Let's find out what they are.
End of explanation
Marketing['marital'] = Marketing['marital'].map({'married':0,'single':1,'divorced':2,'unknown':3})
Marketing['default'] = Marketing['default'].map({'no':0,'yes':1,'unknown':2})
Marketing['housing'] = Marketing['housing'].map({'no':0,'yes':1,'unknown':2})
Marketing['loan'] = Marketing['loan'].map({'no':0,'yes':1,'unknown':2})
Marketing['contact'] = Marketing['contact'].map({'telephone':0,'cellular':1})
Marketing['poutcome'] = Marketing['poutcome'].map({'nonexistent':0,'failure':1,'success':2})
Marketing['class'] = Marketing['class'].map({'no':0,'yes':1})
Marketing = Marketing.fillna(-999)
pd.isnull(Marketing).any()
Explanation: We then code these levels manually into numerical values. For nan i.e. the missing values, we simply replace them with a placeholder value (-999). In fact, we perform this replacement for the entire data set.
End of explanation
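As an aside (not part of the original notebook), pandas can build such integer codes automatically; a minimal sketch of an equivalent alternative to the hand-written maps above, applied to a fresh copy of the data so it does not interfere with the columns already recoded:
# Marketing_alt is a hypothetical second copy used only for illustration
Marketing_alt = pd.read_csv('Data_FinalProject.csv').rename(columns={'y': 'class'})
for cat in ['marital', 'default', 'housing', 'loan', 'contact', 'poutcome', 'class']:
    # factorize assigns one integer per distinct level and -1 to missing values
    Marketing_alt[cat], _ = pd.factorize(Marketing_alt[cat])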
from sklearn.preprocessing import MultiLabelBinarizer
mlb = MultiLabelBinarizer()
job_Trans = mlb.fit_transform([{str(val)} for val in Marketing['job'].values])
education_Trans = mlb.fit_transform([{str(val)} for val in Marketing['education'].values])
month_Trans = mlb.fit_transform([{str(val)} for val in Marketing['month'].values])
day_of_week_Trans = mlb.fit_transform([{str(val)} for val in Marketing['day_of_week'].values])
day_of_week_Trans
Explanation: For other categorical variables, we encode the levels as digits using Scikit-learn's MultiLabelBinarizer and treat them as new features.
End of explanation
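A tiny toy illustration (added here, not in the original notebook) of what MultiLabelBinarizer produces: each distinct level becomes one binary column.
toy_mlb = MultiLabelBinarizer()
# three rows, two distinct levels -> a 3 x 2 one-hot matrix
print(toy_mlb.fit_transform([{'admin.'}, {'technician'}, {'admin.'}]))
print(toy_mlb.classes_)  # column order of the binary matrix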
marketing_new = Marketing.drop(['marital','default','housing','loan','contact','poutcome','class','job','education','month','day_of_week'], axis=1)
assert (len(Marketing['day_of_week'].unique()) == len(mlb.classes_)), "Not Equal" #check correct encoding done
Marketing['day_of_week'].unique(),mlb.classes_
Explanation: Drop the unused features from the dataset.
End of explanation
marketing_new = np.hstack((marketing_new.values, job_Trans, education_Trans, month_Trans, day_of_week_Trans))
np.isnan(marketing_new).any()
Explanation: We then add the encoded features to form the final dataset to be used with TPOT.
End of explanation
marketing_new[0].size
Explanation: Keeping in mind that the final dataset is in the form of a numpy array, we can check the number of features in the final dataset as follows.
End of explanation
marketing_class = Marketing['class'].values
Explanation: Finally we store the class labels, which we need to predict, in a separate variable.
End of explanation
training_indices, validation_indices = train_test_split(Marketing.index, stratify=marketing_class, train_size=0.75, test_size=0.25)
training_indices.size, validation_indices.size
Explanation: Data Analysis using TPOT
To begin our analysis, we need to divide our training data into training and validation sets. The validation set is just to give us an idea of the test set error. The model selection and tuning is entirely taken care of by TPOT, so if we want to, we can skip creating this validation set.
End of explanation
tpot = TPOTClassifier(verbosity=2, max_time_mins=2, max_eval_time_mins=0.04, population_size=15)
tpot.fit(marketing_new[training_indices], marketing_class[training_indices])
Explanation: After that, we proceed to calling the fit(), score() and export() functions on our training dataset.
An important TPOT parameter to set is the number of generations (via the generations kwarg). Since our aim is to just illustrate the use of TPOT, we assume the default setting of 100 generations, whilst bounding the total running time via the max_time_mins kwarg (which may, essentially, override the former setting). Further, we enable control for the maximum amount of time allowed for optimization of a single pipeline, via max_eval_time_mins.
On a standard laptop with 4GB RAM, each generation takes approximately 5 minutes to run. Thus, for the default value of 100, without the explicit duration bound, the total run time could be roughly around 8 hours.
End of explanation
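For reference, a sketch of a longer-budget configuration with the number of generations fixed instead of a wall-clock limit; the values below are illustrative and are not from the original run, and the call is left commented out because it can take hours.
# tpot_long = TPOTClassifier(generations=100, population_size=100, verbosity=2, n_jobs=-1)
# tpot_long.fit(marketing_new[training_indices], marketing_class[training_indices])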
tpot.score(marketing_new[validation_indices], Marketing.loc[validation_indices, 'class'].values)
tpot.export('tpot_marketing_pipeline.py')
# %load tpot_marketing_pipeline.py
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
# NOTE: Make sure that the class is labeled 'target' in the data file
tpot_data = pd.read_csv('PATH/TO/DATA/FILE', sep='COLUMN_SEPARATOR', dtype=np.float64)
features = tpot_data.drop('target', axis=1).values
training_features, testing_features, training_target, testing_target = \
train_test_split(features, tpot_data['target'].values, random_state=42)
# Score on the training set was:0.913728927925
exported_pipeline = DecisionTreeClassifier(criterion="gini", max_depth=5, min_samples_leaf=16, min_samples_split=8)
exported_pipeline.fit(training_features, training_target)
results = exported_pipeline.predict(testing_features)
Explanation: In the above, 4 generations were computed, each reporting the cross-validation score of the best pipeline fitted on the training set. As evident, the best pipeline is the one with a CV score of 91.373%. The model that produces this result fits a decision tree algorithm to the data set. Next, the test error is computed for validation purposes.
End of explanation |
5,831 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2A.soft - Jupyter et commandes magiques
Pour être inventif, il faut être un peu paresseux. Cela explique parfois la syntaxe peu compréhensible mais réduite de certaines instructions. Cela explique sans doute aussi que Jupyter offre la possibilité de définir des commandes magiques qu'on peut interpréter comme des raccourcis. % pour une ligne, %% pour une cellule.
Step1: Commande magique
Ce sont des raccourcis. Si vous n'avez plus envie d'écrire le même code tous les jours alors peut-être que vous avez envie de créer une commande magique. C'est une fonctionnalité des notebooks Jupyter. L'exemple suivant crée une commande magique qui génère une séquence de nombre aléatoire dans le notebook. On l'appelle RND. Cela se passe en trois étapes | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: 2A.soft - Jupyter and magic commands
To be inventive you have to be a little lazy. That partly explains the terse, sometimes hard-to-read syntax of certain instructions. It no doubt also explains why Jupyter offers the possibility of defining magic commands, which can be thought of as shortcuts: % for a single line, %% for a whole cell.
End of explanation
import random
from IPython.core.magic import Magics, magics_class, line_magic, cell_magic, line_cell_magic
@magics_class
class CustomMagics(Magics):
@line_magic
def RND(self, line):
return [ random.random() for i in range(0,int(line))]
ip = get_ipython()
ip.register_magics(CustomMagics)
%RND 20
Explanation: Magic command
These are shortcuts. If you no longer feel like writing the same code every day, then maybe you want to create a magic command. It is a feature of Jupyter notebooks. The following example creates a magic command that generates a sequence of random numbers in the notebook. We call it RND. It happens in three steps:
implementing the command
declaring the command (for Jupyter)
using the command
That is what the three cells above do.
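As an aside (not in the original notebook), the same mechanism also covers cell magics (%%); a minimal sketch of a hypothetical %%RNDCELL magic, reusing the ip and random objects defined above:
from IPython.core.magic import Magics, magics_class, cell_magic
@magics_class
class CustomCellMagics(Magics):
    @cell_magic
    def RNDCELL(self, line, cell):
        # one list of random numbers per whitespace-separated count in the cell body
        return [[random.random() for _ in range(int(n))] for n in cell.split()]
ip.register_magics(CustomCellMagics)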
End of explanation |
5,832 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step4: Loading our Previous Functions...
Step5: Possible Solution
Step6: Okay, so now that we have the reveal and flag functions sorted, we are dangeriously close to actually playing a game. Lets try and play...
Step7: The above code should be familiar to you. The only addition I've made here is that I've added some character definitions. I thought it was good idea to define what the bomb_character is just once and have all the other functions reference that constant value.
When you start to glue the various peices of code together its perfectly normal to come up with new (and often better) ideas. So this means I'll need to go back and 'refactor' some of my older code to utilise this.
Anyway, how about we play a game?
Step8: FUCK!
Notice that the player_board reveals a 1 here. But there is clearly two bombs nearby. What this means is that I have a bug somewhere in my code.
It could be that my game_boards and player_boards are out of sync, or (more likely) I have some sort of indexing error in the "count_occurence_of_character_in_neighbour_squares" function. My best guess is that I've mixed up (x,y) coordinates somewhere.
But, in a way, this is a good thing. It's a good thing because it gives me an excuse to talk a bit more about testing and debugging code. In development there is a concept called "failing fast". At heart it's a simple idea; you should test your code quickly and often. And you should do that because it's much better to see code fail when it is small and simple than when its large and complex. If you have two thousand lines of code then that means there are two thousand possible places where the bug could be. If there are only twenty lines of code then there are only twenty possible places where the bug could be.
The other advantage of testing code as soon as possible is that the longer bugs go unnoticed the harder they are to fix.
Okay I've made a mental note of the error, but let's carry on for the moment. Maybe we shall find another bug...
Step9: Okay so after a bit more testing it looks like we have something that is working fairly well. So the only things left to do now if to solve that bug...
Step10: The above two code snippets belong to two different functions. Notice that there is some confusion here regarding px and py. In the first bit of code we have the values (px, py) but store them in (py, px) order. The the for-loop says for px, py do some stuff. This means the names get switched, py becomes px and vice versa. This is bad. Its bad because this sort of error (even if it is not the source of the bug) just leads to confusion, and confusion leads to bugs and wasted time.
So the first thing to do would be to at least make the naming consistent across functions... | Python Code:
import random
def set_square(x, y, new_val, board):
This function indexes into the given board at position (x, y).
We then change that value to new_val. Returns nothing.
board[x][y] = new_val
def get_square(x, y, board):
This function takes a board and returns the value at that square(x,y).
return board[x][y]
def display_board(board):
print(*board, sep="\n")
def neighbour_squares(x, y, num_rows, num_cols):
(x, y) 0-based index co-ordinate pair.
num_rows, num_cols: specifiy the max size of the board
returns all valid (x, y) coordinates from starting position.
offsets = [(-1,-1), (-1,0), (-1,1),
( 0,-1), ( 0,1),
( 1,-1), ( 1,0), ( 1,1)]
result = []
for x2, y2 in offsets:
px = x + x2
py = y + y2
row_check = 0 <= px < num_rows
col_check = 0 <= py < num_cols
if row_check and col_check:
point = (py, px)
result.append(point)
return result
def count_occurence_of_character_in_neighbour_squares(x, y, board, character):
returns the number of neighbours of (x,y) that are bombs. Max is 8, min is 0.
num_rows = len(board[0])
num_cols = len(board)
squares = neighbour_squares(x, y, num_rows, num_cols)
character_found = 0
for px, py in squares:
square_value = get_square(px, py, board)
if square_value == character:
character_found += 1
return character_found
def build_board(num_rows, num_cols, bomb_count=0, non_bomb_character="-"):
board_temp = ["B"] * bomb_count + [non_bomb_character] * (num_rows * num_cols - bomb_count)
if bomb_count:
random.shuffle(board_temp)
board = []
for i in range(0, num_rows*num_cols, num_cols):
board.append(board_temp[i:i+num_cols])
return board
Explanation: Loading our Previous Functions...
End of explanation
def flag_square(row, col, player_board):
p_square = get_square(row, col, player_board)
if p_square in "012345678":
# do nothing
return
## set flag
if p_square == EMPTY_SQUARE_CHARACTER:
set_square(row, col, FLAG_CHARACTER, player_board)
## Deflag if flag is already set
if p_square == FLAG_CHARACTER:
set_square(row, col, EMPTY_SQUARE_CHARACTER, player_board)
return
def reveal_square(row, col, player_board, game_board):
p_square = get_square(row, col, player_board)
if p_square in "012345678" or p_square == FLAG_CHARACTER:
## do nothing
return
g_square = get_square(row, col, game_board)
if g_square == BOMB_CHARACTER:
return game_over()
else:
bomb_count = count_occurence_of_character_in_neighbour_squares(row, col, game_board, BOMB_CHARACTER)
set_square(row, col, str(bomb_count), player_board)
def game_over():
print("GAME OVER")
## Later on, we can implement more logic here, such as asking if the player wants to play again.
Explanation: Possible Solution:
End of explanation
import random
NUMBER_OF_ROWS = 3
NUMBER_OF_COLS = 3
NUMBER_OF_BOMBS = 2 # You may remember that all_caps mean that these variables should NOT change values at runtime.
BOMB_CHARACTER = "B"
FLAG_CHARACTER = "F"
EMPTY_SQUARE_CHARACTER = "-"
random.seed(213) # for reproducible results
game_board = build_board(NUMBER_OF_ROWS, NUMBER_OF_COLS, bomb_count = NUMBER_OF_BOMBS)
player_board = build_board(NUMBER_OF_ROWS, NUMBER_OF_COLS, bomb_count=0)
Explanation: Okay, so now that we have the reveal and flag functions sorted, we are dangerously close to actually playing a game. Let's try and play...
End of explanation
print("INTERNAL GAME STATE:")
display_board(game_board)
print("")
print("PLAYER BOARD:")
display_board(player_board)
## Player board should display 2.
reveal_square(1, 0, player_board, game_board)
print("INTERNAL GAME STATE:")
display_board(game_board)
print("")
print("PLAYER BOARD:")
display_board(player_board)
Explanation: The above code should be familiar to you. The only addition I've made here is that I've added some character definitions. I thought it was a good idea to define what the bomb_character is just once and have all the other functions reference that constant value.
When you start to glue the various pieces of code together it's perfectly normal to come up with new (and often better) ideas. So this means I'll need to go back and 'refactor' some of my older code to utilise this.
Anyway, how about we play a game?
End of explanation
## Square already revealed, nothing happens...
reveal_square(1, 0, player_board, game_board)
print("INTERNAL GAME STATE:")
display_board(game_board)
print("")
print("PLAYER BOARD:")
display_board(player_board)
## Flag a square
flag_square(2, 0, player_board)
print("INTERNAL GAME STATE:")
display_board(game_board)
print("")
print("PLAYER BOARD:")
display_board(player_board)
## Deflag square
flag_square(2, 0, player_board)
print("INTERNAL GAME STATE:")
display_board(game_board)
print("")
print("PLAYER BOARD:")
display_board(player_board)
## Reveal a bomb, Should display game over
reveal_square(1, 1, player_board, game_board)
print("INTERNAL GAME STATE:")
display_board(game_board)
print("")
print("PLAYER BOARD:")
display_board(player_board)
Explanation: FUCK!
Notice that the player_board reveals a 1 here. But there are clearly two bombs nearby. What this means is that I have a bug somewhere in my code.
It could be that my game_boards and player_boards are out of sync, or (more likely) I have some sort of indexing error in the "count_occurence_of_character_in_neighbour_squares" function. My best guess is that I've mixed up (x,y) coordinates somewhere.
But, in a way, this is a good thing. It's a good thing because it gives me an excuse to talk a bit more about testing and debugging code. In development there is a concept called "failing fast". At heart it's a simple idea; you should test your code quickly and often. And you should do that because it's much better to see code fail when it is small and simple than when it's large and complex. If you have two thousand lines of code then that means there are two thousand possible places where the bug could be. If there are only twenty lines of code then there are only twenty possible places where the bug could be.
The other advantage of testing code as soon as possible is that the longer bugs go unnoticed the harder they are to fix.
Okay I've made a mental note of the error, but let's carry on for the moment. Maybe we shall find another bug...
End of explanation
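In the spirit of "failing fast", a tiny sanity check (added here, not from the original notebook) would make this kind of indexing bug visible immediately: on the fixed 3x3 board below there are exactly two bombs next to square (1, 0), so the assert fails as long as the (x, y) mix-up is present.
# hypothetical hand-built board used only for the check
test_board = [["-", "B", "-"],
              ["-", "B", "-"],
              ["-", "-", "-"]]
assert count_occurence_of_character_in_neighbour_squares(1, 0, test_board, "B") == 2, \
    "neighbour count is wrong -- probable (x, y) mix-up"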
if row_check and col_check:
point = (py, px)
result.append(point)
return result
for px, py in squares: # result
square_value = get_square(px, py, board)
Explanation: Okay so after a bit more testing it looks like we have something that is working fairly well. So the only things left to do now if to solve that bug...
End of explanation
if row_check and col_check:
point = (py, px)
result.append(point)
return result
for py, px in squares: # result
square_value = get_square(px, py, board)
Explanation: The above two code snippets belong to two different functions. Notice that there is some confusion here regarding px and py. In the first bit of code we have the values (px, py) but store them in (py, px) order. The for-loop then says for px, py do some stuff. This means the names get switched, py becomes px and vice versa. This is bad. It's bad because this sort of error (even if it is not the source of the bug) just leads to confusion, and confusion leads to bugs and wasted time.
So the first thing to do would be to at least make the naming consistent across functions...
End of explanation |
5,833 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
This notebook demonstrates how BioThings Explorer can be used to answer the following query
Step1: Step 1
Step2: Step 2
Step3: The df object contains the full output from BioThings Explorer. Each row shows one path that joins the input node (PRDX1) to an intermediate node (a gene or protein) to an ending node (a chemical compound). The data frame includes a set of columns with additional details on each node and edge (including human-readable labels, identifiers, and sources). Let's remove all examples where the output_name (the compound label) is None, and specifically focus on paths with specific mechanistic predicates decreasesActivityOf and targetedBy.
Filter for drugs that targets genes which decrease the activity of PRDX1
Step4: Step 3 | Python Code:
!pip install git+https://github.com/biothings/biothings_explorer#egg=biothings_explorer
Explanation: Introduction
This notebook demonstrates how BioThings Explorer can be used to answer the following query:
"Finding Marketed Drugs that Might Treat an Unknown Syndrome by Perturbing the Disease Mechanism Pathway"
This query corresponds to Tidbit 4 which was formulated as a demonstration of the NCATS Translator program.
Background of BTE: BioThings Explorer can answer two classes of queries -- "EXPLAIN" and "PREDICT". EXPLAIN queries are described in EXPLAIN_demo.ipynb, and PREDICT queries are described in PREDICT_demo.ipynb. Here, we describe PREDICT queries and how to use BioThings Explorer to execute them. A more detailed overview of the BioThings Explorer systems is provided in these slides.
To experiment with an executable version of this notebook, load it in Google Colaboratory.
Background of TIDBIT 04:
A five-year-old patient was brought to the emergency room with recurrent polymicrobial lung infections, only 29% small airway function and was unresponsive to antibiotics.
The patient’s medical records included a genetics report from age 1, which showed a 1p34.1 chromosomal duplication encompassing 1.9 Mb, including the PRDX1 gene, which encodes Peroxiredoxin 1. The gene has been linked to airway disease in both rats and humans, and is known to act as an agonist of toll-like receptor 4 (TLR4), a pro-inflammatory receptor. In addition, two patients at another clinic were found to have 1p34.1 duplications:
One patient with a duplication including PRDX1 died with similar phenotypes
One patient with a duplication that did NOT include PRDX1 showed no airway disease phenotype
While recurrent lung infections are typically treated with antibiotics, this patient was unresponsive to standard treatments. The patient’s earlier genetics report and data from other patients with similar duplications gave the physician evidence that PRDX1 may play a role in the disease, but no treatments directly related to the gene were known. With this information in mind, the physician asked a researcher familiar with Translator to try to find possible treatments for this patient.
How Might Translator Help?
The patient’s duplication of the 1p34.1 region of chromosome 1 gave Translator researchers a good place to start. Since PRDX1 is an agonist of TLR4, the duplication of the PRDX1 gene likely causes overexpression of PRDX1, which could lead to overactivity of both of the gene products. The researcher decided to try to find drugs that could be used to reduce the activity of those two proteins. An exhaustive search of chemical databases and PubMed to find safe drug options could take days to weeks.
For a known genetic mutation, can Translator be used to quickly find existing modulators to compensate for the dysfunctional gene product?
Step 0: Load BioThings Explorer modules
First, install the biothings_explorer and biothings_schema packages, as described in this README. This only needs to be done once (but including it here for compability with colab).
End of explanation
from biothings_explorer.hint import Hint
ht = Hint()
prdx1 = ht.query("PRDX1")['Gene'][0]
prdx1
Explanation: Step 1: Find representation of "PRDX1" in BTE
In this step, BioThings Explorer translates our query string "PRDX1" into BioThings objects, which contain mappings to many common identifiers. Generally, the top result returned by the Hint module will be the correct item, but you should confirm that using the identifiers shown.
Search terms can correspond to any child of BiologicalEntity from the Biolink Model, including DiseaseOrPhenotypicFeature (e.g., "lupus"), ChemicalSubstance (e.g., "acetaminophen"), Gene (e.g., "CDK2"), BiologicalProcess (e.g., "T cell differentiation"), and Pathway (e.g., "Citric acid cycle").
End of explanation
from biothings_explorer.user_query_dispatcher import FindConnection
fc = FindConnection(input_obj=prdx1, output_obj='ChemicalSubstance', intermediate_nodes=['Gene'])
fc.connect(verbose=True)
df = fc.display_table_view()
Explanation: Step 2: Find drugs that are associated with genes which are associated with PRDX1
In this section, we find all paths in the knowledge graph that connect PRDX1 to any entity that is a chemical compound. To do that, we will use FindConnection. This class is a convenient wrapper around two advanced functions for query path planning and query path execution. More advanced features for both query path planning and query path execution are in development and will be documented in the coming months.
The parameters for FindConnection are described below:
End of explanation
dfFilt = df.loc[df['output_name'].notnull()].query('pred1 == "decreasesActivityOf" and pred2 == "targetedBy"')
dfFilt
dfFilt.node1_id.unique()
dfFilt.node1_name.unique()
Explanation: The df object contains the full output from BioThings Explorer. Each row shows one path that joins the input node (PRDX1) to an intermediate node (a gene or protein) to an ending node (a chemical compound). The data frame includes a set of columns with additional details on each node and edge (including human-readable labels, identifiers, and sources). Let's remove all examples where the output_name (the compound label) is None, and specifically focus on paths with specific mechanistic predicates decreasesActivityOf and targetedBy.
Filter for drugs that targets genes which decrease the activity of PRDX1
End of explanation
import requests
# query pfocr to see if PRDX1 and VEGFA is in the same pathway figure
doc = requests.get('https://pending.biothings.io/pfocr/query?q=associatedWith.genes:5052 AND associatedWith.genes:7124').json()
doc
Explanation: Step 3: Evaluating Paths based on published pathway figures
Let's see if PRDX1 (entrez:5052) is in the same pathway as TNF (entrez:7124) using our newly created API PFOCR
End of explanation |
5,834 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Problemas de classificação representam uma ampla categoria de problemas de machine learning que envolvem a previsão de valores dentro de um conjunto finito e discreto de casos.
Neste exemplo, construiremos um classificador para prever a qual espécie uma flor pertence.
Leitura dos dados
Step1: Visualização dos dados
Step2: Classificação de espécies
Usaremos a classe LogisticRegression do scikit-learn para construir o classificador.
Step3: Inspeção dos resultados
Cálculos como o realizado acima geralmente não representam bem aquilo que queremos avaliar quando estamos resolvendo um problema de classificação. Ele apenas retorna o erro médio obtido entre as previsões e as classes reais do dataset de treinamento.
Pense, por exemplo, no que aconteceria se você estivesse treinando um modelo para classificar se uma pessoa possui ou não uma doença em um contexto onde sabe-se que, normalmente, 99% ads pessoas não têm essa doença. O que poderia dar errado se calculássemos a taxa de erros e acertos do modelo como uma forma de avaliá-lo? Dica
Step4: Outra técnica útil para inspecionar os resultados gerados por um modelo de classificação é a checagem da matriz de confusão. A matriz de confusão é uma matriz de dimensões K x K (onde K é o número de classes que o classificador pode identificar) que mostra, na posição (i,j), quantos exemplos pertencentes à classe i foram classificados como pertencentes à classe j.
Isso pode trazer insights a respeito de quais classes possuem a maior quantidade de classificações incorretas, por exemplo, e que portanto poderiam receber uma maior atenção por parte da pessoa cientista de dados. | Python Code:
import pandas as pd
iris = # carregue o arquivo 'datasets/iris.csv'
# Exiba informações sobre o dataset
# Exiba as classes presentes nesse dataset usando o método unique() na coluna "Class"
# Use o método describe() para exibir estatísticas sobre o dataset
Explanation: Classification problems represent a broad category of machine learning problems that involve predicting values within a finite, discrete set of classes.
In this example, we will build a classifier to predict which species a flower belongs to.
Reading the data
End of explanation
# Criação de um scatterplot dos valores as colunas "Sepal_length" e "Sepal_width"
import matplotlib.pyplot as plt
%matplotlib inline
sl = iris['Sepal_length']
sw = iris['Sepal_width']
# Crie um scatterplot dessas duas propriedades usando a função plt.scatter()
# Atribua cores diferentes a cada exemplo do dataset de acordo com a classe à qual ele pertence
# Atribua labels aos eixos X e Y
# Exiba o gráfico
# Criação de um scatterplot dos valores as colunas "Petal_length" e "Pepal_width"
pl = iris['Petal_length']
pw = iris['Petal_width']
# Crie um scatterplot dessas duas propriedades usando a função plt.scatter()
# Atribua cores diferentes a cada exemplo do dataset de acordo com a classe à qual ele pertence
# Atribua labels aos eixos X e Y
# Exiba o gráfico
Explanation: Visualizing the data
End of explanation
X = # Crie um DataFrame com todas as features através da remoção da coluna "Class"
t = # Pegue os valores da coluna "Class"
RANDOM_STATE = 4321
# Use o método train_test_plit() para dividir os dados em dois conjuntos
from sklearn.model_selection import train_test_split
Xtr, Xts, ytr, yts = train_test_split(X, t, random_state=RANDOM_STATE)
# Use o conjunto de treinamento para construir um modelo LogisticRegression
from sklearn.linear_model import LogisticRegression
lr = # Crie um objeto LogisticRegression aqui
# Treine o modelo usando os dados do conjunto de treinamento
# Use o método score() do objeto LogisticRegression para avaliar a acurácia do modelo
# Use o método score() do objeto LogisticRegression para avaliar a acurácia
# do modelo no conjunto de teste
Explanation: Classifying species
We will use scikit-learn's LogisticRegression class to build the classifier.
End of explanation
# scikit-learn implementa uma função chamada "classification_report" que calcula as três métricas acima
# para um dado classificador.
from sklearn.metrics import classification_report
# Use essa função para exibir as métricas de classificação no modelo treinado anteriormente
# http://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html
Explanation: Inspecting the results
Calculations like the one above usually do not represent well what we want to evaluate when solving a classification problem. It only returns the average error between the predictions and the true classes of the training dataset.
Think, for example, about what would happen if you were training a model to classify whether a person has a disease, in a context where it is known that normally 99% of people do not have that disease. What could go wrong if we computed the model's hit/miss rate as a way of evaluating it? Hint: what would that rate be for a "hardcoded" classifier that always returns 0 (that is, it always says the person does not have the disease)?
Simple accuracy metrics are usually not recommended for classification problems. There are at least three metrics commonly used, depending on the context:
Precision: this number answers the following question: among the examples the classifier said belong to a class, how many actually belong to it?
Recall: this number answers a slightly different question from Precision: among the examples that really belong to a class, how many did the classifier manage to identify?
F1-Score: this metric is a weighted combination of precision and recall - it does not have an intuitive interpretation, but the idea is that the f1-score represents a middle ground between precision and recall.
<img src='images/Precisionrecall.svg'></img>
Source: https://en.wikipedia.org/wiki/Precision_and_recall
Other evaluation methods for classification models include ROC curve analysis and, related to that technique, the concept of area under the ROC curve.
Which of these metrics would you prioritize for the disease-classifier example described in the previous paragraph? What are the costs of false positives and false negatives in that case?
End of explanation
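A tiny numeric illustration of those definitions (added here, not part of the original exercise), computed by hand for a binary toy case:
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 0, 1, 1, 0, 0]
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives = 2
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives = 1
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives = 1
precision = tp / (tp + fp)  # 2/3
recall = tp / (tp + fn)     # 2/3
print(precision, recall)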
from sklearn.metrics import confusion_matrix
# Use a função confusion_matrix para entender quais classes estão sendo classificadas incorretamente
# http://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html
Explanation: Another useful technique for inspecting the results produced by a classification model is checking the confusion matrix. The confusion matrix is a K x K matrix (where K is the number of classes the classifier can identify) that shows, at position (i,j), how many examples belonging to class i were classified as belonging to class j.
This can bring insights into which classes have the largest number of incorrect classifications, for example, and which could therefore receive more attention from the data scientist.
End of explanation |
5,835 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h2>This example shows how to create flight track plots using AWOT
Step1: Supply user input information
Step2: <li>Set up some characteristics for plotting.
<li>Use Cylindrical Equidistant Area map projection.
<li>Set the spacing of the barbs and X-axis time step for labels.
<li>Set the start and end times for subsetting.
<li>Add landmarks.
Step3: Read in the flight data
Step4: Create figure and set up track plot shaded by altitude
Step5: Now an example of plotting a single variable from the flight data file. We'll also subset it further than the plot above. | Python Code:
# Load the needed packages
import numpy as np
import matplotlib.pyplot as plt
from awot.io.flight import read_netcdf
from awot.graph.common import create_basemap
from awot.graph.flight_level import FlightLevel
%matplotlib inline
Explanation: <h2>This example shows how to create flight track plots using AWOT
End of explanation
# Set the date
yymmdd="111124"
# Set the project name
Project="DYNAMO"
# Set the path for data file
flname="/Users/guy/data/dynamo/" + yymmdd + "I/20111124I1_DJ_AC.nc"
Explanation: Supply user input information
End of explanation
# Set map projection to use
proj = 'cea'
Wbarb_Spacing = 300 # Spacing of wind barbs along flight path (sec)
# Choose the X-axis time step (in seconds) where major labels will be
XlabStride = 3600
# Should landmarks be plotted? [If yes, then modify the section below
Lmarks=True
# Optional variables that can be included with AWOT
# Start and end times for track in Datetime instance format
start_time = "2011-11-24 01:40:00"
end_time = "2011-11-24 10:50:00"
corners = [72.,-9.,82.,1.]
# If landmarks are chosen we can make these easy to display later using AWOT
if Lmarks:
# Create a list of Landmark data
LocMark = []
# Add locations as [ StringName, Longitude, Latitude ,XlabelOffset, YlabelOffset]
LocMark.append(['Diego Garcia', 72.4160, -7.3117, 0.1, -0.6])
LocMark.append(['R/V Revelle', 80.5010, 0.12167, -0.4, -0.6])
LocMark.append(['Gan', 73.1017, -0.6308, -0.9, 0.0])
LocMark.append(['R/V Marai', 80.50, -7.98, -0.1, -0.6])
# Build a few variables for accessing data and plotting the labels
if Lmarks:
# Build arrays for plotting
Labels = []
LabLons = []
LabLats = []
XOffset = []
YOffset = []
for L1, L2, L3, L4, L5 in LocMark:
Labels.append(L1)
LabLons.append(L2)
LabLats.append(L3)
XOffset.append(L4)
YOffset.append(L5)
Explanation: <li>Set up some characteristics for plotting.
<li>Use Cylindrical Equidistant Area map projection.
<li>Set the spacing of the barbs and X-axis time step for labels.
<li>Set the start and end times for subsetting.
<li>Add landmarks.
End of explanation
fl = read_netcdf(fname=flname, platform='p-3')
Explanation: Read in the flight data
End of explanation
# Creating axes outside seems to screw up basemap
fig, ax = plt.subplots(1, 1, figsize=(7, 7))
# Set the map for plotting
bm = create_basemap(corners=corners, proj=proj, resolution='l', area_thresh=1.,ax=ax)
flp = FlightLevel(fl, basemap=bm)
flp.plot_trackmap(
start_time=start_time, end_time=end_time,
color_by_altitude=True, track_cmap='spectral',
min_altitude=50., max_altitude= 8000.,
addlegend=True, addtitle=True)
#flp.draw_scale(location='lower_middle')
#flp.draw_barbs(barbspacing=Wbarb_Spacing)
# Write text names on the basemap instance
for lab, LonTx, LatTx, XOff, YOff in zip(Labels, LabLons, LabLats, XOffset, YOffset):
flp.plot_point(LonTx, LatTx, label_text=lab, label_offset=(XOff, YOff))
# Add time stamps to the figure
flp.time_stamps()
Explanation: Create figure and set up track plot shaded by altitude
End of explanation
start_time2 = '2011-11-24 03:51:00'
end_time2 = '2011-11-24 04:57:00'
# Domain subset
corners = [75.,-5.,81.,1.]
# Creating axes outside seems to screw up basemap
fig, ax2 = plt.subplots(1, 1, figsize=(7, 7))
# Set the map for plotting
bm2 = create_basemap(corners=corners, proj=proj, resolution='l', area_thresh=1.,ax=ax2)
# Begin a flight plotting instance
flp2 = FlightLevel(fl, basemap=bm2)
# Plot a track using only a variable
flp2.plot_trackmap_variable(
start_time=start_time2, end_time=end_time2,
field='potential_temp', cblabel='Equivalent Potential Temperature (K)',
track_cmap='jet', min_value=300., max_value= 360.,
addlegend=False, addtitle=True)
# If we want to add a scale
flp2.draw_scale(location='lower_middle')
# Now let's add wind barbs along the track
flp2.draw_barbs(barbspacing=Wbarb_Spacing, start_time=start_time2, end_time=end_time2)
Explanation: Now an example of plotting a single variable from the flight data file. We'll also subset it further than the plot above.
End of explanation |
5,836 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working with data 2017. Class 3
Contact
Javier Garcia-Bernardo
[email protected]
0. Structure
Error debugging
Data visualization theory
Scatter
Histograms, violinplots and two histograms (jointplot)
Line plots with distributions (factorplot)
Paralell coordinates
Dealing with missing data
In-class exercises to melt, pivot, concat and merge
Groupby and in-class exercises
Stats
What's a p-value?
One-tailed test vs two-tailed test
Count vs expected count (binomial test)
Independence between factors
Step1: 3. Dealing with missing data
Let's imagine we have a data of a survey, with age, income, education, political ideas, location, and if you will vote for Trump or Hillary.
We may have some missing values. This missing values can be
Step2: 3.1 Ignoring data
Step3: 3.2 Imputing with mean/median/mode
Step4: 3.3 Imputing using neighbours
We may go over this another day, it's not an easy topic.
But the basic idea is that you are probably similar to your neighbors (variables are correlated)
In this case is easier because we have the neighbours (same city for other years). But let's assume we don't.
Step5: 4. In-class exercises to melt, pivot, concat and merge
4.0 Paths
Absolute (too long, no cross-compativility)
Step6: Relative to the current directory
the path "class3a_groupby.ipynb" is the same than '/datastore0/classroom/classes/wwd2017/class3/class3a_groupby.ipynb'
the path "data/colombia.dta" is the same than '/datastore0/classroom/classes/wwd2017/class3/data/class3a_groupby.ipynb'
the path "../README.md" is the same than '/datastore0/classroom/classes/wwd2017/README.md'
the path "../class2/hw_2.ipynb" is the same than '/datastore0/classroom/classes/wwd2017/class2/hw_2.ipynb'
4.1 Read the data from the world bank (inside folder data, subfolder world_bank), and save it with name df
Step7: 4.2 Fix the format and save it with name df_fixed
Remember, this was the code that we use to fix the file of the
`
### Fix setp 1
Step8: 4.3 Create two dataframes with names df_NL and df_CO.
The first with the data for the Netherlands
The second with the data for Colombia
4.4 Concatenate/Merge (the appropriate one) the two dataframes
4.5 Create two dataframes with names df_pri and df_pu.
The first with the data for all rows and columns "country", "year" and indicator "SH.XPD.PRIV.ZS" (expenditure in health care as %GDP)
The second with the data for all rows and columns "country", "year" and indicator "SH.XPD.PUBL.ZS"
4.6 Concatenate/Merge (the appropriate one) the two dataframes
5. Groupby and in-class exercises
Splitting the data into groups based on some criteria
Applying a function to each group independently
Combining the results into a data structure
For instance, if we have data on temperatures by country and month and we want to calculate the mean temperature for each country
Step9: 5.2 Calculate the mean for every variable, as a function of each country
Step10: A note on keeping only non-missing values from an array
Step11: 5.3 Calculate our custom function for every variable, as a function of each country
Step12: 5.4 Iterate over all gropus
Step13: 5.4 In class assignment | Python Code:
##Some code to run at the beginning of the file, to be able to show images in the notebook
##Don't worry about this cell
#Print the plots in this screen
%matplotlib inline
#Be able to plot images saved in the hard drive
from IPython.display import Image
#Make the notebook wider
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:90% !important; }</style>"))
import seaborn as sns
import pylab as plt
import pandas as pd
import numpy as np
def read_our_csv():
#reading the raw data from oecd
df = pd.read_csv("../class2/data/CITIES_19122016195113034.csv",sep="\t")
#fixing the columns (the first one is ""METRO_ID"" instead of "METRO_ID")
cols = list(df.columns)
cols[0] = "METRO_ID"
df.columns = cols
#pivot the table
column_with_values = "Value"
column_to_split = ["VAR"]
variables_already_present = ["METRO_ID","Metropolitan areas","Year"]
df_fixed = df.pivot_table(column_with_values,
variables_already_present,
column_to_split).reset_index()
return df_fixed
Explanation: Working with data 2017. Class 3
Contact
Javier Garcia-Bernardo
[email protected]
0. Structure
Error debugging
Data visualization theory
Scatter
Histograms, violinplots and two histograms (jointplot)
Line plots with distributions (factorplot)
Paralell coordinates
Dealing with missing data
In-class exercises to melt, pivot, concat and merge
Groupby and in-class exercises
Stats
What's a p-value?
One-tailed test vs two-tailed test
Count vs expected count (binomial test)
Independence between factors: ($\chi^2$ test)
End of explanation
#Read and fix the data
df_fixed = read_our_csv()
#Remove rows with missing values
cols = ["LABOUR_PRODUCTIVITY","UNEMP_R","GDP_PC"]
df_fixed = df_fixed.dropna(subset=cols)
#Creating a column for country
df_fixed["C"] = df_fixed["METRO_ID"].apply(lambda x: x[:2])
#Keeping italy
df_fixed = df_fixed.loc[df_fixed["C"]=="IT",["C","METRO_ID","Metropolitan areas"] +cols]
#We are going to normalize values dividing by the mean (so new values have a mean of 1)
df_fixed.loc[:,cols] = df_fixed[cols]/np.nanmean(df_fixed[cols],0)
#Make a copy of the data
df_original = df_fixed.copy()
#Take a random sample of 20 values of productivity
sample = set(df_fixed.loc[:,"LABOUR_PRODUCTIVITY"].sample(20))
#Deleting those values (saying that they are np.NaN (missing))
df_fixed.loc[df_fixed["LABOUR_PRODUCTIVITY"].isin(sample),"LABOUR_PRODUCTIVITY"] = np.NaN
df_fixed.head(10)
Explanation: 3. Dealing with missing data
Let's imagine we have a data of a survey, with age, income, education, political ideas, location, and if you will vote for Trump or Hillary.
We may have some missing values. This missing values can be:
- MCAR (missing completely at random), which means that we have a representative sample.
- This for example could happen if during the survey collection there were some IT problems.
- It is a strong assumption but it is usually made.
- If the data is MCAR we can either ignore the rows with missing values and still have a representative sample.
- If we have some missing data in a survey: Imagine if young voters of Trump are less likely to answer --> Then your data is MAR.
- Usually in surveys you make sure you ask to a percentage of people of age and location that correspond with the real population. But you may be missing an important variable (for example US pollsters didn't ask to a representative sample in terms of education).
MAR (missing at random), which means that we don't have a representative sample, but we can use another column to impute missing values.
This is very common.
We can correct the data by using other people that did answer. For instance, two people living in the same area, with the same age, income, education and political ideas are likely to vote similar, so if you only know how one of them intends to vote you can say that the other one will vote the same (there are methods for this, don't do it by hand!)
MNAR (missing not at random), which means that we don't have a representative sample, and imputation is very very hard. This can happen for example if Trump voters are less likely to open the door, then they are not even in your sample.
We are in trouble and the methods to correct for this are way beyond the scope of the class.
What are the strategies to correct for missing values?
3.1 Ignore those values (only if your data is MCAR)
Impute those values (always better but more complicated)
3.2 Use the mean/median/mode as the value (only works well in MCAR)
3.3 Use similar values -> fancyimpute package (another time)
End of explanation
#How to fix by ignoring the rows
ignoring = df_fixed.dropna(subset=["LABOUR_PRODUCTIVITY"])
ignoring.head(10)
Explanation: 3.1 Ignoring data
End of explanation
#How to fix by imputing with mean/median/mode
mean_inputed = df_fixed.fillna(df_fixed.mean())
mean_inputed.head(10)
Explanation: 3.2 Imputing with mean/median/mode
End of explanation
#Based on this. Similar points for unemployment have similar points for productivity
sns.lmplot(x="LABOUR_PRODUCTIVITY",y="UNEMP_R",data=df_fixed,fit_reg=False)
Image("figures/labels.png")
print("Using a random sample => MCAR DATA")
Image("figures/kmeans_vs_mean.png")
print("Using a biasad sample => MAR DATA")
Image("figures/kmeans_vs_mean_worst_case.png")
Explanation: 3.3 Imputing using neighbours
We may go over this another day, it's not an easy topic.
But the basic idea is that you are probably similar to your neighbors (variables are correlated)
In this case it is easier because we have the neighbours (same city for other years). But let's assume we don't.
End of explanation
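A rough sketch of that neighbour idea (added here; this is not the fancyimpute approach, just a groupby-based stand-in): fill a city's missing productivity with the mean of the same city in the other years.
df_knnish = df_fixed.copy()
# per-city mean, broadcast back to every row of that city; NaNs are ignored by "mean"
city_mean = df_knnish.groupby("METRO_ID")["LABOUR_PRODUCTIVITY"].transform("mean")
df_knnish["LABOUR_PRODUCTIVITY"] = df_knnish["LABOUR_PRODUCTIVITY"].fillna(city_mean)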
#What's our current directory
import os
os.getcwd()
pd.read_stata("data/colombia.dta")
Explanation: 4. In-class exercises to melt, pivot, concat and merge
4.0 Paths
Absolute (too long, no cross-compativility)
End of explanation
#Read data and print the head to see how it looks like
df = pd.read_csv("data/world_bank/data.csv",na_values="..")
df.head()
df.columns = ["Country Name","Country Code","Series Name","Series Code",1967,1968,1969,...]
df.to_csv("data/new_columns.csv",sep="\t")
## 4.1b Fix the year of the column (make it numbers)
df = pd.read_csv("data/world_bank/data.csv",na_values="..")
old_columns = list(df.columns)
new_columns = []
for index,column_name in enumerate(old_columns):
if index < 4:
new_columns.append(column_name)
else:
year_column = int(column_name[:4])
new_columns.append(year_column)
df.columns = new_columns
df.head()
Explanation: Relative to the current directory
the path "class3a_groupby.ipynb" is the same than '/datastore0/classroom/classes/wwd2017/class3/class3a_groupby.ipynb'
the path "data/colombia.dta" is the same than '/datastore0/classroom/classes/wwd2017/class3/data/class3a_groupby.ipynb'
the path "../README.md" is the same than '/datastore0/classroom/classes/wwd2017/README.md'
the path "../class2/hw_2.ipynb" is the same than '/datastore0/classroom/classes/wwd2017/class2/hw_2.ipynb'
4.1 Read the data from the world bank (inside folder data, subfolder world_bank), and save it with name df
End of explanation
#code
Explanation: 4.2 Fix the format and save it with name df_fixed
Remember, this was the code that we use to fix the file of the
`
### Fix setp 1: Melt
variables_already_presents = ['METRO_ID', 'Metropolitan areas','VAR']
columns_combine = cols
df = pd.melt(df,
id_vars=variables_already_presents,
value_vars=columns_combine,
var_name="Year",
value_name="Value")
df.head()
### Fix step 2: Pivot
column_with_values = "Value"
column_to_split = ["VAR"]
variables_already_present = ["METRO_ID","Metropolitan areas","Year"]
df.pivot_table(column_with_values,
variables_already_present,
column_to_split).reset_index().head()
`
End of explanation
df = pd.read_csv("data/world_bank/data.csv",na_values="..")
df
df.groupby(["Country Code"]).describe()
Explanation: 4.3 Create two dataframes with names df_NL and df_CO.
The first with the data for the Netherlands
The second with the data for Colombia
4.4 Concatenate/Merge (the appropriate one) the two dataframes
4.5 Create two dataframes with names df_pri and df_pu.
The first with the data for all rows and columns "country", "year" and indicator "SH.XPD.PRIV.ZS" (expenditure in health care as %GDP)
The second with the data for all rows and columns "country", "year" and indicator "SH.XPD.PUBL.ZS"
4.6 Concatenate/Merge (the appropriate one) the two dataframes
5. Groupby and in-class exercises
Splitting the data into groups based on some criteria
Applying a function to each group independently
Combining the results into a data structure
For instance, if we have data on temperatures by country and month and we want to calculate the mean temperature for each country:
- We split the data by country
- We calculate the mean for each group
- We combine all of it.
Luckily, python can make this easier (in one line).
5.1 Describe the data by a variable
End of explanation
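A minimal illustration of the split-apply-combine idea on hypothetical toy data (not the class dataset):
toy = pd.DataFrame({"country": ["NL", "NL", "CO", "CO"],
                    "month": [1, 2, 1, 2],
                    "temp": [3.0, 4.0, 26.0, 27.0]})
print(toy.groupby("country")["temp"].mean())  # one mean temperature per country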
df.groupby(["Country Code"]).mean()
Explanation: 5.2 Calculate the mean for every variable, as a function of each country
End of explanation
import numpy as np
#This creates 11 equally spaced numbers between 0 and 10
x = np.linspace(0,10,11)
#The fourth element is NaN
x[3] = np.NaN
x
np.isfinite(x)
#keep only finite values (no missing, no infinite)
x = x[np.isfinite(x)]
x
Explanation: A note on keeping only non-missing values from an array
End of explanation
def my_function(x):
return np.median(x[np.isfinite(x)])
df.groupby(["Country Code"]).agg(my_function)
df.groupby(["Country Code"]).agg(lambda x: np.median(x[np.isfinite(x)]))
Explanation: 5.3 Calculate our custom function for every variable, as a function of each country
End of explanation
for country,data in df.groupby(["Country Code"]):
print(country)
display(data.head())
Explanation: 5.4 Iterate over all gropus
End of explanation
df.groupby(["Country Code"]).max()
Explanation: 5.4 In class assignment: Calculate the maximum for every variable, as a function of each country
End of explanation |
5,837 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regular Expression
| | | |
|----------|--------------------------------------------|---|
| ^ | the start of a line | '^From
Step1: Extracting email addresses from text
Step2: Extracting the domain name in email addresses
Step3: Extracting prices in text | Python Code:
import re
emaildata = open('enron-email-dataset.txt')
for line in emaildata:
line = line.rstrip()
if re.search('^From:', line):
print(line)
x = 'Team A beat team B 38-7. That was the greatest record for team A since 1987.'
y = re.findall('[0-9]+', x)
y
Explanation: Regular Expression
| | | |
|----------|--------------------------------------------|---|
| ^ | the start of a line | '^From:' |
| $ | end of a line | |
| . | wildcard for any character | |
| * | Repeating a character 0 or more times | '\s*' or '.*' |
| *? | | |
| + | Repeating a character 1 or more times | '[0-9]+' |
| +? | | |
| \s | white space | |
| \S | non-white space (any non-blank character) | |
| [list] | matching a single character in the list | |
| [^list] | matching any character not in the list | |
| [a-z0-9] | range of characters a to z, and digits 0-9 | |
| ( ) | String extraction | |
If two intersecting matches were found:
Greedy expressions will output the largest matches
Non-greedy: satisfying the expression with the shortest match
To search for a bigger match, but extract a subset of the match:
Example: '^From: (\S+@\S+)'
```
import re
re.search()
```
Enron email dataset: https://www.cs.cmu.edu/~./enron/
Python regular expression functions:
re.search() to see if there is any pattern match
re.findall() to extract all the matches in a list
End of explanation
x = 'My work email address is [email protected] and \
my personal email is [email protected].'
re.findall('\S+@\S+', x)
x = 'From: [email protected] My work email address is [email protected] and \
my personal email is [email protected].'
re.findall('^From: (\S+@\S+)', x)
Explanation: Extracting email addresses from text
End of explanation
x = 'My work email address is [email protected] and \
my personal email is [email protected].'
re.findall('\S+@(\S+)', x)
re.findall('@([^ ]+)', x.rstrip())
emaildata = open('enron-email-dataset.txt')
for line in emaildata:
line = line.rstrip()
res = re.findall('^X-To: (.*@\S+)', line)
if (len(res)>0):
print(res)
Explanation: Extracting the domain name in email addresses
End of explanation
x = "It's a big weekend sale! 70% Everything. \
You can get jeans for $9.99 or get 2 for only $14.99"
re.findall('\$([0-9.]+)', x)
Explanation: Extracting prices in text
End of explanation |
5,838 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http
Step1: Dataset
We will use the famous Iris Data set.
More info on the data set
Step2: Split the Data into Training and Test
Its time to split the data into a train/test set. Keep in mind, sometimes people like to split 3 ways, train/test/validation. We'll keep things simple for now. Remember to check out the video explanation as to why we split and what all the parameters mean!
Step3: Standardizing the Data
Usually when using Neural Networks, you will get better performance when you standardize the data. Standardization just means normalizing the values to all fit between a certain range, like 0-1, or -1 to 1.
The scikit learn library also provides a nice function for this.
http
Step4: Ok, now we have the data scaled!
Step5: Building the Network with Keras
Let's build a simple neural network!
Step6: Fit (Train) the Model
Step7: Predicting New Unseen Data
Let's see how we did by predicting on new data. Remember, our model has never seen the test data that we scaled previously! This process is the exact same process you would use on totally brand new data. For example , a brand new bank note that you just analyzed .
Step8: Evaluating Model Performance
So how well did we do? How do we actually measure "well". Is 95% accuracy good enough? It all depends on the situation. Also we need to take into account things like recall and precision. Make sure to watch the video discussion on classification evaluation before running this code!
Step9: Saving and Loading Models
Now that we have a model trained, let's see how we can save and load it. | Python Code:
import numpy as np
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Keras Basics
Welcome to the section on deep learning! We'll be using Keras with a TensorFlow backend to perform our deep learning operations.
This means we should get familiar with some Keras fundamentals and basics!
Imports
End of explanation
from sklearn.datasets import load_iris
iris = load_iris()
type(iris)
print(iris.DESCR)
X = iris.data
X
y = iris.target
y
from keras.utils import to_categorical
y = to_categorical(y)
y.shape
y
Explanation: Dataset
We will use the famous Iris Data set.
More info on the data set:
https://en.wikipedia.org/wiki/Iris_flower_data_set
Reading in the Data Set
We've already downloaded the dataset, its in this folder. So let's open it up.
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
X_train
X_test
y_train
y_test
Explanation: Split the Data into Training and Test
It's time to split the data into a train/test set. Keep in mind, sometimes people like to split 3 ways, train/test/validation. We'll keep things simple for now. Remember to check out the video explanation as to why we split and what all the parameters mean!
End of explanation
from sklearn.preprocessing import MinMaxScaler
scaler_object = MinMaxScaler()
scaler_object.fit(X_train)
scaled_X_train = scaler_object.transform(X_train)
scaled_X_test = scaler_object.transform(X_test)
Explanation: Standardizing the Data
Usually when using Neural Networks, you will get better performance when you standardize the data. Standardization just means normalizing the values to all fit between a certain range, like 0-1, or -1 to 1.
The scikit learn library also provides a nice function for this.
http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html
End of explanation
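For reference (added here, not part of the original course notebook): MinMaxScaler maps each feature to (x - min) / (max - min) using the minima and maxima of the data it was fit on, so a quick manual check on one column should match the scaler output.
col0_manual = (X_train[:, 0] - X_train[:, 0].min()) / (X_train[:, 0].max() - X_train[:, 0].min())
print(np.allclose(col0_manual, scaled_X_train[:, 0]))  # expected: True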
X_train.max()
scaled_X_train.max()
X_train
scaled_X_train
Explanation: Ok, now we have the data scaled!
End of explanation
from keras.models import Sequential
from keras.layers import Dense
model = Sequential()
model.add(Dense(8, input_dim=4, activation='relu'))
model.add(Dense(8, input_dim=4, activation='relu'))
model.add(Dense(3, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
Explanation: Building the Network with Keras
Let's build a simple neural network!
End of explanation
# Play around with number of epochs as well!
model.fit(scaled_X_train,y_train,epochs=150, verbose=2)
Explanation: Fit (Train) the Model
End of explanation
scaled_X_test
# Spits out probabilities by default.
# model.predict(scaled_X_test)
model.predict_classes(scaled_X_test)
Explanation: Predicting New Unseen Data
Let's see how we did by predicting on new data. Remember, our model has never seen the test data that we scaled previously! This process is the exact same process you would use on totally brand new data. For example, a brand new bank note that you just analyzed.
End of explanation
model.metrics_names
model.evaluate(x=scaled_X_test,y=y_test)
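# evaluate returns [loss, accuracy], in the order listed by model.metrics_names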
from sklearn.metrics import confusion_matrix,classification_report
predictions = model.predict_classes(scaled_X_test)
predictions
y_test.argmax(axis=1)
confusion_matrix(y_test.argmax(axis=1),predictions)
print(classification_report(y_test.argmax(axis=1),predictions))
Explanation: Evaluating Model Performance
So how well did we do? How do we actually measure "well". Is 95% accuracy good enough? It all depends on the situation. Also we need to take into account things like recall and precision. Make sure to watch the video discussion on classification evaluation before running this code!
End of explanation
model.save('myfirstmodel.h5')
from keras.models import load_model
newmodel = load_model('myfirstmodel.h5')
# predict with the reloaded model, using the scaled test data it was trained on
newmodel.predict_classes(scaled_X_test)
Explanation: Saving and Loading Models
Now that we have a model trained, let's see how we can save and load it.
End of explanation |
5,839 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
How can I get the position (indices) of the second largest value in a multi-dimensional NumPy array `a`? | Problem:
import numpy as np
a = np.array([[10,50,30],[60,20,40]])
# index of the largest value
idx = np.unravel_index(a.argmax(), a.shape)
# overwrite the maximum with the minimum so the next argmax finds the runner-up
# (note: this modifies `a` in place)
a[idx] = a.min()
result = np.unravel_index(a.argmax(), a.shape) |
5,840 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
View Northeast Pacific SST based on an Ensemble Empirical Mode Decomposition
The oscillation of sea surface temperature (SST) has substantial impacts on the global climate. For example, anomalously high SST near the equator (between 5°S and 5°N and the Peruvian coast) causes the El Niño phenomenon, while low SST in this area brings about the La Niña phenomenon, both of which impose considerable influence on temperature, precipitation and wind globally.
In this notebook, an adaptive and temporal local analysis method, the recently developed ensemble empirical mode decomposition (EEMD) method (Huang and Wu 2008; Wu and Huang 2009) is applied to study the oscillation of SST over Northeast Pacific(40°–50°N, 150°–135°W). The EEMD is the most recent improvement of the EMD method (Huang et al. 1998; Huang and Wu 2008). The package of PyEMD is used, which is a Python implementation of Empirical Mode Decomposition (EMD) and its variations. One of the most popular expansion is Ensemble Empirical Mode Decomposition (EEMD), which utilises an ensemble of noise-assisted executions. As a result of EMD one will obtain a set of components that possess oscillatory features. In case of plain EMD algorithm, these are called Intrinsic Mode Functions (IMFs) as they are expected to have a single mode. In contrary, EEMD will unlikely produce pure oscillations as the effects of injected noise can propagate throughout the decomposition.
The SST data is extracted from the latest version of the Extended Reconstructed Sea Surface Temperature (ERSST) dataset, version 5. It is a global monthly sea surface temperature dataset derived from the International Comprehensive Ocean–Atmosphere Dataset (ICOADS). Production of the ERSST is on a 2° × 2° grid. For more information see https
Step1: 2. Load SST data
2.1 Load time series SST
Select the region (40°–50°N, 150°–135°W) and the period(1981-2016)
Step2: 2.2 Calculate climatology between 1981-2010
Step3: 2.3 Calculate SSTA
Step4: 3. Carry out EMD analysis
Step5: 4. Visualize
4.1 Plot IMFs
Step6: 4.2 Error of reconstruction | Python Code:
%matplotlib inline
import xarray as xr
from PyEMD import EEMD
import numpy as np
import pylab as plt
plt.rcParams['figure.figsize'] = (9,5)
Explanation: View Northeast Pacific SST based on an Ensemble Empirical Mode Decomposition
The oscillation of sea surface temperature (SST) has substantial impacts on the global climate. For example, anomalously high SST near the equator (between 5°S and 5°N and the Peruvian coast) causes the El Niño phenomenon, while low SST in this area brings about the La Niña phenomenon, both of which impose considerable influence on temperature, precipitation and wind globally.
In this notebook, an adaptive and temporal local analysis method, the recently developed ensemble empirical mode decomposition (EEMD) method (Huang and Wu 2008; Wu and Huang 2009) is applied to study the oscillation of SST over Northeast Pacific(40°–50°N, 150°–135°W). The EEMD is the most recent improvement of the EMD method (Huang et al. 1998; Huang and Wu 2008). The package of PyEMD is used, which is a Python implementation of Empirical Mode Decomposition (EMD) and its variations. One of the most popular expansion is Ensemble Empirical Mode Decomposition (EEMD), which utilises an ensemble of noise-assisted executions. As a result of EMD one will obtain a set of components that possess oscillatory features. In case of plain EMD algorithm, these are called Intrinsic Mode Functions (IMFs) as they are expected to have a single mode. In contrary, EEMD will unlikely produce pure oscillations as the effects of injected noise can propagate throughout the decomposition.
The SST data is extracted from the latest version of the Extended Reconstructed Sea Surface Temperature (ERSST) dataset, version 5. It is a global monthly sea surface temperature dataset derived from the International Comprehensive Ocean–Atmosphere Dataset (ICOADS). Production of the ERSST is on a 2° × 2° grid. For more information see https://www.ncdc.noaa.gov/data-access/marineocean-data/extended-reconstructed-sea-surface-temperature-ersst-v5.
1. Load all needed libraries
End of explanation
ds = xr.open_dataset('data/sst.mnmean.v5.nc')  # forward slash keeps the path portable and avoids the invalid '\s' escape
sst = ds.sst.sel(lat=slice(50, 40), lon=slice(190, 240), time=slice('1981-01-01','2015-12-31'))
#sst.mean(dim='time').plot()
Explanation: 2. Load SST data
2.1 Load time series SST
Select the region (40°–50°N, 150°–135°W) and the period(1981-2016)
End of explanation
sst_clm = sst.sel(time=slice('1981-01-01','2010-12-31')).groupby('time.month').mean(dim='time')
#sst_clm = sst.groupby('time.month').mean(dim='time')
Explanation: 2.2 Calculate climatology between 1981-2010
End of explanation
sst_anom = sst.groupby('time.month') - sst_clm
sst_anom_mean = sst_anom.mean(dim=('lon', 'lat'), skipna=True)
Explanation: 2.3 Calculate SSTA
End of explanation
S = sst_anom_mean.values
t = sst.time.values
# Assign EEMD to `eemd` variable
eemd = EEMD()
# Execute EEMD on S
eIMFs = eemd.eemd(S)
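# Optional (assumption: these knobs exist in the installed PyEMD release): the
# ensemble size and noise seed can be fixed for reproducibility, e.g.
#   eemd = EEMD(trials=100)
#   eemd.noise_seed(12345)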
Explanation: 3. Carry out EMD analysis
End of explanation
nIMFs = eIMFs.shape[0]
plt.figure(figsize=(11,20))
plt.subplot(nIMFs+1, 1, 1)
# plot original data
plt.plot(t, S, 'r')
# plot IMFs
for n in range(nIMFs):
plt.subplot(nIMFs+1, 1, n+2)
plt.plot(t, eIMFs[n], 'g')
plt.ylabel("eIMF %i" %(n+1))
plt.locator_params(axis='y', nbins=5)
plt.xlabel("Time [s]")
Explanation: 4. Visualize
4.1 Plot IMFs
End of explanation
reconstructed = eIMFs.sum(axis=0)
plt.plot(t, reconstructed-S)
Explanation: 4.2 Error of reconstruction
End of explanation |
5,841 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Almond Nut Learner
Use published rankings together with distance traveled to play to classify winners + losers
Train to regular season and test on post season
considerations
Step1: Feature engineering
Log of distance
Capture rating diffs
Capture rating diffs acct for variance (t score)
Diff in expected scores via EM diffs
Tag winners in training set + viz. Also, normalize data.
Step2: Running the model
Step3: Sandbox explorations
Step4: Effect of C on different years
Step5: Look at who is contributing to logloss
Step6: Logloss contribution by round
Step7: Overtime counts
Step8: A look at dynamics of ratings data
Step9: Quick investigation | Python Code:
def attach_ratings_diff_stats(df, ratings_eos, season):
out_cols = list(df.columns) + ['mean_rtg_1', 'std_rtg_1', 'num_rtg_1', 'mean_rtg_2', 'std_rtg_2', 'num_rtg_2']
rtg_1 = ratings_eos.rename(columns = {'mean_rtg' : 'mean_rtg_1', 'std_rtg' : 'std_rtg_1', 'num_rtg' : 'num_rtg_1'})
rtg_2 = ratings_eos.rename(columns = {'mean_rtg' : 'mean_rtg_2', 'std_rtg' : 'std_rtg_2', 'num_rtg' : 'num_rtg_2'})
return df\
.merge(rtg_1, left_on = ['Season', 'Team1'], right_on = ['season', 'team'])\
.merge(rtg_2, left_on = ['Season', 'Team2'], right_on = ['season', 'team'])\
[out_cols]
def get_eos_ratings(ratings):
ratings_last_day = ratings.groupby('season').aggregate(max)[['rating_day_num']].reset_index()
ratings_eos_all = ratings_last_day\
.merge(ratings, left_on = ['season', 'rating_day_num'], right_on = ['season', 'rating_day_num'])
ratings_eos = ratings_eos_all.groupby(['season', 'team']).aggregate([np.mean, np.std, len])['orank']
return ratings_eos.reset_index().rename(columns = {'mean' : 'mean_rtg', 'std' : 'std_rtg', 'len' : 'num_rtg'})
def get_score_fluctuation(reg_season, season):
# note: quick and dirty; not best practice for home / away etc b/c these would only improve est for
# std on second order
# scale the score spreads by # posessions
# note: units don't really matter because this is used in a ratio and is normalized later
rsc = reg_season[reg_season['Season'] == season].copy()
# avg home vs away
hscores = rsc[rsc['Wloc'] == 'H']['Wscore'].tolist() + rsc[rsc['Wloc'] == 'A']['Lscore'].tolist()
ascores = rsc[rsc['Wloc'] == 'A']['Wscore'].tolist() + rsc[rsc['Wloc'] == 'H']['Lscore'].tolist()
home_correction = np.mean(hscores) - np.mean(ascores)
# get posessions per game
posessions = 0.5 * (
rsc['Lfga'] - rsc['Lor'] + rsc['Lto'] + 0.475*rsc['Lfta'] +\
rsc['Wfga'] - rsc['Wor'] + rsc['Wto'] + 0.475*rsc['Wfta']
)
# get victory margins and correct for home / away -- scale for posessions
rsc['win_mgn'] = rsc['Wscore'] - rsc['Lscore']
rsc['win_mgn'] += np.where(rsc['Wloc'] == 'H', -home_correction, 0)
rsc['win_mgn'] += np.where(rsc['Wloc'] == 'A', home_correction, 0)
rsc['win_mgn_scaled'] = rsc['win_mgn'] * 100 / posessions # score per 100 posessions
# get mgn of victory stats per team
win_mgns_wins = rsc[['Wteam', 'win_mgn_scaled']].rename(columns = {'Wteam' : 'team', 'win_mgn_scaled' : 'mgn'})
win_mgns_losses = rsc[['Lteam', 'win_mgn_scaled']].rename(columns = {'Lteam' : 'team', 'win_mgn_scaled' : 'mgn'})
win_mgns_losses['mgn'] *= -1
win_mgns = pd.concat([win_mgns_wins, win_mgns_losses])
return win_mgns.groupby('team').aggregate(np.std).rename(columns = {'mgn' : 'std_mgn'}).reset_index()
def attach_score_fluctuations(df, reg_season, season):
cols_to_keep = list(df.columns) + ['std_mgn_1', 'std_mgn_2']
fluct = get_score_fluctuation(reg_season, season)
fluct1 = fluct.rename(columns = {'std_mgn' : 'std_mgn_1'})
fluct2 = fluct.rename(columns = {'std_mgn' : 'std_mgn_2'})
return df\
.merge(fluct1, left_on = 'Team1', right_on = 'team')\
.merge(fluct2, left_on = 'Team2', right_on = 'team')[cols_to_keep]
def attach_kenpom_stats(df, kenpom, season):
cols_to_keep = list(df.columns) + ['adjem_1', 'adjem_2', 'adjt_1', 'adjt_2']
kp1 = kenpom[kenpom['Season'] == season][['Team_Id', 'AdjEM', 'AdjTempo']]\
.rename(columns = {'AdjEM' : 'adjem_1', 'AdjTempo' : 'adjt_1'})
kp2 = kenpom[kenpom['Season'] == season][['Team_Id', 'AdjEM', 'AdjTempo']]\
.rename(columns = {'AdjEM' : 'adjem_2', 'AdjTempo' : 'adjt_2'})
return df\
.merge(kp1, left_on = 'Team1', right_on = 'Team_Id')\
.merge(kp2, left_on = 'Team2', right_on = 'Team_Id')[cols_to_keep]
def get_root_and_leaves(hierarchy):
all_children = set(hierarchy[['Strongseed', 'Weakseed']].values.flatten())
all_parents = set(hierarchy[['Slot']].values.flatten())
root = [ p for p in all_parents if p not in all_children ][0]
leaves = [ c for c in all_children if c not in all_parents ]
return root, leaves
def get_tourney_tree_one_season(tourney_slots, season):
def calculate_depths(tree, child, root):
if child == root:
return 0
elif tree[child]['depth'] < 0:
tree[child]['depth'] = 1 + calculate_depths(tree, tree[child]['parent'], root)
return tree[child]['depth']
hierarchy = tourney_slots[tourney_slots['Season'] == season][['Slot', 'Strongseed', 'Weakseed']]
root, leaves = get_root_and_leaves(hierarchy) # should be R6CH...
tree_raw = {**dict(zip(hierarchy['Strongseed'],hierarchy['Slot'])),
**dict(zip(hierarchy['Weakseed'],hierarchy['Slot']))}
tree = { c : {'parent' : tree_raw[c], 'depth' : -1} for c in tree_raw}
for c in leaves:
calculate_depths(tree, c, root)
return tree
def get_tourney_trees(tourney_slots):
return { season : get_tourney_tree_one_season(tourney_slots, season)\
for season in tourney_slots['Season'].unique() }
def slot_matchup_from_seed(tree, seed1, seed2):
# return which slot the two teams would face off in
if seed1 == seed2:
return seed1
next_seed1 = seed1 if tree[seed1]['depth'] < tree[seed2]['depth'] else tree[seed1]['parent']
next_seed2 = seed2 if tree[seed2]['depth'] < tree[seed1]['depth'] else tree[seed2]['parent']
return slot_matchup_from_seed(tree, next_seed1, next_seed2)
def get_team_seed(tourney_seeds, season, team):
seed = tourney_seeds[
(tourney_seeds['Team'] == team) &
(tourney_seeds['Season'] == season)
]['Seed'].values
if len(seed) == 1:
return seed[0]
else:
return None
def dist(play_lat, play_lng, lat, lng):
return geodist((play_lat, play_lng), (lat, lng)).miles
def reg_distance_to_game(games_in, team_geog):
games = games_in.copy()
out_cols = list(games.columns) + ['w_dist', 'l_dist']
w_geog = team_geog.rename(columns = {'lat' : 'w_lat', 'lng' : 'w_lng'})
l_geog = team_geog.rename(columns = {'lat' : 'l_lat', 'lng' : 'l_lng'})
games = games\
.merge(w_geog, left_on = 'Wteam', right_on = 'team_id')\
.merge(l_geog, left_on = 'Lteam', right_on = 'team_id')
# handle neutral locations later by averaging distance from home for 2 teams if neutral location
games['play_lat'] = np.where(games['Wloc'] == 'H', games['w_lat'], games['l_lat'])
games['play_lng'] = np.where(games['Wloc'] == 'H', games['w_lng'], games['l_lng'])
games['w_dist'] = games.apply(lambda x: dist(x['play_lat'], x['play_lng'], x['w_lat'], x['w_lng']), axis = 1)
games['l_dist'] = games.apply(lambda x: dist(x['play_lat'], x['play_lng'], x['l_lat'], x['l_lng']), axis = 1)
# correct for neutral
games['w_dist'], games['l_dist'] =\
np.where(games['Wloc'] == 'N', (games['w_dist'] + games['l_dist'])/2, games['w_dist']),\
np.where(games['Wloc'] == 'N', (games['w_dist'] + games['l_dist'])/2, games['l_dist'])
return games[out_cols]
def tourney_distance_to_game(tourney_raw_in, tourney_geog, team_geog, season):
out_cols = list(tourney_raw_in.columns) + ['dist_1', 'dist_2']
tourney_raw = tourney_raw_in.copy()
geog_1 = team_geog.rename(columns = {'lat' : 'lat_1', 'lng' : 'lng_1'})
geog_2 = team_geog.rename(columns = {'lat' : 'lat_2', 'lng' : 'lng_2'})
geog_play = tourney_geog[tourney_geog['season'] == season][['slot', 'lat', 'lng']]\
.rename(columns = {'lat' : 'lat_p', 'lng' : 'lng_p'})
tourney_raw = tourney_raw\
.merge(geog_1, left_on = 'Team1', right_on = 'team_id')\
.merge(geog_2, left_on = 'Team2', right_on = 'team_id')\
.merge(geog_play, left_on = 'SlotMatchup', right_on = 'slot')
tourney_raw['dist_1'] = tourney_raw.apply(lambda x: dist(x['lat_p'], x['lng_p'], x['lat_1'], x['lng_1']), axis = 1)
tourney_raw['dist_2'] = tourney_raw.apply(lambda x: dist(x['lat_p'], x['lng_p'], x['lat_2'], x['lng_2']), axis = 1)
return tourney_raw[out_cols]
def get_raw_reg_season_data(reg_season, team_geog, season):
cols_to_keep = ['Season', 'Daynum', 'Team1', 'Team2', 'score_1', 'score_2', 'dist_1', 'dist_2']
rsr = reg_season[reg_season['Season'] == season] # reg season raw
rsr = reg_distance_to_game(rsr, team_geog)
rsr['Team1'] = np.where(rsr['Wteam'] < rsr['Lteam'], rsr['Wteam'], rsr['Lteam'])
rsr['Team2'] = np.where(rsr['Wteam'] > rsr['Lteam'], rsr['Wteam'], rsr['Lteam'])
rsr['score_1'] = np.where(rsr['Wteam'] < rsr['Lteam'], rsr['Wscore'], rsr['Lscore'])
rsr['score_2'] = np.where(rsr['Wteam'] > rsr['Lteam'], rsr['Wscore'], rsr['Lscore'])
rsr['dist_1'] = np.where(rsr['Wteam'] < rsr['Lteam'], rsr['w_dist'], rsr['l_dist'])
rsr['dist_2'] = np.where(rsr['Wteam'] > rsr['Lteam'], rsr['w_dist'], rsr['l_dist'])
return rsr[cols_to_keep]
def get_raw_tourney_data(tourney_seeds, tourney_trees, tourney_geog, team_geog, season):
# tree to find play location
tree = tourney_trees[season]
# get all teams in tourney
seed_map = tourney_seeds[tourney_seeds['Season'] == season].set_index('Team').to_dict()['Seed']
teams = sorted(seed_map.keys())
team_pairs = sorted([ (team1, team2) for team1 in teams for team2 in teams if team1 < team2 ])
tourney_raw = pd.DataFrame(team_pairs).rename(columns = { 0 : 'Team1', 1 : 'Team2' })
tourney_raw['Season'] = season
# find out where they would play each other
tourney_raw['SlotMatchup'] = tourney_raw.apply(
lambda x: slot_matchup_from_seed(tree, seed_map[x['Team1']], seed_map[x['Team2']]), axis = 1
)
# get features
tourney_raw = tourney_distance_to_game(tourney_raw, tourney_geog, team_geog, season)
return tourney_raw
def attach_supplements(data, reg_season, kenpom, ratings_eos, season):
dc = data.copy()
dc = attach_ratings_diff_stats(dc, ratings_eos, season) # get ratings diff stats
dc = attach_kenpom_stats(dc, kenpom, season)
dc = attach_score_fluctuations(dc, reg_season, season)
return dc
Explanation: Almond Nut Learner
Use published rankings together with distance traveled to play to classify winners + losers
Train to regular season and test on post season
considerations:
Refine
Vegas odds in first round
PREDICTING UPSETS??
team upset rating
team score variance
upset predictors based on past seasons
Ratings closer to date played
Model tuning / hyperparameter tuning
Implemented
individual ratings vs aggregate
Look at aggregate and derive statistics
diff vs absolute ratings
Use diffs for feature generation
only use final rankings instead of those at time of play?
For now: time of play
Distance from home? Distance from last game?
For now: distance from home
How do regular season and playoffs differ in features?
Is using distance in playoffs trained on regular season right?
Augment (not yet executed)
Defensive / offense ratings from kenpom
Elo, Elo differences, and assoc probabilities
Ensemble?
Construct micro-classifier from elo
Coaches
Look at momentum + OT effects when training
Beginning of season vs end of season for training
End of explanation
def generate_features(df):
has_score = 'score_1' in df.columns and 'score_2' in df.columns
cols_to_keep = ['Team1', 'Team2', 'Season', 'ln_dist_diff', 'rtg_diff', 't_rtg', 'pt_diff', 't_score'] +\
(['Team1_win'] if has_score else [])
features = df.copy()
features['ln_dist_diff'] = np.log((1 + df['dist_1'])/(1 + df['dist_2']))
# use negative for t_rtg so that better team has higher statistic than worse team
features['rtg_diff'] = -(df['mean_rtg_1'] - df['mean_rtg_2'])
features['t_rtg'] = -(df['mean_rtg_1'] - df['mean_rtg_2']) / np.sqrt(df['std_rtg_1']**2 + df['std_rtg_2']**2)
features['pt_diff'] = df['adjem_1'] - df['adjem_2']
features['t_score'] = (df['adjem_1'] - df['adjem_2']) / np.sqrt(df['std_mgn_1']**2 + df['std_mgn_2']**2)
# truth feature: did team 1 win?
if has_score:
features['Team1_win'] = features['score_1'] > features['score_2']
return features[cols_to_keep]
def normalize_features(train, test, features):
all_data_raw = pd.concat([train[features], test[features]])
all_data_norm = skpp.scale(all_data_raw) # with_mean = False ?
train_norm = train.copy()
test_norm = test.copy()
train_norm[features] = all_data_norm[:len(train)]
test_norm[features] = all_data_norm[len(train):]
return train_norm, test_norm
def get_key(df):
return df['Season'].map(str) + '_' + df['Team1'].map(str) + '_' + df['Team2'].map(str)
Explanation: Feature engineering
Log of distance
Capture rating diffs
Capture rating diffs acct for variance (t score)
Diff in expected scores via EM diffs
Tag winners in training set + viz. Also, normalize data.
End of explanation
features_to_use = ['ln_dist_diff', 'rtg_diff', 't_rtg', 'pt_diff', 't_score']
predict_field = 'Team1_win'
def get_features(season, tourney_slots, ratings, reg_season, team_geog, kenpom, tourney_seeds, tourney_geog):
# support data
tourney_trees = get_tourney_trees(tourney_slots)
ratings_eos = get_eos_ratings(ratings)
# regular season cleaned data
regular_raw = get_raw_reg_season_data(reg_season, team_geog, season)
regular_raw = attach_supplements(regular_raw, reg_season, kenpom, ratings_eos, season)
# post season cleaned data
tourney_raw = get_raw_tourney_data(tourney_seeds, tourney_trees, tourney_geog, team_geog, season)
tourney_raw = attach_supplements(tourney_raw, reg_season, kenpom, ratings_eos, season)
# get and normalize features
feat_train = generate_features(regular_raw)
feat_test = generate_features(tourney_raw)
train_norm, test_norm = normalize_features(feat_train, feat_test, features_to_use)
return regular_raw, tourney_raw, feat_train, feat_test, train_norm, test_norm
def make_predictions(season, train_norm, test_norm, tourney, C = 1):
# fit
lr = sklm.LogisticRegression(C = C) # fit_intercept = False???
lr.fit(train_norm[features_to_use].values, train_norm[predict_field].values)
# predictions
probs = lr.predict_proba(test_norm[features_to_use].values)
keys = get_key(test_norm)
predictions = pd.DataFrame({'Id' : keys.values, 'Pred' : probs[:,1]})
# Evaluate outcomes
res_base = tourney[(tourney['Season'] == season) & (tourney['Daynum'] > 135)].copy().reset_index()
res_base['Team1'] = np.where(res_base['Wteam'] < res_base['Lteam'], res_base['Wteam'], res_base['Lteam'])
res_base['Team2'] = np.where(res_base['Wteam'] > res_base['Lteam'], res_base['Wteam'], res_base['Lteam'])
res_base['Result'] = (res_base['Wteam'] == res_base['Team1']).map(lambda x: 1 if x else 0)
res_base['Id'] = get_key(res_base)
# attach results to predictions
res = pd.merge(res_base[['Id', 'Result']], predictions, on = 'Id', how = 'left')
# logloss
ll = skm.log_loss(res['Result'], res['Pred'])
# print(lr.intercept_)
# print(lr.coef_)
return predictions, res, ll
all_predictions = []
for season in [2013, 2014, 2015, 2016]:
regular_raw, tourney_raw, feat_train, feat_test, train_norm, test_norm = \
get_features(season, tourney_slots, ratings, reg_season, team_geog, kenpom, tourney_seeds, tourney_geog)
# see below for choice of C
predictions, res, ll = make_predictions(season, train_norm, test_norm, tourney, C = 5e-3)
print(ll)
all_predictions += [predictions]
# 0.559078513104 -- 2013
# 0.541984791608 -- 2014
# 0.480356337664 -- 2015
# 0.511671826092 -- 2016
pd.concat(all_predictions).to_csv('./submissions/simpleLogisticModel2013to2016_tuned.csv', index = False)
sns.pairplot(train_norm, hue = predict_field, vars = ['ln_dist_diff', 'rtg_diff', 't_rtg', 'pt_diff', 't_score'])
plt.show()
Explanation: Running the model
End of explanation
teams[teams['Team_Id'].isin([1163, 1196])]
tourney_raw[(tourney_raw['Team1'] == 1163) & (tourney_raw['Team2'] == 1196)]
feat_test[(feat_test['Team1'] == 1195) & (feat_test['Team2'] == 1196)]
res.iloc[np.argsort(-(res['Pred'] - res['Result']).abs())].reset_index(drop = True)
# accuracy?
np.sum(np.where(res['Pred'] > 0.5, res['Result'] == 1, res['Result'] == 0)) / len(res)
Explanation: Sandbox explorations
End of explanation
cs_to_check = np.power(10, np.arange(-4, 2, 0.1))
years_to_check = range(2011, 2017)
c_effect_df_dict = { 'C' : cs_to_check }
for yr in years_to_check:
regular_raw, tourney_raw, feat_train, feat_test, train_norm, test_norm = \
get_features(yr, tourney_slots, ratings, reg_season, team_geog, kenpom, tourney_seeds, tourney_geog)
log_losses = [ make_predictions(yr, train_norm, test_norm, tourney, C = C)[2] for C in cs_to_check ]
c_effect_df_dict[str(yr)] = log_losses
c_effect = pd.DataFrame(c_effect_df_dict)
plt.semilogx()
for col in [ col for col in c_effect if col != 'C' ]:
plt.plot(c_effect['C'], c_effect[col])
plt.legend(loc = 3)
plt.xlabel('C')
plt.ylabel('logloss')
plt.ylim(0.45, 0.65)
plt.show()
Explanation: Effect of C on different years
End of explanation
# contribution to logloss
rc = res.copy()
ftc = feat_test.copy()
ftc['Id'] = get_key(ftc)
rc['logloss_contrib'] = -np.log(np.where(rc['Result'] == 1, rc['Pred'], 1 - rc['Pred'])) / len(rc)
ftc = pd.merge(rc, ftc, how = 'left', on = 'Id')
fig, axes = plt.subplots(nrows=1, ncols=2, figsize = (10, 4))
im = axes[0].scatter(ftc['t_score'], ftc['t_rtg'], c = ftc['logloss_contrib'], vmin = 0, vmax = 0.025, cmap = plt.cm.get_cmap('coolwarm'))
axes[0].set_xlabel('t_score')
axes[0].set_ylabel('t_rtg')
#plt.colorbar(sc)
axes[1].scatter(-ftc['ln_dist_diff'], ftc['t_rtg'], c = ftc['logloss_contrib'], vmin = 0, vmax = 0.025, cmap = plt.cm.get_cmap('coolwarm'))
axes[1].set_xlabel('ln_dist_diff')
cb = fig.colorbar(im, ax=axes.ravel().tolist(), label = 'logloss_contrib')
plt.show()
Explanation: Look at who is contributing to logloss
End of explanation
tourney_rounds = tourney_raw[['Team1', 'Team2', 'Season', 'SlotMatchup']].copy()
tourney_rounds['Id'] = get_key(tourney_rounds)
tourney_rounds['round'] = tourney_rounds['SlotMatchup'].map(lambda s: int(s[1]))
tourney_rounds = tourney_rounds[['Id', 'round']]
ftc_with_rounds = pd.merge(ftc, tourney_rounds, how = 'left', on = 'Id')
fig, axs = plt.subplots(ncols=2, figsize = (10, 4))
sns.barplot(data = ftc_with_rounds, x = 'round', y = 'logloss_contrib', errwidth = 0, ax = axs[0])
sns.barplot(data = ftc_with_rounds, x = 'round', y = 'logloss_contrib', errwidth = 0, estimator=max, ax = axs[1])
axs[0].set_ylim(0, 0.035)
axs[1].set_ylim(0, 0.035)
plt.show()
Explanation: Logloss contribution by round
End of explanation
sns.barplot(data = reg_season[reg_season['Season'] > 2000], x = 'Season', y = 'Numot', errwidth = 0)
plt.show()
Explanation: Overtime counts
End of explanation
sns.lmplot('mean_rtg', 'std_rtg', data = ratings_eos, fit_reg = False)
plt.show()
ratings_eos_test = ratings_eos.copy()
ratings_eos_test['parabola_mean_model'] =(ratings_eos_test['mean_rtg'].max()/2)**2-(ratings_eos_test['mean_rtg'] - ratings_eos_test['mean_rtg'].max()/2)**2
sns.lmplot('parabola_mean_model', 'std_rtg', data = ratings_eos_test, fit_reg = False)
plt.show()
test_data_test = test_data.copy()
test_data_test['rtg_diff'] = test_data_test['mean_rtg_1'] - test_data_test['mean_rtg_2']
test_data_test['t_model'] = test_data_test['rtg_diff']/(test_data_test['std_rtg_1']**2 + test_data_test['std_rtg_2']**2)**0.5
#sns.lmplot('rtg_diff', 't_model', data = test_data_test, fit_reg = False)
sns.pairplot(test_data_test[['rtg_diff', 't_model']])
plt.show()
Explanation: A look at dynamics of ratings data
End of explanation
dist_test = get_training_data(reg_season, team_geog, 2016)
w_dist_test = dist_test[['w_dist', 'Wscore']].rename(columns = {'w_dist' : 'dist', 'Wscore' : 'score'})
l_dist_test = dist_test[['l_dist', 'Lscore']].rename(columns = {'l_dist' : 'dist', 'Lscore' : 'score'})
dist_test = pd.concat([w_dist_test, l_dist_test]).reset_index()[['dist', 'score']]
plt.hist(dist_test['dist'])
plt.xlim(0, 3000)
plt.semilogy()
plt.show()
bucket_size = 1
dist_test['bucket'] = bucket_size * (np.log(dist_test['dist'] + 1) // bucket_size)
dist_grp = dist_test.groupby('bucket').aggregate([np.mean, np.std, len])['score']
dist_grp['err'] = dist_grp['std'] / np.sqrt(dist_grp['len'])
plt.plot(dist_grp['mean'])
plt.fill_between(dist_grp.index,
(dist_grp['mean'] - 2*dist_grp['err']).values,
(dist_grp['mean'] + 2*dist_grp['err']).values,
alpha = 0.3)
plt.xlabel('log of distance traveled')
plt.ylabel('avg score')
plt.show()
Explanation: Quick investigation: looks like avg score decreases with log of distance traveled
End of explanation |
5,842 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An introduction to matplotlib
Matplotlib is a Python package used widely throughout the scientific Python community to produce high quality 2D publication graphics. It transparently supports a wide range of output formats including PNG (and other raster formats), PostScript/EPS, PDF and SVG and has interfaces for all of the major desktop GUI (Graphical User Interface) toolkits.
Matplotlib comes with a convenience sub-package called pyplot. For consistency with the wider maptlotlib community, this should always be imported as plt
Step1: The matplotlib Figure
At the heart of every matplotlib plot is the "Figure". The Figure is the top level concept that can be drawn to one of the many output formats, or simply just to screen.
Let's create our first Figure using pyplot, and then show it
Step2: On its own, drawing the Figure is uninteresting and will result in an empty piece of paper (that's why we didn't see anything above).
Other visible elements are added to a Figure to make a plot. All visible items in Matplotlib are instances of the Artist class
Step3: Matplotlib's pyplot module makes the process of creating graphics easier by allowing us to skip some of the tedious object construction. For example, we did not need to manually create the Figure with plt.figure because it was implicit that we needed a Figure when we created the Axes.
Under the hood matplotlib still had to create a Figure; we just didn't need to capture it into a variable. We can access the created object with the "state" functions found in pyplot called gcf and gca.
Exercise 1
Go to matplotlib.org and search for what these strangely named functions do.
Hint
Step4: Notice how the Axes view limits (ax.viewLim) have been updated to include the whole of the line.
Should we want to add some spacing around the edges of our Axes we can set a margin using the margins method. Alternatively, we can manually set the limits with the set_xlim and set_ylim methods.
Exercise 2
Modify the previous example to produce three different Figures that control the limits of the Axes.
1. Manually set the x and y limits to $[0.5, 2]$ and $[1, 5]$ respectively.
2. Define a margin such that there is 10% whitespace inside the axes around the drawn line (Hint
Step5: The simplicity of this example shows how visualisations can be produced quickly and easily with matplotlib, but it is worth remembering that for full control of the Figure and Axes artists we can mix the convenience of pyplot with the power of matplotlib's object oriented design.
Exercise 3
By calling plot multiple times, create a single Axes showing the line plots of $y=sin(x)$ and $y=cos(x)$ in the interval $[0, 2\pi]$ with 200 linearly spaced $x$ samples.
Multiple Axes on the same Figure (aka subplot)
Matplotlib makes it relatively easy to add more than one Axes object to a Figure. The Figure.add_subplot() method, which is wrapped by the pyplot.subplot() function, adds an Axes in the grid position specified. To compute the position, we tell matplotlib the number of rows and columns (respectively) to divide the figure into, followed by the index of the axes to be created (1 based).
For example, to create an axes grid with two columns, the grid specification would be plt.subplot(1, 2, <plot_number>).
The left-hand Axes is plot number 1, created with subplot(1, 2, 1), and the right-hand one is number 2, subplot(1, 2, 2)
Step6: Likewise, for plots above + below one another we would use two rows and one column, as in subplot(2, 1, <plot_number>).
Now let's expand our grid to two rows and three columns, and place one set of axes on the top right (grid specification (2, 3, 3)) and another on the bottom left (grid specification (2, 3, 4))
Step7: Exercise 3 continued
Step8: Titles, legends, colorbars and annotations
Matplotlib has convenience functions for the addition of plot elements such as titles, legends, colorbars and text based annotation.
The suptitle pyplot function allows us to set the title of a Figure, and the set_title method on an Axes allows us to set the title of an individual Axes. Additionally, an Axes has methods named set_xlabel and set_ylabel to label the respective x and y axes. Finally, we can add text, located by data coordinates, with the Axes text method
Step9: The creation of a legend is as simple as adding a "label" to lines of interest. This can be done in the call to plt.plot and then followed up with a call to plt.legend
Step10: Colorbars are created with the plt.colorbar function
Step11: Matplotlib comes with powerful annotation capabilities, which are described in detail at http
Step12: Savefig & backends
Matplotlib allows you to specify a "backend" to drive rendering the Figure. The backend includes the graphical user interface (GUI) library to use, and the most used backend (as it is normally the default one) is the "TkAgg" backend. When plt.show() is called, this backend pops up a Figure in a new TkInter window, which is rendered by the anti-grain graphics library (also known as "agg"). Generally, the most common reason to want to change backends is for automated Figure production on a headless server. In this situation, the "agg" backend can be used | Python Code:
import matplotlib.pyplot as plt
Explanation: An introduction to matplotlib
Matplotlib is a Python package used widely throughout the scientific Python community to produce high quality 2D publication graphics. It transparently supports a wide range of output formats including PNG (and other raster formats), PostScript/EPS, PDF and SVG and has interfaces for all of the major desktop GUI (Graphical User Interface) toolkits.
Matplotlib comes with a convenience sub-package called pyplot. For consistency with the wider maptlotlib community, this should always be imported as plt:
End of explanation
fig = plt.figure()
plt.show()
Explanation: The matplotlib Figure
At the heart of every matplotlib plot is the "Figure". The Figure is the top level concept that can be drawn to one of the many output formats, or simply just to screen.
Let's create our first Figure using pyplot, and then show it:
End of explanation
ax = plt.axes()
plt.show()
Explanation: On its own, drawing the Figure is uninteresting and will result in an empty piece of paper (that's why we didn't see anything above).
Other visible elements are added to a Figure to make a plot. All visible items in Matplotlib are instances of the Artist class: the Figure and Axes are both types of Artist.
To start with we can draw an Axes artist in the Figure, to represent our data space. The most basic Axes is rectangular and has tick labels and tick marks. Multiple Axes artists can be placed on a Figure.
Let's go ahead and create a Figure with a single Axes, and show it using pyplot:
End of explanation
ax = plt.axes()
line1, = ax.plot([0, 1, 2, 1.5], [3, 1, 2, 4])
plt.show()
Explanation: Matplotlib's pyplot module makes the process of creating graphics easier by allowing us to skip some of the tedious object construction. For example, we did not need to manually create the Figure with plt.figure because it was implicit that we needed a Figure when we created the Axes.
Under the hood matplotlib still had to create a Figure; we just didn't need to capture it into a variable. We can access the created object with the "state" functions found in pyplot called gcf and gca.
Exercise 1
Go to matplotlib.org and search for what these strangely named functions do.
Hint: you will find multiple results so remember we are looking for the pyplot versions of these functions.
Working with the Axes
As has already been mentioned, most of your time building a graphic in matplotlib will be spent on the Axes. Whilst the matplotlib documentation for the Axes is very detailed, it is also rather difficult to navigate (though this is an area of ongoing improvement).
As a result, it is often easier to find new plot types by looking at the pyplot module's documentation.
The first and most common Axes method is plot. Go ahead and look at the plot documentation from the following sources:
http://matplotlib.org/api/pyplot_summary.html
http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.plot
http://matplotlib.org/api/axes_api.html?#matplotlib.axes.Axes.plot
Plot can be used to draw one or more lines in axes data space:
End of explanation
plt.plot([0, 1, 2, 1.5], [3, 1, 2, 4])
plt.show()
Explanation: Notice how the Axes view limits (ax.viewLim) have been updated to include the whole of the line.
Should we want to add some spacing around the edges of our Axes we can set a margin using the margins method. Alternatively, we can manually set the limits with the set_xlim and set_ylim methods.
Exercise 2
Modify the previous example to produce three different Figures that control the limits of the Axes.
1. Manually set the x and y limits to $[0.5, 2]$ and $[1, 5]$ respectively.
2. Define a margin such that there is 10% whitespace inside the axes around the drawn line (Hint: numbers to margins are normalised such that 0% is 0.0 and 100% is 1.0).
3. Set a 10% margin on the Axes with the lower y limit set to 0. (Note: order is important here)
If we want to create a plot in its simplest form, without any modifications to the Figure or Axes, we can leave out the creation of artist variables. Our simple line example then becomes:
End of explanation
ax_left = plt.subplot(1, 2, 1)
plt.plot([2,1,3,4])
plt.title('left = #1')
ax_left = plt.subplot(1, 2, 2)
plt.plot([4,1,3,2])
plt.title('right = #2')
plt.show()
Explanation: The simplicity of this example shows how visualisations can be produced quickly and easily with matplotlib, but it is worth remembering that for full control of the Figure and Axes artists we can mix the convenience of pyplot with the power of matplotlib's object oriented design.
Exercise 3
By calling plot multiple times, create a single Axes showing the line plots of $y=sin(x)$ and $y=cos(x)$ in the interval $[0, 2\pi]$ with 200 linearly spaced $x$ samples.
Multiple Axes on the same Figure (aka subplot)
Matplotlib makes it relatively easy to add more than one Axes object to a Figure. The Figure.add_subplot() method, which is wrapped by the pyplot.subplot() function, adds an Axes in the grid position specified. To compute the position, we tell matplotlib the number of rows and columns (respectively) to divide the figure into, followed by the index of the axes to be created (1 based).
For example, to create an axes grid with two columns, the grid specification would be plt.subplot(1, 2, <plot_number>).
The left-hand Axes is plot number 1, created with subplot(1, 2, 1), and the right-hand one is number 2, subplot(1, 2, 2) :
End of explanation
top_right_ax = plt.subplot(2, 3, 3, title='#3 = top-right')
bottom_left_ax = plt.subplot(2, 3, 4, title='#4 = bottom-left')
plt.show()
Explanation: Likewise, for plots above + below one another we would use two rows and one column, as in subplot(2, 1, <plot_number>).
Now let's expand our grid to two rows and three columns, and place one set of axes on the top right (grid specification (2, 3, 3)) and another on the bottom left (grid specification (2, 3, 4))
End of explanation
import numpy as np
x = np.linspace(-180, 180, 60)
y = np.linspace(-90, 90, 30)
x2d, y2d = np.meshgrid(x, y)
data = np.cos(3 * np.deg2rad(x2d)) + np.sin(2 * np.deg2rad(y2d))
plt.contourf(x, y, data)
plt.show()
plt.imshow(data, extent=[-180, 180, -90, 90],
interpolation='nearest', origin='lower')
plt.show()
plt.pcolormesh(x, y, data)
plt.show()
plt.scatter(x2d, y2d, c=data, s=15)
plt.show()
plt.bar(x, data.sum(axis=0), width=np.diff(x)[0])
plt.show()
plt.plot(x, data.sum(axis=0), linestyle='--',
marker='d', markersize=10, color='red')
plt.show()
Explanation: Exercise 3 continued: Copy the answer from the previous task (plotting $y=sin(x)$ and $y=cos(x)$) and add the appropriate plt.subplot calls to create a Figure with two rows of Axes, one showing $y=sin(x)$ and the other showing $y=cos(x)$.
Further plot types
Matplotlib comes with a huge variety of different plot types. Here is a quick demonstration of the more common ones.
End of explanation
fig = plt.figure()
ax = plt.axes()
# Adjust the created axes so its topmost extent is 0.8 of the figure.
fig.subplots_adjust(top=0.8)
fig.suptitle('Figure title', fontsize=18, fontweight='bold')
ax.set_title('Axes title', fontsize=16)
ax.set_xlabel('The X axis')
ax.set_ylabel('The Y axis $y=f(x)$', fontsize=16)
ax.text(0.5, 0.5, 'Text centered at (0.5, 0.5)\nin data coordinates.',
horizontalalignment='center', fontsize=14)
plt.show()
Explanation: Titles, legends, colorbars and annotations
Matplotlib has convenience functions for the addition of plot elements such as titles, legends, colorbars and text based annotation.
The suptitle pyplot function allows us to set the title of a Figure, and the set_title method on an Axes allows us to set the title of an individual Axes. Additionally, an Axes has methods named set_xlabel and set_ylabel to label the respective x and y axes. Finally, we can add text, located by data coordinates, with the Axes text method:
End of explanation
x = np.linspace(-3, 7, 200)
plt.plot(x, 0.5*x**3 - 3*x**2, linewidth=2,
label='$f(x)=0.5x^3-3x^2$')
plt.plot(x, 1.5*x**2 - 6*x, linewidth=2, linestyle='--',
label='Gradient of $f(x)$', )
plt.legend(loc='lower right')
plt.grid()
plt.show()
Explanation: The creation of a legend is as simple as adding a "label" to lines of interest. This can be done in the call to plt.plot and then followed up with a call to plt.legend:
End of explanation
x = np.linspace(-180, 180, 60)
y = np.linspace(-90, 90, 30)
x2d, y2d = np.meshgrid(x, y)
data = np.cos(3 * np.deg2rad(x2d)) + np.sin(2 * np.deg2rad(y2d))
plt.contourf(x, y, data)
plt.colorbar(orientation='horizontal')
plt.show()
Explanation: Colorbars are created with the plt.colorbar function:
End of explanation
x = np.linspace(-3, 7, 200)
plt.plot(x, 0.5*x**3 - 3*x**2, linewidth=2)
plt.annotate('Local minimum',
xy=(4, -18),
xytext=(-2, -40), fontsize=15,
arrowprops={'facecolor': 'black', 'frac': 0.3})
plt.grid()
plt.show()
Explanation: Matplotlib comes with powerful annotation capabilities, which are described in detail at http://matplotlib.org/users/annotations_intro.html.
The annotation's power can mean that the syntax is a little harder to read, which is demonstrated by one of the simplest examples of using annotate:
End of explanation
plt.plot(range(10))
plt.savefig('simple.svg')
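# A small extra example (not in the original): the file extension selects the
# output format, and dpi controls raster resolution.
plt.savefig('simple.png', dpi=200)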
Explanation: Savefig & backends
Matplotlib allows you to specify a "backend" to drive rendering the Figure. The backend includes the graphical user interface (GUI) library to use, and the most used backend (as it is normally the default one) is the "TkAgg" backend. When plt.show() is called, this backend pops up a Figure in a new TkInter window, which is rendered by the anti-grain graphics library (also known as "agg"). Generally, the most common reason to want to change backends is for automated Figure production on a headless server. In this situation, the "agg" backend can be used:
import matplotlib
matplotlib.use('agg')
import matplotlib.pyplot as plt
Note: The backend must be chosen before importing pyplot for the first time, unless the force keyword is added.
Non-interactive backends such as the "agg" backend will do nothing when plt.show() is called - this is because there is nowhere (no graphical display) specified for a Figure to be displayed.
To save a Figure programmatically the savefig function can be used from any backend:
End of explanation |
5,843 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Lasso
Stats 208
Step1: Wavelet reconstruction
Can reconstruct the sequence by
$$
\hat y = W \hat \beta.
$$
The objective is likelihood term + L1 penalty term,
$$
\frac 12 \sum_{i=1}^T (y - W \beta)_i^2 + \lambda \sum_{i=1}^T |\beta_i|.
$$
The L1 penalty "forces" some $\beta_i = 0$, inducing sparsity
Step2: Non-orthogonal design
The objective is likelihood term + L1 penalty term,
$$
\frac 12 \sum_{i=1}^T (y - X \beta)_i^2 + \lambda \sum_{i=1}^T |\beta_i|.
$$
does not have closed form for $X$ that is non-orthogonal.
it is convex
it is non-smooth (recall $|x|$)
has tuning parameter $\lambda$
Compare to best subset selection (NP-hard)
Step3: We can also compare this to the selected model from forward stagewise regression | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
## Explore Turkish stock exchange dataset
tse = pd.read_excel('../../data/data_akbilgic.xlsx',skiprows=1)
tse = tse.rename(columns={'ISE':'TLISE','ISE.1':'USDISE'})
def const_wave(T,a,b):
wave = np.zeros(T)
s1 = (b-a) // 2
s2 = (b-a) - s1
norm_C = (s1*s2 / (s1+s2))**0.5
wave[a:a+s1] = norm_C / s1
wave[a+s1:b] = -norm_C / s2
return wave
def _const_wave_basis(T,a,b):
if b-a < 2:
return []
wave_basis = []
wave_basis.append(const_wave(T,a,b))
mid_pt = a + (b-a)//2
wave_basis += _const_wave_basis(T,a,mid_pt)
wave_basis += _const_wave_basis(T,mid_pt,b)
return wave_basis
def const_wave_basis(T,a,b):
father = np.ones(T) / T**0.5
return [father] + _const_wave_basis(T,a,b)
# Construct discrete Haar wavelet basis
T,p = tse.shape
wave_basis = const_wave_basis(T,0,T)
W = np.array(wave_basis).T
_ = plt.plot(W[:,:3])
def soft(y,lamb):
pos_part = (y - lamb) * (y > lamb)
neg_part = (y + lamb) * (y < -lamb)
return pos_part + neg_part
## Volatility seems most interesting
## will construct local measure of volatility
## remove rolling window estimate (local centering)
## square the residuals
#tse = tse.set_index('date')
tse_trem = tse - tse.rolling("7D").mean()
tse_vol = tse_trem**2.
## Make wavelet transformation and soft threshold
tse_wave = W.T @ tse_vol.values
lamb = .001
tse_soft = soft(tse_wave,lamb)
tse_rec = W @ tse_soft
tse_den = tse_vol.copy()
tse_den.iloc[:,:] = tse_rec
_ = tse_vol.plot(subplots=True,figsize=(10,10))
_ = tse_den.plot(subplots=True,figsize=(10,10))
Explanation: The Lasso
Stats 208: Lecture 5
Prof. Sharpnack
Lecture slides at course github page
Some content of these slides are from STA 251 notes and STA 141B lectures.
Some content is from Elements of Statistical Learning
Recall Convex Optimization
Def A function $f : \mathbb R^p \to \mathbb R$ is convex if for any $0 \le \alpha \le 1$, $x_0, x_1 \in \mathbb R^p$,
$$
f(\alpha x_0 + (1 - \alpha) x_1) \le \alpha f(x_0) + (1 - \alpha) f(x_1).
$$
For convex functions, local minima are global minima
Recall 1st Order Condition. If f is differentiable then it is convex if
$$
f(x) \ge f(x_0) + \nabla f(x_0)^\top (x - x_0), \forall x,x_0
$$
and when $\nabla f(x_0) = 0$ then
$$
f(x) \ge f(x_0), \forall x
$$
so any fixed point of gradient descent is a global min (for convex, differentiable f)
Subdifferential
Def. $g(x_0) \in \mathbb R^p$ is a subgradient of $f$ at $x_0$ if
$$
f(x) \ge f(x_0) + g(x_0)^\top (x - x_0), \forall x.
$$
The set of all subgradients at $x_0$ is call the subdifferential, denoted $\partial f(x_0)$.
For any global optima, $0 \in \partial f(x_0)$.
Wavelet denoising
Soft thresholding is commonly used for orthonormal bases.
- Suppose that we have a vector $y_1,\ldots, y_T$ (like a time series).
- And we want to reconstruct $y$ with $W \beta$ where $\beta$ has a small sum of absolute values $\sum_i |\beta_i|$
- $W$ is $T \times T$ and $W W^\top = W^\top W = I$ (orthonormal full rank design)
Want to minimize
$$
\frac 12 \sum_{i=1}^T (y - W \beta)_i^2 + \lambda \sum_{i=1}^T |\beta_i|.
$$
End of explanation
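To connect the L1 penalty to the soft function defined above (a short added derivation, following the standard argument): for orthonormal $W$ the problem separates across coordinates, so with $z = W^\top y$ each coordinate solves
$$
\min_{\beta_i} \ \frac 12 (z_i - \beta_i)^2 + \lambda |\beta_i| ,
$$
and requiring $0$ to lie in the subdifferential gives the soft-thresholding rule
$$
\hat \beta_i = \operatorname{sign}(z_i)\,\left(|z_i| - \lambda\right)_+ ,
$$
which is exactly what the soft function implements.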
plt.plot(tse_soft[:,4])
high_idx = np.where(np.abs(tse_soft[:,5]) > .0001)[0]
print(high_idx)
fig, axs = plt.subplots(len(high_idx) + 1,1)
for i, idx in enumerate(high_idx):
axs[i].plot(W[:,idx])
plt.plot(tse_den['FTSE'],c='r')
Explanation: Wavelet reconstruction
Can reconstruct the sequence by
$$
\hat y = W \hat \beta.
$$
The objective is likelihood term + L1 penalty term,
$$
\frac 12 \sum_{i=1}^T (y - W \beta)_i^2 + \lambda \sum_{i=1}^T |\beta_i|.
$$
The L1 penalty "forces" some $\beta_i = 0$, inducing sparsity
End of explanation
# %load ../standard_import.txt
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import preprocessing, model_selection, linear_model
%matplotlib inline
## Modified from the github repo: https://github.com/JWarmenhoven/ISLR-python
## which is based on the book by James et al. Intro to Statistical Learning.
df = pd.read_csv('../../data/Hitters.csv', index_col=0).dropna()
df.index.name = 'Player'
df.info()
## Simulate a dataset for lasso
n=100
p=1000
X = np.random.randn(n,p)
X = preprocessing.scale(X)
## Subselect true active set
sprob = 0.02
Sbool = np.random.rand(p) < sprob
s = np.sum(Sbool)
print("Number of non-zero's: {}".format(s))
## Construct beta and y
mu = 100.
beta = np.zeros(p)
beta[Sbool] = mu * np.random.randn(s)
eps = np.random.randn(n)
y = X.dot(beta) + eps
## Run lars with lasso mod, find active set
larper = linear_model.lars_path(X,y,method="lasso")
S = set(np.where(Sbool)[0])
def plot_it():
for j in S:
_ = plt.plot(larper[0],larper[2][j,:],'r')
for j in set(range(p)) - S:
_ = plt.plot(larper[0],larper[2][j,:],'k',linewidth=.75)
_ = plt.title('Lasso path for simulated data')
_ = plt.xlabel('lambda')
_ = plt.ylabel('Coef')
plot_it()
## Hitters dataset
df = pd.read_csv('../../data/Hitters.csv', index_col=0).dropna()
df.index.name = 'Player'
df.info()
df.head()
dummies = pd.get_dummies(df[['League', 'Division', 'NewLeague']])
dummies.info()
print(dummies.head())
y = df.Salary
# Drop the column with the independent variable (Salary), and columns for which we created dummy variables
X_ = df.drop(['Salary', 'League', 'Division', 'NewLeague'], axis=1).astype('float64')
# Define the feature set X.
X = pd.concat([X_, dummies[['League_N', 'Division_W', 'NewLeague_N']]], axis=1)
X.info()
X.head(5)
loo = model_selection.LeaveOneOut()
looiter = loo.split(X)
hitlasso = linear_model.LassoCV(cv=looiter)
hitlasso.fit(X,y)
print("The selected lambda value is {:.2f}".format(hitlasso.alpha_))
hitlasso.coef_
Explanation: Non-orthogonal design
The objective is likelihood term + L1 penalty term,
$$
\frac 12 \sum_{i=1}^T (y - X \beta)_i^2 + \lambda \sum_{i=1}^T |\beta_i|.
$$
does not have closed form for $X$ that is non-orthogonal.
it is convex
it is non-smooth (recall $|x|$)
has tuning parameter $\lambda$
Compare to best subset selection (NP-hard):
$$
\min \frac 12 \sum_{i=1}^T (y - X \beta)_i^2.
$$
for
$$
\| \beta \|_0 = |{\rm supp}(\beta)| < s.
$$
Image of Lasso solution
<img src="lasso_soln.PNG" width=100%>
Solving the Lasso
The lasso can be written in regularized form,
$$
\min \frac 12 \sum_{i=1}^T (y - X \beta)i^2 + \lambda \sum{i=1}^T |\beta_i|,
$$
or in constrained form,
$$
\min \frac 12 \sum_{i=1}^T (y - X \beta)_i^2, \quad \textrm{s.t.} \sum_{i=1}^T |\beta_i| \le C,
$$
For every $\lambda$ there is a $C$ such that the regularized form and constrained form have the same argmin
This correspondence is data dependent
Solving Lasso
A quadratic program (QP) is a convex optimization of the form
$$
\min \beta^\top Q \beta + \beta^\top a \quad \textrm{ s.t. } A\beta \le c
$$
where $Q$ is positive semi-definite.
claim: The lasso (constrained form) is a QP.
$$
\frac 12 \sum_{i=1}^T (y - X \beta)_i^2 = \frac 12 \beta^\top (X^\top X) \beta - \beta^\top (X^\top y) + C
$$
but what about $\| \beta \|_1$?
Solving the lasso
For a single $\lambda$ (or $C$ in constrained form) can solve the lasso with many specialized methods
- quadratic program solver
- proximal gradient
- alternating direction method of multipliers
but $\lambda$ is a tuning parameter. Options
1. Construct a grid of $\lambda$ and solve each lasso
2. Solve for all $\lambda$ values - path algorithm
Active sets and why lasso works better
Let $\hat \beta_\lambda$ be the $\hat \beta$ at tuning parameter $\lambda$.
Define $\mathcal A_\lambda = {\rm supp}(\hat \beta_\lambda)$ the non-zero elements of $\hat \beta_\lambda$.
For large $\lambda = \infty$, $|\mathcal A_\lambda| = 0$ (penalty dominates)
For small $\lambda = 0$, $|\mathcal A_\lambda| = p$ (loss dominates)
Forward greedy selection only adds elements to the active set, does not remove elements.
Lasso Path
Start at $\lambda = +\infty, \hat \beta = 0$.
Decrease $\lambda$ until $\hat \beta_{j_1} \ne 0$, $\mathcal A \gets {j_1}$. (Hitting event)
Continue decreasing $\lambda$ updating $\mathcal A$ with hitting and leaving events
$x_{j_1}$ is the predictor variable most correlated with $y$
Hitting events are when element is added to $\mathcal A$
Leaving events are when element is removed from $\mathcal A$
$\hat \beta_{\lambda,j}$ is piecewise linear, continuous, as a function of $\lambda$
knots are at "hitting" and "leaving" events
from sklearn.org
Least Angle Regression (LAR)
Standardize predictors and start with residual $r = y - \bar y$, $\hat beta = 0$
Find $x_j$ most correlated with $r$
Move $\beta_j$ in the direction of $x_j^\top r$ until the residual is more correlated with another $x_k$
Move $\beta_j,\beta_k$ in the direction of their joint OLS coefficients of $r$ on $(x_j,x_k)$ until some other competitor $x_l$ has as much correlation with the current residual
Continue until all predictors have been entered.
Lasso modification
4.5 If a non-zero coefficient drops to 0 then remove it from the active set and recompute the restricted OLS.
from ESL
End of explanation
bforw = [-0.21830515, 0.38154135, 0. , 0. , 0. ,
0.16139123, 0. , 0. , 0. , 0. ,
0.09994524, 0.56696569, -0.16872682, 0.16924078, 0. ,
0. , 0. , -0.19429699, 0. ]
print(", ".join(X.columns[(hitlasso.coef_ != 0.) != (bforw != 0.)]))
Explanation: We can also compare this to the selected model from forward stagewise regression:
[-0.21830515, 0.38154135, 0. , 0. , 0. ,
0.16139123, 0. , 0. , 0. , 0. ,
0.09994524, 0.56696569, -0.16872682, 0.16924078, 0. ,
0. , 0. , -0.19429699, 0. ]
This is not exactly the same model with differences in the inclusion or exclusion of AtBat, HmRun, Runs, RBI, Years, CHmRun, Errors, League_N, Division_W, NewLeague_N
End of explanation |
5,844 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Inversion sampling example
First find the normalizing constant
Step1: Joint Distribution
Find the normalizing constant
Step2: Find the marginal distribution | Python Code:
%%latex
\begin{align*}
f_X(X=x) &= cx^2, 0 \leq x \leq 2 \\
1 &= c\int_0^2 x^2 dx \\
&= c[\frac{1}{3}x^3 + d]_0^2 \\
&= c[\frac{8}{3} + d - d] \\
&= c[\frac{8}{3}] \\
f_X(X=x) &= \frac{3}{8}x^2, 0 \leq x \leq 2
\end{align*}
u = np.random.uniform(size=100000)
x = 2 * u**.3333
df = pd.DataFrame({'x':x})
print df.describe()
ggplot(aes(x='x'), data=df) + geom_histogram()
Explanation: Inversion sampling example
First find the normalizing constant:
$$
\begin{align}
f_X(X=x) &&= cx^2, 0 \leq x \leq 2 \
1 &&= c\int_0^2 x^2 dx \
&&= c[\frac{1}{3}x^3 + d]_0^2 \
&&= c[\frac{8}{3} + d - d] \
&&= c[\frac{8}{3}] \
f_X(X=x) &&= \frac{3}{8}x^2, 0 \leq x \leq 2
\end{align}
$$
Next find the cumulative distribution function:
* $F_X(X=x) = \int_0^x \frac{3}{8}x^2dx$
* $=\frac{3}{8}[\frac{1}{3}x^3 + d]_0^x$
* $=\frac{3}{8}[\frac{1}{3}x^3 + d - d]$
* $=\frac{1}{8}x^3$
We can randomly generate values from a standard uniform distribution and set equal to the CDF. Solve for $x$. Plug the randomly generated values into the equation and plot the histogram or density of $x$ to get the shape of the distribution:
* $u = \frac{1}{8}x^3$
* $x^3 = 8u$
* $x = 2u^{\frac{1}{3}}$
End of explanation
x = np.random.uniform(size=10000)
y = np.random.uniform(size=10000)
Explanation: Joint Distribution
Find the normalizing constant:
* $f_{X,Y}(X=x,Y=y) = c(2x + y), 0 \leq x \leq 2, 0 \leq y \leq 2$
* $1 = \int_0^2 \int_0^2 c(2x + y) dy dx$
* $ = c\int_0^2 [2xy + \frac{1}{2}y^2 + d]0^2 dx$
* $ = c\int_0^2 [4x + \frac{1}{2}4 + d - d] dx$
* $ = c\int_0^2 (4x + 2) dx$
* $ = c[2x^2 + 2x + d]_0^2$
* $ = c[2(4) + 2(2) + d - d]$
* $ = 12c$
* $c = \frac{1}{12}$
* $f_{X,Y}(X=x,Y=y) = \frac{1}{12}(2x + y), 0 \leq x \leq 2, 0 \leq y \leq 2$
End of explanation
u = np.random.uniform(size=100000)
x = (-1 + (1 + 24*u)**.5) / 2
df = pd.DataFrame({'x':x})
ggplot(aes(x='x'), data=df) + geom_histogram()
Explanation: Find the marginal distribution:
* $f_{X,Y}(X=x,Y=y) = \frac{1}{12}(2x + y), 0 \leq x \leq 2, 0 \leq y \leq 2$
* $f_X(X=x) = \int_0^2 \frac{1}{12}(2x + y) dy$
* $ = \frac{1}{12}[2xy + \frac{1}{2}y^2 + d]_0^2$
* $ = \frac{1}{12}[4x + 2 + d - d]$
* $ = \frac{4x + 2}{12}$
* $ = \frac{2x + 1}{6}$
Inversion sampling example:
* $F_X(X=x) = \int_0^x \dfrac{2x+1}{6}dx$
* $= \frac{1}{6}[x^2 + x + d]_0^x$
* $= \frac{x(x + 1)}{6}$
* $u = \frac{x^2 + x}{6}$
* $0 = x^2 + x - 6u$
* $x = \frac{-1 \pm \sqrt{1 + 4 \times 6u}}{2}$
End of explanation |
5,845 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Tensorflow Lattice를 사용한 윤리에 대한 형상 제약 조건
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Import required packages
Step3: Default values used in this tutorial
Step4: Case study #1
Step5: Preprocessing the dataset
Step7: Splitting the data into train/validation/test sets
Step8: Visualizing the data distribution
First we visualize the distribution of the data. We will plot the GPA and LSAT scores for all students that passed the bar and for all students that did not pass.
Step11: Training a calibrated linear model to predict bar exam passage
Next, we train a calibrated linear model in TFL to predict whether a student will pass the bar. The two input features are the LSAT score and the undergraduate GPA, and the training label is whether the student passed the bar.
We first train a calibrated linear model without any constraints. Then we train a calibrated linear model with monotonicity constraints and observe the difference in model output and accuracy.
Helper functions for training TFL calibrated linear estimators
These functions are used for this law school case study as well as for the credit default case study below.
Step14: Helper functions for configuring law school dataset features
These helper functions are specific to the law school case study.
Step15: Helper functions for visualizing trained model outputs
Step16: Training an unconstrained (non-monotonic) calibrated linear model
Step17: Training a monotonic calibrated linear model
Step18: Training other unconstrained models
We have shown that a TFL calibrated linear model can be trained to be monotonic in both LSAT score and GPA without sacrificing much accuracy.
So how does the calibrated linear model compare to other types of models, such as deep neural networks (DNNs) or gradient boosted trees (GBTs)? Do DNNs and GBTs appear to produce reasonably fair outputs? To answer this question, we will now train an unconstrained DNN and GBT. In fact, we will observe that both the DNN and the GBT easily violate monotonicity in LSAT score and undergraduate GPA.
Training an unconstrained deep neural network (DNN) model
The architecture was previously optimized to achieve high validation accuracy.
Step19: Training an unconstrained gradient boosted trees (GBT) model
The tree structure was previously optimized to achieve high validation accuracy.
Step20: Case study #2
Step21: Splitting the data into train/validation/test sets
Step22: Visualizing the data distribution
First we visualize the distribution of the data. We will plot the mean and standard error of the observed default rate for people with different marital statuses and repayment statuses. The repayment status represents the number of months a person is behind on repaying their loan (as of April 2005).
Step25: Training a calibrated linear model to predict the credit default rate
Next, we train a calibrated linear model in TFL to predict whether a person will default on a loan. The two input features are the person's marital status and how many months the person is behind on paying back their loans in April (the repayment status). The training label is whether the person defaulted on a loan.
We first train a calibrated linear model without any constraints. Then we train a calibrated linear model with monotonicity constraints and observe the difference in model output and accuracy.
Helper functions for configuring credit default dataset features
These helper functions are specific to the credit default case study.
Step26: Helper functions for visualizing trained model outputs
Step27: Training an unconstrained (non-monotonic) calibrated linear model
Step28: Training a monotonic calibrated linear model | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
#@test {"skip": true}
!pip install tensorflow-lattice seaborn
Explanation: Shape Constraints for Ethics with TensorFlow Lattice
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/lattice/tutorials/shape_constraints_for_ethics"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/lattice/tutorials/shape_constraints_for_ethics.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/lattice/tutorials/shape_constraints_for_ethics.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/lattice/tutorials/shape_constraints_for_ethics.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
Overview
This tutorial demonstrates how to use the TensorFlow Lattice (TFL) library to train models that behave responsibly and do not violate certain ethical or fairness assumptions. In particular, we focus on using monotonicity constraints to avoid unfairly penalizing certain attributes. This tutorial includes demonstrations of the experiments from the paper Deontological Ethics By Monotonicity Shape Constraints by Serena Wang and Maya Gupta, published at AISTATS 2020.
We will use TFL canned estimators on public datasets, but note that everything in this tutorial can also be done with models built from TFL Keras layers.
Before proceeding, make sure your runtime has all of the required packages installed (as imported in the code cells below).
Setup
Installing the TF Lattice package:
End of explanation
import tensorflow as tf
import logging
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import seaborn as sns
from sklearn.model_selection import train_test_split
import sys
import tensorflow_lattice as tfl
logging.disable(sys.maxsize)
Explanation: Import required packages:
End of explanation
# List of learning rate hyperparameters to try.
# For a longer list of reasonable hyperparameters, try [0.001, 0.01, 0.1].
LEARNING_RATES = [0.01]
# Default number of training epochs and batch sizes.
NUM_EPOCHS = 1000
BATCH_SIZE = 1000
# Directory containing dataset files.
DATA_DIR = 'https://raw.githubusercontent.com/serenalwang/shape_constraints_for_ethics/master'
Explanation: Default values used in this tutorial:
End of explanation
# Load data file.
law_file_name = 'lsac.csv'
law_file_path = os.path.join(DATA_DIR, law_file_name)
raw_law_df = pd.read_csv(law_file_path, delimiter=',')
Explanation: Case study #1: Law school admissions
In the first part of this tutorial, we consider a case study using the Law School Admissions dataset from the Law School Admissions Council (LSAC). We will train a classifier to predict whether a student will pass the bar using two features: the student's LSAT score and undergraduate GPA.
Suppose the classifier's score were used to guide law school admissions or scholarships. According to merit-based social norms, students with a higher GPA and a higher LSAT score should receive a higher score from the classifier. However, we will observe that it is easy for models to violate these intuitive norms and sometimes penalize students for having a higher GPA or LSAT score.
To address this unfair penalization problem, we can impose monotonicity constraints so that the model never penalizes a higher GPA or a higher LSAT score. This tutorial shows how to impose such monotonicity constraints with TFL.
Loading the law school data
End of explanation
# Define label column name.
LAW_LABEL = 'pass_bar'
def preprocess_law_data(input_df):
# Drop rows where the label or features of interest are missing.
output_df = input_df[~input_df[LAW_LABEL].isna() & ~input_df['ugpa'].isna() &
(input_df['ugpa'] > 0) & ~input_df['lsat'].isna()]
return output_df
law_df = preprocess_law_data(raw_law_df)
Explanation: Preprocessing the dataset:
End of explanation
def split_dataset(input_df, random_state=888):
Splits an input dataset into train, val, and test sets.
train_df, test_val_df = train_test_split(
input_df, test_size=0.3, random_state=random_state)
val_df, test_df = train_test_split(
test_val_df, test_size=0.66, random_state=random_state)
return train_df, val_df, test_df
law_train_df, law_val_df, law_test_df = split_dataset(law_df)
Explanation: Splitting the data into train/validation/test sets
End of explanation
def plot_dataset_contour(input_df, title):
plt.rcParams['font.family'] = ['serif']
g = sns.jointplot(
x='ugpa',
y='lsat',
data=input_df,
kind='kde',
xlim=[1.4, 4],
ylim=[0, 50])
g.plot_joint(plt.scatter, c='b', s=10, linewidth=1, marker='+')
g.ax_joint.collections[0].set_alpha(0)
g.set_axis_labels('Undergraduate GPA', 'LSAT score', fontsize=14)
g.fig.suptitle(title, fontsize=14)
# Adust plot so that the title fits.
plt.subplots_adjust(top=0.9)
plt.show()
law_df_pos = law_df[law_df[LAW_LABEL] == 1]
plot_dataset_contour(
law_df_pos, title='Distribution of students that passed the bar')
law_df_neg = law_df[law_df[LAW_LABEL] == 0]
plot_dataset_contour(
law_df_neg, title='Distribution of students that failed the bar')
Explanation: Visualizing the data distribution
First we visualize the distribution of the data. We will plot the GPA and LSAT scores for all students that passed the bar and for all students that did not pass.
End of explanation
def train_tfl_estimator(train_df, monotonicity, learning_rate, num_epochs,
batch_size, get_input_fn,
get_feature_columns_and_configs):
Trains a TFL calibrated linear estimator.
Args:
train_df: pandas dataframe containing training data.
monotonicity: if 0, then no monotonicity constraints. If 1, then all
features are constrained to be monotonically increasing.
learning_rate: learning rate of Adam optimizer for gradient descent.
num_epochs: number of training epochs.
batch_size: batch size for each epoch. None means the batch size is the full
dataset size.
get_input_fn: function that returns the input_fn for a TF estimator.
get_feature_columns_and_configs: function that returns TFL feature columns
and configs.
Returns:
estimator: a trained TFL calibrated linear estimator.
feature_columns, feature_configs = get_feature_columns_and_configs(
monotonicity)
model_config = tfl.configs.CalibratedLinearConfig(
feature_configs=feature_configs, use_bias=False)
estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=get_input_fn(input_df=train_df, num_epochs=1),
optimizer=tf.keras.optimizers.Adam(learning_rate))
estimator.train(
input_fn=get_input_fn(
input_df=train_df, num_epochs=num_epochs, batch_size=batch_size))
return estimator
def optimize_learning_rates(
train_df,
val_df,
test_df,
monotonicity,
learning_rates,
num_epochs,
batch_size,
get_input_fn,
get_feature_columns_and_configs,
):
Optimizes learning rates for TFL estimators.
Args:
train_df: pandas dataframe containing training data.
val_df: pandas dataframe containing validation data.
test_df: pandas dataframe containing test data.
monotonicity: if 0, then no monotonicity constraints. If 1, then all
features are constrained to be monotonically increasing.
learning_rates: list of learning rates to try.
num_epochs: number of training epochs.
batch_size: batch size for each epoch. None means the batch size is the full
dataset size.
get_input_fn: function that returns the input_fn for a TF estimator.
get_feature_columns_and_configs: function that returns TFL feature columns
and configs.
Returns:
A single TFL estimator that achieved the best validation accuracy.
estimators = []
train_accuracies = []
val_accuracies = []
test_accuracies = []
for lr in learning_rates:
estimator = train_tfl_estimator(
train_df=train_df,
monotonicity=monotonicity,
learning_rate=lr,
num_epochs=num_epochs,
batch_size=batch_size,
get_input_fn=get_input_fn,
get_feature_columns_and_configs=get_feature_columns_and_configs)
estimators.append(estimator)
train_acc = estimator.evaluate(
input_fn=get_input_fn(train_df, num_epochs=1))['accuracy']
val_acc = estimator.evaluate(
input_fn=get_input_fn(val_df, num_epochs=1))['accuracy']
test_acc = estimator.evaluate(
input_fn=get_input_fn(test_df, num_epochs=1))['accuracy']
print('accuracies for learning rate %f: train: %f, val: %f, test: %f' %
(lr, train_acc, val_acc, test_acc))
train_accuracies.append(train_acc)
val_accuracies.append(val_acc)
test_accuracies.append(test_acc)
max_index = val_accuracies.index(max(val_accuracies))
return estimators[max_index]
Explanation: Training a calibrated linear model to predict bar exam passage
Next, we train a calibrated linear model in TFL to predict whether a student will pass the bar. The two input features are the LSAT score and the undergraduate GPA, and the training label is whether the student passed the bar.
We first train a calibrated linear model without any constraints. Then we train a calibrated linear model with monotonicity constraints and observe the difference in model output and accuracy.
Helper functions for training TFL calibrated linear estimators
These functions are used for this law school case study as well as for the credit default case study below.
End of explanation
def get_input_fn_law(input_df, num_epochs, batch_size=None):
Gets TF input_fn for law school models.
return tf.compat.v1.estimator.inputs.pandas_input_fn(
x=input_df[['ugpa', 'lsat']],
y=input_df['pass_bar'],
num_epochs=num_epochs,
batch_size=batch_size or len(input_df),
shuffle=False)
def get_feature_columns_and_configs_law(monotonicity):
Gets TFL feature configs for law school models.
feature_columns = [
tf.feature_column.numeric_column('ugpa'),
tf.feature_column.numeric_column('lsat'),
]
feature_configs = [
tfl.configs.FeatureConfig(
name='ugpa',
lattice_size=2,
pwl_calibration_num_keypoints=20,
monotonicity=monotonicity,
pwl_calibration_always_monotonic=False),
tfl.configs.FeatureConfig(
name='lsat',
lattice_size=2,
pwl_calibration_num_keypoints=20,
monotonicity=monotonicity,
pwl_calibration_always_monotonic=False),
]
return feature_columns, feature_configs
Explanation: Helper functions for configuring law school dataset features
These helper functions are specific to the law school case study.
End of explanation
def get_predicted_probabilities(estimator, input_df, get_input_fn):
predictions = estimator.predict(
input_fn=get_input_fn(input_df=input_df, num_epochs=1))
return [prediction['probabilities'][1] for prediction in predictions]
def plot_model_contour(estimator, input_df, num_keypoints=20):
x = np.linspace(min(input_df['ugpa']), max(input_df['ugpa']), num_keypoints)
y = np.linspace(min(input_df['lsat']), max(input_df['lsat']), num_keypoints)
x_grid, y_grid = np.meshgrid(x, y)
positions = np.vstack([x_grid.ravel(), y_grid.ravel()])
plot_df = pd.DataFrame(positions.T, columns=['ugpa', 'lsat'])
plot_df[LAW_LABEL] = np.ones(len(plot_df))
predictions = get_predicted_probabilities(
estimator=estimator, input_df=plot_df, get_input_fn=get_input_fn_law)
grid_predictions = np.reshape(predictions, x_grid.shape)
plt.rcParams['font.family'] = ['serif']
plt.contour(
x_grid,
y_grid,
grid_predictions,
colors=('k',),
levels=np.linspace(0, 1, 11))
plt.contourf(
x_grid,
y_grid,
grid_predictions,
cmap=plt.cm.bone,
levels=np.linspace(0, 1, 11)) # levels=np.linspace(0,1,8));
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
cbar = plt.colorbar()
cbar.ax.set_ylabel('Model score', fontsize=20)
cbar.ax.tick_params(labelsize=20)
plt.xlabel('Undergraduate GPA', fontsize=20)
plt.ylabel('LSAT score', fontsize=20)
Explanation: Helper functions for visualizing trained model outputs
End of explanation
nomon_linear_estimator = optimize_learning_rates(
train_df=law_train_df,
val_df=law_val_df,
test_df=law_test_df,
monotonicity=0,
learning_rates=LEARNING_RATES,
batch_size=BATCH_SIZE,
num_epochs=NUM_EPOCHS,
get_input_fn=get_input_fn_law,
get_feature_columns_and_configs=get_feature_columns_and_configs_law)
plot_model_contour(nomon_linear_estimator, input_df=law_df)
Explanation: Training an unconstrained (non-monotonic) calibrated linear model
End of explanation
mon_linear_estimator = optimize_learning_rates(
train_df=law_train_df,
val_df=law_val_df,
test_df=law_test_df,
monotonicity=1,
learning_rates=LEARNING_RATES,
batch_size=BATCH_SIZE,
num_epochs=NUM_EPOCHS,
get_input_fn=get_input_fn_law,
get_feature_columns_and_configs=get_feature_columns_and_configs_law)
plot_model_contour(mon_linear_estimator, input_df=law_df)
Explanation: Training a monotonic calibrated linear model
End of explanation
feature_names = ['ugpa', 'lsat']
dnn_estimator = tf.estimator.DNNClassifier(
feature_columns=[
tf.feature_column.numeric_column(feature) for feature in feature_names
],
hidden_units=[100, 100],
optimizer=tf.keras.optimizers.Adam(learning_rate=0.008),
activation_fn=tf.nn.relu)
dnn_estimator.train(
input_fn=get_input_fn_law(
law_train_df, batch_size=BATCH_SIZE, num_epochs=NUM_EPOCHS))
dnn_train_acc = dnn_estimator.evaluate(
input_fn=get_input_fn_law(law_train_df, num_epochs=1))['accuracy']
dnn_val_acc = dnn_estimator.evaluate(
input_fn=get_input_fn_law(law_val_df, num_epochs=1))['accuracy']
dnn_test_acc = dnn_estimator.evaluate(
input_fn=get_input_fn_law(law_test_df, num_epochs=1))['accuracy']
print('accuracies for DNN: train: %f, val: %f, test: %f' %
(dnn_train_acc, dnn_val_acc, dnn_test_acc))
plot_model_contour(dnn_estimator, input_df=law_df)
Explanation: Training other unconstrained models
We have shown that a TFL calibrated linear model can be trained to be monotonic in both LSAT score and GPA without sacrificing much accuracy.
So how does the calibrated linear model compare to other types of models, such as deep neural networks (DNNs) or gradient boosted trees (GBTs)? Do DNNs and GBTs appear to produce reasonably fair outputs? To answer this question, we will now train an unconstrained DNN and GBT. In fact, we will observe that both the DNN and the GBT easily violate monotonicity in LSAT score and undergraduate GPA.
Training an unconstrained deep neural network (DNN) model
The architecture was previously optimized to achieve high validation accuracy.
End of explanation
tree_estimator = tf.estimator.BoostedTreesClassifier(
feature_columns=[
tf.feature_column.numeric_column(feature) for feature in feature_names
],
n_batches_per_layer=2,
n_trees=20,
max_depth=4)
tree_estimator.train(
input_fn=get_input_fn_law(
law_train_df, num_epochs=NUM_EPOCHS, batch_size=BATCH_SIZE))
tree_train_acc = tree_estimator.evaluate(
input_fn=get_input_fn_law(law_train_df, num_epochs=1))['accuracy']
tree_val_acc = tree_estimator.evaluate(
input_fn=get_input_fn_law(law_val_df, num_epochs=1))['accuracy']
tree_test_acc = tree_estimator.evaluate(
input_fn=get_input_fn_law(law_test_df, num_epochs=1))['accuracy']
print('accuracies for GBT: train: %f, val: %f, test: %f' %
(tree_train_acc, tree_val_acc, tree_test_acc))
plot_model_contour(tree_estimator, input_df=law_df)
Explanation: Training an unconstrained gradient boosted trees (GBT) model
The tree structure was previously optimized to achieve high validation accuracy.
End of explanation
# Load data file.
credit_file_name = 'credit_default.csv'
credit_file_path = os.path.join(DATA_DIR, credit_file_name)
credit_df = pd.read_csv(credit_file_path, delimiter=',')
# Define label column name.
CREDIT_LABEL = 'default'
Explanation: Case study #2: Credit default
The second case study we consider in this tutorial is predicting an individual's credit default probability. We will use the Default of Credit Card Clients dataset from the UCI repository. This data was collected from 30,000 Taiwanese credit card users and contains a binary label indicating whether a user defaulted on a payment within a time window. The features include marital status, gender, education, and how many months the user was behind on payment of their existing bills in each month from April to September 2005.
As with the first case study, we again illustrate using monotonicity constraints to avoid an unfair penalty: if the model were used to determine a user's credit score, it could feel unfair to many people if they were penalized for paying their bills earlier, all else being equal. We therefore apply a monotonicity constraint that keeps the model from penalizing early payments.
Loading the credit default data
End of explanation
credit_train_df, credit_val_df, credit_test_df = split_dataset(credit_df)
Explanation: Splitting the data into train/validation/test sets
End of explanation
def get_agg_data(df, x_col, y_col, bins=11):
xbins = pd.cut(df[x_col], bins=bins)
data = df[[x_col, y_col]].groupby(xbins).agg(['mean', 'sem'])
return data
def plot_2d_means_credit(input_df, x_col, y_col, x_label, y_label):
plt.rcParams['font.family'] = ['serif']
_, ax = plt.subplots(nrows=1, ncols=1)
plt.setp(ax.spines.values(), color='black', linewidth=1)
ax.tick_params(
direction='in', length=6, width=1, top=False, right=False, labelsize=18)
df_single = get_agg_data(input_df[input_df['MARRIAGE'] == 1], x_col, y_col)
df_married = get_agg_data(input_df[input_df['MARRIAGE'] == 2], x_col, y_col)
ax.errorbar(
df_single[(x_col, 'mean')],
df_single[(y_col, 'mean')],
xerr=df_single[(x_col, 'sem')],
yerr=df_single[(y_col, 'sem')],
color='orange',
marker='s',
capsize=3,
capthick=1,
label='Single',
markersize=10,
linestyle='')
ax.errorbar(
df_married[(x_col, 'mean')],
df_married[(y_col, 'mean')],
xerr=df_married[(x_col, 'sem')],
yerr=df_married[(y_col, 'sem')],
color='b',
marker='^',
capsize=3,
capthick=1,
label='Married',
markersize=10,
linestyle='')
leg = ax.legend(loc='upper left', fontsize=18, frameon=True, numpoints=1)
ax.set_xlabel(x_label, fontsize=18)
ax.set_ylabel(y_label, fontsize=18)
ax.set_ylim(0, 1.1)
ax.set_xlim(-2, 8.5)
ax.patch.set_facecolor('white')
leg.get_frame().set_edgecolor('black')
leg.get_frame().set_facecolor('white')
leg.get_frame().set_linewidth(1)
plt.show()
plot_2d_means_credit(credit_train_df, 'PAY_0', 'default',
'Repayment Status (April)', 'Observed default rate')
Explanation: Visualizing the data distribution
First we visualize the distribution of the data. We will plot the mean and standard error of the observed default rate for people with different marital statuses and repayment statuses. The repayment status represents the number of months a person is behind on repaying their loan (as of April 2005).
End of explanation
def get_input_fn_credit(input_df, num_epochs, batch_size=None):
Gets TF input_fn for credit default models.
return tf.compat.v1.estimator.inputs.pandas_input_fn(
x=input_df[['MARRIAGE', 'PAY_0']],
y=input_df['default'],
num_epochs=num_epochs,
batch_size=batch_size or len(input_df),
shuffle=False)
def get_feature_columns_and_configs_credit(monotonicity):
Gets TFL feature configs for credit default models.
feature_columns = [
tf.feature_column.numeric_column('MARRIAGE'),
tf.feature_column.numeric_column('PAY_0'),
]
feature_configs = [
tfl.configs.FeatureConfig(
name='MARRIAGE',
lattice_size=2,
pwl_calibration_num_keypoints=3,
monotonicity=monotonicity,
pwl_calibration_always_monotonic=False),
tfl.configs.FeatureConfig(
name='PAY_0',
lattice_size=2,
pwl_calibration_num_keypoints=10,
monotonicity=monotonicity,
pwl_calibration_always_monotonic=False),
]
return feature_columns, feature_configs
Explanation: Training a calibrated linear model to predict the credit default rate
Next, we train a calibrated linear model in TFL to predict whether a person will default on a loan. The two input features are the person's marital status and how many months the person is behind on paying back their loans in April (the repayment status). The training label is whether the person defaulted on a loan.
We first train a calibrated linear model without any constraints. Then we train a calibrated linear model with monotonicity constraints and observe the difference in model output and accuracy.
Helper functions for configuring credit default dataset features
These helper functions are specific to the credit default case study.
End of explanation
def plot_predictions_credit(input_df,
estimator,
x_col,
x_label='Repayment Status (April)',
y_label='Predicted default probability'):
predictions = get_predicted_probabilities(
estimator=estimator, input_df=input_df, get_input_fn=get_input_fn_credit)
new_df = input_df.copy()
new_df.loc[:, 'predictions'] = predictions
plot_2d_means_credit(new_df, x_col, 'predictions', x_label, y_label)
Explanation: Helper functions for visualizing trained model outputs
End of explanation
nomon_linear_estimator = optimize_learning_rates(
train_df=credit_train_df,
val_df=credit_val_df,
test_df=credit_test_df,
monotonicity=0,
learning_rates=LEARNING_RATES,
batch_size=BATCH_SIZE,
num_epochs=NUM_EPOCHS,
get_input_fn=get_input_fn_credit,
get_feature_columns_and_configs=get_feature_columns_and_configs_credit)
plot_predictions_credit(credit_train_df, nomon_linear_estimator, 'PAY_0')
Explanation: Training an unconstrained (non-monotonic) calibrated linear model
End of explanation
mon_linear_estimator = optimize_learning_rates(
train_df=credit_train_df,
val_df=credit_val_df,
test_df=credit_test_df,
monotonicity=1,
learning_rates=LEARNING_RATES,
batch_size=BATCH_SIZE,
num_epochs=NUM_EPOCHS,
get_input_fn=get_input_fn_credit,
get_feature_columns_and_configs=get_feature_columns_and_configs_credit)
plot_predictions_credit(credit_train_df, mon_linear_estimator, 'PAY_0')
Explanation: Training a monotonic calibrated linear model
End of explanation |
5,846 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: A simple DynamicMap
Let us now create a simple DynamicMap using three annotation elements, namely Box, Text, and Ellipse
Step2: This example uses the concepts introduced in the exploring with containers section. As before, the argument angle is supplied by the position of the 'angle' slider.
Introducing Streams
HoloViews offers a way of supplying the angle value to our annotation function through means other than sliders, namely via the streams system which you can learn about in the user guide.
All stream classes are found in the streams submodule and are subclasses of Stream. You can use Stream directly to make custom stream classes via the define classmethod
Step3: Here Angle is capitalized as it is a subclass of Stream with a numeric angle parameter, which has a default value of zero. You can verify this using hv.help
Step4: Now we can declare a DynamicMap where instead of specifying kdims, we instantiate Angle with an angle of 45º and pass it to the streams parameter of the DynamicMap
Step5: As expected, we see our ellipse with an angle of 45º as specified via the angle parameter of our Angle instance. In itself, this wouldn't be very useful but given that we have a handle on our DynamicMap dmap, we can now use the event method to update the angle parameter value and update the plot
Step6: When running this cell, the visualization above will jump to the 90º position! If you have already run the cell, just change the value above and re-run, and you'll see the plot above update.
This simple example shows how you can use the event method to update a visualization with any value you can generate in Python.
Step7: Periodic updates
Using streams you can animate your visualizations by driving them with events from Python. Of course, you could use loops to call the event method, but this approach can queue up events much faster than they can be visualized. Instead of inserting sleeps into your loops to avoid that problem, it is recommended you use the periodic method, which lets you specify a time period between updates (in seconds)
Step8: If you re-execute the above cell, you should see the preceding plot update continuously until the count value is reached.
Step9: Linked streams
Often, you will want to tie streams to specific user actions in the live JavaScript interface. There are no limitations on how you can generate updated stream parameter values in Python, and so you could manually support updating streams from JavaScript as long as it can communicate with Python to trigger an appropriate stream update. But as Python programmers, we would rather avoid writing JavaScript directly, so HoloViews supports the concept of linked stream classes where possible.
Currently, linked streams are only supported by the Bokeh plotting extension, because only Bokeh executes JavaScript in the notebook and has a suitable event system necessary to enable linked streams (matplotlib displays output as static PNG or SVG in the browser). Here is a simple linked stream example
Step10: When hovering over the plot above while it is backed by a live Python process, the crosshair will track the cursor.
The way it works is very simple
Step11: You can view other similar examples of custom interactivity in our reference gallery and learn more about linked streams in the user guide. Here is a quick summary of some of the more useful linked stream classes HoloViews currently offers and the parameters they supply | Python Code:
import numpy as np
import pandas as pd
import holoviews as hv
hv.extension('bokeh', 'matplotlib')
%opts Ellipse [xaxis=None yaxis=None] (color='red' line_width=2)
%opts Box [xaxis=None yaxis=None] (color='blue' line_width=2)
Explanation: <a href='http://www.holoviews.org'><img src="assets/hv+bk.png" alt="HV+BK logos" width="40%;" align="left"/></a>
<div style="float:right;"><h2>06. Custom Interactivity</h2></div>
In the exploring with containers section, the DynamicMap container was introduced. In that section, the arguments to the callable returning elements were supplied by HoloViews sliders. In this section, we will generalize the ways in which you can generate values to update a DynamicMap.
End of explanation
def annotations(angle):
radians = (angle / 180.) * np.pi
return (hv.Box(0,0,4, orientation=np.pi/4)
* hv.Ellipse(0,0,(2,4), orientation=radians)
* hv.Text(0,0,'{0}º'.format(float(angle))))
hv.DynamicMap(annotations, kdims=['angle']).redim.range(angle=(0, 360)).redim.label(angle='angle (º)')
Explanation: A simple DynamicMap
Let us now create a simple DynamicMap using three annotation elements, namely Box, Text, and Ellipse:
End of explanation
from holoviews import streams
from holoviews.streams import Stream
Angle = Stream.define('Angle', angle=0)
Explanation: This example uses the concepts introduced in the exploring with containers section. As before, the argument angle is supplied by the position of the 'angle' slider.
Introducing Streams
HoloViews offers a way of supplying the angle value to our annotation function through means other than sliders, namely via the streams system which you can learn about in the user guide.
All stream classes are found in the streams submodule and are subclasses of Stream. You can use Stream directly to make custom stream classes via the define classmethod:
End of explanation
hv.help(Angle)
Explanation: Here Angle is capitalized as it is a subclass of Stream with a numeric angle parameter, which has a default value of zero. You can verify this using hv.help:
End of explanation
%%opts Box (color='green')
dmap=hv.DynamicMap(annotations, streams=[Angle(angle=45)])
dmap
Explanation: Now we can declare a DynamicMap where instead of specifying kdims, we instantiate Angle with an angle of 45º and pass it to the streams parameter of the DynamicMap:
End of explanation
dmap.event(angle=90)
Explanation: As expected, we see our ellipse with an angle of 45º as specified via the angle parameter of our Angle instance. In itself, this wouldn't be very useful but given that we have a handle on our DynamicMap dmap, we can now use the event method to update the angle parameter value and update the plot:
End of explanation
# Exercise: Regenerate the DynamicMap, initializing the angle to 15 degrees
dmap = hv.DynamicMap(annotations, streams=[Angle(angle=15)])
dmap
# Exercise: Use dmap.event to set the angle shown to 145 degrees.
dmap.event(angle=145)
# Exercise: Do not specify an initial angle so that the default value of 0 degrees is used.
hv.DynamicMap(annotations, streams=[Angle()])
%%opts Ellipse (color='green')
%%output backend='matplotlib'
# Exercise: Use the cell magic %%output backend='matplotlib' to try the above with matplotlib
dmap = hv.DynamicMap(annotations, streams=[Angle(angle=15)])
dmap
dmap.event(angle=145)
# Exercise: Declare a DynamicMap using annotations2 and AngleAndSize
# Then use the event method to set the size to 1.5 and the angle to 30 degrees
def annotations2(angle, size):
radians = (angle / 180) * np.pi
return (hv.Box(0,0,4, orientation=np.pi/4)
* hv.Ellipse(0,0,(size,size*2), orientation=radians)
* hv.Text(0,0,'{0}º'.format(float(angle))))
AngleAndSize = Stream.define('AngleAndSize', angle=0., size=1.)
exercise_dmap = hv.DynamicMap(annotations2, streams=[AngleAndSize(angle=30, size=1.5)])
exercise_dmap
Explanation: When running this cell, the visualization above will jump to the 90º position! If you have already run the cell, just change the value above and re-run, and you'll see the plot above update.
This simple example shows how you can use the event method to update a visualization with any value you can generate in Python.
End of explanation
%%opts Ellipse (color='orange')
dmap2=hv.DynamicMap(annotations, streams=[Angle(angle=0)])
dmap2
dmap2.periodic(0.01, count=180, timeout=8, param_fn=lambda i: {'angle':i})
Explanation: Periodic updates
Using streams you can animate your visualizations by driving them with events from Python. Of course, you could use loops to call the event method, but this approach can queue up events much faster than they can be visualized. Instead of inserting sleeps into your loops to avoid that problem, it is recommended you use the periodic method, which lets you specify a time period between updates (in seconds):
End of explanation
# Exercise: Experiment with different period values. How fast can things update?
dmap2.periodic(0.00001, count=180, timeout=8, param_fn=lambda i: {'angle':i})
# Exercise: Increase count so that the oval completes a full rotation.
dmap2.periodic(0.01, count=360, timeout=15, param_fn=lambda i: {'angle':i})
# Exercise: Lower the timeout so the oval completes less than a quarter turn before stopping
# Note: The appropriate timeout will vary between different machines
dmap2.periodic(0.01, count=360, timeout=3, param_fn=lambda i: {'angle':i})
Explanation: If you re-execute the above cell, you should see the preceding plot update continuously until the count value is reached.
End of explanation
%%opts HLine [xaxis=None yaxis=None]
pointer = streams.PointerXY(x=0, y=0)
def crosshair(x, y):
return hv.Ellipse(0,0,1) * hv.HLine(y) * hv.VLine(x)
hv.DynamicMap(crosshair, streams=[pointer])
Explanation: Linked streams
Often, you will want to tie streams to specific user actions in the live JavaScript interface. There are no limitations on how you can generate updated stream parameter values in Python, and so you could manually support updating streams from JavaScript as long as it can communicate with Python to trigger an appropriate stream update. But as Python programmers, we would rather avoid writing JavaScript directly, so HoloViews supports the concept of linked stream classes where possible.
Currently, linked streams are only supported by the Bokeh plotting extension, because only Bokeh executes JavaScript in the notebook and has a suitable event system necessary to enable linked streams (matplotlib displays output as static PNG or SVG in the browser). Here is a simple linked stream example:
End of explanation
%%opts HLine [xaxis=None yaxis=None]
# Exercise: Set the defaults so that the crosshair initializes at x=0.25, y=0.25
pointer = streams.PointerXY(x=0.25, y=0.25)
def crosshair(x, y):
return hv.Ellipse(0,0,1) * hv.HLine(y) * hv.VLine(x)
hv.DynamicMap(crosshair, streams=[pointer])
%%opts Points [xaxis=None yaxis=None] (size=10 color='red')
# Exercise: Copy the above example and adapt it to make a red point of size 10 follow your cursor (using hv.Points)
# Exercise: Set the defaults so that the crosshair initializes at x=0.25, y=0.25
pointer = streams.PointerXY(x=0.25, y=0.25)
def crosshair(x, y):
return hv.Points((x,y))
hv.DynamicMap(crosshair, streams=[pointer])
Explanation: When hovering over the plot above while it is backed by a live Python process, the crosshair will track the cursor.
The way it works is very simple: the crosshair function puts a crosshair at whatever x,y location it is given, the pointer object supplies a stream of x,y values based on the mouse pointer location, and the DynamicMap object connects the pointer stream's x,y values to the crosshair function to generate the resulting plots.
End of explanation
%%opts Scatter[width=900 height=400 tools=['xbox_select'] ] (cmap='RdBu' line_color='black' size=5 line_width=0.5)
%%opts Scatter [color_index='latitude' colorbar=True colorbar_position='bottom' colorbar_opts={'title': 'Latitude'}]
eclipses = pd.read_csv('../data/eclipses_21C.csv', parse_dates=['date'])
magnitudes = hv.Scatter(eclipses, kdims=['hour_local'], vdims=['magnitude','latitude'])
def selection_example(index):
text = '{0} eclipses'.format(len(index)) if index else ''
return magnitudes * hv.Text(2,1, text)
dmap3 = hv.DynamicMap(selection_example, streams=[streams.Selection1D()])
dmap3.redim.label(magnitude='Eclipse Magnitude', hour_local='Hour (local time)')
Explanation: You can view other similar examples of custom interactivity in our reference gallery and learn more about linked streams in the user guide. Here is a quick summary of some of the more useful linked stream classes HoloViews currently offers and the parameters they supply:
PointerX/PointerY/PointerYX: The x,y or (x,y) position of the cursor.
SingleTap/DoubleTap/Tap: Position of single, double or all tap events.
BoundsX/BoundsY/BoundsXY: The x,y or x and y extents selected with the Bokeh box select tool.
RangeX/RangeY/RangeXY: The x,y or x and y range of the currently displayed axes
Selection1D: The selected glyphs as a 1D selection.
Any of these values can easily be tied to any visible element of your visualization.
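For instance, a minimal sketch of tying a BoundsXY stream to a plot could look like the following (an added illustration; it assumes the bounds parameter is an (x0, y0, x1, y1) tuple and that the Bokeh box_select tool is enabled):
%%opts Points [tools=['box_select']]
pts = hv.Points(np.random.randn(200, 2))
box = streams.BoundsXY(bounds=(0, 0, 1, 1))
def mark_bounds(bounds):
    # Overlay the current box-select region on the scatter
    return pts * hv.Bounds(bounds)
hv.DynamicMap(mark_bounds, streams=[box])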
A more advanced example
Let's now build a more advanced example using the eclipse dataset we explored earlier, where the stream supplies values when a particular Bokeh tool ("Box Select") is active:
End of explanation |
5,847 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Depression Identification Simulation
Note
Step1: 1. State assumptions about your data
X has size
Step2: Training & Test Utilities
Step3: 4 & 5. Sample data from a simulation setting & Compute Accuracy
Step4: 6. Plot accuracy vs. sample size in simulation
Step5: 7. Apply method directly on real data | Python Code:
import pandas as pd
import numpy as np
df_feats = pd.read_csv('reduced_data.csv')
df_labels = pd.read_csv('disorders.csv')['Depressed']
Explanation: Depression Identification Simulation
Note: The features are generated using PCA.ipynb.
End of explanation
np.random.seed(12345678) # for reproducibility, set random seed
r = 27 # define number of rois
N = 100 # number of samples at each iteration
p0 = 0.10
p1 = 0.15
# define number of subjects per class
S = np.array((8, 16, 20, 32, 40, 64, 80, 100, 120, 200, 320, 400, 800, 1000))
Explanation: 1. State assumptions about your data
X has size: n samples by m features.
X are i.i.d. random variables.
Features X<sub>ij</sub> (j = 1,...,m) are not identically distributed.
2. Formally define classification problem
Feature matrix X: R<sup>n x m</sup>
Each sample X<sub>i</sub> ∈ R<sup>m</sup>, i ∈ [1, n]
Label y<sub>i</sub> ∈ {0, 1}, i ∈ [1, n]
g(X) → y
G := { g: R<sup>m</sup> → {0, 1} }
Goal: g<sup>*</sup> = argmin<sub>g ∈ G</sub> E[L(g(X), y)], where L denotes loss function.
The loss function L differs for different classifiers and is specified in the classification context below:
Multinomial Naive Bayes: negative joint likelihood
L = -log p(X, y)
Logistic Regression: logistic loss (cross-entropy loss)
L = -log P(y|g(X)) = -(y · log(g(X)) + (1 - y) · log(1 - g(X)))
K Nearest Neighbors
L = ∑<sub>i</sub> D(X<sub>i</sub>|y<sub>i</sub>=1, X|y=1) + ∑<sub>i</sub> D(X<sub>i</sub>|y<sub>i</sub>=0, X|y=0)
D(a, b) = (a - b)<sup>2</sup>
Support Vector Machine: squared hinge loss
L = (max{0, 1 − y · g(x)})<sup>2</sup>
Random Forest
L = ∑<sub>i</sub> (g(X<sub>i</sub>) - y<sub>i</sub>)<sup>2</sup>
Quadratic Discriminant Analysis
L = max{0, 1 − y · g(x)}
3. Provide algorithm for solving problem (including choosing hyperparameters as appropriate)
Logistic Regression
penalty = 'l1' (l1 norm for penalization)
K Nearest Neighbors
n_neighbors = 7
Support Vector Machine (Linear Kernel)
C: default = 1.0 (penalty parameter of the error term)
Random Forest
n_estimators = 20 (number of trees)
criterion: default = 'gini'
Quadratic Discriminant Analysis
None
Note: Generate random samples and plot accuracies based on Greg's code.
Simulation Setup
End of explanation
from sklearn import cross_validation
from sklearn.cross_validation import LeaveOneOut
# Train the given classifier
def train_clf(clf, train_feats, train_labels):
# Supervised training
clf.fit(train_feats, train_labels)
# Test the given classifier and calculate accuracy
def test_clf(clf, test_feats, test_labels):
# Predict using test set
predicted = clf.predict(test_feats)
# Compute accuracy
acc = np.mean(predicted == test_labels)
return predicted, acc
# Compute accuracy of a model trained with a specific number (n) of samples
def compute_acc(clf, n):
train_clf(clf, train_X[:n], train_y[:n])
predict_y, acc = test_clf(clf, test_X, test_y)
return acc
# Leave one out cross validation
def loo_cv(clf, X, y):
loo = LeaveOneOut(len(X))
scores = cross_validation.cross_val_score(clf, X, y, cv=loo)
return scores.mean(), scores.std()
Explanation: Training & Test Utilities
End of explanation
from sklearn.cross_validation import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.qda import QDA
[acc_LG, acc_KNN, acc_SVM, acc_RF, acc_QDA] = [[] for i in xrange(5)]
[err_LG, err_KNN, err_SVM, err_RF, err_QDA] = [[] for i in xrange(5)] # accuracy standard deviation
for idx1, s in enumerate(S):
s0=s/2
s1=s/2
g0 = 1 * (np.random.rand(r, r, s0) > 1-p0)
g1 = 1 * (np.random.rand(r, r, s1) > 1-p1)
mbar0 = 1.0 * np.sum(g0, axis=(0,1))
mbar1 = 1.0 * np.sum(g1, axis=(0,1))
X = np.array((np.append(mbar0, mbar1), np.append(mbar0/(r**2), mbar1/(r**2)))).T
y = np.append(np.zeros(s0), np.ones(s1))
# Split the simulated data into training set and test set
# Randomly sample 20% data as the test set
train_X, test_X, train_y, test_y = cross_validation.train_test_split(X, y, test_size=0.2, random_state=42)
# Logistic Regression
lg = LogisticRegression(penalty='l1')
acc, acc_std = loo_cv(lg, X, y)
acc_LG.append(acc)
err_LG.append(acc_std)
# K Nearest Neighbors
knn = KNeighborsClassifier(n_neighbors=7)
acc, acc_std = loo_cv(knn, X, y)
acc_KNN.append(acc)
err_KNN.append(acc_std)
# Support Vector Machine
svc = LinearSVC()
acc, acc_std = loo_cv(svc, X, y)
acc_SVM.append(acc)
err_SVM.append(acc_std)
# Random Forest
rf = RandomForestClassifier(n_estimators=20)
acc, acc_std = loo_cv(rf, X, y)
acc_RF.append(acc)
err_RF.append(acc_std)
# Quadratic Discriminant Analysis
qda = QDA()
acc, acc_std = loo_cv(qda, X, y)
acc_QDA.append(acc)
err_QDA.append(acc_std)
Explanation: 4 & 5. Sample data from a simulation setting & Compute Accuracy
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
fig = plt.figure(1)
fig.set_size_inches(9, 6.5)
plt.errorbar(S, acc_LG, yerr=err_LG, label='Logistic Regression')
plt.errorbar(S, acc_KNN, yerr=err_KNN, label='K Nearest Neighbors')
plt.errorbar(S, acc_SVM, yerr=err_SVM, label='Support Vector Machine')
plt.errorbar(S, acc_RF, yerr=err_RF, label='Random Forest')
plt.errorbar(S, acc_QDA, yerr=err_QDA, label='Quadratic Discriminant Analysis')
plt.xscale('log')
plt.xlabel('number of samples')
plt.ylabel('accuracy')
plt.title('Accuracty of gender classification under simulated data')
plt.axhline(1, color='red', linestyle='--')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()
Explanation: 6. Plot accuracy vs. sample size in simulation
End of explanation
# Not needed in this notebook, only used in an earlier version, kept in case
def clean_negs(X):
# Get indices of columns that contain negative values
neg_col_inds = np.unique(np.where(X<0)[1])
# Subtract minimum negative for each column
for neg_i in neg_col_inds:
neg_col = X[:, neg_i]
min_neg = np.min(neg_col)
new_col = [c - min_neg for c in neg_col]
X[:, neg_i] = new_col
return X
'''
Data Preparation
'''
from sklearn.cross_validation import train_test_split
real_X = df_feats.get_values()
real_y = df_labels.get_values()
print 'Dataset size is', real_X.shape
# Logistic Regression
lg = LogisticRegression(penalty='l1')
acc_lg, acc_std_lg = loo_cv(lg, real_X, real_y)
# K Nearest Neighbors
knn = KNeighborsClassifier(n_neighbors=7)
acc_knn, acc_std_knn = loo_cv(knn, real_X, real_y)
# Support Vector Machine
svc = LinearSVC()
acc_svm, acc_std_svm = loo_cv(svc, real_X, real_y)
# Random Forest
rf = RandomForestClassifier(n_estimators=20)
acc_rf, acc_std_rf = loo_cv(rf, real_X, real_y)
# Quadratic Discriminant Analysis
qda = QDA()
acc_qda, acc_std_qda = loo_cv(qda, real_X, real_y)
print 'Logistic Regression accuracy is %0.4f (+/- %0.3f)' % (acc_lg, acc_std_lg)
print 'K Nearest Neighbors accuracy is %0.4f (+/- %0.3f)' % (acc_knn, acc_std_knn)
print 'Support Vector Machine (Linear Kernel) accuracy is %0.4f (+/- %0.3f)' % (acc_svm, acc_std_svm)
print 'Random Forest accuracy is %0.4f (+/- %0.3f)' % (acc_rf, acc_std_rf)
print 'Quadratic Discriminant Analysis accuracy is %0.4f (+/- %0.3f)' % (acc_qda, acc_std_qda)
# Visualize classifier performance
x = range(5)
y = [acc_lg, acc_knn, acc_svm, acc_rf, acc_qda]
clf_names = ['Logistic Regression', 'K Nearest Neighbors', \
'Support Vector Machine', 'Random Forest', 'Quadratic Discriminant Analysis']
width = 0.6/1.2
plt.bar(x, y, width)
plt.title('Classifier Performance')
plt.xticks(x, clf_names, rotation=25)
plt.ylabel('Accuracy')
Explanation: 7. Apply method directly on real data
End of explanation |
5,848 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Horizontal Bar Charts
Best suited for comparing categories
Example 1
Step1: You can also save your chart with the save method
Step2: Example 2
Step3: Vertical Bar Charts
Ideal for a small number of labels or a portion of time-dependent values
Fixed width
Example 3
Step4: Example 4 | Python Code:
data = dict(
labels=['Bananas','Apples','Oranges','Watermelons','Grapes','Kiwis'],
values=[4000,8000,3000,1600,1000,2500]
)
out = StdCharts.HBar(data)
HTML(out)
Explanation: Horizontal Bar Charts
Best suited for comparing categories
Example 1: default options, "as is"
End of explanation
StdCharts.save(out,'report1.png')
Explanation: You can also save your chart with the save method:
End of explanation
data = dict(
labels=['Bananas','Apples','Oranges','Watermelons','Grapes','Kiwis'],
values=[4000,8000,3000,1600,1000,2500]
)
out = StdCharts.HBar(
data = data,
width=600,
color='#996666',
title='Fruit prices ($)',
source='Source: Local Market',
fill='rgb(220, 75, 30)',
values_sorted=True,
paper='#f6f6f6',
locale='en',
font='Tahoma')
HTML(out)
StdCharts.save(out,'report2.png')
Explanation: Example 2: sample options, sorting enabled
End of explanation
data = dict(
labels=[2010,2011,2012,2013,2014,2015,2016,2017],
values=[4000,8000,3000,1600,1000,2500,4300,4200]
)
out = StdCharts.VBar(data=data,paper='#f3f3f3')
HTML(out)
StdCharts.save(out,'report3.png')
Explanation: Vertical Bar Charts
Ideal for a small number of labels or a portion of time-dependent values
Fixed width
Example 3: defaults
End of explanation
data = dict(
labels=['City ABCD','City EFGHM','City OPQRSTUV'],
values=[200,100,500]
)
out = StdCharts.VBar(data=data,values_sorted=True,color="#446699",fill="#999999")
HTML(out)
StdCharts.save(out,'report4.png')
Explanation: Example 4
End of explanation |
5,849 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Dropout and Data Augmentation
In this exercise we will implement two ways to reduce overfitting.
Like the previous assignment, we will train ConvNets to recognize the categories in CIFAR-10. However unlike the previous assignment where we used 49,000 images for training, in this exercise we will use just 500 images for training.
If we try to train a high-capacity model like a ConvNet on this small amount of data, we expect to overfit, and end up with a solution that does not generalize. We will see that we can drastically reduce overfitting by using dropout and data augmentation.
Step3: Load data
For this exercise our training set will contain 500 images and our validation and test sets will contain 1000 images as usual.
Step4: Overfit
Now that we've loaded our data, we will attempt to train a three layer convnet on this data. The three layer convnet has the architecture
conv - relu - pool - affine - relu - affine - softmax
We will use 32 5x5 filters, and our hidden affine layer will have 128 neurons.
This is a very expressive model given that we have only 500 training samples, so we should expect to massively overfit this dataset, and achieve a training accuracy of nearly 0.9 with a much lower validation accuracy.
Step5: Dropout
The first way we will reduce overfitting is to use dropout.
You have already implemented this in Q1 of this exercise, but let's just check that it still works
Step6: Data Augmentation
The next way we will reduce overfitting is to implement data augmentation. Since we have very little training data, we will use what little training data we have to generate artificial data, and use this artificial data to train our network.
CIFAR-10 images are 32x32, and up until this point we have used the entire image as input to our convnets. Now we will do something different
Step7: Train again
We will now train a new network with the same training data and the same architecture, but using data augmentation and dropout.
If everything works, you should see a higher validation accuracy than above and a smaller gap between the training accuracy and the validation accuracy.
Networks with dropout usually take a bit longer to train, so we will use more training epochs this time. | Python Code:
# A bit of setup
import numpy as np
import matplotlib.pyplot as plt
from time import time
from cs231n.layers import *
from cs231n.fast_layers import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
returns relative error
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
Explanation: Dropout and Data Augmentation
In this exercise we will implement two ways to reduce overfitting.
Like the previous assignment, we will train ConvNets to recognize the categories in CIFAR-10. However unlike the previous assignment where we used 49,000 images for training, in this exercise we will use just 500 images for training.
If we try to train a high-capacity model like a ConvNet on this small amount of data, we expect to overfit, and end up with a solution that does not generalize. We will see that we can drastically reduce overfitting by using dropout and data augmentation.
End of explanation
from cs231n.data_utils import load_CIFAR10
def get_CIFAR10_data(num_training=500, num_validation=1000, num_test=1000, normalize=True):
Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
it for the two-layer neural net classifier. These are the same steps as
we used for the SVM, but condensed to a single function.
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean image
if normalize:
mean_image = np.mean(X_train, axis=0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
# Transpose so that channels come first
X_train = X_train.transpose(0, 3, 1, 2).copy()
X_val = X_val.transpose(0, 3, 1, 2).copy()
X_test = X_test.transpose(0, 3, 1, 2).copy()
return X_train, y_train, X_val, y_val, X_test, y_test
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data(num_training=500)
print 'Train data shape: ', X_train.shape
print 'Train labels shape: ', y_train.shape
print 'Validation data shape: ', X_val.shape
print 'Validation labels shape: ', y_val.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
Explanation: Load data
For this exercise our training set will contain 500 images and our validation and test sets will contain 1000 images as usual.
End of explanation
from cs231n.classifiers.convnet import *
from cs231n.classifier_trainer import ClassifierTrainer
model = init_three_layer_convnet(filter_size=5, num_filters=(32, 128))
trainer = ClassifierTrainer()
best_model, loss_history, train_acc_history, val_acc_history = trainer.train(
X_train, y_train, X_val, y_val, model, three_layer_convnet, dropout=None,
reg=0.05, learning_rate=0.00005, batch_size=50, num_epochs=15,
learning_rate_decay=1.0, update='rmsprop', verbose=True)
# Visualize the loss and accuracy for our network trained on a small dataset
plt.subplot(2, 1, 1)
plt.plot(train_acc_history)
plt.plot(val_acc_history)
plt.title('accuracy vs time')
plt.legend(['train', 'val'], loc=4)
plt.xlabel('epoch')
plt.ylabel('classification accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss_history)
plt.title('loss vs time')
plt.xlabel('iteration')
plt.ylabel('loss')
plt.show()
Explanation: Overfit
Now that we've loaded our data, we will attempt to train a three layer convnet on this data. The three layer convnet has the architecture
conv - relu - pool - affine - relu - affine - softmax
We will use 32 5x5 filters, and our hidden affine layer will have 128 neurons.
This is a very expressive model given that we have only 500 training samples, so we should expect to massively overfit this dataset, and achieve a training accuracy of nearly 0.9 with a much lower validation accuracy.
End of explanation
# Check the dropout forward pass
x = np.random.randn(100, 100)
dropout_param_train = {'p': 0.25, 'mode': 'train'}
dropout_param_test = {'p': 0.25, 'mode': 'test'}
out_train, _ = dropout_forward(x, dropout_param_train)
out_test, _ = dropout_forward(x, dropout_param_test)
# Test dropout training mode; about 25% of the elements should be nonzero
print np.mean(out_train != 0)
# Test dropout test mode; all of the elements should be nonzero
print np.mean(out_test != 0)
from cs231n.gradient_check import eval_numerical_gradient_array
# Check the dropout backward pass
x = np.random.randn(5, 4)
dout = np.random.randn(*x.shape)
dropout_param = {'p': 0.8, 'mode': 'train', 'seed': 123}
dx_num = eval_numerical_gradient_array(lambda x: dropout_forward(x, dropout_param)[0], x, dout)
_, cache = dropout_forward(x, dropout_param)
dx = dropout_backward(dout, cache)
# The error should be around 1e-12
print 'Testing dropout_backward function:'
print 'dx error: ', rel_error(dx_num, dx)
Explanation: Dropout
The first way we will reduce overfitting is to use dropout.
You have already implemented this in Q1 of this exercise, but let's just check that it still works :-)
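For reference, one possible shape of these functions is sketched below (an added, hedged sketch rather than the assignment's reference solution; it treats p as the keep probability, consistent with the check above where p=0.25 leaves roughly 25% of the entries nonzero):
def dropout_forward(x, dropout_param):
    # Inverted dropout sketch: scale at train time so the test-time pass is a no-op
    p, mode = dropout_param['p'], dropout_param['mode']
    if 'seed' in dropout_param:
        np.random.seed(dropout_param['seed'])
    if mode == 'train':
        mask = (np.random.rand(*x.shape) < p) / p
        out = x * mask
    else:
        mask = None
        out = x
    return out, (dropout_param, mask)

def dropout_backward(dout, cache):
    dropout_param, mask = cache
    if dropout_param['mode'] == 'train':
        return dout * mask
    return dout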
End of explanation
from cs231n.data_augmentation import *
X = get_CIFAR10_data(num_training=100, normalize=False)[0]
num_imgs = 8
print X.dtype
X = X[np.random.randint(100, size=num_imgs)]
X_flip = random_flips(X)
X_rand_crop = random_crops(X, (28, 28))
# To give more dramatic visualizations we use large scales for random contrast
# and tint adjustment.
X_contrast = random_contrast(X, scale=(0.5, 1.0))
X_tint = random_tint(X, scale=(-50, 50))
next_plt = 1
for i in xrange(num_imgs):
titles = ['original', 'flip', 'rand crop', 'contrast', 'tint']
for j, XX in enumerate([X, X_flip, X_rand_crop, X_contrast, X_tint]):
plt.subplot(num_imgs, 5, next_plt)
img = XX[i].transpose(1, 2, 0)
if j == 4:
# For visualization purposes we rescale the pixel values of the
# tinted images
low, high = np.min(img), np.max(img)
img = 255 * (img - low) / (high - low)
plt.imshow(img.astype('uint8'))
if i == 0:
plt.title(titles[j])
plt.gca().axis('off')
next_plt += 1
plt.show()
Explanation: Data Augmentation
The next way we will reduce overfitting is to implement data augmentation. Since we have very little training data, we will use what little training data we have to generate artificial data, and use this artificial data to train our network.
CIFAR-10 images are 32x32, and up until this point we have used the entire image as input to our convnets. Now we will do something different: our convnet will expect a smaller input (say 28x28). Instead of feeding our training images directly to the convnet, at training time we will randomly crop each training image to 28x28, randomly flip half of the training images horizontally, and randomly adjust the contrast and tint of each training image.
Open the file cs231n/data_augmentation.py and implement the random_flips, random_crops, random_contrast, and random_tint functions. In the same file we have implemented the fixed_crops function to get you started. When you are done you can run the cell below to visualize the effects of each type of data augmentation.
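As a rough guide, a minimal random_flips could look like the sketch below (an added illustration, not the official solution); the other augmentation functions follow the same pattern of operating on arrays of shape (N, C, H, W):
def random_flips(X):
    # Flip roughly half of the images horizontally by reversing the width axis
    N = X.shape[0]
    flip = np.random.rand(N) < 0.5
    out = X.copy()
    out[flip] = out[flip, :, :, ::-1]
    return out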
End of explanation
input_shape = (3, 28, 28)
def augment_fn(X):
out = random_flips(random_crops(X, input_shape[1:]))
out = random_tint(random_contrast(out))
return out
def predict_fn(X):
return fixed_crops(X, input_shape[1:], 'center')
model = init_three_layer_convnet(filter_size=5, input_shape=input_shape, num_filters=(32, 128))
trainer = ClassifierTrainer()
best_model, loss_history, train_acc_history, val_acc_history = trainer.train(
X_train, y_train, X_val, y_val, model, three_layer_convnet,
reg=0.05, learning_rate=0.00005, learning_rate_decay=1.0,
batch_size=50, num_epochs=30, update='rmsprop', verbose=True, dropout=0.6,
augment_fn=augment_fn, predict_fn=predict_fn)
# Visualize the loss and accuracy for our network trained with dropout and data augmentation.
# You should see less overfitting, and you may also see slightly better performance on the
# validation set.
plt.subplot(2, 1, 1)
plt.plot(train_acc_history)
plt.plot(val_acc_history)
plt.title('accuracy vs time')
plt.legend(['train', 'val'], loc=4)
plt.xlabel('epoch')
plt.ylabel('classification accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss_history)
plt.title('loss vs time')
plt.xlabel('iteration')
plt.ylabel('loss')
plt.show()
Explanation: Train again
We will now train a new network with the same training data and the same architecture, but using data augmentation and dropout.
If everything works, you should see a higher validation accuracy than above and a smaller gap between the training accuracy and the validation accuracy.
Networks with dropout usually take a bit longer to train, so we will use more training epochs this time.
End of explanation |
5,850 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reusable Embeddings
Learning Objectives
1. Learn how to use pre-trained TF Hub text modules to generate sentence vectors
1. Learn how to incorporate a pre-trained TF-Hub module into a Keras model
1. Learn how to deploy and use a text model on CAIP
Introduction
In this notebook, we will implement text models to recognize the probable source (Github, Tech-Crunch, or The New-York Times) of the titles we have in the title dataset.
First, we will load and pre-process the texts and labels so that they are suitable to be fed to sequential Keras models whose first layer is a pre-trained TF-Hub module. Thanks to this first layer, we won't need to tokenize and integerize the text before passing it to our models. The pre-trained layer will take care of that for us and consume raw text directly. However, we will still have to one-hot-encode each of the 3 classes into a 3-dimensional basis vector.
Then we will build, train and compare simple DNN models starting with different pre-trained TF-Hub layers.
Step1: Replace the variable values in the cell below
Step2: Create a Dataset from BigQuery
Hacker news headlines are available as a BigQuery public dataset. The dataset contains all headlines from the site's inception in October 2006 until October 2015.
Here is a sample of the dataset
Step3: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http
Step6: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning.
Step7: For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML however figures out on its own how to create these splits, so we won't need to do that here.
Step8: AutoML for text classification requires that
* the dataset be in csv form with
* the first column being the texts to classify or a GCS path to the text
* the last column being the text labels
The dataset we pulled from BigQuery satisfies these requirements.
Step9: Let's make sure we have roughly the same number of labels for each of our three labels
Step10: Finally we will save our data, which is currently in-memory, to disk.
We will create a csv file containing the full dataset and another containing only 1000 articles for development.
Note
Step11: Now let's sample 1000 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see here for further details on how to prepare data for AutoML).
Step12: Let's write the sample dataset to disk.
Step13: Let's start by specifying where the information about the trained models will be saved as well as where our dataset is located
Step14: Loading the dataset
As in the previous labs, our dataset consists of titles of articles along with the label indicating from which source these articles have been taken from (GitHub, Tech-Crunch, or the New-York Times)
Step15: Let's look again at the number of examples per label to make sure we have a well-balanced dataset
Step16: Preparing the labels
In this lab, we will use pre-trained TF-Hub embedding modules for English as the first layer of our models. One immediate
advantage of doing so is that the TF-Hub embedding module will take care of processing the raw text for us.
This also means that our model will be able to consume text directly instead of sequences of integers representing the words.
However, as before, we still need to preprocess the labels into one-hot-encoded vectors
Step17: Preparing the train/test splits
Let's split our data into train and test splits
Step18: To be on the safe side, we verify that the train and test splits
have roughly the same number of examples per class.
Since it is the case, accuracy will be a good metric to use to measure
the performance of our models.
Step19: Now let's create the features and labels we will feed our models with
Step20: NNLM Model
We will first try a word embedding pre-trained using a Neural Probabilistic Language Model. TF-Hub has a 50-dimensional one called
nnlm-en-dim50-with-normalization, which also
normalizes the vectors produced.
Once loaded from its url, the TF-hub module can be used as a normal Keras layer in a sequential or functional model. Since we have enough data to fine-tune the parameters of the pre-trained embedding itself, we will set trainable=True in the KerasLayer that loads the pre-trained embedding
Step21: Note that this TF-Hub embedding produces a single 50-dimensional vector when passed a sentence
Step22: Swivel Model
Then we will try a word embedding obtained using Swivel, an algorithm that essentially factorizes word co-occurrence matrices to create the words embeddings.
TF-Hub hosts the pretrained gnews-swivel-20dim-with-oov 20-dimensional Swivel module.
Step23: Similarly as the previous pre-trained embedding, it outputs a single vector when passed a sentence
Step24: Building the models
Let's write a function that
takes as input an instance of a KerasLayer (i.e. the swivel_module or the nnlm_module we constructed above) as well as the name of the model (say swivel or nnlm)
returns a compiled Keras sequential model starting with this pre-trained TF-hub layer, adding one or more dense relu layers to it, and ending with a softmax layer giving the probability of each of the classes
Step25: Let's also wrap the training code into a train_and_evaluate function that
* takes as input the training and validation data, as well as the compiled model itself, and the batch_size
* trains the compiled model for 100 epochs at most, and does early-stopping when the validation loss is no longer decreasing
* returns a history object, which will help us to plot the learning curves
Step26: Training NNLM
Step27: Training Swivel
Step28: Comparing the models
Swivel trains faster but achieves a lower validation accuracy, and requires more epochs to train.
At last, let's compare all the models we have trained at once using TensorBoard in order
to choose the one that overfits the least for the same performance level.
Running the following command will launch TensorBoard on port 6006. This will
block the notebook execution, so you'll have to interrupt that cell first before
you can run other cells.
Step29: Deploying the model
The first step is to serialize one of our trained Keras models as a SavedModel
Step30: Then we can deploy the model using the gcloud CLI as before
Step31: Before we try our deployed model, let's inspect its signature to know what to send to the deployed API
Step32: Let's go ahead and hit our model | Python Code:
import os
from google.cloud import bigquery
import pandas as pd
%load_ext google.cloud.bigquery
Explanation: Reusable Embeddings
Learning Objectives
1. Learn how to use a pre-trained TF-Hub text module to generate sentence vectors
1. Learn how to incorporate a pre-trained TF-Hub module into a Keras model
1. Learn how to deploy and use a text model on CAIP
Introduction
In this notebook, we will implement text models to recognize the probable source (Github, Tech-Crunch, or The New-York Times) of the titles we have in the title dataset.
First, we will load and pre-process the texts and labels so that they are suitable to be fed to sequential Keras models whose first layer is a pre-trained TF-Hub module. Thanks to this first layer, we won't need to tokenize and integerize the text before passing it to our models. The pre-trained layer will take care of that for us, consuming raw text directly. However, we will still have to one-hot-encode each of the 3 classes into a 3-dimensional basis vector.
Then we will build, train and compare simple DNN models starting with different pre-trained TF-Hub layers.
End of explanation
PROJECT = "cloud-training-demos" # Replace with your PROJECT
BUCKET = PROJECT # defaults to PROJECT
REGION = "us-central1" # Replace with your REGION
SEED = 0
Explanation: Replace the variable values in the cell below:
End of explanation
%%bigquery --project $PROJECT
SELECT
url, title, score
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
LENGTH(title) > 10
AND score > 10
AND LENGTH(url) > 0
LIMIT 10
Explanation: Create a Dataset from BigQuery
Hacker News headlines are available as a BigQuery public dataset. The dataset contains all headlines from the site's inception in October 2006 until October 2015.
Here is a sample of the dataset:
End of explanation
%%bigquery --project $PROJECT
SELECT
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
COUNT(title) AS num_articles
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
AND LENGTH(title) > 10
GROUP BY
source
ORDER BY num_articles DESC
LIMIT 100
Explanation: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http://mobile.nytimes.com/...., I want to be left with nytimes
End of explanation
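As a quick aside (illustrative only, with a made-up URL), the same extraction logic can be checked in plain Python before running it in BigQuery:
import re
match = re.search(r'.*://(.[^/]+)/', 'http://mobile.nytimes.com/2020/some-article')
domain_parts = match.group(1).split('.')  # ['mobile', 'nytimes', 'com']
print(domain_parts[-2])  # -> 'nytimes', the same token BigQuery's ARRAY_REVERSE(...)[OFFSET(1)] picks out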
regex = '.*://(.[^/]+)/'
sub_query =
SELECT
title,
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '{0}'), '.'))[OFFSET(1)] AS source
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '{0}'), '.com$')
AND LENGTH(title) > 10
.format(regex)
query =
SELECT
LOWER(REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ')) AS title,
source
FROM
({sub_query})
WHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')
.format(sub_query=sub_query)
print(query)
Explanation: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning.
End of explanation
bq = bigquery.Client(project=PROJECT)
title_dataset = bq.query(query).to_dataframe()
title_dataset.head()
Explanation: For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML however figures out on its own how to create these splits, so we won't need to do that here.
End of explanation
print("The full dataset contains {n} titles".format(n=len(title_dataset)))
Explanation: AutoML for text classification requires that
* the dataset be in csv form with
* the first column being the texts to classify or a GCS path to the text
* the last column being the text labels
The dataset we pulled from BigQuery satisfies these requirements.
End of explanation
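For illustration (hypothetical rows, not taken from the real dataset), this is the shape the exported CSV should have, with the title text first, the label last, and no header:
print(pd.DataFrame([
    ['facebook announces a new ai chip', 'techcrunch'],
    ['senate passes spending bill', 'nytimes'],
]).to_csv(index=False, header=False))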
title_dataset.source.value_counts()
Explanation: Let's make sure we have roughly the same number of labels for each of our three labels:
End of explanation
DATADIR = './data/'
if not os.path.exists(DATADIR):
os.makedirs(DATADIR)
FULL_DATASET_NAME = 'titles_full.csv'
FULL_DATASET_PATH = os.path.join(DATADIR, FULL_DATASET_NAME)
# Let's shuffle the data before writing it to disk.
title_dataset = title_dataset.sample(n=len(title_dataset))
title_dataset.to_csv(
FULL_DATASET_PATH, header=False, index=False, encoding='utf-8')
Explanation: Finally we will save our data, which is currently in-memory, to disk.
We will create a csv file containing the full dataset and another containing only 1000 articles for development.
Note: It may take a long time to train AutoML on the full dataset, so we recommend using the sample dataset for the purpose of learning the tool.
End of explanation
sample_title_dataset = title_dataset.sample(n=1000)
sample_title_dataset.source.value_counts()
Explanation: Now let's sample 1000 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see here for further details on how to prepare data for AutoML).
End of explanation
SAMPLE_DATASET_NAME = 'titles_sample.csv'
SAMPLE_DATASET_PATH = os.path.join(DATADIR, SAMPLE_DATASET_NAME)
sample_title_dataset.to_csv(
SAMPLE_DATASET_PATH, header=False, index=False, encoding='utf-8')
sample_title_dataset.head()
import datetime
import os
import shutil
import pandas as pd
import tensorflow as tf
from tensorflow.keras.callbacks import TensorBoard, EarlyStopping
from tensorflow_hub import KerasLayer
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.utils import to_categorical
print(tf.__version__)
%matplotlib inline
Explanation: Let's write the sample dataset to disk.
End of explanation
MODEL_DIR = "./text_models"
DATA_DIR = "./data"
Explanation: Let's start by specifying where the information about the trained models will be saved as well as where our dataset is located:
End of explanation
ls ./data/
DATASET_NAME = "titles_full.csv"
TITLE_SAMPLE_PATH = os.path.join(DATA_DIR, DATASET_NAME)
COLUMNS = ['title', 'source']
titles_df = pd.read_csv(TITLE_SAMPLE_PATH, header=None, names=COLUMNS)
titles_df.head()
Explanation: Loading the dataset
As in the previous labs, our dataset consists of titles of articles along with the label indicating from which source these articles have been taken from (GitHub, Tech-Crunch, or the New-York Times):
End of explanation
titles_df.source.value_counts()
Explanation: Let's look again at the number of examples per label to make sure we have a well-balanced dataset:
End of explanation
CLASSES = {
'github': 0,
'nytimes': 1,
'techcrunch': 2
}
N_CLASSES = len(CLASSES)
def encode_labels(sources):
classes = [CLASSES[source] for source in sources]
one_hots = to_categorical(classes, num_classes=N_CLASSES)
return one_hots
encode_labels(titles_df.source[:4])
Explanation: Preparing the labels
In this lab, we will use pre-trained TF-Hub embedding modules for English as the first layer of our models. One immediate
advantage of doing so is that the TF-Hub embedding module will take care of processing the raw text for us.
This also means that our model will be able to consume text directly instead of sequences of integers representing the words.
However, as before, we still need to preprocess the labels into one-hot-encoded vectors:
End of explanation
N_TRAIN = int(len(titles_df) * 0.95)
titles_train, sources_train = (
titles_df.title[:N_TRAIN], titles_df.source[:N_TRAIN])
titles_valid, sources_valid = (
titles_df.title[N_TRAIN:], titles_df.source[N_TRAIN:])
Explanation: Preparing the train/test splits
Let's split our data into train and test splits:
End of explanation
sources_train.value_counts()
sources_valid.value_counts()
Explanation: To be on the safe side, we verify that the train and test splits
have roughly the same number of examples per class.
Since it is the case, accuracy will be a good metric to use to measure
the performance of our models.
End of explanation
X_train, Y_train = titles_train.values, encode_labels(sources_train)
X_valid, Y_valid = titles_valid.values, encode_labels(sources_valid)
X_train[:3]
Y_train[:3]
Explanation: Now let's create the features and labels we will feed our models with:
End of explanation
# TODO 1
NNLM = "https://tfhub.dev/google/nnlm-en-dim50/2"
nnlm_module = KerasLayer(
NNLM, output_shape=[50], input_shape=[], dtype=tf.string, trainable=True)
Explanation: NNLM Model
We will first try a word embedding pre-trained using a Neural Probabilistic Language Model. TF-Hub has a 50-dimensional one called
nnlm-en-dim50-with-normalization, which also
normalizes the vectors produced.
Once loaded from its url, the TF-hub module can be used as a normal Keras layer in a sequential or functional model. Since we have enough data to fine-tune the parameters of the pre-trained embedding itself, we will set trainable=True in the KerasLayer that loads the pre-trained embedding:
End of explanation
# TODO 1
nnlm_module(tf.constant(["The dog is happy to see people in the street."]))
Explanation: Note that this TF-Hub embedding produces a single 50-dimensional vector when passed a sentence:
End of explanation
# TODO 1
SWIVEL = "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim-with-oov/1"
swivel_module = KerasLayer(
SWIVEL, output_shape=[20], input_shape=[], dtype=tf.string, trainable=True)
Explanation: Swivel Model
Then we will try a word embedding obtained using Swivel, an algorithm that essentially factorizes word co-occurrence matrices to create the words embeddings.
TF-Hub hosts the pretrained gnews-swivel-20dim-with-oov 20-dimensional Swivel module.
End of explanation
# TODO 1
swivel_module(tf.constant(["The dog is happy to see people in the street."]))
Explanation: Similarly as the previous pre-trained embedding, it outputs a single vector when passed a sentence:
End of explanation
def build_model(hub_module, name):
model = Sequential([
hub_module, # TODO 2
Dense(16, activation='relu'),
Dense(N_CLASSES, activation='softmax')
], name=name)
model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy']
)
return model
Explanation: Building the models
Let's write a function that
takes as input an instance of a KerasLayer (i.e. the swivel_module or the nnlm_module we constructed above) as well as the name of the model (say swivel or nnlm)
returns a compiled Keras sequential model starting with this pre-trained TF-hub layer, adding one or more dense relu layers to it, and ending with a softmax layer giving the probability of each of the classes:
End of explanation
def train_and_evaluate(train_data, val_data, model, batch_size=5000):
X_train, Y_train = train_data
tf.random.set_seed(33)
model_dir = os.path.join(MODEL_DIR, model.name)
if tf.io.gfile.exists(model_dir):
tf.io.gfile.rmtree(model_dir)
history = model.fit(
X_train, Y_train,
epochs=100,
batch_size=batch_size,
validation_data=val_data,
callbacks=[EarlyStopping(), TensorBoard(model_dir)],
)
return history
Explanation: Let's also wrap the training code into a train_and_evaluate function that
* takes as input the training and validation data, as well as the compiled model itself, and the batch_size
* trains the compiled model for 100 epochs at most, and does early-stopping when the validation loss is no longer decreasing
* returns a history object, which will help us to plot the learning curves
End of explanation
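One refinement worth knowing about (an aside, not used in this lab): EarlyStopping can be given some patience and told to keep the best weights.
# EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)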
data = (X_train, Y_train)
val_data = (X_valid, Y_valid)
nnlm_model = build_model(nnlm_module, 'nnlm')
nnlm_history = train_and_evaluate(data, val_data, nnlm_model)
history = nnlm_history
pd.DataFrame(history.history)[['loss', 'val_loss']].plot()
pd.DataFrame(history.history)[['accuracy', 'val_accuracy']].plot()
Explanation: Training NNLM
End of explanation
swivel_model = build_model(swivel_module, name='swivel')
swivel_history = train_and_evaluate(data, val_data, swivel_model)
history = swivel_history
pd.DataFrame(history.history)[['loss', 'val_loss']].plot()
pd.DataFrame(history.history)[['accuracy', 'val_accuracy']].plot()
Explanation: Training Swivel
End of explanation
!tensorboard --logdir $MODEL_DIR --port 6006
Explanation: Comparing the models
Swivel trains faster but achieves a lower validation accuracy, and requires more epochs to train.
At last, let's compare all the models we have trained at once using TensorBoard in order
to choose the one that overfits the least for the same performance level.
Running the following command will launch TensorBoard on port 6006. This will
block the notebook execution, so you'll have to interrupt that cell first before
you can run other cells.
End of explanation
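If you prefer not to block the notebook, the TensorBoard notebook extension is an alternative (an aside, not used in the rest of this lab):
%load_ext tensorboard
%tensorboard --logdir ./text_models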
OUTPUT_DIR = "./savedmodels"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(OUTPUT_DIR, 'swivel')
os.environ['EXPORT_PATH'] = EXPORT_PATH
shutil.rmtree(EXPORT_PATH, ignore_errors=True)
tf.saved_model.save(swivel_model, EXPORT_PATH)
Explanation: Deploying the model
The first step is to serialize one of our trained Keras models as a SavedModel:
End of explanation
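Before deploying, it can be useful to reload the exported SavedModel locally and call its serving signature as a sanity check. This is an aside; the input key name used here is an assumption and must match the signature inspected further below (shown there as keras_layer_1_input).
loaded = tf.saved_model.load(EXPORT_PATH)
serving_fn = loaded.signatures['serving_default']
print(serving_fn(keras_layer_1_input=tf.constant(['github launches a new code search'])))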
%%bash
# TODO 5
PROJECT=# TODO: Change this to your PROJECT
BUCKET=${PROJECT}
REGION=us-east1
MODEL_NAME=title_model
VERSION_NAME=swivel
EXPORT_PATH=$EXPORT_PATH
if [[ $(gcloud ai-platform models list --format='value(name)' | grep $MODEL_NAME) ]]; then
echo "$MODEL_NAME already exists"
else
echo "Creating $MODEL_NAME"
gcloud ai-platform models create --regions=$REGION $MODEL_NAME
fi
if [[ $(gcloud ai-platform versions list --model $MODEL_NAME --format='value(name)' | grep $VERSION_NAME) ]]; then
echo "Deleting already existing $MODEL_NAME:$VERSION_NAME ... "
echo yes | gcloud ai-platform versions delete --model=$MODEL_NAME $VERSION_NAME
echo "Please run this cell again if you don't see a Creating message ... "
sleep 2
fi
echo "Creating $MODEL_NAME:$VERSION_NAME"
gcloud beta ai-platform versions create \
--model=$MODEL_NAME $VERSION_NAME \
--framework=tensorflow \
--python-version=3.7 \
--runtime-version=1.15 \
--origin=$EXPORT_PATH \
--staging-bucket=gs://$BUCKET \
--machine-type n1-standard-4
Explanation: Then we can deploy the model using the gcloud CLI as before:
End of explanation
!saved_model_cli show \
--tag_set serve \
--signature_def serving_default \
--dir {EXPORT_PATH}
!find {EXPORT_PATH}
Explanation: Before we try our deployed model, let's inspect its signature to know what to send to the deployed API:
End of explanation
%%writefile input.json
{"keras_layer_1_input": "hello"}
!gcloud ai-platform predict \
--model title_model \
--json-instances input.json \
--version swivel
Explanation: Let's go ahead and hit our model:
End of explanation |
5,851 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mri', 'sandbox-3', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: MRI
Source ID: SANDBOX-3
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:19
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation |
5,852 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using the BISON API
The USGS provides an API for accessing species observation data. https
Step1: Yikes, that's much less readable than the NWIS output!
Well, that's because the response from the BISON server is in JSON format. We can use Python's built-in json module to convert this raw JSON text to a handy dictionary (of dictionaries) that we can [somewhat] easily manipulate...
Step2: So we see the Bison observations are stored as a list of dictionaries which are accessed within the data key in the results dictionary generated from the JSON response to our API request. (Phew!)
With a bit more code we can loop through all the data records and print out the lat and long coordinates...
Step3: Or we can witness Pandas cleverness again! Here, we convert the collection of observations into a data frame
Step4: And Pandas allows us to do some nifty analyses, including subsetting records where the provider is 'iNaturalist.org' | Python Code:
#First, import the wonderful requests module
import requests
#Now, we'll deconstruct the example URL into the service URL and parameters, saving the parameters as a dictionary
url = 'http://bison.usgs.gov/api/search.json'
params = {'species':'Bison bison',
'type':'scientific_name',
'start':'0',
'count':'10'
}
response = requests.get(url,params)
print(response.content)
Explanation: Using the BISON API
The USGS provides an API for accessing species observation data. https://bison.usgs.gov/doc/api.jsp
This API is much better documented than the NWIS API and we'll use it to dig a bit deeper into how the requests package can facilitate data access via APIs.
We'll begin by replicating the example API call they show on their web page:
https://bison.usgs.gov/api/search.json?species=Bison bison&type=scientific_name&start=0&count=1
End of explanation
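A small habit worth adding (an aside, not in the original notebook): confirm the request succeeded before parsing the payload.
#Check the HTTP status before moving on
response.raise_for_status()  # raises requests.HTTPError for 4xx/5xx responses
print(response.status_code, response.url)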
#Import the module
import json
#Convert the response
data = json.loads(response.content)
type(data)
#Ok, if it's a dictionary, what are its keys?
data.keys()
#What are the values of the 'data' key
data['data']
#Oh, it's a list of occurrences! Let's examine the first one
data['data'][0]
#We see it's a dictionary too
#We can get the latitude of the record from its `decimalLatitude` key
data['data'][0]['decimalLatitude']
Explanation: Yikes, that's much less readable than the NWIS output!
Well, that's because the response from the BISON server is in JSON format. We can use Python's built-in json module to convert this raw JSON text to a handy dictionary (of dictionaries) that we can [somewhat] easily manipulate...
End of explanation
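As an aside, requests can decode JSON payloads itself, so the json.loads() call above has an equivalent shortcut:
#data = response.json() # same dictionary as json.loads(response.content)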
#Loop through each observation and print the lat and long values
for observation in data['data']:
    print(observation['decimalLatitude'], observation['decimalLongitude'])
Explanation: So we see the Bison observations are stored as a list of dictionaries which are accessed within the data key in the results dictionary generated from the JSON response to our API request. (Phew!)
With a bit more code we can loop through all the data records and print out the lat and long coordinates...
End of explanation
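An equivalent one-liner (an aside) collects the coordinate pairs into a list instead of printing them:
coords = [(obs['decimalLatitude'], obs['decimalLongitude']) for obs in data['data']]
print(coords[:3])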
import pandas as pd
df = pd.DataFrame(data['data'])
df.head()
Explanation: Or we can witness Pandas cleverness again! Here, we convert the collection of observations into a data frame
End of explanation
df[df.provider == 'iNaturalist.org']
Explanation: And Pandas allows us to do some nifty analyses, including subsetting records where the provider is 'iNaturalist.org'
End of explanation |
5,853 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tidal Flow Calculator
(Greg Tucker, August 2020)
This tutorial explains the theory behind the TidalFlowCalculator Landlab component, and shows several examples of how to use the component in various different configurations.
Theory
The TidalFlowCalculator computes a tidal-cycle averaged flow velocity field, given a topography (bathymetry), mean sea level, tidal range, and tidal period. The approach that the component uses is based on Mariotti (2018). The idea is to calculate a flow velocity field that is just sufficient to bring in (flood tide) or send out (ebb tide) all of the water that enters or leaves the system during one tidal cycle.
The inertial terms in the shallow-water momentum equations are assumed to be negligible, so that the operative driving forces are gravity and pressure (represented by the water-surface slope), and the resisting force is friction. The resulting relationship between velocity, depth, roughness, and water-surface slope is linearized into the following form
Step2: As we would expect, the numerical solution is slightly lower than the analytical solution, because our simplified analytical solution does not take into account the extra water depth whose gradient propels the ebb tide. (Exercise to the reader
Step3: Uniform with one open boundary
Step4: Uniform with narrow open boundary
Step5: Straight channel
Step6: Case study based on example in Giulio Mariotti's MarshMorpho2D package
This example reads topography/bathymetry from a 2-meter resolution digital elevation model. Locations above mean high tide are flagged as closed boundaries.
Step7: Example with hex grid
The following example demonstrates that the TidalFlowCalculator can operate on a hex grid
(Note that the slightly odd flow patterns along the two closed edges are just artifacts of the method used to map velocity vectors from links onto nodes for plotting purposes; the current method doesn't accurately handle nodes adjacent to closed boundaries.) | Python Code:
# imports
import numpy as np
import matplotlib.pyplot as plt
from landlab import RasterModelGrid, imshow_grid
from landlab.components import TidalFlowCalculator
# set up the grid
grid = RasterModelGrid((3, 101), xy_spacing=2.0) # only 1 row of core nodes, between 2 boundary rows
grid.set_closed_boundaries_at_grid_edges(False, True, True, True) # only east/right side open
z = grid.add_zeros('topographic__elevation', at='node') # create the bathymetry field
z[:] = -50.0 # mean water depth is 50 m below MSL, which is our vertical datum
# create the component
tfc = TidalFlowCalculator(grid, tidal_range=2.0, tidal_period=4.0e4, roughness=0.01)
# run the component
tfc.run_one_step()
# calculate the analytical solution
x = np.arange(3.0, 200.0, 2.0)
vel_analytical = 2.0e-6 * x
# plot both
plt.plot(x, grid.at_link['ebb_tide_flow__velocity'][grid.active_links], 'b.')
plt.plot(x, vel_analytical, 'r')
plt.xlabel('Distance from sea wall (x)')
plt.ylabel('Ebb tide velocity (m/s)')
plt.legend(['numerical', 'analytical'])
Explanation: Tidal Flow Calculator
(Greg Tucker, August 2020)
This tutorial explains the theory behind the TidalFlowCalculator Landlab component, and shows several examples of how to use the component in various different configurations.
Theory
The TidalFlowCalculator computes a tidal-cycle averaged flow velocity field, given a topography (bathymetry), mean sea level, tidal range, and tidal period. The approach that the component uses is based on Mariotti (2018). The idea is to calculate a flow velocity field that is just sufficient to bring in (flood tide) or send out (ebb tide) all of the water that enters or leaves the system during one tidal cycle.
The inertial terms in the shallow-water momentum equations are assumed to be negligible, so that the operative driving forces are gravity and pressure (represented by the water-surface slope), and the resisting force is friction. The resulting relationship between velocity, depth, roughness, and water-surface slope is linearized into the following form:
$$U = -\frac{h^{4/3}}{n^2\chi} \nabla\eta$$ (1)
Here, $U$ is velocity (2D vector), $h$ is tidal-averaged water depth, $n$ is roughness, $\chi$ is a scale velocity (here assumed to be 1 m/s), and $\eta = h + z$ is water surface elevation (and $z$ is bed surface elevation). The equation above represents momentum conservation. Note that $U$ and $\nabla\eta$ are vectors, representing the $x$ and $y$ components of flow velocity and water-surface gradient, respectively.
The method uses a steady form of the mass-conservation equation---again, the idea is that we're seeking a flow velocity field that is just sufficient to carry in or out all the water that enters or exits during a tidal cycle. The mass conservation equation is:
$$\nabla \cdot \mathbf{q} = I$$
Here, $\mathbf{q} = U h$ is the volume flow per unit width (again, a two-dimensional vector). The variable $I$ is "the distributed input of water over half a tidal cycle" (Mariotti, 2018), defined as
$$I(x,y) = \left[r/2 − \max(−r/2, \min(z(x,y), r/2))\right]/(T/2)$$
where $r$ is the tidal range [L] and $T$ is the tidal period [T]. In the expression above, if the water at a point $(x,y)$ is deeper than the tidal amplitude (i.e, half the tidal range, or $r/2$), then the depth of inundation or drainage during half of a tidal cycle is simply the tidal range $r$. All of this water must enter or leave during half a tidal cycle, or $T/2$. Therefore the rate [L/T] of inundation or drainage is equal to the depth divided by $T/2$. Again, if the water is deeper than $r/2$, the rate is just $2r/T$.
Our goal is to calculate $U$ at each location. We get it by solving for $\eta$ then using equation (1) to calculate $U$. It turns out that we can formulate this as a Poisson equation: a steady diffusion equation, in this case in two (horizontal) dimensions. First, approximate that $h$, $n$ are uniform (even though they aren't, in the general problem). Substituting, we have
$$\nabla U h = \nabla \frac{h^{7/3}}{n^2\chi} \nabla\cdot\eta = \frac{h^{7/3}}{n^2\chi} \nabla^2 \eta$$
Plugging this into our mass conservation law
$$\frac{h^{7/3}}{n^2\chi} \nabla^2 \eta = I$$
This can be rearranged to:
$$\boxed{\nabla^2\eta = \frac{In^2\chi}{h^{7/3}}} \text{ (equation 1)}$$
This is the Poisson problem to be solved numerically.
Note that $I$ is positive on the flood tide and negative on the ebb tide. In practice, the component solves for the ebb tide velocity, then calculates the flood tide velocity as -1 times the ebb tide velocity (i.e., just the reverse of the ebb tide).
Numerical methods
The TidalFlowCalculator uses a finite-volume method to solve equation (1) numerically at the core nodes of a Landlab grid. The grid must be either a RasterModelGrid or a HexModelGrid. You can find a discussion of finite-volume methods in the tutorial for Landlab's matrix-building utility. Here, a quick sketch of the solution method is as follows. The governing mass conservation equation is:
$$\nabla\cdot \mathbf{q} = I$$
The basis for the 2d finite-volume method is to integrate both sides of the equation over a region $R$, representing a grid cell. Then Green's theorem is used to turn the divergence term into a line integral of the flux around the perimeter of the region, $S$. The above equation becomes
$$\oint_S \mathbf{q} \mathbf{n} dS = IA_c$$
where $A_c$ is the surface area of the region and $\mathbf{n}$ is the outward unit vector normal to the perimeter of the region. When the region is a grid cell with $N$ faces of width $w$, the above becomes
$$\sum_{k=1}^N q_k \delta_k w = IA_c$$
where $q_k$ is the magnitude of $q$ in the face-perpendicular direction at cell face $k$, and $\delta$ is either 1 or -1, depending on the orientation of the grid link that crosses the face. The flux strength $q$ is positive when flow is in the direction of the link, and negative when it is in the opposite direction. For a RasterModelGrid, $N=4$, and for a HexModelGrid, $N=6$.
As discussed in the tutorial Building a matrix for numerical methods using a Landlab grid, when $q$ depends on the gradient in some field (in this case, water-surface elevation), the above equation can be translated into a matrix equation of the form $A\mathbf{x}=\mathbf{b}$, whose solution gives the solution to the Poisson equation.
Examples
One-dimensional case
Consider a one dimensional domain with open water at the east (right) side and a closed boundary (e.g., seawall) at the west (left) side, where by definition the distance from the seawall is $x=0$. Assume that the mean water depth is larger than the tidal amplitude, so that the sea bed is never exposed, even at low tide. Imagine that our tidal range is 2 meters, the water depth is 50 meters, and (to make the math a bit easier) the tidal period is 40,000 seconds. The analytical solution for flow discharge, $q$, can be found by noting that at any given distance from the sea wall, $q$ must be just enough to carry out all the outgoing water (ebb tide) or carry in all the incoming water (flood tide). The rate of inflow or outflow is equal to the inundation/drainage rate $I$ times distance from the sea wall, $x$:
$$q = -I x$$
The negative sign just means that $q$ is positive (flow to the right/east) when the tide is going out (negative $I$) and negative (flow to the left/west) when the tide is coming in. The velocity is
$$U = -I x / h$$
Here, $h$ is a function of $x$, but with a modest roughness (Manning's $n$) of 0.01 and relatively deep water, we can get a good approximation using just the tidal-average depth of 50 m. With this approximation, we expect the solution to be:
$$U = \pm \frac{(2 m)}{(50 m) \cdot (2\times 10^4 s)} x = 2\times 10^{-6} x$$
The code below runs the component for these conditions, and compares the solution with this analytical solution.
End of explanation
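To connect the theory to the numbers used above, here is a small illustrative sketch (not part of the component's API) of the inundation/drainage rate $I(x,y)$ defined in the theory section; for the 50 m deep, 2 m range, 40,000 s period case it reduces to $2r/T = 10^{-4}$ m/s.
def inundation_rate(z, tidal_range, tidal_period):
    # I = [r/2 - max(-r/2, min(z, r/2))] / (T/2); in deep water this is just 2r/T.
    r, T = tidal_range, tidal_period
    return (r / 2 - np.clip(z, -r / 2, r / 2)) / (T / 2)
print(inundation_rate(np.array([-50.0]), 2.0, 4.0e4))  # [0.0001]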
from landlab.grid.mappers import map_link_vector_components_to_node
def map_velocity_components_to_nodes(grid):
Map the velocity components from the links to the nodes, and return the node arrays.
ebb_vel_x, ebb_vel_y = map_link_vector_components_to_node(grid, grid.at_link['ebb_tide_flow__velocity'])
flood_vel_x = -ebb_vel_x
flood_vel_y = -ebb_vel_y
return (ebb_vel_x, ebb_vel_y, flood_vel_x, flood_vel_y)
def plot_tidal_flow(grid, resample=1):
(ebb_x, ebb_y, flood_x, flood_y) = map_velocity_components_to_nodes(grid)
# depth
plt.figure()
imshow_grid(grid, grid.at_node['mean_water__depth'], cmap='YlGnBu', color_for_closed='g')
plt.title('Water depth (m)')
plt.xlabel('Distance (m)')
plt.ylabel('Distance (m)')
# down-sample for legible quiver plots if needed
if resample != 1:
xr = grid.x_of_node.reshape((grid.number_of_node_rows, grid.number_of_node_columns))[::resample, ::resample]
yr = grid.y_of_node.reshape((grid.number_of_node_rows, grid.number_of_node_columns))[::resample, ::resample]
ebb_xr = ebb_x.reshape((grid.number_of_node_rows, grid.number_of_node_columns))[::resample, ::resample]
ebb_yr = ebb_y.reshape((grid.number_of_node_rows, grid.number_of_node_columns))[::resample, ::resample]
fld_xr = flood_x.reshape((grid.number_of_node_rows, grid.number_of_node_columns))[::resample, ::resample]
fld_yr = flood_y.reshape((grid.number_of_node_rows, grid.number_of_node_columns))[::resample, ::resample]
else:
xr = grid.x_of_node
yr = grid.y_of_node
ebb_xr = ebb_x
ebb_yr = ebb_y
fld_xr = flood_x
fld_yr = flood_y
# ebb tide
plt.figure()
imshow_grid(grid, grid.at_node['topographic__elevation'])
plt.quiver(xr, yr, ebb_xr, ebb_yr)
plt.title('Ebb Tide')
plt.xlabel('Distance (m)')
plt.ylabel('Distance (m)')
ebb_vel_magnitude = np.sqrt(ebb_x * ebb_x + ebb_y * ebb_y)
plt.figure()
imshow_grid(grid, ebb_vel_magnitude, cmap='magma', color_for_closed='g')
plt.title('Ebb Tide Velocity Magnitude (m/s)')
plt.xlabel('Distance (m)')
plt.ylabel('Distance (m)')
# flood tide
plt.figure()
imshow_grid(grid, grid.at_node['topographic__elevation'])
plt.quiver(xr, yr, fld_xr, fld_yr)
plt.title('Flood Tide')
plt.xlabel('Distance (m)')
plt.ylabel('Distance (m)')
plt.figure()
flood_vel_magnitude = np.sqrt(flood_x * flood_x + flood_y * flood_y)
imshow_grid(grid, flood_vel_magnitude, cmap='magma', color_for_closed='g')
plt.title('Flood Tide Velocity Magnitude (m/s)')
plt.xlabel('Distance (m)')
plt.ylabel('Distance (m)')
# parameters
nrows = 15
ncols = 25
grid_spacing = 100.0 # m
mean_depth = 2.0 # m
tidal_range = 2.0 # m
roughness = 0.01 # s/m^1/3, i.e., Manning's n
# create and set up the grid
grid = RasterModelGrid((nrows, ncols), xy_spacing=grid_spacing)
z = grid.add_zeros('topographic__elevation', at='node')
z[:] = -mean_depth
grid.set_closed_boundaries_at_grid_edges(False, False, True, True)
# instantiate the TidalFlowCalculator
tfc = TidalFlowCalculator(grid, tidal_range=2.0, roughness=0.01)
# run it
tfc.run_one_step()
# make plots...
plot_tidal_flow(grid)
Explanation: As we would expect, the numerical solution is slightly lower than the analytical solution, because our simplified analytical solution does not take into account the extra water depth whose gradient propels the ebb tide. (Exercise to the reader: develop the analytical solution for water surface elevation, and then use it to derive a correct flow velocity that accounts for a little bit of extra depth at ebb tide, and a little less depth at flood tide.)
Idealized two-dimensional cases
Two open boundaries
Here we use a rectangular domain with two open sides and two closed sides. Start by defining a generic plotting function:
End of explanation
# parameters
nrows = 400
ncols = 200
grid_spacing = 2.0 # m
mean_depth = 2.0 # m
tidal_range = 3.1 # m
tidal_period = 12.5 * 3600.0 # s
roughness = 0.01 # s/m^1/3, i.e., Manning's n
open_nodes = np.arange(95, 105, dtype=int)  # plain int: np.int was removed in recent numpy releases
# create and set up the grid
grid = RasterModelGrid((nrows, ncols), xy_spacing=grid_spacing)
z = grid.add_zeros('topographic__elevation', at='node')
z[:] = -mean_depth
grid.set_closed_boundaries_at_grid_edges(True, True, True, False)
# instantiate the TidalFlowCalculator
tfc = TidalFlowCalculator(grid, tidal_range=tidal_range, tidal_period=tidal_period, roughness=0.01)
# run it
tfc.run_one_step()
# make plots...
plot_tidal_flow(grid, resample=5)
Explanation: Uniform with one open boundary
End of explanation
# parameters
nrows = 400
ncols = 200
grid_spacing = 2.0 # m
mean_depth = 2.0 # m
tidal_range = 3.1 # m
tidal_period = 12.5 * 3600.0 # s
roughness = 0.01 # s/m^1/3, i.e., Manning's n
open_nodes = np.arange(95, 105, dtype=int)
# create and set up the grid
grid = RasterModelGrid((nrows, ncols), xy_spacing=grid_spacing)
z = grid.add_zeros('topographic__elevation', at='node')
z[:] = -mean_depth
grid.set_closed_boundaries_at_grid_edges(True, True, True, True)
grid.status_at_node[open_nodes] = grid.BC_NODE_IS_FIXED_VALUE
# instantiate the TidalFlowCalculator
tfc = TidalFlowCalculator(grid, tidal_range=tidal_range, tidal_period=tidal_period, roughness=0.01)
# run it
tfc.run_one_step()
# make plots...
plot_tidal_flow(grid, resample=5)
Explanation: Uniform with narrow open boundary
End of explanation
from landlab.grid.mappers import map_max_of_link_nodes_to_link
# parameters
nrows = 400
ncols = 200
grid_spacing = 2.0 # m
marsh_height = 1.0 # m
channel_depth = 2.0 # m
tidal_range = 3.1 # m
tidal_period = 12.5 * 3600.0 # s
open_nodes = np.arange(94, 105, dtype=int) # IDs of open-boundary nodes (along channel at bottom/south boundary)
roughness_shallow = 0.2 # Manning's n for areas above mean sea level (i.e., the marsh)
roughness_deep = 0.01 # Manning's n for areas below mean sea level (i.e., the channel)
# create and set up the grid
grid = RasterModelGrid((nrows, ncols), xy_spacing=grid_spacing)
z = grid.add_zeros('topographic__elevation', at='node')
z[grid.core_nodes] = marsh_height
channel = np.logical_and(grid.x_of_node >= 188.0, grid.x_of_node <= 208.0)
z[channel] = -channel_depth
grid.set_closed_boundaries_at_grid_edges(True, True, True, True)
grid.status_at_node[open_nodes] = grid.BC_NODE_IS_FIXED_VALUE
# set up roughness field (calculate on nodes, then map to links)
roughness_at_nodes = roughness_shallow + np.zeros(z.size)
roughness_at_nodes[z < 0.0] = roughness_deep
roughness = grid.add_zeros('roughness', at='link')
map_max_of_link_nodes_to_link(grid, roughness_at_nodes, out=roughness)
# instantiate the TidalFlowCalculator
tfc = TidalFlowCalculator(grid, tidal_range=tidal_range, tidal_period=tidal_period, roughness='roughness')
# run it
tfc.run_one_step()
# make plots...
plot_tidal_flow(grid, resample=10)
Explanation: Straight channel
End of explanation
from landlab.io import read_esri_ascii
# Set parameters (these are from the MarshMorpho2D source code)
tidal_period = 12.5 * 3600.0 # tidal period in seconds
tidal_range = 3.1 # tidal range in meters
roughness = 0.02 # Manning's n
mean_sea_level = 0.0 # mean sea level in meters
min_water_depth = 0.01 # minimum depth for water on areas higher than low tide water surface, meters
nodata_code = 999 # code for a DEM cell with no valid data
# Read the DEM to create a grid and topography field
(grid, z) = read_esri_ascii('zSW3.asc', name='topographic__elevation')
# Configure boundaries: any nodata nodes, plus any nodes higher than mean high tide
grid.status_at_node[z==nodata_code] = grid.BC_NODE_IS_CLOSED
grid.status_at_node[z>1.8] = grid.BC_NODE_IS_CLOSED
boundaries_above_msl = np.logical_and(grid.status_at_node==grid.BC_NODE_IS_FIXED_VALUE, z > 0.0)
grid.status_at_node[boundaries_above_msl] = grid.BC_NODE_IS_CLOSED
# Instantiate a TidalFlowCalculator component
tfc = TidalFlowCalculator(
grid,
tidal_period=tidal_period,
tidal_range=tidal_range,
roughness=roughness,
mean_sea_level=mean_sea_level,
min_water_depth=min_water_depth,
)
# Calculate tidal flow
tfc.run_one_step()
# make plots...
plot_tidal_flow(grid, resample=5)
Explanation: Case study based on example in Giulio Mariotti's MarshMorpho2D package
This example reads topography/bathymetry from a 2-meter resolution digital elevation model. Locations above mean high tide are flagged as closed boundaries.
End of explanation
from landlab.grid.mappers import map_link_vector_components_to_node
def plot_tidal_flow_hex(grid):
(ebb_x, ebb_y) = map_link_vector_components_to_node(grid, grid.at_link['ebb_tide_flow__velocity'])
# ebb tide velocity vectors & magnitude
ebb_vel_magnitude = np.sqrt(ebb_x * ebb_x + ebb_y * ebb_y)
plt.figure()
imshow_grid(grid, ebb_vel_magnitude, cmap='magma')
plt.quiver(grid.x_of_node, grid.y_of_node, ebb_x, ebb_y)
plt.title('Ebb Tide')
plt.xlabel('Distance (m)')
plt.ylabel('Distance (m)')
from landlab import HexModelGrid
# parameters
nrows = 15
ncols = 25
grid_spacing = 100.0 # m
mean_depth = 2.0 # m
tidal_range = 2.0 # m
roughness = 0.01 # s/m^1/3, i.e., Manning's n
# create and set up the grid
grid = HexModelGrid((nrows, ncols), spacing=grid_spacing, node_layout='rect')
z = grid.add_zeros('topographic__elevation', at='node')
z[:] = -mean_depth
grid.status_at_node[grid.nodes_at_bottom_edge] = grid.BC_NODE_IS_CLOSED
grid.status_at_node[grid.nodes_at_left_edge] = grid.BC_NODE_IS_CLOSED
# instantiate the TidalFlowCalculator
tfc = TidalFlowCalculator(grid, tidal_range=tidal_range, roughness=roughness)
# run it
tfc.run_one_step()
# make plots...
plot_tidal_flow_hex(grid)
Explanation: Example with hex grid
The following example demonstrates that the TidalFlowCalculator can also operate on a hex grid.
(Note that the slightly odd flow patterns along the two closed edges are just artifacts of the method used to map velocity vectors from links onto nodes for plotting purposes; the current method doesn't accurately handle nodes adjacent to closed boundaries.)
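If those edge artifacts are distracting, one quick cosmetic workaround (it does not fix the link-to-node mapping itself) is to draw the velocity arrows only at core nodes, reusing the same mapper and field as the helper above:
# Sketch: redraw the ebb-tide vectors at core nodes only, hiding the boundary artifacts.
(ebb_x, ebb_y) = map_link_vector_components_to_node(grid, grid.at_link['ebb_tide_flow__velocity'])
core = grid.core_nodes
plt.figure()
imshow_grid(grid, np.sqrt(ebb_x**2 + ebb_y**2), cmap='magma')
plt.quiver(grid.x_of_node[core], grid.y_of_node[core], ebb_x[core], ebb_y[core])
plt.title('Ebb Tide (vectors at core nodes only)')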
End of explanation |
5,854 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Draw Hopf fibration using python and POV-Ray
What is Hopf fibration?
Hopf fibration is a continuous map from the 3-sphere $S^3$ onto the 2-sphere $S^2$, where the preimage of each point $p\in S^2$ is a distinct circle called the fiber at $p$. The definition of the map is quite simple
Step3: Hopf inverse map and stereographic projection
Step7: Circle passes through three points
To draw the projected 3d circle of a fiber we choose three points on the fiber and construct the circle from their projected images.
Step10: Convert vector/matrix to POV-Ray format
Step12: Orient a circle in 3d space
In POV-Ray CSG 3d circles are represented by the Torus object and by POV-Ray's default a Torus lies on the $xz$-plane with $y$-axis sticking through its center. So we need an orthogonal matrix to rotate it to a general orientation.
Step14: Export data to POV-Ray
Our POV-Ray macro as the interface between python and POV-Ray will be
Torus(center, radius, matrix, color)
This macro is implemented in the POV-Ray scene file. In python we just pack the data into this format and send them to POV-Ray for rendering.
Step16: Let's draw some examples!
Finally we can draw a set of random points on $S^2$ and see what their fibers look like in 3d space
Step18: And also a flower pattern | Python Code:
import subprocess
import numpy as np
from IPython.display import Image
PI = np.pi
POV_SCENE_FILE = "hopf_fibration.pov"
POV_DATA_FILE = "torus-data.inc"
POV_EXE = "povray"
COMMAND = "{} +I{} +W500 +H500 +Q11 +A0.01 +R2".format(POV_EXE, POV_SCENE_FILE)
IMG = POV_SCENE_FILE[:-4] + ".png"
Explanation: Draw Hopf fibration using python and POV-Ray
What is Hopf fibration?
Hopf fibration is a continous map from the 3-sphere $S^3$ onto the 2-sphere $S^2$, where the preimage of each point $p\in S^2$ is a distinct circle called the fiber at $p$. The definition of the map is quite simple: identify $\mathbb{R}^4$ with $\mathbb{C}^2$ and $\mathbb{R}^3$ with $\mathbb{C}\times\mathbb{R}$ by writing $(x_1,x_2,x_3,x_4)$ as $(z_1,z_2)=(x_1+ix_2, x_3+ix_4)$ and $(x_1,x_2,x_3)$ as $(z,x)=(x_1+ix_2,x_3)$, thus $S^3$ is identified with the subset of $\mathbb{C}^2$ such that $|z_1|^2+|z_2|^2=1$ and $S^2$ is identified with the subset of $\mathbb{C}\times\mathbb{R}$ such that $|z|^2+x^2=1$, then the Hopf fibration is defined by
$$(z_1,z_2)\to (2z_1\overline{z_2},\, |z_1|^2-|z_2|^2).$$
You can easily verify that the image point $(2z_1\overline{z_2},\,|z_1|^2-|z_2|^2)$ indeed lies on $S^2$: $|2z_1\overline{z_2}|^2+(|z_1|^2-|z_2|^2)^2=(|z_1|^2+|z_2|^2)^2=1$.
It's also not hard to write down a parametric representation for the inverse map: a point $p=(x,y,z)\in S^2$ can be parameterized by
\begin{align}x &= \sin(\phi)\cos(\psi),\ y &= \sin(\phi)\sin(\psi),\ z &= \cos(\phi).\end{align}
where $0 \leq \phi \leq \pi$ and $0 \leq\psi \leq 2\pi$. Then the fiber at $p$ is a circle on $S^3$ parameterized by $0\leq\theta\leq2\pi$:
\begin{align}x_1&=\cos((\theta+\psi) / 2)\sin(\phi / 2),\ x_2&=\sin((\theta+\psi) / 2)\sin(\phi / 2),\x_3&=\cos((\theta-\psi) / 2) \cos(\phi / 2),\ x_4&=\sin((\theta-\psi) / 2)\cos(\phi / 2).\end{align}
How can we visualize it?
To visualize the Hopf fibration we want to choose some points on the 2-sphere $S^2$, draw their fibers and and see what they look like. Since these fibers lie in the 4d space we cannot see them directly, but if we project them to 3d space using the stereographic projection then some remarkable structure appears. The fibers are projected to circles in 3d space (one of which in a line, comes from the fiber through infinity), any two such circles are linked with each other and the line passes through all circles. The 3d space is filled with nested tori made of linking Villarceau circles, each tori is the preimage of a circle of latitude of the 2-sphere.
So our plan is:
Choose some points on the 2-sphere $S^2$.
Compute their fibers as circles in $\mathbb{R}^4$.
Use stereographic projection to project these fibers to circles in 3d space and draw these circles.
We will use POV-Ray to render our 3d scene here. The computation task is handled in the python part and the rendering task is handled in the POV-Ray part. Certain background knowledge of POV-Ray's syntax is required to understand how the latter works. In summary we simply exports the data of the circles in the format of POV-Ray macros ("macros" are synonymous to "functions" in POV-Ray) and call these macros in the POV-Ray scene file.
Some global settings:
POV_SCENE_FILE is the scene file will be called by POV-Ray. It's named hopf_fibration.pov in the same directory with this notebook.
POV_DATA_FILE is the output data file.
POV_EXE is your POV-Ray executable file. (Don't forget to add your POV-Ray executable file to system PATH!)
End of explanation
def hopf_inverse(phi, psi, theta):
Inverse map of Hopf fibration. It's a circle in 4d parameterized by theta.
return np.array([np.cos((theta + psi) / 2) * np.sin(phi / 2),
np.sin((theta + psi) / 2) * np.sin(phi / 2),
np.cos((theta - psi) / 2) * np.cos(phi / 2),
np.sin((theta - psi) / 2) * np.cos(phi / 2)])
def stereo_projection(v):
Stereographic projection of a 4d vector with pole at (0, 0, 0, 1).
v = normalize(v)
x, y, z, w = v
return np.array([x, y, z]) / (1 + 1e-8 - w)
Explanation: Hopf inverse map and stereographic projection
End of explanation
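Before building any geometry it is worth sanity-checking these formulas numerically: every point of the circle returned by hopf_inverse should be sent by the Hopf map to one and the same base point on $S^2$. The helper below exists only for this check and is not used by the rendering code. Note that with the map written as $(2z_1\overline{z_2},\,|z_1|^2-|z_2|^2)$ the third coordinate comes back as $-\cos(\phi)$ rather than $+\cos(\phi)$; that is just the latitude measured from the opposite pole, and the important point is that the whole fiber collapses to a single point of $S^2$.
# Sketch: check that the whole fiber returned by hopf_inverse maps to one point of S^2.
def hopf_map(v):
    # the Hopf map defined above: (z1, z2) -> (2*z1*conj(z2), |z1|^2 - |z2|^2)
    x1, x2, x3, x4 = v
    z1, z2 = complex(x1, x2), complex(x3, x4)
    w = 2 * z1 * z2.conjugate()
    return np.array([w.real, w.imag, abs(z1)**2 - abs(z2)**2])
phi, psi = 0.8, 2.3
expected = np.array([np.sin(phi) * np.cos(psi), np.sin(phi) * np.sin(psi), -np.cos(phi)])
for theta in np.linspace(0, 2 * PI, 7):
    assert np.allclose(hopf_map(hopf_inverse(phi, psi, theta)), expected)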
def normalize(v):
Normalize a vector.
return np.array(v) / np.linalg.norm(v)
def norm2(v):
Return squared Euclidean norm of a vector.
return np.dot(v, v)
def get_circle(A, B, C):
Compute the center, radius and normal of the circle passes
through 3 given points (A, B, C) in 3d space.
See "https://en.wikipedia.org/wiki/Circumscribed_circle"
a = A - C
b = B - C
axb = np.cross(a, b)
center = C + np.cross((norm2(a) * b - norm2(b) * a), axb) / (2 * norm2(axb))
radius = np.sqrt(norm2(a) * norm2(b) * norm2(a - b) / (4 * norm2(axb)))
normal = normalize(axb)
return center, radius, normal
Explanation: Circle passes through three points
To draw the projected 3d circle of a fiber we choose three points on the fiber and construct the circle from their projected images.
End of explanation
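A quick way to convince yourself the circumscribed-circle formula is implemented correctly is to feed it three points of a circle you already know; this check is only an illustration and is not needed by the rest of the script:
# Sketch: three points of the unit circle in the xy-plane should give center 0, radius 1, normal z.
A = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 1.0, 0.0])
C = np.array([-1.0, 0.0, 0.0])
print(get_circle(A, B, C))  # expect (array([0., 0., 0.]), 1.0, array([0., 0., 1.]))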
def pov_vector(v):
Convert a vector to POV-Ray format.
return "<{}>".format(", ".join([str(x) for x in v]))
def pov_matrix(M):
Convert a 3x3 matrix to a POV-Ray 3x3 array.
return "array[3]{{{}}}\n".format(", ".join([pov_vector(v) for v in M]))
# write a test to see if they work as expected:
v = (1, 0, 0)
print("POV-Ray format of {}: {}".format(v, pov_vector(v)))
M = np.eye(3)
print("POV-Ray format of {}: {}".format(M, pov_matrix(M)))
Explanation: Convert vector/matrix to POV-Ray format
End of explanation
def transform_matrix(v):
Return a 3x3 orthogonal matrix that transforms y-axis (0, 1, 0) to v.
This matrix is not uniquely determined, we simply choose one with a simple form.
y = normalize(v)
a, b, c = y
if a == 0:
x = [1, 0, 0]
else:
x = normalize([-b, a, 0])
z = np.cross(x, y)
return np.array([x, y, z])
Explanation: Orient a circle in 3d space
In POV-Ray CSG 3d circles are represented by the Torus object and by POV-Ray's default a Torus lies on the $xz$-plane with $y$-axis sticking through its center. So we need an orthogonal matrix to rotate it to a general orientation.
End of explanation
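Because the rotation is only determined up to a spin about $v$, the two properties the renderer actually relies on are that the matrix is orthogonal and that its second row is the normalized target direction. A small check, for illustration only:
# Sketch: transform_matrix should be orthogonal with its second row equal to v/|v|.
v = np.array([0.3, -1.2, 2.5])
M = transform_matrix(v)
assert np.allclose(M.dot(M.T), np.eye(3))   # rows form an orthonormal frame
assert np.allclose(M[1], normalize(v))      # the y-axis row points along v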
def export_fiber(phi, psi, color):
Export the data of a fiber to POV-Ray format.
A, B, C = [stereo_projection(hopf_inverse(phi, psi, theta))
for theta in (0, PI/2, PI)]
center, radius, normal = get_circle(A, B, C)
matrix = transform_matrix(normal)
return "Torus({}, {}, {}, {})\n".format(pov_vector(center),
radius,
pov_matrix(matrix),
pov_vector(color))
Explanation: Export data to POV-Ray
Our POV-Ray macro as the interface between python and POV-Ray will be
Torus(center, radius, matrix, color)
This macro is implemented in the POV-Ray scene file. In python we just pack the data into this format and send them to POV-Ray for rendering.
End of explanation
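Before rendering a full scene it can help to eyeball one exported line and confirm it matches the Torus(center, radius, matrix, color) signature expected by the scene file:
# Sketch: preview the macro call generated for a single fiber.
print(export_fiber(PI / 3, PI / 4, (0.9, 0.2, 0.2)))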
def draw_random_fibers(N):
Draw fibers of some random points on the 2-sphere.
`N` is the number of fibers.
phi_range = (PI / 6, PI * 4 / 5)
psi_range = (0, 2 * PI)
phi_list = np.random.random(N) * (phi_range[1] - phi_range[0]) + phi_range[0]
psi_list = np.random.random(N) * (psi_range[1] - psi_range[0]) + psi_range[0]
with open(POV_DATA_FILE, "w") as f:
for phi, psi in zip(phi_list, psi_list):
color = np.random.random(3)
f.write(export_fiber(phi, psi, color))
subprocess.call(COMMAND, shell=True)
draw_random_fibers(N=200)
Image(IMG)
Explanation: Let's draw some examples!
Finally we can draw a set of random points on $S^2$ and see what their fibers look like in 3d space:
End of explanation
def draw_flower(petals=7, fattness=0.5, amp=-PI/7, lat=PI/2, num_fibers=200):
parameters
----------
petals: controls the number of petals.
fattness: controls the fattness of the petals.
amp: controls the amplitude of the polar angle range.
lat: controls latitude of the flower.
with open(POV_DATA_FILE, "w") as f:
for t in np.linspace(0, 1, num_fibers):
phi = amp * np.sin(petals * 2 * PI * t) + lat
psi = PI * 2 * t + fattness * np.cos(petals * 2 * PI * t)
color = np.random.random(3)
f.write(export_fiber(phi, psi, color))
subprocess.call(COMMAND, shell=True)
draw_flower()
Image(IMG)
Explanation: And also a flower pattern:
End of explanation |
5,855 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncar', 'sandbox-1', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: NCAR
Source ID: SANDBOX-1
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:22
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
5,856 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Monitor Convergence for Run 6
Applying multiple convergence checks for run 6, which adopted a floating Y and alpha. Up to now, we have monitored convergence by visually inspecting trace plots. It would be useful to know if convergence has been obtained using other metrics.
Step1: Defining convergence diagnostics
(1) trace plot, (2) acceptance fraction, (3) Gelman-Rubin diagnostic, (4) autocorrelation, (5) moving average. Others to consider
Step2: Process samples | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: Monitor Convergence for Run 6
Applying multiple convergence checks for run 6, which adopted a floating Y and alpha. Up to now, we have monitored convergence by visually inspecting trace plots. It would be useful to know if convergence has been obtained using other metrics.
End of explanation
def tracePlot(chains, labels=None, truths=None):
n_dim = chains.shape[2]
fig, ax = plt.subplots(n_dim, 1, figsize=(8., 27.), sharex=True)
ax[-1].set_xlabel('Iteration', fontsize=20.)
for i in range(len(ax)):
try:
ax[i].set_ylabel(labels[i], fontsize=20.)
except IndexError:
pass
ax[i].tick_params(which='major', axis='both', length=10., labelsize=16.)
for j in range(len(chains)):
try:
ax[i].plot([0, len(chains[j,:,i])+10], [truths[i], truths[i]], '-', lw=4, dashes=(20., 10.),
c='#B22222')
except:
pass
ax[i].plot(chains[j,:,i], '-', lw=1, c='#0473B3', alpha=0.5)
fig.tight_layout()
def GelmanRubin(chains, labels=None):
n_chains = chains.shape[0]
n_iter = chains.shape[1] // 2  # integer division so it can be used as a slice index below
n_params = chains.shape[2]
# take last n samples if total was 2n
sample = chains[:,-n_iter:,:]
# compute mean of intra-chain (within) variances
W = np.mean(np.var(sample, axis=1), axis=0)
# compute mean of inter-chain (between) variances
chain_means = np.mean(sample, axis=1)
mean_of_chain_means = np.mean(chain_means, axis=0)
B = np.empty(n_params)
for i in range(n_params):
B[i] = np.sum((chain_means[:, i] - mean_of_chain_means[i])**2)*n_iter/(n_chains - 1.)
# estimated variance (likely an over-estimate)
Sigma_hat_2 = ((n_iter - 1.)*W + B)/n_iter
# pooled posterior variance
Var_hat = Sigma_hat_2 + B/(n_chains*n_iter)
# note: no degrees-of-freedom correction is applied here
# compute potential scale reduction factor
PSRF = np.sqrt(Var_hat/W)
return W, B, Var_hat, PSRF
Explanation: Defining convergence diagnostics
(1) trace plot, (2) acceptance fraction, (3) Gelman-Rubin diagnostic, (4) autocorrelation, (5) moving average. Others to consider: Geweke diagnostic, Raferty-Lewis diagnostic, Heidelberg-Welch diagnostic, ...
End of explanation
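Of the diagnostics listed above, only the trace plot and the Gelman-Rubin statistic have helpers so far. Below is a minimal sketch (my own addition, not part of the original analysis) of a normalized autocorrelation estimate for a single parameter of a single chain, which could serve as a third check:
def autocorrelation(x, max_lag=200):
    # normalized autocorrelation of a 1-d chain for lags 0..max_lag
    x = np.asarray(x, dtype=float) - np.mean(x)
    acf = np.correlate(x, x, mode='full')[len(x) - 1:]
    return acf[:max_lag + 1] / acf[0]
# once `chains` is loaded below, e.g.: plt.plot(autocorrelation(chains[0, :, 0]))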
# test with Gl 876, the largest number of iterations
flatchain = np.genfromtxt('/Users/grefe950/Software/StarBay/interbay/chains/run06/GJ876_W0300_N0600_B0000.dat')
chains = flatchain.reshape(300, -1, 9)
labels=['Mass', '[Fe/H]', 'Y', 'log(Age)', 'Distance', 'alpha', 'log(Teff)', 'log(Fbol)', 'theta']
truths = [np.nan, 0.17, np.nan, np.nan, 1./0.21328, np.nan, np.log10(3189.), np.log10(1.9156e-8), 0.746]
tracePlot(chains, labels=labels, truths=truths)
GelmanRubin(chains, labels=labels)
Explanation: Process samples
End of explanation |
5,857 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Session 4
Step2: <a name="part-1---pretrained-networks"></a>
Part 1 - Pretrained Networks
In the libs module, you'll see that I've included a few modules for loading some state of the art networks. These include
Step3: Now we can load a pre-trained network's graph and any labels. Explore the different networks in your own time.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step4: Each network returns a dictionary with the following keys defined. Every network has a key for "labels" except for "i2v", since this is a feature only network, e.g. an unsupervised network, and does not have labels.
Step5: <a name="preprocessdeprocessing"></a>
Preprocess/Deprocessing
Each network has a preprocessing/deprocessing function which we'll use before sending the input to the network. This preprocessing function is slightly different for each network. Recall from the previous sessions what preprocess we had done before sending an image to a network. We would often normalize the input by subtracting the mean and dividing by the standard deviation. We'd also crop/resize the input to a standard size. We'll need to do this for each network except for the Inception network, which is a true convolutional network and does not require us to do this (will be explained in more depth later).
Whenever we preprocess the image, and want to visualize the result of adding back the gradient to the input image (when we use deep dream), we'll need to use the deprocess function stored in the dictionary. Let's explore how these work. We'll confirm this is performing the inverse operation, let's try to preprocess the image, then I'll have you try to deprocess it.
Step6: Let's now try preprocessing this image. The function for preprocessing is inside the module we used to load it. For instance, for vgg16, we can find the preprocess function as vgg16.preprocess, or for inception, inception.preprocess, or for i2v, i2v.preprocess. Or, we can just use the key preprocess in our dictionary net, as this is just convenience for us to access the corresponding preprocess function.
Step7: Let's undo the preprocessing. Recall that the net dictionary has the key deprocess which is the function we need to use on our processed image, img.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step8: <a name="tensorboard"></a>
Tensorboard
I've added a utility module called nb_utils which includes a function show_graph. This will use Tensorboard to draw the computational graph defined by the various Tensorflow functions. I didn't go over this during the lecture because there just wasn't enough time! But explore it in your own time if it interests you, as it is a really unique tool which allows you to monitor your network's training progress via a web interface. It even lets you monitor specific variables or processes within the network, e.g. the reconstruction of an autoencoder, without having to print to the console as we've been doing. We'll just be using it to draw the pretrained network's graphs using the utility function I've given you.
Be sure to interact with the graph and click on the various modules.
For instance, if you've loaded the inception v5 network, locate the "input" to the network. This is where we feed the image, the input placeholder (typically what we've been denoting as X in our own networks). From there, it goes to the "conv2d0" variable scope (i.e. this uses the code
Step9: If you open up the "mixed3a" node above (double click on it), you'll see the first "inception" module. This network encompasses a few advanced concepts that we did not have time to discuss during the lecture, including residual connections, feature concatenation, parallel convolution streams, 1x1 convolutions, and including negative labels in the softmax layer. I'll expand on the 1x1 convolutions here, but please feel free to skip ahead if this isn't of interest to you.
<a name="a-note-on-1x1-convolutions"></a>
A Note on 1x1 Convolutions
The 1x1 convolutions are setting the ksize parameter of the kernels to 1. This is effectively allowing you to change the number of dimensions. Remember that you need a 4-d tensor as input to a convolution. Let's say its dimensions are $\text{N}\ x\ \text{W}\ x\ \text{H}\ x\ \text{C}_I$, where $\text{C}_I$ represents the number of channels the image has. Let's say it is an RGB image, then $\text{C}_I$ would be 3. Or later in the network, if we have already convolved it, it might be 64 channels instead. Regardless, when you convolve it w/ a $\text{K}_H\ x\ \text{K}_W\ x\ \text{C}_I\ x\ \text{C}_O$ filter, where $\text{K}_H$ is 1 and $\text{K}_W$ is also 1, then the filters size is
Step10: <a name="using-context-managers"></a>
Using Context Managers
Up until now, we've mostly used a single tf.Session within a notebook and didn't give it much thought. Now that we're using some bigger models, we're going to have to be more careful. Using a big model and being careless with our session can result in a lot of unexpected behavior, program crashes, and out of memory errors. The VGG network and the I2V networks are quite large. So we'll need to start being more careful with our sessions using context managers.
Let's see how this works w/ VGG
Step11: <a name="part-2---visualizing-gradients"></a>
Part 2 - Visualizing Gradients
Now that we know how to load a network and extract layers from it, let's grab only the pooling layers
Step12: Let's also grab the input layer
Step14: We'll now try to find the gradient activation that maximizes a layer with respect to the input layer x.
Step15: Let's try this w/ an image now. We're going to use the plot_gradient function to help us. This is going to take our input image, run it through the network up to a layer, find the gradient of the mean of that layer's activation with respect to the input image, then backprop that gradient back to the input layer. We'll then visualize the gradient by normalizing it's values using the utils.normalize function.
Step16: <a name="part-3---basic-deep-dream"></a>
Part 3 - Basic Deep Dream
In the lecture we saw how Deep Dreaming takes the backpropagated gradient activations and simply adds it to the image, running the same process again and again in a loop. We also saw many tricks one can add to this idea, such as infinitely zooming into the image by cropping and scaling, adding jitter by randomly moving the image around, or adding constraints on the total activations.
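To make that loop concrete before filling in the cells below, here is a minimal gradient-ascent sketch in the TF1 style used in this session. Every name in it (g for the loaded graph, x for the input placeholder, layer for the tensor you picked, img for the preprocessed 4-d image) is an assumption standing in for whatever you define in your own code:
# Hedged sketch of a basic deep dream loop; g, x, layer and img are assumed names.
gradient = tf.gradients(tf.reduce_mean(layer), x)
with tf.Session(graph=g) as sess:
    for it_i in range(50):
        this_res = sess.run(gradient[0], feed_dict={x: img})[0]
        this_res /= (np.max(np.abs(this_res)) + 1e-8)  # normalize the gradient
        img[0] += this_res * 0.5                       # take a small step up the gradient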
Have a look here for inspiration
Step17: Let's now try running Deep Dream for every feature, each of our 5 pooling layers. We'll need to get the layer corresponding to our feature. Then find the gradient of this layer's mean activation with respect to our input, x. Then pass these to our dream function. This can take awhile (about 10 minutes using the CPU on my Macbook Pro).
Step18: Instead of using an image, we can use an image of noise and see how it "hallucinates" the representations that the layer most responds to
Step19: We'll do the same thing as before, now w/ our noise image
Step20: <a name="part-4---deep-dream-extensions"></a>
Part 4 - Deep Dream Extensions
As we saw in the lecture, we can also use the final softmax layer of a network to use during deep dream. This allows us to be explicit about the object we want hallucinated in an image.
<a name="using-the-softmax-layer"></a>
Using the Softmax Layer
Let's get another image to play with, preprocess it, and then make it 4-dimensional.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step21: <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step22: Let's decide on some parameters of our deep dream. We'll need to decide how many iterations to run for. And we'll plot the result every few iterations, also saving it so that we can produce a GIF. And at every iteration, we need to decide how much to ascend our gradient.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step23: Now let's dream. We're going to define a context manager to create a session and use our existing graph, and make sure we use the CPU device, as there is no gain in using GPU, and we have much more CPU memory than GPU memory.
Step24: <a name="fractal"></a>
Fractal
During the lecture we also saw a simple trick for creating an infinite fractal
Step25: <a name="guided-hallucinations"></a>
Guided Hallucinations
Instead of following the gradient of an arbitrary mean or max of a particular layer's activation, or a particular object that we want to synthesize, we can also try to guide our image to look like another image. One way to try this is to take one image, the guide, and find the features at a particular layer or layers. Then, we take our synthesis image and find the gradient which makes its own layers' activations look like the guide image.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step26: Preprocess both images
Step27: Like w/ Style Net, we are going to measure how similar the features in the guide image are to the dream images. In order to do that, we'll calculate the dot product. Experiment with other measures such as l1 or l2 loss to see how this impacts the resulting Dream!
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step28: We'll now use another measure that we saw when developing Style Net during the lecture. This measures the pixel-to-pixel difference of neighboring pixels. When we optimize a gradient that makes the mean of these differences small, we are saying that we want neighboring pixels to be similar. This allows us to smooth our image in the same way that we did using the Gaussian to blur the image.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step29: Now we train just like before, except we'll need to combine our two loss terms, feature_loss and tv_loss by simply adding them! The one thing we have to keep in mind is that we want to minimize the tv_loss while maximizing the feature_loss. That means we'll need to use the negative tv_loss and the positive feature_loss. As an experiment, try just optimizing the tv_loss and removing the feature_loss from the tf.gradients call. What happens?
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step30: <a name="further-explorations"></a>
Further Explorations
In the libs module, I've included a deepdream module which has two functions for performing Deep Dream and the Guided Deep Dream. Feel free to explore these to create your own deep dreams.
<a name="part-5---style-net"></a>
Part 5 - Style Net
We'll now work on creating our own style net implementation. We've seen all the steps for how to do this during the lecture, and you can always refer to the Lecture Transcript if you need to. I want you to explore using different networks and different layers in creating your content and style losses. This is completely unexplored territory so it can be frustrating to find things that work. Think of this as your empty canvas! If you are really stuck, you will find a stylenet implementation under the libs module that you can use instead.
Have a look here for inspiration
Step31: Let's now import the graph definition into our newly created Graph using a context manager and specifying that we want to use the CPU.
Step32: Let's then grab the names of every operation in our network
Step33: Now we need an image for our content image and another one for our style image.
Step34: Let's see what the network classifies these images as just for fun
Step35: <a name="content-features"></a>
Content Features
We're going to need to find the layer or layers we want to use to help us define our "content loss". Recall from the lecture when we used VGG, we used the 4th convolutional layer.
Step36: Pick a layer for using for the content features. If you aren't using VGG remember to get rid of the dropout stuff!
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step37: <a name="style-features"></a>
Style Features
Let's do the same thing now for the style features. We'll use more than 1 layer though so we'll append all the features in a list. If you aren't using VGG remember to get rid of the dropout stuff!
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step38: Now we find the gram matrix which we'll use to optimize our features.
Step39: <a name="remapping-the-input"></a>
Remapping the Input
We're almost done building our network. We just have to change the input to the network to become "trainable". Instead of a placeholder, we'll have a tf.Variable, which allows it to be trained. We could set this to the content image, another image entirely, or an image of noise. Experiment with all three options!
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step40: <a name="content-loss"></a>
Content Loss
In the lecture we saw that we'll simply find the l2 loss between our content layer features.
Step41: <a name="style-loss"></a>
Style Loss
Instead of straight l2 loss on the raw feature activations, we're going to calculate the gram matrix and find the loss between these. Intuitively, this is finding what is common across all convolution filters, and trying to enforce the commonality between the synthesis and style image's gram matrix.
Step42: <a name="total-variation-loss"></a>
Total Variation Loss
And just like w/ guided hallucinations, we'll try to enforce some smoothness using a total variation loss.
Step43: <a name="training"></a>
Training
We're almost ready to train! Let's just combine our three loss measures and stick it in an optimizer.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step44: And now iterate! Feel free to play with the number of iterations or how often you save an image. If you use a different network to VGG, then you will not need to feed in the dropout parameters like I've done here.
Step45: <a name="assignment-submission"></a>
Assignment Submission
After you've completed the notebook, create a zip file of the current directory using the code below. This code will make sure you have included this completed ipython notebook and the following files named exactly as | Python Code:
# First check the Python version
import sys
if sys.version_info < (3,4):
print('You are running an older version of Python!\n\n',
'You should consider updating to Python 3.4.0 or',
'higher as the libraries built for this course',
'have only been tested in Python 3.4 and higher.\n')
print('Try installing the Python 3.5 version of anaconda'
'and then restart `jupyter notebook`:\n',
'https://www.continuum.io/downloads\n\n')
# Now get necessary libraries
try:
import os
import numpy as np
import matplotlib.pyplot as plt
from skimage.transform import resize
from skimage import data
from scipy.misc import imresize
from scipy.ndimage.filters import gaussian_filter
import IPython.display as ipyd
import tensorflow as tf
from libs import utils, gif, datasets, dataset_utils, vae, dft, vgg16, nb_utils
except ImportError:
print("Make sure you have started notebook in the same directory",
"as the provided zip file which includes the 'libs' folder",
"and the file 'utils.py' inside of it. You will NOT be able",
"to complete this assignment unless you restart jupyter",
"notebook inside the directory created by extracting",
"the zip file or cloning the github repo. If you are still")
# We'll tell matplotlib to inline any drawn figures like so:
%matplotlib inline
plt.style.use('ggplot')
# Bit of formatting because I don't like the default inline code style:
from IPython.core.display import HTML
HTML("""<style> .rendered_html code {
padding: 2px 4px;
color: #c7254e;
background-color: #f9f2f4;
border-radius: 4px;
} </style>""")
Explanation: Session 4: Visualizing Representations
Assignment: Deep Dream and Style Net
<p class='lead'>
Creative Applications of Deep Learning with Google's Tensorflow
Parag K. Mital
Kadenze, Inc.
</p>
Overview
In this homework, we'll first walk through visualizing the gradients of a trained convolutional network. Recall from the last session that we had trained a variational convolutional autoencoder. We also trained a deep convolutional network. In both of these networks, we learned only a few tools for understanding how the model performs. These included measuring the loss of the network and visualizing the W weight matrices and/or convolutional filters of the network.
During the lecture we saw how to visualize the gradients of Inception, Google's state of the art network for object recognition. This resulted in a much more powerful technique for understanding how a network's activations transform or accentuate the representations in the input space. We'll explore this more in Part 1.
We also explored how to use the gradients of a particular layer or neuron within a network with respect to its input for performing "gradient ascent". This resulted in Deep Dream. We'll explore this more in Parts 2-4.
We also saw how the gradients at different layers of a convolutional network could be optimized for another image, resulting in the separation of content and style losses, depending on the chosen layers. This allowed us to synthesize new images that shared another image's content and/or style, even if they came from separate images. We'll explore this more in Part 5.
Finally, you'll package all the GIFs you create throughout this notebook and upload them to Kadenze.
<a name="learning-goals"></a>
Learning Goals
Learn how to inspect deep networks by visualizing their gradients
Learn how to "deep dream" with different objective functions and regularization techniques
Learn how to "stylize" an image using content and style losses from different images
Table of Contents
<!-- MarkdownTOC autolink=true autoanchor=true bracket=round -->
Part 1 - Pretrained Networks
Graph Definition
Preprocess/Deprocessing
Tensorboard
A Note on 1x1 Convolutions
Network Labels
Using Context Managers
Part 2 - Visualizing Gradients
Part 3 - Basic Deep Dream
Part 4 - Deep Dream Extensions
Using the Softmax Layer
Fractal
Guided Hallucinations
Further Explorations
Part 5 - Style Net
Network
Content Features
Style Features
Remapping the Input
Content Loss
Style Loss
Total Variation Loss
Training
Assignment Submission
<!-- /MarkdownTOC -->
End of explanation
from libs import vgg16, inception, i2v
Explanation: <a name="part-1---pretrained-networks"></a>
Part 1 - Pretrained Networks
In the libs module, you'll see that I've included a few modules for loading some state of the art networks. These include:
Inception v3
This network has been trained on ImageNet and its final output layer is a softmax layer denoting 1 of 1000 possible objects (+ 8 for unknown categories). This network is about only 50MB!
Inception v5
This network has been trained on ImageNet and its final output layer is a softmax layer denoting 1 of 1000 possible objects (+ 8 for unknown categories). This network is about only 50MB! It presents a few extensions to v5 which are not documented anywhere that I've found, as of yet...
Visual Geometry Group @ Oxford's 16 layer
This network has been trained on ImageNet and its final output layer is a softmax layer denoting 1 of 1000 possible objects. This model is nearly half a gigabyte, about 10x larger in size than the inception network. The trade off is that it is much slower.
Visual Geometry Group @ Oxford's Face Recognition
This network has been trained on the VGG Face Dataset and its final output layer is a softmax layer denoting 1 of 2622 different possible people.
Illustration2Vec
This network has been trained on illustrations and manga and its final output layer is 4096 features.
Illustration2Vec Tag
Please do not use this network if you are under the age of 18 (seriously!)
This network has been trained on manga and its final output layer is one of 1539 labels.
When we use a pre-trained network, we load a network's definition and its weights which have already been trained. The network's definition includes a set of operations such as convolutions, and adding biases, but all of their values, i.e. the weights, have already been trained.
<a name="graph-definition"></a>
Graph Definition
In the libs folder, you will see a few new modules for loading the above pre-trained networks. Each module is structured similarly to help you understand how they are loaded and include example code for using them. Each module includes a preprocess function for using before sending the image to the network. And when using deep dream techniques, we'll be using the deprocess function to undo the preprocess function's manipulations.
Let's take a look at loading one of these. Every network except for i2v includes a key 'labels' denoting what labels the network has been trained on. If you are under the age of 18, please do not use the i2v_tag model, as its labels are unsuitable for minors.
Let's load the libraries for the different pre-trained networks:
End of explanation
# Stick w/ Inception for now, and then after you see how
# the next few sections work w/ this network, come back
# and explore the other networks.
net = inception.get_inception_model(version='v5')
# net = inception.get_inception_model(version='v3')
# net = vgg16.get_vgg_model()
# net = vgg16.get_vgg_face_model()
# net = i2v.get_i2v_model()
# net = i2v.get_i2v_tag_model()
Explanation: Now we can load a pre-trained network's graph and any labels. Explore the different networks in your own time.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
print(net.keys())
Explanation: Each network returns a dictionary with the following keys defined. Every network has a key for "labels" except for "i2v", since this is a feature only network, e.g. an unsupervised network, and does not have labels.
End of explanation
# First, let's get an image:
og = plt.imread('clinton.png')[..., :3]
plt.imshow(og)
print(og.min(), og.max())
Explanation: <a name="preprocessdeprocessing"></a>
Preprocess/Deprocessing
Each network has a preprocessing/deprocessing function which we'll use before sending the input to the network. This preprocessing function is slightly different for each network. Recall from the previous sessions what preprocessing we had done before sending an image to a network. We would often normalize the input by subtracting the mean and dividing by the standard deviation. We'd also crop/resize the input to a standard size. We'll need to do this for each network except for the Inception network, which is a true convolutional network and does not require us to do this (will be explained in more depth later).
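To make the round trip concrete, here is a minimal sketch of what such a pair of functions might look like for a network that simply centers and scales its input. The mean/std values, the clipping range, and any cropping differ for each network, so treat the names and numbers below as illustrative placeholders rather than the real module code:
def preprocess_example(img, mean=0.5, std=0.25):
    # Center and scale a float image in [0, 1] the way the network expects
    return (img.astype(np.float32) - mean) / std
def deprocess_example(norm_img, mean=0.5, std=0.25):
    # Undo the normalization so the image is viewable again
    return np.clip(norm_img * std + mean, 0.0, 1.0)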
Whenever we preprocess the image and want to visualize the result of adding back the gradient to the input image (when we use deep dream), we'll need to use the deprocess function stored in the dictionary. Let's explore how these work. To confirm deprocess really is the inverse operation, let's preprocess the image, and then I'll have you try to deprocess it.
End of explanation
# Now call the preprocess function. This will preprocess our
# image ready for being input to the network, except for changes
# to the dimensions. I.e., we will still need to convert this
# to a 4-dimensional Tensor once we input it to the network.
# We'll see how that works later.
img = net['preprocess'](og)
print(img.min(), img.max())
Explanation: Let's now try preprocessing this image. The function for preprocessing is inside the module we used to load it. For instance, for vgg16, we can find the preprocess function as vgg16.preprocess, or for inception, inception.preprocess, or for i2v, i2v.preprocess. Or, we can just use the key preprocess in our dictionary net, as this is just convenience for us to access the corresponding preprocess function.
End of explanation
deprocessed = ...
plt.imshow(deprocessed)
plt.show()
Explanation: Let's undo the preprocessing. Recall that the net dictionary has the key deprocess which is the function we need to use on our processed image, img.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
nb_utils.show_graph(net['graph_def'])
Explanation: <a name="tensorboard"></a>
Tensorboard
I've added a utility module called nb_utils which includes a function show_graph. This will use Tensorboard to draw the computational graph defined by the various Tensorflow functions. I didn't go over this during the lecture because there just wasn't enough time! But explore it in your own time if it interests you, as it is a really unique tool which allows you to monitor your network's training progress via a web interface. It even lets you monitor specific variables or processes within the network, e.g. the reconstruction of an autoencoder, without having to print to the console as we've been doing. We'll just be using it to draw the pretrained network's graphs using the utility function I've given you.
Be sure to interact with the graph and click on the various modules.
For instance, if you've loaded the inception v5 network, locate the "input" to the network. This is where we feed the image, the input placeholder (typically what we've been denoting as X in our own networks). From there, it goes to the "conv2d0" variable scope (i.e. this uses the code: with tf.variable_scope("conv2d0") to create a set of operations with the prefix "conv2d0/". If you expand this scope, you'll see another scope, "pre_relu". This is created using another tf.variable_scope("pre_relu"), so that any new variables will have the prefix "conv2d0/pre_relu". Finally, inside here, you'll see the convolution operation (tf.nn.conv2d) and the 4d weight tensor, "w" (e.g. created using tf.get_variable), used for convolution (and so has the name, "conv2d0/pre_relu/w". Just after the convolution is the addition of the bias, b. And finally after exiting the "pre_relu" scope, you should be able to see the "conv2d0" operation which applies the relu nonlinearity. In summary, that region of the graph can be created in Tensorflow like so:
python
input = tf.placeholder(...)
with tf.variable_scope('conv2d0'):
with tf.variable_scope('pre_relu'):
w = tf.get_variable(...)
h = tf.nn.conv2d(input, w, ...)
b = tf.get_variable(...)
h = tf.nn.bias_add(h, b)
h = tf.nn.relu(h)
End of explanation
net['labels']
label_i = 851
print(net['labels'][label_i])
Explanation: If you open up the "mixed3a" node above (double click on it), you'll see the first "inception" module. This network encompasses a few advanced concepts that we did not have time to discuss during the lecture, including residual connections, feature concatenation, parallel convolution streams, 1x1 convolutions, and including negative labels in the softmax layer. I'll expand on the 1x1 convolutions here, but please feel free to skip ahead if this isn't of interest to you.
<a name="a-note-on-1x1-convolutions"></a>
A Note on 1x1 Convolutions
The 1x1 convolutions are setting the ksize parameter of the kernels to 1. This is effectively allowing you to change the number of dimensions. Remember that you need a 4-d tensor as input to a convolution. Let's say its dimensions are $\text{N}\ x\ \text{W}\ x\ \text{H}\ x\ \text{C}_I$, where $\text{C}_I$ represents the number of channels the image has. Let's say it is an RGB image, then $\text{C}_I$ would be 3. Or later in the network, if we have already convolved it, it might be 64 channels instead. Regardless, when you convolve it w/ a $\text{K}_H\ x\ \text{K}_W\ x\ \text{C}_I\ x\ \text{C}_O$ filter, where $\text{K}_H$ is 1 and $\text{K}_W$ is also 1, then the filter's size is: $1\ x\ 1\ x\ \text{C}_I$ and this is performed for each output channel $\text{C}_O$. What this is doing is filtering the information only in the channels dimension, not the spatial dimensions. The output of this convolution will be a $\text{N}\ x\ \text{W}\ x\ \text{H}\ x\ \text{C}_O$ output tensor. The only thing that changes in the output is the number of output filters.
The 1x1 convolution operation is essentially reducing the amount of information in the channels dimension before performing a much more expensive operation, e.g. a 3x3 or 5x5 convolution. Effectively, it is a very clever trick for dimensionality reduction used in many state of the art convolutional networks. Another way to look at it is that it is preserving the spatial information, but at each location, there is a fully connected network taking all the information from every input channel, $\text{C}_I$, and reducing it down to $\text{C}_O$ channels (or could easily also be up, but that is not the typical use case for this). So it's not really a convolution, but we can use the convolution operation to perform it at every location in our image.
If you are interested in reading more about this architecture, I highly encourage you to read Network in Network, Christian Szegedy's work on the Inception network, Highway Networks, Residual Networks, and Ladder Networks.
In this course, we'll stick to focusing on the applications of these, while trying to delve as much into the code as possible.
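As a concrete sketch of the shapes involved (the names and sizes below are purely illustrative and are not taken from the Inception graph), a 1x1 convolution that squeezes 64 channels down to 16 leaves the spatial grid untouched:
h = tf.placeholder(tf.float32, [None, 32, 32, 64])            # N x H x W x 64 input
w_1x1 = tf.get_variable('w_1x1', shape=[1, 1, 64, 16])        # 1 x 1 x C_I x C_O kernel
h_reduced = tf.nn.conv2d(h, w_1x1, strides=[1, 1, 1, 1], padding='SAME')
# h_reduced has shape N x 32 x 32 x 16: same spatial size, fewer channels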
<a name="network-labels"></a>
Network Labels
Let's now look at the labels:
End of explanation
# Load the VGG network. Scroll back up to where we loaded the inception
# network if you are unsure. It is inside the "vgg16" module...
net = ...
assert(net['labels'][0] == (0, 'n01440764 tench, Tinca tinca'))
# Let's explicity use the CPU, since we don't gain anything using the GPU
# when doing Deep Dream (it's only a single image, benefits come w/ many images).
device = '/cpu:0'
# We'll now explicitly create a graph
g = tf.Graph()
# And here is a context manager. We use the python "with" notation to create a context
# and create a session that only exists within this indent, as soon as we leave it,
# the session is automatically closed! We also tell the session which graph to use.
# We can pass a second context after the comma,
# which we'll use to be explicit about using the CPU instead of a GPU.
with tf.Session(graph=g) as sess, g.device(device):
# Now load the graph_def, which defines operations and their values into `g`
tf.import_graph_def(net['graph_def'], name='net')
# Now we can get all the operations that belong to the graph `g`:
names = [op.name for op in g.get_operations()]
print(names)
Explanation: <a name="using-context-managers"></a>
Using Context Managers
Up until now, we've mostly used a single tf.Session within a notebook and didn't give it much thought. Now that we're using some bigger models, we're going to have to be more careful. Using a big model and being careless with our session can result in a lot of unexpected behavior, program crashes, and out of memory errors. The VGG network and the I2V networks are quite large. So we'll need to start being more careful with our sessions using context managers.
Let's see how this works w/ VGG:
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
# First find all the pooling layers in the network. You can
# use list comprehension to iterate over all the "names" we just
# created, finding whichever ones have the name "pool" in them.
# Then be sure to append a ":0" to the names
features = ...
# Let's print them
print(features)
# This is what we want to have at the end. You could just copy this list
# if you are stuck!
assert(features == ['net/pool1:0', 'net/pool2:0', 'net/pool3:0', 'net/pool4:0', 'net/pool5:0'])
Explanation: <a name="part-2---visualizing-gradients"></a>
Part 2 - Visualizing Gradients
Now that we know how to load a network and extract layers from it, let's grab only the pooling layers:
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
# Use the function 'get_tensor_by_name' and the 'names' array to help you
# get the first tensor in the network. Remember you have to add ":0" to the
# name to get the output of an operation which is the tensor.
x = ...
assert(x.name == 'net/images:0')
Explanation: Let's also grab the input layer:
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
def plot_gradient(img, x, feature, g, device='/cpu:0'):
Let's visualize the network's gradient activation
when backpropagated to the original input image. This
is effectively telling us which pixels contribute to the
predicted layer, class, or given neuron within the layer
# We'll be explicit about the graph and the device
# by using a context manager:
with tf.Session(graph=g) as sess, g.device(device):
saliency = tf.gradients(tf.reduce_mean(feature), x)
this_res = sess.run(saliency[0], feed_dict={x: img})
grad = this_res[0] / np.max(np.abs(this_res))
return grad
Explanation: We'll now try to find the gradient activation that maximizes a layer with respect to the input layer x.
End of explanation
og = plt.imread('clinton.png')[..., :3]
img = net['preprocess'](og)[np.newaxis]
fig, axs = plt.subplots(1, len(features), figsize=(20, 10))
for i in range(len(features)):
axs[i].set_title(features[i])
grad = plot_gradient(img, x, g.get_tensor_by_name(features[i]), g)
axs[i].imshow(utils.normalize(grad))
Explanation: Let's try this w/ an image now. We're going to use the plot_gradient function to help us. This is going to take our input image, run it through the network up to a layer, find the gradient of the mean of that layer's activation with respect to the input image, then backprop that gradient back to the input layer. We'll then visualize the gradient by normalizing its values using the utils.normalize function.
End of explanation
def dream(img, gradient, step, net, x, n_iterations=50, plot_step=10):
# Copy the input image as we'll add the gradient to it in a loop
img_copy = img.copy()
fig, axs = plt.subplots(1, n_iterations // plot_step, figsize=(20, 10))
with tf.Session(graph=g) as sess, g.device(device):
for it_i in range(n_iterations):
# This will calculate the gradient of the layer we chose with respect to the input image.
this_res = sess.run(gradient[0], feed_dict={x: img_copy})[0]
# Let's normalize it by the maximum activation
this_res /= (np.max(np.abs(this_res) + 1e-8))
# Or alternatively, we can normalize by standard deviation
# this_res /= (np.std(this_res) + 1e-8)
# Or we could use the `utils.normalize function:
# this_res = utils.normalize(this_res)
# Experiment with all of the above options. They will drastically
# affect the resulting dream, and really depend on the network
# you use, and the way the network handles normalization of the
# input image, and the step size you choose! Lots to explore!
# Then add the gradient back to the input image
# Think about what this gradient represents?
# It says what direction we should move our input
# in order to meet our objective stored in "gradient"
img_copy += this_res * step
# Plot the image
if (it_i + 1) % plot_step == 0:
m = net['deprocess'](img_copy[0])
axs[it_i // plot_step].imshow(m)
# We'll run it for 3 iterations
n_iterations = 3
# Think of this as our learning rate. This is how much of
# the gradient we'll add back to the input image
step = 1.0
# Every 1 iterations, we'll plot the current deep dream
plot_step = 1
Explanation: <a name="part-3---basic-deep-dream"></a>
Part 3 - Basic Deep Dream
In the lecture we saw how Deep Dreaming takes the backpropagated gradient activations and simply adds it to the image, running the same process again and again in a loop. We also saw many tricks one can add to this idea, such as infinitely zooming into the image by cropping and scaling, adding jitter by randomly moving the image around, or adding constraints on the total activations.
Have a look here for inspiration:
https://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html
https://photos.google.com/share/AF1QipPX0SCl7OzWilt9LnuQliattX4OUCj_8EP65_cTVnBmS1jnYgsGQAieQUc1VQWdgQ?key=aVBxWjhwSzg2RjJWLWRuVFBBZEN1d205bUdEMnhB
https://mtyka.github.io/deepdream/2016/02/05/bilateral-class-vis.html
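Stripped down to its core, the dream function is nothing more than repeated gradient ascent on the input image. A rough sketch of the update rule (not the exact code):
# g   = d/d(img) [ mean(layer_activation) ]   # backprop the objective to the pixels
# img = img + step * g / max(|g|)             # normalized ascent step, repeated n_iterations times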
Let's stick the necessary bits in a function and try exploring how deep dream amplifies the representations of the chosen layers:
End of explanation
for feature_i in range(len(features)):
with tf.Session(graph=g) as sess, g.device(device):
# Get a feature layer
layer = g.get_tensor_by_name(features[feature_i])
# Find the gradient of this layer's mean activation
# with respect to the input image
gradient = tf.gradients(tf.reduce_mean(layer), x)
# Dream w/ our image
dream(img, gradient, step, net, x, n_iterations=n_iterations, plot_step=plot_step)
Explanation: Let's now try running Deep Dream for every feature, each of our 5 pooling layers. We'll need to get the layer corresponding to our feature. Then find the gradient of this layer's mean activation with respect to our input, x. Then pass these to our dream function. This can take a while (about 10 minutes using the CPU on my Macbook Pro).
End of explanation
noise = net['preprocess'](
np.random.rand(256, 256, 3) * 0.1 + 0.45)[np.newaxis]
Explanation: Instead of using an image, we can use an image of noise and see how it "hallucinates" the representations that the layer most responds to:
End of explanation
for feature_i in range(len(features)):
with tf.Session(graph=g) as sess, g.device(device):
# Get a feature layer
layer = ...
# Find the gradient of this layer's mean activation
# with respect to the input image
gradient = ...
# Dream w/ the noise image. Complete this!
dream(...)
Explanation: We'll do the same thing as before, now w/ our noise image:
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
# Load your own image here
og = ...
plt.imshow(og)
# Preprocess the image and make sure it is 4-dimensional by adding a new axis to the 0th dimension:
img = ...
assert(img.ndim == 4)
# Let's get the softmax layer
print(names[-2])
layer = g.get_tensor_by_name(names[-2] + ":0")
# And find its shape
with tf.Session(graph=g) as sess, g.device(device):
layer_shape = tf.shape(layer).eval(feed_dict={x:img})
# We can find out how many neurons it has by feeding it an image and
# calculating the shape. The number of output channels is the last dimension.
n_els = layer_shape[-1]
# Let's pick a label. First let's print out every label and then find one we like:
print(net['labels'])
Explanation: <a name="part-4---deep-dream-extensions"></a>
Part 4 - Deep Dream Extensions
As we saw in the lecture, we can also use the final softmax layer of a network during deep dream. This allows us to be explicit about the object we want hallucinated in an image.
<a name="using-the-softmax-layer"></a>
Using the Softmax Layer
Let's get another image to play with, preprocess it, and then make it 4-dimensional.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
# Pick a neuron. Or pick a random one. This should be 0-n_els
neuron_i = ...
print(net['labels'][neuron_i])
assert(neuron_i >= 0 and neuron_i < n_els)
# And we'll create an activation of this layer which is very close to 0
layer_vec = np.ones(layer_shape) / 100.0
# Except for the randomly chosen neuron which will be very close to 1
layer_vec[..., neuron_i] = 0.99
Explanation: <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
# Explore different parameters for this section.
n_iterations = 51
plot_step = 5
# If you use a different network, you will definitely need to experiment
# with the step size, as each network normalizes the input image differently.
step = 0.2
Explanation: Let's decide on some parameters of our deep dream. We'll need to decide how many iterations to run for. And we'll plot the result every few iterations, also saving it so that we can produce a GIF. And at every iteration, we need to decide how much to ascend our gradient.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
imgs = []
with tf.Session(graph=g) as sess, g.device(device):
gradient = tf.gradients(tf.reduce_max(layer), x)
# Copy the input image as we'll add the gradient to it in a loop
img_copy = img.copy()
with tf.Session(graph=g) as sess, g.device(device):
for it_i in range(n_iterations):
# This will calculate the gradient of the layer we chose with respect to the input image.
this_res = sess.run(gradient[0], feed_dict={
x: img_copy, layer: layer_vec})[0]
# Let's normalize it by the maximum activation
this_res /= (np.max(np.abs(this_res) + 1e-8))
# Or alternatively, we can normalize by standard deviation
# this_res /= (np.std(this_res) + 1e-8)
# Then add the gradient back to the input image
# Think about what this gradient represents?
# It says what direction we should move our input
# in order to meet our objective stored in "gradient"
img_copy += this_res * step
# Plot the image
if (it_i + 1) % plot_step == 0:
m = net['deprocess'](img_copy[0])
plt.figure(figsize=(5, 5))
plt.grid('off')
plt.imshow(m)
plt.show()
imgs.append(m)
# Save the gif
gif.build_gif(imgs, saveto='softmax.gif')
ipyd.Image(url='softmax.gif?i={}'.format(
np.random.rand()), height=300, width=300)
Explanation: Now let's dream. We're going to define a context manager to create a session and use our existing graph, and make sure we use the CPU device, as there is no gain in using GPU, and we have much more CPU memory than GPU memory.
End of explanation
n_iterations = 101
plot_step = 10
step = 0.1
crop = 1
imgs = []
n_imgs, height, width, *ch = img.shape
with tf.Session(graph=g) as sess, g.device(device):
# Explore changing the gradient here from max to mean
# or even try using different concepts we learned about
# when creating style net, such as using a total variational
# loss on `x`.
gradient = tf.gradients(tf.reduce_max(layer), x)
# Copy the input image as we'll add the gradient to it in a loop
img_copy = img.copy()
with tf.Session(graph=g) as sess, g.device(device):
for it_i in range(n_iterations):
# This will calculate the gradient of the layer
# we chose with respect to the input image.
this_res = sess.run(gradient[0], feed_dict={
x: img_copy, layer: layer_vec})[0]
# This is just one way we could normalize the
# gradient. It helps to look at the range of your image's
# values, e.g. if it is 0 - 1, or -115 to +115,
# and then consider the best way to normalize the gradient.
# For some networks, it might not even be necessary
# to perform this normalization, especially if you
# leave the dream to run for enough iterations.
# this_res = this_res / (np.std(this_res) + 1e-10)
this_res = this_res / (np.max(np.abs(this_res)) + 1e-10)
# Then add the gradient back to the input image
# Think about what this gradient represents?
# It says what direction we should move our input
# in order to meet our objective stored in "gradient"
img_copy += this_res * step
# Optionally, we could apply any number of regularization
# techniques... Try exploring different ways of regularizing
# the gradient ascent process. If you are adventurous, you can
# also explore changing the gradient above using a
# total variational loss, as we used in the style net
# implementation during the lecture. I leave that to you
# as an exercise!
# Crop a 1 pixel border from height and width
img_copy = img_copy[:, crop:-crop, crop:-crop, :]
# Resize (Note: in the lecture, we used scipy's resize which
# could not resize images outside of 0-1 range, and so we had
# to store the image ranges. This is a much simpler resize
# method that allows us to `preserve_range`.)
img_copy = resize(img_copy[0], (height, width), order=3,
clip=False, preserve_range=True
)[np.newaxis].astype(np.float32)
# Plot the image
if (it_i + 1) % plot_step == 0:
m = net['deprocess'](img_copy[0])
plt.grid('off')
plt.imshow(m)
plt.show()
imgs.append(m)
# Create a GIF
gif.build_gif(imgs, saveto='fractal.gif')
ipyd.Image(url='fractal.gif?i=2', height=300, width=300)
Explanation: <a name="fractal"></a>
Fractal
During the lecture we also saw a simple trick for creating an infinite fractal: crop the image and then resize it. This can produce some lovely aesthetics and really show some strong object hallucinations if left long enough and with the right parameters for step size/normalization/regularization. Feel free to experiment with the code below, adding your own regularizations as shown in the lecture to produce different results!
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
# Replace these with your own images!
guide_og = plt.imread(...)[..., :3]
dream_og = plt.imread(...)[..., :3]
assert(guide_og.ndim == 3 and guide_og.shape[-1] == 3)
assert(dream_og.ndim == 3 and dream_og.shape[-1] == 3)
Explanation: <a name="guided-hallucinations"></a>
Guided Hallucinations
Instead of following the gradient of an arbitrary mean or max of a particular layer's activation, or a particular object that we want to synthesize, we can also try to guide our image to look like another image. One way to try this is to take one image, the guide, and find the features at a particular layer or layers. Then, we take our synthesis image and find the gradient which makes its own layers' activations look like the guide image.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
guide_img = net['preprocess'](guide_og)[np.newaxis]
dream_img = net['preprocess'](dream_og)[np.newaxis]
fig, axs = plt.subplots(1, 2, figsize=(7, 4))
axs[0].imshow(guide_og)
axs[1].imshow(dream_og)
Explanation: Preprocess both images:
End of explanation
x = g.get_tensor_by_name(names[0] + ":0")
# Experiment with the weighting
feature_loss_weight = 1.0
with tf.Session(graph=g) as sess, g.device(device):
feature_loss = tf.Variable(0.0)
# Explore different layers/subsets of layers. This is just an example.
for feature_i in features[3:5]:
# Get the activation of the feature
layer = g.get_tensor_by_name(feature_i)
# Do the same for our guide image
guide_layer = sess.run(layer, feed_dict={x: guide_img})
# Now we need to measure how similar they are!
# We'll use the dot product, which requires us to first reshape both
# features to a 2D vector. But you should experiment with other ways
# of measuring similarity such as l1 or l2 loss.
# Reshape each layer to 2D vector
layer = tf.reshape(layer, [-1, 1])
guide_layer = guide_layer.reshape(-1, 1)
# Now calculate their dot product
correlation = tf.matmul(guide_layer.T, layer)
# And weight the loss by a factor so we can control its influence
feature_loss += feature_loss_weight * correlation
Explanation: Like w/ Style Net, we are going to measure how similar the features in the guide image are to the dream images. In order to do that, we'll calculate the dot product. Experiment with other measures such as l1 or l2 loss to see how this impacts the resulting Dream!
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
n_img, height, width, ch = dream_img.shape
# We'll weight the overall contribution of the total variational loss
# Experiment with this weighting
tv_loss_weight = 1.0
with tf.Session(graph=g) as sess, g.device(device):
# Penalize variations in neighboring pixels, enforcing smoothness
dx = tf.square(x[:, :height - 1, :width - 1, :] - x[:, :height - 1, 1:, :])
dy = tf.square(x[:, :height - 1, :width - 1, :] - x[:, 1:, :width - 1, :])
# We will calculate their difference raised to a power to push smaller
# differences closer to 0 and larger differences higher.
# Experiment w/ the power you raise this to to see how it effects the result
tv_loss = tv_loss_weight * tf.reduce_mean(tf.pow(dx + dy, 1.2))
Explanation: We'll now use another measure that we saw when developing Style Net during the lecture. This measures the pixel-to-pixel difference of neighboring pixels. When we optimize a gradient that makes the mean of these differences small, we are saying that we want neighboring pixels to be similar. This allows us to smooth our image in the same way that we did using the Gaussian to blur the image.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
# Experiment with the step size!
step = 0.1
imgs = []
with tf.Session(graph=g) as sess, g.device(device):
# Experiment with just optimizing the tv_loss or negative tv_loss to understand what it is doing!
gradient = tf.gradients(-tv_loss + feature_loss, x)
# Copy the input image as we'll add the gradient to it in a loop
img_copy = dream_img.copy()
with tf.Session(graph=g) as sess, g.device(device):
sess.run(tf.initialize_all_variables())
for it_i in range(n_iterations):
# This will calculate the gradient of the layer we chose with respect to the input image.
this_res = sess.run(gradient[0], feed_dict={x: img_copy})[0]
# Let's normalize it by the maximum activation
this_res /= (np.max(np.abs(this_res) + 1e-8))
# Or alternatively, we can normalize by standard deviation
# this_res /= (np.std(this_res) + 1e-8)
# Then add the gradient back to the input image
# Think about what this gradient represents?
# It says what direction we should move our input
# in order to meet our objective stored in "gradient"
img_copy += this_res * step
# Plot the image
if (it_i + 1) % plot_step == 0:
m = net['deprocess'](img_copy[0])
plt.figure(figsize=(5, 5))
plt.grid('off')
plt.imshow(m)
plt.show()
imgs.append(m)
gif.build_gif(imgs, saveto='guided.gif')
ipyd.Image(url='guided.gif?i=0', height=300, width=300)
Explanation: Now we train just like before, except we'll need to combine our two loss terms, feature_loss and tv_loss by simply adding them! The one thing we have to keep in mind is that we want to minimize the tv_loss while maximizing the feature_loss. That means we'll need to use the negative tv_loss and the positive feature_loss. As an experiment, try just optimizing the tv_loss and removing the feature_loss from the tf.gradients call. What happens?
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
sess.close()
tf.reset_default_graph()
# Stick w/ VGG for now, and then after you see how
# the next few sections work w/ this network, come back
# and explore the other networks.
net = vgg16.get_vgg_model()
# net = vgg16.get_vgg_face_model()
# net = inception.get_inception_model(version='v5')
# net = inception.get_inception_model(version='v3')
# net = i2v.get_i2v_model()
# net = i2v.get_i2v_tag_model()
# Let's explicity use the CPU, since we don't gain anything using the GPU
# when doing Deep Dream (it's only a single image, benefits come w/ many images).
device = '/cpu:0'
# We'll now explicitly create a graph
g = tf.Graph()
Explanation: <a name="further-explorations"></a>
Further Explorations
In the libs module, I've included a deepdream module which has two functions for performing Deep Dream and the Guided Deep Dream. Feel free to explore these to create your own deep dreams.
<a name="part-5---style-net"></a>
Part 5 - Style Net
We'll now work on creating our own style net implementation. We've seen all the steps for how to do this during the lecture, and you can always refer to the Lecture Transcript if you need to. I want you to explore using different networks and different layers in creating your content and style losses. This is completely unexplored territory so it can be frustrating to find things that work. Think of this as your empty canvas! If you are really stuck, you will find a stylenet implementation under the libs module that you can use instead.
Have a look here for inspiration:
https://mtyka.github.io/code/2015/10/02/experiments-with-style-transfer.html
http://kylemcdonald.net/stylestudies/
<a name="network"></a>
Network
Let's reset the graph and load up a network. I'll include code here for loading up any of our pretrained networks so you can explore each of them!
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
# And here is a context manager. We use the python "with" notation to create a context
# and create a session that only exists within this indent, as soon as we leave it,
# the session is automatically closed! We also tell the session which graph to use.
# We can pass a second context after the comma,
# which we'll use to be explicit about using the CPU instead of a GPU.
with tf.Session(graph=g) as sess, g.device(device):
# Now load the graph_def, which defines operations and their values into `g`
tf.import_graph_def(net['graph_def'], name='net')
Explanation: Let's now import the graph definition into our newly created Graph using a context manager and specifying that we want to use the CPU.
End of explanation
names = [op.name for op in g.get_operations()]
Explanation: Let's then grab the names of every operation in our network:
End of explanation
content_og = plt.imread('arles.png')[..., :3]
style_og = plt.imread('clinton.png')[..., :3]
fig, axs = plt.subplots(1, 2)
axs[0].imshow(content_og)
axs[0].set_title('Content Image')
axs[0].grid('off')
axs[1].imshow(style_og)
axs[1].set_title('Style Image')
axs[1].grid('off')
# We'll save these with a specific name to include in your submission
plt.imsave(arr=content_og, fname='content.png')
plt.imsave(arr=style_og, fname='style.png')
content_img = net['preprocess'](content_og)[np.newaxis]
style_img = net['preprocess'](style_og)[np.newaxis]
Explanation: Now we need an image for our content image and another one for our style image.
End of explanation
# Grab the tensor defining the input to the network
x = ...
# And grab the tensor defining the softmax layer of the network
softmax = ...
for img in [content_img, style_img]:
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
# Remember from the lecture that we have to set the dropout
# "keep probability" to 1.0.
res = softmax.eval(feed_dict={x: img,
'net/dropout_1/random_uniform:0': [[1.0]],
'net/dropout/random_uniform:0': [[1.0]]})[0]
print([(res[idx], net['labels'][idx])
for idx in res.argsort()[-5:][::-1]])
Explanation: Let's see what the network classifies these images as just for fun:
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
print(names)
Explanation: <a name="content-features"></a>
Content Features
We're going to need to find the layer or layers we want to use to help us define our "content loss". Recall from the lecture when we used VGG, we used the 4th convolutional layer.
End of explanation
# Experiment w/ different layers here. You'll need to change this if you
# use another network!
content_layer = 'net/conv3_2/conv3_2:0'
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
content_features = g.get_tensor_by_name(content_layer).eval(
session=sess,
feed_dict={x: content_img,
'net/dropout_1/random_uniform:0': [[1.0]],
'net/dropout/random_uniform:0': [[1.0]]})
Explanation: Pick a layer for using for the content features. If you aren't using VGG remember to get rid of the dropout stuff!
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
# Experiment with different layers and layer subsets. You'll need to change these
# if you use a different network!
style_layers = ['net/conv1_1/conv1_1:0',
'net/conv2_1/conv2_1:0',
'net/conv3_1/conv3_1:0',
'net/conv4_1/conv4_1:0',
'net/conv5_1/conv5_1:0']
style_activations = []
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
for style_i in style_layers:
style_activation_i = g.get_tensor_by_name(style_i).eval(
feed_dict={x: style_img,
'net/dropout_1/random_uniform:0': [[1.0]],
'net/dropout/random_uniform:0': [[1.0]]})
style_activations.append(style_activation_i)
Explanation: <a name="style-features"></a>
Style Features
Let's do the same thing now for the style features. We'll use more than 1 layer though so we'll append all the features in a list. If you aren't using VGG remember to get rid of the dropout stuff!
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
style_features = []
for style_activation_i in style_activations:
s_i = np.reshape(style_activation_i, [-1, style_activation_i.shape[-1]])
gram_matrix = np.matmul(s_i.T, s_i) / s_i.size
style_features.append(gram_matrix.astype(np.float32))
Explanation: Now we find the gram matrix which we'll use to optimize our features.
End of explanation
tf.reset_default_graph()
g = tf.Graph()
# Get the network again
net = vgg16.get_vgg_model()
# Load up a session which we'll use to import the graph into.
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
# We can set the `net_input` to our content image
# or perhaps another image
# or an image of noise
# net_input = tf.Variable(content_img / 255.0)
net_input = tf.get_variable(
name='input',
shape=content_img.shape,
dtype=tf.float32,
initializer=tf.random_normal_initializer(
mean=np.mean(content_img), stddev=np.std(content_img)))
# Now we load the network again, but this time replacing our placeholder
# with the trainable tf.Variable
tf.import_graph_def(
net['graph_def'],
name='net',
input_map={'images:0': net_input})
Explanation: <a name="remapping-the-input"></a>
Remapping the Input
We're almost done building our network. We just have to change the input to the network to become "trainable". Instead of a placeholder, we'll have a tf.Variable, which allows it to be trained. We could set this to the content image, another image entirely, or an image of noise. Experiment with all three options!
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
content_loss = tf.nn.l2_loss((g.get_tensor_by_name(content_layer) -
content_features) /
content_features.size)
Explanation: <a name="content-loss"></a>
Content Loss
In the lecture we saw that we'll simply find the l2 loss between our content layer features.
End of explanation
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
style_loss = np.float32(0.0)
for style_layer_i, style_gram_i in zip(style_layers, style_features):
layer_i = g.get_tensor_by_name(style_layer_i)
layer_shape = layer_i.get_shape().as_list()
layer_size = layer_shape[1] * layer_shape[2] * layer_shape[3]
layer_flat = tf.reshape(layer_i, [-1, layer_shape[3]])
gram_matrix = tf.matmul(tf.transpose(layer_flat), layer_flat) / layer_size
style_loss = tf.add(style_loss, tf.nn.l2_loss((gram_matrix - style_gram_i) / np.float32(style_gram_i.size)))
Explanation: <a name="style-loss"></a>
Style Loss
Instead of straight l2 loss on the raw feature activations, we're going to calculate the gram matrix and find the loss between these. Intuitively, this is finding what is common across all convolution filters, and trying to enforce the commonality between the synthesis and style image's gram matrix.
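As a quick shape check (the sizes here are just an example, not a particular VGG layer): an activation of shape 1 x H x W x C is flattened to (H*W) x C, and its gram matrix is the C x C matrix of channel co-activations, normalized by the number of entries:
feats = np.random.rand(1, 56, 56, 128).astype(np.float32)   # stand-in layer activation
flat = feats.reshape(-1, feats.shape[-1])                    # (56*56) x 128
gram = np.matmul(flat.T, flat) / flat.size                   # 128 x 128
The loss then compares the synthesis image's gram matrices to the style image's, one per chosen layer.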
End of explanation
def total_variation_loss(x):
h, w = x.get_shape().as_list()[1], x.get_shape().as_list()[2]
dx = tf.square(x[:, :h-1, :w-1, :] - x[:, :h-1, 1:, :])
dy = tf.square(x[:, :h-1, :w-1, :] - x[:, 1:, :w-1, :])
return tf.reduce_sum(tf.pow(dx + dy, 1.25))
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
tv_loss = total_variation_loss(net_input)
Explanation: <a name="total-variation-loss"></a>
Total Variation Loss
And just like w/ guided hallucinations, we'll try to enforce some smoothness using a total variation loss.
End of explanation
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
# Experiment w/ the weighting of these! They produce WILDLY different
# results.
loss = 5.0 * content_loss + 1.0 * style_loss + 0.001 * tv_loss
optimizer = tf.train.AdamOptimizer(0.05).minimize(loss)
Explanation: <a name="training"></a>
Training
We're almost ready to train! Let's just combine our three loss measures and stick it in an optimizer.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
imgs = []
n_iterations = 100
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
sess.run(tf.initialize_all_variables())
# Keep a copy of the initial (noise) input image
og_img = net_input.eval()
for it_i in range(n_iterations):
_, this_loss, synth = sess.run([optimizer, loss, net_input], feed_dict={
'net/dropout_1/random_uniform:0': np.ones(
g.get_tensor_by_name(
'net/dropout_1/random_uniform:0'
).get_shape().as_list()),
'net/dropout/random_uniform:0': np.ones(
g.get_tensor_by_name(
'net/dropout/random_uniform:0'
).get_shape().as_list())
})
print("%d: %f, (%f - %f)" %
(it_i, this_loss, np.min(synth), np.max(synth)))
if it_i % 5 == 0:
m = vgg16.deprocess(synth[0])
imgs.append(m)
plt.imshow(m)
plt.show()
gif.build_gif(imgs, saveto='stylenet.gif')
ipyd.Image(url='stylenet.gif?i=0', height=300, width=300)
Explanation: And now iterate! Feel free to play with the number of iterations or how often you save an image. If you use a different network to VGG, then you will not need to feed in the dropout parameters like I've done here.
End of explanation
utils.build_submission('session-4.zip',
('softmax.gif',
'fractal.gif',
'guided.gif',
'content.png',
'style.png',
'stylenet.gif',
'session-4.ipynb'))
Explanation: <a name="assignment-submission"></a>
Assignment Submission
After you've completed the notebook, create a zip file of the current directory using the code below. This code will make sure you have included this completed ipython notebook and the following files named exactly as:
<pre>
session-4/
session-4.ipynb
softmax.gif
fractal.gif
guided.gif
content.png
style.png
stylenet.gif
</pre>
You'll then submit this zip file for your third assignment on Kadenze for "Assignment 4: Deep Dream and Style Net"! Remember to complete the rest of the assignment, gallery commenting on your peers work, to receive full credit! If you have any questions, remember to reach out on the forums and connect with your peers or with me.
To get assessed, you'll need to be a premium student! This will allow you to build an online portfolio of all of your work and receive grades. If you aren't already enrolled as a student, register now at http://www.kadenze.com/ and join the #CADL community to see what your peers are doing! https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info
Also, if you share any of the GIFs on Facebook/Twitter/Instagram/etc..., be sure to use the #CADL hashtag so that other students can find your work!
End of explanation |
5,858 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Outline
The purpose of this notebook is to calculate zone entry and exit-related data for tracked games, and generate TOI and shot-related data for the same games.
Here's what we'll do
Step1: Aggregate NZ data
They're in a bunch of separate .xlsx files. We just aggregate them together.
Each excel file has three sheets we want
Step2: Now, let's check--do the sheets we're interested in (listed in cell above) contain the same columns?
Step3: They look like they're in decent shape--there's a lot of overlap. I'll keep items with at least 515 occurrences.
Step4: So now, we'll combine.
Step5: Here's what the dataframes look like
Step6: Augment with PBP/TOI
Let's look at how complete the shot data is. I'll take a 10% sample of Caps games and compare shot counts in the tracked data with the PBP.
First, the games selected
Step7: Here are the shot counts from the tracked data
Step8: Now, let's pull shot counts for those games from our scraped data.
Step9: The counts are pretty close, so we'll use the tracked data.
We still need TOI, though. I'll pull a second-by-second log into a dataframe.
Step10: One final item
Step11: Let's look at games included by team.
Step12: Let's reduce this to just Caps games now.
Step13: Let's convert numbers to names. I don't have a lookup table handy, so I'll do it by hand.
Step14: Entries per 60
Step15: Shots per entry
Luckily, Corey has a Fenwick-post-entry column. We'll just use that.
Step16: Exits per 60
Some of the relevant result codes | Python Code:
from os import listdir, chdir, getcwd
import pandas as pd
from pylab import *
from tqdm import tqdm # progress bar
%matplotlib inline
current_wd = getcwd()
Explanation: Outline
The purpose of this notebook is to calculate zone entry and exit-related data for tracked games, and generate TOI and shot-related data for the same games.
Here's what we'll do:
- Download NZ data (not included)
- Aggregate NZ data
- Augment with PBP/TOI data as needed
- Calculate metrics at team, individual, line, and pair levels, for and against. For example:
- Entries per 60
- Failed entries per 60
- Controlled entries per 60
- Controlled entry%
- Failed entry%
- Shots per entry
- Controlled exits per 60
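All of the "per 60" rates above follow the same convention: a raw event count scaled to 60 minutes of (here, 5v5) ice time. A minimal sketch of that calculation, with placeholder names rather than the actual tracked-data column headers:
def per_60(event_count, toi_seconds):
    # Rate per 60 minutes of ice time
    return event_count / toi_seconds * 3600
# e.g. 25 entries over 48 minutes of 5v5 TOI -> per_60(25, 48 * 60) = 31.25 entries/60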
End of explanation
# Want to combine all files
folders = ['/Users/muneebalam/Downloads/Game Reports 1718/',
#'/Users/muneebalam/Downloads/Passing Game Archive 1718/',
'/Users/muneebalam/Downloads/Game Reports 1617/']
sheets = {'shots': 'Shot Data', 'entries': 'Raw Entries', 'exits': 'Zone Exits Raw Data'}
copy = False
if copy:
for folder in folders:
chdir(folder)
files = listdir()
files = [f for f in files if f[-5:] == '.xlsx']
for file in tqdm(files, desc='Converting to csv'):
xl = pd.ExcelFile(file)
sheetnames = xl.sheet_names
for s in sheetnames:
df = xl.parse(s)
fout = '{0:s}_{1:s}.csv'.format(file[:-5], s)
df.to_csv(fout, index=False)
print('Done with', folder)
Explanation: Aggregate NZ data
They're in a bunch of separate .xlsx files. We just aggregate them together.
Each excel file has three sheets we want:
Shot Data
Raw Entries
Zone Exits Raw Data
First, let's copy everything to csv, which will make for faster file read-in later on.
End of explanation
colnames = {}
for skey, sval in sheets.items():
colnames[sval] = {}
for folder in folders:
chdir(folder)
files = listdir()
files = [f for f in files if f[f.rfind('_')+1:-4] == sval]
for file in tqdm(files, desc='Reading files'):
try:
cnames = pd.read_csv(file).columns
except Exception as e:
print(skey, sval, file, e, e.args)
continue
cnames = tuple(sorted(cnames))
if cnames not in colnames[sval]:
colnames[sval][cnames] = set()
colnames[sval][cnames].add('{0:s}-{1:s}'.format(folder[-5:-1], file[:5])) # Season and game number
print('Done with', skey, folder)
def intersect(*sets):
if len(sets) == 0:
return set()
if len(sets) == 1:
return sets[0]
if len(sets) == 2:
return set(sets[0]) & set(sets[1])
return set(sets[0]) & intersect(*tuple(sets[1:]))
for sval in colnames:
# Figure out column name frequency
colcount = {}
for clist in colnames[sval].keys():
for c in clist:
if c not in colcount:
colcount[c] = 0
colcount[c] += len(colnames[sval][clist])
colcount = [(k, v) for k, v in colcount.items()]
colcount = sorted(colcount, key = lambda x: x[1], reverse=True)
print(sval)
for k, v in colcount:
# Only ones with more than 200, for brevity's sake
if v >= 200:
print(k, v)
print('')
Explanation: Now, let's check--do the sheets we're interested in (listed in cell above) contain the same columns?
End of explanation
cols_to_keep = {}
for sval in colnames:
# Figure out column name frequency
colcount = {}
for clist in colnames[sval].keys():
for c in clist:
if c not in colcount:
colcount[c] = 0
colcount[c] += len(colnames[sval][clist])
cols_to_keep[sval] = [k for k, v in colcount.items() if v >= 515]
print(cols_to_keep)
Explanation: They look like they're in decent shape--there's a lot of overlap. I'll keep items with at least 515 occurrences.
End of explanation
dfs = {k: [] for k in sheets.keys()}
generate = False
for skey, sval in sheets.items():
fout = skey + ' combined.csv'
if generate:
for folder in folders:
chdir(folder)
files = listdir()
files = [f for f in files if f[f.rfind('_')+1:-4] == sval]
for file in tqdm(files, desc='Reading files'):
try:
df = pd.read_csv(file)
# Exclude columns I don't want
cols = set(df.columns) - set(cols_to_keep[sval])
df = df.drop(cols, axis=1, errors='ignore')
df = df.assign(Season=2000 + int(folder[-5:-3]), Game=int(file[:5]))
dfs[skey].append(df)
except Exception as e:
print(skey, sval, file, e, e.args)
continue
print('Done with', skey, folder)
dfs[skey] = pd.concat(dfs[skey])
dfs[skey].to_csv(fout, index=False)
dfs[skey] = pd.read_csv(fout)
chdir(current_wd)
print('Done aggregating and reading files')
Explanation: So now, we'll combine.
End of explanation
dfs['shots'].head()
dfs['entries'].head()
dfs['exits'].head()
Explanation: Here's what the dataframes look like:
End of explanation
np.random.seed(8) # Obv, for shots, pick Ovechkin
wsh_games = dfs['shots'].query('Team == "WSH"')[['Season', 'Game']].drop_duplicates().sort_values(['Season', 'Game'])
wsh_games.loc[:, 'RandomNum'] = np.random.randint(low=0, high=100, size=len(wsh_games))
wsh_games.loc[:, 'Selected'] = wsh_games.RandomNum.apply(lambda x: x <= 10)
wsh_games = wsh_games.query('Selected == True').drop('RandomNum', axis=1)
wsh_games.set_index(['Season', 'Game']) # Just for viewing purposes
Explanation: Augment with PBP/TOI
Let's look at how complete the shot data is. I'll take a 10% sample of Caps games and compare shot counts in the tracked data with the PBP.
First, the games selected:
End of explanation
wsh_shots = dfs['shots'].merge(wsh_games[['Season', 'Game']], how='inner', on=['Season', 'Game']) \
.query('Strength == "5v5"')
wsh_shots.loc[:, 'Team'] = wsh_shots.Team.apply(lambda x: x if x == 'WSH' else 'Opp')
wsh_shots = wsh_shots[['Season', 'Game', 'Team']].assign(Count=1) \
.groupby(['Season', 'Game', 'Team'], as_index=False) \
.count()
wsh_shots = wsh_shots.pivot_table(index=['Season', 'Game'], columns='Team', values='Count')
wsh_shots
Explanation: Here are the shot counts from the tracked data:
End of explanation
from scrapenhl2.scrape import teams
df1 = teams.get_team_pbp(2016, 'WSH').assign(Season=2016)
df2 = teams.get_team_pbp(2017, 'WSH').assign(Season=2017)
df3 = pd.concat([df1, df2]).merge(wsh_games[['Season', 'Game']], how='inner', on=['Season', 'Game'])
# Go to 5v5 only
from scrapenhl2.manipulate import manipulate as manip
df3 = manip.filter_for_five_on_five(manip.filter_for_corsi(df3))
counts = df3[['Season', 'Game', 'Team']]
counts.loc[:, 'Team'] = counts.Team.apply(lambda x: 'WSH' if x == 15 else 'Opp')
counts = counts.assign(Count=1) \
.groupby(['Season', 'Game', 'Team'], as_index=False) \
.count()
counts = counts.pivot_table(index=['Season', 'Game'], columns='Team', values='Count')
counts
Explanation: Now, let's pull shot counts for those games from our scraped data.
End of explanation
from scrapenhl2.scrape import players, team_info
dfs['toi'] = {}
team_convert = {'LA': 'LAK', 'NJ': 'NJD', 'TB': 'TBL', 'SJ': 'SJS',
'L.A': 'LAK', 'N.J': 'NJD', 'T.B': 'TBL', 'S.J': 'SJS'}
for team in dfs['shots'].Team.unique():
if team in team_convert:
team = team_convert[team]
if not isinstance(team, str) or len(team) != 3:
#print('Skipping', team)
continue
toi = []
for season in range(2016, 2018):
try:
toi.append(teams.get_team_toi(season, team).assign(Season=season))
except Exception as e:
print('Could not read', team, e, e.args)
toi = pd.concat(toi)
# Filter for appropriate games using an inner join, and filter for 5v5
toi = toi.merge(dfs['shots'][['Season', 'Game']].drop_duplicates(),
how='inner', on=['Season', 'Game'])
toi = manip.filter_for_five_on_five(toi)
# Get only certain columns
toi = toi[['Season', 'Game', 'Time', 'Team1', 'Team2', 'Team3', 'Team4', 'Team5']]
renaming = {'Team{0:d}'.format(i): '{0:s}{1:d}'.format(team, i) for i in range(1, 6)}
toi = toi.rename(columns=renaming)
# Convert to player names
for col in toi.columns[3:]:
toi.loc[:, col] = toi[col].apply(lambda x: players.player_as_str(x))
dfs['toi'][team] = toi
dfs['toi']['WSH'].head()
Explanation: The counts are pretty close, so we'll use the tracked data.
We still need TOI, though. I'll pull a second-by-second log into a dataframe.
End of explanation
fives = {}
from scrapenhl2.manipulate import add_onice_players as onice
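# The tracked Time strings have three colon-separated fields; the regex below keeps just the first two of them.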
dfs['shots'].loc[:, 'Time2'] = dfs['shots'].Time.str.extract(r'(\d{1,2}:\d{1,2}):\d{1,2}$')
dfs['shots'] = onice.add_times_to_file(dfs['shots'].dropna(subset=['Time']),
periodcol='Period', timecol='Time2', time_format='remaining')
fives['shots'] = dfs['shots']
for team, toi in dfs['toi'].items():
toi = toi.rename(columns={'Time': '_Secs'})
fives['shots'] = fives['shots'].merge(toi, how='left', on=['Season', 'Game', '_Secs'])
fives['shots'].head()
dfs['entries'].loc[:, 'Time2'] = dfs['entries'].Time.str.extract(r'(\d{1,2}:\d{1,2}):\d{1,2}$')
dfs['entries'] = onice.add_times_to_file(dfs['entries'].dropna(subset=['Time']),
periodcol='Period', timecol='Time2', time_format='remaining')
fives['entries'] = dfs['entries']
for team, toi in dfs['toi'].items():
toi = toi.rename(columns={'Time': '_Secs'})
fives['entries'] = fives['entries'].merge(toi, how='left', on=['Season', 'Game', '_Secs'])
fives['entries'].head()
dfs['exits'].loc[:, 'Time2'] = dfs['exits'].Time.str.extract(r'(\d{1,2}:\d{1,2}):\d{1,2}$')
dfs['exits'] = onice.add_times_to_file(dfs['exits'].dropna(subset=['Time']),
periodcol='Period', timecol='Time2', time_format='remaining')
fives['exits'] = dfs['exits']
for team, toi in dfs['toi'].items():
toi = toi.rename(columns={'Time': '_Secs'})
fives['exits'] = fives['exits'].merge(toi, how='left', on=['Season', 'Game', '_Secs'])
fives['exits'].head()
Explanation: One final item: let's filter the entries, exits, and shots dataframes to 5v5 using this TOI data.
End of explanation
teamcolnames = [x for x in fives['shots'].columns if x == x.upper() and len(x) == 4 and x[-1] == '1']
gps = []
for teamcol in teamcolnames:
gp = len(dfs['toi'][teamcol[:-1]][['Season', 'Game']].drop_duplicates())
print(teamcol[:3], gp)
gps.append(gp)
print('Total', sum(gps))
Explanation: Let's look at games included by team.
End of explanation
wsh = {key: fives[key].dropna(subset=['WSH1']) for key in ['shots', 'entries', 'exits']}
wsh['shots'].head()
Explanation: Let's reduce this to just Caps games now.
End of explanation
wsh['shots'][['WSH1', 'WSH2', 'WSH3', 'WSH4', 'WSH5']] \
.melt(var_name='P', value_name='Name') \
['Name'].value_counts()
wsh_players = {'Alex Ovechkin': 8, 'Nicklas Backstrom': 19, 'Andre Burakovsky': 65,
'T.J. Oshie': 77, 'John Carlson': 74, 'Evgeny Kuznetsov': 92,
'Dmitry Orlov': 9, 'Christian Djoos': 29, 'Devante Smith-Pelly': 25,
'Jay Beagle': 83, 'Brooks Orpik': 44, 'Chandler Stephenson': 18,
'Jakub Vrana': 13, 'Tom Wilson': 43, 'Lars Eller': 20,
'Alex Chiasson': 39, 'Brett Connolly': 10, 'Madison Bowey': 22, 'Kevin Shattenkirk': 22,
'Tyler Graovac': 91, '': 36, 'Matt Niskanen': 2,
'Aaron Ness': 55, 'Nathan Walker': 79, 'Taylor Chorney': 4,
'Zach Sanford': 12, 'Karl Alzner': 27, 'Marcus Johansson': 90,
'Zachary Sanford': 12,
'Justin Williams': 14, 'Daniel Winnik': 26, 'Nate Schmidt': 88}
for col in ['WSH1', 'WSH2', 'WSH3', 'WSH4', 'WSH5']:
for key in ['shots', 'entries', 'exits']:
wsh[key].loc[:, col] = wsh[key][col].apply(lambda x: str(wsh_players[x]) + 'WSH' if x in wsh_players else x)
wsh['shots'].head()
Explanation: Let's convert numbers to names. I don't have a lookup table handy, so I'll do it by hand.
End of explanation
# Team comparison
# Drop extra team cols
allteamcols = [x for x in fives['entries'].columns if not x.upper() == x]
allteams = fives['entries'][allteamcols]
# Remove fails and faceoffs
allteams = allteams[pd.isnull(allteams.Fail)]
allteams = allteams[allteams['Entry type'] != 'FAC']
# Counts by game and team
allteams = allteams[['Season', 'Game', 'Entry by', 'Entry type']]
# Extract ending text part of entry.
import re
def extract_team(string):
result = re.search('\d*(\w{2,3})$', str(string))
if result:
return result.group(1)
return string
allteams.loc[:, 'Team'] = allteams['Entry by'].apply(lambda x: extract_team(x))
# Get rid of typo teams -- about 100 in 77000
valid_teams = set(allteams.Team.value_counts().index[:31])
allteams = allteams[allteams.Team.apply(lambda x: x in valid_teams)]
allteams = allteams.drop('Entry by', axis=1) \
.assign(Count=1) \
.groupby(['Season', 'Game', 'Team', 'Entry type'], as_index=False) \
.count()
# Add opp numbers
gametotals = allteams.drop('Team', axis=1) \
.groupby(['Season', 'Game', 'Entry type'], as_index=False) \
.sum() \
.rename(columns={'Count': 'OppCount'})
fives['toi'] = pd.concat([dfs['toi'][team] \
[['Season', 'Game']] \
.assign(TOI=1) \
.groupby(['Season', 'Game'], as_index=False) \
.count() for team in dfs['toi']]).drop_duplicates()
# Sum by season and calculate per 60
allteams = allteams.merge(fives['toi'], how='left', on=['Season', 'Game'])
allteams = allteams.merge(gametotals, how='inner', on=['Season', 'Game', 'Entry type']) \
.drop('Game', axis=1) \
.groupby(['Season', 'Entry type', 'Team'], as_index=False) \
.sum()
allteams.loc[:, 'OppCount'] = allteams['OppCount'] - allteams['Count']
allteams.loc[:, 'Per60'] = allteams['Count'] / (allteams.TOI / 3600)
allteams.loc[:, 'OppPer60'] = allteams['OppCount'] / (allteams.TOI / 3600)
allteams.head()
f = figure(figsize=[8, 8])
tmp = allteams[allteams['Entry type'] != 'X'] \
.drop(['Count', 'OppCount', 'TOI'], axis=1) \
.pivot_table(index=['Entry type', 'Team'], columns='Season', values='Per60') \
.reset_index()
for etype in tmp['Entry type'].unique():
tmp2 = tmp[tmp['Entry type'] == etype]
scatter(tmp2.loc[:, 2016].values, tmp2.loc[:, 2017].values, label=etype, s=200, alpha=0.5)
for s, etype, t, r1, r2 in tmp.itertuples():
annotate(t, xy=(r1, r2), ha='center', va='center')
from scrapenhl2.plot import visualization_helper as vhelper
vhelper.add_good_bad_fast_slow(bottomleft='Bad', topleft='Improved', topright='Good', bottomright='Declined')
vhelper.add_cfpct_ref_lines_to_plot(ax=gca(), refs=[50])
title('Team 5v5 entry rate, 2016 vs 2017')
xlabel('2016')
ylabel('2017')
legend(loc=2, bbox_to_anchor=(1, 1))
f = figure(figsize=[8, 8])
tmp = allteams[allteams['Entry type'] != 'X'] \
.drop(['Count', 'OppCount', 'TOI'], axis=1) \
.pivot_table(index=['Season', 'Team'], columns='Entry type', values='Per60') \
.reset_index()
tmp.loc[:, 'Ctrl%'] = tmp.C / (tmp.C + tmp.D)
tmp = tmp.drop(['C', 'D'], axis=1)
tmp = tmp.pivot_table(index='Team', columns='Season', values='Ctrl%').reset_index()
scatter(tmp.loc[:, 2016].values, tmp.loc[:, 2017].values, s=200, alpha=0.5)
for i, t, r1, r2 in tmp.itertuples():
annotate(t, xy=(r1, r2), ha='center', va='center')
vhelper.add_good_bad_fast_slow(bottomleft='Bad', topleft='Improved', topright='Good', bottomright='Declined')
vhelper.add_cfpct_ref_lines_to_plot(ax=gca(), refs=[50])
title('Team 5v5 controlled entry rate, 2016 vs 2017')
xlabel('2016')
ylabel('2017')
fig, axes = subplots(1, 2, sharex=True, sharey=True, figsize=[12, 6])
for i, season in enumerate(allteams.Season.unique()):
tmp = allteams[(allteams.Season == season) & (allteams['Entry type'] != 'X')]
for etype in tmp['Entry type'].unique():
tmp2 = tmp[tmp['Entry type'] == etype]
axes[i].scatter(tmp2.Per60.values, tmp2.OppPer60.values, s=250, alpha=0.3, label=etype)
axes[i].set_title('Entries for and against, {0:d}'.format(season))
axes[i].set_xlabel('Entries per 60')
if i == 0:
axes[i].set_ylabel('Entries against per 60')
for _, t, r1, r2 in tmp[['Team', 'Per60', 'OppPer60']].itertuples():
axes[i].annotate(t, xy=(r1, r2), ha='center', va='center')
vhelper.add_good_bad_fast_slow(bottomleft='NZ Jam', topleft='Bad', topright='Racetrack', bottomright='Good')
vhelper.add_cfpct_ref_lines_to_plot(ax=axes[i])
legend(loc=2, bbox_to_anchor=(1, 1))
tmp = allteams[allteams['Entry type'] != 'X'] \
.drop(['Count', 'OppCount', 'TOI'], axis=1) \
.melt(id_vars=['Season', 'Entry type', 'Team']) \
.pivot_table(index=['Season', 'Team', 'variable'], columns='Entry type', values='value') \
.reset_index()
tmp.loc[:, 'CE%'] = tmp.C / (tmp.C + tmp.D)
tmp = tmp.drop(['C', 'D'], axis=1) \
.pivot_table(index=['Season', 'Team'], columns='variable', values='CE%') \
.rename(columns={'Per60': 'TeamCE%', 'OppPer60': 'OppCE%'}) \
.reset_index()
fig, axes = subplots(1, 2, sharex=True, sharey=True, figsize=[12, 6])
for i, season in enumerate(tmp.Season.unique()):
tmp2 = tmp[(tmp.Season == season)]
axes[i].scatter(tmp2['TeamCE%'].values, tmp2['OppCE%'].values, s=250, alpha=0.3)
axes[i].set_title('Controlled entries for and against, {0:d}'.format(season))
axes[i].set_xlabel('Team CE%')
if i == 0:
axes[i].set_ylabel('Opp CE%')
for _, t, r1, r2 in tmp2[['Team', 'TeamCE%', 'OppCE%']].itertuples():
axes[i].annotate(t, xy=(r1, r2), ha='center', va='center')
vhelper.add_good_bad_fast_slow(bottomleft='NZ Jam', topleft='Bad', topright='Racetrack', bottomright='Good')
vhelper.add_cfpct_ref_lines_to_plot(ax=axes[i])
legend(loc=2, bbox_to_anchor=(1, 1))
entries = wsh['entries']
# Drop extra team cols
colnames = [x for x in entries.columns if not x.upper() == x or x[:3] == 'WSH']
entries = entries[colnames]
# Remove fails
entries = entries[pd.isnull(entries.Fail)]
# Flag entries as WSH or Opp
entries.loc[:, 'Team'] = entries['Entry by'].apply(lambda x: 'WSH' if str(x)[-3:] == 'WSH' else 'Opp')
# Remove faceoffs
# entries2 = entries
entries2 = entries[entries['Entry type'] != 'FAC']
# Melt to long
idvars = [x for x in entries2.columns if x not in ['WSH1', 'WSH2', 'WSH3', 'WSH4', 'WSH5']]
entries2 = entries2.melt(id_vars=idvars, value_name='Player').sort_values(['Season', 'Game', '_Secs', 'variable'])
entries2.head()
# Season level
# Count by entry type
entries60 = entries2[['Season', 'Entry type', 'Team', 'Game', 'Period', 'Time']] \
.drop_duplicates() \
[['Season', 'Entry type', 'Team']] \
.assign(Count=1) \
.groupby(['Season', 'Entry type', 'Team'], as_index=False) \
.count()
# Add TOI
toi = dfs['toi']['WSH'] \
[['Season']].assign(TOI=1) \
.groupby('Season', as_index=False).count()
entries60 = entries60.merge(toi, how='left', on='Season')
entries60.loc[:, 'Per60'] = entries60.Count / (entries60.TOI / 3600)
entries60
tmp = entries60.assign(Height=entries60.Per60).sort_values(['Season', 'Team', 'Entry type'])
tmp.loc[:, 'Left'] = tmp.Team.apply(lambda x: 0 if x == 'WSH' else 1) + tmp.Season.apply(lambda x: 0 if x == 2016 else 0.4)
tmp.loc[:, 'Bottom'] = tmp.groupby(['Season', 'Team']).Height.cumsum() - tmp.Height
for etype in tmp['Entry type'].unique():
tmp2 = tmp[tmp['Entry type'] == etype]
bar(left=tmp2.Left.values, height=tmp2.Height.values, bottom=tmp2.Bottom.values, label=etype, width=0.3)
xlabs = tmp.drop_duplicates(subset=['Season', 'Team', 'Left'])
xticks([x for x in xlabs.Left], ['{0:d} {1:s}'.format(s, t) for s, t in zip(xlabs.Season, xlabs.Team)])
legend(loc=2, bbox_to_anchor=(1, 1))
title('Zone entries per 60, Caps vs opponents')
# Individual level, on-ice
# Count by entry type
entries60 = entries2[['Season', 'Entry type', 'Team', 'Game', 'Period', 'Time', 'Player']] \
[['Season', 'Entry type', 'Team', 'Player']] \
.assign(Count=1) \
.groupby(['Season', 'Entry type', 'Team', 'Player'], as_index=False) \
.count() \
.pivot_table(index=['Season', 'Entry type', 'Player'], columns='Team', values='Count') \
.reset_index()
# Add TOI
toi = dfs['toi']['WSH'] \
[['Season', 'WSH1', 'WSH2', 'WSH3', 'WSH4', 'WSH5']] \
.melt(id_vars='Season', value_name='Player') \
.drop('variable', axis=1) \
.assign(TOI=1) \
.groupby(['Season', 'Player'], as_index=False).count()
toi.loc[:, 'Player'] = toi['Player'].apply(lambda x: str(wsh_players[x]) + 'WSH' if x in wsh_players else x)
entries60 = entries60.merge(toi, how='left', on=['Season', 'Player']) \
.sort_values(['Player', 'Season', 'Entry type'])
entries60.loc[:, 'WSH60'] = entries60.WSH / (entries60.TOI / 3600)
entries60.loc[:, 'Opp60'] = entries60.Opp / (entries60.TOI / 3600)
entries60
title('Entry%, 2016 vs 2017')
xlabel('2016')
ylabel('2017')
tmp = entries60[['Season', 'Player', 'WSH', 'Opp']] \
.groupby(['Season', 'Player', 'WSH', 'Opp'], as_index=False) \
.sum()
tmp.loc[:, 'Entry%'] = tmp.WSH / (tmp.WSH + tmp.Opp)
tmp = tmp.drop(['WSH', 'Opp'], axis=1)
tmp = tmp.pivot_table(index='Player', columns='Season', values='Entry%')
scatter(tmp.loc[:, 2016].values, tmp.loc[:, 2017].values, s=200, alpha=0.5)
for p, e1, e2 in tmp.itertuples():
annotate(p[:-3], xy=(e1, e2), ha='center', va='center')
vhelper.add_good_bad_fast_slow(bottomleft='Bad', topleft='Improved', topright='Good', bottomright='Declined')
vhelper.add_cfpct_ref_lines_to_plot(ax=gca(), refs=[50])
title('Entries per 60, 2016 vs 2017')
xlabel('2016')
ylabel('2017')
tmp = entries60[['Season', 'Player', 'WSH60']] \
.groupby(['Season', 'Player'], as_index=False) \
.sum()
tmp = tmp.pivot_table(index='Player', columns='Season', values='WSH60')
scatter(tmp.loc[:, 2016].values, tmp.loc[:, 2017].values, s=200, alpha=0.5)
for p, e1, e2 in tmp.itertuples():
annotate(p[:-3], xy=(e1, e2), ha='center', va='center')
vhelper.add_good_bad_fast_slow(bottomleft='Bad', topleft='Improved', topright='Good', bottomright='Declined')
vhelper.add_cfpct_ref_lines_to_plot(ax=gca(), refs=[50])
title('Entries against per 60, 2016 vs 2017')
xlabel('2016')
ylabel('2017')
tmp = entries60[['Season', 'Player', 'Opp60']] \
.groupby(['Season', 'Player'], as_index=False) \
.sum()
tmp = tmp.pivot_table(index='Player', columns='Season', values='Opp60')
scatter(tmp.loc[:, 2016].values, tmp.loc[:, 2017].values, s=200, alpha=0.5)
for p, e1, e2 in tmp.itertuples():
annotate(p[:-3], xy=(e1, e2), ha='center', va='center')
vhelper.add_good_bad_fast_slow(bottomleft='Good', topleft='Declined', topright='Bad', bottomright='Improved')
vhelper.add_cfpct_ref_lines_to_plot(ax=gca(), refs=[50])
ylim(bottom=xlim()[0])
xlim(right=ylim()[1])
sums = entries60[['Season', 'Player', 'WSH', 'Opp']] \
.groupby(['Season', 'Player'], as_index=False) \
.sum()
cep = entries60[entries60['Entry type'] == 'C'] \
[['Season', 'Player', 'WSH', 'Opp']] \
.merge(sums, how='inner', on=['Season', 'Player'], suffixes=['', '_Tot'])
cep.loc[:, 'CE%'] = cep.WSH / (cep.WSH + cep.WSH_Tot)
cep.loc[:, 'Opp CE%'] = cep.Opp / (cep.Opp + cep.Opp_Tot)
ce = cep[['Season', 'Player', 'CE%']] \
.pivot_table(index='Player', columns='Season', values='CE%')
oppce = cep[['Season', 'Player', 'Opp CE%']] \
.pivot_table(index='Player', columns='Season', values='Opp CE%')
title('On-ice controlled entry%, 2016 vs 2017')
xlabel('2016')
ylabel('2017')
scatter(ce.loc[:, 2016].values, ce.loc[:, 2017].values, s=200, alpha=0.5)
for p, e1, e2 in ce.itertuples():
annotate(p[:-3], xy=(e1, e2), ha='center', va='center')
vhelper.add_good_bad_fast_slow(bottomleft='Bad', topleft='Improved', topright='Good', bottomright='Declined')
vhelper.add_cfpct_ref_lines_to_plot(ax=gca(), refs=[50])
title('Opp on-ice controlled entry%, 2016 vs 2017')
xlabel('2016')
ylabel('2017')
scatter(oppce.loc[:, 2016].values, oppce.loc[:, 2017].values, s=200, alpha=0.5)
for p, e1, e2 in oppce.itertuples():
annotate(p[:-3], xy=(e1, e2), ha='center', va='center')
vhelper.add_good_bad_fast_slow(bottomleft='Good', topleft='Declined', topright='Bad', bottomright='Improved')
vhelper.add_cfpct_ref_lines_to_plot(ax=gca(), refs=[50])
xlim(left=0.2)
# Individual level, individual
# Count by entry type
ientries60 = entries2[pd.notnull(entries2['Entry by'])]
ientries60 = ientries60[ientries60['Entry by'].str.contains('WSH')] \
[['Season', 'Entry type', 'Team', 'Game', 'Period', 'Time', 'Entry by', 'Fen total']] \
.drop_duplicates() \
[['Season', 'Entry type', 'Team', 'Entry by', 'Fen total']] \
.assign(Count=1) \
.groupby(['Season', 'Entry type', 'Team', 'Entry by'], as_index=False) \
.count() \
.pivot_table(index=['Season', 'Entry type', 'Entry by'], columns='Team', values='Count') \
.reset_index()
# Add TOI
toi = dfs['toi']['WSH'] \
[['Season', 'WSH1', 'WSH2', 'WSH3', 'WSH4', 'WSH5']] \
.melt(id_vars='Season', value_name='Entry by') \
.drop('variable', axis=1) \
.assign(TOI=1) \
.groupby(['Season', 'Entry by'], as_index=False).count()
toi.loc[:, 'Entry by'] = toi['Entry by'].apply(lambda x: str(wsh_players[x]) + 'WSH' if x in wsh_players else x)
ientries60 = ientries60.merge(toi, how='left', on=['Season', 'Entry by']) \
.sort_values(['Entry by', 'Season', 'Entry type'])
ientries60.loc[:, 'WSH60'] = ientries60.WSH / (ientries60.TOI / 3600)
ientries60.drop(['TOI', 'WSH'], axis=1).pivot_table(index=['Entry by', 'Season'], columns='Entry type', values='WSH60')
tmp = entries60[['Season', 'Entry type', 'Player', 'WSH']] \
.merge(ientries60[['Season', 'Entry type', 'Entry by', 'WSH']] \
.rename(columns={'Entry by': 'Player', 'WSH': 'iWSH'}),
how='left', on=['Season', 'Entry type', 'Player']) \
.drop('Entry type', axis=1) \
.groupby(['Season', 'Player'], as_index=False) \
.sum()
tmp.loc[:, 'iE%'] = tmp.iWSH / tmp.WSH
tmp = tmp.drop({'WSH', 'iWSH'}, axis=1) \
.pivot_table(index='Player', columns='Season', values='iE%') \
.fillna(0)
title('Individual entry share, 2016 vs 2017')
xlabel('2016')
ylabel('2017')
scatter(tmp.loc[:, 2016].values, tmp.loc[:, 2017].values, s=200, alpha=0.5)
for p, e1, e2 in tmp.itertuples():
annotate(p[:-3], xy=(e1, e2), ha='center', va='center')
vhelper.add_good_bad_fast_slow(bottomleft='Bad', topleft='Improved', topright='Good', bottomright='Declined')
vhelper.add_cfpct_ref_lines_to_plot(ax=gca(), refs=[50])
Explanation: Entries per 60
End of explanation
# Season level
# Count by entry type
spe = entries2[['Season', 'Entry type', 'Team', 'Game', 'Period', 'Time', 'Fen total']] \
.drop_duplicates() \
[['Season', 'Entry type', 'Team', 'Fen total']] \
.groupby(['Season', 'Entry type', 'Team'], as_index=False) \
.mean() \
.pivot_table(index=['Season', 'Entry type'], columns='Team', values='Fen total') \
.reset_index() \
.sort_values(['Entry type', 'Season'])
spe
# Individual level, on-ice
# Count by entry type
spe = entries2[['Season', 'Entry type', 'Team', 'Game', 'Period', 'Time', 'Player', 'Fen total']] \
[['Season', 'Entry type', 'Team', 'Player', 'Fen total']] \
.assign(Count=1) \
.groupby(['Season', 'Entry type', 'Team', 'Player'], as_index=False) \
.mean() \
.pivot_table(index=['Season', 'Entry type', 'Player'], columns='Team', values='Fen total') \
.reset_index() \
.sort_values(['Player', 'Season', 'Entry type'])
spe
tmp = spe[spe['Entry type'] != 'X'] \
.drop('Opp', axis=1) \
.pivot_table(index=['Player', 'Entry type'], columns='Season', values='WSH') \
.fillna(0) \
.reset_index()
for etype in tmp['Entry type'].unique():
tmp2 = tmp[tmp['Entry type'] == etype]
scatter(tmp2.loc[:, 2016].values, tmp2.loc[:, 2017].values, label=etype, s=200, alpha=0.5)
for s, p, etype, e1, e2 in tmp.itertuples():
annotate(p[:-3], xy=(e1, e2), ha='center', va='center')
vhelper.add_good_bad_fast_slow(bottomleft='Bad', topleft='Improved', topright='Good', bottomright='Declined')
vhelper.add_cfpct_ref_lines_to_plot(ax=gca(), refs=[50])
title('On-ice shots per entry')
xlabel('2016')
ylabel('2017')
legend(loc=2, bbox_to_anchor=(1, 1))
tmp = spe[spe['Entry type'] != 'X'] \
.drop('WSH', axis=1) \
.pivot_table(index=['Player', 'Entry type'], columns='Season', values='Opp') \
.fillna(0) \
.reset_index()
for etype in tmp['Entry type'].unique():
tmp2 = tmp[tmp['Entry type'] == etype]
scatter(tmp2.loc[:, 2016].values, tmp2.loc[:, 2017].values, label=etype, s=200, alpha=0.5)
for s, p, etype, e1, e2 in tmp.itertuples():
annotate(p[:-3], xy=(e1, e2), ha='center', va='center')
vhelper.add_good_bad_fast_slow(bottomleft='Good', topleft='Declined', topright='Bad', bottomright='Improved')
vhelper.add_cfpct_ref_lines_to_plot(ax=gca(), refs=[50])
title('On-ice shots against per entry')
xlabel('2016')
ylabel('2017')
legend(loc=2, bbox_to_anchor=(1, 1))
# Individual level, individual
# Count by entry type
spe = entries2[pd.notnull(entries2['Entry by'])]
spe = spe[spe['Entry by'].str.contains('WSH')] \
[['Season', 'Entry type', 'Team', 'Game', 'Period', 'Time', 'Entry by', 'Fen total']] \
.drop_duplicates() \
[['Season', 'Entry type', 'Team', 'Entry by', 'Fen total']] \
.assign(Count=1) \
.groupby(['Season', 'Entry type', 'Team', 'Entry by'], as_index=False) \
.mean() \
.pivot_table(index=['Season', 'Entry type', 'Entry by'], columns='Team', values='Fen total') \
.reset_index() \
.sort_values(['Entry by', 'Season', 'Entry type'])
spe.pivot_table(index=['Entry by', 'Season'], columns='Entry type', values='WSH')
Explanation: Shots per entry
Luckily, Corey has a Fenwick-post-entry column. We'll just use that.
End of explanation
exits = wsh['exits']
# Drop extra team cols
colnames = [x for x in exits.columns if not x.upper() == x or x[:3] == 'WSH']
exits = exits[colnames]
# Flag exits as WSH or Opp
exits.loc[:, 'Team'] = exits['Attempt'].apply(lambda x: 'WSH' if str(x)[-3:] == 'WSH' else 'Opp')
# Melt to long
idvars = [x for x in exits.columns if x not in ['WSH1', 'WSH2', 'WSH3', 'WSH4', 'WSH5']]
exits2 = exits.melt(id_vars=idvars, value_name='Player').sort_values(['Season', 'Game', '_Secs', 'variable'])
exits2.head()
# Season level
# Count by exit type
exits60 = exits2[['Season', 'Result', 'Team', 'Game', 'Period', 'Time']] \
.drop_duplicates() \
[['Season', 'Result', 'Team']] \
.assign(Count=1) \
.groupby(['Season', 'Result', 'Team'], as_index=False) \
.count() \
.pivot_table(index=['Season', 'Result'], columns='Team', values='Count') \
.reset_index() \
.sort_values(['Result', 'Season'])
# Add TOI
toi = dfs['toi']['WSH'] \
[['Season']].assign(TOI=1) \
.groupby('Season', as_index=False).count()
exits60 = exits60.merge(toi, how='left', on='Season')
exits60.loc[:, 'WSH60'] = exits60.WSH / (exits60.TOI / 3600)
exits60.loc[:, 'Opp60'] = exits60.Opp / (exits60.TOI / 3600)
exits60
# Individual level, on-ice
# Count by exit type
exits60 = exits2[['Season', 'Result', 'Team', 'Game', 'Period', 'Time', 'Player']] \
[['Season', 'Result', 'Team', 'Player']] \
.assign(Count=1) \
.groupby(['Season', 'Result', 'Team', 'Player'], as_index=False) \
.count() \
.pivot_table(index=['Season', 'Result', 'Player'], columns='Team', values='Count') \
.reset_index() \
.sort_values(['Result', 'Season', 'Player'])
# Add TOI
toi = dfs['toi']['WSH'] \
[['Season', 'WSH1', 'WSH2', 'WSH3', 'WSH4', 'WSH5']] \
.melt(id_vars='Season', value_name='Player') \
.drop('variable', axis=1) \
.assign(TOI=1) \
.groupby(['Season', 'Player'], as_index=False).count()
toi.loc[:, 'Player'] = toi['Player'].apply(lambda x: str(wsh_players[x]) + 'WSH' if x in wsh_players else x)
exits60 = exits60.merge(toi, how='left', on=['Season', 'Player'])
exits60.loc[:, 'WSH60'] = exits60.WSH / (exits60.TOI / 3600)
exits60.loc[:, 'Opp60'] = exits60.Opp / (exits60.TOI / 3600)
exits60
# Individual level, individual
# Count by exit type
exits60 = exits2[exits2.Attempt.str.contains('WSH', na=False)] \
.query('Game == 20502') \
[['Season', 'Result', 'Team', 'Game', 'Period', 'Time', 'Attempt']] \
.drop_duplicates() \
[['Season', 'Result', 'Team', 'Attempt']] \
.assign(Count=1) \
.groupby(['Season', 'Result', 'Team', 'Attempt'], as_index=False) \
.count() \
.pivot_table(index=['Season', 'Result', 'Attempt'], columns='Team', values='Count') \
.reset_index() \
.sort_values(['Result', 'Season', 'Attempt'])
# Add TOI
toi = dfs['toi']['WSH'] \
[['Season', 'WSH1', 'WSH2', 'WSH3', 'WSH4', 'WSH5']] \
.melt(id_vars='Season', value_name='Attempt') \
.drop('variable', axis=1) \
.assign(TOI=1) \
.groupby(['Season', 'Attempt'], as_index=False).count()
toi.loc[:, 'Attempt'] = toi['Attempt'].apply(lambda x: str(wsh_players[x]) + 'WSH' if x in wsh_players else x)
exits60 = exits60.merge(toi, how='left', on=['Season', 'Attempt'])
exits60.loc[:, 'WSH60'] = exits60.WSH / (exits60.TOI / 3600)
exits60.pivot_table(index=['Attempt', 'Season'], columns='Result', values='WSH60')
Explanation: Exits per 60
Some of the relevant result codes:
P for pass
C for carry
D for dump
M for clear
F for fail
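For reference, these codes can be collected into a small lookup when labelling tables or plots (a sketch; any other codes in the raw data would simply map to NaN):
result_labels = {'P': 'pass', 'C': 'carry', 'D': 'dump', 'M': 'clear', 'F': 'fail'}
readable_results = exits2['Result'].map(result_labels)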
End of explanation |
5,859 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Accompanying code examples of the book "Introduction to Artificial Neural Networks and Deep Learning
Step1: Model Zoo -- Saving and Loading Trained Models
from TensorFlow Checkpoint Files and NumPy NPZ Archives
This notebook demonstrates different strategies on how to export and import trained TensorFlow models based on a simple 2-hidden layer multilayer perceptron. These include
Using regular TensorFlow meta and checkpoint files
Loading variables from NumPy archives (.npz) files
Note that the graph def is going to be set up in a way that it constructs a "rigid," non-trainable TensorFlow classifier if .npz files are provided. This is on purpose, since it may come in handy in certain use cases, but the code can be easily modified to make the model trainable if NumPy .npz files are provided -- for example, by wrapping the tf.constant calls in fc_layer in a tf.Variable constructor like so
Step2: Train and Save Multilayer Perceptron
Step3: Reload Model from Meta and Checkpoint Files
You can restart the notebook, and the following code cells should execute without any additional code dependencies.
Step4: Working with NumPy Archive Files and Creating Non-Trainable Graphs
Export Model Parameters to NumPy NPZ files
Step5: Load NumPy .npz files into the mlp_graph
Note that the graph def was set up in a way that it constructs a "rigid," non-trainable TensorFlow classifier if .npz files are provided. This is on purpose, since it may come in handy in certain use cases, but the code can be easily modified to make the model trainable if NumPy .npz files are provided (e.g., by wrapping the tf.constant calls in fc_layer in a tf.Variable constructor).
Note | Python Code:
%load_ext watermark
%watermark -a 'Sebastian Raschka' -v -p tensorflow
Explanation: Accompanying code examples of the book "Introduction to Artificial Neural Networks and Deep Learning: A Practical Guide with Applications in Python" by Sebastian Raschka. All code examples are released under the MIT license. If you find this content useful, please consider supporting the work by buying a copy of the book.
Other code examples and content are available on GitHub. The PDF and ebook versions of the book are available through Leanpub.
End of explanation
import tensorflow as tf
##########################
### WRAPPER FUNCTIONS
##########################
def fc_layer(input_tensor, n_output_units, name,
activation_fn=None, seed=None,
weight_params=None, bias_params=None):
with tf.variable_scope(name):
if weight_params is not None:
weights = tf.constant(weight_params, name='weights',
dtype=tf.float32)
else:
weights = tf.Variable(tf.truncated_normal(
shape=[input_tensor.get_shape().as_list()[-1], n_output_units],
mean=0.0,
stddev=0.1,
dtype=tf.float32,
seed=seed),
name='weights',)
if bias_params is not None:
biases = tf.constant(bias_params, name='biases',
dtype=tf.float32)
else:
biases = tf.Variable(tf.zeros(shape=[n_output_units]),
name='biases',
dtype=tf.float32)
act = tf.matmul(input_tensor, weights) + biases
if activation_fn is not None:
act = activation_fn(act)
return act
def mlp_graph(n_input=784, n_classes=10, n_hidden_1=128, n_hidden_2=256,
learning_rate=0.1,
fixed_params=None):
# fixed_params to allow loading weights & biases
# from NumPy npz archives and defining a fixed, non-trainable
# TensorFlow classifier
if not fixed_params:
var_names = ['fc1/weights:0', 'fc1/biases:0',
'fc2/weights:0', 'fc2/biases:0',
'logits/weights:0', 'logits/biases:0',]
fixed_params = {v: None for v in var_names}
found_params = False
else:
found_params = True
# Input data
tf_x = tf.placeholder(tf.float32, [None, n_input], name='features')
tf_y = tf.placeholder(tf.int32, [None], name='targets')
tf_y_onehot = tf.one_hot(tf_y, depth=n_classes, name='onehot_targets')
# Multilayer perceptron
fc1 = fc_layer(input_tensor=tf_x,
n_output_units=n_hidden_1,
name='fc1',
weight_params=fixed_params['fc1/weights:0'],
bias_params=fixed_params['fc1/biases:0'],
activation_fn=tf.nn.relu)
fc2 = fc_layer(input_tensor=fc1,
n_output_units=n_hidden_2,
name='fc2',
weight_params=fixed_params['fc2/weights:0'],
bias_params=fixed_params['fc2/biases:0'],
activation_fn=tf.nn.relu)
logits = fc_layer(input_tensor=fc2,
n_output_units=n_classes,
name='logits',
weight_params=fixed_params['logits/weights:0'],
bias_params=fixed_params['logits/biases:0'],
activation_fn=tf.nn.relu)
# Loss and optimizer
### Only necessary if no existing params are found
### and a trainable graph has to be initialized
if not found_params:
loss = tf.nn.softmax_cross_entropy_with_logits(
logits=logits, labels=tf_y_onehot)
cost = tf.reduce_mean(loss, name='cost')
optimizer = tf.train.GradientDescentOptimizer(
learning_rate=learning_rate)
train = optimizer.minimize(cost, name='train')
# Prediction
probabilities = tf.nn.softmax(logits, name='probabilities')
labels = tf.cast(tf.argmax(logits, 1), tf.int32, name='labels')
correct_prediction = tf.equal(labels,
tf_y, name='correct_predictions')
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32),
name='accuracy')
Explanation: Model Zoo -- Saving and Loading Trained Models
from TensorFlow Checkpoint Files and NumPy NPZ Archives
This notebook demonstrates different strategies on how to export and import trained TensorFlow models based on a simple 2-hidden layer multilayer perceptron. These include
Using regular TensorFlow meta and checkpoint files
Loading variables from NumPy archives (.npz) files
Note that the graph def is going to be set up in a way that it constructs a "rigid," non-trainable TensorFlow classifier if .npz files are provided. This is on purpose, since it may come in handy in certain use cases, but the code can be easily modified to make the model trainable if NumPy .npz files are provided -- for example, by wrapping the tf.constant calls in fc_layer in a tf.Variable constructor like so:
python
...
if weight_params is not None:
weights = tf.Variable(tf.constant(weight_params, name='weights',
dtype=tf.float32))
...
instead of
python
...
if weight_params is not None:
weights = tf.constant(weight_params, name='weights',
dtype=tf.float32)
...
Define Multilayer Perceptron Graph
The following code cell defines wrapper functions for our convenience; it saves us some re-typing later when we set up the TensorFlow multilayer perceptron graphs for the trainable and non-trainable models.
End of explanation
from tensorflow.examples.tutorials.mnist import input_data
##########################
### SETTINGS
##########################
# Hyperparameters
learning_rate = 0.1
training_epochs = 10
batch_size = 64
##########################
### GRAPH DEFINITION
##########################
g = tf.Graph()
with g.as_default():
mlp_graph()
##########################
### DATASET
##########################
mnist = input_data.read_data_sets("./", one_hot=False)
##########################
### TRAINING & EVALUATION
##########################
with tf.Session(graph=g) as sess:
sess.run(tf.global_variables_initializer())
saver0 = tf.train.Saver()
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = mnist.train.num_examples // batch_size
for i in range(total_batch):
batch_x, batch_y = mnist.train.next_batch(batch_size)
_, c = sess.run(['train', 'cost:0'], feed_dict={'features:0': batch_x,
'targets:0': batch_y})
avg_cost += c
train_acc = sess.run('accuracy:0', feed_dict={'features:0': mnist.train.images,
'targets:0': mnist.train.labels})
valid_acc = sess.run('accuracy:0', feed_dict={'features:0': mnist.validation.images,
'targets:0': mnist.validation.labels})
print("Epoch: %03d | AvgCost: %.3f" % (epoch + 1, avg_cost / (i + 1)), end="")
print(" | Train/Valid ACC: %.3f/%.3f" % (train_acc, valid_acc))
test_acc = sess.run('accuracy:0', feed_dict={'features:0': mnist.test.images,
'targets:0': mnist.test.labels})
print('Test ACC: %.3f' % test_acc)
##########################
### SAVE TRAINED MODEL
##########################
saver0.save(sess, save_path='./mlp')
Explanation: Train and Save Multilayer Perceptron
End of explanation
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("./", one_hot=False)
with tf.Session() as sess:
saver1 = tf.train.import_meta_graph('./mlp.meta')
saver1.restore(sess, save_path='./mlp')
test_acc = sess.run('accuracy:0', feed_dict={'features:0': mnist.test.images,
'targets:0': mnist.test.labels})
print('Test ACC: %.3f' % test_acc)
Explanation: Reload Model from Meta and Checkpoint Files
You can restart the notebook, and the following code cells should execute without any additional code dependencies.
End of explanation
import tensorflow as tf
import numpy as np
tf.reset_default_graph()
with tf.Session() as sess:
saver1 = tf.train.import_meta_graph('./mlp.meta')
saver1.restore(sess, save_path='./mlp')
var_names = [v.name for v in
tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)]
params = {}
print('Found variables:')
for v in var_names:
print(v)
ary = sess.run(v)
params[v] = ary
np.savez('mlp', **params)
Explanation: Working with NumPy Archive Files and Creating Non-Trainable Graphs
Export Model Parameters to NumPy NPZ files
End of explanation
import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
###########################
### LOAD DATA AND PARAMS
###########################
mnist = input_data.read_data_sets("./", one_hot=False)
param_dict = np.load('mlp.npz')
##########################
### GRAPH DEFINITION
##########################
g = tf.Graph()
with g.as_default():
# here: constructs a non-trainable graph
# due to the provided fixed_params argument
mlp_graph(fixed_params=param_dict)
with tf.Session(graph=g) as sess:
test_acc = sess.run('accuracy:0', feed_dict={'features:0': mnist.test.images,
'targets:0': mnist.test.labels})
print('Test ACC: %.3f' % test_acc)
Explanation: Load NumPy .npz files into the mlp_graph
Note that the graph def was set up in a way that it constructs a "rigid," non-trainable TensorFlow classifier if .npz files are provided. This is on purpose, since it may come in handy in certain use cases, but the code can be easily modified to make the model trainable if NumPy .npz files are provided (e.g., by wrapping the tf.constant calls in fc_layer in a tf.Variable constructor).
Note: If you defined the fc_layer and mlp_graph wrapper functions in Define Multilayer Perceptron Graph, the following code cell is otherwise independent and has no other code dependencies.
End of explanation |
5,860 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Setup a symmetric system using SyMD
Setup some simulation parameters, initialize the spatial group, and the constraint function.
Step2: Randomly initialize positions and velocities.
Step3: Transform the positions and velocities using homogeneous coordinates to get all of the images.
Step4: Transform the positions from fractional coordinates to real space (not necessary).
Step5: Visualize the initial system using JAX MD
Step6: Simulate the system using JAX MD
First setup the space and a Lennard-Jones potential.
Step7: Perform a few steps of minimization so that the Lennard-Jones particles don't become unstable.
Step8: Now do a simulation at constant temperature. First initialize the simulation environment.
Step9: Define a helper function to re-fold the particles after each step.
Step10: Create the folding function and initialize the simulation.
Step11: Run the simulation for 20000 steps, recording every 100 steps. | Python Code:
%%capture
!pip install jax-md
!pip install symd
from symd import symd, groups
import jax.numpy as jnp
from jax import random
from jax import config; config.update('jax_enable_x64', True)
Explanation: <a href="https://colab.research.google.com/github/google/jax-md/blob/main/notebooks/symd.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Sy(JAX)MD
End of explanation
group = 11
N = 1000
dim = 2
group = groups.load_group(group, dim)
in_unit = symd.asymm_constraints(group.asymm_unit)
Explanation: Setup a symmetric system using SyMD
Setup some simulation parameters, initialize the spatial group, and the constraint function.
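For orientation, in_unit acts as a point-in-asymmetric-unit predicate on fractional coordinates; the coordinates below are placeholders, purely for illustration:
# illustrative only: returns True/False for a 2D fractional-coordinate point
inside = in_unit(0.25, 0.4)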
End of explanation
key = random.PRNGKey(0)
key, pos_key, vel_key = random.split(key, 3)
pos_key, vel_key = random.split(random.PRNGKey(0))
positions = random.uniform(pos_key, (N, dim))
positions = positions[jnp.array([in_unit(*p) for p in positions])]
N = positions.shape[0]
velocities = random.normal(vel_key, (N, dim))
Explanation: Randomly initialize positions and velocities.
End of explanation
homo_positions = jnp.concatenate((positions, jnp.ones((N, 1))), axis=-1)
homo_velocities = jnp.concatenate((velocities, jnp.zeros((N, 1))), axis=-1)
positions = []
velocities = []
colors = []
for s in group.genpos:
g = symd.str2mat(s)
xp = homo_positions @ g
xp = jnp.fmod(xp, 1.0)
positions += [xp[:, :2]]
xv = homo_velocities @ g
velocities += [xv[:, :2]]
key, split = random.split(key)
colors += [random.uniform(split, (1, 3)) * jnp.ones((N, 1))]
positions = jnp.concatenate(positions, axis=0) + 0.5
velocities = jnp.concatenate(velocities, axis=0)
colors = jnp.concatenate(colors, axis=0)
Explanation: Transform the positions and velocities using homogeneous coordinates to get all of the images.
End of explanation
from jax_md import quantity
box = quantity.box_size_at_number_density(len(positions), 0.1, 2)
positions = positions * box
Explanation: Transform the positions from fractional coordinates to real space (not necessary).
End of explanation
from jax_md import space
from jax_md.colab_tools import renderer
renderer.render(box,
renderer.Disk(positions, color=colors),
resolution=(512, 512),
background_color=[1, 1, 1])
Explanation: Visualize the initial system using JAX MD
End of explanation
from jax import jit
from jax_md import space
from jax_md import energy
from jax_md import simulate
from jax_md import minimize
from jax_md import dataclasses
displacement, shift = space.periodic(box)
neighbor_fn, energy_fn = energy.lennard_jones_neighbor_list(displacement, box)
Explanation: Simulate the system using JAX MD
First setup the space and a Lennard-Jones potential.
End of explanation
init_fn, step_fn = minimize.fire_descent(energy_fn, shift, dt_start=1e-7, dt_max=4e-7)
step_fn = jit(step_fn)
@jit
def sim_fn(state, nbrs):
state = step_fn(state, neighbor=nbrs)
nbrs = nbrs.update(state.position)
return state, nbrs
# Setup the neighbor list (we have to allocate extra capacity so it doesn't
# overflow during the simulation).
nbrs = neighbor_fn.allocate(positions, extra_capacity=6)
# Initialize the minimizer.
state = init_fn(positions, neighbor=nbrs)
# Run 100 steps of minimization.
for i in range(100):
state, nbrs = sim_fn(state, nbrs)
print(f'Did neighborlist overflow: {nbrs.did_buffer_overflow}')
Explanation: Perform a few steps of minimization so that the Lennard-Jones particles don't become unstable.
End of explanation
init_fn, step_fn = simulate.nvt_nose_hoover(energy_fn, shift, dt=1e-3, kT=0.8)
step_fn = jit(step_fn)
Explanation: Now do a simulation at constant temperature. First initialize the simulation environment.
End of explanation
def fold_particles(group, box, n):
def fold_fn(state):
R, V = state.position, state.velocity
R = R / box - 0.5
R_homo = jnp.concatenate((R[:n], jnp.ones((n, 1))), axis=-1)
V_homo = jnp.concatenate((V[:n], jnp.zeros((n, 1))), axis=-1)
for i, s in enumerate(group.genpos):
g = symd.str2mat(s)
R = R.at[i * n:(i + 1) * n].set(jnp.fmod(R_homo @ g, 1.0)[:, :2])
V = V.at[i * n:(i + 1) * n].set((V_homo @g)[:, :2])
R = box * (R + 0.5)
return dataclasses.replace(state, position=R, velocity=V)
return fold_fn
Explanation: Define a helper function to re-fold the particles after each step.
End of explanation
fold_fn = fold_particles(group, box, N)
state = init_fn(key, state.position, neighbor=nbrs)
# We need to replace the velocities that JAX MD generates with the symmetric
# velocities.
state = dataclasses.replace(state, velocity=velocities)
Explanation: Create the folding function and initialize the simulation.
End of explanation
from jax import lax
def sim_fn(i, state_nbrs):
state, nbrs = state_nbrs
state = step_fn(state, neighbor=nbrs)
state = fold_fn(state)
nbrs = nbrs.update(state.position)
return state, nbrs
trajectory = []
for i in range(200):
trajectory += [state.position]
state, nbrs = lax.fori_loop(0, 100, sim_fn, (state, nbrs))
trajectory = jnp.stack(trajectory)
print(f'Did neighborlist overflow: {nbrs.did_buffer_overflow}')
renderer.render(box,
renderer.Disk(trajectory, color=colors),
resolution=(512, 512),
background_color=[1, 1, 1])
Explanation: Run the simulation for 20000 steps, recording every 100 steps.
End of explanation |
5,861 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to language-model-based data augmentation (LAMBADA)
https
Step1: Step 0
Step2: Step 1
Step3: Step 2a
Step4: Step 2b
Step5: Step 3 ~ 5 | Python Code:
from IPython.display import Image
Image(filename='../res/lambada_algo.png')
Explanation: Introduction to language-model-based data augmentation (LAMBADA)
https://arxiv.org/pdf/1911.03118.pdf
LAMBADA (Anaby-Tavor et al., 2019) was proposed to generate synthetic data. We follow the approach, with some modifications, in nlpaug so that we can generate more data with a few lines of code. The figure shows the steps of training LAMBADA; we will go through them step by step.
End of explanation
import pandas as pd
data = pd.read_csv('../test/res/text/classification.csv')
data
Explanation: Step 0: Input Data
The expected column names are "text" and "label".
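For reference, a minimal frame in the expected shape could look like this (the rows and label values here are made up purely for illustration):
import pandas as pd
toy = pd.DataFrame({'text': ['the battery life is great', 'screen arrived cracked'],
                    'label': ['LABEL_0', 'LABEL_1']})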
End of explanation
!python ../scripts/lambada/train_cls.py \
--train_data_path ../test/res/text/classification.csv \
--val_data_path ../test/res/text/classification.csv \
--output_dir ../model/lambada/cls \
--device cuda \
--num_epoch 2
Explanation: Step 1: Train the classifier
End of explanation
!python ../scripts/lambada/data_processing.py \
--data_path ../test/res/text/classification.csv \
--output_dir ../test/res/text
Explanation: Step 2a: Processing data for task-adaptive pretraining
End of explanation
!source activate py39; python ../scripts/lambada/run_clm.py \
--tokenizer_name ../model/lambada/cls \
--model_name_or_path gpt2 \
--model_type gpt2 \
--train_file ../test/res/text/mlm_data.txt \
--output_dir ../model/lambada/gen \
--do_train \
--overwrite_output_dir \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--save_steps=10000 \
--num_train_epochs 2
Explanation: Step 2b: Task-adaptive pretraining for the language model
End of explanation
import nlpaug.augmenter.sentence as nas
aug = nas.LambadaAug(model_dir='../model/lambada', threshold=0.3, batch_size=4)
aug.augment(['LABEL_0', 'LABEL_1', 'LABEL_2', 'LABEL_3'], n=10)
Explanation: Step 3 ~ 5: Generate synthetic data
End of explanation |
5,862 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
FAQ
This document will address frequently asked questions not addressed in other pages of the documentation.
How do I install cobrapy?
Please see the INSTALL.md file.
How do I cite cobrapy?
Please cite the 2013 publication
Step1: The Model.repair function will rebuild the necessary indexes
Step2: How do I delete a gene?
That depends on what precisely you mean by delete a gene.
If you want to simulate the model with a gene knockout, use the cobra.manipulation.delete_model_genes function. The effects of this function are reversed by cobra.manipulation.undelete_model_genes.
Step3: If you want to actually remove all traces of a gene from a model, this is more difficult because this will require changing all the gene_reaction_rule strings for reactions involving the gene.
How do I change the reversibility of a Reaction?
Reaction.reversibility is a property in cobra which is computed when it is requested from the lower and upper bounds.
Step4: Trying to set it directly will result in an error or warning
Step5: The way to change the reversibility is to change the bounds to make the reaction irreversible.
Step6: How do I generate an LP file from a COBRA model?
While cobrapy does not include Python code to support this feature directly, many of the bundled solvers have this capability. Create the problem with one of these solvers, and use its appropriate function.
Please note that unlike the LP file format, the MPS file format does not specify objective direction and is always a minimization. Some (but not all) solvers will rewrite the maximization as a minimization. | Python Code:
from __future__ import print_function
import cobra.test
model = cobra.test.create_test_model()
for metabolite in model.metabolites:
metabolite.id = "test_" + metabolite.id
try:
model.metabolites.get_by_id(model.metabolites[0].id)
except KeyError as e:
print(repr(e))
Explanation: FAQ
This document will address frequently asked questions not addressed in other pages of the documentation.
How do I install cobrapy?
Please see the INSTALL.md file.
How do I cite cobrapy?
Please cite the 2013 publication: 10.1186/1752-0509-7-74
How do I rename reactions or metabolites?
TL;DR Use Model.repair afterwards
When renaming metabolites or reactions, there are issues because cobra indexes based off of ID's, which can cause errors. For example:
End of explanation
model.repair()
model.metabolites.get_by_id(model.metabolites[0].id)
Explanation: The Model.repair function will rebuild the necessary indexes
End of explanation
model = cobra.test.create_test_model()
PGI = model.reactions.get_by_id("PGI")
print("bounds before knockout:", (PGI.lower_bound, PGI.upper_bound))
cobra.manipulation.delete_model_genes(model, ["STM4221"])
print("bounds after knockouts", (PGI.lower_bound, PGI.upper_bound))
Explanation: How do I delete a gene?
That depends on what precisely you mean by delete a gene.
If you want to simulate the model with a gene knockout, use the cobra.manipulation.delete_model_genes function. The effects of this function are reversed by cobra.manipulation.undelete_model_genes.
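To undo the knockout afterwards and restore the original bounds, something along these lines:
cobra.manipulation.undelete_model_genes(model)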
End of explanation
model = cobra.test.create_test_model()
model.reactions.get_by_id("PGI").reversibility
Explanation: If you want to actually remove all traces of a gene from a model, this is more difficult because this will require changing all the gene_reaction_rule strings for reactions involving the gene.
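A rough sketch of what that manual clean-up could involve is below; this is naive string surgery for illustration only, and a real implementation would have to respect the boolean and/or structure of the rules and also drop the Gene object itself:
gene_id = "STM4221"
for reaction in model.reactions:
    if gene_id in reaction.gene_reaction_rule:
        # naive removal; leftover 'and'/'or' fragments would still need tidying
        reaction.gene_reaction_rule = reaction.gene_reaction_rule.replace(gene_id, "")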
How do I change the reversibility of a Reaction?
Reaction.reversibility is a property in cobra which is computed when it is requested from the lower and upper bounds.
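Roughly speaking, the property amounts to checking whether the bounds allow flux in both directions (a simplification of the actual implementation):
reaction = model.reactions.get_by_id("PGI")
is_reversible = reaction.lower_bound < 0 and reaction.upper_bound > 0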
End of explanation
try:
model.reactions.get_by_id("PGI").reversibility = False
except Exception as e:
print(repr(e))
Explanation: Trying to set it directly will result in an error or warning:
End of explanation
model.reactions.get_by_id("PGI").lower_bound = 10
model.reactions.get_by_id("PGI").reversibility
Explanation: The way to change the reversibility is to change the bounds to make the reaction irreversible.
End of explanation
model = cobra.test.create_test_model()
# glpk through cglpk
glp = cobra.solvers.cglpk.create_problem(model)
glp.write("test.lp")
glp.write("test.mps") # will not rewrite objective
# gurobi
gurobi_problem = cobra.solvers.gurobi_solver.create_problem(model)
gurobi_problem.write("test.lp")
gurobi_problem.write("test.mps") # rewrites objective
# cplex
cplex_problem = cobra.solvers.cplex_solver.create_problem(model)
cplex_problem.write("test.lp")
cplex_problem.write("test.mps") # rewrites objective
Explanation: How do I generate an LP file from a COBRA model?
While cobrapy does not include Python code to support this feature directly, many of the bundled solvers have this capability. Create the problem with one of these solvers, and use its appropriate function.
Please note that unlike the LP file format, the MPS file format does not specify objective direction and is always a minimization. Some (but not all) solvers will rewrite the maximization as a minimization.
End of explanation |
5,863 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Namaste 2 example
Here is a short, readable (and copy-and-pastable) example of how to use Namaste 2 to fit a single transit.
Step1: Initialising the star
Step2: To change settings use the st.settings approach
Step3: Adding a lightcurve
Step4: Initialising the GP
Step5: Adding and initialising the Monotransit
So we have set up our star, importantly with the density included. Now we must add a monotransit. This is set up such that multiple monotransits can be added (although I have yet to check this feature).
The format is $t_{\rm cen}$, $b$, and $R_p/R_s$
Step6: Running a model MCMC
Step7: If in the following line you see
Step8: Running the MCMC
Step9: Or...
you can run all of the lines above, from the "Running a model MCMC" title onwards, with the following command
Step10: Now let's save the MCMC samples and plot the MCMC result
Step11: Without GPs | Python Code:
from namaste import *
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
#%matplotlib inline
%reload_ext autoreload
%autoreload 2
Explanation: Namaste 2 example
Here is a short, readable (and copy-and-pastable) example of how to use Namaste 2 to fit a single transit.
End of explanation
#Using 203311200 (from Osborn 2016) as a test case
st=namaste.Star('203311200',namaste.Settings(nsteps=1000,nthreads=4,mission='K2'))
#Can add data directly here:
st.addRad(2.024,0.176)#solar
st.addMass(1.33,0.35)#solar
st.addTeff(6088,250)#K
#Calculates density:
st.addDens()
#Or you can use a csv file, for example:
st.csvfile_dat('ExampleData/EPIC203311200_beststelpars.csv')
#st.addLightcurve('/Volumes/CHEOPS/K2_MONOS/BestLCs/EPIC203311200_bestlc_ev.npy')
#Saving the star
st.SaveAll(overwrite=True)
#We need to use Teff and Logg information to initialise Limb Darkening (default: Kepler)
st.initLD()
Explanation: Initialising the star
End of explanation
st.settings.printall()
#eg change number of steps in mcmc:
st.settings.update(nsteps=8000)
#or the "sigma" at which to cut anomalies:
st.settings.update(anomcut=4.5)
#Not doing a GP!
st.settings.update(GP=True)
st.settings.update(kernel='Real')
Explanation: To change settings use the st.settings approach
End of explanation
st.addLightcurve('ExampleData/EPIC203311200_bestlc_ev.npy')
plt.plot(st.Lcurve.lc[:,0],st.Lcurve.lc[:,1],'.')
plt.plot(st.Lcurve.lc[st.Lcurve.lcmask,0],st.Lcurve.lc[st.Lcurve.lcmask,1],',')
Explanation: Adding a lightcurve:
End of explanation
# Can set 'kernel' in settings as "quasi" or "Real"
st.settings.update(kernel='Real')
st.addGP()
#This then optimizes the hyperparameters using the lightcurve (and the out-of-transit regions, if a transit has been specified)
st.Optimize_GP()
Explanation: Initialising the GP:
This uses Celerite. For the moment, only the quasi-periodic stellar rotation kernel is working, but I hope to update this to include "no GP" and "ExpSq" in the future.
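For reference, the kernel='Real' setting presumably corresponds to a simple exponential celerite term along these lines (the hyperparameter values are placeholders; Optimize_GP sets the real ones):
import numpy as np
from celerite import terms
sketch_kernel = terms.RealTerm(log_a=np.log(1e-4), log_c=np.log(1.0))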
End of explanation
#Adds this to mono1
st.AddMonotransit(2121.02,0.5,0.0032,name='mono1')
#Accessing pure transit fit:
st.BuildMeanModel()
model=st.meanmodel_comb.get_value(np.arange(2120.6,2121.4,0.005))
plt.plot(st.Lcurve.lc[abs(st.Lcurve.lc[:,0]-st.mono1.tcen)<1.5,0],st.Lcurve.lc[abs(st.Lcurve.lc[:,0]-st.mono1.tcen)<1.5,1],'.')
plt.plot(np.arange(2120.6,2121.4,0.005),model,'--')
Explanation: Adding and initialising the Monotransit
So we have set up our star, importantly with the density included. Now we must add a monotransit. This is set up such that multiple monotransits can be added (although I have yet to check this feature).
The format is $t_{\rm cen}$, $b$, and $R_p/R_s$
End of explanation
#Now we have our mean-functions (eg transit models) added, we need to build the priors and the model:
st.BuildMeanPriors()
st.BuildMeanModel()
# Then we need the combine the kernel we've optimised with the monotransit mean model into a Celerite GP:
import celerite
st.gp=celerite.GP(kernel=st.kern,mean=st.meanmodel_comb,fit_mean=True)
# Now we need to build all the priors (not just the mean functions)
st.BuildAllPriors(st.gp.get_parameter_names())
# Making walkers initial positions
chx=np.random.choice(st.settings.npdf,st.settings.nwalkers,replace=False)
#Tidying up so that the Celerite priors match our prior names
st.fitdict.update({'mean:'+nm:getattr(st.mono1,nm+'s')[chx] for nm in ['tcen','b','vel','RpRs']})
st.fitdict.update({'mean:'+nm:getattr(st,nm+'s')[chx] for nm in ['LD1','LD2']})
#Replacing any NaNs in the walker starting values with that parameter's median:
for row in st.fitdict:
    st.fitdict[row][np.isnan(st.fitdict[row])] = np.nanmedian(st.fitdict[row])
dists=[st.fitdict[cname] for cname in st.gp.get_parameter_names()]
st.init_mcmc_params=np.column_stack(dists)
#Masking an arbitrary region around the transit for analysis (2.75d in this case)
mask=abs(st.Lcurve.lc[:,0]-st.gp.get_parameter('mean:tcen'))<2.75
# Doing an initial fit with PlotModel:
_=namaste.PlotModel(st.Lcurve.lc[mask,:], st.gp, np.median(st.init_mcmc_params,axis=0), fname=st.settings.outfilesloc+st.objname+'_initfit.png',GP=True)
#RUNNING THE MCMC WITH EMCEE:
import emcee
st.sampler = emcee.EnsembleSampler(st.settings.nwalkers, len(st.gp.get_parameter_vector()), namaste.MonoLogProb, args=(st.Lcurve.lc,st.priors,st.gp), threads=st.settings.nthreads)
Explanation: Running a model MCMC
End of explanation
#Just checking if the MCMC works/progresses:
st.sampler.run_mcmc(st.init_mcmc_params, 1, rstate0=np.random.get_state())
Explanation: If in the following line you see:
PicklingError: Can't pickle <class 'namaste.namaste.MonotransitModel'>: it's not the same object as namaste.namaste.MonotransitModel
then try re-running this script!
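If a plain re-run does not clear it, forcing a clean reload of the module sometimes does, since this error usually means the class object being pickled no longer matches the one the worker processes import. This is a general workaround for that error, not a documented Namaste fix:
import namaste
reload(namaste)        # Python 2 built-in; on Python 3 use importlib.reload(namaste)
from namaste import *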
End of explanation
_ = st.sampler.run_mcmc(st.init_mcmc_params, int(st.settings.nsteps/12), rstate0=np.random.get_state())
# let's make an array of the samples and their names
st.samples = st.sampler.chain.reshape((-1, len(dists)))
st.samples = np.column_stack((st.samples,st.sampler.lnprobability.reshape(-1)))
st.sampleheaders=list(st.gp.get_parameter_names())+['logprob']
Explanation: Running the MCMC
End of explanation
st.RunMCMC()
Explanation: Or...
you can run all of the above lines (everything since the "Running a model MCMC" heading) with the following command:
End of explanation
st.SaveMCMC()
st.PlotMCMC()
Explanation: Now let's save the MCMC samples and plot the MCMC result:
End of explanation
st_nogp=namaste.Star('203311200',namaste.Settings(nsteps=1000,nthreads=4,mission='K2'))
st_nogp.settings.update(GP=False)
st_nogp.settings.update(verbose=False)
#Or you can use a csv file, for example:
st_nogp.csvfile_dat('ExampleData/EPIC203311200_beststelpars.csv')
#Initialising LDs:
st_nogp.initLD()
#Adding a lightcurve:
st_nogp.addLightcurve('ExampleData/EPIC203311200_bestlc_ev.npy')
#When we're not doing GPs, let's flatten the LC:
st_nogp.Lcurve.lc=st_nogp.Lcurve.flatten()
plt.plot(st_nogp.Lcurve.lc[:,0],st_nogp.Lcurve.lc[:,1],'.')
plt.plot(st_nogp.Lcurve.lc[st_nogp.Lcurve.lcmask,0],st_nogp.Lcurve.lc[st_nogp.Lcurve.lcmask,1],',')
#Adds this to mono1
st_nogp.AddMonotransit(2121.02,0.5,0.0032,name='mono1')
#Now we have our mean-functions (eg transit models) added, we need to build the priors and the model:
st_nogp.BuildMeanPriors()
#Accessing pure transit fit:
st_nogp.BuildMeanModel()
model=st_nogp.meanmodel_comb.get_value(np.arange(2120.6,2121.4,0.005))
plt.plot(st_nogp.Lcurve.lc[abs(st_nogp.Lcurve.lc[:,0]-st_nogp.mono1.tcen)<1.5,0],st_nogp.Lcurve.lc[abs(st_nogp.Lcurve.lc[:,0]-st_nogp.mono1.tcen)<1.5,1],'.')
plt.plot(np.arange(2120.6,2121.4,0.005),model,'--')
st_nogp.RunMCMC()
st_nogp.SaveMCMC(suffix='NoGP',overwrite=True)
st_nogp.PlotMCMC(suffix='NoGP')
Explanation: Without GPs:
End of explanation |
5,864 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum convervation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Troposheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'noaa-gfdl', 'gfdl-cm4', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: NOAA-GFDL
Source ID: GFDL-CM4
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:34
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
5,865 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PD Control of a robot
In this section you will control the paddle to move to a desired location. The robot is force controlled. This means that for every time step, you can specify an applied force to the robot's center of mass. Additionally, you can specify an applied angular torque.
The goal is to program the robot to move to a desired location specified by $\vec{x}^* = (x,y,\theta)$ by specifying the applied force and torque at each time step.
We will break this into a few steps.
1. Running the simulation and accessing the robot state information
2. Open loop control of the robot
3. Feedback control of the robot
The following code shows the instructor solution for a simple PD controller. You can modify the initial position and desired position/velocity of the robot to see how it works.
Step1: PD Control Part 1
Step2: Let's rerun our simulation and plot the state of the robot over time.
Step3: PD Control Part 2
Step4: Suppose we want to move our robot up 4 meters to position (16, 16) from position (16, 12) using our open loop control function. What forces should we apply and for how long? The mass of the robot is 2 kg.
Assuming we apply a constant force $u_y$, the dynamics of the system will be
Step5: PD Control Part 3
Step6: PD Control Part 3 | Python Code:
import tutorial; from tutorial import *
initial_pose = (16, 12,0.0)
desired_pose = (16, 16,3.14/2.)
desired_vel = (0, 0, 0)
play_pd_control_solution(initial_pose, \
desired_pose, desired_vel)
Explanation: PD Control of a robot
In this section you will control the paddle to move to a desired location. The robot is force controlled. This means that for every time step, you can specify an applied force to the robot's center of mass. Additionally, you can specify an applied angular torque.
The goal is to program the robot to move to a desired location specified by $\vec{x}^* = (x,y,\theta)$ by specifying the applied force and torque at each time step.
We will break this into a few steps.
1. Running the simulation and accessing the robot state information
2. Open loop control of the robot
3. Feedback control of the robot
The following code shows the instructor solution for a simple PD controller. You can modify the initial position and desired position/velocity of the robot to see how it works.
End of explanation
import tutorial; from tutorial import *
initial_pose = (16, 12, 3.14/2.)
result = run_pd_control(initial_pose, controller=None)
Explanation: PD Control Part 1: Running the simulation and accessing the robot state information
The following code will show a simple example of running the simulator
You should see the robot (paddle) start in the middle of the screen and fall down due to gravity.
Try changing the robot's orientation and rerun the simulation.
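For instance (values purely illustrative), a horizontal start only requires changing the third element of the pose:
initial_pose = (16, 12, 0.0)   # theta = 0 instead of 3.14/2
result = run_pd_control(initial_pose, controller=None)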
End of explanation
import tutorial; reload(tutorial); from tutorial import *
initial_pose = (16, 12, 3.14/2.)
result = run_pd_control(initial_pose, controller=None)
plot(result, "Robot")
Explanation: Let's rerun our simulation and plot the state of the robot over time.
End of explanation
import tutorial; reload(tutorial); from tutorial import *
initial_pose = (16, 12, 3.14/2.)
def openLoopController (time, robot_state):
u_x = 1.0
u_y = 0
u_th = 0
return u_x, u_y, u_th
result = run_pd_control(initial_pose, openLoopController)
plot(result, "Robot")
Explanation: PD Control Part 2: Open loop control of the robot
Now we are going to move our robot using open loop control. We can apply a force to the center of mass in the x or y direction, and an angular torque about the center of mass.
One of the inputs to the run_pd_control is currently set to None. In this example we are going to show how to write a controller that gets run at every time step.
The output of the controller is $u_x, u_y, u_th$, which is the amount of force applied in the x direction, in the y direction, and angular torque applied. The force is applied to the robot's center of mass.
End of explanation
import tutorial; reload(tutorial); from tutorial import *
initial_pose = (16, 12,0.0)
constant_force = 23.62
time_applied = 2
def openLoopController (time, robot_state):
u_x = 0
u_y = 0
u_th = 0
# only apply force for time < time_applied
if time < time_applied:
u_y = constant_force
# when the robot is near time_applied print the current y value
if abs(time-time_applied) < 0.1:
print "Time: %.2f, Height: %.2f " % (time, robot_state[1])
return u_x, u_y, u_th
result = run_pd_control(initial_pose, openLoopController)
plot(result, "Robot")
Explanation: Suppose we want to move our robot up 4 meters to position (16, 16) from position (16, 12) using our open loop control function. What forces should we apply and for how long? The mass of the robot is 2 kg.
Assuming we apply a constant force $u_y$, the dynamics of the system will be:
$$ y(t) = y_0 + \frac{1}{2}(\frac{u_y}{m} - 9.81)t^2 $$
If we assume the force will be applied for 2 seconds only, we can find what constant force to apply:
$$ 16 = 12 + \frac{1}{2}(\frac{u_y}{m} - 9.81)2^2 $$
$$ u_y = 23.62 $$
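As a quick numerical check of that derivation (a small sketch, not part of the original tutorial code):
m, g = 2.0, 9.81                      # robot mass (kg) and gravitational acceleration (m/s^2)
y0, y_target, t = 12.0, 16.0, 2.0     # start height, goal height, duration of the push (s)
u_y = m * (2.0 * (y_target - y0) / t**2 + g)   # rearranged from y(t) = y0 + 0.5*(u_y/m - g)*t^2
print u_y                             # -> 23.62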
Program the robot to move to position (16, 15) using open loop commands. How close can you get?
End of explanation
import tutorial; reload(tutorial); from tutorial import *
initial_pose = (16, 12,0.0)
desired_pose = (16, 16,0.0)
K_px = 10
K_py = 10
K_pth = 10
def closedLoopController (time, robot_state):
# the output signal
x,y,th, xdot, ydot, thdot = robot_state
# the reference signal
rx, ry, rth = desired_pose
# the error signal
e_x = rx - x
e_y = ry - y
e_th = rth - th
# the controller output
u_x = K_px*e_x
u_y = K_py*e_y
u_th = K_pth*e_th
return u_x, u_y, u_th
result = run_pd_control(initial_pose, closedLoopController)
plot(result, "Robot")
Explanation: PD Control Part 3: Feedback control of the robot
The open loop controller method we used required a lot of effort on the designer's part and won't work very well in practice. In this case we knew the robot's mass and could apply a force exactly at the center of mass.
An alternative method is to use the current state of the robot to determine what force to apply. In this next section you are going to implement a position controller.
The following is an equation for a position controller:
$$u = K_{p}\cdot(X_{desired} - X_{current})$$
$u$ is the output of our controller
$K_{p}$ is the proportional gain
$X_{desired}$ is the reference signal
$X_{current}$ is the output signal
$(X_{desired} - X_{current})$ is the error signal
This controller is going to apply forces in the direction that decreases the error signal.
The robot state is given to you as $(x, y, \theta, \dot{x}, \dot{y}, \dot{\theta})$.
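To make the numbers concrete (illustrative arithmetic only, using the gains defined in the code above): with K_py = 10 and the initial 4 m vertical error, the very first vertical command is 40 N, and because a pure proportional term has nothing to cancel gravity, the error can only shrink until the command just balances the robot's weight.
K_py, m, g = 10.0, 2.0, 9.81
print K_py * (16 - 12)    # initial vertical command: 40 N
print m * g / K_py        # error at which the command merely balances gravity: ~1.96 m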
End of explanation
import tutorial; reload(tutorial); from tutorial import *
initial_pose = (16, 12,3.14/2)
desired_pose = (3, 16,0.0)
desired_vel = (0, 0, 0)
K_px = 100
K_py = 100
K_pth = 10
K_dx = 50
K_dy = 50
K_dth = 20
def closedLoopController (time, robot_state):
# the output signal
x,y,th, xdot, ydot, thdot = robot_state
# the reference signal
rx, ry, rth = desired_pose
rxdot, rydot, rthdot = desired_vel
# the error signal
e_x = rx - x
e_y = ry - y
e_th = rth - th
e_xdot = rxdot - xdot
e_ydot = rydot - ydot
e_thdot = rthdot - thdot
# the controller output
u_x = K_px*e_x + K_dx*e_xdot
u_y = K_py*e_y + K_dy*e_ydot
u_th = K_pth*e_th + K_dth*e_thdot
return u_x, u_y, u_th
result = run_pd_control(initial_pose, closedLoopController)
plot(result, "Robot")
Explanation: PD Control Part 3: Feedback control of the robot (continued)
Activities:
Try using different gains. See if you can observe the different system response behaviors listed below (a quick damping-ratio sketch follows at the end of these activities):
under damped
damped
overdamped
Improve upon your controller by adding a derivative term. In this case the reference signal for the derivative terms should be equal to 0.
$$u = K_{p}\cdot(X_{desired} - X_{current}) + K_{d}\cdot(\dot{X}_{desired} - \dot{X}_{current})$$
$u$ is the output of our controller
$K_{d}$ is the derivative gain
$\dot{X}_{desired}$ is the reference signal (In our case it is equal to 0)
rxdot, rydot, rthdot = 0,0,0
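A rough way to anticipate the damping regimes listed above (a sketch that treats each translational axis as an ideal double integrator $m\ddot{x} = u$, ignoring gravity and coupling): the closed-loop dynamics $m\ddot{x} + K_{d}\dot{x} + K_{p}x = 0$ have damping ratio $\zeta = K_{d}/(2\sqrt{K_{p}m})$, with $\zeta < 1$ underdamped, $\zeta = 1$ critically damped and $\zeta > 1$ overdamped.
from math import sqrt
def damping_ratio(K_p, K_d, m=2.0):
    # zeta for m*x'' + K_d*x' + K_p*x = 0 (idealised double-integrator model)
    return K_d / (2.0 * sqrt(K_p * m))
print damping_ratio(100, 50)   # gains used above for x/y: ~1.77, overdamped
print damping_ratio(100, 10)   # a smaller derivative gain: ~0.35, underdamped (expect overshoot)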
End of explanation |
5,866 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
https
Step1: http
Step2: http
Step3: https | Python Code:
#print res.text
print dir(res)
print res.status_code
print res.headers['content-type']
import requests
payload ={
'StartStation':'977abb69-413a-4ccf-a109-0272c24fd490',
'EndStation':'fbd828d8-b1da-4b06-a3bd-680cdca4d2cd',
'SearchDate':'2015/09/11',
'SearchTime':'14:30',
'SearchWay':'DepartureInMandarin'
}
res = requests.post('https://www.thsrc.com.tw/tw/TimeTable/SearchResult', data = payload)
print res
from datetime import datetime
# the format directives must match the fields of the asctime-style string
datetime.strptime('Fri Sep 11 12:56:09 2015', '%a %b %d %H:%M:%S %Y')
import requests
res = requests.get('http://24h.pchome.com.tw/prod/DRAA0C-A90067G2U')
print res.text
import requests
res = requests.get('http://ecapi.pchome.com.tw/ecshop/prodapi/v2/prod/button&id=DRAA0C-A90067G2U&fields=Seq,Id,Price,Qty,ButtonType,SaleStatus&_callback=jsonp_button?_callback=jsonp_button')
print res.text
Explanation: https://zh.wikipedia.org/wiki/HTTP%E7%8A%B6%E6%80%81%E7%A0%81
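A common follow-up (a small sketch, not from the original notebook) is to branch on the returned status code before using the response; the URL reuses the product page fetched above:
import requests
res = requests.get('http://24h.pchome.com.tw/prod/DRAA0C-A90067G2U')
if res.status_code == requests.codes.ok:    # 200
    print res.headers['content-type']
else:
    print 'request failed with status', res.status_code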
End of explanation
# -*- coding: utf-8 -*-
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import Select
from selenium.common.exceptions import NoSuchElementException
from selenium.common.exceptions import NoAlertPresentException
import time, re
driver = webdriver.Firefox()
driver.implicitly_wait(3)
base_url = "http://www.agoda.com"
driver.get(base_url + "/zh-tw/city/taipei-tw.html")
driver.find_element_by_id("CheckInMonthYear").click()
driver.implicitly_wait(1)
Select(driver.find_element_by_id("CheckInMonthYear")).select_by_visible_text(u"2015年11月")
driver.implicitly_wait(1)
driver.find_element_by_id("search-submit").click()
driver.implicitly_wait(1)
driver.implicitly_wait(3)
driver.find_element_by_link_text(u"下一頁").click()
Explanation: http://release.seleniumhq.org/selenium-ide/2.9.0/selenium-ide-2.9.0.xpi
End of explanation
from bs4 import BeautifulSoup
html_sample = ' \
<html> \
<body> \
<h1 id="title">Hello World</h1> \
<a href="#" class="link">This is link1</a> \
<a href="# link2" class="link">This is link2</a> \ </body> \
</html>'
soup = BeautifulSoup(html_sample)
print soup.text
atag = soup.select('a')
print atag[0]
print atag[1]
print soup.select('#title') # id => #
print soup.select('#title')[0]
print soup.select('#title')[0].text
print soup.select('.link') # class => .
print soup.select('.link')[0]
print soup.select('.link')[0].text
for link in soup.select('.link'):
print link.text
a = '<a href="#" qoo="123" abc="456" class="link"> </a>'
soup2 = BeautifulSoup(a)
print soup2.select('a')
print soup2.select('a')[0]
print soup2.select('a')[0]['href']
print soup2.select('a')[0]['class']
print soup2.select('a')[0]['qoo']
print soup2.select('a')[0]['abc']
for link in soup.select('.link'):
print link['href']
import requests
from bs4 import BeautifulSoup as bs
res = requests.get('https://tw.stock.yahoo.com/q/h?s=4105')
soup = bs(res.text)
table = soup.select('table .yui-text-left')[0]
for tr in table.select('tr')[1:]:
print tr.text.strip()
Explanation: http://phantomjs.org/download.html
http://casperjs.org/
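At the time of this tutorial the same Selenium script could be run headless by swapping the driver for PhantomJS (sketch only; this requires the phantomjs binary on the PATH, and PhantomJS support was deprecated in later Selenium releases):
from selenium import webdriver
driver = webdriver.PhantomJS()                     # instead of webdriver.Firefox()
driver.get('http://24h.pchome.com.tw/prod/DRAA0C-A90067G2U')
print driver.title
driver.quit()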
End of explanation
# -*- coding: utf-8 -*-
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import Select
from selenium.common.exceptions import NoSuchElementException
from selenium.common.exceptions import NoAlertPresentException
import time, re
from bs4 import BeautifulSoup
driver = webdriver.Firefox()
driver.implicitly_wait(3)
driver.get('http://24h.pchome.com.tw/prod/DRAA0C-A90067G2U')
driver.implicitly_wait(1)
soup = BeautifulSoup(driver.page_source)
print soup.select('#PriceTotal')[0].text
driver.close()
import bs4
print dir(bs4)
from bs4 import BeautifulSoup
print dir(BeautifulSoup)
import bs4
doup = bs4.BeautifulSoup(res.text)
#print doup
Explanation: https://chrome.google.com/webstore/detail/infolite/ipjbadabbpedegielkhgpiekdlmfpgal
End of explanation |
5,867 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quality of the calibration
Compare the aggregated expenditures and quantities from the Budget des Familles survey, after calibration, with those from the national accounts. The calibration is performed on fuel expenditures.
Step1: Transport and fuels
Aggregated expenditures from the Budget des Familles survey
Step2: Comparison of aggregated fuel expenditures
Step3: Comparison of aggregated transport expenditures
Step4: Expenditures from the Transport account | Python Code:
# Import general-purpose modules
from __future__ import division
import pkg_resources
import os
import pandas as pd
from pandas import concat
import seaborn
# project-specific modules
# from ipp_macro_series_parser.agregats_transports.transports_cleaner import g2_1  # needed by the vehicle-count cell below
from openfisca_france_indirect_taxation.examples.utils_example import get_input_data_frame, graph_builder_line
# colour palette
seaborn.set_palette(seaborn.color_palette("Set2", 12))
%matplotlib inline
# a useful path
assets_directory = os.path.join(
pkg_resources.get_distribution('openfisca_france_indirect_taxation').location,
'openfisca_france_indirect_taxation',
'assets'
)
Explanation: Quality of the calibration
Compare the aggregated expenditures and quantities from the Budget des Familles survey, after calibration, with those from the national accounts. The calibration is performed on fuel expenditures.
End of explanation
# Build a table from the csv files of aggregated expenditure amounts from the BdF survey.
# These amounts are computed in compute_depenses_carburants
products = ['transports', 'carburants', 'essence', 'diesel']
depenses_bdf = pd.DataFrame()
for product in products:
depenses = pd.DataFrame.from_csv(os.path.join(
assets_directory, 'depenses', 'depenses_{}_totales_bdf.csv'.format(product)),
sep = ',',
header = -1
)
depenses.rename(columns = {1: '{} bdf'.format(product)}, inplace = True)
depenses.index = depenses.index.str.replace('en ', '')
depenses = depenses.sort_index()
depenses_bdf = concat([depenses, depenses_bdf], axis = 1)
depenses_bdf
depenses_bdf.index = depenses_bdf.index.astype(int)
depenses_bdf.dropna(inplace = True)
depenses_bdf
# TODO: improve this (drop the first row)
# Import the csv files giving the aggregated amounts of the same items according to the national accounts
parametres_fiscalite_file_path = os.path.join(
assets_directory,
'legislation',
'Parametres fiscalite indirecte.xls'
)
masses_cn_data_frame = pd.read_excel(parametres_fiscalite_file_path, sheetname = "consommation_CN")
masses_cn_carburants = masses_cn_data_frame[masses_cn_data_frame['Fonction'] == 'Carburants et lubrifiants'].transpose()
masses_cn_carburants.rename(columns = {76: 'carburants agregat'}, inplace = True)
masses_cn_transports = masses_cn_data_frame[masses_cn_data_frame['Fonction'] == 'Transports'].transpose()
masses_cn_transports.rename(columns = {69: 'transports agregat'}, inplace = True)
comparaison_bdf_agregats = concat([depenses_bdf, masses_cn_carburants, masses_cn_transports], axis = 1).dropna()
comparaison_bdf_agregats
Explanation: Transport and fuels
Aggregated expenditures from the Budget des Familles survey
End of explanation
graph_builder_line(comparaison_bdf_agregats[['carburants agregat'] + ['carburants bdf']])
Explanation: Comparison of aggregated fuel expenditures
End of explanation
graph_builder_line(comparaison_bdf_agregats[['transports agregat'] + ['transports bdf']])
Explanation: Comparison of aggregated transport expenditures
End of explanation
comparaison_vehicules = g2_1[g2_1['categorie'] == u'Voitures particulières']
del comparaison_vehicules['categorie']
comparaison_vehicules = comparaison_vehicules.set_index('index')
comparaison_vehicules = comparaison_vehicules.transpose()
comparaison_vehicules.rename(columns = {'Total': 'total agregats', 'dont essence': 'essence agregats',
'dont Diesel': 'diesel agregats'}, inplace = True)
comparaison_vehicules['diesel bdf'] = 0
comparaison_vehicules['essence bdf'] = 0
comparaison_vehicules['total bdf'] = 0
for year in [2000, 2005, 2011]:
aggregates_data_frame = get_input_data_frame(year)
df_nombre_vehicules_bdf = aggregates_data_frame[['veh_diesel', 'veh_essence', 'pondmen']]
nombre_vehicules_diesel_bdf = (
df_nombre_vehicules_bdf['veh_diesel'] * df_nombre_vehicules_bdf['pondmen']
).sum() / 1000
comparaison_vehicules.loc[comparaison_vehicules.index == year, 'diesel bdf'] = \
nombre_vehicules_diesel_bdf
nombre_vehicules_essence_bdf = (
df_nombre_vehicules_bdf['veh_essence'] * df_nombre_vehicules_bdf['pondmen']
).sum() / 1000
comparaison_vehicules.loc[comparaison_vehicules.index == year, 'essence bdf'] = \
nombre_vehicules_essence_bdf
nombre_vehicules_total_bdf = (
(df_nombre_vehicules_bdf['veh_essence'] + df_nombre_vehicules_bdf['veh_diesel']) *
df_nombre_vehicules_bdf['pondmen']
).sum() / 1000
comparaison_vehicules.loc[comparaison_vehicules.index == year, 'total bdf'] = \
nombre_vehicules_total_bdf
comparaison_vehicules = comparaison_vehicules[comparaison_vehicules['total bdf'] != 0]
print 'Comparison of vehicle counts, all types'
graph_builder_line(comparaison_vehicules[['total bdf'] + ['total agregats']])
print 'Comparison of diesel vehicle counts'
graph_builder_line(comparaison_vehicules[['diesel bdf'] + ['diesel agregats']])
print 'Comparison of petrol vehicle counts'
graph_builder_line(comparaison_vehicules[['essence bdf'] + ['essence agregats']])
Explanation: Expenditures from the Transport account
End of explanation |
5,868 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Properties of boundary layer (BL) mixing on tracers in the ocean
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Properties of boundary layer (BL) mixing on momentum in the ocean
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Properties of interior mixing in the ocean
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Properties of interior mixing on tracers in the ocean
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Properties of interior mixing on momentum in the ocean
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'sandbox-2', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: NCC
Source ID: SANDBOX-2
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:25
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
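For example, with placeholder values (the name and address below are purely illustrative, replace them with the real document authors):
DOC.set_author("Jane Doe", "jane.doe@example.org")  # illustrative placeholder values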
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
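For instance (an illustrative sketch, assuming that for a 1.N property each DOC.set_value call appends one of the valid choices listed above), a hydrostatic Boussinesq model might record:
DOC.set_value("Primitive equations")
DOC.set_value("Boussinesq")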
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
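For example, a model using the TEOS-10 equation of state would pick the corresponding entry from the valid choices listed above (illustrative only):
DOC.set_value("TEOS 2010")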
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
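FLOAT properties are set without quotes; seawater's specific heat is roughly 3990-4000 J/(kg K), so an entry would look like the following (illustrative value only, use the constant actually used by the model):
DOC.set_value(3992.0)   # J/(kg K), illustrative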
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different than active ? if so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusivity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusivity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cell mixing in the upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinction depths for sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation |
5,869 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
Step1: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and positive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
Step2: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise
Step3: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
Step4: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
Step5: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Note
Step6: Text to vector function
Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this
Step7: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[
Step8: Now, run through our entire review data set and convert each review to a word vector.
Step9: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
Step10: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords
Step11: Initializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note
Step12: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
Step13: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
Step14: Try out your own text! | Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
Explanation: Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
End of explanation
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
reviews.head()
labels.head()
Explanation: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and positive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
End of explanation
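If you ever do start from raw, mixed-case text, that normalization step is a one-liner in pandas. It is shown commented out here because this data set is already lower case, and it assumes the review text lives in column 0:
# reviews[0] = reviews[0].str.lower()  # make 'The', 'the' and 'THE' identical before counting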
from collections import Counter
total_counts = Counter()
for idx, row in reviews.iterrows():
for word in row[0].split(" "):
total_counts[word.lower()] += 1
print("Total words in data set: ", len(total_counts))
Explanation: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stored in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation). When you break up the reviews into words, use .split(' ') instead of .split() so your results match ours.
End of explanation
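For reference, the same bag of words can be built in a single pass with a generator expression; this is just an equivalent alternative to the loop above, kept commented out so it does not run twice:
# total_counts = Counter(word for review in reviews[0] for word in review.lower().split(' '))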
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
Explanation: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
End of explanation
print(vocab[-1], ': ', total_counts[vocab[-1]])
Explanation: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
End of explanation
word2idx = {word: indx for indx, word in enumerate(vocab)}
## create the word-to-index dictionary here
word2idx['the']
Explanation: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Note: When you run, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many words tied for that number of counts, and the Counter class does not guarantee which one will be returned in the case of a tie.
Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.
Exercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on.
End of explanation
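The note about ties is easy to check with the variables already defined above; this small, optional snippet counts how many words share the cutoff frequency:
# cutoff = total_counts[vocab[-1]]
# print(sum(1 for c in total_counts.values() if c == cutoff), 'words appear exactly', cutoff, 'times')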
def text_to_vector(text):
vector = np.zeros(len(word2idx))
words = map(str.lower, text.split(" "))
for word in words:
idx = word2idx.get(word, None)
if idx is None:
continue
else:
vector[idx] += 1
return vector
Explanation: Text to vector function
Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this:
Initialize the word vector with np.zeros, it should be the length of the vocabulary.
Split the input string of text into a list of words with .split(' '). Again, if you call .split() instead, you'll get slightly different results than what we show here.
For each word in that list, increment the element in the index associated with that word, which you get from word2idx.
Note: Since all words aren't in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default returned value when you make a key error. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary.
End of explanation
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
Explanation: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
array([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])
```
End of explanation
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
Explanation: Now, run through our entire review data set and convert each review to a word vector.
End of explanation
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
test_fraction = 0.9
train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)
trainY
Explanation: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
End of explanation
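As a reminder of what to_categorical is doing to the targets, it one-hot encodes the 0/1 labels into two columns, e.g. (illustrative toy input):
# to_categorical([0, 1, 1], 2)  ->  [[1., 0.], [0., 1.], [0., 1.]]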
reviews.shape
trainX.shape
# Network building
def build_model(lr=0.01):
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
# input
net = tflearn.input_data([None, 10000])
# hidden
net = tflearn.fully_connected(net, 200, activation='ReLU')
net = tflearn.fully_connected(net, 20, activation='ReLU')
# output
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=lr,
loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
Explanation: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with the categorical cross-entropy.
Finally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like
net = tflearn.input_data([None, 10]) # Input
net = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
End of explanation
model = build_model()
Explanation: Initializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.
End of explanation
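Since build_model() above already exposes the learning rate as an argument, experimenting with it is a one-liner (the value here is just an example):
# model = build_model(lr=0.05)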
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=40)
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
End of explanation
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
Explanation: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
End of explanation
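tflearn also offers a built-in shortcut for this kind of check; the number it reports should be close to the manual computation above, though it may differ slightly depending on the metric definition:
# print(model.evaluate(testX, testY))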
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
print('Sentence: {}'.format(sentence))
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
sentence = "Moonlight is by far the best movie of 2016."
test_sentence(sentence)
sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
Explanation: Try out your own text!
End of explanation |
5,870 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Testing Band 9
Creation of Data Cubes
Creation of synthetic ALMA-like data cubes using the ASYDO project.
Parameters
Step1: To select the isolist, the frequency range of the cube is obtained and the theoretical Splatalogue catalog is searched over that range. All the isotopes that have spectral lines within the range of the cube are determined.
Step2: Then, we get all the possible combination sets of the previously determined isotopes.
Step3: Finally, random sets from those previously determined are selected in order to generate the data cubes.
Step4: Generate Datacubes in Band 9, Fixed Width
Step5: Generate Datacubes in Band 9, Variable (TO DO
Step6: Creation of Dictionary
We create the words necessary to fit a sparse coding model to the observed spectra in the previously created cube.
It returns a DataFrame with a vector for each theoretical line for each isotope in molist
Step7: Recalibration of Dictionary
Step8: Testing Band 7
Generate Datacubes in Band 7, Fixed Width
Step9: Generate Datacubes in Band 7, Variable (TO DO
Step10: Creation of Dictionary
Step11: Training
Recalibration of the Dictionary
Step12: Testing
Step13: Blending case
Step14: Hyperfine lines case
Step16: Double peaks for single Line | Python Code:
cube_params = {
'freq' : 604000,
'alpha' : 0,
'delta' : 0,
'spe_bw' : 4000,
'spe_res' : 1,
's_f' : 4,
's_a' : 0}
Explanation: Testing Band 9
Creation of Data Cubes
Creation of synthetic ALMA-like data cubes using the ASYDO project.
Parameters:
isolist : subset of the list of isotopes to generate a cube
cube_params:
freq : spectral center (frequency)
alpha : right-ascension center (degrees)
delta : declination center (degrees)
spe_res : spectral resolution (MHz)
spe_bw : spectral bandwidth (MHz)
s_f, s_a : skew-normal distribution parameters; s_f: full width at half maximum, s_a (alpha): shape (skewness) parameter.
End of explanation
# freq_init = cube_params['freq'] - cube_params['spe_bw']/2.0
# freq_end = cube_params['freq'] + cube_params['spe_bw']/2.0
# molist_present = theoretical_presence(molist, freq_init, freq_end)
Explanation: To select the isolist, the frequency range of the cube is obtained and the theoretical Splatalogue catalog is searched over that range. All the isotopes that have spectral lines within the range of the cube are determined.
End of explanation
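To make that search window concrete, plugging the Band 9 parameters above into the two commented lines gives:
# freq_init = 604000 - 4000/2.0 = 602000 MHz
# freq_end  = 604000 + 4000/2.0 = 606000 MHz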
# all_subsets = sum(map(lambda r: list(combinations(molist_present, r)),
# range(1, len(molist_present)+1)), [])
Explanation: Then, we get all the possible combination sets of the previously determined isotopes.
End of explanation
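The sum(map(...), []) idiom above simply enumerates every non-empty subset. With a toy three-element list (hypothetical isotope names, purely for illustration) it would behave like this:
# from itertools import combinations
# demo = ['CO', '13CO', 'HCN']
# sum(map(lambda r: list(combinations(demo, r)), range(1, len(demo) + 1)), [])
# -> [('CO',), ('13CO',), ('HCN',), ('CO', '13CO'), ('CO', 'HCN'), ('13CO', 'HCN'), ('CO', '13CO', 'HCN')]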
# random_index = np.random.randint(len(all_subsets), size=25)
# isolist = []
# for i in random_index:
# isolist.append(all_subsets[i])
# save_isolist(isolist)
isolist = load_isolist()
Explanation: Finally, random sets from those previously determined are selected in order to generate the data cubes.
End of explanation
# log=open('data/isolist_fixed_width.log', 'w')
# cube_n = 0
# cube_name = 'data/cube_fixed_width_'
# for i in range(0, 25):
# Creation of the cube
# gen_cube(isolist[i], cube_params, cube_name + str(cube_n))
# log.write(cube_name + ': ' + str(isolist[i]) + '\n')
# cube_n += 1
# log.close()
Explanation: Generate Datacubes in Band 9, Fixed Width
End of explanation
# log=open('data/isolist_variable_width.log', 'w')
# cube_n = 25
# cube_name = 'data/cube_variable_width_'
# for i in range(0, 25):
# Creation of the cube
# gen_cube_variable_width(isolist[i], cube_params, cube_name + str(cube_n))
# log.write(cube_name + ': ' + str(isolist[i]) + '\n')
# cube_n += 1
# log.close()
Explanation: Generate Datacubes in Band 9, Variable Width (TODO: fix variable width in ASYDO)
End of explanation
# dictionary = gen_words(molist, cube_params)
# save_dictionary(dictionary, 'band_9')
# dictionary = gen_words(molist, cube_params, True)
# save_dictionary(dictionary, 'band_9_dual')
# dictionary = load_dictionary('band_9')
dictionary = load_dictionary('band_9_dual')
Explanation: Creation of Dictionary
We create the words necessary to fit a sparse coding model to the observed spectra in the previously created cube.
It returns a DataFrame with a vector for each theoretical line for each isotope in molist
End of explanation
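A minimal sketch of what one of these dictionary "words" encodes - a normalized line profile centred on a theoretical transition, sampled on the cube's frequency grid. This is only the idea, not the exact gen_words implementation, and 604500 MHz is a made-up transition frequency:
# import numpy as np
# freqs = np.arange(cube_params['freq'] - cube_params['spe_bw']/2.0,
#                   cube_params['freq'] + cube_params['spe_bw']/2.0,
#                   cube_params['spe_res'])
# line_freq = 604500.0                   # hypothetical line centre [MHz]
# sigma = cube_params['s_f'] / 2.355     # convert FWHM to a Gaussian standard deviation
# word = np.exp(-0.5 * ((freqs - line_freq) / sigma) ** 2)
# word /= word.max()                     # words are matched against normalized spectra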
confusion_matrix = []
results = []
noise_pixel = (0,0)
train_pixel = (1,1)
for i in range(0, 1):
if (i == 0):
cube_name = 'data/cube_fixed_width_'
if (i == 25):
cube_name = 'data/cube_variable_width_'
file_path = cube_name + str(i) + '.fits'
train_pixel = (1, 1)
dictionary_recal, detected_peaks = recal_words(file_path, dictionary, cube_params, train_pixel, noise_pixel)
X = get_values_filtered_normalized(file_path, train_pixel, cube_params)
y_train = get_fortran_array(np.asmatrix(X))
dictionary_recal_fa = np.asfortranarray(dictionary_recal,
dtype= np.double)
lambda_param = 0
for idx in range(0, len(detected_peaks)):
if detected_peaks[idx] != 0:
lambda_param += 1
param = {
'lambda1' : lambda_param,
# 'L': 1,
'pos' : True,
'mode' : 0,
'ols' : True,
'numThreads' : -1}
alpha = spams.lasso(y_train, dictionary_recal_fa, **param).toarray()
total = np.inner(dictionary_recal_fa, alpha.T)
if i == 0:
confusion_matrix = [get_confusion_matrix(dictionary_recal, alpha,
file_path, cube_params, True)]
results = [get_results(confusion_matrix[i])]
else:
confusion_matrix.append(get_confusion_matrix(dictionary_recal, alpha,
file_path, cube_params, True))
results.append(get_results(confusion_matrix[i]))
print(i)
Explanation: Recalibration of Dictionary
End of explanation
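If the spams package is not available, the positive sparse-coding step inside the loop above can be roughly approximated with scikit-learn's Lasso. This is only a stand-in sketch: the penalty is parameterized differently from the constrained spams.lasso call, and it assumes dictionary_recal has shape (n_channels, n_words) with X the 1-D filtered spectrum:
# from sklearn.linear_model import Lasso
# lasso = Lasso(alpha=0.01, positive=True, max_iter=10000)  # alpha chosen arbitrarily for illustration
# lasso.fit(np.asarray(dictionary_recal), np.asarray(X).ravel())
# alpha_sklearn = lasso.coef_                               # sparse, non-negative activations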
cube_params['freq'] = 277000
# log=open('data/isolist_fixed_width.log', 'w')
# cube_n = 50
# cube_name = 'data/cube_fixed_width_'
# for i in range(0, 25):
# # Creation of the cube
# gen_cube(isolist[i], cube_params, cube_name + str(cube_n))
# log.write(cube_name + ': ' + str(isolist[i]) + '\n')
# cube_n += 1
# log.close()
Explanation: Testing Band 7
Generate Datacubes in Band 7, Fixed Width
End of explanation
# log=open('data/isolist_fixed_width.log', 'w')
# cube_n = 75
# cube_name = 'data/cube_variable_width_'
# for i in range(0, 25):
# Creation of the cube
# gen_cube_variable_width(isolist[i], cube_params, cube_name + str(cube_n))
# log.write(cube_name + ': ' + str(isolist[i]) + '\n')
# cube_n += 1
# log.close()
Explanation: Generate Datacubes in Band 7, Variable Width (TODO: fix variable width in ASYDO)
End of explanation
# dictionary = gen_words(molist, cube_params)
# save_dictionary(dictionary, 'band_7')
# dictionary = gen_words(molist, cube_params, True)
# save_dictionary(dictionary, 'band_7_dual')
# dictionary = load_dictionary('band_7')
dictionary = load_dictionary('band_7_dual')
Explanation: Creation of Dictionary
End of explanation
for i in range(50, 100):
if (i == 50):
cube_name = 'data/cube_fixed_width_'
if (i == 75):
cube_name = 'data/cube_variable_width_'
file_path = cube_name + str(i) + '.fits'
train_pixel = (1, 1)
dictionary_recal, detected_peaks = recal_words(file_path, dictionary, cube_params,
train_pixel, noise_pixel)
X = get_values_filtered_normalized(file_path, train_pixel, cube_params)
y_train = get_fortran_array(np.asmatrix(X))
dictionary_recal_fa = np.asfortranarray(dictionary_recal,
dtype= np.double)
lambda_param = 0
for idx in range(0, len(detected_peaks)):
if detected_peaks[idx] != 0:
lambda_param += 1
param = {
'lambda1' : lambda_param,
# 'L': 1,
'pos' : True,
'mode' : 0,
'ols' : True,
'numThreads' : -1}
alpha = spams.lasso(y_train, dictionary_recal_fa, **param).toarray()
total = np.inner(dictionary_recal_fa, alpha.T)
confusion_matrix.append(get_confusion_matrix(dictionary_recal, alpha,
file_path, cube_params, True))
results.append(get_results(confusion_matrix[i]))
print(i)
Explanation: Training
Recalibration of the Dictionary
End of explanation
latexify(fig_height=6.9)
plt.subplot(3, 1, 1)
plt.title("Precision of Predictions for Fixed Width")
plt.xlabel("Precision")
plt.ylabel("Amount")
plt.hist([np.mean(means["Precision"]) for means in results[:25]], 25, normed=True, color='b', alpha=1, label='Band 9')
plt.hist([np.mean(means["Precision"]) for means in results[50:75]], 25, normed=True, color='r', alpha=0.75, label='Band 7')
plt.legend()
plt.subplot(3, 1, 2)
plt.title("Recall of Predictions for Fixed Width")
plt.xlabel("Recall")
plt.ylabel("Amount")
plt.hist([np.mean(means["Recall"]) for means in results[:25] if np.mean(means["Recall"]) > 0.3 and np.mean(means["Recall"]) < 1], 25, normed=True, color='b', alpha=1, label='Band 9')
plt.hist([np.mean(means["Recall"]) for means in results[50:75]], 25, normed=True, color='r', alpha=0.75, label='Band 7')
plt.legend()
plt.subplot(3, 1, 3)
plt.title("F-Score of Predictions for Fixed Width")
plt.xlabel("F-Score")
plt.ylabel("Amount")
plt.hist([np.mean(means["F-Score"]) for means in results[:25]], 25, normed=True, color='b', alpha=1, label='Band 9')
plt.hist([np.mean(means["F-Score"]) for means in results[50:75]], 25, normed=True, color='r', alpha=0.75, label='Band 7')
plt.legend()
plt.tight_layout()
plt.savefig("images/hist1.pdf")
latexify(fig_height=6.9)
plt.subplot(3, 1, 1)
plt.title("Precision of Predictions for Variable Width")
plt.xlabel("Precision")
plt.ylabel("Amount")
plt.hist([np.mean(means["Precision"]) for means in results[25:50]], 25, normed=True, color='g', alpha=1, label='Band 9')
plt.hist([np.mean(means["Precision"]) for means in results[75:]], 25, normed=True, color='y', alpha=0.75, label='Band 7')
plt.legend()
plt.subplot(3, 1, 2)
plt.title("Recall of Predictions for Variable Width")
plt.xlabel("Recall")
plt.ylabel("Amount")
plt.hist([np.mean(means["Recall"]) for means in results[25:50] if np.mean(means["Recall"]) > 0.3], 25, normed=True, color='g', alpha=1, label='Band 9')
plt.hist([np.mean(means["Recall"]) for means in results[75:]], 25, normed=True, color='y', alpha=0.75, label='Band 7')
plt.legend()
plt.subplot(3, 1, 3)
plt.title("F-Score of Predictions for Variable Width")
plt.xlabel("F-Score")
plt.ylabel("Amount")
plt.hist([np.mean(means["F-Score"]) for means in results[25:50]], 25, normed=True, color='g', alpha=1, label='Band 9')
plt.hist([np.mean(means["F-Score"]) for means in results[75:]], 25, normed=True, color='y', alpha=0.75, label='Band 7')
plt.legend()
plt.tight_layout()
plt.savefig("images/hist2.pdf")
Explanation: Testing
End of explanation
latexify()
file_path = "data/cube_fixed_width_6.fits"
train_pixel = (1, 1)
x = get_freq_index_from_params(cube_params)
y = get_values_filtered_normalized(file_path, train_pixel, cube_params)
plt.plot(x, y)
plt.legend(loc='upper right')
plt.xlim(xmin = 605075, xmax = 605275)
plt.ylim(ymin = -1,ymax = 1)
lines = get_lines_from_fits(file_path)
current_isotopes = [""]
for line in lines:
isotope_frequency = int(line[1])
isotope_name = line[0] + "-f" + str(line[1])
if isotope_frequency in range(605075, 605275) \
and line[0] not in current_isotopes:
# Shows lines really present
plt.axvline(x=isotope_frequency, ymin=0, ymax= 3, color='g', linewidth=2, label='Present Line')
plt.text(isotope_frequency + 1.5, -0.125, isotope_name, size='8', rotation='vertical')
current_isotopes.append(line[0])
plt.title("Blending case")
plt.xlabel("Frequency [MHz]")
ax = plt.gca()
ax.get_xaxis().get_major_formatter().set_useOffset(False)
plt.ylabel("Normalized Temperature")
plt.legend()
plt.savefig("images/blending.pdf")
latexify()
file_path = "data/cube_fixed_width_6.fits"
train_pixel = (1, 1)
x = get_freq_index_from_params(cube_params)
y = get_values_filtered_normalized(file_path, train_pixel, cube_params)
plt.plot(x, y)
plt.legend(loc='upper right')
plt.xlim(xmin = 605140, xmax = 605200)
plt.ylim(ymin = -1,ymax = 1)
lines = get_lines_from_fits(file_path)
current_isotopes = [""]
for line in lines:
isotope_frequency = int(line[1])
isotope_name = line[0] + "-f" + str(line[1])
if isotope_frequency in range(605140, 605275) \
and line[0] not in current_isotopes:
# Shows lines really present
plt.axvline(x=isotope_frequency, ymin=0, ymax= 3, color='g', linewidth=2, label='Present Line', linestyle='--')
plt.text(isotope_frequency + 1.5, -0.125, isotope_name, size='8', rotation='vertical', color='g')
current_isotopes.append(line[0])
for idx in range(0, len(detected_peaks)):
if detected_peaks[idx] != 0 and x[idx] in range(605075, 605200):
plt.axvline(x=x[idx], ymin=0, ymax= 1, color='r')
frecuencia = x[idx]
m = 0
detections = alpha_columns[dictionary_recal.ix[frecuencia] != 0]
for pch, elements in enumerate(detections):
m = m + 0.75
# print(detections.index[pch])
probability = round(elements * 100 / np.sum(alpha_columns[dictionary_recal.ix[frecuencia] != 0]))
# print(probability)
match = "match " + str(int(probability)) + " \%"
if '33SO2-f605162.1267' in detections.index[pch]:
plt.text(frecuencia + 1.5, 0.525, match, size='10', rotation='vertical', color='r')
break
elif 'OS17O-f605172.0102' in detections.index[pch]:
plt.text(frecuencia + 1.5, 0.525, match, size='10', rotation='vertical', color='r')
plt.title("Blending case")
plt.xlabel("Frequency [MHz]")
ax = plt.gca()
ax.get_xaxis().get_major_formatter().set_useOffset(False)
plt.ylabel("Normalized Temperature")
plt.legend()
plt.savefig("images/blending.pdf")
Explanation: Blending case
End of explanation
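The "match %" annotations above are just the lasso coefficients of the words whose recalibrated dictionary entry is non-zero at the detected frequency, normalized to percentages. A small helper capturing that step (a sketch using `.loc` in place of the older `.ix`, assuming the same `alpha_columns` and `dictionary_recal` objects):
```python
def match_percentages(frequency, alpha_columns, dictionary_recal):
    # Coefficients of candidate words at this frequency, as percentages.
    detections = alpha_columns[dictionary_recal.loc[frequency] != 0]
    return (100 * detections / detections.sum()).round()
```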
file_path = "data/cube_fixed_width_6.fits"
dictionary = load_dictionary('band_9_dual')
train_pixel = (1, 1)
dictionary_recal, detected_peaks = recal_words(file_path, dictionary, cube_params, 4)
X = get_values_filtered_normalized(file_path, train_pixel, cube_params)
y_train = get_fortran_array(np.asmatrix(X))
dictionary_recal_fa = np.asfortranarray(dictionary_recal,
dtype= np.double)
lambda_param = 0
for idx in range(0, len(detected_peaks)):
if detected_peaks[idx] != 0:
lambda_param += 1
param = {
'lambda1' : lambda_param,
# 'L': 1,
'pos' : True,
'mode' : 0,
'ols' : True,
'numThreads' : -1}
alpha = spams.lasso(y_train, dictionary_recal_fa, **param).toarray()
total = np.inner(dictionary_recal_fa, alpha.T)
latexify()
file_path = "data/cube_fixed_width_6.fits"
train_pixel = (1, 1)
x = get_freq_index_from_params(cube_params)
y = get_values_filtered_normalized(file_path, train_pixel, cube_params)
plt.plot(x, y)
plt.legend(loc='upper right')
plt.xlim(xmin = 605350, xmax = 605390)
plt.ylim(ymin = -1,ymax = 1.)
lines = get_lines_from_fits(file_path)
for i in range(0, len(lines)):
isotope_frequency = int(lines[i][1])
isotope_name = lines[i][0] + "-f" + str(lines[i][1])
if isotope_frequency in range(605335, 605375):
# Shows lines really present
plt.axvline(x=isotope_frequency, ymin=0, ymax= 3, color='g', linewidth=2, label='Present Line', linestyle='--')
if (i == 27):
plt.text(isotope_frequency + 1.5, -0.125, isotope_name.split('.')[0], size='8', rotation='vertical', color='g')
elif (i == 28):
plt.text(isotope_frequency + 2.25, -0.125, isotope_name.split('.')[0], size='8', rotation='vertical', color='g')
else:
plt.text(isotope_frequency + 1, -0.125, isotope_name.split('.')[0], size='8', rotation='vertical', color='g')
alpha_columns = pd.Series(alpha[:,0])
alpha_columns.index = dictionary_recal.columns
alpha_columns = alpha_columns[alpha_columns > 0]
hardcoder = 0
for idx in range(0, len(detected_peaks)):
if detected_peaks[idx] != 0 and x[idx] in range(605350, 605390):
plt.axvline(x=x[idx], ymin=0, ymax= 1, color='r')
frecuencia = x[idx]
m = 0
detections = alpha_columns[dictionary_recal.ix[frecuencia] != 0]
for pch, elements in enumerate(detections):
m = m + 0.75
print(detections.index[pch])
probability = round(elements * 100 / np.sum(alpha_columns[dictionary_recal.ix[frecuencia] != 0]))
print(probability)
match = "match " + str(int(probability)) + " \%"
if hardcoder == 0:
plt.text(frecuencia + 1.5, 0.525, match, size='10', rotation='vertical', color='r')
hardcoder = hardcoder + 1
break
else:
hardcoder = hardcoder - 1
continue
plt.title("Hyperfine lines case")
plt.xlabel("Frequency [MHz]")
ax = plt.gca()
ax.get_xaxis().get_major_formatter().set_useOffset(False)
plt.ylabel("Normalized Temperature")
plt.legend()
plt.savefig("images/hyperfine.pdf")
file_path = "data/cube_fixed_width_1.fits"
dictionary = load_dictionary('band_9_dual')
train_pixel = (1, 1)
dictionary_recal, detected_peaks = recal_words(file_path, dictionary, cube_params, 4)
X = get_values_filtered_normalized(file_path, train_pixel, cube_params)
y_train = get_fortran_array(np.asmatrix(X))
dictionary_recal_fa = np.asfortranarray(dictionary_recal,
dtype= np.double)
lambda_param = 0
for idx in range(0, len(detected_peaks)):
if detected_peaks[idx] != 0:
lambda_param += 1
param = {
'lambda1' : lambda_param,
# 'L': 1,
'pos' : True,
'mode' : 0,
'ols' : True,
'numThreads' : -1}
alpha = spams.lasso(y_train, dictionary_recal_fa, **param).toarray()
total = np.inner(dictionary_recal_fa, alpha.T)
Explanation: Hyperfine lines case
End of explanation
latexify()
file_path = "data/cube_fixed_width_1.fits"
train_pixel = (1, 1)
x = get_freq_index_from_params(cube_params)
y = get_values_filtered_normalized(file_path, train_pixel, cube_params)
plt.plot(x, y)
plt.legend(loc='upper right')
plt.xlim(xmin = 604356, xmax = 604456)
plt.ylim(ymin = -1,ymax = 1)
lines = get_lines_from_fits(file_path)
for line in lines:
isotope_frequency = int(line[1])
isotope_name = line[0] + "-f" + str(line[1])
if isotope_frequency in range(604356, 604456):
# Shows lines really present
plt.axvline(x=isotope_frequency, ymin=0, ymax= 3, color='g', linewidth=2, label='Present Line', linestyle='--')
plt.text(isotope_frequency + 2, 0, isotope_name, size='8', rotation='vertical', color='g')
alpha_columns = pd.Series(alpha[:,0])
alpha_columns.index = dictionary_recal.columns
alpha_columns = alpha_columns[alpha_columns > 0]
for idx in range(0, len(detected_peaks)):
if detected_peaks[idx] != 0 and x[idx] in range(604356, 604456):
plt.axvline(x=x[idx], ymin=0, ymax= 1, color='r')
frecuencia = x[idx]
m = 0
detections = alpha_columns[dictionary_recal.ix[frecuencia] != 0]
for pch, elements in enumerate(detections):
m = m + 0.75
print(detections.index[pch])
probability = round(elements * 100 / np.sum(alpha_columns[dictionary_recal.ix[frecuencia] != 0]))
print(probability)
match = "match " + str(int(probability)) + " \%"
plt.text(frecuencia + 2.5, 0.725, match, size='10', rotation='vertical', color='r')
plt.title("Double peaks for single Line")
plt.xlabel("Frequency [MHz]")
ax = plt.gca()
ax.get_xaxis().get_major_formatter().set_useOffset(False)
plt.ylabel("Normalized Temperature")
plt.legend()
plt.savefig("images/doublepeak.pdf")
np.mean([np.mean(means["F-Score"]) for means in results])
min_distance_req_list = []
for i in range(0, 100):
if (i == 0 or i == 50):
cube_name = 'data/cube_fixed_width_'
if (i == 25 or i == 75):
cube_name = 'data/cube_variable_width_'
file_path = cube_name + str(i) + '.fits'
lines = get_lines_from_fits(file_path)
sorted_lines = sorted([float(lines[idx][1]) for idx in range(0, len(lines))])
min_distance_req = True
last_freq = float(sorted_lines[0])
for idx in range(1, len(sorted_lines)):
distance = float(sorted_lines[idx]) - last_freq
if(distance <= 1):
min_distance_req = False
break
last_freq = float(sorted_lines[idx])
if len(min_distance_req_list) == 0:
if (min_distance_req):
min_distance_req_list = [i]
else:
if (min_distance_req):
min_distance_req_list.append(i)
min_distance_req_list
results_filtered = [results[min_distance_req_list[0]]]
for ix in min_distance_req_list[1:]:
results_filtered.append(results[ix])
np.mean([np.mean(means["F-Score"]) for means in results_filtered])
cf_filtered = [confusion_matrix[min_distance_req_list[0]]]
for ix in min_distance_req_list[1:]:
cf_filtered.append(confusion_matrix[ix])
confusion_matrix[0]
latexify()
n = 5
fig, axes = plt.subplots(nrows=4, ncols=5)
filtered_matrices = confusion_matrix[:20]
for ax, matrix in zip(axes.flat, filtered_matrices):
order_index = np.argsort([float(f.split('f')[1].split('&')[0]) for f in matrix.index])
order_columns = np.argsort([float(f.split('f')[1].split('&')[0]) for f in matrix.columns])
im = ax.matshow(matrix.iloc[order_index, order_columns], cmap='hot')
ax.set_xticklabels([])
ax.set_yticklabels([])
fig.suptitle("Modified Confusion Matrices")
fig.colorbar(im, ax=axes.ravel().tolist())
plt.savefig("images/confusion_matrix.pdf")
latexify()
# Plot Precision-Recall curve for each cube
from scipy.integrate import simps
precision_avg = [np.mean(means["Precision"]) for means in results[:50]]
recall_avg = [np.mean(means["Recall"]) for means in results[:50]]
area = simps(precision_avg, dx=0.01)
plt.clf()
plt.plot(np.sort(recall_avg),
-np.sort(-np.ones(1)*precision_avg),
label='Overall')
precision_avg = [np.mean(means["Precision"]) for means in results[:25]]
recall_avg = [np.mean(means["Recall"]) for means in results[:25]]
area = simps(precision_avg, dx=0.01)
plt.plot(np.sort(recall_avg),
-np.sort(-np.ones(1)*precision_avg),
label='Fixed width')
precision_avg = [np.mean(means["Precision"]) for means in results[25:50]]
recall_avg = [np.mean(means["Recall"]) for means in results[25:50]]
area = simps(precision_avg, dx=0.01)
plt.plot(np.sort(recall_avg),
-np.sort(-np.ones(1)*precision_avg),
label='Variable width ')
plt.xlim([0.2, 1.0])
plt.ylim([0.6, 1.01])
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.title('Precision-Recall Curves Band 9')
plt.legend(loc="lower left")
plt.savefig("images/results1.pdf")
latexify()
# Plot Precision-Recall curve for each cube
from scipy.integrate import simps
precision_avg = [np.mean(means["Precision"]) for means in results[50:100]]
recall_avg = [np.mean(means["Recall"]) for means in results[50:100]]
area = simps(precision_avg, dx=0.01)
plt.clf()
plt.plot(np.sort(recall_avg),
-np.sort(-np.ones(1)*precision_avg),
label='Overall')
precision_avg = [np.mean(means["Precision"]) for means in results[50:75]]
recall_avg = [np.mean(means["Recall"]) for means in results[50:75]]
area = simps(precision_avg, dx=0.01)
plt.plot(np.sort(recall_avg),
-np.sort(-np.ones(1)*precision_avg),
label='Fixed Width')
precision_avg = [np.mean(means["Precision"]) for means in results[75:100]]
recall_avg = [np.mean(means["Recall"]) for means in results[75:100]]
area = simps(precision_avg, dx=0.01)
plt.plot(np.sort(recall_avg),
-np.sort(-np.ones(1)*precision_avg),
label='Variable Width ')
plt.xlim([0.415, 0.854])
plt.ylim([0.745, 0.96])
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.title('Precision-Recall Curves Band 7')
plt.legend(loc="lower left")
plt.savefig("images/results2.pdf")
def latexify(fig_width=None, fig_height=None, columns=1):
Set up matplotlib's RC params for LaTeX plotting.
Call this before plotting a figure.
Parameters
----------
fig_width : float, optional, inches
fig_height : float, optional, inches
columns : {1, 2}
# code adapted from http://www.scipy.org/Cookbook/Matplotlib/LaTeX_Examples
# Width and max height in inches for IEEE journals taken from
# computer.org/cms/Computer.org/Journal%20templates/transactions_art_guide.pdf
assert(columns in [1,2])
if fig_width is None:
fig_width = 4.89 if columns==1 else 6.9 # width in inches
if fig_height is None:
golden_mean = (sqrt(5)-1.0)/2.0 # Aesthetic ratio
fig_height = fig_width*golden_mean # height in inches
MAX_HEIGHT_INCHES = 24.0
if fig_height > MAX_HEIGHT_INCHES:
print("WARNING: fig_height too large:" + fig_height +
"so will reduce to" + MAX_HEIGHT_INCHES + "inches.")
fig_height = MAX_HEIGHT_INCHES
params = {'backend': 'ps',
'text.latex.preamble': [r'\usepackage{gensymb}'],
'axes.labelsize': 8, # fontsize for x and y labels (was 10)
'axes.titlesize': 8,
'text.fontsize': 8, # was 10
'legend.fontsize': 8, # was 10
'xtick.labelsize': 10,
'ytick.labelsize': 8,
'text.usetex': True,
'figure.figsize': [fig_width,fig_height],
'font.family': 'serif'
}
matplotlib.rcParams.update(params)
def format_axes(ax):
for spine in ['top', 'right']:
ax.spines[spine].set_visible(False)
for spine in ['left', 'bottom']:
ax.spines[spine].set_color(SPINE_COLOR)
ax.spines[spine].set_linewidth(0.5)
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
for axis in [ax.xaxis, ax.yaxis]:
axis.set_tick_params(direction='out', color=SPINE_COLOR)
return ax
Explanation: Double peaks for single Line
End of explanation |
5,871 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
$$ \LaTeX \text{ command declarations here.}
\newcommand{\R}{\mathbb{R}}
\renewcommand{\vec}[1]{\mathbf{#1}}
\newcommand{\X}{\mathcal{X}}
\newcommand{\D}{\mathcal{D}}
\newcommand{\G}{\mathcal{G}}
\newcommand{\L}{\mathcal{L}}
\newcommand{\X}{\mathcal{X}}
\newcommand{\Parents}{\mathrm{Parents}}
\newcommand{\NonDesc}{\mathrm{NonDesc}}
\newcommand{\I}{\mathcal{I}}
\newcommand{\dsep}{\text{d-sep}}
\newcommand{\Cat}{\mathrm{Categorical}}
\newcommand{\Bin}{\mathrm{Binomial}}
$$
HMMs and the Baum-Welch Algorithm
As covered in lecture, the Baum-Welch Algorithm is a derivation of the EM algorithm for HMMs where we learn the parameters A, B and $\pi$ given a set of observations.
In this hands-on exercise we will build upon the forward and backward algorithms from last exercise, which can be used for the E-step, and implement Baum-Welch ourselves!
Like last time, we'll work with an example where we observe a sequence of words backed by a latent part of speech variable.
$X$
Step1: Review
Step3: Implementing Baum-welch
With the forward and backward algorithm implementations ready, let's use them to implement Baum-Welch, EM for HMMs.
In the M step, here is how the parameters are updated
Step4: Training with examples
Let's try producing updated parameters to our HMM using a few examples. How did the A and B matrices get updated with data? Was any confidence gained in the emission probabilities of nouns? Verbs?
Step5: Tracing through the implementation
Let's look at a trace of one iteration. Study the steps carefully and make sure you understand how we are updating the parameters, corresponding to these updates | Python Code:
import numpy as np
np.set_printoptions(suppress=True)
parts_of_speech = DETERMINER, NOUN, VERB, END = 0, 1, 2, 3
words = THE, DOG, CAT, WALKED, RAN, IN, PARK, END = 0, 1, 2, 3, 4, 5, 6, 7
# transition probabilities
A = np.array([
# D N V E
[0.1, 0.8, 0.1, 0.0], # D: determiner most likely to go to noun
[0.1, 0.1, 0.6, 0.2], # N: noun most likely to go to verb
[0.4, 0.3, 0.2, 0.1], # V
[0.0, 0.0, 0.0, 1.0]]) # E: end always goes to end
# distribution of parts of speech for the first word of a sentence
pi = np.array([0.4, 0.3, 0.3, 0.0])
# emission probabilities
B = np.array([
# D N V E
[ 0.8, 0.1, 0.1, 0. ], # the
[ 0.1, 0.8, 0.1, 0. ], # dog
[ 0.1, 0.8, 0.1, 0. ], # cat
[ 0. , 0. , 1. , 0. ], # walked
[ 0. , 0.2 , 0.8 , 0. ], # ran
[ 1. , 0. , 0. , 0. ], # in
[ 0. , 0.1, 0.9, 0. ], # park
[ 0. , 0. , 0. , 1. ]]) # end
# utilties for printing out parameters of HMM
import pandas as pd
pos_labels = ["D", "N", "V", "E"]
word_labels = ["the", "dog", "cat", "walked", "ran", "in", "park", "end"]
def print_B(B):
print(pd.DataFrame(B, columns=pos_labels, index=word_labels))
def print_A(A):
print(pd.DataFrame(A, columns=pos_labels, index=pos_labels))
print_A(A)
print_B(B)
Explanation: $$ \LaTeX \text{ command declarations here.}
\newcommand{\R}{\mathbb{R}}
\renewcommand{\vec}[1]{\mathbf{#1}}
\newcommand{\X}{\mathcal{X}}
\newcommand{\D}{\mathcal{D}}
\newcommand{\G}{\mathcal{G}}
\newcommand{\L}{\mathcal{L}}
\newcommand{\X}{\mathcal{X}}
\newcommand{\Parents}{\mathrm{Parents}}
\newcommand{\NonDesc}{\mathrm{NonDesc}}
\newcommand{\I}{\mathcal{I}}
\newcommand{\dsep}{\text{d-sep}}
\newcommand{\Cat}{\mathrm{Categorical}}
\newcommand{\Bin}{\mathrm{Binomial}}
$$
HMMs and the Baum-Welch Algorithm
As covered in lecture, the Baum-Welch Algorithm is a derivation of the EM algorithm for HMMs where we learn the parameters A, B and $\pi$ given a set of observations.
In this hands-on exercise we will build upon the forward and backward algorithms from last exercise, which can be used for the E-step, and implement Baum-Welch ourselves!
Like last time, we'll work with an example where we observe a sequence of words backed by a latent part of speech variable.
$X$: discrete distribution over bag of words
$Z$: discrete distribution over parts of speech
$A$: the probability of a part of speech given a previous part of speech, e.g, what do we expect to see after a noun?
$B$: the distribution of words given a particular part of speech, e.g, what words are we likely to see if we know it is a verb?
$x_i$'s: a sequence of observed words (a sentence). Note: for both variables we have a special "end" outcome that signals the end of a sentence. This makes sense, as a part-of-speech tagger would like to have a sense of sentence boundaries.
End of explanation
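As a quick sanity check of these parameters (a sketch, not part of the original exercise), the joint probability of a fully tagged sentence factorizes over pi, A and B with the same B[word, state] indexing used above:
```python
def joint_log_prob(states, words, pi, A, B):
    # log p(z_1..z_T, x_1..x_T) for a fully observed tagging.
    logp = np.log(pi[states[0]]) + np.log(B[words[0], states[0]])
    for t in range(1, len(states)):
        logp += np.log(A[states[t - 1], states[t]]) + np.log(B[words[t], states[t]])
    return logp

# e.g. joint_log_prob([DETERMINER, NOUN, VERB, END], [THE, DOG, RAN, END], pi, A, B)
```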
def forward(params, observations):
pi, A, B = params
N = len(observations)
S = pi.shape[0]
alpha = np.zeros((N, S))
# base case
alpha[0, :] = pi * B[observations[0], :]
# recursive case
for i in range(1, N):
for s2 in range(S):
for s1 in range(S):
alpha[i, s2] += alpha[i-1, s1] * A[s1, s2] * B[observations[i], s2]
return (alpha, np.sum(alpha[N-1,:]))
def print_forward(params, observations):
alpha, za = forward(params, observations)
print(pd.DataFrame(
alpha,
columns=pos_labels,
index=[word_labels[i] for i in observations]))
print_forward((pi, A, B), [THE, DOG, WALKED, IN, THE, PARK, END])
print_forward((pi, A, B), [THE, CAT, RAN, IN, THE, PARK, END])
def backward(params, observations):
pi, A, B = params
N = len(observations)
S = pi.shape[0]
beta = np.zeros((N, S))
# base case
beta[N-1, :] = 1
# recursive case
for i in range(N-2, -1, -1):
for s1 in range(S):
for s2 in range(S):
beta[i, s1] += beta[i+1, s2] * A[s1, s2] * B[observations[i+1], s2]
return (beta, np.sum(pi * B[observations[0], :] * beta[0,:]))
backward((pi, A, B), [THE, DOG, WALKED, IN, THE, PARK, END])
Explanation: Review: Forward / Backward
Here are solutions to last hands-on lecture's coding problems along with example uses with pre-defined A and B matrices.
$\alpha_t(z_t) = B_{z_t,x_t} \sum_{z_{t-1}} \alpha_{t-1}(z_{t-1}) A_{z_{t-1}, z_t} $
$\beta_t(z_t) = \sum_{z_{t+1}} A_{z_t, z_{t+1}} B_{z_{t+1}, x_{t+1}} \beta_{t+1}(z_{t+1})$
End of explanation
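Combining the two passes gives the posterior over states at each position, which is the quantity the E-step below relies on; a small sketch built on the functions above:
```python
def state_posteriors(params, observations):
    # gamma[t, s] = p(z_t = s | x_1..x_T) = alpha_t(s) * beta_t(s) / p(x)
    alpha, z = forward(params, observations)
    beta, _ = backward(params, observations)
    return alpha * beta / z

# Each row should sum to 1:
# state_posteriors((pi, A, B), [THE, DOG, WALKED, IN, THE, PARK, END]).sum(axis=1)
```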
# Some utitlities for tracing our implementation below
def left_pad(i, s):
return "\n".join(["{}{}".format(' '*i, l) for l in s.split("\n")])
def pad_print(i, s):
print(left_pad(i, s))
def pad_print_args(i, **kwargs):
pad_print(i, "\n".join(["{}:\n{}".format(k, kwargs[k]) for k in sorted(kwargs.keys())]))
def baum_welch(training, pi, A, B, iterations, trace=False):
pi, A, B = np.copy(pi), np.copy(A), np.copy(B) # take copies, as we modify them
S = pi.shape[0]
# iterations of EM
for it in range(iterations):
if trace:
pad_print(0, "for it={} in range(iterations)".format(it))
pad_print_args(2, A=A, B=B, pi=pi, S=S)
pi1 = np.zeros_like(pi)
A1 = np.zeros_like(A)
B1 = np.zeros_like(B)
for observations in training:
if trace:
pad_print(2, "for observations={} in training".format(observations))
#
# E-Step: compute forward-backward matrices
#
alpha, za = forward((pi, A, B), observations)
beta, zb = backward((pi, A, B), observations)
if trace:
pad_print(4, "alpha, za = forward((pi, A, B), observations)\nbeta, zb = backward((pi, A, B), observations)")
pad_print_args(4, alpha=alpha, beta=beta, za=za, zb=zb)
assert abs(za - zb) < 1e-6, "it's badness 10000 if the marginals don't agree ({} vs {})".format(za, zb)
#
# M-step: calculating the frequency of starting state, transitions and (state, obs) pairs
#
# Update PI:
pi1 += alpha[0, :] * beta[0, :] / za
if trace:
pad_print(4, "pi1 += alpha[0, :] * beta[0, :] / za")
pad_print_args(4, pi1=pi1)
pad_print(4, "for i in range(0, len(observations)):")
# Update B (transition) matrix
for i in range(0, len(observations)):
# Hint: B1 can be updated similarly to pi, one row per observed word
B1[observations[i], :] += alpha[i, :] * beta[i, :] / za
if trace:
pad_print(6, "B1[observations[{i}], :] += alpha[{i}, :] * beta[{i}, :] / za".format(i=i))
if trace:
pad_print_args(4, B1=B1)
pad_print(4, "for i in range(1, len(observations)):")
# Update A (emission) matrix
for i in range(1, len(observations)):
if trace:
pad_print(6, "for s1 in range(S={})".format(S))
for s1 in range(S):
if trace: pad_print(8, "for s2 in range(S={})".format(S))
for s2 in range(S):
A1[s1, s2] += alpha[i - 1, s1] * A[s1, s2] * B[observations[i], s2] * beta[i, s2] / za
if trace: pad_print(10, "A1[{s1}, {s2}] += alpha[{i_1}, {s1}] * A[{s1}, {s2}] * B[observations[{i}], {s2}] * beta[{i}, {s2}] / za".format(s1=s1, s2=s2, i=i, i_1=i-1))
if trace: pad_print_args(4, A1=A1)
# normalise pi1, A1, B1
pi = pi1 / np.sum(pi1)
for s in range(S):
A[s, :] = A1[s, :] / np.sum(A1[s, :])
B[s, :] = B1[s, :] / np.sum(B1[s, :])
return pi, A, B
Explanation: Implementing Baum-Welch
With the forward and backward algorithm implementations ready, let's use them to implement Baum-Welch, EM for HMMs.
In the M step, here is how the parameters are updated:
$ p(z_{t-1}, z_t | \X, \theta) = \frac{\alpha_{t-1}(z_{t-1}) \beta_t(z_t) A_{z_{t-1}, z_t} B_{z_t, x_t}}{\sum_k \alpha_t(k)\beta_t(k)} $
First, let's look at an implementation of this below and see how it works when applied to some training data.
End of explanation
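One practical check on the implementation (an added sketch, not from the original notebook): the total data log-likelihood, available from the forward marginal `za`, should not decrease across EM iterations.
```python
def total_log_likelihood(params, training):
    # Sum of log p(x) over the training sentences.
    return sum(np.log(forward(params, obs)[1]) for obs in training)
```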
pi2, A2, B2 = baum_welch([
[THE, DOG, WALKED, IN, THE, PARK, END, END], # END -> END needs at least one transition example
[THE, DOG, RAN, IN, THE, PARK, END],
[THE, CAT, WALKED, IN, THE, PARK, END],
[THE, DOG, RAN, IN, THE, PARK, END]], pi, A, B, 10, trace=False)
print("original A")
print_A(A)
print("updated A")
print_A(A2)
print("\noriginal B")
print_B(B)
print("updated B")
print_B(B2)
print("\nForward probabilities of sample using updated params:")
print_forward((pi2, A2, B2), [THE, DOG, WALKED, IN, THE, PARK, END])
Explanation: Training with examples
Let's try producing updated parameters to our HMM using a few examples. How did the A and B matrices get updated with data? Was any confidence gained in the emission probabilities of nouns? Verbs?
End of explanation
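With the updated parameters, the natural next step for a part-of-speech tagger is decoding. A hedged Viterbi sketch using the same B[word, state] convention (not part of the original exercise):
```python
def viterbi(params, observations):
    # Most likely part-of-speech sequence under the given parameters.
    pi, A, B = params
    N, S = len(observations), pi.shape[0]
    delta = np.zeros((N, S))
    back = np.zeros((N, S), dtype=int)
    delta[0] = pi * B[observations[0], :]
    for t in range(1, N):
        for s in range(S):
            scores = delta[t - 1] * A[:, s] * B[observations[t], s]
            back[t, s] = np.argmax(scores)
            delta[t, s] = scores[back[t, s]]
    path = [int(np.argmax(delta[N - 1]))]
    for t in range(N - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# e.g. viterbi((pi2, A2, B2), [THE, DOG, WALKED, IN, THE, PARK, END])
```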
pi3, A3, B3 = baum_welch([
[THE, DOG, WALKED, IN, THE, PARK, END, END],
[THE, CAT, RAN, IN, THE, PARK, END, END]], pi, A, B, 1, trace=True)
print("\n\n")
print_A(A3)
print_B(B3)
Explanation: Tracing through the implementation
Let's look at a trace of one iteration. Study the steps carefully and make sure you understand how we are updating the parameters, corresponding to these updates:
$ p(z_{t-1}, z_t | \X, \theta) = \frac{\alpha_{t-1}(z_{t-1}) \beta_t(z_t) A_{z_{t-1}, z_t} B_{z_t, x_t}}{\sum_k \alpha_t(k)\beta_t(k)} $
End of explanation |
5,872 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Wall Street Columns Nexa - Visualization of Code Vectors
In this notebook we will analyse the distribution of code vectors. What we want to analyse is whether we are able to increase that difference by making proper use of the code vectors. But we have gather quite an ammount of data so far and we need to concentrate, in order to decide which examples to analyse we put forward the following facts
Step1: Let's load the file
Step2: Show the receptive fields first
Step3: Show and histogram
Step4: Show the histograms | Python Code:
import numpy as np
import h5py
%matplotlib inline
import sys
sys.path.append("../")
Explanation: Wall Street Columns Nexa - Visualization of Code Vectors
In this notebook we will analyse the distribution of code vectors. What we want to analyse is whether we are able to increase that difference by making proper use of the code vectors. But we have gathered quite an amount of data so far and we need to concentrate; in order to decide which examples to analyse we put forward the following facts:
For a couple of the examples gather so far we did obtain a meaningful difference between the predictions using mixed receptive field against the predictions using independent receptive fields in favor of the former. In particular we obtained the difference for the same letter task with both policies and constant number of features
On the other hand for the raw data we did obtain an increase in prediction accuracy for both the same and next letter task but only if we use the inclusive policy.
Therefore the case that we will analyse is the inclusive policy for the same letter task and constant number of features.
We want to check how the difference by using mixed receptive field increases compared to the difference that you get when going from the low to high resolution case. After that we want to test whether you can increase that gap by using more features in general and for that we check whether we are using appropriately most of the code vectors. This last statement is what is shown here
End of explanation
# First we load the file
file_location = '../results_database/text_wall_street_columns_30_semi_constantNdata.hdf5'
f = h5py.File(file_location, 'r')
Explanation: Let's load the file
End of explanation
Nembedding = 3
max_lag = 4
Nside = 30
Nspatial_clusters = max_lag
Ntime_clusters = 60 // max_lag
# Here calculate the scores for the mixes
run_name = '/test' + str(max_lag)
parameters_string = '/' + str(Nspatial_clusters)
parameters_string += '-' + str(Ntime_clusters)
parameters_string += '-' + str(Nembedding)
nexa = f[run_name + parameters_string]
cluster_to_index = nexa['cluster_to_index']
code_vectors_winner = np.array(nexa['code-vectors-winner'])
matrix = np.zeros((Nside, max_lag))
for cluster in cluster_to_index:
cluster_indexes = cluster_to_index[str(cluster)]
for index in cluster_indexes:
first_index = index // max_lag
second_index = index % max_lag
matrix[first_index, second_index] = cluster
fig = plt.figure(figsize=(16, 12))
ax = fig.add_subplot(111)
ax.imshow(matrix, origin='lower', interpolation='none')
Explanation: Show the receptive fields first
End of explanation
from visualization.code_vectors import visualize_representation_winners
visualize_representation_winners(code_vectors_winner, Nspatial_clusters, Ntime_clusters, ax=None)
Explanation: Show and histogram
End of explanation
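To quantify "using most of the code vectors" rather than eyeballing the histogram, a rough count of how many winner entries are ever active can help. This is only a sketch and assumes `code_vectors_winner` is a (samples x code-vector) winner matrix as loaded above:
```python
import numpy as np

winners = np.argmax(code_vectors_winner, axis=1)
indices, counts = np.unique(winners, return_counts=True)
print("distinct winners used:", len(indices), "of", code_vectors_winner.shape[1])
```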
Nembedding = 3
max_lags = np.arange(2, 17, 2)
for max_lag in max_lags:
Nspatial_clusters = max_lag
Ntime_clusters = 60 // max_lag
# Here calculate the scores for the mixes
run_name = '/test' + str(max_lag)
parameters_string = '/' + str(Nspatial_clusters)
parameters_string += '-' + str(Ntime_clusters)
parameters_string += '-' + str(Nembedding)
nexa = f[run_name + parameters_string]
cluster_to_index = nexa['cluster_to_index']
code_vectors_softmax = np.array(nexa['code-vectors-softmax'])
code_vectors_winner = np.array(nexa['code-vectors-winner'])
visualize_representation_winners(code_vectors_winner, Nspatial_clusters, Ntime_clusters, ax=None)
Explanation: Show the histograms
End of explanation |
5,873 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
wxpython
To make a graphycal interface to use a code in an interactive way we need a library of widgets, called wxpython.
We will explore the usage of this library mainly with examples. But to start, we have to install it with conda
Step2: Every wxPython app is an instance of wx.App. For most simple applications you can use wx.App as is. When you get to more complex applications you may need to extend the wx.App class. The "False" parameter means "don't redirect stdout and stderr to a window".
A wx.Frame is a top-level window. The syntax is wx.Frame(Parent, Id, Title). Most of the constructors have this shape (a parent object, followed by an Id). In this example, we use None for "no parent" and wx.ID_ANY to have wxWidgets pick an id for us.
wx.TextCtrl widget
To add some text to the frame, we have to use the widget wx.TextCtrl.
By default, a text box is a single-line field, but the wx.TE_MULTILINE parameter allows you to enter multiple lines of text.
In this example, we derive from wx.Frame and overwrite its _init_ method. Here we declare a new wx.TextCtrl which is a simple text edit control. Note that since the MyFrame runs self.Show() inside its _init_ method, we no longer have to call frame.Show() explicitly.
Step3: Status bar & Menu bar
Typically, an application has a menu and sometimes a status bar to output messages.
Notice the wx.ID_ABOUT and wx.ID_EXIT ids. These are standard ids provided by wxWidgets (see a full list at http
Step4: Event handling
Reacting to events in wxPython is called event handling. An event is when "something" happens on your application (a button click, text input, mouse movement, etc). Much of GUI programming consists of responding to events. You bind an object to an event using the Bind() method
Step6: Dialogs
Of course an editor is useless if it is not able to save or open documents. That's where Common dialogs come in. Common dialogs are those offered by the underlying platform so that your application will look exactly like a native application. Here is the implementation of the OnOpen method in MainWindow
Step9: Working with Windows
In this section, we are going to present the way wxPython deals with windows and their contents, including building input forms and using various widgets/controls. We are going to build a small application that calculates the price of a quote.
Laying out Visual Elements
Within a frame, you'll use a number of wxWindow sub-classes to flesh out the frame's contents. Here are some of the more common elements you might want to put in your frame
Step10: The sizer.Add method has three arguments. The first one specifies the control to include in the sizer. The second one is a weight factor which means that this control will be sized in proportion to other ones. For example, if you had three edit controls and you wanted them to have the proportions 3
Step11: The notebook
Sometimes, a form grows too big to fit on a single page. The
wx.Notebook is used in that kind of case
Step12: Improving the layout - using Sizers
Using absolute positioning is often not very satisfying
Step13: wxpython and matplotlib
To use matplotlib to plot or show images in a panel, we rely on the matplotlib library.
Step14: Let's see how this works.
python
self.figure = Figure()
initializes the top level container for all plot elements. Everything in the
plot goes within this object, just like everything in our wx application goes into
our frame!
python
self.axes = self.figure.add_subplot(111)
Our figure can contain many subplots, but here we will only make
one. add_subplot() is what does this for us. The 111 is a grid
parameter, encoded as an integer. It means 1x1 grid, first subplot. If
you want two subplots, the number would be 2x1; the first subplot
would be 211, and the second subplot would be 212.
python
self.axes.plot(t, s)
t and s are what I chose for axis values.
They are arrays that contain values that link with each other to form our plot. These arrays
must have the same size!
This command creates and plots the t and s arrays. Since self.axes was defined as a
subplot of self.figure, this also plays a role in generating
self.figure, the container of our subplot.
python
self.canvas = FigureCanvas(self, -1, self.figure)
Finally, we have our canvas object, which paints our object onto the
screen. Simply pass in our figure and the FigureCanvas tool does the
rest.
Navigation toolbar
A useful toolbar is the navigation toolbar defined in matplotlib which allows one to explore the image.
Let's add to our previous example this toolbar.
Step16: It is possible to use this navigation toolbar as a starting point to add more buttons and capabilities.
This is another example from the matplotlib demo library.
Step17: Matplotlib examples
We give here a few more examples to show the various capabilities.
Buttons
Step18: Check Buttons
Step19: Cursor
Step21: Rectangle selector
Step22: Slider
Step24: Span selector
Step25: Further reading
http | Python Code:
%%writefile framecode.py
#!/usr/bin/env python
import wx
app = wx.App(False) # Create a new app, don't redirect stdout/stderr to a window.
frame = wx.Frame(None, wx.ID_ANY, "Hello World") # A Frame is a top-level window.
frame.Show(True) # Show the frame.
app.MainLoop()
!python framecode.py
Explanation: wxpython
To make a graphical interface to use code interactively we need a library of widgets, called wxPython.
We will explore the usage of this library mainly with examples. But to start, we have to install it with conda:
bash
conda install wxpython
A First Application: "Hello, World"
As tradition, we are first going to write a small "Hello, world" application.
End of explanation
%%writefile editor.py
#!/usr/bin/env python
import wx
class MyFrame(wx.Frame):
We simply derive a new class of Frame.
def __init__(self, parent, title):
wx.Frame.__init__(self, parent, title=title, size=(200,100))
self.control = wx.TextCtrl(self, style=wx.TE_MULTILINE)
self.Show(True)
app = wx.App(False)
frame = MyFrame(None, 'Small editor')
app.MainLoop()
!python editor.py
Explanation: Every wxPython app is an instance of wx.App. For most simple applications you can use wx.App as is. When you get to more complex applications you may need to extend the wx.App class. The "False" parameter means "don't redirect stdout and stderr to a window".
A wx.Frame is a top-level window. The syntax is wx.Frame(Parent, Id, Title). Most of the constructors have this shape (a parent object, followed by an Id). In this example, we use None for "no parent" and wx.ID_ANY to have wxWidgets pick an id for us.
wx.TextCtrl widget
To add some text to the frame, we have to use the widget wx.TextCtrl.
By default, a text box is a single-line field, but the wx.TE_MULTILINE parameter allows you to enter multiple lines of text.
In this example, we derive from wx.Frame and overwrite its __init__ method. Here we declare a new wx.TextCtrl which is a simple text edit control. Note that since MyFrame runs self.Show() inside its __init__ method, we no longer have to call frame.Show() explicitly.
End of explanation
%%writefile editor.py
#!/usr/bin/env python
import wx
class MainWindow(wx.Frame):
def __init__(self, parent, title):
wx.Frame.__init__(self, parent, title=title, size=(200,100))
self.control = wx.TextCtrl(self, style=wx.TE_MULTILINE)
self.CreateStatusBar() # A Statusbar in the bottom of the window
# Setting up the menu.
filemenu= wx.Menu()
# wx.ID_ABOUT and wx.ID_EXIT are standard IDs provided by wxWidgets.
filemenu.Append(wx.ID_ABOUT, "&About"," Information about this program")
filemenu.AppendSeparator()
filemenu.Append(wx.ID_EXIT,"E&xit"," Terminate the program")
# Creating the menubar.
menuBar = wx.MenuBar()
menuBar.Append(filemenu,"&File") # Adding the "filemenu" to the MenuBar
self.SetMenuBar(menuBar) # Adding the MenuBar to the Frame content.
self.Show(True)
app = wx.App(False)
frame = MainWindow(None, "Sample editor")
app.MainLoop()
!python editor.py
Explanation: Status bar & Menu bar
Typically, an application has a menu and sometimes a status bar to output messages.
Notice the wx.ID_ABOUT and wx.ID_EXIT ids. These are standard ids provided by wxWidgets (see a full list at http://docs.wxwidgets.org/2.8.12/wx_stdevtid.html). It is a good habit to use the standard ID if there is one available. This helps wxWidgets know how to display the widget in each platform to make it look more native.
End of explanation
%%writefile editor.py
#!/usr/bin/env python
import os
import wx
class MainWindow(wx.Frame):
def __init__(self, parent, title):
wx.Frame.__init__(self, parent, title=title, size=(200,100))
self.control = wx.TextCtrl(self, style=wx.TE_MULTILINE)
self.CreateStatusBar() # A StatusBar in the bottom of the window
# Setting up the menu.
filemenu= wx.Menu()
# wx.ID_ABOUT and wx.ID_EXIT are standard ids provided by wxWidgets.
menuAbout = filemenu.Append(wx.ID_ABOUT, "&About"," Information about this program")
menuExit = filemenu.Append(wx.ID_EXIT,"E&xit"," Terminate the program")
# Creating the menubar.
menuBar = wx.MenuBar()
menuBar.Append(filemenu,"&File") # Adding the "filemenu" to the MenuBar
self.SetMenuBar(menuBar) # Adding the MenuBar to the Frame content.
# Set events.
self.Bind(wx.EVT_MENU, self.OnAbout, menuAbout)
self.Bind(wx.EVT_MENU, self.OnExit, menuExit)
self.Show(True)
def OnAbout(self,e):
# A message dialog box with an OK button. wx.OK is a standard ID in wxWidgets.
dlg = wx.MessageDialog( self, "A small text editor", "About Sample Editor", wx.OK)
dlg.ShowModal() # Show it
dlg.Destroy() # finally destroy it when finished.
def OnExit(self,e):
self.Close(True) # Close the frame.
app = wx.App(False)
frame = MainWindow(None, "Sample editor")
app.MainLoop()
!python editor.py
Explanation: Event handling
Reacting to events in wxPython is called event handling. An event is when "something" happens on your application (a button click, text input, mouse movement, etc). Much of GUI programming consists of responding to events. You bind an object to an event using the Bind() method:
python
class MainWindow(wx.Frame):
def __init__(self, parent, title):
wx.Frame.__init__(self,parent, title=title, size=(200,100))
...
menuItem = filemenu.Append(wx.ID_ABOUT, "&About"," Information about this program")
self.Bind(wx.EVT_MENU, self.OnAbout, menuItem)
This means that, from now on, when the user selects the "About" menu item, the method self.OnAbout will be executed. wx.EVT_MENU is the "select menu item" event. wxWidgets understands many other events (see the full list
at https://wiki.wxpython.org/ListOfEvents). The self.OnAbout method has the general declaration:
python
def OnAbout(self, event):
...
Here event is an instance of a subclass of wx.Event. For example, a button-click event - wx.EVT_BUTTON - is a subclass of wx.Event.
The method is executed when the event occurs. By default, this method will handle the event and the event will stop after the callback finishes. However, you can "skip" an event with event.Skip(). This causes the event to go through the hierarchy of event handlers. For example:
```python
def OnButtonClick(self, event):
if (some_condition):
do_something()
else:
event.Skip()
def OnEvent(self, event):
...
```
When a button-click event occurs, the method OnButtonClick gets called. If some_condition is true, we do_something() otherwise we let the event be handled by the more general event handler. Now let's have a look at our application:
End of explanation
%%writefile editor.py
#!/usr/bin/env python
import os
import wx
class MainWindow(wx.Frame):
def __init__(self, parent, title):
wx.Frame.__init__(self, parent, title=title, size=(200,100))
self.control = wx.TextCtrl(self, style=wx.TE_MULTILINE)
self.CreateStatusBar() # A StatusBar in the bottom of the window
# Setting up the menu.
filemenu= wx.Menu()
# wx.ID_ABOUT and wx.ID_EXIT are standard ids provided by wxWidgets.
menuOpen = filemenu.Append(wx.ID_OPEN, "&Open",
" Open text file")
menuAbout = filemenu.Append(wx.ID_ABOUT, "&About"," Information about this program")
menuExit = filemenu.Append(wx.ID_EXIT,"E&xit"," Terminate the program")
# Creating the menubar.
menuBar = wx.MenuBar()
menuBar.Append(filemenu,"&File") # Adding the "filemenu" to the MenuBar
self.SetMenuBar(menuBar) # Adding the MenuBar to the Frame content.
# Set events.
self.Bind(wx.EVT_MENU, self.OnOpen, menuOpen)
self.Bind(wx.EVT_MENU, self.OnAbout, menuAbout)
self.Bind(wx.EVT_MENU, self.OnExit, menuExit)
self.Show(True)
def OnAbout(self,e):
# A message dialog box with an OK button. wx.OK is a standard ID in wxWidgets.
dlg = wx.MessageDialog( self, "A small text editor", "About Sample Editor", wx.OK)
dlg.ShowModal() # Show it
dlg.Destroy() # finally destroy it when finished.
def OnExit(self,e):
self.Close(True) # Close the frame.
def OnOpen(self,e):
Open a file
self.dirname = ''
dlg = wx.FileDialog(self, "Choose a file", self.dirname,
"", "*.*", wx.OPEN)
if dlg.ShowModal() == wx.ID_OK:
self.filename = dlg.GetFilename()
self.dirname = dlg.GetDirectory()
f = open(os.path.join(self.dirname, self.filename), 'r')
self.control.SetValue(f.read())
f.close()
dlg.Destroy()
app = wx.App(False)
frame = MainWindow(None, "Sample editor")
app.MainLoop()
!python editor.py
Explanation: Dialogs
Of course an editor is useless if it is not able to save or open documents. That's where Common dialogs come in. Common dialogs are those offered by the underlying platform so that your application will look exactly like a native application. Here is the implementation of the OnOpen method in MainWindow:
python
def OnOpen(self,e):
Open a file
self.dirname = ''
dlg = wx.FileDialog(self, "Choose a file", self.dirname, "", "*.*", wx.OPEN)
if dlg.ShowModal() == wx.ID_OK:
self.filename = dlg.GetFilename()
self.dirname = dlg.GetDirectory()
f = open(os.path.join(self.dirname, self.filename), 'r')
self.control.SetValue(f.read())
f.close()
dlg.Destroy()
Explanation:
First, we create the dialog by calling the appropriate Constructor.
Then, we call ShowModal. That opens the dialog - "Modal" means that the user cannot do anything on the application until he clicks OK or Cancel.
The return value of ShowModal is the Id of the button pressed. If the user pressed OK we read the file.
End of explanation
%%writefile sizer_demo.py
#!/usr/bin/env python
import wx
import os
class MainWindow(wx.Frame):
def __init__(self, parent, title):
self.dirname=''
# A "-1" in the size parameter instructs wxWidgets to use the default size.
# In this case, we select 200px width and the default height.
wx.Frame.__init__(self, parent, title=title, size=(200,-1))
self.control = wx.TextCtrl(self, style=wx.TE_MULTILINE)
self.CreateStatusBar() # A Statusbar in the bottom of the window
# Setting up the menu.
filemenu= wx.Menu()
menuOpen = filemenu.Append(wx.ID_OPEN, "&Open"," Open a file to edit")
menuAbout= filemenu.Append(wx.ID_ABOUT, "&About"," Information about this program")
menuExit = filemenu.Append(wx.ID_EXIT,"E&xit"," Terminate the program")
# Creating the menubar.
menuBar = wx.MenuBar()
menuBar.Append(filemenu,"&File") # Adding the "filemenu" to the MenuBar
self.SetMenuBar(menuBar) # Adding the MenuBar to the Frame content.
# Events.
self.Bind(wx.EVT_MENU, self.OnOpen, menuOpen)
self.Bind(wx.EVT_MENU, self.OnExit, menuExit)
self.Bind(wx.EVT_MENU, self.OnAbout, menuAbout)
self.sizer2 = wx.BoxSizer(wx.HORIZONTAL)
self.buttons = []
for i in range(0, 6):
self.buttons.append(wx.Button(self, -1, "Button &"+str(i)))
self.sizer2.Add(self.buttons[i], 1, wx.EXPAND)
# Use some sizers to see layout options
self.sizer = wx.BoxSizer(wx.VERTICAL)
self.sizer.Add(self.control, 1, wx.EXPAND)
self.sizer.Add(self.sizer2, 0, wx.EXPAND)
#Layout sizers
self.SetSizer(self.sizer)
self.SetAutoLayout(1)
self.sizer.Fit(self)
self.Show()
def OnAbout(self,e):
# Create a message dialog box
dlg = wx.MessageDialog(self, " A sample editor \n in wxPython", "About Sample Editor", wx.OK)
dlg.ShowModal() # Shows it
dlg.Destroy() # finally destroy it when finished.
def OnExit(self,e):
self.Close(True) # Close the frame.
def OnOpen(self,e):
Open a file
dlg = wx.FileDialog(self, "Choose a file", self.dirname, "", "*.*", wx.OPEN)
if dlg.ShowModal() == wx.ID_OK:
self.filename = dlg.GetFilename()
self.dirname = dlg.GetDirectory()
f = open(os.path.join(self.dirname, self.filename), 'r')
self.control.SetValue(f.read())
f.close()
dlg.Destroy()
app = wx.App(False)
frame = MainWindow(None, "Sample editor")
app.MainLoop()
!python sizer_demo.py
Explanation: Working with Windows
In this section, we are going to present the way wxPython deals with windows and their contents, including building input forms and using various widgets/controls. We are going to build a small application that calculates the price of a quote.
Laying out Visual Elements
Within a frame, you'll use a number of wxWindow sub-classes to flesh out the frame's contents. Here are some of the more common elements you might want to put in your frame:
- wx.MenuBar, which puts a menu bar along the top of your frame.
- wx.StatusBar, which sets up an area along the bottom of your frame for displaying status messages, etc.
- wx.ToolBar, which puts a toolbar in your frame.
- Sub-classes of wx.Control. These are objects which represent user interface widgets (ie, visual elements which display data and/or process user input). Common examples of wx.Control objects include wx.Button, wx.StaticText, wx.TextCtrl and wx.ComboBox.
- wx.Panel, which is a container to hold your various wx.Control objects. Putting your wx.Control objects inside a wx.Panel means that the user can tab from one UI widget to the next.
All visual elements (wxWindow objects and their subclasses) can hold sub-elements. Thus, for example, a wx.Frame might hold a number of wx.Panel objects, which in turn hold a number of wx.Button, wx.StaticText and wx.TextCtrl objects, giving you an entire hierarchy of elements:  Note that this merely describes the way that certain visual elements are interrelated -- not how they are visually laid out within the frame. To handle the layout of elements within a frame, there are several options.
We are going to show the usage of wxSizers.
A sizer (that is, one of the wx.Sizer sub-classes) can be used to handle the visual arrangement of elements within a window or frame. Sizers can:
- Calculate an appropriate size for each visual element.
- Position the elements according to certain rules.
- Dynamically resize and/or reposition elements when a frame is resized.
Some of the more common types of sizers include:
- wx.BoxSizer, which arranges visual elements in a line going either horizontally or vertically.
- wx.GridSizer, which lays visual elements out into a grid-like structure.
- wx.FlexGridSizer, which is similar to a wx.GridSizer except that it allow for more flexibility in laying out visual elements.
A sizer is given a list of wx.Window objects to size, either by calling sizer.Add(window, options...), or by calling sizer.AddMany(...). A sizer will only work on those elements which it has been given. Sizers can be nested. That is, you can add one sizer to another sizer, for example to have two rows of buttons (each laid out by a horizontal wx.BoxSizer) contained within another wx.BoxSizer which places the rows of buttons one above the other.
Note: Notice that the above example does not lay out the six buttons into two rows of three columns each -- to do that, you should use a wxGridSizer.
In the following example we use two nested sizers, the main one with vertical layout and the embedded one with horizontal layout:
End of explanation
%%writefile example.py
import wx
class ExamplePanel(wx.Panel):
def __init__(self, parent):
wx.Panel.__init__(self, parent)
self.quote = wx.StaticText(self, label="Your quote :",
pos=(20, 30))
# A multiline TextCtrl - This is here to show how the events work in this program, don't pay too much attention to it
self.logger = wx.TextCtrl(self, pos=(300,20), size=(200,300),
style=wx.TE_MULTILINE | wx.TE_READONLY)
# A button
self.button =wx.Button(self, label="Save", pos=(200, 325))
self.Bind(wx.EVT_BUTTON, self.OnClick,self.button)
# the edit control - one line version.
self.lblname = wx.StaticText(self, label="Your name :",
pos=(20,60))
self.editname = wx.TextCtrl(self, value="Enter here your name",
pos=(150, 60), size=(140,-1))
self.Bind(wx.EVT_TEXT, self.EvtText, self.editname)
self.Bind(wx.EVT_CHAR, self.EvtChar, self.editname)
# the combobox Control
self.sampleList = ['friends', 'advertising', 'web search',
'Yellow Pages']
self.lblhear = wx.StaticText(self,
label="How did you hear from us ?",
pos=(20, 90))
self.edithear = wx.ComboBox(self, pos=(150, 90), size=(95, -1),
choices=self.sampleList,
style=wx.CB_DROPDOWN)
self.Bind(wx.EVT_COMBOBOX, self.EvtComboBox, self.edithear)
self.Bind(wx.EVT_TEXT, self.EvtText,self.edithear)
# Checkbox
self.insure = wx.CheckBox(self,
label="Do you want Insured Shipment ?",
pos=(20,180))
self.Bind(wx.EVT_CHECKBOX, self.EvtCheckBox, self.insure)
# Radio Boxes
radioList = ['blue', 'red', 'yellow', 'orange', 'green',
'purple', 'navy blue', 'black', 'gray']
rb = wx.RadioBox(self, label="What color would you like ?",
pos=(20, 210), choices=radioList,
majorDimension=3,
style=wx.RA_SPECIFY_COLS)
self.Bind(wx.EVT_RADIOBOX, self.EvtRadioBox, rb)
def EvtRadioBox(self, event):
self.logger.AppendText('EvtRadioBox: %d\n' % event.GetInt())
def EvtComboBox(self, event):
self.logger.AppendText('EvtComboBox: %s\n' % event.GetString())
def OnClick(self,event):
self.logger.AppendText(" Click on object with Id %d\n" %event.GetId())
def EvtText(self, event):
self.logger.AppendText('EvtText: %s\n' % event.GetString())
def EvtChar(self, event):
self.logger.AppendText('EvtChar: %d\n' % event.GetKeyCode())
event.Skip()
def EvtCheckBox(self, event):
self.logger.AppendText('EvtCheckBox: %d\n' % event.Checked())
%%writefile control_demo.py
import wx
from example import ExamplePanel
app = wx.App(False)
frame = wx.Frame(None,size=(500,400))
panel = ExamplePanel(frame)
frame.Show()
app.MainLoop()
!python control_demo.py
Explanation: The sizer.Add method has three arguments. The first one specifies the control to include in the sizer. The second one is a weight factor which means that this control will be sized in proportion to other ones. For example, if you had three edit controls and you wanted them to have the proportions 3:2:1 then you would specify these factors as arguments when adding the controls. 0 means that this control or sizer will not grow. The third argument is normally wx.GROW (same as wx.EXPAND) which means the control will be resized when necessary. If you use wx.SHAPED instead, the controls aspect ratio will remain the same.
If the second parameter is 0, i.e. the control will not be resized, the third parameter may indicate if the control should be centered horizontally and/or vertically by using wx.ALIGN_CENTER_HORIZONTAL, wx.ALIGN_CENTER_VERTICAL, or wx.ALIGN_CENTER (for both) instead of wx.GROW or wx.SHAPED as that third parameter.
You can alternatively specify combinations of wx.ALIGN_LEFT, wx.ALIGN_TOP, wx.ALIGN_RIGHT, and wx.ALIGN_BOTTOM. The default behavior is equivalent to wx.ALIGN_LEFT | wx.ALIGN_TOP.
One potentially confusing aspect of the wx.Sizer and its sub-classes is the distinction between a sizer and a parent window. When you create objects to go inside a sizer, you do not make the sizer the object's parent window. A sizer is a way of laying out windows, it is not a window in itself. In the above example, all six buttons would be created with the parent window being the frame or window which encloses the buttons -- not the sizer. If you try to create a visual element and pass the sizer as the parent window, your program will crash.
Once you have set up your visual elements and added them to a sizer (or to a nested set of sizers), the next step is to tell your frame or window to use the sizer. You do this in three steps:
python
window.SetSizer(sizer)
window.SetAutoLayout(True)
sizer.Fit(window)
The SetSizer() call tells your window (or frame) which sizer to use. The call to SetAutoLayout() tells your window to use the sizer to position and size your components. And finally, the call to sizer.Fit() tells the sizer to calculate the initial size and position for all its elements. If you are using sizers, this is the normal process you would go through to set up your window or frame's contents before it is displayed for the first time.
Controls
You will find a complete list of the numerous Controls that exist in wxPython in the demo and help, but here we are going to present those most frequently used:
wxButton The most basic Control: A button showing a text that you can click. For example, here is a "Clear" button (e.g. to clear a text):
python
clearButton = wx.Button(self, wx.ID_CLEAR, "Clear")
self.Bind(wx.EVT_BUTTON, self.OnClear, clearButton)
wxTextCtrl This control let the user input text. It generates two main events. EVT_TEXT is called whenever the text changes. EVT_CHAR is called whenever a key has been pressed.
python
textField = wx.TextCtrl(self)
self.Bind(wx.EVT_TEXT, self.OnChange, textField)
self.Bind(wx.EVT_CHAR, self.OnKeyPress, textField)
For example: If the user presses the "Clear" button and that clears the text field, that will generate an EVT_TEXT event, but not an EVT_CHAR event.
wxComboBox A combobox is very similar to wxTextCtrl but in addition to the events generated by wxTextCtrl, wxComboBox has the EVT_COMBOBOX event.
wxCheckBox The checkbox is a control that gives the user true/false choice.
wxRadioBox The radiobox lets the user choose from a list of options.
Let's see an example by defining a more complex panel:
End of explanation
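Following the note above about wxGridSizer, a minimal sketch (not from the original tutorial) of laying six buttons out in two rows of three columns:
```python
import wx

app = wx.App(False)
frame = wx.Frame(None, title="GridSizer sketch")
grid = wx.GridSizer(2, 3, 5, 5)  # 2 rows, 3 columns, 5px gaps
for i in range(6):
    grid.Add(wx.Button(frame, label="Button &" + str(i)), 0, wx.EXPAND)
frame.SetSizer(grid)
grid.Fit(frame)
frame.Show()
app.MainLoop()
```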
%%writefile notebook_demo.py
import wx
from example import ExamplePanel
app = wx.App(False)
frame = wx.Frame(None, title="Demo with Notebook",size=(500,400))
nb = wx.Notebook(frame)
nb.AddPage(ExamplePanel(nb), "Absolute Positioning")
nb.AddPage(ExamplePanel(nb), "Page Two")
nb.AddPage(ExamplePanel(nb), "Page Three")
frame.Show()
app.MainLoop()
!python notebook_demo.py
Explanation: The notebook
Sometimes, a form grows too big to fit on a single page. The
wx.Notebook is used in that kind of case : It allows the user to navigate quickly between a small amount of pages by clicking on associated tabs. We implement this by putting the wx.Notebook instead of our form into the main Frame and then add our panel into the notebook by using method AddPage.
End of explanation
%%writefile example.py
import wx
class ExamplePanel(wx.Panel):
def __init__(self, parent):
wx.Panel.__init__(self, parent)
# create some sizers
mainSizer = wx.BoxSizer(wx.VERTICAL)
grid = wx.GridBagSizer(hgap=5, vgap=5)
hSizer = wx.BoxSizer(wx.HORIZONTAL)
self.quote = wx.StaticText(self, label="Your quote: ")
grid.Add(self.quote, pos=(0,0))
# A multiline TextCtrl - This is here to show how the events work in this program, don't pay too much attention to it
self.logger = wx.TextCtrl(self, size=(200,300), style=wx.TE_MULTILINE | wx.TE_READONLY)
# A button
self.button =wx.Button(self, label="Save")
self.Bind(wx.EVT_BUTTON, self.OnClick,self.button)
# the edit control - one line version.
self.lblname = wx.StaticText(self, label="Your name :")
grid.Add(self.lblname, pos=(1,0))
self.editname = wx.TextCtrl(self, value="Enter here your name", size=(140,-1))
grid.Add(self.editname, pos=(1,1))
self.Bind(wx.EVT_TEXT, self.EvtText, self.editname)
self.Bind(wx.EVT_CHAR, self.EvtChar, self.editname)
# the combobox Control
self.sampleList = ['friends', 'advertising', 'web search', 'Yellow Pages']
self.lblhear = wx.StaticText(self, label="How did you hear from us ?")
grid.Add(self.lblhear, pos=(3,0))
self.edithear = wx.ComboBox(self, size=(95, -1),
choices=self.sampleList,
style=wx.CB_DROPDOWN)
grid.Add(self.edithear, pos=(3,1))
self.Bind(wx.EVT_COMBOBOX, self.EvtComboBox, self.edithear)
self.Bind(wx.EVT_TEXT, self.EvtText,self.edithear)
# add a spacer to the sizer
grid.Add((10, 40), pos=(2,0))
# Checkbox
self.insure = wx.CheckBox(self, label="Do you want Insured Shipment ?")
grid.Add(self.insure, pos=(4,0), span=(1,2),
flag=wx.BOTTOM, border=5)
self.Bind(wx.EVT_CHECKBOX, self.EvtCheckBox, self.insure)
# Radio Boxes
radioList = ['blue', 'red', 'yellow', 'orange', 'green', 'purple', 'navy blue', 'black', 'gray']
rb = wx.RadioBox(self, label="What color would you like ?", pos=(20, 210), choices=radioList, majorDimension=3,
style=wx.RA_SPECIFY_COLS)
grid.Add(rb, pos=(5,0), span=(1,2))
self.Bind(wx.EVT_RADIOBOX, self.EvtRadioBox, rb)
hSizer.Add(grid, 0, wx.ALL, 5)
hSizer.Add(self.logger)
mainSizer.Add(hSizer, 0, wx.ALL, 5)
mainSizer.Add(self.button, 0, wx.CENTER)
self.SetSizerAndFit(mainSizer)
def EvtRadioBox(self, event):
self.logger.AppendText('EvtRadioBox: %d\n' % event.GetInt())
def EvtComboBox(self, event):
self.logger.AppendText('EvtComboBox: %s\n' % event.GetString())
def OnClick(self,event):
self.logger.AppendText(" Click on object with Id %d\n" %event.GetId())
def EvtText(self, event):
self.logger.AppendText('EvtText: %s\n' % event.GetString())
def EvtChar(self, event):
self.logger.AppendText('EvtChar: %d\n' % event.GetKeyCode())
event.Skip()
def EvtCheckBox(self, event):
self.logger.AppendText('EvtCheckBox: %d\n' % event.Checked())
%%writefile control_demo.py
import wx
from example import ExamplePanel
app = wx.App(False)
frame = wx.Frame(None,size=(500,400))
panel = ExamplePanel(frame)
frame.Show()
app.MainLoop()
!python control_demo.py
Explanation: Improving the layout - using Sizers
Using absolute positioning is often not very satisfying: the result is ugly if the windows are not (for one reason or another) the right size. wxPython has a very rich vocabulary of objects to lay out controls.
- wx.BoxSizer is the most common and simple layout object, but it permits a vast range of possibilities. Its role is roughly to arrange a set of controls in a row or in a column and rearrange them when needed (i.e. when the overall size changes).
- wx.GridSizer and wx.FlexGridSizer are two very important layout tools. They arrange the controls in a tabular layout.
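For a feel of the two families, a minimal sketch (assuming an existing panel; buttons and labels are illustrative):
python
row = wx.BoxSizer(wx.HORIZONTAL)                # items laid out left to right
row.Add(wx.Button(panel, label="OK"), 0, wx.ALL, 5)
row.Add(wx.Button(panel, label="Cancel"), 0, wx.ALL, 5)
grid = wx.GridSizer(2, 2, 5, 5)                 # 2x2 table, every cell gets the same size
for text in ("1", "2", "3", "4"):
    grid.Add(wx.Button(panel, label=text), 0, wx.EXPAND)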
Here is the sample above re-written to use sizers:
End of explanation
%%writefile mpl_demo.py
#!/usr/bin/env python
#import wxversion
#wxversion.ensureMinimal('2.8')
from numpy import arange, sin, pi
from matplotlib.backends.backend_wxagg import FigureCanvasWxAgg as FigureCanvas
from matplotlib.figure import Figure
import wx
class CanvasFrame(wx.Frame):
def __init__(self):
wx.Frame.__init__(self, None, -1,
'CanvasFrame', size=(550, 350))
self.figure = Figure()
self.axes = self.figure.add_subplot(111)
t = arange(0.0, 3.0, 0.01)
s = sin(2 * pi * t)
self.axes.plot(t, s)
self.canvas = FigureCanvas(self, -1, self.figure)
self.sizer = wx.BoxSizer(wx.VERTICAL)
self.sizer.Add(self.canvas, 1, wx.LEFT | wx.TOP | wx.EXPAND)
self.SetSizer(self.sizer)
self.Fit()
class App(wx.App):
def OnInit(self):
'Create the main window and insert the custom frame'
frame = CanvasFrame()
frame.Show(True)
return True
app = App(0)
app.MainLoop()
!python mpl_demo.py
Explanation: wxpython and matplotlib
To plot or show images inside a wxPython panel, we embed a matplotlib Figure in the panel using matplotlib's wxAgg backend.
End of explanation
%%writefile mpl_demo.py
#!/usr/bin/env python
import wxversion
wxversion.ensureMinimal('2.8')
from numpy import arange, sin, pi
from matplotlib.backends.backend_wxagg import \
FigureCanvasWxAgg as FigureCanvas, \
NavigationToolbar2WxAgg as NavigationToolbar
from matplotlib.figure import Figure
import wx
class CanvasFrame(wx.Frame):
def __init__(self):
wx.Frame.__init__(self, None, -1,
'CanvasFrame', size=(550, 350))
self.figure = Figure()
self.axes = self.figure.add_subplot(111)
t = arange(0.0, 3.0, 0.01)
s = sin(2 * pi * t)
self.axes.plot(t, s)
self.canvas = FigureCanvas(self, -1, self.figure)
self.sizer = wx.BoxSizer(wx.VERTICAL)
self.sizer.Add(self.canvas, 1, wx.LEFT | wx.TOP | wx.EXPAND)
self.SetSizer(self.sizer)
self.Fit()
self.add_toolbar() #add toolbar
def add_toolbar(self):
self.toolbar = NavigationToolbar(self.canvas)
self.toolbar.Realize()
# By adding toolbar in sizer, we are able to put it at the bottom
# of the frame - so appearance is closer to GTK version.
self.sizer.Add(self.toolbar, 0, wx.LEFT | wx.EXPAND)
# update the axes menu on the toolbar
self.toolbar.update()
class App(wx.App):
def OnInit(self):
'Create the main window and insert the custom frame'
frame = CanvasFrame()
frame.Show(True)
return True
app = App(0)
app.MainLoop()
!python mpl_demo.py
Explanation: Let's see how this works.
python
self.figure = Figure()
initializes the top level container for all plot elements. Everything in the
plot goes within this object, just like everything in our wx application goes into
our frame!
python
self.axes = self.figure.add_subplot(111)
Our figure can contain many subplots, but here we will only make one; add_subplot() is what creates it for us. The 111 is a grid parameter encoded as an integer: it means a 1x1 grid, first subplot. If you want two subplots stacked vertically, the grid would be 2x1; the first subplot would be 211 and the second 212.
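For example, two stacked subplots in the same figure would look like this (the attribute names ax_top and ax_bottom are illustrative):
python
self.ax_top = self.figure.add_subplot(211)      # 2 rows x 1 column, first cell
self.ax_bottom = self.figure.add_subplot(212)   # 2 rows x 1 column, second cell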
python
self.axes.plot(t, s)
t and s are the names I chose for the two arrays of values that are plotted against each other; these arrays must have the same size!
This command plots s against t on our axes. Since self.axes was defined as a
subplot of self.figure, the resulting curve also becomes part of
self.figure, the container of our subplot.
python
self.canvas = FigureCanvas(self, -1, self.figure)
Finally, we have our canvas object, which paints our figure onto the
screen. Simply pass in our figure and the FigureCanvas tool does the
rest.
Navigation toolbar
A useful addition is the navigation toolbar defined in matplotlib, which allows one to explore the plot interactively (pan, zoom, save).
Let's add this toolbar to our previous example.
End of explanation
%%writefile mpl_demo.py
#!/usr/bin/env python
import wxversion
wxversion.ensureMinimal('2.8')
from numpy import arange, sin, pi
from matplotlib.backends.backend_wxagg import FigureCanvasWxAgg as FigureCanvas
from matplotlib.backends.backend_wxagg import NavigationToolbar2WxAgg
from matplotlib.backends.backend_wx import _load_bitmap
from matplotlib.figure import Figure
from numpy.random import rand
import wx
class MyNavigationToolbar(NavigationToolbar2WxAgg):
Extend the default wx toolbar with your own event handlers
ON_CUSTOM = wx.NewId()
def __init__(self, canvas, cankill):
NavigationToolbar2WxAgg.__init__(self, canvas)
# for simplicity I'm going to reuse a bitmap from wx, you'll
# probably want to add your own.
if 'phoenix' in wx.PlatformInfo:
self.AddTool(self.ON_CUSTOM, 'Click me',
_load_bitmap('stock_left.xpm'),
'Activate custom control')
self.Bind(wx.EVT_TOOL, self._on_custom, id=self.ON_CUSTOM)
else:
self.AddSimpleTool(self.ON_CUSTOM, _load_bitmap('stock_left.xpm'),
'Click me', 'Activate custom control')
self.Bind(wx.EVT_TOOL, self._on_custom, id=self.ON_CUSTOM)
def _on_custom(self, evt):
# add some text to the axes at a random location (in axes (0,1)
# coords) with a random color
# get the axes
ax = self.canvas.figure.axes[0]
# generate a random location and color
x, y = tuple(rand(2))
rgb = tuple(rand(3))
# add the text and draw
ax.text(x, y, 'You clicked me',
transform=ax.transAxes,
color=rgb)
self.canvas.draw()
evt.Skip()
class CanvasFrame(wx.Frame):
def __init__(self):
wx.Frame.__init__(self, None, -1,
'CanvasFrame', size=(550, 350))
self.figure = Figure(figsize=(5, 4), dpi=100)
self.axes = self.figure.add_subplot(111)
t = arange(0.0, 3.0, 0.01)
s = sin(2 * pi * t)
self.axes.plot(t, s)
self.canvas = FigureCanvas(self, -1, self.figure)
self.sizer = wx.BoxSizer(wx.VERTICAL)
self.sizer.Add(self.canvas, 1, wx.TOP | wx.LEFT | wx.EXPAND)
# Capture the paint message
self.Bind(wx.EVT_PAINT, self.OnPaint)
self.toolbar = MyNavigationToolbar(self.canvas, True)
self.toolbar.Realize()
# By adding toolbar in sizer, we are able to put it at the bottom
# of the frame - so appearance is closer to GTK version.
self.sizer.Add(self.toolbar, 0, wx.LEFT | wx.EXPAND)
# update the axes menu on the toolbar
self.toolbar.update()
self.SetSizer(self.sizer)
self.Fit()
def OnPaint(self, event):
self.canvas.draw()
event.Skip()
class App(wx.App):
def OnInit(self):
'Create the main window and insert the custom frame'
frame = CanvasFrame()
frame.Show(True)
return True
app = App(0)
app.MainLoop()
!python mpl_demo.py
Explanation: It is possible to use this navigation toolbar as a starting point to add more buttons and capabilities.
This is another example from the matplotlib demo library.
End of explanation
%%writefile buttons_demo.py
#!/usr/bin/env python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import Button
freqs = np.arange(2, 20, 3)
fig, ax = plt.subplots()
plt.subplots_adjust(bottom=0.2)
t = np.arange(0.0, 1.0, 0.001)
s = np.sin(2*np.pi*freqs[0]*t)
l, = plt.plot(t, s, lw=2)
class Index(object):
ind = 0
def next(self, event):
self.ind += 1
i = self.ind % len(freqs)
ydata = np.sin(2*np.pi*freqs[i]*t)
l.set_ydata(ydata)
plt.draw()
def prev(self, event):
self.ind -= 1
i = self.ind % len(freqs)
ydata = np.sin(2*np.pi*freqs[i]*t)
l.set_ydata(ydata)
plt.draw()
callback = Index()
axprev = plt.axes([0.7, 0.05, 0.1, 0.075])
axnext = plt.axes([0.81, 0.05, 0.1, 0.075])
bnext = Button(axnext, 'Next')
bnext.on_clicked(callback.next)
bprev = Button(axprev, 'Previous')
bprev.on_clicked(callback.prev)
plt.show()
!python buttons_demo.py
Explanation: Matplotlib examples
We give here a few more examples to show the various capabilities.
Buttons
End of explanation
%%writefile checkbuttons_demo.py
#!/usr/bin/env python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import CheckButtons
t = np.arange(0.0, 2.0, 0.01)
s0 = np.sin(2*np.pi*t)
s1 = np.sin(4*np.pi*t)
s2 = np.sin(6*np.pi*t)
fig, ax = plt.subplots()
l0, = ax.plot(t, s0, visible=False, lw=2)
l1, = ax.plot(t, s1, lw=2)
l2, = ax.plot(t, s2, lw=2)
plt.subplots_adjust(left=0.2)
rax = plt.axes([0.05, 0.4, 0.1, 0.15])
check = CheckButtons(rax, ('2 Hz', '4 Hz', '6 Hz'), (False, True, True))
def func(label):
if label == '2 Hz':
l0.set_visible(not l0.get_visible())
elif label == '4 Hz':
l1.set_visible(not l1.get_visible())
elif label == '6 Hz':
l2.set_visible(not l2.get_visible())
plt.draw()
check.on_clicked(func)
plt.show()
!python checkbuttons_demo.py
Explanation: Check Buttons
End of explanation
%%writefile cursor_demo.py
#!/usr/bin/env python
from matplotlib.widgets import Cursor
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111, axisbg='#FFFFCC')
x, y = 4*(np.random.rand(2, 100) - .5)
ax.plot(x, y, 'o')
ax.set_xlim(-2, 2)
ax.set_ylim(-2, 2)
# set useblit = True on gtkagg for enhanced performance
cursor = Cursor(ax, useblit=True, color='red', linewidth=2)
plt.show()
!python cursor_demo.py
Explanation: Cursor
End of explanation
%%writefile rectsel.py
#!/usr/bin/env python
Do a mouse click somewhere, move the mouse to some destination, release
the button. This class gives click- and release-events and also draws
a line or a box from the click-point to the actual mouse position
(within the same axes) until the button is released. Within the
method 'self.ignore()' it is checked whether the button from eventpress
and eventrelease are the same.
from matplotlib.widgets import RectangleSelector
import numpy as np
import matplotlib.pyplot as plt
def line_select_callback(eclick, erelease):
'eclick and erelease are the press and release events'
x1, y1 = eclick.xdata, eclick.ydata
x2, y2 = erelease.xdata, erelease.ydata
print("(%3.2f, %3.2f) --> (%3.2f, %3.2f)" % (x1, y1, x2, y2))
print(" The button you used were: %s %s" % (eclick.button, erelease.button))
def toggle_selector(event):
print(' Key pressed.')
if event.key in ['Q', 'q'] and toggle_selector.RS.active:
print(' RectangleSelector deactivated.')
toggle_selector.RS.set_active(False)
if event.key in ['A', 'a'] and not toggle_selector.RS.active:
print(' RectangleSelector activated.')
toggle_selector.RS.set_active(True)
fig, current_ax = plt.subplots() # make a new plotting range
N = 100000 # If N is large one can see
x = np.linspace(0.0, 10.0, N) # improvement by using blitting!
plt.plot(x, +np.sin(.2*np.pi*x), lw=3.5, c='b', alpha=.7) # plot something
plt.plot(x, +np.cos(.2*np.pi*x), lw=3.5, c='r', alpha=.5)
plt.plot(x, -np.sin(.2*np.pi*x), lw=3.5, c='g', alpha=.3)
print("\n click --> release")
# drawtype is 'box' or 'line' or 'none'
toggle_selector.RS = RectangleSelector(current_ax, line_select_callback,
drawtype='box', useblit=True,
button=[1, 3], # don't use middle button
minspanx=5, minspany=5,
spancoords='pixels',
interactive=True)
plt.connect('key_press_event', toggle_selector)
plt.show()
!python rectsel.py
Explanation: Rectangle selector
End of explanation
%%writefile slider_demo.py
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider, Button, RadioButtons
fig, ax = plt.subplots()
plt.subplots_adjust(left=0.25, bottom=0.25)
t = np.arange(0.0, 1.0, 0.001)
a0 = 5
f0 = 3
s = a0*np.sin(2*np.pi*f0*t)
l, = plt.plot(t, s, lw=2, color='red')
plt.axis([0, 1, -10, 10])
axcolor = 'lightgoldenrodyellow'
axfreq = plt.axes([0.25, 0.1, 0.65, 0.03], axisbg=axcolor)
axamp = plt.axes([0.25, 0.15, 0.65, 0.03], axisbg=axcolor)
sfreq = Slider(axfreq, 'Freq', 0.1, 30.0, valinit=f0)
samp = Slider(axamp, 'Amp', 0.1, 10.0, valinit=a0)
def update(val):
amp = samp.val
freq = sfreq.val
l.set_ydata(amp*np.sin(2*np.pi*freq*t))
fig.canvas.draw_idle()
sfreq.on_changed(update)
samp.on_changed(update)
resetax = plt.axes([0.8, 0.025, 0.1, 0.04])
button = Button(resetax, 'Reset', color=axcolor, hovercolor='0.975')
def reset(event):
sfreq.reset()
samp.reset()
button.on_clicked(reset)
rax = plt.axes([0.025, 0.5, 0.15, 0.15], axisbg=axcolor)
radio = RadioButtons(rax, ('red', 'blue', 'green'), active=0)
def colorfunc(label):
l.set_color(label)
fig.canvas.draw_idle()
radio.on_clicked(colorfunc)
plt.show()
!python slider_demo.py
Explanation: Slider
End of explanation
%%writefile span_demo.py
#!/usr/bin/env python
The SpanSelector is a mouse widget to select an xmin/xmax range and plot the
detail view of the selected region in the lower axes
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import SpanSelector
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(211, axisbg='#FFFFCC')
x = np.arange(0.0, 5.0, 0.01)
y = np.sin(2*np.pi*x) + 0.5*np.random.randn(len(x))
ax.plot(x, y, '-')
ax.set_ylim(-2, 2)
ax.set_title('Press left mouse button and drag to test')
ax2 = fig.add_subplot(212, axisbg='#FFFFCC')
line2, = ax2.plot(x, y, '-')
def onselect(xmin, xmax):
indmin, indmax = np.searchsorted(x, (xmin, xmax))
indmax = min(len(x) - 1, indmax)
thisx = x[indmin:indmax]
thisy = y[indmin:indmax]
line2.set_data(thisx, thisy)
ax2.set_xlim(thisx[0], thisx[-1])
ax2.set_ylim(thisy.min(), thisy.max())
fig.canvas.draw()
# set useblit True on gtkagg for enhanced performance
span = SpanSelector(ax, onselect, 'horizontal', useblit=True,
rectprops=dict(alpha=0.5, facecolor='red'))
plt.show()
!python span_demo.py
Explanation: Span selector
End of explanation
%load_ext version_information
%version_information wxpython
Explanation: Further reading
http://t2mh.com/python/wxPython%20in%20Action%20(2006).pdf
https://www.tutorialspoint.com/wxpython/wxpython_tutorial.pdf
Versions
End of explanation |
5,874 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
San Francisco Crime Dataset Conversion
Challenge
Spark does not support out-of-the box data frame creation from CSV files.
The CSV reader from Databricks provides such functionality but requires an extra library.
python
df = sqlContext.read \
.format('com.databricks.spark.csv') \
.options(header='true', inferschema='true') \
.load('train.csv')
Solution
Read CSV files and create the DataFrame manually.
Step1: Initialize contexts and input file
Step2: Remove header row from input file
Step3: Define data schema
Step4: Parse CSV lines and transform values into tuples
Step5: Write DataFrame as parquet file | Python Code:
import csv
import pyspark
from pyspark.sql import SQLContext
from pyspark.sql.types import *
from StringIO import StringIO
from datetime import *
from dateutil.parser import parse
Explanation: San Francisco Crime Dataset Conversion
Challenge
Spark does not support out-of-the-box DataFrame creation from CSV files.
The CSV reader from Databricks provides such functionality but requires an extra library.
python
df = sqlContext.read \
.format('com.databricks.spark.csv') \
.options(header='true', inferschema='true') \
.load('train.csv')
Solution
Read CSV files and create the DataFrame manually.
End of explanation
sc = pyspark.SparkContext('local[*]')
sqlContext = SQLContext(sc)
textRDD = sc.textFile("../../data/sf-crime/train.csv.bz2")
textRDD.count()
Explanation: Initialize contexts and input file:
End of explanation
header = textRDD.first()
textRDD = textRDD.filter(lambda line: not line == header)
Explanation: Remove header row from input file:
End of explanation
fields = [StructField(field_name, StringType(), True) for field_name in header.split(',')]
fields[0].dataType = TimestampType()
fields[7].dataType = FloatType()
fields[8].dataType = FloatType()
schema = StructType(fields)
Explanation: Define data schema:
End of explanation
# parse each csv line (fields may contain enclosed ',' in parantheses) and split into tuples
tupleRDD = textRDD \
.map(lambda line: next(csv.reader(StringIO(line)))) \
.map(lambda x: (parse(x[0]), x[1], x[2], x[3], x[4], x[5], x[6], float(x[7]), float(x[8])))
df = sqlContext.createDataFrame(tupleRDD, schema)
Explanation: Parse CSV lines and transform values into tuples:
End of explanation
df.write.save("../../data/sf-crime/train.parquet")
Explanation: Write DataFrame as parquet file:
End of explanation |
5,875 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example 6 - Scattered Data and 'Heat Maps'
There are different ways to map point data to a smooth field. One way is to triangulate the data, smooth it and interpolate to a regular mesh (see previous notebooks). It is also possible to construct weighted averages from scattered points to a regular mesh. In this notebook we work through how to find where points lie in the mesh and map their values to nearby vertices.
Notebook contents
Computational mesh
Scattered data
Data count by triangle
Data count by nearest vertex
Distance weighting to vertices
Visualisation
The next example is Ex7-Refinement-of-Triangulations
Define a regular computational mesh
Use the (usual) icosahedron with face points included.
Step1: Point data with uneven spatial distribution
Define a relatively smooth function that we can interpolate from the coarse mesh to the fine mesh and analyse. As it is a familiar pattern, we use the seismic event catalogue for M5.5+ (dawn of time to 2017-12-31) from IRIS
Step2: Count earthquakes per triangle
This is a numpy wrapper around the STRIPACK routine which operates by retriangulation and is therefore not particularly fast.
Step3: Count earthquakes per vertex
The sTriangulation.nearest_vertices method uses a k-d tree to find the nearest vertices to a set of longitude / latitude points. It returns the great circle distance. This requires the k-d tree to have been built when the mesh was initialised (tree=True)
Step4: Inverse distance weighted number of earthquakes
The k-d tree method provides a specified number of neighbours and the arc lengths to those neighbours. This can be used in a number of ways to smooth or amalgamate data. Here for example is a weighted average of each earthquake to nearby nodes.
We compute the distances to $N$ nearby vertices and distribute information to those vertices in inverse proportion to their distance.
$$ w_i = \frac{d_i}{\sum_{i=1}^N d_i} $$
Alternatively, we might map information to the vertices by applying a radially symmetric kernel to the point data without normalising.
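In numpy terms, a minimal sketch of both options, assuming distances has shape (number of points, N) as returned by nearest_vertices:
python
import numpy as np
weights = distances / distances.sum(axis=1, keepdims=True)   # w_i = d_i / sum_j d_j, each row sums to 1
kernel = np.exp(-distances / 0.02)                           # unnormalised radially symmetric kernel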
Step5: Mapping data other than frequency to the regular mesh
Here we show how to map point data to the regular mesh - produce a representation of the depth of the events instead of just their frequency. When plotting, we need to distinguish between zero information and zero (shallow) depth. This is done by using the weight function to determine the opacity of the symbol or field that we plot. This has the effect of washing out the regions with few, large events compared to those with many small ones (which in this case means washed out regions where earthquakes are deep).
Step6: Visualisation | Python Code:
import stripy as stripy
mesh = stripy.spherical_meshes.icosahedral_mesh(refinement_levels=5, include_face_points=True, tree=True)
print(mesh.npoints)
Explanation: Example 6 - Scattered Data and 'Heat Maps'
There are different ways to map point data to a smooth field. One way is to triangulate the data, smooth it and interpolate to a regular mesh (see previous notebooks). It is also possible to construct weighted averages from scattered points to a regular mesh. In this notebook we work through how to find where points lie in the mesh and map their values to nearby vertices.
Notebook contents
Computational mesh
Scattered data
Data count by triangle
Data count by nearest vertex
Distance weighting to vertices
Visualisation
The next example is Ex7-Refinement-of-Triangulations
Define a regular computational mesh
Use the (usual) icosahedron with face points included.
End of explanation
import numpy as np
# Note - these data have some places where depth is unknown (appears as NaN in the depth )
# The IRIS data has lat, lon, depth, mag ... date/time in col 2, 3, 4, 10 (starting from zero)
eqs = np.genfromtxt("../Data/EQ-M5.5-IRIS-ALL.txt", usecols=(2,3,4,10), delimiter='|', comments="#")
lons = np.radians(eqs[:,1])
lats = np.radians(eqs[:,0])
depths = eqs[:,2]
depths[np.isnan(depths)] = -1.0
%matplotlib inline
import gdal
import cartopy
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(10, 5), facecolor="none")
ax = plt.subplot(111, projection=ccrs.Mollweide())
ax.coastlines(color="#777777" )
ax.set_global()
lons0 = np.degrees(lons)
lats0 = np.degrees(lats)
ax.scatter(lons0, lats0,
marker="o", s=10.0, alpha=0.5,
transform=ccrs.Geodetic(), c=depths, cmap=plt.cm.RdBu)
pass
Explanation: Point data with uneven spatial distribution
Define a relatively smooth function that we can interpolate from the coarse mesh to the fine mesh and analyse. As it is a familiar pattern, we use the seismic event catalogue for M5.5+ (dawn of time to 2017-12-31) from IRIS
End of explanation
triangles = mesh.containing_triangle(lons, lats)
tris, counts = np.unique(triangles, return_counts=True)
tris.shape
## map to nodes so we can plot this
hit_count = np.zeros_like(mesh.lons)
for i in range(0, tris.shape[0]):
hit_count[mesh.simplices[tris[i]]] += counts[i]
hit_count /= 3.0
print(hit_count.mean())
fig = plt.figure(figsize=(10, 10), facecolor="none")
ax = plt.subplot(111, projection=ccrs.Mollweide())
ax.coastlines(color="lightgrey", )
ax.set_global()
lons0 = np.degrees(mesh.lons)
lats0 = np.degrees(mesh.lats)
ax.scatter(lons0, lats0,
marker="o", s=30.0, transform=ccrs.Geodetic(), c=hit_count, cmap=plt.cm.Reds, vmin=0.333, vmax=20.0, alpha=0.25)
pass
Explanation: Count earthquakes per triangle
This is a numpy wrapper around the STRIPACK routine which operates by retriangulation and is therefore not particularly fast.
End of explanation
distances, vertices = mesh.nearest_vertices(lons, lats, k=1)
nodes, ncounts = np.unique(vertices, return_counts=True)
hit_countn = np.zeros_like(mesh.lons)
hit_countn[nodes] = ncounts
fig = plt.figure(figsize=(10, 10), facecolor="none")
ax = plt.subplot(111, projection=ccrs.Mollweide())
ax.coastlines(color="lightgrey", )
ax.set_global()
lons0 = np.degrees(mesh.lons)
lats0 = np.degrees(mesh.lats)
ax.scatter(lons0, lats0,
marker="o", s=30.0, transform=ccrs.Geodetic(), c=hit_countn, cmap=plt.cm.Reds, vmin=0.333, vmax=10.0, alpha=0.25)
pass
Explanation: Count earthquakes per vertex
The sTriangulation.nearest_vertices method uses a k-d tree to find the nearest vertices to a set of longitude / latitude points. It returns the great circle distance. This requires the k-d tree to have been built when the mesh was initialised (tree=True)
End of explanation
distances, vertices = mesh.nearest_vertices(lons, lats, k=10)
norm = distances.sum(axis=1)
# distances, vertices are arrays of shape (data_size, 10)
hit_countid = np.zeros_like(mesh.lons)
## numpy shouldn't try to vectorise this reduction operation
for i in range(0,distances.shape[0]):
hit_countid[vertices[i,:]] += distances[i,:] / norm[i]
hit_countidr = np.zeros_like(mesh.lons)
## numpy shouldn't try to vectorise this reduction operation
for i in range(0,distances.shape[0]):
hit_countidr[vertices[i,:]] += np.exp( -distances[i,:] / 0.02 )
fig = plt.figure(figsize=(10, 10), facecolor="none")
ax = plt.subplot(111, projection=ccrs.Mollweide())
ax.coastlines(color="lightgrey", )
ax.set_global()
lons0 = np.degrees(mesh.lons)
lats0 = np.degrees(mesh.lats)
ax.scatter(lons0, lats0,
marker="o", s=30.0, transform=ccrs.Geodetic(), c=hit_countid, cmap=plt.cm.Reds, vmin=0.333, vmax=10.0, alpha=0.25)
pass
Explanation: Inverse distance weighted number of earthquakes
The k-d tree method provides a specified number of neighbours and the arc lengths to those neighbours. This can be used in a number of ways to smooth or amalgamate data. Here for example is a weighted average of each earthquake to nearby nodes.
We compute the distances to $N$ nearby vertices and distribute information to those vertices in inverse proportion to their distance.
$$ w_i = \frac{d_i}{\sum_{i=1}^N d_i} $$
Alternatively, we might map information to the vertices by applying a radially symmetric kernel to the point data without normalising.
End of explanation
depth_idr = np.zeros_like(mesh.lons)
## numpy shouldn't try to vectorise this reduction operation
for i in range(0,distances.shape[0]):
depth_idr[vertices[i,:]] += depths[i] * np.exp( -distances[i,:] / 0.02 )
depth_idr[hit_countidr != 0.0] /= hit_countidr[hit_countidr != 0.0]
fig = plt.figure(figsize=(10, 10), facecolor="none")
ax = plt.subplot(111, projection=ccrs.Mollweide())
ax.coastlines(color="lightgrey", )
ax.set_global()
lons0 = np.degrees(mesh.lons)
lats0 = np.degrees(mesh.lats)
ax.scatter(lons0, lats0,
marker="o", transform=ccrs.Geodetic(), c=depth_idr, s=hit_countidr,
cmap=plt.cm.RdBu, vmin=0.0, vmax=500.0, alpha=0.25)
pass
Explanation: Mapping data other than frequency to the regular mesh
Here we show how to map point data to the regular mesh - produce a representation of the depth of the events instead of just their frequency. When plotting, we need to distinguish between zero information and zero (shallow) depth. This is done by using the weight function to determine the opacity of the symbol or field that we plot. This has the effect of washing out the regions with few, large events compared to those with many small ones (which in this case means washed out regions where earthquakes are deep).
End of explanation
import lavavu
depth_range = depths.max()
lv = lavavu.Viewer(border=False, background="#FFFFFF", resolution=[666,666], near=-10.0)
tris = lv.triangles("triangles", wireframe=False, colour="#77ff88", opacity=1.0)
tris.vertices(mesh.points)
tris.indices(mesh.simplices)
tris.values(hit_count, label="hit_count")
tris.values(hit_countn, label="hit_countn")
tris.values(hit_countid, label="hit_countid")
tris.values(hit_countidr, label="hit_countidr")
tris.colourmap("#FFFFFF:0.0 (1.0)#FF0000:0.2 (50.0)#550044:0.5")
depth = lv.triangles("depth_field", wireframe=False, colour="#FFFFFF", opacity=0.999)
depth.vertices(mesh.points*(6370-depth_range*0.99) / 6370)
depth.indices(mesh.simplices)
depth.values(depth_idr, label="depths")
depth.values(hit_countidr/hit_countidr.max(), label="weights")
depth.colourmap("#550000 #0099FF #000055")
depth["opacitymap"] = "#000000:0.0 (0.1)#000000:0.9 #000000:0.9"
depth["opacityby"] = "weights"
depth["colourby"] = "depths"
bg = lv.triangles("background", wireframe=False, colour="#FFFFFF", opacity=1.0)
bg.vertices(mesh.points*(6370-depth_range) / 6370)
bg.indices(mesh.simplices)
ll = np.array(stripy.spherical.lonlat2xyz(lons, lats)).T
nodes = lv.points("events", pointsize=3.0, pointtype="shiny", colour="#448080", opacity=0.75)
nodes.vertices(ll * (6370.0 - depths.reshape(-1,1)) / 6370.0 )
nodes.values(depths, label="depths")
nodes.colourmap("#550000 #0099FF #000055")
# View from the pacific hemisphere
lv.translation(0.0, 0.0, -3.0)
lv.rotation(0.0,90.0, 90.0)
lv.control.Panel()
lv.control.Range('specular', range=(0,1), step=0.1, value=0.4)
lv.control.Checkbox(property='axis')
lv.control.ObjectList()
tris.control.List(["hit_count", "hit_countn", "hit_countid", "hit_countidr"], property="colourby", value="hit_count", command="redraw")
lv.control.show()
Explanation: Visualisation
End of explanation |
5,876 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mpi-m', 'sandbox-1', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: MPI-M
Source ID: SANDBOX-1
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:17
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
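As a minimal sketch, a provision property of this kind could be filled by passing one of the valid choices listed in the cell above to DOC.set_value; the choice "C" is purely illustrative and says nothing about any actual model:
# Illustrative only - "C" is one of the enumerated provision codes above.
DOC.set_value("C")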
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
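A hedged sketch of completing a single-valued (1.1) ENUM; the option picked here is an arbitrary placeholder from the list above:
# Illustrative only - choose exactly one of the enumerated options.
DOC.set_value("Option 1")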
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
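For BOOLEAN properties the template passes a bare Python value rather than a string; a hypothetical completion (True is assumed here purely for illustration):
# Illustrative only - use True or False as appropriate for the model.
DOC.set_value(True)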
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
5,877 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'sandbox-1', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: EC-EARTH-CONSORTIUM
Source ID: SANDBOX-1
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:59
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
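A hedged example of completing the author cell; the name and e-mail address are placeholders, not real metadata:
# Illustrative placeholder author details.
DOC.set_author("Jane Doe", "jane.doe@example.org")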
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
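When the document is ready, the same call would presumably be made with 1 instead of 0, per the flag described in the comments above (shown here only as an illustration):
# Illustrative only - 1 marks the document for publication.
DOC.set_publication_status(1)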
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
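A sketch of filling a 0.N STRING property such as code languages; the language named is a placeholder, and the single-call form simply follows the template comment above:
# Illustrative placeholder for a multi-valued STRING property.
DOC.set_value("Fortran 90")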
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
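An illustrative completion of an optional (0.1) ENUM; the coupler named here is just one of the valid choices listed above and is not asserted to be the one this model actually uses:
# Illustrative only - one of the enumerated coupler options.
DOC.set_value("OASIS3-MCT")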
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g. THC, AABW, regional means, etc.) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
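For a BOOLEAN property such as this one, the value is passed unquoted; for example (hypothetical choice):
# Hypothetical example: the model represents aerosol effects on ice clouds
DOC.set_value(True)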
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
5,878 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Syrian Refugee Resettlement, March 2011 - April 2017
Ionut Gitan • Data Bootcamp • Balint Szoke • 5 May 2017
Project Summary
Introduction. The project presents a historical overview of Syrian refugee admission to the United States from the start of the Syrian Civil War in March 2011 to April 2017. It investigates where refugees settle in the United States by state and city, and the top receiving states and cities by population share. The project explores the questions
Step1: Crisis Background
The Syrian refugee crisis is an outgrowth of the Syrian civil war that began in 2011. The multi-sided war between President Bashar al-Assad's regime and rebel forces is complicated by the fracturing of militaries into militias, an emerging ethnic Kurdish federal state, intervening foreign powers, and the Islamic State emboldening international terror plots and crimes against humanity.
Step2: The Syrian refugee crisis has been called one of the greatest humanitarian crises since World War II. Half of the population – 11 million Syrians – have been killed or forced to flee. More than 200,000 Syrians have died. 6.3 million Syrians have been internally displaced, and 5 million Syrians have fled – the combined populations of California, New York and Florida. 4 out of 5 Syrians live in poverty and life expectancy has dropped by 20 years. School attendance has dropped over 50 percent, with more than 2 million children now out of school.
In addition to resettlement, the international community has responded to the crisis by providing aid. Syrian refugees have received record aid, including 6 billion USD in 2016 at the London Syria Conference – the highest raised in a single day for a single cause by the United Nations High Commissioner for Refugees (UNHCR). Despite fundraising efforts, the UNHCR requires nearly 2 billion USD in 2017 for the Middle East and North Africa. The US has contributed 6.5 billion USD in aid since the beginning of the crisis, including 3.3 billion USD for aid inside Syria.
U.S. Resettlement Factors
Many factors limit refugee resettlement to the U.S., including wavering political commitment from the White House between the Obama and Trump administrations. The Obama administration committed to resettle 10,000 refugees in September 2015 -- a goal it surpassed. On the other hand, the Trump administration signed two executive orders to ban Syrian refugees -- both orders were blocked by federal courts.
But the U.S. refugee resettlement process is also a major factor. Unlike Europe, the U.S. admits refugees into the country only after processing. Europe processes refugees once they arrive on European soil. In the U.S., refugees undergo an ‘extreme’ vetting process that can take more than two years. They go through multiple agency background checks, fingerprint screenings, reviews, and interviews. It is a difficult process to resettle in the U.S. as a refugee.
U.S. Resettlement
Since March 2011, the top three resettlement states for Syrian refugees are California, Michigan, and Texas. In California, Syrian refugees have mainly resettled in San Diego and Sacramento. In Texas, resettlement is more distributed across the major cities of Houston, Austin, and Fort Worth.
Michigan is a unique state for Syrian refugee resettlement because of its demographics. The city of Dearborn, 9 miles from Detroit, is home to one of the largest Arab populations in the U.S. According to the 2000 Census, 29.85 per cent of the city is Arab, or 29,181 people of the city’s 97,775 total population. The metro-Detroit cities, like Sterling Heights, Warren, and Livonia, all have Arab populations of 100,000 or more. As Arab people migrate to Michigan and build communities, the network ties increase and social capital accumulates making the area receptive for resettlement for other Arab migrants, like Syrian refugees.
Step3: 4. Resettlement Facts
1. THE U.S. HAS RESETTLED 20,525 SYRIAN REFUGEES SINCE MARCH 2011
The largest number of Syrian refugees resettled in a single year was 15,479 in 2016, after the Obama administration's commitment to resettle 10,000 Syrian refugees. In the first five years of the Syrian civil war, between 2011 and 2015, only 2,547 Syrian refugees resettled in the U.S.
2. SYRIAN REFUGEES ACCOUNT FOR ONLY 4.81% OF ALL RESETTLED REFUGEES FROM MARCH 2011 – APRIL 2017
The five highest refugee sending nations include Burma (20.95% or 89,300), Iraq (19.62% or 83,635), Bhutan (13.03% or 55,568), Somalia (10.95% or 46,701), and the Democratic Republic of Congo (9.61% or 40,998).
Step4: 3. THE MAJORITY OF SYRIAN REFUGEES SETTLE IN URBAN AREAS
San Diego, Chicago, and Troy have resettled the most Syrian refugees -- almost 13%. No Syrian refugees have resettled in Alabama, Alaska, Hawaii, Mississippi, and Wyoming. Montana, North Dakota, South Dakota, Vermont, and West Virginia have resettled less than ten Syrian refugees each. The majority of Syrian refugees resettled in Midwestern states (27.95%).
4. REFUGEES IMPACT COMMUNITIES
Refugees help create jobs, raise wages and boost the economy overall – paying back the upfront costs associated with resettlement. Refugees revive small towns and cities from Vermont and New York, to Pennsylvania and Michigan by becoming active members in their communities. | Python Code:
import plotly
plotly.tools.set_credentials_file(username='ionutgitan', api_key='d0QXm30QhDEcnGMQcE5c')
import plotly.plotly as py
import pandas as pd
df = pd.read_csv('https://ionutgitan.com/s/Gitan_Data.csv')
df.head()
df['text'] = df['name'] + '<br>Syrian Refugees ' + (df['pop']).astype(str)
limits = [(0,5),(6,15),(16,50),(51,135),(136,300)]
colors = ["rgb(0,145,115)","rgb(133,20,75)","rgb(24,113,141)","rgb(244,150,6)","rgb(240,86,83)"]
cities = []
scale = 600
for i in range(len(limits)):
lim = limits[i]
df_sub = df[lim[0]:lim[1]]
city = dict(
type = 'scattergeo',
locationmode = 'USA-states',
lon = df_sub['lon'],
lat = df_sub['lat'],
text = df_sub['text'],
marker = dict(
size = df_sub['pop'],
color = colors[i],
line = dict(width=0.5, color='rgb(40,40,40)'),
sizemode = 'area'
),
name = '{0} - {1}'.format(lim[0],lim[1]) )
cities.append(city)
layout = dict(
title = 'Syrian Refugee Arrivals by City, March 2011 - April 2017',
showlegend = True,
geo = dict(
scope='usa',
projection=dict( type='albers usa' ),
showland = True,
landcolor = 'rgb(217, 217, 217)',
subunitwidth=1,
countrywidth=1,
subunitcolor="rgb(255, 255, 255)",
countrycolor="rgb(255, 255, 255)"
),
)
fig = dict( data=cities, layout=layout )
py.iplot( fig, validate=False, filename='d3-bubble-map-populations' )
Explanation: Syrian Refugee Resettlement, March 2011 - April 2017
Ionut Gitan • Data Bootcamp • Balint Szoke • 5 May 2017
Project Summary
Introduction. The project presents a historical overview of Syrian refugee admission to the United States from the start of the Syrian Civil War in March 2011 to April 2017. It investigates where refugees settle in the United States by state and city, and the top receiving states and cities by population share. The project explores the questions: Where in the United States do Syrian refugees resettle? How many Syrian refugees have settled? What are the top receiving states and their share of refugees? What factors influence Syrian refugee resettlement?
The project uses data from the Department of State Bureau of Population, Refugees, and Migration, Office of Admissions - Refugee Processing Center reporting website. A report was run of refugee arrivals from Syria by destination from March 1, 2011 to April 30, 2017.
Explainer - Syrian Refugee Resettlement
About 3 million refugees have resettled in the U.S. since Congress passed the Refugee Act of 1980, according to the Pew Research Center. According to this project's research using the Department of State Bureau of Population, Refugees, and Migration's data, the U.S. has resettled 20,525 Syrian refugees since the outbreak of the Syrian Civil War in March 2011.
This number is very low when compared to European countries, like Germany that has accepted 1 million Syrian refugees, or 55 times more than the U.S. It is even lower when compared to countries neighboring Syria, like Turkey, Lebanon, and Jordan that have accepted nearly 5 million Syrian refugees according to the UN refugee agency.
End of explanation
from IPython.display import Image
from IPython.core.display import HTML
Image(url= "https://static1.squarespace.com/static/5766abc259cc682f752f1425/t/590ca6793a04111a3cbe1b53/1494001277169/?format=750w")
Explanation: Crisis Background
The Syrian refugee crisis is an outgrowth of the Syrian civil war that began in 2011. The multi-sided war between President Bashar al-Assad's regime and rebel forces is complicated by the fracturing of militaries into militias, an emerging ethnic Kurdish federal state, intervening foreign powers, and the Islamic State emboldening international terror plots and crimes against humanity.
End of explanation
import plotly.plotly as py
import pandas as pd
df = pd.read_csv('https://ionutgitan.com/s/Gitan_Data_State.csv')
for col in df.columns:
df[col] = df[col].astype(str)
scl = [[0.0, 'rgb(242,240,247)'],[0.2, 'rgb(191,174,211)'],[0.4, 'rgb(165,142,193)'],\
[0.6, 'rgb(140,109,176)'],[0.8, 'rgb(114,76,158)'],[1.0, 'rgb(89,43,140)']]
df['text'] = df['state'] + '<br>' +\
'Total Refugees '+df['total refugees']
data = [ dict(
type='choropleth',
colorscale = scl,
autocolorscale = False,
locations = df['code'],
z = df['total refugees'].astype(float),
locationmode = 'USA-states',
text = df['text'],
marker = dict(
line = dict (
color = 'rgb(255,255,255)',
width = 2
) ),
colorbar = dict(
title = "Refugee Population")
) ]
layout = dict(
title = 'Syrian Refugee Arrivals by State, March 2011 - April 2017',
geo = dict(
scope='usa',
projection=dict( type='albers usa' ),
showlakes = True,
lakecolor = 'rgb(255, 255, 255)'),
)
fig = dict( data=data, layout=layout )
py.iplot( fig, filename='d3-cloropleth-map' )
Explanation: The Syrian refugee crisis has been called one of the greatest humanitarian crises since World War II. Half of the population – 11 million Syrians – have been killed or forced to flee. More than 200,000 Syrians have died. 6.3 million Syrians have been internally displaced, and 5 million Syrians have fled – the combined populations of California, New York and Florida. 4 out of 5 Syrians live in poverty and life expectancy has dropped by 20 years. School attendance has dropped over 50 percent, with more than 2 million children now out of school.
In addition to resettlement, the international community has responded to the crisis by providing aid. Syrian refugees have received record aid, including 6 billion USD in 2016 at the London Syria Conference – the highest raised in a single day for a single cause by the United Nations High Commissioner for Refugees (UNHCR). Despite fundraising efforts, the UNHCR requires nearly 2 billion USD in 2017 for the Middle East and North Africa. The US has contributed 6.5 billion USD in aid since the beginning of the crisis, including 3.3 billion USD for aid inside Syria.
U.S. Resettlement Factors
Many factors limit refugee resettlement to the U.S., including wavering political commitment from the White House between the Obama and Trump administrations. The Obama administration committed to resettle 10,000 refugees in September 2015 -- a goal it surpassed. On the other hand, the Trump administration signed two executive orders to ban Syrian refugees -- both orders were blocked by federal courts.
But the U.S. refugee resettlement process is also a major factor. Unlike Europe, the U.S. admits refugees into the country only after processing. Europe processes refugees once they arrive on European soil. In the U.S., refugees undergo an ‘extreme’ vetting process that can take more than two years. They go through multiple agency background checks, fingerprint screenings, reviews, and interviews. It is a difficult process to resettle in the U.S. as a refugee.
U.S. Resettlement
Since March 2011, the top three resettlement states for Syrian refugees are California, Michigan, and Texas. In California, Syrian refugees have mainly resettled in San Diego and Sacramento. In Texas, resettlement is more distributed across the major cities of Houston, Austin, and Fort Worth.
Michigan is a unique state for Syrian refugee resettlement because of its demographics. The city of Dearborn, 9 miles from Detroit, is home to one of the largest Arab populations in the U.S. According to the 2000 Census, 29.85 per cent of the city is Arab, or 29,181 people of the city’s 97,775 total population. The metro-Detroit cities, like Sterling Heights, Warren, and Livonia, all have Arab populations of 100,000 or more. As Arab people migrate to Michigan and build communities, the network ties increase and social capital accumulates making the area receptive for resettlement for other Arab migrants, like Syrian refugees.
End of explanation
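As a rough sketch (assuming the same state-level CSV loaded above, with columns 'state' and 'total refugees'), the per-state shares quoted in the facts below could be computed like this:
import pandas as pd
# Hypothetical helper: compute each state's share of all Syrian refugee arrivals
df_share = pd.read_csv('https://ionutgitan.com/s/Gitan_Data_State.csv')
df_share['total refugees'] = df_share['total refugees'].astype(float)
df_share['share_pct'] = 100 * df_share['total refugees'] / df_share['total refugees'].sum()
print(df_share.sort_values('share_pct', ascending=False)[['state', 'share_pct']].head(10))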
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Bar(
x=['2011', '2012', '2013', '2014', '2015', '2016', '2017'],
y=[40904, 66251, 59817, 70628, 64325, 81395, 14244],
name='Other Refugees',
marker=dict(
color='rgb(89,43,140)'
)
)
trace1 = go.Bar(
x=['2011', '2012', '2013', '2014', '2015', '2016', '2017'],
y=[20, 41, 45, 249, 2192, 15479, 2499],
name='Syrian Refugees',
marker=dict(
color='rgb(163,166,168)',
)
)
data = [trace0, trace1]
layout = go.Layout(
title='Refugee Arrivals, March 2011 - April 2017',
xaxis=dict(tickangle=-45),
barmode='group',
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='angled-text-bar')
Explanation: 4. Resettlement Facts
1. THE U.S. HAS RESETTLED 20,525 SYRIAN REFUGEES SINCE MARCH 2011
The largest number of Syrian refugees resettled in a single year was 15,479 in 2016, after the Obama administration's commitment to resettle 10,000 Syrian refugees. In the first five years of the Syrian civil war, between 2011 and 2015, only 2,547 Syrian refugees resettled in the U.S.
2. SYRIAN REFUGEES ACCOUNT FOR ONLY 4.81% OF ALL RESETTLED REFUGEES FROM MARCH 2011 – APRIL 2017
The five highest refugee sending nations include Burma (20.95% or 89,300), Iraq (19.62% or 83,635), Bhutan (13.03% or 55,568), Somalia (10.95% or 46,701), and the Democratic Republic of Congo (9.61% or 40,998).
End of explanation
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Bar(
x=['California', 'Michigan', 'Texas', 'Pennsylvania', 'Arizona', 'Illinois', 'Florida', 'New York', 'Ohio', 'North Carolina'],
y=[2260, 2168, 1547, 1267, 1252, 1203, 1112, 1042, 906, 895],
text=['11.01% Share', '10.56% Share', '7.53% Share', '6.17% Share', '6.09% Share', '5.86% Share', '5.41% Share', '5.07% Share', '4.41% Share', '4.36% Share'],
marker=dict(
color='rgb(89,43,140)',
line=dict(
color='rgb(89,43,140)',
width=1.5,
)
),
opacity=0.8
)
data = [trace0]
layout = go.Layout(
title='Syrian Refugee Arrivals by State, March 2011 - April 2017'
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='text-hover-bar')
Explanation: 3. THE MAJORITY OF SYRIAN REFUGEES SETTLE IN URBAN AREAS
San Diego, Chicago, and Troy have resettled the most Syrian refugees -- almost 13%. No Syrian refugees have resettled in Alabama, Alaska, Hawaii, Mississippi, and Wyoming. Montana, North Dakota, South Dakota, Vermont, and West Virginia have resettled less than ten Syrian refugees each. The majority of Syrian refugees resettled in Midwestern states (27.95%).
4. REFUGEES IMPACT COMMUNITIES
Refugees help create jobs, raise wages and boost the economy overall – paying back the upfront costs associated with resettlement. Refugees revive small towns and cities from Vermont and New York, to Pennsylvania and Michigan by becoming active members in their communities.
End of explanation |
5,879 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
words = set(text)
vocab_to_int = {}
int_to_vocab = {}
for i, word in enumerate(words):
vocab_to_int[word] = i
int_to_vocab[i] = word
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
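A quick, hypothetical sanity check of the lookup tables (not part of the original project tests): ids and words should round-trip.
sample_words = ['moe_szyslak', 'homer_simpson', 'moe_szyslak']
v2i, i2v = create_lookup_tables(sample_words)
assert all(i2v[v2i[w]] == w for w in sample_words)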
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
return {
'.': '%%period%%',
',': '%%comma%%',
'"': '%%quote%%',
';': '%%semicolon%%',
'!': '%%exclamation%%',
'?': '%%questionmark%%',
'(': '%%leftparen%%',
')': '%%rightparen%%',
'--': '%%dash%%',
'\n': '%%newline%%'
}
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused with a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
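For intuition, this is roughly how the lookup is applied during preprocessing (a simplified sketch, not the actual helper code):
token_dict = token_lookup()
sample = 'Moe! Give me a beer, now.'
for key, token in token_dict.items():
    sample = sample.replace(key, ' {} '.format(token))
print(sample.lower().split())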
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
return tf.placeholder(tf.int32, shape=[None, None], name='input'), tf.placeholder(tf.int32, shape=[None, None], name='target'), tf.placeholder(tf.float32, name='learning_rate')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
cell = tf.contrib.rnn.MultiRNNCell([lstm])
initial_state = tf.identity(cell.zero_state(batch_size, tf.float32), name="initial_state")
return cell, initial_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
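If more than one LSTM layer were wanted, the same construction could be extended as in this hypothetical variant (num_layers is not a parameter used elsewhere in this notebook):
def get_init_cell_stacked(batch_size, rnn_size, num_layers=2):
    # Hypothetical multi-layer variant of get_init_cell above
    cell = tf.contrib.rnn.MultiRNNCell(
        [tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(num_layers)])
    initial_state = tf.identity(cell.zero_state(batch_size, tf.float32), name="initial_state")
    return cell, initial_state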
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
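An equivalent shortcut exists in tf.contrib.layers; the sketch below assumes TensorFlow 1.x and is an optional alternative to the manual tf.Variable approach above, not a required change.
def get_embed_alt(input_data, vocab_size, embed_dim):
    # Hypothetical alternative implementation of get_embed
    return tf.contrib.layers.embed_sequence(input_data, vocab_size, embed_dim)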
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name="final_state")
return outputs, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
embed = get_embed(input_data, vocab_size, embed_dim)
rnn, final_state = build_rnn(cell, embed)
logits = tf.contrib.layers.fully_connected(rnn, vocab_size, activation_fn=None)
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
n_batches = int(len(int_text) / (batch_size * seq_length))
# Drop the last few characters to make only full batches
xdata = np.array(int_text[: n_batches * batch_size * seq_length])
ydata = np.array(int_text[1: n_batches * batch_size * seq_length + 1])
ydata[-1] = int_text[0]
x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, 1)
y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, 1)
return np.array(list(zip(x_batches, y_batches)))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
End of explanation
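A quick check against the worked example above (hypothetical, in addition to the provided unit test):
demo = get_batches(list(range(1, 21)), 3, 2)
print(demo.shape)    # expected: (3, 2, 3, 2)
print(demo[0][0])    # expected: [[ 1  2], [ 7  8], [13 14]]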
# Number of Epochs
num_epochs = 100
# Batch Size
batch_size = 1024
# RNN Size
rnn_size = 512
# Embedding Dimension Size
embed_dim = 256
# Sequence Length
seq_length = 10
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 1
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
return loaded_graph.get_tensor_by_name('input:0'), loaded_graph.get_tensor_by_name('initial_state:0'), loaded_graph.get_tensor_by_name('final_state:0'), loaded_graph.get_tensor_by_name('probs:0')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
return int_to_vocab[np.argmax(probabilities)]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
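Picking the argmax every time tends to loop on the most frequent words; a common alternative (hypothetical here, not required by the project) is to sample from the distribution instead:
def pick_word_sampled(probabilities, int_to_vocab):
    # Hypothetical variant: sample the next word instead of taking the argmax
    p = np.squeeze(probabilities)
    idx = np.random.choice(len(p), p=p / p.sum())
    return int_to_vocab[idx]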
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'homer_simpson'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
5,880 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Important
This notebook is to be run inside Jupyter. If you see In [ ]
Step2: Step 2
Step3: Step 3 | Python Code:
!unzip codes/cloudlab/emulab-0.9.zip -d codes/cloudlab
!cd codes/cloudlab/emulab-geni-lib-1baf79cf12cb/;\
source activate python2;\
python setup.py install --user
!ls /home/lngo/.local/lib/python2.7/site-packages/
!rm -Rf codes/cloudlab/emulab-geni-lib-1baf79cf12cb/
Explanation: Important
This notebook is to be run inside Jupyter. If you see In [ ]: to the left of a cell, it means that this is an executable Jupyter cell.
To run a Jupyter cell, one of the following can be done:
- Press the Run button in the tool bar
- Hit Shift-Enter
- Hit Ctrl-Enter
In an executable Jupyter cell, the ! denotes a Linux command (or a sequence of commands) that will be sent to execute in the CentOS VM. All Linux commands in shell will assume a starting directory that is the current directory of the notebook.
In an executable Jupyter cell, the %% at the first line of the cell denotes a cell magic (a single configuration option that directs how the cell is executed). %%writefile is a cell magic that instructs Jupyter not to execute the remainder of the cell, but to save it to a file whose path is specified after the cell magic.
Step 1. Set up emulab geni-lib package for CloudLab
Open a new terminal and run the following command:
$ sudo yum install -y unzip
$ conda create -n python2 python=2.7
$ source activate python2
$ conda install ipykernel
$ python -m ipykernel install --name python2 --user
$ conda install lxml
Restart your Jupyter Server
Reopen this notebook, go to Kernel, and change Kernel to Python 2
End of explanation
%%writefile codes/cloudlab/xenvm.py
"""
An example of constructing a profile with a single Xen VM from CloudLab.
Instructions:
Wait for the profile instance to start, and then log in to the VM via the
ssh port specified below. (Note that in this case, you will need to access
the VM through a high port on the physical host, since we have not requested
a public IP address for the VM itself.)
"""
import geni.portal as portal
import geni.rspec.pg as rspec
# Create a Request object to start building the RSpec.
request = portal.context.makeRequestRSpec()
# Create a XenVM
node = request.XenVM("node")
node.disk_image = "urn:publicid:IDN+emulab.net+image+emulab-ops:CENTOS7-64-STD"
node.routable_control_ip = "true"
node.addService(rspec.Execute(shell="/bin/sh",
command="sudo yum update"))
node.addService(rspec.Execute(shell="/bin/sh",
command="sudo yum install -y httpd"))
node.addService(rspec.Execute(shell="/bin/sh",
command="sudo systemctl restart httpd.service"))
# Print the RSpec to the enclosing page.
portal.context.printRequestRSpec()
Explanation: Step 2: Reload geni-lib for the first time
On the top bar of this notebook, select Kernel and then Restart
End of explanation
!source activate python2;\
python codes/cloudlab/xenvm.py
Explanation: Step 3: Test emulab geni-lib installation
Executing the cell below should produce an XML element with the following content:
<rspec xmlns:client="http://www.protogeni.net/resources/rspec/ext/client/1" xmlns:emulab="http://www.protogeni.net/resources/rspec/ext/emulab/1" xmlns:jacks="http://www.protogeni.net/resources/rspec/ext/jacks/1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.geni.net/resources/rspec/3" xsi:schemaLocation="http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/3/request.xsd" type="request">
<rspec_tour xmlns="http://www.protogeni.net/resources/rspec/ext/apt-tour/1">
<description type="markdown">An example of constructing a profile with a single Xen VM.</description>
<instructions type="markdown">Wait for the profile instance to start, and then log in to the VM via the
ssh port specified below. (Note that in this case, you will need to access
the VM through a high port on the physical host, since we have not requested
a public IP address for the VM itself.)
</instructions>
</rspec_tour>
<node client_id="node" exclusive="false">
<sliver_type name="emulab-xen"/>
</node>
</rspec>
End of explanation |
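Beyond eyeballing the XML above, the printed RSpec can also be checked programmatically with lxml, which Step 1 installed into the python2 environment. This is only a sketch and assumes the output of Step 3 has been saved to a file named xenvm_rspec.xml:
from lxml import etree

tree = etree.parse('xenvm_rspec.xml')
ns = '{http://www.geni.net/resources/rspec/3}'
# list the node elements defined in the request
for node in tree.getroot().iter(ns + 'node'):
    print(node.get('client_id'))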
5,881 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
One more instruction for moving
QUESTIONS
When the list pos contains 6 angles in degrees, what does the following set of instructions do?
The following set of instructions moves each motor in the poppy.motors list to the corresponding position in the pos list in 0.5 seconds and waits for the movement to finish before moving on to the next instruction.
What is the difference with m.goal_position = 30, for example?
Here, we have the option of waiting for the movement to finish before moving on to the next one. The movement is not performed at the speed m.moving_speed.
Moreover, the resulting movements are smoother.
Step1: A few remarks | Python Code:
pos = [-20, -20, 40, -30, 40, 20]
i = 0
for m in poppy.motors:
m.compliant = False
m.goto_position(pos[i], 0.5, wait = True)
i = i + 1
# import the required tools
import cv2
%matplotlib inline
import matplotlib.pyplot as plt
from hampy import detect_markers
# display the captured image
img = poppy.camera.frame
plt.imshow(img)
# collect the markers found in the image into a list
markers = detect_markers(img)
valeur = 0
for m in markers:
print('Found marker {} at {}'.format(m.id, m.center))
m.draw_contour(img)
valeur = m.id
print(valeur)
markers
Explanation: One more instruction for moving
QUESTIONS
When the list pos contains 6 angles in degrees, what does the following set of instructions do?
The following set of instructions moves each motor in the poppy.motors list to the corresponding position in the pos list in 0.5 seconds and waits for the movement to finish before moving on to the next instruction.
What is the difference with m.goal_position = 30, for example?
Here, we have the option of waiting for the movement to finish before moving on to the next one. The movement is not performed at the speed m.moving_speed.
Moreover, the resulting movements are smoother.
End of explanation
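To make the difference concrete, here is a small sketch (assuming a connected robot exposed as poppy, as in the cells above) that moves one motor both ways:
m = poppy.motors[0]
# goal_position returns immediately; the motor moves at m.moving_speed in the background
m.goal_position = 30
# goto_position interpolates the move over 1 second and, with wait=True, blocks until it finishes
m.goto_position(-30, 1, wait=True)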
import time
RIGH = 82737172
LEFT = 76697084
NEXT = 78698884
PREV = 80826986
# the liste_moteur variable means the name of the robot container
# only needs to be changed in one place,
# e.g. if it was not instantiated as poppy
liste_moteur = [m for m in poppy.motors]
num_moteur = 0
# turn off all the motor LEDs
for i in range (0,6):
liste_moteur[i].led = 'pink'
# as long as the last motor has not been reached
while num_moteur < 6:
# capture an image and detect whether it contains a marker
img = poppy.camera.frame
markers = detect_markers(img)
valeur = 0
for m in markers:
print('Found marker {} at {}'.format(m.id, m.center))
m.draw_contour(img)
valeur = m.id
print(valeur)
# set the current motor's LED to red
liste_moteur[num_moteur].led = 'red'
# perform the action corresponding to the detected marker
if valeur == RIGH:
liste_moteur[num_moteur].led = 'green'
liste_moteur[num_moteur].goto_position(
liste_moteur[num_moteur].present_position - 5,
0.5,
wait = True)
liste_moteur[num_moteur].led = 'pink'
valeur = 0
if valeur == PREV:
if num_moteur != 0:
liste_moteur[num_moteur].led = 'pink'
num_moteur = num_moteur - 1
liste_moteur[num_moteur].led = 'red'
time.sleep(2.0)
valeur = 0
if valeur == LEFT:
liste_moteur[num_moteur].led = 'green'
liste_moteur[num_moteur].goto_position(
liste_moteur[num_moteur].present_position + 5,
0.5,
wait = True)
liste_moteur[num_moteur].led = 'pink'
valeur = 0
if valeur == NEXT:
if num_moteur != 6:
liste_moteur[num_moteur].led = 'pink'
num_moteur = num_moteur + 1
if num_moteur != 6:
liste_moteur[num_moteur].led = 'red'
time.sleep(2.0)
valeur = 0
Explanation: A few remarks:
markers is a list; it contains the identifiers of the markers that were found and the position of their centres.
several markers can be found in a single captured image.
m is an iterator that walks through the list of markers here.
the instruction m.draw_contour(img) draws the outlines of the markers onto the image img.
End of explanation |
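As a small follow-up (not part of the original notebook), the detected ids can be used directly, for example to test whether one specific marker was seen in the frame; the target id below simply reuses the RIGH constant defined above:
TARGET_ID = 82737172
seen_ids = [m.id for m in markers]
if TARGET_ID in seen_ids:
    print('target marker detected')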
5,882 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
XDAWN Decoding From EEG data
ERP decoding with Xdawn
Step1: Set parameters and read data
Step2: The patterns_ attribute of a fitted Xdawn instance (here from the last
cross-validation fold) can be used for visualization. | Python Code:
# Authors: Alexandre Barachant <[email protected]>
#
# License: BSD-3-Clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.preprocessing import MinMaxScaler
from mne import io, pick_types, read_events, Epochs, EvokedArray, create_info
from mne.datasets import sample
from mne.preprocessing import Xdawn
from mne.decoding import Vectorizer
print(__doc__)
data_path = sample.data_path()
Explanation: XDAWN Decoding From EEG data
ERP decoding with Xdawn :footcite:RivetEtAl2009,RivetEtAl2011. For each event
type, a set of spatial Xdawn filters is trained and applied to the signal.
Channels are concatenated and rescaled to create feature vectors that will be
fed into a logistic regression.
End of explanation
meg_path = data_path / 'MEG' / 'sample'
raw_fname = meg_path / 'sample_audvis_filt-0-40_raw.fif'
event_fname = meg_path / 'sample_audvis_filt-0-40_raw-eve.fif'
tmin, tmax = -0.1, 0.3
event_id = {'Auditory/Left': 1, 'Auditory/Right': 2,
'Visual/Left': 3, 'Visual/Right': 4}
n_filter = 3
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 20, fir_design='firwin')
events = read_events(event_fname)
picks = pick_types(raw.info, meg=False, eeg=True, stim=False, eog=False,
exclude='bads')
epochs = Epochs(raw, events, event_id, tmin, tmax, proj=False,
picks=picks, baseline=None, preload=True,
verbose=False)
# Create classification pipeline
clf = make_pipeline(Xdawn(n_components=n_filter),
Vectorizer(),
MinMaxScaler(),
LogisticRegression(penalty='l1', solver='liblinear',
multi_class='auto'))
# Get the labels
labels = epochs.events[:, -1]
# Cross validator
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
# Do cross-validation
preds = np.empty(len(labels))
for train, test in cv.split(epochs, labels):
clf.fit(epochs[train], labels[train])
preds[test] = clf.predict(epochs[test])
# Classification report
target_names = ['aud_l', 'aud_r', 'vis_l', 'vis_r']
report = classification_report(labels, preds, target_names=target_names)
print(report)
# Normalized confusion matrix
cm = confusion_matrix(labels, preds)
cm_normalized = cm.astype(float) / cm.sum(axis=1)[:, np.newaxis]
# Plot confusion matrix
fig, ax = plt.subplots(1)
im = ax.imshow(cm_normalized, interpolation='nearest', cmap=plt.cm.Blues)
ax.set(title='Normalized Confusion matrix')
fig.colorbar(im)
tick_marks = np.arange(len(target_names))
plt.xticks(tick_marks, target_names, rotation=45)
plt.yticks(tick_marks, target_names)
fig.tight_layout()
ax.set(ylabel='True label', xlabel='Predicted label')
Explanation: Set parameters and read data
End of explanation
fig, axes = plt.subplots(nrows=len(event_id), ncols=n_filter,
figsize=(n_filter, len(event_id) * 2))
fitted_xdawn = clf.steps[0][1]
info = create_info(epochs.ch_names, 1, epochs.get_channel_types())
info.set_montage(epochs.get_montage())
for ii, cur_class in enumerate(sorted(event_id)):
cur_patterns = fitted_xdawn.patterns_[cur_class]
pattern_evoked = EvokedArray(cur_patterns[:n_filter].T, info, tmin=0)
pattern_evoked.plot_topomap(
times=np.arange(n_filter),
time_format='Component %d' if ii == 0 else '', colorbar=False,
show_names=False, axes=axes[ii], show=False)
axes[ii, 0].set(ylabel=cur_class)
fig.tight_layout(h_pad=1.0, w_pad=1.0, pad=0.1)
Explanation: The patterns_ attribute of a fitted Xdawn instance (here from the last
cross-validation fold) can be used for visualization.
End of explanation |
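The fitted Xdawn instance also exposes a filters_ attribute (the spatial filters, as opposed to the forward-model patterns plotted above). A sketch of visualising them, reusing cur_class, info and n_filter from inside the same loop, would only change a couple of lines:
cur_filters = fitted_xdawn.filters_[cur_class]
filter_evoked = EvokedArray(cur_filters[:n_filter].T, info, tmin=0)
filter_evoked.plot_topomap(times=np.arange(n_filter), colorbar=False,
                           show_names=False, show=False)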
5,883 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Is it possible to perform circular cross-/auto-correlation on 1D arrays with a numpy/scipy/matplotlib function? I have looked at numpy.correlate() and matplotlib.pyplot.xcorr (based on the numpy function), and both seem to not be able to do circular cross-correlation. | Problem:
import numpy as np
a = np.array([1,2,3,4])
b = np.array([5, 4, 3, 2])
result = np.correlate(a, np.hstack((b[1:], b)), mode='valid') |
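For reference, circular cross-correlation can also be computed through the DFT, which is a convenient cross-check of the np.correlate trick; for this example the values match up to a rotation of the lag axis (result == np.roll(fft_corr, -1)):
import numpy as np

a = np.array([1, 2, 3, 4])
b = np.array([5, 4, 3, 2])
fft_corr = np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)).real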
5,884 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First, load up the data
First you're going to want to create a data frame from the dailybots.csv file which can be found in the data directory. You should be able to do this with the pd.read_csv() function. Take a minute to look at the dataframe because we are going to be using it for this entire worksheet.
Step1: Exercise 1
Step2: Exercise 2
Step3: Exercise 3
Step4: Exercise 4
Step5: Exercise 5 | Python Code:
data = pd.read_csv( '../../data/dailybots.csv' )
#Look at a summary of the data
data.describe()
data['botfam'].value_counts()
Explanation: First, load up the data
First you're going to want to create a data frame from the dailybots.csv file which can be found in the data directory. You should be able to do this with the pd.read_csv() function. Take a minute to look at the dataframe because we are going to be using it for this entire worksheet.
End of explanation
grouped_df = data[data.botfam == "Ramnit"].groupby(['industry'])
grouped_df.sum()
Explanation: Exercise 1: Which industry sees the most Ramnit infections? Least?
Count the number of infected days for "Ramnit" in each industry.
How:
1. First filter the data to remove all the infections we don't care about
2. Aggregate the data on the column of interest. HINT: You might want to use the groupby() function
3. Add up the results
End of explanation
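To actually read off the most and least affected industries, the grouped sums can be sorted; this sketch assumes the hosts column holds the infection counts of interest:
ramnit_by_industry = grouped_df.sum().sort_values('hosts', ascending=False)
ramnit_by_industry.head(1)  # most affected industry
ramnit_by_industry.tail(1)  # least affected industry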
group2 = data[['botfam','orgs']].groupby( ['botfam'])
summary = group2.agg([np.min, np.max, np.mean, np.median, np.std])
summary.sort_values( [('orgs', 'median')], ascending=False)
Explanation: Exercise 2: Calculate the min, max, median and mean infected orgs for each bot family, sort by median
In this exercise, you are asked to calculate the min, max, median and mean of infected orgs for each bot family sorted by median. HINT:
1. Using the groupby() function, create a grouped data frame
2. You can do this one metric at a time OR you can use the .agg() function. You might want to refer to the documentation here: http://pandas.pydata.org/pandas-docs/stable/groupby.html#applying-multiple-functions-at-once
3. Sort the values (HINT HINT) by the median column
End of explanation
df3 = data[['date','hosts']].groupby('date').agg(['count'])
df3.sort_values(by=[('hosts', 'count')], ascending=False).head(10)
Explanation: Exercise 3: Which date had the total most bot infections and how many infections on that day?
In this exercise you are asked to aggregate and sum the number of infections (hosts) by date. Once you've done that, the next step is to sort in descending order.
End of explanation
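Note that the cell above counts rows per date; since the exercise asks for the total number of infections, a sum-based variant over the same hosts column would look like this (a sketch, not the worksheet's reference answer):
daily_total = data[['date', 'hosts']].groupby('date').sum()
daily_total.sort_values(by='hosts', ascending=False).head(1)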
filteredData = data[ data['botfam'].isin(['Necurs', 'Ramnit', 'PushDo']) ][['date', 'botfam', 'hosts']]
groupedFilteredData = filteredData.groupby( ['date', 'botfam']).sum()
groupedFilteredData.unstack(level=1).plot(kind='line', subplots=False)
Explanation: Exercise 4: Plot the daily infected hosts for Necurs, Ramnit and PushDo
In this exercise you're going to plot the daily infected hosts for three infection types. In order to do this, you'll need to do the following steps:
1. Filter the data to remove the botfamilies we don't care about.
2. Use groupby() to aggregate the data by date and family, then sum up the hosts in each group
3. Plot the data. Hint: You might want to use the unstack() function to prepare the data for plotting.
End of explanation
data.date = data.date = pd.to_datetime( data.date )
data['day'] = data.date.dt.weekday
data[['hosts', 'day']].boxplot( by='day')
grouped = data.groupby('day')
grouped.boxplot('hosts')
Explanation: Exercise 5: What is the distribution of infected hosts for each day-of-week across all bot families?
Hint: try a box plot and/or violin plot. In order to do this, there are two steps:
1. First create a day column where the day of the week is represented as an integer. You'll need to convert the date column to an actual date/time object. See here: http://pandas.pydata.org/pandas-docs/stable/timeseries.html
2. Next, use the .boxplot() method to plot the data. This has grouping built in, so you don't have to group by first.
End of explanation |
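The hint also mentions violin plots; pandas has no built-in violin plot, but if seaborn is available (an extra dependency, not used elsewhere in this worksheet), a sketch would be:
import seaborn as sns

sns.violinplot(x='day', y='hosts', data=data)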
5,885 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>
Challenge Notebook
Problem
Step1: Unit Test
The following unit test is expected to fail until you solve the challenge. | Python Code:
%run ../linked_list/linked_list.py
%load ../linked_list/linked_list.py
class MyLinkedList(LinkedList):
def kth_to_last_elem(self, k):
# TODO: Implement me
pass
Explanation: <small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>
Challenge Notebook
Problem: Find the kth to last element of a linked list
Constraints
Test Cases
Algorithm
Code
Unit Test
Solution Notebook
Constraints
Can we assume k is a valid integer?
Yes
If k = 0, does this return the last element?
Yes
What happens if k is greater than or equal to the length of the linked list?
Return None
Can you use additional data structures?
No
Can we assume we already have a linked list class that can be used for this problem?
Yes
Test Cases
Empty list -> None
k is >= the length of the linked list -> None
One element, k = 0 -> element
General case with many elements, k < length of linked list
Algorithm
Refer to the Solution Notebook. If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.
Code
End of explanation
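Before running the test, here is one possible way to fill in the stub above: a sketch of the classic two-runner approach, which assumes the Node objects from the loaded linked_list module expose next and data (the reference implementation lives in the Solution Notebook):
class MyLinkedList(LinkedList):

    def kth_to_last_elem(self, k):
        if self.head is None:
            return None
        fast = self.head
        slow = self.head
        # advance the fast runner k nodes ahead
        for _ in range(k):
            fast = fast.next
            if fast is None:
                return None
        # move both runners until fast reaches the last node
        while fast.next is not None:
            fast = fast.next
            slow = slow.next
        return slow.data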
# %load test_kth_to_last_elem.py
from nose.tools import assert_equal
class Test(object):
def test_kth_to_last_elem(self):
print('Test: Empty list')
linked_list = MyLinkedList(None)
assert_equal(linked_list.kth_to_last_elem(0), None)
print('Test: k >= len(list)')
assert_equal(linked_list.kth_to_last_elem(100), None)
print('Test: One element, k = 0')
head = Node(2)
linked_list = MyLinkedList(head)
assert_equal(linked_list.kth_to_last_elem(0), 2)
print('Test: General case')
linked_list.insert_to_front(1)
linked_list.insert_to_front(3)
linked_list.insert_to_front(5)
linked_list.insert_to_front(7)
assert_equal(linked_list.kth_to_last_elem(2), 3)
print('Success: test_kth_to_last_elem')
def main():
test = Test()
test.test_kth_to_last_elem()
if __name__ == '__main__':
main()
Explanation: Unit Test
The following unit test is expected to fail until you solve the challenge.
End of explanation |
5,886 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Keys for each of the columns in the orbit (Keplerian state) report.
Step1: Plot the orbital parameters which vary significantly between different tracking files. | Python Code:
utc = 0
sma = 1
ecc = 2
inc = 3
raan = 4
aop = 5
ma = 6
ta = 7
Explanation: Keys for each of the columns in the orbit (Keplerian state) report.
End of explanation
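As a quick illustration of how these column indices are used further down (a sketch; orbit stands for the 2-D array returned by the notebook's orbit loader, one row per epoch):
peri_radius = orbit[:, sma] * (1 - orbit[:, ecc])  # periapsis radius in km
apo_radius = orbit[:, sma] * (1 + orbit[:, ecc])   # apoapsis radius in km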
#fig1 = plt.figure(figsize = [15,8], facecolor='w')
fig_peri = plt.figure(figsize = [15,8], facecolor='w')
fig_peri_deorbit = plt.figure(figsize = [15,8], facecolor='w')
fig_apo = plt.figure(figsize = [15,8], facecolor='w')
fig3 = plt.figure(figsize = [15,8], facecolor='w')
fig4 = plt.figure(figsize = [15,8], facecolor='w')
fig4_rap = plt.figure(figsize = [15,8], facecolor='w')
fig5 = plt.figure(figsize = [15,8], facecolor='w')
fig6 = plt.figure(figsize = [15,8], facecolor='w')
#sub1 = fig1.add_subplot(111)
sub_peri = fig_peri.add_subplot(111)
sub_peri_deorbit = fig_peri_deorbit.add_subplot(111)
sub_apo = fig_apo.add_subplot(111)
sub3 = fig3.add_subplot(111)
sub4 = fig4.add_subplot(111)
sub4_rap = fig4_rap.add_subplot(111)
sub5 = fig5.add_subplot(111)
sub6 = fig6.add_subplot(111)
subs = [sub_peri, sub_peri_deorbit, sub_apo, sub3, sub4, sub4_rap, sub5, sub6]
for file in ['orbit_deorbit.txt', 'orbit_deorbit2.txt', 'orbit_deorbit3.txt']:
orbit = load_orbit_file(file)
t = Time(mjd2unixtimestamp(orbit[:,utc]), format='unix')
#sub1.plot(t.datetime, orbit[:,sma])
sub_peri.plot(t.datetime, orbit[:,sma]*(1-orbit[:,ecc]))
deorbit_sel = (mjd2unixtimestamp(orbit[:,utc]) >= 1564012800) & (mjd2unixtimestamp(orbit[:,utc]) <= 1564963200)
if np.any(deorbit_sel):
sub_peri_deorbit.plot(t[deorbit_sel].datetime, orbit[deorbit_sel,sma]*(1-orbit[deorbit_sel,ecc]))
sub_apo.plot(t.datetime, orbit[:,sma]*(1+orbit[:,ecc]))
sub3.plot(t.datetime, orbit[:,ecc])
sub4.plot(t.datetime, orbit[:,aop])
sub4_rap.plot(t.datetime, np.fmod(orbit[:,aop] + orbit[:,raan],360))
sub5.plot(t.datetime, orbit[:,inc])
sub6.plot(t.datetime, orbit[:,raan])
sub_peri.axhline(y = 1737, color='red')
sub_peri_deorbit.axhline(y = 1737, color='red')
month_locator = mdates.MonthLocator()
day_locator = mdates.DayLocator()
for sub in subs:
sub.set_xlabel('Time')
sub.xaxis.set_major_locator(month_locator)
sub.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m'))
sub.xaxis.set_tick_params(rotation=45)
sub_peri_deorbit.xaxis.set_major_locator(day_locator)
sub_peri_deorbit.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m-%d'))
#sub1.set_ylabel('SMA (km)')
sub_peri.set_ylabel('Periapsis radius (km)')
sub_peri_deorbit.set_ylabel('Periapsis radius (km)')
sub_apo.set_ylabel('Apoapsis radius (km)')
sub3.set_ylabel('ECC')
sub4.set_ylabel('AOP (deg)')
sub4_rap.set_ylabel('RAOP (deg)')
sub5.set_ylabel('INC (deg)')
sub6.set_ylabel('RAAN (deg)')
#sub1.set_title('Semi-major axis')
sub_peri.set_title('Periapsis radius')
sub_peri_deorbit.set_title('Periapsis radius')
sub_apo.set_title('Apoapsis radius')
sub3.set_title('Eccentricity')
sub4.set_title('Argument of periapsis')
sub4_rap.set_title('Right ascension of periapsis')
sub5.set_title('Inclination')
sub6.set_title('Right ascension of ascending node')
for sub in subs:
sub.legend(['Before periapsis lowering', 'After periapsis lowering', 'Latest ephemeris'])
sub_peri.legend(['Before periapsis lowering', 'After periapsis lowering', 'Latest ephemeris', 'Lunar radius']);
sub_peri_deorbit.legend(['Before periapsis lowering', 'After periapsis lowering', 'Latest ephemeris', 'Lunar radius']);
Explanation: Plot the orbital parameters which vary significantly between different tracking files.
End of explanation |
5,887 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'sandbox-3', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: EC-EARTH-CONSORTIUM
Source ID: SANDBOX-3
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:00
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
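For a multi-valued ENUM property such as this one, the cell's own 'Set as follows' hint suggests calling DOC.set_value once per selected choice; the example below is only a placeholder and not a statement about the model:
DOC.set_value("water")
DOC.set_value("energy")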
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of the soil hydrology scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
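As a sketch of how an enumerated property with cardinality 1.N is completed, the call below uses one of the valid choice strings listed above; which choice (or choices) applies is model-specific, so treat it purely as a placeholder:
# Illustrative example only -- use the valid choice string(s) that apply to your model
DOC.set_value("snow melting")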
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, specify the variables that the snow albedo calculation is a function of*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
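For boolean properties such as this one, the value is passed unquoted, matching the True/False valid choices shown above; the value here is a placeholder:
# Illustrative example only -- set according to whether your model includes dynamic vegetation
DOC.set_value(True)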
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintenance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
5,888 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Camera Calibration with OpenCV
Run the code in the cell below to extract object points and image points for camera calibration.
Step1: If the above cell ran successfully, you should now have objpoints and imgpoints needed for camera calibration. Run the cell below to calibrate, calculate distortion coefficients, and test undistortion on an image! | Python Code:
import numpy as np
import cv2
import glob
import matplotlib.pyplot as plt
%matplotlib qt
# prepare object points for an 8x6 board, like (0,0,0), (1,0,0), (2,0,0) ....,(7,5,0)
objp = np.zeros((6*8,3), np.float32)
objp[:,:2] = np.mgrid[0:8, 0:6].T.reshape(-1,2)
# Arrays to store object points and image points from all the images.
objpoints = [] # 3d points in real world space
imgpoints = [] # 2d points in image plane.
# Make a list of calibration images
images = glob.glob('../camera_cal/calibration*.jpg')
# Step through the list and search for chessboard corners
for idx, fname in enumerate(images):
img = cv2.imread(fname)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Find the chessboard corners
ret, corners = cv2.findChessboardCorners(gray, (8,6), None)
# If found, add object points, image points
if ret == True:
objpoints.append(objp)
imgpoints.append(corners)
# Draw and display the corners
cv2.drawChessboardCorners(img, (8,6), corners, ret)
#write_name = 'corners_found'+str(idx)+'.jpg'
#cv2.imwrite(write_name, img)
cv2.imshow('img', img)
cv2.waitKey(500)
cv2.destroyAllWindows()
Explanation: Camera Calibration with OpenCV
Run the code in the cell below to extract object points and image points for camera calibration.
End of explanation
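Once the next cell has run and pickled the calibration, the saved camera matrix and distortion coefficients can be reloaded and applied to any new image from the same camera. This is a sketch of that reuse step (the image path is a placeholder); it relies on the pickle file and dictionary keys written by the cell below:
import pickle
import cv2
with open("calibration_wide/wide_dist_pickle.p", "rb") as f:
    calib = pickle.load(f)  # dictionary with "mtx" and "dist" saved by the next cell
mtx, dist = calib["mtx"], calib["dist"]
new_img = cv2.imread("some_new_image.jpg")  # placeholder path
undistorted = cv2.undistort(new_img, mtx, dist, None, mtx)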
import pickle
%matplotlib inline
# Test undistortion on an image
img = cv2.imread('calibration_wide/test_image.jpg')
img_size = (img.shape[1], img.shape[0])
# Do camera calibration given object points and image points
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, img_size,None,None)
dst = cv2.undistort(img, mtx, dist, None, mtx)
cv2.imwrite('calibration_wide/test_undist.jpg',dst)
# Save the camera calibration result for later use (we won't worry about rvecs / tvecs)
dist_pickle = {}
dist_pickle["mtx"] = mtx
dist_pickle["dist"] = dist
pickle.dump( dist_pickle, open( "calibration_wide/wide_dist_pickle.p", "wb" ) )
#dst = cv2.cvtColor(dst, cv2.COLOR_BGR2RGB)
# Visualize undistortion
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10))
ax1.imshow(img)
ax1.set_title('Original Image', fontsize=30)
ax2.imshow(dst)
ax2.set_title('Undistorted Image', fontsize=30)
Explanation: If the above cell ran sucessfully, you should now have objpoints and imgpoints needed for camera calibration. Run the cell below to calibrate, calculate distortion coefficients, and test undistortion on an image!
End of explanation |
5,889 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ordinary Least-Squares with Measurement Error
Unit 12, Lecture 4
Numerical Methods and Statistics
Prof. Andrew White, April 24 2018
Goals
Step1: Plotting with Error Bars
Error bars are little lines that go up and down or left and right in graphs to indicate the standard error of a measurement. They may also indicate a confidence interval or standard deviation, however that will usually be specified in the figure caption. The code to make error bar plots is shown below with a constant y-error bar.
Step2: Somtimes you have x-error and y-error. You can do that too
Step3: You may have a different error value at each point. Then you pass an array instead
Step4: If you do quantiling or some other technique that is non-parametric, you often can have error bars that are asymmetric. Then you need to pass in a 2xN array that has the distance up in the first row and distance down in the second row.
Step5: Ordinary Least-Squares (OLS) Regression with Measurement Error
We're going to return to regression again. This time with error in both our independent and dependent variables. Here is the list of cases we'll consider
Step6: We can use our equations above to find the entropy, which is the negative of the slope
Step7: Now if we want to give a confidence interval, we need to get the standard error first. Let's start by checking our fit and we'll need the residuals
Step8: Now we have the standard error in the slope, which is the same as entropy. Now we get our confidence interval
Step9: Remember, the slope is the negative change in our entropy. So our final answer is
$$\Delta S = 0.18 \pm 0.06 \frac{\textrm{kcal}}{\textrm{mol}\cdot\textrm{K}}$$
Case 2 - OLS with constant $x$ uncertainty in 1D
Now we have a measurement error in our independent variables. Our $x$ values are just our best esimiates; we don't know the true $x$ values. Our model equation is
Step10: Our slope and intercept are unchanged, but our standard error is different.
Step11: With the new measurement error, our new confidence interval for entropy is
Step12: Notice that as our error in our measurement gets larger, the slope becomes less and less clear. This is called Attenuation Error. As uncertainty in $x$ increases, our estimates for $\alpha$ and $\beta$ get smaller. We usually don't correct for this, because all our hypothesis tests become more conservative due to attenuation and thus we won't ever accidentally think there is a correlation when there isn't. But be aware that when the uncertatinty in $x$ becomes simiar in size to our range of data, we will underestimate the slope.
Case 3 - OLS with constant x,y uncertainty in 1D
As you may have expected, the standard error in $\epsilon_3$ is just a combination of the previous two cases
Step13: With the both measurement errors, our confidence interval for entropy is | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from math import sqrt, pi, erf
import seaborn
seaborn.set_context("notebook")
seaborn.set_style("whitegrid")
import scipy.stats
Explanation: Ordinary Least-Squares with Measurement Error
Unit 12, Lecture 4
Numerical Methods and Statistics
Prof. Andrew White, April 24 2018
Goals:
Be able to plot with error bars
Be able to choose the appropriate case for OLS with measurement error
Are aware of attenuation error and its effect on independent variable measurement error
End of explanation
x = np.array([1,2,4,4.5,7,10])
y = 3 * x + 2
#yerr=1 means constant standard error up and down of 1
#fmt is for lines/points and color. capthick is necessary to make the small dashes at the top and bottom
plt.errorbar(x,y, yerr=1, fmt='o', capthick=2)
plt.show()
Explanation: Plotting with Error Bars
Error bars are little lines that go up and down or left and right in graphs to indicate the standard error of a measurement. They may also indicate a confidence interval or standard deviation, however that will usually be specified in the figure caption. The code to make error bar plots is shown below with a constant y-error bar.
End of explanation
plt.errorbar(x,y, yerr=1, xerr=0.5, fmt='o', capthick=2)
plt.show()
Explanation: Sometimes you have x-error and y-error. You can do that too
End of explanation
#THIS IS NOT PART OF PLOTTING ERROR BARS
#DO NOT COPY FOR HW/TEST
#THIS IS TO CREATE FAKE DATA FOR PLOTTING
#create some random numbers to use for my error bars
#square them to make sure they're positive
yerror = 5*np.random.random(size=len(x))**2
#END
plt.errorbar(x,y, yerr=yerror, xerr=0.5, fmt='o', capthick=2)
plt.show()
Explanation: You may have a different error value at each point. Then you pass an array instead:
End of explanation
#THIS IS NOT PART OF PLOTTING ERROR BARS
#DO NOT COPY FOR HW/TEST
#THIS IS TO CREATE FAKE DATA FOR PLOTTING
#create some random numbers to use for my error bars
#square them to make sure they're positive
yerror_up = 2.5*np.random.random(size=len(x))**2
yerror_down = 2.5*np.random.random(size=len(x))**2
#END
yerror = np.row_stack( (yerror_up, yerror_down) )
plt.errorbar(x,y, yerr=yerror, xerr=0.5, fmt='o', capthick=2)
plt.show()
Explanation: If you do quantiling or some other technique that is non-parametric, you often can have error bars that are asymmetric. Then you need to pass in a 2xN array that has the distance up in the first row and distance down in the second row.
End of explanation
T = np.array([300., 312, 325, 345, 355, 400])
DG = np.array([5.2, 2.9, 0.4, -4.2,-5, -13])
plt.errorbar(T, DG, yerr=2, fmt='go', capthick=3)
plt.xlim(290, 420)
plt.xlabel('T [Kelvin]')
plt.ylabel('$\Delta G$')
plt.show()
Explanation: Ordinary Least-Squares (OLS) Regression with Measurement Error
We're going to return to regression again. This time with error in both our independent and dependent variables. Here is the list of cases we'll consider:
OLS with constant $y$ uncertainty in 1D
OLS with constant $x$ uncertainty in 1D
OLS with constant $x,y$ uncertainty in 1D
OLS with multiple $y$ values in N-dimensions
Case 1 - OLS with constant $y$ uncertainty in 1D
In this case we have some constant extra uncertainty in $y$, so that when we measure $y$ we don't get the actual $y$. We get some estimate of $y$. For example, if I'm weighing out a powder and the balance is only accuarate to $2$mg, I don't get the true mass but isntead some estimate with an uncertainty of $2$ mg. That means our measurements for $y$ do not contain the true value of y, but are instead an estimate of $y$. We're doing regression and our equation looks like this for the true y values:
$$(y + \eta) = \alpha + \beta x + \epsilon$$
where $\eta$ is the extra uncertainty in $y$. We have measured $(y + \eta)$ and $x$ for our regression.
We can rearrange the equation a little bit and get:
$$y = \alpha + \beta x + \epsilon_1$$
where $\epsilon_1 = \epsilon - \eta$. The $_1$ stands for case 1. Notice that since $\eta$ and $\epsilon$ are normally distributed and centered at $0$, we don't actually get a smaller error term for $\epsilon_1$ than $\epsilon$. Since we've arraived at the same equation as the usual OLS regression with a slope and intercept, we can use the same equations. EXCEPT, our standard error of $\epsilon_1$ is slightly different. The standard error is:
$$ S^2_{\epsilon_1} = S^2_{\epsilon} + \sigma_{\eta}^2 = \frac{\sum_i (y_i - \hat{y}_i)^2}{N - 2} + \sigma_{\eta}^2 $$
where $S^2_{\epsilon}$ was our previously used standard error term. The $-2$ term is for the reduction in degrees of freedom
and $\sigma_{\eta}^2$ is the squared error in our measurement. Notice "error" here generally means an instrument's stated precision.
All Equations for Case 1
$$\hat{\beta} = \frac{\sigma_{xy}}{\sigma_x^2}$$
$$\hat{\alpha} = \frac{1}{N }\sum_i (y_i - \hat{\beta}x_i)$$
$$ S^2_{\epsilon_1} =\frac{SSR}{N-2} + \sigma_{\eta}^2 $$
$$SSR = \sum_i (y_i - \hat{\beta}x_i - \hat{\alpha})^2$$
$$S^2_{\alpha} = S^2_{\epsilon_1} \left[ \frac{1}{N - 2} + \frac{\bar{x}^2}{\sum_i\left(x_i - \bar{x}\right)^2}\right]$$
$$S^2_{\beta} = \frac{S^2_{\epsilon_1}}{\sum_i \left(x_i - \bar{x}\right)^2}$$
Case 1 Example
The Gibbs equation for a chemical reaction is:
$$\Delta G = \Delta H - T \Delta S$$
where $\Delta G = -RT\ln Q$ and $Q$ the equilibrium constant. We can measure $\Delta G$ by measuring $Q$ and due to instrument precision, we know that the precision (generally 1 standard deviation) of $\Delta G$ is 2 kcal / mol. What is the change in entropy, given these measurements:
$T \textrm{[K]}$: 300, 312, 325, 345, 355, 400
$\Delta G \textrm{[kcal/mol]}$: 5.2, 2.9, 0.4, -4.2, -5, -13
End of explanation
cov_mat = np.cov(T, DG, ddof=1)
slope = cov_mat[0,1] / cov_mat[0,0]
DS = -slope
print(DS, 'kcal/mol*K')
Explanation: We can use our equations above to find the entropy, which is the negative of the slope:
End of explanation
intercept = np.mean(DG - T * slope)
print(intercept)
plt.errorbar(T, DG, yerr=2, fmt='go', capthick=3)
plt.plot(T, T * slope + intercept, '-')
plt.xlim(290, 420)
plt.xlabel('T [Kelvin]')
plt.ylabel('$\Delta G$')
plt.show()
residuals = DG - T * slope - intercept
sig_e = np.sum(residuals**2) / (len(T) - 2)
#this is where we include the error in measurement
s2_e = sig_e + 2.0 ** 2
s2_slope = s2_e / (np.sum( (np.mean(T) - T)**2 ) )
Explanation: Now if we want to give a confidence interval, we need to get the standard error first. Let's start by checking our fit and we'll need the residuals
End of explanation
# critical t-value for a 95% CI; use a new name so the temperature array T is not overwritten
t_crit = scipy.stats.t.ppf(0.975, len(T) - 2)
slope_ci = t_crit * np.sqrt(s2_slope)
print(slope_ci)
Explanation: Now we have the standard error in the slope, which is the same as entropy. Now we get our confidence interval
End of explanation
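The Case 1 recipe can be wrapped into one small helper for reuse. This is a sketch of my own (not part of the original lecture code) that reproduces the slope, the measurement-error-inflated standard error, and the 95% confidence interval computed above; it relies on the numpy and scipy.stats imports at the top of the lecture:
def case1_slope_ci(x, y, sigma_y):
    # OLS slope and intercept (Case 1 equations)
    cov = np.cov(x, y, ddof=1)
    slope = cov[0, 1] / cov[0, 0]
    intercept = np.mean(y - slope * x)
    # standard error with the constant y measurement error added
    resid = y - slope * x - intercept
    s2_e = np.sum(resid**2) / (len(x) - 2) + sigma_y**2
    s2_slope = s2_e / np.sum((x - np.mean(x))**2)
    t_crit = scipy.stats.t.ppf(0.975, len(x) - 2)
    return slope, intercept, t_crit * np.sqrt(s2_slope)
# e.g. case1_slope_ci(T, DG, 2.0) with the temperature / free-energy data above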
T = np.array([300., 312, 325, 345, 355, 400])
DG = np.array([5.2, 2.9, 0.4, -4.2,-5, -13])
plt.errorbar(T, DG, xerr=5, fmt='go', capthick=3)
plt.xlim(290, 420)
plt.xlabel('T [Kelvin]')
plt.ylabel('$\Delta G$')
plt.show()
Explanation: Remember, the slope is the negative change in our entropy. So our final answer is
$$\Delta S = 0.18 \pm 0.06 \frac{\textrm{kcal}}{\textrm{mol}\cdot\textrm{K}}$$
Case 2 - OLS with constant $x$ uncertainty in 1D
Now we have a measurement error in our independent variables. Our $x$ values are just our best esimiates; we don't know the true $x$ values. Our model equation is:
$$y = \alpha + \beta(x + \eta) + \epsilon$$
where our measurements are $y$ and $(x + \eta)$. Once again we can rearrange our model equation into:
$$y = \alpha + \beta x + \epsilon_2$$
where $\epsilon_2 = \beta \eta + \epsilon$. Everything is the same as before, except that our extra variance term depends on the slope. That changes our standard error equation to:
$$ S^2_{\epsilon_2} = \frac{SSR}{N - 2} + \hat{\beta}^2\sigma_{\eta}^2 $$
and $\sigma_{\eta}^2$ is again the squared error in our measurement. Note that this is an approximate method.
All Equations for Case 2
$$\hat{\beta} = \frac{\sigma_{xy}}{\sigma_x^2}$$
$$\hat{\alpha} = \frac{1}{N}\sum_i (y_i - \hat{\beta}x_i)$$
$$ S^2_{\epsilon_2} = \frac{SSR}{N-2} + \hat{\beta}^2\sigma_{\eta}^2 $$
$$S^2_{\alpha} = S^2_{\epsilon_2} \left[ \frac{1}{N-2} + \frac{\bar{x}^2}{\sum_i\left(x_i - \bar{x}\right)^2}\right]$$
$$S^2_{\beta} = \frac{S^2_{\epsilon_2}}{\sum_i \left(x_i - \bar{x}\right)^2}$$
Case 2 - Example
Repeat the case 1 example, except with an error in temperature measurement of 5 K.
End of explanation
#Now we use the independent variable measurement error
s2_e = sig_e + slope**2 * 5.0 ** 2
s2_slope = s2_e / (np.sum( (np.mean(T) - T)**2 ) )
# critical t-value for a 95% CI; use a new name so the temperature array T is not overwritten
t_crit = scipy.stats.t.ppf(0.975, len(T) - 2)
slope_ci = t_crit * np.sqrt(s2_slope)
print(slope_ci)
Explanation: Our slope and intercept are unchanged, but our standard error is different.
End of explanation
plt.figure(figsize=(24, 12))
rows = 2
cols = 3
N = 1000
for i in range(rows):
for j in range(cols):
index = i * cols + j + 1
fig = plt.subplot(rows, cols, index)
err = scipy.stats.norm.rvs(loc=0, scale = index - 1, size=N)
x = np.linspace(0,5, N)
y = 3 * (x + err)
plt.plot(x, y, 'o')
plt.xlim(-2, 7)
plt.title('$\sigma_\eta = {}$'.format(index))
plt.show()
Explanation: With the new measurement error, our new confidence interval for entropy is:
$$\Delta S = 0.18 \pm 0.03 \frac{\textrm{kcal}}{\textrm{mol}\cdot\textrm{K}}$$
Case 2 - Attenuation Error
There is an interesting side effect of independent measurement error. Let's look at some plots showing increasing uncertainty in $x$, but always with a slope of 3
End of explanation
temperature = np.array([300., 312, 325, 345, 355, 400])
DG = np.array([5.2, 2.9, 0.4, -4.2,-5, -13])
plt.errorbar(temperature, DG, xerr=5, yerr=2, fmt='go', capthick=3)
plt.xlim(290, 420)
plt.xlabel('T [Kelvin]')
plt.ylabel('$\Delta G$')
plt.show()
cov_mat = np.cov(temperature, DG, ddof=1)
slope = cov_mat[0,1] / cov_mat[0,0]
DS = -slope
print(DS, 'kcal/mol*K')
intercept = np.mean(DG - temperature * slope)
print(intercept)
plt.errorbar(temperature, DG, xerr=5, yerr=2, fmt='go', capthick=3)
plt.plot(temperature, temperature * slope + intercept, '-')
plt.xlim(290, 420)
plt.xlabel('T [Kelvin]')
plt.ylabel('$\Delta G$')
plt.show()
residuals = DG - temperature * slope - intercept
sig_e = np.sum(residuals**2)
#The only new part
#-------------------------------------
#Now we use both the dependent and the independent variable measurement error
sig_total = sig_e + slope**2 * 5.0 ** 2 + 2.0**2
#-------------------------------------
s2_e = sig_total / (len(temperature) - 2)
s2_slope = s2_e / (np.sum( (np.mean(temperature) - temperature)**2 ) )
# critical t-value for a 95% CI
t_crit = scipy.stats.t.ppf(0.975, len(temperature) - 2)
slope_ci = t_crit * np.sqrt(s2_slope)
print(slope_ci)
Explanation: Notice that as our error in our measurement gets larger, the slope becomes less and less clear. This is called Attenuation Error. As uncertainty in $x$ increases, our estimates for $\alpha$ and $\beta$ get smaller. We usually don't correct for this, because all our hypothesis tests become more conservative due to attenuation and thus we won't ever accidentally think there is a correlation when there isn't. But be aware that when the uncertatinty in $x$ becomes simiar in size to our range of data, we will underestimate the slope.
Case 3 - OLS with constant x,y uncertainty in 1D
As you may have expected, the standard error in $\epsilon_3$ is just a combination of the previous two cases:
$$S^2_{\epsilon_3} = \frac{SSR}{N} + \hat{\beta}^2\sigma^2_{\eta_x} + \sigma^2_{\eta_y}$$
All Equations for Case 3
$$\hat{\beta} = \frac{\sigma_{xy}}{\sigma_x^2}$$
$$\hat{\alpha} = \frac{1}{N}\sum_i (y_i - \hat{\beta}x_i)$$
$$S^2_{\epsilon_3} = \frac{SSR}{N - 2} + \hat{\beta}^2\sigma^2_{\eta_x} + \sigma^2_{\eta_y}$$
$$S^2_{\alpha} = S^2_{\epsilon_3} \left[ \frac{1}{N - 2} + \frac{\bar{x}^2}{\sum_i\left(x_i - \bar{x}\right)^2}\right]$$
$$S^2_{\beta} = \frac{S^2_{\epsilon_3}}{\sum_i \left(x_i - \bar{x}\right)^2}$$
Case 3 - Example
Repeat the Case 1 example with an uncertainty in $\Delta G$ of 2 kcal/mol and $T$ of 5K
End of explanation
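As a supplementary note (not from the original lecture): the size of the attenuation can be made quantitative. The classical errors-in-variables result says that regressing on a noisy $x$ shrinks the fitted slope toward zero by roughly the factor $\sigma_x^2 / (\sigma_x^2 + \sigma_\eta^2)$. A quick numeric illustration for an $x$ spread like the panels above:
# attenuation (regression dilution) factor: var(x) / (var(x) + sigma_eta^2)
xs = np.linspace(0, 5, 1000)
for sigma_eta in range(6):
    shrink = np.var(xs) / (np.var(xs) + sigma_eta**2)
    print(sigma_eta, 3 * shrink)  # how a true slope of 3 would be attenuated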
x = np.array([2, 4, 6])
y_1 = np.array([1.2, 1.5, 1.1, 0.9])
y_2 = np.array([2.6, 2.2, 2.1, 2.5])
y_3 = np.array([3.0, 2.9, 3.3, 5])
y = np.array([np.mean(y_1), np.mean(y_2), np.mean(y_3)])
#compute standard error
yerr = np.sqrt(np.array([np.var(y_1, ddof=1), np.var(y_2, ddof=1), np.var(y_3, ddof=1)])) / np.sqrt(len(y_1))  # std / sqrt(N), per the standard error formula above
plt.errorbar(x, y, yerr=yerr, fmt='o', capthick=3)
plt.xlim(0, 10)
plt.show()
Explanation: With the both measurement errors, our confidence interval for entropy is:
$$\Delta S = 0.18 \pm 0.04 \frac{\textrm{kcal}}{\textrm{mol}\cdot\textrm{K}}$$
which is a slightly larger confidence interval than for case 2
Case 4 - OLS with multiple y values in N-dimensions
Sometimes you'll see people have multiple measurements for each $y$-value so that they can plot error bars. For example, let's say we have 3 $x$-values and we have 4 $y$-vaules at each $x$-value. That would give enough samples so that we can compute a standard error at each $y$-value:
$$S_y = \sqrt{\frac{\sigma_y^2}{N}}$$
End of explanation |
5,890 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Tensorflow Lattice와 형상 제약 조건
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: 필수 패키지 가져오기
Step3: 이 가이드에서 사용되는 기본값
Step4: 레스토랑 순위 지정을 위한 훈련 데이터세트
사용자가 레스토랑 검색 결과를 클릭할지 여부를 결정하는 단순화된 시나리오를 상상해봅니다. 이 작업은 주어진 입력 특성에 따른 클릭률(CTR)을 예측하는 것입니다.
평균 평점(avg_rating)
Step6: 이 CTR 함수의 등고선도를 살펴보겠습니다.
Step7: 데이터 준비하기
이제 합성 데이터세트를 만들어야 합니다. 레스토랑과 해당 특징의 시뮬레이션된 데이터세트를 생성하는 것으로 작업을 시작합니다.
Step8: 훈련, 검증 및 테스트 데이터세트를 생성해 보겠습니다. 검색 결과에 레스토랑이 표시되면 사용자의 참여(클릭 또는 클릭 없음)를 샘플 포인트로 기록할 수 있습니다.
실제로 사용자가 모든 검색 결과를 확인하지 않는 경우가 많습니다. 즉, 사용자는 현재 사용 중인 순위 모델에서 이미 '좋은' 것으로 간주되는 식당만 볼 수 있습니다. 결과적으로 '좋은' 레스토랑은 훈련 데이터세트에서 더 자주 좋은 인상을 남기고 더 과장되게 표현됩니다. 더 많은 특성을 사용할 때 훈련 데이터세트에서는 특성 공간의 '나쁜' 부분에 큰 간격이 생길 수 있습니다.
모델이 순위 지정에 사용되면 훈련 데이터세트로 잘 표현되지 않는 보다 균일한 분포로 모든 관련 결과에 대해 평가되는 경우가 많습니다. 이 경우 과도하게 표현된 데이터 포인트에 과대 적합이 발생하여 일반화될 수 없기 때문에, 유연하고 복잡한 모델은 실패할 수 있습니다. 이 문제는 도메인 지식을 적용하여 모델이 훈련 데이터세트에서 선택할 수 없을 때 합리적인 예측을 할 수 있도록 안내하는 형상 제약 조건을 추가함으로써 처리합니다.
이 예에서 훈련 데이터세트는 대부분 우수하고 인기 있는 음식점과의 사용자 상호 작용으로 구성됩니다. 테스트 데이터세트에는 위에서 설명한 평가 설정을 시뮬레이션하도록 균일한 분포가 있습니다. 해당 테스트 데이터세트는 실제 문제 설정에서는 사용할 수 없습니다.
Step9: 훈련 및 평가에 사용되는 input_fns 정의하기
Step10: 그래디언트 Boosted 트리 적합화하기
avg_rating과 num_reviews 두 가지 특성으로 시작하겠습니다.
검증 및 테스트 메트릭을 플롯하고 계산하기 위한 몇 가지 보조 함수를 만듭니다.
Step11: TensorFlow 그래디언트 boosted 결정 트리를 데이터세트에 적합하도록 맞출 수 있습니다.
Step12: 모델이 실제 CTR의 일반적인 형상을 포착하고 적절한 검증 메트릭을 가지고 있지만, 입력 공간의 여러 부분에서 반직관적인 동작을 보입니다. 평균 평점 또는 리뷰 수가 증가하면 예상 CTR이 감소하는데, 이는 훈련 데이터세트에서 잘 다루지 않는 영역에 샘플 포인트가 부족하기 때문입니다. 모델은 데이터에서만 올바른 동작을 추론할 방법이 없습니다.
이 문제를 해결하기 위해 모델이 평균 평점과 리뷰 수에 대해 단조롭게 증가하는 값을 출력해야 한다는 형상 제약 조건을 적용합니다. 나중에 TFL에서 이를 구현하는 방법을 살펴보겠습니다.
DNN 적합화하기
DNN 분류자로 같은 단계를 반복할 수 있습니다. 여기서 비슷한 패턴이 관찰되는데 리뷰 수가 적은 샘플 포인트가 충분하지 않으면 무의미한 외삽이 발생합니다. 검증 메트릭이 트리 솔루션보다 우수하더라도 테스트 메트릭은 훨씬 나쁘다는 점을 유의하세요.
Step13: 형상 제약 조건
TensorFlow Lattice(TFL)는 훈련 데이터 이상의 모델 동작을 보호하기 위해 형상 제약 조건을 적용하는 데 중점을 둡니다. 이러한 형상 제약 조건은 TFL Keras 레이어에 적용됩니다. 자세한 내용은 JMLR 논문에서 찾을 수 있습니다.
이 튜토리얼에서는 다양한 형상 제약을 다루기 위해 준비된 TF estimator를 사용하지만, 해당 모든 단계는 TFL Keras 레이어에서 생성된 모델로 수행할 수 있습니다.
다른 TensorFlow estimator와 마찬가지로 준비된 TFL estimator는 특성 열을 사용하여 입력 형식을 정의하고 훈련 input_fn을 사용하여 데이터를 전달합니다. 준비된 TFL estimator을 사용하려면 다음이 필요합니다.
모델 구성
Step14: CalibratedLatticeConfig를 사용하면 먼저 calibrator를 각 입력(숫자 특성에 대한 부분 선형 함수)에 적용한 다음 격자 레이어를 적용하여 보정된 특성을 비선형적으로 융합하는 준비된 분류자를 생성합니다. tfl.visualization을 사용하여 모델을 시각화할 수 있습니다. 특히 다음 플롯은 미리 준비된 estimator에 포함된 두 개의 훈련된 calibrator를 보여줍니다.
Step15: 제약 조건이 추가되면 평균 평점이 증가하거나 리뷰 수가 증가함에 따라 예상 CTR이 항상 증가합니다. 이것은 calibrator와 격자가 단조로운지 확인하여 수행됩니다.
감소 수익
감소 수익은 특정 특성값을 증가시키는 한계 이득이 값이 증가함에 따라 감소한다는 것을 의미합니다. 해당 경우에는 num_reviews 특성이 이 패턴을 따를 것으로 예상하므로 그에 따라 calibrator를 구성할 수 있습니다. 감소하는 수익률은 두 가지 충분한 조건으로 분해할 수 있습니다.
calibrator가 단조롭게 증가하고 있으며
calibrator는 오목합니다.
Step16: 오목 제약 조건을 추가하여 테스트 메트릭이 어떻게 향상되는지 확인하세요. 예측 플롯은 또한 지상 진실과 더 유사합니다.
2D 형상 제약 조건
Step17: 다음 플롯은 훈련된 격자 함수를 나타냅니다. 신뢰 제약 조건으로 인해, 보정된 num_reviews의 큰 값이 보정된 avg_rating에 대한 경사를 더 높여서 격자 출력에서 더 중요한 이동이 있을 것을 예상합니다.
Step18: Smoothing Calibrator
이제 avg_rating의 calibrator를 살펴보겠습니다. 단조롭게 증가하지만 기울기의 변화는 갑작스럽고 해석하기 어렵습니다. 이는 regularizer_configs의 regularizer 설정으로 이 calibrator를 스무딩하는 것을 고려해볼 수 있음을 의미합니다.
여기에서는 곡률의 변화를 줄이기 위해 wrinkle regularizer를 적용합니다. 또한 laplacian regularizer를 사용하여 calibrator를 평면화하고 hessian regularizer를 사용하여 보다 선형적으로 만들 수 있습니다.
Step19: 이제 calibrator가 매끄럽고 전체 예상 CTR이 실제와 더 잘 일치합니다. 해당 적용은 테스트 메트릭과 등고선 플롯 모두에 반영됩니다.
범주형 보정을 위한 부분 단조
지금까지 모델에서 숫자 특성 중 두 가지만 사용했습니다. 여기에서는 범주형 보정 레이어를 사용하여 세 번째 특성을 추가합니다. 다시 플롯 및 메트릭 계산을 위한 도우미 함수를 설정하는 것으로 시작합니다.
Step20: 세 번째 특성인 dollar_rating을 포함하려면 범주형 특성이 특성 열과 특성 구성 모두에서 TFL 내에서 약간 다른 처리가 필요하다는 점을 기억해야 합니다. 여기서 다른 모든 입력이 고정될 때 'DD' 레스토랑의 출력이 'D' 레스토랑보다 커야 한다는 부분 단조 제약 조건을 적용합니다. 해당 적용은 특성 구성에서 monotonicity 설정을 사용하여 수행됩니다.
Step21: 범주형 calibrator는 모델 출력의 선호도를 보여줍니다. DD > D > DDD > DDDD는 설정과 일치합니다. 결측값에 대한 열도 있습니다. 훈련 및 테스트 데이터에는 누락된 특성이 없지만, 모델은 다운스트림 모델 제공 중에 발생하는 누락된 값에 대한 대체 값을 제공합니다.
dollar_rating을 조건으로 이 모델의 예상 CTR도 플롯합니다. 필요한 모든 제약 조건이 각 슬라이스에서 충족됩니다.
출력 보정
지금까지 훈련한 모든 TFL 모델의 경우 격자 레이어(모델 그래프에서 'Lattice'로 표시됨)가 모델 예측을 직접 출력합니다. 때때로 격자 출력이 모델 출력을 내도록 재조정되어야 하는지는 확실하지 않습니다.
특성은 $log$ 카운트이고 레이블은 카운트입니다.
격자는 매우 적은 수의 꼭짓점을 갖도록 구성되지만 레이블 분포는 비교적 복잡합니다.
이러한 경우 격자 출력과 모델 출력 사이에 또 다른 calibrator를 추가하여 모델 유연성을 높일 수 있습니다. 방금 구축한 모델에 5개의 키포인트가 있는 보정 레이어를 추가하겠습니다. 또한 함수를 원활하게 유지하기 위해 출력 calibrator용 regularizer를 추가합니다. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
#@test {"skip": true}
!pip install tensorflow-lattice
Explanation: TensorFlow Lattice and Shape Constraints
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/lattice/tutorials/shape_constraints"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/lattice/tutorials/shape_constraints.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/lattice/tutorials/shape_constraints.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/lattice/tutorials/shape_constraints.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
Overview
This tutorial is an overview of the constraints and regularizers provided by the TensorFlow Lattice (TFL) library. Here we use TFL canned estimators on a synthetic dataset, but note that everything in this tutorial can also be done with models built from TFL Keras layers.
Before proceeding, make sure your runtime has all the required packages installed (as imported in the code cells below).
Setup
Installing the TF Lattice package
End of explanation
import tensorflow as tf
from IPython.core.pylabtools import figsize
import itertools
import logging
import matplotlib
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import sys
import tensorflow_lattice as tfl
logging.disable(sys.maxsize)
Explanation: Importing the required packages
End of explanation
NUM_EPOCHS = 1000
BATCH_SIZE = 64
LEARNING_RATE=0.01
Explanation: Default values used in this guide
End of explanation
def click_through_rate(avg_ratings, num_reviews, dollar_ratings):
dollar_rating_baseline = {"D": 3, "DD": 2, "DDD": 4, "DDDD": 4.5}
return 1 / (1 + np.exp(
np.array([dollar_rating_baseline[d] for d in dollar_ratings]) -
avg_ratings * np.log1p(num_reviews) / 4))
Explanation: Training Dataset for Restaurant Ranking
Imagine a simplified scenario where we want to determine whether or not users will click on a restaurant search result. The task is to predict the click-through rate (CTR) given the following input features:
Average rating (avg_rating): a numeric feature with values in the range [1,5].
Number of reviews (num_reviews): a numeric feature with values capped at 200, which we use as a measure of trendiness.
Dollar rating (dollar_rating): a categorical feature with string values in the set {"D", "DD", "DDD", "DDDD"}.
We create a synthetic dataset where the true CTR is given by the formula: $$ CTR = 1 / \left(1 + \exp\left(\mbox{b}(\mbox{dollar_rating}) - \mbox{avg_rating} \times \log(\mbox{num_reviews}) / 4\right)\right) $$ where $b(\cdot)$ translates each dollar_rating to a baseline value: $$ \mbox{D}\to 3,\ \mbox{DD}\to 2,\ \mbox{DDD}\to 4,\ \mbox{DDDD}\to 4.5. $$
This formula reflects typical user patterns, e.g. with everything else fixed, users prefer restaurants with higher star ratings, and '$$' restaurants receive more clicks than '$', followed by '$$$' and '$$$$'.
End of explanation
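# Editor's note: a quick, illustrative sanity check of the CTR formula defined above
# (not part of the original tutorial). With num_reviews fixed, the CTR should increase
# with avg_rating, matching the user pattern described in the text.
for rating in [2.0, 3.0, 4.0, 5.0]:
    ctr = click_through_rate(np.array([rating]), np.array([100.0]), ["DD"])
    print("avg_rating=%.1f, num_reviews=100, dollar_rating=DD -> CTR=%.3f" % (rating, ctr[0]))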
def color_bar():
bar = matplotlib.cm.ScalarMappable(
norm=matplotlib.colors.Normalize(0, 1, True),
cmap="viridis",
)
bar.set_array([0, 1])
return bar
def plot_fns(fns, split_by_dollar=False, res=25):
Generates contour plots for a list of (name, fn) functions.
num_reviews, avg_ratings = np.meshgrid(
np.linspace(0, 200, num=res),
np.linspace(1, 5, num=res),
)
if split_by_dollar:
dollar_rating_splits = ["D", "DD", "DDD", "DDDD"]
else:
dollar_rating_splits = [None]
if len(fns) == 1:
fig, axes = plt.subplots(2, 2, sharey=True, tight_layout=False)
else:
fig, axes = plt.subplots(
len(dollar_rating_splits), len(fns), sharey=True, tight_layout=False)
axes = axes.flatten()
axes_index = 0
for dollar_rating_split in dollar_rating_splits:
for title, fn in fns:
if dollar_rating_split is not None:
dollar_ratings = np.repeat(dollar_rating_split, res**2)
values = fn(avg_ratings.flatten(), num_reviews.flatten(),
dollar_ratings)
title = "{}: dollar_rating={}".format(title, dollar_rating_split)
else:
values = fn(avg_ratings.flatten(), num_reviews.flatten())
subplot = axes[axes_index]
axes_index += 1
subplot.contourf(
avg_ratings,
num_reviews,
np.reshape(values, (res, res)),
vmin=0,
vmax=1)
subplot.title.set_text(title)
subplot.set(xlabel="Average Rating")
subplot.set(ylabel="Number of Reviews")
subplot.set(xlim=(1, 5))
_ = fig.colorbar(color_bar(), cax=fig.add_axes([0.95, 0.2, 0.01, 0.6]))
figsize(11, 11)
plot_fns([("CTR", click_through_rate)], split_by_dollar=True)
Explanation: Let's take a look at a contour plot of this CTR function.
End of explanation
def sample_restaurants(n):
avg_ratings = np.random.uniform(1.0, 5.0, n)
num_reviews = np.round(np.exp(np.random.uniform(0.0, np.log(200), n)))
dollar_ratings = np.random.choice(["D", "DD", "DDD", "DDDD"], n)
ctr_labels = click_through_rate(avg_ratings, num_reviews, dollar_ratings)
return avg_ratings, num_reviews, dollar_ratings, ctr_labels
np.random.seed(42)
avg_ratings, num_reviews, dollar_ratings, ctr_labels = sample_restaurants(2000)
figsize(5, 5)
fig, axs = plt.subplots(1, 1, sharey=False, tight_layout=False)
for rating, marker in [("D", "o"), ("DD", "^"), ("DDD", "+"), ("DDDD", "x")]:
plt.scatter(
x=avg_ratings[np.where(dollar_ratings == rating)],
y=num_reviews[np.where(dollar_ratings == rating)],
c=ctr_labels[np.where(dollar_ratings == rating)],
vmin=0,
vmax=1,
marker=marker,
label=rating)
plt.xlabel("Average Rating")
plt.ylabel("Number of Reviews")
plt.legend()
plt.xlim((1, 5))
plt.title("Distribution of restaurants")
_ = fig.colorbar(color_bar(), cax=fig.add_axes([0.95, 0.2, 0.01, 0.6]))
Explanation: Preparing the Data
We now need to create our synthetic datasets. We start by generating a simulated dataset of restaurants and their features.
End of explanation
def sample_dataset(n, testing_set):
(avg_ratings, num_reviews, dollar_ratings, ctr_labels) = sample_restaurants(n)
if testing_set:
# Testing has a more uniform distribution over all restaurants.
num_views = np.random.poisson(lam=3, size=n)
else:
# Training/validation datasets have more views on popular restaurants.
num_views = np.random.poisson(lam=ctr_labels * num_reviews / 50.0, size=n)
return pd.DataFrame({
"avg_rating": np.repeat(avg_ratings, num_views),
"num_reviews": np.repeat(num_reviews, num_views),
"dollar_rating": np.repeat(dollar_ratings, num_views),
"clicked": np.random.binomial(n=1, p=np.repeat(ctr_labels, num_views))
})
# Generate datasets.
np.random.seed(42)
data_train = sample_dataset(500, testing_set=False)
data_val = sample_dataset(500, testing_set=False)
data_test = sample_dataset(500, testing_set=True)
# Plotting dataset densities.
figsize(12, 5)
fig, axs = plt.subplots(1, 2, sharey=False, tight_layout=False)
for ax, data, title in [(axs[0], data_train, "training"),
(axs[1], data_test, "testing")]:
_, _, _, density = ax.hist2d(
x=data["avg_rating"],
y=data["num_reviews"],
bins=(np.linspace(1, 5, num=21), np.linspace(0, 200, num=21)),
density=True,
cmap="Blues",
)
ax.set(xlim=(1, 5))
ax.set(ylim=(0, 200))
ax.set(xlabel="Average Rating")
ax.set(ylabel="Number of Reviews")
ax.title.set_text("Density of {} examples".format(title))
_ = fig.colorbar(density, ax=ax)
Explanation: Let's generate the training, validation and test datasets. When a restaurant is shown in the search results, we can record the user's engagement (click or no click) as a sample point.
In practice, users often do not go through all search results. This means users will likely only see restaurants that are already considered 'good' by the ranking model currently in use. As a result, 'good' restaurants are more frequently impressed and over-represented in the training dataset. When using more features, the training dataset can have large gaps in the 'bad' parts of the feature space.
When a model is used for ranking, it is often evaluated on all relevant results with a more uniform distribution that is not well represented by the training dataset. In this case, a flexible and complex model can fail because it overfits the over-represented data points and therefore fails to generalize. We handle this problem by applying domain knowledge to add shape constraints that guide the model towards reasonable predictions when it cannot pick them up from the training dataset.
In this example, the training dataset mostly consists of user interactions with good and popular restaurants. The test dataset has a uniform distribution to simulate the evaluation setting described above. Note that such a test dataset will not be available in a real problem setting.
End of explanation
train_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=data_train,
y=data_train["clicked"],
batch_size=BATCH_SIZE,
num_epochs=NUM_EPOCHS,
shuffle=False,
)
# feature_analysis_input_fn is used for TF Lattice estimators.
feature_analysis_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=data_train,
y=data_train["clicked"],
batch_size=BATCH_SIZE,
num_epochs=1,
shuffle=False,
)
val_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=data_val,
y=data_val["clicked"],
batch_size=BATCH_SIZE,
num_epochs=1,
shuffle=False,
)
test_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=data_test,
y=data_test["clicked"],
batch_size=BATCH_SIZE,
num_epochs=1,
shuffle=False,
)
Explanation: Defining the input_fns used for training and evaluation
End of explanation
def analyze_two_d_estimator(estimator, name):
# Extract validation metrics.
metric = estimator.evaluate(input_fn=val_input_fn)
print("Validation AUC: {}".format(metric["auc"]))
metric = estimator.evaluate(input_fn=test_input_fn)
print("Testing AUC: {}".format(metric["auc"]))
def two_d_pred(avg_ratings, num_reviews):
results = estimator.predict(
tf.compat.v1.estimator.inputs.pandas_input_fn(
x=pd.DataFrame({
"avg_rating": avg_ratings,
"num_reviews": num_reviews,
}),
shuffle=False,
))
return [x["logistic"][0] for x in results]
def two_d_click_through_rate(avg_ratings, num_reviews):
return np.mean([
click_through_rate(avg_ratings, num_reviews,
np.repeat(d, len(avg_ratings)))
for d in ["D", "DD", "DDD", "DDDD"]
],
axis=0)
figsize(11, 5)
plot_fns([("{} Estimated CTR".format(name), two_d_pred),
("CTR", two_d_click_through_rate)],
split_by_dollar=False)
Explanation: Fitting Gradient Boosted Trees
We start with the two features avg_rating and num_reviews.
We create a few auxiliary functions for plotting and calculating validation and test metrics.
End of explanation
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
gbt_estimator = tf.estimator.BoostedTreesClassifier(
feature_columns=feature_columns,
# Hyper-params optimized on validation set.
n_batches_per_layer=1,
max_depth=2,
n_trees=50,
learning_rate=0.05,
config=tf.estimator.RunConfig(tf_random_seed=42),
)
gbt_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(gbt_estimator, "GBT")
Explanation: We can fit a TensorFlow gradient boosted decision tree to the dataset.
End of explanation
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
dnn_estimator = tf.estimator.DNNClassifier(
feature_columns=feature_columns,
# Hyper-params optimized on validation set.
hidden_units=[16, 8, 8],
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
dnn_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(dnn_estimator, "DNN")
Explanation: Although the model has captured the general shape of the true CTR and has decent validation metrics, it shows counter-intuitive behaviour in several parts of the input space: the estimated CTR decreases as the average rating or the number of reviews increases. This is due to a lack of sample points in areas not well covered by the training dataset. The model has no way to deduce the correct behaviour from the data alone.
To solve this problem, we apply the shape constraint that the model must output values that increase monotonically with both the average rating and the number of reviews. We will later see how to implement this in TFL.
Fitting a DNN
We can repeat the same steps with a DNN classifier. We observe a similar pattern: not having enough sample points with a small number of reviews results in nonsensical extrapolation. Note that even though the validation metric is better than the tree solution, the test metric is much worse.
End of explanation
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
)
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(tfl_estimator, "TF Lattice")
Explanation: Shape Constraints
TensorFlow Lattice (TFL) focuses on enforcing shape constraints to safeguard model behaviour beyond the training data. These shape constraints are applied to TFL Keras layers. Their details can be found in the JMLR paper.
In this tutorial we use canned TF estimators to cover various shape constraints, but note that all of these steps can be done with models created from TFL Keras layers.
As with any other TensorFlow estimator, canned TFL estimators use feature columns to define the input format and use a training input_fn to pass in the data. Using canned TFL estimators also requires:
a model config: defining the model architecture and the per-feature shape constraints and regularizers.
a feature analysis input_fn: a TF input_fn passing data for TFL initialization.
For a more detailed explanation, please refer to the canned estimators tutorial or the API docs.
Monotonicity
We first address the monotonicity concerns by adding monotonicity shape constraints to both features.
To instruct TFL to enforce a shape constraint, we specify it in the feature config. The following code shows how we can require the output to be monotonically increasing with respect to both num_reviews and avg_rating by setting monotonicity="increasing".
End of explanation
def save_and_visualize_lattice(tfl_estimator):
saved_model_path = tfl_estimator.export_saved_model(
"/tmp/TensorFlow_Lattice_101/",
tf.estimator.export.build_parsing_serving_input_receiver_fn(
feature_spec=tf.feature_column.make_parse_example_spec(
feature_columns)))
model_graph = tfl.estimators.get_model_graph(saved_model_path)
figsize(8, 8)
tfl.visualization.draw_model_graph(model_graph)
return model_graph
_ = save_and_visualize_lattice(tfl_estimator)
Explanation: Using a CalibratedLatticeConfig creates a canned classifier that first applies a calibrator to each input (a piece-wise linear function for numeric features) followed by a lattice layer to non-linearly fuse the calibrated features. We can use tfl.visualization to visualize the model. In particular, the following plot shows the two trained calibrators included in the canned estimator.
End of explanation
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
)
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(tfl_estimator, "TF Lattice")
_ = save_and_visualize_lattice(tfl_estimator)
Explanation: With the constraints added, the estimated CTR will always increase as the average rating or the number of reviews increases. This is done by making sure that the calibrators and the lattice are monotonic.
Diminishing Returns
Diminishing returns means that the marginal gain from increasing a certain feature value decreases as the value increases. In our case we expect the num_reviews feature to follow this pattern, so we can configure its calibrator accordingly. Diminishing returns can be decomposed into two sufficient conditions:
the calibrator is monotonically increasing, and
the calibrator is concave.
End of explanation
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
# Larger num_reviews indicating more trust in avg_rating.
reflects_trust_in=[
tfl.configs.TrustConfig(
feature_name="avg_rating", trust_type="edgeworth"),
],
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
)
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(tfl_estimator, "TF Lattice")
model_graph = save_and_visualize_lattice(tfl_estimator)
Explanation: Notice how the test metric improves by adding the concavity constraint. The prediction plot also better resembles the ground truth.
2D Shape Constraint: Trust
A 5-star rating for a restaurant with only one or two reviews is likely an unreliable rating (the actual restaurant experience might be bad), whereas a 4-star rating for a restaurant with hundreds of reviews is much more reliable (the restaurant is likely good in this case). We can see that the number of reviews of a restaurant affects how much trust we place in its average rating.
We can apply TFL trust constraints to inform the model that larger (or smaller) values of one feature indicate more reliance on, or trust in, another feature. This is done by setting the reflects_trust_in configuration in the feature config.
End of explanation
lat_mesh_n = 12
lat_mesh_x, lat_mesh_y = tfl.test_utils.two_dim_mesh_grid(
lat_mesh_n**2, 0, 0, 1, 1)
lat_mesh_fn = tfl.test_utils.get_hypercube_interpolation_fn(
model_graph.output_node.weights.flatten())
lat_mesh_z = [
lat_mesh_fn([lat_mesh_x.flatten()[i],
lat_mesh_y.flatten()[i]]) for i in range(lat_mesh_n**2)
]
trust_plt = tfl.visualization.plot_outputs(
(lat_mesh_x, lat_mesh_y),
{"Lattice Lookup": lat_mesh_z},
figsize=(6, 6),
)
trust_plt.title("Trust")
trust_plt.xlabel("Calibrated avg_rating")
trust_plt.ylabel("Calibrated num_reviews")
trust_plt.show()
Explanation: The following plot presents the trained lattice function. Due to the trust constraint, we expect larger values of the calibrated num_reviews to force a higher slope with respect to the calibrated avg_rating, resulting in a more significant move in the lattice output.
End of explanation
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
reflects_trust_in=[
tfl.configs.TrustConfig(
feature_name="avg_rating", trust_type="edgeworth"),
],
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
)
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(tfl_estimator, "TF Lattice")
_ = save_and_visualize_lattice(tfl_estimator)
Explanation: Smoothing Calibrators
Let's now take a look at the calibrator of avg_rating. Though it is monotonically increasing, the changes in its slope are abrupt and hard to interpret. That suggests we might want to consider smoothing this calibrator with a regularizer setup in regularizer_configs.
Here we apply a wrinkle regularizer to reduce changes in the curvature. You can also use the laplacian regularizer to flatten the calibrator and the hessian regularizer to make it more linear.
End of explanation
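# Editor's note: an illustrative, untested sketch (not part of the original tutorial)
# of the alternative calibrator regularizers mentioned above. The regularizer names
# below are assumptions about the TFL naming convention: "calib_laplacian" flattens
# the calibrator and "calib_hessian" pushes it towards a linear function. Swap these
# into the regularizer_configs list of a feature config to compare their effect.
alternative_regularizers = [
    tfl.configs.RegularizerConfig(name="calib_laplacian", l2=1.0),
    tfl.configs.RegularizerConfig(name="calib_hessian", l2=1.0),
]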
def analyze_three_d_estimator(estimator, name):
# Extract validation metrics.
metric = estimator.evaluate(input_fn=val_input_fn)
print("Validation AUC: {}".format(metric["auc"]))
metric = estimator.evaluate(input_fn=test_input_fn)
print("Testing AUC: {}".format(metric["auc"]))
def three_d_pred(avg_ratings, num_reviews, dollar_rating):
results = estimator.predict(
tf.compat.v1.estimator.inputs.pandas_input_fn(
x=pd.DataFrame({
"avg_rating": avg_ratings,
"num_reviews": num_reviews,
"dollar_rating": dollar_rating,
}),
shuffle=False,
))
return [x["logistic"][0] for x in results]
figsize(11, 22)
plot_fns([("{} Estimated CTR".format(name), three_d_pred),
("CTR", click_through_rate)],
split_by_dollar=True)
Explanation: The calibrator is now smooth, and the overall estimated CTR better matches the ground truth. This is reflected both in the test metric and in the contour plots.
Partial Monotonicity for Categorical Calibration
So far we have used only two of the numeric features in the model. Here we add a third feature using a categorical calibration layer. Again we start by setting up helper functions for plotting and metric calculation.
End of explanation
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
tf.feature_column.categorical_column_with_vocabulary_list(
"dollar_rating",
vocabulary_list=["D", "DD", "DDD", "DDDD"],
dtype=tf.string,
default_value=0),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
reflects_trust_in=[
tfl.configs.TrustConfig(
feature_name="avg_rating", trust_type="edgeworth"),
],
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
),
tfl.configs.FeatureConfig(
name="dollar_rating",
lattice_size=2,
pwl_calibration_num_keypoints=4,
# Here we only specify one monotonicity:
# `D` restaurants have a smaller value than `DD` restaurants
monotonicity=[("D", "DD")],
),
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_three_d_estimator(tfl_estimator, "TF Lattice")
_ = save_and_visualize_lattice(tfl_estimator)
Explanation: To include the third feature, dollar_rating, recall that categorical features require a slightly different treatment in TFL, both as a feature column and as a feature config. Here we enforce the partial monotonicity constraint that the output for 'DD' restaurants should be larger than for 'D' restaurants when all other inputs are fixed. This is done with the monotonicity setting in the feature config.
End of explanation
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
tf.feature_column.categorical_column_with_vocabulary_list(
"dollar_rating",
vocabulary_list=["D", "DD", "DDD", "DDDD"],
dtype=tf.string,
default_value=0),
]
model_config = tfl.configs.CalibratedLatticeConfig(
output_calibration=True,
output_calibration_num_keypoints=5,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="output_calib_wrinkle", l2=0.1),
],
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
reflects_trust_in=[
tfl.configs.TrustConfig(
feature_name="avg_rating", trust_type="edgeworth"),
],
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
),
tfl.configs.FeatureConfig(
name="dollar_rating",
lattice_size=2,
pwl_calibration_num_keypoints=4,
# Here we only specify one monotonicity:
# `D` restaurants have a smaller value than `DD` restaurants
monotonicity=[("D", "DD")],
),
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_three_d_estimator(tfl_estimator, "TF Lattice")
_ = save_and_visualize_lattice(tfl_estimator)
Explanation: The categorical calibrator shows the preference of the model output: DD > D > DDD > DDDD, which is consistent with our setup. There is also a column for missing values. Although there are no missing features in the training and test data, the model provides an imputation value for missing values should they occur during downstream model serving.
We also plot the estimated CTR of this model conditioned on dollar_rating. All the required constraints are fulfilled in each of the slices.
Output Calibration
For all the TFL models we have trained so far, the lattice layer (shown as 'Lattice' in the model graph) directly outputs the model prediction. Sometimes we are not sure whether the lattice output should be rescaled to emit the model output:
the features are $log$ counts while the labels are counts.
the lattice is configured to have very few vertices, but the label distribution is relatively complicated.
In those cases we can add another calibrator between the lattice output and the model output to increase model flexibility. Let's add a calibration layer with 5 keypoints to the model we just built. We also add a regularizer for the output calibrator to keep the function smooth.
End of explanation |
5,891 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
tsam - 3. Example
Examples of the different methods used in the time series aggregation module (tsam)
Date
Step1: Input data
Read in time series from testdata.csv with pandas
Step2: Simple k-means aggregation
Initialize an aggregation class object with k-mean as method for eight typical days, without any integration of extreme periods. Alternative clusterMethod's are 'averaging','hierarchical' and 'k_medoids'.
Step3: Create the typical periods
Step4: Save typical periods to .csv file
Step5: Simple k-medoids aggregation of weeks
Initialize a time series aggregation which integrates the day with the minimal temperature and the day with the maximal load as periods.
Step6: Create the typical periods
Step7: Save typical periods to .csv file with weeks order by GHI to get later testing consistency
Step8: The aggregation can also be evaluated by indicators | Python Code:
%load_ext autoreload
%autoreload 2
import copy
import os
import pandas as pd
import matplotlib.pyplot as plt
import tsam.timeseriesaggregation as tsam
%matplotlib inline
Explanation: tsam - 3. Example
Examples of the different methods used in the time series aggregation module (tsam)
Date: 04.01.2019
Author: Leander Kotzur
Import pandas and the relevant time series aggregation class
End of explanation
raw = pd.read_csv('testdata.csv', index_col = 0)
Explanation: Input data
Read in time series from testdata.csv with pandas
End of explanation
aggregation = tsam.TimeSeriesAggregation(raw, noTypicalPeriods = 8, hoursPerPeriod = 24,
clusterMethod = 'k_means')
Explanation: Simple k-means aggregation
Initialize an aggregation class object with k-mean as method for eight typical days, without any integration of extreme periods. Alternative clusterMethod's are 'averaging','hierarchical' and 'k_medoids'.
End of explanation
typPeriods = aggregation.createTypicalPeriods()
Explanation: Create the typical periods
End of explanation
typPeriods.to_csv(os.path.join('results','testperiods_kmeans.csv'))
Explanation: Save typical periods to .csv file
End of explanation
aggregation = tsam.TimeSeriesAggregation(raw, noTypicalPeriods = 8, hoursPerPeriod = 24*7,
clusterMethod = 'k_medoids', )
Explanation: Simple k-medoids aggregation of weeks
Initialize a time series aggregation which integrates the day with the minimal temperature and the day with the maximal load as periods.
End of explanation
typPeriods = aggregation.createTypicalPeriods()
Explanation: Create the typical periods
End of explanation
typPeriods.reindex(typPeriods['GHI'].unstack().sum(axis=1).sort_values().index,
level=0).to_csv(os.path.join('results','testperiods_kmedoids.csv'))
Explanation: Save typical periods to .csv file with weeks order by GHI to get later testing consistency
End of explanation
aggregation.accuracyIndicators()
Explanation: The aggregation can also be evaluated by indicators
End of explanation |
5,892 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Eclipse model
The eclipse model, pytransit.EclipseModel, can be used to model a secondary eclipse. The model is similar to pytransit.UniformModel, but the eclipse occurs correctly where it should based on the orbital eccentricity and argument of periastron, and the model takes the planet-star flux ratio as an additional free parameter. The model is parallelised using numba, and the number of threads can be set using the NUMBA_NUM_THREADS environment variable.
Step1: Model initialization
The eclipse model doesn't take any special initialization arguments, so the initialization is straightforward.
Step2: Data setup
Homogeneous time series
The model needs to be set up by calling set_data() before it can be used. At its simplest, set_data takes the mid-exposure times of the time series to be modelled.
Step3: Model use
Evaluation
The transit model can be evaluated using either a set of scalar parameters, a parameter vector (1D ndarray), or a parameter vector array (2D ndarray). The model flux is returned as a 1D ndarray in the first two cases, and a 2D ndarray in the last (one model per parameter vector).
tm.evaluate_ps(k, t0, p, a, i, e=0, w=0) evaluates the model for a set of scalar parameters, where k is the radius ratio, t0 the zero epoch, p the orbital period, a the semi-major axis divided by the stellar radius, i the inclination in radians, e the eccentricity, and w the argument of periastron. Eccentricity and argument of periastron are optional, and omitting them defaults to a circular orbit.
tm.evaluate_pv(pv) evaluates the model for a 1D parameter vector, or 2D array of parameter vectors. In the first case, the parameter vector should be array-like with elements [k, t0, p, a, i, e, w]. In the second case, the parameter vectors should be stored in a 2d ndarray with shape (npv, 7) as
[[k1, t01, p1, a1, i1, e1, w1],
[k2, t02, p2, a2, i2, e2, w2],
...
[kn, t0n, pn, an, in, en, wn]]
The reason for the different options is that the model implementations may have optimisations that make the model evaluation for a set of parameter vectors much faster than if computing them separately. This is especially the case for the OpenCL models.
Step4: Supersampling
The eclipse model can be supersampled by setting the nsamples and exptimes arguments in set_data.
Step5: Heterogeneous time series
PyTransit allows for heterogeneous time series, that is, a single time series can contain several individual light curves (with, e.g., different time cadences and required supersampling rates) observed (possibly) in different passbands.
If a time series contains several light curves, it also needs the light curve indices for each exposure. These are given through lcids argument, which should be an array of integers. If the time series contains light curves observed in different passbands, the passband indices need to be given through pbids argument as an integer array, one per light curve. Supersampling can also be defined on per-light curve basis by giving the nsamplesand exptimes as arrays with one value per light curve.
For example, a set of three light curves, two observed in one passband and the third in another passband
times_1 (lc = 0, pb = 0, sc) = [1, 2, 3, 4]
times_2 (lc = 1, pb = 0, lc) = [3, 4]
times_3 (lc = 2, pb = 1, sc) = [1, 5, 6]
Would be set up as
tm.set_data(time = [1, 2, 3, 4, 3, 4, 1, 5, 6],
lcids = [0, 0, 0, 0, 1, 1, 2, 2, 2],
pbids = [0, 0, 1],
nsamples = [ 1, 10, 1],
exptimes = [0.1, 1.0, 0.1])
Example | Python Code:
%pylab inline
sys.path.append('..')
from pytransit import EclipseModel
seed(0)
times_sc = linspace(0.5, 2.5, 5000) # Short cadence time stamps
times_lc = linspace(0.5, 2.5, 500) # Long cadence time stamps
k, t0, p, a, i, e, w = 0.1, 1., 2.0, 4.2, 0.5*pi, 0.25, 0.4*pi
ns = 50
ks = normal(k, 0.01, ns)
t0s = normal(t0, 1e-5, ns)
ps = normal(p, 1e-5, ns)
aas = normal(a, 0.01, ns)
iis = normal(i, 1e-5, ns)
es = uniform(0, 0.3, ns)
ws = uniform(0, 2*pi, ns)
frs = normal(0.01, 1e-5, ns)
Explanation: Eclipse model
The eclipse model, pytransit.EclipseModel, can be used to model a secondary eclipse. The model is similar to pytransit.UniformModel, but the eclipse occurs correctly where it should based on the orbital eccentricity and argument of periastron, and the model takes the planet-star flux ratio as an additional free parameter. The model is parallelised using numba, and the number of threads can be set using the NUMBA_NUM_THREADS environment variable.
End of explanation
tm = EclipseModel()
Explanation: Model initialization
The eclipse model doesn't take any special initialization arguments, so the initialization is straightforward.
End of explanation
tm.set_data(times_sc)
Explanation: Data setup
Homogeneous time series
The model needs to be set up by calling set_data() before it can be used. At its simplest, set_data takes the mid-exposure times of the time series to be modelled.
End of explanation
def plot_transits(tm, fmt='k', tc_label=True):
fig, axs = subplots(1, 2, figsize = (13,3), constrained_layout=True, sharey=True)
flux = tm.evaluate_ps(k, t0, p, a, i, e, w)
axs[0].plot(tm.time, flux, fmt)
axs[0].set_title('Individual parameters')
flux = tm.evaluate(ks, t0s, ps, aas, iis, es, ws)
axs[1].plot(tm.time, flux.T, fmt, alpha=0.2)
axs[1].set_title('Parameter vector')
if tc_label:
for ax in axs:
ax.axvline(t0, c='k', ls='--')
ax.text(t0-0.01, 0.999, 'Transit centre', rotation=90, va='top', ha='right')
setp(axs[0], ylabel='Normalised flux')
setp(axs, xlabel='Time [days]', xlim=tm.time[[0,-1]])
tm.set_data(times_sc)
plot_transits(tm)
Explanation: Model use
Evaluation
The transit model can be evaluated using either a set of scalar parameters, a parameter vector (1D ndarray), or a parameter vector array (2D ndarray). The model flux is returned as a 1D ndarray in the first two cases, and a 2D ndarray in the last (one model per parameter vector).
tm.evaluate_ps(k, t0, p, a, i, e=0, w=0) evaluates the model for a set of scalar parameters, where k is the radius ratio, t0 the zero epoch, p the orbital period, a the semi-major axis divided by the stellar radius, i the inclination in radians, e the eccentricity, and w the argument of periastron. Eccentricity and argument of periastron are optional, and omitting them defaults to a circular orbit.
tm.evaluate_pv(pv) evaluates the model for a 1D parameter vector, or 2D array of parameter vectors. In the first case, the parameter vector should be array-like with elements [k, t0, p, a, i, e, w]. In the second case, the parameter vectors should be stored in a 2d ndarray with shape (npv, 7) as
[[k1, t01, p1, a1, i1, e1, w1],
[k2, t02, p2, a2, i2, e2, w2],
...
[kn, t0n, pn, an, in, en, wn]]
The reason for the different options is that the model implementations may have optimisations that make the model evaluation for a set of parameter vectors much faster than if computing them separately. This is especially the case for the OpenCL models.
End of explanation
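# Editor's note: an illustrative example (not from the original notebook) of the
# parameter-vector interface described above. Based on the text, evaluate_pv takes a
# 2D array with one row per parameter vector [k, t0, p, a, i, e, w] and returns one
# model per row; treat this as a sketch rather than verified library usage.
pvs = array([[0.10, 1.0, 2.0, 4.2, 0.5*pi, 0.25, 0.4*pi],
             [0.11, 1.0, 2.0, 4.2, 0.5*pi, 0.00, 0.0]])
fluxes = tm.evaluate_pv(pvs)
print(fluxes.shape)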
tm.set_data(times_lc, nsamples=10, exptimes=0.01)
plot_transits(tm)
Explanation: Supersampling
The eclipse model can be supersampled by setting the nsamples and exptimes arguments in set_data.
End of explanation
times_1 = linspace(1.5, 2.0, 500)
times_2 = linspace(2.0, 2.5, 10)
times = concatenate([times_1, times_2])
lcids = concatenate([full(times_1.size, 0, 'int'), full(times_2.size, 1, 'int')])
nsamples = [1, 10]
exptimes = [0, 0.0167]
tm.set_data(times, lcids, nsamples=nsamples, exptimes=exptimes)
plot_transits(tm, 'k.-', tc_label=False)
Explanation: Heterogeneous time series
PyTransit allows for heterogeneous time series, that is, a single time series can contain several individual light curves (with, e.g., different time cadences and required supersampling rates) observed (possibly) in different passbands.
If a time series contains several light curves, it also needs the light curve indices for each exposure. These are given through lcids argument, which should be an array of integers. If the time series contains light curves observed in different passbands, the passband indices need to be given through pbids argument as an integer array, one per light curve. Supersampling can also be defined on per-light curve basis by giving the nsamplesand exptimes as arrays with one value per light curve.
For example, a set of three light curves, two observed in one passband and the third in another passband
times_1 (lc = 0, pb = 0, sc) = [1, 2, 3, 4]
times_2 (lc = 1, pb = 0, lc) = [3, 4]
times_3 (lc = 2, pb = 1, sc) = [1, 5, 6]
Would be set up as
tm.set_data(time = [1, 2, 3, 4, 3, 4, 1, 5, 6],
lcids = [0, 0, 0, 0, 1, 1, 2, 2, 2],
pbids = [0, 0, 1],
nsamples = [ 1, 10, 1],
exptimes = [0.1, 1.0, 0.1])
Example: two light curves with different cadences
End of explanation |
5,893 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this notebook we will use the Monte Carlo method to find the area under a curve, so first let's define a function
$$f(x) = x^2-4x+5$$
Step1: Now, we know the probability of a random point being below the curve is equal to
$$P_{curve}=\dfrac{A_{curve}}{A_{rectanle}}$$
Where $A_{rectangle}$ is the area of the plot in the given interval, so let's try to integrate it from 0 to 10
Step2: Knowing the ratio of points under the curve, we can now calculate the integral as
$$P_{curve}A_{rectangle} = A_{cruve}$$
If we take the integral
$$\int_0^{10}x^2-4x+5$$
We have $$\dfrac{x^3}{3}-2x^2+5x\big\lvert_0^{10} = \dfrac{10^3}{3}-200+50 = 333.33 - 200 + 50 = 183.33$$
Which is close to the real area, now, let's see how many points we need
Step3: As we can see the more points we sample, the more accurate our approximation is to the real values, now what about if we have segments of our curve under the $x$ axis? Let's look at this example
$$g(x) = x^2-4x-8$$
Step4: We can see the area is around 184.0, so let's take the true integral to see if we are close to the real value
We have $$\int_0^{10}x^2-4x+5 = \dfrac{x^3}{3}-2x^2-8x\big\lvert_0^{10} = \dfrac{10^3}{3}-200-80 = 333.33 - 200 - 80 = 53.33$$
So, we are off by a lot, that's because we are adding the area under the $x$ axis instead of subtracting it. To do this, we are going to first, find the point where $g(x) = 0$ which is $x = 2+\sqrt{12} = 2 + 2\sqrt{3}$ | Python Code:
f = lambda x:x**2-4*x+5
x = range(0, 11, 1)
y = [f(v) for v in x]
plt.plot(y)
Explanation: In this notebook we will use the Monte Carlo method to find the area under a curve, so first let's define a function
$$f(x) = x^2-4x+5$$
End of explanation
#Will use 3000 points
number_points=3000
#We want to see the points
points=[]
below=0
for p in range(0, number_points):
x,y=(random.uniform(0,10), random.uniform(0, 70))
# If the function for x is greater then the random y, then the point is under the curve
if f(x) >= y:
below +=1
points.append((x,y))
ratio = below/number_points
color_func = lambda x,y: (1,0,0) if f(x)>=y else (0,0,1,0.5)
colors = [color_func(x,y) for (x,y) in points]
plt.ylim(0,70)
plt.xlim(0,10)
plt.scatter(*zip(*points), color=colors)
plt.show()
print("Ratio of points under the curve: %.4f" % ratio)
print("Approximated area under the curve: %.4f" % (ratio*700))
Explanation: Now, we know the probability of a random point being below the curve is equal to
$$P_{curve}=\dfrac{A_{curve}}{A_{rectanle}}$$
Where $A_{rectangle}$ is the area of the plot in the given interval, so let's try to integrate it from 0 to 10
End of explanation
def monte_carlo_integration(f, number_points, xlims, ylims):
below=0
for p in range(0, number_points):
x,y=(random.uniform(*xlims), random.uniform(*ylims))
# If the function for x is greater then the random y, then the point is under the curve
if y <= f(x):
below +=1
ratio = below/number_points
area = ratio * (xlims[1]-xlims[0]) * (ylims[1]-ylims[0])
return (ratio, area)
total_points = 10000
step = 100
estimated = [monte_carlo_integration(f, i, (0,10), (0,70))[1]
for i in range(step,total_points, step)]
mean = sum(estimated)/len(estimated)
print("Mean Approximated value %.4f" % mean)
plt.figure(figsize=(8,5))
plt.plot(estimated)
plt.hlines(183.3, 0, total_points/step, 'g')
plt.hlines(mean, 0 , total_points/step, 'r')
plt.legend(['Approximation', 'Real', 'Mean'], loc='best')
plt.ylabel("Estimated area")
plt.xlabel("Points used (x100)")
print("Approximated Integral Value: %.4f" % mean)
Explanation: Knowing the ratio of points under the curve, we can now calculate the integral as
$$P_{curve}A_{rectangle} = A_{cruve}$$
If we take the integral
$$\int_0^{10}x^2-4x+5$$
We have $$\dfrac{x^3}{3}-2x^2+5x\big\lvert_0^{10} = \dfrac{10^3}{3}-200+50 = 333.33 - 200 + 50 = 183.33$$
Which is close to the real area, now, let's see how many points we need
End of explanation
g = lambda x:x**2-4*x-8
x = range(0, 11, 1)
y = [g(v) for v in x]
plt.plot(y)
#Will use 3000 points
number_points=3000
#We want to see the points
points=[]
below=0
for p in range(0, number_points):
x,y=(random.uniform(0,10), random.uniform(-20, 60))
# If the function for x is greater then the random y, then the point is under the curve
if f(x) > 0 and 0 < y < f(x):
below +=1
if f(x) < 0 and 0 > y >= f(x):
below += 1
points.append((x,y))
ratio = below/number_points
color_func = lambda x,y: (1,0,0) if (g(x) > 0 and 0 <= y <= g(x)) or (g(x) <= 0 and 0 > y >= g(x)) else (0,0,1,0.5)
colors = [color_func(x,y) for (x,y) in points]
plt.ylim(-20,60)
plt.xlim(0,10)
plt.scatter(*zip(*points), color=colors)
plt.show()
print("Ratio of points under the curve: %.4f" % ratio)
print("Approximated area under the curve: %.4f" % (ratio*800))
Explanation: As we can see the more points we sample, the more accurate our approximation is to the real values, now what about if we have segments of our curve under the $x$ axis? Let's look at this example
$$g(x) = x^2-4x-8$$
End of explanation
#Let's adjust our function to deal with points under the x axis
def monte_carlo_integration(f, number_points, xlims, ylims):
below=0
for p in range(0, number_points):
x,y=(random.uniform(*xlims), random.uniform(*ylims))
# If the function for x is greater then the random y, then the point is under the curve
if f(x) > 0 and 0 < y <= f(x):
below += 1
if f(x) < 0 and 0 > y >= f(x):
below += 1
ratio = below/number_points
area = ratio * (xlims[1]-xlims[0]) * (ylims[1]-ylims[0])
return (ratio, area)
#Calculating the area below and above the x axis (before and after the root)
_, area_before = monte_carlo_integration(g, 1000, (0,2+(12**(1/2))), (-20,60))
_, area_after = monte_carlo_integration(g, 1000, (2+(12**(1/2)),10), (-20,60))
area = area_after-area_before if area_after >= area_before else area_before-area_after
print("Estimated area under the curve: %.4f" % area)
def monte_carlo_integration_helper(f, number_points, xlims, ylims, root):
_, area_before = monte_carlo_integration(f, number_points, (xlims[0],root), ylims)
_, area_after = monte_carlo_integration(f, number_points, (root,xlims[1]), ylims)
return area_after-area_before if area_after >= area_before else area_before-area_after
total_points = 10000
step = 100
estimated = [monte_carlo_integration_helper(g, i, (0,10), (-20,70), 2+(12**(1/2)))
for i in range(step,total_points, step)]
mean = sum(estimated)/len(estimated)
print("Mean Approximated value %.4f" % mean)
plt.figure(figsize=(8,2))
plt.plot(estimated)
plt.hlines(53.3, 0, total_points/step, 'g')
plt.hlines(mean, 0 , total_points/step, 'r')
plt.legend(['Approximation', 'Real', 'Mean'], loc='best')
plt.ylabel("Estimated area")
plt.xlabel("Points used (x100)")
print("Approximated Integral Value: %.4f" % mean)
Explanation: We can see the area is around 184.0, so let's take the true integral to see if we are close to the real value
We have $$\int_0^{10}x^2-4x+5 = \dfrac{x^3}{3}-2x^2-8x\big\lvert_0^{10} = \dfrac{10^3}{3}-200-80 = 333.33 - 200 - 80 = 53.33$$
So, we are off by a lot, that's because we are adding the area under the $x$ axis instead of subtracting it. To do this, we are going to first, find the point where $g(x) = 0$ which is $x = 2+\sqrt{12} = 2 + 2\sqrt{3}$
End of explanation |
5,894 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Where to Get Help
Step1: What I wanted to do was build a nested list, x is supposed to look like
Step2: To see what's happening lets rewrite the code to make the issue even clearer
Step3: So basically the issue is was that when we write [2]*3 and try to add it to a list python isn't making three separate lists, rather, its adding the same list three times!
The fix then, we need to make sure Python knows we want three separate lists, which we can do with a for loop | Python Code:
x = [ [2] * 3 ] * 3
x[0][0] = "ZZ"
print(*x, sep="\n")
Explanation: Where to Get Help: Homework Assignment
You need to be think a little bit about your search, the better that is the more likely you are to find what you want. Let me give you a real example I stuggled with:
End of explanation
out=[[0]*3]*3
print( id(out[0]) )
print( id(out[1]) ) # want to know what "id" is? Why not read the documentation!
Explanation: What I wanted to do was build a nested list, x is supposed to look like:
[[2, 2, 2],
[2, 2, 2],
[2, 2, 2]]
And then I wanted to change the value at index [0][0] but notice that instead of a single value changing the first item in every list changes. I wanted:
[["ZZ", 2, 2],
[ 2, 2, 2],
[ 2, 2, 2]]
But I got:
[["ZZ", 2, 2],
["ZZ", 2, 2],
["ZZ", 2, 2]]
Wierd right?
Your homework for this week it to search google for an awnser to this problem. Why doesn't X behave like I want it too and what to I need to make it work?
I know for a fact the awnser is on stackoverflow already (and probably hundreds of other websites too), so this is a test of your googling skills:
What search query is likely to return the information you need?
This excerise is a really useful I think. Through-out your programming carrear you are going to stumped on tough questions. In many cases, the fastest way to solve your issue is going to be google search. BUT to get the most out of a search engine is going to require you to carefully think about your problem and what query might contain the awnser.
Possible Solution
Google to the rescue! Okay so let's think about this problem a little bit; what "buzz words” might we need to feed google in order to get the answer we want?
Lets try...
Python
Err...yeah, I think we are going to need to be a bit more specific than that.
Python nested list
This search is a bit better, I mean, from the context alone Google has probably figured out that we are not interested in snakes! But again, still probably not specific enough.
Python nested list sublist assignment
This query seems decent right? It seems pretty descriptive of the problem, afterall.
Lets run it!
Google Search (15th June 2017)
The third hit sounds promising, lets go there and see.
...And sure enough, it sounds like someone is dealing with the EXACT ISSUE we had. The top-voted reply not only explains the issue but also the fix.
Basically, the issue is that when you write [ [0]*3] *3 ] we are actually storing a reference to the same list.
End of explanation
a = [2] * 3
x = [a] * 3
print(*x, sep="\n")
print()
a[0] = "ZZ"
print(*x, sep="\n")
Explanation: To see what's happening lets rewrite the code to make the issue even clearer:
End of explanation
x = []
for i in range(3):
x.append([2]*3)
print(*x, sep="\n")
print()
x[0][0] = "ZZ"
print(*x, sep="\n")
Explanation: So basically the issue is was that when we write [2]*3 and try to add it to a list python isn't making three separate lists, rather, its adding the same list three times!
The fix then, we need to make sure Python knows we want three separate lists, which we can do with a for loop:
End of explanation |
5,895 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1A.e - Enoncé 23 octobre 2018 (2)
Correction du second énoncé de l'examen du 23 octobre 2018. L'énoncé propose une méthode pour renseigner les valeurs manquantes dans une base de deux variables.
Step1: On sait d'après les dernières questions qu'il faudra tout répéter plusieurs fois. On prend le soin d'écrire chaque question dans une fonction.
Q1 - échantillon aléatoire
Générer un ensemble de $N=1000$ couples aléatoires $(X_i,Y_i)$ qui vérifient
Step2: Remarque
Step3: Remarque 1
Step4: Remarque 1
Step5: Q5 - x le plus proche
On considère le point de coordonnées $(x, y)$, écrire une fonction qui retourne le point de la matrice $M$ dont l'abscisse est la plus proche de $x$.
Step6: C'est beaucoup plus rapide car on utilise les fonctions numpy.
Q6 - matrice m3
Pour chaque $y$ manquant, on utilise la fonction précédente pour retourner le point dont l'abscisse et la plus proche et on remplace l'ordonnée $y$ par celle du point trouvé. On fait de même avec les $x$ manquant.
On construit la matrice ainsi $M_3$ à partir de $M_1$.
Step7: Q7 - norme
On a deux méthodes pour compléter les valeurs manquantes, quelle est la meilleure ? Il faut vérifier numériquement en comparant $\parallel M-M_2 \parallel^2$ et $\parallel M-M_3 \parallel^2$.
Step8: Remarque
Step9: Q9 - plus de valeurs manquantes
Et si on augmente le nombre de valeurs manquantes, l'écart se creuse-t-il ou se réduit -il ? Montrez-le numériquement.
Step10: Plus il y a de valeurs manquantes, plus le ratio tend vers 1 car il y a moins d'informations pour compléter les valeurs manquantes autrement que par la moyenne. Il y a aussi plus souvent des couples de valeurs manquantes qui ne peuvent être remplacés que par la moyenne.
Q10
Votre fonction de la question 5 a probablement un coût linéaire. Il est probablement possible de faire mieux, si oui, il faut préciser comment et ce que cela implique sur les données. Il ne faut pas l'implémenter.
Il suffit de trier le tableau et d'utiliser une recherche dichotomique. Le coût du tri est négligeable par rapport au nombre de fois que la fonction plus_proche est utilisée. | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: 1A.e - Enoncé 23 octobre 2018 (2)
Correction du second énoncé de l'examen du 23 octobre 2018. L'énoncé propose une méthode pour renseigner les valeurs manquantes dans une base de deux variables.
End of explanation
import numpy.random as rnd
import numpy
def random_mat(N):
mat = numpy.zeros((N, 2))
mat[:, 0] = rnd.normal(size=(N,))
mat[:, 1] = mat[:, 0] * 2 + rnd.normal(size=(N,))
return mat
N = 1000
mat = random_mat(N)
mat[:5]
Explanation: On sait d'après les dernières questions qu'il faudra tout répéter plusieurs fois. On prend le soin d'écrire chaque question dans une fonction.
Q1 - échantillon aléatoire
Générer un ensemble de $N=1000$ couples aléatoires $(X_i,Y_i)$ qui vérifient :
$X_i$ suit une loi normale de variance 1.
$Y_i = 2 X_i + \epsilon_i$ où $\epsilon_i$ suit une loi normale de variance 1.
End of explanation
import random
def build_m1(mat, n=20):
mat = mat.copy()
positions = []
while len(positions) < n:
h = random.randint(0, mat.shape[0] * mat.shape[1] - 1)
pos = h % mat.shape[0], h // mat.shape[0]
if pos in positions:
# La position est déjà tirée.
continue
positions.append(pos)
mat[pos] = numpy.nan
return mat, positions
m1, positions = build_m1(mat)
p = positions[0][0]
m1[max(p-2, 0):min(p+3, mat.shape[0])]
Explanation: Remarque : Un élève a retourné cette réponse, je vous laisse chercher pourquoi ce code produit deux variables tout-à-fait décorrélées.
def random_mat(N=1000):
A = np.random.normal(0,1,(N,2))
A[:,1] = 2*A[:,1] + np.random.normal(0,1,N)/10
return A
Cela peut se vérifier en calculant la corrélation.
Remarque 2 : Un élève a généré le nuage $X + 2\epsilon$ ce qui produit un nuage de points dont les deux variable sont moins corrélées. Voir à la fin pour plus de détail.
Q2 - matrice m1
On définit la matrice $M \in \mathbb{M}_{N,2}(\mathbb{R})$ définie par les deux vecteurs colonnes $(X_i)$ et $(Y_i)$. Choisir aléatoirement 20 valeurs dans cette matrice et les remplacer par numpy.nan. On obtient la matrice $M_1$.
End of explanation
def mean_no_nan(mat):
res = []
for i in range(mat.shape[1]):
ex = numpy.mean(mat[~numpy.isnan(mat[:, i]), i])
res.append(ex)
return numpy.array(res)
mean_no_nan(m1)
Explanation: Remarque 1: l'énoncé ne précisait pas s'il fallait choisir les valeurs aléatoires sur une ou deux colonnes, le faire sur une seule colonne est sans doute plus rapide et ne change rien aux conclusions des dernières questions.
Remarque 2: il ne faut pas oublier de copier la matrice mat.copy(), dans le cas contraire, la fonction modifie la matrice originale. Ce n'est pas nécessairement un problème excepté pour les dernières questions qui requiert de garder cette matrice.
Remarque 3: l'énoncé ne précisait pas avec ou sans remise. L'implémentation précédente le fait sans remise.
Q3 - moyenne
Calculer $\mathbb{E}{X} = \frac{1}{N}\sum_i^N X_i$ et $\mathbb{E}Y = \frac{1}{N}\sum_i^N Y_i$. Comme on ne tient pas compte des valeurs manquantes, les moyennes calculées se font avec moins de $N$ termes. Si on définit $V_x$ et $V_y$ l'ensemble des valeurs non manquantes, on veut calculer $\mathbb{E}{X} = \frac{\sum_{i \in V_x} X_i}{\sum_{i \in V_x} 1}$ et $\mathbb{E}Y = \frac{\sum_{i \in V_y} Y_i}{\sum_{i \in V_y} 1}$.
End of explanation
def build_m2(mat):
means = mean_no_nan(mat)
m1 = mat.copy()
for i in range(len(means)):
m1[numpy.isnan(m1[:, i]), i] = means[i]
return m1
m2 = build_m2(m1)
m2[max(p-2, 0):min(p+3, mat.shape[0])]
Explanation: Remarque 1 : il était encore plus simple d'utiliser la fonction nanmean.
Remarque 2 : Il fallait diviser par le nombre de valeurs non nulles et non le nombre de lignes de la matrice.
Q4 - matrice m2
Remplacer les valeurs manquantes de la matrice $M_1$ par la moyenne de leurs colonnes respectives. On obtient la matrice $M_2$.
End of explanation
def plus_proche(mat, x, col, colnan):
mini = None
for k in range(mat.shape[0]):
if numpy.isnan(mat[k, col]) or numpy.isnan(mat[k, colnan]):
continue
d = abs(mat[k, col] - x)
if mini is None or d < mini:
mini = d
best = k
return best
plus_proche(m1, m1[10, 0], 0, 1)
def plus_proche_rapide(mat, x, col, colnan):
mini = None
na = numpy.arange(0, mat.shape[0])[~(numpy.isnan(mat[:, col]) | numpy.isnan(mat[:, colnan]))]
diff = numpy.abs(mat[na, col] - x)
amin = numpy.argmin(diff)
best = na[amin]
return best
plus_proche_rapide(m1, m1[10, 0], 0, 1)
%timeit plus_proche(m1, m1[10, 0], 0, 1)
%timeit plus_proche_rapide(m1, m1[10, 0], 0, 1)
Explanation: Q5 - x le plus proche
On considère le point de coordonnées $(x, y)$, écrire une fonction qui retourne le point de la matrice $M$ dont l'abscisse est la plus proche de $x$.
End of explanation
def build_m3(mat):
mat = mat.copy()
for i in range(mat.shape[0]):
for j in range(mat.shape[1]):
if numpy.isnan(mat[i, j]):
col = 1-j
if numpy.isnan(mat[i, col]):
# deux valeurs nan, on utilise la moyenne
mat[i, j] = numpy.mean(mat[~numpy.isnan(mat[:,j]), j])
else:
pos = plus_proche_rapide(mat, mat[i, col], col, j)
mat[i, j] = mat[pos, j]
return mat
m3 = build_m3(m1)
m3[max(p-2, 0):min(p+3, mat.shape[0])]
Explanation: C'est beaucoup plus rapide car on utilise les fonctions numpy.
Q6 - matrice m3
Pour chaque $y$ manquant, on utilise la fonction précédente pour retourner le point dont l'abscisse et la plus proche et on remplace l'ordonnée $y$ par celle du point trouvé. On fait de même avec les $x$ manquant.
On construit la matrice ainsi $M_3$ à partir de $M_1$.
End of explanation
def distance(m1, m2):
d = m1.ravel() - m2.ravel()
return d @ d
d2 = distance(mat, m2)
d3 = distance(mat, m3)
d2, d3
Explanation: Q7 - norme
On a deux méthodes pour compléter les valeurs manquantes, quelle est la meilleure ? Il faut vérifier numériquement en comparant $\parallel M-M_2 \parallel^2$ et $\parallel M-M_3 \parallel^2$.
End of explanation
def repetition(N=1000, n=20, nb=10):
res = []
for i in range(nb):
mat = random_mat(N)
m1, _ = build_m1(mat, n)
m2 = build_m2(m1)
m3 = build_m3(m1)
d2, d3 = distance(mat, m2), distance(mat, m3)
res.append((d2, d3))
return numpy.array(res)
repetition()
Explanation: Remarque : Un élève a répondu :
On obtient (norme(M-M2))^2 = 98.9707 et (norme(M-M3))^2 = 98.2287 : la meilleure méthode semble être la seconde (Q6).
La différence n'est significative et cela suggère une erreur de calcul. Cela doit mettre la puce à l'oreille.
Q8 - répétition
Une experience réussie ne veut pas dire que cela fonctionne. Recommencer 10 fois en changeant le nuages de points et les valeurs manquantes ajoutées.
End of explanation
diff = []
ns = []
for n in range(100, 1000, 100):
print(n)
res = repetition(n=n, nb=10)
diff.append(res.mean(axis=0) / n)
ns.append(n)
diff = numpy.array(diff)
diff[:5]
%matplotlib inline
import pandas
df = pandas.DataFrame(diff, columns=["d2", "d3"])
df['N'] = ns
df = df.set_index('N')
df["ratio"] = df["d2"] / df["d3"]
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 2, figsize=(10, 4))
df[["d2", "d3"]].plot(ax=ax[0])
df[["ratio"]].plot(ax=ax[1])
ax[0].set_title("d2 et d3\nErreur moyenne par valeur manquante")
ax[1].set_title("d2 / d3");
Explanation: Q9 - plus de valeurs manquantes
Et si on augmente le nombre de valeurs manquantes, l'écart se creuse-t-il ou se réduit -il ? Montrez-le numériquement.
End of explanation
def random_mat(N, alpha):
mat = numpy.zeros((N, 2))
mat[:, 0] = rnd.normal(size=(N,))
mat[:, 1] = mat[:, 0] * alpha + rnd.normal(size=(N,))
return mat
rows = []
for alpha in [0.01 * h for h in range(0, 500)]:
m = random_mat(1000, alpha)
m1, _ = build_m1(m, 20)
m2 = build_m2(m1)
m3 = build_m3(m1)
d2, d3 = distance(m, m2), distance(m, m3)
cc = numpy.corrcoef(m.T)[0, 1]
rows.append(dict(corr=cc, d2=d2**0.5, d3=d3**0.5))
df = pandas.DataFrame(rows)
df.tail()
ax = df.sort_values("corr").plot(x="corr", y=["d2", "d3"])
ax.set_title("Evolution de l'erreur en fonction de la corrélation");
Explanation: The more missing values there are, the closer the ratio gets to 1, because there is less information available to fill in a missing value other than with the mean. Pairs where both values are missing, which can only be replaced by the mean, also become more frequent.
Q10
Your function from question 5 probably has a linear cost. It is probably possible to do better; if so, explain how and what that implies about the data. It should not be implemented.
It is enough to sort the array and use a binary search. The cost of the sort is negligible compared to the number of times the plus_proche function is called.
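Although the question asks only for a description, here is a minimal sketch of the idea (assuming the column values have already been extracted into a sorted 1D array sorted_vals; the hypothetical plus_proche_trie helper returns an index into that sorted array, so mapping back to the original rows, e.g. via numpy.argsort, would still be needed):
def plus_proche_trie(sorted_vals, v):
    # binary search, O(log n): find the value closest to v in a sorted 1D array
    i = numpy.searchsorted(sorted_vals, v)
    if i == 0:
        return 0
    if i == len(sorted_vals):
        return len(sorted_vals) - 1
    # compare the two neighbours around the insertion point
    return i if abs(sorted_vals[i] - v) < abs(sorted_vals[i - 1] - v) else i - 1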
End of explanation |
5,896 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Before start
Step1: Getting the data
We're not going into details here
Step2: Defining the input function
If we look at the image above we can see that there are two main parts in the diagram, an input function interacting with data files and the Estimator interacting with the input function and checkpoints.
This means that the estimator doesn't know about data files, it knows about input functions. So if we want to interact with a dataset we need to create an input function that interacts with it; in this example we are creating an input function for the train and test datasets.
You can learn more about input functions here
Step3: Creating an experiment
After an experiment is created (by passing an Estimator and inputs for training and evaluation), an Experiment instance knows how to invoke training and eval loops in a sensible fashion for distributed training. More about it here
Step4: Run the experiment
Step5: Running a second time
Okay, the model is definitely not good... But check the OUTPUT_DIR path: you'll see that an output_dir folder was created and that there are a lot of files there that were created automatically by TensorFlow!
So, most of these files are actually checkpoints; this means that if we run the experiment again with the same model_dir it will just load the checkpoint and start from there instead of starting all over again!
This means that
Step6: Tensorboard
Another thing we get for free is tensorboard.
If you run | Python Code:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# our model
import model as m
# tensorflow
import tensorflow as tf
print(tf.__version__) #tested with tf v1.2
from tensorflow.contrib import learn
from tensorflow.contrib.learn.python.learn import learn_runner
from tensorflow.python.estimator.inputs import numpy_io
# MNIST data
from tensorflow.examples.tutorials.mnist import input_data
# Numpy
import numpy as np
# Enable TensorFlow logs
tf.logging.set_verbosity(tf.logging.INFO)
Explanation: Before start: make sure you deleted the output_dir folder from this path
Some things we get for free by using Estimators
Estimators are a high-level abstraction (interface) that supports all the basic operations you need to support an ML model on top of TensorFlow.
Estimators:
* provide a simple interface for users of canned model architectures: Training, evaluation, prediction, export for serving.
* provide a standard interface for model developers
* drastically reduce the amount of user code required. This avoids bugs and speeds up development significantly.
* enable building production services against a standard interface.
* using the Experiment abstraction gives you free data parallelism (more here)
The Estimator's interface includes: training, evaluation, prediction, and export for serving.
Image from Effective TensorFlow for Non-Experts (Google I/O '17)
You can use an already implemented estimator (a canned estimator) or implement your own (a custom estimator).
This tutorial is not focused on how to build your own estimator; we're using a custom estimator that implements a CNN classifier for the MNIST dataset, defined in the model.py file, but we're not going into details about how it's implemented.
Here we're going to show how Estimators make your life easier: once you have an Estimator-based model, it is very simple to change the model and compare results.
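For contrast, a canned estimator needs almost no model code. A minimal sketch (assuming a TF 1.x release where tf.estimator.DNNClassifier and tf.feature_column are available outside contrib; in earlier 1.x versions they live under tf.contrib):
feature_columns = [tf.feature_column.numeric_column('x', shape=[28, 28, 1])]
canned = tf.estimator.DNNClassifier(
    hidden_units=[256, 64],          # two fully connected layers
    feature_columns=feature_columns,
    n_classes=10,
    model_dir='output_dir/canned')   # gets its own checkpoints and TensorBoard logs
# Note: DNNClassifier expects integer class labels, so the one-hot MNIST labels
# used below would need an argmax before being fed through an input function.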
Having a look at the code and running the experiment
Dependencies
End of explanation
# Import the MNIST dataset
mnist = input_data.read_data_sets("/tmp/MNIST/", one_hot=True)
x_train = np.reshape(mnist.train.images, (-1, 28, 28, 1))
y_train = mnist.train.labels
x_test = np.reshape(mnist.test.images, (-1, 28, 28, 1))
y_test = mnist.test.labels
Explanation: Getting the data
We're not going into details here
End of explanation
BATCH_SIZE = 128
x_train_dict = {'x': x_train }
train_input_fn = numpy_io.numpy_input_fn(
x_train_dict, y_train, batch_size=BATCH_SIZE,
shuffle=True, num_epochs=None,
queue_capacity=1000, num_threads=4)
x_test_dict = {'x': x_test }
test_input_fn = numpy_io.numpy_input_fn(
x_test_dict, y_test, batch_size=BATCH_SIZE, shuffle=False, num_epochs=1)
Explanation: Defining the input function
If we look at the image above we can see that there are two main parts in the diagram, an input function interacting with data files and the Estimator interacting with the input function and checkpoints.
This means that the estimator doesn't know about data files, it knows about input functions. So if we want to interact with a dataset we need to create an input function that interacts with it; in this example we are creating an input function for the train and test datasets.
You can learn more about input functions here
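For illustration, an input function is just a callable returning a (features, labels) pair of tensors. A minimal hand-written sketch (not used below, where numpy_input_fn builds the queued, batched equivalent for us):
def toy_input_fn():
    # features: a dict mapping feature names to tensors; labels: a tensor
    features = {'x': tf.constant(x_train[:BATCH_SIZE])}
    labels = tf.constant(y_train[:BATCH_SIZE])
    return features, labels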
End of explanation
# parameters
LEARNING_RATE = 0.01
STEPS = 1000
# create experiment
def generate_experiment_fn():
def _experiment_fn(run_config, hparams):
del hparams # unused, required by signature.
# create estimator
model_params = {"learning_rate": LEARNING_RATE}
estimator = tf.estimator.Estimator(model_fn=m.get_model(),
params=model_params,
config=run_config)
train_input = train_input_fn
test_input = test_input_fn
return tf.contrib.learn.Experiment(
estimator,
train_input_fn=train_input,
eval_input_fn=test_input,
train_steps=STEPS
)
return _experiment_fn
Explanation: Creating an experiment
After an experiment is created (by passing an Estimator and inputs for training and evaluation), an Experiment instance knows how to invoke training and eval loops in a sensible fashion for distributed training. More about it here
End of explanation
OUTPUT_DIR = 'output_dir/model1'
learn_runner.run(generate_experiment_fn(), run_config=tf.contrib.learn.RunConfig(model_dir=OUTPUT_DIR))
Explanation: Run the experiment
End of explanation
STEPS = STEPS + 1000
learn_runner.run(generate_experiment_fn(), run_config=tf.contrib.learn.RunConfig(model_dir=OUTPUT_DIR))
Explanation: Running a second time
Okay, the model is definitely not good... But check the OUTPUT_DIR path: you'll see that an output_dir folder was created and that there are a lot of files there that were created automatically by TensorFlow!
So, most of these files are actually checkpoints; this means that if we run the experiment again with the same model_dir it will just load the checkpoint and start from there instead of starting all over again!
This means that:
If we have a problem while training we can just restore from where we stopped instead of starting all over again
If we didn't train enough we can just continue training
If we have a big file we can just break it into small files, train for a while with each small file, and the model will continue from where it stopped each time :)
This is all true as long as you use the same model_dir!
So, let's run the experiment again for 1000 more steps to see if we can improve the accuracy. Notice that the first step in this run will actually be step 1001, so we need to change the number of steps to 2000 (otherwise the experiment will find the checkpoint and think it has already finished training).
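As a quick sanity check (a sketch; it just inspects the directory), we can print which checkpoint the next run will resume from:
latest = tf.train.latest_checkpoint(OUTPUT_DIR)
print('Will resume from:', latest)  # e.g. a path ending in model.ckpt-1000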
End of explanation
LEARNING_RATE = 0.05
OUTPUT_DIR = 'output_dir/model2'
learn_runner.run(generate_experiment_fn(), run_config=tf.contrib.learn.RunConfig(model_dir=OUTPUT_DIR))
Explanation: Tensorboard
Another thing we get for free is tensorboard.
If you run: tensorboard --logdir=OUTPUT_DIR
You'll see that we get the graph and some scalars; also, if you use an embedding layer you'll get an embedding visualization in TensorBoard as well!
So, we can make small changes and we'll have an easy (and totally for free) way to compare the models.
Let's make these changes:
1. change the learning rate to 0.05
2. change the OUTPUT_DIR to some path in output_dir/
The second change must point to a path inside output_dir/ because then we can run: tensorboard --logdir=output_dir/
And we'll get both models visualized at the same time in tensorboard.
You'll notice that the model will start from step 1, because there's no existing checkpoint in this path.
End of explanation |
5,897 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear regression - audio
Use linear regression to recover or 'fill out' a completely deleted portion of an audio file!
This will be using The FSDD, Free-Spoken-Digits-Dataset, an audio dataset put together by Zohar Jackson
Step1: There are 500 recordings, 50 of each digit.
Each .wav file is actually just a bunch of numeric samples, "sampled"
from the analog signal. Sampling is a type of discretization. When we mention 'samples', we mean observations. When we mention 'audio samples', we mean the actual "features" of the audio file.
The goal of this notebook is to use multi-target linear regression to generate, by extrapolation, the missing portion of the test audio file.
Each one of the audio_sample features will be the output of an equation,
which is a function of the provided portion of the audio_samples
Step2: Since these audio clips are unfortunately not length-normalized, we're going to have to just hard chop them to all be the same length.
Since Pandas would have inserted NaNs at any spot to make zero a
perfectly rectangular [n_observed_samples, n_audio_samples] array, do a dropna on the Y axis here. Then, convert zero back into an NDArray using .values
Step3: split the data into training and testing sets
There are 50 takes of each clip. You want to pull out just one of them, randomly, and that one will NOT be used in the training of the model. In other words, the one file we'll be testing / scoring on will be an unseen sample, independent of the rest of the training set.
Step4: Save the original 'test' clip, the one you're about to delete half of, so that you can compare it to the 'patched' clip once you've generated it.
This assumes the sample rate is always the same for all samples
Step5: Embedding the audio file.
Note that this is not working directly in GitHub (I think all JavaScript is stripped out), fork it or download it to play the audio
Step6: carve out the labels Y
The data will have two parts
Step7: Can you hear it? Now it's only the first syllable, "ze" ...
But we can even delete more and leave only the first quarter!
Step8: Almost unrecognisable.
Will the linear regression model be able to reconstruct the audio?
Step9: Duplicate the same process for X_train, y_train.
Step10: SciKit-Learn gets mad if you don't supply your training data in the form of 2D arrays
Step11: Create and train the linear regression model
Step12: Use the model to predict the 'label' of X_test.
SciKit-Learn will use float64 to generate the predictions so let's take those values back to int16
Step13: Evaluate the result
Step14: Obviously, if you look only at Rsquared it seems that it was a totally useless result.
But let's listen to the generated audio.
First, take the first Provided_Portion portion of the test clip, the part you fed into your linear regression model. Then, stitch that
together with the abomination the predictor model generated for you,
and then save the completed audio clip | Python Code:
import os
import scipy.io.wavfile as wavfile
zero = []
directory = "../datasets/free-spoken-digit-dataset-master/recordings/"
for fname in os.listdir(directory):
if fname.startswith("0_jackson"):
fullname = os.path.join(directory, fname)
sample_rate, data = wavfile.read(fullname)
zero.append( data )
Explanation: Linear regression - audio
Use linear regression to recover or 'fill out' a completely deleted portion of an audio file!
This will be using The FSDD, Free-Spoken-Digits-Dataset, an audio dataset put together by Zohar Jackson:
It provides cleaned-up audio samples (no dead space, roughly the same length, same bitrate, same samples-per-second rate, same speaker, etc.) ready for machine learning.
get the data
End of explanation
import numpy as np
import pandas as pd
zeroDF = pd.DataFrame(zero, dtype=np.int16)
zeroDF.info()
Explanation: There are 500 recordings, 50 of each digit.
Each .wav file is actually just a bunch of numeric samples, "sampled"
from the analog signal. Sampling is a type of discretization. When we mention 'samples', we mean observations. When we mention 'audio samples', we mean the actual "features" of the audio file.
The goal of this notebook is to use multi-target linear regression to generate, by extrapolation, the missing portion of the test audio file.
Each one of the audio_sample features will be the output of an equation,
which is a function of the provided portion of the audio_samples:
missing_samples = f(provided_samples)
prepare the data
Convert zero into a DataFrame and set the dtype to np.int16, since the input audio files are 16 bits per sample. This is important otherwise the produced audio samples will be encoded as 64 bits per sample and will be too short.
End of explanation
if zeroDF.isnull().values.any() == True:
print("Preprocessing data: dropping all NaN")
zeroDF.dropna(axis=1, inplace=True)
else:
print("Preprocessing data: No NaN found!")
zero = zeroDF.values # this is a list
n_audio_samples = zero.shape[1]
n_audio_samples
Explanation: Since these audio clips are unfortunately not length-normalized, we're going to have to just hard chop them to all be the same length.
Since Pandas would have inserted NaNs at any spot to make zero a
perfectly rectangular [n_observed_samples, n_audio_samples] array, do a dropna on the Y axis here. Then, convert zero back into an NDArray using .values
End of explanation
from sklearn.utils.validation import check_random_state
rng = check_random_state(7)
random_idx = rng.randint(zero.shape[0])
test = zero[random_idx] # the test sample
train = np.delete(zero, [random_idx], axis=0)
print(train.shape)
print(test.shape)
Explanation: split the data into training and testing sets
There are 50 takes of each clip. You want to pull out just one of them, randomly, and that one will NOT be used in the training of the model. In other words, the one file we'll be testing / scoring on will be an unseen sample, independent of the rest of the training set.
End of explanation
wavfile.write('../outputs/OriginalTestClip.wav', sample_rate, test)
Explanation: Save the original 'test' clip, the one you're about to delete half of, so that you can compare it to the 'patched' clip once you've generated it.
This assumes the sample rate is always the same for all samples.
End of explanation
from IPython.display import Audio
Audio("../outputs/OriginalTestClip.wav")
Explanation: Embedding the audio file.
Note that this is not working directly on GitHub (I think all JavaScript is stripped out); fork it or download it to play the audio.
End of explanation
Provided_Portion = 0.5 # let's delete half of the audio
test_samples = int(Provided_Portion * n_audio_samples)
X_test = test[0:test_samples] # first ones
Audio(data=X_test, rate=sample_rate)
Explanation: carve out the labels Y
The data will have two parts: X and y (the true labels).
X is going to be the first portion of the audio file, which we will be providing the computer as input (the "chopped" audio).
y, the "label", is going to be the remaining portion of the audio file. In this way the computer will use linear regression to derive the missing portion of the sound file based off of the training data it has received!
Provided_Portion is how much of the audio file will be provided, expressed as a fraction. The remaining portion of the file will be generated via linear extrapolation.
End of explanation
Provided_Portion = 0.25 # let's delete three quarters of the audio!
test_samples = int(Provided_Portion * n_audio_samples)
X_test = test[0:test_samples] # first ones
wavfile.write('../outputs/ChoppedTestClip.wav', sample_rate, X_test)
Audio("../outputs/ChoppedTestClip.wav")
Explanation: Can you hear it? Now it's only the first syllable, "ze" ...
But we can even delete more and leave only the first quarter!
End of explanation
y_test = test[test_samples:] # remaining audio part is the label
Explanation: Almost unrecognisable.
Will the linear regression model be able to reconstruct the audio?
End of explanation
X_train = train[:, 0:test_samples] # first ones: data
y_train = train[:, test_samples:] # remaining ones: label
Explanation: Duplicate the same process for X_train, y_train.
End of explanation
X_test = X_test.reshape(1,-1)
y_test = y_test.reshape(1,-1)
Explanation: SciKit-Learn gets mad if you don't supply your training data in the form of 2D arrays: [n_samples, n_features].
So if you only have one SAMPLE, as is the case with X_test and y_test, then by calling .reshape(1, -1) you can turn [n_features] into [1, n_features].
End of explanation
from sklearn import linear_model
model = linear_model.LinearRegression()
model.fit(X_train, y_train)
Explanation: Create and train the linear regression model
End of explanation
y_test_prediction = model.predict(X_test)
y_test_prediction = y_test_prediction.astype(dtype=np.int16)
Explanation: Use the model to predict the 'label' of X_test.
SciKit-Learn will use float64 to generate the predictions so let's take those values back to int16
End of explanation
score = model.score(X_test, y_test) # test samples X and true values for X
print ("Extrapolation R^2 Score: ", score)
Explanation: Evaluate the result
End of explanation
completed_clip = np.hstack((X_test, y_test_prediction))
wavfile.write('../outputs/ExtrapolatedClip.wav', sample_rate, completed_clip[0])
Audio("../outputs/ExtrapolatedClip.wav")
Explanation: Obviously, if you look only at Rsquared it seems that it was a totally useless result.
But let's listen to the generated audio.
First, take the first Provided_Portion portion of the test clip, the part you fed into your linear regression model. Then, stitch that
together with the abomination the predictor model generated for you,
and then save the completed audio clip:
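As a complementary numeric check (a sketch; an error expressed in raw int16 amplitude units can be easier to interpret than R^2 for a single clip):
from sklearn.metrics import mean_absolute_error
mae = mean_absolute_error(y_test[0], y_test_prediction[0])
print("Mean absolute error per audio sample (int16 units):", mae)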
End of explanation |
5,898 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sharing Tutorial
Step1: Safe default
Step2: Switching to fully private by default
Call graphistry.privacy() to default to stronger privacy. It sets
Step3: Local overrides
We can locally override settings, such as opting back in to public sharing for some visualizations
Step4: Invitations and notifications
As part of the settings, we can permit specific individuals as viewers or editors, and optionally, send them an email notification
Step5: The options can be configured globally or locally, just as we did with mode. For example, we might not want to send emails by default, just on specific plots | Python Code:
#! pip install --user -q graphistry pandas
import graphistry, pandas as pd
graphistry.__version__
# To specify Graphistry account & server, use:
# graphistry.register(api=3, username='...', password='...', protocol='https', server='hub.graphistry.com')
# For more options, see https://github.com/graphistry/pygraphistry#configure
#demo data
g = graphistry.edges(pd.DataFrame({
's': ['a', 'b', 'c'],
'd': ['b', 'c', 'a'],
'v': [1, 1, 2]
}), 's', 'd')
g = g.settings(url_params={'play': 0})
Explanation: Sharing Tutorial: Securely Collaborating in Graphistry
Investigations are better together. This tutorial walks through the new PyGraphistry method .privacy(), which enables API control of the new sharing features.
We walk through:
Global defaults for graphistry.privacy(mode='private', ...)
Compositional per-visualization settings via g.privacy(...)
Inviting and notifying via privacy(invited_users=[{...])
Setup
You need pygraphistry 0.20.0+ and a corresponding Graphistry server (2.37.20+).
End of explanation
public_url = g.plot(render=False)
Explanation: Safe default: Unlisted & owner-editable
When creating a plot, Graphistry creates a dedicated URL with the following rules:
Viewing: Unlisted - Only those given the link can access it
Editing: Owner-only
The URL is unguessable, and the only webpage it is listed on is the creator's private gallery: https://hub.graphistry.com/datasets/ . That means it is only as private as the people the owner shares the URL with.
End of explanation
graphistry.privacy()
# or equivalently, graphistry.privacy(mode='private', invited_users=[], notify=False, message='')
owner_only_url = g.plot(render=False)
Explanation: Switching to fully private by default
Call graphistry.privacy() to default to stronger privacy. It sets:
mode='private' - viewing only by owners and invitees
invited_users=[] - no invitees by default
notify=False - no email notifications during invitations
message=''
By default, this means an explicit personal invitation is necessary for viewing. Subsequent plots in the session will default to this setting.
You can also explicitly set or override those as optional parameters.
End of explanation
public_g = g.privacy(mode='public')
public_url1 = public_g.plot(render=False)
#Ex: Inheriting public_g's mode='public'
public_g2 = public_g.name('viz2')
public_url2 = public_g2.plot(render=False)
#Ex: Global default was still via .privacy()
still_private_url = g.plot(render=False)
Explanation: Local overrides
We can locally override settings, such as opting back in to public sharing for some visualizations:
End of explanation
VIEW = '10'
EDIT = '20'
shared_g = g.privacy(
mode='private',
notify=True,
invited_users=[{'email': '[email protected]', 'action': VIEW},
{'email': '[email protected]', 'action': EDIT}],
message='Check out this graph!')
shared_url = shared_g.plot(render=False)
Explanation: Invitations and notifications
As part of the settings, we can permit specific individuals as viewers or editors, and optionally, send them an email notification
End of explanation
graphistry.privacy(
mode='private',
notify=False,
invited_users=[{'email': '[email protected]', 'action': VIEW},
{'email': '[email protected]', 'action': EDIT}])
shared_url = g.plot(render=False)
notified_and_shared_url = g.privacy(notify=True).plot(render=False)
Explanation: The options can be configured globally or locally, just as we did with mode. For example, we might not want to send emails by default, just on specific plots:
End of explanation |
5,899 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Searching datasets
erddapy can wrap the same form-like search capabilities of ERDDAP with the search_for keyword.
Step1: Single word search.
Step2: Filtering the search with extra words.
Step3: Filtering the search with words that should not be found.
Step4: Quoted search or "phrase search," first let us try the unquoted search.
Step5: Too many datasets because wind, speed, and wind speed are matched.
Now let's use the quoted search to reduce the number of results to only wind speed. | Python Code:
from erddapy import ERDDAP
e = ERDDAP(
server="https://upwell.pfeg.noaa.gov/erddap",
protocol="griddap"
)
Explanation: Searching datasets
erddapy can wrap the same form-like search capabilities of ERDDAP with the search_for keyword.
End of explanation
import pandas as pd
search_for = "HFRadar"
url = e.get_search_url(search_for=search_for, response="csv")
pd.read_csv(url)["Dataset ID"]
Explanation: Single word search.
End of explanation
search_for = "HFRadar 2km"
url = e.get_search_url(search_for=search_for, response="csv")
pd.read_csv(url)["Dataset ID"]
Explanation: Filtering the search with extra words.
End of explanation
search_for = "HFRadar -EXPERIMENTAL"
url = e.get_search_url(search_for=search_for, response="csv")
pd.read_csv(url)["Dataset ID"]
Explanation: Filtering the search with words that should not be found.
End of explanation
search_for = "wind speed"
url = e.get_search_url(search_for=search_for, response="csv")
len(pd.read_csv(url)["Dataset ID"])
Explanation: Quoted search, or "phrase search": first let us try the unquoted search.
End of explanation
search_for = '"wind speed"'
url = e.get_search_url(search_for=search_for, response="csv")
len(pd.read_csv(url)["Dataset ID"])
Explanation: Too many datasets are returned because wind, speed, and wind speed are all matched.
Now let's use the quoted search to reduce the number of results to only wind speed.
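The operators can also be combined, for example a quoted phrase plus an exclusion term (a sketch; the exclusion word "forecast" is only illustrative and the exact count depends on the server's current catalog):
search_for = '"wind speed" -forecast'
url = e.get_search_url(search_for=search_for, response="csv")
len(pd.read_csv(url)["Dataset ID"])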
End of explanation |