Unnamed: 0 int64 0 16k | text_prompt stringlengths 110 62.1k | code_prompt stringlengths 37 152k |
|---|---|---|
4,900 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CrowdTruth for Sparse Multiple Choice Tasks
Step1: Declaring a pre-processing configuration
The pre-processing configuration defines how to interpret the raw crowdsourcing input. To do this, we need to define a configuration class. First, we import the default CrowdTruth configuration class
Step2: Our test class inherits the default configuration DefaultConfig, while also declaring some additional attributes that are specific to the Relation Extraction task
Step3: Pre-processing the input data
After declaring the configuration of our input file, we are ready to pre-process the crowd data
Step4: Computing the CrowdTruth metrics
The pre-processed data can then be used to calculate the CrowdTruth metrics
Step5: results is a dict object that contains the quality metrics for sentences, relations and crowd workers.
The sentence metrics are stored in results["units"]
Step6: The uqs column in results["units"] contains the sentence quality scores, capturing the overall workers agreement over each sentence. Here we plot its histogram
Step7: The unit_annotation_score column in results["units"] contains the sentence-relation scores, capturing the likelihood that a relation is expressed in a sentence. For each sentence, we store a dictionary mapping each relation to its sentence-relation score.
Step8: The worker metrics are stored in results["workers"]
Step9: The wqs columns in results["workers"] contains the worker quality scores, capturing the overall agreement between one worker and all the other workers.
Step10: The relation metrics are stored in results["annotations"]. The aqs column contains the relation quality scores, capturing the overall worker agreement over one relation. | Python Code:
import pandas as pd
test_data = pd.read_csv("../data/relex-sparse-multiple-choice.csv")
test_data.head()
Explanation: CrowdTruth for Sparse Multiple Choice Tasks: Relation Extraction
In this tutorial, we will apply CrowdTruth metrics to a sparse multiple choice crowdsourcing task for Relation Extraction from sentences. The workers were asked to read a sentence with 2 highlighted terms, then pick from a multiple choice list what are the relations expressed between the 2 terms in the sentence. The options available in the multiple choice list change with the input sentence. The task was executed on FigureEight. For more crowdsourcing annotation task examples, click here.
To replicate this experiment, the code used to design and implement this crowdsourcing annotation template is available here: template, css, javascript.
This is a screenshot of the task as it appeared to workers:
A sample dataset for this task is available in this file, containing raw output from the crowd on FigureEight. Download the file and place it in a folder named data that has the same root as this notebook. Now you can check your data:
End of explanation
import crowdtruth
from crowdtruth.configuration import DefaultConfig
Explanation: Declaring a pre-processing configuration
The pre-processing configuration defines how to interpret the raw crowdsourcing input. To do this, we need to define a configuration class. First, we import the default CrowdTruth configuration class:
End of explanation
class TestConfig(DefaultConfig):
inputColumns = ["sent_id", "term1", "b1", "e1", "term2", "b2", "e2", "sentence", "input_relations"]
outputColumns = ["output_relations"]
annotation_separator = "\n"
# processing of a closed task
open_ended_task = False
annotation_vector = [
"title", "founded_org", "place_of_birth", "children", "cause_of_death",
"top_member_employee_of_org", "employee_or_member_of", "spouse",
"alternate_names", "subsidiaries", "place_of_death", "schools_attended",
"place_of_headquarters", "charges", "origin", "places_of_residence",
"none"]
def processJudgments(self, judgments):
# pre-process output to match the values in annotation_vector
for col in self.outputColumns:
# transform to lowercase
judgments[col] = judgments[col].apply(lambda x: str(x).lower())
return judgments
Explanation: Our test class inherits the default configuration DefaultConfig, while also declaring some additional attributes that are specific to the Relation Extraction task:
inputColumns: list of input columns from the .csv file with the input data
outputColumns: list of output columns from the .csv file with the answers from the workers
annotation_separator: string that separates between the crowd annotations in outputColumns
open_ended_task: boolean variable defining whether the task is open-ended (i.e. the possible crowd annotations are not known beforehand, like in the case of free text input); in the task that we are processing, workers pick the answers from a pre-defined list, therefore the task is not open ended, and this variable is set to False
annotation_vector: list of possible crowd answers, mandatory to declare when open_ended_task is False; for our task, this is the list of all relations that were given as input to the crowd in at least one sentence
processJudgments: method that defines processing of the raw crowd data; for this task, we process the crowd answers to correspond to the values in annotation_vector
The complete configuration class is declared below:
End of explanation
data, config = crowdtruth.load(
file = "../data/relex-sparse-multiple-choice.csv",
config = TestConfig()
)
data['judgments'].head()
Explanation: Pre-processing the input data
After declaring the configuration of our input file, we are ready to pre-process the crowd data:
End of explanation
results = crowdtruth.run(data, config)
Explanation: Computing the CrowdTruth metrics
The pre-processed data can then be used to calculate the CrowdTruth metrics:
End of explanation
results["units"].head()
Explanation: results is a dict object that contains the quality metrics for sentences, relations and crowd workers.
The sentence metrics are stored in results["units"]:
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(results["units"]["uqs"])
plt.xlabel("Sentence Quality Score")
plt.ylabel("Sentences")
Explanation: The uqs column in results["units"] contains the sentence quality scores, capturing the overall workers agreement over each sentence. Here we plot its histogram:
End of explanation
results["units"]["unit_annotation_score"].head(10)
Explanation: The unit_annotation_score column in results["units"] contains the sentence-relation scores, capturing the likelihood that a relation is expressed in a sentence. For each sentence, we store a dictionary mapping each relation to its sentence-relation score.
End of explanation
results["workers"].head()
Explanation: The worker metrics are stored in results["workers"]:
End of explanation
plt.hist(results["workers"]["wqs"])
plt.xlabel("Worker Quality Score")
plt.ylabel("Workers")
Explanation: The wqs columns in results["workers"] contains the worker quality scores, capturing the overall agreement between one worker and all the other workers.
End of explanation
results["annotations"]
results["units"].to_csv("../data/results/sparsemultchoice-relex-units.csv")
results["workers"].to_csv("../data/results/sparsemultchoice-relex-workers.csv")
results["annotations"].to_csv("../data/results/sparsemultchoice-relex-annotations.csv")
Explanation: The relation metrics are stored in results["annotations"]. The aqs column contains the relation quality scores, capturing the overall worker agreement over one relation.
End of explanation |
4,901 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Blind Source Separation with the Shogun Machine Learning Toolbox
By Kevin Hughes
This notebook illustrates <a href="http
Step1: Next we're going to need a way to play the audio files we're working with (otherwise this wouldn't be very exciting at all would it?). In the next bit of code I've defined a wavPlayer class that takes the signal and the sample rate and then creates a nice HTML5 webplayer right inline with the notebook.
Step2: Now that we can load and play wav files we actually need some wav files! I found the sounds from Starcraft to be a great source of wav files because they're short, interesting and remind me of my childhood. You can download Starcraft wav files here
Step3: Now let's load a second audio clip
Step4: and a third audio clip
Step5: Now we've got our audio files loaded up into our example program. The next thing we need to do is mix them together!
First another nuance - what if the audio clips aren't the same lenth? The solution I came up with for this was to simply resize them all to the length of the longest signal, the extra length will just be filled with zeros so it won't affect the sound.
The signals are mixed by creating a mixing matrix $A$ and taking the dot product of $A$ with the signals $S$.
Afterwards I plot the mixed signals and create the wavPlayers, have a listen!
Step6: Now before we can work on separating these signals we need to get the data ready for Shogun, thankfully this is pretty easy!
Step7: Now lets unmix those signals!
In this example I'm going to use an Independent Component Analysis (ICA) algorithm called JADE. JADE is one of the ICA algorithms available in Shogun and it works by performing Aproximate Joint Diagonalization (AJD) on a 4th order cumulant tensor. I'm not going to go into a lot of detail on how JADE works behind the scenes but here is the reference for the original paper
Step8: Thats all there is to it! Check out how nicely those signals have been separated and have a listen! | Python Code:
import numpy as np
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
from scipy.io import wavfile
from scipy.signal import resample
import shogun as sg
def load_wav(filename,samplerate=44100):
# load file
rate, data = wavfile.read(filename)
# convert stereo to mono
if len(data.shape) > 1:
data = data[:,0]/2 + data[:,1]/2
# re-interpolate samplerate
ratio = float(samplerate) / float(rate)
data = resample(data, int(len(data) * ratio))
return samplerate, data.astype(np.int16)
Explanation: Blind Source Separation with the Shogun Machine Learning Toolbox
By Kevin Hughes
This notebook illustrates <a href="http://en.wikipedia.org/wiki/Blind_signal_separation">Blind Source Seperation</a>(BSS) on audio signals using <a href="http://en.wikipedia.org/wiki/Independent_component_analysis">Independent Component Analysis</a> (ICA) in Shogun. We generate a mixed signal and try to seperate it out using Shogun's implementation of ICA & BSS called <a href="http://www.shogun-toolbox.org/doc/en/3.0.0/classshogun_1_1Jade.html">JADE</a>.
My favorite example of this problem is known as the cocktail party problem where a number of people are talking simultaneously and we want to separate each persons speech so we can listen to it separately. Now the caveat with this type of approach is that we need as many mixtures as we have source signals or in terms of the cocktail party problem we need as many microphones as people talking in the room.
Let's get started, this example is going to be in python and the first thing we are going to need to do is load some audio files. To make things a bit easier further on in this example I'm going to wrap the basic scipy wav file reader and add some additional functionality. First I added a case to handle converting stereo wav files back into mono wav files and secondly this loader takes a desired sample rate and resamples the input to match. This is important because when we mix the two audio signals they need to have the same sample rate.
End of explanation
from IPython.display import Audio
from IPython.display import display
def wavPlayer(data, rate):
display(Audio(data, rate=rate))
Explanation: Next we're going to need a way to play the audio files we're working with (otherwise this wouldn't be very exciting at all would it?). In the next bit of code I've defined a wavPlayer class that takes the signal and the sample rate and then creates a nice HTML5 webplayer right inline with the notebook.
End of explanation
# change to the shogun-data directory
import os
os.chdir(os.path.join(SHOGUN_DATA_DIR, 'ica'))
%matplotlib inline
import pylab as pl
# load
fs1,s1 = load_wav('tbawht02.wav') # Terran Battlecruiser - "Good day, commander."
# plot
pl.figure(figsize=(6.75,2))
pl.plot(s1)
pl.title('Signal 1')
pl.show()
# player
wavPlayer(s1, fs1)
Explanation: Now that we can load and play wav files we actually need some wav files! I found the sounds from Starcraft to be a great source of wav files because they're short, interesting and remind me of my childhood. You can download Starcraft wav files here: http://wavs.unclebubby.com/computer/starcraft/ among other places on the web or from your Starcraft install directory (come on I know its still there).
Another good source of data (although lets be honest less cool) is ICA central and various other more academic data sets: http://perso.telecom-paristech.fr/~cardoso/icacentral/base_multi.html. Note that for lots of these data sets the data will be mixed already so you'll be able to skip the next few steps.
Okay lets load up an audio file. I chose the Terran Battlecruiser saying "Good Day Commander". In addition to the creating a wavPlayer I also plotted the data using Matplotlib (and tried my best to have the graph length match the HTML player length). Have a listen!
End of explanation
# load
fs2,s2 = load_wav('TMaRdy00.wav') # Terran Marine - "You want a piece of me, boy?"
# plot
pl.figure(figsize=(6.75,2))
pl.plot(s2)
pl.title('Signal 2')
pl.show()
# player
wavPlayer(s2, fs2)
Explanation: Now let's load a second audio clip:
End of explanation
# load
fs3,s3 = load_wav('PZeRdy00.wav') # Protoss Zealot - "My life for Aiur!"
# plot
pl.figure(figsize=(6.75,2))
pl.plot(s3)
pl.title('Signal 3')
pl.show()
# player
wavPlayer(s3, fs3)
Explanation: and a third audio clip:
End of explanation
# Adjust for different clip lengths
fs = fs1
length = max([len(s1), len(s2), len(s3)])
s1 = np.resize(s1, (length,1))
s2 = np.resize(s2, (length,1))
s3 = np.resize(s3, (length,1))
S = (np.c_[s1, s2, s3]).T
# Mixing Matrix
#A = np.random.uniform(size=(3,3))
#A = A / A.sum(axis=0)
A = np.array([[1, 0.5, 0.5],
[0.5, 1, 0.5],
[0.5, 0.5, 1]])
print('Mixing Matrix:')
print(A.round(2))
# Mix Signals
X = np.dot(A,S)
# Mixed Signal i
for i in range(X.shape[0]):
pl.figure(figsize=(6.75,2))
pl.plot((X[i]).astype(np.int16))
pl.title('Mixed Signal %d' % (i+1))
pl.show()
wavPlayer((X[i]).astype(np.int16), fs)
Explanation: Now we've got our audio files loaded up into our example program. The next thing we need to do is mix them together!
First another nuance - what if the audio clips aren't the same lenth? The solution I came up with for this was to simply resize them all to the length of the longest signal, the extra length will just be filled with zeros so it won't affect the sound.
The signals are mixed by creating a mixing matrix $A$ and taking the dot product of $A$ with the signals $S$.
Afterwards I plot the mixed signals and create the wavPlayers, have a listen!
End of explanation
from shogun import features
# Convert to features for shogun
mixed_signals = features((X).astype(np.float64))
Explanation: Now before we can work on separating these signals we need to get the data ready for Shogun, thankfully this is pretty easy!
End of explanation
# Separating with JADE
jade = sg.transformer('Jade')
jade.fit(mixed_signals)
signals = jade.transform(mixed_signals)
S_ = signals.get('feature_matrix')
A_ = jade.get('mixing_matrix')
A_ = A_ / A_.sum(axis=0)
print('Estimated Mixing Matrix:')
print(A_)
Explanation: Now lets unmix those signals!
In this example I'm going to use an Independent Component Analysis (ICA) algorithm called JADE. JADE is one of the ICA algorithms available in Shogun and it works by performing Aproximate Joint Diagonalization (AJD) on a 4th order cumulant tensor. I'm not going to go into a lot of detail on how JADE works behind the scenes but here is the reference for the original paper:
Cardoso, J. F., & Souloumiac, A. (1993). Blind beamforming for non-Gaussian signals. In IEE Proceedings F (Radar and Signal Processing) (Vol. 140, No. 6, pp. 362-370). IET Digital Library.
Shogun also has several other ICA algorithms including the Second Order Blind Identification (SOBI) algorithm, FFSep, JediSep, UWedgeSep and FastICA. All of the algorithms inherit from the ICAConverter base class and share some common methods for setting an intial guess for the mixing matrix, retrieving the final mixing matrix and getting/setting the number of iterations to run and the desired convergence tolerance. Some of the algorithms have additional getters for intermediate calculations, for example Jade has a method for returning the 4th order cumulant tensor while the "Sep" algorithms have a getter for the time lagged covariance matrices. Check out the source code on GitHub (https://github.com/shogun-toolbox/shogun) or the Shogun docs (http://www.shogun-toolbox.org/doc/en/latest/annotated.html) for more details!
End of explanation
# Show separation results
# Separated Signal i
gain = 4000
for i in range(S_.shape[0]):
pl.figure(figsize=(6.75,2))
pl.plot((gain*S_[i]).astype(np.int16))
pl.title('Separated Signal %d' % (i+1))
pl.show()
wavPlayer((gain*S_[i]).astype(np.int16), fs)
Explanation: Thats all there is to it! Check out how nicely those signals have been separated and have a listen!
End of explanation |
4,902 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Setting things up
Let's load the data and give it a quick look.
Step1: Checking out correlations
Let's start looking at how variables in our dataset relate to each other so we know what to expect when we start modeling.
Step2: The percentage of students enrolled in free/reduced-price lunch programs is often used as a proxy for poverty.
Step3: Conversely, the education level of a student's parents is often a good predictor of how well a student will do in school.
Step4: Running the regression
Like we did last week, we'll use scikit-learn to run basic single-variable regressions. Let's start by looking at California's Academic Performance index as it relates to the percentage of students, per school, enrolled in free/reduced-price lunch programs.
Step5: In our naive universe where we're only paying attention to two variables -- academic performance and free/reduced lunch -- we can clearly see that some percentage of schools is overperforming the performance that would be expected of them, taking poverty out of the equation.
A handful, in particular, seem to be dramatically overperforming. Let's look at them
Step6: Let's look specifically at Solano Avenue Elementary, which has an API of 922 and 80 percent of students being in the free/reduced lunch program. If you were to use the above regression to predict how well Solano would do, it would look like this | Python Code:
df = pd.read_csv('data/apib12tx.csv')
df.describe()
Explanation: Setting things up
Let's load the data and give it a quick look.
End of explanation
df.corr()
Explanation: Checking out correlations
Let's start looking at how variables in our dataset relate to each other so we know what to expect when we start modeling.
End of explanation
df.plot(kind="scatter", x="MEALS", y="API12B")
Explanation: The percentage of students enrolled in free/reduced-price lunch programs is often used as a proxy for poverty.
End of explanation
df.plot(kind="scatter", x="AVG_ED", y="API12B")
Explanation: Conversely, the education level of a student's parents is often a good predictor of how well a student will do in school.
End of explanation
data = np.asarray(df[['API12B','MEALS']])
data = Imputer().fit_transform(data)
x, y = data[:, 1:], data[:, 0]
lr = LinearRegression()
lr.fit(x, y)
# plot the linear regression line on the scatter plot
lr.coef_
lr.score(x, y)
plt.scatter(x, y, color='blue')
plt.plot(x, lr.predict(x), color='red', linewidth=1)
Explanation: Running the regression
Like we did last week, we'll use scikit-learn to run basic single-variable regressions. Let's start by looking at California's Academic Performance index as it relates to the percentage of students, per school, enrolled in free/reduced-price lunch programs.
End of explanation
df[(df['MEALS'] >= 80) & (df['API12B'] >= 90)]
Explanation: In our naive universe where we're only paying attention to two variables -- academic performance and free/reduced lunch -- we can clearly see that some percentage of schools is overperforming the performance that would be expected of them, taking poverty out of the equation.
A handful, in particular, seem to be dramatically overperforming. Let's look at them:
End of explanation
lr.predict(80)
Explanation: Let's look specifically at Solano Avenue Elementary, which has an API of 922 and 80 percent of students being in the free/reduced lunch program. If you were to use the above regression to predict how well Solano would do, it would look like this:
End of explanation |
4,903 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CDR EDA
First, import relevant libraries
Step1: Then, load the data (takes a few moments)
Step2: This create a calls-per-person frequency distribution, which is the first thing we want to see.
Step3: Plot this distribution. This shows that 19344 people made 1 call over the 4 months, 36466 people made 2 calls over the 4 months, 41900 people made 3 calls over the 4 months, etc.
Step4: It might be more helpful to look at a cumulative distribution curve, from which we can read off quantiles (e.g., this percentage of the people in the data set had x or more calls, x or fewer calls). Specifically, 10% of people have 3 or fewer calls over the entire period, 25% have 7 of fewer, 33% have 10 or fewer, 50% have 17 of fewer calls, etc., all the way up to 90% of people having 76 or fewer calls.
Step5: Now, we want to look at temporal data. First, convert the categorical date_time_m to a datetime object; then, extract the date component. | Python Code:
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
Explanation: CDR EDA
First, import relevant libraries:
End of explanation
# Load data
df = pd.read_csv("./aws-data/firence_foreigners_3days_past_future.csv", header=None)
df.columns = ['lat', 'lon', 'date_time_m', 'home_region', 'cust_id', 'in_florence']
df.head()
# np.max(data.date_time_m) # max date : '2016-09-30
# np.min(data.date_time_m) # min date: 2016-06-07
Explanation: Then, load the data (takes a few moments):
End of explanation
fr = df['cust_id'].value_counts().to_frame()['cust_id'].value_counts().to_frame()
fr.columns = ['frequency']
fr.index.name = 'calls'
fr.reset_index(inplace=True)
fr = fr.sort_values('calls')
fr['cumulative'] = fr['frequency'].cumsum()/fr['frequency'].sum()
fr.head()
Explanation: This create a calls-per-person frequency distribution, which is the first thing we want to see.
End of explanation
fr.plot(x='calls', y='frequency', style='o-', logx=True, figsize = (10, 10))
plt.axvline(5,ls='dotted')
plt.ylabel('Number of people')
plt.title('Number of people placing or receiving x number of calls over 4 months')
Explanation: Plot this distribution. This shows that 19344 people made 1 call over the 4 months, 36466 people made 2 calls over the 4 months, 41900 people made 3 calls over the 4 months, etc.
End of explanation
fr.plot(x='calls', y='cumulative', style='o-', logx=True, figsize = (10, 10))
plt.axhline(1.0,ls='dotted',lw=.5)
plt.axhline(.90,ls='dotted',lw=.5)
plt.axhline(.75,ls='dotted',lw=.5)
plt.axhline(.67,ls='dotted',lw=.5)
plt.axhline(.50,ls='dotted',lw=.5)
plt.axhline(.33,ls='dotted',lw=.5)
plt.axhline(.25,ls='dotted',lw=.5)
plt.axhline(.10,ls='dotted',lw=.5)
plt.axhline(0.0,ls='dotted',lw=.5)
plt.axvline(max(fr['calls'][fr['cumulative']<.90]),ls='dotted',lw=.5)
plt.ylabel('Cumulative fraction of people')
plt.title('Cumulative fraction of people placing or receiving x number of calls over 4 months')
Explanation: It might be more helpful to look at a cumulative distribution curve, from which we can read off quantiles (e.g., this percentage of the people in the data set had x or more calls, x or fewer calls). Specifically, 10% of people have 3 or fewer calls over the entire period, 25% have 7 of fewer, 33% have 10 or fewer, 50% have 17 of fewer calls, etc., all the way up to 90% of people having 76 or fewer calls.
End of explanation
df['datetime'] = pd.to_datetime(df['date_time_m'], format='%Y-%m-%d %H:%M:%S')
df['date'] = df['datetime'].dt.floor('d') # Faster than df['datetime'].dt.date
df2 = df.groupby(['cust_id','date']).size().to_frame()
df2.columns = ['count']
df2.index.name = 'date'
df2.reset_index(inplace=True)
df2.head(20)
df3 = (df2.groupby('cust_id')['date'].max() - df2.groupby('cust_id')['date'].min()).to_frame()
df3['calls'] = df2.groupby('cust_id')['count'].sum()
df3.columns = ['days','calls']
df3['days'] = df3['days'].dt.days
df3.head()
plt.scatter(np.log(df3['days']), np.log(df3['calls']))
plt.show()
fr.plot(x='calls', y='freq', style='o', logx=True, logy=True)
x=np.log(fr['calls'])
y=np.log(1-fr['freq'].cumsum()/fr['freq'].sum())
plt.plot(x, y, 'r-')
# How many home_Regions
np.count_nonzero(data['home_region'].unique())
# How many customers
np.count_nonzero(data['cust_id'].unique())
# How many Nulls are there in the customer ID column?
df['cust_id'].isnull().sum()
# How many missing data are there in the customer ID?
len(df['cust_id']) - df['cust_id'].count()
df['cust_id'].unique()
data_italians = pd.read_csv("./aws-data/firence_italians_3days_past_future_sample_1K_custs.csv", header=None)
data_italians.columns = ['lat', 'lon', 'date_time_m', 'home_region', 'cust_id', 'in_florence']
regions = np.array(data_italians['home_region'].unique())
regions
'Sardegna' in data['home_region']
Explanation: Now, we want to look at temporal data. First, convert the categorical date_time_m to a datetime object; then, extract the date component.
End of explanation |
4,904 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Factorization
Factorization is the process of restating an expression as the product of two expressions (in other words, expressions multiplied together).
For example, you can make the value 16 by performing the following multiplications of integer numbers
Step1: So, we can say that 2xy<sup>2</sup> and -3xy are both factors of -6x<sup>2</sup>y<sup>3</sup>.
This also applies to polynomials with more than one term. For example, consider the following expression
Step2: Greatest Common Factor
Of course, these may not be the only factors of -6x<sup>2</sup>y<sup>3</sup>, just as 8 and 2 are not the only factors of 16.
Additionally, 2 and 8 aren't just factors of 16; they're factors of other numbers too - for example, they're both factors of 24 (because 2 x 12 = 24 and 8 x 3 = 24). Which leads us to the question, what is the highest number that is a factor of both 16 and 24? Well, let's look at all the numbers that multiply evenly into 12 and all the numbers that multiply evenly into 24
Step3: Distributing Factors
Let's look at another example. Here is a binomial expression
Step4: For something a little more complex, let's return to our previous example. Suppose we want to add our original 15x<sup>2</sup>y and 9xy<sup>3</sup> expressions
Step5: So you might be wondering what's so great about being able to distribute the common factor like this. The answer is that it can often be useful to apply a common factor to multiple terms in order to solve seemingly complex problems.
For example, consider this
Step6: Perfect Squares
A perfect square is a number multiplied by itself, for example 3 multipled by 3 is 9, so 9 is a perfect square.
When working with equations, the ability to factor between polynomial expressions and binomial perfect square expressions can be a useful tool. For example, consider this expression | Python Code:
from random import randint
x = randint(1,100)
y = randint(1,100)
(2*x*y**2)*(-3*x*y) == -6*x**2*y**3
Explanation: Factorization
Factorization is the process of restating an expression as the product of two expressions (in other words, expressions multiplied together).
For example, you can make the value 16 by performing the following multiplications of integer numbers:
- 1 x 16
- 2 x 8
- 4 x 4
Another way of saying this is that 1, 2, 4, 8, and 16 are all factors of 16.
Factors of Polynomial Expressions
We can apply the same logic to polynomial expressions. For example, consider the following monomial expression:
\begin{equation}-6x^{2}y^{3} \end{equation}
You can get this value by performing the following multiplication:
\begin{equation}(2xy^{2})(-3xy) \end{equation}
Run the following Python code to test this with arbitrary x and y values:
End of explanation
from random import randint
x = randint(1,100)
y = randint(1,100)
(x + 2)*(2*x**2 - 3*y + 2) == 2*x**3 + 4*x**2 - 3*x*y + 2*x - 6*y + 4
Explanation: So, we can say that 2xy<sup>2</sup> and -3xy are both factors of -6x<sup>2</sup>y<sup>3</sup>.
This also applies to polynomials with more than one term. For example, consider the following expression:
\begin{equation}(x + 2)(2x^{2} - 3y + 2) = 2x^{3} + 4x^{2} - 3xy + 2x - 6y + 4 \end{equation}
Based on this, x+2 and 2x<sup>2</sup> - 3y + 2 are both factors of 2x<sup>3</sup> + 4x<sup>2</sup> - 3xy + 2x - 6y + 4.
(and if you don't believe me, you can try this with random values for x and y with the following Python code):
End of explanation
from random import randint
x = randint(1,100)
y = randint(1,100)
print((3*x*y)*(5*x) == 15*x**2*y)
print((3*x*y)*(3*y**2) == 9*x*y**3)
Explanation: Greatest Common Factor
Of course, these may not be the only factors of -6x<sup>2</sup>y<sup>3</sup>, just as 8 and 2 are not the only factors of 16.
Additionally, 2 and 8 aren't just factors of 16; they're factors of other numbers too - for example, they're both factors of 24 (because 2 x 12 = 24 and 8 x 3 = 24). Which leads us to the question, what is the highest number that is a factor of both 16 and 24? Well, let's look at all the numbers that multiply evenly into 12 and all the numbers that multiply evenly into 24:
| 16 | 24 |
|--------|--------|
| 1 x 16 | 1 x 24 |
| 2 x 8 | 2 x 12 |
| | 3 x 8 |
| 4 x 4 | 4 x 6 |
The highest value that is a multiple of both 16 and 24 is 8, so 8 is the Greatest Common Factor (or GCF) of 16 and 24.
OK, let's apply that logic to the following expressions:
\begin{equation}15x^{2}y\;\;\;\;\;\;\;\;9xy^{3}\end{equation}
So what's the greatest common factor of these two expressions?
It helps to break the expressions into their consitituent components. Let's deal with the coefficients first; we have 15 and 9. The highest value that divides evenly into both of these is 3 (3 x 5 = 15 and 3 x 3 = 9).
Now let's look at the x terms; we have x<sup>2</sup> and x. The highest value that divides evenly into both is these is x (x goes into x once and into x<sup>2</sup> x times).
Finally, for our y terms, we have y and y<sup>3</sup>. The highest value that divides evenly into both is these is y (y goes into y once and into y<sup>3</sup> y•y times).
Putting all of that together, the GCF of both of our expression is:
\begin{equation}3xy\end{equation}
An easy shortcut to identifying the GCF of an expression that includes variables with exponentials is that it will always consist of:
- The largest numeric factor of the numeric coefficients in the polynomial expressions (in this case 3)
- The smallest exponential of each variable (in this case, x and y, which technically are x<sup>1</sup> and y<sup>1</sup>.
You can check your answer by dividing the original expressions by the GCF to find the coefficent expressions for the GCF (in other words, how many times the GCF divides into the original expression). The result, when multiplied by the GCF will always produce the original expression. So in this case, we need to perform the following divisions:
\begin{equation}\frac{15x^{2}y}{3xy}\;\;\;\;\;\;\;\;\frac{9xy^{3}}{3xy}\end{equation}
These fractions simplify to 5x and 3y<sup>2</sup>, giving us the following calculations to prove our factorization:
\begin{equation}3xy(5x) = 15x^{2}y\end{equation}
\begin{equation}3xy(3y^{2}) = 9xy^{3}\end{equation}
Let's try both of those in Python:
End of explanation
from random import randint
x = randint(1,100)
y = randint(1,100)
(6*x + 15*y) == (3*(2*x) + 3*(5*y)) == (3*(2*x + 5*y))
Explanation: Distributing Factors
Let's look at another example. Here is a binomial expression:
\begin{equation}6x + 15y \end{equation}
To factor this, we need to find an expression that divides equally into both of these expressions. In this case, we can use 3 to factor the coefficents, because 3 • 2x = 6x and 3• 5y = 15y, so we can write our original expression as:
\begin{equation}6x + 15y = 3(2x) + 3(5y) \end{equation}
Now, remember the distributive property? It enables us to multiply each term of an expression by the same factor to calculate the product of the expression multiplied by the factor. We can factor-out the common factor in this expression to distribute it like this:
\begin{equation}6x + 15y = 3(2x) + 3(5y) = \mathbf{3(2x + 5y)} \end{equation}
Let's prove to ourselves that these all evaluate to the same thing:
End of explanation
from random import randint
x = randint(1,100)
y = randint(1,100)
(15*x**2*y + 9*x*y**3) == (3*x*y*(5*x + 3*y**2))
Explanation: For something a little more complex, let's return to our previous example. Suppose we want to add our original 15x<sup>2</sup>y and 9xy<sup>3</sup> expressions:
\begin{equation}15x^{2}y + 9xy^{3}\end{equation}
We've already calculated the common factor, so we know that:
\begin{equation}3xy(5x) = 15x^{2}y\end{equation}
\begin{equation}3xy(3y^{2}) = 9xy^{3}\end{equation}
Now we can factor-out the common factor to produce a single expression:
\begin{equation}15x^{2}y + 9xy^{3} = \mathbf{3xy(5x + 3y^{2})}\end{equation}
And here's the Python test code:
End of explanation
from random import randint
x = randint(1,100)
(x**2 - 9) == (x - 3)*(x + 3)
Explanation: So you might be wondering what's so great about being able to distribute the common factor like this. The answer is that it can often be useful to apply a common factor to multiple terms in order to solve seemingly complex problems.
For example, consider this:
\begin{equation}x^{2} + y^{2} + z^{2} = 127\end{equation}
Now solve this equation:
\begin{equation}a = 5x^{2} + 5y^{2} + 5z^{2}\end{equation}
At first glance, this seems tricky because there are three unknown variables, and even though we know that their squares add up to 127, we don't know their individual values. However, we can distribute the common factor and apply what we do know. Let's restate the problem like this:
\begin{equation}a = 5(x^{2} + y^{2} + z^{2})\end{equation}
Now it becomes easier to solve, because we know that the expression in parenthesis is equal to 127, so actually our equation is:
\begin{equation}a = 5(127)\end{equation}
So a is 5 times 127, which is 635
Formulae for Factoring Squares
There are some useful ways that you can employ factoring to deal with expressions that contain squared values (that is, values with an exponential of 2).
Differences of Squares
Consider the following expression:
\begin{equation}x^{2} - 9\end{equation}
The constant 9 is 3<sup>2</sup>, so we could rewrite this as:
\begin{equation}x^{2} - 3^{2}\end{equation}
Whenever you need to subtract one squared term from another, you can use an approach called the difference of squares, whereby we can factor a<sup>2</sup> - b<sup>2</sup> as (a - b)(a + b); so we can rewrite the expression as:
\begin{equation}(x - 3)(x + 3)\end{equation}
Run the code below to check this:
End of explanation
from random import randint
a = randint(1,100)
b = randint(1,100)
a**2 + b**2 + (2*a*b) == (a + b)**2
Explanation: Perfect Squares
A perfect square is a number multiplied by itself, for example 3 multipled by 3 is 9, so 9 is a perfect square.
When working with equations, the ability to factor between polynomial expressions and binomial perfect square expressions can be a useful tool. For example, consider this expression:
\begin{equation}x^{2} + 10x + 25\end{equation}
We can use 5 as a common factor to rewrite this as:
\begin{equation}(x + 5)(x + 5)\end{equation}
So what happened here?
Well, first we found a common factor for our coefficients: 5 goes into 10 twice and into 25 five times (in other words, squared). Then we just expressed this factoring as a multiplication of two identical binomials (x + 5)(x + 5).
Remember the rule for multiplication of polynomials is to multiple each term in the first polynomial by each term in the second polynomial and then add the results; so you can do this to verify the factorization:
x • x = x<sup>2</sup>
x • 5 = 5x
5 • x = 5x
5 • 5 = 25
When you combine the two 5x terms we get back to our original expression of x<sup>2</sup> + 10x + 25.
Now we have an expression multipled by itself; in other words, a perfect square. We can therefore rewrite this as:
\begin{equation}(x + 5)^{2}\end{equation}
Factorization of perfect squares is a useful technique, as you'll see when we start to tackle quadratic equations in the next section. In fact, it's so useful that it's worth memorizing its formula:
\begin{equation}(a + b)^{2} = a^{2} + b^{2}+ 2ab \end{equation}
In our example, the a terms is x and the b terms is 5, and in standard form, our equation x<sup>2</sup> + 10x + 25 is actually a<sup>2</sup> + 2ab + b<sup>2</sup>. The operations are all additions, so the order isn't actually important!
Run the following code with random values for a and b to verify that the formula works:
End of explanation |
4,905 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mri', 'sandbox-1', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: MRI
Source ID: SANDBOX-1
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:19
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensionsal forcing variables, e.g. U, V, W, T, Q, P, conventive mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensionsal forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. &Document the relative weight given to climate performance metrics versus process oriented metrics, &and on the possible conflicts with parameterization level tuning. In particular describe any struggle &with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixinrg rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
4,906 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pore Scale Models
Pore scale models are one of the more important facets of OpenPNM, but they can be a bit confusing at first, since they work 'behind-the-scenes'.
They offer 3 main advantages
Step1: Now we need to define the gas phase diffusivity. We can fetch the fuller model from the models library to do this, and attach it to an empty phase object
Step2: Note that we had to supply the molecular weights (MA and MB) as well as the diffusion volumes (vA and vB). This model also requires knowing the temperature and pressure, but by default it will look in 'pore.temperature' and 'pore.pressure'.
Next we need to define a physics object with the diffusive conductance, which is also available in the model libary
Step3: Lastly we can run the Fickian diffusion simulation to get the diffusion rate across the domain
Step4: Updating parameter on an existing model
It's also easy to change parameters of a model since they are all stored on the object (air in this case), meaning you don't have to reassign a new model get new parameters (although that would work). The models and their parameters are stored under the models attribute of each object. This is a dictionary with each model stored under the key match the propname to which is was assigned. For instance, to adjust the diffusion volumes of the Fuller model
Step5: Replacing an existing model with another
Let's say for some reason that the Fuller model is not suitable. It's easy to go 'shopping' in the models library to retrieve a new model and replace the existing one. In the cell below we grab the Chapman-Enskog model and simply assign it to the same propname that the Fuller model was previously.
Step6: Note that we don't need to explicitly call regenerate_models since this occurs automatically when a model is added. We do however, have to regenerate phys object so it calculates the diffusive conductance with the new diffusivity
Step7: Changing dependent properties
Now consider that you want to find the diffusion rate at higher temperature. This requires recalculating the diffusion coefficient on air, then updating the diffusive conductivity on phys, and finally re-running the simulation. Using pore-scale models this can be done as follows
Step8: We can see that the diffusivity increased with temperature as expected with the Chapman-Enskog model. We can also propagate this change to the diffusive conductance
Step9: And lastly we can recalculate the diffusion rate
Step10: Creating Custom Models
Lastly, let's illustrate the ease with which a custom pore-scale model can be defined and used. Let's create a very basic (and incorrect) model
Step11: There are a few key points to note in the above code.
Every model must accept a target argument since the regenerate_models mechanism assumes it is present. The target is the object to which the model will be attached. It allows for the looking up of necessary properties that should already be defined, like temperature and pressure. Even if you don't use target within the function it is still required by the pore-scale model mechanism. If it's presence annoys you, you can put a **kwargs at the end of the argument list to accept all arguments that you don't explicitly need.
The input parameters should not be arrays (like an Np-long list of temperature values). Instead you should pass the dictionary key of the values on the target. This allows the model to lookup the latest values for each property when regenerate_models is called. This also enables openpnm to store the model parameters as short strings rather than large arrays.
The function should return either a scalar value or an array of Np or Nt length. In the above case it returns a DAB value for each pore, depending on its local temperature and pressure in the pore. However, if the temperature were set to 'throat.temperature' and pressure to 'throat.pressure', then the above function would return a DAB value for each throat and it could be used to calculate 'throat.diffusivity'.
This function can be placed at the top of the script in which it is used, or it can be placed in a separate file and imported into the script with from my_models import new_diffusivity.
Let's add this model to our air phase and inspect the new values | Python Code:
import numpy as np
np.random.seed(0)
import openpnm as op
%config InlineBackend.figure_formats = ['svg']
pn = op.network.Cubic(shape=[5, 5, 1], spacing=1e-4)
geo = op.geometry.SpheresAndCylinders(network=pn, pores=pn.Ps, throats=pn.Ts)
Explanation: Pore Scale Models
Pore scale models are one of the more important facets of OpenPNM, but they can be a bit confusing at first, since they work 'behind-the-scenes'.
They offer 3 main advantages:
A large library of pre-written models is included and they can be mixed together and their parameters edited to get a desired overall result.
They allow automatic regeneration of all dependent properties when something 'upstream' is changed.
The pore-scale model machinery was designed to allow easy use of custom written code for cases where a prewritten model is not available.
The best way to explain their importance is via illustration.
Consider a diffusion simulation, where the diffusive conductance is defined as:
$$ g_D = D_{AB}\frac{A}{L} $$
The diffusion coefficient can be predicted by the Fuller correlation:
$$ D_{AB} = \frac{10^{-3}T^{1.75}(M_1^{-1} + M_2^{-1})^{0.5}}{P[(\Sigma_i V_{i,1})^{0.33} + (\Sigma_i V_{i,2})^{0.33}]^2} $$
Now say you want to re-run the diffusion simulation at different temperature. This would require recalculating $D_{AB}$, followed by updating the diffusive conductance.
Using pore-scale models in OpenPNM allows for simple and reliable updating of these properties, for instance within a for-loop where temperature is being varied.
Using Existing Pore-Scale Models
The first advantage listed above is that OpenPNM includes a library of pre-written model. In this example below we can will apply the Fuller model, without having to worry about mis-typing the equation.
End of explanation
air = op.phases.GenericPhase(network=pn)
f = op.models.phases.diffusivity.fuller
air.add_model(propname='pore.diffusivity',
model=f,
MA=0.032, MB=0.028, vA=16.6, vB=17.9)
Explanation: Now we need to define the gas phase diffusivity. We can fetch the fuller model from the models library to do this, and attach it to an empty phase object:
End of explanation
phys = op.physics.GenericPhysics(network=pn, phase=air, geometry=geo)
f = op.models.physics.diffusive_conductance.ordinary_diffusion
phys.add_model(propname='throat.diffusive_conductance',
model=f)
Explanation: Note that we had to supply the molecular weights (MA and MB) as well as the diffusion volumes (vA and vB). This model also requires knowing the temperature and pressure, but by default it will look in 'pore.temperature' and 'pore.pressure'.
Next we need to define a physics object with the diffusive conductance, which is also available in the model libary:
End of explanation
fd = op.algorithms.FickianDiffusion(network=pn, phase=air)
fd.set_value_BC(pores=pn.pores('left'), values=1)
fd.set_value_BC(pores=pn.pores('right'), values=0)
fd.run()
print(fd.rate(pores=pn.pores('left')))
Explanation: Lastly we can run the Fickian diffusion simulation to get the diffusion rate across the domain:
End of explanation
print('Diffusivity before changing parameter:', air['pore.diffusivity'][0])
air.models['pore.diffusivity']['vA'] = 15.9
air.regenerate_models()
print('Diffusivity after:', air['pore.diffusivity'][0])
Explanation: Updating parameter on an existing model
It's also easy to change parameters of a model since they are all stored on the object (air in this case), meaning you don't have to reassign a new model get new parameters (although that would work). The models and their parameters are stored under the models attribute of each object. This is a dictionary with each model stored under the key match the propname to which is was assigned. For instance, to adjust the diffusion volumes of the Fuller model:
End of explanation
f = op.models.phases.diffusivity.chapman_enskog
air.add_model(propname='pore.diffusivity',
model=f, MA=0.0032, MB=0.0028, sigma_AB=3.467, omega_AB=4.1e-6)
print('Diffusivity after:', air['pore.diffusivity'][0])
Explanation: Replacing an existing model with another
Let's say for some reason that the Fuller model is not suitable. It's easy to go 'shopping' in the models library to retrieve a new model and replace the existing one. In the cell below we grab the Chapman-Enskog model and simply assign it to the same propname that the Fuller model was previously.
End of explanation
phys.regenerate_models()
fd.reset()
fd.run()
print(fd.rate(pores=pn.pores('left')))
Explanation: Note that we don't need to explicitly call regenerate_models since this occurs automatically when a model is added. We do however, have to regenerate phys object so it calculates the diffusive conductance with the new diffusivity:
End of explanation
print('Diffusivity before changing temperaure:', air['pore.diffusivity'][0])
air['pore.temperature'] = 353.0
air.regenerate_models()
print('Diffusivity after:', air['pore.diffusivity'][0])
Explanation: Changing dependent properties
Now consider that you want to find the diffusion rate at higher temperature. This requires recalculating the diffusion coefficient on air, then updating the diffusive conductivity on phys, and finally re-running the simulation. Using pore-scale models this can be done as follows:
End of explanation
phys.regenerate_models()
Explanation: We can see that the diffusivity increased with temperature as expected with the Chapman-Enskog model. We can also propagate this change to the diffusive conductance:
End of explanation
fd.reset()
fd.run()
print(fd.rate(pores=pn.pores('left')))
Explanation: And lastly we can recalculate the diffusion rate:
End of explanation
def new_diffusivity(target, A, B,
temperature='pore.temperature',
pressure='pore.pressure'):
T = target[temperature]
P = target[pressure]
DAB = A*T**3/(P*B)
return DAB
Explanation: Creating Custom Models
Lastly, let's illustrate the ease with which a custom pore-scale model can be defined and used. Let's create a very basic (and incorrect) model:
End of explanation
air.add_model(propname='pore.diffusivity',
model=new_diffusivity,
A=1e-6, B=21)
print(air['pore.diffusivity'])
Explanation: There are a few key points to note in the above code.
Every model must accept a target argument since the regenerate_models mechanism assumes it is present. The target is the object to which the model will be attached. It allows for the looking up of necessary properties that should already be defined, like temperature and pressure. Even if you don't use target within the function it is still required by the pore-scale model mechanism. If it's presence annoys you, you can put a **kwargs at the end of the argument list to accept all arguments that you don't explicitly need.
The input parameters should not be arrays (like an Np-long list of temperature values). Instead you should pass the dictionary key of the values on the target. This allows the model to lookup the latest values for each property when regenerate_models is called. This also enables openpnm to store the model parameters as short strings rather than large arrays.
The function should return either a scalar value or an array of Np or Nt length. In the above case it returns a DAB value for each pore, depending on its local temperature and pressure in the pore. However, if the temperature were set to 'throat.temperature' and pressure to 'throat.pressure', then the above function would return a DAB value for each throat and it could be used to calculate 'throat.diffusivity'.
This function can be placed at the top of the script in which it is used, or it can be placed in a separate file and imported into the script with from my_models import new_diffusivity.
Let's add this model to our air phase and inspect the new values:
End of explanation |
4,907 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Python and Natural Language Technologies
Lecture 5
Decorators and packaging
March 7, 2018
Let's create a greeter function
takes another function as a parameter
greets the caller before calling the function
Step1: Functions are first class objects
they can be passed as arguments
they can be returned from other functions (example later)
Let's create a count_predicate function
takes a iterable and a predicate (yes-no function)
calls the predicate on each element
counts how many times it returns True
same as std
Step2: Q. Can you write this function in fewer lines?
Step3: The predicate parameter
it can be anything 'callable'
1. function
Step4: 2. instance of a class that implements __call__ (functor)
Step5: 3. lambda expression
Step6: Functions can be nested
Step7: the nested function is only accessible from the parent
Step8: Functions can be return values
Step9: Nested functions have access to the parent's scope
closure
Step10: Function factory
Step11: Wrapper function factory
let's create a function that takes a function return an almost identical function
the returned function adds some logging
Step12: Wrapping a function
The function we are going to wrap
Step13: now add some noise
Step14: Bound the original reference to the wrapped function
i.e. greeter should refer to the wrapped function
we don't need the original function
Step15: this turns out to be a frequent operation
Step16: Decorator syntax
a decorator is a function
that takes a function as an argument
returns a wrapped version of the function
the decorator syntax is just syntactic sugar (shorthand) for
Step17: Pie syntax
introduced in PEP318 in Python 2.4
various syntax proposals were suggested, summarized here
Problem 1. Function metadata is lost
Step19: Solution 1. Copy manually
Step20: What about other metadata such as the docstring?
Step22: Solution 2. functools.wraps
Step23: Problem 2. Function arguments
so far we have only decorated functions without parameters
to wrap arbitrary functions, we need to capture a variable number of arguments
remember args and kwargs
Step24: the same mechanism can be used in decorators
Step25: the decorator has only one parameter
Step26: Decorators can take parameters too
they have to return a decorator without parameters - decorator factory
Step27: Decorators can be implemented as classes
__call__ implements the wrapped function
Step28: See also
Decorator overview with some advanced techniques
Step29: Filter
filter creates a list of elements for which a function returns true
Step30: Most comprehensions can be rewritten using map and filter
Step31: Reduce
reduce applies a rolling computation on a sequence
the first argument of reduce is two-argument function
the second argument is the sequence
the result is accumulated in an accumulator
Step32: an initial value for the accumulator may be supplied
Step33: Modules and imports
import statement combines two operations
it searches for the named module,
then it binds the results of that search to a name in the local scope -- official documentation (emphasis mine)
several formats
importing full modules
Step34: importing submodules
Step35: the as keyword binds the module to a different name
Step36: importing more than one module/submodule
Step37: importing functions or classes
Step38: importing everything from a module
NOT recommended because we have no way of knowing where names come from | Python Code:
def greeter(func):
print("Hello")
func()
def say_something():
print("Let's learn some Python.")
greeter(say_something)
# greeter(12)
Explanation: Introduction to Python and Natural Language Technologies
Lecture 5
Decorators and packaging
March 7, 2018
Let's create a greeter function
takes another function as a parameter
greets the caller before calling the function
End of explanation
def count_predicate(predicate, iterable):
true_count = 0
for element in iterable:
if predicate(element) is True:
true_count += 1
return true_count
Explanation: Functions are first class objects
they can be passed as arguments
they can be returned from other functions (example later)
Let's create a count_predicate function
takes a iterable and a predicate (yes-no function)
calls the predicate on each element
counts how many times it returns True
same as std::count in C++
End of explanation
def count_predicate(predicate, iterable):
return sum(predicate(e) for e in iterable)
Explanation: Q. Can you write this function in fewer lines?
End of explanation
def is_even(number):
return number % 2 == 0
numbers = [1, 3, 2, -5, 0, 0]
count_predicate(is_even, numbers)
Explanation: The predicate parameter
it can be anything 'callable'
1. function
End of explanation
class IsEven(object):
def __call__(self, number):
return number % 2 == 0
print(count_predicate(IsEven(), numbers))
IsEven()(123)
i = IsEven()
i(11)
Explanation: 2. instance of a class that implements __call__ (functor)
End of explanation
count_predicate(lambda x: x % 2 == 0, numbers)
Explanation: 3. lambda expression
End of explanation
def parent():
print("I'm the parent function")
def child():
print("I'm the child function")
parent()
Explanation: Functions can be nested
End of explanation
def parent():
print("I'm the parent function")
def child():
print("I'm the child function")
print("Calling the nested function")
child()
parent()
# parent.child # raises AttributeError
Explanation: the nested function is only accessible from the parent
End of explanation
def parent():
print("I'm the parent function")
def child():
print("I'm the child function")
return child
child_func = parent()
print("Calling child")
child_func()
print("\nUsing parent's return value right away")
parent()()
Explanation: Functions can be return values
End of explanation
def parent(value):
def child():
print("I'm the nested function. "
"The parent's value is {}".format(value))
return child
child_func = parent(42)
print("Calling child_func")
child_func()
f1 = parent("abc")
f2 = parent(123)
f1()
f2()
f1 is f2
Explanation: Nested functions have access to the parent's scope
closure
End of explanation
def make_func(param):
value = param
def func():
print("I'm the nested function. "
"The parent's value is {}".format(value))
return func
func_11 = make_func(11)
func_abc = make_func("abc")
func_11()
func_abc()
Explanation: Function factory
End of explanation
def add_noise(func):
def wrapped_with_noise():
print("Calling function {}".format(func.__name__))
func()
print("{} finished.".format(func.__name__))
return wrapped_with_noise
Explanation: Wrapper function factory
let's create a function that takes a function return an almost identical function
the returned function adds some logging
End of explanation
def noiseless_function():
print("This is not noise")
noiseless_function()
Explanation: Wrapping a function
The function we are going to wrap:
End of explanation
noisy_function = add_noise(noiseless_function)
noisy_function()
Explanation: now add some noise
End of explanation
def greeter():
print("Hello")
print(id(greeter))
greeter = add_noise(greeter)
greeter()
print(id(greeter))
Explanation: Bound the original reference to the wrapped function
i.e. greeter should refer to the wrapped function
we don't need the original function
End of explanation
def friendly_greeter():
print("Hello friend")
def rude_greeter():
print("Hey you")
friendly_greeter = add_noise(friendly_greeter)
rude_greeter = add_noise(rude_greeter)
friendly_greeter()
rude_greeter()
Explanation: this turns out to be a frequent operation
End of explanation
@add_noise
def informal_greeter():
print("Yo")
# informal_greeter = add_noise(informal_greeter)
informal_greeter()
Explanation: Decorator syntax
a decorator is a function
that takes a function as an argument
returns a wrapped version of the function
the decorator syntax is just syntactic sugar (shorthand) for:
python
func = decorator(func)
End of explanation
informal_greeter.__name__
Explanation: Pie syntax
introduced in PEP318 in Python 2.4
various syntax proposals were suggested, summarized here
Problem 1. Function metadata is lost
End of explanation
def add_noise(func):
def wrapped_with_noise():
print("Calling {}...".format(func.__name__))
func()
print("{} finished.".format(func.__name__))
wrapped_with_noise.__name__ = func.__name__
return wrapped_with_noise
@add_noise
def greeter():
meaningful documentation
print("Hello")
print(greeter.__name__)
Explanation: Solution 1. Copy manually
End of explanation
print(greeter.__doc__)
Explanation: What about other metadata such as the docstring?
End of explanation
from functools import wraps
def add_noise(func):
@wraps(func)
def wrapped_with_noise():
print("Calling {}...".format(func.__name__))
func()
print("{} finished.".format(func.__name__))
wrapped_with_noise.__name__ = func.__name__
return wrapped_with_noise
@add_noise
def greeter():
function that says hello
print("Hello")
print(greeter.__name__)
print(greeter.__doc__)
Explanation: Solution 2. functools.wraps
End of explanation
def function_with_variable_arguments(*args, **kwargs):
print(args)
print(kwargs)
function_with_variable_arguments(1, "apple", tree="peach")
Explanation: Problem 2. Function arguments
so far we have only decorated functions without parameters
to wrap arbitrary functions, we need to capture a variable number of arguments
remember args and kwargs
End of explanation
def add_noise(func):
@wraps(func)
def wrapped_with_noise(*args, **kwargs):
print("Calling {}...".format(func.__name__))
func(*args, **kwargs)
print("{} finished.".format(func.__name__))
return wrapped_with_noise
Explanation: the same mechanism can be used in decorators
End of explanation
@add_noise
def personal_greeter(name):
print("Hello {}".format(name))
personal_greeter("John")
Explanation: the decorator has only one parameter: func, the function to wrap
the returned function (wrapped_with_noise) takes arbitrary parameters: args, kwargs
it calls func, the decorator's argument with arbitrary parameters
End of explanation
def decorator_with_param(param1, param2=None):
print("Creating a new decorator: {0}, {1}".format(
param1, param2))
def actual_decorator(func):
@wraps(func)
def wrapper(*args, **kwargs):
print("Wrapper function {}".format(
func.__name__))
print("Params: {0}, {1}".format(param1, param2))
return func(*args, **kwargs)
return wrapper
return actual_decorator
@decorator_with_param(42, "abc")
def personal_greeter(name):
print("Hello {}".format(name))
@decorator_with_param(4)
def personal_greeter2(name):
print("Hello {}".format(name))
print("\nCalling personal_greeter")
personal_greeter("Mary")
def hello(name):
print("Hello {}".format(name))
hello = decorator_with_param(1, 2)(hello)
hello("john")
Explanation: Decorators can take parameters too
they have to return a decorator without parameters - decorator factory
End of explanation
class MyDecorator(object):
def __init__(self, func):
self.func_to_wrap = func
wraps(func)(self)
def __call__(self, *args, **kwargs):
print("before func {}".format(self.func_to_wrap.__name__))
res = self.func_to_wrap(*args, **kwargs)
print("after func {}".format(self.func_to_wrap.__name__))
return res
@MyDecorator
def foo():
print("bar")
foo()
Explanation: Decorators can be implemented as classes
__call__ implements the wrapped function
End of explanation
def double(e):
return e * 2
l = [2, 3, "abc"]
list(map(double, l))
map(double, l)
list(map(lambda x: x * 2, [2, 3, "abc"]))
Explanation: See also
Decorator overview with some advanced techniques: https://www.youtube.com/watch?v=9oyr0mocZTg
A very deep dive into decorators: https://www.youtube.com/watch?v=7jGtDGxgwEY
Functional Python: map, filter and reduce
Python has 3 built-in functions that originate from functional programming.
Map
map applies a function on each element of a sequence
End of explanation
def is_even(n):
return n % 2 == 0
l = [2, 3, -1, 0, 2]
list(filter(is_even, l))
list(filter(lambda x: x % 2 == 0, range(8)))
Explanation: Filter
filter creates a list of elements for which a function returns true
End of explanation
l = [2, 3, 0, -1, 2, 0, 1]
signum = [x / abs(x) if x != 0 else x for x in l]
print(signum)
list(map(lambda x: x / abs(x) if x != 0 else 0, l))
even = [x for x in l if x % 2 == 0]
print(even)
print(list(filter(lambda x: x % 2 == 0, l)))
Explanation: Most comprehensions can be rewritten using map and filter
End of explanation
from functools import reduce
l = [1, 2, -1, 4]
reduce(lambda x, y: x*y, l)
Explanation: Reduce
reduce applies a rolling computation on a sequence
the first argument of reduce is two-argument function
the second argument is the sequence
the result is accumulated in an accumulator
End of explanation
reduce(lambda x, y: x*y, l, 10)
reduce(lambda x, y: max(x, y), l)
reduce(max, l)
reduce(lambda x, y: x + int(y % 2 == 0) * y, l, 0)
Explanation: an initial value for the accumulator may be supplied
End of explanation
import sys
", ".join(dir(sys))
Explanation: Modules and imports
import statement combines two operations
it searches for the named module,
then it binds the results of that search to a name in the local scope -- official documentation (emphasis mine)
several formats
importing full modules
End of explanation
from os import path
try:
os
except NameError:
print("os does not seem to be defined")
try:
path
print("path found")
except NameError:
print("path does not seem to be defined")
Explanation: importing submodules
End of explanation
import os as os_module
try:
os
except NameError:
print("os does not seem to be defined")
try:
os_module
print("os_module found")
except NameError:
print("os_module does not seem to be defined")
Explanation: the as keyword binds the module to a different name:
End of explanation
# import os, sys
from sys import stdin, stderr, stdout
Explanation: importing more than one module/submodule
End of explanation
from argparse import ArgumentParser
import inspect
inspect.isclass(ArgumentParser)
Explanation: importing functions or classes
End of explanation
from os import *
try:
makedirs
stat
print("everything found")
except NameError:
print("Something not found")
Explanation: importing everything from a module
NOT recommended because we have no way of knowing where names come from
End of explanation |
4,908 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
text[:100]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (50, 55)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
unique_text = set(text)
vocab_to_int = {word: i for i, word in enumerate(unique_text)}
int_to_vocab = dict(enumerate(unique_text))
return (vocab_to_int, int_to_vocab)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
pun_dict = {'.': '<Peroid>', ',': '<Comma>', '"': '<QuotationMark>',
';': '<Semicolon>', '!': '<ExclamationMark>', '?': 'QuestionMark',
'(': '<LeftParentheses>', ')': '<RightParentheses>', '--': '<Dash>',
'\n': 'Return'}
return pun_dict
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
inputs = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
return (inputs, targets, learning_rate)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
cell = tf.contrib.rnn.MultiRNNCell([lstm])
initial_state = tf.identity(cell.zero_state(batch_size, dtype=tf.int32), name='initial_state')
return (cell, initial_state)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
embeddings = tf.Variable(tf.random_normal([vocab_size, embed_dim], -1 ,1))
return tf.nn.embedding_lookup(embeddings, input_data)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name='final_state')
return (outputs, final_state)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
embeddings = get_embed(input_data, vocab_size, embed_dim)
output, final_state = build_rnn(cell, embeddings)
logits = tf.contrib.layers.fully_connected(output, vocab_size, activation_fn = None,
weights_initializer=tf.truncated_normal_initializer(stddev=1/np.sqrt(2)),
biases_initializer=tf.zeros_initializer())
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
batches = len(int_text)//(batch_size * seq_length)
full = batches * batch_size * seq_length
x_raw = np.array(int_text[:full])
y_raw = np.array(int_text[1:full+1])
y_raw = np.roll(x_raw, -1)
x_batch = np.split(x_raw.reshape(batch_size,-1), batches, 1)
y_batch = np.split(y_raw.reshape(batch_size,-1), batches, 1)
return np.array(list(zip(x_batch, y_batch)))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
End of explanation
# Number of Epochs
num_epochs = 800
# Batch Size
batch_size = 512
# RNN Size
rnn_size = 512
# Embedding Dimension Size
embed_dim = 256
# Sequence Length
seq_length = 26#25
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 128
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
#_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
_, s_vocab_to_int, s_int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
input_tensor = loaded_graph.get_tensor_by_name('input:0')
initial_state_tensor = loaded_graph.get_tensor_by_name('initial_state:0')
final_state_tensor = loaded_graph.get_tensor_by_name('final_state:0')
probs_tensor = loaded_graph.get_tensor_by_name('probs:0')
return (input_tensor, initial_state_tensor, final_state_tensor, probs_tensor)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
pick = np.random.choice(np.arange(len(int_to_vocab)), p=probabilities)
return int_to_vocab[pick]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'homer_simpson'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.device('/gpu:0'):
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
4,909 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018 by D. Koehn, notebook style sheet by L.A. Barba, N.C. Clementi
Step1: Performance optimization of the 2D acoustic finite difference modelling code
During the last class, it took us only 15 minutes to develop a 2D acoustic FD code based on the 1D code. However, with a runtime of roughly 3 minutes, the performance of this "vanilla" Python implementation was quite underwhelming. Therefore, the aim of this lesson is to optimize the performance of this code.
Let's start with a slightly modified version of the original code. Basically, I moved the computation of the analytical solution outside the main code, the discretization parameters $nx,\; nz,\; nt,\; dx,\; dz,\; dt$ are also fixed in order to minimize the input to the FD modelling function.
Step2: You know what happened the last time, we executed the cell below. We had to wait 3 minutes until the modelling run finished. So for safety reasons I commented the code execution and defined the runtime. You should adapt the value of the timing measurement t_vanilla_python by the value of your computer.
Step3: Just-In-Time (JIT) code compilation with Numba
The poor performance of the vanilla Python code is due to the nested FOR loops to compute the 2nd spatial FD derivatives. We can optimize the performance using the Numba library for Python which turns Python functions into C-style compiled functions using LLVM. A nice introduction to Numba was presented at the SciPy conference 2016 by Gil Forsyth & Lorena Barba with the title
Numba
Step4: The associated Jupyter notebooks can be cloned from here.
First, we have to install Numba, which is quite easy using Anaconda
Step5: The only thing, we modify in our original Python code is to add the function decorator
@jit(nopython=True)
which tags the function FD_2D_acoustic_JIT to be compiled
Step6: Another approach to get rid of the nested FOR-loops is to use Numpy array operations
Step7: So JIT could also improve the performance of the code using NumPy array operations, but the performance of the compiled code with the nested FOR loops has a slight edge in terms of performance.
Comparison with a C++ implementation
How does the performance of the JIT-codes compare to a C++ bully code? I invested 1 hour to write this C++ code, which is similar to the 2D acoustic FD Python code.
In order to use similar matrix data structures in C++ as in Python, I use the Eigen library
Step8: The C++ code performance is comparable with the JIT version of the Python code using NumPy operations, which is quite impressive considering the simple Python code optimization using JIT.
To check if the optimized codes are not only fast but still produce reasonable modelling results, it is a good idea to check if the seismograms of the optimized codes still coincide with the analytical solution.
Step9: Finally, we produce some nice bar charts to compare the performance of the different codes developed in this Jupyter notebook
Step10: Is this the best result we can achieve or are further code improvements possible?
Using domain decomposition with the Message-Passing Interface MPI to distribute the workload over multiple CPU cores, combined with a partioning of the tasks in each domain using Multithreading can significantly improve the code performance. One key is the manual optimization of CPU and GPU kernels, especially regarding memory access times or communication between MPI processes. As an example I plotted the runtime and speedup for the same homogeneous acoustic problem from this Jupyter notebook using the 2D acoustic modelling code DENISE Black-Edition which only relies on MPI
Step11: Using less than 2 cores, the JIT compiled Python code with a runtime of 353 ms is faster than the MPI code. Utilizing more cores, the DENISE code leads to a steady runtime decrease. However, notice that the speedup is not linear anymore when using 16 cores or more. This can be explained by excessive communication time between the MPI processes, when the domain sizes decreases. More details about MPI and Multithreading optimizations are beyond the scope of the TEW2 course, but will be the topic of a future HPC lecture ...
To get an idea about the difference between JIT optimized Python codes and manually optimized codes, I recommend a SciPy 2016 talk by Andreas Klöckner | Python Code:
# Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = '../../style/custom.css'
HTML(open(css_file, "r").read())
Explanation: Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018 by D. Koehn, notebook style sheet by L.A. Barba, N.C. Clementi
End of explanation
# Import Libraries
# ----------------
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from pylab import rcParams
# Ignore Warning Messages
# -----------------------
import warnings
warnings.filterwarnings("ignore")
# Definition of modelling parameters
# ----------------------------------
xmax = 500.0 # maximum spatial extension of the 1D model in x-direction (m)
zmax = xmax # maximum spatial extension of the 1D model in z-direction(m)
dx = 1.0 # grid point distance in x-direction
dz = dx # grid point distance in z-direction
tmax = 0.502 # maximum recording time of the seismogram (s)
dt = 0.0010 # time step
vp0 = 580. # P-wave speed in medium (m/s)
# acquisition geometry
xr = 330.0 # x-receiver position (m)
zr = xr # z-receiver position (m)
xsrc = 250.0 # x-source position (m)
zsrc = 250.0 # z-source position (m)
f0 = 40. # dominant frequency of the source (Hz)
t0 = 4. / f0 # source time shift (s)
# define model discretization
# ---------------------------
nx = (int)(xmax/dx) # number of grid points in x-direction
print('nx = ',nx)
nz = (int)(zmax/dz) # number of grid points in x-direction
print('nz = ',nz)
nt = (int)(tmax/dt) # maximum number of time steps
print('nt = ',nt)
ir = (int)(xr/dx) # receiver location in grid in x-direction
jr = (int)(zr/dz) # receiver location in grid in z-direction
isrc = (int)(xsrc/dx) # source location in grid in x-direction
jsrc = (int)(zsrc/dz) # source location in grid in x-direction
# Source time function (Gaussian)
# -------------------------------
src = np.zeros(nt + 1)
time = np.linspace(0 * dt, nt * dt, nt)
# 1st derivative of a Gaussian
src = -2. * (time - t0) * (f0 ** 2) * (np.exp(- (f0 ** 2) * (time - t0) ** 2))
# Analytical solution
# -------------------
G = time * 0.
# Initialize coordinates
# ----------------------
x = np.arange(nx)
x = x * dx # coordinates in x-direction (m)
z = np.arange(nz)
z = z * dz # coordinates in z-direction (m)
# calculate source-receiver distance
r = np.sqrt((x[ir] - x[isrc])**2 + (z[jr] - z[jsrc])**2)
for it in range(nt): # Calculate Green's function (Heaviside function)
if (time[it] - r / vp0) >= 0:
G[it] = 1. / (2 * np.pi * vp0**2) * (1. / np.sqrt(time[it]**2 - (r/vp0)**2))
Gc = np.convolve(G, src * dt)
Gc = Gc[0:nt]
lim = Gc.max() # get limit value from the maximum amplitude
# Initialize model (assume homogeneous model)
# -------------------------------------------
vp = np.zeros((nx,nz))
vp2 = np.zeros((nx,nz))
vp = vp + vp0 # initialize wave velocity in model
vp2 = vp**2
# 2D Wave Propagation (Finite Difference Solution)
# ------------------------------------------------
def FD_2D_acoustic_vanilla():
# Initialize empty pressure arrays
# --------------------------------
p = np.zeros((nx,nz)) # p at time n (now)
pold = np.zeros((nx,nz)) # p at time n-1 (past)
pnew = np.zeros((nx,nz)) # p at time n+1 (present)
d2px = np.zeros((nx,nz)) # 2nd spatial x-derivative of p
d2pz = np.zeros((nx,nz)) # 2nd spatial z-derivative of p
# Initialize empty seismogram
# ---------------------------
seis = np.zeros(nt)
# Calculate Partial Derivatives
# -----------------------------
for it in range(nt):
# FD approximation of spatial derivative by 3 point operator
for i in range(1, nx - 1):
for j in range(1, nz - 1):
d2px[i,j] = (p[i + 1,j] - 2 * p[i,j] + p[i - 1,j]) / dx ** 2
d2pz[i,j] = (p[i,j + 1] - 2 * p[i,j] + p[i,j - 1]) / dz ** 2
# Time Extrapolation
# ------------------
pnew = 2 * p - pold + vp ** 2 * dt ** 2 * (d2px + d2pz)
# Add Source Term at isrc
# -----------------------
# Absolute pressure w.r.t analytical solution
pnew[isrc,jsrc] = pnew[isrc,jsrc] + src[it] / (dx * dz) * dt ** 2
# Remap Time Levels
# -----------------
pold, p = p, pnew
# Output of Seismogram
# -----------------
seis[it] = p[ir,jr]
Explanation: Performance optimization of the 2D acoustic finite difference modelling code
During the last class, it took us only 15 minutes to develop a 2D acoustic FD code based on the 1D code. However, with a runtime of roughly 3 minutes, the performance of this "vanilla" Python implementation was quite underwhelming. Therefore, the aim of this lesson is to optimize the performance of this code.
Let's start with a slightly modified version of the original code. Basically, I moved the computation of the analytical solution outside the main code, the discretization parameters $nx,\; nz,\; nt,\; dx,\; dz,\; dt$ are also fixed in order to minimize the input to the FD modelling function.
End of explanation
#%%time
#FD_2D_acoustic_vanilla()
t_vanilla_python = 190.0
Explanation: You know what happened the last time, we executed the cell below. We had to wait 3 minutes until the modelling run finished. So for safety reasons I commented the code execution and defined the runtime. You should adapt the value of the timing measurement t_vanilla_python by the value of your computer.
End of explanation
from IPython.display import YouTubeVideo
YouTubeVideo('SzBi3xdEF2Y')
Explanation: Just-In-Time (JIT) code compilation with Numba
The poor performance of the vanilla Python code is due to the nested FOR loops to compute the 2nd spatial FD derivatives. We can optimize the performance using the Numba library for Python which turns Python functions into C-style compiled functions using LLVM. A nice introduction to Numba was presented at the SciPy conference 2016 by Gil Forsyth & Lorena Barba with the title
Numba: Tell those C++ bullies to get lost
End of explanation
# import JIT from Numba
from numba import jit
Explanation: The associated Jupyter notebooks can be cloned from here.
First, we have to install Numba, which is quite easy using Anaconda:
conda install numba
From the Numba library we import jit:
End of explanation
%%time
t_JIT_python = # runtime of JIT compiled Python code (s)
Explanation: The only thing, we modify in our original Python code is to add the function decorator
@jit(nopython=True)
which tags the function FD_2D_acoustic_JIT to be compiled:
Let's run the code:
End of explanation
%%time
seis_FD_numpy = FD_2D_acoustic_numpy()
t_numpy_python = # runtime of JIT compiled Python code (s)
%%time
seis_FD_numpy_JIT = FD_2D_acoustic_numpy_JIT()
t_numpy_python_JIT = # runtime of JIT compiled Python code (s)
Explanation: Another approach to get rid of the nested FOR-loops is to use Numpy array operations:
End of explanation
# Compile and run C++-version
# load seismogram
time_Cpp, seis_FD_Cpp = np.loadtxt('seis.dat', delimiter='\t', skiprows=0, unpack=True)
t_cxx = # runtime of C++ code (s)
Explanation: So JIT could also improve the performance of the code using NumPy array operations, but the performance of the compiled code with the nested FOR loops has a slight edge in terms of performance.
Comparison with a C++ implementation
How does the performance of the JIT-codes compare to a C++ bully code? I invested 1 hour to write this C++ code, which is similar to the 2D acoustic FD Python code.
In order to use similar matrix data structures in C++ as in Python, I use the Eigen library:
www.eigen.tuxfamily.org/
which also allows auto-vectorization of matrix-matrix products. To compile the source code, you need a C++ compiler, e.g. g++ and the Eigen library which can either be compiled from source or installed using the package manager of your Linux distribution.
I also recommend to use the moderate optimization option -O2 and Advanced Vector Extensions (AVX) -mavx during code compilation for a significant performance increase of the code. Let's compile and run the code:
End of explanation
# Compare FD Seismogram with analytical solution
# ----------------------------------------------
# Define figure size
rcParams['figure.figsize'] = 12, 5
plt.plot(time, seis_FD_JIT, 'b-',lw=3,label="FD solution (Python + JIT)") # plot FD seismogram
plt.plot(time, seis_FD_numpy, 'g-',lw=3,label="FD solution (Python + NumPy)") # plot FD seismogram
plt.plot(time, seis_FD_numpy_JIT, 'k-',lw=3,label="FD solution (Python + NumPy + JIT)") # plot FD seismogram
plt.plot(time_Cpp, seis_FD_Cpp, 'y-',lw=3,label="FD solution (C++)") # plot FD seismogram
Analy_seis = plt.plot(time,Gc,'r--',lw=3,label="Analytical solution") # plot analytical solution
plt.xlim(time[0], time[-1])
plt.title('Seismogram')
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
plt.legend()
plt.grid()
plt.show()
Explanation: The C++ code performance is comparable with the JIT version of the Python code using NumPy operations, which is quite impressive considering the simple Python code optimization using JIT.
To check if the optimized codes are not only fast but still produce reasonable modelling results, it is a good idea to check if the seismograms of the optimized codes still coincide with the analytical solution.
End of explanation
# define codes
codes = ('Python', 'Python + JIT', 'Python + NumPy', 'Python + NumPy + JIT', 'C++')
y_pos = np.arange(len(codes))
# runtime
performance = [t_vanilla_python,t_JIT_python,t_numpy_python,t_numpy_python_JIT,t_cxx]
# speed-up with respect to the non-optimized code
speedup = [t_vanilla_python/t_vanilla_python,
t_vanilla_python/t_JIT_python,
t_vanilla_python/t_numpy_python,
t_vanilla_python/t_numpy_python_JIT,
t_vanilla_python/t_cxx]
# Define figure size
rcParams['figure.figsize'] = 12, 8
# Plot runtimes of 2D acoustic FD codes
ax1 = plt.subplot(211)
plt.bar(y_pos, performance, align='center', alpha=0.5)
plt.xticks(y_pos, codes)
plt.ylabel('Runtime (s)')
plt.title('Performance comparison of 2D acoustic FD modelling codes')
# make tick labels invisible
plt.setp(ax1.get_xticklabels(), visible=False)
# Plot speedup of 2D acoustic FD codes
ax2 = plt.subplot(212, sharex=ax1)
plt.bar(y_pos, speedup, align='center', alpha=0.5,color='r')
plt.xticks(y_pos, codes)
plt.ylabel('Speedup ()')
plt.tight_layout()
plt.show()
Explanation: Finally, we produce some nice bar charts to compare the performance of the different codes developed in this Jupyter notebook:
End of explanation
# Define figure size
rcParams['figure.figsize'] = 12, 6
# number of cores and runtime
cores = np.array([1, 2, 4, 8, 16, 25])
t_denise = np.array([0.926, 0.482, 0.234, 0.123, 0.067, 0.055])
# speed-up with respect to the runtime of the 1st core
# and linear speedup
speedup_denise = t_denise[0] / t_denise
linear_speedup = cores
# plot runtime
ax2 = plt.subplot(121)
plt.plot(cores, t_denise, 'rs-',lw=3,label="Runtime")
plt.title('Runtime DENISE Black-Edition')
plt.xlabel('Number of cores')
plt.ylabel('Runtime (s)')
plt.legend()
plt.grid()
# plot speedup
ax2 = plt.subplot(122)
plt.plot(cores, speedup_denise, 'bs-',lw=3,label="Speedup DENISE")
plt.plot(cores, linear_speedup, 'k-',lw=3,label="Linear speedup")
plt.title('Speedup DENISE Black-Edition')
plt.xlabel('Number of cores')
plt.ylabel('Speedup')
plt.legend()
plt.grid()
plt.show()
Explanation: Is this the best result we can achieve or are further code improvements possible?
Using domain decomposition with the Message-Passing Interface MPI to distribute the workload over multiple CPU cores, combined with a partioning of the tasks in each domain using Multithreading can significantly improve the code performance. One key is the manual optimization of CPU and GPU kernels, especially regarding memory access times or communication between MPI processes. As an example I plotted the runtime and speedup for the same homogeneous acoustic problem from this Jupyter notebook using the 2D acoustic modelling code DENISE Black-Edition which only relies on MPI:
End of explanation
from IPython.display import YouTubeVideo
YouTubeVideo('Zz_6P5qAJck')
Explanation: Using less than 2 cores, the JIT compiled Python code with a runtime of 353 ms is faster than the MPI code. Utilizing more cores, the DENISE code leads to a steady runtime decrease. However, notice that the speedup is not linear anymore when using 16 cores or more. This can be explained by excessive communication time between the MPI processes, when the domain sizes decreases. More details about MPI and Multithreading optimizations are beyond the scope of the TEW2 course, but will be the topic of a future HPC lecture ...
To get an idea about the difference between JIT optimized Python codes and manually optimized codes, I recommend a SciPy 2016 talk by Andreas Klöckner:
High Performance with Python: Architectures, Approaches & Applications
End of explanation |
4,910 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
데이터 분석의 소개
데이터 분석이란 용어는 상당히 광범위한 용어이므로 여기에서는 통계적 분석과 머신 러닝이라는 두가지 세부 영역에 국한하여 데이터 분석을 설명하도록 한다.
데이터 분석이란
데이터 분석이란 어떤 데이터가 주어졌을 때
데이터 간의 관계를 파악하거나
파악된 관계를 사용하여 원하는 데이터를 만들어 내는 과정
으로 볼 수 있다.
데이터 분석의 유형
예측(Prediction)
클러스터링(Clustering)
모사(Approximation)
데이터 분석의 유형은 다양하다. 그 중 널리 사용되는 전형적인 방법으로는 예측(prediction), 클러스터링(clustering), 모사(approximation) 등이 있다.
예측은 어떤 특정한 유형의 입력 데이터가 주어지면 데이터 분석의 결과로 다른 유형의 데이터가 출력될 수 있는 경우이다. 예를 들어 다음과 같은 작업은 예측이라고 할 수 있다.
부동산의 위치, 주거환경, 건축연도 등이 주어지면 해당 부동산의 가치를 추정한다.
꽃잎의 길이와 너비 등 식물의 외형적 특징이 주어지면 해당하는 식물의 종을 알아낸다.
얼굴 사진이 주어지면 해당하는 사람의 이름을 출력한다.
현재 바둑돌의 위치들이 주어지면 다음 바둑돌의 위치를 지정한다.
데이터 분석에서 말하는 예측이라는 용어는 시간상으로 미래의 의미는 포함하지 않는다. 시계열 분석에서는 시간상으로 미래의 데이터를 예측하는 경우가 있는데 이 경우에는 forecasting 이라는 용어를 사용한다.
클러스터링은 동일한 유형의 데이터가 주어졌을 때 유사한 데이터끼리 몇개의 집합으로 묶는 작업을 말한다. 예를 들어 다음과 같은 작업은 클러스터링이다.
지리적으로 근처에 있는 지점들을 찾아낸다.
유사한 단어를 포함하고 있는 문서의 집합을 만든다.
유사한 상품을 구해한 고객 리스트를 생성한다.
모사는 대량의 데이터를 대표하는 소량의 데이터를 생성하는 작업이다.
이미지나 음악 데이터를 압축한다.
주식 시장의 움직임을 대표하는 지수 정보를 생성한다.
입력 데이터와 출력 데이터
만약 예측을 하고자 한다면 데이터의 유형을 입력 데이터와 출력 데이터라는 두 가지 유형의 데이터로 분류할 수 있어야 한다.
입력 $X$
분석의 기반이 되는 데이터
독립변수 independent variable
feature, covariates, regressor, explanatory, attributes, stimulus
출력 $Y$
추정하거나 예측하고자 하는 데이터
종속변수 dependent variable
target, response, regressand, label, tag
예측 작업에서 생성하고자 하는 데이터 유형을 출력 데이터라고 하고 이 출력 데이터를 생성하기 위해 사용되는 기반 데이터를 입력 데이터라고 한다. 회귀 분석에서는 독립 변수와 종속 변수라는 용어를 사용하며 머신 러닝에서는 일반적으로 feature와 target이라는 용어를 사용한다.
입력 데이터와 출력 데이터의 개념을 사용하여 예측 작업을 다시 설명하면 다음과 같다.
$X$와 $Y$의 관계 $f$를 파악 한다.
$$Y = f(X)$$
현실적으로는 정확한 $f$를 구할 수 없으므로 $f$와 가장 유사한, 재현 가능한 $\hat{f}$을 구한다.
$$Y \approx \hat{f}(X)$$
$\hat{f}$를 가지고 있다면 $X$가 주어졌을 때 $Y$의 예측(추정) $\hat{Y} = \hat{f}(X)$를 구할 수 있다.
확률론적으로 $\hat{f}$는
$$ \hat{f}(X) = \arg\max_{Y} P(Y | X) $$
예측은 입력 데이터와 출력 데이터 사이의 관계를 분석하고 분석한 관계를 이용하여 출력 데이터가 아직 없거나 혹은 가지고 있는 출력 여러가지 이유로 부정확하다고 생각될 경우 보다 합리적인 출력값을 추정하는 것이다. 따라서 입력 데이터와 출력 데이터의 관계에 대한 분석이 완료된 이후에는 출력 데이터가 필요 없어도 일단 관계를 분석하기 위해서는 입력 데이터와 출력 데이터가 모두 존재해야 한다.
데이터의 유형
예측 작업에서 생성하고자 하는 데이터 유형을 출력 데이터라고 하고 이 출력 데이터를 생성하기 위해 사용되는 기반 데이터를 입력 데이터라고 한다.
예측은 입력 데이터와 출력 데이터 사이의 관계를 분석하고 분석한 관계를 이용하여 출력 데이터가 아직 없거나 혹은 가지고 있는 출력 여러가지 이유로 부정확하다고 생각될 경우 보다 합리적인 출력값을 추정하는 것이다. 따라서 입력 데이터와 출력 데이터의 관계에 대한 분석이 완료된 이후에는 출력 데이터가 필요 없어도 일단 관계를 분석하기 위해서는 입력 데이터와 출력 데이터가 모두 존재해야 한다.
입력 데이터와 출력 데이터의 개념을 사용하여 예측 작업을 다시 설명하면 다음과 같다.
통계적 분석이나 머신 러닝 등의 데이터 분석에 사용되는 데이터의 유형은 다음 숫자 혹은 카테고리 값 중 하나이어야 한다.
숫자 (number)
크기/순서 비교 가능
무한 집합
카테고리값 (category)
크기/순서 비교 불가
유한 집합
Class
Binary Class
Multi Class
숫자와 카테고리 값의 차이점은 두 개의 데이터가 있을 때 이들의 크기나 혹은 순서를 비교할 수 있는가 없는가의 차이이다. 예를 들어 10kg과 23kg이라는 두 개의 무게는 23이 "크다"라고 크기를 비교하는 것이 가능하다. 그러나 "홍길동"과 "이순신"이라는 두 개의 카테고리 값은 크기를 비교할 수 없다.
일반적으로 카테고리 값은 가질 수 있는 경우의 수가 제한되어 있다. 이러한 경우의 수를 클래스(class)라고 부르는데 동전을 던진 결과와 같이 "앞면(head)" 혹은 "뒷면(tail)"처럼 두 가지 경우만 가능하면 이진 클래스(binary class)라고 한다. 주사위를 던져서 나온 숫자와 같이 세 개 이상의 경우가 가능하면 다중 클래스(multi class)라고 한다.
카테고리값처럼 비 연속적이지만 숫자처럼 비교 가능한 경우도 있을 수 있다. 예를 들어 학점이 "A", "B", "C", "D"와 같이 표시되는 경우는 비 연속적이고 기호로 표시되지만 크기 혹은 순서를 비교할 수 있다. 이러한 경우는 서수형(ordinal) 자료라고 하며 분석의 목표에 따라 숫자로 표기하기도 하고 일반적인 카테고리값으로 표기하기도 한다.
데이터의 변환 및 전처리
숫자가 아닌 이미지나 텍스트 정보는 분석에 목표에 따라 숫자나 카테고리 값으로 변환해야 한다. 이 때 해당하는 원본 정보를 손실 없이 그대로 숫자나 카테고리 값으로 바꿀 수도 있지만 대부분의 경우에는 분석에 필요한 핵심적인 정보만을 뽑아낸다. 이러한 과정은 데이터의 전처리(preprocessing)에 해당한다.
Step1: 예측도 출력 데이터가 숫자인가 카테고리 값인가에 따라 회귀 분석(regression analysis)과 분류(classification)로 구분된다.
회귀분석(regression)
얻고자 하는 답 $Y$가 숫자
분류 (classification)
얻고자 하는 답 $Y$가 카테고리 값
| | X=Real | X=Category |
| ------------- | --------------- | --------------- |
|Y=Real | Regression | ANOVA |
|Y=Category | Classification | Classification |
회귀 분석
Step2: 분류
Iris
| setosa | versicolor | virginica |
|---|---|---|
|<img src="https
Step3: 클러스터링(Clustering)
Step4: 모사(Approximation) | Python Code:
from sklearn.datasets import load_digits
digits = load_digits()
plt.imshow(digits.images[0], interpolation='nearest');
plt.grid(False)
digits.images[0]
from sklearn.datasets import fetch_20newsgroups
news = fetch_20newsgroups()
print(news.data[0])
from sklearn.feature_extraction.text import TfidfVectorizer
vec = TfidfVectorizer(stop_words="english").fit(news.data[:100])
data = vec.transform(news.data[:100])
data
plt.imshow(data.toarray()[:,:200], interpolation='nearest');
Explanation: 데이터 분석의 소개
데이터 분석이란 용어는 상당히 광범위한 용어이므로 여기에서는 통계적 분석과 머신 러닝이라는 두가지 세부 영역에 국한하여 데이터 분석을 설명하도록 한다.
데이터 분석이란
데이터 분석이란 어떤 데이터가 주어졌을 때
데이터 간의 관계를 파악하거나
파악된 관계를 사용하여 원하는 데이터를 만들어 내는 과정
으로 볼 수 있다.
데이터 분석의 유형
예측(Prediction)
클러스터링(Clustering)
모사(Approximation)
데이터 분석의 유형은 다양하다. 그 중 널리 사용되는 전형적인 방법으로는 예측(prediction), 클러스터링(clustering), 모사(approximation) 등이 있다.
예측은 어떤 특정한 유형의 입력 데이터가 주어지면 데이터 분석의 결과로 다른 유형의 데이터가 출력될 수 있는 경우이다. 예를 들어 다음과 같은 작업은 예측이라고 할 수 있다.
부동산의 위치, 주거환경, 건축연도 등이 주어지면 해당 부동산의 가치를 추정한다.
꽃잎의 길이와 너비 등 식물의 외형적 특징이 주어지면 해당하는 식물의 종을 알아낸다.
얼굴 사진이 주어지면 해당하는 사람의 이름을 출력한다.
현재 바둑돌의 위치들이 주어지면 다음 바둑돌의 위치를 지정한다.
데이터 분석에서 말하는 예측이라는 용어는 시간상으로 미래의 의미는 포함하지 않는다. 시계열 분석에서는 시간상으로 미래의 데이터를 예측하는 경우가 있는데 이 경우에는 forecasting 이라는 용어를 사용한다.
클러스터링은 동일한 유형의 데이터가 주어졌을 때 유사한 데이터끼리 몇개의 집합으로 묶는 작업을 말한다. 예를 들어 다음과 같은 작업은 클러스터링이다.
지리적으로 근처에 있는 지점들을 찾아낸다.
유사한 단어를 포함하고 있는 문서의 집합을 만든다.
유사한 상품을 구해한 고객 리스트를 생성한다.
모사는 대량의 데이터를 대표하는 소량의 데이터를 생성하는 작업이다.
이미지나 음악 데이터를 압축한다.
주식 시장의 움직임을 대표하는 지수 정보를 생성한다.
입력 데이터와 출력 데이터
만약 예측을 하고자 한다면 데이터의 유형을 입력 데이터와 출력 데이터라는 두 가지 유형의 데이터로 분류할 수 있어야 한다.
입력 $X$
분석의 기반이 되는 데이터
독립변수 independent variable
feature, covariates, regressor, explanatory, attributes, stimulus
출력 $Y$
추정하거나 예측하고자 하는 데이터
종속변수 dependent variable
target, response, regressand, label, tag
예측 작업에서 생성하고자 하는 데이터 유형을 출력 데이터라고 하고 이 출력 데이터를 생성하기 위해 사용되는 기반 데이터를 입력 데이터라고 한다. 회귀 분석에서는 독립 변수와 종속 변수라는 용어를 사용하며 머신 러닝에서는 일반적으로 feature와 target이라는 용어를 사용한다.
입력 데이터와 출력 데이터의 개념을 사용하여 예측 작업을 다시 설명하면 다음과 같다.
$X$와 $Y$의 관계 $f$를 파악 한다.
$$Y = f(X)$$
현실적으로는 정확한 $f$를 구할 수 없으므로 $f$와 가장 유사한, 재현 가능한 $\hat{f}$을 구한다.
$$Y \approx \hat{f}(X)$$
$\hat{f}$를 가지고 있다면 $X$가 주어졌을 때 $Y$의 예측(추정) $\hat{Y} = \hat{f}(X)$를 구할 수 있다.
확률론적으로 $\hat{f}$는
$$ \hat{f}(X) = \arg\max_{Y} P(Y | X) $$
예측은 입력 데이터와 출력 데이터 사이의 관계를 분석하고 분석한 관계를 이용하여 출력 데이터가 아직 없거나 혹은 가지고 있는 출력 여러가지 이유로 부정확하다고 생각될 경우 보다 합리적인 출력값을 추정하는 것이다. 따라서 입력 데이터와 출력 데이터의 관계에 대한 분석이 완료된 이후에는 출력 데이터가 필요 없어도 일단 관계를 분석하기 위해서는 입력 데이터와 출력 데이터가 모두 존재해야 한다.
데이터의 유형
예측 작업에서 생성하고자 하는 데이터 유형을 출력 데이터라고 하고 이 출력 데이터를 생성하기 위해 사용되는 기반 데이터를 입력 데이터라고 한다.
예측은 입력 데이터와 출력 데이터 사이의 관계를 분석하고 분석한 관계를 이용하여 출력 데이터가 아직 없거나 혹은 가지고 있는 출력 여러가지 이유로 부정확하다고 생각될 경우 보다 합리적인 출력값을 추정하는 것이다. 따라서 입력 데이터와 출력 데이터의 관계에 대한 분석이 완료된 이후에는 출력 데이터가 필요 없어도 일단 관계를 분석하기 위해서는 입력 데이터와 출력 데이터가 모두 존재해야 한다.
입력 데이터와 출력 데이터의 개념을 사용하여 예측 작업을 다시 설명하면 다음과 같다.
통계적 분석이나 머신 러닝 등의 데이터 분석에 사용되는 데이터의 유형은 다음 숫자 혹은 카테고리 값 중 하나이어야 한다.
숫자 (number)
크기/순서 비교 가능
무한 집합
카테고리값 (category)
크기/순서 비교 불가
유한 집합
Class
Binary Class
Multi Class
숫자와 카테고리 값의 차이점은 두 개의 데이터가 있을 때 이들의 크기나 혹은 순서를 비교할 수 있는가 없는가의 차이이다. 예를 들어 10kg과 23kg이라는 두 개의 무게는 23이 "크다"라고 크기를 비교하는 것이 가능하다. 그러나 "홍길동"과 "이순신"이라는 두 개의 카테고리 값은 크기를 비교할 수 없다.
일반적으로 카테고리 값은 가질 수 있는 경우의 수가 제한되어 있다. 이러한 경우의 수를 클래스(class)라고 부르는데 동전을 던진 결과와 같이 "앞면(head)" 혹은 "뒷면(tail)"처럼 두 가지 경우만 가능하면 이진 클래스(binary class)라고 한다. 주사위를 던져서 나온 숫자와 같이 세 개 이상의 경우가 가능하면 다중 클래스(multi class)라고 한다.
카테고리값처럼 비 연속적이지만 숫자처럼 비교 가능한 경우도 있을 수 있다. 예를 들어 학점이 "A", "B", "C", "D"와 같이 표시되는 경우는 비 연속적이고 기호로 표시되지만 크기 혹은 순서를 비교할 수 있다. 이러한 경우는 서수형(ordinal) 자료라고 하며 분석의 목표에 따라 숫자로 표기하기도 하고 일반적인 카테고리값으로 표기하기도 한다.
데이터의 변환 및 전처리
숫자가 아닌 이미지나 텍스트 정보는 분석에 목표에 따라 숫자나 카테고리 값으로 변환해야 한다. 이 때 해당하는 원본 정보를 손실 없이 그대로 숫자나 카테고리 값으로 바꿀 수도 있지만 대부분의 경우에는 분석에 필요한 핵심적인 정보만을 뽑아낸다. 이러한 과정은 데이터의 전처리(preprocessing)에 해당한다.
End of explanation
from sklearn.datasets import load_boston
boston = load_boston()
print(boston.DESCR)
df = pd.DataFrame(boston.data, columns=boston.feature_names)
df["MEDV"] = boston.target
df.tail()
sns.pairplot(df[["MEDV", "RM", "AGE", "DIS"]]);
from sklearn.linear_model import LinearRegression
predicted = LinearRegression().fit(boston.data, boston.target).predict(boston.data)
plt.scatter(boston.target, predicted, c='r', s=20);
plt.xlabel("Target");
plt.ylabel("Predicted");
Explanation: 예측도 출력 데이터가 숫자인가 카테고리 값인가에 따라 회귀 분석(regression analysis)과 분류(classification)로 구분된다.
회귀분석(regression)
얻고자 하는 답 $Y$가 숫자
분류 (classification)
얻고자 하는 답 $Y$가 카테고리 값
| | X=Real | X=Category |
| ------------- | --------------- | --------------- |
|Y=Real | Regression | ANOVA |
|Y=Category | Classification | Classification |
회귀 분석
End of explanation
from sklearn.datasets import load_iris
iris = load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)
sy = pd.Series(iris.target, dtype="category")
sy = sy.cat.rename_categories(iris.target_names)
df['species'] = sy
df.tail()
sns.pairplot(df, hue="species");
from sklearn.cross_validation import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
X = iris.data[:, [2,3]]
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
sc = StandardScaler()
sc.fit(X_train)
X_train_std = sc.transform(X_train)
X_test_std = sc.transform(X_test)
model = SVC(kernel="linear", C=1.0, random_state=0)
model.fit(X_train_std, y_train)
XX_min = X_train_std[:, 0].min() - 1; XX_max = X_train_std[:, 0].max() + 1;
YY_min = X_train_std[:, 1].min() - 1; YY_max = X_train_std[:, 1].max() + 1;
XX, YY = np.meshgrid(np.linspace(XX_min, XX_max, 1000), np.linspace(YY_min, YY_max, 1000))
ZZ = model.predict(np.c_[XX.ravel(), YY.ravel()]).reshape(XX.shape)
cmap = mpl.colors.ListedColormap(sns.color_palette("Set2"))
plt.contourf(XX, YY, ZZ, cmap=cmap)
plt.scatter(X_train_std[y_train == 0, 0], X_train_std[y_train == 0, 1], c=cmap.colors[0], s=100)
plt.scatter(X_train_std[y_train == 1, 0], X_train_std[y_train == 1, 1], c=cmap.colors[2], s=100)
plt.scatter(X_train_std[y_train == 2, 0], X_train_std[y_train == 2, 1], c=cmap.colors[1], s=100)
plt.xlim(XX_min, XX_max);
plt.ylim(YY_min, YY_max);
Explanation: 분류
Iris
| setosa | versicolor | virginica |
|---|---|---|
|<img src="https://upload.wikimedia.org/wikipedia/commons/5/56/Kosaciec_szczecinkowaty_Iris_setosa.jpg" style="width: 10em; height: 10em" />|<img src="https://upload.wikimedia.org/wikipedia/commons/4/41/Iris_versicolor_3.jpg" style="width: 10em; height: 10em" />|<img src="https://upload.wikimedia.org/wikipedia/commons/9/9f/Iris_virginica.jpg" style="width: 10em; height: 10em" />|
End of explanation
from sklearn.cluster import DBSCAN
from sklearn.datasets.samples_generator import make_blobs
from sklearn.preprocessing import StandardScaler
X, labels_true = make_blobs(n_samples=750, centers=[[1, 1], [-1, -1], [1, -1]], cluster_std=0.4, random_state=0)
X = StandardScaler().fit_transform(X)
db = DBSCAN(eps=0.3, min_samples=10).fit(X)
n_clusters_ = len(set(db.labels_)) - (1 if -1 in db.labels_ else 0)
unique_labels = set(db.labels_)
f = plt.figure()
f.add_subplot(1,2,1)
plt.plot(X[:, 0], X[:, 1], 'o', markerfacecolor='k', markeredgecolor='k', markersize=10)
plt.title('Raw Data')
f.add_subplot(1,2,2)
colors = plt.cm.Spectral(np.linspace(0, 1, len(unique_labels)))
for k, col in zip(unique_labels, colors):
if k == -1: col = 'k'
class_member_mask = (db.labels_ == k)
xy = X[class_member_mask]
plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=col, markeredgecolor='k', markersize=10);
plt.title('Estimated number of clusters: %d' % n_clusters_);
Explanation: 클러스터링(Clustering)
End of explanation
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin
from sklearn.datasets import load_sample_image
from sklearn.utils import shuffle
n_colors = 64
china = load_sample_image("china.jpg")
china = np.array(china, dtype=np.float64) / 255
w, h, d = original_shape = tuple(china.shape)
assert d == 3
image_array = np.reshape(china, (w * h, d))
image_array_sample = shuffle(image_array, random_state=0)[:1000]
kmeans = KMeans(n_clusters=n_colors, random_state=0).fit(image_array_sample)
labels = kmeans.predict(image_array)
def recreate_image(codebook, labels, w, h):
d = codebook.shape[1]
image = np.zeros((w, h, d))
label_idx = 0
for i in range(w):
for j in range(h):
image[i][j] = codebook[labels[label_idx]]
label_idx += 1
return image
print("{0:,} bytes -> {1:,} bytes : {2:5.2f}%".format(image_array.nbytes, labels.nbytes, float(labels.nbytes) / image_array.nbytes * 100.0))
f = plt.figure()
ax1 = f.add_subplot(1,2,1)
plt.axis('off')
plt.title('Original image (96,615 colors)')
ax1.imshow(china);
ax2 = f.add_subplot(1,2,2)
plt.axis('off')
plt.title('Quantized image (64 colors, K-Means)')
ax2.imshow(recreate_image(kmeans.cluster_centers_, labels, w, h));
Explanation: 모사(Approximation)
End of explanation |
4,911 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Principal Component Analysis in Shogun
By Abhijeet Kislay (GitHub ID
Step1: Some Formal Background (Skip if you just want code examples)
PCA is a useful statistical technique that has found application in fields such as face recognition and image compression, and is a common technique for finding patterns in data of high dimension.
In machine learning problems data is often high dimensional - images, bag-of-word descriptions etc. In such cases we cannot expect the training data to densely populate the space, meaning that there will be large parts in which little is known about the data. Hence it is expected that only a small number of directions are relevant for describing the data to a reasonable accuracy.
The data vectors may be very high dimensional, they will therefore typically lie closer to a much lower dimensional 'manifold'.
Here we concentrate on linear dimensional reduction techniques. In this approach a high dimensional datapoint $\mathbf{x}$ is 'projected down' to a lower dimensional vector $\mathbf{y}$ by
Step2: Step 2
Step3: Step 3
Step4: Step 5
Step5: In the above figure, the blue line is a good fit of the data. It shows the most significant relationship between the data dimensions.
It turns out that the eigenvector with the $highest$ eigenvalue is the $principle$ $component$ of the data set.
Form the matrix $\mathbf{E}=[\mathbf{e}^1,...,\mathbf{e}^M].$
Here $\text{M}$ represents the target dimension of our final projection
Step6: Step 6
Step7: Step 5 and Step 6 can be applied directly with Shogun's PCA preprocessor (from next example). It has been done manually here to show the exhaustive nature of Principal Component Analysis.
Step 7
Step8: The new data is plotted below
Step9: PCA on a 3d data.
Step1
Step10: Step 2
Step11: Step 3 & Step 4
Step12: Steps 5
Step13: Step 7
Step15: PCA Performance
Uptill now, we were using the EigenValue Decomposition method to compute the transformation matrix$\text{(N>D)}$ but for the next example $\text{(N<D)}$ we will be using Singular Value Decomposition.
Practical Example
Step16: Lets have a look on the data
Step17: Represent every image $I_i$ as a vector $\Gamma_i$
Step18: Step 2
Step19: Step 3 & Step 4
Step20: These 20 eigenfaces are not sufficient for a good image reconstruction. Having more eigenvectors gives us the most flexibility in the number of faces we can reconstruct. Though we are adding vectors with low variance, they are in directions of change nonetheless, and an external image that is not in our database could in fact need these eigenvectors to get even relatively close to it. But at the same time we must also keep in mind that adding excessive eigenvectors results in addition of little or no variance, slowing down the process.
Clearly a tradeoff is required.
We here set for M=100.
Step 5
Step21: Step 7
Step22: Recognition part.
In our face recognition process using the Eigenfaces approach, in order to recognize an unseen image, we proceed with the same preprocessing steps as applied to the training images.
Test images are represented in terms of eigenface coefficients by projecting them into face space$\text{(eigenspace)}$ calculated during training. Test sample is recognized by measuring the similarity distance between the test sample and all samples in the training. The similarity measure is a metric of distance calculated between two vectors. Traditional Eigenface approach utilizes $\text{Euclidean distance}$.
Step23: Here we have to project our training image as well as the test image on the PCA subspace.
The Eigenfaces method then performs face recognition by
Step24: Shogun's way of doing things | Python Code:
%pylab inline
%matplotlib inline
# import all shogun classes
from modshogun import *
Explanation: Principal Component Analysis in Shogun
By Abhijeet Kislay (GitHub ID: <a href='https://github.com/kislayabhi'>kislayabhi</a>)
This notebook is about finding Principal Components (<a href="http://en.wikipedia.org/wiki/Principal_component_analysis">PCA</a>) of data (<a href="http://en.wikipedia.org/wiki/Unsupervised_learning">unsupervised</a>) in Shogun. Its <a href="http://en.wikipedia.org/wiki/Dimensionality_reduction">dimensional reduction</a> capabilities are further utilised to show its application in <a href="http://en.wikipedia.org/wiki/Data_compression">data compression</a>, image processing and <a href="http://en.wikipedia.org/wiki/Facial_recognition_system">face recognition</a>.
End of explanation
#number of data points.
n=100
#generate a random 2d line(y1 = mx1 + c)
m = random.randint(1,10)
c = random.randint(1,10)
x1 = random.random_integers(-20,20,n)
y1=m*x1+c
#generate the noise.
noise=random.random_sample([n]) * random.random_integers(-35,35,n)
#make the noise orthogonal to the line y=mx+c and add it.
x=x1 + noise*m/sqrt(1+square(m))
y=y1 + noise/sqrt(1+square(m))
twoD_obsmatrix=array([x,y])
#to visualise the data we must plot it.
rcParams['figure.figsize'] = 7, 7
figure,axis=subplots(1,1)
xlim(-50,50)
ylim(-50,50)
axis.plot(twoD_obsmatrix[0,:],twoD_obsmatrix[1,:],'o',color='green',markersize=6)
#the line from which we generated the data is plotted in red
axis.plot(x1[:],y1[:],linewidth=0.3,color='red')
title('One-Dimensional sub-space with noise')
xlabel("x axis")
_=ylabel("y axis")
Explanation: Some Formal Background (Skip if you just want code examples)
PCA is a useful statistical technique that has found application in fields such as face recognition and image compression, and is a common technique for finding patterns in data of high dimension.
In machine learning problems data is often high dimensional - images, bag-of-word descriptions etc. In such cases we cannot expect the training data to densely populate the space, meaning that there will be large parts in which little is known about the data. Hence it is expected that only a small number of directions are relevant for describing the data to a reasonable accuracy.
The data vectors may be very high dimensional, they will therefore typically lie closer to a much lower dimensional 'manifold'.
Here we concentrate on linear dimensional reduction techniques. In this approach a high dimensional datapoint $\mathbf{x}$ is 'projected down' to a lower dimensional vector $\mathbf{y}$ by:
$$\mathbf{y}=\mathbf{F}\mathbf{x}+\text{const}.$$
where the matrix $\mathbf{F}\in\mathbb{R}^{\text{M}\times \text{D}}$, with $\text{M}<\text{D}$. Here $\text{M}=\dim(\mathbf{y})$ and $\text{D}=\dim(\mathbf{x})$.
From the above scenario, we assume that
The number of principal components to use is $\text{M}$.
The dimension of each data point is $\text{D}$.
The number of data points is $\text{N}$.
We express the approximation for datapoint $\mathbf{x}^n$ as:$$\mathbf{x}^n \approx \mathbf{c} + \sum\limits_{i=1}^{\text{M}}y_i^n \mathbf{b}^i \equiv \tilde{\mathbf{x}}^n.$$
* Here the vector $\mathbf{c}$ is a constant and defines a point in the lower dimensional space.
* The $\mathbf{b}^i$ define vectors in the lower dimensional space (also known as 'principal component coefficients' or 'loadings').
* The $y_i^n$ are the low dimensional co-ordinates of the data.
Our motive is to find the reconstruction $\tilde{\mathbf{x}}^n$ given the lower dimensional representation $\mathbf{y}^n$(which has components $y_i^n,i = 1,...,\text{M})$. For a data space of dimension $\dim(\mathbf{x})=\text{D}$, we hope to accurately describe the data using only a small number $(\text{M}\ll \text{D})$ of coordinates of $\mathbf{y}$.
To determine the best lower dimensional representation it is convenient to use the square distance error between $\mathbf{x}$ and its reconstruction $\tilde{\mathbf{x}}$:$$\text{E}(\mathbf{B},\mathbf{Y},\mathbf{c})=\sum\limits_{n=1}^{\text{N}}\sum\limits_{i=1}^{\text{D}}[x_i^n - \tilde{x}i^n]^2.$$
* Here the basis vectors are defined as $\mathbf{B} = [\mathbf{b}^1,...,\mathbf{b}^\text{M}]$ (defining $[\text{B}]{i,j} = b_i^j$).
* Corresponding low dimensional coordinates are defined as $\mathbf{Y} = [\mathbf{y}^1,...,\mathbf{y}^\text{N}].$
* Also, $x_i^n$ and $\tilde{x}_i^n$ represents the coordinates of the data points for the original and the reconstructed data respectively.
* The bias $\mathbf{c}$ is given by the mean of the data $\sum_n\mathbf{x}^n/\text{N}$.
Therefore, for simplification purposes we centre our data, so as to set $\mathbf{c}$ to zero. Now we concentrate on finding the optimal basis $\mathbf{B}$( which has the components $\mathbf{b}^i, i=1,...,\text{M} $).
Deriving the optimal linear reconstruction
To find the best basis vectors $\mathbf{B}$ and corresponding low dimensional coordinates $\mathbf{Y}$, we may minimize the sum of squared differences between each vector $\mathbf{x}$ and its reconstruction $\tilde{\mathbf{x}}$:
$\text{E}(\mathbf{B},\mathbf{Y}) = \sum\limits_{n=1}^{\text{N}}\sum\limits_{i=1}^{\text{D}}\left[x_i^n - \sum\limits_{j=1}^{\text{M}}y_j^nb_i^j\right]^2 = \text{trace} \left( (\mathbf{X}-\mathbf{B}\mathbf{Y})^T(\mathbf{X}-\mathbf{B}\mathbf{Y}) \right)$
where $\mathbf{X} = [\mathbf{x}^1,...,\mathbf{x}^\text{N}].$
Considering the above equation under the orthonormality constraint $\mathbf{B}^T\mathbf{B} = \mathbf{I}$ (i.e the basis vectors are mutually orthogonal and of unit length), we differentiate it w.r.t $y_k^n$. The squared error $\text{E}(\mathbf{B},\mathbf{Y})$ therefore has zero derivative when:
$y_k^n = \sum_i b_i^kx_i^n$
By substituting this solution in the above equation, the objective becomes
$\text{E}(\mathbf{B}) = (\text{N}-1)\left[\text{trace}(\mathbf{S}) - \text{trace}\left(\mathbf{S}\mathbf{B}\mathbf{B}^T\right)\right],$
where $\mathbf{S}$ is the sample covariance matrix of the data.
To minimise equation under the constraint $\mathbf{B}^T\mathbf{B} = \mathbf{I}$, we use a set of Lagrange Multipliers $\mathbf{L}$, so that the objective is to minimize:
$-\text{trace}\left(\mathbf{S}\mathbf{B}\mathbf{B}^T\right)+\text{trace}\left(\mathbf{L}\left(\mathbf{B}^T\mathbf{B} - \mathbf{I}\right)\right).$
Since the constraint is symmetric, we can assume that $\mathbf{L}$ is also symmetric. Differentiating with respect to $\mathbf{B}$ and equating to zero we obtain that at the optimum
$\mathbf{S}\mathbf{B} = \mathbf{B}\mathbf{L}$.
This is a form of eigen-equation so that a solution is given by taking $\mathbf{L}$ to be diagonal and $\mathbf{B}$ as the matrix whose columns are the corresponding eigenvectors of $\mathbf{S}$. In this case,
$\text{trace}\left(\mathbf{S}\mathbf{B}\mathbf{B}^T\right) =\text{trace}(\mathbf{L}),$
which is the sum of the eigenvalues corresponding to the eigenvectors forming $\mathbf{B}$. Since we wish to minimise $\text{E}(\mathbf{B})$, we take the eigenvectors with the largest corresponding eigenvalues.
Whilst the solution to this eigen-problem is unique, this only serves to define the solution subspace since one may rotate and scale $\mathbf{B}$ and $\mathbf{Y}$ such that the value of the squared loss is exactly the same. The justification for choosing the non-rotated eigen solution is given by the additional requirement that the principal components corresponds to directions of maximal variance.
Maximum variance criterion
We aim to find that single direction $\mathbf{b}$ such that, when the data is projected onto this direction, the variance of this projection is maximal amongst all possible such projections.
The projection of a datapoint onto a direction $\mathbf{b}$ is $\mathbf{b}^T\mathbf{x}^n$ for a unit length vector $\mathbf{b}$. Hence the sum of squared projections is: $$\sum\limits_{n}\left(\mathbf{b}^T\mathbf{x}^n\right)^2 = \mathbf{b}^T\left[\sum\limits_{n}\mathbf{x}^n(\mathbf{x}^n)^T\right]\mathbf{b} = (\text{N}-1)\mathbf{b}^T\mathbf{S}\mathbf{b} = \lambda(\text{N} - 1)$$
which ignoring constants, is simply the negative of the equation for a single retained eigenvector $\mathbf{b}$(with $\mathbf{S}\mathbf{b} = \lambda\mathbf{b}$). Hence the optimal single $\text{b}$ which maximises the projection variance is given by the eigenvector corresponding to the largest eigenvalues of $\mathbf{S}.$ The second largest eigenvector corresponds to the next orthogonal optimal direction and so on. This explains why, despite the squared loss equation being invariant with respect to arbitrary rotation of the basis vectors, the ones given by the eigen-decomposition have the additional property that they correspond to directions of maximal variance. These maximal variance directions found by PCA are called the $\text{principal} $ $\text{directions}.$
There are two eigenvalue methods through which shogun can perform PCA namely
* Eigenvalue Decomposition Method.
* Singular Value Decomposition.
EVD vs SVD
The EVD viewpoint requires that one compute the eigenvalues and eigenvectors of the covariance matrix, which is the product of $\mathbf{X}\mathbf{X}^\text{T}$, where $\mathbf{X}$ is the data matrix. Since the covariance matrix is symmetric, the matrix is diagonalizable, and the eigenvectors can be normalized such that they are orthonormal:
$\mathbf{S}=\frac{1}{\text{N}-1}\mathbf{X}\mathbf{X}^\text{T},$
where the $\text{D}\times\text{N}$ matrix $\mathbf{X}$ contains all the data vectors: $\mathbf{X}=[\mathbf{x}^1,...,\mathbf{x}^\text{N}].$
Writing the $\text{D}\times\text{N}$ matrix of eigenvectors as $\mathbf{E}$ and the eigenvalues as an $\text{N}\times\text{N}$ diagonal matrix $\mathbf{\Lambda}$, the eigen-decomposition of the covariance $\mathbf{S}$ is
$\mathbf{X}\mathbf{X}^\text{T}\mathbf{E}=\mathbf{E}\mathbf{\Lambda}\Longrightarrow\mathbf{X}^\text{T}\mathbf{X}\mathbf{X}^\text{T}\mathbf{E}=\mathbf{X}^\text{T}\mathbf{E}\mathbf{\Lambda}\Longrightarrow\mathbf{X}^\text{T}\mathbf{X}\tilde{\mathbf{E}}=\tilde{\mathbf{E}}\mathbf{\Lambda},$
where we defined $\tilde{\mathbf{E}}=\mathbf{X}^\text{T}\mathbf{E}$. The final expression above represents the eigenvector equation for $\mathbf{X}^\text{T}\mathbf{X}.$ This is a matrix of dimensions $\text{N}\times\text{N}$ so that calculating the eigen-decomposition takes $\mathcal{O}(\text{N}^3)$ operations, compared with $\mathcal{O}(\text{D}^3)$ operations in the original high-dimensional space. We then can therefore calculate the eigenvectors $\tilde{\mathbf{E}}$ and eigenvalues $\mathbf{\Lambda}$ of this matrix more easily. Once found, we use the fact that the eigenvalues of $\mathbf{S}$ are given by the diagonal entries of $\mathbf{\Lambda}$ and the eigenvectors by
$\mathbf{E}=\mathbf{X}\tilde{\mathbf{E}}\mathbf{\Lambda}^{-1}$
On the other hand, applying SVD to the data matrix $\mathbf{X}$ follows like:
$\mathbf{X}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^\text{T}$
where $\mathbf{U}^\text{T}\mathbf{U}=\mathbf{I}\text{D}$ and $\mathbf{V}^\text{T}\mathbf{V}=\mathbf{I}\text{N}$ and $\mathbf{\Sigma}$ is a diagonal matrix of the (positive) singular values. We assume that the decomposition has ordered the singular values so that the upper left diagonal element of $\mathbf{\Sigma}$ contains the largest singular value.
Attempting to construct the covariance matrix $(\mathbf{X}\mathbf{X}^\text{T})$from this decomposition gives:
$\mathbf{X}\mathbf{X}^\text{T} = \left(\mathbf{U}\mathbf{\Sigma}\mathbf{V}^\text{T}\right)\left(\mathbf{U}\mathbf{\Sigma}\mathbf{V}^\text{T}\right)^\text{T}$
$\mathbf{X}\mathbf{X}^\text{T} = \left(\mathbf{U}\mathbf{\Sigma}\mathbf{V}^\text{T}\right)\left(\mathbf{V}\mathbf{\Sigma}\mathbf{U}^\text{T}\right)$
and since $\mathbf{V}$ is an orthogonal matrix $\left(\mathbf{V}^\text{T}\mathbf{V}=\mathbf{I}\right),$
$\mathbf{X}\mathbf{X}^\text{T}=\left(\mathbf{U}\mathbf{\Sigma}^\mathbf{2}\mathbf{U}^\text{T}\right)$
Since it is in the form of an eigen-decomposition, the PCA solution given by performing the SVD decomposition of $\mathbf{X}$, for which the eigenvectors are then given by $\mathbf{U}$, and corresponding eigenvalues by the square of the singular values.
CPCA Class Reference (Shogun)
CPCA class of Shogun inherits from the CPreprocessor class. Preprocessors are transformation functions that doesn't change the domain of the input features. Specifically, CPCA performs principal component analysis on the input vectors and keeps only the specified number of eigenvectors. On preprocessing, the stored covariance matrix is used to project vectors into eigenspace.
Performance of PCA depends on the algorithm used according to the situation in hand.
Our PCA preprocessor class provides 3 method options to compute the transformation matrix:
$\text{PCA(EVD)}$ sets $\text{PCAmethod == EVD}$ : Eigen Value Decomposition of Covariance Matrix $(\mathbf{XX^T}).$
The covariance matrix $\mathbf{XX^T}$ is first formed internally and then
its eigenvectors and eigenvalues are computed using QR decomposition of the matrix.
The time complexity of this method is $\mathcal{O}(D^3)$ and should be used when $\text{N > D.}$
$\text{PCA(SVD)}$ sets $\text{PCAmethod == SVD}$ : Singular Value Decomposition of feature matrix $\mathbf{X}$.
The transpose of feature matrix, $\mathbf{X^T}$, is decomposed using SVD. $\mathbf{X^T = UDV^T}.$
The matrix V in this decomposition contains the required eigenvectors and
the diagonal entries of the diagonal matrix D correspond to the non-negative
eigenvalues.The time complexity of this method is $\mathcal{O}(DN^2)$ and should be used when $\text{N < D.}$
$\text{PCA(AUTO)}$ sets $\text{PCAmethod == AUTO}$ : This mode automagically chooses one of the above modes for the user based on whether $\text{N>D}$ (chooses $\text{EVD}$) or $\text{N<D}$ (chooses $\text{SVD}$)
PCA on 2D data
Step 1: Get some data
We will generate the toy data by adding orthogonal noise to a set of points lying on an arbitrary 2d line. We expect PCA to recover this line, which is a one-dimensional linear sub-space.
End of explanation
#convert the observation matrix into dense feature matrix.
train_features = RealFeatures(twoD_obsmatrix)
#PCA(EVD) is choosen since N=100 and D=2 (N>D).
#However we can also use PCA(AUTO) as it will automagically choose the appropriate method.
preprocessor = PCA(EVD)
#since we are projecting down the 2d data, the target dim is 1. But here the exhaustive method is detailed by
#setting the target dimension to 2 to visualize both the eigen vectors.
#However, in future examples we will get rid of this step by implementing it directly.
preprocessor.set_target_dim(2)
#Centralise the data by subtracting its mean from it.
preprocessor.init(train_features)
#get the mean for the respective dimensions.
mean_datapoints=preprocessor.get_mean()
mean_x=mean_datapoints[0]
mean_y=mean_datapoints[1]
Explanation: Step 2: Subtract the mean.
For PCA to work properly, we must subtract the mean from each of the data dimensions. The mean subtracted is the average across each dimension. So, all the $x$ values have $\bar{x}$ subtracted, and all the $y$ values have $\bar{y}$ subtracted from them, where:$$\bar{\mathbf{x}} = \frac{\sum\limits_{i=1}^{n}x_i}{n}$$ $\bar{\mathbf{x}}$ denotes the mean of the $x_i^{'s}$
Shogun's way of doing things :
Preprocessor PCA performs principial component analysis on input feature vectors/matrices. It provides an interface to set the target dimension by $\text{set_target_dim method}.$ When the $\text{init()}$ method in $\text{PCA}$ is called with proper
feature matrix $\text{X}$ (with say $\text{N}$ number of vectors and $\text{D}$ feature dimension), a transformation matrix is computed and stored internally.It inherenty also centralizes the data by subtracting the mean from it.
End of explanation
#Get the eigenvectors(We will get two of these since we set the target to 2).
E = preprocessor.get_transformation_matrix()
#Get all the eigenvalues returned by PCA.
eig_value=preprocessor.get_eigenvalues()
e1 = E[:,0]
e2 = E[:,1]
eig_value1 = eig_value[0]
eig_value2 = eig_value[1]
Explanation: Step 3: Calculate the covariance matrix
To understand the relationship between 2 dimension we define $\text{covariance}$. It is a measure to find out how much the dimensions vary from the mean $with$ $respect$ $to$ $each$ $other.$$$cov(X,Y)=\frac{\sum\limits_{i=1}^{n}(X_i-\bar{X})(Y_i-\bar{Y})}{n-1}$$
A useful way to get all the possible covariance values between all the different dimensions is to calculate them all and put them in a matrix.
Example: For a 3d dataset with usual dimensions of $x,y$ and $z$, the covariance matrix has 3 rows and 3 columns, and the values are this:
$$\mathbf{S} = \quad\begin{pmatrix}cov(x,x)&cov(x,y)&cov(x,z)\cov(y,x)&cov(y,y)&cov(y,z)\cov(z,x)&cov(z,y)&cov(z,z)\end{pmatrix}$$
Step 4: Calculate the eigenvectors and eigenvalues of the covariance matrix
Find the eigenvectors $e^1,....e^M$ of the covariance matrix $\mathbf{S}$.
Shogun's way of doing things :
Step 3 and Step 4 are directly implemented by the PCA preprocessor of Shogun toolbar. The transformation matrix is essentially a $\text{D}$$\times$$\text{M}$ matrix, the columns of which correspond to the eigenvectors of the covariance matrix $(\text{X}\text{X}^\text{T})$ having top $\text{M}$ eigenvalues.
End of explanation
#find out the M eigenvectors corresponding to top M number of eigenvalues and store it in E
#Here M=1
#slope of e1 & e2
m1=e1[1]/e1[0]
m2=e2[1]/e2[0]
#generate the two lines
x1=range(-50,50)
x2=x1
y1=multiply(m1,x1)
y2=multiply(m2,x2)
#plot the data along with those two eigenvectors
figure, axis = subplots(1,1)
xlim(-50, 50)
ylim(-50, 50)
axis.plot(x[:], y[:],'o',color='green', markersize=5, label="green")
axis.plot(x1[:], y1[:], linewidth=0.7, color='black')
axis.plot(x2[:], y2[:], linewidth=0.7, color='blue')
p1 = Rectangle((0, 0), 1, 1, fc="black")
p2 = Rectangle((0, 0), 1, 1, fc="blue")
legend([p1,p2],["1st eigenvector","2nd eigenvector"],loc='center left', bbox_to_anchor=(1, 0.5))
title('Eigenvectors selection')
xlabel("x axis")
_=ylabel("y axis")
Explanation: Step 5: Choosing components and forming a feature vector.
Lets visualize the eigenvectors and decide upon which to choose as the $principle$ $component$ of the data set.
End of explanation
#The eigenvector corresponding to higher eigenvalue(i.e eig_value2) is choosen (i.e e2).
#E is the feature vector.
E=e2
Explanation: In the above figure, the blue line is a good fit of the data. It shows the most significant relationship between the data dimensions.
It turns out that the eigenvector with the $highest$ eigenvalue is the $principle$ $component$ of the data set.
Form the matrix $\mathbf{E}=[\mathbf{e}^1,...,\mathbf{e}^M].$
Here $\text{M}$ represents the target dimension of our final projection
End of explanation
#transform all 2-dimensional feature matrices to target-dimensional approximations.
yn=preprocessor.apply_to_feature_matrix(train_features)
#Since, here we are manually trying to find the eigenvector corresponding to the top eigenvalue.
#The 2nd row of yn is choosen as it corresponds to the required eigenvector e2.
yn1=yn[1,:]
Explanation: Step 6: Projecting the data to its Principal Components.
This is the final step in PCA. Once we have choosen the components(eigenvectors) that we wish to keep in our data and formed a feature vector, we simply take the vector and multiply it on the left of the original dataset.
The lower dimensional representation of each data point $\mathbf{x}^n$ is given by
$\mathbf{y}^n=\mathbf{E}^T(\mathbf{x}^n-\mathbf{m})$
Here the $\mathbf{E}^T$ is the matrix with the eigenvectors in rows, with the most significant eigenvector at the top. The mean adjusted data, with data items in each column, with each row holding a seperate dimension is multiplied to it.
Shogun's way of doing things :
Step 6 can be performed by shogun's PCA preprocessor as follows:
The transformation matrix that we got after $\text{init()}$ is used to transform all $\text{D-dim}$ feature matrices (with $\text{D}$ feature dimensions) supplied, via $\text{apply_to_feature_matrix methods}$.This transformation outputs the $\text{M-Dim}$ approximation of all these input vectors and matrices (where $\text{M}$ $\leq$ $\text{min(D,N)}$).
End of explanation
x_new=(yn1 * E[0]) + tile(mean_x,[n,1]).T[0]
y_new=(yn1 * E[1]) + tile(mean_y,[n,1]).T[0]
Explanation: Step 5 and Step 6 can be applied directly with Shogun's PCA preprocessor (from next example). It has been done manually here to show the exhaustive nature of Principal Component Analysis.
Step 7: Form the approximate reconstruction of the original data $\mathbf{x}^n$
The approximate reconstruction of the original datapoint $\mathbf{x}^n$ is given by : $\tilde{\mathbf{x}}^n\approx\text{m}+\mathbf{E}\mathbf{y}^n$
End of explanation
figure, axis = subplots(1,1)
xlim(-50, 50)
ylim(-50, 50)
axis.plot(x[:], y[:],'o',color='green', markersize=5, label="green")
axis.plot(x_new, y_new, 'o', color='blue', markersize=5, label="red")
title('PCA Projection of 2D data into 1D subspace')
xlabel("x axis")
ylabel("y axis")
#add some legend for information
p1 = Rectangle((0, 0), 1, 1, fc="r")
p2 = Rectangle((0, 0), 1, 1, fc="g")
p3 = Rectangle((0, 0), 1, 1, fc="b")
legend([p1,p2,p3],["normal projection","2d data","1d projection"],loc='center left', bbox_to_anchor=(1, 0.5))
#plot the projections in red:
for i in range(n):
axis.plot([x[i],x_new[i]],[y[i],y_new[i]] , color='red')
Explanation: The new data is plotted below
End of explanation
rcParams['figure.figsize'] = 8,8
#number of points
n=100
#generate the data
a=random.randint(1,20)
b=random.randint(1,20)
c=random.randint(1,20)
d=random.randint(1,20)
x1=random.random_integers(-20,20,n)
y1=random.random_integers(-20,20,n)
z1=-(a*x1+b*y1+d)/c
#generate the noise
noise=random.random_sample([n])*random.random_integers(-30,30,n)
#the normal unit vector is [a,b,c]/magnitude
magnitude=sqrt(square(a)+square(b)+square(c))
normal_vec=array([a,b,c]/magnitude)
#add the noise orthogonally
x=x1+noise*normal_vec[0]
y=y1+noise*normal_vec[1]
z=z1+noise*normal_vec[2]
threeD_obsmatrix=array([x,y,z])
#to visualize the data, we must plot it.
from mpl_toolkits.mplot3d import Axes3D
fig = pyplot.figure()
ax=fig.add_subplot(111, projection='3d')
#plot the noisy data generated by distorting a plane
ax.scatter(x, y, z,marker='o', color='g')
ax.set_xlabel('x label')
ax.set_ylabel('y label')
ax.set_zlabel('z label')
legend([p2],["3d data"],loc='center left', bbox_to_anchor=(1, 0.5))
title('Two dimensional subspace with noise')
xx, yy = meshgrid(range(-30,30), range(-30,30))
zz=-(a * xx + b * yy + d) / c
Explanation: PCA on a 3d data.
Step1: Get some data
We generate points from a plane and then add random noise orthogonal to it. The general equation of a plane is: $$\text{a}\mathbf{x}+\text{b}\mathbf{y}+\text{c}\mathbf{z}+\text{d}=0$$
End of explanation
#convert the observation matrix into dense feature matrix.
train_features = RealFeatures(threeD_obsmatrix)
#PCA(EVD) is choosen since N=100 and D=3 (N>D).
#However we can also use PCA(AUTO) as it will automagically choose the appropriate method.
preprocessor = PCA(EVD)
#If we set the target dimension to 2, Shogun would automagically preserve the required 2 eigenvectors(out of 3) according to their
#eigenvalues.
preprocessor.set_target_dim(2)
preprocessor.init(train_features)
#get the mean for the respective dimensions.
mean_datapoints=preprocessor.get_mean()
mean_x=mean_datapoints[0]
mean_y=mean_datapoints[1]
mean_z=mean_datapoints[2]
Explanation: Step 2: Subtract the mean.
End of explanation
#get the required eigenvectors corresponding to top 2 eigenvalues.
E = preprocessor.get_transformation_matrix()
Explanation: Step 3 & Step 4: Calculate the eigenvectors of the covariance matrix
End of explanation
#This can be performed by shogun's PCA preprocessor as follows:
yn=preprocessor.apply_to_feature_matrix(train_features)
Explanation: Steps 5: Choosing components and forming a feature vector.
Since we performed PCA for a target $\dim = 2$ for the $3 \dim$ data, we are directly given
the two required eigenvectors in $\mathbf{E}$
E is automagically filled by setting target dimension = M. This is different from the 2d data example where we implemented this step manually.
Step 6: Projecting the data to its Principal Components.
End of explanation
new_data=dot(E,yn)
x_new=new_data[0,:]+tile(mean_x,[n,1]).T[0]
y_new=new_data[1,:]+tile(mean_y,[n,1]).T[0]
z_new=new_data[2,:]+tile(mean_z,[n,1]).T[0]
#all the above points lie on the same plane. To make it more clear we will plot the projection also.
fig=pyplot.figure()
ax=fig.add_subplot(111, projection='3d')
ax.scatter(x, y, z,marker='o', color='g')
ax.set_xlabel('x label')
ax.set_ylabel('y label')
ax.set_zlabel('z label')
legend([p1,p2,p3],["normal projection","3d data","2d projection"],loc='center left', bbox_to_anchor=(1, 0.5))
title('PCA Projection of 3D data into 2D subspace')
for i in range(100):
ax.scatter(x_new[i], y_new[i], z_new[i],marker='o', color='b')
ax.plot([x[i],x_new[i]],[y[i],y_new[i]],[z[i],z_new[i]],color='r')
Explanation: Step 7: Form the approximate reconstruction of the original data $\mathbf{x}^n$
The approximate reconstruction of the original datapoint $\mathbf{x}^n$ is given by : $\tilde{\mathbf{x}}^n\approx\text{m}+\mathbf{E}\mathbf{y}^n$
End of explanation
rcParams['figure.figsize'] = 10, 10
import os
def get_imlist(path):
Returns a list of filenames for all jpg images in a directory
return [os.path.join(path,f) for f in os.listdir(path) if f.endswith('.pgm')]
#set path of the training images
path_train='../../../data/att_dataset/training/'
#set no. of rows that the images will be resized.
k1=100
#set no. of columns that the images will be resized.
k2=100
filenames = get_imlist(path_train)
filenames = array(filenames)
#n is total number of images that has to be analysed.
n=len(filenames)
Explanation: PCA Performance
Uptill now, we were using the EigenValue Decomposition method to compute the transformation matrix$\text{(N>D)}$ but for the next example $\text{(N<D)}$ we will be using Singular Value Decomposition.
Practical Example : Eigenfaces
The problem with the image representation we are given is its high dimensionality. Two-dimensional $\text{p} \times \text{q}$ grayscale images span a $\text{m=pq}$ dimensional vector space, so an image with $\text{100}\times\text{100}$ pixels lies in a $\text{10,000}$ dimensional image space already.
The question is, are all dimensions really useful for us?
$\text{Eigenfaces}$ are based on the dimensional reduction approach of $\text{Principal Component Analysis(PCA)}$. The basic idea is to treat each image as a vector in a high dimensional space. Then, $\text{PCA}$ is applied to the set of images to produce a new reduced subspace that captures most of the variability between the input images. The $\text{Pricipal Component Vectors}$(eigenvectors of the sample covariance matrix) are called the $\text{Eigenfaces}$. Every input image can be represented as a linear combination of these eigenfaces by projecting the image onto the new eigenfaces space. Thus, we can perform the identfication process by matching in this reduced space. An input image is transformed into the $\text{eigenspace,}$ and the nearest face is identified using a $\text{Nearest Neighbour approach.}$
Step 1: Get some data.
Here data means those Images which will be used for training purposes.
End of explanation
# we will be using this often to visualize the images out there.
def showfig(image):
imgplot=imshow(image, cmap='gray')
imgplot.axes.get_xaxis().set_visible(False)
imgplot.axes.get_yaxis().set_visible(False)
import Image
from scipy import misc
# to get a hang of the data, lets see some part of the dataset images.
fig = pyplot.figure()
title('The Training Dataset')
for i in range(49):
fig.add_subplot(7,7,i+1)
train_img=array(Image.open(filenames[i]).convert('L'))
train_img=misc.imresize(train_img, [k1,k2])
showfig(train_img)
Explanation: Lets have a look on the data:
End of explanation
#To form the observation matrix obs_matrix.
#read the 1st image.
train_img = array(Image.open(filenames[0]).convert('L'))
#resize it to k1 rows and k2 columns
train_img=misc.imresize(train_img, [k1,k2])
#since Realfeatures accepts only data of float64 datatype, we do a type conversion
train_img=array(train_img, dtype='double')
#flatten it to make it a row vector.
train_img=train_img.flatten()
# repeat the above for all images and stack all those vectors together in a matrix
for i in range(1,n):
temp=array(Image.open(filenames[i]).convert('L'))
temp=misc.imresize(temp, [k1,k2])
temp=array(temp, dtype='double')
temp=temp.flatten()
train_img=vstack([train_img,temp])
#form the observation matrix
obs_matrix=train_img.T
Explanation: Represent every image $I_i$ as a vector $\Gamma_i$
End of explanation
train_features = RealFeatures(obs_matrix)
preprocessor=PCA(AUTO)
preprocessor.set_target_dim(100)
preprocessor.init(train_features)
mean=preprocessor.get_mean()
Explanation: Step 2: Subtract the mean
It is very important that the face images $I_1,I_2,...,I_M$ are $centered$ and of the $same$ size
We observe here that the no. of $\dim$ for each image is far greater than no. of training images. This calls for the use of $\text{SVD}$.
Setting the $\text{PCA}$ in the $\text{AUTO}$ mode does this automagically according to the situation.
End of explanation
#get the required eigenvectors corresponding to top 100 eigenvalues
E = preprocessor.get_transformation_matrix()
#lets see how these eigenfaces/eigenvectors look like:
fig1 = pyplot.figure()
title('Top 20 Eigenfaces')
for i in range(20):
a = fig1.add_subplot(5,4,i+1)
eigen_faces=E[:,i].reshape([k1,k2])
showfig(eigen_faces)
Explanation: Step 3 & Step 4: Calculate the eigenvectors and eigenvalues of the covariance matrix.
End of explanation
#we perform the required dot product.
yn=preprocessor.apply_to_feature_matrix(train_features)
Explanation: These 20 eigenfaces are not sufficient for a good image reconstruction. Having more eigenvectors gives us the most flexibility in the number of faces we can reconstruct. Though we are adding vectors with low variance, they are in directions of change nonetheless, and an external image that is not in our database could in fact need these eigenvectors to get even relatively close to it. But at the same time we must also keep in mind that adding excessive eigenvectors results in addition of little or no variance, slowing down the process.
Clearly a tradeoff is required.
We here set for M=100.
Step 5: Choosing components and forming a feature vector.
Since we set target $\dim = 100$ for this $n \dim$ data, we are directly given the $100$ required eigenvectors in $\mathbf{E}$
E is automagically filled. This is different from the 2d data example where we implemented this step manually.
Step 6: Projecting the data to its Principal Components.
The lower dimensional representation of each data point $\mathbf{x}^n$ is given by $$\mathbf{y}^n=\mathbf{E}^T(\mathbf{x}^n-\mathbf{m})$$
End of explanation
re=tile(mean,[n,1]).T[0] + dot(E,yn)
#lets plot the reconstructed images.
fig2 = pyplot.figure()
title('Reconstructed Images from 100 eigenfaces')
for i in range(1,50):
re1 = re[:,i].reshape([k1,k2])
fig2.add_subplot(7,7,i)
showfig(re1)
Explanation: Step 7: Form the approximate reconstruction of the original image $I_n$
The approximate reconstruction of the original datapoint $\mathbf{x}^n$ is given by : $\mathbf{x}^n\approx\text{m}+\mathbf{E}\mathbf{y}^n$
End of explanation
#set path of the training images
path_train='../../../data/att_dataset/testing/'
test_files=get_imlist(path_train)
test_img=array(Image.open(test_files[0]).convert('L'))
rcParams.update({'figure.figsize': (3, 3)})
#we plot the test image , for which we have to identify a good match from the training images we already have
fig = pyplot.figure()
title('The Test Image')
showfig(test_img)
#We flatten out our test image just the way we have done for the other images
test_img=misc.imresize(test_img, [k1,k2])
test_img=array(test_img, dtype='double')
test_img=test_img.flatten()
#We centralise the test image by subtracting the mean from it.
test_f=test_img-mean
Explanation: Recognition part.
In our face recognition process using the Eigenfaces approach, in order to recognize an unseen image, we proceed with the same preprocessing steps as applied to the training images.
Test images are represented in terms of eigenface coefficients by projecting them into face space$\text{(eigenspace)}$ calculated during training. Test sample is recognized by measuring the similarity distance between the test sample and all samples in the training. The similarity measure is a metric of distance calculated between two vectors. Traditional Eigenface approach utilizes $\text{Euclidean distance}$.
End of explanation
#We have already projected our training images into pca subspace as yn.
train_proj = yn
#Projecting our test image into pca subspace
test_proj = dot(E.T, test_f)
Explanation: Here we have to project our training image as well as the test image on the PCA subspace.
The Eigenfaces method then performs face recognition by:
1. Projecting all training samples into the PCA subspace.
2. Projecting the query image into the PCA subspace.
3. Finding the nearest neighbour between the projected training images and the projected query image.
End of explanation
#To get Eucledian Distance as the distance measure use EuclideanDistance.
workfeat = RealFeatures(mat(train_proj))
testfeat = RealFeatures(mat(test_proj).T)
RaRb=EuclideanDistance(testfeat, workfeat)
#The distance between one test image w.r.t all the training is stacked in matrix d.
d=empty([n,1])
for i in range(n):
d[i]= RaRb.distance(0,i)
#The one having the minimum distance is found out
min_distance_index = d.argmin()
iden=array(Image.open(filenames[min_distance_index]))
title('Identified Image')
showfig(iden)
Explanation: Shogun's way of doing things:
Shogun uses CEuclideanDistance class to compute the familiar Euclidean distance for real valued features. It computes the square root of the sum of squared disparity between the corresponding feature dimensions of two data points.
$\mathbf{d(x,x')=}$$\sqrt{\mathbf{\sum\limits_{i=0}^{n}}|\mathbf{x_i}-\mathbf{x'_i}|^2}$
End of explanation |
4,912 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
To build an automaton, simply call translate() with a formula, and a list of options to characterize the automaton you want (those options have the same name as the long options name of the ltl2tgba tool, and they can be abbreviated).
Step1: The call the spot.setup() in the first cells has installed a default style for the graphviz output. If you want to change this style temporarily, you can call the show(style) method explicitely. For instance here is a vertical layout with the default font of GraphViz.
Step2: If you want to add some style options to the existing one, pass a dot to the show() function in addition to your own style options
Step3: The translate() function can also be called with a formula object. Either as a function, or as a method.
Step4: When used as a method, all the arguments are translation options. Here is a monitor
Step5: The following three cells show a formulas for which it makes a difference to select 'small' or 'deterministic'.
Step6: Here is how to build an unambiguous automaton
Step7: Compare with the standard translation
Step8: And here is the automaton above with state-based acceptance
Step9: Some example of running the self-loopization algorithm on an automaton
Step10: Reading from file (see automaton-io.ipynb for more examples).
Step11: Explicit determinization after translation
Step12: Determinization by translate(). The generic option allows any acceptance condition to be used instead of the default generalized Büchi.
Step13: Adding an automatic proposition to all edges
Step14: Adding an atomic proposition to the edge between 0 and 1 | Python Code:
a = spot.translate('(a U b) & GFc & GFd', 'BA', 'complete'); a
Explanation: To build an automaton, simply call translate() with a formula, and a list of options to characterize the automaton you want (those options have the same name as the long options name of the ltl2tgba tool, and they can be abbreviated).
End of explanation
a.show("v")
Explanation: The call the spot.setup() in the first cells has installed a default style for the graphviz output. If you want to change this style temporarily, you can call the show(style) method explicitely. For instance here is a vertical layout with the default font of GraphViz.
End of explanation
a.show(".ast")
Explanation: If you want to add some style options to the existing one, pass a dot to the show() function in addition to your own style options:
End of explanation
f = spot.formula('a U b'); f
spot.translate(f)
f.translate()
Explanation: The translate() function can also be called with a formula object. Either as a function, or as a method.
End of explanation
f.translate('mon')
Explanation: When used as a method, all the arguments are translation options. Here is a monitor:
End of explanation
f = spot.formula('Ga | Gb | Gc'); f
f.translate('ba', 'small').show('.v')
f.translate('ba', 'det').show('v.')
Explanation: The following three cells show a formulas for which it makes a difference to select 'small' or 'deterministic'.
End of explanation
spot.translate('GFa -> GFb', 'unambig')
Explanation: Here is how to build an unambiguous automaton:
End of explanation
spot.translate('GFa -> GFb')
Explanation: Compare with the standard translation:
End of explanation
spot.translate('GFa -> GFb', 'sbacc')
Explanation: And here is the automaton above with state-based acceptance:
End of explanation
a = spot.translate('F(a & X(!a &Xb))', "any"); a
spot.sl(a)
a.is_empty()
Explanation: Some example of running the self-loopization algorithm on an automaton:
End of explanation
%%file example1.aut
HOA: v1
States: 3
Start: 0
AP: 2 "a" "b"
acc-name: Buchi
Acceptance: 4 Inf(0)&Fin(1)&Fin(3) | Inf(2)&Inf(3) | Inf(1)
--BODY--
State: 0 {3}
[t] 0
[0] 1 {1}
[!0] 2 {0}
State: 1 {3}
[1] 0
[0&1] 1 {0}
[!0&1] 2 {2}
State: 2
[!1] 0
[0&!1] 1 {0}
[!0&!1] 2 {0}
--END--
a = spot.automaton('example1.aut')
display(a.show('.a'))
display(spot.remove_fin(a).show('.a'))
display(a.postprocess('TGBA', 'complete').show('.a'))
display(a.postprocess('BA'))
!rm example1.aut
spot.complete(a)
spot.complete(spot.translate('Ga'))
# Using +1 in the display options is a convient way to shift the
# set numbers in the output, as an aid in reading the product.
a1 = spot.translate('a W c'); display(a1.show('.bat'))
a2 = spot.translate('a U b'); display(a2.show('.bat+1'))
# the product should display pairs of states, unless asked not to (using 1).
p = spot.product(a1, a2); display(p.show('.bat')); display(p.show('.bat1'))
Explanation: Reading from file (see automaton-io.ipynb for more examples).
End of explanation
a = spot.translate('FGa')
display(a)
display(a.is_deterministic())
spot.tgba_determinize(a).show('.ba')
Explanation: Explicit determinization after translation:
End of explanation
aut = spot.translate('FGa', 'generic', 'deterministic'); aut
Explanation: Determinization by translate(). The generic option allows any acceptance condition to be used instead of the default generalized Büchi.
End of explanation
import buddy
b = buddy.bdd_ithvar(aut.register_ap('b'))
for e in aut.edges():
e.cond &= b
aut
Explanation: Adding an automatic proposition to all edges
End of explanation
c = buddy.bdd_ithvar(aut.register_ap('c'))
for e in aut.out(0):
if e.dst == 1:
e.cond &= c
aut
Explanation: Adding an atomic proposition to the edge between 0 and 1:
End of explanation |
4,913 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example 1
Step1: Then, create the cell object using the LFPy.Cell
class, specifying the morphology file.
The passive mechanisms
are not switched on by default.
Step2: Then, align apical dendrite with z-axis
Step3: One can now use LFPy.Synapse class to insert a single
synapse onto the soma compartment, and set the spike time(s) using LFPy.Synapse.set_spike_times() method
Step4: We now have what we need in order to calculate the postsynaptic response,
using a built in method LFPy.Cell.simulate() to run the simulation.
Step5: Then
plot the model geometry, synaptic current and somatic potential | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import LFPy
Explanation: Example 1: Post-synaptic response of a single synapse
This is an example of LFPy running in an Jupyter notebook. To run through this example code and produce output, press <shift-Enter> in each code block below.
First step is to import LFPy and other packages for analysis and plotting:
End of explanation
cell = LFPy.Cell(morphology='morphologies/L5_Mainen96_LFPy.hoc', passive=True)
Explanation: Then, create the cell object using the LFPy.Cell
class, specifying the morphology file.
The passive mechanisms
are not switched on by default.
End of explanation
cell.set_rotation(x=4.98919, y=-4.33261, z=0.)
Explanation: Then, align apical dendrite with z-axis:
End of explanation
synapse = LFPy.Synapse(cell,
idx=cell.get_idx("soma[0]"),
syntype='Exp2Syn',
weight=0.005,
e=0,
tau1=0.5,
tau2=2,
record_current=True)
synapse.set_spike_times(np.array([20., 40]))
Explanation: One can now use LFPy.Synapse class to insert a single
synapse onto the soma compartment, and set the spike time(s) using LFPy.Synapse.set_spike_times() method:
End of explanation
cell.simulate()
Explanation: We now have what we need in order to calculate the postsynaptic response,
using a built in method LFPy.Cell.simulate() to run the simulation.
End of explanation
plt.figure(figsize=(12, 9))
plt.subplot(222)
plt.plot(cell.tvec, synapse.i, 'r')
plt.title('synaptic current (pA)')
plt.subplot(224)
plt.plot(cell.tvec, cell.somav, 'k')
plt.title('somatic voltage (mV)')
plt.subplot(121)
plt.plot(cell.x.T, cell.z.T, 'k')
plt.plot(synapse.x, synapse.z,
color='r', marker='o', markersize=10)
plt.axis([-500, 500, -400, 1200])
# savefig('LFPy-example-01.pdf', dpi=200)
Explanation: Then
plot the model geometry, synaptic current and somatic potential:
End of explanation |
4,914 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Some Bayesian AB testing.
Thanks to my overly talented friend Maciej Kula for this example.
We'll reproduce some of his work from a blog post, and learn how to do AB testing with PyMC3.
We'll generate some fake data, and then apply some Bayesian priors to that.
Note that we also set up a treatment_effect variable, which will directly give us posteriors for the quantity of interest, the difference between the test and the control mean (the treatment effect).
When we run the function below, we will obtain a samples from the posterior distribution of $μ_{t}μ_{t}, μ_{c}, μ_{c}$, and σσ
Step3: So we have a small treatment effect, but some of the results are negative.
* This doesn't make any sense, a treatment effect (i.e. the difference between a treatment mean and control mean (or A and B) shouldn't be negative.
* Observations No model is perfect even a bayesian method.
* We probably have misspecificed one of our priors here.
* An in-depth discussion of this is offered on the Lyst blog which is well worth a read.
* When model evaluating you need to be very careful and think long and hard about what the results mean.
* Model evaluation is a very hard problem, and even after several years of doing data science I personally find this very hard.
* One way to resolve this is to carefully pick your priors.
Exercise
Try without looking at the Lyst blog to specify better priors, to improve the model.
* Playing with the models is a good way to improve your intuition.
* Don't worry too much if this was too much at once, you'll have a better understanding of these models by the end of these notebooks.
Hypothesis testing.
How do you test a hypothesis (with frequentism) in Python?
Examples shamelessly stolen from Chris Albon.
We can consider this like an A/B test.
Step4: Now we want to create another distribution which is not uniformly distributed. We'll then apply a test to this, to tell the difference.
Step5: Exercise
Do the Kolmogorov-Smirnov test with a different scipy distribution.
We see that the p-value is greater than 0.5
therefore we can say that this is not statistically significant, meaning both come from the same distribution
Why do we use the Kolmogorov-Smirnov test?
Nonparametric so makes no assumptions about the distributions.
There are other tests which we could use.
Step6: We see that the p-value is less than 0.5, therefore we can say that this is statistically significant, meaning that y is not a uniform distribution.
If you do A/B testing professionally, or work in say Pharma - you can spend a lot of time doing examples like this.
T-tests
t-tests - what are these or 'I hated stats at school too so I forgot all of this'
We'll use one of the t-tests from Scipy.
We expect the variances to be different from these two distributions.
Step8: Is there a Bayesian way to do this?
Or 'Peadar aren't you famous for being a Bayesian'?
Yes there is, there is a BEST (Bayesian Estimation Superseeds the t-test).
We'll use this.
Step9: We see pretty good convergence here, the sampler worked quite well.
Using Jon Sedars plotting functions we can see slightly better plots.
Step10: Model interpretation
Observations One of the advantages of the Bayesian way is that you get more information.
We get more information about the effect size. And credibility intervals built in
Step11: Most of these plots look good except the nu minus one one.
The rest decay quite quickly, indicating not much autocorrelation.
We'll move on from this now but we should be aware that this model isn't well specified.
We can adjust the sampling, the sampler, respecify priors.
Logistic regression - frequentist and Bayesian
Step12: We want to remove the nulls from this data set.
And then we want to filter by only in the United States.
Step13: Feature engineering or picking the covariates.
We want to restrict this study to just some of the variables. Our aim will be to predict if someone earns more than 50K or not.
We'll do a bit of exploring of the data first.
Step14: Exploring the data
Let us get a feel for the parameters.
* We see that age is a tailed distribution.
* Certainly not Gaussian! We don't see much of a correlation between many of the features, with the exception of Age and Age2.
* Hours worked has some interesting behaviour. How would one describe this distribution?
Step15: A machine learning model
We saw this already in the other notebook
Step16: A frequentist model
Let's look at a simple frequentist model
Step17: In statsmodels the thing we are trying to predict comes fist.
This confused me when writing this
Step18: Observations
In this case the McFaddens Pseudo R Squared (there are other variants) is slightly positive but not strongly positive.
One rule of thumb is that if it is between 0.2 and 0.4 this is a good model fit.
This is a bit below this, but still small, so we can interpret this as a 'not so bad' pseudo R squared value.
Let us make a few remarks about Pseudo $R^2$.
Let us recall that a non-pseudo R-squared is a statistic generated in ordinary least squares (OLS) regression in OLS
$$ R^2 = 1 - \frac{\sum_{i=1}^{N}(y_i - \hat{y}i)^2 }{\sum{i=1}^{N}(y_i - \bar{y}_{i})^2}$$
where N is the number of observations in the model, y is the dependent variable, y-bar is the mean of the y-values, and y-hat is the value produced by the model.
There are several approaches to thinking about the pseudo-r squared for dealing with categorical variables etc.
1) $R^2$ as explained variability
2) $R^2$ as improvements from null model to fitted model.
3) $R^2$ as the square of the correlation.
McFaddens Pseudo-R-squared (there are others) is the one used in Statsmodels.
$$R^2 = 1 - \frac{\ln \hat{L} (M_{full})}{\ln \hat{L} (M_{intercept})}$$
Where $\hat{L}$ is estimated likelihood.
and $M_{full}$ is model with predictors and $M_{intercept}$ is model without predictors.
The ratio of the likelihoods suggests the level of improvement over the intercept model offered by the full model.
A likelihood falls between 0 and 1, so the log of a likelihood is less than or equal to zero. If a model has a very low likelihood, then the log of the likelihood will have a larger magnitude than the log of a more likely model. Thus, a small ratio of log likelihoods indicates that the full model is a far better fit than the intercept model.
If comparing two models on the same data, McFadden's would be higher for the model with the greater likelihood.
We can write up the following Bayesian model
Step19: Some results
One of the major benefits that makes Bayesian data analysis worth the extra computational effort in many circumstances is that we can be explicit about our uncertainty. Maximum likelihood returns a number, but how certain can we be that we found the right number? Instead, Bayesian inference returns a distribution over parameter values.
I'll use seaborn to look at the distribution of some of these factors.
Step20: So how do age and education affect the probability of making more than $$50K?$ To answer this question, we can show how the probability of making more than $50K changes with age for a few different education levels. Here, we assume that the number of hours worked per week is fixed at 50. PyMC3 gives us a convenient way to plot the posterior predictive distribution. We need to give the function a linear model and a set of points to evaluate. We will pass in three different linear models
Step21: Each curve shows how the probability of earning more than 50K50K changes with age. The red curve represents 19 years of education, the green curve represents 16 years of education and the blue curve represents 12 years of education. For all three education levels, the probability of making more than $50K increases with age until approximately age 60, when the probability begins to drop off. Notice that each curve is a little blurry. This is because we are actually plotting 100 different curves for each level of education. Each curve is a draw from our posterior distribution. Because the curves are somewhat translucent, we can interpret dark, narrow portions of a curve as places where we have low uncertainty and light, spread out portions of the curve as places where we have somewhat higher uncertainty about our coefficient values.
Step22: Finally, we can find a credible interval (remember kids - credible intervals are Bayesian and confidence intervals are frequentist) for this quantity. This may be the best part about Bayesian statistics
Step23: Model selection
The Deviance Information Criterion (DIC) is a fairly unsophisticated method for comparing the deviance of likelhood across the the sample traces of a model run. However, this simplicity apparently yields quite good results in a variety of cases. We'll run the model with a few changes to see what effect higher order terms have on this model.
One question that was immediately asked was what effect does age have on the model, and why should it be age^2 versus age? We'll use the DIC to answer this question. | Python Code:
def generate_data(no_samples,
treatment_proportion=0.1,
treatment_mu=1.2,
control_mu=1.0,
sigma=0.4):
Generate sample data from the experiment.
rnd = np.random.RandomState(seed=12345)
treatment = rnd.binomial(1, treatment_proportion, size=no_samples)
treatment_outcome = rnd.normal(treatment_mu, sigma, size=no_samples)
control_outcome = rnd.normal(control_mu, sigma, size=no_samples)
observed_outcome = (treatment * treatment_outcome + (1 - treatment) * control_outcome)
return pd.DataFrame({'treatment': treatment, 'outcome': observed_outcome})
def fit_uniform_priors(data):
Fit the data with uniform priors on mu.
# Theano doesn't seem to work unless we
# pull this out into a normal numpy array.
treatment = data['treatment'].values
with pm.Model() as model:
prior_mu=0.01
prior_sigma=0.001
treatment_sigma=0.001
control_mean = pm.Normal('Control mean',
prior_mu,
sd=prior_sigma)
# Specify priors for the difference in means
treatment_effect = pm.Normal('Treatment effect',
0.0,
sd=treatment_sigma)
# Recover the treatment mean
treatment_mean = pm.Deterministic('Treatment mean',
control_mean
+ treatment_effect)
# Specify prior for sigma
sigma = pm.InverseGamma('Sigma',
0.001,
1.1)
# Data model
outcome = pm.Normal('Outcome',
control_mean
+ treatment * treatment_effect,
sd=sigma, observed=data['outcome'])
# Fit
samples = 5000
start = pm.find_MAP()
step = pm.Metropolis()
trace = pm.sample(samples, step, start, njobs=3)
# Discard burn-in
trace = trace[int(samples * 0.5):]
return pm.trace_to_dataframe(trace)
data = generate_data(1000)
fit_uniform_priors(data).describe()
Explanation: Some Bayesian AB testing.
Thanks to my overly talented friend Maciej Kula for this example.
We'll reproduce some of his work from a blog post, and learn how to do AB testing with PyMC3.
We'll generate some fake data, and then apply some Bayesian priors to that.
Note that we also set up a treatment_effect variable, which will directly give us posteriors for the quantity of interest, the difference between the test and the control mean (the treatment effect).
When we run the function below, we will obtain a samples from the posterior distribution of $μ_{t}μ_{t}, μ_{c}, μ_{c}$, and σσ: likely values of the parameters given our data and our prior beliefs.
In this simple example, we can look at the means of the to get our estimates. (Of course, we should in general examine the entire posterior and posterior predictive distribution.)
End of explanation
# Create x, which is uniformly distributed
x = np.random.uniform(size=1000)
# Plot x to double check its distribution
plt.hist(x)
plt.show()
Explanation: So we have a small treatment effect, but some of the results are negative.
* This doesn't make any sense, a treatment effect (i.e. the difference between a treatment mean and control mean (or A and B) shouldn't be negative.
* Observations No model is perfect even a bayesian method.
* We probably have misspecificed one of our priors here.
* An in-depth discussion of this is offered on the Lyst blog which is well worth a read.
* When model evaluating you need to be very careful and think long and hard about what the results mean.
* Model evaluation is a very hard problem, and even after several years of doing data science I personally find this very hard.
* One way to resolve this is to carefully pick your priors.
Exercise
Try without looking at the Lyst blog to specify better priors, to improve the model.
* Playing with the models is a good way to improve your intuition.
* Don't worry too much if this was too much at once, you'll have a better understanding of these models by the end of these notebooks.
Hypothesis testing.
How do you test a hypothesis (with frequentism) in Python?
Examples shamelessly stolen from Chris Albon.
We can consider this like an A/B test.
End of explanation
# Create y, which is NOT uniformly distributed
y = x**4
# Plot y to double check its distribution
plt.hist(y)
plt.show()
# Run kstest on x. We want the second returned value to be
# not statistically significant, meaning that both come from
# the same distribution.
from scipy import stats
stats.kstest(x, 'uniform', args=(min(x),max(x)))
Explanation: Now we want to create another distribution which is not uniformly distributed. We'll then apply a test to this, to tell the difference.
End of explanation
stats.kstest(y, 'uniform', args=(min(x),max(x)))
Explanation: Exercise
Do the Kolmogorov-Smirnov test with a different scipy distribution.
We see that the p-value is greater than 0.5
therefore we can say that this is not statistically significant, meaning both come from the same distribution
Why do we use the Kolmogorov-Smirnov test?
Nonparametric so makes no assumptions about the distributions.
There are other tests which we could use.
End of explanation
stats.ttest_ind(x, y, equal_var = False)
Explanation: We see that the p-value is less than 0.5, therefore we can say that this is statistically significant, meaning that y is not a uniform distribution.
If you do A/B testing professionally, or work in say Pharma - you can spend a lot of time doing examples like this.
T-tests
t-tests - what are these or 'I hated stats at school too so I forgot all of this'
We'll use one of the t-tests from Scipy.
We expect the variances to be different from these two distributions.
End of explanation
Bayesian Estimation Supersedes the T-Test
This model replicates the example used in:
Kruschke, John. (2012) Bayesian estimation supersedes the t test. Journal of Experimental Psychology: General.
The original pymc2 implementation was written by Andrew Straw and can be found here: https://github.com/strawlab/best
Ported to PyMC3 by Thomas Wiecki (c) 2015.
(Slightly altered version for this tutorial by Peadar Coyle (c) 2016)
import numpy as np
import pymc3 as pm
y1 = np.random.uniform(size=1000)
y2 = (np.random.uniform(size=1000)) ** 4
y = np.concatenate((y1, y2))
mu_m = np.mean( y )
mu_p = 0.000001 * 1/np.std(y)**2
sigma_low = np.std(y)/1000
sigma_high = np.std(y)*1000
with pm.Model() as model:
group1_mean = pm.Normal('group1_mean', mu=mu_m, tau=mu_p, testval=y1.mean())
group2_mean = pm.Normal('group2_mean', mu=mu_m, tau=mu_p, testval=y2.mean())
group1_std = pm.Uniform('group1_std', lower=sigma_low, upper=sigma_high, testval=y1.std())
group2_std = pm.Uniform('group2_std', lower=sigma_low, upper=sigma_high, testval=y2.std())
nu = pm.Exponential('nu_minus_one', 1/29.) + 1
lam1 = group1_std**-2
lam2 = group2_std**-2
group1 = pm.StudentT('treatment', nu=nu, mu=group1_mean, lam=lam1, observed=y1)
group2 = pm.StudentT('control', nu=nu, mu=group2_mean, lam=lam2, observed=y2)
diff_of_means = pm.Deterministic('difference of means', group1_mean - group2_mean)
diff_of_stds = pm.Deterministic('difference of stds', group1_std - group2_std)
effect_size = pm.Deterministic('effect size', diff_of_means / pm.sqrt((group1_std**2 + group2_std**2) / 2))
step = pm.NUTS()
trace = pm.sample(5000, step)
pm.traceplot(trace[1000:])
Explanation: Is there a Bayesian way to do this?
Or 'Peadar aren't you famous for being a Bayesian'?
Yes there is, there is a BEST (Bayesian Estimation Superseeds the t-test).
We'll use this.
End of explanation
plot_traces(trace, retain=1000)
Explanation: We see pretty good convergence here, the sampler worked quite well.
Using Jon Sedars plotting functions we can see slightly better plots.
End of explanation
pm.autocorrplot(trace)
Explanation: Model interpretation
Observations One of the advantages of the Bayesian way is that you get more information.
We get more information about the effect size. And credibility intervals built in :)
* Let us plot the autocorrelation plot.
End of explanation
data_log = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data", header=None, names=['age', 'workclass', 'fnlwgt',
'education-categorical', 'educ',
'marital-status', 'occupation',
'relationship', 'race', 'sex',
'captial-gain', 'capital-loss',
'hours', 'native-country',
'income'])
Explanation: Most of these plots look good except the nu minus one one.
The rest decay quite quickly, indicating not much autocorrelation.
We'll move on from this now but we should be aware that this model isn't well specified.
We can adjust the sampling, the sampler, respecify priors.
Logistic regression - frequentist and Bayesian
End of explanation
data_log = data_log[~pd.isnull(data_log['income'])]
data_log[data_log['native-country']==" United-States"]
Explanation: We want to remove the nulls from this data set.
And then we want to filter by only in the United States.
End of explanation
data_log.head()
income = 1 * (data_log['income'] == " >50K")
age2 = np.square(data_log['age'])
data_log = data_log[['age', 'educ', 'hours']]
data_log['age2'] = age2
data_log['income'] = income
income.value_counts()
Explanation: Feature engineering or picking the covariates.
We want to restrict this study to just some of the variables. Our aim will be to predict if someone earns more than 50K or not.
We'll do a bit of exploring of the data first.
End of explanation
import seaborn as seaborn
g = seaborn.pairplot(data_log)
# Compute the correlation matrix
corr = data_log.corr()
# Generate a mask for the upper triangle
mask = np.zeros_like(corr, dtype=np.bool)
mask[np.triu_indices_from(mask)] = True
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(11, 9))
# Generate a custom diverging colormap
cmap = seaborn.diverging_palette(220, 10, as_cmap=True)
# Draw the heatmap with the mask and correct aspect ratio
seaborn.heatmap(corr, mask=mask, cmap=cmap, vmax=.3,
linewidths=.5, cbar_kws={"shrink": .5}, ax=ax)
Explanation: Exploring the data
Let us get a feel for the parameters.
* We see that age is a tailed distribution.
* Certainly not Gaussian! We don't see much of a correlation between many of the features, with the exception of Age and Age2.
* Hours worked has some interesting behaviour. How would one describe this distribution?
End of explanation
data_log
logreg = LogisticRegression(C=1e5)
age2 = np.square(data['age'])
data = data_log[['age', 'educ', 'hours']]
data['age2'] = age2
data['income'] = income
X = data[['age', 'age2', 'educ', 'hours']]
Y = data['income']
logreg.fit(X, Y)
# check the accuracy on the training set
logreg.score(X, Y)
Explanation: A machine learning model
We saw this already in the other notebook :)
End of explanation
train_cols = [col for col in data.columns if col not in ['income']]
logit = sm.Logit(data['income'], data[train_cols])
# fit the model
result = logit.fit()
Explanation: A frequentist model
Let's look at a simple frequentist model
End of explanation
train_cols
result.summary()
Explanation: In statsmodels the thing we are trying to predict comes fist.
This confused me when writing this :)
End of explanation
with pm.Model() as logistic_model:
pm.glm.glm('income ~ age + age2 + educ + hours', data, family=pm.glm.families.Binomial())
trace_logistic_model = pm.sample(2000, pm.NUTS(), progressbar=True)
plot_traces(trace_logistic_model, retain=1000)
pm.autocorrplot(trace_logistic_model)
Explanation: Observations
In this case the McFaddens Pseudo R Squared (there are other variants) is slightly positive but not strongly positive.
One rule of thumb is that if it is between 0.2 and 0.4 this is a good model fit.
This is a bit below this, but still small, so we can interpret this as a 'not so bad' pseudo R squared value.
Let us make a few remarks about Pseudo $R^2$.
Let us recall that a non-pseudo R-squared is a statistic generated in ordinary least squares (OLS) regression in OLS
$$ R^2 = 1 - \frac{\sum_{i=1}^{N}(y_i - \hat{y}i)^2 }{\sum{i=1}^{N}(y_i - \bar{y}_{i})^2}$$
where N is the number of observations in the model, y is the dependent variable, y-bar is the mean of the y-values, and y-hat is the value produced by the model.
There are several approaches to thinking about the pseudo-r squared for dealing with categorical variables etc.
1) $R^2$ as explained variability
2) $R^2$ as improvements from null model to fitted model.
3) $R^2$ as the square of the correlation.
McFaddens Pseudo-R-squared (there are others) is the one used in Statsmodels.
$$R^2 = 1 - \frac{\ln \hat{L} (M_{full})}{\ln \hat{L} (M_{intercept})}$$
Where $\hat{L}$ is estimated likelihood.
and $M_{full}$ is model with predictors and $M_{intercept}$ is model without predictors.
The ratio of the likelihoods suggests the level of improvement over the intercept model offered by the full model.
A likelihood falls between 0 and 1, so the log of a likelihood is less than or equal to zero. If a model has a very low likelihood, then the log of the likelihood will have a larger magnitude than the log of a more likely model. Thus, a small ratio of log likelihoods indicates that the full model is a far better fit than the intercept model.
If comparing two models on the same data, McFadden's would be higher for the model with the greater likelihood.
We can write up the following Bayesian model
End of explanation
plt.figure(figsize=(9,7))
trace = trace_logistic_model[1000:]
seaborn.jointplot(trace['age'], trace['educ'], kind="hex", color="#4CB391")
plt.xlabel("beta_age")
plt.ylabel("beta_educ")
plt.show()
Explanation: Some results
One of the major benefits that makes Bayesian data analysis worth the extra computational effort in many circumstances is that we can be explicit about our uncertainty. Maximum likelihood returns a number, but how certain can we be that we found the right number? Instead, Bayesian inference returns a distribution over parameter values.
I'll use seaborn to look at the distribution of some of these factors.
End of explanation
# Linear model with hours == 50 and educ == 12
lm = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] +
samples['age']*x +
samples['age2']*np.square(x) +
samples['educ']*12 +
samples['hours']*50)))
# Linear model with hours == 50 and educ == 16
lm2 = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] +
samples['age']*x +
samples['age2']*np.square(x) +
samples['educ']*16 +
samples['hours']*50)))
# Linear model with hours == 50 and educ == 19
lm3 = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] +
samples['age']*x +
samples['age2']*np.square(x) +
samples['educ']*19 +
samples['hours']*50)))
Explanation: So how do age and education affect the probability of making more than $$50K?$ To answer this question, we can show how the probability of making more than $50K changes with age for a few different education levels. Here, we assume that the number of hours worked per week is fixed at 50. PyMC3 gives us a convenient way to plot the posterior predictive distribution. We need to give the function a linear model and a set of points to evaluate. We will pass in three different linear models: one with educ == 12 (finished high school), one with educ == 16 (finished undergrad) and one with educ == 19 (three years of grad school).
End of explanation
# Plot the posterior predictive distributions of P(income > $50K) vs. age
pm.glm.plot_posterior_predictive(trace, eval=np.linspace(25, 75, 1000), lm=lm, samples=100, color="blue", alpha=.15)
pm.glm.plot_posterior_predictive(trace, eval=np.linspace(25, 75, 1000), lm=lm2, samples=100, color="green", alpha=.15)
pm.glm.plot_posterior_predictive(trace, eval=np.linspace(25, 75, 1000), lm=lm3, samples=100, color="red", alpha=.15)
import matplotlib.lines as mlines
blue_line = mlines.Line2D(['lm'], [], color='b', label='High School Education')
green_line = mlines.Line2D(['lm2'], [], color='g', label='Bachelors')
red_line = mlines.Line2D(['lm3'], [], color='r', label='Grad School')
plt.legend(handles=[blue_line, green_line, red_line], loc='lower right')
plt.ylabel("P(Income > $50K)")
plt.xlabel("Age")
plt.show()
b = trace['educ']
plt.hist(np.exp(b), bins=20, normed=True)
plt.xlabel("Odds Ratio")
plt.show()
Explanation: Each curve shows how the probability of earning more than 50K50K changes with age. The red curve represents 19 years of education, the green curve represents 16 years of education and the blue curve represents 12 years of education. For all three education levels, the probability of making more than $50K increases with age until approximately age 60, when the probability begins to drop off. Notice that each curve is a little blurry. This is because we are actually plotting 100 different curves for each level of education. Each curve is a draw from our posterior distribution. Because the curves are somewhat translucent, we can interpret dark, narrow portions of a curve as places where we have low uncertainty and light, spread out portions of the curve as places where we have somewhat higher uncertainty about our coefficient values.
End of explanation
lb, ub = np.percentile(b, 2.5), np.percentile(b, 97.5)
print("P(%.3f < O.R. < %.3f) = 0.95"%(np.exp(lb),np.exp(ub)))
Explanation: Finally, we can find a credible interval (remember kids - credible intervals are Bayesian and confidence intervals are frequentist) for this quantity. This may be the best part about Bayesian statistics: we get to interpret credibility intervals the way we've always wanted to interpret them. We are 95% confident that the odds ratio lies within our interval!
End of explanation
models_lin, traces_lin = run_models(data, 4)
dfdic = pd.DataFrame(index=['k1','k2','k3','k4'], columns=['lin'])
dfdic.index.name = 'model'
for nm in dfdic.index:
dfdic.loc[nm, 'lin'] = pm.stats.dic(traces_lin[nm],models_lin[nm])
dfdic = pd.melt(dfdic.reset_index(), id_vars=['model'], var_name='poly', value_name='dic')
g = seaborn.factorplot(x='model', y='dic', col='poly', hue='poly', data=dfdic, kind='bar', size=6)
Explanation: Model selection
The Deviance Information Criterion (DIC) is a fairly unsophisticated method for comparing the deviance of likelhood across the the sample traces of a model run. However, this simplicity apparently yields quite good results in a variety of cases. We'll run the model with a few changes to see what effect higher order terms have on this model.
One question that was immediately asked was what effect does age have on the model, and why should it be age^2 versus age? We'll use the DIC to answer this question.
End of explanation |
4,915 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
内容索引
通用函数
创建通用函数 --- frompyfunc工厂函数
通用函数的方法 --- reduce函数、accumulate函数、reduceat函数、outer函数
数组的除法运算 --- divide函数、true_divide函数、floor_divide函数
数组的模运算 --- mod函数、remainder函数、fmod函数
位操作函数和比较函数 ---
Step1: 1. 通用函数
通用函数的输入时一组标量、输出也是一组标量,他们通常可以对应于基本数学运算。如加减乘除。
通用函数的文档
通用函数是对普通的python函数进行矢量化,它是对ndarray对象的逐个元素的操作。
1.1 创建通用函数
使用NumPy中的frompyfunc函数,通过一个Python函数来创建通用函数。
Step2: 使用zeros_like函数创建一个和a形状相同的、元素全部为0的数组result。flat属性提供了一个扁平迭代器,可以逐个设置数组元素的值。
使用frompyfunc创建通用函数,制定输入参数为1,输出参数为1。
Step3: 使用在第五节介绍的vectorize函数
Step4: 1.2 通用函数的方法
通用函数并不是真正的函数,而是能够表示函数的numpy.ufunc的对象。frompyfunc是一个构造ufunc类对象的工厂函数。
通用函数类有4个方法:reduce、accumulate、reduceat、outer。这些方法只对输入两个参数、输出一个参数的ufunc对象有效。
1.3 在add函数上分别调用4个方法
(1) reduce方法:沿着指定的轴,在连续的数组元素之间递归调用通用函数,即可得到输入数组的规约(reduce)计算结果。对于add函数,其对数组的reduce计算结果等价于对数组元素求和。
Step5: (2) accumulate方法:可以递归作用于输入数组,与reduce不同的是,它将存储运算的中间结果并返回。add函数上调用accumulate方法,等价于直接调用cumsum函数。
Step6: (3) reduceat方法有点复杂,它需要输入一个数组以及一个索引值列表作为参数
Step7: 第一步:用到索引值列表中的0和5,实际上就是对数组中索引值在0到5之间的元素进行reduce操作
第二步:用到索引值5和2.由于2比5小,所以直接返回索引值为5的元素
第三步:用到索引值2和7,计算2到7的数组的reduce操作
第四步:用到索引值7,对索引值7开始直到数组末尾的元素进行reduce操作
Step8: (4) outer方法:返回一个数组,它的秩(rank)等于两个输入数组的秩的和。它会作用于两个输入数组之间存在的所有元素对。
Step9: 2. 数组的除法运算
在NumPy中,计算算术运算符+、-、* 隐式关联着通用函数add、subtrack和multiply。也就是说,当你对NumPy数组使用这些运算符时,对应的通用函数将自动被调用。
除法包含的过程比较复杂,在数组的除法运算中射击三个通用函数divide、true_divide和floor_division,以及两个对应的运算符/和//。
(1) divide函数在整数除法中均只保留整数部分
Step10: 运算结果的小数部分被截断了
Step11: divide函数如果有一方是浮点数,那么结果也是浮点数结果
(2) true_divide函数与数学中的除法定义更为接近,返回除法的浮点数结果不截断
Step12: (3) floor_divide函数总是返回整数结果,相当于先调用divide函数再调用floor函数。
floor函数对浮点数进行向下取整并返回整数。
Step13: 默认情况下,使用/运算符相当于调用divide函数,使用//运算符对应于floor_divide函数
3. 数组的模运算
计算模数或者余数,可以使用NumPy中的mod、remainder和fmod函数。也可以用%运算符。
(1) remainder函数逐个返回两个数组中元素相除后的余数,如果第二个数字为0,则直接返回0
Step14: mod函数与remainder函数的功能完全一致,%操作符仅仅是remainder函数的简写
(2) fmod函数处理负数的方式和remainder不同。所得余数的正负由被除数决定,与除数的正负无关
Step15: 4. 位操作函数和比较函数
位操作函数可以在整数或整数数组的位上进行操作,它们都是通用函数。
位操作符:^、&、|、<<、>>等。
比较操作符:<、>、==等。
4.1 检查两个整数的符号是否一致
这里要用到XOR或者^操作符。XOR操作符又称为不等运算符,因此当两个操作数的符号不一致时,XOR操作的结果为负数。
在NumPy中,^操作符对应于bitwise_xor函数,<操作符对应于less函数。
Step16: 除了等于0的情况,所有整数对的符号都不一样。
4.2 检查一个数是否为2的幂数
在二进制数中,2的幂数表示为一个1后面跟着一串0的形式。如果在2的幂数以及比它小1的数之间进行位与操作AND,那么应该等于0。
在NumPy中,&操作符对应于bitwise_and函数,==操作符对应于equal函数。
Step17: 4.3 计算一个数被2的幂数整除后的余数
计算余数的技巧只在模为2的幂数时有效。二进制的位左移一位,数值翻倍。
上一个例子看到,将2的幂数减去1,得到一串1组成的二进制数,这为我们提供了掩码,与这样的掩码做位与操作,即可得到以2的幂数作为模的余数。
在NumPy中,<<操作符对应于left_shift函数。 | Python Code:
import numpy as np
Explanation: 内容索引
通用函数
创建通用函数 --- frompyfunc工厂函数
通用函数的方法 --- reduce函数、accumulate函数、reduceat函数、outer函数
数组的除法运算 --- divide函数、true_divide函数、floor_divide函数
数组的模运算 --- mod函数、remainder函数、fmod函数
位操作函数和比较函数 ---
End of explanation
# 定义一个Python函数
def pyFunc(a):
result = np.zeros_like(a)
# 这里可以看出来对逐个元素的操作
result = 42
return result
Explanation: 1. 通用函数
通用函数的输入时一组标量、输出也是一组标量,他们通常可以对应于基本数学运算。如加减乘除。
通用函数的文档
通用函数是对普通的python函数进行矢量化,它是对ndarray对象的逐个元素的操作。
1.1 创建通用函数
使用NumPy中的frompyfunc函数,通过一个Python函数来创建通用函数。
End of explanation
ufunc1 = np.frompyfunc(pyFunc, 1, 1)
ret = ufunc1(np.arange(4))
print "The answer:\n", ret
ret = ufunc1(np.arange(4).reshape(2,2))
print "The answer:\n", ret
Explanation: 使用zeros_like函数创建一个和a形状相同的、元素全部为0的数组result。flat属性提供了一个扁平迭代器,可以逐个设置数组元素的值。
使用frompyfunc创建通用函数,制定输入参数为1,输出参数为1。
End of explanation
func2 = np.vectorize(pyFunc)
ret = func2(np.arange(4))
print "The answer:\n", ret
Explanation: 使用在第五节介绍的vectorize函数
End of explanation
a = np.arange(9)
print "a:\n", a
print "Reduce:\n", np.add.reduce(a)
Explanation: 1.2 通用函数的方法
通用函数并不是真正的函数,而是能够表示函数的numpy.ufunc的对象。frompyfunc是一个构造ufunc类对象的工厂函数。
通用函数类有4个方法:reduce、accumulate、reduceat、outer。这些方法只对输入两个参数、输出一个参数的ufunc对象有效。
1.3 在add函数上分别调用4个方法
(1) reduce方法:沿着指定的轴,在连续的数组元素之间递归调用通用函数,即可得到输入数组的规约(reduce)计算结果。对于add函数,其对数组的reduce计算结果等价于对数组元素求和。
End of explanation
print "Accumulate:\n", np.add.accumulate(a)
print "cumsum:\n", np.cumsum(a)
Explanation: (2) accumulate方法:可以递归作用于输入数组,与reduce不同的是,它将存储运算的中间结果并返回。add函数上调用accumulate方法,等价于直接调用cumsum函数。
End of explanation
print "Reduceat:\n", np.add.reduceat(a, [0,5,2,7])
Explanation: (3) reduceat方法有点复杂,它需要输入一个数组以及一个索引值列表作为参数
End of explanation
print "Reduceat step 1:", np.add.reduce(a[0:5])
print "Reduceat step 2:", a[5]
print "Reduceat step 3:", np.add.reduce(a[2:7])
print "Reduceat step 4:", np.add.reduce(a[7:])
Explanation: 第一步:用到索引值列表中的0和5,实际上就是对数组中索引值在0到5之间的元素进行reduce操作
第二步:用到索引值5和2.由于2比5小,所以直接返回索引值为5的元素
第三步:用到索引值2和7,计算2到7的数组的reduce操作
第四步:用到索引值7,对索引值7开始直到数组末尾的元素进行reduce操作
End of explanation
print "Outer:\n", np.add.outer(np.arange(3), a)
Explanation: (4) outer方法:返回一个数组,它的秩(rank)等于两个输入数组的秩的和。它会作用于两个输入数组之间存在的所有元素对。
End of explanation
a = np.array([2, 6, 5])
b = np.array([1, 2, 3])
print "Divide:\n", np.divide(a, b), np.divide(b, a)
Explanation: 2. 数组的除法运算
在NumPy中,计算算术运算符+、-、* 隐式关联着通用函数add、subtrack和multiply。也就是说,当你对NumPy数组使用这些运算符时,对应的通用函数将自动被调用。
除法包含的过程比较复杂,在数组的除法运算中射击三个通用函数divide、true_divide和floor_division,以及两个对应的运算符/和//。
(1) divide函数在整数除法中均只保留整数部分
End of explanation
c = np.array([2.1, 6.2, 5.0])
d = np.array([1, 2, 1.9])
print "Divide:\n", np.divide(c, d), np.divide(d, c)
Explanation: 运算结果的小数部分被截断了
End of explanation
print "True Divide:\n", np.true_divide(a, b), np.true_divide(b, a)
Explanation: divide函数如果有一方是浮点数,那么结果也是浮点数结果
(2) true_divide函数与数学中的除法定义更为接近,返回除法的浮点数结果不截断
End of explanation
print "Floor Divide:\n", np.floor_divide(a, b), np.floor_divide(b, a)
Explanation: (3) floor_divide函数总是返回整数结果,相当于先调用divide函数再调用floor函数。
floor函数对浮点数进行向下取整并返回整数。
End of explanation
a = np.arange(-4,4)
print "a:\n", a
print "Remainder:\n", np.remainder(a, 2)
Explanation: 默认情况下,使用/运算符相当于调用divide函数,使用//运算符对应于floor_divide函数
3. 数组的模运算
计算模数或者余数,可以使用NumPy中的mod、remainder和fmod函数。也可以用%运算符。
(1) remainder函数逐个返回两个数组中元素相除后的余数,如果第二个数字为0,则直接返回0
End of explanation
print "Fmod:\n", np.fmod(a, 2)
print np.fmod(a, -2)
Explanation: mod函数与remainder函数的功能完全一致,%操作符仅仅是remainder函数的简写
(2) fmod函数处理负数的方式和remainder不同。所得余数的正负由被除数决定,与除数的正负无关
End of explanation
x = np.arange(-9, 9)
y = -x
print "Sign different? ", (x^y) < 0
print "Sign different? ", np.less(np.bitwise_xor(x, y), 0)
Explanation: 4. 位操作函数和比较函数
位操作函数可以在整数或整数数组的位上进行操作,它们都是通用函数。
位操作符:^、&、|、<<、>>等。
比较操作符:<、>、==等。
4.1 检查两个整数的符号是否一致
这里要用到XOR或者^操作符。XOR操作符又称为不等运算符,因此当两个操作数的符号不一致时,XOR操作的结果为负数。
在NumPy中,^操作符对应于bitwise_xor函数,<操作符对应于less函数。
End of explanation
b = np.arange(20)
print b
print "Power of 2 ?\n", (b & (b-1)) == 0
print "Power of 2 ?\n", np.equal(np.bitwise_and(b, (b-1)), 0)
Explanation: 除了等于0的情况,所有整数对的符号都不一样。
4.2 检查一个数是否为2的幂数
在二进制数中,2的幂数表示为一个1后面跟着一串0的形式。如果在2的幂数以及比它小1的数之间进行位与操作AND,那么应该等于0。
在NumPy中,&操作符对应于bitwise_and函数,==操作符对应于equal函数。
End of explanation
print "Modulus 4:\n", x & ((1<<2) - 1)
def mod_2_pow(x, n):
mod = x & ((1<<n) - 1)
return mod
mod_2_pow(x,2)
mod_2_pow(x, 3)
Explanation: 4.3 计算一个数被2的幂数整除后的余数
计算余数的技巧只在模为2的幂数时有效。二进制的位左移一位,数值翻倍。
上一个例子看到,将2的幂数减去1,得到一串1组成的二进制数,这为我们提供了掩码,与这样的掩码做位与操作,即可得到以2的幂数作为模的余数。
在NumPy中,<<操作符对应于left_shift函数。
End of explanation |
4,916 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 1 - Data Structures and Algorithms
1.1 Unpacking a Sequence into Separate Variables
Step4: 1.2 Unpacking Elements from Iterables of Arbitrary Length
Step6: Discussion
Step8: 1.3 Keeping the Last N Items (in list queue with deque)
Step9: Generator functions (with yield) are common when searching for items. This decouples the process of searching from the code that uses results
Step10: 1.4 Finding the Largest or Smallest N Items
Problem
Step11: Discussion
When looking for N smallest/largest numbers, heapq provides superior performance. heap[0] is always the smallest number. Structures are converted into a list where items are ordered as a heap (underneath).
1.5 Implementing a Priority Queue
Problem
Step12: Discussion
Step13: 1.7 Keepping Dictionaries in Order
Problem
Step14: Discussion
Step15: Discussion
Step16: 1.9 Finding Commonalities in Two Dictionaries
Problem
Step17: Discussion
Step18: Discussion | Python Code:
p = (4, 5, 6, 7)
x, y, z, w = p # x -> 4
data = ['ACME', 50, 91.1, (2012, 12, 21)]
name, _, price, date = data # name -> 'ACME', data -> (2012, 12, 21)
s = 'Hello'
a, b, c, d, e = s # a -> H
p = (4, 5)
x, y, z = p # "ValueError"
Explanation: Chapter 1 - Data Structures and Algorithms
1.1 Unpacking a Sequence into Separate Variables:
Problem:
Unpacking tuple/sequence into a collection of variables
Solution:
Any sequence/iterable can be unpacked into variables using an assignment operation. The number of variables and structure must match the number of sequence items:
End of explanation
def drop_first_last(grades):
Drop first and last exams, then average the rest.
first, *middle, last = grades
return avg(middle)
def arbitrary_numbers():
Name and email followed by phone number(s).
record = ('Dave', 'dave@example.com', '555-555-5555', '555-555-5544')
name, email, *phone_numbers = record # phone_number always a list
return phone_numbers
def recent_to_first_n():
Most recent quarter compares to the average of the first n.
sales_records = ('23.444', '234.23', '0', 23.12, '15.56')
*trailing_qtrs, current_qtr = sales_record
trailing_avg = sum(trailing_qtrs) / len(trailing_qtrs)
return avg_comparison(trailing_avg, current_qtr)
Explanation: 1.2 Unpacking Elements from Iterables of Arbitrary Length:
Problem:
Unpacking unknown number of elements in tuple/sequence/iterables into variables
Solution:
Use "star expressions" for handling multiples:
End of explanation
####### 1 ##############
records = [ ('foo', 1, 2), ('bar', 'hello'), ('foo', 3, 4) ]
def do_foo(x, y):
print('foo', x, y)
def do_bar(s):
print('bar', s)
for tag, *args in records:
if tag == 'foo':
do_foo(*args)
elif tag == 'bar':
do_bar(*args)
#########################
######## 2 ##############
line = 'nobody:*:-2:-2:Unprivileged User:/var/empty:/usr/bin/false'
uname, *fields, homedir, sh = line.split(':') # uname -> nobody
#########################
######### 3 #############
record = ('ACME', 50, 123, 45, (12, 18, 2012))
name, *_, (*_, year) = record # name and year
#########################
######### 4 #############
def sum(items):
Recursions are not recommended w/ Python.
head, *tail = items
return head + sum(*tail) if tail else head
#########################
Explanation: Discussion:
This is often implemented with iterables of unknown(arbitrary) length, and known pattern: "everything after element 1 is a number".
Handy when iterating over a sequence of tuples of varying length or of tagged tuples.
Handy when unpacking with string processing operations
Handy when unpacking and throwing away some variables
Handly when spliting a list into head and tail components, which could be used to implement recursive solutions.
End of explanation
from collections import deque
def search(lines, pattern, history=5):
Returns a line that matches the pattern and 5 previous lines
previous_lines = deque(maxlen=history) # a generator of a list with max length
for line in lines:
if pattern in line:
yield line, previous_lines
previous_lines.append(line)
# Example use on a file
if __name__ == '__main__':
with open('somefile.txt') as f:
for line, prevlines in search(f, 'python', 5):
for pline in prevlines:
print(pline, end='')
print(line, end='')
print('-' * 20)
Explanation: 1.3 Keeping the Last N Items (in list queue with deque):
Problem:
Keep a limited history of the last few items seen during iteration or processing.
Solution:
Use collections.deque: perform a simple text search on a sequence of lines and yield matching lines with previous N lines of conext when found:
End of explanation
######## 1, 2, 3 ########
q = deque(maxlen=3)
q.append(1)
q.appendleft(4)
q.pop() # 1
q.popleft() # 4
#########################
Explanation: Generator functions (with yield) are common when searching for items. This decouples the process of searching from the code that uses results:
deque(maxlen=5) uses fixed-size queue; although we could append/delete items from a list, this is more elegant/faster
Handly when a simple queue structure is needed; without maxlen, use pop/append
Popping/appending/popleft/appendleft has O(1) vs O(N) complexity
End of explanation
import heapq
nums = [1, 8, 2, 23, 7, -4, 18, 23, 42, 37, 2]
print(heapq.nlargest(3, nums)) # [42, 37 ,23]
print(heapq.nsmallest(3, nums)) # [-4, 1, 2]
heap.heappop(nums) # -4
# use key parameter to use with complicated data structures
portfolio = [
{'name': 'IBM', 'shares': 100, 'price': 91.1},
{'name': 'AAPL', 'shares': 50, 'price': 543.22},
{'name': 'FB', 'shares': 200, 'price': 21.09},
{'name': 'HPQ', 'shares': 35, 'price': 31.75},
{'name': 'YHOO', 'shares': 45, 'price': 16.35},
{'name': 'ACME', 'shares': 75, 'price': 115.65}
]
cheap = heapq.nsmallest(3, portfolio, key=lambda s: s['price'])
expensive = heapq.nlargest(3, portfolio, key=lambda s: s['price'])
# if N is close to the size of the items:
sorted(nums)[:N] # a better approach
Explanation: 1.4 Finding the Largest or Smallest N Items
Problem:
Make a list of the largest or smallest N items in a collection.
Solution:
The heapq module has nlargest() and nsmallest()
End of explanation
import heapq
class PriorityQueue:
def __init__(self):
self._queue = []
self._index = 0
def __repr__(self):
return 'PriorityQueue({}) with index({})'.format(self._queue, self._index)
def push(self, item, priority):
heapq.heappush(self._queue, (-priority, self._index, item)) # heappush(list, ())
self._index += 1
def pop(self):
return heapq.heappop(self._queue)[-1] # self_queue includes [(priority, index, item)]
class Item:
def __init__(self, name):
self.name = name
def __repr__(self):
return 'Item({!r})'.format(self.name)
q = PriorityQueue()
print(q)
q.push(Item('foo'), 1)
print(q)
q.push(Item('bar'), 5)
print(q)
q.push(Item('spam'), 4)
print(q)
q.push(Item('grok'), 1)
print(q)
q.pop() # -> Item('bar')
print(q)
q.pop() # -> Item('spam')
print(q)
q.pop() # -> Item('foo')
print(q)
q.pop() # -> Item('grok')
print(q)
# foo and grok were popped in the same order in which they were inserted
Explanation: Discussion
When looking for N smallest/largest numbers, heapq provides superior performance. heap[0] is always the smallest number. Structures are converted into a list where items are ordered as a heap (underneath).
1.5 Implementing a Priority Queue
Problem:
Implement a queue that sorts items by a given priority and always returns the item with the highest priority on each pop operations.
Solution:
Use heapq to implement a simple priority queue
End of explanation
from collections import defaultdict
d = defaultdict(list) # multiple values will be added to a list
d['a'].append(1)
d['a'].append(2)
d['b'].append(4)
d = defaultdict(set) # multiple values will be added to a set
d['a'].add(1)
d['b'].add(2)
d['a'].add(5)
# Messier setdefault
d = {}
d.setdefault('a', []).append(1)
d.setdefault('a', []).append(2) # will add to the existing list
# Even messier
d = {}
for key, value in paiers:
if key not in d:
d[key] = []
d[key].append(value)
# Best!
d = defaultdict(list)
for key, value in pairs:
d[key].append(value)
Explanation: Discussion:
This recipe focuses on the use of heapq module. Functions heapq.heappush() and heapq.heappop() insert and remove items from a list _queue so that the first item in the list has the highest priority.
heappop() and heappush() have O(log N) complexity
a queue consists of tuples (-priority, index, item); priority is negated so that to add items with the highest priority to the beginning of the _queue
index value is used to properly order items with the same priority; index also works for comparison operations:
By introducing the extra index and making (priority, index, item) tuples, you avoid this problem entirely since no two tuples will ever have the same value for index (and Python never bothers to compare the remaining tuple values once the result of com‐ parison can be determined)::
a = (1, 0, Item('foo'))
b = (5, 1, Item('bar'))
c = (1, 2, Item('grok'))
a < b # True
a < c # True
we can use this queue for communication between threads, but we will need to add appropriate locking and signaling (look ahead)
1.6 Mapping Keys to Multiple Values in a Dictionary
Problem:
Make a dictionary that maps keys to more than one value (multidict)
Solution:
A dictionary is a mapping where each key is mapped to a single value. When mapping keys to multiple values, we need to store multiple values in a different container: list or set.
Use lists to preserve the insertion order of the items
Use sets to eliminate duplicates (when we don't care about the order)
Use defaultdict in the collections to construct such structure:
defaultdict automatically initializes the first value of the key
defaultdict automatically adds default values later on when accessing dictionary
if we don't want the above behavior, use setdefault (it is messier however)
End of explanation
from collections import OrderedDict
d = OrderedDict()
d['foo'] = 1
d['bar'] = 2
d['spam'] = 3
d['grok'] = 4
for key in d:
print(key, d[key]) # -> 'foo 1', 'bar 2', 'spam 3', 'grok 4'
# Use when serializing JSON
import json
json.dumps(d) # -> '{"foo": 1, "bar": 2, "spam": 3, "grok": 4}'
Explanation: 1.7 Keepping Dictionaries in Order
Problem:
Control the order of items in a dictionary when iterating or serializing
Solution:
Use OrderedDict from the collections to control dictionary order. It is particularly useful when building a mapping that later will be serialized or encoded into a different format. For example, when controlling the order of fields appearing in a JSON encoding, first build the data in OrderedDict and then json dump.
End of explanation
prices = {
'ACME': 45.23,
'AAPL': 612.78,
'IBM': 205.55,
'HPQ': 37.20,
'FB': 10.75
}
# to get calculated values first reverse and zip
min_price = min(zip(prices.values(), prices.keys())) # (10.75, 'FB')
max_price = max(zip(prices.values(), prices.keys())) # (612.78, 'AAPL')
# to rank the data use zip with sorted
prices_sorted = sorted(zip(prices.values(), prices.keys())) # [(10.75, 'FB'), (37.2, 'HPQ')...]
# the iterator can be consumed only once
prices_and_names = zip(prices.values(), prices.keys())
print(min(prices_and_names)) # result OK
print(max(prices_and_names)) # ValueError: max() arg is an empty sequence
Explanation: Discussion:
An OrderedDict is an expensive procedure - beware for items exceeding 100000 lines:
internally maintains a doubly linked list that orders the keys according to insertion order; when a new item is first inserted, it is placed at the end of this list; subsequent reassignment of an existing key doesn't change the order.
be aware that the size of OrderedDict is more than twice as large as a normal dictionary due to the extra linked list that's created.
if building a data structure involving a large number of OrderedDict instances (> 100000 lines of CSV file into a list of OrderedDict instances) be careful!
1.8 Calculating with Dictionaries
Problem:
Performing various calculations (min, max, sort) on a dictionary
Solution:
Reverse keys and values, then perform a calculation function on the zip result.
Important:
1. max/min/sort is performed on the keys
2. if the keys are the same, max/min/sort is then based on the values
3. zip creates an iterator, which can only be consumed once
End of explanation
#### 1 #############
min(prices) # 'AAPL'
max(prices) # 'IBM'
#### 2 ############
min(prices.values()) # 10.75
max(prices.values()) # 612.78
#### 3 ############
min(prices, key=lambda k: prices[k]) # 'FB'
max(prices, key=lambda k: prices[k]) # 'AAPL' -> perfrom calculation on values and return key
# to get the value as well as the key, additionally:
min_key = min(prices, key=lambda k: prices[k])
min_value = prices[min(prices, key=lambda k: prices[k])]
#### 4, 5 #########
prices = { 'AAA' : 45.23, 'ZZZ': 45.23 }
min(zip(prices.values(), prices.keys())) # (45.23, 'AAA')
max(zip(prices.values(), prices.keys())) # (45.23, 'ZZZ')
Explanation: Discussion:
Common reductions on a dicitionary process the keys and not the values
This is not (probably) what you want, as usually calcualtions are performed on values
In addition to a value result, we often need to know the corresponding key
That is why the zip solution works really well and not too clunky
As noted before, if the values in (values, keys) are the same, the keys will be used
For clear example on lambda functions and key attributes go to:
https://wiki.python.org/moin/HowTo/Sorting
End of explanation
a={
'x' : 1,
'y' : 2,
'z' : 3
}
b={
'w' : 10,
'x' : 11,
'y' : 2
}
# find keys in common
a.keys() & b.keys() # {'x', 'y'}
# find keys in a that are not in b
a.keys() - b.keys() # {'z'}
# find (key, value) pairs in common
a.items() & b.items() # {('y', 2)}
# alter/filter dictionary contents - make a new dict with selected keys removed
c = { key: a[key] for key in a.keys() - {'z', 'w'}} # {'x': 1, 'y': 2}
Explanation: 1.9 Finding Commonalities in Two Dictionaries
Problem:
Find out what two different dictionaries have in common (keys, values, etc.)
Solution:
Perfrom common set operations using the keys() or items() methods
End of explanation
###### 1 #########
def dedupe(items):
''' Add a unique item to the seen, and then check agains seen.'''
seen = set()
for item in items:
if item not in seen:
yield item
seen.add(item)
a = [1, 5, 2, 1, 9, 1, 5, 10]
list(dedupe(a)) # [1, 5, 2, 9, 10]
##### 2 ##########
def dedupe(items, key=None): # key is similar to min/max/sorted
''' Purpose of the key argument is to specify a function(lambda)
that converts sequence items into a hashable type for the
purposes of duplicate detection.
'''
seen = set()
for item in items:
val = item if key is None else key(item) # key could be lambda of values, keys, etc.
if val not in seen:
yield item
seen.add(val)
a = [ {'x':1, 'y':2}, {'x':1, 'y':3}, {'x':1, 'y':2}, {'x':2, 'y':4}]
# remove duplicates based on x/y values
list(dedupe(a, key=lambda d: (d['x'], d['y']))) # [{'x': 1, 'y': 2}, {'x': 1, 'y': 3}, {'x': 2, 'y': 4}]
##### 3 #########
# remove duplicates based on x values - for each item in "a" sequence execute the lambda function
list(dedupe(a, key=lambda d: d['x'])) # [{'x': 1, 'y': 2}, {'x': 2, 'y': 4}]
Explanation: Discussion:
The keys() method of a dictionary returns a keys-view object that exposes the keys. Key views support set operations: unions, intersections, and differences.
The items() method of a dictionary returns an items-view object consisting of (key, value) pairs. This object supports similar set operations and can be used to perform operations such as finding out which key-value pairs two dictionaries have in common.
Although similar, the values() method of a dictionary does not support the set oper‐ ations described in this recipe. In part, this is due to the fact that unlike keys, the items contained in a values view aren’t guaranteed to be unique. However, if you must perform such calculations, they can be accomplished by simply converting the values to a set first.
1.10 Removing Duplicates form a Sequence while Maintaining Order
Problem:
Eliminate the duplicate values in a sequence, but preserve the order
Solution:
If the values in the sequence are hashable (preserver order), use a set and a generator.
If a sequence consists of unhashable types (dicts) use the key/lambda combo
The key/lambda combo also works well when eliminating duplicates based on the values of a single field, attribute, or a larger data structure
For an amazing explanation of iterables, iterators, generators and yield:
http://stackoverflow.com/questions/231767/what-does-the-yield-keyword-do-in-python
End of explanation
# let's eliminate duplicate lines from a file using the dedupe(items, key=None) generator
with open('somefile.txt', 'r') as f:
# the generator will spit out a single value (line) at a time,
# while keeping track (a pointer) to where it is located during each yield
for line in dedupe(f):
# process unique lines
pass
Explanation: Discussion:
To eliminate duplicates without preserving an order use a set
The generator functions allows us to be extremely general purpose: not only tied to list processing, but also to file
End of explanation |
4,917 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Loading a single rate
Step1: the original reaclib source
Step2: evaluate the rate at a given temperature (in K)
Step3: a human readable string describing the rate, and the nuclei involved
Step4: get the temperature sensitivity about some reference T
This is the exponent when we write the rate as $r = r_0 \left ( \frac{T}{T_0} \right )^\nu$
Step5: plot the rate's temperature dependence
Step6: the form of the rate with density and composition weighting -- this is what appears in a dY/dt equation
Step7: output a python function that can evaluate the rate (T-dependence part)
Step8: working with a group of rates
Step9: print an overview of the network described by this rate collection
Step10: show a network diagram
Step11: write a function containing the ODEs that evolve this network | Python Code:
r = reaclib.Rate("reaclib-rates/c13-pg-n14-nacr")
Explanation: Loading a single rate
End of explanation
print(r.original_source)
Explanation: the original reaclib source
End of explanation
r.eval(1.e9)
Explanation: evaluate the rate at a given temperature (in K)
End of explanation
print(r)
print(r.reactants)
print(r.products)
Explanation: a human readable string describing the rate, and the nuclei involved
End of explanation
print(r.get_rate_exponent(2.e7))
Explanation: get the temperature sensitivity about some reference T
This is the exponent when we write the rate as $r = r_0 \left ( \frac{T}{T_0} \right )^\nu$
End of explanation
r.plot()
Explanation: plot the rate's temperature dependence
End of explanation
print(r.ydot_string())
Explanation: the form of the rate with density and composition weighting -- this is what appears in a dY/dt equation
End of explanation
print(r.function_string())
Explanation: output a python function that can evaluate the rate (T-dependence part)
End of explanation
files = ["c12-pg-n13-ls09",
"c13-pg-n14-nacr",
"n13--c13-wc12",
"n13-pg-o14-lg06",
"n14-pg-o15-im05",
"n15-pa-c12-nacr",
"o14--n14-wc12",
"o15--n15-wc12"]
rc = reaclib.RateCollection(files)
Explanation: working with a group of rates
End of explanation
print(rc)
rc.print_network_overview()
Explanation: print an overview of the network described by this rate collection
End of explanation
rc.plot()
Explanation: show a network diagram
End of explanation
rc.make_network("test.py")
# %load test.py
import numpy as np
import reaclib
ip = 0
ihe4 = 1
ic12 = 2
ic13 = 3
in13 = 4
in14 = 5
in15 = 6
io14 = 7
io15 = 8
nnuc = 9
A = np.zeros((nnuc), dtype=np.int32)
A[ip] = 1
A[ihe4] = 4
A[ic12] = 12
A[ic13] = 13
A[in13] = 13
A[in14] = 14
A[in15] = 15
A[io14] = 14
A[io15] = 15
def o15_n15(tf):
# o15 --> n15
rate = 0.0
# wc12w
rate += np.exp( -5.17053)
return rate
def n15_pa_c12(tf):
# p + n15 --> he4 + c12
rate = 0.0
# nacrn
rate += np.exp( 27.4764 + -15.253*tf.T913i + 1.59318*tf.T913
+ 2.4479*tf.T9 + -2.19708*tf.T953 + -0.666667*tf.lnT9)
# nacrr
rate += np.exp( -6.57522 + -1.1638*tf.T9i + 22.7105*tf.T913
+ -2.90707*tf.T9 + 0.205754*tf.T953 + -1.5*tf.lnT9)
# nacrr
rate += np.exp( 20.8972 + -7.406*tf.T9i
+ -1.5*tf.lnT9)
# nacrr
rate += np.exp( -4.87347 + -2.02117*tf.T9i + 30.8497*tf.T913
+ -8.50433*tf.T9 + -1.54426*tf.T953 + -1.5*tf.lnT9)
return rate
def c13_pg_n14(tf):
# p + c13 --> n14
rate = 0.0
# nacrn
rate += np.exp( 18.5155 + -13.72*tf.T913i + -0.450018*tf.T913
+ 3.70823*tf.T9 + -1.70545*tf.T953 + -0.666667*tf.lnT9)
# nacrr
rate += np.exp( 13.9637 + -5.78147*tf.T9i + -0.196703*tf.T913
+ 0.142126*tf.T9 + -0.0238912*tf.T953 + -1.5*tf.lnT9)
# nacrr
rate += np.exp( 15.1825 + -13.5543*tf.T9i
+ -1.5*tf.lnT9)
return rate
def c12_pg_n13(tf):
# p + c12 --> n13
rate = 0.0
# ls09n
rate += np.exp( 17.1482 + -13.692*tf.T913i + -0.230881*tf.T913
+ 4.44362*tf.T9 + -3.15898*tf.T953 + -0.666667*tf.lnT9)
# ls09r
rate += np.exp( 17.5428 + -3.77849*tf.T9i + -5.10735*tf.T913i + -2.24111*tf.T913
+ 0.148883*tf.T9 + -1.5*tf.lnT9)
return rate
def n13_pg_o14(tf):
# p + n13 --> o14
rate = 0.0
# lg06n
rate += np.exp( 18.1356 + -15.1676*tf.T913i + 0.0955166*tf.T913
+ 3.0659*tf.T9 + -0.507339*tf.T953 + -0.666667*tf.lnT9)
# lg06r
rate += np.exp( 10.9971 + -6.12602*tf.T9i + 1.57122*tf.T913i
+ -1.5*tf.lnT9)
return rate
def n14_pg_o15(tf):
# p + n14 --> o15
rate = 0.0
# im05n
rate += np.exp( 17.01 + -15.193*tf.T913i + -0.161954*tf.T913
+ -7.52123*tf.T9 + -0.987565*tf.T953 + -0.666667*tf.lnT9)
# im05r
rate += np.exp( 6.73578 + -4.891*tf.T9i
+ 0.0682*tf.lnT9)
# im05r
rate += np.exp( 7.65444 + -2.998*tf.T9i
+ -1.5*tf.lnT9)
# im05n
rate += np.exp( 20.1169 + -15.193*tf.T913i + -4.63975*tf.T913
+ 9.73458*tf.T9 + -9.55051*tf.T953 + 0.333333*tf.lnT9)
return rate
def o14_n14(tf):
# o14 --> n14
rate = 0.0
# wc12w
rate += np.exp( -4.62354)
return rate
def n13_c13(tf):
# n13 --> c13
rate = 0.0
# wc12w
rate += np.exp( -6.7601)
return rate
def rhs(t, Y, rho, T):
tf = reaclib.Tfactors(T)
lambda_o15_n15 = o15_n15(tf)
lambda_n15_pa_c12 = n15_pa_c12(tf)
lambda_c13_pg_n14 = c13_pg_n14(tf)
lambda_c12_pg_n13 = c12_pg_n13(tf)
lambda_n13_pg_o14 = n13_pg_o14(tf)
lambda_n14_pg_o15 = n14_pg_o15(tf)
lambda_o14_n14 = o14_n14(tf)
lambda_n13_c13 = n13_c13(tf)
dYdt = np.zeros((nnuc), dtype=np.float64)
dYdt[ip] = (
-rho*Y[ip]*Y[in15]*lambda_n15_pa_c12
-rho*Y[ic13]*Y[ip]*lambda_c13_pg_n14
-rho*Y[ip]*Y[ic12]*lambda_c12_pg_n13
-rho*Y[ip]*Y[in13]*lambda_n13_pg_o14
-rho*Y[ip]*Y[in14]*lambda_n14_pg_o15
)
dYdt[ihe4] = (
+rho*Y[ip]*Y[in15]*lambda_n15_pa_c12
)
dYdt[ic12] = (
-rho*Y[ip]*Y[ic12]*lambda_c12_pg_n13
+rho*Y[ip]*Y[in15]*lambda_n15_pa_c12
)
dYdt[ic13] = (
-rho*Y[ic13]*Y[ip]*lambda_c13_pg_n14
+Y[in13]*lambda_n13_c13
)
dYdt[in13] = (
-rho*Y[ip]*Y[in13]*lambda_n13_pg_o14
-Y[in13]*lambda_n13_c13
+rho*Y[ip]*Y[ic12]*lambda_c12_pg_n13
)
dYdt[in14] = (
-rho*Y[ip]*Y[in14]*lambda_n14_pg_o15
+rho*Y[ic13]*Y[ip]*lambda_c13_pg_n14
+Y[io14]*lambda_o14_n14
)
dYdt[in15] = (
-rho*Y[ip]*Y[in15]*lambda_n15_pa_c12
+Y[io15]*lambda_o15_n15
)
dYdt[io14] = (
-Y[io14]*lambda_o14_n14
+rho*Y[ip]*Y[in13]*lambda_n13_pg_o14
)
dYdt[io15] = (
-Y[io15]*lambda_o15_n15
+rho*Y[ip]*Y[in14]*lambda_n14_pg_o15
)
return dYdt
Explanation: write a function containing the ODEs that evolve this network
End of explanation |
4,918 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TRAPpy
Step1: Interactive Line Plotting of Data Frames
Interactive Line Plots Supports the same API as the LinePlot but provide an interactive plot that can be zoomed by clicking and dragging on the desired area. Double clicking resets the zoom.
We can create an interactive plot easily from a dataframe by passing the data frame and the columns we want to plot as parameters
Step2: Plotting independent series
It is also possible to plot traces with different index values (i.e. x-axis values)
Step3: This does not affect filtering or pivoting in any way
Step4: Interactive Line Plotting of Traces
We can also create them from trace objects
Step5: You can also change the drawstyle to "steps-post" for step plots. These are suited if the data is discrete
and linear interploation is not required between two data points
Step6: Synchronized zoom in multiple plots
ILinePlots can zoom all at the same time. You can do so using the group and sync_zoom parameter. All ILinePlots using the same group name zoom at the same time.
Step7: EventPlot
TRAPpy's Interactive Plotter features an Interactive Event TimeLine Plot. It accepts an input data of the type
<pre>
<code>
{ "A"
Step8: Lane names can also be specified as strings (or hashable objects that have an str representation) as follows
Step9: TracePlot
A specification of the EventPlot creates a kernelshark like plot if the sched_switch event is enabled in the traces | Python Code:
import sys,os
sys.path.append("..")
import numpy.random
import pandas as pd
import shutil
import tempfile
import trappy
trace_thermal = "./trace.txt"
trace_sched = "../tests/raw_trace.dat"
TEMP_BASE = "/tmp"
def setup_thermal():
tDir = tempfile.mkdtemp(dir="/tmp", prefix="trappy_doc", suffix = ".tempDir")
shutil.copyfile(trace_thermal, os.path.join(tDir, "trace.txt"))
return tDir
def setup_sched():
tDir = tempfile.mkdtemp(dir="/tmp", prefix="trappy_doc", suffix = ".tempDir")
shutil.copyfile(trace_sched, os.path.join(tDir, "trace.dat"))
return tDir
temp_thermal_location = setup_thermal()
trace1 = trappy.FTrace(temp_thermal_location)
trace2 = trappy.FTrace(temp_thermal_location)
trace2.thermal.data_frame["temp"] = trace1.thermal.data_frame["temp"] * 2
trace2.cpu_out_power.data_frame["power"] = trace1.cpu_out_power.data_frame["power"] * 2
Explanation: TRAPpy: Interactive Plotting
Re Run the cells to generate the graphs
End of explanation
columns = ["tick", "tock"]
df = pd.DataFrame(numpy.random.randn(1000, 2), columns=columns).cumsum()
trappy.ILinePlot(df, column=columns).view()
Explanation: Interactive Line Plotting of Data Frames
Interactive Line Plots Supports the same API as the LinePlot but provide an interactive plot that can be zoomed by clicking and dragging on the desired area. Double clicking resets the zoom.
We can create an interactive plot easily from a dataframe by passing the data frame and the columns we want to plot as parameters:
End of explanation
columns = ["tick", "tock", "bang"]
df_len = 1000
df1 = pd.DataFrame(numpy.random.randn(df_len, 3), columns=columns, index=range(df_len)).cumsum()
df2 = pd.DataFrame(numpy.random.randn(df_len, 3), columns=columns, index=(numpy.arange(0.5, df_len, 1))).cumsum()
trappy.ILinePlot([df1, df2], column="tick").view()
Explanation: Plotting independent series
It is also possible to plot traces with different index values (i.e. x-axis values)
End of explanation
df1["bang"] = df1["bang"].apply(lambda x: numpy.random.randint(0, 4))
df2["bang"] = df2["bang"].apply(lambda x: numpy.random.randint(0, 4))
trappy.ILinePlot([df1, df2], column="tick", filters = {'bang' : [2]}, title="tick column values for which bang is 2").view()
trappy.ILinePlot([df1, df2], column="tick", pivot="bang", title="tick column pivoted on bang column").view()
Explanation: This does not affect filtering or pivoting in any way
End of explanation
map_label = {
"00000000,00000006" : "A57",
"00000000,00000039" : "A53",
}
l = trappy.ILinePlot(
trace1, # TRAPpy FTrace Object
trappy.cpu_power.CpuInPower, # TRAPpy Event (maps to a unique word in the Trace)
column=[ # Column(s)
"dynamic_power",
"load1"],
filters={ # Filter the data
"cdev_state": [
1,
0]},
pivot="cpus", # One plot for each pivot will be created
map_label=map_label, # Optionally, provide an alternative label for pivots
per_line=1) # Number of graphs per line
l.view()
Explanation: Interactive Line Plotting of Traces
We can also create them from trace objects
End of explanation
l = trappy.ILinePlot(
trace1, # TRAPpy FTrace Object
trappy.cpu_power.CpuInPower, # TRAPpy Event (maps to a unique word in the Trace)
column=[ # Column(s)
"dynamic_power",
"load1"],
filters={ # Filter the data
"cdev_state": [
1,
0]},
pivot="cpus", # One plot for each pivot will be created
per_line=1, # Number of graphs per line
drawstyle="steps-post")
l.view()
Explanation: You can also change the drawstyle to "steps-post" for step plots. These are suited if the data is discrete
and linear interploation is not required between two data points
End of explanation
trappy.ILinePlot(
trace1,
signals=["cpu_in_power:dynamic_power", "cpu_in_power:load1"],
pivot="cpus",
group="synchronized",
sync_zoom=True
).view()
Explanation: Synchronized zoom in multiple plots
ILinePlots can zoom all at the same time. You can do so using the group and sync_zoom parameter. All ILinePlots using the same group name zoom at the same time.
End of explanation
A = [
[0, 3, 0],
[4, 5, 2],
]
B = [
[0, 2, 1],
[2, 3, 3],
[3, 4, 0],
]
C = [
[0, 2, 3],
[2, 3, 2],
[3, 4, 1],
]
EVENTS = {}
EVENTS["A"] = A
EVENTS["B"] = B
EVENTS["C"] = C
trappy.EventPlot(EVENTS,
keys=EVENTS.keys, # Name of the Process Element
lane_prefix="LANE: ", # Name of Each TimeLine
num_lanes=4, # Number of Timelines
domain=[0,5] # Time Domain
).view()
Explanation: EventPlot
TRAPpy's Interactive Plotter features an Interactive Event TimeLine Plot. It accepts an input data of the type
<pre>
<code>
{ "A" : [
[event_start, event_end, lane],
.
.
[event_start, event_end, lane],
],
.
.
.
"B" : [
[event_start, event_end, lane],
.
.
[event_start, event_end, lane],
.
.
.
}
</code>
</pre>
Hovering on the rectangles gives the name of the process element and scrolling on the Plot Area and the window in the summary controls the zoom. One can also click and drag for panning a zoomed graph.
For Example:
End of explanation
A = [
[0, 3, "zero"],
[4, 5, "two"],
]
B = [
[0, 2, 1],
[2, 3, "three"],
[3, 4, "zero"],
]
C = [
[0, 2, "three"],
[2, 3, "two"],
[3, 4, 1],
]
EVENTS = {}
EVENTS["A"] = A
EVENTS["B"] = B
EVENTS["C"] = C
trappy.EventPlot(EVENTS,
keys=EVENTS.keys, # Name of the Process Element
lanes=["zero", 1, "two", "three"],
domain=[0,5] # Time Domain
).view()
Explanation: Lane names can also be specified as strings (or hashable objects that have an str representation) as follows
End of explanation
f = setup_sched()
trappy.plotter.plot_trace(f)
Explanation: TracePlot
A specification of the EventPlot creates a kernelshark like plot if the sched_switch event is enabled in the traces
End of explanation |
4,919 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Recommendations on GCP with TensorFlow and WALS with Cloud Composer
This lab is adapted from the original solution created by lukmanr
This project deploys a solution for a recommendation service on GCP, using the WALS algorithm in TensorFlow. Components include
Step1: Setup environment variables
<span style="color
Step2: Setup Google App Engine permissions
In IAM, change permissions for "Compute Engine default service account" from Editor to Owner. This is required so you can create and deploy App Engine versions from within Cloud Datalab. Note
Step3: Part One
Step4: 2. Create empty BigQuery dataset and load sample JSON data
Note
Step5: Install WALS model training package and model data
1. Create a distributable package. Copy the package up to the code folder in the bucket you created previously.
Step6: 2. Run the WALS model on the sample data set
Step7: This will take a couple minutes, and create a job directory under wals_ml_engine/jobs like "wals_ml_local_20180102_012345/model", containing the model files saved as numpy arrays.
View the locally trained model directory
Step8: 3. Copy the model files from this directory to the model folder in the project bucket
Step9: Install the recserve endpoint
1. Prepare the deploy template for the Cloud Endpoint API
Step10: This will output somthing like
Step11: 3. Prepare the deploy template for the App Engine App
Step12: You can ignore the script output "ERROR
Step13: This will take 7 - 10 minutes to deploy the app. While you wait, consider starting on Part Two below and completing the Cloud Composer DAG file.
Query the API for Article Recommendations
Lastly, you are able to test the recommendation model API by submitting a query request. Note the example userId passed and numRecs desired as the URL parameters for the model input.
Step14: If the call is successful, you will see the article IDs recommended for that specific user by the WALS ML model <br/>
(Example
Step17: Complete the training.py DAG file
Apache Airflow orchestrates tasks out to other services through a DAG (Directed Acyclic Graph) file which specifies what services to call, what to do, and when to run these tasks. DAG files are written in python and are loaded automatically into Airflow once present in the Airflow/dags/ folder in your Cloud Composer bucket.
Your task is to complete the partially written DAG file below which will enable the automatic retraining and redeployment of our WALS recommendation model.
Complete the #TODOs in the Airflow DAG file below and execute the code block to save the file
Step18: Copy local Airflow DAG file and plugins into the DAGs folder | Python Code:
%%bash
pip install sh --upgrade pip # needed to execute shell scripts later
Explanation: Recommendations on GCP with TensorFlow and WALS with Cloud Composer
This lab is adapted from the original solution created by lukmanr
This project deploys a solution for a recommendation service on GCP, using the WALS algorithm in TensorFlow. Components include:
Recommendation model code, and scripts to train and tune the model on ML Engine
A REST endpoint using Google Cloud Endpoints for serving recommendations
An Airflow server managed by Cloud Composer for running scheduled model training
Confirm Prerequisites
Create a Cloud Composer Instance
Create a Cloud Composer instance
Specify 'composer' for name
Choose a location
Keep the remaining settings at their defaults
Select Create
This takes 15 - 20 minutes. Continue with the rest of the lab as you will be using Cloud Composer near the end.
End of explanation
import os
PROJECT = 'PROJECT' # REPLACE WITH YOUR PROJECT ID
REGION = 'us-central1' # REPLACE WITH YOUR REGION e.g. us-central1
# do not change these
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = 'recserve_' + PROJECT
os.environ['REGION'] = REGION
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
%%bash
# create GCS bucket with recserve_PROJECT_NAME if not exists
exists=$(gsutil ls -d | grep -w gs://${BUCKET}/)
if [ -n "$exists" ]; then
echo "Not creating recserve_bucket since it already exists."
else
echo "Creating recserve_bucket"
gsutil mb -l ${REGION} gs://${BUCKET}
fi
Explanation: Setup environment variables
<span style="color: blue">Replace the below settings with your own.</span> Note: you can leave AIRFLOW_BUCKET blank and come back to it after your Composer instance is created which automatically will create an Airflow bucket for you. <br><br>
1. Make a GCS bucket with the name recserve_[YOUR-PROJECT-ID]:
End of explanation
# %%bash
# run app engine creation commands
# gcloud app create --region ${REGION} # see: https://cloud.google.com/compute/docs/regions-zones/
# gcloud app update --no-split-health-checks
Explanation: Setup Google App Engine permissions
In IAM, change permissions for "Compute Engine default service account" from Editor to Owner. This is required so you can create and deploy App Engine versions from within Cloud Datalab. Note: the alternative is to run all app engine commands directly in Cloud Shell instead of from within Cloud Datalab.<br/><br/>
Create an App Engine instance if you have not already by uncommenting and running the below code
End of explanation
%%bash
gsutil -m cp gs://cloud-training-demos/courses/machine_learning/deepdive/10_recommendation/endtoend/data/ga_sessions_sample.json.gz gs://${BUCKET}/data/ga_sessions_sample.json.gz
gsutil -m cp gs://cloud-training-demos/courses/machine_learning/deepdive/10_recommendation/endtoend/data/recommendation_events.csv data/recommendation_events.csv
gsutil -m cp gs://cloud-training-demos/courses/machine_learning/deepdive/10_recommendation/endtoend/data/recommendation_events.csv gs://${BUCKET}/data/recommendation_events.csv
Explanation: Part One: Setup and Train the WALS Model
Upload sample data to BigQuery
This tutorial comes with a sample Google Analytics data set, containing page tracking events from the Austrian news site Kurier.at. The schema file '''ga_sessions_sample_schema.json''' is located in the folder data in the tutorial code, and the data file '''ga_sessions_sample.json.gz''' is located in a public Cloud Storage bucket associated with this tutorial. To upload this data set to BigQuery:
Copy sample data files into our bucket
End of explanation
%%bash
# create BigQuery dataset if it doesn't already exist
exists=$(bq ls -d | grep -w GA360_test)
if [ -n "$exists" ]; then
echo "Not creating GA360_test since it already exists."
else
echo "Creating GA360_test dataset."
bq --project_id=${PROJECT} mk GA360_test
fi
# create the schema and load our sample Google Analytics session data
bq load --source_format=NEWLINE_DELIMITED_JSON \
GA360_test.ga_sessions_sample \
gs://${BUCKET}/data/ga_sessions_sample.json.gz \
data/ga_sessions_sample_schema.json # can't load schema files from GCS
Explanation: 2. Create empty BigQuery dataset and load sample JSON data
Note: Ingesting the 400K rows of sample data. This usually takes 5-7 minutes.
End of explanation
%%bash
cd wals_ml_engine
echo "creating distributable package"
python setup.py sdist
echo "copying ML package to bucket"
gsutil cp dist/wals_ml_engine-0.1.tar.gz gs://${BUCKET}/code/
Explanation: Install WALS model training package and model data
1. Create a distributable package. Copy the package up to the code folder in the bucket you created previously.
End of explanation
%%bash
# view the ML train local script before running
cat wals_ml_engine/mltrain.sh
%%bash
cd wals_ml_engine
# train locally with unoptimized hyperparams
./mltrain.sh local ../data/recommendation_events.csv --data-type web_views --use-optimized
# Options if we wanted to train on CMLE. We will do this with Cloud Composer later
# train on ML Engine with optimized hyperparams
# ./mltrain.sh train ../data/recommendation_events.csv --data-type web_views --use-optimized
# tune hyperparams on ML Engine:
# ./mltrain.sh tune ../data/recommendation_events.csv --data-type web_views
Explanation: 2. Run the WALS model on the sample data set:
End of explanation
ls wals_ml_engine/jobs
Explanation: This will take a couple minutes, and create a job directory under wals_ml_engine/jobs like "wals_ml_local_20180102_012345/model", containing the model files saved as numpy arrays.
View the locally trained model directory
End of explanation
%%bash
export JOB_MODEL=$(find wals_ml_engine/jobs -name "model" | tail -1)
gsutil cp ${JOB_MODEL}/* gs://${BUCKET}/model/
echo "Recommendation model file numpy arrays in bucket:"
gsutil ls gs://${BUCKET}/model/
Explanation: 3. Copy the model files from this directory to the model folder in the project bucket:
In the case of multiple models, take the most recent (tail -1)
End of explanation
%%bash
cd scripts
cat prepare_deploy_api.sh
%%bash
printf "\nCopy and run the deploy script generated below:\n"
cd scripts
./prepare_deploy_api.sh # Prepare config file for the API.
Explanation: Install the recserve endpoint
1. Prepare the deploy template for the Cloud Endpoint API:
End of explanation
%%bash
gcloud endpoints services deploy [REPLACE_WITH_TEMP_FILE_NAME.yaml]
Explanation: This will output somthing like:
To deploy: gcloud endpoints services deploy /var/folders/1m/r3slmhp92074pzdhhfjvnw0m00dhhl/T/tmp.n6QVl5hO.yaml
2. Run the endpoints deploy command output above:
<span style="color: blue">Be sure to replace the below [FILE_NAME] with the results from above before running.</span>
End of explanation
%%bash
# view the app deployment script
cat scripts/prepare_deploy_app.sh
%%bash
# prepare to deploy
cd scripts
./prepare_deploy_app.sh
Explanation: 3. Prepare the deploy template for the App Engine App:
End of explanation
%%bash
gcloud -q app deploy app/app_template.yaml_deploy.yaml
Explanation: You can ignore the script output "ERROR: (gcloud.app.create) The project [...] already contains an App Engine application. You can deploy your application using gcloud app deploy." This is expected.
The script will output something like:
To deploy: gcloud -q app deploy app/app_template.yaml_deploy.yaml
4. Run the command above:
End of explanation
%%bash
cd scripts
./query_api.sh # Query the API.
#./generate_traffic.sh # Send traffic to the API.
Explanation: This will take 7 - 10 minutes to deploy the app. While you wait, consider starting on Part Two below and completing the Cloud Composer DAG file.
Query the API for Article Recommendations
Lastly, you are able to test the recommendation model API by submitting a query request. Note the example userId passed and numRecs desired as the URL parameters for the model input.
End of explanation
AIRFLOW_BUCKET = 'us-central1-composer-21587538-bucket' # REPLACE WITH AIRFLOW BUCKET NAME
os.environ['AIRFLOW_BUCKET'] = AIRFLOW_BUCKET
Explanation: If the call is successful, you will see the article IDs recommended for that specific user by the WALS ML model <br/>
(Example: curl "https://qwiklabs-gcp-12345.appspot.com/recommendation?userId=5448543647176335931&numRecs=5"
{"articles":["299824032","1701682","299935287","299959410","298157062"]} )
Part One is done! You have successfully created the back-end architecture for serving your ML recommendation system. But we're not done yet, we still need to automatically retrain and redeploy our model once new data comes in. For that we will use Cloud Composer and Apache Airflow.<br/><br/>
Part Two: Setup a scheduled workflow with Cloud Composer
In this section you will complete a partially written training.py DAG file and copy it to the DAGS folder in your Composer instance.
Copy your Airflow bucket name
Navigate to your Cloud Composer instance<br/><br/>
Select DAGs Folder<br/><br/>
You will be taken to the Google Cloud Storage bucket that Cloud Composer has created automatically for your Airflow instance<br/><br/>
Copy the bucket name into the variable below (example: us-central1-composer-08f6edeb-bucket)
End of explanation
%%writefile airflow/dags/training.py
# Copyright 2018 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
DAG definition for recserv model training.
import airflow
from airflow import DAG
# Reference for all available airflow operators:
# https://github.com/apache/incubator-airflow/tree/master/airflow/contrib/operators
from airflow.contrib.operators.bigquery_operator import BigQueryOperator
from airflow.contrib.operators.bigquery_to_gcs import BigQueryToCloudStorageOperator
from airflow.hooks.base_hook import BaseHook
# from airflow.contrib.operators.mlengine_operator import MLEngineTrainingOperator
# above mlengine_operator currently doesnt support custom MasterType so we import our own plugins:
# custom plugins
from airflow.operators.app_engine_admin_plugin import AppEngineVersionOperator
from airflow.operators.ml_engine_plugin import MLEngineTrainingOperator
import datetime
def _get_project_id():
Get project ID from default GCP connection.
extras = BaseHook.get_connection('google_cloud_default').extra_dejson
key = 'extra__google_cloud_platform__project'
if key in extras:
project_id = extras[key]
else:
raise ('Must configure project_id in google_cloud_default '
'connection from Airflow Console')
return project_id
PROJECT_ID = _get_project_id()
# Data set constants, used in BigQuery tasks. You can change these
# to conform to your data.
# TODO: Specify your BigQuery dataset name and table name
DATASET = ''
TABLE_NAME = ''
ARTICLE_CUSTOM_DIMENSION = '10'
# TODO: Confirm bucket name and region
# GCS bucket names and region, can also be changed.
BUCKET = 'gs://recserve_' + PROJECT_ID
REGION = 'us-east1'
# The code package name comes from the model code in the wals_ml_engine
# directory of the solution code base.
PACKAGE_URI = BUCKET + '/code/wals_ml_engine-0.1.tar.gz'
JOB_DIR = BUCKET + '/jobs'
default_args = {
'owner': 'airflow',
'depends_on_past': False,
'start_date': airflow.utils.dates.days_ago(2),
'email': ['airflow@example.com'],
'email_on_failure': True,
'email_on_retry': False,
'retries': 5,
'retry_delay': datetime.timedelta(minutes=5)
}
# Default schedule interval using cronjob syntax - can be customized here
# or in the Airflow console.
# TODO: Specify a schedule interval in CRON syntax to run once a day at 2100 hours (9pm)
# Reference: https://airflow.apache.org/scheduler.html
schedule_interval = '' # example '00 XX 0 0 0'
# TODO: Title your DAG to be recommendations_training_v1
dag = DAG('',
default_args=default_args,
schedule_interval=schedule_interval)
dag.doc_md = __doc__
#
#
# Task Definition
#
#
# BigQuery training data query
bql='''
#legacySql
SELECT
fullVisitorId as clientId,
ArticleID as contentId,
(nextTime - hits.time) as timeOnPage,
FROM(
SELECT
fullVisitorId,
hits.time,
MAX(IF(hits.customDimensions.index={0},
hits.customDimensions.value,NULL)) WITHIN hits AS ArticleID,
LEAD(hits.time, 1) OVER (PARTITION BY fullVisitorId, visitNumber
ORDER BY hits.time ASC) as nextTime
FROM [{1}.{2}.{3}]
WHERE hits.type = "PAGE"
) HAVING timeOnPage is not null and contentId is not null;
'''
bql = bql.format(ARTICLE_CUSTOM_DIMENSION, PROJECT_ID, DATASET, TABLE_NAME)
# TODO: Complete the BigQueryOperator task to truncate the table if it already exists before writing
# Reference: https://airflow.apache.org/integration.html#bigqueryoperator
t1 = BigQuerySomething( # correct the operator name
task_id='bq_rec_training_data',
bql=bql,
destination_dataset_table='%s.recommendation_events' % DATASET,
write_disposition='WRITE_T_______', # specify to truncate on writes
dag=dag)
# BigQuery training data export to GCS
# TODO: Fill in the missing operator name for task #2 which
# takes a BigQuery dataset and table as input and exports it to GCS as a CSV
training_file = BUCKET + '/data/recommendation_events.csv'
t2 = BigQueryToCloudSomethingSomething( # correct the name
task_id='bq_export_op',
source_project_dataset_table='%s.recommendation_events' % DATASET,
destination_cloud_storage_uris=[training_file],
export_format='CSV',
dag=dag
)
# ML Engine training job
job_id = 'recserve_{0}'.format(datetime.datetime.now().strftime('%Y%m%d%H%M'))
job_dir = BUCKET + '/jobs/' + job_id
output_dir = BUCKET
training_args = ['--job-dir', job_dir,
'--train-files', training_file,
'--output-dir', output_dir,
'--data-type', 'web_views',
'--use-optimized']
# TODO: Fill in the missing operator name for task #3 which will
# start a new training job to Cloud ML Engine
# Reference: https://airflow.apache.org/integration.html#cloud-ml-engine
# https://cloud.google.com/ml-engine/docs/tensorflow/machine-types
t3 = MLEngineSomethingSomething( # complete the name
task_id='ml_engine_training_op',
project_id=PROJECT_ID,
job_id=job_id,
package_uris=[PACKAGE_URI],
training_python_module='trainer.task',
training_args=training_args,
region=REGION,
scale_tier='CUSTOM',
master_type='complex_model_m_gpu',
dag=dag
)
# App Engine deploy new version
t4 = AppEngineVersionOperator(
task_id='app_engine_deploy_version',
project_id=PROJECT_ID,
service_id='default',
region=REGION,
service_spec=None,
dag=dag
)
# TODO: Be sure to set_upstream dependencies for all tasks
t2.set_upstream(t1)
t3.set_upstream(t2)
t4.set_upstream(t) # complete
Explanation: Complete the training.py DAG file
Apache Airflow orchestrates tasks out to other services through a DAG (Directed Acyclic Graph) file which specifies what services to call, what to do, and when to run these tasks. DAG files are written in python and are loaded automatically into Airflow once present in the Airflow/dags/ folder in your Cloud Composer bucket.
Your task is to complete the partially written DAG file below which will enable the automatic retraining and redeployment of our WALS recommendation model.
Complete the #TODOs in the Airflow DAG file below and execute the code block to save the file
End of explanation
%%bash
gsutil cp airflow/dags/training.py gs://${AIRFLOW_BUCKET}/dags # overwrite if it exists
gsutil cp -r airflow/plugins gs://${AIRFLOW_BUCKET} # copy custom plugins
Explanation: Copy local Airflow DAG file and plugins into the DAGs folder
End of explanation |
4,920 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Download and get info from all EIA-923 Excel files
This setup downloads all the zip files, extracts the contents, and identifies the correct header row in the correct file. I'm only getting 2 columns of data (plant id and NERC region), but it can be modified for other data.
Step2: Export original data
Step3: Assign NERC region to pre-2005/6 facilities based on where they ended up
Somehow I'm having trouble doing this | Python Code:
%matplotlib inline
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import os
import glob
import numpy as np
import requests
from bs4 import BeautifulSoup
from urllib import urlretrieve
import zipfile
import fnmatch
url = 'https://www.eia.gov/electricity/data/eia923'
r = requests.get(url)
soup = BeautifulSoup(r.text, 'lxml')
table = soup.find_all('table', attrs={'class': 'simpletable'})[0]
fns = []
links = []
for row in table.find_all('td', attrs={'align': 'center'}):
href = row.a.get('href')
fns.append(href.split('/')[-1])
links.append(url + '/' + href)
fns
path = os.path.join('Data storage', '923 raw data')
os.mkdir(path)
base_path = os.path.join('Data storage', '923 raw data')
for fn, link in zip(fns, links):
path = os.path.join(base_path, fn)
urlretrieve(link, filename=path)
base_path = os.path.join('Data storage', '923 raw data')
for fn in fns:
zip_path = os.path.join(base_path, fn)
target_folder = os.path.join(base_path, fn.split('.')[0])
with zipfile.ZipFile(zip_path,"r") as zip_ref:
zip_ref.extractall(target_folder)
matches = []
for root, dirnames, filenames in os.walk(base_path):
for filename in fnmatch.filter(filenames, '*2_3*'):
matches.append(os.path.join(root, filename))
for filename in fnmatch.filter(filenames, 'eia923*'):
matches.append(os.path.join(root, filename))
for filename in fnmatch.filter(filenames, '*906920*.xls'):
matches.append(os.path.join(root, filename))
matches
def clip_at_header(df, year):
Find the appropriate header row, only keep Plant Id and NERC Region columns,
and add a column with the year
header = df.loc[df.iloc[:, 8].str.contains('NERC').replace(np.nan, False)].index[0]
# print header
# Drop rows above header
df = df.loc[header + 1:, :]
# Only keep columns 0 (plant id) and 8 (NERC Region)
df = df.iloc[:, [0, 8]]
df.columns = ['Plant Id', 'NERC Region']
df.reset_index(inplace=True, drop=True)
df.dropna(inplace=True)
df['Plant Id'] = pd.to_numeric(df['Plant Id'])
df['Year'] = year
return df
df_list = []
for fn in matches:
year = int(fn.split('/')[-2].split('_')[-1])
df = pd.read_excel(fn)
df_list.append(clip_at_header(df, year))
nerc_assignment = pd.concat(df_list)
nerc_assignment.reset_index(inplace=True, drop=True)
nerc_assignment.drop_duplicates(inplace=True)
nerc_assignment['Year'] = pd.to_numeric(nerc_assignment['Year'])
nerc_region = nerc_assignment['NERC Region']
nerc_year = nerc_assignment['Year']
for region in nerc_assignment['NERC Region'].unique():
years = nerc_assignment.loc[nerc_region == region, 'Year'].unique()
print (region, list(years))
Explanation: Download and get info from all EIA-923 Excel files
This setup downloads all the zip files, extracts the contents, and identifies the correct header row in the correct file. I'm only getting 2 columns of data (plant id and NERC region), but it can be modified for other data.
End of explanation
path = os.path.join('Data storage', 'Plant NERC regions.csv')
nerc_assignment.to_csv(path, index=False)
Explanation: Export original data
End of explanation
region_dict = dict(nerc_assignment.loc[nerc_assignment['Year'] == 2006,
['Plant Id', 'NERC Region']].values)
regions = ['ECAR', 'MAPP', 'MAIN', 'MAAC']
years = range(2001, 2006)
nerc_assignment.loc[(nerc_region.isin(regions)) &
(nerc_assignment['Year'].isin(years)),
'Corrected Region'] = nerc_assignment.loc[(nerc_region.isin(regions)) &
(nerc_assignment['Year'].isin(years)),
'Plant Id'].map(region_dict)
nerc_assignment.head()
nerc_assignment.loc[(nerc_assignment['Year'] == 2006) &
(nerc_assignment['Plant Id'] == 3), 'NERC Region'].values[0]
nerc_assignment.loc[(nerc_assignment['Plant Id'] == 3) &
(nerc_assignment['Year'].isin(years)), 'Corrected Region'] = 'SERC'
nerc_assignment.loc[(nerc_assignment['Plant Id'] == 3) &
(nerc_assignment['Year'].isin(years)), 'Corrected Region']
nerc_assignment.loc[nerc_assignment['Year'] == 2002].head()
nerc_assignment.index = pd.MultiIndex.from_arrays([nerc_assignment['Year'],
nerc_assignment['Plant Id']])
nerc_assignment.head()
idx = pd.IndexSlice
regions_2006 = nerc_assignment.loc[idx[2006, :], 'NERC Region'].copy()
regions_2006 = nerc_assignment.xs(2006, level='Year')['NERC Region']
regions_2006
for year in range(2001, 2006):
nerc_assignment.xs(year, level='Year')['Corrected NERC'] = regions_2006
nerc_assignment
Explanation: Assign NERC region to pre-2005/6 facilities based on where they ended up
Somehow I'm having trouble doing this
End of explanation |
4,921 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Day 11 - pre-class assignment
Goals for today's pre-class assignment
Use random number generators to create a sequence of random floats and integers
Create a function and use it to do something.
Assignment instructions
Watch the videos below, read through Section 4.6, 4.7.1, and 4.7.2 of the Python Tutorial, and complete the programming problems assigned below.
This assignment is due by 11
Step1: Some possibly useful links
Step2: Tutorial on functions in python
Dive Into Python - section on functions
Question 2
Step3: Question 3
Step5: Assignment wrapup
Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment! | Python Code:
# Imports the functionality that we need to display YouTube videos in a Jupyter Notebook.
# You need to run this cell before you run ANY of the YouTube videos.
from IPython.display import YouTubeVideo
# WATCH THE VIDEO IN FULL-SCREEN MODE
YouTubeVideo("fF841G53fGo",width=640,height=360) # random numbers
Explanation: Day 11 - pre-class assignment
Goals for today's pre-class assignment
Use random number generators to create a sequence of random floats and integers
Create a function and use it to do something.
Assignment instructions
Watch the videos below, read through Section 4.6, 4.7.1, and 4.7.2 of the Python Tutorial, and complete the programming problems assigned below.
This assignment is due by 11:59 p.m. the day before class, and should be uploaded into the "Pre-class assignments" dropbox folder for Day 11. Submission instructions can be found at the end of the notebook.
End of explanation
# put your code here.
# WATCH THE VIDEO IN FULL-SCREEN MODE
YouTubeVideo("o_wzbAUZWQk",width=640,height=360) # functions
Explanation: Some possibly useful links:
Python "random" module documentation
Numpy "random" module
And some interesting links on what you use random numbers to do in programming:
Wikipedia article on "Applications of randomness"
Wikipedia article on random number generation
Question 1: Using the Python random module, first seed the random number generator with a number of your choosing and then use a loop to create and print out several floating-point numbers whose values are between 5 and 10. Verify that if you re-run this code several times, the random numbers stay the same, and that if you comment out the code they change!
End of explanation
# put your code here.
Explanation: Tutorial on functions in python
Dive Into Python - section on functions
Question 2: Give a function a list of floating-point numbers and have it return the min, max, and average of them as three separate variables. Store those in variables and print them out.
End of explanation
# put your code here
Explanation: Question 3: Give a function a list of floating-point numbers and have it return either the min, max, or mean value, depending on an optional keyword (that you give it as a second argument). The default should be to provide the mean value. Store that in a variable and print it out.
End of explanation
from IPython.display import HTML
HTML(
<iframe
src="https://goo.gl/forms/rTmsyHG72q8pF0cT2?embedded=true"
width="80%"
height="1200px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
)
Explanation: Assignment wrapup
Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment!
End of explanation |
4,922 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Orientation with IKPy
In this Notebook, we'll demonstrate inverse kinematics on orientation on a baxter robot
Step1: Inverse Kinematics with Orientation
Step2: Mastering orientation
Orientation vector and frame
Orientation in the example above IKPy is specified just by defining a unit vector we want the robot to align with.
In the example above, the orientation vector is simply the Z axis
Step3: We see that the arm's X axis (in green) is aligned with the absolute referential's Z axis (in orange)
Step4: We see that the arm frame's axes are all aligned with the absolute axes (in green/light blue/orange) | Python Code:
# Some necessary imports
import numpy as np
from ikpy.chain import Chain
from ikpy.utils import plot
# Optional: support for 3D plotting in the NB
%matplotlib widget
# turn this off, if you don't need it
# First, let's import the baxter chains
baxter_left_arm_chain = Chain.from_json_file("../resources/baxter/baxter_left_arm.json")
baxter_right_arm_chain = Chain.from_json_file("../resources/baxter/baxter_right_arm.json")
baxter_pedestal_chain = Chain.from_json_file("../resources/baxter/baxter_pedestal.json")
baxter_head_chain = Chain.from_json_file("../resources/baxter/baxter_head.json")
# Let's how it looks without kinematics first
from mpl_toolkits.mplot3d import Axes3D;
fig, ax = plot.init_3d_figure();
baxter_left_arm_chain.plot([0] * (len(baxter_left_arm_chain)), ax)
baxter_right_arm_chain.plot([0] * (len(baxter_right_arm_chain)), ax)
baxter_pedestal_chain.plot([0] * (2 + 2), ax)
baxter_head_chain.plot([0] * (4 + 2), ax)
ax.legend()
Explanation: Orientation with IKPy
In this Notebook, we'll demonstrate inverse kinematics on orientation on a baxter robot:
Setup
End of explanation
# Let's ask baxter to put his left arm at a target_position, with a target_orientation on the X axis.
# This means we want the X axis of his hand to follow the desired vector
target_orientation = [0, 0, 1]
target_position = [0.1, 0.5, -0.1]
# Compute the inverse kinematics with position
ik = baxter_left_arm_chain.inverse_kinematics(target_position, target_orientation, orientation_mode="X")
# Let's see what are the final positions and orientations of the robot
position = baxter_left_arm_chain.forward_kinematics(ik)[:3, 3]
orientation = baxter_left_arm_chain.forward_kinematics(ik)[:3, 0]
# And compare them with was what required
print("Requested position: {} vs Reached position: {}".format(target_position, position))
print("Requested orientation on the X axis: {} vs Reached orientation on the X axis: {}".format(target_orientation, orientation))
# We see that the chain reached its position!
# Plot how it goes
fig, ax = plot.init_3d_figure();
baxter_left_arm_chain.plot(ik, ax)
baxter_right_arm_chain.plot([0] * (len(baxter_right_arm_chain)), ax)
baxter_pedestal_chain.plot([0] * (2 + 2), ax)
baxter_head_chain.plot([0] * (4 + 2), ax)
ax.legend()
Explanation: Inverse Kinematics with Orientation
End of explanation
# Let's ask baxter to put his left arm's X axis to the absolute Z axis
orientation_axis = "X"
target_orientation = [0, 0, 1]
# Compute the inverse kinematics with position
ik = baxter_left_arm_chain.inverse_kinematics(
target_position=[0.1, 0.5, -0.1],
target_orientation=target_orientation,
orientation_mode=orientation_axis)
# Plot how it goes
fig, ax = plot.init_3d_figure();
baxter_left_arm_chain.plot(ik, ax)
baxter_right_arm_chain.plot([0] * (len(baxter_right_arm_chain)), ax)
baxter_pedestal_chain.plot([0] * (2 + 2), ax)
baxter_head_chain.plot([0] * (4 + 2), ax)
ax.legend()
Explanation: Mastering orientation
Orientation vector and frame
Orientation in the example above IKPy is specified just by defining a unit vector we want the robot to align with.
In the example above, the orientation vector is simply the Z axis:
target_orientation = [0, 0, 1]
However, in 3D, orientation is not only one vector, but a full referential.
That's, in IKPy you have two options:
Doing IK orientation on only one axis, as in the example above.
Doing IK orientation on a full referential
Let's see the differences with a robot with a hand with a pointing finger:
By doing orientation on only one axis, you only want your finger to point at a specific direction, but don't care at how the palm is
By doing orientation on a full referential, you want your finger to point at a specific direction, but also want your palm to be at a specific orientation
Orientation on a single axis
Orientation on a single axis is quite straightforward. You need to provide:
A target unit vector, in the absolute referential
The axis of your link you want to match that target unit vector
In the example below, we ask Baxter's arm's X axis (so relative to the link) to match the absolute Z axis
End of explanation
# Let's ask baxter to put his left arm as
target_orientation = np.eye(3)
# Compute the inverse kinematics with position
ik = baxter_left_arm_chain.inverse_kinematics(
target_position=[0.1, 0.5, -0.1],
target_orientation=target_orientation,
orientation_mode="all")
# Plot how it goes
fig, ax = plot.init_3d_figure();
baxter_left_arm_chain.plot(ik, ax)
baxter_right_arm_chain.plot([0] * (len(baxter_right_arm_chain)), ax)
baxter_pedestal_chain.plot([0] * (2 + 2), ax)
baxter_head_chain.plot([0] * (4 + 2), ax)
ax.legend()
Explanation: We see that the arm's X axis (in green) is aligned with the absolute referential's Z axis (in orange):
Orientation on a full referential
In this setting, you just provide a frame to which the link's frame must align with, in the absolute referential.
This frame is a 3x3 orientation matrix (i.e. an orthogonal matrix, i.e. a matrix where all columns norm is one, and all columns are orthogonal)
End of explanation
# First begin to place the arm at the given position, without orientation
# So it will be easier for the robot to just move its hand to reach the desired orientation
ik = baxter_left_arm_chain.inverse_kinematics(target_position)
ik = baxter_left_arm_chain.inverse_kinematics(target_position, target_orientation, initial_position=ik, orientation_mode="X")
Explanation: We see that the arm frame's axes are all aligned with the absolute axes (in green/light blue/orange):
Orientation and Position
When dealing with orientation, you may also want to set a target position.
IKPy can natively manage both orientation and position at the same time.
However, in some difficult cases, reaching a target position and orientation may be difficult, or even impossible.
In these cases, a solution is to cut this problem into two steps:
Reach the desired position
From this position, reach the desired orientation
This is what is done in the example below:
End of explanation |
4,923 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Boosting a decision stump
The goal of this notebook is to implement your own boosting module.
Brace yourselves! This is going to be a fun and challenging assignment.
Use SFrames to do some feature engineering.
Modify the decision trees to incorporate weights.
Implement Adaboost ensembling.
Use your implementation of Adaboost to train a boosted decision stump ensemble.
Evaluate the effect of boosting (adding more decision stumps) on performance of the model.
Explore the robustness of Adaboost to overfitting.
Let's get started!
Fire up GraphLab Create
Make sure you have the latest version of GraphLab Create (1.8.3 or newer). Upgrade by
pip install graphlab-create --upgrade
See this page for detailed instructions on upgrading.
Step1: Getting the data ready
We will be using the same LendingClub dataset as in the previous assignment.
Step2: Extracting the target and the feature columns
We will now repeat some of the feature processing steps that we saw in the previous assignment
Step3: Transform categorical data into binary features
In this assignment, we will work with binary decision trees. Since all of our features are currently categorical features, we want to turn them into binary features using 1-hot encoding.
We can do so with the following code block (see the first assignments for more details)
Step4: Let's see what the feature columns look like now
Step7: Weighted decision trees
Let's modify our decision tree code from Module 5 to support weighting of individual data points.
Weighted error definition
Consider a model with $N$ data points with
Step8: Checkpoint
Step11: Recall that the classification error is defined as follows
Step12: Checkpoint
Step13: Note. If you get an exception in the line of "the logical filter has different size than the array", try upgradting your GraphLab Create installation to 1.8.3 or newer.
Very Optional. Relationship between weighted error and weight of mistakes
By definition, the weighted error is the weight of mistakes divided by the weight of all data points, so
$$
\mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}}) = \frac{\sum_{i=1}^{n} \alpha_i \times 1[y_i \neq \hat{y_i}]}{\sum_{i=1}^{n} \alpha_i} = \frac{\mathrm{WM}(\mathbf{\alpha}, \mathbf{\hat{y}})}{\sum_{i=1}^{n} \alpha_i}.
$$
In the code above, we obtain $\mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}})$ from the two weights of mistakes from both sides, $\mathrm{WM}(\mathbf{\alpha}{\mathrm{left}}, \mathbf{\hat{y}}{\mathrm{left}})$ and $\mathrm{WM}(\mathbf{\alpha}{\mathrm{right}}, \mathbf{\hat{y}}{\mathrm{right}})$. First, notice that the overall weight of mistakes $\mathrm{WM}(\mathbf{\alpha}, \mathbf{\hat{y}})$ can be broken into two weights of mistakes over either side of the split
Step15: We provide a function that learns a weighted decision tree recursively and implements 3 stopping conditions
Step16: Here is a recursive function to count the nodes in your tree
Step17: Run the following test code to check your implementation. Make sure you get 'Test passed' before proceeding.
Step18: Let us take a quick look at what the trained tree is like. You should get something that looks like the following
{'is_leaf'
Step19: Making predictions with a weighted decision tree
We give you a function that classifies one data point. It can also return the probability if you want to play around with that as well.
Step20: Evaluating the tree
Now, we will write a function to evaluate a decision tree by computing the classification error of the tree on the given dataset.
Again, recall that the classification error is defined as follows
Step21: Example
Step22: Now, we will compute the classification error on the subset_20, i.e. the subset of data points whose weight is 1 (namely the first and last 10 data points).
Step23: Now, let us compare the classification error of the model small_data_decision_tree_subset_20 on the entire test set train_data
Step24: The model small_data_decision_tree_subset_20 performs a lot better on subset_20 than on train_data.
So, what does this mean?
* The points with higher weights are the ones that are more important during the training process of the weighted decision tree.
* The points with zero weights are basically ignored during training.
Quiz Question
Step25: Implementing your own Adaboost (on decision stumps)
Now that we have a weighted decision tree working, it takes only a bit of work to implement Adaboost. For the sake of simplicity, let us stick with decision tree stumps by training trees with max_depth=1.
Recall from the lecture the procedure for Adaboost
Step26: Checking your Adaboost code
Train an ensemble of two tree stumps and see which features those stumps split on. We will run the algorithm with the following parameters
Step27: Here is what the first stump looks like
Step28: Here is what the next stump looks like
Step29: If your Adaboost is correctly implemented, the following things should be true
Step30: Making predictions
Recall from the lecture that in order to make predictions, we use the following formula
Step31: Now, let us take a quick look what the stump_weights look like at the end of each iteration of the 10-stump ensemble
Step32: Quiz Question
Step33: Computing training error at the end of each iteration
Now, we will compute the classification error on the train_data and see how it is reduced as trees are added.
Step34: Visualizing training error vs number of iterations
We have provided you with a simple code snippet that plots classification error with the number of iterations.
Step35: Quiz Question
Step36: Visualize both the training and test errors
Now, let us plot the training & test error with the number of iterations. | Python Code:
import numpy as np
import pandas as pd
import json
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Boosting a decision stump
The goal of this notebook is to implement your own boosting module.
Brace yourselves! This is going to be a fun and challenging assignment.
Use SFrames to do some feature engineering.
Modify the decision trees to incorporate weights.
Implement Adaboost ensembling.
Use your implementation of Adaboost to train a boosted decision stump ensemble.
Evaluate the effect of boosting (adding more decision stumps) on performance of the model.
Explore the robustness of Adaboost to overfitting.
Let's get started!
Fire up GraphLab Create
Make sure you have the latest version of GraphLab Create (1.8.3 or newer). Upgrade by
pip install graphlab-create --upgrade
See this page for detailed instructions on upgrading.
End of explanation
loans = pd.read_csv('lending-club-data.csv')
loans.head(2)
loans.columns
Explanation: Getting the data ready
We will be using the same LendingClub dataset as in the previous assignment.
End of explanation
features = ['grade', # grade of the loan
'term', # the term of the loan
'home_ownership', # home ownership status: own, mortgage or rent
'emp_length', # number of years of employment
]
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
loans = loans.drop('bad_loans', axis=1)
target = 'safe_loans'
loans = loans[features + [target]]
print loans.shape
Explanation: Extracting the target and the feature columns
We will now repeat some of the feature processing steps that we saw in the previous assignment:
First, we re-assign the target to have +1 as a safe (good) loan, and -1 as a risky (bad) loan.
Next, we select four categorical features:
1. grade of the loan
2. the length of the loan term
3. the home ownership status: own, mortgage, rent
4. number of years of employment.
End of explanation
categorical_variables = []
for feat_name, feat_type in zip(loans.columns, loans.dtypes):
if feat_type == object:
categorical_variables.append(feat_name)
for feature in categorical_variables:
loans_one_hot_encoded = pd.get_dummies(loans[feature],prefix=feature)
loans_one_hot_encoded.fillna(0)
#print loans_one_hot_encoded
loans = loans.drop(feature, axis=1)
for col in loans_one_hot_encoded.columns:
loans[col] = loans_one_hot_encoded[col]
print loans.head(2)
print loans.columns
Explanation: Transform categorical data into binary features
In this assignment, we will work with binary decision trees. Since all of our features are currently categorical features, we want to turn them into binary features using 1-hot encoding.
We can do so with the following code block (see the first assignments for more details):
End of explanation
with open('module-8-assignment-2-train-idx.json') as train_data_file:
train_idx = json.load(train_data_file)
with open('module-8-assignment-2-test-idx.json') as test_data_file:
test_idx = json.load(test_data_file)
print train_idx[:3]
print test_idx[:3]
print len(train_idx)
print len(test_idx)
train_data = loans.iloc[train_idx]
test_data = loans.iloc[test_idx]
print len(train_data.dtypes)
print len(loans.dtypes )
features = list(train_data.columns)
features.remove('safe_loans')
print list(train_data.columns)
print features
print len(features)
Explanation: Let's see what the feature columns look like now:
Train-test split
We split the data into training and test sets with 80% of the data in the training set and 20% of the data in the test set. We use seed=1 so that everyone gets the same result.
End of explanation
def intermediate_node_weighted_mistakes(labels_in_node, data_weights):
# Sum the weights of all entries with label +1
print 'labels_in_node: '+ str(labels_in_node)
print 'data_weights: '+str(data_weights)
print data_weights[labels_in_node == +1]
print np.array(data_weights[labels_in_node == +1])
print np.sum(np.array(data_weights[labels_in_node == +1]))
labels_in_node = np.array(labels_in_node)
data_weights = np.array(data_weights)
total_weight_positive = np.sum(data_weights[labels_in_node == +1])
# Weight of mistakes for predicting all -1's is equal to the sum above
### YOUR CODE HERE
# Sum the weights of all entries with label -1
### YOUR CODE HERE
print np.array(data_weights[labels_in_node == -1])
print np.sum(np.array(data_weights[labels_in_node == -1]))
total_weight_negative = np.sum(data_weights[labels_in_node == -1])
# Weight of mistakes for predicting all +1's is equal to the sum above
### YOUR CODE HERE
# Return the tuple (weight, class_label) representing the lower of the two weights
# class_label should be an integer of value +1 or -1.
# If the two weights are identical, return (weighted_mistakes_all_positive,+1)
### YOUR CODE HERE
#print "total_weight_positive: {}, total_weight_negative: {}".format(total_weight_positive, total_weight_negative)
if total_weight_positive >= total_weight_negative:
return (total_weight_negative, +1)
else:
return (total_weight_positive, -1)
Explanation: Weighted decision trees
Let's modify our decision tree code from Module 5 to support weighting of individual data points.
Weighted error definition
Consider a model with $N$ data points with:
* Predictions $\hat{y}_1 ... \hat{y}_n$
* Target $y_1 ... y_n$
* Data point weights $\alpha_1 ... \alpha_n$.
Then the weighted error is defined by:
$$
\mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}}) = \frac{\sum_{i=1}^{n} \alpha_i \times 1[y_i \neq \hat{y_i}]}{\sum_{i=1}^{n} \alpha_i}
$$
where $1[y_i \neq \hat{y_i}]$ is an indicator function that is set to $1$ if $y_i \neq \hat{y_i}$.
Write a function to compute weight of mistakes
Write a function that calculates the weight of mistakes for making the "weighted-majority" predictions for a dataset. The function accepts two inputs:
* labels_in_node: Targets $y_1 ... y_n$
* data_weights: Data point weights $\alpha_1 ... \alpha_n$
We are interested in computing the (total) weight of mistakes, i.e.
$$
\mathrm{WM}(\mathbf{\alpha}, \mathbf{\hat{y}}) = \sum_{i=1}^{n} \alpha_i \times 1[y_i \neq \hat{y_i}].
$$
This quantity is analogous to the number of mistakes, except that each mistake now carries different weight. It is related to the weighted error in the following way:
$$
\mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}}) = \frac{\mathrm{WM}(\mathbf{\alpha}, \mathbf{\hat{y}})}{\sum_{i=1}^{n} \alpha_i}
$$
The function intermediate_node_weighted_mistakes should first compute two weights:
* $\mathrm{WM}{-1}$: weight of mistakes when all predictions are $\hat{y}_i = -1$ i.e $\mathrm{WM}(\mathbf{\alpha}, \mathbf{-1}$)
* $\mathrm{WM}{+1}$: weight of mistakes when all predictions are $\hat{y}_i = +1$ i.e $\mbox{WM}(\mathbf{\alpha}, \mathbf{+1}$)
where $\mathbf{-1}$ and $\mathbf{+1}$ are vectors where all values are -1 and +1 respectively.
After computing $\mathrm{WM}{-1}$ and $\mathrm{WM}{+1}$, the function intermediate_node_weighted_mistakes should return the lower of the two weights of mistakes, along with the class associated with that weight. We have provided a skeleton for you with YOUR CODE HERE to be filled in several places.
End of explanation
example_labels = np.array([-1, -1, 1, 1, 1])
example_data_weights = np.array([1., 2., .5, 1., 1.])
if intermediate_node_weighted_mistakes(example_labels, example_data_weights) == (2.5, -1):
print 'Test passed!'
else:
print 'Test failed... try again!'
Explanation: Checkpoint: Test your intermediate_node_weighted_mistakes function, run the following cell:
End of explanation
# If the data is identical in each feature, this function should return None
def best_splitting_feature(data, features, target, data_weights):
# These variables will keep track of the best feature and the corresponding error
best_feature = None
best_error = float('+inf')
num_points = float(len(data))
print "len(data_weights): {}".format(len(data_weights))
data['data_weights'] = data_weights
# Loop through each feature to consider splitting on that feature
for feature in features:
# The left split will have all data points where the feature value is 0
# The right split will have all data points where the feature value is 1
left_split = data[data[feature] == 0]
right_split = data[data[feature] == 1]
# print "len(left_split): {}, len(right_split): {}".format(len(left_split), len(right_split))
# Apply the same filtering to data_weights to create left_data_weights, right_data_weights
## YOUR CODE HERE
left_data_weights = left_split['data_weights']
right_data_weights = right_split['data_weights']
print "left_data_weights: {}, right_data_weights: {}".format(left_data_weights, right_data_weights)
print "len(left_data_weights): {}, len(right_data_weights): {}".format(len(left_data_weights), len(right_data_weights))
print "sum(left_data_weights): {}, sum(right_data_weights): {}".format(sum(left_data_weights), sum(right_data_weights))
# DIFFERENT HERE
# Calculate the weight of mistakes for left and right sides
## YOUR CODE HERE
#print "np.array type: {}".format(np.array(left_split[target]))
left_weighted_mistakes, left_class = intermediate_node_weighted_mistakes(np.array(left_split[target]), np.array(left_data_weights))
right_weighted_mistakes, right_class = intermediate_node_weighted_mistakes(np.array(right_split[target]), np.array(right_data_weights))
# DIFFERENT HERE
# Compute weighted error by computing
# ( [weight of mistakes (left)] + [weight of mistakes (right)] ) / [total weight of all data points]
## YOUR CODE HERE
error = (left_weighted_mistakes + right_weighted_mistakes) * 1. / sum(data_weights)
print "left_weighted_mistakes: {}, right_weighted_mistakes: {}".format(left_weighted_mistakes, right_weighted_mistakes)
print "left_weighted_mistakes + right_weighted_mistakes: {}, error: {}".format(left_weighted_mistakes + right_weighted_mistakes, error)
print "feature and error: "
print "feature: {}, error: {}".format(feature, error)
# If this is the best error we have found so far, store the feature and the error
if error < best_error:
#print "best_feature: {}, best_error: {}".format(feature, error)
best_feature = feature
best_error = error
#print "best_feature: {}, best_error: {}".format(best_feature, best_error)
# Return the best feature we found
return best_feature
Explanation: Recall that the classification error is defined as follows:
$$
\mbox{classification error} = \frac{\mbox{# mistakes}}{\mbox{# all data points}}
$$
Quiz Question: If we set the weights $\mathbf{\alpha} = 1$ for all data points, how is the weight of mistakes $\mbox{WM}(\mathbf{\alpha}, \mathbf{\hat{y}})$ related to the classification error?
equal
Function to pick best feature to split on
We continue modifying our decision tree code from the earlier assignment to incorporate weighting of individual data points. The next step is to pick the best feature to split on.
The best_splitting_feature function is similar to the one from the earlier assignment with two minor modifications:
1. The function best_splitting_feature should now accept an extra parameter data_weights to take account of weights of data points.
2. Instead of computing the number of mistakes in the left and right side of the split, we compute the weight of mistakes for both sides, add up the two weights, and divide it by the total weight of the data.
Complete the following function. Comments starting with DIFFERENT HERE mark the sections where the weighted version differs from the original implementation.
End of explanation
example_data_weights = np.array(len(train_data)* [1.5])
#print "example_data_weights: {}".format(example_data_weights)
#print "train_data: \n {}, features: {}, target: {}, example_data_weights: {}".format(train_data, features, target, example_data_weights)
#print best_splitting_feature(train_data, features, target, example_data_weights)
if best_splitting_feature(train_data, features, target, example_data_weights) == 'term. 36 months':
print 'Test passed!'
else:
print 'Test failed... try again!'
Explanation: Checkpoint: Now, we have another checkpoint to make sure you are on the right track.
End of explanation
def create_leaf(target_values, data_weights):
# Create a leaf node
leaf = {'splitting_feature' : None,
'is_leaf': True}
# Computed weight of mistakes.
weighted_error, best_class = intermediate_node_weighted_mistakes(target_values, data_weights)
# Store the predicted class (1 or -1) in leaf['prediction']
leaf['prediction'] = best_class
return leaf
Explanation: Note. If you get an exception in the line of "the logical filter has different size than the array", try upgradting your GraphLab Create installation to 1.8.3 or newer.
Very Optional. Relationship between weighted error and weight of mistakes
By definition, the weighted error is the weight of mistakes divided by the weight of all data points, so
$$
\mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}}) = \frac{\sum_{i=1}^{n} \alpha_i \times 1[y_i \neq \hat{y_i}]}{\sum_{i=1}^{n} \alpha_i} = \frac{\mathrm{WM}(\mathbf{\alpha}, \mathbf{\hat{y}})}{\sum_{i=1}^{n} \alpha_i}.
$$
In the code above, we obtain $\mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}})$ from the two weights of mistakes from both sides, $\mathrm{WM}(\mathbf{\alpha}{\mathrm{left}}, \mathbf{\hat{y}}{\mathrm{left}})$ and $\mathrm{WM}(\mathbf{\alpha}{\mathrm{right}}, \mathbf{\hat{y}}{\mathrm{right}})$. First, notice that the overall weight of mistakes $\mathrm{WM}(\mathbf{\alpha}, \mathbf{\hat{y}})$ can be broken into two weights of mistakes over either side of the split:
$$
\mathrm{WM}(\mathbf{\alpha}, \mathbf{\hat{y}})
= \sum_{i=1}^{n} \alpha_i \times 1[y_i \neq \hat{y_i}]
= \sum_{\mathrm{left}} \alpha_i \times 1[y_i \neq \hat{y_i}]
+ \sum_{\mathrm{right}} \alpha_i \times 1[y_i \neq \hat{y_i}]\
= \mathrm{WM}(\mathbf{\alpha}{\mathrm{left}}, \mathbf{\hat{y}}{\mathrm{left}}) + \mathrm{WM}(\mathbf{\alpha}{\mathrm{right}}, \mathbf{\hat{y}}{\mathrm{right}})
$$
We then divide through by the total weight of all data points to obtain $\mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}})$:
$$
\mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}})
= \frac{\mathrm{WM}(\mathbf{\alpha}{\mathrm{left}}, \mathbf{\hat{y}}{\mathrm{left}}) + \mathrm{WM}(\mathbf{\alpha}{\mathrm{right}}, \mathbf{\hat{y}}{\mathrm{right}})}{\sum_{i=1}^{n} \alpha_i}
$$
Building the tree
With the above functions implemented correctly, we are now ready to build our decision tree. Recall from the previous assignments that each node in the decision tree is represented as a dictionary which contains the following keys:
{
'is_leaf' : True/False.
'prediction' : Prediction at the leaf node.
'left' : (dictionary corresponding to the left tree).
'right' : (dictionary corresponding to the right tree).
'features_remaining' : List of features that are posible splits.
}
Let us start with a function that creates a leaf node given a set of target values:
End of explanation
def weighted_decision_tree_create(data, features, target, data_weights, current_depth = 1, max_depth = 10):
remaining_features = features[:] # Make a copy of the features.
target_values = data[target]
data['data_weights'] = data_weights
print "--------------------------------------------------------------------"
print "Subtree, depth = %s (%s data points)." % (current_depth, len(target_values))
# Stopping condition 1. Error is 0.
if intermediate_node_weighted_mistakes(target_values, data_weights)[0] <= 1e-15:
print "Stopping condition 1 reached."
return create_leaf(target_values, data_weights)
# Stopping condition 2. No more features.
if remaining_features == []:
print "Stopping condition 2 reached."
return create_leaf(target_values, data_weights)
# Additional stopping condition (limit tree depth)
if current_depth > max_depth:
print "Reached maximum depth. Stopping for now."
return create_leaf(target_values, data_weights)
# If all the datapoints are the same, splitting_feature will be None. Create a leaf
splitting_feature = best_splitting_feature(data, features, target, data_weights)
remaining_features.remove(splitting_feature)
left_split = data[data[splitting_feature] == 0]
right_split = data[data[splitting_feature] == 1]
left_data_weights = data_weights[data[splitting_feature] == 0]
right_data_weights = data_weights[data[splitting_feature] == 1]
left_data_weights = np.array(left_split['data_weights'])
right_data_weights = np.array(right_split['data_weights'])
print "Split on feature %s. (%s, %s)" % (\
splitting_feature, len(left_split), len(right_split))
# Create a leaf node if the split is "perfect"
if len(left_split) == len(data):
print "Creating leaf node."
return create_leaf(left_split[target], data_weights)
if len(right_split) == len(data):
print "Creating leaf node."
return create_leaf(right_split[target], data_weights)
# Repeat (recurse) on left and right subtrees
left_tree = weighted_decision_tree_create(
left_split, remaining_features, target, left_data_weights, current_depth + 1, max_depth)
right_tree = weighted_decision_tree_create(
right_split, remaining_features, target, right_data_weights, current_depth + 1, max_depth)
return {'is_leaf' : False,
'prediction' : None,
'splitting_feature': splitting_feature,
'left' : left_tree,
'right' : right_tree}
Explanation: We provide a function that learns a weighted decision tree recursively and implements 3 stopping conditions:
1. All data points in a node are from the same class.
2. No more features to split on.
3. Stop growing the tree when the tree depth reaches max_depth.
End of explanation
def count_nodes(tree):
if tree['is_leaf']:
return 1
return 1 + count_nodes(tree['left']) + count_nodes(tree['right'])
Explanation: Here is a recursive function to count the nodes in your tree:
End of explanation
example_data_weights = np.array([1.0 for i in range(len(train_data))])
small_data_decision_tree = weighted_decision_tree_create(train_data, features, target,
example_data_weights, max_depth=2)
if count_nodes(small_data_decision_tree) == 7:
print 'Test passed!'
else:
print 'Test failed... try again!'
print 'Number of nodes found:', count_nodes(small_data_decision_tree)
print 'Number of nodes that should be there: 7'
Explanation: Run the following test code to check your implementation. Make sure you get 'Test passed' before proceeding.
End of explanation
small_data_decision_tree
Explanation: Let us take a quick look at what the trained tree is like. You should get something that looks like the following
{'is_leaf': False,
'left': {'is_leaf': False,
'left': {'is_leaf': True, 'prediction': -1, 'splitting_feature': None},
'prediction': None,
'right': {'is_leaf': True, 'prediction': 1, 'splitting_feature': None},
'splitting_feature': 'grade.A'
},
'prediction': None,
'right': {'is_leaf': False,
'left': {'is_leaf': True, 'prediction': 1, 'splitting_feature': None},
'prediction': None,
'right': {'is_leaf': True, 'prediction': -1, 'splitting_feature': None},
'splitting_feature': 'grade.D'
},
'splitting_feature': 'term. 36 months'
}
End of explanation
def classify(tree, x, annotate = False):
# If the node is a leaf node.
if tree['is_leaf']:
if annotate:
print "At leaf, predicting %s" % tree['prediction']
return tree['prediction']
else:
# Split on feature.
split_feature_value = x[tree['splitting_feature']]
if annotate:
print "Split on %s = %s" % (tree['splitting_feature'], split_feature_value)
if split_feature_value == 0:
return classify(tree['left'], x, annotate)
else:
return classify(tree['right'], x, annotate)
Explanation: Making predictions with a weighted decision tree
We give you a function that classifies one data point. It can also return the probability if you want to play around with that as well.
End of explanation
def evaluate_classification_error(tree, data):
# Apply the classify(tree, x) to each row in your data
prediction = data.apply(lambda x: classify(tree, x), axis=1)
# Once you've made the predictions, calculate the classification error
return (data[target] != np.array(prediction)).values.sum() / float(len(data))
evaluate_classification_error(small_data_decision_tree, test_data)
evaluate_classification_error(small_data_decision_tree, train_data)
Explanation: Evaluating the tree
Now, we will write a function to evaluate a decision tree by computing the classification error of the tree on the given dataset.
Again, recall that the classification error is defined as follows:
$$
\mbox{classification error} = \frac{\mbox{# mistakes}}{\mbox{# all data points}}
$$
The function called evaluate_classification_error takes in as input:
1. tree (as described above)
2. data (an SFrame)
The function does not change because of adding data point weights.
End of explanation
# Assign weights
example_data_weights = np.array([1.] * 10 + [0.]*(len(train_data) - 20) + [1.] * 10)
# Train a weighted decision tree model.
small_data_decision_tree_subset_20 = weighted_decision_tree_create(train_data, features, target,
example_data_weights, max_depth=2)
Explanation: Example: Training a weighted decision tree
To build intuition on how weighted data points affect the tree being built, consider the following:
Suppose we only care about making good predictions for the first 10 and last 10 items in train_data, we assign weights:
* 1 to the last 10 items
* 1 to the first 10 items
* and 0 to the rest.
Let us fit a weighted decision tree with max_depth = 2.
End of explanation
subset_20 = train_data.head(10).append(train_data.tail(10))
evaluate_classification_error(small_data_decision_tree_subset_20, subset_20)
Explanation: Now, we will compute the classification error on the subset_20, i.e. the subset of data points whose weight is 1 (namely the first and last 10 data points).
End of explanation
evaluate_classification_error(small_data_decision_tree_subset_20, train_data)
Explanation: Now, let us compare the classification error of the model small_data_decision_tree_subset_20 on the entire test set train_data:
End of explanation
# Assign weights
sth_example_data_weights = np.array([1.] * 10 + [1.] * 10)
# Train a weighted decision tree model.
sth_test_model = weighted_decision_tree_create(subset_20, features, target,
sth_example_data_weights, max_depth=2)
small_data_decision_tree_subset_20
sth_test_model
Explanation: The model small_data_decision_tree_subset_20 performs a lot better on subset_20 than on train_data.
So, what does this mean?
* The points with higher weights are the ones that are more important during the training process of the weighted decision tree.
* The points with zero weights are basically ignored during training.
Quiz Question: Will you get the same model as small_data_decision_tree_subset_20 if you trained a decision tree with only the 20 data points with non-zero weights from the set of points in subset_20?
Yes
End of explanation
from math import log
from math import exp
def adaboost_with_tree_stumps(data, features, target, num_tree_stumps):
# start with unweighted data
alpha = np.array([1.]*len(data))
weights = []
tree_stumps = []
target_values = data[target]
for t in xrange(num_tree_stumps):
print '====================================================='
print 'Adaboost Iteration %d' % t
print '====================================================='
# Learn a weighted decision tree stump. Use max_depth=1
tree_stump = weighted_decision_tree_create(data, features, target, data_weights=alpha, max_depth=1)
tree_stumps.append(tree_stump)
# Make predictions
predictions = data.apply(lambda x: classify(tree_stump, x), axis=1)
# Produce a Boolean array indicating whether
# each data point was correctly classified
is_correct = predictions == target_values
is_wrong = predictions != target_values
# Compute weighted error
# YOUR CODE HERE
weighted_error = np.sum(np.array(is_wrong) * alpha) * 1. / np.sum(alpha)
# Compute model coefficient using weighted error
# YOUR CODE HERE
weight = 1. / 2 * log((1 - weighted_error) * 1. / (weighted_error))
weights.append(weight)
# Adjust weights on data point
adjustment = is_correct.apply(lambda is_correct : exp(-weight) if is_correct else exp(weight))
# Scale alpha by multiplying by adjustment
# Then normalize data points weights
## YOUR CODE HERE
alpha = alpha * np.array(adjustment)
alpha = alpha / np.sum(alpha)
return weights, tree_stumps
Explanation: Implementing your own Adaboost (on decision stumps)
Now that we have a weighted decision tree working, it takes only a bit of work to implement Adaboost. For the sake of simplicity, let us stick with decision tree stumps by training trees with max_depth=1.
Recall from the lecture the procedure for Adaboost:
1. Start with unweighted data with $\alpha_j = 1$
2. For t = 1,...T:
* Learn $f_t(x)$ with data weights $\alpha_j$
* Compute coefficient $\hat{w}t$:
$$\hat{w}_t = \frac{1}{2}\ln{\left(\frac{1- \mbox{E}(\mathbf{\alpha}, \mathbf{\hat{y}})}{\mbox{E}(\mathbf{\alpha}, \mathbf{\hat{y}})}\right)}$$
* Re-compute weights $\alpha_j$:
$$\alpha_j \gets \begin{cases}
\alpha_j \exp{(-\hat{w}_t)} & \text{ if }f_t(x_j) = y_j\
\alpha_j \exp{(\hat{w}_t)} & \text{ if }f_t(x_j) \neq y_j
\end{cases}$$
* Normalize weights $\alpha_j$:
$$\alpha_j \gets \frac{\alpha_j}{\sum{i=1}^{N}{\alpha_i}} $$
Complete the skeleton for the following code to implement adaboost_with_tree_stumps. Fill in the places with YOUR CODE HERE.
End of explanation
stump_weights, tree_stumps = adaboost_with_tree_stumps(train_data, features, target, num_tree_stumps=2)
def print_stump(tree):
split_name = tree['splitting_feature'] # split_name is something like 'term. 36 months'
if split_name is None:
print "(leaf, label: %s)" % tree['prediction']
return None
split_feature, split_value = split_name.split('_')
print ' root'
print ' |---------------|----------------|'
print ' | |'
print ' | |'
print ' | |'
print ' [{0} == 0]{1}[{0} == 1] '.format(split_name, ' '*(27-len(split_name)))
print ' | |'
print ' | |'
print ' | |'
print ' (%s) (%s)' \
% (('leaf, label: ' + str(tree['left']['prediction']) if tree['left']['is_leaf'] else 'subtree'),
('leaf, label: ' + str(tree['right']['prediction']) if tree['right']['is_leaf'] else 'subtree'))
Explanation: Checking your Adaboost code
Train an ensemble of two tree stumps and see which features those stumps split on. We will run the algorithm with the following parameters:
* train_data
* features
* target
* num_tree_stumps = 2
End of explanation
print_stump(tree_stumps[0])
Explanation: Here is what the first stump looks like:
End of explanation
print_stump(tree_stumps[1])
print stump_weights
Explanation: Here is what the next stump looks like:
End of explanation
stump_weights, tree_stumps = adaboost_with_tree_stumps(train_data, features,
target, num_tree_stumps=10)
Explanation: If your Adaboost is correctly implemented, the following things should be true:
tree_stumps[0] should split on term. 36 months with the prediction -1 on the left and +1 on the right.
tree_stumps[1] should split on grade.A with the prediction -1 on the left and +1 on the right.
Weights should be approximately [0.158, 0.177]
Reminders
- Stump weights ($\mathbf{\hat{w}}$) and data point weights ($\mathbf{\alpha}$) are two different concepts.
- Stump weights ($\mathbf{\hat{w}}$) tell you how important each stump is while making predictions with the entire boosted ensemble.
- Data point weights ($\mathbf{\alpha}$) tell you how important each data point is while training a decision stump.
Training a boosted ensemble of 10 stumps
Let us train an ensemble of 10 decision tree stumps with Adaboost. We run the adaboost_with_tree_stumps function with the following parameters:
* train_data
* features
* target
* num_tree_stumps = 10
End of explanation
def predict_adaboost(stump_weights, tree_stumps, data):
scores = np.array([0.]*len(data))
for i, tree_stump in enumerate(tree_stumps):
predictions = data.apply(lambda x: classify(tree_stump, x), axis=1)
# Accumulate predictions on scores array
# YOUR CODE HERE
scores = scores + stump_weights[i] * np.array(predictions)
# return the prediction
return np.array(1 * (scores > 0) + (-1) * (scores <= 0))
traindata_predictions = predict_adaboost(stump_weights, tree_stumps, train_data)
train_accuracy = np.sum(np.array(train_data[target]) == traindata_predictions) / float(len(traindata_predictions))
print 'training data Accuracy of 10-component ensemble = %s' % train_accuracy
predictions = predict_adaboost(stump_weights, tree_stumps, test_data)
accuracy = np.sum(np.array(test_data[target]) == predictions) / float(len(predictions))
print 'test data Accuracy of 10-component ensemble = %s' % accuracy
Explanation: Making predictions
Recall from the lecture that in order to make predictions, we use the following formula:
$$
\hat{y} = sign\left(\sum_{t=1}^T \hat{w}_t f_t(x)\right)
$$
We need to do the following things:
- Compute the predictions $f_t(x)$ using the $t$-th decision tree
- Compute $\hat{w}_t f_t(x)$ by multiplying the stump_weights with the predictions $f_t(x)$ from the decision trees
- Sum the weighted predictions over each stump in the ensemble.
Complete the following skeleton for making predictions:
End of explanation
stump_weights
plt.plot(stump_weights)
plt.show()
Explanation: Now, let us take a quick look what the stump_weights look like at the end of each iteration of the 10-stump ensemble:
End of explanation
# this may take a while...
stump_weights, tree_stumps = adaboost_with_tree_stumps(train_data,
features, target, num_tree_stumps=30)
Explanation: Quiz Question: Are the weights monotonically decreasing, monotonically increasing, or neither?
Neither
Reminder: Stump weights ($\mathbf{\hat{w}}$) tell you how important each stump is while making predictions with the entire boosted ensemble.
Performance plots
In this section, we will try to reproduce some of the performance plots dicussed in the lecture.
How does accuracy change with adding stumps to the ensemble?
We will now train an ensemble with:
* train_data
* features
* target
* num_tree_stumps = 30
Once we are done with this, we will then do the following:
* Compute the classification error at the end of each iteration.
* Plot a curve of classification error vs iteration.
First, lets train the model.
End of explanation
error_all = []
for n in xrange(1, 31):
predictions = predict_adaboost(stump_weights[:n], tree_stumps[:n], train_data)
error = np.sum(np.array(train_data[target]) != predictions) / float(len(predictions))
error_all.append(error)
print "Iteration %s, training error = %s" % (n, error_all[n-1])
Explanation: Computing training error at the end of each iteration
Now, we will compute the classification error on the train_data and see how it is reduced as trees are added.
End of explanation
plt.rcParams['figure.figsize'] = 7, 5
plt.plot(range(1,31), error_all, '-', linewidth=4.0, label='Training error')
plt.title('Performance of Adaboost ensemble')
plt.xlabel('# of iterations')
plt.ylabel('Classification error')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size': 16})
Explanation: Visualizing training error vs number of iterations
We have provided you with a simple code snippet that plots classification error with the number of iterations.
End of explanation
test_error_all = []
for n in xrange(1, 31):
predictions = predict_adaboost(stump_weights[:n], tree_stumps[:n], test_data)
error = np.sum(np.array(test_data[target]) != predictions) / float(len(predictions))
test_error_all.append(error)
print "Iteration %s, test error = %s" % (n, test_error_all[n-1])
Explanation: Quiz Question: Which of the following best describes a general trend in accuracy as we add more and more components? Answer based on the 30 components learned so far.
Training error goes down monotonically, i.e. the training error reduces with each iteration but never increases.
Training error goes down in general, with some ups and downs in the middle.
Training error goes up in general, with some ups and downs in the middle.
Training error goes down in the beginning, achieves the best error, and then goes up sharply.
None of the above
Evaluation on the test data
Performing well on the training data is cheating, so lets make sure it works on the test_data as well. Here, we will compute the classification error on the test_data at the end of each iteration.
End of explanation
plt.rcParams['figure.figsize'] = 7, 5
plt.plot(range(1,31), error_all, '-', linewidth=4.0, label='Training error')
plt.plot(range(1,31), test_error_all, '-', linewidth=4.0, label='Test error')
plt.title('Performance of Adaboost ensemble')
plt.xlabel('# of iterations')
plt.ylabel('Classification error')
plt.rcParams.update({'font.size': 16})
plt.legend(loc='best', prop={'size':15})
plt.tight_layout()
Explanation: Visualize both the training and test errors
Now, let us plot the training & test error with the number of iterations.
End of explanation |
4,924 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Wayne H Nixalo - 13 June 2017
Practical Deep Learning I
Lesson 5 - RNNs, NLP
Code along of char-rnn.ipynb
Step1: Setup
We haven't really looked into the detail of how this works yet - so this is provided for self-study for those who're interested. We'll look at it closely next week.
Step2: ^ This allows us to take the text & convert into a list of numbers, where the number represents the index in which the char appears in the unique-char list.
Step3: Preprocess and create model
Step4: In Lesson 6
Step5: Train | Python Code:
import theano
%matplotlib inline
import os, sys
sys.path.insert(1, os.path.join('utils'))
import utils; reload(utils)
from utils import *
from __future__ import print_function, division
from keras.layers import TimeDistributed, Activation
# https://keras.io/layers/wrappers/
# [Doc:TimeDistributed] this wrapper allows to apply a layer to every temporal slice of an input
# https://keras.io/activations/
# [Doc:Activation] activations can be used through an Activation layer
from numpy.random import choice
Explanation: Wayne H Nixalo - 13 June 2017
Practical Deep Learning I
Lesson 5 - RNNs, NLP
Code along of char-rnn.ipynb
End of explanation
path = get_file('nietzsche.txt', origin='https://s3.amazonaws.com/text-datasets/nietzsche.txt')
text = open(path).read().lower()
print('corpus length:', len(text))
!tail {path} -n25
# unique characters
chars = sorted(list(set(text)))
vocab_size = len(chars) + 1
print('total chars:', vocab_size)
chars.insert(0, "\0")
# the unique characters (in UpLoCase corpus, add 26)
''.join(chars[1:-6])
# create a mapping from char to index in which it appears
char_indices = dict((c, i) for i, c in enumerate(chars))
# create a mapping from index to char
indices_char = dict((i, c) for i, c in enumerate(chars))
Explanation: Setup
We haven't really looked into the detail of how this works yet - so this is provided for self-study for those who're interested. We'll look at it closely next week.
End of explanation
idx = [char_indices[c] for c in text]
idx[:10]
''.join(indices_char[i] for i in idx[:70])
Explanation: ^ This allows us to take the text & convert into a list of numbers, where the number represents the index in which the char appears in the unique-char list.
End of explanation
maxlen = 40
sentences = []
next_chars = []
for i in xrange(0, len(idx) - maxlen + 1):
sentences.append(idx[i: i + maxlen])
next_chars.append(idx[i + 1: i + maxlen + 1])
print('nb sequences:', len(sentences))
sentences = np.concatenate([[np.array(o)] for o in sentences[:-2]])
next_chars = np.concatenate([[np.array(o)] for o in next_chars[:-2]])
sentences.shape, next_chars.shape
n_fac = 24
Explanation: Preprocess and create model
End of explanation
model = Sequential([
Embedding(vocab_size, n_fac, input_length=maxlen),
LSTM(512, input_dim=n_fac, return_sequences=True, dropout_U=0.2, dropout_W=0.2,
consume_less='gpu'),
Dropout(0.2),
LSTM(512, return_sequences=True, dropout_U=0.2, dropout_W=0.2,
consume_less='gpu'),
Dropout(0.2),
TimeDistributed(Dense(vocab_size)),
Activation('softmax')
])
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
Explanation: In Lesson 6: improved: an RNN feeding into an RNN. See lecture at ~ 1:10:00
End of explanation
def print_example():
seed_string="ethics is a basic foundation of all that"
for i in range(320):
x=np.array([char_indices[c] for c in seed_string[-40:]])[np.newaxis,:]
preds = model.predict(x, verbose=0)[0][-1]
preds = preds/np.sum(preds)
next_char = choice(chars, p=preds)
seed_string = seed_string + next_char
print(seed_string)
model.fit(sentences, np.expand_dims(next_chars, -1), batch_size=64, nb_epoch=1)
print_example()
model.fit(sentences, np.expand_dims(next_chars, -1), batch_size=64, nb_epoch=1)
print_example()
model.optimizer.lr=1e-3
model.fit(sentences, np.expand_dims(next_chars, -1), batch_size=128, nb_epoch=1)
print_example()
model.optimizer.lr=1e-4
model.fit(sentences, np.expand_dims(next_chars, -1), batch_size=256, nb_epoch=1)
print_example()
%mkdir -p 'data/char_rnn/'
model.save_weights('data/char_rnn.h5')
model.optimizer.lr=1e-5
model.fit(sentences, np.expand_dims(next_chars, -1), batch_size=256, nb_epoch=1)
print_example()
model.fit(sentences, np.expand_dims(next_chars, -1), batch_size=128, nb_epoch=1)
print_example()
print_example()
model.save_weights('data/char_rnn.h5')
def print_example(seed_string=''):
if not seed_string:
seed_string="ethics is a basic foundation of all that"
for i in range(320):
x=np.array([char_indices[c] for c in seed_string[-40:]])[np.newaxis,:]
preds = model.predict(x, verbose=0)[0][-1]
preds = preds/np.sum(preds)
next_char = choice(chars, p=preds)
seed_string = seed_string + next_char
print(seed_string)
text ='so um first i was afraid i was petrified'
print_example(text)
Explanation: Train
End of explanation |
4,925 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working Dir
Step1: Filename
Step2: Output Prefix
Step3: Others
Step4: Parse CUE
Step5: Covert Files | Python Code:
WORKING_DIR = u"/path/to/folder/to/music"
Explanation: Working Dir: It's supposed that your commands are run under this folder.
End of explanation
FILENAME_PREFIX = u"filename_without_ext"
FILENAME_EXTENSION = u"wav"
Explanation: Filename: This is the filename prefix. For example, if your files are CDImage.wav, CDImage.cue, set FILENAME_PREFIX to CDImage.
Filename Extension: It's the extension part of your audio file. If your audio file is CDImage.wav, set FILENAME_EXTENSION to wav.
End of explanation
OUTPUT_PATTERN = u"/path/to/your/music/<%(prefix)s >%(album)s< (%(suffix)s)>/<<%(discnumber)s->%(tracknumber)s >%(title)s.flac"
Explanation: Output Prefix: The output files will be saved to here.
End of explanation
PICTURE = u"Folder.jpg"
ANSI_ENCODING = "gbk"
FILES_TO_COPY = ["Artworks.tar"]
DELETE_TARGET_DIR = False # If clean the target folder at first
INPUT_EXTRAINFO = u"%s.ini" % FILENAME_PREFIX
INPUT_CUE = u"%s.cue" % FILENAME_PREFIX
INPUT_AUDIO = u"%s.%s" % (FILENAME_PREFIX, FILENAME_EXTENSION)
import sys
sys.path.append(u"/path/to/your/GatesMusicPet/")
from music_pet.meta import *
from music_pet.utils import *
from music_pet.audio import FLAC, init_flacs
import subprocess
import os, sys
cd $WORKING_DIR
global_report = []
NOT_PARSED = 1
NO_TRACK = 2
Explanation: Others: If you have cover picture, set PICTURE to that filename.
If your cue is not utf-8 encoded, set ANSI_ENCODING to the encoding of your cue sheet file.
At the end of conversion, all files defined in FILES_TO_COPY will be simply copied to the output position.
End of explanation
albumList = parse_cue(INPUT_CUE, encoding="U8")
extraMetas = parse_ini(INPUT_EXTRAINFO)
for album in albumList.values():
for extraMeta in extraMetas:
album.update_all_tracks(extraMeta)
albumList.fix_album_names()
flacs = []
for album in albumList.values():
flacs = init_flacs(album, OUTPUT_PATTERN)
for flac in flacs:
flac.set_input_file(u"%s/%s" % (
WORKING_DIR, filename_safe(flac.get_tag(u"original_file"))))
flac.set_next_start_time_from_album(album)
flac.cover_picture = PICTURE
for l in album.detail():
print(l)
commands = []
tmpified_files = {}
for flac in flacs:
b_is_wav = flac.get_tag(u"@input_fullpath").endswith(u".wav")
b_tempified = flac.get_tag(u"@input_fullpath") in tmpified_files
if not b_is_wav and not b_tempified:
commands.append(flac.command_build_tempwav(memoize=tmpified_files))
commands.append(flac.command())
commands.append(command_copy_to([PICTURE] + FILES_TO_COPY, parent_folder(flac.get_tag(u"@output_fullpath"))))
if not b_is_wav and not b_tempified:
commands.append(flac.command_clear_tempwav())
flac.create_target_dir()
for cmd in commands:
print(cmd)
print(u"")
Explanation: Parse CUE
End of explanation
cd $WORKING_DIR
for cmd in commands:
print(u"Executing:\n%s\n\n" % cmd)
try:
p = subprocess.check_output(cmd,
shell=True,
)
except subprocess.CalledProcessError as ex:
p = u"Process received an error! code=%s, output=%s" % (ex.returncode, ex.output)
global_report.append((3, u"Process Error, code=%s" % ex.returncode, cmd))
print(p)
print(u"\n\n")
for error in global_report:
print(u"%s\n%s\n\n" % (error[1], error[2]))
Explanation: Covert Files
End of explanation |
4,926 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Item cold-start
Step1: Let's examine the data
Step2: The training and test set are divided chronologically
Step3: As a means of sanity checking, let's calculate the model's AUC on the training set first. If it's reasonably high, we can be sure that the model is not doing anything stupid and is fitting the training data well.
Step4: Fantastic, the model is fitting the training set well. But what about the test set?
Step5: This is terrible
Step6: A hybrid model
We can do much better by employing LightFM's hybrid model capabilities. The StackExchange data comes with content information in the form of tags users apply to their questions
Step7: We can use these features (instead of an identity feature matrix like in a pure CF model) to estimate a model which will generalize better to unseen examples
Step8: As before, let's sanity check the model on the training set.
Step9: Note that the training set AUC is lower than in a pure CF model. This is fine
Step10: This is as expected | Python Code:
import numpy as np
from lightfm.datasets import fetch_stackexchange
data = fetch_stackexchange('crossvalidated',
test_set_fraction=0.1,
indicator_features=False,
tag_features=True)
train = data['train']
test = data['test']
Explanation: Item cold-start: recommending StackExchange questions
In this example we'll use the StackExchange dataset to explore recommendations under item-cold start. Data dumps from the StackExchange network are available at https://archive.org/details/stackexchange, and we'll use one of them --- for stats.stackexchange.com --- here.
The consists of users answering questions: in the user-item interaction matrix, each user is a row, and each question is a column. Based on which users answered which questions in the training set, we'll try to recommend new questions in the training set.
Let's start by loading the data. We'll use the datasets module.
End of explanation
print('The dataset has %s users and %s items, '
'with %s interactions in the test and %s interactions in the training set.'
% (train.shape[0], train.shape[1], test.getnnz(), train.getnnz()))
Explanation: Let's examine the data:
End of explanation
# Import the model
from lightfm import LightFM
# Set the number of threads; you can increase this
# ify you have more physical cores available.
NUM_THREADS = 2
NUM_COMPONENTS = 30
NUM_EPOCHS = 3
ITEM_ALPHA = 1e-6
# Let's fit a WARP model: these generally have the best performance.
model = LightFM(loss='warp',
item_alpha=ITEM_ALPHA,
no_components=NUM_COMPONENTS)
# Run 3 epochs and time it.
%time model = model.fit(train, epochs=NUM_EPOCHS, num_threads=NUM_THREADS)
Explanation: The training and test set are divided chronologically: the test set contains the 10% of interactions that happened after the 90% in the training set. This means that many of the questions in the test set have no interactions. This is an accurate description of a questions answering system: it is most important to recommend questions that have not yet been answered to the expert users who can answer them.
A pure collaborative filtering model
This is clearly a cold-start scenario, and so we can expect a traditional collaborative filtering model to do very poorly. Let's check if that's the case:
End of explanation
# Import the evaluation routines
from lightfm.evaluation import auc_score
# Compute and print the AUC score
train_auc = auc_score(model, train, num_threads=NUM_THREADS).mean()
print('Collaborative filtering train AUC: %s' % train_auc)
Explanation: As a means of sanity checking, let's calculate the model's AUC on the training set first. If it's reasonably high, we can be sure that the model is not doing anything stupid and is fitting the training data well.
End of explanation
# We pass in the train interactions to exclude them from predictions.
# This is to simulate a recommender system where we do not
# re-recommend things the user has already interacted with in the train
# set.
test_auc = auc_score(model, test, train_interactions=train, num_threads=NUM_THREADS).mean()
print('Collaborative filtering test AUC: %s' % test_auc)
Explanation: Fantastic, the model is fitting the training set well. But what about the test set?
End of explanation
# Set biases to zero
model.item_biases *= 0.0
test_auc = auc_score(model, test, train_interactions=train, num_threads=NUM_THREADS).mean()
print('Collaborative filtering test AUC: %s' % test_auc)
Explanation: This is terrible: we do worse than random! This is not very surprising: as there is no training data for the majority of the test questions, the model cannot compute reasonable representations of the test set items.
The fact that we score them lower than other items (AUC < 0.5) is due to estimated per-item biases, which can be confirmed by setting them to zero and re-evaluating the model.
End of explanation
item_features = data['item_features']
tag_labels = data['item_feature_labels']
print('There are %s distinct tags, with values like %s.' % (item_features.shape[1], tag_labels[:3].tolist()))
Explanation: A hybrid model
We can do much better by employing LightFM's hybrid model capabilities. The StackExchange data comes with content information in the form of tags users apply to their questions:
End of explanation
# Define a new model instance
model = LightFM(loss='warp',
item_alpha=ITEM_ALPHA,
no_components=NUM_COMPONENTS)
# Fit the hybrid model. Note that this time, we pass
# in the item features matrix.
model = model.fit(train,
item_features=item_features,
epochs=NUM_EPOCHS,
num_threads=NUM_THREADS)
Explanation: We can use these features (instead of an identity feature matrix like in a pure CF model) to estimate a model which will generalize better to unseen examples: it will simply use its representations of item features to infer representations of previously unseen questions.
Let's go ahead and fit a model of this type.
End of explanation
# Don't forget the pass in the item features again!
train_auc = auc_score(model,
train,
item_features=item_features,
num_threads=NUM_THREADS).mean()
print('Hybrid training set AUC: %s' % train_auc)
Explanation: As before, let's sanity check the model on the training set.
End of explanation
test_auc = auc_score(model,
test,
train_interactions=train,
item_features=item_features,
num_threads=NUM_THREADS).mean()
print('Hybrid test set AUC: %s' % test_auc)
Explanation: Note that the training set AUC is lower than in a pure CF model. This is fine: by using a lower-rank item feature matrix, we have effectively regularized the model, giving it less freedom to fit the training data.
Despite this the model does much better on the test set:
End of explanation
def get_similar_tags(model, tag_id):
# Define similarity as the cosine of the angle
# between the tag latent vectors
# Normalize the vectors to unit length
tag_embeddings = (model.item_embeddings.T
/ np.linalg.norm(model.item_embeddings, axis=1)).T
query_embedding = tag_embeddings[tag_id]
similarity = np.dot(tag_embeddings, query_embedding)
most_similar = np.argsort(-similarity)[1:4]
return most_similar
for tag in (u'bayesian', u'regression', u'survival'):
tag_id = tag_labels.tolist().index(tag)
print('Most similar tags for %s: %s' % (tag_labels[tag_id],
tag_labels[get_similar_tags(model, tag_id)]))
Explanation: This is as expected: because items in the test set share tags with items in the training set, we can provide better test set recommendations by using the tag representations learned from training.
Bonus: tag embeddings
One of the nice properties of the hybrid model is that the estimated tag embeddings capture semantic characteristics of the tags. Like the word2vec model, we can use this property to explore semantic tag similarity:
End of explanation |
4,927 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Report03 - Nathan Yee
This notebook contains report03 for computational baysian statistics fall 2016
MIT License
Step2: The sock problem
Created by Yuzhong Huang
There are two drawers of socks. The first drawer has 40 white socks and 10 black socks; the second drawer has 20 white socks and 30 black socks. We randomly get 2 socks from a drawer, and it turns out to be a pair(same color) but we don't know the color of these socks. What is the chance that we picked the first drawer.
To make calculating our likelihood easier, we start by defining a multiply function. The function is written in a functional way primarily for fun.
Step4: Next we define a drawer suite. This suite will allow us to take n socks up to the least number of socks in a drawer. To make our likelihood function simpler, we ignore the case where we take 11 black socks and that only drawer 2 is possible.
Step5: Next, define our hypotheses and create the drawer Suite.
Step6: Next, update the drawers by taking two matching socks.
Step7: It seems that the drawer with many of a single sock (40 white 10 black) is more likely after the update. To confirm this suspicion, let's restart the problem by taking 5 pairs of socks.
Step8: We see that after we take 5 pairs of socks, the probability of the socks coming from drawer 1 is 80.6%. We can now conclude that the drawer with a more extreme numbers of socks is more likely be chosen if we are updating with matching color socks.
Chess-playing twins
Allen Downey
Two identical twins are members of my chess club, but they never show up on the same day; in fact, they strictly alternate the days they show up. I can't tell them apart except that one is a better player than the other
Step9: Now we update our hypotheses with us winning the first day. We have a 40% chance of winning against Avery and a 70% chance of winning against Blake.
Step10: At this point in time, there is only a 36% chance that we play Avery the first day while a 64% chance that we played Blake the first day.
However, let's see what happens when we update with a loss.
Step12: Interesting. Now there is a 53% chance that we played Avery then Blake and a 47% chance that we played Blake then Avery.
Who saw that movie?
Nathan Yee
Every year the MPAA (Motion Picture Association of America) publishes a report about theatrical market statistics. Included in the report, are both the gender and the ethnicity share of the top 5 most grossing films. If a randomly selected person in the United States went to Pixar's "Inside Out", what is the probability that they are both female and Asian?
Data
Step13: Next we make our hypotheses and input them as tuples into the Movie class.
Step14: We decided that we are picking a random person in the United states. So, we can use population demographics of the United States as an informed prior. We will assume that the United States is 50% male and 50% female. Population percent is defined in the order which we enumerate ethnicities.
Step15: Next update with the two movies
Step16: Given that a random person has seen Inside Out, the probability that the person is both female and Asian is .58%. Interestingly, when we update our hypotheses with our data, the the chance that the randomly selected person is caucasian goes up to 87%. It seems that our model just increases the chance that the randomly selected person is caucasian after seeing a movie.
Validation
Step17: Parking meter theft
From DASL(http
Step18: First we need to normalize the CON (contractor) collections by the amount gathered by the CITY. This will give us a ratio of contractor collections to city collections. If we just use the raw contractor collections, fluctuations throughout the months could mislead us.
Step19: Next, lets see what the means of the RATIO data compare between the general contractors and BRINK.
Step21: We see that for a dollar gathered by the city, general contractors report 244.7 dollars while BRINK only reports 230 dollars.
Now, we will fit the data to a Normal class to compute the likelihood of a sameple from the normal distribution. This is a similar process to what we did in the improved reading ability problem.
Step22: Next, we need to calculate a marginal distribution for both brink and general contractors. To get the marginal distribution of the general contractors, start by generating a bunch of prior distributions for mu and sigma. These will be generated uniformly.
Step23: Next, use itertools.product to enumerate all pairs of mu and sigma.
Step24: Next we will plot the probability of each mu-sigma pair on a contour plot.
Step25: Next, extract the marginal distribution of mu from general.
Step26: And the marginal distribution of sigma from the general.
Step27: Next, we will run this again for BRINK and see what the difference is between the group. This will give us insight into whether or not Brink employee's are stealing parking money from the city.
First use the same range of mus and sigmas calcualte the marginal distributions of brink.
Step28: Plot the mus and sigmas on a contour plot to see what is going on.
Step29: Extract the marginal distributions of mu from brink.
Step30: Extract the marginal distributions sigma from brink
Step31: From here, we want to compare the two distributions. To do this, we will start by taking the difference between the distributions.
Step32: From here we can calculate the probability that money was stolen from the city.
Step33: So we can calculate that the probability money was stolen from the city is 93.9%
Next, we want to calculate how much money was stolen from the city. We first need to calculate how much money the city collected during Brink times. Then we can multiply this times our pmf_diff to get a probability distribution of potential stolen money.
Step34: Above we see a plot of stolen money in millions. We have also calculated a credible interval that tells us that there is a 50% chance that Brink stole between 1.4 to 3.6 million dollars.
In pursuit of more evidence, we find the probability that the standard deviation in the Brink collections is higher than that of the general contractors. | Python Code:
from __future__ import print_function, division
% matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import math
import numpy as np
from thinkbayes2 import Pmf, Cdf, Suite, Joint
import thinkplot
Explanation: Report03 - Nathan Yee
This notebook contains report03 for computational baysian statistics fall 2016
MIT License: https://opensource.org/licenses/MIT
End of explanation
from functools import reduce
import operator
def multiply(items):
multiply takes a list of numbers, multiplies all of them, and returns the result
Args:
items (list): The list of numbers
Return:
the items multiplied together
return reduce(operator.mul, items, 1)
Explanation: The sock problem
Created by Yuzhong Huang
There are two drawers of socks. The first drawer has 40 white socks and 10 black socks; the second drawer has 20 white socks and 30 black socks. We randomly get 2 socks from a drawer, and it turns out to be a pair(same color) but we don't know the color of these socks. What is the chance that we picked the first drawer.
To make calculating our likelihood easier, we start by defining a multiply function. The function is written in a functional way primarily for fun.
End of explanation
class Drawers(Suite):
def Likelihood(self, data, hypo):
Likelihood returns the likelihood given a bayesian update
consisting of a particular hypothesis and new data. In the
case of our drawer problem, the probabilities change with the
number of pairs we take (without replacement) so we we start
by defining lists for each color sock in each drawer.
Args:
data (int): The number of socks we take
hypo (str): The hypothesis we are updating
Return:
the likelihood for a hypothesis
drawer1W = []
drawer1B = []
drawer2W = []
drawer2B = []
for i in range(data):
drawer1W.append(40-i)
drawer1B.append(10-i)
drawer2W.append(20-i)
drawer2B.append(30-i)
if hypo == 'drawer1':
return multiply(drawer1W)+multiply(drawer1B)
if hypo == 'drawer2':
return multiply(drawer2W)+multiply(drawer2B)
Explanation: Next we define a drawer suite. This suite will allow us to take n socks up to the least number of socks in a drawer. To make our likelihood function simpler, we ignore the case where we take 11 black socks and that only drawer 2 is possible.
End of explanation
hypos = ['drawer1','drawer2']
drawers = Drawers(hypos)
drawers.Print()
Explanation: Next, define our hypotheses and create the drawer Suite.
End of explanation
drawers.Update(2)
drawers.Print()
Explanation: Next, update the drawers by taking two matching socks.
End of explanation
hypos = ['drawer1','drawer2']
drawers5 = Drawers(hypos)
drawers5.Update(5)
drawers5.Print()
Explanation: It seems that the drawer with many of a single sock (40 white 10 black) is more likely after the update. To confirm this suspicion, let's restart the problem by taking 5 pairs of socks.
End of explanation
twins = Pmf()
twins['AB'] = 1
twins['BA'] = 1
twins.Normalize()
twins.Print()
Explanation: We see that after we take 5 pairs of socks, the probability of the socks coming from drawer 1 is 80.6%. We can now conclude that the drawer with a more extreme numbers of socks is more likely be chosen if we are updating with matching color socks.
Chess-playing twins
Allen Downey
Two identical twins are members of my chess club, but they never show up on the same day; in fact, they strictly alternate the days they show up. I can't tell them apart except that one is a better player than the other: Avery beats me 60% of the time and I beat Blake 70% of the time. If I play one twin on Monday and win, and the other twin on Tuesday and lose, which twin did I play on which day?
To solve this problem, we first need to create our hypothesis. In this case, we have:
hypo1: Avery Monday, Blake Tuesday
hypo2: Blake Monday, Avery Tuesday
We will abreviate Avery to A and Blake to B.
End of explanation
#win day 1
twins['AB'] *= .4
twins['BA'] *= .7
twins.Normalize()
twins.Print()
Explanation: Now we update our hypotheses with us winning the first day. We have a 40% chance of winning against Avery and a 70% chance of winning against Blake.
End of explanation
#lose day 2
twins['AB'] *= .6
twins['BA'] *= .3
twins.Normalize()
twins.Print()
Explanation: At this point in time, there is only a 36% chance that we play Avery the first day while a 64% chance that we played Blake the first day.
However, let's see what happens when we update with a loss.
End of explanation
class Movie(Suite):
def Likelihood(self, data, hypo):
Likelihood returns the likelihood given a bayesian update consisting of a particular
hypothesis and data. In this case, we need to calculate the probability of seeing a
gender seeing a movie. Then we calculat the probability that an ethnicity saw a
movie. Finally we multiply the two to calculate the a person of a gender and
ethnicity saw a movie.
Args:
data (str): The title of the movie
hypo (str): The hypothesis we are updating
Return:
the likelihood for a hypothesis
movie = data
gender = hypo[0]
ethnicity = hypo[1]
# first calculate update based on gender
movies_gender = {'Furious 7' : {0:56, 1:44},
'Inside Out' : {0:46, 1:54},
'Avengers: Age of Ultron' : {0:58, 1:42},
'Star Wars: The Force Awakens' : {0:58, 1:42},
'Jurassic World' : {0:55, 1:45}
}
like_gender = movies_gender[movie][gender]
# second calculate update based on ethnicity
movies_ethnicity = {'Furious 7' : {0:40, 1:22, 2:25, 3:8 , 4:5},
'Inside Out' : {0:54, 1:15, 2:16, 3:9 , 4:4},
'Avengers: Age of Ultron' : {0:50, 1:16, 2:20, 3:10, 4:5},
'Star Wars: The Force Awakens' : {0:61, 1:12, 2:15, 3:7 , 4:5},
'Jurassic World' : {0:39, 1:16, 2:19, 3:11, 4:6}
}
like_ethnicity = movies_ethnicity[movie][ethnicity]
# multiply the two together and return
return like_gender * like_ethnicity
Explanation: Interesting. Now there is a 53% chance that we played Avery then Blake and a 47% chance that we played Blake then Avery.
Who saw that movie?
Nathan Yee
Every year the MPAA (Motion Picture Association of America) publishes a report about theatrical market statistics. Included in the report, are both the gender and the ethnicity share of the top 5 most grossing films. If a randomly selected person in the United States went to Pixar's "Inside Out", what is the probability that they are both female and Asian?
Data:
| Gender | Male (%) | Female (%) |
| :-------------------------- | :------- | :---------- |
| Furious 7 | 56 | 44 |
| Inside Out | 46 | 54 |
| Avengers: Age of Ultron | 58 | 42 |
| Star Wars: The Force Awakens| 58 | 42 |
| Jurassic World | 55 | 45 |
| Ethnicity | Caucasian (%) | African-American (%) | Hispanic (%) | Asian (%) | Other (%) |
| :-------------------------- | :------------ | :------------------- | :----------- | :-------- | :-------- |
| Furious 7 | 40 | 22 | 25 | 8 | 5 |
| Inside Out | 54 | 15 | 16 | 9 | 5 |
| Avengers: Age of Ultron | 50 | 16 | 20 | 10 | 5 |
| Star Wars: The Force Awakens| 61 | 12 | 15 | 7 | 5 |
| Jurassic World | 39 | 16 | 19 | 11 | 6 |
Since we are picking a random person in the United States, we can use demographics of the United States as an informed prior.
| Demographic | Caucasian (%) | African-American (%) | Hispanic (%) | Asian (%) | Other (%) |
| :-------------------------- | :------------ | :------------------- | :----------- | :-------- | :-------- |
| Population United States | 63.7 | 12.2 | 16.3 | 4.7 | 3.1 |
Note:
Demographic data was gathered from the US Census Bureau. There may be errors within 2% due to rounding. Also note that certian races were combined to fit our previous demographic groupings.
To make writing code easier, we will encoude data in a numerical structure. The first item in the tuple corresponds to gender, the second item in the tuple corresponds to ethnicity.
| Gender | Male | Female |
| :-------------------------- | :--- | :----- |
| Encoding number | 0 | 1 |
| Ethnicity | Caucasian | African-American | Hispanic | Asian | Other |
| :-------------------------- | :-------- | :--------------- | :------- | :---- | :---- |
| Encoding number | 0 | 1 | 2 | 3 | 4 |
Such that a (female, asian) = (1, 3)
The first piece of code we write will be our Movie class. This version of Suite will have a special likelihood function that takes in a movie, and returns the probability of the gender and the ethnicity.
End of explanation
genders = range(0,2)
ethnicities = range(0,5)
pairs = [(gender, ethnicity) for gender in genders for ethnicity in ethnicities]
movie = Movie(pairs)
Explanation: Next we make our hypotheses and input them as tuples into the Movie class.
End of explanation
population_percent = [63.7, 12.2, 16.3, 4.7, 3.1, 63.7, 12.2, 16.3, 4.7, 3.1]
for i in range(len(population_percent)):
movie[pairs[i]] = population_percent[i]
movie.Normalize()
movie.Print()
Explanation: We decided that we are picking a random person in the United states. So, we can use population demographics of the United States as an informed prior. We will assume that the United States is 50% male and 50% female. Population percent is defined in the order which we enumerate ethnicities.
End of explanation
movie.Update('Inside Out')
movie.Normalize()
movie.Print()
Explanation: Next update with the two movies
End of explanation
total = 0
for pair in pairs:
if pair[0] == 1:
total += movie[pair]
print(total)
Explanation: Given that a random person has seen Inside Out, the probability that the person is both female and Asian is .58%. Interestingly, when we update our hypotheses with our data, the the chance that the randomly selected person is caucasian goes up to 87%. It seems that our model just increases the chance that the randomly selected person is caucasian after seeing a movie.
Validation:
To make ourselves convinced that model is working properly, what happens if we just look at gender data. We know that 54% of people who saw inside out were female. So, if we sum together the female audience, we should get 54%.
End of explanation
import pandas as pd
df = pd.read_csv('parking.csv', skiprows=17, delimiter='\t')
df.head()
Explanation: Parking meter theft
From DASL(http://lib.stat.cmu.edu/DASL/Datafiles/brinkdat.html)
The variable CON in the datafile Parking Meter Theft represents monthly parking meter collections by the principle contractor in New York City from May 1977 to March 1981. In addition to contractor collections, the city made collections from a number of "control" meters close to City Hall. These are recorded under the varia- ble CITY. From May 1978 to April 1980 the contractor was Brink's. In 1983 the city presented evidence in court that Brink's employees has been stealing parking meter moneys - delivering to the city less than the total collections. The court was satisfied that theft has taken place, but the actual amount of shortage was in question. Assume that there was no theft before or after Brink's tenure and estimate the monthly short- age and its 95% confidence limits.
So we are asking three questions. What is the probability that that money has been stolen? What is the probability that the variance of the Brink collections is higher. And how much money was stolen?
This problem is very similar to that of "Improving Reading Ability" by Allen Downey
To do this, we want to calculate First we load our data from the csv file.
End of explanation
df['RATIO'] = df['CON'] / df['CITY']
Explanation: First we need to normalize the CON (contractor) collections by the amount gathered by the CITY. This will give us a ratio of contractor collections to city collections. If we just use the raw contractor collections, fluctuations throughout the months could mislead us.
End of explanation
grouped = df.groupby('BRINK')
for name, group in grouped:
print(name, group.RATIO.mean())
Explanation: Next, lets see what the means of the RATIO data compare between the general contractors and BRINK.
End of explanation
from scipy.stats import norm
class Normal(Suite, Joint):
def Likelihood(self, data, hypo):
data: sequence of test scores
hypo: mu, sigma
mu, sigma = hypo
likes = norm.pdf(data, mu, sigma)
return np.prod(likes)
Explanation: We see that for a dollar gathered by the city, general contractors report 244.7 dollars while BRINK only reports 230 dollars.
Now, we will fit the data to a Normal class to compute the likelihood of a sameple from the normal distribution. This is a similar process to what we did in the improved reading ability problem.
End of explanation
mus = np.linspace(210, 270, 301)
sigmas = np.linspace(10, 65, 301)
Explanation: Next, we need to calculate a marginal distribution for both brink and general contractors. To get the marginal distribution of the general contractors, start by generating a bunch of prior distributions for mu and sigma. These will be generated uniformly.
End of explanation
from itertools import product
general = Normal(product(mus, sigmas))
data = df[df.BRINK==0].RATIO
general.Update(data)
Explanation: Next, use itertools.product to enumerate all pairs of mu and sigma.
End of explanation
thinkplot.Contour(general, pcolor=True)
thinkplot.Config(xlabel='mu', ylabel='sigma')
Explanation: Next we will plot the probability of each mu-sigma pair on a contour plot.
End of explanation
pmf_mu0 = general.Marginal(0)
thinkplot.Pdf(pmf_mu0)
thinkplot.Config(xlabel='mu', ylabel='Pmf')
Explanation: Next, extract the marginal distribution of mu from general.
End of explanation
pmf_sigma0 = general.Marginal(1)
thinkplot.Pdf(pmf_sigma0)
thinkplot.Config(xlabel='sigma', ylabel='Pmf')
Explanation: And the marginal distribution of sigma from the general.
End of explanation
brink = Normal(product(mus, sigmas))
data = df[df.BRINK==1].RATIO
brink.Update(data)
Explanation: Next, we will run this again for BRINK and see what the difference is between the group. This will give us insight into whether or not Brink employee's are stealing parking money from the city.
First use the same range of mus and sigmas calcualte the marginal distributions of brink.
End of explanation
thinkplot.Contour(brink, pcolor=True)
thinkplot.Config(xlabel='mu', ylabel='sigma')
Explanation: Plot the mus and sigmas on a contour plot to see what is going on.
End of explanation
pmf_mu1 = brink.Marginal(0)
thinkplot.Pdf(pmf_mu1)
thinkplot.Config(xlabel='mu', ylabel='Pmf')
Explanation: Extract the marginal distributions of mu from brink.
End of explanation
pmf_sigma1 = brink.Marginal(1)
thinkplot.Pdf(pmf_sigma1)
thinkplot.Config(xlabel='sigma', ylabel='Pmf')
Explanation: Extract the marginal distributions sigma from brink
End of explanation
pmf_diff = pmf_mu1 - pmf_mu0
pmf_diff.Mean()
Explanation: From here, we want to compare the two distributions. To do this, we will start by taking the difference between the distributions.
End of explanation
cdf_diff = pmf_diff.MakeCdf()
thinkplot.Cdf(cdf_diff)
cdf_diff[0]
Explanation: From here we can calculate the probability that money was stolen from the city.
End of explanation
money_city = np.where(df['BRINK']==1, df['CITY'], 0).sum(0)
print((pmf_diff * money_city).CredibleInterval(50))
thinkplot.Pmf(pmf_diff * money_city)
Explanation: So we can calculate that the probability money was stolen from the city is 93.9%
Next, we want to calculate how much money was stolen from the city. We first need to calculate how much money the city collected during Brink times. Then we can multiply this times our pmf_diff to get a probability distribution of potential stolen money.
End of explanation
pmf_sigma1.ProbGreater(pmf_sigma0)
Explanation: Above we see a plot of stolen money in millions. We have also calculated a credible interval that tells us that there is a 50% chance that Brink stole between 1.4 to 3.6 million dollars.
In pursuit of more evidence, we find the probability that the standard deviation in the Brink collections is higher than that of the general contractors.
End of explanation |
4,928 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>**ReadMe**</h1>
An important part in the profitability of a credit card product is the issuer's ability to detect and deny fraud. Purchase fraud can cost as much as 0.10% of purchase volumes, which must be paid for by the issuer. To help prevent fraud is to reduce the cost of purchase fraud to the issuer, so many issuers will spend tremendous analysis resources on detecting and denying fraudulent transactions.
Available in Kaggle is a dataset of purchase transactions with several attributes, including a flag for Fraud. Read more about the dataset here.
We'll start by importing the necessary packages and the dataset.
Step1: <h2>**Data Exploration**</h2>
Alright, now that we have our transcations loaded, let's take a look at the data.
Step2: Looks like the data is all transformed and renamed, probably to anonymize the fields. This will make our work much less interpretible. Oh well, onward!
Step3: Awesome, no missing values. Wish real life was this clean.
Let's see how Amount varies by Fraud / Not Fraud
Step4: Fraudulent transactions are larger, on average, than non fraudulent transactions, despit the much longer tail on non fraudulent transactions.
Okay, next let's see how cyclical fraudulent and non fraudulent transactions are, respecitively.
Step5: Fraud is less cyclical than not fraud. Legitimate transaction volume decreases dramatically twice, presumably at night. This may come in handy later.
Next let's consider both time and amount.
Step6: Yeah that wasn't very useful. Let's look now at the anonymized data.
Step8: Okay, I now have a good sense of which variables might be important in detecting fraud. Note that if the data were not anonymized and transformed, I would also use intuition in this step to choose which variables I expect to "pop".
Now that we analyzed the data, let's move on to building an actual model. Given that the independent variables are all non-null continuous variables and the outcome is binary, this is the perfect opportunity to use XGBoostClassifier. | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib.gridspec as gridspec
import xgboost as xgb
from sklearn.model_selection import train_test_split
transactions = pd.read_csv('creditcard.csv')
Explanation: <h1>**ReadMe**</h1>
An important part in the profitability of a credit card product is the issuer's ability to detect and deny fraud. Purchase fraud can cost as much as 0.10% of purchase volumes, which must be paid for by the issuer. To help prevent fraud is to reduce the cost of purchase fraud to the issuer, so many issuers will spend tremendous analysis resources on detecting and denying fraudulent transactions.
Available in Kaggle is a dataset of purchase transactions with several attributes, including a flag for Fraud. Read more about the dataset here.
We'll start by importing the necessary packages and the dataset.
End of explanation
transactions.head(n=10)
transactions.describe()
Explanation: <h2>**Data Exploration**</h2>
Alright, now that we have our transcations loaded, let's take a look at the data.
End of explanation
transactions.isnull().sum()
Explanation: Looks like the data is all transformed and renamed, probably to anonymize the fields. This will make our work much less interpretible. Oh well, onward!
End of explanation
print('Fraud')
print(transactions.Amount[transactions.Class==1].describe())
print()
print('Not Fraud')
print(transactions.Amount[transactions.Class==0].describe())
print()
f, (ax1, ax2) = plt.subplots(2, 1, sharex=True, figsize=(12,4))
bins = 50
ax1.hist(transactions.Amount[transactions.Class == 1], bins=bins)
ax1.set_title('Fraud')
ax2.hist(transactions.Amount[transactions.Class == 0], bins=bins)
ax2.set_title('Not Fraud')
plt.xlabel('Amount ($)')
plt.ylabel('Number of Transactions')
plt.yscale('log')
plt.show()
Explanation: Awesome, no missing values. Wish real life was this clean.
Let's see how Amount varies by Fraud / Not Fraud
End of explanation
print('Fraud')
print(transactions.Time[transactions.Class==1].describe())
print()
print('Not Fraud')
print(transactions.Time[transactions.Class==0].describe())
print()
f, (ax1, ax2) = plt.subplots(2, 1, sharex=True, figsize=(12,4))
bins = 50
ax1.hist(transactions.Time[transactions.Class == 1], bins=bins)
ax1.set_title('Fraud')
ax2.hist(transactions.Time[transactions.Class == 0], bins=bins)
ax2.set_title('Not Fraud')
plt.xlabel('Time (seconds from first transaction)')
plt.ylabel('Number of Transactions')
plt.yscale('log')
plt.show()
Explanation: Fraudulent transactions are larger, on average, than non fraudulent transactions, despit the much longer tail on non fraudulent transactions.
Okay, next let's see how cyclical fraudulent and non fraudulent transactions are, respecitively.
End of explanation
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax1.scatter(transactions.Time[transactions.Class == 0], transactions.Amount[transactions.Class == 0],\
c = 'b',label='Legit')
ax1.scatter(transactions.Time[transactions.Class == 1], transactions.Amount[transactions.Class == 1],\
c = 'g',label='Fraud')
plt.xlabel('Time (in Seconds)')
plt.ylabel('Amount($)')
plt.legend(loc='upper left');
plt.show()
Explanation: Fraud is less cyclical than not fraud. Legitimate transaction volume decreases dramatically twice, presumably at night. This may come in handy later.
Next let's consider both time and amount.
End of explanation
#Select only the anonymized features.
v_features = transactions.ix[:,1:29].columns
plt.figure(figsize=(12,28*4))
gs = gridspec.GridSpec(28, 1)
for i, cn in enumerate(transactions[v_features]):
ax = plt.subplot(gs[i])
sns.distplot(transactions[cn][transactions.Class == 1], bins=100,label='fraud')
sns.distplot(transactions[cn][transactions.Class == 0], bins=100,label='legit')
ax.set_xlabel('')
ax.set_title('histogram of feature: ' + str(cn))
ax.legend(loc='upper left');
plt.show()
Explanation: Yeah that wasn't very useful. Let's look now at the anonymized data.
End of explanation
# split data into train and test sets
seed = 7
test_size = 0.33
X_train, X_test, y_train, y_test = train_test_split(transactions[v_features], transactions['Class'],
test_size=test_size, random_state=seed)
#train model on train data
model = xgb.XGBClassifier()
model.fit(X_train,y_train)
print(model)
from sklearn.metrics import accuracy_score
# make predictions for test data
y_pred = model.predict(X_test)
predictions = [round(value) for value in y_pred]
# evaluate predictions
accuracy = accuracy_score(y_test, predictions)
print("Accuracy: %.2f%%" % (accuracy * 100.0))
xgb.plot_importance(model)
plt.show()
for row in range(len(model.feature_importances_)):
print(v_features[row],model.feature_importances_[row],'')
def calc_lift(x,y,clf,bins=10):
Takes input arrays and trained SkLearn Classifier and returns a Pandas
DataFrame with the average lift generated by the model in each bin
Parameters
-------------------
x: Numpy array or Pandas Dataframe with shape = [n_samples, n_features]
y: A 1-d Numpy array or Pandas Series with shape = [n_samples]
IMPORTANT: Code is only configured for binary target variable
of 1 for success and 0 for failure
clf: A trained SkLearn classifier object
bins: Number of equal sized buckets to divide observations across
Default value is 10
#Actual Value of y
y_actual = y
#Predicted Probability that y = 1
y_prob = clf.predict_proba(x)
#Predicted Value of Y
y_pred = clf.predict(x)
cols = ['ACTUAL','PROB_POSITIVE','PREDICTED']
data = [y_actual,y_prob[:,1],y_pred]
df = pd.DataFrame(dict(zip(cols,data)))
#Observations where y=1
total_positive_n = df['ACTUAL'].sum()
#Total Observations
total_n = df.index.size
natural_positive_prob = total_positive_n/float(total_n)
#Create Bins where First Bin has Observations with the
#Highest Predicted Probability that y = 1
df['BIN_POSITIVE'] = pd.qcut(df['PROB_POSITIVE'],bins,labels=False)
pos_group_df = df.groupby('BIN_POSITIVE')
#Percentage of Observations in each Bin where y = 1
lift_positive = pos_group_df['ACTUAL'].sum()/pos_group_df['ACTUAL'].count()
lift_index_positive = (lift_positive/natural_positive_prob)*100
#Consolidate Results into Output Dataframe
lift_df = pd.DataFrame({'LIFT_POSITIVE':lift_positive,
'LIFT_POSITIVE_INDEX':lift_index_positive,
'BASELINE_POSITIVE':natural_positive_prob})
return lift_df
lift = calc_lift(X_test,y_test,model,bins=10)
lift
Explanation: Okay, I now have a good sense of which variables might be important in detecting fraud. Note that if the data were not anonymized and transformed, I would also use intuition in this step to choose which variables I expect to "pop".
Now that we analyzed the data, let's move on to building an actual model. Given that the independent variables are all non-null continuous variables and the outcome is binary, this is the perfect opportunity to use XGBoostClassifier.
End of explanation |
4,929 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This is a utility notebook/script that goes through and writes all of the possible combinations of solutions to npz files.
Hyperbolic/Parabolic
Retrograde/Direct
CM Frame/M Frame
Equal Mass Disruptor/Heavy Mass Disruptor
This notebook probably takes 2 minutes or so to run in entirety
Step1: Equal Mass Disruptor Solutions
Step2: Writing Heavy Mass Disruptor Solutions | Python Code:
import numpy as np
import seaborn as sns
from scipy.integrate import odeint
from solve import *
M = 1e1
S = 1e1
Rmin = 25
e = 7 #Eccentricity
r_array = np.array([.2,.3,.4,.5,.6])*Rmin
N_array = np.array([12,18,24,30,36])
steps = 1e3
t = np.linspace(0,.4,steps) #Timescale of 1 billion years
atol=1e-6
rtol=1e-6
gamma = 4.49933e4 #Units ((kpc)^3)/((M_sun^10)(billion_years)^2)
Boolean = np.array([True,False])
Explanation: This is a utility notebook/script that goes through and writes all of the possible combinations of solutions to npz files.
Hyperbolic/Parabolic
Retrograde/Direct
CM Frame/M Frame
Equal Mass Disruptor/Heavy Mass Disruptor
This notebook probably takes 2 minutes or so to run in entirety
End of explanation
for bool1 in Boolean:
CenterOfMass = bool1
for bool2 in Boolean:
Hyperbolic_Approach = bool2
IC = set_IC(Hyperbolic_Approach,M,S)
for bool3 in Boolean:
Retrograde_Orbit = bool3
allic = all_ic(N_array,r_array,IC,Retrograde_Orbit)
sln = solution(t,allic,N_array,atol,rtol,gamma,Hyperbolic_Approach,M,S)
x,dx,y,dy,X,dX,Y,dY = coordinate_solution(sln,steps)
soln = np.array(frame(x,dx,y,dy,X,dX,Y,dY,CenterOfMass,M,S))
if CenterOfMass == False:
if Hyperbolic_Approach == True:
if Retrograde_Orbit == True:
np.savez('Hyp_Ret_M',x = soln[0], dx = soln[1],y = soln[2],dy = soln[3],X = soln[4],dX = soln[5],Y = soln[6],dY = soln[7])
else:
np.savez('Hyp_Dir_M',x = soln[0], dx = soln[1],y = soln[2],dy = soln[3],X = soln[4],dX = soln[5],Y = soln[6],dY = soln[7])
else:
if Retrograde_Orbit == True:
np.savez('Par_Ret_M',x = soln[0], dx = soln[1],y = soln[2],dy = soln[3],X = soln[4],dX = soln[5],Y = soln[6],dY = soln[7])
else:
np.savez('Par_Dir_M',x = soln[0], dx = soln[1],y = soln[2],dy = soln[3],X = soln[4],dX = soln[5],Y = soln[6],dY = soln[7])
else:
if Hyperbolic_Approach == True:
if Retrograde_Orbit == True:
np.savez('Hyp_Ret_Cen',x = soln[0], y = soln[1],X1 = soln[2],Y1 = soln[3],X2 = soln[4],Y2 = soln[5])
else:
np.savez('Hyp_Dir_Cen',x = soln[0], y = soln[1],X1 = soln[2],Y1 = soln[3],X2 = soln[4],Y2 = soln[5])
else:
if Retrograde_Orbit == True:
np.savez('Par_Ret_Cen',x = soln[0], y = soln[1],X1 = soln[2],Y1 = soln[3],X2 = soln[4],Y2 = soln[5])
else:
np.savez('Par_Dir_Cen',x = soln[0], y = soln[1],X1 = soln[2],Y1 = soln[3],X2 = soln[4],Y2 = soln[5])
Explanation: Equal Mass Disruptor Solutions
End of explanation
M = 1e1
S = 3e1
for bool1 in Boolean:
CenterOfMass = bool1
for bool2 in Boolean:
Hyperbolic_Approach = bool2
IC = set_IC(Hyperbolic_Approach,M,S)
for bool3 in Boolean:
Retrograde_Orbit = bool3
allic = all_ic(N_array,r_array,IC,Retrograde_Orbit)
sln = solution(t,allic,N_array,atol,rtol,gamma,Hyperbolic_Approach,M,S)
x,dx,y,dy,X,dX,Y,dY = coordinate_solution(sln,steps)
soln = np.array(frame(x,dx,y,dy,X,dX,Y,dY,CenterOfMass,M,S))
if CenterOfMass == False:
if Hyperbolic_Approach == True:
if Retrograde_Orbit == True:
np.savez('Hyp_Ret_M_H',x = soln[0], dx = soln[1],y = soln[2],dy = soln[3],X = soln[4],dX = soln[5],Y = soln[6],dY = soln[7])
else:
np.savez('Hyp_Dir_M_H',x = soln[0], dx = soln[1],y = soln[2],dy = soln[3],X = soln[4],dX = soln[5],Y = soln[6],dY = soln[7])
else:
if Retrograde_Orbit == True:
np.savez('Par_Ret_M_H',x = soln[0], dx = soln[1],y = soln[2],dy = soln[3],X = soln[4],dX = soln[5],Y = soln[6],dY = soln[7])
else:
np.savez('Par_Dir_M_H',x = soln[0], dx = soln[1],y = soln[2],dy = soln[3],X = soln[4],dX = soln[5],Y = soln[6],dY = soln[7])
else:
if Hyperbolic_Approach == True:
if Retrograde_Orbit == True:
np.savez('Hyp_Ret_Cen_H',x = soln[0], y = soln[1],X1 = soln[2],Y1 = soln[3],X2 = soln[4],Y2 = soln[5])
else:
np.savez('Hyp_Dir_Cen_H',x = soln[0], y = soln[1],X1 = soln[2],Y1 = soln[3],X2 = soln[4],Y2 = soln[5])
else:
if Retrograde_Orbit == True:
np.savez('Par_Ret_Cen_H',x = soln[0], y = soln[1],X1 = soln[2],Y1 = soln[3],X2 = soln[4],Y2 = soln[5])
else:
np.savez('Par_Dir_Cen_H',x = soln[0], y = soln[1],X1 = soln[2],Y1 = soln[3],X2 = soln[4],Y2 = soln[5])
Explanation: Writing Heavy Mass Disruptor Solutions
End of explanation |
4,930 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercises
Step1: Exercise 1
Step2: b. Spearman Rank Correlation
Find the Spearman rank correlation coefficient for the relationship between x and y using the stats.rankdata function and the formula
$$r_S = 1 - \frac{6 \sum_{i=1}^n d_i^2}{n(n^2 - 1)}$$
where $d_i$ is the difference in rank of the ith pair of x and y values.
Step3: Check your results against scipy's Spearman rank function. stats.spearmanr
Step4: Exercise 2
Step5: b. Non-Monotonic Relationships
First, create a series d using the relationship $d=10c^2 - c + 2$. Then, find the Spearman rank rorrelation coefficient of the relationship between c and d.
Step7: Exercise 3
Step8: b. Rolling Spearman Rank Correlation
Repeat the above correlation for the first 60 days in the dataframe as opposed to just a single day. You should get a time series of Spearman rank correlations. From this we can start getting a better sense of how the factor correlates with forward returns.
What we're driving towards is known as an information coefficient. This is a very common way of measuring how predictive a model is. All of this plus much more is automated in our open source alphalens library. In order to see alphalens in action you can check out these resources
Step9: b. Rolling Spearman Rank Correlation
Plot out the rolling correlation as a time series, and compute the mean and standard deviation. | Python Code:
import numpy as np
import pandas as pd
import scipy.stats as stats
import matplotlib.pyplot as plt
import math
Explanation: Exercises: Spearman Rank Correlation
Lecture Link
This exercise notebook refers to this lecture. Please use the lecture for explanations and sample code.
https://www.quantopian.com/lectures#Spearman-Rank-Correlation
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
End of explanation
n = 100
x = np.linspace(1, n, n)
y = x**5
#Your code goes here
Explanation: Exercise 1: Finding Correlations of Non-Linear Relationships
a. Traditional (Pearson) Correlation
Find the correlation coefficient for the relationship between x and y.
End of explanation
#Your code goes here
Explanation: b. Spearman Rank Correlation
Find the Spearman rank correlation coefficient for the relationship between x and y using the stats.rankdata function and the formula
$$r_S = 1 - \frac{6 \sum_{i=1}^n d_i^2}{n(n^2 - 1)}$$
where $d_i$ is the difference in rank of the ith pair of x and y values.
End of explanation
# Your code goes here
Explanation: Check your results against scipy's Spearman rank function. stats.spearmanr
End of explanation
n = 100
a = np.random.normal(0, 1, n)
#Your code goes here
Explanation: Exercise 2: Limitations of Spearman Rank Correlation
a. Lagged Relationships
First, create a series b that is identical to a but lagged one step (b[i] = a[i-1]). Then, find the Spearman rank correlation coefficient of the relationship between a and b.
End of explanation
n = 100
c = np.random.normal(0, 2, n)
#Your code goes here
Explanation: b. Non-Monotonic Relationships
First, create a series d using the relationship $d=10c^2 - c + 2$. Then, find the Spearman rank rorrelation coefficient of the relationship between c and d.
End of explanation
#Pipeline Setup
from quantopian.research import run_pipeline
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.factors import CustomFactor, Returns, RollingLinearRegressionOfReturns
from quantopian.pipeline.classifiers.morningstar import Sector
from quantopian.pipeline.filters import QTradableStocksUS
from time import time
#MyFactor is our custom factor, based off of asset price momentum
class MyFactor(CustomFactor):
Momentum factor
inputs = [USEquityPricing.close]
window_length = 60
def compute(self, today, assets, out, close):
out[:] = close[-1]/close[0]
universe = QTradableStocksUS()
pipe = Pipeline(
columns = {
'MyFactor' : MyFactor(mask=universe),
},
screen=universe
)
start_timer = time()
results = run_pipeline(pipe, '2015-01-01', '2015-06-01')
end_timer = time()
results.fillna(value=0);
print "Time to run pipeline %.2f secs" % (end_timer - start_timer)
my_factor = results['MyFactor']
n = len(my_factor)
asset_list = results.index.levels[1].unique()
prices_df = get_pricing(asset_list, start_date='2015-01-01', end_date='2016-01-01', fields='price')
# Compute 10-day forward returns, then shift the dataframe back by 10
forward_returns_df = prices_df.pct_change(10).shift(-10)
# The first trading day is actually 2015-1-2
single_day_factor_values = my_factor['2015-1-2']
# Because prices are indexed over the total time period, while the factor values dataframe
# has a dynamic universe that excludes hard to trade stocks, each day there may be assets in
# the returns dataframe that are not present in the factor values dataframe. We have to filter down
# as a result.
single_day_forward_returns = forward_returns_df.loc['2015-1-2'][single_day_factor_values.index]
#Your code goes here
Explanation: Exercise 3: Real World Example
a. Factor and Forward Returns
Here we'll define a simple momentum factor (model). To evaluate it we'd need to look at how its predictions correlate with future returns over many days. We'll start by just evaluating the Spearman rank correlation between our factor values and forward returns on just one day.
Compute the Spearman rank correlation between factor values and 10 trading day forward returns on 2015-1-2.
For help on the pipeline API, see this tutorial: https://www.quantopian.com/tutorials/pipeline
End of explanation
rolling_corr = pd.Series(index=None, data=None)
#Your code goes here
Explanation: b. Rolling Spearman Rank Correlation
Repeat the above correlation for the first 60 days in the dataframe as opposed to just a single day. You should get a time series of Spearman rank correlations. From this we can start getting a better sense of how the factor correlates with forward returns.
What we're driving towards is known as an information coefficient. This is a very common way of measuring how predictive a model is. All of this plus much more is automated in our open source alphalens library. In order to see alphalens in action you can check out these resources:
A basic tutorial:
https://www.quantopian.com/tutorials/getting-started#lesson4
An in-depth lecture:
https://www.quantopian.com/lectures/factor-analysis
End of explanation
# Your code goes here
Explanation: b. Rolling Spearman Rank Correlation
Plot out the rolling correlation as a time series, and compute the mean and standard deviation.
End of explanation |
4,931 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced
Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step2: And we'll attach some dummy datasets. See Datasets for more details.
Step3: Available Backends
See the Compute Tutorial for details on adding compute options and using them to create synthetic models.
PHOEBE 1.0 Legacy
For more details, see Comparing PHOEBE 2.0 vs PHOEBE Legacy
Step4: Using Alternate Backends
Adding Compute Options
Adding a set of compute options for an alternate backend is just as easy as for the PHOEBE backend. Simply provide the function or name of the function in phoebe.parameters.compute that points to the parameters for that backend.
Here we'll add the default PHOEBE backend as well as the PHOEBE 1.0 (legacy) backend. Note that in order to use an alternate backend, that backend must be installed on your machine.
Step5: Running Compute
Nothing changes for running compute - simply provide the compute tag for those options. Do note, however, that not all backends support all dataset types.
But, since the legacy backend doesn't support ck2004 atmospheres and interpolated limb-darkening, we do need to choose a limb-darkening law.
Step6: Running Multiple Backends Simultaneously
Running multiple backends simultaneously is just as simple as running the PHOEBE backend with multiple sets of compute options (see Compute).
We just need to make sure that each dataset is only enabled for one (or none) of the backends that we want to use, and then send a list of the compute tags to run_compute. Here we'll use the PHOEBE backend to compute orbits and the legacy backend to compute light curves.
Step7: The parameters inside the returned model even remember which set of compute options (and therefore, in this case, which backend) were used to compute them. | Python Code:
!pip install -I "phoebe>=2.0,<2.1"
Explanation: Advanced: Alternate Backends
Setup
Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
End of explanation
b.add_dataset('orb', times=np.linspace(0,10,1000), dataset='orb01', component=['primary', 'secondary'])
b.add_dataset('lc', times=np.linspace(0,10,1000), dataset='lc01')
Explanation: And we'll attach some dummy datasets. See Datasets for more details.
End of explanation
b.add_compute('legacy', compute='legacybackend')
print b['legacybackend']
Explanation: Available Backends
See the Compute Tutorial for details on adding compute options and using them to create synthetic models.
PHOEBE 1.0 Legacy
For more details, see Comparing PHOEBE 2.0 vs PHOEBE Legacy
End of explanation
b.add_compute('phoebe', compute='phoebebackend')
print b['phoebebackend']
Explanation: Using Alternate Backends
Adding Compute Options
Adding a set of compute options for an alternate backend is just as easy as for the PHOEBE backend. Simply provide the function or name of the function in phoebe.parameters.compute that points to the parameters for that backend.
Here we'll add the default PHOEBE backend as well as the PHOEBE 1.0 (legacy) backend. Note that in order to use an alternate backend, that backend must be installed on your machine.
End of explanation
b.set_value_all('ld_func', 'logarithmic')
b.run_compute('legacybackend', model='legacyresults')
Explanation: Running Compute
Nothing changes for running compute - simply provide the compute tag for those options. Do note, however, that not all backends support all dataset types.
But, since the legacy backend doesn't support ck2004 atmospheres and interpolated limb-darkening, we do need to choose a limb-darkening law.
End of explanation
b.set_value_all('enabled@lc01@phoebebackend', False)
#b.set_value_all('enabled@orb01@legacybackend', False) # don't need this since legacy NEVER computes orbits
print b['enabled']
b.run_compute(['phoebebackend', 'legacybackend'], model='mixedresults')
Explanation: Running Multiple Backends Simultaneously
Running multiple backends simultaneously is just as simple as running the PHOEBE backend with multiple sets of compute options (see Compute).
We just need to make sure that each dataset is only enabled for one (or none) of the backends that we want to use, and then send a list of the compute tags to run_compute. Here we'll use the PHOEBE backend to compute orbits and the legacy backend to compute light curves.
End of explanation
print b['mixedresults'].computes
b['mixedresults@phoebebackend'].datasets
b['mixedresults@legacybackend'].datasets
Explanation: The parameters inside the returned model even remember which set of compute options (and therefore, in this case, which backend) were used to compute them.
End of explanation |
4,932 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Вычисление элементарных функций
Вычисление значения функции на данном аргументе является одной из важнейших задач численных методов.
Несмотря на то, что вы уже огромное число раз вычисляли значения функций на практике, вам вряд ли приходилось самостоятельно реализовывать вычисление функций, не сводящихся к композиции элементарных.
Действительно, калькуляторы, стандартные библиотеки, математические пакеты и т.п. позволяют вам легко и зачастую с произвольной точностью вычислять значение широко известных функций.
Однако иногда вычисление элементраных функций приходится реализовывать самостоятельно, например, если вы пытаетесь добиться более высокой производительности, улучшить точность, эффективно распараллелить вычисления, используете среду/оборудование, для которого нет математических библиотек и т.п.
Алгоритмы вычисления элементарных функций сами по себе поучительны, так как учат нас избегать типичных ошибок расчетов на компьютере, подсказывают, как реализовать вычисления неэлементарных функций, а также позволяют рассмотреть нам некоторые методы, которые полностью проявляют свою мощь в более сложных задачах.
В этой лабораторной работе мы рассмотрим задачу вычисления натурального логарифма $y=\ln x$.
Функция выбрана достаточно произвольно, подобных оразом можно вычислить и другие элементарные функции.
Сразу стоит обратить внимание, что используемые методы достаточно универсальны, но не являются самыми быстрыми.
Элементарные свойства. Редукция аргумента.
По-определению, натуральным логарифмом называется функция, обратная к экспоненте, т.е. $y=\ln x$ тогда и только тогда, когда $x=e^y$.
Поэтому если мы можем вычислять показательную функцию, то легко построить график логарифмической функции, нужно просто поменять переменные местами.
Step1: Для графического представления данных часто используется логарифмическая шкала, на которой находищиеся на одном расстоянии точки отличаются в одно и то же число раз.
График логарифма в логарифмической шкале по аргументу $x$ выглядит как прямая линия.
Step2: Лоагрифм преобразует умножение в сложение
Step3: Задание 1. Выполните редукцию аргумента логарифма так, чтобы всегда получать значения из интервала $[1,1+\epsilon)$, где $\epsilon$ - маленькое положительное число. Каким свойством предпочтительнее воспользоваться $\ln x^2=2\ln x$ или $\ln \frac{x}{2}=\ln x-\ln 2$?
Результат даже точного вычислении логарифма имеет погрешность равную произведению погрешности аргумента на число обусловленности.
Число обусловленности можно найти по формуле
Step4: Формально при $x=1$ число обусловленности равно бесконечности (так как значение функции равно $0$), однако этот пик очень узкий, так что почти всюду значения могут быть найдены с машинной точностью, кроме узкого
Разложение в степенной ряд
Из математического анализа нам известно, что для $|x|<1$ справедливо разложение логарифма в ряд
Step5: Формула Эйлера дает аккуратное приближение функции только рядом с точкой разложения (в даном случае $x=1$), что мы и наблюдаем в эксперименте.
Наибольшую точность мы получили возле $x=1$, что противоречит нашей оценке через числа обусловленности.
Однако нужно принимать во внимание, что мы сравнивали нашу реализацию со встроенной, которая не дает (и не может дать) абсолютно правильный ответ.
Точность вычислений можно увеличить, добавляя слагаемые в частичную сумму.
Сколько слагаемых нужно взять, чтобы достигнуть желаемой точности?
Распространено заблуждение, что суммировать нужно до тех пор, пока последнее добавленное слагаемое не станет меньше желаемой точности.
Вообще говоря это не так.
Чтобы получить верную оценку погрешности отбрасывания остатка ряда, нужно оценить весь остаток ряда, а не только последнее слагаемое.
Для оценки остатка ряда можно воспользоваться формулой Лагранжа для остаточного члена
Step6: Как мы видим, погрешность стремится к нулю в узлах интерполяции, между узлами ошибка не растет выше некоторой величины, т.е. с точки зрения вычисления функции этот приближение гораздо лучше.
Задание 3. Как следует из графика ошибки, предложенный выбор узлов $x_n$ плох.
Подумайте, как лучше расположить узлы интерполяции?
Воспользуйтесь формулой приведения
$$x=\frac{1+2u/3}{1-2u/3},$$
позволяющей преобразовать интервал $x\in[1/5,5]$ в интервал $u\in[-1,1]$.
Будет ли разложение по степеням $u$ предпочтительнее разложения по степеням $a=x-1$?
Составьте интерполяционный многочлен Лагранжа от переменной $u$ с узлами в нулях многочлена Чебышева
Step7: Задание 4. Начальное приближение в вышеприведенном алгоритме выбрано очень грубо, предложите лучшее приближение. Оцените число итераций, необходимое для получения лучшей возможной точности. Реализуйте метод Ньютона для найденного числа итераций. Удалось ли получить машиную точность? Почему? Почему при использовании 1 в качестве начального приближения итерации расходятся для $x$ заметно отличающихся от 1?
Вычисление с помощью таблиц
Число с плавающей запятой представляется в виде $M\cdot 2^E$, где $M$ - мантисса, а $E$ - экспонента.
Согласно основному свойству логарифма
$$\ln (M\cdot 2^E)=E\ln 2+\ln M,$$
где константу $\ln 2$ можно предварительно вычислить и сохранить, экспонента представляет собой данное нам целое число, единственно что нам остается вычислить - это логарифм мантиссы.
Так как мантисса всегда лежит в интервале $(-1,1)$, а с учетом области определения логарима, в интервале $(0,1)$, то мы можем приближенно найти значение $\ln M$ как сохраненное в таблице значение логарифма в ближайшей к $M$ точке.
Для составления таблицы удобно отбросить все биты мантиссы, кроме нескольких старших,
перебрать все их возможные значения и вычислить логарифм этих значений. | Python Code:
y=np.linspace(-2,3,100)
x=np.exp(y)
plt.plot(x,y)
plt.xlabel('$x$')
plt.ylabel('$y=\ln x$')
plt.show()
Explanation: Вычисление элементарных функций
Вычисление значения функции на данном аргументе является одной из важнейших задач численных методов.
Несмотря на то, что вы уже огромное число раз вычисляли значения функций на практике, вам вряд ли приходилось самостоятельно реализовывать вычисление функций, не сводящихся к композиции элементарных.
Действительно, калькуляторы, стандартные библиотеки, математические пакеты и т.п. позволяют вам легко и зачастую с произвольной точностью вычислять значение широко известных функций.
Однако иногда вычисление элементраных функций приходится реализовывать самостоятельно, например, если вы пытаетесь добиться более высокой производительности, улучшить точность, эффективно распараллелить вычисления, используете среду/оборудование, для которого нет математических библиотек и т.п.
Алгоритмы вычисления элементарных функций сами по себе поучительны, так как учат нас избегать типичных ошибок расчетов на компьютере, подсказывают, как реализовать вычисления неэлементарных функций, а также позволяют рассмотреть нам некоторые методы, которые полностью проявляют свою мощь в более сложных задачах.
В этой лабораторной работе мы рассмотрим задачу вычисления натурального логарифма $y=\ln x$.
Функция выбрана достаточно произвольно, подобных оразом можно вычислить и другие элементарные функции.
Сразу стоит обратить внимание, что используемые методы достаточно универсальны, но не являются самыми быстрыми.
Элементарные свойства. Редукция аргумента.
По-определению, натуральным логарифмом называется функция, обратная к экспоненте, т.е. $y=\ln x$ тогда и только тогда, когда $x=e^y$.
Поэтому если мы можем вычислять показательную функцию, то легко построить график логарифмической функции, нужно просто поменять переменные местами.
End of explanation
plt.semilogx(x,y)
plt.xlabel('$x$')
plt.ylabel('$y=\ln x$')
plt.show()
Explanation: Для графического представления данных часто используется логарифмическая шкала, на которой находищиеся на одном расстоянии точки отличаются в одно и то же число раз.
График логарифма в логарифмической шкале по аргументу $x$ выглядит как прямая линия.
End of explanation
x=np.logspace(0,10,100)
y=np.log(x)
plt.semilogx(x,y)
plt.semilogx(1/x,-y)
plt.xlabel('$x$')
plt.ylabel('$y=\ln x$')
plt.show()
Explanation: Лоагрифм преобразует умножение в сложение:
$$\ln (xy)=\ln x+\ln y.$$
а возведение в степень в умножение
$$\ln x^a=a\ln x.$$
Это свойство, например, может быть использовано для вычисления произвольных вещественных степеней:
$$a^x=\exp(\ln a^x)=\exp(a\ln x).$$
Это свойство можно применить и для того, чтобы выразить значения логарифма в одних точках, через значения в других, избежав вычислений значений в неудобных точках.
Например, воспользовавшись свойством
$$\ln \frac1x=-\ln x,$$
можно вычислять значения логарифма на всей области определения, реализовав вычисление логарифма только на интервале $(0,1]$ или от $[1,\infty)$.
Этот подход называется редукцией аргумента, и ипользуется при вычислении почти всех функций.
End of explanation
x0=np.logspace(-5,5,1000,dtype=np.double)
epsilon=np.finfo(np.double).eps
best_precision=(epsilon/2)*np.abs(1./np.log(x0))
plt.loglog(x0,best_precision, '-k')
plt.loglog(x0,np.full(x0.shape, epsilon), '--r')
plt.xlabel("$Аргумент$")
plt.ylabel("$Относительная\,погрешность$")
plt.legend(["$Минимальная\,погр.$","$Машинная\,погр.$"])
plt.show()
Explanation: Задание 1. Выполните редукцию аргумента логарифма так, чтобы всегда получать значения из интервала $[1,1+\epsilon)$, где $\epsilon$ - маленькое положительное число. Каким свойством предпочтительнее воспользоваться $\ln x^2=2\ln x$ или $\ln \frac{x}{2}=\ln x-\ln 2$?
Результат даже точного вычислении логарифма имеет погрешность равную произведению погрешности аргумента на число обусловленности.
Число обусловленности можно найти по формуле:
$$\kappa(x)=\frac{|x(\ln x)'|}{|ln x|}=\frac{|x/x|}{|\ln x|}=\frac{1}{|\ln x|}.$$
Так как погрешность аргумента всегда не привосходит, но может достигать половины машинной точности, то лучшая реализация вычисления логарфима будет иметь следующую точность:
End of explanation
def relative_error(x0,x): return np.abs(x0-x)/np.abs(x0)
def log_teylor_series(x, N=5):
a=x-1
a_k=a # x в степени k. Сначала k=1
y=a # Значене логарифма, пока для k=1.
for k in range(2,N): # сумма по степеням
a_k=-a_k*a # последовательно увеличиваем степень и учитываем множитель со знаком
y=y+a_k/k
return y
x=np.logspace(-5,1,1001)
y0=np.log(x)
y=log_teylor_series(x)
plt.loglog(x,relative_error(y0,y),'-k')
plt.loglog(x0,best_precision,'--r')
plt.xlabel('$x$')
plt.ylabel('$(y-y_0)/y_0$')
plt.legend(["$Достигнутая\;погр.$", "$Минимальная\;погр.$"],loc=5)
plt.show()
Explanation: Формально при $x=1$ число обусловленности равно бесконечности (так как значение функции равно $0$), однако этот пик очень узкий, так что почти всюду значения могут быть найдены с машинной точностью, кроме узкого
Разложение в степенной ряд
Из математического анализа нам известно, что для $|x|<1$ справедливо разложение логарифма в ряд:
$$\ln (1+a)=\sum_{k=1}^\infty (-1)^{n+1}a^k/k=x-x^2/2+x^3/3+\ldots.$$
Так как правая часть содержит только арифметические операции, то возникает соблазн использовать частичную сумму этого ряда для приближенного вычисления логарифма.
Первое препятствие на этом пути - это сходимость ряда только на малом интервале, т.е. таким способом могут быть получены только значения $\ln x$ для $x\in(0,2)$.
Вторая сложность заключается в том, что частичная сумма $S_N$ из $N$ членов ряда
$$S_N=\sum_{k=1}^N (-1)^{n+1}{a^k}/k$$
дает только часть суммы, а остаток ряда
$$R_N=\sum_{k=N+1}^\infty (-1)^{n+1}{a^k}/k$$
быстро увеличивается, если значения $a$ увеличиваются по модулю.
Вычислим численно относительную погрешность отбрасывания остатка ряда.
End of explanation
# Узлы итерполяции
N=5
xn=1+1./(1+np.arange(N))
yn=np.log(xn)
# Тестовые точки
x=np.linspace(1+1e-10,2,1000)
y=np.log(x)
# Многочлен лагранжа
import scipy.interpolate
L=scipy.interpolate.lagrange(xn,yn)
yl=L(x)
plt.plot(x,y,'-k')
plt.plot(xn,yn,'.b')
plt.plot(x,yl,'-r')
plt.xlabel("$x$")
plt.ylabel("$y=\ln x$")
plt.show()
plt.semilogy(x,relative_error(y,yl))
plt.xlabel("$Аргумент$")
plt.ylabel("$Относительная\;погрешность$")
plt.show()
Explanation: Формула Эйлера дает аккуратное приближение функции только рядом с точкой разложения (в даном случае $x=1$), что мы и наблюдаем в эксперименте.
Наибольшую точность мы получили возле $x=1$, что противоречит нашей оценке через числа обусловленности.
Однако нужно принимать во внимание, что мы сравнивали нашу реализацию со встроенной, которая не дает (и не может дать) абсолютно правильный ответ.
Точность вычислений можно увеличить, добавляя слагаемые в частичную сумму.
Сколько слагаемых нужно взять, чтобы достигнуть желаемой точности?
Распространено заблуждение, что суммировать нужно до тех пор, пока последнее добавленное слагаемое не станет меньше желаемой точности.
Вообще говоря это не так.
Чтобы получить верную оценку погрешности отбрасывания остатка ряда, нужно оценить весь остаток ряда, а не только последнее слагаемое.
Для оценки остатка ряда можно воспользоваться формулой Лагранжа для остаточного члена:
$$R_N=\frac{a^{N+1}}{(N+1)!}\frac{d^{N+1}f(a\theta)}{da^{N+1}},$$
где как и выше $a=x-1$, а $\theta$ лежит на интервале $[0,1]$.
Задание 2. Найдите количество слагаемых в частичной сумме, достаточное для получения значения логарифма с заданной точностью. Реализуйте вычисления логарифма через сумму с заданной точностью. Какую максимальную точность удается достичь?
Аппроксимация многочленами
При вычислении логарифма через частичную суммы мы по сути приближали логарифм многочленами.
Многочлен Тейлора давал хорошее приближение функции и нескольких производных, но только в одной точке.
Мы же сейчас интересуемся только значением функции, но хотели бы иметь хорошую точность приближения на целом интервале.
Для достижения этой цели многочлены Тейлора подходят плохо, однако можно воспользоваться многочленами Лагранжа, Чебышева и т.п., или можно попытаться минимизировать непосредственно ошибку прилижения на отрезке, варьируя коэффициенты многочлена.
В качестве примера мы рассмотрим построение интерполяционного многочлена Лагранжа.
Этот многочлен будет точно совпадать с приближаемой функцией в $N+1$ узле, где $N$~-- степень многочлена, а между узлами мы надеямся, что погрешность не будет слишком силько расти. Зафиксируем несколько значений $x_n=1+1/(n+1)$, $n=0..N$, из интервала $[1,2]$ и вычислим в них точные значения логарифма в этих точках $y_n=\ln(x_n)$. Тогда интерполяционный многочлен имеет вид:
$$L(x)=\sum_{n=0}^{N}\prod_{k\neq n} \frac{x-x_k}{x_n-x}.$$
End of explanation
def log_newton(x, N=10):
y=1 # начальное приближение
for j in range(N):
y=y-1+x/np.exp(y)
return y
x=np.logspace(-3,3,1000)
y0=np.log(x)
y=log_newton(x)
plt.loglog(x,relative_error(y0,y),'-k')
plt.xlabel("$Аргумент$")
plt.ylabel("$Относительная\;погрешность$")
plt.show()
Explanation: Как мы видим, погрешность стремится к нулю в узлах интерполяции, между узлами ошибка не растет выше некоторой величины, т.е. с точки зрения вычисления функции этот приближение гораздо лучше.
Задание 3. Как следует из графика ошибки, предложенный выбор узлов $x_n$ плох.
Подумайте, как лучше расположить узлы интерполяции?
Воспользуйтесь формулой приведения
$$x=\frac{1+2u/3}{1-2u/3},$$
позволяющей преобразовать интервал $x\in[1/5,5]$ в интервал $u\in[-1,1]$.
Будет ли разложение по степеням $u$ предпочтительнее разложения по степеням $a=x-1$?
Составьте интерполяционный многочлен Лагранжа от переменной $u$ с узлами в нулях многочлена Чебышева:
$$u_n=\cos\frac{\pi(n+1/2)}{N+1},\quad n=0..N.$$
Сравните точности аппроксимации с узлами в $x_n$ и в $u_n$.
Задание A (повышенная сложность). Найдите многочлен данной степени $N$, дающий наименьшую погрешность приближения логарифма на интервале $[1/5,5]$.
Задание B (повышенная сложность). Постройте разложение логарифма на интервале $[1/5,5]$ по многочленам Чебышева от переменной $u$ методом Ланцоша.
Итерационный метод
Для нахождения $y$, такого что $y=\ln x$, можно численно решить уравнение $x=e^y$,
что может оказаться проще, чем считать логарифм напрямую.
Для решения уравнения воспользуемся методом Ньютона.
Перепишем уравнение в виде $F(y)=e^y-x=0$, т.е. будем искать нули функции $F$.
Пусть у нас есть начальное приближение для $y=y_0$.
Приблизим функцию $F$ рядом с $y_0$ с помощью касательной,
т.е. $F(y)\approx F'(y_0)(y-y_0)+F(y_0)$.
Если функция $F$ близка к линейной (что верно, если $y_0$ близко к нулю функции), то точки пересечения функции и касательной с осью абсцисс близки.
Составим уравнение на ноль касательной:
$$F'(y_0)(y-y_0)+F(y_0)=0,$$
следовательно следующим приближением выберем
$$y=y_0-\frac{F(y_0)}{F'(y_0)}.$$
Итерации по методу Ньютона определены следующей рекуррентной формулой:
$$y_{n+1}=y_n-\frac{F(y_n)}{F'(y_n)}.$$
Подставляя явный вид функции $F$, получаем
$$y_{n+1}=y_n-\frac{e^{y_n}-x}{e^{y_n}}=y_n-1+xe^{-y_n}.$$
Точное значение логарифма есть предел последовательности $y_n$ при $n\to\infty$.
Приближенное значение логарифма можно получить сделав несколько итераций.
При выполнении ряда условий метод Ньютона имеет квадратичную скорость сходимости, т.е.
$$|y_n-y^|<\alpha|y_{n-1}-y^|^2,$$
где $y^*=\lim_{n\to\infty} y_n$ - точное значение логарифма, и $\alpha\in(0,1]$ - некоторая константа.
Неформально выражаясь, квадратичная сходимость означает удвоение числа значащих цифр на каждой итерации.
End of explanation
B=8 # число используемых для составления таблицы бит мантиссы
table=np.log((np.arange(0,2**B, dtype=np.double)+0.5)/(2**B))
log2=np.log(2)
def log_table(x):
M,E=np.frexp(x)
return log2*E+table[(M*2**B).astype(np.int)]
x=np.logspace(-10,10,1000)
y0=np.log(x)
y=log_table(x)
plt.loglog(x,relative_error(y0,y),'-k')
plt.xlabel("$Аргумент$")
plt.ylabel("$Относительная\;погрешность$")
plt.show()
Explanation: Задание 4. Начальное приближение в вышеприведенном алгоритме выбрано очень грубо, предложите лучшее приближение. Оцените число итераций, необходимое для получения лучшей возможной точности. Реализуйте метод Ньютона для найденного числа итераций. Удалось ли получить машиную точность? Почему? Почему при использовании 1 в качестве начального приближения итерации расходятся для $x$ заметно отличающихся от 1?
Вычисление с помощью таблиц
Число с плавающей запятой представляется в виде $M\cdot 2^E$, где $M$ - мантисса, а $E$ - экспонента.
Согласно основному свойству логарифма
$$\ln (M\cdot 2^E)=E\ln 2+\ln M,$$
где константу $\ln 2$ можно предварительно вычислить и сохранить, экспонента представляет собой данное нам целое число, единственно что нам остается вычислить - это логарифм мантиссы.
Так как мантисса всегда лежит в интервале $(-1,1)$, а с учетом области определения логарима, в интервале $(0,1)$, то мы можем приближенно найти значение $\ln M$ как сохраненное в таблице значение логарифма в ближайшей к $M$ точке.
Для составления таблицы удобно отбросить все биты мантиссы, кроме нескольких старших,
перебрать все их возможные значения и вычислить логарифм этих значений.
End of explanation |
4,933 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Running a model and loading the simulation it produces
Very simple example with the stationary dynamic system defined in test_lake.yaml. Basically it represents a stationary reservoir whose discretized mass balance equation is modelled as in the left side of the following table. On the right side, you find the pseudocode in the test_lake.yaml.
<table>
<tr>
<th>Equations</th>
<th>YAML</th>
</tr>
<tr>
<td>
$$\begin{aligned}
h_{t+1} &= h_t + \Delta_t * \left(a_{t+1} - r_{t+1} \right) \\
r_{t+1} &= \begin{cases}
h_t & u_t > h_t \\
u_t & h_t - 100 < u_t \leq h_t \\
h_t - 100 & u_t \leq h_t - 100
\end{cases} \\
u_t &= m\left(h_t; \theta\right) \\
a_{t+1} &= 40
\end{aligned}$$
</td>
<td>
<pre>
functions
Step1: Now load results and cleanup the simulation file
Step2: Plotting | Python Code:
from subprocess import Popen, PIPE, STDOUT
p = Popen(["../pydmmt/pydmmt.py", "test_lake.yml"], stdin=PIPE, stdout=PIPE, stderr=STDOUT)
output = p.communicate(".3".encode('utf-8'))[0]
# trim the '\n' newline char
print(output[:-1].decode('utf-8'))
Explanation: Running a model and loading the simulation it produces
Very simple example with the stationary dynamic system defined in test_lake.yaml. Basically it represents a stationary reservoir whose discretized mass balance equation is modelled as in the left side of the following table. On the right side, you find the pseudocode in the test_lake.yaml.
<table>
<tr>
<th>Equations</th>
<th>YAML</th>
</tr>
<tr>
<td>
$$\begin{aligned}
h_{t+1} &= h_t + \Delta_t * \left(a_{t+1} - r_{t+1} \right) \\
r_{t+1} &= \begin{cases}
h_t & u_t > h_t \\
u_t & h_t - 100 < u_t \leq h_t \\
h_t - 100 & u_t \leq h_t - 100
\end{cases} \\
u_t &= m\left(h_t; \theta\right) \\
a_{t+1} &= 40
\end{aligned}$$
</td>
<td>
<pre>
functions:
- "h[t+1] = h[t] + 1 * (a[t+1] - r[t+1])"
- "r[t+1] = max( max( h[t] - 100, 0 ), min( h[t], u[t] ) )"
- "u[t] = alfa * h[t]"
- "a[t+1] = 40"
- "h[0] = 100"
</pre>
</td>
</tr>
</table>
and the initial condition is given by $h_0 = 100$. Note that the control of this reservoir is demanded to a feedback
policy ($u_t = m\left(h_t; \theta\right)$) that is identified by a class $m(\cdot)$ of functions and a set of parameters $\theta$.
The system is operated to achieve certain objectives over the entire operational horizon simulated, namely:
* limit flooding along lake shores,
* supply a certain amount of water to downstream irrigation districts,
* supply water to an hydropower plant that has to meet a certain demand,
* limit flooding along the downstream water body.
These objectives are formulated as the daily mean of the following indicators:
<table>
<tr>
<th>Equations</th>
<th>YAML</th>
</tr>
<tr>
<td>
$$\begin{aligned}
h^\text{excess}_{t+1} &= \max\left( h_t - 50, 0 \right) \\
\text{deficit}^{irr}_{t+1} &= \max\left( 50 - r_{t+1}, 0 \right) \\
\text{deficit}^{HP}_{t+1} &= \max\left( 4.36 - HP_{t+1}, 0 \right) \\
HP_{t+1} &= \frac{1 * 9.81 * 1000 * h_t * \max\left( r_{t+1} - 0, 0 \right)}{3600 * 1000} \\
r^\text{excess}_{t+1} &= \max\left( r_{t+1} - 30, 0 \right)
\end{aligned}$$
</td>
<td>
<pre>
functions:
# indicators
- "h_excess[t+1] = max( h[t] - 50, 0 )"
- "irr_deficit[t+1] = max( 50 - r[t+1], 0 )"
- "hyd_deficit[t+1] = max( 4.36 - HP[t+1], 0 )"
- "HP[t+1] = 1 * 9.81 * 1000 / 3600000 * h[t] * max( r[t+1] - 0, 0 )"
- "r_excess[t+1] = max( r[t+1] - 30, 0 )"
# overall objectives
- "mean_daily_h_excess = mean( h_excess[1:100] )"
- "mean_daily_irr_deficit = mean( irr_deficit[2:101] )"
- "mean_daily_hyd_deficit = mean( hyd_deficit[2:101] )"
- "mean_daily_r_excess = mean( r_excess[2:101] )"
</pre>
</td>
</tr>
</table>
Running the actual model
End of explanation
from pathlib import Path
import csv
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import os
simfile = Path("lake_simulation.log")
# read sim_data_file and remove the character "#" from the first column
with simfile.open() as f:
the_sim = pd.read_csv(f)
the_sim = the_sim.rename(columns={'# t':'t'})
f.close()
os.remove(str(simfile))
print(the_sim)
Explanation: Now load results and cleanup the simulation file
End of explanation
level = pd.Series(the_sim["h[t]"], index=the_sim["t"])
inflow = pd.Series(the_sim["a[t+1]"], index=the_sim["t"])
release = pd.Series(the_sim["r[t+1]"], index=the_sim["t"])
plt.figure()
plt.subplot(211)
level.plot()
plt.axis([0, 20, 0, 150])
plt.ylabel("Lake level")
plt.title("Reservoir evolution")
plt.subplot(212)
inflow.plot()
release.plot()
plt.axis([0, 20, 0, 150])
plt.ylabel("Flow")
plt.legend(["Inflow", "Release"])
plt.show()
Explanation: Plotting
End of explanation |
4,934 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
3. How to Setup the Initial Condition
Here, we explain the basics of World classes. In E-Cell4, six types of World classes are supported now
Step1: 3.1. Common APIs of World
Even though World describes the spatial representation specific to the corresponding algorithm, it has compatible APIs. In this section, we introduce the common interfaces of the six World classes.
Step2: World classes accept different sets of arguments in the constructor, which determine the parameters specific to the algorithm. However, at least, all World classes can be instantiated only with their size, named edge_lengths. The type of edge_lengths is Real3, which represents a triplet of Reals. In E-Cell4, all 3-dimensional positions are treated as a Real3. See also 8. More about 1. Brief Tour of E-Cell4 Simulations.
Step3: World has getter methods for the size and volume.
Step4: spatiocyte.World (w3) would have a bit larger volume to fit regular hexagonal close-packed (HCP) lattice.
Next, let's add molecules into the World. Here, you must give Species attributed with radius and D to tell the shape of molecules. In the example below 0.0025 corresponds to radius and 1 to D. Positions of the molecules are randomly determined in the World if needed. 10 in add_molecules function represents the number of molecules to be added.
Step5: After a model is bound to the world, you do not need to rewrite the radius and D once set in Species (unless you want to change it).
Step6: Similarly, remove_molecules and num_molecules_exact are also available.
Step7: Unlike num_molecules_exact, num_molecules returns the numbers that match a given Species in rule-based fashion. When all Species in the World has a single UnitSpecies with no sites, num_molecules is same with num_molecules_exact.
Step8: World holds its simulation time.
Step9: Finally, you can save and load the state of a World into/from a HDF5 file.
Step10: All the World classes also accept a HDF5 file path as an unique argument of the constructor.
Step11: 3.2. How to Get Molecule Positions
World also has the common functions to access the coordinates of the molecules.
Step12: First, you can place a molecule at the certain position with new_particle.
Step13: new_particle returns a particle created and whether it's succeeded or not. The resolution in representation of molecules differs. For example, GillespieWorld has almost no information about the coordinate of molecules. Thus, it simply ignores the given position, and just counts up the number of molecules here.
ParticleID is a pair of Integers named lot and serial.
Step14: Particle simulators, i.e. spatiocyte, bd and egfrd, provide an interface to access a particle by its id. has_particle returns if a particles exists or not for the given ParticleID.
Step15: After checking the existence, you can get the partcle by get_particle as follows.
Step16: Particle consists of species, position, radius and D.
Step17: In the case of spatiocyte, a particle position is automatically round to the center of the voxel nearest to the given position.
You can even move the position of the particle. update_particle replace the particle specified with the given ParticleID with the given Particle and return False. If no corresponding particle is found, create new particle and return True. If you give a Particle with the different type of Species, the Species of the Particle will be also changed.
Step18: list_particles and list_particles_exact return a list of pairs of ParticleID and Particle in the World. World automatically makes up for the gap with random numbers. For example, GillespieWorld returns a list of positions randomly distributed in the World size.
Step19: You can remove a specific particle with remove_particle.
Step20: 3.3. Lattice-based Coordinate
In addition to the common interface, each World can have their own interfaces. As an example, we explain methods to handle lattice-based coordinate here. SpatiocyteWorld is based on a space discretized to hexiagonal close packing lattices, LatticeSpace.
Step21: The size of a single lattice, called Voxel, can be obtained by voxel_radius(). SpatiocyteWorld has methods to get the numbers of rows, columns, and layers. These sizes are automatically calculated based on the given edge_lengths at the construction.
Step22: A position in the lattice-based space is treated as an Integer3, column, row and layer, called a global coordinate. Thus, SpatiocyteWorld provides the function to convert the Real3 into a lattice-based coordinate.
Step23: In SpatiocyteWorld, the global coordinate is translated to a single integer. It is just called a coordinate. You can also treat the coordinate as in the same way with a global coordinate.
Step24: With these coordinates, you can handle a Voxel, which represents a Particle object. Instead of new_particle, new_voxel provides the way to create a new Voxel with a coordinate.
Step25: A Voxel consists of species, coordinate, radius and D.
Step26: Of course, you can get a voxel and list voxels with get_voxel and list_voxels_exact similar to get_particle and list_particles_exact.
Step27: You can move and update the voxel with update_voxel corresponding to update_particle.
Step28: Finally, remove_voxel remove a voxel as remove_particle does.
Step29: 3.4 Structure
Step30: By using a Shape object, you can confine initial positions of molecules to a part of World. In the case below, 60 molecules are positioned inside the given Sphere. Diffusion of the molecules placed here is NOT restricted in the Shape. This Shape is only for the initialization.
Step31: A property of Species, 'location', is available to restrict diffusion of molecules. 'location' is not fully supported yet, but only supported in spatiocyte and meso. add_structure defines a new structure given as a pair of Species and Shape.
NOTICE
Step32: After defining a structure, you can bind molecules to the structure as follows
Step33: The molecules bound to a Species named B diffuse on a structure named M, which has a shape of SphericalSurface (a hollow sphere). In spatiocyte, a structure is represented as a set of particles with Species M occupying a voxel. It means that molecules not belonging to the structure is not able to overlap the voxel and it causes a collision. On the other hand, in meso, a structure means a list of subvolumes. Thus, a structure doesn't avoid an incursion of other particles.
3.5. Random Number Generator
A random number generator is also a part of World. All Worlds except ODEWorld store a random number generator, and updates it when the simulation needs a random value. On E-Cell4, only one class GSLRandomNumberGenerator is implemented as a random number generator.
Step34: With no argument, the random number generator is always initialized with a seed, 0.
Step35: You can initialize the seed with an integer as follows
Step36: When you call the seed function with no input, the seed is drawn from the current time.
Step37: GSLRandomNumberGenerator provides several ways to get a random number.
Step38: World accepts a random number generator at the construction. As a default, GSLRandomNumberGenerator() is used. Thus, when you don't give a generator, behavior of the simulation is always same (determinisitc).
Step39: You can access the GSLRandomNumberGenerator in a World through rng function. | Python Code:
from ecell4_base.core import *
Explanation: 3. How to Setup the Initial Condition
Here, we explain the basics of World classes. In E-Cell4, six types of World classes are supported now: spatiocyte.SpatiocyteWorld, egfrd.EGFRDWorld, bd.BDWorld, meso.MesoscopicWorld, gillespie.GillespieWorld, and ode.ODEWorld.
In the most of softwares, the initial condition is supposed to be a part of Model. But, in E-Cell4, the initial condition must be set up as World separately from Model. World stores an information about the state, such as a current time, the number of molecules, coordinate of molecules, structures, and random number generator, at a time point. Meanwhile, Model contains the type of interactions between molecules and the common properties of molecules. Model is reusable among algorithms.
End of explanation
from ecell4_base import *
Explanation: 3.1. Common APIs of World
Even though World describes the spatial representation specific to the corresponding algorithm, it has compatible APIs. In this section, we introduce the common interfaces of the six World classes.
End of explanation
edge_lengths = Real3(1, 2, 3)
w1 = gillespie.World(edge_lengths)
w2 = ode.World(edge_lengths)
w3 = spatiocyte.World(edge_lengths)
w4 = bd.World(edge_lengths)
w5 = meso.World(edge_lengths)
w6 = egfrd.World(edge_lengths)
Explanation: World classes accept different sets of arguments in the constructor, which determine the parameters specific to the algorithm. However, at least, all World classes can be instantiated only with their size, named edge_lengths. The type of edge_lengths is Real3, which represents a triplet of Reals. In E-Cell4, all 3-dimensional positions are treated as a Real3. See also 8. More about 1. Brief Tour of E-Cell4 Simulations.
End of explanation
print(tuple(w1.edge_lengths()), w1.volume())
print(tuple(w2.edge_lengths()), w2.volume())
print(tuple(w3.edge_lengths()), w3.volume())
print(tuple(w4.edge_lengths()), w4.volume())
print(tuple(w5.edge_lengths()), w5.volume())
print(tuple(w6.edge_lengths()), w6.volume())
Explanation: World has getter methods for the size and volume.
End of explanation
sp1 = Species("A", 0.0025, 1)
w1.add_molecules(sp1, 10)
w2.add_molecules(sp1, 10)
w3.add_molecules(sp1, 10)
w4.add_molecules(sp1, 10)
w5.add_molecules(sp1, 10)
w6.add_molecules(sp1, 10)
Explanation: spatiocyte.World (w3) would have a bit larger volume to fit regular hexagonal close-packed (HCP) lattice.
Next, let's add molecules into the World. Here, you must give Species attributed with radius and D to tell the shape of molecules. In the example below 0.0025 corresponds to radius and 1 to D. Positions of the molecules are randomly determined in the World if needed. 10 in add_molecules function represents the number of molecules to be added.
End of explanation
m = NetworkModel()
m.add_species_attribute(Species("A", 0.0025, 1))
m.add_species_attribute(Species("B", 0.0025, 1))
w1.bind_to(m)
w2.bind_to(m)
w3.bind_to(m)
w4.bind_to(m)
w5.bind_to(m)
w6.bind_to(m)
w1.add_molecules(Species("B"), 20)
w2.add_molecules(Species("B"), 20)
w3.add_molecules(Species("B"), 20)
w4.add_molecules(Species("B"), 20)
w5.add_molecules(Species("B"), 20)
w6.add_molecules(Species("B"), 20)
Explanation: After a model is bound to the world, you do not need to rewrite the radius and D once set in Species (unless you want to change it).
End of explanation
w1.remove_molecules(Species("B"), 5)
w2.remove_molecules(Species("B"), 5)
w3.remove_molecules(Species("B"), 5)
w4.remove_molecules(Species("B"), 5)
w5.remove_molecules(Species("B"), 5)
w6.remove_molecules(Species("B"), 5)
print(w1.num_molecules_exact(Species("A")), w2.num_molecules_exact(Species("A")), w3.num_molecules_exact(Species("A")), w4.num_molecules_exact(Species("A")), w5.num_molecules_exact(Species("A")), w6.num_molecules_exact(Species("A")))
print(w1.num_molecules_exact(Species("B")), w2.num_molecules_exact(Species("B")), w3.num_molecules_exact(Species("B")), w4.num_molecules_exact(Species("B")), w5.num_molecules_exact(Species("B")), w6.num_molecules_exact(Species("B")))
Explanation: Similarly, remove_molecules and num_molecules_exact are also available.
End of explanation
print(w1.num_molecules(Species("A")), w2.num_molecules(Species("A")), w3.num_molecules(Species("A")), w4.num_molecules(Species("A")), w5.num_molecules(Species("A")), w6.num_molecules(Species("A")))
print(w1.num_molecules(Species("B")), w2.num_molecules(Species("B")), w3.num_molecules(Species("B")), w4.num_molecules(Species("B")), w5.num_molecules(Species("B")), w6.num_molecules(Species("B")))
Explanation: Unlike num_molecules_exact, num_molecules returns the numbers that match a given Species in rule-based fashion. When all Species in the World has a single UnitSpecies with no sites, num_molecules is same with num_molecules_exact.
End of explanation
print(w1.t(), w2.t(), w3.t(), w4.t(), w5.t(), w6.t())
w1.set_t(1.0)
w2.set_t(1.0)
w3.set_t(1.0)
w4.set_t(1.0)
w5.set_t(1.0)
w6.set_t(1.0)
print(w1.t(), w2.t(), w3.t(), w4.t(), w5.t(), w6.t())
Explanation: World holds its simulation time.
End of explanation
w1.save("gillespie.h5")
w2.save("ode.h5")
w3.save("spatiocyte.h5")
w4.save("bd.h5")
w5.save("meso.h5")
w6.save("egfrd.h5")
del w1, w2, w3, w4, w5, w6
w1 = gillespie.World()
w2 = ode.World()
w3 = spatiocyte.World()
w4 = bd.World()
w5 = meso.World()
w6 = egfrd.World()
print(w1.t(), tuple(w1.edge_lengths()), w1.volume(), w1.num_molecules(Species("A")), w1.num_molecules(Species("B")))
print(w2.t(), tuple(w2.edge_lengths()), w2.volume(), w2.num_molecules(Species("A")), w2.num_molecules(Species("B")))
print(w3.t(), tuple(w3.edge_lengths()), w3.volume(), w3.num_molecules(Species("A")), w3.num_molecules(Species("B")))
print(w4.t(), tuple(w4.edge_lengths()), w4.volume(), w4.num_molecules(Species("A")), w4.num_molecules(Species("B")))
print(w5.t(), tuple(w5.edge_lengths()), w5.volume(), w5.num_molecules(Species("A")), w5.num_molecules(Species("B")))
print(w6.t(), tuple(w6.edge_lengths()), w6.volume(), w6.num_molecules(Species("A")), w6.num_molecules(Species("B")))
w1.load("gillespie.h5")
w2.load("ode.h5")
w3.load("spatiocyte.h5")
w4.load("bd.h5")
w5.load("meso.h5")
w6.load("egfrd.h5")
print(w1.t(), tuple(w1.edge_lengths()), w1.volume(), w1.num_molecules(Species("A")), w1.num_molecules(Species("B")))
print(w2.t(), tuple(w2.edge_lengths()), w2.volume(), w2.num_molecules(Species("A")), w2.num_molecules(Species("B")))
print(w3.t(), tuple(w3.edge_lengths()), w3.volume(), w3.num_molecules(Species("A")), w3.num_molecules(Species("B")))
print(w4.t(), tuple(w4.edge_lengths()), w4.volume(), w4.num_molecules(Species("A")), w4.num_molecules(Species("B")))
print(w5.t(), tuple(w5.edge_lengths()), w5.volume(), w5.num_molecules(Species("A")), w5.num_molecules(Species("B")))
print(w6.t(), tuple(w6.edge_lengths()), w6.volume(), w6.num_molecules(Species("A")), w6.num_molecules(Species("B")))
del w1, w2, w3, w4, w5, w6
Explanation: Finally, you can save and load the state of a World into/from a HDF5 file.
End of explanation
print(gillespie.World("gillespie.h5").t())
print(ode.World("ode.h5").t())
print(spatiocyte.World("spatiocyte.h5").t())
print(bd.World("bd.h5").t())
print(meso.World("meso.h5").t())
print(egfrd.World("egfrd.h5").t())
Explanation: All the World classes also accept a HDF5 file path as an unique argument of the constructor.
End of explanation
w1 = gillespie.World()
w2 = ode.World()
w3 = spatiocyte.World()
w4 = bd.World()
w5 = meso.World()
w6 = egfrd.World()
Explanation: 3.2. How to Get Molecule Positions
World also has the common functions to access the coordinates of the molecules.
End of explanation
sp1 = Species("A", 0.0025, 1)
pos = Real3(0.5, 0.5, 0.5)
(pid1, p1), suc1 = w1.new_particle(sp1, pos)
(pid2, p2), suc2 = w2.new_particle(sp1, pos)
pid3 = w3.new_particle(sp1, pos)
(pid4, p4), suc4 = w4.new_particle(sp1, pos)
(pid5, p5), suc5 = w5.new_particle(sp1, pos)
(pid6, p6), suc6 = w6.new_particle(sp1, pos)
Explanation: First, you can place a molecule at the certain position with new_particle.
End of explanation
print(pid6.lot(), pid6.serial())
print(pid6 == ParticleID((0, 1)))
Explanation: new_particle returns a particle created and whether it's succeeded or not. The resolution in representation of molecules differs. For example, GillespieWorld has almost no information about the coordinate of molecules. Thus, it simply ignores the given position, and just counts up the number of molecules here.
ParticleID is a pair of Integers named lot and serial.
End of explanation
# print(w1.has_particle(pid1))
# print(w2.has_particle(pid2))
print(w3.has_particle(pid3)) # => True
print(w4.has_particle(pid4)) # => True
# print(w5.has_particle(pid5))
print(w6.has_particle(pid6)) # => True
Explanation: Particle simulators, i.e. spatiocyte, bd and egfrd, provide an interface to access a particle by its id. has_particle returns if a particles exists or not for the given ParticleID.
End of explanation
# pid1, p1 = w1.get_particle(pid1)
# pid2, p2 = w2.get_particle(pid2)
pid3, p3 = w3.get_particle(pid3)
pid4, p4 = w4.get_particle(pid4)
# pid5, p5 = w5.get_particle(pid5)
pid6, p6 = w6.get_particle(pid6)
Explanation: After checking the existence, you can get the partcle by get_particle as follows.
End of explanation
# print(p1.species().serial(), tuple(p1.position()), p1.radius(), p1.D())
# print(p2.species().serial(), tuple(p2.position()), p2.radius(), p2.D())
print(p3.species().serial(), tuple(p3.position()), p3.radius(), p3.D())
print(p4.species().serial(), tuple(p4.position()), p4.radius(), p4.D())
# print(p5.species().serial(), tuple(p5.position()), p5.radius(), p5.D())
print(p6.species().serial(), tuple(p6.position()), p6.radius(), p6.D())
Explanation: Particle consists of species, position, radius and D.
End of explanation
newp = Particle(sp1, Real3(0.3, 0.3, 0.3), 0.0025, 1)
# print(w1.update_particle(pid1, newp))
# print(w2.update_particle(pid2, newp))
print(w3.update_particle(pid3, newp))
print(w4.update_particle(pid4, newp))
# print(w5.update_particle(pid5, newp))
print(w6.update_particle(pid6, newp))
Explanation: In the case of spatiocyte, a particle position is automatically round to the center of the voxel nearest to the given position.
You can even move the position of the particle. update_particle replace the particle specified with the given ParticleID with the given Particle and return False. If no corresponding particle is found, create new particle and return True. If you give a Particle with the different type of Species, the Species of the Particle will be also changed.
End of explanation
print(w1.list_particles_exact(sp1))
# print(w2.list_particles_exact(sp1)) # ODEWorld has no member named list_particles
print(w3.list_particles_exact(sp1))
print(w4.list_particles_exact(sp1))
print(w5.list_particles_exact(sp1))
print(w6.list_particles_exact(sp1))
Explanation: list_particles and list_particles_exact return a list of pairs of ParticleID and Particle in the World. World automatically makes up for the gap with random numbers. For example, GillespieWorld returns a list of positions randomly distributed in the World size.
End of explanation
# w1.remove_particle(pid1)
# w2.remove_particle(pid2)
w3.remove_particle(pid3)
w4.remove_particle(pid4)
# w5.remove_particle(pid5)
w6.remove_particle(pid6)
# print(w1.has_particle(pid1))
# print(w2.has_particle(pid2))
print(w3.has_particle(pid3)) # => False
print(w4.has_particle(pid4)) # => False
# print(w5.has_particle(pid5))
print(w6.has_particle(pid6)) # => False
Explanation: You can remove a specific particle with remove_particle.
End of explanation
w = spatiocyte.World(Real3(1, 2, 3), voxel_radius=0.01)
w.bind_to(m)
Explanation: 3.3. Lattice-based Coordinate
In addition to the common interface, each World can have their own interfaces. As an example, we explain methods to handle lattice-based coordinate here. SpatiocyteWorld is based on a space discretized to hexiagonal close packing lattices, LatticeSpace.
End of explanation
print(w.voxel_radius()) # => 0.01
print(tuple(w.shape())) # => (64, 152, 118)
# print(w.col_size(), w.row_size(), w.layer_size()) # => (64, 152, 118)
print(w.size()) # => 1147904 = 64 * 152 * 118
Explanation: The size of a single lattice, called Voxel, can be obtained by voxel_radius(). SpatiocyteWorld has methods to get the numbers of rows, columns, and layers. These sizes are automatically calculated based on the given edge_lengths at the construction.
End of explanation
# p1 = Real3(0.5, 0.5, 0.5)
# g1 = w.position2global(p1)
# p2 = w.global2position(g1)
# print(tuple(g1)) # => (31, 25, 29)
# print(tuple(p2)) # => (0.5062278801751902, 0.5080682368868706, 0.5)
Explanation: A position in the lattice-based space is treated as an Integer3, column, row and layer, called a global coordinate. Thus, SpatiocyteWorld provides the function to convert the Real3 into a lattice-based coordinate.
End of explanation
# p1 = Real3(0.5, 0.5, 0.5)
# c1 = w.position2coordinate(p1)
# p2 = w.coordinate2position(c1)
# g1 = w.coord2global(c1)
# print(c1) # => 278033
# print(tuple(p2)) # => (0.5062278801751902, 0.5080682368868706, 0.5)
# print(tuple(g1)) # => (31, 25, 29)
Explanation: In SpatiocyteWorld, the global coordinate is translated to a single integer. It is just called a coordinate. You can also treat the coordinate as in the same way with a global coordinate.
End of explanation
# c1 = w.position2coordinate(Real3(0.5, 0.5, 0.5))
# ((pid, v), is_succeeded) = w.new_voxel(Species("A"), c1)
# print(pid, v, is_succeeded)
Explanation: With these coordinates, you can handle a Voxel, which represents a Particle object. Instead of new_particle, new_voxel provides the way to create a new Voxel with a coordinate.
End of explanation
# print(v.species().serial(), v.coordinate(), v.radius(), v.D()) # => (u'A', 278033, 0.0025, 1.0)
Explanation: A Voxel consists of species, coordinate, radius and D.
End of explanation
# print(w.num_voxels_exact(Species("A")))
# print(w.list_voxels_exact(Species("A")))
# print(w.get_voxel(pid))
Explanation: Of course, you can get a voxel and list voxels with get_voxel and list_voxels_exact similar to get_particle and list_particles_exact.
End of explanation
# c2 = w.position2coordinate(Real3(0.5, 0.5, 1.0))
# w.update_voxel(pid, Voxel(v.species(), c2, v.radius(), v.D()))
# pid, newv = w.get_voxel(pid)
# print(c2) # => 278058
# print(newv.species().serial(), newv.coordinate(), newv.radius(), newv.D()) # => (u'A', 278058, 0.0025, 1.0)
# print(w.num_voxels_exact(Species("A"))) # => 1
Explanation: You can move and update the voxel with update_voxel corresponding to update_particle.
End of explanation
# print(w.has_voxel(pid)) # => True
# w.remove_voxel(pid)
# print(w.has_voxel(pid)) # => False
Explanation: Finally, remove_voxel remove a voxel as remove_particle does.
End of explanation
w1 = gillespie.World()
w2 = ode.World()
w3 = spatiocyte.World()
w4 = bd.World()
w5 = meso.World()
w6 = egfrd.World()
Explanation: 3.4 Structure
End of explanation
sp1 = Species("A", 0.0025, 1)
sphere = Sphere(Real3(0.5, 0.5, 0.5), 0.3)
w1.add_molecules(sp1, 60, sphere)
w2.add_molecules(sp1, 60, sphere)
w3.add_molecules(sp1, 60, sphere)
w4.add_molecules(sp1, 60, sphere)
w5.add_molecules(sp1, 60, sphere)
w6.add_molecules(sp1, 60, sphere)
Explanation: By using a Shape object, you can confine initial positions of molecules to a part of World. In the case below, 60 molecules are positioned inside the given Sphere. Diffusion of the molecules placed here is NOT restricted in the Shape. This Shape is only for the initialization.
End of explanation
# The below codes defines a model and bind it to w3(spatiocyte world).
# Here, the model contains a species 'B' for the following context.
from ecell4 import species_attributes, get_model
with species_attributes():
M | {'dimension': 2}
B | {'radius': 0.0025, 'D': 0.1, 'location': 'M'}
model = get_model()
w3.bind_to(model)
membrane = SphericalSurface(Real3(0.5, 0.5, 0.5), 0.4) # This is equivalent to call `Sphere(Real3(0.5, 0.5, 0.5), 0.4).surface()`
w3.add_structure(Species("M"), membrane)
w5.add_structure(Species("M"), membrane)
Explanation: A property of Species, 'location', is available to restrict diffusion of molecules. 'location' is not fully supported yet, but only supported in spatiocyte and meso. add_structure defines a new structure given as a pair of Species and Shape.
NOTICE: To use add_structure with spatiocyte, you should define a model to describe the attributes of your Species and bind it to an instance of spatiocyte.World.
End of explanation
sp2 = Species("B", 0.0025, 0.1, "M") # `'location'` is the fourth argument
w3.add_molecules(sp2, 60)
w5.add_molecules(sp2, 60)
Explanation: After defining a structure, you can bind molecules to the structure as follows:
End of explanation
rng1 = GSLRandomNumberGenerator()
print([rng1.uniform_int(1, 6) for _ in range(20)])
Explanation: The molecules bound to a Species named B diffuse on a structure named M, which has a shape of SphericalSurface (a hollow sphere). In spatiocyte, a structure is represented as a set of particles with Species M occupying a voxel. It means that molecules not belonging to the structure is not able to overlap the voxel and it causes a collision. On the other hand, in meso, a structure means a list of subvolumes. Thus, a structure doesn't avoid an incursion of other particles.
3.5. Random Number Generator
A random number generator is also a part of World. All Worlds except ODEWorld store a random number generator, and updates it when the simulation needs a random value. On E-Cell4, only one class GSLRandomNumberGenerator is implemented as a random number generator.
End of explanation
rng2 = GSLRandomNumberGenerator()
print([rng2.uniform_int(1, 6) for _ in range(20)]) # => same as above
Explanation: With no argument, the random number generator is always initialized with a seed, 0.
End of explanation
rng2 = GSLRandomNumberGenerator()
rng2.seed(15)
print([rng2.uniform_int(1, 6) for _ in range(20)])
Explanation: You can initialize the seed with an integer as follows:
End of explanation
rng2 = GSLRandomNumberGenerator()
rng2.seed()
print([rng2.uniform_int(1, 6) for _ in range(20)])
Explanation: When you call the seed function with no input, the seed is drawn from the current time.
End of explanation
print(rng1.uniform(0.0, 1.0))
print(rng1.uniform_int(0, 100))
print(rng1.gaussian(1.0))
Explanation: GSLRandomNumberGenerator provides several ways to get a random number.
End of explanation
rng = GSLRandomNumberGenerator()
rng.seed()
w1 = gillespie.World(Real3(1, 1, 1), rng=rng)
Explanation: World accepts a random number generator at the construction. As a default, GSLRandomNumberGenerator() is used. Thus, when you don't give a generator, behavior of the simulation is always same (determinisitc).
End of explanation
print(w1.rng().uniform(0.0, 1.0))
Explanation: You can access the GSLRandomNumberGenerator in a World through rng function.
End of explanation |
4,935 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Rømer and Light Travel Time Effects (ltte)
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step2: Now let's add a light curve dataset to see how ltte affects the timings of eclipses.
Step3: Relevant Parameters
The 'ltte' parameter in context='compute' defines whether light travel time effects are taken into account or not.
Step4: Comparing with and without ltte
In order to have a binary system with any noticeable ltte effects, we'll set a somewhat extreme mass-ratio and semi-major axis.
Step5: We'll just ignore the fact that this will be a completely unphysical system since we'll leave the radii and temperatures alone despite somewhat ridiculous masses - but since the masses and radii disagree so much, we'll have to abandon atmospheres and use blackbody. | Python Code:
!pip install -I "phoebe>=2.1,<2.2"
Explanation: Rømer and Light Travel Time Effects (ltte)
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger('error')
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
End of explanation
b.add_dataset('lc', times=phoebe.linspace(-0.05, 0.05, 51), dataset='lc01')
Explanation: Now let's add a light curve dataset to see how ltte affects the timings of eclipses.
End of explanation
print b['ltte@compute']
Explanation: Relevant Parameters
The 'ltte' parameter in context='compute' defines whether light travel time effects are taken into account or not.
End of explanation
b['sma@binary'] = 100
b['q'] = 0.1
Explanation: Comparing with and without ltte
In order to have a binary system with any noticeable ltte effects, we'll set a somewhat extreme mass-ratio and semi-major axis.
End of explanation
b.set_value_all('atm', 'blackbody')
b.set_value_all('ld_func', 'logarithmic')
b.run_compute(irrad_method='none', ltte=False, model='ltte_off')
b.run_compute(irrad_method='none', ltte=True, model='ltte_on')
afig, mplfig = b.plot(show=True)
Explanation: We'll just ignore the fact that this will be a completely unphysical system since we'll leave the radii and temperatures alone despite somewhat ridiculous masses - but since the masses and radii disagree so much, we'll have to abandon atmospheres and use blackbody.
End of explanation |
4,936 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Programming
Step1: Question
Step2: Passing values to functions
Step3: Conclusion
Step4: Initialization of variables within function definition
Step5: * operator
1. Unpacks a list or tuple into positional arguments
** operator
2. Unpacks a dictionary into keyword arguments
Types of parametres
Formal parameters (Done above, repeat)
Keyword Arguments (Done above, repeat)
*variable_name | Python Code:
#Example_1: return keyword
def straight_line(slope,intercept,x):
"Computes straight line y value"
y = slope*x + intercept
return y
print("y =",straight_line(1,0,5)) #Actual Parameters
print("y =",straight_line(0,3,10))
#By default, arguments have a positional behaviour
#Each of the parameters here is called a formal parameter
#Example_2
def straight_line(slope,intercept,x):
y = slope*x + intercept
print(y)
straight_line(1,0,5)
straight_line(0,3,10)
#By default, arguments have a positional behaviour
#Functions can have no inputs or return.
Explanation: Introduction to Programming : Lecture 5
Agenda for the class
Introduction to functions
Practice Questions
Functions in Python
Syntax
def function_name(input_1,input_2,...):
'''
Process input to get output
'''
return [output1,output2,..]
End of explanation
straight_line(x=2,intercept=7,slope=3)
Explanation: Question: Is it necessary to know the order of parametres to send values to a function?
End of explanation
list_zeroes=[0 for x in range(0,5)]
print(list_zeroes)
def case1(list1):
list1[1]=1
print(list1)
case1(list_zeroes)
print(list_zeroes)
#Passing variables to a function
list_zeroes=[0 for x in range(0,5)]
print(list_zeroes)
def case2(list1):
list1=[2,3,4,5,6]
print(list1)
case2(list_zeroes)
print(list_zeroes)
Explanation: Passing values to functions
End of explanation
def calculator(num1,num2,operator='+'):
if (operator == '+'):
result = num1 + num2
elif(operator == '-'):
result = num1 - num2
return result
n1=int(input("Enter value 1: "))
n2=int(input("Enter value 2: "))
v_1 = calculator(n1,n2)
print(v_1)
v_2 = calculator(n1,n2,'-')
print(v_2)
# Here, the function main is termed as the caller function, and the function
# calculator is termed as the called function
# The operator parameter here is called a keyword-argument
Explanation: Conclusion:
If the input is a mutable datatype, and we make changes to it, then the changes are refelected back on the original variable. (Case-1)
If the input is a mutable datatype, and we assign a new value to it, then the changes are not refelected back on the original variable. (Case-2)
Default Parameters
End of explanation
def f(a, L=[]):
L.append(a)
return L
print(f(1))
print(f(2))
print(f(3))
# Caution ! The list L[] was initialised only once.
#The paramter initialization to the default value happens at function definition and not at function call.
Explanation: Initialization of variables within function definition
End of explanation
def sum(*values):
s = 0
for v in values:
s = s + v
return s
s = sum(1, 2, 3, 4, 5)
print(s)
def get_a(**values):
return values['a']
s = get_a(a=1, b=2) # returns 1
print(s)
def sum(*values, **options):
s = 0
for i in values:
s = s + i
if "neg" in options:
if options["neg"]:
s = -s
return s
s = sum(1, 2, 3, 4, 5) # returns 15
print(s)
s = sum(1, 2, 3, 4, 5, neg=True) # returns -15
print(s)
s = sum(1, 2, 3, 4, 5, neg=False) # returns 15
print(s)
Explanation: * operator
1. Unpacks a list or tuple into positional arguments
** operator
2. Unpacks a dictionary into keyword arguments
Types of parametres
Formal parameters (Done above, repeat)
Keyword Arguments (Done above, repeat)
*variable_name : interprets the arguments as a tuple
**variable_name : interprets the arguments as a dictionary
End of explanation |
4,937 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hough Transform
Step1: Hough transform combined with a polygonal mask
Notice that lines are more well-defined | Python Code:
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
# convert to grayscale and smooth with a Gaussian
img = mpimg.imread('testimg.jpg')
gray_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
kernel_size = 5
blurred = cv2.GaussianBlur(gray_img, (kernel_size, kernel_size), 0)
# edge detect with Canny
low = 50
high = 150
edges = cv2.Canny(blurred, low, high)
# build lines with Hough transform
rho = 1
theta = np.pi/180
threshold = 1
min_line_length = 10
max_line_gap = 1
line_img = np.copy(img) * 0 # blank of same dim as our img
lines = cv2.HoughLinesP(edges, rho, theta,
threshold,np.array([]), min_line_length, max_line_gap)
# draw!
for line in lines:
for x1, y1, x2, y2 in line:
cv2.line(line_img, (x1,y1), (x2,y2), (255,0,0), 10)
# colorized binary image
colorized = np.dstack((edges, edges, edges))
# draw the colorized lines
combined = cv2.addWeighted(colorized, 0.8, line_img, 1, 0)
plt.imshow(combined)
plt.show()
Explanation: Hough Transform
End of explanation
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
# convert to grayscale and smooth with a Gaussian
img = mpimg.imread('testimg.jpg')
gray_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
kernel_size = 5
blurred = cv2.GaussianBlur(gray_img, (kernel_size, kernel_size), 0)
# edge detect with Canny
low = 50
high = 150
edges = cv2.Canny(blurred, low, high)
# build masked edge
mask = np.zeros_like(edges)
mask_ignored = 255
imshape = img.shape
# TODO turn these knobs until the mask selects the lane area
#def draw(n1, n2, n3, n4):
vertices = np.array([[(275,imshape[0]),(650, 200),
(imshape[1], 1000),
(imshape[1],imshape[0])]], dtype=np.int32)
cv2.fillPoly(mask, vertices, mask_ignored)
masked_edges = cv2.bitwise_and(edges, mask)
# build lines with Hough transform
rho = 1
theta = np.pi/180
threshold = 1
min_line_length = 5
max_line_gap = 1
line_img = np.copy(img) * 0 # blank of same dim as our img
lines = cv2.HoughLinesP(masked_edges, rho, theta,
threshold, np.array([]),
min_line_length, max_line_gap)
# draw!
for line in lines:
for x1, y1, x2, y2 in line:
cv2.line(line_img, (x1,y1), (x2,y2), (255,0,0), 10)
# colorized binary image
colorized = np.dstack((edges, edges, edges))
# draw the colorized lines
combined = cv2.addWeighted(colorized, 0.8, line_img, 1, 0)
plt.imshow(combined)
plt.show()
Explanation: Hough transform combined with a polygonal mask
Notice that lines are more well-defined
End of explanation |
4,938 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Prepare babyweight dataset.
Learning Objectives
Setup up the environment
Preprocess natality dataset
Augment natality dataset
Create the train and eval tables in BigQuery
Export data from BigQuery to GCS in CSV format
Introduction
In this notebook, we will prepare the babyweight dataset for model development and training to predict the weight of a baby before it is born. We will use BigQuery to perform data augmentation and preprocessing which will be used for AutoML Tables, BigQuery ML, and Keras models trained on Cloud AI Platform.
In this lab, we will set up the environment, create the project dataset, preprocess and augment natality dataset, create the train and eval tables in BigQuery, and export data from BigQuery to GCS in CSV format.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Set up environment variables and load necessary libraries
Check that the Google BigQuery library is installed and if not, install it.
Step1: Note
Step2: Lab Task #1
Step3: The source dataset
Our dataset is hosted in BigQuery. The CDC's Natality data has details on US births from 1969 to 2008 and is a publically available dataset, meaning anyone with a GCP account has access. Click here to access the dataset.
The natality dataset is relatively large at almost 138 million rows and 31 columns, but simple to understand. weight_pounds is the target, the continuous value we’ll train a model to predict.
Create a BigQuery Dataset and Google Cloud Storage Bucket
A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called babyweight if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too.
Step4: Create the training and evaluation data tables
Since there is already a publicly available dataset, we can simply create the training and evaluation data tables using this raw input data. First we are going to create a subset of the data limiting our columns to weight_pounds, is_male, mother_age, plurality, and gestation_weeks as well as some simple filtering and a column to hash on for repeatable splitting.
Note
Step5: Lab Task #3
Step6: Lab Task #4
Step7: Split augmented dataset into eval dataset
Exercise
Step8: Verify table creation
Verify that you created the dataset and training data table.
Step9: Lab Task #5
Step10: Verify CSV creation
Verify that we correctly created the CSV files in our bucket. | Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip install --user google-cloud-bigquery==1.25.0
Explanation: Prepare babyweight dataset.
Learning Objectives
Setup up the environment
Preprocess natality dataset
Augment natality dataset
Create the train and eval tables in BigQuery
Export data from BigQuery to GCS in CSV format
Introduction
In this notebook, we will prepare the babyweight dataset for model development and training to predict the weight of a baby before it is born. We will use BigQuery to perform data augmentation and preprocessing which will be used for AutoML Tables, BigQuery ML, and Keras models trained on Cloud AI Platform.
In this lab, we will set up the environment, create the project dataset, preprocess and augment natality dataset, create the train and eval tables in BigQuery, and export data from BigQuery to GCS in CSV format.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Set up environment variables and load necessary libraries
Check that the Google BigQuery library is installed and if not, install it.
End of explanation
import os
from google.cloud import bigquery
Explanation: Note: Restart your kernel to use updated packages.
Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage.
Import necessary libraries.
End of explanation
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
# TODO: Change environment variables
PROJECT = "cloud-training-demos" # REPLACE WITH YOUR PROJECT NAME
BUCKET = "BUCKET" # REPLACE WITH YOUR BUCKET NAME, DEFAULT BUCKET WILL BE PROJECT ID
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["BUCKET"] = PROJECT if BUCKET == "BUCKET" else BUCKET # DEFAULT BUCKET WILL BE PROJECT ID
os.environ["REGION"] = REGION
if PROJECT == "cloud-training-demos":
print("Don't forget to update your PROJECT name! Currently:", PROJECT)
Explanation: Lab Task #1: Set environment variables.
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
End of explanation
%%bash
## Create a BigQuery dataset for babyweight if it doesn't exist
datasetexists=$(bq ls -d | grep -w # TODO: Add dataset name)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: babyweight"
bq --location=US mk --dataset \
--description "Babyweight" \
$PROJECT:# TODO: Add dataset name
echo "Here are your current datasets:"
bq ls
fi
## Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${BUCKET}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET}
echo "Here are your current buckets:"
gsutil ls
fi
Explanation: The source dataset
Our dataset is hosted in BigQuery. The CDC's Natality data has details on US births from 1969 to 2008 and is a publically available dataset, meaning anyone with a GCP account has access. Click here to access the dataset.
The natality dataset is relatively large at almost 138 million rows and 31 columns, but simple to understand. weight_pounds is the target, the continuous value we’ll train a model to predict.
Create a BigQuery Dataset and Google Cloud Storage Bucket
A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called babyweight if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too.
End of explanation
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data AS
SELECT
# TODO: Add selected raw features and preprocessed features
FROM
publicdata.samples.natality
WHERE
# TODO: Add filters
Explanation: Create the training and evaluation data tables
Since there is already a publicly available dataset, we can simply create the training and evaluation data tables using this raw input data. First we are going to create a subset of the data limiting our columns to weight_pounds, is_male, mother_age, plurality, and gestation_weeks as well as some simple filtering and a column to hash on for repeatable splitting.
Note: The dataset in the create table code below is the one created previously, e.g. "babyweight".
Lab Task #2: Preprocess and filter dataset
We have some preprocessing and filtering we would like to do to get our data in the right format for training.
Preprocessing:
* Cast is_male from BOOL to STRING
* Cast plurality from INTEGER to STRING where [1, 2, 3, 4, 5] becomes ["Single(1)", "Twins(2)", "Triplets(3)", "Quadruplets(4)", "Quintuplets(5)"]
* Add hashcolumn hashing on year and month
Filtering:
* Only want data for years later than 2000
* Only want baby weights greater than 0
* Only want mothers whose age is greater than 0
* Only want plurality to be greater than 0
* Only want the number of weeks of gestation to be greater than 0
End of explanation
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_augmented_data AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
hashmonth
FROM
babyweight.babyweight_data
UNION ALL
SELECT
# TODO: Replace is_male and plurality as indicated above
FROM
babyweight.babyweight_data
Explanation: Lab Task #3: Augment dataset to simulate missing data
Now we want to augment our dataset with our simulated babyweight data by setting all gender information to Unknown and setting plurality of all non-single births to Multiple(2+).
End of explanation
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data_train AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_augmented_data
WHERE
# TODO: Modulo hashmonth to be approximately 75% of the data
Explanation: Lab Task #4: Split augmented dataset into train and eval sets
Using hashmonth, apply a modulo to get approximately a 75/25 train/eval split.
Split augmented dataset into train dataset
Exercise: RUN the query to create the training data table.
End of explanation
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data_eval AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_augmented_data
WHERE
# TODO: Modulo hashmonth to be approximately 25% of the data
Explanation: Split augmented dataset into eval dataset
Exercise: RUN the query to create the evaluation data table.
End of explanation
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_train
LIMIT 0
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_eval
LIMIT 0
Explanation: Verify table creation
Verify that you created the dataset and training data table.
End of explanation
# Construct a BigQuery client object.
client = bigquery.Client()
dataset_name = # TODO: Add dataset name
# Create dataset reference object
dataset_ref = client.dataset(
dataset_id=dataset_name, project=client.project)
# Export both train and eval tables
for step in [# TODO: Loop over train and eval]:
destination_uri = os.path.join(
"gs://", BUCKET, dataset_name, "data", "{}*.csv".format(step))
table_name = "babyweight_data_{}".format(step)
table_ref = dataset_ref.table(table_name)
extract_job = client.extract_table(
table_ref,
destination_uri,
# Location must match that of the source table.
location="US",
) # API request
extract_job.result() # Waits for job to complete.
print("Exported {}:{}.{} to {}".format(
client.project, dataset_name, table_name, destination_uri))
Explanation: Lab Task #5: Export from BigQuery to CSVs in GCS
Use BigQuery Python API to export our train and eval tables to Google Cloud Storage in the CSV format to be used later for TensorFlow/Keras training. We'll want to use the dataset we've been using above as well as repeat the process for both training and evaluation data.
End of explanation
%%bash
gsutil ls gs://${BUCKET}/babyweight/data/*.csv
%%bash
gsutil cat gs://${BUCKET}/babyweight/data/train000000000000.csv | head -5
%%bash
gsutil cat gs://${BUCKET}/babyweight/data/eval000000000000.csv | head -5
Explanation: Verify CSV creation
Verify that we correctly created the CSV files in our bucket.
End of explanation |
4,939 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
FastText Model
Introduces Gensim's fastText model and demonstrates its use on the Lee Corpus.
Step1: Here, we'll learn to work with fastText library for training word-embedding
models, saving & loading them and performing similarity operations & vector
lookups analogous to Word2Vec.
When to use fastText?
The main principle behind fastText <https
Step2: Training hyperparameters
^^^^^^^^^^^^^^^^^^^^^^^^
Hyperparameters for training the model follow the same pattern as Word2Vec. FastText supports the following parameters from the original word2vec
Step3: The save_word2vec_format is also available for fastText models, but will
cause all vectors for ngrams to be lost.
As a result, a model loaded in this way will behave as a regular word2vec model.
Word vector lookup
All information necessary for looking up fastText words (incl. OOV words) is
contained in its model.wv attribute.
If you don't need to continue training your model, you can export & save this .wv
attribute and discard model, to save space and RAM.
Step4: Similarity operations
Similarity operations work the same way as word2vec. Out-of-vocabulary words can also be used, provided they have at least one character ngram present in the training data.
Step5: Syntactically similar words generally have high similarity in fastText models, since a large number of the component char-ngrams will be the same. As a result, fastText generally does better at syntactic tasks than Word2Vec. A detailed comparison is provided here <Word2Vec_FastText_Comparison.ipynb>_.
Other similarity operations
^^^^^^^^^^^^^^^^^^^^^^^^^^^
The example training corpus is a toy corpus, results are not expected to be good, for proof-of-concept only
Step6: Word Movers distance
^^^^^^^^^^^^^^^^^^^^
You'll need the optional pyemd library for this section, pip install pyemd.
Let's start with two sentences
Step7: Remove their stopwords.
Step8: Compute the Word Movers Distance between the two sentences.
Step9: That's all! You've made it to the end of this tutorial. | Python Code:
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
Explanation: FastText Model
Introduces Gensim's fastText model and demonstrates its use on the Lee Corpus.
End of explanation
from pprint import pprint as print
from gensim.models.fasttext import FastText
from gensim.test.utils import datapath
# Set file names for train and test data
corpus_file = datapath('lee_background.cor')
model = FastText(vector_size=100)
# build the vocabulary
model.build_vocab(corpus_file=corpus_file)
# train the model
model.train(
corpus_file=corpus_file, epochs=model.epochs,
total_examples=model.corpus_count, total_words=model.corpus_total_words,
)
print(model)
Explanation: Here, we'll learn to work with fastText library for training word-embedding
models, saving & loading them and performing similarity operations & vector
lookups analogous to Word2Vec.
When to use fastText?
The main principle behind fastText <https://github.com/facebookresearch/fastText>_ is that the
morphological structure of a word carries important information about the meaning of the word.
Such structure is not taken into account by traditional word embeddings like Word2Vec, which
train a unique word embedding for every individual word.
This is especially significant for morphologically rich languages (German, Turkish) in which a
single word can have a large number of morphological forms, each of which might occur rarely,
thus making it hard to train good word embeddings.
fastText attempts to solve this by treating each word as the aggregation of its subwords.
For the sake of simplicity and language-independence, subwords are taken to be the character ngrams
of the word. The vector for a word is simply taken to be the sum of all vectors of its component char-ngrams.
According to a detailed comparison of Word2Vec and fastText in
this notebook <https://github.com/RaRe-Technologies/gensim/blob/develop/docs/notebooks/Word2Vec_FastText_Comparison.ipynb>__,
fastText does significantly better on syntactic tasks as compared to the original Word2Vec,
especially when the size of the training corpus is small. Word2Vec slightly outperforms fastText
on semantic tasks though. The differences grow smaller as the size of the training corpus increases.
fastText can obtain vectors even for out-of-vocabulary (OOV) words, by summing up vectors for its
component char-ngrams, provided at least one of the char-ngrams was present in the training data.
Training models
For the following examples, we'll use the Lee Corpus (which you already have if you've installed Gensim) for training our model.
End of explanation
# Save a model trained via Gensim's fastText implementation to temp.
import tempfile
import os
with tempfile.NamedTemporaryFile(prefix='saved_model_gensim-', delete=False) as tmp:
model.save(tmp.name, separately=[])
# Load back the same model.
loaded_model = FastText.load(tmp.name)
print(loaded_model)
os.unlink(tmp.name) # demonstration complete, don't need the temp file anymore
Explanation: Training hyperparameters
^^^^^^^^^^^^^^^^^^^^^^^^
Hyperparameters for training the model follow the same pattern as Word2Vec. FastText supports the following parameters from the original word2vec:
model: Training architecture. Allowed values: cbow, skipgram (Default cbow)
vector_size: Dimensionality of vector embeddings to be learnt (Default 100)
alpha: Initial learning rate (Default 0.025)
window: Context window size (Default 5)
min_count: Ignore words with number of occurrences below this (Default 5)
loss: Training objective. Allowed values: ns, hs, softmax (Default ns)
sample: Threshold for downsampling higher-frequency words (Default 0.001)
negative: Number of negative words to sample, for ns (Default 5)
epochs: Number of epochs (Default 5)
sorted_vocab: Sort vocab by descending frequency (Default 1)
threads: Number of threads to use (Default 12)
In addition, fastText has three additional parameters:
min_n: min length of char ngrams (Default 3)
max_n: max length of char ngrams (Default 6)
bucket: number of buckets used for hashing ngrams (Default 2000000)
Parameters min_n and max_n control the lengths of character ngrams that each word is broken down into while training and looking up embeddings. If max_n is set to 0, or to be lesser than min_n\ , no character ngrams are used, and the model effectively reduces to Word2Vec.
To bound the memory requirements of the model being trained, a hashing function is used that maps ngrams to integers in 1 to K. For hashing these character sequences, the Fowler-Noll-Vo hashing function <http://www.isthe.com/chongo/tech/comp/fnv>_ (FNV-1a variant) is employed.
Note: You can continue to train your model while using Gensim's native implementation of fastText.
Saving/loading models
Models can be saved and loaded via the load and save methods, just like
any other model in Gensim.
End of explanation
wv = model.wv
print(wv)
#
# FastText models support vector lookups for out-of-vocabulary words by summing up character ngrams belonging to the word.
#
print('night' in wv.key_to_index)
print('nights' in wv.key_to_index)
print(wv['night'])
print(wv['nights'])
Explanation: The save_word2vec_format is also available for fastText models, but will
cause all vectors for ngrams to be lost.
As a result, a model loaded in this way will behave as a regular word2vec model.
Word vector lookup
All information necessary for looking up fastText words (incl. OOV words) is
contained in its model.wv attribute.
If you don't need to continue training your model, you can export & save this .wv
attribute and discard model, to save space and RAM.
End of explanation
print("nights" in wv.key_to_index)
print("night" in wv.key_to_index)
print(wv.similarity("night", "nights"))
Explanation: Similarity operations
Similarity operations work the same way as word2vec. Out-of-vocabulary words can also be used, provided they have at least one character ngram present in the training data.
End of explanation
print(wv.most_similar("nights"))
print(wv.n_similarity(['sushi', 'shop'], ['japanese', 'restaurant']))
print(wv.doesnt_match("breakfast cereal dinner lunch".split()))
print(wv.most_similar(positive=['baghdad', 'england'], negative=['london']))
print(wv.evaluate_word_analogies(datapath('questions-words.txt')))
Explanation: Syntactically similar words generally have high similarity in fastText models, since a large number of the component char-ngrams will be the same. As a result, fastText generally does better at syntactic tasks than Word2Vec. A detailed comparison is provided here <Word2Vec_FastText_Comparison.ipynb>_.
Other similarity operations
^^^^^^^^^^^^^^^^^^^^^^^^^^^
The example training corpus is a toy corpus, results are not expected to be good, for proof-of-concept only
End of explanation
sentence_obama = 'Obama speaks to the media in Illinois'.lower().split()
sentence_president = 'The president greets the press in Chicago'.lower().split()
Explanation: Word Movers distance
^^^^^^^^^^^^^^^^^^^^
You'll need the optional pyemd library for this section, pip install pyemd.
Let's start with two sentences:
End of explanation
from gensim.parsing.preprocessing import STOPWORDS
sentence_obama = [w for w in sentence_obama if w not in STOPWORDS]
sentence_president = [w for w in sentence_president if w not in STOPWORDS]
Explanation: Remove their stopwords.
End of explanation
distance = wv.wmdistance(sentence_obama, sentence_president)
print(f"Word Movers Distance is {distance} (lower means closer)")
Explanation: Compute the Word Movers Distance between the two sentences.
End of explanation
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
img = mpimg.imread('fasttext-logo-color-web.png')
imgplot = plt.imshow(img)
_ = plt.axis('off')
Explanation: That's all! You've made it to the end of this tutorial.
End of explanation |
4,940 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Head model and forward computation
The aim of this tutorial is to be a getting started for forward computation.
For more extensive details and presentation of the general
concepts for forward modeling, see ch_forward.
Step1: Computing the forward operator
To compute a forward operator we need
Step2: Visualizing the coregistration
The coregistration is the operation that allows to position the head and the
sensors in a common coordinate system. In the MNE software the transformation
to align the head and the sensors in stored in a so-called trans file.
It is a FIF file that ends with -trans.fif. It can be obtained with
Step3: Compute Source Space
The source space defines the position and orientation of the candidate source
locations. There are two types of source spaces
Step4: The surface based source space src contains two parts, one for the left
hemisphere (258 locations) and one for the right hemisphere (258
locations). Sources can be visualized on top of the BEM surfaces in purple.
Step5: To compute a volume based source space defined with a grid of candidate
dipoles inside a sphere of radius 90mm centered at (0.0, 0.0, 40.0) mm
you can use the following code.
Obviously here, the sphere is not perfect. It is not restricted to the
brain and it can miss some parts of the cortex.
Step6: To compute a volume based source space defined with a grid of candidate
dipoles inside the brain (requires the
Step7: <div class="alert alert-info"><h4>Note</h4><p>Some sources may appear to be outside the BEM inner skull contour.
This is because the ``slices`` are decimated for plotting here.
Each slice in the figure actually represents several MRI slices,
but only the MRI voxels and BEM boundaries for a single (midpoint
of the given slice range) slice are shown, whereas the source space
points plotted on that midpoint slice consist of all points
for which that slice (out of all slices shown) was the closest.</p></div>
Now let's see how to view all sources in 3D.
Step8: Compute forward solution
We can now compute the forward solution.
To reduce computation we'll just compute a single layer BEM (just inner
skull) that can then be used for MEG (not EEG).
We specify if we want a one-layer or a three-layer BEM using the
conductivity parameter.
The BEM solution requires a BEM model which describes the geometry
of the head the conductivities of the different tissues.
Step9: Note that the
Step10: <div class="alert alert-danger"><h4>Warning</h4><p>Forward computation can remove vertices that are too close to (or outside)
the inner skull surface. For example, here we have gone from 516 to 474
vertices in use. For many functions, such as
Step11: We can explore the content of fwd to access the numpy array that contains
the gain matrix.
Step12: To extract the numpy array containing the forward operator corresponding to
the source space fwd['src'] with cortical orientation constraint
we can use the following | Python Code:
import os.path as op
import mne
from mne.datasets import sample
data_path = sample.data_path()
# the raw file containing the channel location + types
sample_dir = op.join(data_path, 'MEG', 'sample',)
raw_fname = op.join(sample_dir, 'sample_audvis_raw.fif')
# The paths to Freesurfer reconstructions
subjects_dir = op.join(data_path, 'subjects')
subject = 'sample'
Explanation: Head model and forward computation
The aim of this tutorial is to be a getting started for forward computation.
For more extensive details and presentation of the general
concepts for forward modeling, see ch_forward.
End of explanation
plot_bem_kwargs = dict(
subject=subject, subjects_dir=subjects_dir,
brain_surfaces='white', orientation='coronal',
slices=[50, 100, 150, 200])
mne.viz.plot_bem(**plot_bem_kwargs)
Explanation: Computing the forward operator
To compute a forward operator we need:
a -trans.fif file that contains the coregistration info.
a source space
the :term:BEM surfaces
Compute and visualize BEM surfaces
The :term:BEM surfaces are the triangulations of the interfaces between
different tissues needed for forward computation. These surfaces are for
example the inner skull surface, the outer skull surface and the outer skin
surface, a.k.a. scalp surface.
Computing the BEM surfaces requires FreeSurfer and makes use of
the command-line tools mne watershed_bem or mne flash_bem, or
the related functions :func:mne.bem.make_watershed_bem or
:func:mne.bem.make_flash_bem.
Here we'll assume it's already computed. It takes a few minutes per subject.
For EEG we use 3 layers (inner skull, outer skull, and skin) while for
MEG 1 layer (inner skull) is enough.
Let's look at these surfaces. The function :func:mne.viz.plot_bem
assumes that you have the bem folder of your subject's FreeSurfer
reconstruction, containing the necessary surface files. Here we use a smaller
than default subset of slices for speed.
End of explanation
# The transformation file obtained by coregistration
trans = op.join(sample_dir, 'sample_audvis_raw-trans.fif')
info = mne.io.read_info(raw_fname)
# Here we look at the dense head, which isn't used for BEM computations but
# is useful for coregistration.
mne.viz.plot_alignment(info, trans, subject=subject, dig=True,
meg=['helmet', 'sensors'], subjects_dir=subjects_dir,
surfaces='head-dense')
Explanation: Visualizing the coregistration
The coregistration is the operation that allows to position the head and the
sensors in a common coordinate system. In the MNE software the transformation
to align the head and the sensors in stored in a so-called trans file.
It is a FIF file that ends with -trans.fif. It can be obtained with
:func:mne.gui.coregistration (or its convenient command line
equivalent mne coreg), or mrilab if you're using a Neuromag
system.
Here we assume the coregistration is done, so we just visually check the
alignment with the following code.
End of explanation
src = mne.setup_source_space(subject, spacing='oct4', add_dist='patch',
subjects_dir=subjects_dir)
print(src)
Explanation: Compute Source Space
The source space defines the position and orientation of the candidate source
locations. There are two types of source spaces:
surface-based source space when the candidates are confined to a
surface.
volumetric or discrete source space when the candidates are discrete,
arbitrarily located source points bounded by the surface.
Surface-based source space is computed using
:func:mne.setup_source_space, while volumetric source space is computed
using :func:mne.setup_volume_source_space.
We will now compute a surface-based source space with an 'oct4'
resolution. See setting_up_source_space for details on source space
definition and spacing parameter.
<div class="alert alert-danger"><h4>Warning</h4><p>``'oct4'`` is used here just for speed, for real analyses the recommended
spacing is ``'oct6'``.</p></div>
End of explanation
mne.viz.plot_bem(src=src, **plot_bem_kwargs)
Explanation: The surface based source space src contains two parts, one for the left
hemisphere (258 locations) and one for the right hemisphere (258
locations). Sources can be visualized on top of the BEM surfaces in purple.
End of explanation
sphere = (0.0, 0.0, 0.04, 0.09)
vol_src = mne.setup_volume_source_space(
subject, subjects_dir=subjects_dir, sphere=sphere, sphere_units='m',
add_interpolator=False) # just for speed!
print(vol_src)
mne.viz.plot_bem(src=vol_src, **plot_bem_kwargs)
Explanation: To compute a volume based source space defined with a grid of candidate
dipoles inside a sphere of radius 90mm centered at (0.0, 0.0, 40.0) mm
you can use the following code.
Obviously here, the sphere is not perfect. It is not restricted to the
brain and it can miss some parts of the cortex.
End of explanation
surface = op.join(subjects_dir, subject, 'bem', 'inner_skull.surf')
vol_src = mne.setup_volume_source_space(
subject, subjects_dir=subjects_dir, surface=surface,
add_interpolator=False) # Just for speed!
print(vol_src)
mne.viz.plot_bem(src=vol_src, **plot_bem_kwargs)
Explanation: To compute a volume based source space defined with a grid of candidate
dipoles inside the brain (requires the :term:BEM surfaces) you can use the
following.
End of explanation
fig = mne.viz.plot_alignment(subject=subject, subjects_dir=subjects_dir,
surfaces='white', coord_frame='mri',
src=src)
mne.viz.set_3d_view(fig, azimuth=173.78, elevation=101.75,
distance=0.30, focalpoint=(-0.03, -0.01, 0.03))
Explanation: <div class="alert alert-info"><h4>Note</h4><p>Some sources may appear to be outside the BEM inner skull contour.
This is because the ``slices`` are decimated for plotting here.
Each slice in the figure actually represents several MRI slices,
but only the MRI voxels and BEM boundaries for a single (midpoint
of the given slice range) slice are shown, whereas the source space
points plotted on that midpoint slice consist of all points
for which that slice (out of all slices shown) was the closest.</p></div>
Now let's see how to view all sources in 3D.
End of explanation
conductivity = (0.3,) # for single layer
# conductivity = (0.3, 0.006, 0.3) # for three layers
model = mne.make_bem_model(subject='sample', ico=4,
conductivity=conductivity,
subjects_dir=subjects_dir)
bem = mne.make_bem_solution(model)
Explanation: Compute forward solution
We can now compute the forward solution.
To reduce computation we'll just compute a single layer BEM (just inner
skull) that can then be used for MEG (not EEG).
We specify if we want a one-layer or a three-layer BEM using the
conductivity parameter.
The BEM solution requires a BEM model which describes the geometry
of the head the conductivities of the different tissues.
End of explanation
fwd = mne.make_forward_solution(raw_fname, trans=trans, src=src, bem=bem,
meg=True, eeg=False, mindist=5.0, n_jobs=1,
verbose=True)
print(fwd)
Explanation: Note that the :term:BEM does not involve any use of the trans file. The BEM
only depends on the head geometry and conductivities.
It is therefore independent from the MEG data and the head position.
Let's now compute the forward operator, commonly referred to as the
gain or leadfield matrix.
See :func:mne.make_forward_solution for details on the meaning of each
parameter.
End of explanation
print(f'Before: {src}')
print(f'After: {fwd["src"]}')
Explanation: <div class="alert alert-danger"><h4>Warning</h4><p>Forward computation can remove vertices that are too close to (or outside)
the inner skull surface. For example, here we have gone from 516 to 474
vertices in use. For many functions, such as
:func:`mne.compute_source_morph`, it is important to pass ``fwd['src']``
or ``inv['src']`` so that this removal is adequately accounted for.</p></div>
End of explanation
leadfield = fwd['sol']['data']
print("Leadfield size : %d sensors x %d dipoles" % leadfield.shape)
Explanation: We can explore the content of fwd to access the numpy array that contains
the gain matrix.
End of explanation
fwd_fixed = mne.convert_forward_solution(fwd, surf_ori=True, force_fixed=True,
use_cps=True)
leadfield = fwd_fixed['sol']['data']
print("Leadfield size : %d sensors x %d dipoles" % leadfield.shape)
Explanation: To extract the numpy array containing the forward operator corresponding to
the source space fwd['src'] with cortical orientation constraint
we can use the following:
End of explanation |
4,941 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Enter Team Member Names here (double click to edit)
Step1: Question 1
Step2: Question 3
Step3: <a id="svm_using"></a>
<a href="#top">Back to Top</a>
Using Linear SVMs
Exercise 1
Step4: Our calculation is the same. Between the 6200 items in the coefficient matrix and the 62 values in the intercept vector, the total number of weights computed is 6262, which is what we originally calculated
Exercise 2
Step5: <a id="nonlinear"></a>
<a href="#top">Back to Top</a>
Non-linear SVMs
Now let's explore the use of non-linear svms. More explicitly, using different kernels. Take a look at the example training and testing code below for the non-linear SVM. All parameters are left as default, except we change the kernel to be rbf. Run the block of code below.
Step6: Exercise 3
Step7: The most accurate kernel with default parameters is the poly kernel.
Exercise 4
Step8: A. The highest accuracy we could achieve was 1.0, or 100% accuracy. This was achieved with a gamma value of 0.02
B. We would not expect this to generalize because this gamma value is large enough that the model is likely fitting to noise in the original dataset. Therefore, this model may not work well on other photos and the given gamma value may not be ideal for creating other models, depending on the number of attributes.
Final Question | Python Code:
# fetch the images for the dataset
# this will take a long time the first run because it needs to download
# after the first time, the dataset will be save to your disk (in sklearn package somewhere)
# if this does not run, you may need additional libraries installed on your system (install at your own risk!!)
from sklearn.datasets import fetch_lfw_people
lfw_people = fetch_lfw_people(min_faces_per_person=20, resize=None)
# get some of the specifics of the dataset
X = lfw_people.data
y = lfw_people.target
names = lfw_people.target_names
n_samples, n_features = X.shape
_, h, w = lfw_people.images.shape
n_classes = len(names)
print("n_samples: {}".format(n_samples))
print("n_features: {}".format(n_features))
print("n_classes: {}".format(n_classes))
print("Original Image Sizes {} by {}".format(h,w))
print (125*94) # the size of the images are the size of the feature vectors
Explanation: Enter Team Member Names here (double click to edit):
Name 1: Ian Johnson
Name 2: Derek Phanekham
Name 3: Travis Siems
In Class Assignment Two
In the following assignment you will be asked to fill in python code and derivations for a number of different problems. Please read all instructions carefully and turn in the rendered notebook (or HTML of the rendered notebook) before the end of class (or right after class). The initial portion of this notebook is given before class and the remainder is given during class. Please answer the initial questions before class, to the best of your ability. Once class has started you may rework your answers as a team for the initial part of the assignment.
<a id="top"></a>
Contents
<a href="#Loading">Loading the Data</a>
<a href="#svm">Linear SVMs</a>
<a href="#svm_using">Using Linear SVMs</a>
<a href="#nonlinear">Non-linear SVMs</a>
<a id="Loading"></a>
<a href="#top">Back to Top</a>
Loading the Data
Please run the following code to read in the "olivetti faces" dataset from sklearn's data loading module.
This will load the data into the variable ds. ds is a bunch object with fields like ds.data and ds.target. The field ds.data is a numpy matrix of the continuous features in the dataset. The object is not a pandas dataframe. It is a numpy matrix. Each row is a set of observed instances, each column is a different feature. It also has a field called ds.target that is an integer value we are trying to predict (i.e., a specific integer represents a specific person). Each entry in ds.target is a label for each row of the ds.data matrix.
End of explanation
# Enter any scratchwork or calculations here
w_size = 62*(11750+1)
print(w_size)
Explanation: Question 1: For the faces dataset, describe what the data represents? That is, what is each column? What is each row? What do the unique class values represent?
Every column is a pixel location in a 125x94 photograph.
Each row is a single image of someone's face.
The unique class values are the names of the people in the photographs.
<a id="svm"></a>
<a href="#top">Back to Top</a>
Linear Support Vector Machines
Question 2: If we were to train a linear Support Vector Machine (SVM) upon the faces data, how many parameters would need to be optimized in the model? That is, how many coefficients would need to be calculated?
728,562 coefficients need to be calculated
End of explanation
# Enter any scratchwork or calculations here
print('Part C. With 100 features: ', 62*(101))
Explanation: Question 3:
- Part A: Given the number of parameters calculated above, would you expect the model to train quickly using batch optimization techniques? Why or why not?
- Part B: Is there a way to reduce training time?
- Part C: If we transformed the X data using principle components analysis (PCA) with 100 components, how many parameters would we need to find for a linear Support Vector Machine (SVM)?
Enter you answer here (double click)
A. It would be very slow using batch optimization techniques due to the cost of computing the gradient. Lots of multiplies and accumulates are required due to the large number of parameters in the data.
B. Yes. This could be done using minibatch or stochastic gradient descent, which would run a lot faster.
We could also use PCA to reduce the dimensionality and then train a model using the reduced dimensions.
C. 6262 parameters would need to be optimized.
End of explanation
from sklearn.svm import LinearSVC
from sklearn.decomposition import RandomizedPCA
n_components = 100
pca = RandomizedPCA(n_components=n_components)
Xpca = pca.fit_transform(X)
clf = LinearSVC()
clf.fit(Xpca,y)
#===================================================================
# Enter you code below to calculate the number of parameters in the model
print(sum([len(x) for x in clf.coef_]) + len(clf.intercept_))
#===================================================================
Explanation: <a id="svm_using"></a>
<a href="#top">Back to Top</a>
Using Linear SVMs
Exercise 1: Use the block of code below to check if the number of parameters you calculated is equal to the number of parameters returned by sklearn's implementation of the Linear SVM. Was your calculation correct? If different, can you think of a reason why the parameters would not match?
End of explanation
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
import numpy as np
yhat = clf.predict(Xpca)
#===================================================
# Enter your code below
print('Overall Accuracy is ',sum(yhat == y) / len(y))
#print(clf.score(Xpca, y))
list_accuracies = [x[i] / sum(x) for i, x in enumerate(confusion_matrix(y, yhat))]
print('The class accuracy is ',np.mean(list_accuracies), '+-', np.std(list_accuracies),end=' ')
print('(min,max) (',min(list_accuracies), ',' ,max(list_accuracies),')')
#===================================================
Explanation: Our calculation is the same. Between the 6200 items in the coefficient matrix and the 62 values in the intercept vector, the total number of weights computed is 6262, which is what we originally calculated
Exercise 2: Use the starter code below to calculate two quantities:
- Part A.: The overall accuracy of the trained linear svm on the training set
- Part B.: The mean, standard deviation, maximum, and minimum of the accuracy per class on the training set
You might be interested in the following documentation of the confusion matrix calculated by scikit-learn:
- http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html
And an example matrix returned by the confusion matrix function:
<img src="http://scikit-learn.org/stable/_images/plot_confusion_matrix_001.png",width=400,height=400>
End of explanation
from sklearn.svm import SVC
clf = SVC(kernel='rbf')
clf.fit(Xpca,y)
yhat = clf.predict(Xpca)
print('Overall Accuracy is ',accuracy_score(y,yhat))
Explanation: <a id="nonlinear"></a>
<a href="#top">Back to Top</a>
Non-linear SVMs
Now let's explore the use of non-linear svms. More explicitly, using different kernels. Take a look at the example training and testing code below for the non-linear SVM. All parameters are left as default, except we change the kernel to be rbf. Run the block of code below.
End of explanation
#===================================================
# Enter your code below
clf = SVC(kernel='rbf')
clf.fit(Xpca,y)
yhat = clf.predict(Xpca)
print('Overall Accuracy is ',accuracy_score(y,yhat), 'for the rbf kernel')
clf = SVC(kernel='poly')
clf.fit(Xpca,y)
yhat = clf.predict(Xpca)
print('Overall Accuracy is ',accuracy_score(y,yhat), 'for the poly kernel')
clf = SVC(kernel='sigmoid')
clf.fit(Xpca,y)
yhat = clf.predict(Xpca)
print('Overall Accuracy is ',accuracy_score(y,yhat), 'for the sigmoid kernel')
#===================================================
Explanation: Exercise 3: Use the starter code from above to calculate the accuracy for three different non-linear SVM kernels. That is, repeat the code above for different kernel parameters. Which kernel is most accurate with the default parameters?
You might be interested in the documentation of the scikit-learn SVM implementation, available here:
- http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html
End of explanation
#===================================================
# Enter your code below
kern = 'poly'
g = .02
yhat = SVC(kernel=kern, gamma=g).fit(Xpca,y).predict(Xpca)
print('Overall Accuracy is ', accuracy_score(y,yhat))
#===================================================
Explanation: The most accurate kernel with default parameters is the poly kernel.
Exercise 4: Choose the most accurate kernel and manipulate the settings for gamma to make the classification more accurate.
- Part A: How accurate can you make it?
- Part B: Would you expect the results to generalize well? Why or why not?
End of explanation
#===================================================
# Enter any scratchwork calculations you need below
svc = SVC(kernel=kern, gamma=.02)
svc.fit(Xpca, y)
print(sum([len(x) for x in svc.dual_coef_]) + len(svc.intercept_))
Explanation: A. The highest accuracy we could achieve was 1.0, or 100% accuracy. This was achieved with a gamma value of 0.02
B. We would not expect this to generalize because this gamma value is large enough that the model is likely fitting to noise in the original dataset. Therefore, this model may not work well on other photos and the given gamma value may not be ideal for creating other models, depending on the number of attributes.
Final Question: Using the most accurate non-linear SVM you found in the previous question, how many parameter coefficients does the trained model contain?
181,292 parameter coefficients exist in the trained model.
End of explanation |
4,942 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Creación de las imágenes georreferenciadas
El resultado final de la selección de puntos en NightCitiesISS y su correspondencia con las coordenadas, permite crear un fichero en formato GeoTIFF o un KMZ para que pueda ser utilizado con el software de gestión GIS. Cada imagen puede ser tratada como una capa que se superpone a la cartografía y sobre la que se pueden llevar a cabo análisis espaciales
El siguiente script permite leer la imagen original sin georrerenciar, conectarse a NightCitiesISS para descargarse las mediciones de los usuarios, y finalmente, generar un fichero que permite la georreferenciación y su visualización usando QGIS o GlobalMapper. Estos son dos programas ampliamente utilizados por la comunidad GIS, el primero de ellos Open Source. Por último, se genera un shell script que utiliza la librería GDAL para crear un KMZ y una imagen GeoTIFF
Step1: Como resultado obtenemos el fichero de puntos que relaciona la posición de cada pixel de la imagen con su posición geográfica, listo para usar en QGIS
Step2: También obtenemos el mismo resultado para usar con GlobalMapper
Step3: El shell script quedaría de la siguiente forma | Python Code:
import urllib2
import json
import asciitable
import time
import Image
idISS = 'ISS030-E-211378'
dirImagenesISS = 'images/'
dirGeoTIFF = 'geotiff/'
dirPuntosQGIS = 'puntosQGIS/'
dirPuntosGlobal = 'puntosGlobal/'
dirScriptsGDAL = 'scriptsGdal/'
def getKey(item):
return item[0]
hayProxy = False
# cargo las tareas acabadas de NightCitiesISS para usar todos los puntos disponibles
if hayProxy == True:
proxy = urllib2.ProxyHandler({'http': 'http://usuario:clave@proxy.empresa.es:8080'})
auth = urllib2.HTTPBasicAuthHandler()
opener = urllib2.build_opener(proxy, auth, urllib2.HTTPHandler)
urllib2.install_opener(opener)
else:
opener = urllib2.build_opener()
req = urllib2.Request("http://crowdcrafting.org/api/taskrun?app_id=1712&limit=10000")
f = opener.open(req)
json = json.loads(f.read())
# lista con el conjunto de imágenes a tratar
lista = asciitable.read('../completa.csv')
puntos = []
#Fichero de puntos para QGIS
gcp = open(dirPuntosQGIS + idISS + ".points", "w")
linea = "mapX,mapY,pixelX,pixelY,enable\n"
gcp.writelines(linea)
#Fichero de puntos para Global Mapper
fglobal = open(dirPuntosGlobal + idISS + ".gcp", "w")
# Fichero para script de GDAL
scriptGdal = open(dirScriptsGDAL + idISS + ".sh", "w")
#imagenes que si se han reconocido
imgSi = []
punto = 0
for i in range(len(json)):
if json[i]['info']['LONLAT'] != '':
link = json[i]['info']['img_big'].split('/')
idiss = link[8].split('.')[0]
if idiss == idISS:
im=Image.open(dirImagenesISS + idiss + '.jpg')
dimensiones = im.size # (width,height) tuple
xMax = dimensiones[0]
yMax = dimensiones[1]
#print 'Dimensiones ' + str(dimensiones[0]) + ',' + str(dimensiones[1])
linea1GDAL = 'gdal_translate -of GTiff '
posicionesGDAL = ''
for k in range(len(json[i]['info']['LONLAT'])):
punto = punto + 1
x = json[i]['info']['XY'][k].split(';')[0].split(',')[0]
y = json[i]['info']['XY'][k].split(';')[0].split(',')[1]
#x = xMax - int(x)
y = yMax - int(y)
lineaGCP = json[i]['info']['LONLAT'][k].split(';')[0] + ',' + str(x) + ',' + str(y) + ',' + '1\n'
#print lineaGCP
gcp.writelines(lineaGCP)
lineaGlobal = str(x) + ',' + str(y) + ',' + json[i]['info']['LONLAT'][k].split(';')[0] + ',"punto' + str(punto) + '",0\n'
fglobal.writelines(lineaGlobal)
# genero el script GDAL
posicionesGDAL = ' -gcp ' + str(x) + ' ' + str(y) + ' ' + json[i]['info']['LONLAT'][k].split(';')[0].replace(',', ' ') + ' ' + "origen" "tmpDestino\n"
linea1GDAL = linea1GDAL + posicionesGDAL + '"' + idiss + 'jpg"' '"tmp/' + idiss + '.jpg"\n'
linea2GDAL = 'gdalwarp -r near -tps -co COMPRESS=NONE "/tmp/' + idiss + '.jpg" "' + dirGeoTIFF + idiss + '.tif"\n'
scriptGdal.writelines(linea1GDAL)
scriptGdal.writelines(linea2GDAL)
scriptGdal.close()
gcp.close()
fglobal.close()
Explanation: Creación de las imágenes georreferenciadas
El resultado final de la selección de puntos en NightCitiesISS y su correspondencia con las coordenadas, permite crear un fichero en formato GeoTIFF o un KMZ para que pueda ser utilizado con el software de gestión GIS. Cada imagen puede ser tratada como una capa que se superpone a la cartografía y sobre la que se pueden llevar a cabo análisis espaciales
El siguiente script permite leer la imagen original sin georrerenciar, conectarse a NightCitiesISS para descargarse las mediciones de los usuarios, y finalmente, generar un fichero que permite la georreferenciación y su visualización usando QGIS o GlobalMapper. Estos son dos programas ampliamente utilizados por la comunidad GIS, el primero de ellos Open Source. Por último, se genera un shell script que utiliza la librería GDAL para crear un KMZ y una imagen GeoTIFF
End of explanation
mapX,mapY,pixelX,pixelY,enable
-6.29923900000000003,36.53782000000000352,1451,2331,1
-6.27469199999999994,36.53312999999999988,1408,2206,1
-6.30627699999999969,36.52857900000000058,1513,2324,1
-6.26748200000000022,36.48946300000000065,1609,2033,1
-6.16431299999999993,36.5218190000000007,1173,1700,1
-6.22362199999999977,36.56829499999999911,1093,2098,1
-6.11581900000000012,36.66419299999999737,297,1941,1
-6.35837699999999995,36.6163969999999992,1201,2837,1
-6.42721300000000006,36.74621700000000146,706,3584,1
-6.44231900000000035,36.73810100000000034,789,3622,1
-6.08933000000000035,36.2724410000000006,2226,577,1
-6.06687400000000032,36.28812099999999674,2101,522,1
-6.20306599999999975,36.38529900000000339,1965,1422,1
-6.14375699999999991,36.42949999999999733,1590,1323,1
-6.12581799999999976,36.69798999999999722,147,2104,1
-6.13118300000000005,36.70036400000000043,153,2129,1
-6.28443299999999994,36.53378599999999921,1419,2242,1
Explanation: Como resultado obtenemos el fichero de puntos que relaciona la posición de cada pixel de la imagen con su posición geográfica, listo para usar en QGIS
End of explanation
1451,2331,-6.299239,36.537820,"punto1",0
1408,2206,-6.274692,36.533130,"punto2",0
1513,2324,-6.306277,36.528579,"punto3",0
1609,2033,-6.267482,36.489463,"punto4",0
1173,1700,-6.164313,36.521819,"punto5",0
1093,2098,-6.223622,36.568295,"punto6",0
297,1941,-6.115819,36.664193,"punto7",0
1201,2837,-6.358377,36.616397,"punto8",0
706,3584,-6.427213,36.746217,"punto9",0
789,3622,-6.442319,36.738101,"punto10",0
2226,577,-6.089330,36.272441,"punto11",0
2101,522,-6.066874,36.288121,"punto12",0
1965,1422,-6.203066,36.385299,"punto13",0
1590,1323,-6.143757,36.429500,"punto14",0
147,2104,-6.125818,36.697990,"punto15",0
153,2129,-6.131183,36.700364,"punto16",0
1419,2242,-6.284433,36.533786,"punto17",0
1954,2952,10.235532,36.804439,"punto18",0
Explanation: También obtenemos el mismo resultado para usar con GlobalMapper
End of explanation
gdal_translate -of GTiff "ISS030-E-209446jpg""tmp/ISS030-E-209446.jpg"
gdalwarp -r near -tps -co COMPRESS=NONE "/tmp/ISS030-E-209446.jpg" "geotiff/ISS030-E-209446.tif"
gdal_translate -of GTiff -gcp 1590 1323 -6.143757 36.429500 origentmpDestino
"ISS030-E-209446jpg""tmp/ISS030-E-209446.jpg"
gdalwarp -r near -tps -co COMPRESS=NONE "/tmp/ISS030-E-209446.jpg" "geotiff/ISS030-E-209446.tif"
gdal_translate -of GTiff -gcp 1419 2242 -6.284433 36.533786 origentmpDestino
"ISS030-E-209446jpg""tmp/ISS030-E-209446.jpg"
gdalwarp -r near -tps -co COMPRESS=NONE "/tmp/ISS030-E-209446.jpg" "geotiff/ISS030-E-209446.tif"
gdal_translate -of GTiff "ISS030-E-209446jpg""tmp/ISS030-E-209446.jpg"
gdalwarp -r near -tps -co COMPRESS=NONE "/tmp/ISS030-E-209446.jpg" "geotiff/ISS030-E-209446.tif"
gdal_translate -of GTiff -gcp 1954 2952 10.235532 36.804439 origentmpDestino
"ISS030-E-209446jpg""tmp/ISS030-E-209446.jpg"
gdalwarp -r near -tps -co COMPRESS=NONE "/tmp/ISS030-E-209446.jpg" "geotiff/ISS030-E-209446.tif"
gdal_translate -of GTiff "ISS030-E-209446jpg""tmp/ISS030-E-209446.jpg"
gdalwarp -r near -tps -co COMPRESS=NONE "/tmp/ISS030-E-209446.jpg" "geotiff/ISS030-E-209446.tif"
Explanation: El shell script quedaría de la siguiente forma
End of explanation |
4,943 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Load and preprocess images
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Download the flowers dataset
This tutorial uses a dataset of several thousand photos of flowers. The flowers dataset contains five sub-directories, one per class
Step3: After downloading (218MB), you should now have a copy of the flower photos available. There are 3,670 total images
Step4: Each directory contains images of that type of flower. Here are some roses
Step5: Load data using a Keras utility
Let's load these images off disk using the helpful tf.keras.utils.image_dataset_from_directory utility.
Create a dataset
Define some parameters for the loader
Step6: It's good practice to use a validation split when developing your model. You will use 80% of the images for training and 20% for validation.
Step7: You can find the class names in the class_names attribute on these datasets.
Step8: Visualize the data
Here are the first nine images from the training dataset.
Step9: You can train a model using these datasets by passing them to model.fit (shown later in this tutorial). If you like, you can also manually iterate over the dataset and retrieve batches of images
Step10: The image_batch is a tensor of the shape (32, 180, 180, 3). This is a batch of 32 images of shape 180x180x3 (the last dimension refers to color channels RGB). The label_batch is a tensor of the shape (32,), these are corresponding labels to the 32 images.
You can call .numpy() on either of these tensors to convert them to a numpy.ndarray.
Standardize the data
The RGB channel values are in the [0, 255] range. This is not ideal for a neural network; in general you should seek to make your input values small.
Here, you will standardize values to be in the [0, 1] range by using tf.keras.layers.Rescaling
Step11: There are two ways to use this layer. You can apply it to the dataset by calling Dataset.map
Step12: Or, you can include the layer inside your model definition to simplify deployment. You will use the second approach here.
Note
Step13: Train a model
For completeness, you will show how to train a simple model using the datasets you have just prepared.
The Sequential model consists of three convolution blocks (tf.keras.layers.Conv2D) with a max pooling layer (tf.keras.layers.MaxPooling2D) in each of them. There's a fully-connected layer (tf.keras.layers.Dense) with 128 units on top of it that is activated by a ReLU activation function ('relu'). This model has not been tuned in any way—the goal is to show you the mechanics using the datasets you just created. To learn more about image classification, visit the Image classification tutorial.
Step14: Choose the tf.keras.optimizers.Adam optimizer and tf.keras.losses.SparseCategoricalCrossentropy loss function. To view training and validation accuracy for each training epoch, pass the metrics argument to Model.compile.
Step15: Note
Step16: Note
Step17: The tree structure of the files can be used to compile a class_names list.
Step18: Split the dataset into training and validation sets
Step19: You can print the length of each dataset as follows
Step20: Write a short function that converts a file path to an (img, label) pair
Step21: Use Dataset.map to create a dataset of image, label pairs
Step22: Configure dataset for performance
To train a model with this dataset you will want the data
Step23: Visualize the data
You can visualize this dataset similarly to the one you created previously
Step24: Continue training the model
You have now manually built a similar tf.data.Dataset to the one created by tf.keras.utils.image_dataset_from_directory above. You can continue training the model with it. As before, you will train for just a few epochs to keep the running time short.
Step25: Using TensorFlow Datasets
So far, this tutorial has focused on loading data off disk. You can also find a dataset to use by exploring the large catalog of easy-to-download datasets at TensorFlow Datasets.
As you have previously loaded the Flowers dataset off disk, let's now import it with TensorFlow Datasets.
Download the Flowers dataset using TensorFlow Datasets
Step26: The flowers dataset has five classes
Step27: Retrieve an image from the dataset
Step28: As before, remember to batch, shuffle, and configure the training, validation, and test sets for performance | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import numpy as np
import os
import PIL
import PIL.Image
import tensorflow as tf
import tensorflow_datasets as tfds
print(tf.__version__)
Explanation: Load and preprocess images
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/load_data/images"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/load_data/images.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/load_data/images.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/load_data/images.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial shows how to load and preprocess an image dataset in three ways:
First, you will use high-level Keras preprocessing utilities (such as tf.keras.utils.image_dataset_from_directory) and layers (such as tf.keras.layers.Rescaling) to read a directory of images on disk.
Next, you will write your own input pipeline from scratch using tf.data.
Finally, you will download a dataset from the large catalog available in TensorFlow Datasets.
Setup
End of explanation
import pathlib
dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
data_dir = tf.keras.utils.get_file(origin=dataset_url,
fname='flower_photos',
untar=True)
data_dir = pathlib.Path(data_dir)
Explanation: Download the flowers dataset
This tutorial uses a dataset of several thousand photos of flowers. The flowers dataset contains five sub-directories, one per class:
flowers_photos/
daisy/
dandelion/
roses/
sunflowers/
tulips/
Note: all images are licensed CC-BY, creators are listed in the LICENSE.txt file.
End of explanation
image_count = len(list(data_dir.glob('*/*.jpg')))
print(image_count)
Explanation: After downloading (218MB), you should now have a copy of the flower photos available. There are 3,670 total images:
End of explanation
roses = list(data_dir.glob('roses/*'))
PIL.Image.open(str(roses[0]))
roses = list(data_dir.glob('roses/*'))
PIL.Image.open(str(roses[1]))
Explanation: Each directory contains images of that type of flower. Here are some roses:
End of explanation
batch_size = 32
img_height = 180
img_width = 180
Explanation: Load data using a Keras utility
Let's load these images off disk using the helpful tf.keras.utils.image_dataset_from_directory utility.
Create a dataset
Define some parameters for the loader:
End of explanation
train_ds = tf.keras.utils.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="training",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
val_ds = tf.keras.utils.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="validation",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
Explanation: It's good practice to use a validation split when developing your model. You will use 80% of the images for training and 20% for validation.
End of explanation
class_names = train_ds.class_names
print(class_names)
Explanation: You can find the class names in the class_names attribute on these datasets.
End of explanation
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 10))
for images, labels in train_ds.take(1):
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(images[i].numpy().astype("uint8"))
plt.title(class_names[labels[i]])
plt.axis("off")
Explanation: Visualize the data
Here are the first nine images from the training dataset.
End of explanation
for image_batch, labels_batch in train_ds:
print(image_batch.shape)
print(labels_batch.shape)
break
Explanation: You can train a model using these datasets by passing them to model.fit (shown later in this tutorial). If you like, you can also manually iterate over the dataset and retrieve batches of images:
End of explanation
normalization_layer = tf.keras.layers.Rescaling(1./255)
Explanation: The image_batch is a tensor of the shape (32, 180, 180, 3). This is a batch of 32 images of shape 180x180x3 (the last dimension refers to color channels RGB). The label_batch is a tensor of the shape (32,), these are corresponding labels to the 32 images.
You can call .numpy() on either of these tensors to convert them to a numpy.ndarray.
Standardize the data
The RGB channel values are in the [0, 255] range. This is not ideal for a neural network; in general you should seek to make your input values small.
Here, you will standardize values to be in the [0, 1] range by using tf.keras.layers.Rescaling:
End of explanation
normalized_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
image_batch, labels_batch = next(iter(normalized_ds))
first_image = image_batch[0]
# Notice the pixel values are now in `[0,1]`.
print(np.min(first_image), np.max(first_image))
Explanation: There are two ways to use this layer. You can apply it to the dataset by calling Dataset.map:
End of explanation
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
Explanation: Or, you can include the layer inside your model definition to simplify deployment. You will use the second approach here.
Note: If you would like to scale pixel values to [-1,1] you can instead write tf.keras.layers.Rescaling(1./127.5, offset=-1)
Note: You previously resized images using the image_size argument of tf.keras.utils.image_dataset_from_directory. If you want to include the resizing logic in your model as well, you can use the tf.keras.layers.Resizing layer.
Configure the dataset for performance
Let's make sure to use buffered prefetching so you can yield data from disk without having I/O become blocking. These are two important methods you should use when loading data:
Dataset.cache keeps the images in memory after they're loaded off disk during the first epoch. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache.
Dataset.prefetch overlaps data preprocessing and model execution while training.
Interested readers can learn more about both methods, as well as how to cache data to disk in the Prefetching section of the Better performance with the tf.data API guide.
End of explanation
num_classes = 5
model = tf.keras.Sequential([
tf.keras.layers.Rescaling(1./255),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(num_classes)
])
Explanation: Train a model
For completeness, you will show how to train a simple model using the datasets you have just prepared.
The Sequential model consists of three convolution blocks (tf.keras.layers.Conv2D) with a max pooling layer (tf.keras.layers.MaxPooling2D) in each of them. There's a fully-connected layer (tf.keras.layers.Dense) with 128 units on top of it that is activated by a ReLU activation function ('relu'). This model has not been tuned in any way—the goal is to show you the mechanics using the datasets you just created. To learn more about image classification, visit the Image classification tutorial.
End of explanation
model.compile(
optimizer='adam',
loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
Explanation: Choose the tf.keras.optimizers.Adam optimizer and tf.keras.losses.SparseCategoricalCrossentropy loss function. To view training and validation accuracy for each training epoch, pass the metrics argument to Model.compile.
End of explanation
model.fit(
train_ds,
validation_data=val_ds,
epochs=3
)
Explanation: Note: You will only train for a few epochs so this tutorial runs quickly.
End of explanation
list_ds = tf.data.Dataset.list_files(str(data_dir/'*/*'), shuffle=False)
list_ds = list_ds.shuffle(image_count, reshuffle_each_iteration=False)
for f in list_ds.take(5):
print(f.numpy())
Explanation: Note: You can also write a custom training loop instead of using Model.fit. To learn more, visit the Writing a training loop from scratch tutorial.
You may notice the validation accuracy is low compared to the training accuracy, indicating your model is overfitting. You can learn more about overfitting and how to reduce it in this tutorial.
Using tf.data for finer control
The above Keras preprocessing utility—tf.keras.utils.image_dataset_from_directory—is a convenient way to create a tf.data.Dataset from a directory of images.
For finer grain control, you can write your own input pipeline using tf.data. This section shows how to do just that, beginning with the file paths from the TGZ file you downloaded earlier.
End of explanation
class_names = np.array(sorted([item.name for item in data_dir.glob('*') if item.name != "LICENSE.txt"]))
print(class_names)
Explanation: The tree structure of the files can be used to compile a class_names list.
End of explanation
val_size = int(image_count * 0.2)
train_ds = list_ds.skip(val_size)
val_ds = list_ds.take(val_size)
Explanation: Split the dataset into training and validation sets:
End of explanation
print(tf.data.experimental.cardinality(train_ds).numpy())
print(tf.data.experimental.cardinality(val_ds).numpy())
Explanation: You can print the length of each dataset as follows:
End of explanation
def get_label(file_path):
# Convert the path to a list of path components
parts = tf.strings.split(file_path, os.path.sep)
# The second to last is the class-directory
one_hot = parts[-2] == class_names
# Integer encode the label
return tf.argmax(one_hot)
def decode_img(img):
# Convert the compressed string to a 3D uint8 tensor
img = tf.io.decode_jpeg(img, channels=3)
# Resize the image to the desired size
return tf.image.resize(img, [img_height, img_width])
def process_path(file_path):
label = get_label(file_path)
# Load the raw data from the file as a string
img = tf.io.read_file(file_path)
img = decode_img(img)
return img, label
Explanation: Write a short function that converts a file path to an (img, label) pair:
End of explanation
# Set `num_parallel_calls` so multiple images are loaded/processed in parallel.
train_ds = train_ds.map(process_path, num_parallel_calls=AUTOTUNE)
val_ds = val_ds.map(process_path, num_parallel_calls=AUTOTUNE)
for image, label in train_ds.take(1):
print("Image shape: ", image.numpy().shape)
print("Label: ", label.numpy())
Explanation: Use Dataset.map to create a dataset of image, label pairs:
End of explanation
def configure_for_performance(ds):
ds = ds.cache()
ds = ds.shuffle(buffer_size=1000)
ds = ds.batch(batch_size)
ds = ds.prefetch(buffer_size=AUTOTUNE)
return ds
train_ds = configure_for_performance(train_ds)
val_ds = configure_for_performance(val_ds)
Explanation: Configure dataset for performance
To train a model with this dataset you will want the data:
To be well shuffled.
To be batched.
Batches to be available as soon as possible.
These features can be added using the tf.data API. For more details, visit the Input Pipeline Performance guide.
End of explanation
image_batch, label_batch = next(iter(train_ds))
plt.figure(figsize=(10, 10))
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(image_batch[i].numpy().astype("uint8"))
label = label_batch[i]
plt.title(class_names[label])
plt.axis("off")
Explanation: Visualize the data
You can visualize this dataset similarly to the one you created previously:
End of explanation
model.fit(
train_ds,
validation_data=val_ds,
epochs=3
)
Explanation: Continue training the model
You have now manually built a similar tf.data.Dataset to the one created by tf.keras.utils.image_dataset_from_directory above. You can continue training the model with it. As before, you will train for just a few epochs to keep the running time short.
End of explanation
(train_ds, val_ds, test_ds), metadata = tfds.load(
'tf_flowers',
split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
with_info=True,
as_supervised=True,
)
Explanation: Using TensorFlow Datasets
So far, this tutorial has focused on loading data off disk. You can also find a dataset to use by exploring the large catalog of easy-to-download datasets at TensorFlow Datasets.
As you have previously loaded the Flowers dataset off disk, let's now import it with TensorFlow Datasets.
Download the Flowers dataset using TensorFlow Datasets:
End of explanation
num_classes = metadata.features['label'].num_classes
print(num_classes)
Explanation: The flowers dataset has five classes:
End of explanation
get_label_name = metadata.features['label'].int2str
image, label = next(iter(train_ds))
_ = plt.imshow(image)
_ = plt.title(get_label_name(label))
Explanation: Retrieve an image from the dataset:
End of explanation
train_ds = configure_for_performance(train_ds)
val_ds = configure_for_performance(val_ds)
test_ds = configure_for_performance(test_ds)
Explanation: As before, remember to batch, shuffle, and configure the training, validation, and test sets for performance:
End of explanation |
4,944 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Содержание<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Форматирование" data-toc-modified-id="Форматирование-1">Форматирование</a></span><ul class="toc-item"><li><span><a href="#Используйте-4-пробела-для-отступов" data-toc-modified-id="Используйте-4-пробела-для-отступов-1.1">Используйте 4 пробела для отступов</a></span></li><li><span><a href="#Никогда-не-смешивайте-пробелы-и-символы-табуляции" data-toc-modified-id="Никогда-не-смешивайте-пробелы-и-символы-табуляции-1.2">Никогда не смешивайте пробелы и символы табуляции</a></span></li><li><span><a href="#Не-делайте-строки-длиннее-79-символов" data-toc-modified-id="Не-делайте-строки-длиннее-79-символов-1.3">Не делайте строки длиннее 79 символов</a></span></li><li><span><a href="#Пишите-import-каждого-модуля-в-отдельной-строке" data-toc-modified-id="Пишите-import-каждого-модуля-в-отдельной-строке-1.4">Пишите import каждого модуля в отдельной строке</a></span></li><li><span><a href="#Располагайте-import'ы-в-самом-начале-файла" data-toc-modified-id="Располагайте-import'ы-в-самом-начале-файла-1.5">Располагайте import'ы в самом начале файла</a></span></li><li><span><a href="#Не-ставьте-пробелов-сразу-после-скобок" data-toc-modified-id="Не-ставьте-пробелов-сразу-после-скобок-1.6">Не ставьте пробелов сразу после скобок</a></span></li><li><span><a href="#Не-ставьте-пробелов-перед-скобками" data-toc-modified-id="Не-ставьте-пробелов-перед-скобками-1.7">Не ставьте пробелов перед скобками</a></span></li><li><span><a href="#Не-ставьте-пробелов-перед-запятой,-точкой-с-запятой,-двоеточием" data-toc-modified-id="Не-ставьте-пробелов-перед-запятой,-точкой-с-запятой,-двоеточием-1.8">Не ставьте пробелов перед запятой, точкой с запятой, двоеточием</a></span></li><li><span><a href="#Окружайте-бинарные-операторы-ровно-одним-пробелом-с-каждой-стороны" data-toc-modified-id="Окружайте-бинарные-операторы-ровно-одним-пробелом-с-каждой-стороны-1.9">Окружайте бинарные операторы ровно одним пробелом с каждой стороны</a></span></li><li><span><a href="#Не-пишите-в-одну-строку" data-toc-modified-id="Не-пишите-в-одну-строку-1.10">Не пишите в одну строку</a></span></li></ul></li><li><span><a href="#Комментарии" data-toc-modified-id="Комментарии-2">Комментарии</a></span></li><li><span><a href="#Имена" data-toc-modified-id="Имена-3">Имена</a></span><ul class="toc-item"><li><span><a href="#Переменные-и-функции" data-toc-modified-id="Переменные-и-функции-3.1">Переменные и функции</a></span></li><li><span><a href="#Константы" data-toc-modified-id="Константы-3.2">Константы</a></span></li></ul></li><li><span><a href="#Сравнения" data-toc-modified-id="Сравнения-4">Сравнения</a></span><ul class="toc-item"><li><span><a href="#Проверка-последовательности-на-пустоту" data-toc-modified-id="Проверка-последовательности-на-пустоту-4.1">Проверка последовательности на пустоту</a></span></li><li><span><a href="#Сравнение-с-True-и-False?!" data-toc-modified-id="Сравнение-с-True-и-False?!-4.2">Сравнение с True и False?!</a></span></li><li><span><a href="#Сравнение-с-None" data-toc-modified-id="Сравнение-с-None-4.3">Сравнение с None</a></span></li><li><span><a href="#Сравнение-с-частью-строки" data-toc-modified-id="Сравнение-с-частью-строки-4.4">Сравнение с частью строки</a></span></li></ul></li><li><span><a href="#Функции" data-toc-modified-id="Функции-5">Функции</a></span><ul class="toc-item"><li><span><a href="#Исключения-из-правил" data-toc-modified-id="Исключения-из-правил-5.1">Исключения из правил</a></span></li></ul></li></ul></div>
Стиль
Форматирование
Используйте 4 пробела для отступов
Step1: Никогда не смешивайте пробелы и символы табуляции
Не делайте строки длиннее 79 символов
Разделяйте строки на несколько с помощью заключения выражения в скобки. Это способ лучше, чем использование обратного слэша. Делайте соответствующий отступ при переносе. При разбиении строки около бинарного оператора делайте разрыв после него.
Step2: Пишите import каждого модуля в отдельной строке
Step3: Располагайте import'ы в самом начале файла
Step4: Не ставьте пробелов сразу после скобок
Step5: Не ставьте пробелов перед скобками
Перед открывающей круглой скобкой при вызове функции
Перед открывающей квадратной скобкой, обозначающей индексирование или срез
Step6: Не ставьте пробелов перед запятой, точкой с запятой, двоеточием
Step7: Окружайте бинарные операторы ровно одним пробелом с каждой стороны
Верно для операторов
Step8: Не пишите в одну строку
Не располагайте несколько инструкций в одной строке. Разнесите их по разным строкам.
Step9: Не располагайте блок из нескольких инструкций на той же строке сразу после двоеточия (после if, while и т.д.)
Step10: Комментарии
Комментарии, противоречащие коду, хуже, чем их отсутствие.
Располагайте однострочные комментарии после кода в той же строке и отделяйте их от кода не менее, чем двумя пробелами. Комментарии должны начинаться с # и одного пробела.
Step11: Имена
Не используйте символы l, O, I как имена переменных. В некоторых шрифтах они очень похожи на цифры.
Переменные и функции
Имена переменных и функций должны содержать только маленькие буквы. Слова разделяются символами подчёркивания. Не используйте транслит для переменных, называйте их по-английски.
Step12: Константы
Имена констант должны содержать только заглавные буквы. Слова разделяются символами подчёркивания.
Step13: Сравнения
Проверка последовательности на пустоту
Для последовательностей (строк, списков, кортежей) используйте факт, что пустая последовательность эквивалентна False.
Step14: Сравнение с True и False?!
Не проверяйте переменные логического типа на True и False используя ==.
Step15: Сравнение с None
Сравнение с None должно осуществляться с помощью операторов is и is not, а не операторами сравнения.
Step16: Сравнение с частью строки
Используйте методы .startswith() и .endswith() вместо срезов для проверки начала и конца строк.
Step17: Функции
Не ставьте пробелы вокруг знака = при передаче функции именованных аргументов.
Step18: Разделяйте определения функций двумя пустыми строками. Тело функции от основной программы также отделяется двумя пустыми строками. | Python Code:
# Правильно
if 1 == 3:
print(1)
if 2 == 3:
print(2)
# Неверно
if 1 == 3:
print(1)
if 2 == 3:
print(2)
Explanation: <h1>Содержание<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Форматирование" data-toc-modified-id="Форматирование-1">Форматирование</a></span><ul class="toc-item"><li><span><a href="#Используйте-4-пробела-для-отступов" data-toc-modified-id="Используйте-4-пробела-для-отступов-1.1">Используйте 4 пробела для отступов</a></span></li><li><span><a href="#Никогда-не-смешивайте-пробелы-и-символы-табуляции" data-toc-modified-id="Никогда-не-смешивайте-пробелы-и-символы-табуляции-1.2">Никогда не смешивайте пробелы и символы табуляции</a></span></li><li><span><a href="#Не-делайте-строки-длиннее-79-символов" data-toc-modified-id="Не-делайте-строки-длиннее-79-символов-1.3">Не делайте строки длиннее 79 символов</a></span></li><li><span><a href="#Пишите-import-каждого-модуля-в-отдельной-строке" data-toc-modified-id="Пишите-import-каждого-модуля-в-отдельной-строке-1.4">Пишите import каждого модуля в отдельной строке</a></span></li><li><span><a href="#Располагайте-import'ы-в-самом-начале-файла" data-toc-modified-id="Располагайте-import'ы-в-самом-начале-файла-1.5">Располагайте import'ы в самом начале файла</a></span></li><li><span><a href="#Не-ставьте-пробелов-сразу-после-скобок" data-toc-modified-id="Не-ставьте-пробелов-сразу-после-скобок-1.6">Не ставьте пробелов сразу после скобок</a></span></li><li><span><a href="#Не-ставьте-пробелов-перед-скобками" data-toc-modified-id="Не-ставьте-пробелов-перед-скобками-1.7">Не ставьте пробелов перед скобками</a></span></li><li><span><a href="#Не-ставьте-пробелов-перед-запятой,-точкой-с-запятой,-двоеточием" data-toc-modified-id="Не-ставьте-пробелов-перед-запятой,-точкой-с-запятой,-двоеточием-1.8">Не ставьте пробелов перед запятой, точкой с запятой, двоеточием</a></span></li><li><span><a href="#Окружайте-бинарные-операторы-ровно-одним-пробелом-с-каждой-стороны" data-toc-modified-id="Окружайте-бинарные-операторы-ровно-одним-пробелом-с-каждой-стороны-1.9">Окружайте бинарные операторы ровно одним пробелом с каждой стороны</a></span></li><li><span><a href="#Не-пишите-в-одну-строку" data-toc-modified-id="Не-пишите-в-одну-строку-1.10">Не пишите в одну строку</a></span></li></ul></li><li><span><a href="#Комментарии" data-toc-modified-id="Комментарии-2">Комментарии</a></span></li><li><span><a href="#Имена" data-toc-modified-id="Имена-3">Имена</a></span><ul class="toc-item"><li><span><a href="#Переменные-и-функции" data-toc-modified-id="Переменные-и-функции-3.1">Переменные и функции</a></span></li><li><span><a href="#Константы" data-toc-modified-id="Константы-3.2">Константы</a></span></li></ul></li><li><span><a href="#Сравнения" data-toc-modified-id="Сравнения-4">Сравнения</a></span><ul class="toc-item"><li><span><a href="#Проверка-последовательности-на-пустоту" data-toc-modified-id="Проверка-последовательности-на-пустоту-4.1">Проверка последовательности на пустоту</a></span></li><li><span><a href="#Сравнение-с-True-и-False?!" data-toc-modified-id="Сравнение-с-True-и-False?!-4.2">Сравнение с True и False?!</a></span></li><li><span><a href="#Сравнение-с-None" data-toc-modified-id="Сравнение-с-None-4.3">Сравнение с None</a></span></li><li><span><a href="#Сравнение-с-частью-строки" data-toc-modified-id="Сравнение-с-частью-строки-4.4">Сравнение с частью строки</a></span></li></ul></li><li><span><a href="#Функции" data-toc-modified-id="Функции-5">Функции</a></span><ul class="toc-item"><li><span><a href="#Исключения-из-правил" data-toc-modified-id="Исключения-из-правил-5.1">Исключения из правил</a></span></li></ul></li></ul></div>
Стиль
Форматирование
Используйте 4 пробела для отступов
End of explanation
# Правильно
var = (a * 10 +
b / 15)
# Неверно
var = a * 10 \
+ b / 15
Explanation: Никогда не смешивайте пробелы и символы табуляции
Не делайте строки длиннее 79 символов
Разделяйте строки на несколько с помощью заключения выражения в скобки. Это способ лучше, чем использование обратного слэша. Делайте соответствующий отступ при переносе. При разбиении строки около бинарного оператора делайте разрыв после него.
End of explanation
# Правильно
import math
import sys
# Неверно
import math, sys
Explanation: Пишите import каждого модуля в отдельной строке
End of explanation
# Правильно
import sys
N = 5
def f():
pass
# Неверно
N = 5
import sys
Explanation: Располагайте import'ы в самом начале файла
End of explanation
# Правильно
spam(ham[1], {eggs: 2})
# Неверно
spam( ham[ 1 ], { eggs: 2 } )
Explanation: Не ставьте пробелов сразу после скобок
End of explanation
# Правильно
spam(kind['key'], lst[1:3])
# Неверно
spam (kind ['key'], lst [1:3])
Explanation: Не ставьте пробелов перед скобками
Перед открывающей круглой скобкой при вызове функции
Перед открывающей квадратной скобкой, обозначающей индексирование или срез
End of explanation
# Правильно
if x == 4:
print(x, y); x, y = y, x
# Неверно
if x == 4 :
print(x , y) ; x , y = y , x
Explanation: Не ставьте пробелов перед запятой, точкой с запятой, двоеточием
End of explanation
# Правильно
a == b
a and b
3 + 5 * 8
# Неверно
a==b
a and b
3+5 * 8
Explanation: Окружайте бинарные операторы ровно одним пробелом с каждой стороны
Верно для операторов:
Присваивания (=, +=, -= и т.д.)
Сравнения (==, <, >, !=, <=, >=, in, not in, is, is not)
Логических (and, or, not)
Арифметических (+, -, *, /, // и т.д.)
End of explanation
# Правильно
x = 3
func(10)
# Неверно
x = 3; func(10)
Explanation: Не пишите в одну строку
Не располагайте несколько инструкций в одной строке. Разнесите их по разным строкам.
End of explanation
# Правильно
if x == 3:
print(x)
# Неверно
if x == 3: print(x)
Explanation: Не располагайте блок из нескольких инструкций на той же строке сразу после двоеточия (после if, while и т.д.)
End of explanation
# Правильно
a = 5 # правильно оформленный комментарий
# Неверно
a = 5 #слишком близко и без пробела
Explanation: Комментарии
Комментарии, противоречащие коду, хуже, чем их отсутствие.
Располагайте однострочные комментарии после кода в той же строке и отделяйте их от кода не менее, чем двумя пробелами. Комментарии должны начинаться с # и одного пробела.
End of explanation
# Правильно
name, name_from_several_words
# Неверно
Name, anotheRANDOMname, NAME, imya
Explanation: Имена
Не используйте символы l, O, I как имена переменных. В некоторых шрифтах они очень похожи на цифры.
Переменные и функции
Имена переменных и функций должны содержать только маленькие буквы. Слова разделяются символами подчёркивания. Не используйте транслит для переменных, называйте их по-английски.
End of explanation
# Правильно
NAME, NAME_FROM_SEVERAL_WORDS
# Неверно
NAMEFROMSEVERALWORDS, name, Name
Explanation: Константы
Имена констант должны содержать только заглавные буквы. Слова разделяются символами подчёркивания.
End of explanation
# Правильно
if lst:
print(lst[0])
# Неверно
if lst != []:
print(lst[0])
Explanation: Сравнения
Проверка последовательности на пустоту
Для последовательностей (строк, списков, кортежей) используйте факт, что пустая последовательность эквивалентна False.
End of explanation
# Правильно
b = True
if b:
print("Yes")
# Неверно
b = True
if b == True:
print("No")
Explanation: Сравнение с True и False?!
Не проверяйте переменные логического типа на True и False используя ==.
End of explanation
# Правильно
if a is None:
pass
# Неверно
if a == None:
pass
Explanation: Сравнение с None
Сравнение с None должно осуществляться с помощью операторов is и is not, а не операторами сравнения.
End of explanation
# Правильно
s = 'Hello, world!'
if s.startswith('Hello'):
print('Hi')
# Неверно
s = 'Hello, world!'
if s[:5] == 'Hello':
print('Hi')
Explanation: Сравнение с частью строки
Используйте методы .startswith() и .endswith() вместо срезов для проверки начала и конца строк.
End of explanation
# Правильно
f(5, x=7)
# Неверно
f(5, x = 7)
Explanation: Функции
Не ставьте пробелы вокруг знака = при передаче функции именованных аргументов.
End of explanation
# Правильно
def f1():
pass
def f2():
pass
pass
# Неверно
def f1():
pass
def f2():
pass
pass
Explanation: Разделяйте определения функций двумя пустыми строками. Тело функции от основной программы также отделяется двумя пустыми строками.
End of explanation |
4,945 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced SQLAlchemy Queries
Step1: Using MySql
Import the create_engine function from the sqlalchemy library.
Create an engine to the census database by concatenating the following strings and passing them to create_engine()
Step2: Define a select statement to return i) the state and ii) the difference in population count between 2008 and 2000 labeled as pop_change. Store the statement as stmt. The state column is given by census.columns.state and the 2008 population count column by census.columns.pop2008.
Use the group_by() method on stmt to group by the state. Do so by passing it census.columns.state.
Use the group_by() method on stmt to order the population changes ('pop_change') in descending order. Do so by passing it desc('pop_change').
Use the limit() method to return only 5 records. Do so by passing it the desired number of records.
Use the connection to execute stmt and fetch all the records store as results.
The print statement has already been written for you. Hit 'Submit Answer' to view the results!
Step3: Import case, cast, and Float from sqlalchemy.
Build an expression female_pop2000to calculate female population in 2000. To achieve this
Step4: Build a statement to join the census and state_fact tables and select the pop2000 column from the first and the abbreviation column from the second.
Execute the statement to get the first result and save it as result.
Hit submit to loop over the keys of the result object, and print the key and value for each!
Step5: Build a statement to select ALL the columns from the census and state_fact tables. To select ALL the columns from two tables employees and sales, for example, you would use stmt = select([employees, sales]).
Append a select_from to stmt to join the census table to the state_fact table by the state column in census and the name column in the state_fact table.
Execute the statement to get the first result and save it as result. This code is alrady written.
Hit submit to loop over the keys of the result object, and print the key and value for each!
Step6: Build a statement to select from the census table the following
Step7: Save an alias of the employees table as managers. To do so, apply the method alias() to employees.
Build a query to select the employee name and their manager's name. You can use label to label the name column of employees as 'employee'.
Append a where clause to stmt to match where the mgr column of the employees table corresponds to the id column of the managers table.
Append an order by clause to stmt so that it is ordered by the name column of the managers table.
Execute the statement and store all the results. This code is already written. Hit submit to print the names of the managers and all their employees.
Step8: Save an alias of the employees table as managers.
Build a query to select the manager's name and the count of the number of their employees. The function func.count() has been imported and will be useful!
Append a where clause that filters for records where the manager id and employee mgr are equal.
Use a group_by() clause to group the query by the name column of the managers table.
Execute the statement and store all the results. Print the names of the managers and their employees. This code has already been written so hit submit and check out the results!
Step9: Use a while loop that checks if there are more_results.
Inside the loop, apply the method fetchmany() to results_proxy to get 50 records at a time and store those records as partial_results.
After fetching the records, if partial_results is an empty list (that is, if it is equal to []), set more_results to False.
Loop over the partial_results and, if row.state is a key in the state_count dictionary, increment state_count[row.state] by 1; otherwise set state_count[row.state] to 1.
After the while loop, close the ResultProxy results_proxy.
Hit 'Submit' to print state_count. | Python Code:
# import
Explanation: Advanced SQLAlchemy Queries
End of explanation
# # Import create_engine function
# from sqlalchemy import create_engine
# # Create an engine to the census database
# engine = create_engine('mysql+pymysql://student:datacamp@courses.csrrinzqubik.us-east-1.rds.amazonaws.com:3306/census')
# # Use the `table_names()` method on the engine to print the table names
# print(engine.table_names())
Explanation: Using MySql
Import the create_engine function from the sqlalchemy library.
Create an engine to the census database by concatenating the following strings and passing them to create_engine():
'mysql+pymysql://'
'student:datacamp'
'@courses.csrrinzqubik.us-east-1.rds.amazonaws.com'
':3306/census'
Use the table_names() method on engine to print the table names.
End of explanation
# # Build query to return state names by population difference from 2008 to 2000: stmt
# stmt = select([census.columns.state, (census.columns.pop2008-census.columns.pop2000).label('pop_change')])
# # Append group by for the state: stmt
# stmt = stmt.group_by(census.columns.state)
# # Append order by for pop_change descendingly: stmt
# stmt = stmt.order_by(desc('pop_change'))
# # Return only 5 results: stmt
# stmt = stmt.limit(5)
# # Use connection to execute the statement and fetch all results
# results = connection.execute(stmt).fetchall()
# # Print the state and population change for each record
# for result in results:
# print('{}-{}'.format(result.state, result.pop_change))
Explanation: Define a select statement to return i) the state and ii) the difference in population count between 2008 and 2000 labeled as pop_change. Store the statement as stmt. The state column is given by census.columns.state and the 2008 population count column by census.columns.pop2008.
Use the group_by() method on stmt to group by the state. Do so by passing it census.columns.state.
Use the group_by() method on stmt to order the population changes ('pop_change') in descending order. Do so by passing it desc('pop_change').
Use the limit() method to return only 5 records. Do so by passing it the desired number of records.
Use the connection to execute stmt and fetch all the records store as results.
The print statement has already been written for you. Hit 'Submit Answer' to view the results!
End of explanation
# # import case, cast and Float from sqlalchemy
# from sqlalchemy import case, cast, Float
# # Build an expression to calculatae female population in 2000
# female_pop2000 = func.sum(
# case([
# (census.columns.sex == 'F', census.columns.pop2000)
# ], else_=0))
# # Cast an expression to calculate total population in 2000 to Float
# total_pop2000 = cast(func.sum(census.columns.pop2000), Float)
# # Build a query to calculate the percentage of females in 2000: stmt
# stmt = select([female_pop2000 / total_pop2000* 100])
# # Execute the query and store the scalar result: percent_female
# percent_female = connection.execute(stmt).scalar()
# # Print the percentage
# print(percent_female)
Explanation: Import case, cast, and Float from sqlalchemy.
Build an expression female_pop2000to calculate female population in 2000. To achieve this:
Use case() inside func.sum()
Make the first argument of case() a list containing a tuple of i) a boolean checking that census.columns.sex is equal to 'F' and ii) the column census.columns.pop2000.
Use cast() to cast an expression to calculate total population in 2000 to Float.
Build a query to calculate the percentage of females in 2000.
Execute the query by passing stmt to connection.execute(). Apply the scalar() method to it and store the result as percent_female.
Print percent_female.
End of explanation
# # Build a statement to join census and state_fact tables: stmt
# stmt = select([census.columns.pop2000, state_fact.columns.abbreviation])
# # Execute the statement and get the first result: result
# result = connection.execute(stmt).first()
# # Loop over the keys in the result object and print the key and value
# for key in result.keys():
# print(key, getattr(result, key))
Explanation: Build a statement to join the census and state_fact tables and select the pop2000 column from the first and the abbreviation column from the second.
Execute the statement to get the first result and save it as result.
Hit submit to loop over the keys of the result object, and print the key and value for each!
End of explanation
# # Build a statement to select the census and state_fact tables: stmt
# stmt = select([census, state_fact])
# # Add a select_from clause that wraps a join for the census and state_fact
# # tables where the census state column and state_fact name column match
# stmt = stmt.select_from(
# census.join(state_fact, census.columns.state == state_fact.columns.name))
# # Execute the statement and get the first result: result
# result = connection.execute(stmt).first()
# # Loop over the keys in the result object and print the key and value
# for key in result.keys():
# print(key, getattr(result, key))
Explanation: Build a statement to select ALL the columns from the census and state_fact tables. To select ALL the columns from two tables employees and sales, for example, you would use stmt = select([employees, sales]).
Append a select_from to stmt to join the census table to the state_fact table by the state column in census and the name column in the state_fact table.
Execute the statement to get the first result and save it as result. This code is alrady written.
Hit submit to loop over the keys of the result object, and print the key and value for each!
End of explanation
# # Build a statement to select the state, sum of 2008 population and census
# # division name: stmt
# stmt = select([
# census.columns.state,
# func.sum(census.columns.pop2008),
# state_fact.columns.census_division_name
# ])
# # Append select_from to join the census and state_fact tables by the census state and state_fact name columns
# stmt = stmt.select_from(
# census.join(state_fact, census.columns.state == state_fact.columns.name)
# )
# # Append a group by for the state_fact name column
# stmt = stmt.group_by(state_fact.columns.name)
# # Execute the statement and get the results: results
# results = connection.execute(stmt).fetchall()
# # Loop over the the results object and print each record.
# for record in results:
# print(record)
Explanation: Build a statement to select from the census table the following:
the state column,
the sum of the pop2008 column and
the census_division_name column.
Append a select_from() to stmt in order to join the census and state_fact tables by the state and name columns.
Append a group_by to stmt in order to group by the name column from the state_fact table.
Execute the statement to get all the records and save it as results.
Hit submit to loop over the results object and print each record.
End of explanation
# # Make an alias of the employees table: managers
# managers = employees.alias()
# # Build a query to select manager's and their employees names: stmt
# stmt = select(
# [managers.columns.name.label('manager'),
# employees.columns.name.label('employee')]
# )
# # Append where to match manager ids with employees managers: stmt
# stmt = stmt.where(managers.columns.id==employees.columns.mgr)
# # Append order by managers name: stmt
# stmt = stmt.order_by(managers.columns.name)
# # Execute statement: results
# results = connection.execute(satmt).fetchall()
# # Print records
# for record in results:
# print(record)
Explanation: Save an alias of the employees table as managers. To do so, apply the method alias() to employees.
Build a query to select the employee name and their manager's name. You can use label to label the name column of employees as 'employee'.
Append a where clause to stmt to match where the mgr column of the employees table corresponds to the id column of the managers table.
Append an order by clause to stmt so that it is ordered by the name column of the managers table.
Execute the statement and store all the results. This code is already written. Hit submit to print the names of the managers and all their employees.
End of explanation
# # Make an alias of the employees table: managers
# managers = employees.alias()
# # Build a query to select managers and counts of their employees: stmt
# stmt = select([managers.columns.name, func.count(employees.columns.id)])
# # Append a where clause that ensures the manager id and employee mgr are equal
# stmt = stmt.where(managers.columns.id==employees.columns.mgr)
# # Group by Managers Name
# stmt = stmt.group_by(managers.columns.name)
# # Execute statement: results
# results = connection.execute(stmt).fetchall()
# # print manager
# for record in results:
# print(record)a
Explanation: Save an alias of the employees table as managers.
Build a query to select the manager's name and the count of the number of their employees. The function func.count() has been imported and will be useful!
Append a where clause that filters for records where the manager id and employee mgr are equal.
Use a group_by() clause to group the query by the name column of the managers table.
Execute the statement and store all the results. Print the names of the managers and their employees. This code has already been written so hit submit and check out the results!
End of explanation
# # Start a while loop checking for more results
# while more_results:
# # Fetch the first 50 results from the ResultProxy: partial_results
# partial_results = results_proxy.fetchmany(50)
# # if empty list, set more_results to False
# if partial_results == []:
# more_results = False
# # Loop over the fetched records and increment the count for the state: state_count
# for row in partial_results:
# if row.state in state_count:
# state_count[row.state] = state_count[row.state]+1
# else:
# state_count[row.state] =1
# # Close the ResultProxy, and thus the connection
# results_proxy.close()
# # Print the count by state
# print(state_count)
Explanation: Use a while loop that checks if there are more_results.
Inside the loop, apply the method fetchmany() to results_proxy to get 50 records at a time and store those records as partial_results.
After fetching the records, if partial_results is an empty list (that is, if it is equal to []), set more_results to False.
Loop over the partial_results and, if row.state is a key in the state_count dictionary, increment state_count[row.state] by 1; otherwise set state_count[row.state] to 1.
After the while loop, close the ResultProxy results_proxy.
Hit 'Submit' to print state_count.
End of explanation |
4,946 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Info data structure
This tutorial describes the
Step1: As seen in the introductory tutorial <tut-overview>, when a
Step2: However, it is not strictly necessary to load the
Step3: As you can see, the
Step4: Most of the fields contain
Step5: Obtaining subsets of channels
It is often useful to convert between channel names and the integer indices
identifying rows of the data array where those channels' measurements are
stored. The
Step6:
Step7: Note that the meg and fnirs parameters of
Step8:
Step9: To obtain several channel types at once, you could embed
Step10: Alternatively, you can get the indices of all channels of all channel types
present in the data, using
Step11: Dropping channels from an Info object
If you want to modify an
Step12: We can also get a nice HTML representation in IPython like | Python Code:
import os
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_filt-0-40_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
Explanation: The Info data structure
This tutorial describes the :class:mne.Info data structure, which keeps track
of various recording details, and is attached to :class:~mne.io.Raw,
:class:~mne.Epochs, and :class:~mne.Evoked objects.
We'll begin by loading the Python modules we need, and loading the same
example data <sample-dataset> we used in the introductory tutorial
<tut-overview>:
End of explanation
print(raw.info)
Explanation: As seen in the introductory tutorial <tut-overview>, when a
:class:~mne.io.Raw object is loaded, an :class:~mne.Info object is
created automatically, and stored in the raw.info attribute:
End of explanation
info = mne.io.read_info(sample_data_raw_file)
print(info)
Explanation: However, it is not strictly necessary to load the :class:~mne.io.Raw object
in order to view or edit the :class:~mne.Info object; you can extract all
the relevant information into a stand-alone :class:~mne.Info object using
:func:mne.io.read_info:
End of explanation
print(info.keys())
print() # insert a blank line
print(info['ch_names'])
Explanation: As you can see, the :class:~mne.Info object keeps track of a lot of
information about:
the recording system (gantry angle, HPI details, sensor digitizations,
channel names, ...)
the experiment (project name and ID, subject information, recording date,
experimenter name or ID, ...)
the data (sampling frequency, applied filter frequencies, bad channels,
projectors, ...)
The complete list of fields is given in :class:the API documentation
<mne.Info>.
Querying the Info object
The fields in a :class:~mne.Info object act like Python :class:dictionary
<dict> keys, using square brackets and strings to access the contents of a
field:
End of explanation
print(info['chs'][0].keys())
Explanation: Most of the fields contain :class:int, :class:float, or :class:list
data, but the chs field bears special mention: it contains a list of
dictionaries (one :class:dict per channel) containing everything there is
to know about a channel other than the data it recorded. Normally it is not
necessary to dig into the details of the chs field — various MNE-Python
functions can extract the information more cleanly than iterating over the
list of dicts yourself — but it can be helpful to know what is in there. Here
we show the keys for the first channel's :class:dict:
End of explanation
print(mne.pick_channels(info['ch_names'], include=['MEG 0312', 'EEG 005']))
print(mne.pick_channels(info['ch_names'], include=[],
exclude=['MEG 0312', 'EEG 005']))
Explanation: Obtaining subsets of channels
It is often useful to convert between channel names and the integer indices
identifying rows of the data array where those channels' measurements are
stored. The :class:~mne.Info object is useful for this task; two
convenience functions that rely on the :class:mne.Info object for picking
channels are :func:mne.pick_channels and :func:mne.pick_types.
:func:~mne.pick_channels minimally takes a list of all channel names and a
list of channel names to include; it is also possible to provide an empty
list to include and specify which channels to exclude instead:
End of explanation
print(mne.pick_types(info, meg=False, eeg=True, exclude=[]))
Explanation: :func:~mne.pick_types works differently, since channel type cannot always
be reliably determined from channel name alone. Consequently,
:func:~mne.pick_types needs an :class:~mne.Info object instead of just a
list of channel names, and has boolean keyword arguments for each channel
type. Default behavior is to pick only MEG channels (and MEG reference
channels if present) and exclude any channels already marked as "bad" in the
bads field of the :class:~mne.Info object. Therefore, to get all and
only the EEG channel indices (including the "bad" EEG channels) we must
pass meg=False and exclude=[]:
End of explanation
print(mne.pick_channels_regexp(info['ch_names'], '^E.G'))
Explanation: Note that the meg and fnirs parameters of :func:~mne.pick_types
accept strings as well as boolean values, to allow selecting only
magnetometer or gradiometer channels (via meg='mag' or meg='grad') or
to pick only oxyhemoglobin or deoxyhemoglobin channels (via fnirs='hbo'
or fnirs='hbr', respectively).
A third way to pick channels from an :class:~mne.Info object is to apply
regular expression_ matching to the channel names using
:func:mne.pick_channels_regexp. Here the ^ represents the beginning of
the string and . character matches any single character, so both EEG and
EOG channels will be selected:
End of explanation
print(mne.channel_type(info, 25))
Explanation: :func:~mne.pick_channels_regexp can be especially useful for channels named
according to the 10-20 <ten-twenty_>_ system (e.g., to select all channels
ending in "z" to get the midline, or all channels beginning with "O" to get
the occipital channels). Note that :func:~mne.pick_channels_regexp uses the
Python standard module :mod:re to perform regular expression matching; see
the documentation of the :mod:re module for implementation details.
<div class="alert alert-danger"><h4>Warning</h4><p>Both :func:`~mne.pick_channels` and :func:`~mne.pick_channels_regexp`
operate on lists of channel names, so they are unaware of which channels
(if any) have been marked as "bad" in ``info['bads']``. Use caution to
avoid accidentally selecting bad channels.</p></div>
Obtaining channel type information
Sometimes it can be useful to know channel type based on its index in the
data array. For this case, use :func:mne.channel_type, which takes
an :class:~mne.Info object and a single integer channel index:
End of explanation
picks = (25, 76, 77, 319)
print([mne.channel_type(info, x) for x in picks])
print(raw.get_channel_types(picks=picks))
Explanation: To obtain several channel types at once, you could embed
:func:~mne.channel_type in a :term:list comprehension, or use the
:meth:~mne.io.Raw.get_channel_types method of a :class:~mne.io.Raw,
:class:~mne.Epochs, or :class:~mne.Evoked instance:
End of explanation
ch_idx_by_type = mne.channel_indices_by_type(info)
print(ch_idx_by_type.keys())
print(ch_idx_by_type['eog'])
Explanation: Alternatively, you can get the indices of all channels of all channel types
present in the data, using :func:~mne.channel_indices_by_type,
which returns a :class:dict with channel types as keys, and lists of
channel indices as values:
End of explanation
print(info['nchan'])
eeg_indices = mne.pick_types(info, meg=False, eeg=True)
print(mne.pick_info(info, eeg_indices)['nchan'])
Explanation: Dropping channels from an Info object
If you want to modify an :class:~mne.Info object by eliminating some of the
channels in it, you can use the :func:mne.pick_info function to pick the
channels you want to keep and omit the rest:
End of explanation
info
Explanation: We can also get a nice HTML representation in IPython like:
End of explanation |
4,947 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nasa-giss', 'sandbox-3', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: NASA-GISS
Source ID: SANDBOX-3
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:21
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specificed for the following parameters if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional paramterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involved flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontal discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories specify any additional details. For example models that paramterise the ice thickness distribution ITD (i.e there is no explicit ITD) but there is assumed distribution and fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
4,948 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Model Architectures
Step1: We also set up the backend and load the data.
Step2: Now its your turn! Set up the branch nodes and layer structure above. Some tips
Step3: Now let's fit our model! First, set up multiple costs for each of the three branches using MultiCost
Step4: To test that your model was constructed properly, we first initialize the model with a dataset (so that it configures the layer shapes appropriately) and a cost, then print the model.
Step5: Then, we set up the remaining components and run fit! | Python Code:
from neon.callbacks.callbacks import Callbacks
from neon.initializers import Gaussian
from neon.layers import GeneralizedCost, Affine, BranchNode, Multicost, SingleOutputTree
from neon.models import Model
from neon.optimizers import GradientDescentMomentum
from neon.transforms import Rectlin, Logistic, Softmax
from neon.transforms import CrossEntropyBinary, CrossEntropyMulti, Misclassification
from neon.backends import gen_backend
Explanation: Model Architectures: Part 1
Neon supports the ability to build more complex models than just a linear list of layers. In this series of notebooks, you will implement several models and understand how data should be passed when a model may have multiple inputs/outputs.
Tree Models
Neon supports models with a main trunk that includes branch points to leaf nodes. In this scenario, the models takes a single input but produces multiple outputs that can be matched against multiple targets. For example, consider the below topology:
cost1 cost3
| /
m_l4 b2_l2
| /
| ___b2_l1
|/
m_l3 cost2
| /
m_l2 b1_l2
| /
| ___b1_l1
|/
|
m_l1
|
|
data
Suppose we wanted to apply this model to the MNIST dataset. The MNIST data iterator returns, for each minibatch, a tuple of tensors (X, Y). Since there are multiple outputs, the single target labels Y are used to match against all these outputs. Alternatively, we could write a custom iterator that yields for each minibatch, a nested tuple (X, (Y1, Y2, Y3)). Then, each target label will mapped to its respective output layer.
We will guide you through implementing such a branching model. We first import all the needed ingredients:
End of explanation
be = gen_backend(batch_size=128)
from neon.data import MNIST
mnist = MNIST(path='data/')
train_set = mnist.train_iter
valid_set = mnist.valid_iter
Explanation: We also set up the backend and load the data.
End of explanation
# define common parameters as dictionary (see above)
init_norm = Gaussian(loc=0.0, scale=0.01)
normrelu = dict(init=init_norm, activation=Rectlin())
normsigm = dict(init=init_norm, activation=Logistic(shortcut=True))
normsoft = dict(init=init_norm, activation=Softmax())
# define your branch nodes
b1 = BranchNode(name="b1")
b2 = BranchNode(name="b2")
# define the main trunk (cost1 above)
p1 = [Affine(nout=100, name="m_l1", **normrelu),
b1,
Affine(nout=32, name="m_l2", **normrelu),
Affine(nout=16, name="m_l3", **normrelu),
b2,
Affine(nout=10, name="m_l4", **normsoft)]
# define the branch (cost2)
p2 = [b1,
Affine(nout=16, name="b1_l1", **normrelu),
Affine(nout=10, name="b1_l2", **normsoft)]
# define the branch (cost3)
p3 = [b2,
Affine(nout=16, name="b2_l1", **normrelu),
Affine(nout=10, name="b2_l2", **normsoft)]
# build the model as a Tree
alphas = [1, 0.25, 0.25]
model = Model(layers=SingleOutputTree([p1, p2, p3], alphas=alphas))
Explanation: Now its your turn! Set up the branch nodes and layer structure above. Some tips:
- Use Affine layers.
- You can choose your hidden unit sizes, just make sure that the three final output layers have 10 units for the 10 categories in the MNIST dataset.
- The three final output layers should also use Softmax activation functions to ensure that the probability sums to 1.
As a reminder, to define a single layer, we need a weight initialization and an activation function:
```
define a layers
layer1 = Affine(nout=100, init=Gaussian(0.01), activation=Rectlin())
alternative, you can take advantage of common parameters by constructing
a dictionary:
normrelu = dict(init=init_norm, activation=Rectlin())
pass the dictionary to the layers as keyword arguments using the ** syntax.
layer1 = Affine(nout=100, normrelu)
layer2 = Affine(nout=10, normrelu)
```
To set up a simple Tree:
```
define a branch mode
b1 = BranchNode()
define the main trunk
path1 = [layer1, b1, layer2]
define the branch
path2 = [b1, layer3]
build the model as a Tree
alphas are the weights given to the branches of Tree during backpropagation.
model = Model(layers=SingleOutputTree([path1, path2]), alphas = [1, 1])
```
We have included below skeleton of the code for you to fill out to build the model above.
End of explanation
cost = Multicost(costs=[GeneralizedCost(costfunc=CrossEntropyMulti()),
GeneralizedCost(costfunc=CrossEntropyMulti()),
GeneralizedCost(costfunc=CrossEntropyMulti())])
Explanation: Now let's fit our model! First, set up multiple costs for each of the three branches using MultiCost:
End of explanation
model.initialize(train_set, cost)
print model
Explanation: To test that your model was constructed properly, we first initialize the model with a dataset (so that it configures the layer shapes appropriately) and a cost, then print the model.
End of explanation
# setup optimizer
optimizer = GradientDescentMomentum(0.1, momentum_coef=0.9)
# setup standard fit callbacks
callbacks = Callbacks(model, eval_set=valid_set, eval_freq=1)
model.fit(train_set, optimizer=optimizer, num_epochs=10, cost=cost, callbacks=callbacks)
Explanation: Then, we set up the remaining components and run fit!
End of explanation |
4,949 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dans ce notebook, nous allons essayer de faire du text mining pour récuper des versions locales des programmes des présidentielles 2017 des candidats suivants
Step1: François Fillon
Le projet de François Fillon ne sera annoncé que le 13 mars
Step2: Marine Le Pen
Les 144 engagements de Marine Le Pen peuvent être consultés ici
Step3: Maintenant, chercons à extraire tous les paragraphes, à l'aide d'une fonction qui vérifie que le paragraphe commence par un nombre suivi d'un point (et peut-être d'un espace).
Step4: Bien, on peut maintenant écrire ces données dans un fichier texte.
Step5: Benoît Hamon
Le site de Benoît Hamon ne permet pas d'accéder à une page avec toutes les propositions facilement. Du coup, il faut explorer trois sous-catégories.
https
Step6: On peut extraire de ces propositions la moëlle essentielle
Step7: Construisons une table de données avec ces propositions.
Step8: On peut transformer ces propositions en DataFrame.
Step9: Jean-Luc Mélenchon
On peut trouver une version inofficielle du programme ici
Step10: On peut étendre cette manière de récupérer les données à toutes les sous-sections
Step11: Combien de propositions trouvons-nous ?
Step12: Construisons les url complètes.
Step13: On écrit un fichier.
Step14: Emmanuel Macron
Il faut dans un premier temps aller chercher les pages individuelles du site.
Step15: On extrait toutes les propositions.
Step16: Yannick Jadot
http
Step17: Extraction du titre d'une des pages.
Step18: Nicolas Dupont-Aignan | Python Code:
from bs4 import BeautifulSoup
import requests
import re
import pandas as pd
from ipywidgets import interact
def make_df_from_props_sources(props_sources):
"Makes a big dataframe from props_sources."
dfs = []
for key in props_sources:
df = pd.DataFrame(props_sources[key], columns=['proposition'])
df['source'] = key
dfs.append(df)
df = pd.concat(dfs).reset_index(drop=True)
return df
Explanation: Dans ce notebook, nous allons essayer de faire du text mining pour récuper des versions locales des programmes des présidentielles 2017 des candidats suivants :
François Fillon (apparemment, le projet sort le 13 mars seulement)
Marine Le Pen
Benoît Hamon
Jean-Luc Mélenchon
Emmanuel Macron
Différentes manières de récupérer les programmes sont possibles : soit à partir des fichiers .pdf, soit à partir des sites des campagnes. Nous allons faire au plus simple, à l'aide des sites de campagnes.
End of explanation
r = requests.get('https://www.fillon2017.fr/projet/')
soup = BeautifulSoup(r.text, 'html.parser')
tags = soup.find_all('a', class_='projectItem__inner')
sublinks = [tag.attrs['href'] for tag in tags]
r = requests.get('https://www.fillon2017.fr/projet/competitivite/')
soup = BeautifulSoup(r.text, 'html.parser')
tags = soup.find_all('li', class_='singleProject__propositionItem')
len(tags)
tag = tags[0]
tag.find('div', class_='singleProject__propositionItem-content').text
for tag in tags:
tag.find('div', class_='singleProject__propositionItem-content').text
def extract_propositions(url):
r = requests.get(url)
soup = BeautifulSoup(r.text, 'html.parser')
tags = soup.find_all('li', class_='singleProject__propositionItem')
return [tag.find('div', class_='singleProject__propositionItem-content').text for tag in tags]
extract_propositions(sublinks[0])
props_sources = {}
for sublink in sublinks:
props = extract_propositions(sublink)
props_sources[sublink] = props
df = make_df_from_props_sources(props_sources)
df.head()
df.to_csv('../projets/francois_fillon.csv', index=False, quoting=1)
Explanation: François Fillon
Le projet de François Fillon ne sera annoncé que le 13 mars : https://www.fillon2017.fr/projet/
End of explanation
r = requests.get('https://www.marine2017.fr/programme/')
soup = BeautifulSoup(r.text, "html.parser")
Explanation: Marine Le Pen
Les 144 engagements de Marine Le Pen peuvent être consultés ici : https://www.marine2017.fr/programme/
Analyse de la structure du site
Apparemment, les différentes propositions sont imbriquées dans des balises <p>.
```
<p>3. <strong>Permettre la représentation de tous les Français</strong> par le scrutin proportionnel à toutes les élections. À l’Assemblée nationale, la proportionnelle sera intégrale avec une prime majoritaire de 30 % des sièges pour la liste arrivée en tête et un seuil de 5 % des suffrages pour obtenir des élus.</p>
```
On peut donc extraire ces éléments et les trier ensuite.
Extraction des paragraphes
Téléchargeons le code source de la page.
End of explanation
pattern = re.compile('^\d+.\s*')
def filter_func(tag):
if tag.text is not None:
return pattern.match(tag.text) is not None
else:
return False
all_paragraphs = [re.split(pattern, tag.text)[1:] for tag in soup.find_all('p') if filter_func(tag)]
len(all_paragraphs)
@interact
def disp_para(n=(0, len(all_paragraphs) - 1)):
print(all_paragraphs[n])
props_sources = {}
props_sources['https://www.marine2017.fr/programme/'] = all_paragraphs
df = make_df_from_props_sources(props_sources)
df.head(10)
Explanation: Maintenant, chercons à extraire tous les paragraphes, à l'aide d'une fonction qui vérifie que le paragraphe commence par un nombre suivi d'un point (et peut-être d'un espace).
End of explanation
df.to_csv('../projets/marine_le_pen.csv', index=False, quoting=1)
Explanation: Bien, on peut maintenant écrire ces données dans un fichier texte.
End of explanation
r = requests.get('https://www.benoithamon2017.fr/thematique/pour-un-progres-social-et-ecologique/')
r
soup = BeautifulSoup(r.text, 'html.parser')
all_propositions = soup.find_all(class_='Propositions-Proposition')
len(all_propositions)
p = all_propositions[0]
p.text
p.find('h1').text
p.find('p').text
Explanation: Benoît Hamon
Le site de Benoît Hamon ne permet pas d'accéder à une page avec toutes les propositions facilement. Du coup, il faut explorer trois sous-catégories.
https://www.benoithamon2017.fr/thematique/pour-un-progres-social-et-ecologique/
End of explanation
def extract_data(tag):
"Extracts title for tag and content."
subject = tag.find('h1').text
content = tag.find('p').text
return subject, content
Explanation: On peut extraire de ces propositions la moëlle essentielle :
End of explanation
df = pd.DataFrame([extract_data(p) for p in all_propositions], columns=['titre', 'contenu'])
df
df[df['contenu'].str.contains('ascension')]
Explanation: Construisons une table de données avec ces propositions.
End of explanation
props_sources = {}
props_sources['https://www.benoithamon2017.fr/thematique/pour-un-progres-social-et-ecologique/'] = df['contenu'].values.tolist()
df = make_df_from_props_sources(props_sources)
df.head()
df.to_csv('../projets/benoit_hamon.csv', index=False, quoting=1)
Explanation: On peut transformer ces propositions en DataFrame.
End of explanation
r = requests.get('https://laec.fr/chapitre/1/la-6e-republique')
soup = BeautifulSoup(r.text, 'html.parser')
sublinks = soup.find_all('a', class_='list-group-item')
sublinks
Explanation: Jean-Luc Mélenchon
On peut trouver une version inofficielle du programme ici : https://laec.fr/sommaire
Un peu comme pour le site d'Hamon, il y a des rubriques. Commençons par la première.
End of explanation
suburls = ['https://laec.fr/chapitre/1/la-6e-republique',
'https://laec.fr/chapitre/2/proteger-et-partager',
'https://laec.fr/chapitre/3/la-planification-ecologique',
'https://laec.fr/chapitre/4/sortir-des-traites-europeens',
'https://laec.fr/chapitre/5/pour-l-independance-de-la-france',
'https://laec.fr/chapitre/6/le-progres-humain-d-abord',
'https://laec.fr/chapitre/7/la-france-aux-frontieres-de-l-humanite']
sublinks = []
for suburl in suburls:
r = requests.get(suburl)
soup = BeautifulSoup(r.text, 'html.parser')
sublinks.extend(soup.find_all('a', class_='list-group-item'))
sublinks[:5]
Explanation: On peut étendre cette manière de récupérer les données à toutes les sous-sections :
End of explanation
len(sublinks)
Explanation: Combien de propositions trouvons-nous ?
End of explanation
full_urls = ['https://laec.fr' + link.attrs['href'] for link in sublinks]
full_urls[:10]
full_url = full_urls[13]
#full_url = full_urls[0]
r = requests.get(full_url)
print(r.text[:800])
soup = BeautifulSoup(r.text, 'html.parser')
tags = soup.find_all('li', class_='list-group-item')
tag = tags[0]
tag.text
tag.find_all('li')
tag.p.text
"\n".join([t.text for t in tag.find_all('li')])
len(tags)
[tag.text for tag in tags]
def extract_data(url):
r = requests.get(url)
soup = BeautifulSoup(r.text, 'html.parser')
tags = soup.find_all('li', class_='list-group-item')
contents = []
for tag in tags:
if len(tag.find_all('li')) == 0:
contents.append(tag.text)
else:
contents.append(tag.p.text + '\n\t' + "\n\t".join([t.text for t in tag.find_all('li')]))
return contents
extract_data(full_url)
extract_data(full_urls[13])
props_sources = {}
for url in full_urls:
props_sources[url] = extract_data(url)
df = make_df_from_props_sources(props_sources)
df
Explanation: Construisons les url complètes.
End of explanation
df.to_csv('../projets/jean_luc_melenchon.csv', index=False, quoting=1)
Explanation: On écrit un fichier.
End of explanation
r = requests.get('https://en-marche.fr/emmanuel-macron/le-programme')
soup = BeautifulSoup(r.text, 'html.parser')
proposals = soup.find_all(class_='programme__proposal')
proposals = [p for p in proposals if 'programme__proposal--category' not in p.attrs['class']]
len(proposals)
full_urls = ["https://en-marche.fr" + p.find('a').attrs['href'] for p in proposals]
url = full_urls[1]
r = requests.get(url)
text = r.text
text = text.replace('</br>', '')
soup = BeautifulSoup(text, 'html.parser')
article_tag = soup.find_all('article', class_='l__wrapper--slim')[0]
for line in article_tag.find_all(class_='arrows'):
print(line.text)
tag = article_tag.find_all(class_='arrows')[-1]
tag.text
tag.next_sibling
def extract_items(url):
r = requests.get(url)
text = r.text.replace('</br>', '')
soup = BeautifulSoup(text, 'html.parser')
article_tag = soup.find_all('article', class_='l__wrapper--slim')[0]
return [line.text.strip() for line in article_tag.find_all(class_='arrows')]
extract_items(full_urls[1])
Explanation: Emmanuel Macron
Il faut dans un premier temps aller chercher les pages individuelles du site.
End of explanation
propositions = [extract_items(url) for url in full_urls]
len(propositions)
full_urls[18]
@interact
def print_prop(n=(0, len(propositions) - 1)):
print(propositions[n])
props_sources = {}
for url, props in zip(full_urls, propositions):
props_sources[url] = props
df = make_df_from_props_sources(props_sources)
df.head()
df.iloc[0, 1]
df.to_csv('../projets/emmanuel_macron.csv', index=False, quoting=1)
Explanation: On extrait toutes les propositions.
End of explanation
r = requests.get('http://avecjadot.fr/lafrancevive/')
soup = BeautifulSoup(r.text, 'html.parser')
tags = soup.find_all('div', class_='bloc-mesure')
links = [tag.find('a').attrs['href'] for tag in tags]
all([link.startswith('http://avecjadot.fr/') for link in links])
Explanation: Yannick Jadot
http://avecjadot.fr/lafrancevive/
End of explanation
link = links[0]
r = requests.get(link)
soup = BeautifulSoup(r.text, 'html.parser')
soup.find('div', class_='texte-mesure').text.strip().replace('\n', ' ')
def extract_data(link):
r = requests.get(link)
soup = BeautifulSoup(r.text, 'html.parser')
return soup.find('div', class_='texte-mesure').text.strip().replace('\n', ' ')
extract_data(link)
all_props = [extract_data(link) for link in links]
props_sources = {}
for url, props in zip(links, all_props):
props_sources[url] = [props]
props_sources
df = make_df_from_props_sources(props_sources)
df.head()
df.to_csv('../projets/yannick_jadot.csv', index=False, quoting=1)
Explanation: Extraction du titre d'une des pages.
End of explanation
r = requests.get('http://www.nda-2017.fr/themes.html')
soup = BeautifulSoup(r.text, 'html.parser')
len(soup.find_all('div', class_='theme'))
links = ['http://www.nda-2017.fr' + tag.find('a').attrs['href'] for tag in soup.find_all('div', class_='theme')]
link = links[0]
r = requests.get(link)
soup = BeautifulSoup(r.text, 'html.parser')
tags = soup.find_all('div', class_='proposition')
len(tags)
tags[0].find('a').text.strip()
tags[0].find('a').attrs['href']
def extract_data(link):
r = requests.get(link)
soup = BeautifulSoup(r.text, 'html.parser')
tags = soup.find_all('div', class_='proposition')
return [tag.find('a').text.strip() for tag in tags]
all_props = [extract_data(link) for link in links]
len(all_props)
props_sources = {}
for url, props in zip(links, all_props):
props_sources[url] = props
df = make_df_from_props_sources(props_sources)
df
df.to_csv('../projets/nicolas_dupont_aignan.csv', index=False, quoting=1)
Explanation: Nicolas Dupont-Aignan
End of explanation |
4,950 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PyTorch Implementation
Simple version
가장 일반적인 파이토치 구현 방법으로 케라스된 예제 1-1을 변환한다.
Step1: Detail version with monitoring variables
파이토치로 변환된 코드가 제대로 동작하는지 중간 중간을 모니터링 한다.
Step2: Compatible version
케라스에서 사용한 코드를 최대한 고치지 않고 사용하는 방법이다. 모델 클래스를 작성할 때 입력에 대해 부가적인 처리를 함으로 나머지 코드는 케라스 경우와 비슷하게 사용할 수 있다. 먼저 forward(.)에서 Numpy의 어레이 입력을 토치 어레이로 바꾸는 과정을 포함한다. 그리고 앞선 Simple version에서 모델을 학습하는 코드를 fit()라는 모델 클래스 함수를 만들어 처리하도록 해 주었다. 그러고 나면 실제 수행코드는 케라스의 경우와 매우 유사하게 작성이 가능함을 알 수 있다.
Step3: GPU Version
GPU 버전으로 파이토치 코드를 작성할 경우는 model.to(device), x.to(device), y.to(device)라는 세가지 과정이 필요하다. 아래 코드는 예제 1-1을 파이토치 코드로 바꾼 뒤에 타켓 프로세서에 맞게 돌도로 디바이스가 CPU인지 GPU인지를 검사하여 해당 프로세서에 맞게 동작하게 만드는 코드이다.
Step4: GPU인지 CPU인지 검사
프로세서로 GPU를 사용하는 있는지 아닌지를 검사하고 device 변수에 프로세서 타입을 적어넣는다. | Python Code:
import torch
import numpy as np
x = np.array([0, 1, 2, 3, 4]).astype('float32').reshape(-1,1)
y = x * 2 + 1
class Model(torch.nn.Module):
def __init__(self):
super(Model,self).__init__()
self.layer = torch.nn.Linear(1,1)
def forward(self, x):
return self.layer(x)
model = Model()
Optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
for epoch in range(1000):
x_tr = torch.from_numpy(x[:2,:1]).type(torch.FloatTensor)
y_tr = torch.from_numpy(y[:2,:1]).type(torch.FloatTensor)
y_pr = model(x_tr)
loss = torch.pow(torch.abs(y_tr - y_pr),2)
Optimizer.zero_grad()
torch.sum(loss).backward()
Optimizer.step()
print(model(torch.from_numpy(x).type(torch.FloatTensor)).detach().numpy())
Explanation: PyTorch Implementation
Simple version
가장 일반적인 파이토치 구현 방법으로 케라스된 예제 1-1을 변환한다.
End of explanation
import torch
import numpy as np
x = np.array([0, 1, 2, 3, 4]).astype('float32').reshape(-1,1)
y = x * 2 + 1
class Model(torch.nn.Module):
def __init__(self):
super(Model,self).__init__()
self.layer = torch.nn.Linear(1,1)
def forward(self, x):
return self.layer(x)
model = Model()
Optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
print('w=', list(model.parameters())[0].detach().numpy())
print('b=', list(model.parameters())[1].detach().numpy())
print()
for epoch in range(1000):
x_tr = torch.from_numpy(x[:2,:1]).type(torch.FloatTensor)
y_tr = torch.from_numpy(y[:2,:1]).type(torch.FloatTensor)
y_pr = model(x_tr)
loss = torch.pow(torch.abs(y_tr - y_pr),2)
if epoch < 3:
print(f'Epoch:{epoch}')
print('y_pr:', y_pr.detach().numpy())
print('y_tr:', y[:2,:1])
print('loss:', loss.detach().numpy())
print()
Optimizer.zero_grad()
torch.sum(loss).backward()
Optimizer.step()
print(model(torch.from_numpy(x).type(torch.FloatTensor)).detach().numpy())
Explanation: Detail version with monitoring variables
파이토치로 변환된 코드가 제대로 동작하는지 중간 중간을 모니터링 한다.
End of explanation
import torch
import numpy as np
x = np.array([0, 1, 2, 3, 4]).astype('float32').reshape(-1,1)
y = x * 2 + 1
class Model(torch.nn.Module):
def __init__(self):
super(Model,self).__init__()
self.layer = torch.nn.Linear(1,1)
self.Optimizer = torch.optim.SGD(self.parameters(), lr=0.01)
def forward(self, x):
x = torch.from_numpy(x).type(torch.FloatTensor)
return self.layer(x)
def fit(self, x, y, epochs):
for epoch in range(epochs):
y_tr = torch.from_numpy(y).type(torch.FloatTensor)
y_pr = model(x)
loss = torch.pow(torch.abs(y_tr - y_pr),2)
self.Optimizer.zero_grad()
torch.sum(loss).backward()
self.Optimizer.step()
model = Model()
model.fit(x[:2], y[:2], epochs=1000)
print(model(x))
Explanation: Compatible version
케라스에서 사용한 코드를 최대한 고치지 않고 사용하는 방법이다. 모델 클래스를 작성할 때 입력에 대해 부가적인 처리를 함으로 나머지 코드는 케라스 경우와 비슷하게 사용할 수 있다. 먼저 forward(.)에서 Numpy의 어레이 입력을 토치 어레이로 바꾸는 과정을 포함한다. 그리고 앞선 Simple version에서 모델을 학습하는 코드를 fit()라는 모델 클래스 함수를 만들어 처리하도록 해 주었다. 그러고 나면 실제 수행코드는 케라스의 경우와 매우 유사하게 작성이 가능함을 알 수 있다.
End of explanation
import torch
import numpy as np
Explanation: GPU Version
GPU 버전으로 파이토치 코드를 작성할 경우는 model.to(device), x.to(device), y.to(device)라는 세가지 과정이 필요하다. 아래 코드는 예제 1-1을 파이토치 코드로 바꾼 뒤에 타켓 프로세서에 맞게 돌도로 디바이스가 CPU인지 GPU인지를 검사하여 해당 프로세서에 맞게 동작하게 만드는 코드이다.
End of explanation
if torch.cuda.is_available():
device = torch.device('cuda')
else:
device = torch.device('cpu')
print('Using PyTorch version:', torch.__version__, ' Device:', device)
x = np.array([0, 1, 2, 3, 4]).astype('float32').reshape(-1,1)
y = x * 2 + 1
class Model(torch.nn.Module):
def __init__(self):
super(Model,self).__init__()
self.layer = torch.nn.Linear(1,1)
self.Optimizer = torch.optim.SGD(self.parameters(), lr=0.01)
def forward(self, x):
x = torch.from_numpy(x).type(torch.FloatTensor).to(device)
return self.layer(x)
def fit(self, x, y, epochs):
for epoch in range(epochs):
y_tr = torch.from_numpy(y).type(torch.FloatTensor).to(device)
y_pr = model(x)
loss = torch.pow(torch.abs(y_tr - y_pr),2)
self.Optimizer.zero_grad()
torch.sum(loss).backward()
self.Optimizer.step()
model = Model().to(device)
model.fit(x[:2], y[:2], epochs=1000)
print(model(x))
Explanation: GPU인지 CPU인지 검사
프로세서로 GPU를 사용하는 있는지 아닌지를 검사하고 device 변수에 프로세서 타입을 적어넣는다.
End of explanation |
4,951 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Model checking and diagnostics
Convergence Diagnostics
Valid inferences from sequences of MCMC samples are based on the
assumption that the samples are derived from the true posterior
distribution of interest. Theory guarantees this condition as the number
of iterations approaches infinity. It is important, therefore, to
determine the minimum number of samples required to ensure a reasonable
approximation to the target posterior density. Unfortunately, no
universal threshold exists across all problems, so convergence must be
assessed independently each time MCMC estimation is performed. The
procedures for verifying convergence are collectively known as
convergence diagnostics.
One approach to analyzing convergence is analytical, whereby the
variance of the sample at different sections of the chain are compared
to that of the limiting distribution. These methods use distance metrics
to analyze convergence, or place theoretical bounds on the sample
variance, and though they are promising, they are generally difficult to
use and are not prominent in the MCMC literature. More common is a
statistical approach to assessing convergence. With this approach,
rather than considering the properties of the theoretical target
distribution, only the statistical properties of the observed chain are
analyzed. Reliance on the sample alone restricts such convergence
criteria to heuristics. As a result, convergence cannot be guaranteed.
Although evidence for lack of convergence using statistical convergence
diagnostics will correctly imply lack of convergence in the chain, the
absence of such evidence will not guarantee convergence in the chain.
Nevertheless, negative results for one or more criteria may provide some
measure of assurance to users that their sample will provide valid
inferences.
For most simple models, convergence will occur quickly, sometimes within
a the first several hundred iterations, after which all remaining
samples of the chain may be used to calculate posterior quantities. For
more complex models, convergence requires a significantly longer burn-in
period; sometimes orders of magnitude more samples are needed.
Frequently, lack of convergence will be caused by poor mixing.
Recall that mixing refers to the degree to which the Markov
chain explores the support of the posterior distribution. Poor mixing
may stem from inappropriate proposals (if one is using the
Metropolis-Hastings sampler) or from attempting to estimate models with
highly correlated variables.
Step1: Informal Methods
The most straightforward approach for assessing convergence is based on
simply plotting and inspecting traces and histograms of the observed
MCMC sample. If the trace of values for each of the stochastics exhibits
asymptotic behavior over the last $m$ iterations, this may be
satisfactory evidence for convergence.
Step2: A similar approach involves
plotting a histogram for every set of $k$ iterations (perhaps 50-100)
beyond some burn in threshold $n$; if the histograms are not visibly
different among the sample intervals, this may be considered some evidence for
convergence. Note that such diagnostics should be carried out for each
stochastic estimated by the MCMC algorithm, because convergent behavior
by one variable does not imply evidence for convergence for other
variables in the analysis.
Step3: An extension of this approach can be taken
when multiple parallel chains are run, rather than just a single, long
chain. In this case, the final values of $c$ chains run for $n$
iterations are plotted in a histogram; just as above, this is repeated
every $k$ iterations thereafter, and the histograms of the endpoints are
plotted again and compared to the previous histogram. This is repeated
until consecutive histograms are indistinguishable.
Another ad hoc method for detecting lack of convergence is to examine
the traces of several MCMC chains initialized with different starting
values. Overlaying these traces on the same set of axes should (if
convergence has occurred) show each chain tending toward the same
equilibrium value, with approximately the same variance. Recall that the
tendency for some Markov chains to converge to the true (unknown) value
from diverse initial values is called ergodicity. This property is
guaranteed by the reversible chains constructed using MCMC, and should
be observable using this technique. Again, however, this approach is
only a heuristic method, and cannot always detect lack of convergence,
even though chains may appear ergodic.
Step4: A principal reason that evidence from informal techniques cannot
guarantee convergence is a phenomenon called metastability. Chains may
appear to have converged to the true equilibrium value, displaying
excellent qualities by any of the methods described above. However,
after some period of stability around this value, the chain may suddenly
move to another region of the parameter space. This period
of metastability can sometimes be very long, and therefore escape
detection by these convergence diagnostics. Unfortunately, there is no
statistical technique available for detecting metastability.
Formal Methods
Along with the ad hoc techniques described above, a number of more
formal methods exist which are prevalent in the literature. These are
considered more formal because they are based on existing statistical
methods, such as time series analysis.
PyMC currently includes three formal convergence diagnostic methods. The
first, proposed by Geweke (1992), is a time-series approach that
compares the mean and variance of segments from the beginning and end of
a single chain.
$$z = \frac{\bar{\theta}_a - \bar{\theta}_b}{\sqrt{S_a(0) + S_b(0)}}$$
where $a$ is the early interval and $b$ the late interval, and $S_i(0)$ is the spectral density estimate at zero frequency for chain segment $i$. If the
z-scores (theoretically distributed as standard normal variates) of
these two segments are similar, it can provide evidence for convergence.
PyMC calculates z-scores of the difference between various initial
segments along the chain, and the last 50% of the remaining chain. If
the chain has converged, the majority of points should fall within 2
standard deviations of zero.
In PyMC, diagnostic z-scores can be obtained by calling the geweke function. It
accepts either (1) a single trace, (2) a Node or Stochastic object, or
(4) an entire Model object
Step5: The arguments expected are the following
Step6: The arguments are
Step7: For the best results, each chain should be initialized to highly
dispersed starting values for each stochastic node.
By default, when calling the summary_plot function using nodes with
multiple chains, the $\hat{R}$ values will be plotted alongside the
posterior intervals.
Step9: Goodness of Fit
Checking for model convergence is only the first step in the evaluation
of MCMC model outputs. It is possible for an entirely unsuitable model
to converge, so additional steps are needed to ensure that the estimated
model adequately fits the data. One intuitive way of evaluating model
fit is to compare model predictions with the observations used to fit
the model. In other words, the fitted model can be used to simulate
data, and the distribution of the simulated data should resemble the
distribution of the actual data.
Fortunately, simulating data from the model is a natural component of
the Bayesian modelling framework. Recall, from the discussion on
imputation of missing data, the posterior predictive distribution
Step10: The posterior predictive distribution of deaths uses the same functional
form as the data likelihood, in this case a binomial stochastic. Here is
the corresponding sample from the posterior predictive distribution
Step11: Notice that the observed stochastic Binomial has been replaced
with a stochastic node that is identical in every respect to `deaths`,
except that its values are not fixed to be the observed data -- they are
left to vary according to the values of the fitted parameters.
The degree to which simulated data correspond to observations can be
evaluated in at least two ways. First, these quantities can simply be
compared visually. This allows for a qualitative comparison of
model-based replicates and observations. If there is poor fit, the true
value of the data may appear in the tails of the histogram of replicated
data, while a good fit will tend to show the true data in
high-probability regions of the posterior predictive distribution.
The Matplot package in PyMC provides an easy way of producing such
plots, via the gof_plot function.
Step12: A second approach for evaluating goodness of fit using samples from the
posterior predictive distribution involves the use of a statistical
criterion. For example, the Bayesian p-value (Gelman et al. 1996) uses a
discrepancy measure that quantifies the difference between data
(observed or simulated) and the expected value, conditional on some
model. One such discrepancy measure is the Freeman-Tukey statistic
(Brooks et al. 2000)
Step13: For a dataset of size $n$ and an MCMC chain of length $r$, this implies
that x is size (n,), x_sim is size (r,n) and x_exp is either
size (r,) or (r,n). A call to this function returns two arrays of
discrepancy values (simulated and observed), which can be passed to the
discrepancy_plot function in the `Matplot` module to generate a
scatter plot, and if desired, a p value
Step14: Exercise | Python Code:
%matplotlib inline
from pymc.examples import gelman_bioassay
from pymc import MCMC, Matplot, Metropolis
import seaborn as sns; sns.set_context('notebook')
M = MCMC(gelman_bioassay)
M.use_step_method(Metropolis, M.alpha, scale=0.001)
M.sample(1000, tune_interval=1000)
Matplot.plot(M.alpha)
Explanation: Model checking and diagnostics
Convergence Diagnostics
Valid inferences from sequences of MCMC samples are based on the
assumption that the samples are derived from the true posterior
distribution of interest. Theory guarantees this condition as the number
of iterations approaches infinity. It is important, therefore, to
determine the minimum number of samples required to ensure a reasonable
approximation to the target posterior density. Unfortunately, no
universal threshold exists across all problems, so convergence must be
assessed independently each time MCMC estimation is performed. The
procedures for verifying convergence are collectively known as
convergence diagnostics.
One approach to analyzing convergence is analytical, whereby the
variance of the sample at different sections of the chain are compared
to that of the limiting distribution. These methods use distance metrics
to analyze convergence, or place theoretical bounds on the sample
variance, and though they are promising, they are generally difficult to
use and are not prominent in the MCMC literature. More common is a
statistical approach to assessing convergence. With this approach,
rather than considering the properties of the theoretical target
distribution, only the statistical properties of the observed chain are
analyzed. Reliance on the sample alone restricts such convergence
criteria to heuristics. As a result, convergence cannot be guaranteed.
Although evidence for lack of convergence using statistical convergence
diagnostics will correctly imply lack of convergence in the chain, the
absence of such evidence will not guarantee convergence in the chain.
Nevertheless, negative results for one or more criteria may provide some
measure of assurance to users that their sample will provide valid
inferences.
For most simple models, convergence will occur quickly, sometimes within
a the first several hundred iterations, after which all remaining
samples of the chain may be used to calculate posterior quantities. For
more complex models, convergence requires a significantly longer burn-in
period; sometimes orders of magnitude more samples are needed.
Frequently, lack of convergence will be caused by poor mixing.
Recall that mixing refers to the degree to which the Markov
chain explores the support of the posterior distribution. Poor mixing
may stem from inappropriate proposals (if one is using the
Metropolis-Hastings sampler) or from attempting to estimate models with
highly correlated variables.
End of explanation
M = MCMC(gelman_bioassay)
M.sample(10000, burn=5000)
Matplot.plot(M.beta)
Explanation: Informal Methods
The most straightforward approach for assessing convergence is based on
simply plotting and inspecting traces and histograms of the observed
MCMC sample. If the trace of values for each of the stochastics exhibits
asymptotic behavior over the last $m$ iterations, this may be
satisfactory evidence for convergence.
End of explanation
import matplotlib.pyplot as plt
fig, axes = plt.subplots(2, 5, figsize=(14,6))
axes = axes.ravel()
for i in range(10):
axes[i].hist(M.beta.trace()[500*i:500*(i+1)])
plt.tight_layout()
Explanation: A similar approach involves
plotting a histogram for every set of $k$ iterations (perhaps 50-100)
beyond some burn in threshold $n$; if the histograms are not visibly
different among the sample intervals, this may be considered some evidence for
convergence. Note that such diagnostics should be carried out for each
stochastic estimated by the MCMC algorithm, because convergent behavior
by one variable does not imply evidence for convergence for other
variables in the analysis.
End of explanation
from pymc.examples import disaster_model
M = MCMC(disaster_model)
M.early_mean.set_value(0.5)
M.sample(1000)
M.early_mean.set_value(5)
M.sample(1000)
plt.plot(M.early_mean.trace(chain=0)[:200], 'r--')
plt.plot(M.early_mean.trace(chain=1)[:200], 'k--')
Explanation: An extension of this approach can be taken
when multiple parallel chains are run, rather than just a single, long
chain. In this case, the final values of $c$ chains run for $n$
iterations are plotted in a histogram; just as above, this is repeated
every $k$ iterations thereafter, and the histograms of the endpoints are
plotted again and compared to the previous histogram. This is repeated
until consecutive histograms are indistinguishable.
Another ad hoc method for detecting lack of convergence is to examine
the traces of several MCMC chains initialized with different starting
values. Overlaying these traces on the same set of axes should (if
convergence has occurred) show each chain tending toward the same
equilibrium value, with approximately the same variance. Recall that the
tendency for some Markov chains to converge to the true (unknown) value
from diverse initial values is called ergodicity. This property is
guaranteed by the reversible chains constructed using MCMC, and should
be observable using this technique. Again, however, this approach is
only a heuristic method, and cannot always detect lack of convergence,
even though chains may appear ergodic.
End of explanation
from pymc import geweke
M = MCMC(gelman_bioassay)
M.sample(5000)
d = geweke(M.beta, intervals=15)
Matplot.geweke_plot(d, 'alpha')
Explanation: A principal reason that evidence from informal techniques cannot
guarantee convergence is a phenomenon called metastability. Chains may
appear to have converged to the true equilibrium value, displaying
excellent qualities by any of the methods described above. However,
after some period of stability around this value, the chain may suddenly
move to another region of the parameter space. This period
of metastability can sometimes be very long, and therefore escape
detection by these convergence diagnostics. Unfortunately, there is no
statistical technique available for detecting metastability.
Formal Methods
Along with the ad hoc techniques described above, a number of more
formal methods exist which are prevalent in the literature. These are
considered more formal because they are based on existing statistical
methods, such as time series analysis.
PyMC currently includes three formal convergence diagnostic methods. The
first, proposed by Geweke (1992), is a time-series approach that
compares the mean and variance of segments from the beginning and end of
a single chain.
$$z = \frac{\bar{\theta}_a - \bar{\theta}_b}{\sqrt{S_a(0) + S_b(0)}}$$
where $a$ is the early interval and $b$ the late interval, and $S_i(0)$ is the spectral density estimate at zero frequency for chain segment $i$. If the
z-scores (theoretically distributed as standard normal variates) of
these two segments are similar, it can provide evidence for convergence.
PyMC calculates z-scores of the difference between various initial
segments along the chain, and the last 50% of the remaining chain. If
the chain has converged, the majority of points should fall within 2
standard deviations of zero.
In PyMC, diagnostic z-scores can be obtained by calling the geweke function. It
accepts either (1) a single trace, (2) a Node or Stochastic object, or
(4) an entire Model object:
End of explanation
from pymc import raftery_lewis
M = MCMC(gelman_bioassay)
M.sample(1000)
raftery_lewis(M.alpha, q=0.025, r=0.01)
Explanation: The arguments expected are the following:
pymc_object: The object that is or contains the output trace(s).
first (optional): First portion of chain to be used in Geweke
diagnostic. Defaults to 0.1 (i.e. first 10% of chain).
last (optional): Last portion of chain to be used in Geweke
diagnostic. Defaults to 0.5 (i.e. last 50% of chain).
intervals (optional): Number of sub-chains to analyze. Defaults to
20.
The resulting scores are best interpreted graphically, using the
geweke_plot function. This displays the scores in series, in relation
to the 2 standard deviation boundaries around zero. Hence, it is easy to
see departures from the standard normal assumption.
The second diagnostic provided by PyMC is the Raftery and Lewis (1992)
procedure. This approach estimates the number of iterations required to
reach convergence, along with the number of burn-in samples to be
discarded and the appropriate thinning interval. A separate estimate of
both quantities can be obtained for each variable in a given model.
As the criterion for determining convergence, the Raftery and Lewis
approach uses the accuracy of estimation of a user-specified quantile.
For example, we may want to estimate the quantile $q=0.975$ to within
$r=0.005$ with probability $s=0.95$. In other words,
$$Pr(|\hat{q}-q| \le r) = s$$
From any sample of $\theta$, one can construct a binary chain:
$$Z^{(j)} = I(\theta^{(j)} \le u_q)$$
where $u_q$ is the quantile value and $I$ is the indicator function.
While ${\theta^{(j)}}$ is a Markov chain, ${Z^{(j)}}$ is not
necessarily so. In any case, the serial dependency among $Z^{(j)}$
decreases as the thinning interval $k$ increases. A value of $k$ is
chosen to be the smallest value such that the first order Markov chain
is preferable to the second order Markov chain.
This thinned sample is used to determine number of burn-in samples. This
is done by comparing the remaining samples from burn-in intervals of
increasing length to the limiting distribution of the chain. An
appropriate value is one for which the truncated sample's distribution
is within $\epsilon$ (arbitrarily small) of the limiting distribution.
Estimates for sample size tend to be conservative.
This diagnostic is best used on a short pilot run of a particular model,
and the results used to parameterize a subsequent sample that is to be
used for inference.
End of explanation
from pymc import gelman_rubin
M = MCMC(gelman_bioassay)
M.sample(1000)
M.sample(1000)
M.sample(1000)
gelman_rubin(M)
Explanation: The arguments are:
pymc_object: The object that contains the Geweke scores. Can be a
list (one set) or a dictionary (multiple sets).
q: Desired quantile to be estimated.
r: Desired accuracy for quantile.
s (optional): Probability of attaining the requested accuracy
(defaults to 0.95).
epsilon (optional) : Half width of the tolerance interval required
for the q-quantile (defaults to 0.001).
The third convergence diagnostic provided by PyMC is the Gelman-Rubin
statistic Gelman and Rubin (1992). This diagnostic uses multiple chains to
check for lack of convergence, and is based on the notion that if
multiple chains have converged, by definition they should appear very
similar to one another; if not, one or more of the chains has failed to
converge.
The Gelman-Rubin diagnostic uses an analysis of variance approach to
assessing convergence. That is, it calculates both the between-chain
varaince (B) and within-chain varaince (W), and assesses whether they
are different enough to worry about convergence. Assuming $m$ chains,
each of length $n$, quantities are calculated by:
$$\begin{align}B &= \frac{n}{m-1} \sum_{j=1}^m (\bar{\theta}{.j} - \bar{\theta}{..})^2 \
W &= \frac{1}{m} \sum_{j=1}^m \left[ \frac{1}{n-1} \sum_{i=1}^n (\theta_{ij} - \bar{\theta}_{.j})^2 \right]
\end{align}$$
for each scalar estimand $\theta$. Using these values, an estimate of
the marginal posterior variance of $\theta$ can be calculated:
$$\hat{\text{Var}}(\theta | y) = \frac{n-1}{n} W + \frac{1}{n} B$$
Assuming $\theta$ was initialized to arbitrary starting points in each
chain, this quantity will overestimate the true marginal posterior
variance. At the same time, $W$ will tend to underestimate the
within-chain variance early in the sampling run. However, in the limit
as $n \rightarrow
\infty$, both quantities will converge to the true variance of $\theta$.
In light of this, the Gelman-Rubin statistic monitors convergence using
the ratio:
$$\hat{R} = \sqrt{\frac{\hat{\text{Var}}(\theta | y)}{W}}$$
This is called the potential scale reduction, since it is an estimate of
the potential reduction in the scale of $\theta$ as the number of
simulations tends to infinity. In practice, we look for values of
$\hat{R}$ close to one (say, less than 1.1) to be confident that a
particular estimand has converged. In PyMC, the function
gelman_rubin will calculate $\hat{R}$ for each stochastic node in
the passed model:
End of explanation
plt.figure(figsize=(10,6))
Matplot.summary_plot(M)
Explanation: For the best results, each chain should be initialized to highly
dispersed starting values for each stochastic node.
By default, when calling the summary_plot function using nodes with
multiple chains, the $\hat{R}$ values will be plotted alongside the
posterior intervals.
End of explanation
from pymc import Normal, Binomial, deterministic, invlogit
n = [5]*4
dose = [-.86,-.3,-.05,.73]
x = [0,1,3,5]
alpha = Normal('alpha', mu=0.0, tau=0.01)
beta = Normal('beta', mu=0.0, tau=0.01)
@deterministic
def theta(a=alpha, b=beta, d=dose):
theta = inv_logit(a+b)
return invlogit(a+b*d)
# deaths ~ binomial(n, p)
deaths = Binomial('deaths', n=n, p=theta, value=x, observed=True)
Explanation: Goodness of Fit
Checking for model convergence is only the first step in the evaluation
of MCMC model outputs. It is possible for an entirely unsuitable model
to converge, so additional steps are needed to ensure that the estimated
model adequately fits the data. One intuitive way of evaluating model
fit is to compare model predictions with the observations used to fit
the model. In other words, the fitted model can be used to simulate
data, and the distribution of the simulated data should resemble the
distribution of the actual data.
Fortunately, simulating data from the model is a natural component of
the Bayesian modelling framework. Recall, from the discussion on
imputation of missing data, the posterior predictive distribution:
$$p(\tilde{y}|y) = \int p(\tilde{y}|\theta) f(\theta|y) d\theta$$
Here, $\tilde{y}$ represents some hypothetical new data that would be
expected, taking into account the posterior uncertainty in the model
parameters. Sampling from the posterior predictive distribution is easy
in PyMC. The code looks identical to the corresponding data stochastic,
with two modifications: (1) the node should be specified as
deterministic and (2) the statistical likelihoods should be replaced by
random number generators. Consider the gelman_bioassay example,
where deaths are modeled as a binomial random variable for which
the probability of death is a logit-linear function of the dose of a
particular drug.
End of explanation
deaths_sim = Binomial('deaths_sim', n=n, p=theta)
Explanation: The posterior predictive distribution of deaths uses the same functional
form as the data likelihood, in this case a binomial stochastic. Here is
the corresponding sample from the posterior predictive distribution:
End of explanation
M_gof = MCMC([alpha, beta, theta, deaths, deaths_sim])
M_gof.sample(2000, 1000)
Matplot.gof_plot(deaths_sim.trace(), x, bins=10)
Explanation: Notice that the observed stochastic Binomial has been replaced
with a stochastic node that is identical in every respect to `deaths`,
except that its values are not fixed to be the observed data -- they are
left to vary according to the values of the fitted parameters.
The degree to which simulated data correspond to observations can be
evaluated in at least two ways. First, these quantities can simply be
compared visually. This allows for a qualitative comparison of
model-based replicates and observations. If there is poor fit, the true
value of the data may appear in the tails of the histogram of replicated
data, while a good fit will tend to show the true data in
high-probability regions of the posterior predictive distribution.
The Matplot package in PyMC provides an easy way of producing such
plots, via the gof_plot function.
End of explanation
from pymc import discrepancy
expected = theta.trace
d = discrepancy(x, deaths_sim, (theta.trace() * n).T)
d[0][:10], d[1][:10]
Explanation: A second approach for evaluating goodness of fit using samples from the
posterior predictive distribution involves the use of a statistical
criterion. For example, the Bayesian p-value (Gelman et al. 1996) uses a
discrepancy measure that quantifies the difference between data
(observed or simulated) and the expected value, conditional on some
model. One such discrepancy measure is the Freeman-Tukey statistic
(Brooks et al. 2000):
$$D(x|\theta) = \sum_j (\sqrt{x_j}-\sqrt{e_j})^2,$$
where the $x_j$ are data and $e_j$ are the corresponding expected
values, based on the model. Model fit is assessed by comparing the
discrepancies from observed data to those from simulated data. On
average, we expect the difference between them to be zero; hence, the
Bayesian p value is simply the proportion of simulated discrepancies
that are larger than their corresponding observed discrepancies:
$$p = Pr[ D(x_{\text{sim}}|\theta) > D(x_{\text{obs}}|\theta) ]$$
If $p$ is very large (e.g. $>0.975$) or very small (e.g. $\lt 0.025$) this
implies that the model is not consistent with the data, and thus is
evidence of lack of fit. Graphically, data and simulated discrepancies
plotted together should be clustered along a 45 degree line passing
through the origin.
The discrepancy function in the diagnostics package can be used to
generate discrepancy statistics from arrays of data, simulated values,
and expected values:
D = pymc.discrepancy(x, x_sim, x_exp)
End of explanation
Matplot.discrepancy_plot(d)
Explanation: For a dataset of size $n$ and an MCMC chain of length $r$, this implies
that x is size (n,), x_sim is size (r,n) and x_exp is either
size (r,) or (r,n). A call to this function returns two arrays of
discrepancy values (simulated and observed), which can be passed to the
discrepancy_plot function in the `Matplot` module to generate a
scatter plot, and if desired, a p value:
End of explanation
r_t_obs = [3, 7, 5, 102, 28, 4, 98, 60, 25, 138, 64, 45, 9, 57, 25, 33, 28, 8, 6, 32, 27, 22]
n_t_obs = [38, 114, 69, 1533, 355, 59, 945, 632, 278,1916, 873, 263, 291, 858, 154, 207, 251, 151, 174, 209, 391, 680]
r_c_obs = [3, 14, 11, 127, 27, 6, 152, 48, 37, 188, 52, 47, 16, 45, 31, 38, 12, 6, 3, 40, 43, 39]
n_c_obs = [39, 116, 93, 1520, 365, 52, 939, 471, 282, 1921, 583, 266, 293, 883, 147, 213, 122, 154, 134, 218, 364, 674]
N = len(n_c_obs)
# Write your answer here
Explanation: Exercise: Meta-analysis of beta blocker effectiveness
Carlin (1992) considers a Bayesian approach to meta-analysis, and includes the following examples of 22 trials of beta-blockers to prevent mortality after myocardial infarction.
In a random effects meta-analysis we assume the true effect (on a log-odds scale) $d_i$ in a trial $i$
is drawn from some population distribution. Let $r^C_i$ denote number of events in the control group in trial $i$,
and $r^T_i$ denote events under active treatment in trial $i$. Our model is:
$$\begin{aligned}
r^C_i &\sim \text{Binomial}\left(p^C_i, n^C_i\right) \
r^T_i &\sim \text{Binomial}\left(p^T_i, n^T_i\right) \
\text{logit}\left(p^C_i\right) &= \mu_i \
\text{logit}\left(p^T_i\right) &= \mu_i + \delta_i \
\delta_i &\sim \text{Normal}(d, t) \
\mu_i &\sim \text{Normal}(m, s)
\end{aligned}$$
We want to make inferences about the population effect $d$, and the predictive distribution for the effect $\delta_{\text{new}}$ in a new trial. Build a model to estimate these quantities in PyMC, and (1) use convergence diagnostics to check for convergence and (2) use posterior predictive checks to assess goodness-of-fit.
Here are the data:
End of explanation |
4,952 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework #4
These problem sets focus on list comprehensions, string operations and regular expressions.
Problem set #1
Step1: In the following cell, complete the code with an expression that evaluates to a list of integers derived from the raw numbers in numbers_str, assigning the value of this expression to a variable numbers. If you do everything correctly, executing the cell should produce the output 985 (not '985').
Step2: Great! We'll be using the numbers list you created above in the next few problems.
In the cell below, fill in the square brackets so that the expression evaluates to a list of the ten largest values in numbers. Expected output
Step3: In the cell below, write an expression that evaluates to a list of the integers from numbers that are evenly divisible by three, sorted in numerical order. Expected output
Step4: Okay. You're doing great. Now, in the cell below, write an expression that evaluates to a list of the square roots of all the integers in numbers that are less than 100. In order to do this, you'll need to use the sqrt function from the math module, which I've already imported for you. Expected output
Step5: Problem set #2
Step6: Now, in the cell below, write a list comprehension that evaluates to a list of names of the planets that have a diameter greater than four earth radii. Expected output
Step7: In the cell below, write a single expression that evaluates to the sum of the mass of all planets in the solar system. Expected output
Step8: Good work. Last one with the planets. Write an expression that evaluates to the names of the planets that have the word giant anywhere in the value for their type key. Expected output
Step9: EXTREME BONUS ROUND
Step10: In the cell above, I defined a variable poem_lines which has a list of lines in the poem, and imported the re library.
In the cell below, write a list comprehension (using re.search()) that evaluates to a list of lines that contain two words next to each other (separated by a space) that have exactly four characters. (Hint
Step11: Good! Now, in the following cell, write a list comprehension that evaluates to a list of lines in the poem that end with a five-letter word, regardless of whether or not there is punctuation following the word at the end of the line. (Hint
Step12: Okay, now a slightly trickier one. In the cell below, I've created a string all_lines which evaluates to the entire text of the poem in one string. Execute this cell.
Step13: Now, write an expression that evaluates to all of the words in the poem that follow the word 'I'. (The strings in the resulting list should not include the I.) Hint
Step14: Finally, something super tricky. Here's a list of strings that contains a restaurant menu. Your job is to wrangle this plain text, slightly-structured data into a list of dictionaries.
Step15: You'll need to pull out the name of the dish and the price of the dish. The v after the hyphen indicates that the dish is vegetarian---you'll need to include that information in your dictionary as well. I've included the basic framework; you just need to fill in the contents of the for loop.
Expected output | Python Code:
numbers_str = '496,258,332,550,506,699,7,985,171,581,436,804,736,528,65,855,68,279,721,120'
Explanation: Homework #4
These problem sets focus on list comprehensions, string operations and regular expressions.
Problem set #1: List slices and list comprehensions
Let's start with some data. The following cell contains a string with comma-separated integers, assigned to a variable called numbers_str:
End of explanation
raw_data = numbers_str.split(",")
numbers = []
for i in raw_data:
numbers.append(int(i))
numbers
#max(numbers)
Explanation: In the following cell, complete the code with an expression that evaluates to a list of integers derived from the raw numbers in numbers_str, assigning the value of this expression to a variable numbers. If you do everything correctly, executing the cell should produce the output 985 (not '985').
End of explanation
sorted(numbers)[11:]
Explanation: Great! We'll be using the numbers list you created above in the next few problems.
In the cell below, fill in the square brackets so that the expression evaluates to a list of the ten largest values in numbers. Expected output:
[506, 528, 550, 581, 699, 721, 736, 804, 855, 985]
(Hint: use a slice.)
End of explanation
[i for i in numbers if i % 3 == 0]
Explanation: In the cell below, write an expression that evaluates to a list of the integers from numbers that are evenly divisible by three, sorted in numerical order. Expected output:
[120, 171, 258, 279, 528, 699, 804, 855]
End of explanation
from math import sqrt
[sqrt(i) for i in numbers if i < 100]
Explanation: Okay. You're doing great. Now, in the cell below, write an expression that evaluates to a list of the square roots of all the integers in numbers that are less than 100. In order to do this, you'll need to use the sqrt function from the math module, which I've already imported for you. Expected output:
[2.6457513110645907, 8.06225774829855, 8.246211251235321]
(These outputs might vary slightly depending on your platform.)
End of explanation
planets = [
{'diameter': 0.382,
'mass': 0.06,
'moons': 0,
'name': 'Mercury',
'orbital_period': 0.24,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 0.949,
'mass': 0.82,
'moons': 0,
'name': 'Venus',
'orbital_period': 0.62,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 1.00,
'mass': 1.00,
'moons': 1,
'name': 'Earth',
'orbital_period': 1.00,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 0.532,
'mass': 0.11,
'moons': 2,
'name': 'Mars',
'orbital_period': 1.88,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 11.209,
'mass': 317.8,
'moons': 67,
'name': 'Jupiter',
'orbital_period': 11.86,
'rings': 'yes',
'type': 'gas giant'},
{'diameter': 9.449,
'mass': 95.2,
'moons': 62,
'name': 'Saturn',
'orbital_period': 29.46,
'rings': 'yes',
'type': 'gas giant'},
{'diameter': 4.007,
'mass': 14.6,
'moons': 27,
'name': 'Uranus',
'orbital_period': 84.01,
'rings': 'yes',
'type': 'ice giant'},
{'diameter': 3.883,
'mass': 17.2,
'moons': 14,
'name': 'Neptune',
'orbital_period': 164.8,
'rings': 'yes',
'type': 'ice giant'}]
Explanation: Problem set #2: Still more list comprehensions
Still looking good. Let's do a few more with some different data. In the cell below, I've defined a data structure and assigned it to a variable planets. It's a list of dictionaries, with each dictionary describing the characteristics of a planet in the solar system. Make sure to run the cell before you proceed.
End of explanation
earth_diameter = [i['diameter'] for i in planets if i['name'] == "Earth"]
earth = int(earth_diameter[0])
[i['name'] for i in planets if i['diameter'] > 4 * earth]
Explanation: Now, in the cell below, write a list comprehension that evaluates to a list of names of the planets that have a diameter greater than four earth radii. Expected output:
['Jupiter', 'Saturn', 'Uranus']
End of explanation
#count = 0
#for i in planets:
#count = count + i['mass']
#print(count)
sum([i['mass'] for i in planets])
Explanation: In the cell below, write a single expression that evaluates to the sum of the mass of all planets in the solar system. Expected output: 446.79
End of explanation
[i['name'] for i in planets if "giant" in i['type']]
Explanation: Good work. Last one with the planets. Write an expression that evaluates to the names of the planets that have the word giant anywhere in the value for their type key. Expected output:
['Jupiter', 'Saturn', 'Uranus', 'Neptune']
End of explanation
import re
poem_lines = ['Two roads diverged in a yellow wood,',
'And sorry I could not travel both',
'And be one traveler, long I stood',
'And looked down one as far as I could',
'To where it bent in the undergrowth;',
'',
'Then took the other, as just as fair,',
'And having perhaps the better claim,',
'Because it was grassy and wanted wear;',
'Though as for that the passing there',
'Had worn them really about the same,',
'',
'And both that morning equally lay',
'In leaves no step had trodden black.',
'Oh, I kept the first for another day!',
'Yet knowing how way leads on to way,',
'I doubted if I should ever come back.',
'',
'I shall be telling this with a sigh',
'Somewhere ages and ages hence:',
'Two roads diverged in a wood, and I---',
'I took the one less travelled by,',
'And that has made all the difference.']
Explanation: EXTREME BONUS ROUND: Write an expression below that evaluates to a list of the names of the planets in ascending order by their number of moons. (The easiest way to do this involves using the key parameter of the sorted function, which we haven't yet discussed in class! That's why this is an EXTREME BONUS question.) Expected output:
['Mercury', 'Venus', 'Earth', 'Mars', 'Neptune', 'Uranus', 'Saturn', 'Jupiter']
Problem set #3: Regular expressions
In the following section, we're going to do a bit of digital humanities. (I guess this could also be journalism if you were... writing an investigative piece about... early 20th century American poetry?) We'll be working with the following text, Robert Frost's The Road Not Taken. Make sure to run the following cell before you proceed.
End of explanation
[line for line in poem_lines if re.search(r"\b\w\w\w\w\b \b\w\w\w\w\b", line)]
Explanation: In the cell above, I defined a variable poem_lines which has a list of lines in the poem, and imported the re library.
In the cell below, write a list comprehension (using re.search()) that evaluates to a list of lines that contain two words next to each other (separated by a space) that have exactly four characters. (Hint: use the \b anchor. Don't overthink the "two words in a row" requirement.)
Expected result:
['Then took the other, as just as fair,',
'Had worn them really about the same,',
'And both that morning equally lay',
'I doubted if I should ever come back.',
'I shall be telling this with a sigh']
End of explanation
[line for line in poem_lines if re.search(r"\b\w{5}[^0-9a-zA-Z]?$", line)]
Explanation: Good! Now, in the following cell, write a list comprehension that evaluates to a list of lines in the poem that end with a five-letter word, regardless of whether or not there is punctuation following the word at the end of the line. (Hint: Try using the ? quantifier. Is there an existing character class, or a way to write a character class, that matches non-alphanumeric characters?) Expected output:
['And be one traveler, long I stood',
'And looked down one as far as I could',
'And having perhaps the better claim,',
'Though as for that the passing there',
'In leaves no step had trodden black.',
'Somewhere ages and ages hence:']
End of explanation
all_lines = " ".join(poem_lines)
Explanation: Okay, now a slightly trickier one. In the cell below, I've created a string all_lines which evaluates to the entire text of the poem in one string. Execute this cell.
End of explanation
re.findall(r"I (\b\w+\b)", all_lines)
#re.findall(r"New York (\b\w+\b)", all_subjects)
Explanation: Now, write an expression that evaluates to all of the words in the poem that follow the word 'I'. (The strings in the resulting list should not include the I.) Hint: Use re.findall() and grouping! Expected output:
['could', 'stood', 'could', 'kept', 'doubted', 'should', 'shall', 'took']
End of explanation
entrees = [
"Yam, Rosemary and Chicken Bowl with Hot Sauce $10.95",
"Lavender and Pepperoni Sandwich $8.49",
"Water Chestnuts and Peas Power Lunch (with mayonnaise) $12.95 - v",
"Artichoke, Mustard Green and Arugula with Sesame Oil over noodles $9.95 - v",
"Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce $19.95",
"Rutabaga And Cucumber Wrap $8.49 - v"
]
Explanation: Finally, something super tricky. Here's a list of strings that contains a restaurant menu. Your job is to wrangle this plain text, slightly-structured data into a list of dictionaries.
End of explanation
menu = []
for item in entrees:
menu_items = {}
match = re.search(r"^(.*) \$(\d{1,2}\.\d{2})", item)
#print("name",match.group(1))
#print("price", match.group(2))
#menu_items.update({'name': match.group(1), 'price': match.group(2)})
if re.search("v$", item):
menu_items.update({'name': match.group(1), 'price': match.group(2), 'vegetarian': True})
else:
menu_items.update({'name': match.group(1), 'price': match.group(2),'vegetarian': False})
menu_items
menu.append(menu_items)
menu
Explanation: You'll need to pull out the name of the dish and the price of the dish. The v after the hyphen indicates that the dish is vegetarian---you'll need to include that information in your dictionary as well. I've included the basic framework; you just need to fill in the contents of the for loop.
Expected output:
[{'name': 'Yam, Rosemary and Chicken Bowl with Hot Sauce ',
'price': 10.95,
'vegetarian': False},
{'name': 'Lavender and Pepperoni Sandwich ',
'price': 8.49,
'vegetarian': False},
{'name': 'Water Chestnuts and Peas Power Lunch (with mayonnaise) ',
'price': 12.95,
'vegetarian': True},
{'name': 'Artichoke, Mustard Green and Arugula with Sesame Oil over noodles ',
'price': 9.95,
'vegetarian': True},
{'name': 'Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce ',
'price': 19.95,
'vegetarian': False},
{'name': 'Rutabaga And Cucumber Wrap ', 'price': 8.49, 'vegetarian': True}]
End of explanation |
4,953 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exporting Epochs to Pandas DataFrames
This tutorial shows how to export the data in
Step1: Next we'll load a list of events from file, map them to condition names with
an event dictionary, set some signal rejection thresholds (cf.
tut-reject-epochs-section), and segment the continuous data into
epochs
Step2: Converting an Epochs object to a DataFrame
Once we have our
Step3: Scaling time and channel values
By default, time values are converted from seconds to milliseconds and
then rounded to the nearest integer; if you don't want this, you can pass
time_format=None to keep time as a
Step4: Notice that the time values are no longer integers, and the channel values
have changed by several orders of magnitude compared to the earlier
DataFrame.
Setting the index
It is also possible to move one or more of the indicator columns (event name,
epoch number, and sample time) into the index <pandas
Step5: Wide- versus long-format DataFrames
Another parameter, long_format, determines whether each channel's data
is in a separate column of the
Step6: Generating the
Step7: We can also now use all the power of Pandas for grouping and transforming our
data. Here, we find the latency of peak activation of 2 gradiometers (one
near auditory cortex and one near visual cortex), and plot the distribution
of the timing of the peak in each channel as a | Python Code:
import os
import matplotlib.pyplot as plt
import seaborn as sns
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_filt-0-40_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
Explanation: Exporting Epochs to Pandas DataFrames
This tutorial shows how to export the data in :class:~mne.Epochs objects to a
:class:Pandas DataFrame <pandas.DataFrame>, and applies a typical Pandas
:doc:split-apply-combine <pandas:user_guide/groupby> workflow to examine the
latencies of the response maxima across epochs and conditions.
We'll use the sample-dataset dataset, but load a version of the raw file
that has already been filtered and downsampled, and has an average reference
applied to its EEG channels. As usual we'll start by importing the modules we
need and loading the data:
End of explanation
sample_data_events_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_filt-0-40_raw-eve.fif')
events = mne.read_events(sample_data_events_file)
event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4}
reject_criteria = dict(mag=3000e-15, # 3000 fT
grad=3000e-13, # 3000 fT/cm
eeg=100e-6, # 100 µV
eog=200e-6) # 200 µV
tmin, tmax = (-0.2, 0.5) # epoch from 200 ms before event to 500 ms after it
baseline = (None, 0) # baseline period from start of epoch to time=0
epochs = mne.Epochs(raw, events, event_dict, tmin, tmax, proj=True,
baseline=baseline, reject=reject_criteria, preload=True)
del raw
Explanation: Next we'll load a list of events from file, map them to condition names with
an event dictionary, set some signal rejection thresholds (cf.
tut-reject-epochs-section), and segment the continuous data into
epochs:
End of explanation
df = epochs.to_data_frame()
df.iloc[:5, :10]
Explanation: Converting an Epochs object to a DataFrame
Once we have our :class:~mne.Epochs object, converting it to a
:class:~pandas.DataFrame is simple: just call :meth:epochs.to_data_frame()
<mne.Epochs.to_data_frame>. Each channel's data will be a column of the new
:class:~pandas.DataFrame, alongside three additional columns of event name,
epoch number, and sample time. Here we'll just show the first few rows and
columns:
End of explanation
df = epochs.to_data_frame(time_format=None,
scalings=dict(eeg=1, mag=1, grad=1))
df.iloc[:5, :10]
Explanation: Scaling time and channel values
By default, time values are converted from seconds to milliseconds and
then rounded to the nearest integer; if you don't want this, you can pass
time_format=None to keep time as a :class:float value in seconds, or
convert it to a :class:~pandas.Timedelta value via
time_format='timedelta'.
Note also that, by default, channel measurement values are scaled so that EEG
data are converted to µV, magnetometer data are converted to fT, and
gradiometer data are converted to fT/cm. These scalings can be customized
through the scalings parameter, or suppressed by passing
scalings=dict(eeg=1, mag=1, grad=1).
End of explanation
df = epochs.to_data_frame(index=['condition', 'epoch'],
time_format='timedelta')
df.iloc[:5, :10]
Explanation: Notice that the time values are no longer integers, and the channel values
have changed by several orders of magnitude compared to the earlier
DataFrame.
Setting the index
It is also possible to move one or more of the indicator columns (event name,
epoch number, and sample time) into the index <pandas:indexing>, by
passing a string or list of strings as the index parameter. We'll also
demonstrate here the effect of time_format='timedelta', yielding
:class:~pandas.Timedelta values in the "time" column.
End of explanation
long_df = epochs.to_data_frame(time_format=None, index='condition',
long_format=True)
long_df.head()
Explanation: Wide- versus long-format DataFrames
Another parameter, long_format, determines whether each channel's data
is in a separate column of the :class:~pandas.DataFrame
(long_format=False), or whether the measured values are pivoted into a
single 'value' column with an extra indicator column for the channel name
(long_format=True). Passing long_format=True will also create an
extra column ch_type indicating the channel type.
End of explanation
plt.figure()
channels = ['MEG 1332', 'MEG 1342']
data = long_df.loc['auditory/left'].query('channel in @channels')
# convert channel column (CategoryDtype → string; for a nicer-looking legend)
data['channel'] = data['channel'].astype(str)
data.reset_index(drop=True, inplace=True) # speeds things up
sns.lineplot(x='time', y='value', hue='channel', data=data)
Explanation: Generating the :class:~pandas.DataFrame in long format can be helpful when
using other Python modules for subsequent analysis or plotting. For example,
here we'll take data from the "auditory/left" condition, pick a couple MEG
channels, and use :func:seaborn.lineplot to automatically plot the mean and
confidence band for each channel, with confidence computed across the epochs
in the chosen condition:
End of explanation
plt.figure()
df = epochs.to_data_frame(time_format=None)
peak_latency = (df.filter(regex=r'condition|epoch|MEG 1332|MEG 2123')
.groupby(['condition', 'epoch'])
.aggregate(lambda x: df['time'].iloc[x.idxmax()])
.reset_index()
.melt(id_vars=['condition', 'epoch'],
var_name='channel',
value_name='latency of peak')
)
ax = sns.violinplot(x='channel', y='latency of peak', hue='condition',
data=peak_latency, palette='deep', saturation=1)
Explanation: We can also now use all the power of Pandas for grouping and transforming our
data. Here, we find the latency of peak activation of 2 gradiometers (one
near auditory cortex and one near visual cortex), and plot the distribution
of the timing of the peak in each channel as a :func:~seaborn.violinplot:
End of explanation |
4,954 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ejemplo de word2vec con gensim
En la siguiente celda, importamos las librerías necesarias y configuramos los mensajes de los logs.
Step1: Entrenamiento de un modelo
Implemento una clase Corpus con un iterador sobre un directorio que contiene ficheros de texto. Utilizaré una instancia de Corpus para poder procesar de manera más eficiente una colección, sin necesidad de cargarlo previamente en memoria.
Step2: CORPUSDIR contiene una colección de noticias en español (normalizada previamente a minúsculas y sin signos de puntuación) con alrededor de 150 millones de palabras. Entrenamos un modelo en un solo paso, ignorando aquellos tokens que aparecen menos de 10 veces, para descartar erratas.
Step3: Una vez completado el entrenamiento (después de casi 30 minutos), guardamos el modelo en disco.
Step4: En el futuro, podremos utilizar este modelo cargándolo en memoria con la instrucción
Step5: Probando nuestro modelo
El objeto model contiene una enorme matriz de números
Step6: Cada término del vocabulario está representado como un vector con 150 dimensiones
Step7: Estos vectores no nos dicen mucho, salvo que contienen números muy pequeños
Step8: Podemos seleccionar el término que no encaja a partir de una determinada lista de términos usando el método doesnt_match
Step9: Podemos buscar los términos más similares usando el método most_similar de nuestro modelo
Step10: Con el mismo método most_similar podemos combinar vectores de palabras tratando de jugar con los rasgos semánticos de cada una de ellas para descubrir nuevas relaciones. | Python Code:
import gensim, logging, os
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
Explanation: Ejemplo de word2vec con gensim
En la siguiente celda, importamos las librerías necesarias y configuramos los mensajes de los logs.
End of explanation
class Corpus(object):
'''Clase Corpus que permite leer de manera secuencial un directorio de documentos de texto'''
def __init__(self, directorio):
self.directory = directorio
def __iter__(self):
for fichero in os.listdir(self.directory):
for linea in open(os.path.join(self.directory, fichero)):
yield linea.split()
Explanation: Entrenamiento de un modelo
Implemento una clase Corpus con un iterador sobre un directorio que contiene ficheros de texto. Utilizaré una instancia de Corpus para poder procesar de manera más eficiente una colección, sin necesidad de cargarlo previamente en memoria.
End of explanation
CORPUSDIR = 'PATH_TO_YOUR_CORPUS_DIRECTORY'
oraciones = Corpus(CORPUSDIR)
model = gensim.models.Word2Vec(oraciones, min_count=10, size=150, workers=2)
# el modelo puede entrenarse en dos pasos sucesivos pero por separado
#model = gensim.models.Word2Vec() # modelo vacío
#model.build_vocab(oraciones) # primera pasada para crear la lista de vocabulario
#model.train(other_sentences) # segunda pasada para calcula vectores
Explanation: CORPUSDIR contiene una colección de noticias en español (normalizada previamente a minúsculas y sin signos de puntuación) con alrededor de 150 millones de palabras. Entrenamos un modelo en un solo paso, ignorando aquellos tokens que aparecen menos de 10 veces, para descartar erratas.
End of explanation
model.save('PATH_TO_YOUR_MODEL.w2v')
Explanation: Una vez completado el entrenamiento (después de casi 30 minutos), guardamos el modelo en disco.
End of explanation
#model = gensim.models.Word2Vec.load('PATH_TO_YOUR_MODEL.w2v')
Explanation: En el futuro, podremos utilizar este modelo cargándolo en memoria con la instrucción:
End of explanation
print(model.corpus_count)
Explanation: Probando nuestro modelo
El objeto model contiene una enorme matriz de números: una tabla, donde cada fila es uno de los términos del vocabulario reconocido y cada columna es una de las características que permiten modelar el significado de dicho término.
En nuestro modelo, tal y como está entrenado, tenemos más de 26 millones de términos:
End of explanation
print(model['azul'], '\n')
print(model['verde'], '\n')
print(model['microsoft'])
Explanation: Cada término del vocabulario está representado como un vector con 150 dimensiones: 105 características. Podemos acceder al vector de un término concreto:
End of explanation
print('hombre - mujer', model.similarity('hombre', 'mujer'))
print('madrid - parís', model.similarity('madrid', 'parís'))
print('perro - gato', model.similarity('perro', 'gato'))
print('gato - periódico', model.similarity('gato', 'periódico'))
Explanation: Estos vectores no nos dicen mucho, salvo que contienen números muy pequeños :-/
El mismo objeto model permite acceder a una serie de funcionalidades ya implementadas que nos van a permitir evaluar formal e informalmente el modelo. Por el momento, nos contentamos con los segundo: vamos a revisar visualmente los significados que nuestro modelo ha aprendido por su cuenta.
Podemos calcular la similitud semántica entre dos términos usando el método similarity, que nos devuelve un número entre 0 y 1:
End of explanation
lista1 = 'madrid barcelona gonzález washington'.split()
print('en la lista', ' '.join(lista1), 'sobra:', model.doesnt_match(lista1))
lista2 = 'psoe pp ciu epi'.split()
print('en la lista', ' '.join(lista2), 'sobra:', model.doesnt_match(lista2))
lista3 = 'publicaron declararon soy negaron'.split()
print('en la lista', ' '.join(lista3), 'sobra:', model.doesnt_match(lista3))
lista3 = 'homero saturno cervantes shakespeare cela'.split()
print('en la lista', ' '.join(lista3), 'sobra:', model.doesnt_match(lista3))
Explanation: Podemos seleccionar el término que no encaja a partir de una determinada lista de términos usando el método doesnt_match:
End of explanation
terminos = 'psoe chicago sevilla aznar podemos estuvieron'.split()
for t in terminos:
print(t, '==>', model.most_similar(t), '\n')
Explanation: Podemos buscar los términos más similares usando el método most_similar de nuestro modelo:
End of explanation
print('==> alcalde + mujer - hombre')
most_similar = model.most_similar(positive=['alcalde', 'mujer'], negative=['hombre'], topn=3)
for item in most_similar:
print(item)
print('==> madrid + filipinas - españa')
most_similar = model.most_similar(positive=['madrid', 'filipinas'], negative=['españa'], topn=3)
for item in most_similar:
print(item)
Explanation: Con el mismo método most_similar podemos combinar vectores de palabras tratando de jugar con los rasgos semánticos de cada una de ellas para descubrir nuevas relaciones.
End of explanation |
4,955 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
What is an efficient way of splitting a column into multiple rows using dask dataframe? For example, let's say I have a csv file which I read using dask to produce the following dask dataframe: | Problem:
import pandas as pd
df = pd.DataFrame([["A", "Z,Y"], ["B", "X"], ["C", "W,U,V"]], index=[1,2,3], columns=['var1', 'var2'])
def g(df):
return df.drop('var2', axis=1).join(df.var2.str.split(',', expand=True).stack().
reset_index(drop=True, level=1).rename('var2'))
result = g(df.copy()) |
4,956 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
..# Activ Spyder - Captura de página do ActivUfrj
* This file is part of program Activ Spyder
* Copyright © 2022 Carlo Oliveira carlo@nce.ufrj.br,
* Labase labase.selfip.org; GPL is.gd/3Udt.
* SPDX-License-Identifier
Step1: Campos existentes dos dados originais
| Nome | Descrição dos campos relevantes |
|-------------
Step2: Contagem da Produção de Palavras
As palavras no texto da página são contadas como forma deum aumento na habilidade de trabalhar com registros textuais.
Step3: Contagem da produção de imagens
As imagens da página são contadas como forma deum aumento na habilidade de trabalhar com registros visuais.
Step4: Segunda derivada da contagem de palavras
Os valores consecutivos de contagem de palavras são subtraídos para formar a velocidade. Velocidades consecutivas são subtraídas para obter a aceleração. Segundo a teoria metacognitiva do aprendizado, a aceleração na produção de resultados caracteriza a cognição como apta a entender o conteúdo estudado.
Step5: Segunda derivada da contagem de imagens
Os valores consecutivos de contagem de imagens são subtraídos para formar a velocidade.
Velocidades consecutivas são subtraídas para obter a aceleração.
Como já explicado anteriormente, a aceleração na produção de imagens caracteriza
um aumento na habilidade de trabalhar com registros visuais.
Step6: Estatística da aceleração de palavras
Distribuição estatística da aceleração na produção de texto pelos autores ao longo das versões.
Step7: Estatística da aceleração de palavras por autor
Distribuição estatística da aceleração na produção de texto ao longo das versões para cada autor.
Step8: Mapa da produção dos autores nas diversas versões
O mapa mostra as dimensões de aceleração de palavras e imagens nos eixos y e x. Os autores são representados pelas cores e as versões pelo tamanho dos pontos. Este mapeamento permite observar a evolução cognitiva ao longo das versões tanto da produção textual como da visual.
Step9: Estatística da aceleração de imagens por autor
Distribuição estatística da aceleração na produção visual ao longo das versões para cada autor.
Step10: Regressão correlacionando produção textual e visual
No quadrante tendo como eixos as acelerações textuais e visuais, calcula-se uma regressão linear da produções dos autores ao longo das versões.
Os coeficientes angulares das retas revelam tendências a evoluções que correlacionam as produções visuais com as textuais, com diversas abordagens cognitivas na aprendizagem. | Python Code:
import pandas as pd
df = pd.read_json("../author_data.json")
df.info()
df
Explanation: ..# Activ Spyder - Captura de página do ActivUfrj
* This file is part of program Activ Spyder
* Copyright © 2022 Carlo Oliveira carlo@nce.ufrj.br,
* Labase labase.selfip.org; GPL is.gd/3Udt.
* SPDX-License-Identifier: (GPLv3-or-later AND LGPL-2.0-only) WITH bison-exception
Crawler for SuperGame -
Obtem versões dos relatórios dos games.
codeauthor:: Carlo Oliveira carlo@ufrj.br
Changelog
versionadded:: 22.05
Criação do raspador de página.
versionchanged:: 22.06
Grráficos de aceleração.
Leitura do Arquivo Capturado pelo Crawler
End of explanation
import json
import csv
import bs4
# load data using Python JSON module
with open('../stopwords.txt','r') as f:
stopwords = f.read().split()
def image_count(html):
soup = bs4.BeautifulSoup(html)
image_tags = soup.find_all('img')
return len([img for img in image_tags if "/file/MATERIAIS.DESIGN.ARQUITETURA" in img['src']])
def word_count(html):
soup = bs4.BeautifulSoup(html)
text = soup.get_text()
words = [word for word in text.split() if word not in stopwords]
return len(words)
with open('../author_data.json','r') as f:
data = json.loads(f.read())# Flatten data
headings = ["author",
"version",
"data_cri",
"data_alt",
"alterado_por",
"owner",
"text_size",
"conta_imagem",
"conta_palavra",
"conteudo"]
datan = {key: [] for key in headings}
[datan[key].append(val) for aut in data for line in aut for key, val in line.items() if key in headings]
[datan["text_size"].append(len(line["conteudo"])) for aut in data for line in aut]
[datan["conta_imagem"].append(image_count(line["conteudo"])) for aut in data for line in aut]
[datan["conta_palavra"].append(word_count(line["conteudo"])) for aut in data for line in aut]
# datan
with open('../author_data.csv','w') as fw:
w = csv.DictWriter(fw, datan.keys())
w.writeheader()
w.writerow(datan)
df = pd.DataFrame(datan)
pd.to_datetime(df.data_cri) #, errors = 'ignore')
df.data_cri = pd.to_datetime(df.data_cri)
df.data_alt = pd.to_datetime(df.data_alt)
df["velocidade_palavra"] = df.groupby('author')['conta_palavra'].apply(lambda x: x.shift(1) - x)
df["acelera_palavra"] = df.groupby('author')['velocidade_palavra'].apply(lambda x: x.shift(1) - x)
df["velocidade_imagem"] = df.groupby('author')['conta_imagem'].apply(lambda x: x.shift(1) - x)
df["acelera_imagem"] = df.groupby('author')['velocidade_imagem'].apply(lambda x: x.shift(1) - x)
df.info()
df
df.hist()
Explanation: Campos existentes dos dados originais
| Nome | Descrição dos campos relevantes |
|-------------:|---------------------------------|
| author | Nome do participante |
| version | Versão corrente do texto |
| data_cri | Data de criação |
| data_alt | Data de alteração |
| alterado_por | Autor de alteração |
| conteudo | Conteúdo da página |
Campos Gerados a partir dos dados originais
| Nome | Descrição dos campos relevantes |
|-------------------:|-------------------------------------|
| text_size | Tamanho em letras do conteúdo |
| conta_imagem | Contagem de imagens da página |
| velocidade_imagem | Aumento de imagens da página |
| acelera_imagem | Seg. derivada de imagens da página |
| conta_palavra | Contagem de palavras da página |
| velocidade_palavra | Aumento de palavras da página |
| acelera_palavra | Seg. derivada de palavras da página |
End of explanation
import seaborn as sns
import matplotlib.pyplot as plt
sns.lineplot(x="version", y="conta_palavra", hue="author", data=df).set(
title='Contagem das palavras ao longo das versões', ylabel='Contagem das palavras')
plt.gcf().set_size_inches(20, 10)
Explanation: Contagem da Produção de Palavras
As palavras no texto da página são contadas como forma deum aumento na habilidade de trabalhar com registros textuais.
End of explanation
import seaborn as sns
import matplotlib.pyplot as plt
sns.lineplot(x="version", y="conta_imagem", hue="author", data=df).set(
title='Contagem das imagens ao longo das versões', ylabel='Contagem das imagens')
plt.gcf().set_size_inches(20,10)
Explanation: Contagem da produção de imagens
As imagens da página são contadas como forma deum aumento na habilidade de trabalhar com registros visuais.
End of explanation
import seaborn as sns
import matplotlib.pyplot as plt
sns.lineplot(x="version", y="acelera_palavra", hue="author", data=df).set(
title='Aceleração da contagem das palavras ao longo das versões', ylabel='Variação do número de palavras')
plt.gcf().set_size_inches(20, 10)
Explanation: Segunda derivada da contagem de palavras
Os valores consecutivos de contagem de palavras são subtraídos para formar a velocidade. Velocidades consecutivas são subtraídas para obter a aceleração. Segundo a teoria metacognitiva do aprendizado, a aceleração na produção de resultados caracteriza a cognição como apta a entender o conteúdo estudado.
End of explanation
import seaborn as sns
import matplotlib.pyplot as plt
sns.lineplot(x="version", y="acelera_imagem", hue="author", data=df).set(
title='Aceleração das imagens ao longo das versões', ylabel='Variação do número de imagens')
plt.gcf().set_size_inches(20, 10)
Explanation: Segunda derivada da contagem de imagens
Os valores consecutivos de contagem de imagens são subtraídos para formar a velocidade.
Velocidades consecutivas são subtraídas para obter a aceleração.
Como já explicado anteriormente, a aceleração na produção de imagens caracteriza
um aumento na habilidade de trabalhar com registros visuais.
End of explanation
import seaborn as sns
import matplotlib.pyplot as plt
sns.boxplot(x="version", y="acelera_palavra", data=df).set(
title='Distribuição da aceleração ao longo das versões', ylabel='Variação do número de palavras')
plt.gcf().set_size_inches(20, 10)
Explanation: Estatística da aceleração de palavras
Distribuição estatística da aceleração na produção de texto pelos autores ao longo das versões.
End of explanation
import seaborn as sns
import matplotlib.pyplot as plt
sns.boxplot(x="author", y="acelera_palavra", data=df).set(
title='Distribuição da aceleração ao longo das versões por autor', ylabel='Variação do número de palavras')
plt.gcf().set_size_inches(20, 10)
Explanation: Estatística da aceleração de palavras por autor
Distribuição estatística da aceleração na produção de texto ao longo das versões para cada autor.
End of explanation
sns.relplot(x="acelera_palavra", y="acelera_imagem", hue="author", size="version",
alpha=.5, palette="muted", sizes=(40, 400),
data=df)
plt.gcf().set_size_inches(20, 10)
Explanation: Mapa da produção dos autores nas diversas versões
O mapa mostra as dimensões de aceleração de palavras e imagens nos eixos y e x. Os autores são representados pelas cores e as versões pelo tamanho dos pontos. Este mapeamento permite observar a evolução cognitiva ao longo das versões tanto da produção textual como da visual.
End of explanation
import seaborn as sns
import matplotlib.pyplot as plt
sns.boxplot(x="author", y="acelera_imagem", data=df).set(
title='Distribuição da aceleração ao longo das versões por autor', ylabel='Variação do número de imagens')
plt.gcf().set_size_inches(20, 10)
Explanation: Estatística da aceleração de imagens por autor
Distribuição estatística da aceleração na produção visual ao longo das versões para cada autor.
End of explanation
import seaborn as sns
import matplotlib.pyplot as plt
sns.lmplot(x="acelera_imagem", y="acelera_palavra", hue="author", data=df).set(
title='Aceleração texto x imagem ao longo das versões por autor',
ylabel='Aceleração do volume de texto', xlabel='Aceleração da contagem de imagens')
plt.xlim(-15, 15)
plt.ylim(-100, 100)
plt.gcf().set_size_inches(20, 10)
Explanation: Regressão correlacionando produção textual e visual
No quadrante tendo como eixos as acelerações textuais e visuais, calcula-se uma regressão linear da produções dos autores ao longo das versões.
Os coeficientes angulares das retas revelam tendências a evoluções que correlacionam as produções visuais com as textuais, com diversas abordagens cognitivas na aprendizagem.
End of explanation |
4,957 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Programming hands on
習うより慣れろ!(narau yori narero) more practice, less learning; practice makes perfect.
a + b
input
Step1: a+b modified (1)
Add grand total at the end of line.
input
Step2: Prime number (2)
write a function, which takes 1 parameter (n), and returns n-th prime number.
example | Python Code:
# write answer
Explanation: Programming hands on
習うより慣れろ!(narau yori narero) more practice, less learning; practice makes perfect.
a + b
input: each line contains 2 integer
output: print sum of the 2 integer
input: input_ab.txt, output: standard out
```
sample input:
1 1
100 150
123 321
11112222 22223333
sample output:
2
250
444
33335555
```
End of explanation
def is_prime_not_efficient(n):
if n < 2: return False
for div in range(2,n):
if n % div == 0:
return False # not Prime
return True
print(2, is_prime_not_efficient(2))
print(97, is_prime_not_efficient(97))
print(1000000, is_prime_not_efficient(1000000))
print(1000003, is_prime_not_efficient(1000003))
#!/usr/bin/python3
# -*- coding: utf-8 -*-
'''
find prime numbers up to MAX_NUMBER
sieve of Eratosthenes
'''
MAX_NUMBER = 100
prime = [True for i in range(MAX_NUMBER+1)]
prime[0] = False # 0 is not prime
prime[1] = False # 1 is not prime
sqrt_max = int(MAX_NUMBER ** 0.5) # check up to root(MAX_NUMBER)
for i in range(2, sqrt_max + 1):
if prime[i]:
for n in range(i+i, MAX_NUMBER+1, i):
prime[n] = False
num_prime = 0
for i in range(MAX_NUMBER):
if prime[i]:
num_prime += 1
print(i, end=' ')
# print(num_prime, 'prime number found under', MAX_NUMBER)
Explanation: a+b modified (1)
Add grand total at the end of line.
input: input_ab.txt, output: standard out
```
sample input:
1 1
100 150
123 321
11112222 22223333
sample output:
2
250
444
33335555
33336251
```
a+b modified (2)
input: each line contains 1 or more integer
output: print sum of the input, and grand total
input: inputab3.txt, output: standard out
```
sample input:
1
1 2 3
123 321
10 9 8 7 6 5 4 3 2 1
sample output:
1
6
444
55
506
```
a+b modified (3)
input: each line contains 1 or more integer
output: print sum of the input. Grand Total, Average, Minimum of line total, Maximum of line total
Use round() function for average.
input: inputab3.txt, output: standard out
```
sample input:
1
1 2 3
123 321
10 9 8 7 6 5 4 3 2 1
sample output:
1
6
444
55
Grand Total: 506 Average: 126.5 Min: 1 Max: 444
```
Prime numbers (1)
print all prime numbers below 100 (2 3 5 7 .... 97) (hint: Sieve of Eratosthenes)
The definition of Prime Number is that it can be divided only 1 and itself. (see is_prime_not_efficient() function below.
But in order to check if the number $N$ is prime or not, it's not necessary to check up to $N-1$, but the prime number equal or less of $\sqrt{N}$, because if N is a product of $A$ and $B$, one of the divider is equal or less than $\sqrt{N}$.
Definition-1: Prime number is a integer bigger than 1 and can be divided only by 1 and itself
Definition-2: Composite Number is a integer bigger than 1, and not Prime number (i.e. has more than 1 divider except 1 and itself)
Definition-3: Prime Number is not Composit number
Definition-4: Composite Number can be factorized. If a number is a composite number, it is a product of smaller Prime numbers.
Definition-5: Every integer bigger than 1 is either Prime or Composite number
Premise-1: Composite Number has a divider of Prime number equal or less than $\sqrt{N}$
Conclusion: If $N$ can not be divided by the integer eqaul or less than $\sqrt{N}$, it's Prime number
Proof of Premise-1:
If N is a Composite Number (Non-Prime-Number) and product of positive integer A and B, and if A is smaller or equal to B, then A is smaller or equal to $\sqrt{N}$
$$(N = A \cdot B) \ (1 \lt A \le B \lt N) \rightarrow (A \le \sqrt{N})$$
If $A$ is bigger than $\sqrt{N}$, then $A \cdot B \gt N$. Let's assume that $$A = \sqrt{N} + a \ B = \sqrt{N} + b \ 0 \lt a \le b$$
Then $$A \cdot B \ = (\sqrt{N} + a)(\sqrt{N} + b) \ = (\sqrt{N}^2 + (a+b)\sqrt{N} + a \cdot b) \gt N$$
End of explanation
#!/usr/bin/python3
# -*- coding: utf-8 -*-
'''
find N-th prime number
'''
prime = [2, 3]
def is_prime(candidate):
sqrt_c = int(candidate ** 0.5)
for p in prime:
if p > sqrt_c:
return True
if candidate % p == 0:
return False # Not Prime number
return True
def nth_prime(n):
if not isinstance(n, int):
raise TypeError
if n < 1:
raise ValueError
prime_list_len = len(prime)
if n <= prime_list_len:
print('Prime List cache hit', prime_list_len)
return prime[n-1]
candidate = prime[-1]
while prime_list_len < n:
candidate += 2
if is_prime(candidate):
candidate += 2 prime.append(candidate)
prime_list_len += 1
return prime[-1]
print(nth_prime(5))
print(nth_prime(100))
print(nth_prime(25))
Explanation: Prime number (2)
write a function, which takes 1 parameter (n), and returns n-th prime number.
example: n=1 output=2, n=2 output=3, n=3 output=5, n=5 output=11
nth_primt(10)
End of explanation |
4,958 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step3: 使用膨胀 3D CNN 进行动作识别
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step4: 使用 UCF101 数据集
Step5: 运行 ID3 模型并打印前 5 个动作预测。
Step6: 现在,尝试一个新的视频,地址为:https | Python Code:
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
!pip install -q imageio
!pip install -q opencv-python
!pip install -q git+https://github.com/tensorflow/docs
#@title Import the necessary modules
# TensorFlow and TF-Hub modules.
from absl import logging
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow_docs.vis import embed
logging.set_verbosity(logging.ERROR)
# Some modules to help with reading the UCF101 dataset.
import random
import re
import os
import tempfile
import ssl
import cv2
import numpy as np
# Some modules to display an animation using imageio.
import imageio
from IPython import display
from urllib import request # requires python3
#@title Helper functions for the UCF101 dataset
# Utilities to fetch videos from UCF101 dataset
UCF_ROOT = "https://www.crcv.ucf.edu/THUMOS14/UCF101/UCF101/"
_VIDEO_LIST = None
_CACHE_DIR = tempfile.mkdtemp()
# As of July 2020, crcv.ucf.edu doesn't use a certificate accepted by the
# default Colab environment anymore.
unverified_context = ssl._create_unverified_context()
def list_ucf_videos():
Lists videos available in UCF101 dataset.
global _VIDEO_LIST
if not _VIDEO_LIST:
index = request.urlopen(UCF_ROOT, context=unverified_context).read().decode("utf-8")
videos = re.findall("(v_[\w_]+\.avi)", index)
_VIDEO_LIST = sorted(set(videos))
return list(_VIDEO_LIST)
def fetch_ucf_video(video):
Fetchs a video and cache into local filesystem.
cache_path = os.path.join(_CACHE_DIR, video)
if not os.path.exists(cache_path):
urlpath = request.urljoin(UCF_ROOT, video)
print("Fetching %s => %s" % (urlpath, cache_path))
data = request.urlopen(urlpath, context=unverified_context).read()
open(cache_path, "wb").write(data)
return cache_path
# Utilities to open video files using CV2
def crop_center_square(frame):
y, x = frame.shape[0:2]
min_dim = min(y, x)
start_x = (x // 2) - (min_dim // 2)
start_y = (y // 2) - (min_dim // 2)
return frame[start_y:start_y+min_dim,start_x:start_x+min_dim]
def load_video(path, max_frames=0, resize=(224, 224)):
cap = cv2.VideoCapture(path)
frames = []
try:
while True:
ret, frame = cap.read()
if not ret:
break
frame = crop_center_square(frame)
frame = cv2.resize(frame, resize)
frame = frame[:, :, [2, 1, 0]]
frames.append(frame)
if len(frames) == max_frames:
break
finally:
cap.release()
return np.array(frames) / 255.0
def to_gif(images):
converted_images = np.clip(images * 255, 0, 255).astype(np.uint8)
imageio.mimsave('./animation.gif', converted_images, fps=25)
return embed.embed_file('./animation.gif')
#@title Get the kinetics-400 labels
# Get the kinetics-400 action labels from the GitHub repository.
KINETICS_URL = "https://raw.githubusercontent.com/deepmind/kinetics-i3d/master/data/label_map.txt"
with request.urlopen(KINETICS_URL) as obj:
labels = [line.decode("utf-8").strip() for line in obj.readlines()]
print("Found %d labels." % len(labels))
Explanation: 使用膨胀 3D CNN 进行动作识别
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://tensorflow.google.cn/hub/tutorials/action_recognition_with_tf_hub"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a> </td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/action_recognition_with_tf_hub.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a> </td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/action_recognition_with_tf_hub.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View on GitHub</a> </td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/hub/tutorials/action_recognition_with_tf_hub.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a> </td>
<td> <a href="https://tfhub.dev/deepmind/i3d-kinetics-400/1"><img src="https://tensorflow.google.cn/images/hub_logo_32px.png">查看 TF Hub 模型</a> </td>
</table>
此 Colab 演示了如何使用 tfhub.dev/deepmind/i3d-kinetics-400/1 模块从视频数据进行动作识别。
Joao Carreira 和 Andrew Zisserman 在其论文“Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset”中对底层模型进行了介绍。该论文于 2017 年 5 月在 arXiv 上发表,并被选为 CVPR 2017 会议论文。源代码已在 GitHub 上公开。
“Quo Vadis”介绍了一种用于视频分类的新架构,即膨胀 3D 卷积神经网络或 I3D。此架构通过对上述模型进行微调,在 UCF101 和 HMDB51 数据集上取得了目前最优秀的结果。在 Kinetics 上预训练的 I3D 模型也在 CVPR 2017 Charades 挑战中排名第一。
原始模块在 kinetics-400 数据集上进行了训练,能够识别约 400 种不同动作。这些动作的标签可在标签映射文件中找到。
在此 Colab 中,我们将用它识别来自 UCF101 数据集的视频中的行为。
设置
End of explanation
# Get the list of videos in the dataset.
ucf_videos = list_ucf_videos()
categories = {}
for video in ucf_videos:
category = video[2:-12]
if category not in categories:
categories[category] = []
categories[category].append(video)
print("Found %d videos in %d categories." % (len(ucf_videos), len(categories)))
for category, sequences in categories.items():
summary = ", ".join(sequences[:2])
print("%-20s %4d videos (%s, ...)" % (category, len(sequences), summary))
# Get a sample cricket video.
video_path = fetch_ucf_video("v_CricketShot_g04_c02.avi")
sample_video = load_video(video_path)
sample_video.shape
i3d = hub.load("https://tfhub.dev/deepmind/i3d-kinetics-400/1").signatures['default']
Explanation: 使用 UCF101 数据集
End of explanation
def predict(sample_video):
# Add a batch axis to the to the sample video.
model_input = tf.constant(sample_video, dtype=tf.float32)[tf.newaxis, ...]
logits = i3d(model_input)['default'][0]
probabilities = tf.nn.softmax(logits)
print("Top 5 actions:")
for i in np.argsort(probabilities)[::-1][:5]:
print(f" {labels[i]:22}: {probabilities[i] * 100:5.2f}%")
predict(sample_video)
Explanation: 运行 ID3 模型并打印前 5 个动作预测。
End of explanation
!curl -O https://upload.wikimedia.org/wikipedia/commons/8/86/End_of_a_jam.ogv
video_path = "End_of_a_jam.ogv"
sample_video = load_video(video_path)[:100]
sample_video.shape
to_gif(sample_video)
predict(sample_video)
Explanation: 现在,尝试一个新的视频,地址为:https://commons.wikimedia.org/wiki/Category:Videos_of_sports
再试试 Patrick Gillett 的这个视频:
End of explanation |
4,959 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Transfer Learning
Most of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.
<img src="assets/cnnarchitecture.jpg" width=700px>
VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.
You can read more about transfer learning from the CS231n course notes.
Pretrained VGGNet
We'll be using a pretrained network from https
Step1: Flower power
Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.
Step2: ConvNet Codes
Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.
Here we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code)
Step3: Below I'm running images through the VGG network in batches.
Exercise
Step4: Building the Classifier
Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.
Step5: Data prep
As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!
Exercise
Step6: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same the distribution of classes as it is for the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
You can create the splitter like so
Step7: If you did it right, you should see these sizes for the training sets
Step9: Batches!
Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.
Step10: Training
Here, we'll train the network.
Exercise
Step11: Testing
Below you see the test accuracy. You can also see the predictions returned for images.
Step12: Below, feel free to choose images and see how the trained classifier predicts the flowers in them. | Python Code:
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
raise Exception("VGG directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(vgg_dir + "vgg16.npy"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
urlretrieve(
'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
vgg_dir + 'vgg16.npy',
pbar.hook)
else:
print("Parameter file already exists!")
Explanation: Transfer Learning
Most of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.
<img src="assets/cnnarchitecture.jpg" width=700px>
VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.
You can read more about transfer learning from the CS231n course notes.
Pretrained VGGNet
We'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. Make sure to clone this repository to the directory you're working from. You'll also want to rename it so it has an underscore instead of a dash.
git clone https://github.com/machrisaa/tensorflow-vgg.git tensorflow_vgg
This is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link. You'll need to clone the repo into the folder containing this notebook. Then download the parameter file using the next cell.
End of explanation
import tarfile
dataset_folder_path = 'flower_photos'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('flower_photos.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
urlretrieve(
'http://download.tensorflow.org/example_images/flower_photos.tgz',
'flower_photos.tar.gz',
pbar.hook)
if not isdir(dataset_folder_path):
with tarfile.open('flower_photos.tar.gz') as tar:
tar.extractall()
tar.close()
Explanation: Flower power
Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.
End of explanation
import os
import numpy as np
import tensorflow as tf
from tensorflow_vgg import vgg16
from tensorflow_vgg import utils
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]
Explanation: ConvNet Codes
Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.
Here we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code):
```
self.conv1_1 = self.conv_layer(bgr, "conv1_1")
self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2")
self.pool1 = self.max_pool(self.conv1_2, 'pool1')
self.conv2_1 = self.conv_layer(self.pool1, "conv2_1")
self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2")
self.pool2 = self.max_pool(self.conv2_2, 'pool2')
self.conv3_1 = self.conv_layer(self.pool2, "conv3_1")
self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2")
self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3")
self.pool3 = self.max_pool(self.conv3_3, 'pool3')
self.conv4_1 = self.conv_layer(self.pool3, "conv4_1")
self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2")
self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3")
self.pool4 = self.max_pool(self.conv4_3, 'pool4')
self.conv5_1 = self.conv_layer(self.pool4, "conv5_1")
self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2")
self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3")
self.pool5 = self.max_pool(self.conv5_3, 'pool5')
self.fc6 = self.fc_layer(self.pool5, "fc6")
self.relu6 = tf.nn.relu(self.fc6)
```
So what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use
with tf.Session() as sess:
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
This creates the vgg object, then builds the graph with vgg.build(input_). Then to get the values from the layer,
feed_dict = {input_: images}
codes = sess.run(vgg.relu6, feed_dict=feed_dict)
End of explanation
# Set the batch size higher if you can fit in in your GPU memory
batch_size = 10
codes_list = []
labels = []
batch = []
codes = None
with tf.Session() as sess:
# TODO: Build the vgg network here
vgg = vgg16.Vgg16()
input_ = tf.placeholder (tf.float32, [None, 224, 224, 3])
with tf.name_scope ('content_vgg'):
vgg.build (input_)
for each in classes:
print("Starting {} images".format(each))
class_path = data_dir + each
files = os.listdir(class_path)
for ii, file in enumerate(files, 1):
# Add images to the current batch
# utils.load_image crops the input images for us, from the center
img = utils.load_image(os.path.join(class_path, file))
batch.append(img.reshape((1, 224, 224, 3)))
labels.append(each)
# Running the batch through the network to get the codes
if ii % batch_size == 0 or ii == len(files):
# Image batch to pass to VGG network
images = np.concatenate(batch)
# TODO: Get the values from the relu6 layer of the VGG network
feed_dict = {input_ : images}
codes_batch = sess.run (vgg.relu6, feed_dict=feed_dict)
# Here I'm building an array of the codes
if codes is None:
codes = codes_batch
else:
codes = np.concatenate((codes, codes_batch))
# Reset to start building the next batch
batch = []
print('{} images processed'.format(ii))
# write codes to file
with open('codes', 'w') as f:
codes.tofile(f)
# write labels to file
import csv
with open('labels', 'w') as f:
writer = csv.writer(f, delimiter='\n')
writer.writerow(labels)
Explanation: Below I'm running images through the VGG network in batches.
Exercise: Below, build the VGG network. Also get the codes from the first fully connected layer (make sure you get the ReLUd values).
End of explanation
# read codes and labels from file
import numpy as np
import csv
with open('labels') as f:
reader = csv.reader(f, delimiter='\n')
labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes') as f:
codes = np.fromfile(f, dtype=np.float32)
codes = codes.reshape((len(labels), -1))
Explanation: Building the Classifier
Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.
End of explanation
from sklearn import preprocessing
lb = preprocessing.LabelBinarizer()
lb.fit (labels)
labels_vecs = lb.transform (labels)
#print ('Labels: {}'.format([labels[i] for i in range (0, len(labels), 200)]))
#print ('One-hot: {}'.format([labels_vecs[i] for i in range (0, len(labels), 200)]))
Explanation: Data prep
As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!
Exercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.
End of explanation
from sklearn.model_selection import StratifiedShuffleSplit
sss = StratifiedShuffleSplit (n_splits=1, test_size=0.2)
train_index, test_index = next (sss.split (codes, labels))
val_test_split = int(len(test_index)/2)
train_x, train_y = codes[train_index], labels_vecs[train_index]
val_x, val_y = codes[test_index[:val_test_split]], labels_vecs[test_index[:val_test_split]]
test_x, test_y = codes[test_index[val_test_split:]], labels_vecs[test_index[val_test_split:]]
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)
Explanation: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same the distribution of classes as it is for the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
You can create the splitter like so:
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
Then split the data with
splitter = ss.split(x, y)
ss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices. Be sure to read the documentation and the user guide.
Exercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets.
End of explanation
def fully_connected (x_tensor, num_outputs):
weights = tf.Variable (tf.truncated_normal (shape=[x_tensor.get_shape().as_list()[1], num_outputs], stddev=0.1, dtype=tf.float32), name='weights')
biases = tf.Variable (tf.zeros (shape=[num_outputs], dtype=tf.float32), name='biases')
activations = tf.add (tf.matmul (x_tensor, weights), biases)
return activations
def create_nn (x_tensor, num_outputs, keep_prob):
conn = fully_connected (x_tensor, 512)
conn = tf.nn.relu (conn)
conn = tf.nn.dropout (conn, keep_prob)
conn2 = fully_connected (conn, 128)
conn2 = tf.nn.relu (conn2)
conn2 = tf.nn.dropout (conn2, keep_prob)
conn3 = fully_connected (conn2, 32)
conn3 = tf.nn.relu (conn3)
conn3 = tf.nn.dropout (conn3, keep_prob)
out = fully_connected (conn3, num_outputs)
return tf.nn.softmax (out, name='softmax')
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])
keep_prob = tf.placeholder (tf.float32, name='keep_prob')
# TODO: Classifier layers and operations
logits = create_nn (inputs_, labels_vecs.shape[1], keep_prob)
#logits = tf.identity (logits, name='logits')
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels_))
optimizer = tf.train.AdamOptimizer(learning_rate=0.00005).minimize(cost)
# Operations for validation/test accuracy
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
Explanation: If you did it right, you should see these sizes for the training sets:
Train shapes (x, y): (2936, 4096) (2936, 5)
Validation shapes (x, y): (367, 4096) (367, 5)
Test shapes (x, y): (367, 4096) (367, 5)
Classifier layers
Once you have the convolutional codes, you just need to build a classfier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.
Exercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs, each of them are 4096D vectors. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost.
End of explanation
def get_batches(x, y, n_batches=10):
Return a generator that yields batches from arrays x and y.
batch_size = len(x)//n_batches
for ii in range(0, n_batches*batch_size, batch_size):
# If we're not on the last batch, grab data with size batch_size
if ii != (n_batches-1)*batch_size:
X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size]
# On the last batch, grab the rest of the data
else:
X, Y = x[ii:], y[ii:]
# I love generators
yield X, Y
Explanation: Batches!
Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.
End of explanation
epochs = 10000
saver = tf.train.Saver()
with tf.Session() as sess:
# TODO: Your training code here
sess.run (tf.global_variables_initializer())
for epoch in range(epochs):
for x, y in get_batches(train_x, train_y):
entropy_cost, _, train_accuracy = sess.run ([cost, optimizer, accuracy], feed_dict={inputs_:x, labels_:y, keep_prob:0.5})
if (epoch+1) % 10 == 0:
valid_accuracy = sess.run (accuracy, feed_dict={inputs_:val_x, labels_:val_y, keep_prob:1.0})
print ('Epoch: {:3d}/{} Cost = {:8.5f} Train accuracy = {:.4f}, Validation Accuracy = {:.4f}'.format (epoch+1, epochs, entropy_cost, train_accuracy, valid_accuracy))
if (epoch+1) % 1000 == 0:
print ('Saving checkpoint')
saver.save(sess, "checkpoints/flowers.ckpt")
print ('Saving checkpoint')
saver.save(sess, "checkpoints/flowers.ckpt")
Explanation: Training
Here, we'll train the network.
Exercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. Use the get_batches function I wrote before to get your batches like for x, y in get_batches(train_x, train_y). Or write your own!
End of explanation
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: test_x,
labels_: test_y,
keep_prob:1.0}
test_acc = sess.run(accuracy, feed_dict=feed)
print("Test accuracy: {:.4f}".format(test_acc))
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.ndimage import imread
Explanation: Testing
Below you see the test accuracy. You can also see the predictions returned for images.
End of explanation
test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)
# Run this cell if you don't have a vgg graph built
if 'vgg' in globals():
print('"vgg" object already exists. Will not create again.')
else:
#create vgg
with tf.Session() as sess:
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
vgg = vgg16.Vgg16()
vgg.build(input_)
#test_img_path = 'flower_photos/dandelion/9939430464_5f5861ebab.jpg'
test_img_path = 'flower_photos/daisy/9922116524_ab4a2533fe_n.jpg'
with tf.Session() as sess:
img = utils.load_image(test_img_path)
img = img.reshape((1, 224, 224, 3))
feed_dict = {input_: img}
code = sess.run(vgg.relu6, feed_dict=feed_dict)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: code, keep_prob:1.0}
prediction = sess.run(predicted, feed_dict=feed).squeeze()
test_img = imread(test_img_path)
plt.imshow(test_img)
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), lb.classes_)
Explanation: Below, feel free to choose images and see how the trained classifier predicts the flowers in them.
End of explanation |
4,960 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Class Session 4 Exercise
Step1: Now, define a function that returns the index numbers of the neighbors of a vertex i, when the
graph is stored in adjacency matrix format. So your function will accept as an input a NxN numpy matrix.
Step2: Define a function that enumerates the neighbors of a vertex i, when the
graph is stored in adjacency list format (a list of lists)
Step3: Define a function that enumerates the neighbors of a vertex i, when the
graph is stored in edge-list format (a numpy array of length-two-lists); use numpy.where and numpy.unique
Step4: This next function is the simulation funtion. "n" is the number of vertices.
It returns a length-three list containing the average running time for enumerating the neighbor vertices of a vertex in the graph.
Step5: A simulation with 1000 vertices clearly shows that adjacency list is fastest
Step6: We see the expected behavior, with the running time for the adjacency-matrix and edge-list formats going up when we increase "n", but there is hardly any change in the running time for the graph stored in adjacency list format | Python Code:
import numpy as np
import igraph
import timeit
import itertools
Explanation: Class Session 4 Exercise:
Comparing asymptotic running time for enumerating neighbors of all vertices in a graph
We will measure the running time for enumerating the neighbor vertices for three different data structures for representing an undirected graph:
adjacency matrix
adjacency list
edge list
Let's assume that each vertex is labeled with a unique integer number. So if there are N vertices, the vertices are labeled 0, 2, 3, 4, ..., N-1.
First, we will import all of the Python modules that we will need for this exercise:
note how we assign a short name, "np" to the numpy module. This will save typing.
End of explanation
def enumerate_matrix(gmat, i):
return np.nonzero(gmat[i,:])[1].tolist()
Explanation: Now, define a function that returns the index numbers of the neighbors of a vertex i, when the
graph is stored in adjacency matrix format. So your function will accept as an input a NxN numpy matrix.
End of explanation
def enumerate_adj_list(adj_list, i):
return adj_list[i]
Explanation: Define a function that enumerates the neighbors of a vertex i, when the
graph is stored in adjacency list format (a list of lists):
End of explanation
def enumerate_edge_list(edge_list, i):
inds1 = np.where(edge_list[:,0] == i)[0]
elems1 = edge_list[inds1, 1].tolist()
inds2 = np.where(edge_list[:,1] == i)[0]
elems2 = edge_list[inds2, 0].tolist()
return np.unique(elems1 + elems2).tolist()
Explanation: Define a function that enumerates the neighbors of a vertex i, when the
graph is stored in edge-list format (a numpy array of length-two-lists); use numpy.where and numpy.unique:
End of explanation
def do_sim(n):
retlist = []
nrep = 10
nsubrep = 10
# this is (sort of) a Python way of doing the R function "replicate":
for _ in itertools.repeat(None, nrep):
# make a random undirected graph with fixed (average) vertex degree = 5
g = igraph.Graph.Barabasi(n, 5)
# get the graph in three different representations
g_matrix = np.matrix(g.get_adjacency().data)
g_adj_list = g.get_adjlist()
g_edge_list = np.array(g.get_edgelist())
start_time = timeit.default_timer()
for _ in itertools.repeat(None, nsubrep):
for i in range(0, n):
enumerate_matrix(g_matrix, i)
matrix_elapsed = timeit.default_timer() - start_time
start_time = timeit.default_timer()
for _ in itertools.repeat(None, nsubrep):
for i in range(0, n):
enumerate_adj_list(g_adj_list, i)
adjlist_elapsed = timeit.default_timer() - start_time
start_time = timeit.default_timer()
for _ in itertools.repeat(None, nsubrep):
for i in range(0, n):
enumerate_edge_list(g_edge_list, i)
edgelist_elapsed = timeit.default_timer() - start_time
retlist.append([matrix_elapsed, adjlist_elapsed, edgelist_elapsed])
# average over replicates and then
# divide by n so that the running time results are on a per-vertex basis
return np.mean(np.array(retlist), axis=0)/n
Explanation: This next function is the simulation funtion. "n" is the number of vertices.
It returns a length-three list containing the average running time for enumerating the neighbor vertices of a vertex in the graph.
End of explanation
do_sim(1000)*1000
Explanation: A simulation with 1000 vertices clearly shows that adjacency list is fastest:
(I multiply by 1000 just so the results are in ms.)
End of explanation
do_sim(2000)*1000
Explanation: We see the expected behavior, with the running time for the adjacency-matrix and edge-list formats going up when we increase "n", but there is hardly any change in the running time for the graph stored in adjacency list format:
End of explanation |
4,961 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Lecture 7
Software design, documentation, and testing
Design of a program
From the Practice of Programming
Step3: Documenting Invariants
An invariant is something that is true at some point in the code.
Invariants and the contract are what we use to guide our implementation.
Pre-conditions and post-conditions are special cases of invariants.
Pre-conditions are true at function entry. They constrain the user.
Post-conditions are true at function exit. They constrain the implementation.
You can change implementations, stuff under the hood, etc, but once the software is in the wild you can't change the pre-conditions and post-conditions since the client user is depending upon them.
Step4: Accessing Documentation (1)
Documentation can be accessed by calling the __doc__ special method
Simply calling function_name.__doc__ will give a pretty ugly output
You can make it cleaner by making use of splitlines()
Step5: Accessing Documentation (2)
A nice way to access the documentation is to use the pydoc module.
Step6: Testing
There are different kinds of tests inspired by the interface principles just described.
acceptance tests verify that a program meets a customer's expectations. In a sense these are a test of the interface to the customer
Step7: Principles of Testing
Test simple parts first
Test code at its boundaries
The idea is that most errors happen at data boundaries such as empty input, single input item, exactly full array, wierd values, etc. If a piece of code works at the boundaries, its likely to work elsewhere...
Program defensively
"Program defensively. A useful technique is to add code to handle "can't happen" cases, situations where it is not logically possible for something to happen but (because of some failure elsewhere) it might anyway. As an example, a program processing grades might expect that there would be no negative or huge values but should check anyway.
Automate using a test harness
Test incrementally
Test simple parts first
Step8: Test at the boundaries
Here we write a test to handle the crazy case in which the user passes strings in as the coefficients.
Step9: We can also check to make sure the $a=0$ case is handled okay
Step11: When you get an error
It could be that
Step12: Let's put our tests into one file.
Step15: Code Coverage
In some sense, it would be nice to somehow check that every line in a program has been covered by a test. If you could do this, you might know that a particular line has not contributed to making something wrong. But this is hard to do
Step16: Run the tests and check code coverage
Step17: Run the tests, report code coverage, and report missing lines.
Step18: Run tests, including the doctests, report code coverage, and report missing lines.
Step19: Let's put some tests in for the linear roots function.
Step20: Now run the tests and check code coverage. | Python Code:
def quad_roots(a=1.0, b=2.0, c=0.0):
Returns the roots of a quadratic equation: ax^2 + bx + c = 0.
INPUTS
=======
a: float, optional, default value is 1
Coefficient of quadratic term
b: float, optional, default value is 2
Coefficient of linear term
c: float, optional, default value is 0
Constant term
RETURNS
========
roots: 2-tuple of complex floats
Has the form (root1, root2) unless a = 0
in which case a ValueError exception is raised
EXAMPLES
=========
>>> quad_roots(1.0, 1.0, -12.0)
((3+0j), (-4+0j))
import cmath # Can return complex numbers from square roots
if a == 0:
raise ValueError("The quadratic coefficient is zero. This is not a quadratic equation.")
else:
sqrtdisc = cmath.sqrt(b * b - 4.0 * a * c)
r1 = -b + sqrtdisc
r2 = -b - sqrtdisc
return (r1 / 2.0 / a, r2 / 2.0 / a)
Explanation: Lecture 7
Software design, documentation, and testing
Design of a program
From the Practice of Programming:
The essence of design is to balance competing goals and constraints. Although there may be many tradeoffs when one is writing a small self-contained system, the ramifications of particular choices remain within the system and affect only the individual programmer. But when code is to be used by others, decisions have wider repercussions.
Software Design Desirables
Documentation
names (understandable names)
pre+post conditions or requirements
Maintainability
Extensibility
Modularity and Encapsulation
Portability
Installability
Generality
Data Abstraction (change types, change data structures)
Functional Abstraction (the object model, overloading)
Robustness
Provability: Invariants, preconditions, postconditions
User Proofing, Adversarial Inputs
Efficiency
Use of appropriate algorithms and data structures
Optimization (but no premature optimization)
Issues to be aware of:
Interfaces
Your program is being designed to be used by someone: either an end user, another programmer, or even yourself. This interface is a contract between you and the user.
Hiding Information
There is information hiding between layers (a higher up layer can be more abstract). Encapsulation, abstraction, and modularization, are some of the techniques used here.
Resource Management
Resource management issues: who allocates storage for data structures. Generally we want resource allocation/deallocation to happen in the same layer.
How to Deal with Errors
Do we return special values? Do we throw exceptions? Who handles them?
Interface principles
Interfaces should:
hide implementation details
have a small set of operations exposed, the smallest possible, and these should be orthogonal. Be stingy with the user.
be transparent with the user in what goes on behind the scenes
be consistent internally: library functions should have similar signature, classes similar methods, and external programs should have the same cli flags
Testing should deal with ALL of the issues above, and each layer ought to be tested separately .
Testing
There are different kinds of tests inspired by the interface principles just described.
acceptance tests verify that a program meets a customer's expectations. In a sense these are a test of the interface to the customer: does the program do everything you promised the customer it would do?
unit tests are tests which test a unit of the program for use by another unit. These could test the interface for a client, but they must also test the internal functions that you want to use.
Exploratory testing, regression testing, and integration testing are done in both of these categories, with the latter trying to combine layers and subsystems, not necessarily at the level of an entire application.
One can also performance test, random and exploratorily test, and stress test a system (to create adversarial situations).
Documentation
Documentation is a contract between a user (client) and an implementor (library writer).
Write good documentation
Follow standards of PEP 257
Clearly outline the inputs, outputs, default values, and expected behavior
Include basic usage examples when possible
End of explanation
def quad_roots(a=1.0, b=2.0, c=0.0):
Returns the roots of a quadratic equation: ax^2 + bx + c.
INPUTS
=======
a: float, optional, default value is 1
Coefficient of quadratic term
b: float, optional, default value is 2
Coefficient of linear term
c: float, optional, default value is 0
Constant term
RETURNS
========
roots: 2-tuple of complex floats
Has the form (root1, root2) unless a = 0
in which case a ValueError exception is raised
NOTES
=====
PRE:
- a, b, c have numeric type
- three or fewer inputs
POST:
- a, b, and c are not changed by this function
- raises a ValueError exception if a = 0
- returns a 2-tuple of roots
EXAMPLES
=========
>>> quad_roots(1.0, 1.0, -12.0)
((3+0j), (-4+0j))
import cmath # Can return complex numbers from square roots
if a == 0:
raise ValueError("The quadratic coefficient is zero. This is not a quadratic equation.")
else:
sqrtdisc = cmath.sqrt(b * b - 4.0 * a * c)
r1 = -b + sqrtdisc
r2 = -b - sqrtdisc
return (r1 / 2.0 / a, r2 / 2.0 / a)
Explanation: Documenting Invariants
An invariant is something that is true at some point in the code.
Invariants and the contract are what we use to guide our implementation.
Pre-conditions and post-conditions are special cases of invariants.
Pre-conditions are true at function entry. They constrain the user.
Post-conditions are true at function exit. They constrain the implementation.
You can change implementations, stuff under the hood, etc, but once the software is in the wild you can't change the pre-conditions and post-conditions since the client user is depending upon them.
End of explanation
quad_roots.__doc__.splitlines()
Explanation: Accessing Documentation (1)
Documentation can be accessed by calling the __doc__ special method
Simply calling function_name.__doc__ will give a pretty ugly output
You can make it cleaner by making use of splitlines()
End of explanation
import pydoc
pydoc.doc(quad_roots)
Explanation: Accessing Documentation (2)
A nice way to access the documentation is to use the pydoc module.
End of explanation
import doctest
doctest.testmod(verbose=True)
Explanation: Testing
There are different kinds of tests inspired by the interface principles just described.
acceptance tests verify that a program meets a customer's expectations. In a sense these are a test of the interface to the customer: does the program do everything you promised the customer it would do?
unit tests are tests which test a unit of the program for use by another unit. These could test the interface for a client, but they must also test the internal functions that you want to use.
Exploratory testing, regression testing, and integration testing are done in both of these categories, with the latter trying to combine layers and subsystems, not necessarily at the level of an entire application.
One can also performance test, random and exploratorily test, and stress test a system (to create adversarial situations).
Testing of a program
Test as you write your program.
This is so important that I repeat it.
Test as you go.
From The Practice of Programming:
The effort of testing as you go is minimal and pays off handsomely. Thinking about testing as you write a program will lead to better code, because that's when you know best what the code should do. If instead you wait until something breaks, you will probably have forgotten how the code works. Working under pressure, you will need to figure it out again, which takes time, and the fixes will be less thorough and more fragile because your refreshed understanding is likely to be incomplete.
Test Driven Develoment
doctest
The doctest module allows us to test pieces of code that we put into our doc. string.
The doctests are a type of unit test, which document the interface of the function by example.
Doctests are an example of a test harness. We write some tests and execute them all at once. Note that individual tests can be written and executed individually in an ad-hoc manner. However, that is especially inefficient.
Of course, too many doctests clutter the documentation section.
The doctests should not cover every case; they should describe the various ways a class or function can be used. There are better ways to do more comprehensive testing.
End of explanation
def test_quadroots():
assert quad_roots(1.0, 1.0, -12.0) == ((3+0j), (-4+0j))
test_quadroots()
Explanation: Principles of Testing
Test simple parts first
Test code at its boundaries
The idea is that most errors happen at data boundaries such as empty input, single input item, exactly full array, wierd values, etc. If a piece of code works at the boundaries, its likely to work elsewhere...
Program defensively
"Program defensively. A useful technique is to add code to handle "can't happen" cases, situations where it is not logically possible for something to happen but (because of some failure elsewhere) it might anyway. As an example, a program processing grades might expect that there would be no negative or huge values but should check anyway.
Automate using a test harness
Test incrementally
Test simple parts first:
A test for the quad_roots function:
End of explanation
def test_quadroots_types():
try:
quad_roots("", "green", "hi")
except TypeError as err:
assert(type(err) == TypeError)
test_quadroots_types()
Explanation: Test at the boundaries
Here we write a test to handle the crazy case in which the user passes strings in as the coefficients.
End of explanation
def test_quadroots_zerocoeff():
try:
quad_roots(a=0.0)
except ValueError as err:
assert(type(err) == ValueError)
test_quadroots_zerocoeff()
Explanation: We can also check to make sure the $a=0$ case is handled okay:
End of explanation
%%file roots.py
def quad_roots(a=1.0, b=2.0, c=0.0):
Returns the roots of a quadratic equation: ax^2 + bx + c = 0.
INPUTS
=======
a: float, optional, default value is 1
Coefficient of quadratic term
b: float, optional, default value is 2
Coefficient of linear term
c: float, optional, default value is 0
Constant term
RETURNS
========
roots: 2-tuple of complex floats
Has the form (root1, root2) unless a = 0
in which case a ValueError exception is raised
EXAMPLES
=========
>>> quad_roots(1.0, 1.0, -12.0)
((3+0j), (-4+0j))
import cmath # Can return complex numbers from square roots
if a == 0:
raise ValueError("The quadratic coefficient is zero. This is not a quadratic equation.")
else:
sqrtdisc = cmath.sqrt(b * b - 4.0 * a * c)
r1 = -b + sqrtdisc
r2 = -b - sqrtdisc
return (r1 / 2.0 / a, r2 / 2.0 / a)
Explanation: When you get an error
It could be that:
you messed up an implementation
you did not handle a case
your test was messed up (be careful of this)
If the error was not found in an existing test, create a new test that represents the problem before you do anything else. The test should capture the essence of the problem: this process itself is useful in uncovering bugs. Then this error may even suggest more tests.
Automate Using a Test Harness
Great! So we've written some ad-hoc tests. It's pretty clunky. We should use a test harness.
As mentioned already, doctest is a type of test harness. It has it's uses, but gets messy quickly.
We'll talk about pytest here.
Preliminaries
The idea is that our code consists of several different pieces (or objects)
The objects are grouped based on how they are related to each other
e.g. you may have a class that contains different statistical operations
We'll get into this idea much more in the coming weeks
For now, we can think of having related functions all in one file
We want to test each of those functions
Tests should include checking correctness of output, correctness of input, fringe cases, etc
I will work in the Jupyter notebook for demo purposes.
To create and save a file in the Jupyter notebook, you type %%file file_name.py.
I highly recommend that you actually write your code using a text editor (like vim) or an IDE like Sypder.
The toy examples that we've been working with in the class so far can be done in Jupyter, but a real project can be done more efficiently through other means.
End of explanation
%%file test_roots.py
import roots
def test_quadroots_result():
assert roots.quad_roots(1.0, 1.0, -12.0) == ((3+0j), (-4+0j))
def test_quadroots_types():
try:
roots.quad_roots("", "green", "hi")
except TypeError as err:
assert(type(err) == TypeError)
def test_quadroots_zerocoeff():
try:
roots.quad_roots(a=0.0)
except ValueError as err:
assert(type(err) == ValueError)
!pytest
Explanation: Let's put our tests into one file.
End of explanation
%%file roots.py
def linear_roots(a=1.0, b=0.0):
Returns the roots of a linear equation: ax+ b = 0.
INPUTS
=======
a: float, optional, default value is 1
Coefficient of linear term
b: float, optional, default value is 0
Coefficient of constant term
RETURNS
========
roots: 1-tuple of real floats
Has the form (root) unless a = 0
in which case a ValueError exception is raised
EXAMPLES
=========
>>> linear_roots(1.0, 2.0)
-2.0
if a == 0:
raise ValueError("The linear coefficient is zero. This is not a linear equation.")
else:
return ((-b / a))
def quad_roots(a=1.0, b=2.0, c=0.0):
Returns the roots of a quadratic equation: ax^2 + bx + c = 0.
INPUTS
=======
a: float, optional, default value is 1
Coefficient of quadratic term
b: float, optional, default value is 2
Coefficient of linear term
c: float, optional, default value is 0
Constant term
RETURNS
========
roots: 2-tuple of complex floats
Has the form (root1, root2) unless a = 0
in which case a ValueError exception is raised
EXAMPLES
=========
>>> quad_roots(1.0, 1.0, -12.0)
((3+0j), (-4+0j))
import cmath # Can return complex numbers from square roots
if a == 0:
raise ValueError("The quadratic coefficient is zero. This is not a quadratic equation.")
else:
sqrtdisc = cmath.sqrt(b * b - 4.0 * a * c)
r1 = -b + sqrtdisc
r2 = -b - sqrtdisc
return (r1 / 2.0 / a, r2 / 2.0 / a)
Explanation: Code Coverage
In some sense, it would be nice to somehow check that every line in a program has been covered by a test. If you could do this, you might know that a particular line has not contributed to making something wrong. But this is hard to do: it would be hard to use normal input data to force a program to go through particular statements. So we settle for testing the important lines. The pytest-cov module makes sure that this works.
Coverage does not mean that every edge case has been tried, but rather, every critical statement has been.
Let's add a new function to our roots file.
End of explanation
!pytest --cov
Explanation: Run the tests and check code coverage
End of explanation
!pytest --cov --cov-report term-missing
Explanation: Run the tests, report code coverage, and report missing lines.
End of explanation
!pytest --doctest-modules --cov --cov-report term-missing
Explanation: Run tests, including the doctests, report code coverage, and report missing lines.
End of explanation
%%file test_roots.py
import roots
def test_quadroots_result():
assert roots.quad_roots(1.0, 1.0, -12.0) == ((3+0j), (-4+0j))
def test_quadroots_types():
try:
roots.quad_roots("", "green", "hi")
except TypeError as err:
assert(type(err) == TypeError)
def test_quadroots_zerocoeff():
try:
roots.quad_roots(a=0.0)
except ValueError as err:
assert(type(err) == ValueError)
def test_linearoots_result():
assert roots.linear_roots(2.0, -3.0) == 1.5
def test_linearroots_types():
try:
roots.linear_roots("ocean", 6.0)
except TypeError as err:
assert(type(err) == TypeError)
def test_linearroots_zerocoeff():
try:
roots.linear_roots(a=0.0)
except ValueError as err:
assert(type(err) == ValueError)
Explanation: Let's put some tests in for the linear roots function.
End of explanation
!pytest --doctest-modules --cov --cov-report term-missing
Explanation: Now run the tests and check code coverage.
End of explanation |
4,962 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Parse_data_to_tfrecord_library Test
This file consists several function test for the functions in the Parse_data_to_tfrecord_lib file.
Step1: Test function img_to_example()
Step2: Function Test
Step3: Test batch_read_write_tfrecords()
This file reads input tfrecords in batches, and process the bboxes that meet the conditions. And write back the labeles, and cropped images to a new tfrecord file.
The batch_read_write_tfrecords also utilize read_tfrecord(), generate_tfexamples_from_detections(), write_tfexample_to_tfrecord(), parse_detection_confidences(), strip_top(all)_confidence_bbox(), img_to_example(), read_and_check_image()
NOTE
Step4: Read back the generated tfrecords and check if the data stored inside the file meets the expectation. | Python Code:
from parse_data_to_tfrecord_lib import img_to_example, read_tfrecord, generate_tfexamples_from_detections, batch_read_write_tfrecords
from PIL import Image # used to read images from directory
import tensorflow as tf
import os
import io
import IPython.display as display
import numpy as np
tf.enable_eager_execution()
Explanation: Parse_data_to_tfrecord_library Test
This file consists several function test for the functions in the Parse_data_to_tfrecord_lib file.
End of explanation
IMG_PATH = './TC11/svt1/img/19_00.jpg'
features={'image': tf.FixedLenFeature([], tf.string),
'label': tf.FixedLenFeature([], tf.int64)}
try:
img = Image.open(IMG_PATH, "r")
except Exception as e:
print(e)
print(IMG_PATH + " is not valid")
example = img_to_example(img, label=0)
features = tf.io.parse_single_example(example.SerializeToString(), features)
# Testing
# The label feature should be value of 0
assert features['label'].numpy() == 0
# The pixel values of the original image and the stored image should be the same
decode_image = tf.image.decode_image(features['image']).numpy()
original_image = np.array(img.getdata())
assert decode_image.flatten().all() == original_image.flatten().all()
Explanation: Test function img_to_example():
The function reads in image files and outputs tf.examples. The output should store the information feeded into the function.
End of explanation
# Global constants
# Information from input tfrecord files
SOURCE_ID = 'image/source_id'
BBOX_CONFIDENCE = 'image/object/bbox/confidence'
BBOX_XMIN = 'image/object/bbox/xmin'
BBOX_YMIN = 'image/object/bbox/ymin'
BBOX_XMAX = 'image/object/bbox/xmax'
BBOX_YMAX = 'image/object/bbox/ymax'
INPUT_RECORD_DIR = './streetlearn-detections/'
file_name = "./streetlearn_detections_tfexample-00000-of-01000.tfrecord"
ID = "b'/cns/is-d/home/cityblock-streetsmart/yuxizhang/data/public/streetlearn/003419_2.jpg'"
CONFIDENCE = np.array([0.6700151, 0.45046127, 0.22411232, 0.09745394, 0.07810514, 0.06079888, 0.0587763, 0.05148118])
XMIN = np.array([9., 714., 18., 703., 821., 420., 421., 370.])
YMIN = np.array([298., 441., 538., 613., 655., 649., 656., 637.])
XMAX = np.array([450., 823., 424., 844., 873., 445., 493., 435.])
YMAX = np.array([737., 735., 750., 740., 719., 737., 738., 741.])
parsed_image_dataset = read_tfrecord(os.path.join(INPUT_RECORD_DIR, file_name))
# Testing
# Check the data in the parsed_image_dataset
for example in parsed_image_dataset.take(1):
confidence = example[BBOX_CONFIDENCE].values.numpy()
xmin = example[BBOX_XMIN].values.numpy()
ymin = example[BBOX_YMIN].values.numpy()
xmax = example[BBOX_XMAX].values.numpy()
ymax = example[BBOX_YMAX].values.numpy()
assert str(example[SOURCE_ID].numpy()) == ID
assert confidence.all() == CONFIDENCE.all()
assert xmin.all() == XMIN.all()
assert ymin.all() == YMIN.all()
assert xmax.all() == XMAX.all()
assert ymax.all() == YMAX.all()
Explanation: Function Test: read_tfrecord()
The function should read tfrecord files as input, and return DatasetV1Adapter storing a list of examples
End of explanation
INPUT_RECORD_DIR = './streetlearn-detections/'
INPUT_UCF_IMG_DIR = './UCF_Streetview_Dataset/raw/'
TF_FILE_DIR = './test_file.tfrecord'
writer = tf.io.TFRecordWriter(TF_FILE_DIR)
detection_property = {'include_top_camera':True, 'only_keep_top_confidence':True, 'balance':False}
file_range = [0, 1]
batch_read_write_tfrecords(file_range, INPUT_RECORD_DIR, INPUT_UCF_IMG_DIR, writer, detection_property)
writer.close()
Explanation: Test batch_read_write_tfrecords()
This file reads input tfrecords in batches, and process the bboxes that meet the conditions. And write back the labeles, and cropped images to a new tfrecord file.
The batch_read_write_tfrecords also utilize read_tfrecord(), generate_tfexamples_from_detections(), write_tfexample_to_tfrecord(), parse_detection_confidences(), strip_top(all)_confidence_bbox(), img_to_example(), read_and_check_image()
NOTE: This test is a functional test for all functions listed above. Also, this part is harder to compared to the ground truth. Therefore, a visulization of the results is performed here.
End of explanation
# Read the files back from the generated tfrecords
def parse_tf_records(file_dir):
raw_image_dataset = tf.data.TFRecordDataset(file_dir)
# Create a dictionary describing the features.
image_feature_description = {
'label': tf.io.FixedLenFeature([], tf.int64),
'image': tf.io.FixedLenFeature([], tf.string),
}
def _parse_image_function(example_proto):
# Parse the input tf.Example proto using the dictionary above.
return tf.io.parse_single_example(example_proto, image_feature_description)
parsed_image_dataset = raw_image_dataset.map(_parse_image_function)
return parsed_image_dataset
parsed_image_dataset = parse_tf_records(TF_FILE_DIR)
for image_features in parsed_image_dataset:
print(int(image_features['label']))
image_raw = image_features['image'].numpy()
display.display(display.Image(data=image_raw))
Explanation: Read back the generated tfrecords and check if the data stored inside the file meets the expectation.
End of explanation |
4,963 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src=images/continuum_analytics_b&w.png align="left" width="15%" style="margin-right
Step1: Exercise
Step2: Exercise | Python Code:
# Import the functions from your file
# Create your plots with your new functions
# Test the visualizations in the notebook
from bokeh.plotting import show, output_notebook
# Show climate map
# Show legend
# Show timeseries
Explanation: <img src=images/continuum_analytics_b&w.png align="left" width="15%" style="margin-right:15%">
<h1 align='center'>Bokeh Tutorial</h1>
1.6 Layout
Exercise: Wrap your visualizations in functions
Wrap each of the previous visualizations in a function in a python file (e.g. viz.py):
Climate + Map: climate_map()
Legend: legend()
Timeseries: timeseries()
End of explanation
from bokeh.plotting import vplot, hplot
# Create your layout
# Show layout
Explanation: Exercise: Layout your plots using hplot and vplot
End of explanation
from bokeh.plotting import output_file
Explanation: Exercise: Store your layout in an html page
End of explanation |
4,964 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Showcase of various CogStat analyses
Below you can see a few examples what analyses are perfomed for a specific task in CogStat. Note that the specific analyses that are applied depend on the task, number of variables, variable measurement levels, various other properties (e.g. normality), therefore, these examples show only some of the possibilities. See a more extensive list of the available analyses details in the online help.
(The table of contents below may not be visible on all systems.)
<h1 id="tocheading">Table of Contents</h1>
<div id="toc"></div>
<script src="https
Step1: Data
Step2: Explore variable in interval, ordinal and nominal variables
Step3: Explore relation pairs in interval, ordinal and nominal variables
Step4: Compare repeated measures variables with interval, ordinal and nominal variables
Step5: Compare groups in interval, ordinal and nominal dependent variables, with one or two grouping variables with 2 or 3 group levels | Python Code:
%matplotlib inline
import os
import warnings
warnings.filterwarnings('ignore')
from cogstat import cogstat as cs
print(cs.__version__)
cs_dir, dummy_filename = os.path.split(cs.__file__) # We use this for the demo data
Explanation: Showcase of various CogStat analyses
Below you can see a few examples what analyses are perfomed for a specific task in CogStat. Note that the specific analyses that are applied depend on the task, number of variables, variable measurement levels, various other properties (e.g. normality), therefore, these examples show only some of the possibilities. See a more extensive list of the available analyses details in the online help.
(The table of contents below may not be visible on all systems.)
<h1 id="tocheading">Table of Contents</h1>
<div id="toc"></div>
<script src="https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js"></script>
End of explanation
# Load some data
data = cs.CogStatData(data=os.path.join(cs_dir, 'sample_data', 'example_data.csv'))
# Display the data
cs.display(data.print_data())
Explanation: Data
End of explanation
### Explore variable ###
# Get the most important statistics of a single variable
cs.display(data.explore_variable('X'))
cs.display(data.explore_variable('Z'))
cs.display(data.explore_variable('CONDITION'))
Explanation: Explore variable in interval, ordinal and nominal variables
End of explanation
### Explore variable pair ###
# Get the statistics of a variable pair
cs.display(data.explore_variable_pair('X', 'Y'))
cs.display(data.explore_variable_pair('Z', 'ZZ'))
cs.display(data.explore_variable_pair('TIME', 'CONDITION'))
### Behavioral data diffusion analyses ###
# cs.display(data.diffusion(error_name=['error'], RT_name=['RT'], participant_name=['participant_id'], condition_names=['loudness', 'side']))
Explanation: Explore relation pairs in interval, ordinal and nominal variables
End of explanation
### Compare variables ###
cs.display(data.compare_variables(['X', 'Y'], factors=[]))
cs.display(data.compare_variables(['Z', 'ZZ'], factors=[]))
cs.display(data.compare_variables(['CONDITION', 'CONDITION2'], factors=[]))
Explanation: Compare repeated measures variables with interval, ordinal and nominal variables
End of explanation
### Compare groups ###
cs.display(data.compare_groups('X', grouping_variables=['TIME']))
cs.display(data.compare_groups('X', grouping_variables=['TIME3']))
cs.display(data.compare_groups('Y', grouping_variables=['TIME']))
cs.display(data.compare_groups('Y', grouping_variables=['TIME3']))
cs.display(data.compare_groups('CONDITION', grouping_variables=['TIME']))
cs.display(data.compare_groups('X', grouping_variables=['TIME', 'CONDITION']))
Explanation: Compare groups in interval, ordinal and nominal dependent variables, with one or two grouping variables with 2 or 3 group levels
End of explanation |
4,965 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Train nodule detector with LUNA16 dataset
Step1: Analyse input data
Let us import annotations
Step2: Lets take a look at some images
Step3: Classes are heaviliy unbalanced, hardly 0.2% percent are positive.
The best way to move forward will be to undersample the negative class and then augment the positive class heaviliy to balance out the samples.
Plan of attack
Step4: Ok the class to get image data works
Next thing to do is to undersample negative class drastically. Since the number of positives in the data set of 551065 are 1351 and rest are negatives, I plan to make the dataset less skewed. Like a 70%/30% split.
Step5: Prepare input data
Split into test train set
Step6: Create a validation dataset
Step7: We will need to augment the positive dataset like mad! Add new keys to X_train and Y_train for augmented data
Step8: Prepare output dir
Step9: Create HDF5 dataset with input data | Python Code:
INPUT_DIR = '../../input/'
OUTPUT_DIR = '../../output/lung-cancer/01/'
IMAGE_DIMS = (50,50,50,1)
%matplotlib inline
import numpy as np
import pandas as pd
import h5py
import matplotlib.pyplot as plt
import sklearn
import os
import glob
from modules.logging import logger
import modules.utils as utils
from modules.utils import Timer
import modules.logging
import modules.cnn as cnn
import modules.ctscan as ctscan
Explanation: Train nodule detector with LUNA16 dataset
End of explanation
annotations = pd.read_csv(INPUT_DIR + 'annotations.csv')
candidates = pd.read_csv(INPUT_DIR + 'candidates.csv')
print(annotations.iloc[1]['seriesuid'])
print(str(annotations.head()))
annotations.info()
print(candidates.iloc[1]['seriesuid'])
print(str(candidates.head()))
candidates.info()
print(len(candidates[candidates['class'] == 1]))
print(len(candidates[candidates['class'] == 0]))
Explanation: Analyse input data
Let us import annotations
End of explanation
scan = ctscan.CTScanMhd(INPUT_DIR, '1.3.6.1.4.1.14519.5.2.1.6279.6001.979083010707182900091062408058')
pixels = scan.get_image()
plt.imshow(pixels[80])
pixels = scan.get_subimage((40,40,10), (230,230,230))
plt.imshow(pixels[40])
Explanation: Lets take a look at some images
End of explanation
positives = candidates[candidates['class']==1].index
negatives = candidates[candidates['class']==0].index
Explanation: Classes are heaviliy unbalanced, hardly 0.2% percent are positive.
The best way to move forward will be to undersample the negative class and then augment the positive class heaviliy to balance out the samples.
Plan of attack:
Get an initial subsample of negative class and keep all of the positives such that we have a 80/20 class distribution
Create a training set such that we augment minority class heavilby rotating to get a 50/50 class distribution
End of explanation
positives
np.random.seed(42)
negIndexes = np.random.choice(negatives, len(positives)*5, replace = False)
print(len(positives))
print(len(negIndexes))
candidatesDf = candidates.iloc[list(positives)+list(negIndexes)]
Explanation: Ok the class to get image data works
Next thing to do is to undersample negative class drastically. Since the number of positives in the data set of 551065 are 1351 and rest are negatives, I plan to make the dataset less skewed. Like a 70%/30% split.
End of explanation
from sklearn.cross_validation import train_test_split
X = candidatesDf.iloc[:,:-1]
Y = candidatesDf.iloc[:,-1]
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.20, random_state = 42)
#print(str(X_test))
#print(str(Y_test))
Explanation: Prepare input data
Split into test train set
End of explanation
X_train, X_val, Y_train, Y_val = train_test_split(X_train, Y_train, test_size = 0.20, random_state = 42)
print(len(X_train))
print(len(X_val))
print(len(X_test))
print('number of positive cases are ' + str(Y_train.sum()))
print('total set size is ' + str(len(Y_train)))
print('percentage of positive cases are ' + str(Y_train.sum()*1.0/len(Y_train)))
Explanation: Create a validation dataset
End of explanation
tempDf = X_train[Y_train == 1]
tempDf = tempDf.set_index(X_train[Y_train == 1].index + 1000000)
X_train_new = X_train.append(tempDf)
tempDf = tempDf.set_index(X_train[Y_train == 1].index + 2000000)
X_train_new = X_train_new.append(tempDf)
ytemp = Y_train.reindex(X_train[Y_train == 1].index + 1000000)
ytemp.loc[:] = 1
Y_train_new = Y_train.append(ytemp)
ytemp = Y_train.reindex(X_train[Y_train == 1].index + 2000000)
ytemp.loc[:] = 1
Y_train_new = Y_train_new.append(ytemp)
X_train = X_train_new
Y_train = Y_train_new
print(len(X_train), len(Y_train))
print('After undersampling')
print('number of positive cases are ' + str(Y_train.sum()))
print('total set size is ' + str(len(Y_train)))
print('percentage of positive cases are ' + str(Y_train.sum()*1.0/len(Y_train)))
print(len(X_train))
print(len(X_val))
print(len(X_test))
print(X_train.head())
print(Y_train.head())
Explanation: We will need to augment the positive dataset like mad! Add new keys to X_train and Y_train for augmented data
End of explanation
utils.mkdirs(OUTPUT_DIR, recreate=True)
modules.logging.setup_file_logger(OUTPUT_DIR + 'out.log')
logger.info('Dir ' + OUTPUT_DIR + ' created')
Explanation: Prepare output dir
End of explanation
def create_dataset(file_path, x_data, y_data):
logger.info('Creating dataset ' + file_path + ' size=' + str(len(x_data)))
file_path_tmp = file_path + '.tmp'
with h5py.File(file_path_tmp, 'w') as h5f:
x_ds = h5f.create_dataset('X', (len(x_data), IMAGE_DIMS[0], IMAGE_DIMS[1], IMAGE_DIMS[2], IMAGE_DIMS[3]), chunks=(1, IMAGE_DIMS[0], IMAGE_DIMS[1], IMAGE_DIMS[2], IMAGE_DIMS[3]), dtype='f')
y_ds = h5f.create_dataset('Y', (len(y_data), 2), dtype='f')
valid = []
for c, idx in enumerate(x_data.index):
#if(c>3): break
d = x_data.loc[idx]
filename = d[0]
t = Timer('Loading scan ' + str(filename))
scan = ctscan.CTScanMhd(INPUT_DIR, filename)
pixels = scan.get_subimage((d[3],d[2],d[1]), IMAGE_DIMS)
#add color channel dimension
pixels = np.expand_dims(pixels, axis=3)
#plt.imshow(pixels[round(np.shape(pixels)[0]/2),:,:,0])
#plt.show()
if(np.shape(pixels) == (50,50,50,1)):
x_ds[c] = pixels
y_ds[c] = [1,0]
if(y_data.loc[idx] == 1):
y_ds[c] = [0,1]
valid.append(c)
else:
logger.warning('Invalid shape detected in image. Skipping. ' + str(np.shape(pixels)))
t.stop()
#dump only valid entries to dataset file
c = 0
with h5py.File(file_path, 'w') as h5fw:
x_dsw = h5fw.create_dataset('X', (len(valid), IMAGE_DIMS[0], IMAGE_DIMS[1], IMAGE_DIMS[2], IMAGE_DIMS[3]), chunks=(1, IMAGE_DIMS[0], IMAGE_DIMS[1], IMAGE_DIMS[2], IMAGE_DIMS[3]), dtype='f')
y_dsw = h5fw.create_dataset('Y', (len(valid), 2), dtype='f')
with h5py.File(file_path_tmp, 'r') as h5fr:
x_dsr = h5fr['X']
y_dsr = h5fr['Y']
for i in range(len(x_dsr)):
if(i in valid):
x_dsw[c] = x_dsr[i]
y_dsw[c] = y_dsr[i]
c = c + 1
os.remove(file_path_tmp)
utils.validate_xy_dataset(file_path, save_dir=OUTPUT_DIR + 'samples/')
#create_dataset(OUTPUT_DIR + 'nodules-train.h5', X_train, Y_train)
#create_dataset(OUTPUT_DIR + 'nodules-validate.h5', X_val, Y_val)
create_dataset(OUTPUT_DIR + 'nodules-test.h5', X_test, Y_test)
Explanation: Create HDF5 dataset with input data
End of explanation |
4,966 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Automatically find the center of the speckle pattern and most intense rings
Step1: Two examples to demonstarte automatically find the center of the speckle pattern and most intense 4 rings
First image
Step2: Plot the image with the center and the radii
Step3: Second example | Python Code:
import skbeam.core.roi as roi
import numpy as np
import matplotlib.pyplot as plt
%matplotlib notebook
x = np.linspace(-5,5,200)
X,Y = np.meshgrid(x,x)
Z = 100*np.cos(np.sqrt(x**2 + Y**2))**2 + 50
center, image, radii = roi.auto_find_center_rings(Z, sigma=20, no_rings=5)
fig, ax = plt.subplots()
ax.scatter(center[0], center[1], s=50, c='red')
im = ax.imshow(image, cmap="viridis")
cbar = fig.colorbar(im)
center
radii
Explanation: Automatically find the center of the speckle pattern and most intense rings
End of explanation
duke_img = np.load("image_data/duke_img.npy" )
fig, ax = plt.subplots()
im = ax.imshow(duke_img, vmax=1e0, cmap="viridis");
fig.colorbar(im)
plt.show()
center_a, image_a, radii_a = roi.auto_find_center_rings(duke_img, sigma=2, no_rings=4)
radii_a
Explanation: Two examples to demonstarte automatically find the center of the speckle pattern and most intense 4 rings
First image
End of explanation
fig, ax = plt.subplots()
plt.scatter(center_a[0], center_a[1], s=50, c='green')
ax.set_title("Center and 4 most intense rings")
ax.set_xlabel("pixels")
ax.set_ylabel("pixels")
im_a = ax.imshow(image_a, vmax=1e0, cmap="viridis");
fig.colorbar(im_a)
Explanation: Plot the image with the center and the radii
End of explanation
nipa_avg = np.load("image_data/nipa_avg.npy")
center_n, image_n, radii_n = roi.auto_find_center_rings(nipa_avg, sigma=20, no_rings=2)
fig, ax = plt.subplots()
ax.scatter(center_n[0], center_n[1], s=50, c='red')
im_n = ax.imshow(image_n, cmap="viridis")
plt.colorbar(im_n)
radii_n
center_n
import skbeam
skbeam.__version__
Explanation: Second example
End of explanation |
4,967 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Generation
Step1: Solution
Step2: Use matching indices
Instead of iterating through indices, one can use them directly to parallelize the operations with Numpy.
Step3: Use a library
scipy is the equivalent of matlab toolboxes and have a lot to offer. Actually the pairwise computation is part of the library through the spatial module.
Step4: Numpy Magic
Step5: Compare methods | Python Code:
np.random.seed(10)
p, q = (np.random.rand(i, 2) for i in (4, 5))
p_big, q_big = (np.random.rand(i, 80) for i in (100, 120))
print(p, "\n\n", q)
Explanation: Data Generation
End of explanation
def naive(p, q):
''' fill your code in here...
'''
Explanation: Solution
End of explanation
rows, cols = np.indices((p.shape[0], q.shape[0]))
print(rows, end='\n\n')
print(cols)
print(p[rows.ravel()], end='\n\n')
print(q[cols.ravel()])
def with_indices(p, q):
''' fill your code in here...
'''
Explanation: Use matching indices
Instead of iterating through indices, one can use them directly to parallelize the operations with Numpy.
End of explanation
from scipy.spatial.distance import cdist
def scipy_version(p, q):
return cdist(p, q)
Explanation: Use a library
scipy is the equivalent of matlab toolboxes and have a lot to offer. Actually the pairwise computation is part of the library through the spatial module.
End of explanation
def tensor_broadcasting(p, q):
return np.sqrt(np.sum((p[:,np.newaxis,:]-q[np.newaxis,:,:])**2, axis=2))
Explanation: Numpy Magic
End of explanation
methods = [naive, with_indices, scipy_version, tensor_broadcasting]
timers = []
for f in methods:
r = %timeit -o f(p_big, q_big)
timers.append(r)
plt.figure(figsize=(10,6))
plt.bar(np.arange(len(methods)), [r.best*1000 for r in timers], log=False) # Set log to True for logarithmic scale
plt.xticks(np.arange(len(methods))+0.2, [f.__name__ for f in methods], rotation=30)
plt.xlabel('Method')
plt.ylabel('Time (ms)')
Explanation: Compare methods
End of explanation |
4,968 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial Brief
This tutorial is an introduction to Python 3. This should give you the set of pythonic skills that you will need to proceed with this tutorial series.
If you don't have the Jupyter installed, shame on you. No just kidding you can follow this tutorial using an online jupyter service
Step1: The print function
Notice
Step2: Variables
There are many variable types in Python 3. Here is a list of the most common types
Step3: Other Value Types
Step4: Selecting / Slicing
Use len function to measure the length of a list.
Step5: To access a single value in a list use this syntax
Step6: To select multiple value from a list use this syntax
Step7: Notice
Step8: You can use negative indexing in selecting multiple values.
Step9: The third location in the index is the step. If the step is negative the the list is returned in descending order.
Step10: Working with Strings
You can select from a string like a list suing this syntax
Step11: Notice
Step12: Unicode
Notice
Step13: String Formatting
You can use this syntax to format a string
Step14: Other formatters could be used to format numbers
Step15: To find unicode symbols
Step16: Using format(*args, **kwargs) function
Step17: Mathematics
Step18: Notice
Step19: To raise a number to any power use down asterisk **. To represent $a^{n}$
Step20: To calculate the remainder (modulo operator) use %. To represent $a \mod b = r$
Step21: You can
You can use the math library to access a varaity of tools for algebra and geometry. To import a library, you can use one of these syntaxes
Step22: Loops
Step23: range
In Python 3 range is a data type that generates a list of numbers.
python
range(stop)
range(start,stop[ ,step])
Notice
Step24: Notice
Step25: Notice
Step26: While Loop
Step27: If .. Else
Step28: If you like Math
Step30: Functions
Functions are defined in Python using def keyword. | Python Code:
1+2
1+1
1+2
Explanation: Tutorial Brief
This tutorial is an introduction to Python 3. This should give you the set of pythonic skills that you will need to proceed with this tutorial series.
If you don't have the Jupyter installed, shame on you. No just kidding you can follow this tutorial using an online jupyter service:
https://try.jupyter.org/
Cell Input and Output
End of explanation
print(1+2)
Explanation: The print function
Notice: print is a function in Python 3. You should use parentheses around your parameter.
End of explanation
a = 4
b = 1.5
c = 121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212
d = 1j
e = 1/3
f = True
a+b
a*c
(b+d)*a
a+f
type(1.5)
Explanation: Variables
There are many variable types in Python 3. Here is a list of the most common types:
Numerical Types:
bool (Boolean)
int (Integer/Long)
float
complex
Notice: In Python 3 integer represents integer and long. Because there is no more long data type, you will not get L at the end of long integers.
End of explanation
my_name = "Roshan"
print(my_name)
my_list = [1,2,3,4,5]
my_list
my_list + [6]
my_list
my_list += [6,7,8]
my_list
my_list.append(9)
my_list
my_tuple = (1,2,3)
my_tuple
my_tuple + (4,5,6)
my_dict = {"name":"Roshan", "credit":100}
my_dict
my_dict["name"]
my_dict["level"] = 4
my_dict
my_dict.values()
my_dict.keys()
Explanation: Other Value Types:
str (String)
list (Ordered Array)
tuple (Ordered Immutable Array)
dict (Unordered list of keys and values)
End of explanation
len(my_list)
Explanation: Selecting / Slicing
Use len function to measure the length of a list.
End of explanation
my_list[0]
Explanation: To access a single value in a list use this syntax:
python
list_name[index]
End of explanation
my_list[1:2]
my_list[:3]
my_list[3:]
Explanation: To select multiple value from a list use this syntax:
python
index[start:end:step]
End of explanation
my_list[-1]
my_list[-2]
Explanation: Notice: negative index selected from the end of the list
End of explanation
my_list[-2:]
my_list[:-2]
my_list[3:-1]
Explanation: You can use negative indexing in selecting multiple values.
End of explanation
my_list[::2]
my_list[3::2]
my_list[::-1]
Explanation: The third location in the index is the step. If the step is negative the the list is returned in descending order.
End of explanation
my_name
my_name[0]
Explanation: Working with Strings
You can select from a string like a list suing this syntax:
python
my_string[star:end:step]
End of explanation
my_name[:2]
Explanation: Notice: You can also use negative indexing.
End of explanation
# Sorted by most spoken languages in order
divide_by_zero = {"zho":"你不能除以零",
"eng":"You cannot divide by zero",
"esp":"No se puede dividir por cero",
"hin":"आप शून्य से विभाजित नहीं किया जा सकता \u2248",
"arb":"لا يمكن القسمة على صفر"}
print(divide_by_zero["hin"])
type(divide_by_zero["hin"])
Explanation: Unicode
Notice: You can use unicode inside your string variables. Unlike Python 2, no need to use u"" to use unicode.
End of explanation
first_name = "Roshan"
last_name = "Rush"
formatted_name = "%s, %s." % (last_name, first_name[0])
print(formatted_name)
Explanation: String Formatting
You can use this syntax to format a string:
python
some_variable = 50
x = "Value: %s" % some_variable
print(x) # Value: 50
End of explanation
print("π ≈ %.2f" % 3.14159)
Explanation: Other formatters could be used to format numbers:
End of explanation
homeworks = 15.75
midterm = 22
final = 51
total = homeworks + midterm + final
print("Homeworks: %.2f\nMid-term: %.2f\nFinal: %.2f\nTotal: %.2f/100" % (homeworks, midterm, final, total))
Explanation: To find unicode symbols:
http://www.fileformat.info/info/unicode/char/search.htm
End of explanation
url = "http://{language}.wikipedia.org/"
url = url.format(language="en")
url
Explanation: Using format(*args, **kwargs) function
End of explanation
1+1
4-5
Explanation: Mathematics
End of explanation
14/5
14//5
2*5
Explanation: Notice: The default behavior of division in Python 3 is float division. To use integer division like Python 2, use //
End of explanation
2**3
Explanation: To raise a number to any power use down asterisk **. To represent $a^{n}$:
python
a**n
End of explanation
10 % 3
Explanation: To calculate the remainder (modulo operator) use %. To represent $a \mod b = r$:
python
a % b # Returns r
End of explanation
import math
n=52
k=1
math.factorial(n) / (math.factorial(k) * math.factorial(n-k))
Explanation: You can
You can use the math library to access a varaity of tools for algebra and geometry. To import a library, you can use one of these syntaxes:
python
import library_name
import library_name as alias
from module_name import some_class
End of explanation
for counter in [1,2,3,4]:
print(counter)
Explanation: Loops
End of explanation
for counter in range(5):
print(counter)
Explanation: range
In Python 3 range is a data type that generates a list of numbers.
python
range(stop)
range(start,stop[ ,step])
Notice: In Python 2 range is a function that returns a list. In Python 3, range returns an iterable of type range. If you need to get a list you can use the list() function:
python
list(range(start,stop[, step]))
End of explanation
list(range(1,10)) == list(range(1,5)) + list(range(5,10))
Explanation: Notice: The list doesn't reach the stop value and stops one step before. The reason behind that is to make this syntax possible:
End of explanation
for counter in range(1,5):
print(counter)
for counter in range(2,10,2):
print(counter)
Explanation: Notice: In Python 3 use use == to check if two values are equal. To check if two values are not equal use != and don't use <> from Python 2 because it is not supported any more in Python 3.
End of explanation
counter =1
while counter < 5:
print(counter)
counter += 1
Explanation: While Loop
End of explanation
if math.pi == 3.2:
print("Edward J. Goodwin was right!")
else:
print("π is irrational")
if math.sqrt(2) == (10/7):
print("Edward J. Goodwin was right!")
elif math.sqrt(2) != (10/7):
print("Square root of 2 is irrational")
Explanation: If .. Else
End of explanation
probability = 0.3
if probability >= 0.75:
print("Sure thing")
elif probability >= 0.5:
print("Maybe")
elif probability >= 0.25:
print("Unusual")
else:
print("No way")
Explanation: If you like Math:
Fun story about pi where it was almost set by law to be equal to 3.2!
If you don't what is the "pi bill" you can read about it here:
http://en.wikipedia.org/wiki/Indiana_Pi_Bill
Or watch Numberphile video about it:
https://www.youtube.com/watch?v=bFNjA9LOPsg
End of explanation
def get_circumference(r):
return math.pi * r * 2
get_circumference(5)
def binomilal_coef(n,k):
This function returns the binominal coef
Parameters:
===========
n, k int
return n!/(k!*(n-k)!)
value = math.factorial(n)/(math.factorial(k)*math.factorial(n-k))
return value
binomilal_coef(52,2)
Explanation: Functions
Functions are defined in Python using def keyword.
End of explanation |
4,969 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
No Show Appointments Analysis
<a id='intro'></a>
Introduction
Purpose To perform a Data analysis on a sample Dataset of No-show Appointments
This Dataset contains the records of the patients with various types of diseases who booked appointments and did not showed up on their appointment Day.
Questions
What factors made people to miss their Appointments ?
How many Female and male of different Age Group in the Dataset missed the Appointments ?
Did Age, regardless of age_group and sex, determine the patients missing the Appointments ?
Did women and children preferred to attend their appointments ?
Did the Scholarship of the patients helped in the attendence of their appointments?
<a id='wrangling'></a>
Data Wrangling
Data Description
Gender Gender
age Age
age_group Age Group
people_showed_up Patients who attended or missed their appointment (0 = Missed; 1 = Attended)
scholarship Medical Scholarship
Step1: Note PatientId column have exponential values in it.
Note No-show displays No if the patient visited and Yes if the Patient did not visited.
Data Cleanup
From the data description and questions to answer, I've determined that some of the dataset columns are not necessary for the analysis process and will therefore be removed. This will help to process the Data Analysis Faster.
PatientId
ScheduledDay
Sms_received
AppointmentID
AppointmentDay
i'll take a 3 step approach to data cleanup
Identify and remove duplicate entries
Remove unnecessary columns
Fix missing and data format issues
Step 1 - Removing Duplicate entries
Concluded that no duplicates entries exists, based on the tests below
Step2: Step 2 - Remove unnecessary columns
Columns(PatientId, ScheduledDay, Sms_received, AppointmentID, AppointmentDay) removed
Step3: Step 3 - Fix any missing or data format issues
Concluded that there is no missing data
Step4: Data Exploration And Visualization
Step5: The Fixed_Age column is created in order to replace the negative value available in the Age column. The newly created Fixed_Age column will help in the proper calculation in the further questions and the results will be perfect and clear too. The negative value (-1) is changed in to a positive value (1) by using the .abd() function.
Step6: Creation and Addition of Age_Group in the data set will help in the Q1 - How many Female and male of different Age Group in the Dataset missed the Appointments ?
Step7: Question 1
How many Female and male of different Age Group in the Dataset missed the Appointments ?
Step8: The graph above shows the number of people who attended their appointment and those who did not attended their appointments acccording to the Gender of the people having the appointment in the hospital.
- According to the graph above, women are more concious about their health regardless of the age group.
Step9: The graph above shows the number of people who attended their appointment and those who did not attended their appointments.
- False denotes that the people did not attended the appointments.
- True denotes that the people did attended the appointments.
The graphs is categorized according to the Age Group.
Based on the raw numbers it would appear that the age group of 65-75 is the most health cautious Age Group because they have the highest percentage of Appointment attendence followed by the Age Group of 55-65 which is just about 1% less than the 65-75 Age Group in the Appointment Attendence.
The Age group with the least percentage of Appointment Attendence is 15-25.
Note 105-115 is not the least percentage age group because the number of patients in that Age group are too low. So, the comparision is not possible.
Question 2
Did Age, regardless of Gender, determine the patients missing the Appointments ?
Step10: Based on the boxplot and the calculated data above, it would appear that
Step11: Based on the calculated data and the Graphs, it would appear that
Step12: According to the Bar graph above | Python Code:
# Render plots inline
%matplotlib inline
# Import Libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# Set style for all graphs
sns.set(style="whitegrid")
# Read in the Dataset, creat dataframe
appointment_data = pd.read_csv('noshow.csv')
# Print the first few records to review data and format
appointment_data.head()
# Print the Last few records to review data and format
appointment_data.tail()
Explanation: No Show Appointments Analysis
<a id='intro'></a>
Introduction
Purpose To perform a Data analysis on a sample Dataset of No-show Appointments
This Dataset contains the records of the patients with various types of diseases who booked appointments and did not showed up on their appointment Day.
Questions
What factors made people to miss their Appointments ?
How many Female and male of different Age Group in the Dataset missed the Appointments ?
Did Age, regardless of age_group and sex, determine the patients missing the Appointments ?
Did women and children preferred to attend their appointments ?
Did the Scholarship of the patients helped in the attendence of their appointments?
<a id='wrangling'></a>
Data Wrangling
Data Description
Gender Gender
age Age
age_group Age Group
people_showed_up Patients who attended or missed their appointment (0 = Missed; 1 = Attended)
scholarship Medical Scholarship
End of explanation
# Identify and remove duplicate entries
appointment_data_duplicates = appointment_data.duplicated()
print 'Number of duplicate entries is/are {}'.format(appointment_data_duplicates.sum())
# Lets make sure that this is working
duplication_test = appointment_data.duplicated('Age').head()
print 'Number of entries with duplicate age in top entries are {}'.format(duplication_test.sum())
appointment_data.head()
Explanation: Note PatientId column have exponential values in it.
Note No-show displays No if the patient visited and Yes if the Patient did not visited.
Data Cleanup
From the data description and questions to answer, I've determined that some of the dataset columns are not necessary for the analysis process and will therefore be removed. This will help to process the Data Analysis Faster.
PatientId
ScheduledDay
Sms_received
AppointmentID
AppointmentDay
i'll take a 3 step approach to data cleanup
Identify and remove duplicate entries
Remove unnecessary columns
Fix missing and data format issues
Step 1 - Removing Duplicate entries
Concluded that no duplicates entries exists, based on the tests below
End of explanation
# Create new dataset without unwanted columns
clean_appointment_data = appointment_data.drop(['PatientId','ScheduledDay','SMS_received','AppointmentID','AppointmentDay'], axis=1)
clean_appointment_data.head()
Explanation: Step 2 - Remove unnecessary columns
Columns(PatientId, ScheduledDay, Sms_received, AppointmentID, AppointmentDay) removed
End of explanation
# Calculate number of missing values
clean_appointment_data.isnull().sum()
# Taking a look at the datatypes
clean_appointment_data.info()
Explanation: Step 3 - Fix any missing or data format issues
Concluded that there is no missing data
End of explanation
# Looking at some typical descriptive statistics
clean_appointment_data.describe()
# Age minimum at -1.0 looks a bit weird so give a closer look
clean_appointment_data[clean_appointment_data['Age'] == -1]
# Fixing the negative value and creating a new column named Fixed_Age.
clean_appointment_data['Fixed_Age'] = clean_appointment_data['Age'].abs()
# Checking whether the negative value is still there or is it removed and changed into a positive value.
clean_appointment_data[clean_appointment_data['Fixed_Age'] == -1]
Explanation: Data Exploration And Visualization
End of explanation
# Create AgeGroups for further Analysis
'''bins = [0, 25, 50, 75, 100, 120]
group_names = ['0-25', '25-50', '50-75', '75-100', '100-120']
clean_appointment_data['age-group'] = pd.cut(clean_appointment_data['Fixed_Age'], bins, labels=group_names)
clean_appointment_data.head()'''
clean_appointment_data['Age_rounded'] = np.round(clean_appointment_data['Fixed_Age'], -1)
categories_dict = {0: '0-5',
10: '5-15',
20: '15-25',
30 : '25-35',
40 : '35-45',
50 : '45-55',
60: '55-65',
70 : '65-75',
80 : '75-85',
90: '85-95',
100: '95-105',
120: '105-115'}
clean_appointment_data['age_group'] = clean_appointment_data['Age_rounded'].map(categories_dict)
clean_appointment_data['age_group']
Explanation: The Fixed_Age column is created in order to replace the negative value available in the Age column. The newly created Fixed_Age column will help in the proper calculation in the further questions and the results will be perfect and clear too. The negative value (-1) is changed in to a positive value (1) by using the .abd() function.
End of explanation
# Simplifying the analysis by Fixing Yes and No issue in the No-show
# The issue is that in the No-show No means that the person visited at the time of their appointment and Yes means that they did not visited.
# First I will change Yes to 0 and No to 1 so that there is no confusion
clean_appointment_data['people_showed_up'] = clean_appointment_data['No-show'].replace(['Yes', 'No'], [0, 1])
clean_appointment_data
# Taking a look at the age of people who showed up and those who missed the appointment
youngest_to_showup = clean_appointment_data[clean_appointment_data['people_showed_up'] == True]['Fixed_Age'].min()
youngest_to_miss = clean_appointment_data[clean_appointment_data['people_showed_up'] == False]['Fixed_Age'].min()
oldest_to_showup = clean_appointment_data[clean_appointment_data['people_showed_up'] == True]['Fixed_Age'].max()
oldest_to_miss = clean_appointment_data[clean_appointment_data['people_showed_up'] == False]['Fixed_Age'].max()
print 'Youngest to Show up: {} \nYoungest to Miss: {} \nOldest to Show Up: {} \nOldest to Miss: {}'.format(
youngest_to_showup, youngest_to_miss, oldest_to_showup, oldest_to_miss)
Explanation: Creation and Addition of Age_Group in the data set will help in the Q1 - How many Female and male of different Age Group in the Dataset missed the Appointments ?
End of explanation
# Returns the percentage of male and female who visited the
# hospital on their appointment day with their Age
def people_visited(age_group, gender):
grouped_by_total = clean_appointment_data.groupby(['age_group', 'Gender']).size()[age_group,gender].astype('float')
grouped_by_visiting_gender = \
clean_appointment_data.groupby(['age_group', 'people_showed_up', 'Gender']).size()[age_group,1,gender].astype('float')
visited_gender_pct = (grouped_by_visiting_gender / grouped_by_total * 100).round(2)
return visited_gender_pct
# Get the actual numbers grouped by Age, No-show, Gender
groupedby_visitors = clean_appointment_data.groupby(['age_group','people_showed_up','Gender']).size()
# Print - Grouped by Age Group, Patients showing up on thier appointments and Gender
print groupedby_visitors
print '0-5 - Female Appointment Attendence: {}%'.format(people_visited('0-5','F'))
print '0-5 - Male Appointment Attendence: {}%'.format(people_visited('0-5','M'))
print '5-15 - Female Appointment Attendence: {}%'.format(people_visited('5-15','F'))
print '5-15 - Male Appointment Attendence: {}%'.format(people_visited('5-15','M'))
print '15-25 - Female Appointment Attendence: {}%'.format(people_visited('15-25','F'))
print '15-25 - Male Appointment Attendence: {}%'.format(people_visited('15-25','M'))
print '25-35 - Female Appointment Attendence: {}%'.format(people_visited('25-35','F'))
print '25-35 - Male Appointment Attendence: {}%'.format(people_visited('25-35','M'))
print '35-45 - Female Appointment Attendence: {}%'.format(people_visited('35-45','F'))
print '35-45 - Male Appointment Attendence: {}%'.format(people_visited('35-45','M'))
print '45-55 - Female Appointment Attendence: {}%'.format(people_visited('45-55','F'))
print '45-55 - Male Appointment Attendence: {}%'.format(people_visited('45-55','M'))
print '55-65 - Female Appointment Attendence: {}%'.format(people_visited('55-65','F'))
print '55-65 - Male Appointment Attendence: {}%'.format(people_visited('55-65','M'))
print '65-75 - Female Appointment Attendence: {}%'.format(people_visited('65-75','F'))
print '65-75 - Male Appointment Attendence: {}%'.format(people_visited('65-75','M'))
print '75-85 - Female Appointment Attendence: {}%'.format(people_visited('75-85','F'))
print '75-85 - Male Appointment Attendence: {}%'.format(people_visited('75-85','M'))
print '85-95 - Female Appointment Attendence: {}%'.format(people_visited('85-95','F'))
print '85-95 - Male Appointment Attendence: {}%'.format(people_visited('85-95','M'))
print '95-105 - Female Appointment Attendence: {}%'.format(people_visited('95-105','F'))
print '95-105 - Male Appointment Attendence: {}%'.format(people_visited('95-105','M'))
print '105-115 - Female Appointment Attendence: {}%'.format(people_visited('105-115','F'))
# Graph - Grouped by class, survival and sex
g = sns.factorplot(x="Gender", y="people_showed_up", col="age_group", data=clean_appointment_data,
saturation=4, kind="bar", ci=None, size=12, aspect=.35)
# Fix up the labels
(g.set_axis_labels('', 'People Visited')
.set_xticklabels(["Men", "Women"], fontsize = 30)
.set_titles("Age Group {col_name}")
.set(ylim=(0, 1))
.despine(left=True, bottom=True))
Explanation: Question 1
How many Female and male of different Age Group in the Dataset missed the Appointments ?
End of explanation
# Graph - Actual count of passengers by survival, group and sex
g = sns.factorplot('people_showed_up', col='Gender', hue='age_group', data=clean_appointment_data, kind='count', size=15, aspect=.6)
# Fix up the labels
(g.set_axis_labels('People Who Attended', 'No. of Appointment')
.set_xticklabels(["False", "True"], fontsize=20)
.set_titles('{col_name}')
)
titles = ['Men', 'Women']
for ax, title in zip(g.axes.flat, titles):
ax.set_title(title)
Explanation: The graph above shows the number of people who attended their appointment and those who did not attended their appointments acccording to the Gender of the people having the appointment in the hospital.
- According to the graph above, women are more concious about their health regardless of the age group.
End of explanation
# Find the total number of people who showed up and those who missed their appointments
number_showed_up = clean_appointment_data[clean_appointment_data['people_showed_up'] == True]['people_showed_up'].count()
number_missed = clean_appointment_data[clean_appointment_data['people_showed_up'] == False]['people_showed_up'].count()
# Find the average number of people who showed up and those who missed their appointments
mean_age_showed_up = clean_appointment_data[clean_appointment_data['people_showed_up'] == True]['Age'].mean()
mean_age_missed = clean_appointment_data[clean_appointment_data['people_showed_up'] == False]['Age'].mean()
# Displaying a few Totals
print 'Total number of People Who Showed Up {} \n\
Total number of People who missed the appointment {} \n\
Mean age of people who Showed up {} \n\
Mean age of people who missed the appointment {} \n\
Oldest to show up {} \n\
Oldest to miss the appointment {}' \
.format(number_showed_up, number_missed, np.round(mean_age_showed_up),
np.round(mean_age_missed), oldest_to_showup, oldest_to_miss)
# Graph of age of passengers across sex of those who survived
g = sns.factorplot(x="people_showed_up", y="Fixed_Age", hue='Gender', data=clean_appointment_data, kind="box", size=7, aspect=.8)
# Fixing the labels
(g.set_axis_labels('Appointment Attendence', 'Age of Patients')
.set_xticklabels(["False", "True"])
)
Explanation: The graph above shows the number of people who attended their appointment and those who did not attended their appointments.
- False denotes that the people did not attended the appointments.
- True denotes that the people did attended the appointments.
The graphs is categorized according to the Age Group.
Based on the raw numbers it would appear that the age group of 65-75 is the most health cautious Age Group because they have the highest percentage of Appointment attendence followed by the Age Group of 55-65 which is just about 1% less than the 65-75 Age Group in the Appointment Attendence.
The Age group with the least percentage of Appointment Attendence is 15-25.
Note 105-115 is not the least percentage age group because the number of patients in that Age group are too low. So, the comparision is not possible.
Question 2
Did Age, regardless of Gender, determine the patients missing the Appointments ?
End of explanation
# Create Category and Categorize people
clean_appointment_data.loc[
((clean_appointment_data['Gender'] == 'F') &
(clean_appointment_data['Age'] >= 18)),
'Category'] = 'Woman'
clean_appointment_data.loc[
((clean_appointment_data['Gender'] == 'M') &
(clean_appointment_data['Age'] >= 18)),
'Category'] = 'Man'
clean_appointment_data.loc[
(clean_appointment_data['Age'] < 18),
'Category'] = 'Child'
# Get the totals grouped by Men, Women and Children
print clean_appointment_data.groupby(['Category', 'people_showed_up']).size()
# Graph - Comapre the number of Men, Women and Children who showed up on their appointments
g = sns.factorplot('people_showed_up', col='Category', data=clean_appointment_data, kind='count', size=7, aspect=0.8)
# Fix up the labels
(g.set_axis_labels('Appointment Attendence', 'No. of Patients')
.set_xticklabels(['False', 'True'])
)
titles = ['Women', 'Men', 'Children']
for ax, title in zip(g.axes.flat, titles):
ax.set_title(title)
Explanation: Based on the boxplot and the calculated data above, it would appear that:
Regardless of the Gender, age was not a deciding factor in the appointment attendence rate of the Patients
The number of female who attended the appointment as well as who missed the appointment is more than the number of male
Question 3
Did women and children preferred to attend their appointments ?
Assumption: With 'child' not classified in the data, I'll need to assume a cutoff point. Therefore, I'll be using today's standard of under 18 as those to be considered as a child vs adult.
End of explanation
# Determine the number of Man, Woman and Children who had scholarship
man_with_scholarship = clean_appointment_data.loc[
(clean_appointment_data['Category'] == 'Man') &
(clean_appointment_data['Scholarship'] == 1)]
man_without_scholarship = clean_appointment_data.loc[
(clean_appointment_data['Category'] == 'Man') &
(clean_appointment_data['Scholarship'] == 0)]
woman_with_scholarship = clean_appointment_data.loc[
(clean_appointment_data['Category'] == 'Woman') &
(clean_appointment_data['Scholarship'] == 1)]
woman_without_scholarship = clean_appointment_data.loc[
(clean_appointment_data['Category'] == 'Woman') &
(clean_appointment_data['Scholarship'] == 0)]
children_with_scholarship = clean_appointment_data.loc[
(clean_appointment_data['Category'] == 'Child') &
(clean_appointment_data['Scholarship'] == 1)]
children_without_scholarship = clean_appointment_data.loc[
(clean_appointment_data['Category'] == 'Child') &
(clean_appointment_data['Scholarship'] == 0)]
# Graph - Compare how many man, woman and children with or without scholarship attended thier apoointments
g = sns.factorplot('Scholarship', col='Category', data=clean_appointment_data, kind='count', size=8, aspect=0.3)
# Fix up the labels
(g.set_axis_labels('Scholarship', 'No of Patients')
.set_xticklabels(['Missed', 'Attended'])
)
titles = ['Women', 'Men', 'Children']
for ax, title in zip(g.axes.flat, titles):
ax.set_title(title)
Explanation: Based on the calculated data and the Graphs, it would appear that:
- The appointment attendence of the women is significantly higher than that of the men and children
- The number of Men and children who attended the appointment is almost the same, the difference between the number of men and children is about :- 967
Question 4
Did the Scholarship of the patients helped in the attendence of their appointments?
End of explanation
# Determine the Total Number of Men, Women and Children with Scholarship
total_male_with_scholarship = clean_appointment_data.loc[
(clean_appointment_data['Category'] == 'Man') &
(clean_appointment_data['Scholarship'] < 2)]
total_female_with_scholarship = clean_appointment_data.loc[
(clean_appointment_data['Category'] == 'Woman') &
(clean_appointment_data['Scholarship'] < 2)]
total_child_with_scholarship = clean_appointment_data.loc[
(clean_appointment_data['Category'] == 'Child') &
(clean_appointment_data['Scholarship'] < 2)]
total_man_with_scholarship = total_male_with_scholarship.Scholarship.count()
total_woman_with_scholarship = total_female_with_scholarship.Scholarship.count()
total_children_with_scholarship = total_child_with_scholarship.Scholarship.count()
# Determine the number of Men, Women and Children with scholarship who Attended the Appointments
man_with_scholarship_attendence = man_with_scholarship.Scholarship.count()
woman_with_scholarship_attendence = woman_with_scholarship.Scholarship.sum()
children_with_scholarship_attendence = children_with_scholarship.Scholarship.sum()
# Determine the Percentage of Men, Women and Children with Scholarship who Attended or Missed the Appointments
pct_man_with_scholarship_attendence = ((float(man_with_scholarship_attendence)/total_man_with_scholarship)*100)
pct_man_with_scholarship_attendence = np.round(pct_man_with_scholarship_attendence,2)
pct_woman_with_scholarship_attendence = ((float(woman_with_scholarship_attendence)/total_woman_with_scholarship)*100)
pct_woman_with_scholarship_attendence = np.round(pct_woman_with_scholarship_attendence,2)
pct_children_with_scholarship_attendence = ((float(children_with_scholarship_attendence)/total_children_with_scholarship)*100)
pct_children_with_scholarship_attendence = np.round(pct_children_with_scholarship_attendence,2)
# Determine the Average Age of Men, Women and Children with Scholarship who Attended or Missed the Appointments
man_with_scholarship_avg_age = np.round(man_with_scholarship.Age.mean())
woman_with_scholarship_avg_age = np.round(woman_with_scholarship.Age.mean())
children_with_scholarship_avg_age = np.round(children_with_scholarship.Age.mean())
# Display Results
print '1. Total number of Men with Scholarship: {}\n\
2. Total number of Women with Scholarship: {}\n\
3. Total number of Children with Scholarship: {}\n\
4. Men with Scholarship who attended the Appointment: {}\n\
5. Women with Scholarship who attended the Appointment: {}\n\
6. Children with Scholarship who attended the Appointment: {}\n\
7. Men with Scholarship who missed the Appointment: {}\n\
8. Women with Scholarship who missed the Appointment: {}\n\
9. Children with Scholarship who missed the Appointment: {}\n\
10. Percentage of Men with Scholarship who attended the Appointment: {}%\n\
11. Percentage of Women with Scholarship who attended the Appointment: {}%\n\
12. Percentage of Children with Scholarship who attended the Appointment: {}%\n\
13. Average Age of Men with Scholarship who attended the Appointment: {}\n\
14. Average Age of Women with Scholarship who attended the Appointment: {}\n\
15. Average Age of Children with Scholarship who attended the Appointment: {}'\
.format(total_man_with_scholarship, total_woman_with_scholarship, total_children_with_scholarship,
man_with_scholarship_attendence, woman_with_scholarship_attendence, children_with_scholarship_attendence,
total_man_with_scholarship-man_with_scholarship_attendence, total_woman_with_scholarship-woman_with_scholarship_attendence,
total_children_with_scholarship-children_with_scholarship_attendence,
pct_man_with_scholarship_attendence, pct_woman_with_scholarship_attendence, pct_children_with_scholarship_attendence,
man_with_scholarship_avg_age, woman_with_scholarship_avg_age, children_with_scholarship_avg_age)
Explanation: According to the Bar graph above :-
- The number of people with scholarship did not affected the number of people visiting the hospital on their appointment.
- Women with Scholarship attended the appointments the most followed by Children. Men visited the hospital on their appointments the least.
The conclusion is that the Scholarship did not encouraged the number of people attending their appointments regardless of their age or gender.
End of explanation |
4,970 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to Compare LDA Models
Demonstrates how you can visualize and compare trained topic models.
Step2: First, clean up the 20 Newsgroups dataset. We will use it to fit LDA.
Step3: Second, fit two LDA models.
Step6: Time to visualize, yay!
We use two slightly different visualization methods depending on how you're running this tutorial.
If you're running via a Jupyter notebook, then you'll get a nice interactive Plotly heatmap.
If you're viewing the static version of the page, you'll get a similar matplotlib heatmap, but it won't be interactive.
Step7: Gensim can help you visualise the differences between topics. For this purpose, you can use the diff() method of LdaModel.
diff() returns a matrix with distances mdiff and a matrix with annotations annotation. Read the docstring for more detailed info.
In each mdiff[i][j] cell you'll find a distance between topic_i from the first model and topic_j from the second model.
In each annotation[i][j] cell you'll find [tokens from intersection, tokens from difference between topic_i from first model and topic_j from the second model.
Step8: Case 1
Step9: Unfortunately, in real life, not everything is so good, and the matrix looks different.
Short description (interactive annotations only)
Step10: If you compare a model with itself, you want to see as many red elements as
possible (except on the diagonal). With this picture, you can look at the
"not very red elements" and understand which topics in the model are very
similar and why (you can read annotation if you move your pointer to cell).
Jaccard is a stable and robust distance function, but sometimes not sensitive
enough. Let's try to use the Hellinger distance instead.
Step11: You see that everything has become worse, but remember that everything depends on the task.
Choose a distance function that matches your upstream task better | Python Code:
# sphinx_gallery_thumbnail_number = 2
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
Explanation: How to Compare LDA Models
Demonstrates how you can visualize and compare trained topic models.
End of explanation
from string import punctuation
from nltk import RegexpTokenizer
from nltk.stem.porter import PorterStemmer
from nltk.corpus import stopwords
from sklearn.datasets import fetch_20newsgroups
newsgroups = fetch_20newsgroups()
eng_stopwords = set(stopwords.words('english'))
tokenizer = RegexpTokenizer(r'\s+', gaps=True)
stemmer = PorterStemmer()
translate_tab = {ord(p): u" " for p in punctuation}
def text2tokens(raw_text):
Split the raw_text string into a list of stemmed tokens.
clean_text = raw_text.lower().translate(translate_tab)
tokens = [token.strip() for token in tokenizer.tokenize(clean_text)]
tokens = [token for token in tokens if token not in eng_stopwords]
stemmed_tokens = [stemmer.stem(token) for token in tokens]
return [token for token in stemmed_tokens if len(token) > 2] # skip short tokens
dataset = [text2tokens(txt) for txt in newsgroups['data']] # convert a documents to list of tokens
from gensim.corpora import Dictionary
dictionary = Dictionary(documents=dataset, prune_at=None)
dictionary.filter_extremes(no_below=5, no_above=0.3, keep_n=None) # use Dictionary to remove un-relevant tokens
dictionary.compactify()
d2b_dataset = [dictionary.doc2bow(doc) for doc in dataset] # convert list of tokens to bag of word representation
Explanation: First, clean up the 20 Newsgroups dataset. We will use it to fit LDA.
End of explanation
from gensim.models import LdaMulticore
num_topics = 15
lda_fst = LdaMulticore(
corpus=d2b_dataset, num_topics=num_topics, id2word=dictionary,
workers=4, eval_every=None, passes=10, batch=True,
)
lda_snd = LdaMulticore(
corpus=d2b_dataset, num_topics=num_topics, id2word=dictionary,
workers=4, eval_every=None, passes=20, batch=True,
)
Explanation: Second, fit two LDA models.
End of explanation
def plot_difference_plotly(mdiff, title="", annotation=None):
Plot the difference between models.
Uses plotly as the backend.
import plotly.graph_objs as go
import plotly.offline as py
annotation_html = None
if annotation is not None:
annotation_html = [
[
"+++ {}<br>--- {}".format(", ".join(int_tokens), ", ".join(diff_tokens))
for (int_tokens, diff_tokens) in row
]
for row in annotation
]
data = go.Heatmap(z=mdiff, colorscale='RdBu', text=annotation_html)
layout = go.Layout(width=950, height=950, title=title, xaxis=dict(title="topic"), yaxis=dict(title="topic"))
py.iplot(dict(data=[data], layout=layout))
def plot_difference_matplotlib(mdiff, title="", annotation=None):
Helper function to plot difference between models.
Uses matplotlib as the backend.
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(18, 14))
data = ax.imshow(mdiff, cmap='RdBu_r', origin='lower')
plt.title(title)
plt.colorbar(data)
try:
get_ipython()
import plotly.offline as py
except Exception:
#
# Fall back to matplotlib if we're not in a notebook, or if plotly is
# unavailable for whatever reason.
#
plot_difference = plot_difference_matplotlib
else:
py.init_notebook_mode()
plot_difference = plot_difference_plotly
Explanation: Time to visualize, yay!
We use two slightly different visualization methods depending on how you're running this tutorial.
If you're running via a Jupyter notebook, then you'll get a nice interactive Plotly heatmap.
If you're viewing the static version of the page, you'll get a similar matplotlib heatmap, but it won't be interactive.
End of explanation
print(LdaMulticore.diff.__doc__)
Explanation: Gensim can help you visualise the differences between topics. For this purpose, you can use the diff() method of LdaModel.
diff() returns a matrix with distances mdiff and a matrix with annotations annotation. Read the docstring for more detailed info.
In each mdiff[i][j] cell you'll find a distance between topic_i from the first model and topic_j from the second model.
In each annotation[i][j] cell you'll find [tokens from intersection, tokens from difference between topic_i from first model and topic_j from the second model.
End of explanation
import numpy as np
mdiff = np.ones((num_topics, num_topics))
np.fill_diagonal(mdiff, 0.)
plot_difference(mdiff, title="Topic difference (one model) in ideal world")
Explanation: Case 1: How topics within ONE model correlate with each other.
Short description:
x-axis - topic;
y-axis - topic;
.. role:: raw-html-m2r(raw)
:format: html
:raw-html-m2r:<span style="color:red">almost red cell</span> - strongly decorrelated topics;
.. role:: raw-html-m2r(raw)
:format: html
:raw-html-m2r:<span style="color:blue">almost blue cell</span> - strongly correlated topics.
In an ideal world, we would like to see different topics decorrelated between themselves.
In this case, our matrix would look like this:
End of explanation
mdiff, annotation = lda_fst.diff(lda_fst, distance='jaccard', num_words=50)
plot_difference(mdiff, title="Topic difference (one model) [jaccard distance]", annotation=annotation)
Explanation: Unfortunately, in real life, not everything is so good, and the matrix looks different.
Short description (interactive annotations only):
+++ make, world, well - words from the intersection of topics = present in both topics;
--- money, day, still - words from the symmetric difference of topics = present in one topic but not the other.
End of explanation
mdiff, annotation = lda_fst.diff(lda_fst, distance='hellinger', num_words=50)
plot_difference(mdiff, title="Topic difference (one model)[hellinger distance]", annotation=annotation)
Explanation: If you compare a model with itself, you want to see as many red elements as
possible (except on the diagonal). With this picture, you can look at the
"not very red elements" and understand which topics in the model are very
similar and why (you can read annotation if you move your pointer to cell).
Jaccard is a stable and robust distance function, but sometimes not sensitive
enough. Let's try to use the Hellinger distance instead.
End of explanation
mdiff, annotation = lda_fst.diff(lda_snd, distance='jaccard', num_words=50)
plot_difference(mdiff, title="Topic difference (two models)[jaccard distance]", annotation=annotation)
Explanation: You see that everything has become worse, but remember that everything depends on the task.
Choose a distance function that matches your upstream task better: what kind of "similarity" is
relevant to you. From my (Ivan's) experience, Jaccard is fine.
Case 2: How topics from DIFFERENT models correlate with each other.
Sometimes, we want to look at the patterns between two different models and compare them.
You can do this by constructing a matrix with the difference.
End of explanation |
4,971 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Problem
The number of shoes sold by an e-commerce company during the first three months(12 weeks) of the year were
Step1: On average, the sales after optimization is more than the sales before optimization. But is the difference legit? Could it be due to chance?
Classical Method
Step2: Exercise
Step3: Did the conclusion change now?
Effect Size
Because you can't argue with all the fools in the world. It's easier to let them have their way, then trick them when they're not paying attention - Christopher Paolini
In the first case, how much did the price optimization increase the sales on average?
Step4: Would business feel comfortable spending millions of dollars if the increase is going to be just 1.75%. Does it make sense? Maybe yes - if margins are thin and any increase is considered good. But if the returns from the price optimization module does not let the company break even, it makes no sense to take that path.
Someone tells you the result is statistically significant. The first question you should ask?
How large is the effect?
To answer such a question, we will make use of the concept confidence interval
In plain english, confidence interval is the range of values the measurement metric is going to take.
An example would be | Python Code:
import numpy as np
import seaborn as sns
sns.set(color_codes=True)
%matplotlib inline
#Load the data
before_opt = np.array([23, 21, 19, 24, 35, 17, 18, 24, 33, 27, 21, 23])
after_opt = np.array([31, 28, 19, 24, 32, 27, 16, 41, 23, 32, 29, 33])
before_opt.mean()
after_opt.mean()
observed_difference = after_opt.mean() - before_opt.mean()
print("Difference between the means is:", observed_difference)
Explanation: Problem
The number of shoes sold by an e-commerce company during the first three months(12 weeks) of the year were:
<br>
23 21 19 24 35 17 18 24 33 27 21 23
Meanwhile, the company developed some dynamic price optimization algorithms and the sales for the next 12 weeks were:
<br>
31 28 19 24 32 27 16 41 23 32 29 33
Did the dynamic price optimization algorithm deliver superior results? Can it be trusted?
Solution
Before we get onto different approaches, let's quickly get a feel for the data
End of explanation
#Step 1: Create the dataset. Let's give Label 0 to before_opt and Label 1 to after_opt
#Learn about the following three functions
?np.append
?np.zeros
?np.ones
shoe_sales = np.array([np.append(np.zeros(before_opt.shape[0]), np.ones(after_opt.shape[0])),
np.append(before_opt, after_opt)], dtype=int)
print("Shape:", shoe_sales.shape)
print("Data:", "\n", shoe_sales)
shoe_sales = shoe_sales.T
print("Shape:",shoe_sales.shape)
print("Data:", "\n", shoe_sales)
#This is the approach we are going to take
#We are going to randomly shuffle the labels. Then compute the mean between the two groups.
#Find the % of times when the difference between the means computed is greater than what we observed above
#If the % of times is less than 5%, we would make the call that the improvements are real
np.random.shuffle(shoe_sales)
shoe_sales
experiment_label = np.random.randint(0,2,shoe_sales.shape[0])
experiment_label
experiment_data = np.array([experiment_label, shoe_sales[:,1]])
experiment_data = experiment_data.T
print(experiment_data)
experiment_diff_mean = experiment_data[experiment_data[:,0]==1].mean() \
- experiment_data[experiment_data[:,0]==0].mean()
experiment_diff_mean
#Like the previous notebook, let's repeat this experiment 100 and then 100000 times
def shuffle_experiment(number_of_times):
experiment_diff_mean = np.empty([number_of_times,1])
for times in np.arange(number_of_times):
experiment_label = np.random.randint(0,2,shoe_sales.shape[0])
experiment_data = np.array([experiment_label, shoe_sales[:,1]]).T
experiment_diff_mean[times] = experiment_data[experiment_data[:,0]==1].mean() \
- experiment_data[experiment_data[:,0]==0].mean()
return experiment_diff_mean
experiment_diff_mean = shuffle_experiment(100)
experiment_diff_mean[:10]
sns.distplot(experiment_diff_mean, kde=False)
#Finding % of times difference of means is greater than observed
print("Data: Difference in mean greater than observed:", \
experiment_diff_mean[experiment_diff_mean>=observed_difference])
print("Number of times diff in mean greater than observed:", \
experiment_diff_mean[experiment_diff_mean>=observed_difference].shape[0])
print("% of times diff in mean greater than observed:", \
experiment_diff_mean[experiment_diff_mean>=observed_difference].shape[0]/float(experiment_diff_mean.shape[0])*100)
Explanation: On average, the sales after optimization is more than the sales before optimization. But is the difference legit? Could it be due to chance?
Classical Method : We could cover this method later on. This entails doing a t-test
Hacker's Method : Let's see if we can provide a hacker's perspective to this problem, similar to what we did in the previous notebook.
End of explanation
before_opt = np.array([230, 210, 190, 240, 350, 170, 180, 240, 330, 270, 210, 230])
after_opt = np.array([310, 180, 190, 240, 220, 240, 160, 410, 130, 320, 290, 210])
print("Mean sales before price optimization:", np.mean(before_opt))
print("Mean sales after price optimization:", np.mean(after_opt))
print("Difference in mean sales:", np.mean(after_opt) - np.mean(before_opt)) #Same as above
shoe_sales = np.array([np.append(np.zeros(before_opt.shape[0]), np.ones(after_opt.shape[0])),
np.append(before_opt, after_opt)], dtype=int)
shoe_sales = shoe_sales.T
experiment_diff_mean = shuffle_experiment(100000)
sns.distplot(experiment_diff_mean, kde=False)
#Finding % of times difference of means is greater than observed
print("Number of times diff in mean greater than observed:", \
experiment_diff_mean[experiment_diff_mean>=observed_difference].shape[0])
print("% of times diff in mean greater than observed:", \
experiment_diff_mean[experiment_diff_mean>=observed_difference].shape[0]/float(experiment_diff_mean.shape[0])*100)
Explanation: Exercise: Repeat the above for 100,000 runs and report the results
Is the result by chance?
What is the justification for shuffling the labels?
Thought process is this: If price optimization had no real effect, then, the sales before optimization would often give more sales than sales after optimization. By shuffling, we are simulating the situation where that happens - sales before optimization is greater than sales after optimization. If many such trials provide improvements, then, the price optimization has no effect. In statistical terms, the observed difference could have occurred by chance.
Now, to show that the same difference in mean might lead to a different conclusion, let's try the same experiment with a different dataset.
End of explanation
before_opt = np.array([23, 21, 19, 24, 35, 17, 18, 24, 33, 27, 21, 23])
after_opt = np.array([31, 28, 19, 24, 32, 27, 16, 41, 23, 32, 29, 33])
print("The % increase of sales in the first case:", \
(np.mean(after_opt) - np.mean(before_opt))/np.mean(before_opt)*100,"%")
before_opt = np.array([230, 210, 190, 240, 350, 170, 180, 240, 330, 270, 210, 230])
after_opt = np.array([310, 180, 190, 240, 220, 240, 160, 410, 130, 320, 290, 210])
print("The % increase of sales in the second case:", \
(np.mean(after_opt) - np.mean(before_opt))/np.mean(before_opt)*100,"%")
Explanation: Did the conclusion change now?
Effect Size
Because you can't argue with all the fools in the world. It's easier to let them have their way, then trick them when they're not paying attention - Christopher Paolini
In the first case, how much did the price optimization increase the sales on average?
End of explanation
#Load the data
before_opt = np.array([23, 21, 19, 24, 35, 17, 18, 24, 33, 27, 21, 23])
after_opt = np.array([31, 28, 19, 24, 32, 27, 16, 41, 23, 32, 29, 33])
#generate a uniform random sample
random_before_opt = np.random.choice(before_opt, size=before_opt.size, replace=True)
print("Actual sample before optimization:", before_opt)
print("Bootstrapped sample before optimization: ", random_before_opt)
print("Mean for actual sample:", np.mean(before_opt))
print("Mean for bootstrapped sample:", np.mean(random_before_opt))
random_after_opt = np.random.choice(after_opt, size=after_opt.size, replace=True)
print("Actual sample after optimization:", after_opt)
print("Bootstrapped sample after optimization: ", random_after_opt)
print("Mean for actual sample:", np.mean(after_opt))
print("Mean for bootstrapped sample:", np.mean(random_after_opt))
print("Difference in means of actual samples:", np.mean(after_opt) - np.mean(before_opt))
print("Difference in means of bootstrapped samples:", np.mean(random_after_opt) - np.mean(random_before_opt))
#Like always, we will repeat this experiment 100,000 times.
def bootstrap_experiment(number_of_times):
mean_difference = np.empty([number_of_times,1])
for times in np.arange(number_of_times):
random_before_opt = np.random.choice(before_opt, size=before_opt.size, replace=True)
random_after_opt = np.random.choice(after_opt, size=after_opt.size, replace=True)
mean_difference[times] = np.mean(random_after_opt) - np.mean(random_before_opt)
return mean_difference
mean_difference = bootstrap_experiment(100000)
sns.distplot(mean_difference, kde=False)
mean_difference = np.sort(mean_difference, axis=0)
mean_difference #Sorted difference
np.percentile(mean_difference, [5,95])
Explanation: Would business feel comfortable spending millions of dollars if the increase is going to be just 1.75%. Does it make sense? Maybe yes - if margins are thin and any increase is considered good. But if the returns from the price optimization module does not let the company break even, it makes no sense to take that path.
Someone tells you the result is statistically significant. The first question you should ask?
How large is the effect?
To answer such a question, we will make use of the concept confidence interval
In plain english, confidence interval is the range of values the measurement metric is going to take.
An example would be: 90% of the times, the increase in average sales (before and after price optimization) would be within the bucket 3.4 and 6.7 (These numbers are illustrative. We will derive those numbers below)
What is the hacker's way of doing it? We will do the following steps:
From actual sales data, we sample the data with repetition (separately for before and after) - sample size will be the same as the original
Find the differences between the mean of the two samples.
Repeat steps 1 and 2 , say 100,000 times.
Sort the differences. For getting 90% interval, take the 5% and 95% number. That range gives you the 90% confidence interval on the mean.
This process of generating the samples is called bootstrapping
End of explanation |
4,972 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: 使用 tf.data 加载 pandas dataframes
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: 下载包含心脏数据集的 csv 文件。
Step3: 使用 pandas 读取 csv 文件。
Step4: 将 thal 列(数据帧(dataframe)中的 object )转换为离散数值。
Step5: 使用 tf.data.Dataset 读取数据
使用 tf.data.Dataset.from_tensor_slices 从 pandas dataframe 中读取数值。
使用 tf.data.Dataset 的其中一个优势是可以允许您写一些简单而又高效的数据管道(data pipelines)。从 loading data guide 可以了解更多。
Step6: 由于 pd.Series 实现了 __array__ 协议,因此几乎可以在任何使用 np.array 或 tf.Tensor 的地方透明地使用它。
Step7: 随机读取(shuffle)并批量处理数据集。
Step8: 创建并训练模型
Step9: 代替特征列
将字典作为输入传输给模型就像创建 tf.keras.layers.Input 层的匹配字典一样简单,应用任何预处理并使用 functional api。 您可以使用它作为 feature columns 的替代方法。
Step10: 与 tf.data 一起使用时,保存 pd.DataFrame 列结构的最简单方法是将 pd.DataFrame 转换为 dict ,并对该字典进行切片。 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
!pip install tensorflow-gpu==2.0.0-rc1
import pandas as pd
import tensorflow as tf
Explanation: 使用 tf.data 加载 pandas dataframes
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://tensorflow.google.cn/tutorials/load_data/pandas_dataframe"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/load_data/pandas_dataframe.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/load_data/pandas_dataframe.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/load_data/pandas_dataframe.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a>
</td>
</table>
本教程提供了如何将 pandas dataframes 加载到 tf.data.Dataset。
本教程使用了一个小型数据集,由克利夫兰诊所心脏病基金会(Cleveland Clinic Foundation for Heart Disease)提供. 此数据集中有几百行CSV。每行表示一个患者,每列表示一个属性(describe)。我们将使用这些信息来预测患者是否患有心脏病,这是一个二分类问题。
使用 pandas 读取数据
End of explanation
csv_file = tf.keras.utils.get_file('heart.csv', 'https://storage.googleapis.com/applied-dl/heart.csv')
Explanation: 下载包含心脏数据集的 csv 文件。
End of explanation
df = pd.read_csv(csv_file)
df.head()
df.dtypes
Explanation: 使用 pandas 读取 csv 文件。
End of explanation
df['thal'] = pd.Categorical(df['thal'])
df['thal'] = df.thal.cat.codes
df.head()
Explanation: 将 thal 列(数据帧(dataframe)中的 object )转换为离散数值。
End of explanation
target = df.pop('target')
dataset = tf.data.Dataset.from_tensor_slices((df.values, target.values))
for feat, targ in dataset.take(5):
print ('Features: {}, Target: {}'.format(feat, targ))
Explanation: 使用 tf.data.Dataset 读取数据
使用 tf.data.Dataset.from_tensor_slices 从 pandas dataframe 中读取数值。
使用 tf.data.Dataset 的其中一个优势是可以允许您写一些简单而又高效的数据管道(data pipelines)。从 loading data guide 可以了解更多。
End of explanation
tf.constant(df['thal'])
Explanation: 由于 pd.Series 实现了 __array__ 协议,因此几乎可以在任何使用 np.array 或 tf.Tensor 的地方透明地使用它。
End of explanation
train_dataset = dataset.shuffle(len(df)).batch(1)
Explanation: 随机读取(shuffle)并批量处理数据集。
End of explanation
def get_compiled_model():
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, activation='relu'),
tf.keras.layers.Dense(10, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
return model
model = get_compiled_model()
model.fit(train_dataset, epochs=15)
Explanation: 创建并训练模型
End of explanation
inputs = {key: tf.keras.layers.Input(shape=(), name=key) for key in df.keys()}
x = tf.stack(list(inputs.values()), axis=-1)
x = tf.keras.layers.Dense(10, activation='relu')(x)
output = tf.keras.layers.Dense(1, activation='sigmoid')(x)
model_func = tf.keras.Model(inputs=inputs, outputs=output)
model_func.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
Explanation: 代替特征列
将字典作为输入传输给模型就像创建 tf.keras.layers.Input 层的匹配字典一样简单,应用任何预处理并使用 functional api。 您可以使用它作为 feature columns 的替代方法。
End of explanation
dict_slices = tf.data.Dataset.from_tensor_slices((df.to_dict('list'), target.values)).batch(16)
for dict_slice in dict_slices.take(1):
print (dict_slice)
model_func.fit(dict_slices, epochs=15)
Explanation: 与 tf.data 一起使用时,保存 pd.DataFrame 列结构的最简单方法是将 pd.DataFrame 转换为 dict ,并对该字典进行切片。
End of explanation |
4,973 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advent of Code 2017
December 4th
To ensure security, a valid passphrase must contain no duplicate words.
For example
Step1: I'll assume the input is a Joy sequence of sequences of integers.
[[5 1 9 5]
[7 5 4 3]
[2 4 6 8]]
So, obviously, the initial form will be a step function
Step2: AoC2017.4 == [F +] step_zero | Python Code:
from notebook_preamble import J, V, define
Explanation: Advent of Code 2017
December 4th
To ensure security, a valid passphrase must contain no duplicate words.
For example:
aa bb cc dd ee is valid.
aa bb cc dd aa is not valid - the word aa appears more than once.
aa bb cc dd aaa is valid - aa and aaa count as different words.
The system's full passphrase list is available as your puzzle input. How many passphrases are valid?
End of explanation
J('[step_zero] help')
Explanation: I'll assume the input is a Joy sequence of sequences of integers.
[[5 1 9 5]
[7 5 4 3]
[2 4 6 8]]
So, obviously, the initial form will be a step function:
AoC2017.4 == 0 swap [F +] step
F == [size] [unique size] cleave =
The step_zero combinator includes the 0 swap that would normally open one of these definitions:
End of explanation
define('AoC2017.4 == [[size] [unique size] cleave = +] step_zero')
J('''
[[5 1 9 5]
[7 5 4 3]
[2 4 6 8]] AoC2017.4
''')
Explanation: AoC2017.4 == [F +] step_zero
End of explanation |
4,974 | Given the following text description, write Python code to implement the functionality described.
Description:
Number of ways to arrange a word such that no vowels occur together
Function to check if a character is vowel or consonent ; Function to calculate factorial of a number ; Calculating no of ways for arranging vowels ; Iterate the map and count the number of vowels and calculate no of ways to arrange vowels ; calculating no of ways to arrange the given word such that vowels come together ; calculate no of ways to arrange vowels ; to store denominator of fraction ; count of consonents ; calculate the number of ways to arrange the word such that vowels come together ; To calculate total number of permutations ; To store length of the given word ; denominator of fraction ; return total number of permutations of the given word ; Function to calculate number of permutations such that no vowels come together ; to store frequency of character ; count frequency of acharacters ; calculate total number of permutations ; calculate total number of permutations such that vowels come together ; substrat vwl_tgthr from total to get the result ; return the result ; Driver code
| Python Code:
def isVowel(ch ) :
if(ch == ' a ' or ch == ' e ' or ch == ' i ' or ch == ' o ' or ch == ' u ' ) :
return True
else :
return False
def fact(n ) :
if(n < 2 ) :
return 1
return n * fact(n - 1 )
def only_vowels(freq ) :
denom = 1
cnt_vwl = 0
for itr in freq :
if(isVowel(itr ) ) :
denom *= fact(freq[itr ] )
cnt_vwl += freq[itr ]
return fact(cnt_vwl ) // denom
def all_vowels_together(freq ) :
vow = only_vowels(freq )
denom = 1
cnt_cnst = 0
for itr in freq :
if(isVowel(itr ) == False ) :
denom *= fact(freq[itr ] )
cnt_cnst += freq[itr ]
ans = fact(cnt_cnst + 1 ) // denom
return(ans * vow )
def total_permutations(freq ) :
cnt = 0
denom = 1
for itr in freq :
denom *= fact(freq[itr ] )
cnt += freq[itr ]
return fact(cnt ) // denom
def no_vowels_together(word ) :
freq = dict()
for i in word :
ch = i . lower()
freq[ch ] = freq . get(ch , 0 ) + 1
total = total_permutations(freq )
vwl_tgthr = all_vowels_together(freq )
res = total - vwl_tgthr
return res
word = "allahabad "
ans = no_vowels_together(word )
print(ans )
word = "geeksforgeeks "
ans = no_vowels_together(word )
print(ans )
word = "abcd "
ans = no_vowels_together(word )
print(ans )
|
4,975 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example with real audio recordings
The iterations are dropped in contrast to the offline version. To use past observations the correlation matrix and the correlation vector are calculated recursively with a decaying window. $\alpha$ is the decay factor.
Setup
Step1: Audio data
Step2: Online buffer
For simplicity the STFT is performed before providing the frames.
Shape
Step3: Non-iterative frame online approach
A frame online example requires, that certain state variables are kept from frame to frame. That is the inverse correlation matrix $\text{R}_{t, f}^{-1}$ which is stored in Q and initialized with an identity matrix, as well as filter coefficient matrix that is stored in G and initialized with zeros.
Again for simplicity the ISTFT is applied afterwards.
Step4: Frame online WPE in class fashion
Step5: Power spectrum
Before and after applying WPE. | Python Code:
channels = 8
sampling_rate = 16000
delay = 3
alpha=0.9999
taps = 10
frequency_bins = stft_options['size'] // 2 + 1
Explanation: Example with real audio recordings
The iterations are dropped in contrast to the offline version. To use past observations the correlation matrix and the correlation vector are calculated recursively with a decaying window. $\alpha$ is the decay factor.
Setup
End of explanation
file_template = 'AMI_WSJ20-Array1-{}_T10c0201.wav'
signal_list = [
sf.read(str(project_root / 'data' / file_template.format(d + 1)))[0]
for d in range(channels)
]
y = np.stack(signal_list, axis=0)
IPython.display.Audio(y[0], rate=sampling_rate)
Explanation: Audio data
End of explanation
Y = stft(y, **stft_options).transpose(1, 2, 0)
T, _, _ = Y.shape
def aquire_framebuffer():
buffer = list(Y[:taps+delay, :, :])
for t in range(taps+delay+1, T):
buffer.append(Y[t, :, :])
yield np.array(buffer)
buffer.pop(0)
Explanation: Online buffer
For simplicity the STFT is performed before providing the frames.
Shape: (frames, frequency bins, channels)
frames: K+delay+1
End of explanation
Z_list = []
Q = np.stack([np.identity(channels * taps) for a in range(frequency_bins)])
G = np.zeros((frequency_bins, channels * taps, channels))
for Y_step in tqdm(aquire_framebuffer()):
Z, Q, G = online_wpe_step(
Y_step,
get_power_online(Y_step.transpose(1, 2, 0)),
Q,
G,
alpha=alpha,
taps=taps,
delay=delay
)
Z_list.append(Z)
Z_stacked = np.stack(Z_list)
z = istft(np.asarray(Z_stacked).transpose(2, 0, 1), size=stft_options['size'], shift=stft_options['shift'])
IPython.display.Audio(z[0], rate=sampling_rate)
Explanation: Non-iterative frame online approach
A frame online example requires, that certain state variables are kept from frame to frame. That is the inverse correlation matrix $\text{R}_{t, f}^{-1}$ which is stored in Q and initialized with an identity matrix, as well as filter coefficient matrix that is stored in G and initialized with zeros.
Again for simplicity the ISTFT is applied afterwards.
End of explanation
Z_list = []
online_wpe = OnlineWPE(
taps=taps,
delay=delay,
alpha=alpha
)
for Y_step in tqdm(aquire_framebuffer()):
Z_list.append(online_wpe.step_frame(Y_step))
Z = np.stack(Z_list)
z = istft(np.asarray(Z).transpose(2, 0, 1), size=stft_options['size'], shift=stft_options['shift'])
IPython.display.Audio(z[0], rate=sampling_rate)
Explanation: Frame online WPE in class fashion:
Online WPE class holds the correlation Matrix and the coefficient matrix.
End of explanation
fig, [ax1, ax2] = plt.subplots(1, 2, figsize=(20, 8))
im1 = ax1.imshow(20 * np.log10(np.abs(Y[200:400, :, 0])).T, origin='lower')
ax1.set_xlabel('')
_ = ax1.set_title('reverberated')
im2 = ax2.imshow(20 * np.log10(np.abs(Z_stacked[200:400, :, 0])).T, origin='lower')
_ = ax2.set_title('dereverberated')
cb = fig.colorbar(im1)
Explanation: Power spectrum
Before and after applying WPE.
End of explanation |
4,976 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
qp Demo
Alex Malz & Phil Marshall
In this notebook we use the qp module to approximate some simple, standard, 1-D PDFs using sets of quantiles, samples, and histograms, and assess their relative accuracy. We also show how such analyses can be extended to use "composite" PDFs made up of mixtures of standard distributions.
Requirements
To run qp, you will need to first install the module.
Step1: The qp.PDF Class
This is the basic element of qp - an object representing a probability density function. This class is stored in the module pdf.py. The PDF must be initialized with some representation of the distribution.
Step2: Approximating a Gaussian
Let's summon a PDF object, and initialize it with a standard function - a Gaussian.
Step3: Samples
Let's sample the PDF to see how it looks. When we plot the PDF object, both the true and sampled distributions are displayed.
Step4: Quantile Parametrization
Now, let's compute a set of evenly spaced quantiles. These will be carried by the PDF object as p.quantiles. We also demonstrate the initialization of a PDF object with quantiles and no truth function.
Step5: Histogram Parametrization
Let's also compute a histogram representation, that will be carried by the PDF object as p.histogram. The values in each bin are the integrals of the PDF over the range defined by bin ends. We can also initialize a PDF object with a histogram and no truth function.
Step6: Evaluating the Approximate PDF by Interpolation
Once we have chosen a parametrization to approximate the PDF with, we can evaluate the approximate PDF at any point by interpolation (or extrapolation). qp uses scipy.intepolate.interp1d to do this, with linear as the default interpolation scheme. (Most other options do not enable extrapolation, nearest being the exception.)
Let's test this interpolation by evaluating an approximation at a single point using the quantile parametrization.
Step7: (We can also integrate any approximation.)
Step8: We can also interpolate the function onto an evenly spaced grid with points within and out of the quantile range, as follows
Step9: We can also change the interpolation scheme
Step10: The "Evaluated" or "Gridded" Parametrization
A qp.PDF object may also be initialized with a parametrization of a function evaluated on a grid. This is also what is produced by the qp.PDF.approximate() method. So, let's take the output of a qp.PDF approximation evaluation, and use it to instantiate a new qp.PDF object. Note that the evaluate method can be used to return PDF evaluations from either the true PDF or one of its approximations, via the using keyword argument.
Step11: Let's unpack this a little. The G PDF object has an attribute G.gridded which contains the initial gridded function. This lookup table is used when making further approximations. To check this, let's look at whether this G PDF object knows what the true PDF is, which approximation it's going to use, and then how it performs at making a new approximation to the PDF on a coarser grid
Step12: Mixture Model Fit
We can fit a parametric mixture model to samples from any parametrization. Currently, only a Gaussian mixture model is supported.
Step13: Comparing Parametrizations
qp supports both qualitative and quantitative comparisons between different distributions, across parametrizations.
Qualitative Comparisons
Step14: Quantitative Comparisons
Step15: Next, let's compare the different parametrizations to the truth using the Kullback-Leibler Divergence (KLD). The KLD is a measure of how close two probability distributions are to one another -- a smaller value indicates closer agreement. It is measured in units of bits of information, the information lost in going from the second distribution to the first distribution. The KLD calculator here takes in a shared grid upon which to evaluate the true distribution and the interpolated approximation of that distribution and returns the KLD of the approximation relative to the truth, which is not in general the same as the KLD of the truth relative to the approximation. Below, we'll calculate the KLD of the approximation relative to the truth over different ranges, showing that it increases as it includes areas where the true distribution and interpolated distributions diverge.
Step16: Holy smokes, does the quantile approximation blow everything else out of the water, thanks to using spline interpolation.
The progression of KLD values should follow that of the root mean square error (RMSE), another measure of how close two functions are to one another. The RMSE also increases as it includes areas where the true distribution and interpolated distribution diverge. Unlike the KLD, the RMSE is symmetric, meaning the distance measured is not that of one distribution from the other but of the symmetric distance between them.
Step17: Both the KLD and RMSE metrics suggest that the quantile approximation is better in the high density region, but samples work better when the tails are included. We might expect the answer to the question of which approximation to use to depend on the application, and whether the tails need to be captured or not.
Finally, we can compare the meoments of each approximation and compare those to the moments ofthe true distribution.
Step18: The first three moments have an interesting interpretation. The zeroth moment should always be 1 when calculated over the entire range of redshifts, but the quantile approximation is off by about $7\%$. We know the first moment in this case is 0, and indeed the evaluation of the first moment for the true distribution deviates from 0 by less than Python's floating point precision. The samples parametrization has a biased estimate for the first moment to the tune of $2\%$. The second moment for the true distribution is 1, and the quantile parametrization (and, to a lesser extent, the histogram parametrization) fails to provide a good estimate of it.
Advanced Usage
Composite PDFs
In addition to individual scipy.stats.rv_continuous objects, qp can be initialized with true distributions that are linear combinations of scipy.stats.rv_continuous objects. To do this, one must create the component distributions and specify their relative weights. This can be done by running qp.PDF.mix_mod_fit() on an existing qp.PDF object once samples have been calculated, or it can be done by hand.
Step19: We can calculate the quantiles for such a distribution.
Step20: Similarly, the histogram parametrization is also supported for composite PDFs.
Step21: Finally, samples from this distribution may also be taken, and a PDF may be reconstructed from them. Note
Step22: PDF Ensembles
qp also includes infrastructure for handling ensembles of PDF objects with shared metaparameters, such as histogram bin ends, but unique per-object parameters, such as histogram bin heights. A qp.Ensemble object takes as input the number of items in the ensemble and, optionally, a list, with contents corresponding to one of the built-in formats.
Let's demonstrate on PDFs with a functional form, which means the list of information for each member of the ensemble is scipy.stats.rv_continuous or qp.composite objects.
Step23: As with individual qp.PDF objects, we can evaluate the PDFs at given points, convert to other formats, and integrate.
Step25: Previous versions of qp included a built-in function for "stacking" the member PDFs of a qp.Ensemble object. This functionality has been removed to discourage use of this procedure in science applications. However, we provide a simple function one may use should this functionality be desired. | Python Code:
import numpy as np
import scipy.stats as sps
import scipy.interpolate as spi
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
%matplotlib inline
import qp
Explanation: qp Demo
Alex Malz & Phil Marshall
In this notebook we use the qp module to approximate some simple, standard, 1-D PDFs using sets of quantiles, samples, and histograms, and assess their relative accuracy. We also show how such analyses can be extended to use "composite" PDFs made up of mixtures of standard distributions.
Requirements
To run qp, you will need to first install the module.
End of explanation
# ! cat qp/pdf.py
P = qp.PDF(vb=True)
Explanation: The qp.PDF Class
This is the basic element of qp - an object representing a probability density function. This class is stored in the module pdf.py. The PDF must be initialized with some representation of the distribution.
End of explanation
dist = sps.norm(loc=0, scale=1)
print(type(dist))
demo_limits = (-5., 5.)
P = qp.PDF(funcform=dist, limits=demo_limits)
P.plot()
Explanation: Approximating a Gaussian
Let's summon a PDF object, and initialize it with a standard function - a Gaussian.
End of explanation
np.random.seed(42)
samples = P.sample(1000, using='mix_mod', vb=False)
S = qp.PDF(samples=samples, limits=demo_limits)
S.plot()
Explanation: Samples
Let's sample the PDF to see how it looks. When we plot the PDF object, both the true and sampled distributions are displayed.
End of explanation
quantiles = P.quantize(N=10)
Q = qp.PDF(quantiles=quantiles, limits=demo_limits)
Q.plot()
Explanation: Quantile Parametrization
Now, let's compute a set of evenly spaced quantiles. These will be carried by the PDF object as p.quantiles. We also demonstrate the initialization of a PDF object with quantiles and no truth function.
End of explanation
histogram = P.histogramize(N=10, binrange=demo_limits)
H = qp.PDF(histogram=histogram, limits=demo_limits)
H.plot()
print H.truth
Explanation: Histogram Parametrization
Let's also compute a histogram representation, that will be carried by the PDF object as p.histogram. The values in each bin are the integrals of the PDF over the range defined by bin ends. We can also initialize a PDF object with a histogram and no truth function.
End of explanation
print P.approximate(np.array([0.314]), using='quantiles')
P.mix_mod
Explanation: Evaluating the Approximate PDF by Interpolation
Once we have chosen a parametrization to approximate the PDF with, we can evaluate the approximate PDF at any point by interpolation (or extrapolation). qp uses scipy.intepolate.interp1d to do this, with linear as the default interpolation scheme. (Most other options do not enable extrapolation, nearest being the exception.)
Let's test this interpolation by evaluating an approximation at a single point using the quantile parametrization.
End of explanation
print P.integrate([0., 1.], using='quantiles')
Explanation: (We can also integrate any approximation.)
End of explanation
grid = np.linspace(-3., 3., 100)
gridded = P.approximate(grid, using='quantiles')
Explanation: We can also interpolate the function onto an evenly spaced grid with points within and out of the quantile range, as follows:
End of explanation
print P.scheme
print P.approximate(np.array([0.314]), using='quantiles', scheme='nearest')
print P.scheme
Explanation: We can also change the interpolation scheme:
End of explanation
grid = np.linspace(-3., 3., 20)
gridded = P.evaluate(grid, using='mix_mod', vb=False)
G = qp.PDF(gridded=gridded, limits=demo_limits)
G.sample(100, vb=False)
G.plot()
Explanation: The "Evaluated" or "Gridded" Parametrization
A qp.PDF object may also be initialized with a parametrization of a function evaluated on a grid. This is also what is produced by the qp.PDF.approximate() method. So, let's take the output of a qp.PDF approximation evaluation, and use it to instantiate a new qp.PDF object. Note that the evaluate method can be used to return PDF evaluations from either the true PDF or one of its approximations, via the using keyword argument.
End of explanation
print G.truth
print G.last,'approximation, ', G.scheme, 'interpolation'
# 10-point grid for a coarse approximation:
coarse_grid = np.linspace(-3.5, 3.5, 10)
coarse_evaluation = G.approximate(coarse_grid, using='gridded')
print coarse_evaluation
Explanation: Let's unpack this a little. The G PDF object has an attribute G.gridded which contains the initial gridded function. This lookup table is used when making further approximations. To check this, let's look at whether this G PDF object knows what the true PDF is, which approximation it's going to use, and then how it performs at making a new approximation to the PDF on a coarser grid:
End of explanation
MM = qp.PDF(funcform=dist, limits=demo_limits)
MM.sample(1000, vb=False)
MM.mix_mod_fit(n_components=5)
MM.plot()
Explanation: Mixture Model Fit
We can fit a parametric mixture model to samples from any parametrization. Currently, only a Gaussian mixture model is supported.
End of explanation
P.plot()
Explanation: Comparing Parametrizations
qp supports both qualitative and quantitative comparisons between different distributions, across parametrizations.
Qualitative Comparisons: Plotting
Let's visualize the PDF object in order to compare the truth and the approximations. The solid, black line shows the true PDF evaluated between the bounds. The green rugplot shows the locations of the 1000 samples we took. The vertical, dotted, blue lines show the percentiles we asked for, and the hotizontal, dotted, red lines show the 10 equally spaced bins we asked for. Note that the quantiles refer to the probability distribution between the bounds, because we are not able to integrate numerically over an infinite range. Interpolations of each parametrization are given as dashed lines in their corresponding colors. Note that the interpolations of the quantile and histogram parametrizations are so close to each other that the difference is almost imperceptible!
End of explanation
symm_lims = np.array([-1., 1.])
all_lims = [symm_lims, 2.*symm_lims, 3.*symm_lims]
Explanation: Quantitative Comparisons
End of explanation
for PDF in [Q, H, S]:
D = []
for lims in all_lims:
D.append(qp.metrics.calculate_kld(P, PDF, limits=lims, vb=False))
print(PDF.truth+' approximation: KLD over 1, 2, 3, sigma ranges = '+str(D))
Explanation: Next, let's compare the different parametrizations to the truth using the Kullback-Leibler Divergence (KLD). The KLD is a measure of how close two probability distributions are to one another -- a smaller value indicates closer agreement. It is measured in units of bits of information, the information lost in going from the second distribution to the first distribution. The KLD calculator here takes in a shared grid upon which to evaluate the true distribution and the interpolated approximation of that distribution and returns the KLD of the approximation relative to the truth, which is not in general the same as the KLD of the truth relative to the approximation. Below, we'll calculate the KLD of the approximation relative to the truth over different ranges, showing that it increases as it includes areas where the true distribution and interpolated distributions diverge.
End of explanation
for PDF in [Q, H, S]:
D = []
for lims in all_lims:
D.append(qp.metrics.calculate_rmse(P, PDF, limits=lims, vb=False))
print(PDF.truth+' approximation: RMSE over 1, 2, 3, sigma ranges = '+str(D))
Explanation: Holy smokes, does the quantile approximation blow everything else out of the water, thanks to using spline interpolation.
The progression of KLD values should follow that of the root mean square error (RMSE), another measure of how close two functions are to one another. The RMSE also increases as it includes areas where the true distribution and interpolated distribution diverge. Unlike the KLD, the RMSE is symmetric, meaning the distance measured is not that of one distribution from the other but of the symmetric distance between them.
End of explanation
pdfs = [P, Q, H, S]
which_moments = range(3)
all_moments = []
for pdf in pdfs:
moments = []
for n in which_moments:
moments.append(qp.metrics.calculate_moment(pdf, n))
all_moments.append(moments)
print('moments: '+str(which_moments))
for i in range(len(pdfs)):
print(pdfs[i].first+': '+str(all_moments[i]))
Explanation: Both the KLD and RMSE metrics suggest that the quantile approximation is better in the high density region, but samples work better when the tails are included. We might expect the answer to the question of which approximation to use to depend on the application, and whether the tails need to be captured or not.
Finally, we can compare the meoments of each approximation and compare those to the moments ofthe true distribution.
End of explanation
component_1 = {}
component_1['function'] = sps.norm(loc=-2., scale=1.)
component_1['coefficient'] = 4.
component_2 = {}
component_2['function'] = sps.norm(loc=2., scale=1.)
component_2['coefficient'] = 1.
dist_info = [component_1, component_2]
composite_lims = (-5., 5.)
C_dist = qp.composite(dist_info)
C = qp.PDF(funcform=C_dist, limits=composite_lims)
C.plot()
Explanation: The first three moments have an interesting interpretation. The zeroth moment should always be 1 when calculated over the entire range of redshifts, but the quantile approximation is off by about $7\%$. We know the first moment in this case is 0, and indeed the evaluation of the first moment for the true distribution deviates from 0 by less than Python's floating point precision. The samples parametrization has a biased estimate for the first moment to the tune of $2\%$. The second moment for the true distribution is 1, and the quantile parametrization (and, to a lesser extent, the histogram parametrization) fails to provide a good estimate of it.
Advanced Usage
Composite PDFs
In addition to individual scipy.stats.rv_continuous objects, qp can be initialized with true distributions that are linear combinations of scipy.stats.rv_continuous objects. To do this, one must create the component distributions and specify their relative weights. This can be done by running qp.PDF.mix_mod_fit() on an existing qp.PDF object once samples have been calculated, or it can be done by hand.
End of explanation
Cq = qp.PDF(funcform=C_dist, limits = composite_lims)
Cq.quantize(N=20, limits=composite_lims, vb=False)
Cq.plot()
Explanation: We can calculate the quantiles for such a distribution.
End of explanation
Ch = qp.PDF(funcform=C_dist, limits = composite_lims)
Ch.histogramize(N=20, binrange=composite_lims, vb=True)
Ch.plot()
Explanation: Similarly, the histogram parametrization is also supported for composite PDFs.
End of explanation
Cs = qp.PDF(funcform=C_dist, limits = composite_lims)
Cs.sample(N=20, using='mix_mod', vb=False)
Cs.plot()
qD = qp.metrics.calculate_kld(C, Cq, limits=composite_lims, dx=0.001, vb=True)
hD = qp.metrics.calculate_kld(C, Ch, limits=composite_lims, dx=0.001, vb=True)
sD = qp.metrics.calculate_kld(C, Cs, limits=composite_lims, dx=0.001, vb=True)
print(qD, hD, sD)
Explanation: Finally, samples from this distribution may also be taken, and a PDF may be reconstructed from them. Note: this uses scipy.stats.gaussian_kde, which determines its bandwidth/kernel size using Scott's Rule, Silverman's Rule, a fixed bandwidth, or a callable function that returns a bandwidth.
End of explanation
N = 10
in_dists = []
for i in range(N):
dist = sps.norm(loc=sps.uniform.rvs(), scale=sps.uniform.rvs())
in_dists.append(dist)
E = qp.Ensemble(N, funcform=in_dists, vb=True)
Explanation: PDF Ensembles
qp also includes infrastructure for handling ensembles of PDF objects with shared metaparameters, such as histogram bin ends, but unique per-object parameters, such as histogram bin heights. A qp.Ensemble object takes as input the number of items in the ensemble and, optionally, a list, with contents corresponding to one of the built-in formats.
Let's demonstrate on PDFs with a functional form, which means the list of information for each member of the ensemble is scipy.stats.rv_continuous or qp.composite objects.
End of explanation
eval_range = np.linspace(-5., 5., 100)
E.evaluate(eval_range, using='mix_mod', vb=False)
E.quantize(N=10)
E.integrate(demo_limits, using='mix_mod')
Explanation: As with individual qp.PDF objects, we can evaluate the PDFs at given points, convert to other formats, and integrate.
End of explanation
def stack(ensemble, loc, using, vb=True):
Produces an average of the PDFs in the ensemble
Parameters
----------
ensemble: qp.Ensemble
the ensemble of PDFs to stack
loc: ndarray, float or float
location(s) at which to evaluate the PDFs
using: string
which parametrization to use for the approximation
vb: boolean
report on progress
Returns
-------
stacked: tuple, ndarray, float
pair of arrays for locations where approximations were evaluated
and the values of the stacked PDFs at those points
evaluated = ensemble.evaluate(loc, using=using, norm=True, vb=vb)
stack = np.mean(evaluated[1], axis=0)
stacked = (evaluated[0], stack)
return stacked
stacked = stack(E, eval_range, using='quantiles')
plt.plot(stacked[0], stacked[-1])
Explanation: Previous versions of qp included a built-in function for "stacking" the member PDFs of a qp.Ensemble object. This functionality has been removed to discourage use of this procedure in science applications. However, we provide a simple function one may use should this functionality be desired.
End of explanation |
4,977 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Stats practice
Testing for normality
Step1: Make probability plots
Step2: Interesting. Normal distribution follows the quantiles well and has the highest $R^2$ value, but both the uniform and Weibull distributions aren't very different. Need to temper what I think of as a convincing $R^2$ value.
Run Anderson-Darling test
Step3: Note that critical and significance values are always the same in the Anderson-Darling test regardless of the input. The A^2 value must be compared to them; if the test statistic is greater than the critical value at a given significance, then the null hypothesis is rejected with that level of confidence.
Step4: Practice problems
Gender ratio
In a certain country, girls are highly prized. Every couple having children wants exactly one girl. When they begin having children, if they have a girl, they stop. If they have a boy, they keep having children until they get a girl.
What is the expected ratio of boys to girls in the country? | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
from random import normalvariate, uniform, weibullvariate
# Make several sets of data; one randomly sampled
# from a normal distribution and others that aren't.
n = 100
d_norm = [normalvariate(0,1) for x in range(n)]
d_unif = [uniform(0,1) for x in range(n)]
d_weib = [weibullvariate(1,1.5) for x in range(n)]
fig,ax = plt.subplots(1,1,figsize=(5,5))
bins = 20
xmin,xmax = -3,3
ax.hist(d_norm,histtype='step',bins=bins,range=(xmin,xmax),lw=2,
color='red',label='normal')
ax.hist(d_unif,histtype='step',bins=bins,range=(xmin,xmax),lw=2,
color='green',label='uniform')
ax.hist(d_weib,histtype='step',bins=bins,range=(xmin,xmax),lw=2,
color='blue',label='Weibull')
ax.legend(loc='upper left',fontsize=10);
Explanation: Stats practice
Testing for normality
End of explanation
from scipy.stats import norm,probplot
dists = (d_norm,d_unif,d_weib)
labels = ('Normal','Uniform','Weibull')
fig,axarr = plt.subplots(1,3,figsize=(14,4))
for d,ax,l in zip(dists,axarr.ravel(),labels):
probplot(d, dist=norm, plot=ax)
ax.set_title(l)
Explanation: Make probability plots
End of explanation
from scipy.stats import anderson
Explanation: Interesting. Normal distribution follows the quantiles well and has the highest $R^2$ value, but both the uniform and Weibull distributions aren't very different. Need to temper what I think of as a convincing $R^2$ value.
Run Anderson-Darling test
End of explanation
for d,l in zip(dists,labels):
a2, crit, sig = anderson(d,dist='norm')
if a2 > crit[2]:
print "Anderson-Darling value for {:7} is A^2={:.3f}; reject H0 at 95%.".format(l,a2)
else:
print "Anderson-Darling value for {:7} is A^2={:.3f}; cannot reject H0 at 95%.".format(l,a2)
Explanation: Note that critical and significance values are always the same in the Anderson-Darling test regardless of the input. The A^2 value must be compared to them; if the test statistic is greater than the critical value at a given significance, then the null hypothesis is rejected with that level of confidence.
End of explanation
from numpy.random import binomial
# Monte Carlo solution
N = 100000
p_girl = 0.5
p_boy = 1 - p_girl
n_girl = 0
n_boy = 0
for i in range(N):
has_girl = False
while not has_girl:
child = binomial(1,p_girl)
if child:
n_girl += 1
has_girl = True
else:
n_boy += 1
n_child = n_girl + n_boy
print "Gender ratio is {:.1f}%/{:.1f}% boy/girl.".format(n_boy * 100./n_child, n_girl * 100./n_child)
Explanation: Practice problems
Gender ratio
In a certain country, girls are highly prized. Every couple having children wants exactly one girl. When they begin having children, if they have a girl, they stop. If they have a boy, they keep having children until they get a girl.
What is the expected ratio of boys to girls in the country?
End of explanation |
4,978 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modelado de un sistema con ipython
Para el correcto funcionamiento del extrusor de filamento, es necesario regular correctamente la temperatura a la que está el cañon. Por ello se usará un sistema consistente en una resitencia que disipe calor, y un sensor de temperatura PT100 para poder cerrar el lazo y controlar el sistema. A continuación, desarrollaremos el proceso utilizado.
Step1: Respuesta del sistema
El primer paso será someter al sistema a un escalon en lazo abierto para ver la respuesta temporal del mismo. A medida que va calentando, registraremos los datos para posteriormente representarlos.
Step2: Cálculo del polinomio
Hacemos una regresión con un polinomio de orden 2 para calcular cual es la mejor ecuación que se ajusta a la tendencia de nuestros datos.
Step3: El polinomio caracteristico de nuestro sistema es
Step4: En este caso hemos establecido un setpoint de 80ºC Como vemos, una vez introducido el controlador, la temperatura tiende a estabilizarse, sin embargo tiene mucha sobreoscilación. Por ello aumentaremos los valores de $K_i$ y $K_d$, siendo los valores de esta segunda iteracción los siguientes
Step5: En esta segunda iteracción hemos logrado bajar la sobreoscilación inicial, pero tenemos mayor error en regimen permanente. Por ello volvemos a aumentar los valores de $K_i$ y $K_d$ siendo los valores de esta tercera iteracción los siguientes
Step6: En este caso, se puso un setpoint de 160ºC. Como vemos, la sobreoscilación inicial ha disminuido en comparación con la anterior iteracción y el error en regimen permanente es menor. Para intentar minimar el error, aumentaremos únicamente el valor de $K_i$. Siendo los valores de esta cuarta iteracción del regulador los siguientes | Python Code:
#Importamos las librerías utilizadas
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pylab as plt
#Mostramos las versiones usadas de cada librerías
print ("Numpy v{}".format(np.__version__))
print ("Pandas v{}".format(pd.__version__))
print ("Seaborn v{}".format(sns.__version__))
#Mostramos todos los gráficos en el notebook
%pylab inline
#Abrimos el fichero csv con los datos de la muestra
datos = pd.read_csv('datos.csv')
#Almacenamos en una lista las columnas del fichero con las que vamos a trabajar
#columns = ['temperatura', 'entrada']
columns = ['temperatura', 'entrada']
Explanation: Modelado de un sistema con ipython
Para el correcto funcionamiento del extrusor de filamento, es necesario regular correctamente la temperatura a la que está el cañon. Por ello se usará un sistema consistente en una resitencia que disipe calor, y un sensor de temperatura PT100 para poder cerrar el lazo y controlar el sistema. A continuación, desarrollaremos el proceso utilizado.
End of explanation
#Mostramos en varias gráficas la información obtenida tras el ensay
#ax = datos['temperatura'].plot(figsize=(10,5), ylim=(0,60),label="Temperatura")
#ax.set_xlabel('Tiempo')
#ax.set_ylabel('Temperatura [ºC]')
#ax.set_ylim(20,60)
#ax = datos['entrada'].plot(secondary_y=True, label="Entrada")#.set_ylim(-1,55)
fig, ax1 = plt.subplots()
ax1.plot(datos['time'], datos['temperatura'], 'b-')
ax1.set_xlabel('Tiempo (s)')
ax1.set_ylabel('Temperatura', color='b')
ax2 = ax1.twinx()
ax2.plot(datos['time'], datos['entrada'], 'r-')
ax2.set_ylabel('Escalón', color='r')
ax2.set_ylim(-1,55)
plt.figure(figsize=(10,5))
plt.show()
Explanation: Respuesta del sistema
El primer paso será someter al sistema a un escalon en lazo abierto para ver la respuesta temporal del mismo. A medida que va calentando, registraremos los datos para posteriormente representarlos.
End of explanation
# Buscamos el polinomio de orden 4 que determina la distribución de los datos
reg = np.polyfit(datos['time'],datos['temperatura'],2)
# Calculamos los valores de y con la regresión
ry = np.polyval(reg,datos['time'])
print (reg)
plt.plot(datos['time'],datos['temperatura'],'b^', label=('Datos experimentales'))
plt.plot(datos['time'],ry,'ro', label=('regresión polinómica'))
plt.legend(loc=0)
plt.grid(True)
plt.xlabel('Tiempo')
plt.ylabel('Temperatura [ºC]')
Explanation: Cálculo del polinomio
Hacemos una regresión con un polinomio de orden 2 para calcular cual es la mejor ecuación que se ajusta a la tendencia de nuestros datos.
End of explanation
#Almacenamos en una lista las columnas del fichero con las que vamos a trabajar
datos_it1 = pd.read_csv('Regulador1.csv')
columns = ['temperatura']
#Mostramos en varias gráficas la información obtenida tras el ensayo
ax = datos_it1[columns].plot(figsize=(10,5), ylim=(20,100),title='Modelo matemático del sistema con regulador',)
ax.set_xlabel('Tiempo')
ax.set_ylabel('Temperatura [ºC]')
ax.hlines([80],0,3500,colors='r')
#Calculamos MP
Tmax = datos_it1.describe().loc['max','temperatura'] #Valor de la Temperatura maxima en el ensayo
Sp=80.0 #Valor del setpoint
Mp= ((Tmax-Sp)/(Sp))*100
print("El valor de sobreoscilación es de: {:.2f}%".format(Mp))
#Calculamos el Error en régimen permanente
Errp = datos_it1.describe().loc['75%','temperatura'] #Valor de la temperatura en régimen permanente
Eregimen = abs(Sp-Errp)
print("El valor del error en régimen permanente es de: {:.2f}".format(Eregimen))
Explanation: El polinomio caracteristico de nuestro sistema es:
$$P_x= 25.9459 -1.5733·10^{-4}·X - 8.18174·10^{-9}·X^2$$
Transformada de laplace
Si calculamos la transformada de laplace del sistema, obtenemos el siguiente resultado:
$$G_s = \frac{25.95·S^2 - 0.00015733·S + 1.63635·10^{-8}}{S^3}$$
Cálculo del PID mediante OCTAVE
Aplicando el método de sintonizacion de Ziegler-Nichols calcularemos el PID para poder regular correctamente el sistema.Este método, nos da d emanera rápida unos valores de $K_p$, $K_i$ y $K_d$ orientativos, para que podamos ajustar correctamente el controlador. Esté método consiste en el cálculo de tres parámetros característicos, con los cuales calcularemos el regulador:
$$G_s=K_p(1+\frac{1}{T_i·S}+T_d·S)=K_p+\frac{K_i}{S}+K_d$$
En esta primera iteración, los datos obtenidos son los siguientes:
$K_p = 6082.6$ $K_i=93.868 K_d=38.9262$
Con lo que nuestro regulador tiene la siguiente ecuación característica:
$$G_s = \frac{38.9262·S^2 + 6082.6·S + 93.868}{S}$$
Iteracción 1 de regulador
End of explanation
#Almacenamos en una lista las columnas del fichero con las que vamos a trabajar
datos_it2 = pd.read_csv('Regulador2.csv')
columns = ['temperatura']
#Mostramos en varias gráficas la información obtenida tras el ensayo
ax2 = datos_it2[columns].plot(figsize=(10,5), ylim=(20,100),title='Modelo matemático del sistema con regulador',)
ax2.set_xlabel('Tiempo')
ax2.set_ylabel('Temperatura [ºC]')
ax2.hlines([80],0,3500,colors='r')
#Calculamos MP
Tmax = datos_it2.describe().loc['max','temperatura'] #Valor de la Temperatura maxima en el ensayo
Sp=80.0 #Valor del setpoint
Mp= ((Tmax-Sp)/(Sp))*100
print("El valor de sobreoscilación es de: {:.2f}%".format(Mp))
#Calculamos el Error en régimen permanente
Errp = datos_it2.describe().loc['75%','temperatura'] #Valor de la temperatura en régimen permanente
Eregimen = abs(Sp-Errp)
print("El valor del error en régimen permanente es de: {:.2f}".format(Eregimen))
Explanation: En este caso hemos establecido un setpoint de 80ºC Como vemos, una vez introducido el controlador, la temperatura tiende a estabilizarse, sin embargo tiene mucha sobreoscilación. Por ello aumentaremos los valores de $K_i$ y $K_d$, siendo los valores de esta segunda iteracción los siguientes:
$K_p = 6082.6$ $K_i=103.25 K_d=51.425$
Iteracción 2 del regulador
End of explanation
#Almacenamos en una lista las columnas del fichero con las que vamos a trabajar
datos_it3 = pd.read_csv('Regulador3.csv')
columns = ['temperatura']
#Mostramos en varias gráficas la información obtenida tras el ensayo
ax3 = datos_it3[columns].plot(figsize=(10,5), ylim=(20,180),title='Modelo matemático del sistema con regulador',)
ax3.set_xlabel('Tiempo')
ax3.set_ylabel('Temperatura [ºC]')
ax3.hlines([160],0,6000,colors='r')
#Calculamos MP
Tmax = datos_it3.describe().loc['max','temperatura'] #Valor de la Temperatura maxima en el ensayo
Sp=160.0 #Valor del setpoint
Mp= ((Tmax-Sp)/(Sp))*100
print("El valor de sobreoscilación es de: {:.2f}%".format(Mp))
#Calculamos el Error en régimen permanente
Errp = datos_it3.describe().loc['75%','temperatura'] #Valor de la temperatura en régimen permanente
Eregimen = abs(Sp-Errp)
print("El valor del error en régimen permanente es de: {:.2f}".format(Eregimen))
Explanation: En esta segunda iteracción hemos logrado bajar la sobreoscilación inicial, pero tenemos mayor error en regimen permanente. Por ello volvemos a aumentar los valores de $K_i$ y $K_d$ siendo los valores de esta tercera iteracción los siguientes:
$K_p = 6082.6$ $K_i=121.64 K_d=60$
Iteracción 3 del regulador
End of explanation
#Almacenamos en una lista las columnas del fichero con las que vamos a trabajar
datos_it4 = pd.read_csv('Regulador4.csv')
columns = ['temperatura']
#Mostramos en varias gráficas la información obtenida tras el ensayo
ax4 = datos_it4[columns].plot(figsize=(10,5), ylim=(20,180))
ax4.set_xlabel('Tiempo')
ax4.set_ylabel('Temperatura [ºC]')
ax4.hlines([160],0,7000,colors='r')
#Calculamos MP
Tmax = datos_it4.describe().loc['max','temperatura'] #Valor de la Temperatura maxima en el ensayo
print (" {:.2f}".format(Tmax))
Sp=160.0 #Valor del setpoint
Mp= ((Tmax-Sp)/(Sp))*100
print("El valor de sobreoscilación es de: {:.2f}%".format(Mp))
#Calculamos el Error en régimen permanente
Errp = datos_it4.describe().loc['75%','temperatura'] #Valor de la temperatura en régimen permanente
Eregimen = abs(Sp-Errp)
print("El valor del error en régimen permanente es de: {:.2f}".format(Eregimen))
Explanation: En este caso, se puso un setpoint de 160ºC. Como vemos, la sobreoscilación inicial ha disminuido en comparación con la anterior iteracción y el error en regimen permanente es menor. Para intentar minimar el error, aumentaremos únicamente el valor de $K_i$. Siendo los valores de esta cuarta iteracción del regulador los siguientes:
$K_p = 6082.6$ $K_i=121.64 K_d=150$
Iteracción 4
End of explanation |
4,979 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
EEG forward operator with a template MRI
This tutorial explains how to compute the forward operator from EEG data
using the standard template MRI subject fsaverage.
.. caution
Step1: Load the data
We use here EEG data from the BCI dataset.
<div class="alert alert-info"><h4>Note</h4><p>See `plot_montage` to view all the standard EEG montages
available in MNE-Python.</p></div>
Step2: Setup source space and compute forward | Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Joan Massich <mailsik@gmail.com>
#
# License: BSD Style.
import os.path as op
import mne
from mne.datasets import eegbci
from mne.datasets import fetch_fsaverage
# Download fsaverage files
fs_dir = fetch_fsaverage(verbose=True)
subjects_dir = op.dirname(fs_dir)
# The files live in:
subject = 'fsaverage'
trans = 'fsaverage' # MNE has a built-in fsaverage transformation
src = op.join(fs_dir, 'bem', 'fsaverage-ico-5-src.fif')
bem = op.join(fs_dir, 'bem', 'fsaverage-5120-5120-5120-bem-sol.fif')
Explanation: EEG forward operator with a template MRI
This tutorial explains how to compute the forward operator from EEG data
using the standard template MRI subject fsaverage.
.. caution:: Source reconstruction without an individual T1 MRI from the
subject will be less accurate. Do not over interpret
activity locations which can be off by multiple centimeters.
:depth: 2
End of explanation
raw_fname, = eegbci.load_data(subject=1, runs=[6])
raw = mne.io.read_raw_edf(raw_fname, preload=True)
# Clean channel names to be able to use a standard 1005 montage
new_names = dict(
(ch_name,
ch_name.rstrip('.').upper().replace('Z', 'z').replace('FP', 'Fp'))
for ch_name in raw.ch_names)
raw.rename_channels(new_names)
# Read and set the EEG electrode locations
montage = mne.channels.make_standard_montage('standard_1005')
raw.set_montage(montage)
raw.set_eeg_reference(projection=True) # needed for inverse modeling
# Check that the locations of EEG electrodes is correct with respect to MRI
mne.viz.plot_alignment(
raw.info, src=src, eeg=['original', 'projected'], trans=trans,
show_axes=True, mri_fiducials=True, dig='fiducials')
Explanation: Load the data
We use here EEG data from the BCI dataset.
<div class="alert alert-info"><h4>Note</h4><p>See `plot_montage` to view all the standard EEG montages
available in MNE-Python.</p></div>
End of explanation
fwd = mne.make_forward_solution(raw.info, trans=trans, src=src,
bem=bem, eeg=True, mindist=5.0, n_jobs=1)
print(fwd)
# for illustration purposes use fwd to compute the sensitivity map
eeg_map = mne.sensitivity_map(fwd, ch_type='eeg', mode='fixed')
eeg_map.plot(time_label='EEG sensitivity', subjects_dir=subjects_dir,
clim=dict(lims=[5, 50, 100]))
Explanation: Setup source space and compute forward
End of explanation |
4,980 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Topic Modelling
Author
Step1: 1. Corpus acquisition.
In this notebook we will explore some tools for text processing and analysis and two topic modeling algorithms available from Python toolboxes.
To do so, we will explore and analyze collections of Wikipedia articles from a given category, using wikitools, that makes easy the capture of content from wikimedia sites.
(As a side note, there are many other available text collections to test topic modelling algorithm. In particular, the NLTK library has many examples, that can explore them using the nltk.download() tool.
import nltk
nltk.download()
for instance, you can take the gutemberg dataset
Mycorpus = nltk.corpus.gutenberg
text_name = Mycorpus.fileids()[0]
raw = Mycorpus.raw(text_name)
Words = Mycorpus.words(text_name)
Also, tools like Gensim or Sci-kit learn include text databases to work with).
In order to use Wikipedia data, we will select a single category of articles
Step2: You can try with any other categories. Take into account that the behavior of topic modelling algorithms may depend on the amount of documents available for the analysis. Select a category with at least 100 articles. You can browse the wikipedia category tree here, https
Step3: Now, we have stored the whole text collection in two lists
Step4: 2. Corpus Processing
Topic modelling algorithms process vectorized data. In order to apply them, we need to transform the raw text input data into a vector representation. To do so, we will remove irrelevant information from the text data and preserve as much relevant information as possible to capture the semantic content in the document collection.
Thus, we will proceed with the following steps
Step5: 2.2. Stemming vs Lemmatization
At this point, we can choose between applying a simple stemming or ussing lemmatization. We will try both to test their differences.
Task
Step6: Alternatively, we can apply lemmatization. For english texts, we can use the lemmatizer from NLTK, which is based on WordNet. If you have not used wordnet before, you will likely need to download it from nltk
Step7: Task
Step8: One of the advantages of the lemmatizer method is that the result of lemmmatization is still a true word, which is more advisable for the presentation of text processing results and lemmatization.
However, without using contextual information, lemmatize() does not remove grammatical differences. This is the reason why "is" or "are" are preserved and not replaced by infinitive "be".
As an alternative, we can apply .lemmatize(word, pos), where 'pos' is a string code specifying the part-of-speech (pos), i.e. the grammatical role of the words in its sentence. For instance, you can check the difference between wnl.lemmatize('is') and wnl.lemmatize('is, pos='v').
2.3. Vectorization
Up to this point, we have transformed the raw text collection of articles in a list of articles, where each article is a collection of the word roots that are most relevant for semantic analysis. Now, we need to convert these data (a list of token lists) into a numerical representation (a list of vectors, or a matrix). To do so, we will start using the tools provided by the gensim library.
As a first step, we create a dictionary containing all tokens in our text corpus, and assigning an integer identifier to each one of them.
Step9: In the second step, let us create a numerical version of our corpus using the doc2bow method. In general, D.doc2bow(token_list) transform any list of tokens into a list of tuples (token_id, n), one per each token in token_list, where token_id is the token identifier (according to dictionary D) and n is the number of occurrences of such token in token_list.
Task
Step10: At this point, it is good to make sure to understand what has happened. In corpus_clean we had a list of token lists. With it, we have constructed a Dictionary, D, which assign an integer identifier to each token in the corpus.
After that, we have transformed each article (in corpus_clean) in a list tuples (id, n).
Step11: Note that we can interpret each element of corpus_bow as a sparse_vector. For example, a list of tuples
[(0, 1), (3, 3), (5,2)]
for a dictionary of 10 elements can be represented as a vector, where any tuple (id, n) states that position id must take value n. The rest of positions must be zero.
[1, 0, 0, 3, 0, 2, 0, 0, 0, 0]
These sparse vectors will be the inputs to the topic modeling algorithms.
Note that, at this point, we have built a Dictionary containing
Step12: and a bow representation of a corpus with
Step13: Before starting with the semantic analyisis, it is interesting to observe the token distribution for the given corpus.
Step14: ids_sorted is a list of all token ids, sorted by decreasing number of occurrences in the whole corpus. For instance, the most frequent term is
Step15: which appears
Step16: In the following we plot the most frequent terms in the corpus.
Step17: Exercise
Step18: Exercise
Step19: 3. Semantic Analysis
The dictionary D and the Bag of Words in corpus_bow are the key inputs to the topic model algorithms. In this section we will explore two algorithms
Step20: From now on, tfidf can be used to convert any vector from the old representation (bow integer counts) to the new one (TfIdf real-valued weights)
Step21: Or to apply a transformation to a whole corpus
Step22: 3.1. Latent Semantic Indexing (LSI)
Now we are ready to apply a topic modeling algorithm. Latent Semantic Indexing is provided by LsiModel.
Task
Step23: From LSI, we can check both the topic-tokens matrix and the document-topics matrix.
Now we can check the topics generated by LSI. An intuitive visualization is provided by the show_topics method.
Step24: However, a more useful representation of topics is as a list of tuples (token, value). This is provided by the show_topic method.
Task
Step25: LSI approximates any document as a linear combination of the topic vectors. We can compute the topic weights for any input corpus entered as input to the lsi model.
Step26: Task
Step27: 3.2. Latent Dirichlet Allocation (LDA)
There are several implementations of the LDA topic model in python
Step28: 3.2.2. LDA using python lda library
An alternative to gensim for LDA is the lda library from python. It requires a doc-frequency matrix as input
Step29: Document-topic distribution
Step30: It allows incremental updates
3.2.2. LDA using Sci-kit Learn
The input matrix to the sklearn implementation of LDA contains the token-counts for all documents in the corpus.
sklearn contains a powerfull CountVectorizer method that can be used to construct the input matrix from the corpus_bow.
First, we will define an auxiliary function to print the top tokens in the model, that has been taken from the sklearn documentation.
Step31: Now, we need a dataset to feed the Count_Vectorizer object, by joining all tokens in corpus_clean in a single string, using a space ' ' as separator.
Step32: Now we are ready to compute the token counts.
Step33: Now we can apply the LDA algorithm.
Task
Step34: Task
Step35: Exercise | Python Code:
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# import pylab
# Required imports
from wikitools import wiki
from wikitools import category
import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
import gensim
import lda
import lda.datasets
from time import time
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from test_helper import Test
Explanation: Topic Modelling
Author: Jesús Cid Sueiro
Date: 2017/04/21
In this notebook we will explore some tools for text analysis in python. To do so, first we will import the requested python libraries.
End of explanation
site = wiki.Wiki("https://en.wikipedia.org/w/api.php")
# Select a category with a reasonable number of articles (>100)
# cat = "Economics"
cat = "Pseudoscience"
print cat
Explanation: 1. Corpus acquisition.
In this notebook we will explore some tools for text processing and analysis and two topic modeling algorithms available from Python toolboxes.
To do so, we will explore and analyze collections of Wikipedia articles from a given category, using wikitools, that makes easy the capture of content from wikimedia sites.
(As a side note, there are many other available text collections to test topic modelling algorithm. In particular, the NLTK library has many examples, that can explore them using the nltk.download() tool.
import nltk
nltk.download()
for instance, you can take the gutemberg dataset
Mycorpus = nltk.corpus.gutenberg
text_name = Mycorpus.fileids()[0]
raw = Mycorpus.raw(text_name)
Words = Mycorpus.words(text_name)
Also, tools like Gensim or Sci-kit learn include text databases to work with).
In order to use Wikipedia data, we will select a single category of articles:
End of explanation
# Loading category data. This may take a while
print "Loading category data. This may take a while..."
cat_data = category.Category(site, cat)
corpus_titles = []
corpus_text = []
for n, page in enumerate(cat_data.getAllMembersGen()):
print "\r Loading article {0}".format(n + 1),
corpus_titles.append(page.title)
corpus_text.append(page.getWikiText())
n_art = len(corpus_titles)
print "\nLoaded " + str(n_art) + " articles from category " + cat
Explanation: You can try with any other categories. Take into account that the behavior of topic modelling algorithms may depend on the amount of documents available for the analysis. Select a category with at least 100 articles. You can browse the wikipedia category tree here, https://en.wikipedia.org/wiki/Category:Contents, for instance.
We start downloading the text collection.
End of explanation
# n = 5
# print corpus_titles[n]
# print corpus_text[n]
Explanation: Now, we have stored the whole text collection in two lists:
corpus_titles, which contains the titles of the selected articles
corpus_text, with the text content of the selected wikipedia articles
You can browse the content of the wikipedia articles to get some intuition about the kind of documents that will be processed.
End of explanation
# You can comment this if the package is already available.
nltk.download("punkt")
nltk.download("stopwords")
stopwords_en = stopwords.words('english')
corpus_clean = []
for n, art in enumerate(corpus_text):
print "\rProcessing article {0} out of {1}".format(n + 1, n_art),
# This is to make sure that all characters have the appropriate encoding.
art = art.decode('utf-8')
# Tokenize each text entry.
# scode: tokens = <FILL IN>
token_list = word_tokenize(art)
# Convert all tokens in token_list to lowercase, remove non alfanumeric tokens and stem.
# Store the result in a new token list, clean_tokens.
# scode: filtered_tokens = <FILL IN>
filtered_tokens = [token.lower() for token in token_list if token.isalnum()]
# Remove all tokens in the stopwords list and append the result to corpus_clean
# scode: clean_tokens = <FILL IN>
clean_tokens = [token for token in filtered_tokens if token not in stopwords_en]
# scode: <FILL IN>
corpus_clean.append(clean_tokens)
print "\nLet's check the first tokens from document 0 after processing:"
print corpus_clean[0][0:30]
Test.assertTrue(len(corpus_clean) == n_art, 'List corpus_clean does not contain the expected number of articles')
Test.assertTrue(len([c for c in corpus_clean[0] if c in stopwords_en])==0, 'Stopwords have not been removed')
Explanation: 2. Corpus Processing
Topic modelling algorithms process vectorized data. In order to apply them, we need to transform the raw text input data into a vector representation. To do so, we will remove irrelevant information from the text data and preserve as much relevant information as possible to capture the semantic content in the document collection.
Thus, we will proceed with the following steps:
Tokenization, filtering and cleaning
Homogeneization (stemming or lemmatization)
Vectorization
2.1. Tokenization, filtering and cleaning.
The first steps consists on the following:
Tokenization: convert text string into lists of tokens.
Filtering:
Removing capitalization: capital alphabetic characters will be transformed to their corresponding lowercase characters.
Removing non alphanumeric tokens (e.g. punktuation signs)
Cleaning: Removing stopwords, i.e., those words that are very common in language and do not carry out useful semantic content (articles, pronouns, etc).
To do so, we will need some packages from the Natural Language Toolkit.
End of explanation
# Select stemmer.
stemmer = nltk.stem.SnowballStemmer('english')
corpus_stemmed = []
for n, token_list in enumerate(corpus_clean):
print "\rStemming article {0} out of {1}".format(n + 1, n_art),
# Convert all tokens in token_list to lowercase, remove non alfanumeric tokens and stem.
# Store the result in a new token list, clean_tokens.
# scode: stemmed_tokens = <FILL IN>
stemmed_tokens = [stemmer.stem(token) for token in token_list]
# Add art to the stemmed corpus
# scode: <FILL IN>
corpus_stemmed.append(stemmed_tokens)
print "\nLet's check the first tokens from document 0 after stemming:"
print corpus_stemmed[0][0:30]
Test.assertTrue((len([c for c in corpus_stemmed[0] if c!=stemmer.stem(c)]) < 0.1*len(corpus_stemmed[0])),
'It seems that stemming has not been applied properly')
Explanation: 2.2. Stemming vs Lemmatization
At this point, we can choose between applying a simple stemming or ussing lemmatization. We will try both to test their differences.
Task: Apply the .stem() method, from the stemmer object created in the first line, to corpus_filtered.
End of explanation
# You can comment this if the package is already available.
nltk.download("wordnet")
Explanation: Alternatively, we can apply lemmatization. For english texts, we can use the lemmatizer from NLTK, which is based on WordNet. If you have not used wordnet before, you will likely need to download it from nltk
End of explanation
wnl = WordNetLemmatizer()
# Select stemmer.
corpus_lemmat = []
for n, token_list in enumerate(corpus_clean):
print "\rLemmatizing article {0} out of {1}".format(n + 1, n_art),
# scode: lemmat_tokens = <FILL IN>
lemmat_tokens = [wnl.lemmatize(token) for token in token_list]
# Add art to the stemmed corpus
# scode: <FILL IN>
corpus_lemmat.append(lemmat_tokens)
print "\nLet's check the first tokens from document 0 after stemming:"
print corpus_lemmat[0][0:30]
Explanation: Task: Apply the .lemmatize() method, from the WordNetLemmatizer object created in the first line, to corpus_filtered.
End of explanation
# Create dictionary of tokens
D = gensim.corpora.Dictionary(corpus_clean)
n_tokens = len(D)
print "The dictionary contains {0} tokens".format(n_tokens)
print "First tokens in the dictionary: "
for n in range(10):
print str(n) + ": " + D[n]
Explanation: One of the advantages of the lemmatizer method is that the result of lemmmatization is still a true word, which is more advisable for the presentation of text processing results and lemmatization.
However, without using contextual information, lemmatize() does not remove grammatical differences. This is the reason why "is" or "are" are preserved and not replaced by infinitive "be".
As an alternative, we can apply .lemmatize(word, pos), where 'pos' is a string code specifying the part-of-speech (pos), i.e. the grammatical role of the words in its sentence. For instance, you can check the difference between wnl.lemmatize('is') and wnl.lemmatize('is, pos='v').
2.3. Vectorization
Up to this point, we have transformed the raw text collection of articles in a list of articles, where each article is a collection of the word roots that are most relevant for semantic analysis. Now, we need to convert these data (a list of token lists) into a numerical representation (a list of vectors, or a matrix). To do so, we will start using the tools provided by the gensim library.
As a first step, we create a dictionary containing all tokens in our text corpus, and assigning an integer identifier to each one of them.
End of explanation
# Transform token lists into sparse vectors on the D-space
# scode: corpus_bow = <FILL IN>
corpus_bow = [D.doc2bow(doc) for doc in corpus_clean]
Test.assertTrue(len(corpus_bow)==n_art, 'corpus_bow has not the appropriate size')
Explanation: In the second step, let us create a numerical version of our corpus using the doc2bow method. In general, D.doc2bow(token_list) transform any list of tokens into a list of tuples (token_id, n), one per each token in token_list, where token_id is the token identifier (according to dictionary D) and n is the number of occurrences of such token in token_list.
Task: Apply the doc2bow method from gensim dictionary D, to all tokens in every article in corpus_clean. The result must be a new list named corpus_bow where each element is a list of tuples (token_id, number_of_occurrences).
End of explanation
print "Original article (after cleaning): "
print corpus_clean[0][0:30]
print "Sparse vector representation (first 30 components):"
print corpus_bow[0][0:30]
print "The first component, {0} from document 0, states that token 0 ({1}) appears {2} times".format(
corpus_bow[0][0], D[0], corpus_bow[0][0][1])
Explanation: At this point, it is good to make sure to understand what has happened. In corpus_clean we had a list of token lists. With it, we have constructed a Dictionary, D, which assign an integer identifier to each token in the corpus.
After that, we have transformed each article (in corpus_clean) in a list tuples (id, n).
End of explanation
print "{0} tokens".format(len(D))
Explanation: Note that we can interpret each element of corpus_bow as a sparse_vector. For example, a list of tuples
[(0, 1), (3, 3), (5,2)]
for a dictionary of 10 elements can be represented as a vector, where any tuple (id, n) states that position id must take value n. The rest of positions must be zero.
[1, 0, 0, 3, 0, 2, 0, 0, 0, 0]
These sparse vectors will be the inputs to the topic modeling algorithms.
Note that, at this point, we have built a Dictionary containing
End of explanation
print "{0} Wikipedia articles".format(len(corpus_bow))
Explanation: and a bow representation of a corpus with
End of explanation
# SORTED TOKEN FREQUENCIES (I):
# Create a "flat" corpus with all tuples in a single list
corpus_bow_flat = [item for sublist in corpus_bow for item in sublist]
# Initialize a numpy array that we will use to count tokens.
# token_count[n] should store the number of ocurrences of the n-th token, D[n]
token_count = np.zeros(n_tokens)
# Count the number of occurrences of each token.
for x in corpus_bow_flat:
# Update the proper element in token_count
# scode: <FILL IN>
token_count[x[0]] += x[1]
# Sort by decreasing number of occurences
ids_sorted = np.argsort(- token_count)
tf_sorted = token_count[ids_sorted]
Explanation: Before starting with the semantic analyisis, it is interesting to observe the token distribution for the given corpus.
End of explanation
print D[ids_sorted[0]]
Explanation: ids_sorted is a list of all token ids, sorted by decreasing number of occurrences in the whole corpus. For instance, the most frequent term is
End of explanation
print "{0} times in the whole corpus".format(tf_sorted[0])
Explanation: which appears
End of explanation
# SORTED TOKEN FREQUENCIES (II):
plt.rcdefaults()
# Example data
n_bins = 25
hot_tokens = [D[i] for i in ids_sorted[n_bins-1::-1]]
y_pos = np.arange(len(hot_tokens))
z = tf_sorted[n_bins-1::-1]/n_art
plt.barh(y_pos, z, align='center', alpha=0.4)
plt.yticks(y_pos, hot_tokens)
plt.xlabel('Average number of occurrences per article')
plt.title('Token distribution')
plt.show()
display()
# SORTED TOKEN FREQUENCIES:
# Example data
plt.semilogy(tf_sorted)
plt.xlabel('Average number of occurrences per article')
plt.title('Token distribution')
plt.show()
display()
Explanation: In the following we plot the most frequent terms in the corpus.
End of explanation
# scode: <WRITE YOUR CODE HERE>
# Example data
cold_tokens = [D[i] for i in ids_sorted if tf_sorted[i]==1]
print "There are {0} cold tokens, which represent {1}% of the total number of tokens in the dictionary".format(
len(cold_tokens), float(len(cold_tokens))/n_tokens*100)
Explanation: Exercise: There are usually many tokens that appear with very low frequency in the corpus. Count the number of tokens appearing only once, and what is the proportion of them in the token list.
End of explanation
# scode: <WRITE YOUR CODE HERE>
# SORTED TOKEN FREQUENCIES (I):
# Count the number of occurrences of each token.
token_count2 = np.zeros(n_tokens)
for x in corpus_bow_flat:
token_count2[x[0]] += (x[1]>0)
# Sort by decreasing number of occurences
ids_sorted2 = np.argsort(- token_count2)
tf_sorted2 = token_count2[ids_sorted2]
# SORTED TOKEN FREQUENCIES (II):
# Example data
n_bins = 25
hot_tokens2 = [D[i] for i in ids_sorted2[n_bins-1::-1]]
y_pos2 = np.arange(len(hot_tokens2))
z2 = tf_sorted2[n_bins-1::-1]/n_art
plt.figure()
plt.barh(y_pos2, z2, align='center', alpha=0.4)
plt.yticks(y_pos2, hot_tokens2)
plt.xlabel('Average number of occurrences per article')
plt.title('Token distribution')
plt.show()
display()
Explanation: Exercise: Represent graphically those 20 tokens that appear in the highest number of articles. Note that you can use the code above (headed by # SORTED TOKEN FREQUENCIES) with a very minor modification.
End of explanation
tfidf = gensim.models.TfidfModel(corpus_bow)
Explanation: 3. Semantic Analysis
The dictionary D and the Bag of Words in corpus_bow are the key inputs to the topic model algorithms. In this section we will explore two algorithms:
Latent Semantic Indexing (LSI)
Latent Dirichlet Allocation (LDA)
The topic model algorithms in gensim assume that input documents are parameterized using the tf-idf model. This can be done using
End of explanation
doc_bow = [(0, 1), (1, 1)]
tfidf[doc_bow]
Explanation: From now on, tfidf can be used to convert any vector from the old representation (bow integer counts) to the new one (TfIdf real-valued weights):
End of explanation
corpus_tfidf = tfidf[corpus_bow]
print corpus_tfidf[0][0:5]
Explanation: Or to apply a transformation to a whole corpus
End of explanation
# Initialize an LSI transformation
n_topics = 5
# scode: lsi = <FILL IN>
lsi = gensim.models.LsiModel(corpus_tfidf, id2word=D, num_topics=n_topics)
Explanation: 3.1. Latent Semantic Indexing (LSI)
Now we are ready to apply a topic modeling algorithm. Latent Semantic Indexing is provided by LsiModel.
Task: Generate a LSI model with 5 topics for corpus_tfidf and dictionary D. You can check de sintaxis for gensim.models.LsiModel.
End of explanation
lsi.show_topics(num_topics=-1, num_words=10, log=False, formatted=True)
Explanation: From LSI, we can check both the topic-tokens matrix and the document-topics matrix.
Now we can check the topics generated by LSI. An intuitive visualization is provided by the show_topics method.
End of explanation
# SORTED TOKEN FREQUENCIES (II):
plt.rcdefaults()
n_bins = 25
# Example data
y_pos = range(n_bins-1, -1, -1)
# pylab.rcParams['figure.figsize'] = 16, 8 # Set figure size
plt.figure(figsize=(16, 8))
for i in range(n_topics):
### Plot top 25 tokens for topic i
# Read i-thtopic
# scode: <FILL IN>
topic_i = lsi.show_topic(i, topn=n_bins)
tokens = [t[0] for t in topic_i]
weights = [t[1] for t in topic_i]
# Plot
# scode: <FILL IN>
plt.subplot(1, n_topics, i+1)
plt.barh(y_pos, weights, align='center', alpha=0.4)
plt.yticks(y_pos, tokens)
plt.xlabel('Top {0} topic weights'.format(n_bins))
plt.title('Topic {0}'.format(i))
plt.show()
display()
Explanation: However, a more useful representation of topics is as a list of tuples (token, value). This is provided by the show_topic method.
Task: Represent the columns of the topic-token matrix as a series of bar diagrams (one per topic) with the top 25 tokens of each topic.
End of explanation
# On real corpora, target dimensionality of
# 200–500 is recommended as a “golden standard”
# Create a double wrapper over the original
# corpus bow tfidf fold-in-lsi
corpus_lsi = lsi[corpus_tfidf]
print corpus_lsi[0]
Explanation: LSI approximates any document as a linear combination of the topic vectors. We can compute the topic weights for any input corpus entered as input to the lsi model.
End of explanation
# Extract weights from corpus_lsi
# scode weight0 = <FILL IN>
weight0 = [doc[0][1] if doc != [] else -np.inf for doc in corpus_lsi]
# Locate the maximum positive weight
nmax = np.argmax(weight0)
print nmax
print weight0[nmax]
print corpus_lsi[nmax]
# Get topic 0
# scode: topic_0 = <FILL IN>
topic_0 = lsi.show_topic(0, topn=n_bins)
# Compute a list of tuples (token, wordcount) for all tokens in topic_0, where wordcount is the number of
# occurences of the token in the article.
# scode: token_counts = <FILL IN>
token_counts = [(t[0], corpus_clean[nmax].count(t[0])) for t in topic_0]
print "Topic 0 is:"
print topic_0
print "Token counts:"
print token_counts
Explanation: Task: Find the document with the largest positive weight for topic 0. Compare the document and the topic.
End of explanation
ldag = gensim.models.ldamodel.LdaModel(
corpus=corpus_tfidf, id2word=D, num_topics=10, update_every=1, passes=10)
ldag.print_topics()
Explanation: 3.2. Latent Dirichlet Allocation (LDA)
There are several implementations of the LDA topic model in python:
Python library lda.
Gensim module: gensim.models.ldamodel.LdaModel
Sci-kit Learn module: sklearn.decomposition
3.2.1. LDA using Gensim
The use of the LDA module in gensim is similar to LSI. Furthermore, it assumes that a tf-idf parametrization is used as an input, which is not in complete agreement with the theoretical model, which assumes documents represented as vectors of token-counts.
To use LDA in gensim, we must first create a lda model object.
End of explanation
# For testing LDA, you can use the reuters dataset
# X = lda.datasets.load_reuters()
# vocab = lda.datasets.load_reuters_vocab()
# titles = lda.datasets.load_reuters_titles()
X = np.int32(np.zeros((n_art, n_tokens)))
for n, art in enumerate(corpus_bow):
for t in art:
X[n, t[0]] = t[1]
print X.shape
print X.sum()
vocab = D.values()
titles = corpus_titles
# Default parameters:
# model = lda.LDA(n_topics, n_iter=2000, alpha=0.1, eta=0.01, random_state=None, refresh=10)
model = lda.LDA(n_topics=10, n_iter=1500, random_state=1)
model.fit(X) # model.fit_transform(X) is also available
topic_word = model.topic_word_ # model.components_ also works
# Show topics...
n_top_words = 8
for i, topic_dist in enumerate(topic_word):
topic_words = np.array(vocab)[np.argsort(topic_dist)][:-(n_top_words+1):-1]
print('Topic {}: {}'.format(i, ' '.join(topic_words)))
Explanation: 3.2.2. LDA using python lda library
An alternative to gensim for LDA is the lda library from python. It requires a doc-frequency matrix as input
End of explanation
doc_topic = model.doc_topic_
for i in range(10):
print("{} (top topic: {})".format(titles[i], doc_topic[i].argmax()))
# This is to apply the model to a new doc(s)
# doc_topic_test = model.transform(X_test)
# for title, topics in zip(titles_test, doc_topic_test):
# print("{} (top topic: {})".format(title, topics.argmax()))
Explanation: Document-topic distribution
End of explanation
# Adapted from an example in sklearn site
# http://scikit-learn.org/dev/auto_examples/applications/topics_extraction_with_nmf_lda.html
# You can try also with the dataset provided by sklearn in
# from sklearn.datasets import fetch_20newsgroups
# dataset = fetch_20newsgroups(shuffle=True, random_state=1,
# remove=('headers', 'footers', 'quotes'))
def print_top_words(model, feature_names, n_top_words):
for topic_idx, topic in enumerate(model.components_):
print("Topic #%d:" % topic_idx)
print(" ".join([feature_names[i]
for i in topic.argsort()[:-n_top_words - 1:-1]]))
print()
Explanation: It allows incremental updates
3.2.2. LDA using Sci-kit Learn
The input matrix to the sklearn implementation of LDA contains the token-counts for all documents in the corpus.
sklearn contains a powerfull CountVectorizer method that can be used to construct the input matrix from the corpus_bow.
First, we will define an auxiliary function to print the top tokens in the model, that has been taken from the sklearn documentation.
End of explanation
print("Loading dataset...")
# scode: data_samples = <FILL IN>
print "*".join(['Esto', 'es', 'un', 'ejemplo'])
data_samples = [" ".join(c) for c in corpus_clean]
print 'Document 0:'
print data_samples[0][0:200], '...'
Explanation: Now, we need a dataset to feed the Count_Vectorizer object, by joining all tokens in corpus_clean in a single string, using a space ' ' as separator.
End of explanation
# Use tf (raw term count) features for LDA.
print("Extracting tf features for LDA...")
n_features = 1000
n_samples = 2000
tf_vectorizer = CountVectorizer(max_df=0.95, min_df=2,
max_features=n_features,
stop_words='english')
t0 = time()
tf = tf_vectorizer.fit_transform(data_samples)
print("done in %0.3fs." % (time() - t0))
print tf[0][0][0]
Explanation: Now we are ready to compute the token counts.
End of explanation
print("Fitting LDA models with tf features, "
"n_samples=%d and n_features=%d..."
% (n_samples, n_features))
# scode: lda = <FILL IN>
lda = LatentDirichletAllocation(n_topics=n_topics, max_iter=10,
learning_method='online', learning_offset=50., random_state=0)
# doc_topic_prior= 1.0/n_topics, topic_word_prior= 1.0/n_topics)
Explanation: Now we can apply the LDA algorithm.
Task: Create an LDA object with the following parameters:
n_topics=n_topics, max_iter=5,
learning_method='online',
learning_offset=50.,
random_state=0
End of explanation
t0 = time()
corpus_lda = lda.fit_transform(tf)
print corpus_lda[10]/np.sum(corpus_lda[10])
print("done in %0.3fs." % (time() - t0))
print corpus_titles[10]
# print corpus_text[10]
print("\nTopics in LDA model:")
tf_feature_names = tf_vectorizer.get_feature_names()
print_top_words(lda, tf_feature_names, 20)
topics = lda.components_
topic_probs = [t/np.sum(t) for t in topics]
#print topic_probs[0]
print -np.sort(-topic_probs[0])
Explanation: Task: Fit model lda with the token frequencies computed by tf_vectorizer.
End of explanation
# SORTED TOKEN FREQUENCIES (II):
plt.rcdefaults()
n_bins = 50
# Example data
y_pos = range(n_bins-1, -1, -1)
# pylab.rcParams['figure.figsize'] = 16, 8 # Set figure size
plt.figure(figsize=(16, 8))
for i in range(n_topics):
### Plot top 25 tokens for topic i
# Read i-thtopic
# scode: <FILL IN>
topic_i = topic_probs[i]
rank = np.argsort(- topic_i)[0:n_bins]
tokens = [tf_feature_names[r] for r in rank]
weights = [topic_i[r] for r in rank]
# Plot
# scode: <FILL IN>
plt.subplot(1, n_topics, i+1)
plt.barh(y_pos, weights, align='center', alpha=0.4)
plt.yticks(y_pos, tokens)
plt.xlabel('Top {0} topic weights'.format(n_bins))
plt.title('Topic {0}'.format(i))
plt.show()
display()
Explanation: Exercise: Represent graphically the topic distributions for the top 25 tokens with highest probability for each topic.
End of explanation |
4,981 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Usage example
To showcase the use of this toolkit, we first create a simple learning task, and then learn an OOM model using spectral learning.
We start by importing the toolkit and initializing a random generator.
Step1: 1. The learning task
First, we randomly create a sparse 7-dimensional OOM with an alphabet size of $|\Sigma| = 5$. This describes a stationary and ergodic symbol process. We sample a training sequence of length $10^6$ and five test sequences each of length $10^4$.
We will use initial subsequences of the training sequence of increasing lengths ${10^2, 10^{2.5}, 10^3, 10^{3.5}, 10^4, 10^{4.5}, 10^{5}, 10^{5.5}, 10^6 }$ as data for the OOM estimation, and test the performance of the learnt models on the test sequences by computing the time-averaged negative $\log_2$-likelihood.
Step2: 2. Performing spectral learning
Spectral learning requires the following steps. For details consult the publication
Step3: 3. Evaluate the learnt models and plot the results
We first print the estimated model dimension to see if the dimension estimation has produced reasonable values.
Next we evaluate the learnt models by computing the time-averaged negative $\log_2$-likelihood (cross-entropy) on the test sequences by the member function Oom.l2l(test_sequence). Note that a value of $\log_2(|\Sigma|) \approx 2.32$ corresponds to pure chance level (i.e., a model guessing the next symbol uniformly randomly). Furthermore, we can estimate the best possible value by computing the time-averaged negative $\log_2$-"likelihood" of the true model on the test sequences, which samples the entropy of the stochastic process.
We then plot the performance of the estimated models (y-axis), where we scale the plot such that the minimum corresponds to the best possible model, and the maximum corresponds to pure chance. | Python Code:
import tom
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
rand = tom.Random(1234567)
Explanation: Usage example
To showcase the use of this toolkit, we first create a simple learning task, and then learn an OOM model using spectral learning.
We start by importing the toolkit and initializing a random generator.
End of explanation
oom = tom.Oom(7, 5, 0, 20, 1e-7, rand)
train_sequence = oom.sample(10**6, rand)
test_sequences = []
for i in range(5):
oom.reset()
test_sequences.append(oom.sample(10**4, rand))
train_lengths = [int(10**(k/2)) for k in range(4,13)]
Explanation: 1. The learning task
First, we randomly create a sparse 7-dimensional OOM with an alphabet size of $|\Sigma| = 5$. This describes a stationary and ergodic symbol process. We sample a training sequence of length $10^6$ and five test sequences each of length $10^4$.
We will use initial subsequences of the training sequence of increasing lengths ${10^2, 10^{2.5}, 10^3, 10^{3.5}, 10^4, 10^{4.5}, 10^{5}, 10^{5.5}, 10^6 }$ as data for the OOM estimation, and test the performance of the learnt models on the test sequences by computing the time-averaged negative $\log_2$-likelihood.
End of explanation
# Initialize a tom.Data object that computes the desired estimates from the training
# data (using a suffix tree representation internally) and provides the required
# data matrices including variance estimates.
data = tom.Data()
# For every training sequence length, learn a model via spectral learning
learnt_ooms = []
for train_length in train_lengths:
# 1. Use the current training sequence to obtain estimates
data.sequence = train_sequence.sub(train_length)
# 2. Select sets of indicative and characteristic words:
data.X = data.Y = tom.wordsFromData(data.stree, maxWords = 1000)
# 3. Estimate an appropriate target dimension (using no weights here):
d = tom.learn.dimension_estimate(data, v=(1,1))
# 4. Perform spectral learning to estimate an OOM:
learnt_oom = tom.learn.model_estimate(data, d)
# 5. Set default stabilization parameters for the learnt model:
learnt_oom.stabilization(preset='default')
learnt_ooms.append(learnt_oom)
# Print a very simple progress indicator:
print('.', end='', flush=True)
print('done!')
Explanation: 2. Performing spectral learning
Spectral learning requires the following steps. For details consult the publication: Michael Thon and Herbert Jaeger. Links between multiplicity automata, observable operator models and predictive state representations -- a unified learning framework. Journal of Machine Learning Research, 16:103–147, 2015.
For words $\bar{x}\in\Sigma^*$, estimate from the available data the values $\hat{f}(\bar{x})$, where $f(\bar{x}) = P(\bar{x})$ is the stationary probability of observing $\bar{x}$. This is accomplished by a tom.Estimator object, which uses a suffix tree representation of the data in the form of a tom.STree to compute these estimates efficiently.
Select sets $X, Y \subseteq \Sigma^*$ of "indicative" and "characteristic" words that determine which of the above estimates will be used for the spectral learning. Here, we will use the at most 1000 words occurring most often in the training sequence. This is computed efficiently by the function tom.getWordsFromData from a suffix tree representation of the training data.
Estimate an appropriate target dimension $d$ by the numerical rank of the matrix $\hat{F}^{Y,X} = [\hat{f}(\bar{x}\bar{y})]_{\bar{y}\in Y, \bar{x}\in X}$.
Perform the actual spectral learning using the function tom.learn.spectral. This consists of the following steps:
Find the best rank-$d$ approximation $BA \approx \hat{F}^{Y,X}$ to the matrix $\hat{F}^{Y,X}$.
Project the columns of $\hat{F}^{Y,X}$ and $\hat{F}z^{Y,X} = [\hat{f}(\bar{x} z \bar{y})]{\bar{y}\in Y, \bar{x}\in X}$, as well as the vector $\hat{F}^{X, \varepsilon} = [\hat{f}(\bar{x})]{\bar{x}\in X}$ to the principal subspace spanned by $B$, giving the coordinate representations $A$, $A_z$ and $\hat{\omega}\varepsilon$, respectively.
Solve $\hat{\tau_z} A = A_z$ in the least-squares sense for each symbol $z\in \Sigma$, as well as $\hat{\sigma} A = \hat{F}^{\varepsilon, Y} = [\hat{f}(\bar{y})]^\top_{\bar{y}\in Y}$.
The estimated model should be "stabilized" to insure that is cannot produce negative probability estimates.
This is performed once for each training sequence length.
End of explanation
# Let's examine the estimated model dimensions:
print('Estimated model dimensions: ', [learnt_oom.dimension() for learnt_oom in learnt_ooms])
# The time-averaged negative log2-likelihood is computed by the function `oom.l2l(test_sequence)`.
results = [np.average([ learnt_oom.l2l(test_sequence) for test_sequence in test_sequences ])
for learnt_oom in learnt_ooms]
# Compute an approximation to the optimum value:
l2l_opt = np.average([oom.l2l(test_sequence) for test_sequence in test_sequences])
# Plot the performance of the estimated models:
plt.semilogx(train_lengths, results);
plt.xlim((train_lengths[0], train_lengths[-1]));
plt.ylim((l2l_opt, np.log2(5)));
plt.title('Performance of the estimated models');
plt.ylabel('cross-entropy');
plt.xlabel('Length of training data');
Explanation: 3. Evaluate the learnt models and plot the results
We first print the estimated model dimension to see if the dimension estimation has produced reasonable values.
Next we evaluate the learnt models by computing the time-averaged negative $\log_2$-likelihood (cross-entropy) on the test sequences by the member function Oom.l2l(test_sequence). Note that a value of $\log_2(|\Sigma|) \approx 2.32$ corresponds to pure chance level (i.e., a model guessing the next symbol uniformly randomly). Furthermore, we can estimate the best possible value by computing the time-averaged negative $\log_2$-"likelihood" of the true model on the test sequences, which samples the entropy of the stochastic process.
We then plot the performance of the estimated models (y-axis), where we scale the plot such that the minimum corresponds to the best possible model, and the maximum corresponds to pure chance.
End of explanation |
4,982 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gradient Boosting From Scratch
Let's implement gradient boosting from scratch.
Step1: Exploration
Let explore the data before building a model. The goal is to predict the median value of owner-occupied homes in $1000s.
Step5: Exercise #1
Step6: Let's see how the base model performs on out test data. Let's visualize performance compared to the LSTAT feature.
Step7: There is definitely room for improvement. We can also look at the residuals
Step9: Train Boosting model
Returning back to boosting, let's use our very first base model as are initial prediction. We'll then perform subsequent boosting iterations to improve upon this model.
create_weak_model
Step10: Make initial prediction.
Exercise #3
Step11: Interpret results
Can you improve the model results?
Step12: Let's visualize how the performance changes across iterations | Python Code:
from __future__ import print_function
import numpy as np
import pandas as pd
import seaborn as sns
from matplotlib import pyplot as plt
from sklearn.tree import DecisionTreeRegressor
from tensorflow.keras.datasets import boston_housing
np.random.seed(0)
plt.rcParams['figure.figsize'] = (8.0, 5.0)
plt.rcParams['axes.labelsize'] = 20
plt.rcParams['ytick.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 14
(x_train, y_train), (x_test, y_test) = boston_housing.load_data()
x_train.shape
Explanation: Gradient Boosting From Scratch
Let's implement gradient boosting from scratch.
End of explanation
# Create training/test dataframes for visualization/data exploration.
# Description of features: https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html
feature_names = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD','TAX', 'PTRATIO', 'B', 'LSTAT']
df_train = pd.DataFrame(x_train, columns=feature_names)
df_test = pd.DataFrame(x_test, columns=feature_names)
Explanation: Exploration
Let explore the data before building a model. The goal is to predict the median value of owner-occupied homes in $1000s.
End of explanation
class BaseModel(object):
Initial model that predicts mean of train set.
def __init__(self, y_train):
self.train_mean = # TODO
def predict(self, x):
Return train mean for every prediction.
return # TODO
def compute_residuals(label, pred):
Compute difference of labels and predictions.
When using mean squared error loss function, the residual indicates the
negative gradient of the loss function in prediction space. Thus by fitting
the residuals, we performing gradient descent in prediction space. See for
more detail:
https://explained.ai/gradient-boosting/L2-loss.html
return label - pred
def compute_rmse(x):
return np.sqrt(np.mean(np.square(x)))
# Build a base model that predicts the mean of the training set.
base_model = BaseModel(y_train)
test_pred = base_model.predict(x_test)
test_residuals = compute_residuals(y_test, test_pred)
compute_rmse(test_residuals)
Explanation: Exercise #1: What are the most predictive features? Determine correlation for each feature with the label. You may find the corr function useful.
Train Gradient Boosting model
Training Steps to build model an ensemble of $K$ estimators.
1. At $k=0$ build base model , $\hat{y}{0}$: $\hat{y}{0}=base_predicted$
3. Compute residuals $r = \sum_{i=0}^n (y_{k,i} - \hat{y}{k,i})$; $n: number\ train\ examples$
4. Train new model, fitting on residuals, $r$. We will call the predictions from this model $e{k}_predicted$
5. Update model predictions at step $k$ by adding residual to current predictions: $\hat{y}{k} = \hat{y}{k-1} + e_{k}_predicted$
6. Repeat steps 2 - 5 K times.
In summary, the goal is to build K estimators that learn to predict the residuals from the prior model; thus we are learning to "correct" the
predictions up until this point.
<br>
$\hat{y}{K} = base_predicted\ +\ \sum{j=1}^Ke_{j}_predicted$
Build base model
Exercise #2: Make an initial prediction using the BaseModel class -- configure the predict method to predict the training mean.
End of explanation
feature = df_test.LSTAT
# Pick a predictive feature for plotting.
plt.plot(feature, y_test, 'go', alpha=0.7, markersize=10)
plt.plot(feature, test_pred, label='initial prediction')
plt.xlabel('LSTAT', size=20)
plt.legend(prop={'size': 20});
Explanation: Let's see how the base model performs on out test data. Let's visualize performance compared to the LSTAT feature.
End of explanation
plt.plot(feature, test_residuals, 'bo', alpha=0.7, markersize=10)
plt.ylabel('residuals', size=20)
plt.xlabel('LSTAT', size=20)
plt.plot([feature.min(), feature.max()], [0, 0], 'b--', label='0 error');
plt.legend(prop={'size': 20});
Explanation: There is definitely room for improvement. We can also look at the residuals:
End of explanation
def create_weak_learner(**tree_params):
Initialize a Decision Tree model.
model = DecisionTreeRegressor(**tree_params)
return model
Explanation: Train Boosting model
Returning back to boosting, let's use our very first base model as are initial prediction. We'll then perform subsequent boosting iterations to improve upon this model.
create_weak_model
End of explanation
base_model = BaseModel(y_train)
# Training parameters.
tree_params = {
'max_depth': 1,
'criterion': 'mse',
'random_state': 123
}
N_ESTIMATORS = 50
BOOSTING_LR = 0.1
# Initial prediction, residuals.
train_pred = base_model.prediction(x_train)
test_pred = base_model.prediction(x_test)
train_residuals = compute_residuals(y_train, train_pred)
test_residuals = compute_residuals(y_test, test_pred)
# Boosting.
train_rmse, test_rmse = [], []
for _ in range(0, N_ESTIMATORS):
train_rmse.append(compute_rmse(train_residuals))
test_rmse.append(compute_rmse(test_residuals))
# Train weak learner.
model = create_weak_learner(**tree_params)
model.fit(x_train, train_residuals)
# Boosting magic happens here: add the residual prediction to correct
# the prior model.
grad_approx = # TODO
train_pred += # TODO
train_residuals = compute_residuals(y_train, train_pred)
# Keep track of residuals on validation set.
grad_approx = # TODO
test_pred += # TODO
test_residuals = compute_residuals(y_test, test_pred)
Explanation: Make initial prediction.
Exercise #3: Update the prediction on the training set (train_pred) and on the testing set (test_pred) using the weak learner that predicts the residuals.
End of explanation
plt.figure()
plt.plot(train_rmse, label='train error')
plt.plot(test_rmse, label='test error')
plt.ylabel('rmse', size=20)
plt.xlabel('Boosting Iterations', size=20);
plt.legend()
Explanation: Interpret results
Can you improve the model results?
End of explanation
feature = df_test.LSTAT
ix = np.argsort(feature)
# Pick a predictive feature for plotting.
plt.plot(feature, y_test, 'go', alpha=0.7, markersize=10)
plt.plot(feature[ix], test_pred[ix], label='boosted prediction', linewidth=2)
plt.xlabel('feature', size=20)
plt.legend(prop={'size': 20});
Explanation: Let's visualize how the performance changes across iterations
End of explanation |
4,983 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Custom Jupyter Widgets
The Hello World Example of the Cookie Cutter
The widget framework is built on top of the Comm framework (short for communication). The Comm framework is a framework that allows you send/receive JSON messages to/from the front end (as seen below).
To create a custom widget, you need to define the widget both in the browser and on the kernel size.
Python Kernel
DOMWidget and Widget
DOMWidget
Step1: Front end (JavaScript)
Models and Views
Jupyter widgets rely on Backbone.js.
Backbone.js is an MVC (model view controller) framework.
Widgets defined in the back end are automatically synchronized with generic Backbone.js models in the front end. The traitlets are added to the front end instance automatically on first state push. The _view_name trait that you defined earlier is used by the widget framework to create the corresponding Backbone.js view and link that view to the model.
Import jupyter-js-widgets, define the view, implement the render method
Step2: Test
You should be able to display your widget just like any other widget now.
Step3: Making the widget stateful
Instead of displaying a static "hello world" message, we can display a string set by the back end.
First you need to add a traitlet in the back end.
(Use the name of value to stay consistent with the rest of the widget framework and to allow your widget to be used with interact.)
Step4: Dynamic updates
Adding and registering a change handler.
Step5: An example including bidirectional communication
Step6: Test of the spinner widget
Step7: Wiring the spinner with another widget | Python Code:
import ipywidgets as widgets
from traitlets import Unicode
class HelloWidget(widgets.DOMWidget):
_view_name = Unicode('HelloView').tag(sync=True)
_view_module = Unicode('hello').tag(sync=True)
Explanation: Custom Jupyter Widgets
The Hello World Example of the Cookie Cutter
The widget framework is built on top of the Comm framework (short for communication). The Comm framework is a framework that allows you send/receive JSON messages to/from the front end (as seen below).
To create a custom widget, you need to define the widget both in the browser and on the kernel size.
Python Kernel
DOMWidget and Widget
DOMWidget: Intended to be displayed in the Jupyter notebook
Widget: A terrible name for a synchronized object. It could not have any visual representation.
_view_name
Inheriting from the DOMWidget does not tell the widget framework what front end widget to associate with your back end widget. Instead, you must tell it yourself by defining a specially named traitlet, _view_name (as seen below).
End of explanation
%%javascript
require.undef('hello');
define('hello', ["@jupyter-widgets/base"], function(widgets) {
var HelloView = widgets.DOMWidgetView.extend({
// Render the view.
render: function() {
console.log(this)
this.el.innerText = 'Hello World!';
},
});
return {
HelloView: HelloView
};
});
Explanation: Front end (JavaScript)
Models and Views
Jupyter widgets rely on Backbone.js.
Backbone.js is an MVC (model view controller) framework.
Widgets defined in the back end are automatically synchronized with generic Backbone.js models in the front end. The traitlets are added to the front end instance automatically on first state push. The _view_name trait that you defined earlier is used by the widget framework to create the corresponding Backbone.js view and link that view to the model.
Import jupyter-js-widgets, define the view, implement the render method
End of explanation
HelloWidget()
Explanation: Test
You should be able to display your widget just like any other widget now.
End of explanation
class HelloWidget(widgets.DOMWidget):
_view_name = Unicode('HelloView').tag(sync=True)
_view_module = Unicode('hello').tag(sync=True)
value = Unicode('Hello World!').tag(sync=True)
%%javascript
require.undef('hello');
define('hello', ["@jupyter-widgets/base"], function(widgets) {
var HelloView = widgets.DOMWidgetView.extend({
render: function() {
this.el.innerText = this.model.get('value');
},
});
return {
HelloView : HelloView
};
});
Explanation: Making the widget stateful
Instead of displaying a static "hello world" message, we can display a string set by the back end.
First you need to add a traitlet in the back end.
(Use the name of value to stay consistent with the rest of the widget framework and to allow your widget to be used with interact.)
End of explanation
%%javascript
require.undef('hello');
define('hello', ["@jupyter-widgets/base"], function(widgets) {
var HelloView = widgets.DOMWidgetView.extend({
render: function() {
this.value_changed();
this.model.on('change:value', this.value_changed, this);
},
value_changed: function() {
this.el.innerText = this.model.get('value');
},
});
return {
HelloView : HelloView
};
});
w = HelloWidget()
w
w.value = 'Hello!'
Explanation: Dynamic updates
Adding and registering a change handler.
End of explanation
from traitlets import CInt
class SpinnerWidget(widgets.DOMWidget):
_view_name = Unicode('SpinnerView').tag(sync=True)
_view_module = Unicode('spinner').tag(sync=True)
value = CInt().tag(sync=True)
%%javascript
requirejs.undef('spinner');
define('spinner', ["@jupyter-widgets/base"], function(widgets) {
var SpinnerView = widgets.DOMWidgetView.extend({
render: function() {
var that = this;
this.$input = $('<input />');
this.$el.append(this.$input);
this.$spinner = this.$input.spinner({
change: function( event, ui ) {
that.handle_spin(that.$spinner.spinner('value'));
},
spin: function( event, ui ) {
//ui.value is the new value of the spinner
that.handle_spin(ui.value);
}
});
this.value_changed();
this.model.on('change:value', this.value_changed, this);
},
value_changed: function() {
this.$spinner.spinner('value', this.model.get('value'));
},
handle_spin: function(value) {
this.model.set('value', value);
this.touch();
},
});
return {
SpinnerView: SpinnerView
};
});
Explanation: An example including bidirectional communication: A Spinner Widget
End of explanation
w = SpinnerWidget(value=5)
w
w.value = 7
Explanation: Test of the spinner widget
End of explanation
from IPython.display import display
w1 = SpinnerWidget(value=0)
w2 = widgets.IntSlider()
display(w1,w2)
from traitlets import link
mylink = link((w1, 'value'), (w2, 'value'))
Explanation: Wiring the spinner with another widget
End of explanation |
4,984 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MNIST Image Classification with TensorFlow on Cloud AI Platform
This notebook demonstrates how to implement different image models on MNIST using the tf.keras API.
Learning objectives
Understand how to build a Dense Neural Network (DNN) for image classification
Understand how to use dropout (DNN) for image classification
Understand how to use Convolutional Neural Networks (CNN)
Know how to deploy and use an image classification model using Google Cloud's Vertex AI
Each learning objective will correspond to a #TODO in the notebook, where you will complete the notebook cell's code before running the cell. Refer to the solution notebook)for reference.
First things first. Configure the parameters below to match your own Google Cloud project details.
Step3: Building a dynamic model
In the previous notebook, <a href="1_mnist_linear.ipynb">1_mnist_linear.ipynb</a>, we ran our code directly from the notebook. In order to run it on the AI Platform, it needs to be packaged as a python module.
The boilerplate structure for this module has already been set up in the folder mnist_models. The module lives in the sub-folder, trainer, and is designated as a python package with the empty __init__.py (mnist_models/trainer/__init__.py) file. It still needs the model and a trainer to run it, so let's make them.
Let's start with the trainer file first. This file parses command line arguments to feed into the model.
Step6: Next, let's group non-model functions into a util file to keep the model file simple. We'll copy over the scale and load_dataset functions from the previous lab.
Step10: Finally, let's code the models! The tf.keras API accepts an array of layers into a model object, so we can create a dictionary of layers based on the different model types we want to use. The below file has two functions
Step11: Local Training
With everything set up, let's run locally to test the code. Some of the previous tests have been copied over into a testing script mnist_models/trainer/test.py to make sure the model still passes our previous checks. On line 13, you can specify which model types you would like to check. line 14 and line 15 has the number of epochs and steps per epoch respectively.
Moment of truth! Run the code below to check your models against the unit tests. If you see "OK" at the end when it's finished running, congrats! You've passed the tests!
Step12: Now that we know that our models are working as expected, let's run it on the Google Cloud AI Platform. We can run it as a python module locally first using the command line.
The below cell transfers some of our variables to the command line as well as create a job directory including a timestamp.
Step13: The cell below runs the local version of the code. The epochs and steps_per_epoch flag can be changed to run for longer or shorter, as defined in our mnist_models/trainer/task.py file.
Step14: Training on the cloud
Since we're using an unreleased version of TensorFlow on AI Platform, we can instead use a Deep Learning Container in order to take advantage of libraries and applications not normally packaged with AI Platform. Below is a simple Dockerlife which copies our code to be used in a TF2 environment.
Step15: The below command builds the image and ships it off to Google Cloud so it can be used for AI Platform. When built, it will show up here with the name mnist_models. (Click here to enable Cloud Build)
Step16: Finally, we can kickoff the AI Platform training job. We can pass in our docker image using the master-image-uri flag.
Step17: AI platform job could take around 10 minutes to complete. Enable the AI Platform Training & Prediction API, if required.
Step18: Deploying and predicting with model
Once you have a model you're proud of, let's deploy it! All we need to do is give AI Platform the location of the model. Below uses the keras export path of the previous job, but ${JOB_DIR}keras_export/ can always be changed to a different path.
Uncomment the delete commands below if you are getting an "already exists error" and want to deploy a new model.
Step19: To predict with the model, let's take one of the example images.
TODO 4
Step20: Finally, we can send it to the prediction service. The output will have a 1 in the index of the corresponding digit it is predicting. Congrats! You've completed the lab! | Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Here we'll show the currently installed version of TensorFlow
import tensorflow as tf
print(tf.__version__)
from datetime import datetime
import os
PROJECT = "your-project-id-here" # REPLACE WITH YOUR PROJECT ID
BUCKET = "your-bucket-id-here" # REPLACE WITH YOUR BUCKET NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
MODEL_TYPE = "cnn" # "linear", "cnn", "dnn_dropout", or "dnn"
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["MODEL_TYPE"] = MODEL_TYPE
os.environ["TFVERSION"] = "2.6" # Tensorflow version
os.environ["IMAGE_URI"] = os.path.join("gcr.io", PROJECT, "mnist_models")
Explanation: MNIST Image Classification with TensorFlow on Cloud AI Platform
This notebook demonstrates how to implement different image models on MNIST using the tf.keras API.
Learning objectives
Understand how to build a Dense Neural Network (DNN) for image classification
Understand how to use dropout (DNN) for image classification
Understand how to use Convolutional Neural Networks (CNN)
Know how to deploy and use an image classification model using Google Cloud's Vertex AI
Each learning objective will correspond to a #TODO in the notebook, where you will complete the notebook cell's code before running the cell. Refer to the solution notebook)for reference.
First things first. Configure the parameters below to match your own Google Cloud project details.
End of explanation
%%writefile mnist_models/trainer/task.py
import argparse
import json
import os
import sys
from . import model
def _parse_arguments(argv):
Parses command-line arguments.
parser = argparse.ArgumentParser()
parser.add_argument(
'--model_type',
help='Which model type to use',
type=str, default='linear')
parser.add_argument(
'--epochs',
help='The number of epochs to train',
type=int, default=10)
parser.add_argument(
'--steps_per_epoch',
help='The number of steps per epoch to train',
type=int, default=100)
parser.add_argument(
'--job-dir',
help='Directory where to save the given model',
type=str, default='mnist_models/')
return parser.parse_known_args(argv)
def main():
Parses command line arguments and kicks off model training.
args = _parse_arguments(sys.argv[1:])[0]
# Configure path for hyperparameter tuning.
trial_id = json.loads(
os.environ.get('TF_CONFIG', '{}')).get('task', {}).get('trial', '')
output_path = args.job_dir if not trial_id else args.job_dir + '/'
model_layers = model.get_layers(args.model_type)
image_model = model.build_model(model_layers, args.job_dir)
model_history = model.train_and_evaluate(
image_model, args.epochs, args.steps_per_epoch, args.job_dir)
if __name__ == '__main__':
main()
Explanation: Building a dynamic model
In the previous notebook, <a href="1_mnist_linear.ipynb">1_mnist_linear.ipynb</a>, we ran our code directly from the notebook. In order to run it on the AI Platform, it needs to be packaged as a python module.
The boilerplate structure for this module has already been set up in the folder mnist_models. The module lives in the sub-folder, trainer, and is designated as a python package with the empty __init__.py (mnist_models/trainer/__init__.py) file. It still needs the model and a trainer to run it, so let's make them.
Let's start with the trainer file first. This file parses command line arguments to feed into the model.
End of explanation
%%writefile mnist_models/trainer/util.py
import tensorflow as tf
def scale(image, label):
Scales images from a 0-255 int range to a 0-1 float range
image = tf.cast(image, tf.float32)
image /= 255
image = tf.expand_dims(image, -1)
return image, label
def load_dataset(
data, training=True, buffer_size=5000, batch_size=100, nclasses=10):
Loads MNIST dataset into a tf.data.Dataset
(x_train, y_train), (x_test, y_test) = data
x = x_train if training else x_test
y = y_train if training else y_test
# One-hot encode the classes
y = tf.keras.utils.to_categorical(y, nclasses)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.map(scale).batch(batch_size)
if training:
dataset = dataset.shuffle(buffer_size).repeat()
return dataset
Explanation: Next, let's group non-model functions into a util file to keep the model file simple. We'll copy over the scale and load_dataset functions from the previous lab.
End of explanation
%%writefile mnist_models/trainer/model.py
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import (
Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax)
from . import util
# Image Variables
WIDTH = 28
HEIGHT = 28
def get_layers(
model_type,
nclasses=10,
hidden_layer_1_neurons=400,
hidden_layer_2_neurons=100,
dropout_rate=0.25,
num_filters_1=64,
kernel_size_1=3,
pooling_size_1=2,
num_filters_2=32,
kernel_size_2=3,
pooling_size_2=2):
Constructs layers for a keras model based on a dict of model types.
model_layers = {
'linear': [
Flatten(),
Dense(nclasses),
Softmax()
],
'dnn': [
# TODO
],
'dnn_dropout': [
# TODO
],
'cnn': [
# TODO
]
}
return model_layers[model_type]
def build_model(layers, output_dir):
Compiles keras model for image classification.
model = Sequential(layers)
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
return model
def train_and_evaluate(model, num_epochs, steps_per_epoch, output_dir):
Compiles keras model and loads data into it for training.
mnist = tf.keras.datasets.mnist.load_data()
train_data = util.load_dataset(mnist)
validation_data = util.load_dataset(mnist, training=False)
callbacks = []
if output_dir:
tensorboard_callback = TensorBoard(log_dir=output_dir)
callbacks = [tensorboard_callback]
history = model.fit(
train_data,
validation_data=validation_data,
epochs=num_epochs,
steps_per_epoch=steps_per_epoch,
verbose=2,
callbacks=callbacks)
if output_dir:
export_path = os.path.join(output_dir, 'keras_export')
model.save(export_path, save_format='tf')
return history
Explanation: Finally, let's code the models! The tf.keras API accepts an array of layers into a model object, so we can create a dictionary of layers based on the different model types we want to use. The below file has two functions: get_layers and create_and_train_model. We will build the structure of our model in get_layers. Last but not least, we'll copy over the training code from the previous lab into train_and_evaluate.
TODO 1: Define the Keras layers for a DNN model
TODO 2: Define the Keras layers for a dropout model
TODO 3: Define the Keras layers for a CNN model
Hint: These models progressively build on each other. Look at the imported tensorflow.keras.layers modules and the default values for the variables defined in get_layers for guidance.
End of explanation
!python3 -m mnist_models.trainer.test
Explanation: Local Training
With everything set up, let's run locally to test the code. Some of the previous tests have been copied over into a testing script mnist_models/trainer/test.py to make sure the model still passes our previous checks. On line 13, you can specify which model types you would like to check. line 14 and line 15 has the number of epochs and steps per epoch respectively.
Moment of truth! Run the code below to check your models against the unit tests. If you see "OK" at the end when it's finished running, congrats! You've passed the tests!
End of explanation
current_time = datetime.now().strftime("%y%m%d_%H%M%S")
model_type = 'cnn' # "linear", "cnn", "dnn_dropout", or "dnn"
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "mnist_models/models/{}_{}/".format(
model_type, current_time)
Explanation: Now that we know that our models are working as expected, let's run it on the Google Cloud AI Platform. We can run it as a python module locally first using the command line.
The below cell transfers some of our variables to the command line as well as create a job directory including a timestamp.
End of explanation
%%bash
python3 -m mnist_models.trainer.task \
--job-dir=$JOB_DIR \
--epochs=5 \
--steps_per_epoch=50 \
--model_type=$MODEL_TYPE
Explanation: The cell below runs the local version of the code. The epochs and steps_per_epoch flag can be changed to run for longer or shorter, as defined in our mnist_models/trainer/task.py file.
End of explanation
%%writefile mnist_models/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-cpu
COPY mnist_models/trainer /mnist_models/trainer
ENTRYPOINT ["python3", "-m", "mnist_models.trainer.task"]
Explanation: Training on the cloud
Since we're using an unreleased version of TensorFlow on AI Platform, we can instead use a Deep Learning Container in order to take advantage of libraries and applications not normally packaged with AI Platform. Below is a simple Dockerlife which copies our code to be used in a TF2 environment.
End of explanation
!docker build -f mnist_models/Dockerfile -t $IMAGE_URI ./
!docker push $IMAGE_URI
Explanation: The below command builds the image and ships it off to Google Cloud so it can be used for AI Platform. When built, it will show up here with the name mnist_models. (Click here to enable Cloud Build)
End of explanation
current_time = datetime.now().strftime("%y%m%d_%H%M%S")
model_type = 'cnn' # "linear", "cnn", "dnn_dropout", or "dnn"
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "gs://{}/mnist_{}_{}/".format(
BUCKET, model_type, current_time)
os.environ["JOB_NAME"] = "mnist_{}_{}".format(
model_type, current_time)
Explanation: Finally, we can kickoff the AI Platform training job. We can pass in our docker image using the master-image-uri flag.
End of explanation
%%bash
echo $JOB_DIR $REGION $JOB_NAME
gcloud ai-platform jobs submit training $JOB_NAME \
--staging-bucket=gs://$BUCKET \
--region=$REGION \
--master-image-uri=$IMAGE_URI \
--scale-tier=BASIC_GPU \
--job-dir=$JOB_DIR \
-- \
--model_type=$MODEL_TYPE
Explanation: AI platform job could take around 10 minutes to complete. Enable the AI Platform Training & Prediction API, if required.
End of explanation
%%bash
gcloud config set ai_platform/region global
%%bash
MODEL_NAME="mnist"
MODEL_VERSION=${MODEL_TYPE}
MODEL_LOCATION=${JOB_DIR}keras_export/
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
#yes | gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#yes | gcloud ai-platform models delete ${MODEL_NAME}
gcloud ai-platform models create ${MODEL_NAME} --regions $REGION
gcloud ai-platform versions create ${MODEL_VERSION} \
--model ${MODEL_NAME} \
--origin ${MODEL_LOCATION} \
--framework tensorflow \
--runtime-version=2.6
Explanation: Deploying and predicting with model
Once you have a model you're proud of, let's deploy it! All we need to do is give AI Platform the location of the model. Below uses the keras export path of the previous job, but ${JOB_DIR}keras_export/ can always be changed to a different path.
Uncomment the delete commands below if you are getting an "already exists error" and want to deploy a new model.
End of explanation
import json, codecs
import tensorflow as tf
import matplotlib.pyplot as plt
from mnist_models.trainer import util
HEIGHT = 28
WIDTH = 28
IMGNO = 12
mnist = tf.keras.datasets.mnist.load_data()
(x_train, y_train), (x_test, y_test) = mnist
test_image = x_test[IMGNO]
jsondata = test_image.reshape(HEIGHT, WIDTH, 1).tolist()
json.dump(jsondata, codecs.open("test.json", "w", encoding = "utf-8"))
plt.imshow(test_image.reshape(HEIGHT, WIDTH));
Explanation: To predict with the model, let's take one of the example images.
TODO 4: Write a .json file with image data to send to an AI Platform deployed model
End of explanation
%%bash
gcloud ai-platform predict \
--model=mnist \
--version=${MODEL_TYPE} \
--json-instances=./test.json
Explanation: Finally, we can send it to the prediction service. The output will have a 1 in the index of the corresponding digit it is predicting. Congrats! You've completed the lab!
End of explanation |
4,985 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sebastian Raschka, 2015
Python Machine Learning
Chapter 13 - Parallelizing Neural Network Training with Theano
Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
Step1: Sections
Building, compiling, and running expressions with Theano
First steps with Theano
Configuring Theano
Working with array structures
Wrapping things up
Step2: <br>
<br>
Configuring Theano
[back to top]
Configuring Theano. For more options, see
- http
Step3: To change the float type globally, execute
export THEANO_FLAGS=floatX=float32
in your bash shell. Or execute Python script as
THEANO_FLAGS=floatX=float32 python your_script.py
Running Theano on GPU(s). For prerequisites, please see
Step4: You can run a Python script on CPU via
Step5: Updating shared arrays.
More info about memory management in Theano can be found here
Step6: We can use the givens variable to insert values into the graph before compiling it. Using this approach we can reduce the number of transfers from RAM (via CPUs) to GPUs to speed up learning with shared variables. If we use inputs, a datasets is transferred from the CPU to the GPU multiple times, for example, if we iterate over a dataset multiple times (epochs) during gradient descent. Via givens, we can keep the dataset on the GPU if it fits (e.g., a mini-batch).
Step7: <br>
<br>
Wrapping things up
Step8: Implementing the training function.
Step9: Plotting the sum of squared errors cost vs epochs.
Step10: Making predictions.
Step11: <br>
<br>
Choosing activation functions for feedforward neural networks
[back to top]
<br>
<br>
Logistic function recap
[back to top]
The logistic function, often just called "sigmoid function" is in fact a special case of a sigmoid function.
Net input $z$
Step12: Now, imagine a MLP perceptron with 3 hidden units + 1 bias unit in the hidden unit. The output layer consists of 3 output units.
Step13: <br>
<br>
Estimating probabilities in multi-class classification via the softmax function
[back to top]
The softmax function is a generalization of the logistic function and allows us to compute meaningful class-probalities in multi-class settings (multinomial logistic regression).
$$P(y=j|z) =\phi_{softmax}(z) = \frac{e^{z_j}}{\sum_{k=1}^K e^{z_k}}$$
the input to the function is the result of K distinct linear functions, and the predicted probability for the j'th class given a sample vector x is
Step14: <br>
<br>
Broadening the output spectrum using a hyperbolic tangent
[back to top]
Another special case of a sigmoid function, it can be interpreted as a rescaled version of the logistic function.
$$\phi_{tanh}(z) = \frac{e^{z}-e^{-z}}{e^{z}+e^{-z}}$$
Output range
Step16: <br>
<br>
Keras
[back to top]
Loading MNIST
1) Download the 4 MNIST datasets from http
Step17: Multi-layer Perceptron in Keras
Once you have Theano installed, Keras can be installed via
pip install Keras
In order to run the following code via GPU, you can execute the Python script that was placed in this directory via
THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python mnist_keras_mlp.py
Step18: One-hot encoding of the class variable | Python Code:
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,matplotlib,theano,keras
# to install watermark just uncomment the following line:
#%install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py
Explanation: Sebastian Raschka, 2015
Python Machine Learning
Chapter 13 - Parallelizing Neural Network Training with Theano
Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
End of explanation
import theano
from theano import tensor as T
# initialize
x1 = T.scalar()
w1 = T.scalar()
w0 = T.scalar()
z1 = w1 * x1 + w0
# compile
net_input = theano.function(inputs=[w1, x1, w0], outputs=z1)
# execute
net_input(2.0, 1.0, 0.5)
Explanation: Sections
Building, compiling, and running expressions with Theano
First steps with Theano
Configuring Theano
Working with array structures
Wrapping things up: A linear regression example
Choosing activation functions for feedforward neural networks
Logistic function recap
Estimating probabilities in multi-class classification via the softmax function
Broadening the output spectrum using a hyperbolic tangent
<br>
<br>
Building, compiling, and running expressions with Theano
[back to top]
Depending on your system setup, it is typically sufficient to install Theano via
pip install Theano
For more help with the installation, please see: http://deeplearning.net/software/theano/install.html
<br>
<br>
First steps with Theano
Introducing the TensorType variables. For a complete list, see http://deeplearning.net/software/theano/library/tensor/basic.html#all-fully-typed-constructors
End of explanation
print(theano.config.floatX)
theano.config.floatX = 'float32'
Explanation: <br>
<br>
Configuring Theano
[back to top]
Configuring Theano. For more options, see
- http://deeplearning.net/software/theano/library/config.html
- http://deeplearning.net/software/theano/library/floatX.html
End of explanation
print(theano.config.device)
Explanation: To change the float type globally, execute
export THEANO_FLAGS=floatX=float32
in your bash shell. Or execute Python script as
THEANO_FLAGS=floatX=float32 python your_script.py
Running Theano on GPU(s). For prerequisites, please see: http://deeplearning.net/software/theano/tutorial/using_gpu.html
Note that float32 is recommended for GPUs; float64 on GPUs is currently still relatively slow.
End of explanation
import numpy as np
# initialize
x = T.fmatrix(name='x')
x_sum = T.sum(x, axis=0)
# compile
calc_sum = theano.function(inputs=[x], outputs=x_sum)
# execute (Python list)
ary = [[1, 2, 3], [1, 2, 3]]
print('Column sum:', calc_sum(ary))
# execute (NumPy array)
ary = np.array([[1, 2, 3], [1, 2, 3]], dtype=theano.config.floatX)
print('Column sum:', calc_sum(ary))
Explanation: You can run a Python script on CPU via:
THEANO_FLAGS=device=cpu,floatX=float64 python your_script.py
or GPU via
THEANO_FLAGS=device=gpu,floatX=float32 python your_script.py
It may also be convenient to create a .theanorc file in your home directory to make those configurations permanent. For example, to always use float32, execute
echo -e "\n[global]\nfloatX=float32\n" >> ~/.theanorc
Or, create a .theanorc file manually with the following contents
[global]
floatX = float32
device = gpu
<br>
<br>
Working with array structures
[back to top]
End of explanation
# initialize
x = T.fmatrix(name='x')
w = theano.shared(np.asarray([[0.0, 0.0, 0.0]],
dtype=theano.config.floatX))
z = x.dot(w.T)
update = [[w, w + 1.0]]
# compile
net_input = theano.function(inputs=[x],
updates=update,
outputs=z)
# execute
data = np.array([[1, 2, 3]], dtype=theano.config.floatX)
for i in range(5):
print('z%d:' % i, net_input(data))
Explanation: Updating shared arrays.
More info about memory management in Theano can be found here: http://deeplearning.net/software/theano/tutorial/aliasing.html
End of explanation
# initialize
data = np.array([[1, 2, 3]],
dtype=theano.config.floatX)
x = T.fmatrix(name='x')
w = theano.shared(np.asarray([[0.0, 0.0, 0.0]],
dtype=theano.config.floatX))
z = x.dot(w.T)
update = [[w, w + 1.0]]
# compile
net_input = theano.function(inputs=[],
updates=update,
givens={x: data},
outputs=z)
# execute
for i in range(5):
print('z:', net_input())
Explanation: We can use the givens variable to insert values into the graph before compiling it. Using this approach we can reduce the number of transfers from RAM (via CPUs) to GPUs to speed up learning with shared variables. If we use inputs, a datasets is transferred from the CPU to the GPU multiple times, for example, if we iterate over a dataset multiple times (epochs) during gradient descent. Via givens, we can keep the dataset on the GPU if it fits (e.g., a mini-batch).
End of explanation
import numpy as np
X_train = np.asarray([[0.0], [1.0], [2.0], [3.0], [4.0],
[5.0], [6.0], [7.0], [8.0], [9.0]],
dtype=theano.config.floatX)
y_train = np.asarray([1.0, 1.3, 3.1, 2.0, 5.0,
6.3, 6.6, 7.4, 8.0, 9.0],
dtype=theano.config.floatX)
Explanation: <br>
<br>
Wrapping things up: A linear regression example
[back to top]
Creating some training data.
End of explanation
import theano
from theano import tensor as T
import numpy as np
def train_linreg(X_train, y_train, eta, epochs):
costs = []
# Initialize arrays
eta0 = T.fscalar('eta0')
y = T.fvector(name='y')
X = T.fmatrix(name='X')
w = theano.shared(np.zeros(
shape=(X_train.shape[1] + 1),
dtype=theano.config.floatX),
name='w')
# calculate cost
net_input = T.dot(X, w[1:]) + w[0]
errors = y - net_input
cost = T.sum(T.pow(errors, 2))
# perform gradient update
gradient = T.grad(cost, wrt=w)
update = [(w, w - eta0 * gradient)]
# compile model
train = theano.function(inputs=[eta0],
outputs=cost,
updates=update,
givens={X: X_train,
y: y_train,})
for _ in range(epochs):
costs.append(train(eta))
return costs, w
Explanation: Implementing the training function.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
costs, w = train_linreg(X_train, y_train, eta=0.001, epochs=10)
plt.plot(range(1, len(costs)+1), costs)
plt.tight_layout()
plt.xlabel('Epoch')
plt.ylabel('Cost')
plt.tight_layout()
# plt.savefig('./figures/cost_convergence.png', dpi=300)
plt.show()
Explanation: Plotting the sum of squared errors cost vs epochs.
End of explanation
def predict_linreg(X, w):
Xt = T.matrix(name='X')
net_input = T.dot(Xt, w[1:]) + w[0]
predict = theano.function(inputs=[Xt], givens={w: w}, outputs=net_input)
return predict(X)
plt.scatter(X_train, y_train, marker='s', s=50)
plt.plot(range(X_train.shape[0]),
predict_linreg(X_train, w),
color='gray',
marker='o',
markersize=4,
linewidth=3)
plt.xlabel('x')
plt.ylabel('y')
plt.tight_layout()
# plt.savefig('./figures/linreg.png', dpi=300)
plt.show()
Explanation: Making predictions.
End of explanation
# note that first element (X[0] = 1) to denote bias unit
X = np.array([[1, 1.4, 1.5]])
w = np.array([0.0, 0.2, 0.4])
def net_input(X, w):
z = X.dot(w)
return z
def logistic(z):
return 1.0 / (1.0 + np.exp(-z))
def logistic_activation(X, w):
z = net_input(X, w)
return logistic(z)
print('P(y=1|x) = %.3f' % logistic_activation(X, w)[0])
Explanation: <br>
<br>
Choosing activation functions for feedforward neural networks
[back to top]
<br>
<br>
Logistic function recap
[back to top]
The logistic function, often just called "sigmoid function" is in fact a special case of a sigmoid function.
Net input $z$:
$$z = w_1x_{1} + \dots + w_mx_{m} = \sum_{j=1}^{m} x_{j}w_{j} \ = \mathbf{w}^T\mathbf{x}$$
Logistic activation function:
$$\phi_{logistic}(z) = \frac{1}{1 + e^{-z}}$$
Output range: (0, 1)
End of explanation
# W : array, shape = [n_output_units, n_hidden_units+1]
# Weight matrix for hidden layer -> output layer.
# note that first column (A[:][0] = 1) are the bias units
W = np.array([[1.1, 1.2, 1.3, 0.5],
[0.1, 0.2, 0.4, 0.1],
[0.2, 0.5, 2.1, 1.9]])
# A : array, shape = [n_hidden+1, n_samples]
# Activation of hidden layer.
# note that first element (A[0][0] = 1) is for the bias units
A = np.array([[1.0],
[0.1],
[0.3],
[0.7]])
# Z : array, shape = [n_output_units, n_samples]
# Net input of output layer.
Z = W.dot(A)
y_probas = logistic(Z)
print('Probabilities:\n', y_probas)
y_class = np.argmax(Z, axis=0)
print('predicted class label: %d' % y_class[0])
Explanation: Now, imagine a MLP perceptron with 3 hidden units + 1 bias unit in the hidden unit. The output layer consists of 3 output units.
End of explanation
def softmax(z):
return np.exp(z) / np.sum(np.exp(z))
def softmax_activation(X, w):
z = net_input(X, w)
return sigmoid(z)
y_probas = softmax(Z)
print('Probabilities:\n', y_probas)
y_probas.sum()
y_class = np.argmax(Z, axis=0)
y_class
Explanation: <br>
<br>
Estimating probabilities in multi-class classification via the softmax function
[back to top]
The softmax function is a generalization of the logistic function and allows us to compute meaningful class-probalities in multi-class settings (multinomial logistic regression).
$$P(y=j|z) =\phi_{softmax}(z) = \frac{e^{z_j}}{\sum_{k=1}^K e^{z_k}}$$
the input to the function is the result of K distinct linear functions, and the predicted probability for the j'th class given a sample vector x is:
Output range: (0, 1)
End of explanation
def tanh(z):
e_p = np.exp(z)
e_m = np.exp(-z)
return (e_p - e_m) / (e_p + e_m)
import matplotlib.pyplot as plt
%matplotlib inline
z = np.arange(-5, 5, 0.005)
log_act = logistic(z)
tanh_act = tanh(z)
# alternatives:
# from scipy.special import expit
# log_act = expit(z)
# tanh_act = np.tanh(z)
plt.ylim([-1.5, 1.5])
plt.xlabel('net input $z$')
plt.ylabel('activation $\phi(z)$')
plt.axhline(1, color='black', linestyle='--')
plt.axhline(0.5, color='black', linestyle='--')
plt.axhline(0, color='black', linestyle='--')
plt.axhline(-1, color='black', linestyle='--')
plt.plot(z, tanh_act,
linewidth=2,
color='black',
label='tanh')
plt.plot(z, log_act,
linewidth=2,
color='lightgreen',
label='logistic')
plt.legend(loc='lower right')
plt.tight_layout()
# plt.savefig('./figures/activation.png', dpi=300)
plt.show()
Explanation: <br>
<br>
Broadening the output spectrum using a hyperbolic tangent
[back to top]
Another special case of a sigmoid function, it can be interpreted as a rescaled version of the logistic function.
$$\phi_{tanh}(z) = \frac{e^{z}-e^{-z}}{e^{z}+e^{-z}}$$
Output range: (-1, 1)
End of explanation
import os
import struct
import numpy as np
def load_mnist(path, kind='train'):
Load MNIST data from `path`
labels_path = os.path.join(path,
'%s-labels-idx1-ubyte'
% kind)
images_path = os.path.join(path,
'%s-images-idx3-ubyte'
% kind)
with open(labels_path, 'rb') as lbpath:
magic, n = struct.unpack('>II',
lbpath.read(8))
labels = np.fromfile(lbpath,
dtype=np.uint8)
with open(images_path, 'rb') as imgpath:
magic, num, rows, cols = struct.unpack(">IIII",
imgpath.read(16))
images = np.fromfile(imgpath,
dtype=np.uint8).reshape(len(labels), 784)
return images, labels
X_train, y_train = load_mnist('mnist', kind='train')
print('Rows: %d, columns: %d' % (X_train.shape[0], X_train.shape[1]))
X_test, y_test = load_mnist('mnist', kind='t10k')
print('Rows: %d, columns: %d' % (X_test.shape[0], X_test.shape[1]))
Explanation: <br>
<br>
Keras
[back to top]
Loading MNIST
1) Download the 4 MNIST datasets from http://yann.lecun.com/exdb/mnist/
train-images-idx3-ubyte.gz: training set images (9912422 bytes)
train-labels-idx1-ubyte.gz: training set labels (28881 bytes)
t10k-images-idx3-ubyte.gz: test set images (1648877 bytes)
t10k-labels-idx1-ubyte.gz: test set labels (4542 bytes)
2) Unzip those files
3 Copy the unzipped files to a directory ./mnist
End of explanation
import theano
theano.config.floatX = 'float32'
X_train = X_train.astype(theano.config.floatX)
X_test = X_test.astype(theano.config.floatX)
Explanation: Multi-layer Perceptron in Keras
Once you have Theano installed, Keras can be installed via
pip install Keras
In order to run the following code via GPU, you can execute the Python script that was placed in this directory via
THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python mnist_keras_mlp.py
End of explanation
from keras.utils import np_utils
print('First 3 labels: ', y_train[:3])
y_train_ohe = np_utils.to_categorical(y_train)
print('\nFirst 3 labels (one-hot):\n', y_train_ohe[:3])
from keras.models import Sequential
from keras.layers.core import Dense
from keras.optimizers import SGD
np.random.seed(1)
model = Sequential()
model.add(Dense(input_dim=X_train.shape[1],
output_dim=50,
init='uniform',
activation='tanh'))
model.add(Dense(input_dim=50,
output_dim=50,
init='uniform',
activation='tanh'))
model.add(Dense(input_dim=50,
output_dim=y_train_ohe.shape[1],
init='uniform',
activation='softmax'))
sgd = SGD(lr=0.001, decay=1e-7, momentum=.9)
model.compile(loss='categorical_crossentropy', optimizer=sgd)
model.fit(X_train, y_train_ohe,
nb_epoch=50,
batch_size=300,
verbose=1,
validation_split=0.1,
show_accuracy=True)
y_train_pred = model.predict_classes(X_train, verbose=0)
print('First 3 predictions: ', y_train_pred[:3])
train_acc = np.sum(y_train == y_train_pred, axis=0) / X_train.shape[0]
print('Training accuracy: %.2f%%' % (train_acc * 100))
y_test_pred = model.predict_classes(X_test, verbose=0)
test_acc = np.sum(y_test == y_test_pred, axis=0) / X_test.shape[0]
print('Test accuracy: %.2f%%' % (test_acc * 100))
Explanation: One-hot encoding of the class variable:
End of explanation |
4,986 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
Step1: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Step2: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise
Step3: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
Step4: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
Step5: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.
Exercise
Step6: Text to vector function
Now we can write a function that converts a some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this
Step7: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[
Step8: Now, run through our entire review data set and convert each review to a word vector.
Step9: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
Step10: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_hidden, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. There for, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords
Step11: Intializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note
Step12: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit our the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
Step13: Testing
After you're satisified with your hyperparameters, you can run the network on the test set to measure it's performance. Remember, only do this after finalizing the hyperparameters.
Step14: Try out your own text! | Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
Explanation: Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
End of explanation
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
Explanation: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
End of explanation
from collections import Counter
total_counts = # bag of words here
print("Total words in data set: ", len(total_counts))
Explanation: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stores in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation).
End of explanation
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
Explanation: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
End of explanation
print(vocab[-1], ': ', total_counts[vocab[-1]])
Explanation: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
End of explanation
word2idx = ## create the word-to-index dictionary here
Explanation: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.
Exercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on.
End of explanation
def text_to_vector(text):
pass
Explanation: Text to vector function
Now we can write a function that converts a some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this:
Initialize the word vector with np.zeros, it should be the length of the vocabulary.
Split the input string of text into a list of words with .split(' ').
For each word in that list, increment the element in the index associated with that word, which you get from word2idx.
Note: Since all words aren't in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default returned value when you make a key error. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary.
End of explanation
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
Explanation: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
array([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])
```
End of explanation
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
Explanation: Now, run through our entire review data set and convert each review to a word vector.
End of explanation
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
test_fraction = 0.9
train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)
trainY
Explanation: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
End of explanation
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
model = tflearn.DNN(net)
return model
Explanation: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_hidden, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. There for, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with the categorical cross-entropy.
Finally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like
net = tflearn.input_data([None, 10]) # Input
net = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
End of explanation
model = build_model()
Explanation: Intializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.
End of explanation
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=10)
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit our the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
End of explanation
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
Explanation: Testing
After you're satisified with your hyperparameters, you can run the network on the test set to measure it's performance. Remember, only do this after finalizing the hyperparameters.
End of explanation
text = "This movie is so bad. It was awful and the worst"
positive_prob = model.predict([text_to_vector(text.lower())])[0][1]
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
Explanation: Try out your own text!
End of explanation |
4,987 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First TensorFlow Graphs
In this notebook, we execute elementary TensorFlow computational graphs.
Load dependencies
Step1: Simple arithmetic
Step2: Simple array arithmetic | Python Code:
import numpy as np
import tensorflow as tf
Explanation: First TensorFlow Graphs
In this notebook, we execute elementary TensorFlow computational graphs.
Load dependencies
End of explanation
x1 = tf.placeholder(tf.float32)
x2 = tf.placeholder(tf.float32)
sum_op = tf.add(x1, x2)
product_op = tf.multiply(x1, x2)
with tf.Session() as session:
sum_result = session.run(sum_op, feed_dict={x1: 2.0, x2: 0.5}) # run again with {x1: [2.0, 2.0, 2.0], x2: [0.5, 1.0, 2.0]}
product_result = session.run(product_op, feed_dict={x1: 2.0, x2: 0.5}) # ...and with {x1: [2.0, 4.0], x2: 0.5}
sum_result
product_result
Explanation: Simple arithmetic
End of explanation
with tf.Session() as session:
sum_result = session.run(sum_op, feed_dict={x1: [2.0, 2.0, 2.0], x2: [0.5, 1.0, 2.0]})
product_result = session.run(product_op, feed_dict={x1: [2.0, 4.0], x2: 0.5})
sum_result
product_result
Explanation: Simple array arithmetic
End of explanation |
4,988 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lecture 2
Step1: Mass-spring-damper system
The differential equation that governs an unforced, single degree-of-freedom mass-spring-damper system is
$$
m \frac{d^{2}y}{dt^{2}} + \lambda \frac{dy}{dt} + ky = 0
$$
To solve this problem using SymPy, we first define the symbols $t$ (time), $m$ (mass), $\lambda$ (damper coefficient) and $k$ (spring stiffness), and the function $y$ (displacement)
Step2: Note that we mis-spell $\lambda$ as lmbda because lambda is a protected keyword in Python.
Next, we define the differential equation, and print it to the screen
Step3: Checking the order of the ODE
Step4: and now classifying the ODE
Step5: we see as expected that the equation is linear, constant coefficient, homogeneous and second order.
The dsolve function solves the differential equation
Step6: The solution looks very complicated because we have not specified values for the constants $m$, $\lambda$ and $k$. The nature of the solution depends heavily on the relative values of the coefficients, as we will see later. We have four constants because the most general case the solution is complex, with two complex constants having four real coefficients.
Note that the solution is make up of expoential functions and sinusoidal functions. This is typical of second-order ODEs.
Second order, constant coefficient equation
We'll now solve
$$
\frac{d^{2}y}{dx^{2}} + 2 \frac{dy}{dx} - 3 y = 0
$$
The solution for this problem will appear simpler because we have concrete values for the coefficients.
Entering the differential equation
Step7: Solving this equation,
Step8: which is the general solution. As expected for a second-order equation, there are two constants.
Note that the general solution is of the form
$$
y = C_{1} e^{\lambda_{1} x} + C_{2} e^{\lambda_{2} x}
$$
The constants $\lambda_{1}$ and $\lambda_{2}$ are roots of the \emph{characteristic} equation
$$
\lambda^{2} + 2\lambda - 3 = 0
$$
This quadratic equation is trivial to solve, but for completeness we'll look at how to solve it using SymPy. We first define the quadratic equation
Step9: and then compute the roots | Python Code:
from sympy import *
# This initialises pretty printing
init_printing()
from IPython.display import display
# This command makes plots appear inside the browser window
%matplotlib inline
Explanation: Lecture 2: second-order ordinary differential equations
We now look at solving second-order ordinary differential equations using a computer algebra system.
To use SymPy, we first need to import it and call init_printing() to get nicely typeset equations:
End of explanation
t, m, lmbda, k = symbols("t m lambda k")
y = Function("y")
Explanation: Mass-spring-damper system
The differential equation that governs an unforced, single degree-of-freedom mass-spring-damper system is
$$
m \frac{d^{2}y}{dt^{2}} + \lambda \frac{dy}{dt} + ky = 0
$$
To solve this problem using SymPy, we first define the symbols $t$ (time), $m$ (mass), $\lambda$ (damper coefficient) and $k$ (spring stiffness), and the function $y$ (displacement):
End of explanation
eqn = Eq(m*Derivative(y(t), t, t) + lmbda*Derivative(y(t), t) + k*y(t), 0)
display(eqn)
Explanation: Note that we mis-spell $\lambda$ as lmbda because lambda is a protected keyword in Python.
Next, we define the differential equation, and print it to the screen:
End of explanation
print("This order of the ODE is: {}".format(ode_order(eqn, y(t))))
Explanation: Checking the order of the ODE:
End of explanation
print("Properties of the ODE are: {}".format(classify_ode(eqn)))
Explanation: and now classifying the ODE:
End of explanation
y = dsolve(eqn, y(t))
display(y)
Explanation: we see as expected that the equation is linear, constant coefficient, homogeneous and second order.
The dsolve function solves the differential equation:
End of explanation
y = Function("y")
x = symbols("x")
eqn = Eq(Derivative(y(x), x, x) + 2*Derivative(y(x), x) - 3*y(x), 0)
display(eqn)
Explanation: The solution looks very complicated because we have not specified values for the constants $m$, $\lambda$ and $k$. The nature of the solution depends heavily on the relative values of the coefficients, as we will see later. We have four constants because the most general case the solution is complex, with two complex constants having four real coefficients.
Note that the solution is make up of expoential functions and sinusoidal functions. This is typical of second-order ODEs.
Second order, constant coefficient equation
We'll now solve
$$
\frac{d^{2}y}{dx^{2}} + 2 \frac{dy}{dx} - 3 y = 0
$$
The solution for this problem will appear simpler because we have concrete values for the coefficients.
Entering the differential equation:
End of explanation
y1 = dsolve(eqn)
display(y1)
Explanation: Solving this equation,
End of explanation
eqn = Eq(lmbda**2 + 2*lmbda -3, 0)
display(eqn)
Explanation: which is the general solution. As expected for a second-order equation, there are two constants.
Note that the general solution is of the form
$$
y = C_{1} e^{\lambda_{1} x} + C_{2} e^{\lambda_{2} x}
$$
The constants $\lambda_{1}$ and $\lambda_{2}$ are roots of the \emph{characteristic} equation
$$
\lambda^{2} + 2\lambda - 3 = 0
$$
This quadratic equation is trivial to solve, but for completeness we'll look at how to solve it using SymPy. We first define the quadratic equation:
End of explanation
solve(eqn)
Explanation: and then compute the roots:
End of explanation |
4,989 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Week 3
Step1: Next we're going to write a polynomial function that takes an SArray and a maximal degree and returns an SFrame with columns containing the SArray to all the powers up to the maximal degree.
The easiest way to apply a power to an SArray is to use the .apply() and lambda x
Step2: We can create an empty SFrame using graphlab.SFrame() and then add any columns to it with ex_sframe['column_name'] = value. For example we create an empty SFrame and make the column 'power_1' to be the first power of tmp (i.e. tmp itself).
Step3: Polynomial_sframe function
Using the hints above complete the following function to create an SFrame consisting of the powers of an SArray up to a specific degree
Step4: To test your function consider the smaller tmp variable and what you would expect the outcome of the following call
Step5: Visualizing polynomial regression
Let's use matplotlib to visualize what a polynomial regression looks like on some real data.
Step6: As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.
Step7: Let's start with a degree 1 polynomial using 'sqft_living' (i.e. a line) to predict 'price' and plot what it looks like.
Step8: NOTE
Step9: Let's unpack that plt.plot() command. The first pair of SArrays we passed are the 1st power of sqft and the actual price we then ask it to print these as dots '.'. The next pair we pass is the 1st power of sqft and the predicted values from the linear model. We ask these to be plotted as a line '-'.
We can see, not surprisingly, that the predicted values all fall on a line, specifically the one with slope 280 and intercept -43579. What if we wanted to plot a second degree polynomial?
Step10: The resulting model looks like half a parabola. Try on your own to see what the cubic looks like
Step11: Now try a 15th degree polynomial
Step12: What do you think of the 15th degree polynomial? Do you think this is appropriate? If we were to change the data do you think you'd get pretty much the same curve? Let's take a look.
Changing the data and re-learning
We're going to split the sales data into four subsets of roughly equal size. Then you will estimate a 15th degree polynomial model on all four subsets of the data. Print the coefficients (you should use .print_rows(num_rows = 16) to view all of them) and plot the resulting fit (as we did above). The quiz will ask you some questions about these results.
To split the sales data into four subsets, we perform the following steps
Step13: Fit a 15th degree polynomial on set_1, set_2, set_3, and set_4 using sqft_living to predict prices. Print the coefficients and make a plot of the resulting model.
Step14: Some questions you will be asked on your quiz
Step15: Next you should write a loop that does the following
Step16: Quiz Question
Step17: Now that you have chosen the degree of your polynomial using validation data, compute the RSS of this model on TEST data. Report the RSS on your quiz.
Step18: Quiz Question | Python Code:
import graphlab
Explanation: Regression Week 3: Assessing Fit (polynomial regression)
In this notebook you will compare different regression models in order to assess which model fits best. We will be using polynomial regression as a means to examine this topic. In particular you will:
* Write a function to take an SArray and a degree and return an SFrame where each column is the SArray to a polynomial value up to the total degree e.g. degree = 3 then column 1 is the SArray column 2 is the SArray squared and column 3 is the SArray cubed
* Use matplotlib to visualize polynomial regressions
* Use matplotlib to visualize the same polynomial degree on different subsets of the data
* Use a validation set to select a polynomial degree
* Assess the final fit using test data
We will continue to use the House data from previous notebooks.
Fire up graphlab create
End of explanation
tmp = graphlab.SArray([1., 2., 3.])
tmp_cubed = tmp.apply(lambda x: x**3)
print tmp
print tmp_cubed
Explanation: Next we're going to write a polynomial function that takes an SArray and a maximal degree and returns an SFrame with columns containing the SArray to all the powers up to the maximal degree.
The easiest way to apply a power to an SArray is to use the .apply() and lambda x: functions.
For example to take the example array and compute the third power we can do as follows: (note running this cell the first time may take longer than expected since it loads graphlab)
End of explanation
ex_sframe = graphlab.SFrame()
ex_sframe['power_1'] = tmp
print ex_sframe
Explanation: We can create an empty SFrame using graphlab.SFrame() and then add any columns to it with ex_sframe['column_name'] = value. For example we create an empty SFrame and make the column 'power_1' to be the first power of tmp (i.e. tmp itself).
End of explanation
def polynomial_sframe(feature, degree):
# assume that degree >= 1
# initialize the SFrame:
poly_sframe = graphlab.SFrame()
# and set poly_sframe['power_1'] equal to the passed feature
poly_sframe['power_1'] = feature
# first check if degree > 1
if degree > 1:
# then loop over the remaining degrees:
# range usually starts at 0 and stops at the endpoint-1. We want it to start at 2 and stop at degree
for power in range(2, degree+1):
# first we'll give the column a name:
name = 'power_' + str(power)
# then assign poly_sframe[name] to the appropriate power of feature
poly_sframe[name] = feature.apply(lambda x : x**power)
return poly_sframe
Explanation: Polynomial_sframe function
Using the hints above complete the following function to create an SFrame consisting of the powers of an SArray up to a specific degree:
End of explanation
print polynomial_sframe(tmp, 3)
Explanation: To test your function consider the smaller tmp variable and what you would expect the outcome of the following call:
End of explanation
sales = graphlab.SFrame('kc_house_data.gl/')
sales.head()
Explanation: Visualizing polynomial regression
Let's use matplotlib to visualize what a polynomial regression looks like on some real data.
End of explanation
sales = sales.sort(['sqft_living', 'price'])
Explanation: As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.
End of explanation
poly1_data = polynomial_sframe(sales['sqft_living'], 1)
poly1_data['price'] = sales['price'] # add price to the data since it's the target
poly1_data
Explanation: Let's start with a degree 1 polynomial using 'sqft_living' (i.e. a line) to predict 'price' and plot what it looks like.
End of explanation
model1 = graphlab.linear_regression.create(poly1_data, target = 'price', features = ['power_1'], validation_set = None)
#let's take a look at the weights before we plot
model1.get("coefficients")
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(poly1_data['power_1'],poly1_data['price'],'.',
poly1_data['power_1'], model1.predict(poly1_data),'-')
Explanation: NOTE: for all the models in this notebook use validation_set = None to ensure that all results are consistent across users.
End of explanation
poly2_data = polynomial_sframe(sales['sqft_living'], 2)
my_features = poly2_data.column_names() # get the name of the features
poly2_data['price'] = sales['price'] # add price to the data since it's the target
model2 = graphlab.linear_regression.create(poly2_data, target = 'price', features = my_features, validation_set = None)
model2.get("coefficients")
plt.plot(poly2_data['power_1'],poly2_data['price'],'.',
poly2_data['power_1'], model2.predict(poly2_data),'-')
Explanation: Let's unpack that plt.plot() command. The first pair of SArrays we passed are the 1st power of sqft and the actual price we then ask it to print these as dots '.'. The next pair we pass is the 1st power of sqft and the predicted values from the linear model. We ask these to be plotted as a line '-'.
We can see, not surprisingly, that the predicted values all fall on a line, specifically the one with slope 280 and intercept -43579. What if we wanted to plot a second degree polynomial?
End of explanation
poly3_data = polynomial_sframe(sales['sqft_living'], 3)
my_features = poly3_data.column_names() # get the name of the features
poly3_data['price'] = sales['price'] # add price to the data since it's the target
model3 = graphlab.linear_regression.create(poly3_data, target = 'price', features = my_features, validation_set = None)
plt.plot(poly3_data['power_1'],poly3_data['price'],'.',
poly3_data['power_1'], model3.predict(poly3_data),'-')
Explanation: The resulting model looks like half a parabola. Try on your own to see what the cubic looks like:
End of explanation
poly15_data = polynomial_sframe(sales['sqft_living'], 15)
my_features = poly15_data.column_names() # get the name of the features
poly15_data['price'] = sales['price'] # add price to the data since it's the target
model15 = graphlab.linear_regression.create(poly15_data, target = 'price', features = my_features, validation_set = None)
plt.plot(poly15_data['power_1'],poly15_data['price'],'.',
poly15_data['power_1'], model15.predict(poly15_data),'-')
Explanation: Now try a 15th degree polynomial:
End of explanation
tmp_1, tmp_2 = sales.random_split(0.5, seed=0)
set_1, set_2 = tmp_1.random_split(0.5, seed=0)
set_3, set_4 = tmp_2.random_split(0.5, seed=0)
print "size of set_1 = " + str(len(set_1)) + " ; set_2 = " + str(len(set_2)) + " ; set_3 = " + str(len(set_3)) + " ; set_4 = " + str(len(set_4))
print "size of sales/4 = " + str(len(sales) / 4)
if (len(set_1) + len(set_2) + len(set_3) + len(set_4)) == len(sales):
print "assertion passed"
else:
print "check the code"
Explanation: What do you think of the 15th degree polynomial? Do you think this is appropriate? If we were to change the data do you think you'd get pretty much the same curve? Let's take a look.
Changing the data and re-learning
We're going to split the sales data into four subsets of roughly equal size. Then you will estimate a 15th degree polynomial model on all four subsets of the data. Print the coefficients (you should use .print_rows(num_rows = 16) to view all of them) and plot the resulting fit (as we did above). The quiz will ask you some questions about these results.
To split the sales data into four subsets, we perform the following steps:
* First split sales into 2 subsets with .random_split(0.5, seed=0).
* Next split the resulting subsets into 2 more subsets each. Use .random_split(0.5, seed=0).
We set seed=0 in these steps so that different users get consistent results.
You should end up with 4 subsets (set_1, set_2, set_3, set_4) of approximately equal size.
End of explanation
set_1_15_data = polynomial_sframe(set_1['sqft_living'], 15)
my_features = set_1_15_data.column_names() # get the name of the features
set_1_15_data['price'] = set_1['price'] # add price to the data since it's the target
model_set_1_15 = graphlab.linear_regression.create(
set_1_15_data,
target = 'price',
features = my_features,
validation_set = None
)
plt.plot(set_1_15_data['power_1'],set_1_15_data['price'],'.',
set_1_15_data['power_1'], model_set_1_15.predict(set_1_15_data),'-')
print "set_1"
model_set_1_15.get("coefficients").print_rows(16)
set_2_15_data = polynomial_sframe(set_2['sqft_living'], 15)
my_features = set_2_15_data.column_names() # get the name of the features
set_2_15_data['price'] = set_2['price'] # add price to the data since it's the target
model_set_2_15 = graphlab.linear_regression.create(
set_2_15_data,
target = 'price',
features = my_features,
validation_set = None
)
plt.plot(set_2_15_data['power_1'],set_2_15_data['price'],'.',
set_2_15_data['power_1'], model_set_2_15.predict(set_2_15_data),'-')
print "set_2"
model_set_2_15.get("coefficients").print_rows(16)
set_3_15_data = polynomial_sframe(set_3['sqft_living'], 15)
my_features = set_3_15_data.column_names() # get the name of the features
set_3_15_data['price'] = set_3['price'] # add price to the data since it's the target
model_set_3_15 = graphlab.linear_regression.create(
set_3_15_data,
target = 'price',
features = my_features,
validation_set = None
)
plt.plot(set_3_15_data['power_1'],set_3_15_data['price'],'.',
set_3_15_data['power_1'], model_set_3_15.predict(set_3_15_data),'-')
print "set_3"
model_set_3_15.get("coefficients").print_rows(16)
set_4_15_data = polynomial_sframe(set_4['sqft_living'], 15)
my_features = set_4_15_data.column_names() # get the name of the features
set_4_15_data['price'] = set_4['price'] # add price to the data since it's the target
model_set_4_15 = graphlab.linear_regression.create(
set_4_15_data,
target = 'price',
features = my_features,
validation_set = None
)
plt.plot(set_4_15_data['power_1'],set_4_15_data['price'],'.',
set_4_15_data['power_1'], model_set_4_15.predict(set_4_15_data),'-')
print "set_4"
model_set_4_15.get("coefficients").print_rows(16)
Explanation: Fit a 15th degree polynomial on set_1, set_2, set_3, and set_4 using sqft_living to predict prices. Print the coefficients and make a plot of the resulting model.
End of explanation
training_and_validation, testing = sales.random_split(0.9, seed=1)
training, validation = training_and_validation.random_split(0.5, seed=1)
Explanation: Some questions you will be asked on your quiz:
Quiz Question: Is the sign (positive or negative) for power_15 the same in all four models?
Quiz Question: (True/False) the plotted fitted lines look the same in all four plots
Selecting a Polynomial Degree
Whenever we have a "magic" parameter like the degree of the polynomial there is one well-known way to select these parameters: validation set. (We will explore another approach in week 4).
We split the sales dataset 3-way into training set, test set, and validation set as follows:
Split our sales data into 2 sets: training_and_validation and testing. Use random_split(0.9, seed=1).
Further split our training data into two sets: training and validation. Use random_split(0.5, seed=1).
Again, we set seed=1 to obtain consistent results for different users.
End of explanation
trained_models_history = []
validation_rss_history = []
for i in xrange(1, 16):
#obtain the model data
this_model_data = polynomial_sframe(training['sqft_living'], i)
my_features = this_model_data.column_names() # get the name of the features
this_model_data['price'] = training['price'] # add price to the data since it's the target
# learn the model for this degree on train data
this_model = graphlab.linear_regression.create(
this_model_data,
target = 'price',
features = my_features,
validation_set = None,
verbose=False
)
trained_models_history.append(this_model)
# find rss for the validation data
this_model_validation_data = polynomial_sframe(validation['sqft_living'], i)
this_model_prediction = this_model.predict(this_model_validation_data)
this_model_error = this_model_prediction - validation['price']
this_model_error_squared = this_model_error * this_model_error
this_model_rss = this_model_error_squared.sum()
print "Model " + str(i) + " validation rss = " + str(this_model_rss)
validation_rss_history.append(this_model_rss)
validation_rss_history
Explanation: Next you should write a loop that does the following:
* For degree in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] (to get this in python type range(1, 15+1))
* Build an SFrame of polynomial data of train_data['sqft_living'] at the current degree
* hint: my_features = poly_data.column_names() gives you a list e.g. ['power_1', 'power_2', 'power_3'] which you might find useful for graphlab.linear_regression.create( features = my_features)
* Add train_data['price'] to the polynomial SFrame
* Learn a polynomial regression model to sqft vs price with that degree on TRAIN data
* Compute the RSS on VALIDATION data (here you will want to use .predict()) for that degree and you will need to make a polynmial SFrame using validation data.
* Report which degree had the lowest RSS on validation data (remember python indexes from 0)
(Note you can turn off the print out of linear_regression.create() with verbose = False)
End of explanation
best_model = 6
print best_model
Explanation: Quiz Question: Which degree (1, 2, …, 15) had the lowest RSS on Validation data?
End of explanation
# find rss for the testing data
best_model_testing_data = polynomial_sframe(testing['sqft_living'], best_model)
best_model_test_prediction = trained_models_history[best_model - 1].predict(best_model_testing_data)
best_model_test_error = best_model_test_prediction - testing['price']
best_model_test_error_squared = best_model_test_error * best_model_test_error
best_model_test_rss = best_model_test_error_squared.sum()
Explanation: Now that you have chosen the degree of your polynomial using validation data, compute the RSS of this model on TEST data. Report the RSS on your quiz.
End of explanation
print "Testing rss on best model = " + str(best_model_test_rss)
Explanation: Quiz Question: what is the RSS on TEST data for the model with the degree selected from Validation data?
End of explanation |
4,990 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
Step2: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following
Step5: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
Step8: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint
Step10: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
Step12: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step17: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note
Step20: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling
Step23: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option
Step26: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step29: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step32: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model
Step35: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following
Step37: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
Step38: Hyperparameters
Tune the following parameters
Step40: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
Step42: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
Step45: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
# TODO: Implement Function
normalized_x = x / 255.
return normalized_x
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
# TODO: Implement Function
nb_classes = 10
return np.eye(nb_classes)[x]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a bach of image input
: image_shape: Shape of the images
: return: Tensor for image input.
# TODO: Implement Function
return tf.placeholder(tf.float32, shape=[None, *image_shape], name='x')
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
# TODO: Implement Function
return tf.placeholder(tf.float32, shape=[None, n_classes], name='y')
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
# TODO: Implement Function
return tf.placeholder(tf.float32, name='keep_prob')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernal size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
# TODO: Implement Function
weights = tf.Variable(tf.truncated_normal([*conv_ksize, x_tensor.get_shape().as_list()[3], conv_num_outputs],
stddev = 0.1))
bias = tf.Variable(tf.zeros(conv_num_outputs))
padding = 'SAME'
convex_layer = tf.nn.bias_add(tf.nn.conv2d(x_tensor, weights, [1, *conv_strides, 1], padding), bias)
convex_layer = tf.nn.relu(convex_layer)
max_pool_layer = tf.nn.max_pool(convex_layer, [1, *pool_ksize, 1], [1, *pool_strides, 1], padding)
return max_pool_layer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
# TODO: Implement Function
shape = x_tensor.get_shape().as_list()
dim = np.prod(shape[1:])
flatten_x_tensor = tf.reshape(x_tensor, [-1, dim])
return flatten_x_tensor
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
input_dim = x_tensor.get_shape().as_list()[1]
weight = tf.Variable(tf.truncated_normal([input_dim, num_outputs], stddev = 0.1))
bias = tf.Variable(tf.zeros(num_outputs))
fully_conn_layer = tf.add(tf.matmul(x_tensor, weight), bias)
fully_conn_layer = tf.nn.relu(fully_conn_layer)
return fully_conn_layer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
input_dim = x_tensor.get_shape().as_list()[1]
weight = tf.Variable(tf.truncated_normal([input_dim, num_outputs], stddev = 0.1))
bias = tf.Variable(tf.zeros(num_outputs))
output_layer = tf.add(tf.matmul(x_tensor, weight), bias)
return output_layer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_ksize = (5, 5)
conv_stride = (1, 1)
pool_ksize = (2, 2)
pool_strides = (2, 2)
conv2d_maxpool_layer = conv2d_maxpool(x, 64, conv_ksize, conv_stride, pool_ksize, pool_strides)
conv2d_maxpool_layer = conv2d_maxpool(conv2d_maxpool_layer, 128,
conv_ksize, conv_stride, pool_ksize, pool_strides)
conv2d_maxpool_layer = conv2d_maxpool(conv2d_maxpool_layer, 256,
conv_ksize, conv_stride, pool_ksize, pool_strides)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
flatten_layer = flatten(conv2d_maxpool_layer)
# flatten_layer = tf.nn.dropout(flatten_layer, keep_prob)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
fully_conn_layer = fully_conn(flatten_layer, 1024)
fully_conn_layer = tf.nn.dropout(fully_conn_layer, keep_prob)
# fully_conn_layer = fully_conn(flatten_layer, 128)
# fully_conn_layer = tf.nn.dropout(fully_conn_layer, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
output_layer = output(fully_conn_layer, 10)
# TODO: return output
return output_layer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
# TODO: Implement Function
feed_dict = {x: feature_batch, y: label_batch, keep_prob: keep_probability}
session.run(optimizer, feed_dict)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
loss = session.run(cost, feed_dict = {x: feature_batch,
y: label_batch,
keep_prob: 1.0})
valid_acc = session.run(accuracy, feed_dict = {x: valid_features,
y: valid_labels,
keep_prob: 1.0})
print('Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(loss, valid_acc))
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
# TODO: Tune Parameters
epochs = 10
batch_size = 128
keep_probability = 0.75
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or start overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation |
4,991 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
3. Data preparation
Step1: 3.1 Select Data
Outputs
Step2: 3.3 Construct Data
Outputs | Python Code:
import nltk
import pandas as pd
import math
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
from matplotlib import gridspec
from sklearn import datasets, linear_model
import numpy as np
from numbers import Number
from sklearn import preprocessing
def correlation_matrix(df,figsize=(15,15)):
from matplotlib import pyplot as plt
from matplotlib import cm as cm
fig = plt.figure(figsize=figsize)
ax1 = fig.add_subplot(111)
cmap = cm.get_cmap('jet', 30)
cax = ax1.imshow(df.corr(), interpolation="nearest", cmap=cmap)
ax1.grid(True)
# Add colorbar, make sure to specify tick locations to match desired ticklabels
fig.colorbar(cax, ticks=[.75,.8,.85,.90,.95,1])
plt.show()
train= pd.read_csv("../data/train.csv")
test=pd.read_csv('../data/test.csv')
Explanation: 3. Data preparation
End of explanation
train=train[train['LotArea']<55000]
train=train[train['LotFrontage']<300]
train=train[train['MasVnrArea']<1200]
train=train[train['MasVnrArea']<1200]
train=train[train['BsmtFinSF1']<5000]
train=train[train['BsmtFinSF2']<1400]
train=train[train['TotalBsmtSF']<3500]
train['Electrical'].dropna(inplace=True)
train=train[train['1stFlrSF']!=0]
train=train[train['1stFlrSF']<4000]
train=train[train['GrLivArea']!=0]
train=train[train['WoodDeckSF']<750]
train=train[train['OpenPorchSF']<400]
train=train[train['EnclosedPorch']<400]
dataset=pd.concat([train,test],keys=['train','test'])
del dataset['Utilities']
del dataset['TotalBsmtSF']
del dataset['TotRmsAbvGrd']
del dataset['GarageYrBlt']
del dataset['GarageCars']
dataset['LotFrontage'].fillna(dataset['LotFrontage'].mean(),inplace=True)
dataset['Alley'].fillna('XX',inplace=True)
dataset['MasVnrArea'].fillna(dataset['MasVnrArea'].mean(),inplace=True)
dataset['BsmtQual'].fillna('XX',inplace=True)
dataset['BsmtCond'].fillna('XX',inplace=True)
dataset['BsmtExposure'].fillna('XX',inplace=True)
dataset['BsmtFinSF1'].fillna(0,inplace=True)
dataset['BsmtFinSF2'].fillna(0,inplace=True)
dataset['BsmtUnfSF'].fillna(0,inplace=True)
dataset['BsmtFinType1'].fillna('XX',inplace=True)
dataset['BsmtFinType2'].fillna('XX',inplace=True)
dataset['BsmtFullBath'].fillna(dataset['BsmtFullBath'].mean(),inplace=True)
dataset['BsmtHalfBath'].fillna(dataset['BsmtHalfBath'].mean(),inplace=True)
dataset['MiscFeature'].fillna('XX',inplace=True)
dataset['MiscVal'].fillna(0,inplace=True)
dataset['FireplaceQu'].fillna('XX',inplace=True)
dataset['GarageType'].fillna('XX',inplace=True)
dataset['GarageFinish'].fillna('XX',inplace=True)
dataset['GarageQual'].fillna('XX',inplace=True)
dataset['GarageCond'].fillna('XX',inplace=True)
dataset['GarageArea'].fillna(0,inplace=True)
dataset['PoolQC'].fillna('XX',inplace=True)
dataset['Fence'].fillna('XX',inplace=True)
dataset.loc['train'].to_csv('../data/train_cleaned.csv')
dataset.loc['test'].to_csv('../data/test_cleaned.csv')
Explanation: 3.1 Select Data
Outputs:
Rationale for Inclusion/Exclusion
3.2 Clean Data
Outputs:
Data Cleaning Report
End of explanation
dataset['MSSubClass'] = dataset['MSSubClass'].apply(str)
Numeric_columns=['LotFrontage',
'LotArea',
'OverallQual',
'OverallCond',
'YearBuilt',
'YearRemodAdd',
'MasVnrArea',
'BsmtFinSF1',
'BsmtFinSF2',
'BsmtUnfSF',
'1stFlrSF',
'2ndFlrSF',
'LowQualFinSF',
'GrLivArea',
'BsmtFullBath',
'BsmtHalfBath',
'FullBath',
'HalfBath',
'BedroomAbvGr',
'KitchenAbvGr',
'Fireplaces',
'GarageArea',
'WoodDeckSF',
'OpenPorchSF',
'EnclosedPorch',
'3SsnPorch',
'ScreenPorch',
'PoolArea',
'MiscVal',
'MoSold',
'YrSold']
for i in Numeric_columns:
dataset[i]=preprocessing.scale(dataset[i])
dataset=pd.get_dummies(dataset)
train_dummied=dataset.loc['train']
test_dummied=dataset.loc['test']
train_dummied=train_dummied.set_index('Id')
test_dummied=test_dummied.set_index('Id')
train_dummied.to_csv('../data/train_dummied.csv')
test_dummied.to_csv('../data/test_dummied.csv')
#correlation_matrix(dataset_dummied)
Explanation: 3.3 Construct Data
Outputs:
Derived Attributes
Generated Records
3.4 Integrate Data
Outputs:
Merged Data
3.5 Format Data
Outputs:
Reformatted Data
Dataset
Dataset Description
End of explanation |
4,992 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nerc', 'sandbox-3', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: NERC
Source ID: SANDBOX-3
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:27
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmopshere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependancies on snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river soil hydrology in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General describe how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treament of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dyanmics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen depencence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintainence respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the notrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If couple to atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
4,993 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Today's meeting opened the topic of building interactive figures in Python. This notebook will show an example of using ipywidgets module, more specifically interact() function. The full documentation can be found on the ipywidgets website, but beware
Step1: In the beginning there was a sine wave
Imagine you have a plotting function with two arguments, for example, a line plot of a sine wave.
First, import the necessary modules and tell matplotlib to embed figures in the notebook.
Step2: Next, define a 1d array to pass into sin() function.
Step3: Define a trivial function to plot a sine wave depending on frequency and amplitude inputs.
Step4: Test it with arbitrary arguments
Step5: Changing arguments
Now, if you want to see how the arguments affect the result you would need to rerun the cell above over and over again. Luckily, the ipywidgets make it more fun.
Step6: Just pass the function name into interact() as a first argument. Then add its arguments and their respective range (start, stop, step)
Step8: And voila, you can change frequency and amplitude interactively using the two independent sliders.
Another example
Of course, interact() can be used not only for plotting. For example, using code from this StackOverflow answer, we can print out a sequence of prime numbers smaller than a given number n.
Step9: And then make the function interactive
Step10: ipywidgets + contourf + real data
How can we apply interact() to real data analysis in Earth sciences? Well, one of the trivial application is to explore N-dimensional fields stored NetCDF files.
Step11: As a sample data file we will use the same data.nc file from previous examples.
Step12: Here we create a function ncfun(), whose arguments are
Step13: This function is easily wrapped by interact()
Step14: ncview clone in Jupyter
Tools
Step16: For colour schemes we will use palettable package (brewer2mpl successor). It is available on PyPi (pip install palettable).
Step18: The interesting part is below. We use another function that have only one argument - a file name. It opens the file and then allows us to choose a variable to plot (in the previous example we had to know variable names prior to executing the function). | Python Code:
import warnings
warnings.filterwarnings('ignore')
Explanation: Today's meeting opened the topic of building interactive figures in Python. This notebook will show an example of using ipywidgets module, more specifically interact() function. The full documentation can be found on the ipywidgets website, but beware: since the project is young and is evolving quickly, the documentation can be incomplete or sometimes outdated.
The examples below were bluntly taken from Nikolay Koldunov's post on his awesome EarthPy blog. Note that since then IPython.html.widgets migrated into a separate package.
There are dozens of examples on the web on how to use ipywidgets in many cool ways. You'd better start from the project's collection of notebooks on GitHub.
<div class="alert alert-warning">
In the static HTML version of this notebook the interactive mode is unavailable.
<b>To play with figures you can switch to <a href=http://mybinder.org/repo/ueapy/interactive_notebooks>binder</a>.</b>
Or download the notebook using the link in the end and launch it on your machine.
Hopefully, future releases of ipywidgets will include <a href=https://jakevdp.github.io/blog/2013/12/05/static-interactive-widgets/>static widgets</a>.
</div>
End of explanation
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
Explanation: In the beginning there was a sine wave
Imagine you have a plotting function with two arguments, for example, a line plot of a sine wave.
First, import the necessary modules and tell matplotlib to embed figures in the notebook.
End of explanation
x = np.linspace(0,1,100)
Explanation: Next, define a 1d array to pass into sin() function.
End of explanation
def pltsin(freq, ampl):
y = ampl*np.sin(2*np.pi*x*freq)
plt.plot(x, y)
plt.ylim(-10,10) # fix limits of the vertical axis
Explanation: Define a trivial function to plot a sine wave depending on frequency and amplitude inputs.
End of explanation
pltsin(10, 3)
Explanation: Test it with arbitrary arguments:
End of explanation
from ipywidgets import interact
Explanation: Changing arguments
Now, if you want to see how the arguments affect the result you would need to rerun the cell above over and over again. Luckily, the ipywidgets make it more fun.
End of explanation
_ = interact(pltsin, freq=(1,10,0.1), ampl=(1,10,1))
Explanation: Just pass the function name into interact() as a first argument. Then add its arguments and their respective range (start, stop, step):
End of explanation
def primesfrom3to(n):
Returns a array of primes, 3 <= p < n
sieve = np.ones(n//2, dtype=np.bool)
for i in range(3,int(n**0.5)+1,2):
if sieve[i//2]:
sieve[i*i//2::i] = False
res = 2*np.nonzero(sieve)[0][1::]+1
seq = ''
for i in res:
seq += ' {}'.format(i)
return seq[1:]
Explanation: And voila, you can change frequency and amplitude interactively using the two independent sliders.
Another example
Of course, interact() can be used not only for plotting. For example, using code from this StackOverflow answer, we can print out a sequence of prime numbers smaller than a given number n.
End of explanation
_ = interact(primesfrom3to, n=(3,100,1)) # _ used to suppress output
Explanation: And then make the function interactive:
End of explanation
import netCDF4 as nc
Explanation: ipywidgets + contourf + real data
How can we apply interact() to real data analysis in Earth sciences? Well, one of the trivial application is to explore N-dimensional fields stored NetCDF files.
End of explanation
fpath = '../data/data.nc'
Explanation: As a sample data file we will use the same data.nc file from previous examples.
End of explanation
def ncfun(filename, varname='', time=0, lev=0):
with nc.Dataset(filename) as da:
arr = da.variables[varname][:]
lon = da.variables['longitude'][:]
lat = da.variables['latitude'][:]
fig = plt.figure(figsize=(8,5))
ax = fig.add_subplot(111)
c = ax.contourf(lon, lat, arr[time, lev, ...], cmap='viridis')
fig.colorbar(c, ax=ax, shrink=0.5)
Explanation: Here we create a function ncfun(), whose arguments are:
NetCDF file name
name of one of the variables stored in that file
assuming we have 4d-arrays, time and level indices (=0 by default).
In a nutshell, the function opens a file using netCDF4 module, reads the variable labelled varname, as well as longitude and latitude arrays, and then displays lon-lat horizontal cross-section.
End of explanation
_ = interact(ncfun, filename=fpath,
varname=['u','v'],
time=(0,1,1), lev=(0,3,1))
Explanation: This function is easily wrapped by interact():
End of explanation
import iris
import cartopy.crs as ccrs
iris.FUTURE.netcdf_promote = True # see explanation in previous posts
Explanation: ncview clone in Jupyter
Tools: iris, cartopy, ipywidgets
We can improve that function and effectively create a clone of the ncview or Panoply. We also will use the capabilities of iris and cartopy packages.
End of explanation
import palettable
def plot_cube(cube, time=0, lev=0, cmap='viridis'):
Display a cross-section of iris.cube.Cube on a map
# Get cube data and extract a 2d lon-lat slice
arr = cube.data[time, lev, ...]
# Find longitudes and latitudes
lon = cube.coords(axis='x')[0].points
lat = cube.coords(axis='y')[0].points
# Create a figure with the size 8x5 inches
fig = plt.figure(figsize=(8,5))
# Create a geo-references Axes inside the figure
ax = fig.add_subplot(111, projection=ccrs.PlateCarree())
# Plot coastlines
ax.coastlines()
# Plot the data as filled contour map
c = ax.contourf(lon, lat, arr, cmap=cmap)
# Attach a colorbar shrinked by 50%
fig.colorbar(c, ax=ax, shrink=0.5)
Explanation: For colour schemes we will use palettable package (brewer2mpl successor). It is available on PyPi (pip install palettable).
End of explanation
def iris_view(filename):
Interactively display NetCDF data
# Load file as iris.cube.CubeList
cubelist = iris.load(filename)
# Create a dict of variable names and iris cubes
vardict = {i.name(): cubelist.extract(i.name())[0] for i in cubelist}
# Use sequential colorbrewer palettes for colormap keyword
cmaps = [i for i in palettable.colorbrewer.COLOR_MAPS['Sequential']]
interact(plot_cube,
cube=vardict,
time=(0,1,1),
lev=(0,3,1),
cmap=cmaps)
iris_view(fpath)
Explanation: The interesting part is below. We use another function that have only one argument - a file name. It opens the file and then allows us to choose a variable to plot (in the previous example we had to know variable names prior to executing the function).
End of explanation |
4,994 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Integration Exercise 1
Imports
Step2: Trapezoidal rule
The trapezoidal rule generates a numerical approximation to the 1d integral
Step3: Now use scipy.integrate.quad to integrate the f and g functions and see how the result compares with your trapz function. Print the results and errors. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy import integrate
Explanation: Integration Exercise 1
Imports
End of explanation
def trapz(f, a, b, N):
Integrate the function f(x) over the range [a,b] with N points.
k = np.arange(1,N)
h = (b-a)/N
I = h*0.5*f(a) + h*0.5*f(b) + h*f(a+k*h).sum()
return I
f = lambda x: x**2
g = lambda x: np.sin(x)
I = trapz(f, 0, 1, 1000)
assert np.allclose(I, 0.33333349999999995)
J = trapz(g, 0, np.pi, 1000)
assert np.allclose(J, 1.9999983550656628)
Explanation: Trapezoidal rule
The trapezoidal rule generates a numerical approximation to the 1d integral:
$$ I(a,b) = \int_a^b f(x) dx $$
by dividing the interval $[a,b]$ into $N$ subdivisions of length $h$:
$$ h = (b-a)/N $$
Note that this means the function will be evaluated at $N+1$ points on $[a,b]$. The main idea of the trapezoidal rule is that the function is approximated by a straight line between each of these points.
Write a function trapz(f, a, b, N) that performs trapezoidal rule on the function f over the interval $[a,b]$ with N subdivisions (N+1 points).
End of explanation
print(trapz(f, 0, 1, 1000))
print(integrate.quad(f, 0, 1)[0])
print('error: '+ str(integrate.quad(f, 0, 1)[1]))
print(trapz(g, 0, np.pi, 1000))
print(integrate.quad(g, 0, np.pi)[0])
print('error: '+ str(integrate.quad(g, 0, np.pi)[1]))
assert True # leave this cell to grade the previous one
Explanation: Now use scipy.integrate.quad to integrate the f and g functions and see how the result compares with your trapz function. Print the results and errors.
End of explanation |
4,995 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Live-updating multi-tau one-time correlation with synthetic and real data
Step1: First, let's demo with synthetic data.
The plot a few cells down should live update with the first value of approximately sqrt(2) and the remaining values should be essentially one
Step2: Define a helper function to process and update the live plot
Step3: Show the correlator working with synthetic data
Step4: Now let's do it with some real data
Step5: Plot the ROIs over the averaged image
Step6: Use the class-based partial data correlator in scikit-beam
Step7: Show incremental updates with the generator implementation of the correlator
Step8: multi_tau_auto_corr used to be the reference implementation. It now wraps the generator implementation
Step9: Demonstrate that the two implementations produce the same result | Python Code:
from skbeam.core.correlation import lazy_one_time
import numpy as np
import time as ttime
import matplotlib.pyplot as plt
%matplotlib notebook
Explanation: Live-updating multi-tau one-time correlation with synthetic and real data
End of explanation
num_levels = 5
num_bufs = 4 # must be even
xdim = 512
ydim = 512
stack_size = 100
synthetic_data = np.random.randint(1, 10, (stack_size, xdim, ydim))
rois = np.zeros_like(synthetic_data[0])
# make sure that the ROIs can be any integers greater than 1. They do not
# have to start at 1 and be continuous
rois[0:xdim//10, 0:ydim//10] = 5
rois[xdim//10:xdim//5, ydim//10:ydim//5] = 3
Explanation: First, let's demo with synthetic data.
The plot a few cells down should live update with the first value of approximately sqrt(2) and the remaining values should be essentially one
End of explanation
def update_plot(ax, g2s, lags, img_num):
ax.cla()
for n, g2 in enumerate(g2s.T):
ax.plot(lags[:len(g2)], g2, '-o', label='roi%s' % n)
ax.set_title('processed %s images' % img_num)
ax.legend(loc=0)
ax.set_xlabel('Log time (s)')
ax.set_ylabel('Correlation')
try:
ax.set_xscale('log')
ax.figure.canvas.draw()
except ValueError:
# this happens on the first few draws
ax.set_xscale('linear')
ax.figure.canvas.draw()
ttime.sleep(0.001)
Explanation: Define a helper function to process and update the live plot
End of explanation
fig, ax = plt.subplots()
corr_gen = lazy_one_time(synthetic_data, num_levels, num_bufs, rois)
for counter, res in enumerate(corr_gen):
# only update the plot every 5th image processed.
if counter % 5 == 0:
update_plot(ax, res.g2, res.lag_steps, counter)
Explanation: Show the correlator working with synthetic data
End of explanation
from skbeam.core import roi
from xray_vision.mpl_plotting import show_label_array
# multi-tau scheme info
real_data_levels = 7
real_data_bufs = 8
real_data = np.load("100_500_NIPA_GEL.npy")
# generate some circular ROIs
# define the ROIs
roi_start = 65 # in pixels
roi_width = 9 # in pixels
roi_spacing = (5.0, 4.0)
x_center = 7. # in pixels
y_center = (129.) # in pixels
num_rings = 3
# get the edges of the rings
edges = roi.ring_edges(roi_start, width=roi_width,
spacing=roi_spacing, num_rings=num_rings)
# get the label array from the ring shaped 3 region of interests(ROI's)
labeled_roi_array = roi.rings(
edges, (y_center, x_center), real_data.shape[1:])
Explanation: Now let's do it with some real data
End of explanation
fig, ax = plt.subplots()
ax.imshow(np.sum(real_data, axis=0) / len(real_data))
show_label_array(ax, labeled_roi_array)
Explanation: Plot the ROIs over the averaged image
End of explanation
fig2, ax2 = plt.subplots()
ax2.set_xscale('log')
Explanation: Use the class-based partial data correlator in scikit-beam
End of explanation
gen = lazy_one_time(real_data, real_data_levels, real_data_bufs, labeled_roi_array)
for counter, result in enumerate(gen):
# update image every 10th image for performance
if counter % 10 == 0:
update_plot(ax2, result.g2, result.lag_steps, counter)
else:
# do a final update to get the last bit
update_plot(ax2, result.g2, result.lag_steps, counter)
Explanation: Show incremental updates with the generator implementation of the correlator
End of explanation
from skbeam.core.correlation import multi_tau_auto_corr
%%timeit -n5
pass
gen = lazy_one_time(real_data, real_data_levels, real_data_bufs, labeled_roi_array)
results = list(gen)[-1]
results.g2, results.lag_steps
%%timeit -n5
pass
g2, lags = multi_tau_auto_corr(
real_data_levels, real_data_bufs, labeled_roi_array, (im for im in real_data))
Explanation: multi_tau_auto_corr used to be the reference implementation. It now wraps the generator implementation
End of explanation
gen = lazy_one_time(real_data, real_data_levels, real_data_bufs, labeled_roi_array)
results = list(gen)
g2, lags = multi_tau_auto_corr(
real_data_levels, real_data_bufs, labeled_roi_array, (im for im in real_data))
assert np.all(results[-1].g2 == g2)
import skbeam
print(skbeam.__version__)
Explanation: Demonstrate that the two implementations produce the same result
End of explanation |
4,996 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MatPlotLib Basics
Draw a line graph
Step1: Mutiple Plots on One Graph
Step2: Save it to a File
Step3: Adjust the Axes
Step4: Add a Grid
Step5: Change Line Types and Colors
Step6: Labeling Axes and Adding a Legend
Step7: XKCD Style
Step8: Pie Chart
Step9: Bar Chart
Step10: Scatter Plot
Step11: Histogram
Step12: Box & Whisker Plot
Useful for visualizing the spread & skew of data.
The red line represents the median of the data, and the box represents the bounds of the 1st and 3rd quartiles.
So, half of the data exists within the box.
The dotted-line "whiskers" indicate the range of the data - except for outliers, which are plotted outside the whiskers. Outliers are 1.5X or more the interquartile range.
This example below creates uniformly distributed random numbers between -40 and 60, plus a few outliers above 100 and below -100
Step13: Activity
Try creating a scatter plot representing random data on age vs. time spent watching TV. Label the axes. | Python Code:
%matplotlib inline
from scipy.stats import norm
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(-3, 3, 0.001)
plt.plot(x, norm.pdf(x))
plt.show()
Explanation: MatPlotLib Basics
Draw a line graph
End of explanation
plt.plot(x, norm.pdf(x))
plt.plot(x, norm.pdf(x, 1.0, 0.5))
plt.show()
Explanation: Mutiple Plots on One Graph
End of explanation
plt.plot(x, norm.pdf(x))
plt.plot(x, norm.pdf(x, 1.0, 0.5))
plt.savefig('C:\\Users\\anirban\\Dropbox\\DataScience\\DataScience\\MyPlot.png', format='png')
Explanation: Save it to a File
End of explanation
axes = plt.axes()
axes.set_xlim([-5, 5])
axes.set_ylim([0, 1.0])
axes.set_xticks([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5])
axes.set_yticks([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
plt.plot(x, norm.pdf(x))
plt.plot(x, norm.pdf(x, 1.0, 0.5))
plt.show()
Explanation: Adjust the Axes
End of explanation
axes = plt.axes()
axes.set_xlim([-5, 5])
axes.set_ylim([0, 1.0])
axes.set_xticks([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5])
axes.set_yticks([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
axes.grid()
plt.plot(x, norm.pdf(x))
plt.plot(x, norm.pdf(x, 1.0, 0.5))
plt.show()
Explanation: Add a Grid
End of explanation
axes = plt.axes()
axes.set_xlim([-5, 5])
axes.set_ylim([0, 1.0])
axes.set_xticks([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5])
axes.set_yticks([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
axes.grid()
plt.plot(x, norm.pdf(x), 'b-')
plt.plot(x, norm.pdf(x, 1.0, 0.5), 'g-.')
plt.show()
Explanation: Change Line Types and Colors
End of explanation
axes = plt.axes()
axes.set_xlim([-5, 5])
axes.set_ylim([0, 1.0])
axes.set_xticks([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5])
axes.set_yticks([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
axes.grid()
plt.xlabel('Greebles')
plt.ylabel('Probability')
plt.plot(x, norm.pdf(x), 'b-')
plt.plot(x, norm.pdf(x, 1.0, 0.5), 'r:')
plt.legend(['Sneetches', 'Gacks'], loc=4)
plt.show()
Explanation: Labeling Axes and Adding a Legend
End of explanation
plt.xkcd()
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
plt.xticks([])
plt.yticks([])
ax.set_ylim([-30, 10])
data = np.ones(100)
data[70:] -= np.arange(30)
plt.annotate(
'THE DAY I REALIZED\nI COULD COOK BACON\nWHENEVER I WANTED',
xy=(70, 1), arrowprops=dict(arrowstyle='->'), xytext=(15, -10))
plt.plot(data)
plt.xlabel('time')
plt.ylabel('my overall health')
Explanation: XKCD Style :)
End of explanation
# Remove XKCD mode:
plt.rcdefaults()
values = [12, 55, 4, 32, 14]
colors = ['r', 'g', 'b', 'c', 'm']
explode = [0.2, 0, 0.0, 0, 0]
labels = ['India', 'United States', 'Russia', 'China', 'Europe']
plt.pie(values, colors= colors, labels=labels, explode = explode)
plt.title('Student Locations')
plt.show()
Explanation: Pie Chart
End of explanation
values = [12, 55, 4, 32, 14]
colors = ['r', 'g', 'b', 'c', 'm']
plt.bar(range(0,5), values, color= colors)
plt.show()
Explanation: Bar Chart
End of explanation
from pylab import randn
X = randn(500)
Y = randn(500)
plt.scatter(X,Y)
plt.show()
Explanation: Scatter Plot
End of explanation
incomes = np.random.normal(27000, 15000, 10000)
plt.hist(incomes, 50)
plt.show()
Explanation: Histogram
End of explanation
uniformSkewed = np.random.rand(100) * 100 - 40
high_outliers = np.random.rand(10) * 50 + 100
low_outliers = np.random.rand(10) * -50 - 100
data = np.concatenate((uniformSkewed, high_outliers, low_outliers))
plt.boxplot(data)
plt.show()
Explanation: Box & Whisker Plot
Useful for visualizing the spread & skew of data.
The red line represents the median of the data, and the box represents the bounds of the 1st and 3rd quartiles.
So, half of the data exists within the box.
The dotted-line "whiskers" indicate the range of the data - except for outliers, which are plotted outside the whiskers. Outliers are 1.5X or more the interquartile range.
This example below creates uniformly distributed random numbers between -40 and 60, plus a few outliers above 100 and below -100:
End of explanation
import numpy as np
axes = plt.axes()
axes.grid()
plt.xlabel('time')
plt.ylabel('age')
X = np.random.randint(low=0, high =24, size =100)
Y = np.random.randint(low=0, high =100, size =100)
plt.scatter(X,Y)
plt.show()
x.cov(Y)
Explanation: Activity
Try creating a scatter plot representing random data on age vs. time spent watching TV. Label the axes.
End of explanation |
4,997 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic Concepts
What is "learning from data"?
In general Learning from Data is a scientific discipline that is concerned with the design and development of algorithms that allow computers to infer (from data) a model that allows compact representation (unsupervised learning) and/or good generalization (supervised learning).
This is an important technology because it enables computational systems to adaptively improve their performance with experience accumulated from the observed data.
Most of these algorithms are based on the iterative solution of a mathematical problem that involves data and model. If there was an analytical solution to the problem, this should be the adopted one, but this is not the case for most of the cases.
So, the most common strategy for learning from data is based on solving a system of equations as a way to find a series of parameters of the model that minimizes a mathematical problem. This is called optimization.
The most important technique for solving optimization problems is gradient descend.
Preliminary
Step1: The limit as $h$ approaches zero, if it exists, should represent the slope of the tangent line to $(x, f(x))$.
For values that are not zero it is only an approximation.
Step2: It can be shown that the “centered difference formula" is better when computing numerical derivatives
Step3: There are two problems with numerical derivatives
Step4: Second approach
To find the local minimum using gradient descend
Step5: To fix this, we multiply the gradient by a step size. This step size (often called alpha) has to be chosen carefully, as a value too small will result in a long computation time, while a value too large will not give you the right result (by overshooting) or even fail to converge.
In this example, we'll set the step size to 0.01, which means we'll subtract $24×0.01$ from $15$, which is $14.76$.
This is now our new temporary local minimum
Step6: An important feature of gradient descent is that there should be a visible improvement over time
Step7: From derivatives to gradient
Step8: The function we have evaluated, $f({\mathbf x}) = x_1^2+x_2^2+x_3^2$, is $3$ at $(1,1,1)$ and the gradient vector at this point is $(2,2,2)$.
Then, we can follow this steps to maximize (or minimize) the function
Step9: Let's start by choosing a random vector and then walking a step in the opposite direction of the gradient vector. We will stop when the difference between the new solution and the old solution is less than a tolerance value.
Step10: Alpha
The step size, alpha, is a slippy concept
Step11: Learning from data
In general, we have
Step12: Stochastic Gradient Descend
The last function evals the whole dataset $(\mathbf{x}_i,y_i)$ at every step.
If the dataset is large, this strategy is too costly. In this case we will use a strategy called SGD (Stochastic Gradient Descend).
When learning from data, the cost function is additive
Step13: Exercise
Step14: Complete the following code in order to
Step15: Mini-batch Gradient Descent
In code, general batch gradient descent looks something like this
Step16: Loss Funtions
Loss functions $L(y, f(\mathbf{x})) = \frac{1}{n} \sum_i \ell(y_i, f(\mathbf{x_i}))$ represent the price paid for inaccuracy of predictions in classification/regression problems.
In classification this function is often the zero-one loss, that is, $ \ell(y_i, f(\mathbf{x_i}))$ is zero when $y_i = f(\mathbf{x}_i)$ and one otherwise.
This function is discontinuous with flat regions and is thus extremely hard to optimize using gradient-based methods. For this reason it is usual to consider a proxy to the loss called a surrogate loss function. For computational reasons this is usually convex function. Here we have some examples | Python Code:
# numerical derivative at a point x
def f(x):
return x**2
def fin_dif(x,
f,
h = 0.00001):
'''
This method returns the derivative of f at x
by using the finite difference method
'''
return (f(x+h) - f(x))/h
x = 2.0
print "{:2.4f}".format(fin_dif(x,f))
Explanation: Basic Concepts
What is "learning from data"?
In general Learning from Data is a scientific discipline that is concerned with the design and development of algorithms that allow computers to infer (from data) a model that allows compact representation (unsupervised learning) and/or good generalization (supervised learning).
This is an important technology because it enables computational systems to adaptively improve their performance with experience accumulated from the observed data.
Most of these algorithms are based on the iterative solution of a mathematical problem that involves data and model. If there was an analytical solution to the problem, this should be the adopted one, but this is not the case for most of the cases.
So, the most common strategy for learning from data is based on solving a system of equations as a way to find a series of parameters of the model that minimizes a mathematical problem. This is called optimization.
The most important technique for solving optimization problems is gradient descend.
Preliminary: Nelder-Mead method for function minimization.
The most simple thing we can try to minimize a function $f(x)$ would be to sample two points relatively near each other, and just repeatedly take a step down away from the largest value. This simple algorithm has a severe limitation: it can't get closer to the true minima than the step size.
The Nelder-Mead method dynamically adjusts the step size based off the loss of the new point. If the new point is better than any previously seen value, it expands the step size to accelerate towards the bottom. Likewise if the new point is worse it contracts the step size to converge around the minima. The usual settings are to half the step size when contracting and double the step size when expanding.
This method can be easily extended into higher dimensional examples, all that's required is taking one more point than there are dimensions. Then, the simplest approach is to replace the worst point with a point reflected through the centroid of the remaining n points. If this point is better than the best current point, then we can try stretching exponentially out along this line. On the other hand, if this new point isn't much better than the previous value, then we are stepping across a valley, so we shrink the step towards a better point.
See "An Interactive Tutorial on Numerical Optimization": http://www.benfrederickson.com/numerical-optimization/
Gradient descend (for hackers) for function minimization: 1-D
Let's suppose that we have a function $f: \Re \rightarrow \Re$. For example:
$$f(x) = x^2$$
Our objective is to find the argument $x$ that minimizes this function (for maximization, consider $-f(x)$). To this end, the critical concept is the derivative.
The derivative of $f$ of a variable $x$, $f'(x)$ or $\frac{\mathrm{d}f}{\mathrm{d}x}$, is a measure of the rate at which the value of the function changes with respect to the change of the variable. It is defined as the following limit:
$$ f'(x) = \lim_{h \rightarrow 0} \frac{f(x + h) - f(x)}{h} $$
The derivative specifies how to scale a small change in the input in order to obtain the corresponding change in the output:
$$ f(x + h) \approx f(x) + h f'(x)$$
End of explanation
for h in np.linspace(0.0, 1.0 , 5):
print "{:3.6f}".format(f(5+h)), "{:3.6f}".format(f(5)+h*fin_dif(5,f))
x = np.linspace(-1.5,-0.5, 100)
f = [i**2 for i in x]
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
plt.plot(x,f, 'r-')
plt.plot([-1.5, -0.5], [2, 0.0], 'k-', lw=2)
plt.plot([-1.4, -1.0], [1.96, 1.0], 'b-', lw=2)
plt.plot([-1],[1],'o')
plt.plot([-1.4],[1.96],'o')
plt.text(-1.0, 1.2, r'$x,f(x)$')
plt.text(-1.4, 2.2, r'$(x-h),f(x-h)$')
plt.gcf().set_size_inches((10,3))
plt.grid(True)
plt.show
Explanation: The limit as $h$ approaches zero, if it exists, should represent the slope of the tangent line to $(x, f(x))$.
For values that are not zero it is only an approximation.
End of explanation
x = np.linspace(-15,15,100)
y = x**2
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
plt.plot(x,y, 'r-')
plt.plot([0],[0],'o')
plt.ylim([-10,250])
plt.gcf().set_size_inches((10,3))
plt.grid(True)
ax.text(0,
20,
'Minimum',
ha='center',
color=sns.xkcd_rgb['pale red'],
)
plt.show
x = np.linspace(-15,15,100)
y = -x**2
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
plt.plot(x,y, 'r-')
plt.plot([0],[0],'o')
plt.ylim([-250,10])
plt.gcf().set_size_inches((10,3))
plt.grid(True)
ax.text(0,
-30,
'Maximum',
ha='center',
color=sns.xkcd_rgb['pale red'],
)
plt.show
x = np.linspace(-15,15,100)
y = x**3
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
plt.plot(x,y, 'r-')
plt.plot([0],[0],'o')
plt.ylim([-3000,3000])
plt.gcf().set_size_inches((10,3))
plt.grid(True)
ax.text(0,
400,
'Saddle Point',
ha='center',
color=sns.xkcd_rgb['pale red'],
)
plt.show
Explanation: It can be shown that the “centered difference formula" is better when computing numerical derivatives:
$$ \lim_{h \rightarrow 0} \frac{f(x + h) - f(x - h)}{2h} $$
The error in the "finite difference" approximation can be derived from Taylor's theorem and, assuming that $f$ is differentiable, is $O(h)$. In the case of “centered difference" the error is $O(h^2)$.
The derivative tells how to chage $x$ in order to make a small improvement in $f$.
Then, we can follow these steps to decrease the value of the function:
Start from a random $x$ value.
Compute the derivative $f'(x) = \lim_{h \rightarrow 0} \frac{f(x + h) - f(x - h)}{2h}$.
Walk a small step (possibly weighted by the derivative module) in the opposite direction of the derivative, because we know that $f(x - h \mbox{ sign}(f'(x))$ is less than $f(x)$ for small enough $h$.
The search for the minima ends when the derivative is zero because we have no more information about which direction to move. $x$ is a critical o stationary point if $f'(x)=0$.
A minimum (maximum) is a critical point where $f(x)$ is lower (higher) than at all neighboring points.
There is a third class of critical points: saddle points.
If $f$ is a convex function, this should be the minimum (maximum) of our functions. In other cases it could be a local minimum (maximum) or a saddle point.
End of explanation
x = np.linspace(-10,20,100)
y = x**2 - 6*x + 5
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
plt.plot(x,y, 'r-')
plt.plot([3],[3**2 - 6*3 + 5],'o')
plt.ylim([-10,250])
plt.gcf().set_size_inches((10,3))
plt.grid(True)
ax.text(3,
10,
'Min: x = 3',
ha='center',
color=sns.xkcd_rgb['pale red'],
)
plt.show
Explanation: There are two problems with numerical derivatives:
+ It is approximate.
+ It is very slow to evaluate (two function evaluations: $f(x + h) , f(x - h)$ ).
Our knowledge from Calculus could help!
We know that we can get an analytical expression of the derivative for some functions.
For example, let's suppose we have a simple quadratic function, $f(x)=x^2−6x+5$, and we want to find the minimum of this function.
First approach
We can solve this analytically using Calculus, by finding the derivate $f'(x) = 2x-6$ and setting it to zero:
\begin{equation}
\begin{split}
2x-6 & = & 0 \
2x & = & 6 \
x & = & 3 \
\end{split}
\end{equation}
End of explanation
x = np.linspace(-10,20,100)
y = x**2 - 6*x + 5
start = 15
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
plt.plot(x,y, 'r-')
plt.plot([start],[start**2 - 6*start + 5],'o')
ax.text(start,
start**2 - 6*start + 35,
'Start',
ha='center',
color=sns.xkcd_rgb['blue'],
)
d = 2 * start - 6
end = start - d
plt.plot([end],[end**2 - 6*end + 5],'o')
plt.ylim([-10,250])
plt.gcf().set_size_inches((10,3))
plt.grid(True)
ax.text(end,
start**2 - 6*start + 35,
'End',
ha='center',
color=sns.xkcd_rgb['green'],
)
plt.show
Explanation: Second approach
To find the local minimum using gradient descend: you start at a random point, and move into the direction of steepest descent relative to the derivative:
Start from a random $x$ value.
Compute the derivative $f'(x)$ analitically.
Walk a small step in the opposite direction of the derivative.
In this example, let's suppose we start at $x=15$. The derivative at this point is $2×15−6=24$.
Because we're using gradient descent, we need to subtract the gradient from our $x$-coordinate: $f(x - f'(x))$. However, notice that $15−24$ gives us $−9$, clearly overshooting over target of $3$.
End of explanation
old_min = 0
temp_min = 15
step_size = 0.01
precision = 0.0001
def f(x):
return x**2 - 6*x + 5
def f_derivative(x):
import math
return 2*x -6
mins = []
cost = []
while abs(temp_min - old_min) > precision:
old_min = temp_min
gradient = f_derivative(old_min)
move = gradient * step_size
temp_min = old_min - move
cost.append((3-temp_min)**2)
mins.append(temp_min)
# rounding the result to 2 digits because of the step size
print "Local minimum occurs at {:3.6f}.".format(round(temp_min,2))
Explanation: To fix this, we multiply the gradient by a step size. This step size (often called alpha) has to be chosen carefully, as a value too small will result in a long computation time, while a value too large will not give you the right result (by overshooting) or even fail to converge.
In this example, we'll set the step size to 0.01, which means we'll subtract $24×0.01$ from $15$, which is $14.76$.
This is now our new temporary local minimum: We continue this method until we either don't see a change after we subtracted the derivative step size, or until we've completed a pre-set number of iterations.
End of explanation
x = np.linspace(-10,20,100)
y = x**2 - 6*x + 5
x, y = (zip(*enumerate(cost)))
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
plt.plot(x,y, 'r-', alpha=0.7)
plt.ylim([-10,150])
plt.gcf().set_size_inches((10,3))
plt.grid(True)
plt.show
x = np.linspace(-10,20,100)
y = x**2 - 6*x + 5
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
plt.plot(x,y, 'r-')
plt.ylim([-10,250])
plt.gcf().set_size_inches((10,3))
plt.grid(True)
plt.plot(mins,cost,'o', alpha=0.3)
ax.text(start,
start**2 - 6*start + 25,
'Start',
ha='center',
color=sns.xkcd_rgb['blue'],
)
ax.text(mins[-1],
cost[-1]+20,
'End (%s steps)' % len(mins),
ha='center',
color=sns.xkcd_rgb['blue'],
)
plt.show
Explanation: An important feature of gradient descent is that there should be a visible improvement over time: In this example, we simply plotted the squared distance from the local minima calculated by gradient descent and the true local minimum, cost, against the iteration during which it was calculated. As we can see, the distance gets smaller over time, but barely changes in later iterations.
End of explanation
def f(x):
return sum(x_i**2 for x_i in x)
def fin_dif_partial_centered(x,
f,
i,
h=1e-6):
'''
This method returns the partial derivative of the i-th component of f at x
by using the centered finite difference method
'''
w1 = [x_j + (h if j==i else 0) for j, x_j in enumerate(x)]
w2 = [x_j - (h if j==i else 0) for j, x_j in enumerate(x)]
return (f(w1) - f(w2))/(2*h)
def fin_dif_partial_old(x,
f,
i,
h=1e-6):
'''
This method returns the partial derivative of the i-th component of f at x
by using the (non-centered) finite difference method
'''
w1 = [x_j + (h if j==i else 0) for j, x_j in enumerate(x)]
return (f(w1) - f(x))/h
def gradient_centered(x,
f,
h=1e-6):
'''
This method returns the gradient vector of f at x
by using the centered finite difference method
'''
return[round(fin_dif_partial_centered(x,f,i,h), 10) for i,_ in enumerate(x)]
def gradient_old(x,
f,
h=1e-6):
'''
This method returns the the gradient vector of f at x
by using the (non-centered)ç finite difference method
'''
return[round(fin_dif_partial_old(x,f,i,h), 10) for i,_ in enumerate(x)]
x = [1.0,1.0,1.0]
print '{:.6f}'.format(f(x)), gradient_centered(x,f)
print '{:.6f}'.format(f(x)), gradient_old(x,f)
Explanation: From derivatives to gradient: $n$-dimensional function minimization.
Let's consider a $n$-dimensional function $f: \Re^n \rightarrow \Re$. For example:
$$f(\mathbf{x}) = \sum_{n} x_n^2$$
Our objective is to find the argument $\mathbf{x}$ that minimizes this function.
The gradient of $f$ is the vector whose components are the $n$ partial derivatives of $f$. It is thus a vector-valued function.
The gradient points in the direction of the greatest rate of increase of the function.
$$\nabla {f} = (\frac{\partial f}{\partial x_1}, \dots, \frac{\partial f}{\partial x_n})$$
End of explanation
def euc_dist(v1,v2):
import numpy as np
import math
v = np.array(v1)-np.array(v2)
return math.sqrt(sum(v_i ** 2 for v_i in v))
Explanation: The function we have evaluated, $f({\mathbf x}) = x_1^2+x_2^2+x_3^2$, is $3$ at $(1,1,1)$ and the gradient vector at this point is $(2,2,2)$.
Then, we can follow this steps to maximize (or minimize) the function:
Start from a random $\mathbf{x}$ vector.
Compute the gradient vector.
Walk a small step in the opposite direction of the gradient vector.
It is important to be aware that this gradient computation is very expensive: if $\mathbf{x}$ has dimension $n$, we have to evaluate $f$ at $2*n$ points.
How to use the gradient.
$f(x) = \sum_i x_i^2$, takes its mimimum value when all $x$ are 0.
Let's check it for $n=3$:
End of explanation
# choosing a random vector
import random
import numpy as np
x = [random.randint(-10,10) for i in range(3)]
x
def step(x,
grad,
alpha):
'''
This function makes a step in the opposite direction of the gradient vector
in order to compute a new value for the target function.
'''
return [x_i - alpha * grad_i for x_i, grad_i in zip(x,grad)]
tol = 1e-15
alpha = 0.01
while True:
grad = gradient_centered(x,f)
next_x = step(x,grad,alpha)
if euc_dist(next_x,x) < tol:
break
x = next_x
print [round(i,10) for i in x]
Explanation: Let's start by choosing a random vector and then walking a step in the opposite direction of the gradient vector. We will stop when the difference between the new solution and the old solution is less than a tolerance value.
End of explanation
step_size = [100, 10, 1, 0.1, 0.01, 0.001, 0.0001, 0.00001]
Explanation: Alpha
The step size, alpha, is a slippy concept: if it is too small we will slowly converge to the solution, if it is too large we can diverge from the solution.
There are several policies to follow when selecting the step size:
Constant size steps. In this case, the size step determines the precision of the solution.
Decreasing step sizes.
At each step, select the optimal step.
The last policy is good, but too expensive. In this case we would consider a fixed set of values:
End of explanation
import numpy as np
import random
# f = 2x
x = np.arange(10)
y = np.array([2*i for i in x])
# f_target = 1/n Sum (y - wx)**2
def target_f(x,y,w):
return np.sum((y - x * w)**2.0) / x.size
# gradient_f = 2/n Sum 2wx**2 - 2xy
def gradient_f(x,y,w):
return 2 * np.sum(2*w*(x**2) - 2*x*y) / x.size
def step(w,grad,alpha):
return w - alpha * grad
def BGD_multi_step(target_f,
gradient_f,
x,
y,
toler = 1e-6):
'''
Batch gradient descend by using a multi-step approach
'''
alphas = [100, 10, 1, 0.1, 0.001, 0.00001]
w = random.random()
val = target_f(x,y,w)
i = 0
while True:
i += 1
gradient = gradient_f(x,y,w)
next_ws = [step(w, gradient, alpha) for alpha in alphas]
next_vals = [target_f(x,y,w) for w in next_ws]
min_val = min(next_vals)
next_w = next_ws[next_vals.index(min_val)]
next_val = target_f(x,y,next_w)
if (abs(val - next_val) < toler):
return w
else:
w, val = next_w, next_val
print '{:.6f}'.format(BGD_multi_step(target_f, gradient_f, x, y))
%%timeit
BGD_multi_step(target_f, gradient_f, x, y)
def BGD(target_f,
gradient_f,
x,
y,
toler = 1e-6,
alpha=0.01):
'''
Batch gradient descend by using a given step
'''
w = random.random()
val = target_f(x,y,w)
i = 0
while True:
i += 1
gradient = gradient_f(x,y,w)
next_w = step(w, gradient, alpha)
next_val = target_f(x,y,next_w)
if (abs(val - next_val) < toler):
return w
else:
w, val = next_w, next_val
print '{:.6f}'.format(BGD(target_f, gradient_f, x, y))
%%timeit
BGD(target_f, gradient_f, x, y)
Explanation: Learning from data
In general, we have:
A dataset $(\mathbf{x},y)$ of $n$ examples.
A target function $f_\mathbf{w}$, that we want to minimize, representing the discrepancy between our data and the model we want to fit. The model is represented by a set of parameters $\mathbf{w}$.
The gradient of the target function, $g_f$.
In the most common case $f$ represents the errors from a data representation model $M$. To fit the model is to find the optimal parameters $\mathbf{w}$ that minimize the following expression:
$$ f_\mathbf{w} = \frac{1}{n} \sum_{i} (y_i - M(\mathbf{x}_i,\mathbf{w}))^2 $$
For example, $(\mathbf{x},y)$ can represent:
$\mathbf{x}$: the behavior of a "Candy Crush" player; $y$: monthly payments.
$\mathbf{x}$: sensor data about your car engine; $y$: probability of engine error.
$\mathbf{x}$: finantial data of a bank customer; $y$: customer rating.
If $y$ is a real value, it is called a regression problem.
If $y$ is binary/categorical, it is called a classification problem.
Let's suppose that our model is a one-dimensional linear model $M(\mathbf{x},\mathbf{w}) = w \cdot x $.
Batch gradient descend
We can implement gradient descend in the following way (batch gradient descend):
End of explanation
import numpy as np
x = np.arange(10)
y = np.array([2*i for i in x])
data = zip(x,y)
for (x_i,y_i) in data:
print '{:3d} {:3d}'.format(x_i,y_i)
print
def in_random_order(data):
'''
Random data generator
'''
import random
indexes = [i for i,_ in enumerate(data)]
random.shuffle(indexes)
for i in indexes:
yield data[i]
for (x_i,y_i) in in_random_order(data):
print '{:3d} {:3d}'.format(x_i,y_i)
import numpy as np
import random
def SGD(target_f,
gradient_f,
x,
y,
toler = 1e-6,
epochs=100,
alpha_0=0.01):
'''
Stochastic gradient descend with automatic step adaptation (by
reducing the step to its 95% when there are iterations with no increase)
'''
data = zip(x,y)
w = random.random()
alpha = alpha_0
min_w, min_val = float('inf'), float('inf')
epoch = 0
iteration_no_increase = 0
while epoch < epochs and iteration_no_increase < 100:
val = target_f(x, y, w)
if min_val - val > toler:
min_w, min_val = w, val
alpha = alpha_0
iteration_no_increase = 0
else:
iteration_no_increase += 1
alpha *= 0.95
for x_i, y_i in in_random_order(data):
gradient_i = gradient_f(x_i, y_i, w)
w = w - (alpha * gradient_i)
epoch += 1
return min_w
print 'w: {:.6f}'.format(SGD(target_f, gradient_f, x, y))
Explanation: Stochastic Gradient Descend
The last function evals the whole dataset $(\mathbf{x}_i,y_i)$ at every step.
If the dataset is large, this strategy is too costly. In this case we will use a strategy called SGD (Stochastic Gradient Descend).
When learning from data, the cost function is additive: it is computed by adding sample reconstruction errors.
Then, we can compute the estimate the gradient (and move towards the minimum) by using only one data sample (or a small data sample).
Thus, we will find the minimum by iterating this gradient estimation over the dataset.
A full iteration over the dataset is called epoch. During an epoch, data must be used in a random order.
If we apply this method we have some theoretical guarantees to find a good minimum:
+ SGD essentially uses the inaccurate gradient per iteration. Since there is no free food, what is the cost by using approximate gradient? The answer is that the convergence rate is slower than the gradient descent algorithm.
+ The convergence of SGD has been analyzed using the theories of convex minimization and of stochastic approximation: it converges almost surely to a global minimum when the objective function is convex or pseudoconvex, and otherwise converges almost surely to a local minimum.
End of explanation
%reset
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.datasets.samples_generator import make_regression
from scipy import stats
import random
%matplotlib inline
# x: input data
# y: noisy output data
x = np.random.uniform(0,1,20)
# f = 2x + 0
def f(x): return 2*x + 0
noise_variance =0.1
noise = np.random.randn(x.shape[0])*noise_variance
y = f(x) + noise
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
plt.xlabel('$x$', fontsize=15)
plt.ylabel('$f(x)$', fontsize=15)
plt.plot(x, y, 'o', label='y')
plt.plot([0, 1], [f(0), f(1)], 'b-', label='f(x)')
plt.ylim([0,2])
plt.gcf().set_size_inches((10,3))
plt.grid(True)
plt.show
Explanation: Exercise: Stochastic Gradient Descent and Linear Regression
The linear regression model assumes a linear relationship between data:
$$ y_i = w_1 x_i + w_0 $$
Let's generate a more realistic dataset (with noise), where $w_1 = 2$ and $w_0 = 0$.
End of explanation
# Write your target function as f_target 1/n Sum (y - wx)**2
def target_f(x,y,w):
# your code here
# Write your gradient function
def gradient_f(x,y,w):
# your code here
def in_random_order(data):
'''
Random data generator
'''
import random
indexes = [i for i,_ in enumerate(data)]
random.shuffle(indexes)
for i in indexes:
yield data[i]
# Modify the SGD function to return a 'target_value' vector
def SGD(target_f,
gradient_f,
x,
y,
toler = 1e-6,
epochs=100,
alpha_0=0.01):
# Insert your code among the following lines
data = zip(x,y)
w = random.random()
alpha = alpha_0
min_w, min_val = float('inf'), float('inf')
iteration_no_increase = 0
epoch = 0
while epoch < epochs and iteration_no_increase < 100:
val = target_f(x, y, w)
if min_val - val > toler:
min_w, min_val = w, val
alpha = alpha_0
iteration_no_increase = 0
else:
iteration_no_increase += 1
alpha *= 0.95
for x_i, y_i in in_random_order(data):
gradient_i = gradient_f(x_i, y_i, w)
w = w - (alpha * gradient_i)
epoch += 1
return min_w
# Print the value of the solution
w, target_value = SGD(target_f, gradient_f, x, y)
print 'w: {:.6f}'.format(w)
# Visualize the solution regression line
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
plt.plot(x, y, 'o', label='t')
plt.plot([0, 1], [f(0), f(1)], 'b-', label='f(x)', alpha=0.5)
plt.plot([0, 1], [0*w, 1*w], 'r-', label='fitted line', alpha=0.5, linestyle='--')
plt.xlabel('input x')
plt.ylabel('target t')
plt.title('input vs. target')
plt.ylim([0,2])
plt.gcf().set_size_inches((10,3))
plt.grid(True)
plt.show
# Visualize the evolution of the target function value during iterations.
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
plt.plot(np.arange(target_value.size), target_value, 'o', alpha = 0.2)
plt.xlabel('Iteration')
plt.ylabel('Cost')
plt.grid()
plt.gcf().set_size_inches((10,3))
plt.grid(True)
plt.show()
Explanation: Complete the following code in order to:
+ Compute the value of $w$ by using a estimator based on minimizing the squared error.
+ Get from SGD function a vector, target_value, representing the value of the target function at each iteration.
End of explanation
def get_batches(iterable,
num_elem_batch = 1):
'''
Generator of batches from an iterable that contains data
'''
current_batch = []
for item in iterable:
current_batch.append(item)
if len(current_batch) == num_elem_batch:
yield current_batch
current_batch = []
if current_batch:
yield current_batch
x = np.array(range(0, 10))
y = np.array(range(10, 20))
data = zip(x,y)
np.random.shuffle(data)
for x in get_batches(data, 3):
print x
print
for batch in get_batches(data, 3):
print np.array(zip(*batch)[0]), np.array(zip(*batch)[1])
%reset
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.datasets.samples_generator import make_regression
from scipy import stats
import random
%matplotlib inline
# x: input data
# y: noisy output data
x = np.random.uniform(0,1,2000)
# f = 2x + 0
def f(x): return 2*x + 0
noise_variance =0.1
noise = np.random.randn(x.shape[0])*noise_variance
y = f(x) + noise
plt.plot(x, y, 'o', label='y')
plt.plot([0, 1], [f(0), f(1)], 'b-', label='f(x)')
plt.xlabel('$x$', fontsize=15)
plt.ylabel('$t$', fontsize=15)
plt.ylim([0,2])
plt.title('inputs (x) vs targets (y)')
plt.grid()
plt.legend(loc=2)
plt.gcf().set_size_inches((10,3))
plt.show()
# f_target = 1/n Sum (y - wx)**2
def target_f(x,
y,
w):
return np.sum((y - x * w)**2.0) / x.size
# gradient_f = 2/n Sum 2wx**2 - 2xy
def gradient_f(x,
y,
w):
return 2 * np.sum(2*w*(x**2) - 2*x*y) / x.size
def in_random_order(data):
'''
Random data generator
'''
import random
indexes = [i for i,_ in enumerate(data)]
random.shuffle(indexes)
for i in indexes:
yield data[i]
def get_batches(iterable,
num_elem_batch = 1):
'''
Generator of batches from an iterable that contains data
'''
current_batch = []
for item in iterable:
current_batch.append(item)
if len(current_batch) == num_elem_batch:
yield current_batch
current_batch = []
if current_batch:
yield current_batch
def SGD_MB(target_f, gradient_f, x, y, epochs=100, alpha_0=0.01):
data = zip(x,y)
w = random.random()
alpha = alpha_0
min_w, min_val = float('inf'), float('inf')
epoch = 0
while epoch < epochs:
val = target_f(x, y, w)
if val < min_val:
min_w, min_val = w, val
alpha = alpha_0
else:
alpha *= 0.9
np.random.shuffle(data)
for batch in get_batches(data, num_elem_batch = 100):
x_batch = np.array(zip(*batch)[0])
y_batch = np.array(zip(*batch)[1])
gradient = gradient_f(x_batch, y_batch, w)
w = w - (alpha * gradient)
epoch += 1
return min_w
w = SGD_MB(target_f, gradient_f, x, y)
print 'w: {:.6f}'.format(w)
plt.plot(x, y, 'o', label='t')
plt.plot([0, 1], [f(0), f(1)], 'b-', label='f(x)', alpha=0.5)
plt.plot([0, 1], [0*w, 1*w], 'r-', label='fitted line', alpha=0.5, linestyle='--')
plt.xlabel('input x')
plt.ylabel('target t')
plt.ylim([0,2])
plt.title('input vs. target')
plt.grid()
plt.legend(loc=2)
plt.gcf().set_size_inches((10,3))
plt.show()
Explanation: Mini-batch Gradient Descent
In code, general batch gradient descent looks something like this:
python
nb_epochs = 100
for i in range(nb_epochs):
grad = evaluate_gradient(target_f, data, w)
w = w - learning_rate * grad
For a pre-defined number of epochs, we first compute the gradient vector of the target function for the whole dataset w.r.t. our parameter vector.
Stochastic gradient descent (SGD) in contrast performs a parameter update for each training example and label:
python
nb_epochs = 100
for i in range(nb_epochs):
np.random.shuffle(data)
for sample in data:
grad = evaluate_gradient(target_f, sample, w)
w = w - learning_rate * grad
Mini-batch gradient descent finally takes the best of both worlds and performs an update for every mini-batch of $n$ training examples:
python
nb_epochs = 100
for i in range(nb_epochs):
np.random.shuffle(data)
for batch in get_batches(data, batch_size=50):
grad = evaluate_gradient(target_f, batch, w)
w = w - learning_rate * grad
Minibatch SGD has the advantage that it works with a slightly less noisy estimate of the gradient. However, as the minibatch size increases, the number of updates done per computation done decreases (eventually it becomes very inefficient, like batch gradient descent).
There is an optimal trade-off (in terms of computational efficiency) that may vary depending on the data distribution and the particulars of the class of function considered, as well as how computations are implemented.
End of explanation
%reset
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.datasets.samples_generator import make_regression
from scipy import stats
import random
%matplotlib inline
# the function that I'm going to plot
def f(x,y):
return x**2 + 5*y**2
x = np.arange(-3.0,3.0,0.1)
y = np.arange(-3.0,3.0,0.1)
X,Y = np.meshgrid(x, y, indexing='ij') # grid of point
Z = f(X, Y) # evaluation of the function on the grid
plt.pcolor(X, Y, Z, cmap=plt.cm.gist_earth)
plt.axis([x.min(), x.max(), y.min(), y.max()])
plt.gca().set_aspect('equal', adjustable='box')
plt.gcf().set_size_inches((6,6))
plt.show()
def target_f(x):
return x[0]**2.0 + 5*x[1]**2.0
def part_f(x,
f,
i,
h=1e-6):
w1 = [x_j + (h if j==i else 0) for j, x_j in enumerate(x)]
w2 = [x_j - (h if j==i else 0) for j, x_j in enumerate(x)]
return (f(w1) - f(w2))/(2*h)
def gradient_f(x,
f,
h=1e-6):
return np.array([round(part_f(x,f,i,h), 10) for i,_ in enumerate(x)])
def SGD(target_f,
gradient_f,
x,
alpha_0=0.01,
toler = 0.000001):
alpha = alpha_0
min_val = float('inf')
steps = 0
iteration_no_increase = 0
trace = []
while iteration_no_increase < 100:
val = target_f(x)
if min_val - val > toler:
min_val = val
alpha = alpha_0
iteration_no_increase = 0
else:
alpha *= 0.95
iteration_no_increase += 1
trace.append(x)
gradient_i = gradient_f(x, target_f)
x = x - (alpha * gradient_i)
steps += 1
return x, val, steps, trace
x = np.array([2,-2])
x, val, steps, trace = SGD(target_f, gradient_f, x)
print x
print 'Val: {:.6f}, steps: {:.0f}'.format(val, steps)
def SGD_M(target_f,
gradient_f,
x,
alpha_0=0.01,
toler = 0.000001,
m = 0.9):
alpha = alpha_0
min_val = float('inf')
steps = 0
iteration_no_increase = 0
v = 0.0
trace = []
while iteration_no_increase < 100:
val = target_f(x)
if min_val - val > toler:
min_val = val
alpha = alpha_0
iteration_no_increase = 0
else:
alpha *= 0.95
iteration_no_increase += 1
trace.append(x)
gradient_i = gradient_f(x, target_f)
v = m * v + (alpha * gradient_i)
x = x - v
steps += 1
return x, val, steps, trace
x = np.array([2,-2])
x, val, steps, trace2 = SGD_M(target_f, gradient_f, x)
print '\n',x
print 'Val: {:.6f}, steps: {:.0f}'.format(val, steps)
x2 = np.array(range(len(trace)))
x3 = np.array(range(len(trace2)))
plt.xlim([0,len(trace)])
plt.gcf().set_size_inches((10,3))
plt.plot(x3, trace2)
plt.plot(x2, trace, '-')
Explanation: Loss Funtions
Loss functions $L(y, f(\mathbf{x})) = \frac{1}{n} \sum_i \ell(y_i, f(\mathbf{x_i}))$ represent the price paid for inaccuracy of predictions in classification/regression problems.
In classification this function is often the zero-one loss, that is, $ \ell(y_i, f(\mathbf{x_i}))$ is zero when $y_i = f(\mathbf{x}_i)$ and one otherwise.
This function is discontinuous with flat regions and is thus extremely hard to optimize using gradient-based methods. For this reason it is usual to consider a proxy to the loss called a surrogate loss function. For computational reasons this is usually convex function. Here we have some examples:
Square / Euclidean Loss
In regression problems, the most common loss function is the square loss function:
$$ L(y, f(\mathbf{x})) = \frac{1}{n} \sum_i (y_i - f(\mathbf{x}_i))^2 $$
The square loss function can be re-written and utilized for classification:
$$ L(y, f(\mathbf{x})) = \frac{1}{n} \sum_i (1 - y_i f(\mathbf{x}_i))^2 $$
Hinge / Margin Loss (i.e. Suport Vector Machines)
The hinge loss function is defined as:
$$ L(y, f(\mathbf{x})) = \frac{1}{n} \sum_i \mbox{max}(0, 1 - y_i f(\mathbf{x}_i)) $$
The hinge loss provides a relatively tight, convex upper bound on the 0–1 Loss.
<img src="images/loss_functions.png">
Logistic Loss (Logistic Regression)
This function displays a similar convergence rate to the hinge loss function, and since it is continuous, simple gradient descent methods can be utilized.
$$ L(y, f(\mathbf{x})) = \frac{1}{n} log(1 + exp(-y_i f(\mathbf{x}_i))) $$
Sigmoid Cross-Entropy Loss (Softmax classifier)
Cross-Entropy is a loss function that is very used for training multiclass problems. We'll focus on models that assume that classes are mutually exclusive.
In this case, our labels have this form $\mathbf{y}_i =(1.0,0.0,0.0)$. If our model predicts a different distribution, say $ f(\mathbf{x}_i)=(0.4,0.1,0.5)$, then we'd like to nudge the parameters so that $f(\mathbf{x}_i)$ gets closer to $\mathbf{y}_i$.
C.Shannon showed that if you want to send a series of messages composed of symbols from an alphabet with distribution $y$ ($y_j$ is the probability of the $j$-th symbol), then to use the smallest number of bits on average, you should assign $\log(\frac{1}{y_j})$ bits to the $j$-th symbol.
The optimal number of bits is known as entropy:
$$ H(\mathbf{y}) = \sum_j y_j \log\frac{1}{y_j} = - \sum_j y_j \log y_j$$
Cross entropy is the number of bits we'll need if we encode symbols by using a wrong distribution $\hat y$:
$$ H(y, \hat y) = - \sum_j y_j \log \hat y_j $$
In our case, the real distribution is $\mathbf{y}$ and the "wrong" one is $f(\mathbf{x}_i)$. So, minimizing cross entropy with respect our model parameters will result in the model that best approximates our labels if considered as a probabilistic distribution.
Cross entropy is used in combination with Softmax classifier. In order to classify $\mathbf{x}_i$ we could take the index corresponding to the max value of $f(\mathbf{x}_i)$, but Softmax gives a slightly more intuitive output (normalized class probabilities) and also has a probabilistic interpretation:
$$ P(\mathbf{y}_i = j \mid \mathbf{x_i}) = - log \left( \frac{e^{f_j(\mathbf{x_i})}}{\sum_k e^{f_k(\mathbf{x_i})} } \right) $$
where $f_k$ is a linear classifier.
Advanced gradient descend
Momentum
SGD has trouble navigating ravines, i.e. areas where the surface curves much more steeply in one dimension than in another, which are common around local optima. In these scenarios, SGD oscillates across the slopes of the ravine while only making hesitant progress along the bottom towards the local optimum.
<img src="images/ridge2.png">
Momentum is a method that helps accelerate SGD in the relevant direction and dampens oscillations. It does this by adding a fraction of the update vector of the past time step to the current update vector:
$$ v_t = m v_{t-1} + \alpha \nabla_w f $$
$$ w = w - v_t $$
The momentum $m$ is commonly set to $0.9$.
Nesterov
However, a ball that rolls down a hill, blindly following the slope, is highly unsatisfactory. We'd like to have a smarter ball, a ball that has a notion of where it is going so that it knows to slow down before the hill slopes up again.
Nesterov accelerated gradient (NAG) is a way to give our momentum term this kind of prescience. We know that we will use our momentum term $m v_{t-1}$ to move the parameters $w$. Computing
$w - m v_{t-1}$ thus gives us an approximation of the next position of the parameters (the gradient is missing for the full update), a rough idea where our parameters are going to be. We can now effectively look ahead by calculating the gradient not w.r.t. to our current parameters $w$ but w.r.t. the approximate future position of our parameters:
$$ w_{new} = w - m v_{t-1} $$
$$ v_t = m v_{t-1} + \alpha \nabla_{w_{new}} f $$
$$ w = w - v_t $$
Adagrad
All previous approaches manipulated the learning rate globally and equally for all parameters. Tuning the learning rates is an expensive process, so much work has gone into devising methods that can adaptively tune the learning rates, and even do so per parameter.
Adagrad is an algorithm for gradient-based optimization that does just this: It adapts the learning rate to the parameters, performing larger updates for infrequent and smaller updates for frequent parameters.
$$ c = c + (\nabla_w f)^2 $$
$$ w = w - \frac{\alpha}{\sqrt{c}} $$
RMProp
RMSProp update adjusts the Adagrad method in a very simple way in an attempt to reduce its aggressive, monotonically decreasing learning rate. In particular, it uses a moving average of squared gradients instead, giving:
$$ c = \beta c + (1 - \beta)(\nabla_w f)^2 $$
$$ w = w - \frac{\alpha}{\sqrt{c}} $$
where $\beta$ is a decay rate that controls the size of the moving average.
<img src="images/g1.gif">
(Image credit: Alec Radford)
<img src="images/g2.gif">
(Image credit: Alec Radford)
End of explanation |
4,998 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1A.e - Correction de l'interrogation écrite du 14 novembre 2014
coût algorithmique, calcul de séries mathématiques
Step1: Enoncé 1
Q1
Le code suivant produit une erreur. Corrigez le programme.
Step2: L'objectif de ce petit programme est de calculer la somme des éléments de la liste nbs. L'exception est déclenché la variable s n'est jamais créé. Il manque l'instruction s=0.
Step3: Q2
Que vaut nbs dans le programme suivant
Step4: Q3
On considère le programme suivant, il affiche None, pourquoi ?
Step5: Le 2 correspond au premier print(d), le None correspond au second. Pour s'en convaincre, il suffit d'ajouter quelques caractères supplémentaires
Step6: Donc la variable d en dehors de la fonction vaut None, cela veut que le résultat de la fonction ma_fonction est None. Il peut être None soit parce que la fonction contient explicitiement l'instruction return None soit parce qu'aucune instruction return n'ext exécutée. C'est le cas ici puisqu'il n'y a qu'une instruction print. On remplace print par return.
Step7: Q4
Que vaut n en fonction de N ?
Step8: Pour être plus précis, 495000 = $\frac{N^2(N-1)}{2}$.
Q5
Une des lignes suivantes provoque une erreur, laquelle ?
Step9: Lorsqu'on multiplie une chaîne de caractères par un entier, cela revient à la répliquer
Step10: Le type tuple sont immutable. On ne peut pas le modifier. Mais les listes peuvent l'être.
Step11: Q2
Que vaut c ?
Step12: La méthode get retourne la valeur associée à une clé ou une autre valeur (ici None) si elle ne s'y trouve pas. La raison pour laquelle le résultat est None ici est que '4' != 4. La clé '4' ne fait pas partie du dictionnaire.
Q3
Que vaut x ?
Step13: A chaque passage dans la boucle for, on ajoute N à s. A chaque passage dans la boucle while, on divise N par 2. Donc, après la boucle while, $s = N + N/2 + N/4 + N/8 + ...$. On répète cela jusqu'à ce que $N / 2^k$ soit plus grand que 0. Or, les divisions sont entières (symbole //), 1//2 vaut 0. La condition devient jusqu'à ce que $N / 2^k <1$.
Pour le reste, c'est une suite géométrique. Si on pose $N=2^k$, on calcule donc la somme
Step14: Q5
Par quoi faut-il remplacer les ??? pour avoir l'erreur ci-dessous ?
Step15: Cette erreur se produit car ma_liste vaut None. Si la fonction fonction retourne None, c'est que l'instruction l = [ ] n'est jamais exécutée, donc que la condition if l is None n'est jamais vérifiée. On ne passe donc jamais dans la boucle for et ceci arrive si N est négatif ou nul.
Enoncé 3
Q1
Que se passe-t-il ?
Step16: L'erreur est due au fait que la boucle parcourt la liste en même temps qu'elle supprime des éléments. Le résultat est souvent une erreur. On vérifie en affichant i et l.
Step17: Q2
Que vaut a ?
Step18: La variable a double à chaque fois qu'on passe dans la boucle. On y passe 4 fois et on part de a=2. Donc
Step19: La fonction revient à arrondir au demi inférieur, donc $2.5$.
Q4
Combien d'étoiles le programme suivant affiche ?
Step20: C'est un peu long à afficher, modifions le programme pour compter les étoiles plutôt que de les afficher.
Step21: Si $n$ est la longueur de la liste l, le coût de la fonction moyenne est $O(n)$. Le coût de la fonction variance est $n$ fois le coût de la fonction moyenne, soit $O(n^2)$. Celle-ci pourrait être beaucoup plus efficace en écrivant
Step22: Q5
Que vaut x ? | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: 1A.e - Correction de l'interrogation écrite du 14 novembre 2014
coût algorithmique, calcul de séries mathématiques
End of explanation
nbs = [ 1, 5, 4, 7 ] #
for n in nbs: #
s += n #
Explanation: Enoncé 1
Q1
Le code suivant produit une erreur. Corrigez le programme.
End of explanation
nbs = [ 1, 5, 4, 7 ]
s = 0
for n in nbs:
s += n
s
Explanation: L'objectif de ce petit programme est de calculer la somme des éléments de la liste nbs. L'exception est déclenché la variable s n'est jamais créé. Il manque l'instruction s=0.
End of explanation
def f(x) : return x%2
nbs = { i:f(i) for i in range(0,5) }
nbs
Explanation: Q2
Que vaut nbs dans le programme suivant :
End of explanation
def ma_fonction(x1,y1,x2,y2):
d = (x1-x2)**2 +(y1-y2)**2
print(d)
d = ma_fonction(0,0,1,1)
print(d)
Explanation: Q3
On considère le programme suivant, il affiche None, pourquoi ?
End of explanation
def ma_fonction(x1,y1,x2,y2):
d = (x1-x2)**2 +(y1-y2)**2
print("A",d)
d = ma_fonction(0,0,1,1)
print("B",d)
Explanation: Le 2 correspond au premier print(d), le None correspond au second. Pour s'en convaincre, il suffit d'ajouter quelques caractères supplémentaires :
End of explanation
def ma_fonction(x1,y1,x2,y2):
d = (x1-x2)**2 +(y1-y2)**2
return d
d = ma_fonction(0,0,1,1)
print(d)
Explanation: Donc la variable d en dehors de la fonction vaut None, cela veut que le résultat de la fonction ma_fonction est None. Il peut être None soit parce que la fonction contient explicitiement l'instruction return None soit parce qu'aucune instruction return n'ext exécutée. C'est le cas ici puisqu'il n'y a qu'une instruction print. On remplace print par return.
End of explanation
n = 0
N = 100
for i in range(0,N):
for k in range(0,i):
n += N
n
Explanation: Q4
Que vaut n en fonction de N ?
End of explanation
a = 3 #
b = "6" #
a+b #
a*b #
Explanation: Pour être plus précis, 495000 = $\frac{N^2(N-1)}{2}$.
Q5
Une des lignes suivantes provoque une erreur, laquelle ?
End of explanation
nbs = ( 1, 5, 4, 7 ) #
nbs[0] = 0 #
Explanation: Lorsqu'on multiplie une chaîne de caractères par un entier, cela revient à la répliquer : 3*"6" = "666". L'addition est impossible car on ne peut pas additionner un nombre avec une chaîne de caractères.
Enoncé 2
Q1
Le code suivant produit une erreur. Proposez une correction.
End of explanation
nbs = [ 1, 5, 4, 7 ]
nbs[0] = 0
nbs
Explanation: Le type tuple sont immutable. On ne peut pas le modifier. Mais les listes peuvent l'être.
End of explanation
d = {4: 'quatre'}
c = d.get('4', None)
print(c)
Explanation: Q2
Que vaut c ?
End of explanation
N = 8
s = 0
while N > 0 :
for i in range(N):
s += 1
N //= 2
x = (s+1)//2
x
Explanation: La méthode get retourne la valeur associée à une clé ou une autre valeur (ici None) si elle ne s'y trouve pas. La raison pour laquelle le résultat est None ici est que '4' != 4. La clé '4' ne fait pas partie du dictionnaire.
Q3
Que vaut x ?
End of explanation
l = ['a', 'b', 'c']
c = l[1]
c
Explanation: A chaque passage dans la boucle for, on ajoute N à s. A chaque passage dans la boucle while, on divise N par 2. Donc, après la boucle while, $s = N + N/2 + N/4 + N/8 + ...$. On répète cela jusqu'à ce que $N / 2^k$ soit plus grand que 0. Or, les divisions sont entières (symbole //), 1//2 vaut 0. La condition devient jusqu'à ce que $N / 2^k <1$.
Pour le reste, c'est une suite géométrique. Si on pose $N=2^k$, on calcule donc la somme :
$$s = 2^k + 2 ^{k-1} + ... + 1 = \sum_{i=1}^{k} 2^i = \frac{2^{k+1}-1}{2-1} = 2^{k+1}-1$$
Et comme :
$$x = \frac{s+1}{2} = 2^k = N$$
Q4
Que vaut c ?
End of explanation
def fonction(N):
li = None # on évite la variable l pour ne pas la confondre avec 1
for i in range(N):
if li is None:
li = [ ]
li.append(i)
return li
ma_liste = fonction(0)
ma_liste.append(-1)
Explanation: Q5
Par quoi faut-il remplacer les ??? pour avoir l'erreur ci-dessous ?
End of explanation
l = [ 0, 1,2,3]
for i in range(len(l)):
print(i)
del l[i] #
Explanation: Cette erreur se produit car ma_liste vaut None. Si la fonction fonction retourne None, c'est que l'instruction l = [ ] n'est jamais exécutée, donc que la condition if l is None n'est jamais vérifiée. On ne passe donc jamais dans la boucle for et ceci arrive si N est négatif ou nul.
Enoncé 3
Q1
Que se passe-t-il ?
End of explanation
l = [ 0, 1,2,3]
for i in range(len(l)):
print("i=",i,"l=",l)
del l[i] #
Explanation: L'erreur est due au fait que la boucle parcourt la liste en même temps qu'elle supprime des éléments. Le résultat est souvent une erreur. On vérifie en affichant i et l.
End of explanation
a = 2
for i in range(1,5):
a += a
a
Explanation: Q2
Que vaut a ?
End of explanation
x = 2.67
y = int ( x * 2 ) / 2
y
Explanation: La variable a double à chaque fois qu'on passe dans la boucle. On y passe 4 fois et on part de a=2. Donc : $22222=2^5=32$.
Q3
Que vaut y ?
End of explanation
import random
def moyenne(l):
s = 0
for x in l :
print("*")
s += x
return s / len(l)
def variance(l):
return sum ( [ (x - moyenne(l))**2 for x in l ] ) / len(l)
l = [ random.random() for i in range(0,100) ]
print(variance(l)**0.5)
Explanation: La fonction revient à arrondir au demi inférieur, donc $2.5$.
Q4
Combien d'étoiles le programme suivant affiche ?
End of explanation
star = 0
def moyenne(l):
global star
s = 0
for x in l :
star += 1
s += x
return s / len(l)
def variance(l):
return sum ( [ (x - moyenne(l))**2 for x in l ] ) / len(l)
l = [ random.random() for i in range(0,100) ]
print(variance(l)**0.5)
print("star=",star)
Explanation: C'est un peu long à afficher, modifions le programme pour compter les étoiles plutôt que de les afficher.
End of explanation
star = 0
def moyenne(l):
global star
s = 0
for x in l :
star += 1
s += x
return s / len(l)
def variance(l):
m = moyenne(l) # on mémorise le résultat
return sum ( [ (x - m)**2 for x in l ] ) / len(l)
l = [ random.random() for i in range(0,100) ]
print(variance(l)**0.5)
print("star=",star)
Explanation: Si $n$ est la longueur de la liste l, le coût de la fonction moyenne est $O(n)$. Le coût de la fonction variance est $n$ fois le coût de la fonction moyenne, soit $O(n^2)$. Celle-ci pourrait être beaucoup plus efficace en écrivant :
End of explanation
import random
x = random.randint(0,100)
while x != 50:
x = random.randint(0,100)
x
Explanation: Q5
Que vaut x ?
End of explanation |
4,999 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
From raw data to dSPM on SPM Faces dataset
Runs a full pipeline using MNE-Python
Step1: Load and filter data, set up epochs
Step2: Visualize fields on MEG helmet
Step3: Look at the whitened evoked daat
Step4: Compute forward model
Step5: Compute inverse solution | Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import mne
from mne.datasets import spm_face
from mne.preprocessing import ICA, create_eog_epochs
from mne import io, combine_evoked
from mne.minimum_norm import make_inverse_operator, apply_inverse
print(__doc__)
data_path = spm_face.data_path()
subjects_dir = data_path + '/subjects'
Explanation: From raw data to dSPM on SPM Faces dataset
Runs a full pipeline using MNE-Python:
- artifact removal
- averaging Epochs
- forward model computation
- source reconstruction using dSPM on the contrast : "faces - scrambled"
<div class="alert alert-info"><h4>Note</h4><p>This example does quite a bit of processing, so even on a
fast machine it can take several minutes to complete.</p></div>
End of explanation
raw_fname = data_path + '/MEG/spm/SPM_CTF_MEG_example_faces%d_3D.ds'
raw = io.read_raw_ctf(raw_fname % 1, preload=True) # Take first run
# Here to save memory and time we'll downsample heavily -- this is not
# advised for real data as it can effectively jitter events!
raw.resample(120., npad='auto')
picks = mne.pick_types(raw.info, meg=True, exclude='bads')
raw.filter(1, 30, method='fir', fir_design='firwin')
events = mne.find_events(raw, stim_channel='UPPT001')
# plot the events to get an idea of the paradigm
mne.viz.plot_events(events, raw.info['sfreq'])
event_ids = {"faces": 1, "scrambled": 2}
tmin, tmax = -0.2, 0.6
baseline = None # no baseline as high-pass is applied
reject = dict(mag=5e-12)
epochs = mne.Epochs(raw, events, event_ids, tmin, tmax, picks=picks,
baseline=baseline, preload=True, reject=reject)
# Fit ICA, find and remove major artifacts
ica = ICA(n_components=0.95, random_state=0).fit(raw, decim=1, reject=reject)
# compute correlation scores, get bad indices sorted by score
eog_epochs = create_eog_epochs(raw, ch_name='MRT31-2908', reject=reject)
eog_inds, eog_scores = ica.find_bads_eog(eog_epochs, ch_name='MRT31-2908')
ica.plot_scores(eog_scores, eog_inds) # see scores the selection is based on
ica.plot_components(eog_inds) # view topographic sensitivity of components
ica.exclude += eog_inds[:1] # we saw the 2nd ECG component looked too dipolar
ica.plot_overlay(eog_epochs.average()) # inspect artifact removal
ica.apply(epochs) # clean data, default in place
evoked = [epochs[k].average() for k in event_ids]
contrast = combine_evoked(evoked, weights=[-1, 1]) # Faces - scrambled
evoked.append(contrast)
for e in evoked:
e.plot(ylim=dict(mag=[-400, 400]))
plt.show()
# estimate noise covarariance
noise_cov = mne.compute_covariance(epochs, tmax=0, method='shrunk',
rank=None)
Explanation: Load and filter data, set up epochs
End of explanation
# The transformation here was aligned using the dig-montage. It's included in
# the spm_faces dataset and is named SPM_dig_montage.fif.
trans_fname = data_path + ('/MEG/spm/SPM_CTF_MEG_example_faces1_3D_'
'raw-trans.fif')
maps = mne.make_field_map(evoked[0], trans_fname, subject='spm',
subjects_dir=subjects_dir, n_jobs=1)
evoked[0].plot_field(maps, time=0.170)
Explanation: Visualize fields on MEG helmet
End of explanation
evoked[0].plot_white(noise_cov)
Explanation: Look at the whitened evoked daat
End of explanation
src = data_path + '/subjects/spm/bem/spm-oct-6-src.fif'
bem = data_path + '/subjects/spm/bem/spm-5120-5120-5120-bem-sol.fif'
forward = mne.make_forward_solution(contrast.info, trans_fname, src, bem)
Explanation: Compute forward model
End of explanation
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = 'dSPM'
inverse_operator = make_inverse_operator(contrast.info, forward, noise_cov,
loose=0.2, depth=0.8)
# Compute inverse solution on contrast
stc = apply_inverse(contrast, inverse_operator, lambda2, method, pick_ori=None)
# stc.save('spm_%s_dSPM_inverse' % contrast.comment)
# Plot contrast in 3D with PySurfer if available
brain = stc.plot(hemi='both', subjects_dir=subjects_dir, initial_time=0.170,
views=['ven'], clim={'kind': 'value', 'lims': [3., 6., 9.]})
# brain.save_image('dSPM_map.png')
Explanation: Compute inverse solution
End of explanation |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.