Unnamed: 0 (int64, 0–16k) | text_prompt (stringlengths 110–62.1k) | code_prompt (stringlengths 37–152k)
---|---|---|
1,700 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lesson One
This lesson will go over digital data input and introduction to programming in Python.
Sensors
If someone were to ask you what color an apple is, you would look at the apple and tell them what color you saw. If they then asked you which smelled better, roses or lilies, you would use your nose to smell both of them. Humans get information about their surroundings through their five senses: sight, hearing, touch, smell, and taste. Similarly, robots and computers can collect information by using sensors. Sensors are the wardrobe between real life and the digital world. Whenever your favorite singer records a new album, she has to record it using a microphone, which is a sensor that can detect and record sound. When producers record the latest action movie, they use video cameras, which are sensors that can detect light.
With the class or in a group, discuss what other types of sensors a computer might use to get information.
Types of Sensors
There are two main types of sensors: digital and analog. A digital sensor is one that can record whether a signal is on or off, for example a button. On the other hand, an analog sensor is one that can record a spectrum of values, possibly infinitely many. One example is a temperature sensor. Suppose a temperature sensor can record values from $0^\circ$C to $30^\circ$C. One possible value it can record is $10^\circ$C, or $15^\circ$C, or $14.12^\circ$C, or $3.141592^\circ$C. If I were to keep on listing numbers, it would be impossible for me to list every single possible value that the sensor could record!
With the class, for each sensor that you listed in the exercise above, indicate whether it is digital or analog.
Introduction to Python
Here we will introduce you to Python, a high-level language that will allow us to send commands to the Raspberry Pi and PSoC.
First, Python can be used as a calculator to easily do arithmetic. In the code blocks (blocks with In[] that are shaded gray), you can write Python code and run it by pressing "shift + enter". In the example below, see how you can perform simple operations. Remember to press "shift + enter" or the "run" button in the toolbar.
Step1: We use the print statement in order to indicate that we want to display the answer to the screen. In the box below, write code to find the answers to the following.
5719*35914
7491484+49144192
7028004/17
4910441-104313
Step2: We can also use the print statement to print letters and words, as in the example below. In order to print a word, the word has to be surrounded by a pair of quotation marks (" ").
Step3: Write some code below to display "Hi, my name is (your name here)!".
Step4: One important ability that programming gives us is remembering past values using variables. We can use them to store values, then call on them later when we need them. For example, the example below stores a name, then prints it out.
Step5: However, if we then reassign the variable later on, it will remember the latest assignment.
Step6: Interfacing with the Raspberry Pi
Now we will see how we can use Python to read sensors with the Raspberry Pi. If you are unfamiliar with the Raspberry Pi, please read the Getting Started with Raspberry Pi lesson here.
Once you have set up the PSoC board with the Raspberry Pi, follow the instructions on how to connect the button to the Pi.
Step7: Remember how we dealt with numbers such as 1, 49, and 3 in the calculations section? Those belong to a class of values called integers, and words such as "Bambi", "Bobby", and "Hi" are values known as strings. Here we will introduce another type of value called booleans. Simply put, booleans are either true or false and are denoted as "True" and "False". We can print them like we did with integers and strings.
Step8: Booleans are useful because they are needed for control flow statements. Control flow statements specify the order in which individual statements are executed. The one covered here is the if/else structure. An example is shown below.
Step9: The first line in the program asks whether the statement in the parentheses is True or False. Since it is True, it runs the lines following the if statement and prints "THIS IS TRUE". Compare it to the example below.
Step10: In the example above, the boolean value in the "if" statement's parentheses is False, so it skips over it and runs the code following the "else" block. If statements don't just take in True or False; they can also take variables that contain boolean values. See the example below.
Step11: We set the variable "like_candy" to True and then pass it to the if statement. Since like_candy is True, we run the code following the if block.
Now we will see how we can use these tools to write a simple program that can interface with sensors. A button is a digital sensor, and one way we can read it is to call b.buttonPressed(). This is pretty complicated syntax, so don't worry about it for now. Just know that b.buttonPressed() returns a boolean value corresponding to whether the button is pressed or not. If the button is pressed, b.buttonPressed() is True; if the button is not pressed, it is False.
Do not press the button and run the following code.
Step12: Now press the button and run the code above again (just click on it and do "shift + enter").
Predict what the following code would do
```python
if(b.buttonPressed()) | Python Code:
print 40 + 2
print 7*6
print 67-25
print 798/19
Explanation: Lesson One
This lesson will go over digital data input and introduction to programming in Python.
Sensors
If someone were to ask you what color an apple is, you would look at the apple and tell them what color you saw. If they then asked you which smelled better, roses or lilies, you would use your nose to smell both of them. Humans get information about their surroundings through their five senses: sight, hearing, touch, smell, and taste. Similarly, robots and computers can collect information by using sensors. Sensors are the wardrobe between real life and the digital world. Whenever your favorite singer records a new album, she has to record it using a microphone, which is a sensor that can detect and record sound. When producers record the latest action movie, they use video cameras, which are sensors that can detect light.
With the class or in a group, discuss what other types of sensors a computer might use to get information.
Types of Sensors
There are two main types of sensors: digital and analog. A digital sensor is one that can record whether a signal is on or off, for example a button. On the other hand, an analog sensor is one that can record a spectrum of values, possibly infinitely many. One example is a temperature sensor. Suppose a temperature sensor can record values from $0^\circ$C to $30^\circ$C. One possible value it can record is $10^\circ$C, or $15^\circ$C, or $14.12^\circ$C, or $3.141592^\circ$C. If I were to keep on listing numbers, it would be impossible for me to list every single possible value that the sensor could record!
With the class, for each sensor that you listed in the exercise above, indicate whether it is digital or analog.
Introduction to Python
Here we will introduce you to Python, a high-level language that will allow us to send commands to the Raspberry Pi and PSoC.
First, Python can be used as a calculator to easily do arithmetic. In the code blocks (blocks with In[] that are shaded gray), you can write Python code and run it by pressing "shift + enter". In the example below, see how you can perform simple operations. Remember to press "shift + enter" or the "run" button in the toolbar.
End of explanation
#Write your code here
#Solution
print 5719*35914
print 7491484+49144192
print 7028004/17
print 4910441-104313
Explanation: We use the print statement in order to indicate that we want to display the answer to the screen. In the box below, write code to find the answers to the following.
5719*35914
7491484+49144192
7028004/17
4910441-104313
End of explanation
print "Hi, I Love Python"
Explanation: We can also use the print statement to print letters and words, as in the example below. In order to print a word, the word has to be surrounded by a pair of quotation marks (" ").
End of explanation
#Write your code here
#Solution
print "Hi, my name is Nom!"
Explanation: Write some code below to display "Hi, my name is (your name here)!".
End of explanation
name = "Billy"
print name
Explanation: One important ability that programming gives us is remembering past values using variables. We can use them to store values, then call on them later when we need them. For example, the example below stores a name, then prints it out.
End of explanation
name = "Bobby"
name = "Bambi"
print name
Explanation: However, if we then reassign the variable later on, it will remember the latest assignment.
End of explanation
# import the Python library that provides the buttonPressed() method
# b = Button()  # create a button object (hypothetical helper library)
Explanation: Interfacing with the Raspberry Pi
Now we will see how we can use Python to read sensors with the Raspberry Pi. If you are unfamiliar with the Raspberry Pi, please read the Getting Started with Raspberry Pi lesson here.
Once you have set up the PSoC board with the Raspberry Pi, follow the instructions on how to connect the button to the Pi.
End of explanation
yes = True
no = False
print yes
print no
Explanation: Remember how we dealt with numbers such as 1, 49, and 3 in the calculations section? Those belong to a class of values called integers, and words such as "Bambi", "Bobby", and "Hi" are values known as strings. Here we will introduce another type of value called booleans. Simply put, booleans are either true or false and are denoted as "True" and "False". We can print them like we did with integers and strings.
End of explanation
if(True):
print "THIS IS TRUE"
else:
print "THIS IS FALSE"
Explanation: Booleans are useful because they are needed for control flow statements. Control flow statements specify the order in which individual statements are executed. The one covered here is the if/else structure. An example is shown below.
End of explanation
if(False):
print "THIS IS TRUE"
else:
print "THIS IS FALSE"
Explanation: The first line in the program asks whether the statement in the parentheses is True or False. Since it is True, it runs the lines following the if statement and prints "THIS IS TRUE". Compare it to the example below.
End of explanation
like_candy = True
if(like_candy):
print "You like candy"
else:
print "You do not like candy"
Explanation: In the example above, the boolean value in the "if" statement's parentheses is False, so it skips over it and runs the code following the "else" block. If statements don't just take in True or False; they can also take variables that contain boolean values. See the example below.
End of explanation
#print b.buttonPressed()
Explanation: We set the variable "like_candy" to True and then pass it to the if statement. Since like_candy is True, we run the code following the if block.
Now we will see how we can use these tools to write a simple program that can interface with sensors. A button is a digital sensor, and one way we can read it is to call b.buttonPressed(). This is pretty complicated syntax, so don't worry about it for now. Just know that b.buttonPressed() returns a boolean value corresponding to whether the button is pressed or not. If the button is pressed, b.buttonPressed() is True; if the button is not pressed, it is False.
Do not press the button and run the following code.
End of explanation
#code here
Explanation: Now press the button and run the code above again (just click on it and do "shift + enter").
Predict what the following code would do
```python
if(b.buttonPressed()):
print "I AM IM-PRESSED"
else:
print "I AM NOT IM-PRESSED"
```
Write some code below that prints your name if the button is pressed, and your friend's name if it is not pressed.
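One possible solution sketch (the names are placeholders, and it assumes the same hypothetical b object and buttonPressed() method used above):
```python
if(b.buttonPressed()):
    print "Your Name"
else:
    print "Your Friend's Name"
```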
End of explanation |
1,701 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2. Categorical Predictors
The syntax for handling categorical predictors is different between standard regression models/two-stage-models (i.e.
Step1: Dummy-coded/Treatment contrasts
+++++++++++++++++++++++++++++++
Step2: Orthogonal Polynomial Contrasts
+++++++++++++++++++++++++++++++
Step3: Sum-to-zero contrasts
+++++++++++++++++++++
Step4: Scaling/Centering
+++++++++++++++++
Step5: Please refer to the patsy documentation <https
Step6: Dummy-coding factors
++++++++++++++++++++
First we'll use dummy-coding/treatment contrasts with 1.0 as the reference level. This will compute two coefficients
Step7: Polynomial contrast coding
++++++++++++++++++++++++++
Second we'll use orthogonal polynomial contrasts. This is accomplished using the
Step8: Custom contrasts
++++++++++++++++
Step9: User-created contrasts (without R)
++++++++++++++++++++++++++++++++++
Another option available to you is fitting a model with only your desired contrast(s) rather than a full set of k-1 contrasts. Contrary to how statistics is usually taught, you don't ever have to include a full set of k-1 contrasts for a k level factor! The upside to doing this is that you won't need to rely on R to compute anything for you (aside from the model fit), and you will have a model with exactly the number of terms as contrasts you desire, giving you complete control. The downside is that post-hoc tests will no longer be available (see tutorial 3 for more information on post-hoc tests), but it's unlikely you're doing post-hoc tests if you are computing a subset of specific contrasts anyway. This is also a useful approach if you don't want to use patsy's formula syntax with
Step10: Now we can use this variable as a continuous predictor without the need for the | Python Code:
# import basic libraries and sample data
import os
import pandas as pd
from pymer4.utils import get_resource_path
from pymer4.models import Lm
# IV3 is a categorical predictors with 3 levels in the sample data
df = pd.read_csv(os.path.join(get_resource_path(), "sample_data.csv"))
Explanation: 2. Categorical Predictors
The syntax for handling categorical predictors is different between standard regression models/two-stage-models (i.e. :code:Lm and :code:Lm2) and multi-level models (:code:Lmer) in :code:pymer4. This is because formula parsing is passed to R for :code:Lmer models, but handled by Python for other models.
Lm and Lm2 Models
:code:Lm and :code:Lm2 models use patsy <https://patsy.readthedocs.io/en/latest/>_ to parse model formulae. Patsy is very powerful and has built-in support for handling categorical coding schemes by wrapping a predictor in :code:C() within the model formula. Patsy can also perform some pre-processing such as scaling and standardization using special functions like :code:center(). Here are some examples.
End of explanation
# Estimate a model using Treatment contrasts (dummy-coding)
# with '1.0' as the reference level
# This is the default of the C() function
model = Lm("DV ~ C(IV3, levels=[1.0, 0.5, 1.5])", data=df)
print(model.fit())
Explanation: Dummy-coded/Treatment contrasts
+++++++++++++++++++++++++++++++
End of explanation
# Patsy can do this using the Poly argument to the
# C() function
model = Lm("DV ~ C(IV3, Poly)", data=df)
print(model.fit())
Explanation: Orthogonal Polynomial Contrasts
+++++++++++++++++++++++++++++++
End of explanation
# Similar to before but with the Sum argument
model = Lm("DV ~ C(IV3, Sum)", data=df)
print(model.fit())
Explanation: Sum-to-zero contrasts
+++++++++++++++++++++
End of explanation
# Moderation with IV2, but centering IV2 first
model = Lm("DV ~ center(IV2) * C(IV3, Sum)", data=df)
print(model.fit())
Explanation: Scaling/Centering
+++++++++++++++++
End of explanation
from pymer4.models import Lmer
# We're going to fit a multi-level logistic regression using the
# dichotomous DV_l variable and the same categorical predictor (IV3)
# as before
model = Lmer("DV_l ~ IV3 + (IV3|Group)", data=df, family="binomial")
Explanation: Please refer to the patsy documentation <https://patsy.readthedocs.io/en/latest/categorical-coding.html>_ for more details when working with categorical predictors in :code:Lm or :code:Lm2 models.
Lmer Models
:code:Lmer() models currently have support for handling categorical predictors in one of three ways based on how R's :code:factor() works (see the note at the end of this tutorial):
Dummy-coded factor levels (treatment contrasts) in which each model term is the difference between a factor level and a selected reference level
Orthogonal polynomial contrasts in which each model term is a polynomial contrast across factor levels (e.g. linear, quadratic, cubic, etc)
Custom contrasts for each level of a factor, which should be provided in the manner expected by R.
To make re-parameterizing models easier, factor codings are passed as a dictionary to the :code:factors argument of a model's :code:.fit(). This obviates the need for adjusting data-frame properties as in R. Note that this is different from :code:Lm and :code:Lm2 models above which expect factor codings in their formulae (because patsy does).
Each of these ways also enables you to easily compute post-hoc comparisons between factor levels, as well as interactions between continuous predictors and each factor level. See tutorial 3 for more on post-hoc tests.
End of explanation
print(model.fit(factors={"IV3": ["1.0", "0.5", "1.5"]}))
Explanation: Dummy-coding factors
++++++++++++++++++++
First we'll use dummy-coding/treatment contrasts with 1.0 as the reference level. This will compute two coefficients: 0.5 > 1.0 and 1.5 > 1.0.
End of explanation
print(model.fit(factors={"IV3": ["0.5", "1.0", "1.5"]}, ordered=True))
Explanation: Polynomial contrast coding
++++++++++++++++++++++++++
Second we'll use orthogonal polynomial contrasts. This is accomplished using the :code:ordered=True argument and specifying the order of the linear contrast in increasing order. R will automatically compute higher order polynomial contrasts that are orthogonal to this linear contrast. In this example, since there are 3 factor levels this will result in two polynomial terms: a linear contrast we specify below corresponding to 0.5 < 1.0 < 1.5 and an orthogonal quadratic contrast automatically determined by R, corresponding to 0.5 > 1 < 1.5
End of explanation
# Compare level '1.0' to the mean of levels '0.5' and '1.5'
# and let R determine the second contrast orthogonal to it
print(model.fit(factors={"IV3": {"1.0": 1, "0.5": -0.5, "1.5": -0.5}}))
Explanation: Custom contrasts
++++++++++++++++
:code:Lmer models can also take custom factor contrasts based on how they are expected by R (see the note at the end of this tutorial for how contrasts work in R). Remember that there can be at most k-1 model terms representing any k level factor without over-parameterizing a model. If you specify a custom contrast, R will generate a set of orthogonal contrasts for the rest of your model terms.
End of explanation
# Create a new column in the dataframe with a custom (linear) contrast
df = df.assign(IV3_custom_lin=df["IV3"].map({0.5: -1, 1.0: 0, 1.5: 1}))
print(df.head())
Explanation: User-created contrasts (without R)
++++++++++++++++++++++++++++++++++
Another option available to you is fitting a model with only your desired contrast(s) rather than a full set of k-1 contrasts. Contrary to how statistics is usually taught, you don't ever have to include a full set of k-1 contrasts for a k level factor! The upside to doing this is that you won't need to rely on R to compute anything for you (aside from the model fit), and you will have a model with exactly the number of terms as contrasts you desire, giving you complete control. The downside is that post-hoc tests will no longer be available (see tutorial 3 for more information on post-hoc tests), but it's unlikely you're doing post-hoc tests if you are computing a subset of specific contrasts anyway. This is also a useful approach if you don't want to use patsy's formula syntax with :code:Lm and :code:Lm2 as noted above.
This can be accomplished by creating new columns in your dataframe to test specific hypotheses and is trivial to do with pandas map <https://pandas.pydata.org/pandas-docs/version/0.25/reference/api/pandas.Series.map.html/> and assign <https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.assign.html/> methods. For example, here we manually compute a linear contrast by creating a new column in our dataframe and treating it as a continuous variable.
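As a further sketch, the same pattern handles any other single contrast you care about. The column name and weights below are made up purely for illustration (a quadratic-style pattern contrasting the middle level against the outer two):
```python
# Hypothetical second contrast: 0.5 and 1.5 vs. 1.0 (quadratic-style pattern)
df = df.assign(IV3_custom_quad=df["IV3"].map({0.5: 1, 1.0: -2, 1.5: 1}))
print(df[["IV3", "IV3_custom_lin", "IV3_custom_quad"]].head())
```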
End of explanation
# Estimate model
model = Lmer(
"DV_l ~ IV3_custom_lin + (IV3_custom_lin|Group)", data=df, family="binomial"
)
print(model.fit())
Explanation: Now we can use this variable as a continuous predictor without the need for the :code:factors argument. Notice how the z-stat and p-value of the estimate are the same as the linear polynomial contrast estimated above. The coefficients differ in scale only because R uses [~-0.707, ~0, ~0.707] for its polynomial contrasts rather than [-1, 0, 1] like we did.
End of explanation |
1,702 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Import the necessary packages to read in the data, plot, and create a linear regression model
Step1: 2. Read in the hanford.csv file
Step2: County
Step3: 3. Calculate the basic descriptive statistics on the data
Step4: 4. Calculate the coefficient of correlation (r) and generate the scatter plot. Does there seem to be a correlation worthy of investigation?
Step5: 5. Create a linear regression model based on the available data to predict the mortality rate given a level of exposure
Step6: 6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination)
Step7: 7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10 | Python Code:
import pandas as pd
import matplotlib.pyplot as plt # package for doing plotting (necessary for adding the line)
import statsmodels.formula.api as smf # package we'll be using for linear regression
%matplotlib inline
Explanation: 1. Import the necessary packages to read in the data, plot, and create a linear regression model
End of explanation
df = pd.read_csv('../data/hanford.csv')
Explanation: 2. Read in the hanford.csv file
End of explanation
df
Explanation: County: Name of county
Exposure: Index of exposure
Mortality: Cancer mortality per 100000 man-years
End of explanation
df.describe()
Explanation: 3. Calculate the basic descriptive statistics on the data
End of explanation
correlation = df.corr()
print(correlation)
df.plot(kind='scatter', x='Exposure', y='Mortality')
Explanation: 4. Calculate the coefficient of correlation (r) and generate the scatter plot. Does there seem to be a correlation worthy of investigation?
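To read off the single r value for the Exposure–Mortality pair directly (rather than scanning the full matrix), a small addition:
```python
# Correlation coefficient between the two columns of interest
r = df["Exposure"].corr(df["Mortality"])
print(r)
```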
End of explanation
lm = smf.ols(formula="Mortality~Exposure",data=df).fit()
lm.params
intercept, height = lm.params  # note: "height" here is the slope (the Exposure coefficient)
# Function that uses the fitted parameters to predict mortality from a given exposure
def simplest_predictor(exposure, height, intercept):
height = float(height)
intercept = float(intercept)
exposure = float(exposure)
return height*exposure+intercept
# Input the data
exposure = input("Please enter the exposure: ")
print("The mortality rate for your exposure lies at", simplest_predictor(exposure,height,intercept), ".")
Explanation: 5. Create a linear regression model based on the available data to predict the mortality rate given a level of exposure
End of explanation
df.plot(kind="scatter",x="Exposure",y="Mortality")
plt.plot(df["Exposure"],height*df["Exposure"]+intercept,"-",color="darkgrey") #we create the best fit line from the values in the fit model
Explanation: 6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination)
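The step also asks for r^2; the fitted statsmodels result already exposes it, so a one-line addition covers it:
```python
# Coefficient of determination (r^2) of the fitted model
print(lm.rsquared)
```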
End of explanation
def predicting_mortality_rate(exposure):
    return intercept + height * float(exposure)
print(predicting_mortality_rate(10))
Explanation: 7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10
End of explanation |
1,703 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Editing BEM surfaces in Blender
Sometimes when creating a BEM model the surfaces need manual correction because
of a series of problems that can arise (e.g. intersection between surfaces).
Here, we will see how this can be achieved by exporting the surfaces to the 3D
modeling program Blender <https
Step1: Exporting surfaces to Blender
In this tutorial, we are working with the MNE-Sample set, for which the
surfaces have no issues. To demonstrate how to fix problematic surfaces, we
are going to manually place one of the inner-skull vertices outside the
outer-skull mesh.
We then convert the surfaces to .obj
<https
Step2: Editing in Blender
We can now open Blender and import the surfaces. Go to File > Import >
Wavefront (.obj). Navigate to the conv folder and select the file you
want to import. Make sure to select the Keep Vert Order option. You can
also select the Y Forward option to load the axes in the correct direction
(RAS)
Step3: Back in Python, you can read the fixed .obj files and save them as
FreeSurfer .surf files. For the
Step4: Editing the head surfaces
Sometimes the head surfaces are faulty and require manual editing. We use
Step5: High-resolution head
We use | Python Code:
# Authors: Marijn van Vliet <[email protected]>
# Ezequiel Mikulan <[email protected]>
# Manorama Kadwani <[email protected]>
#
# License: BSD-3-Clause
import os
import os.path as op
import shutil
import mne
data_path = mne.datasets.sample.data_path()
subjects_dir = op.join(data_path, 'subjects')
bem_dir = op.join(subjects_dir, 'sample', 'bem', 'flash')
Explanation: Editing BEM surfaces in Blender
Sometimes when creating a BEM model the surfaces need manual correction because
of a series of problems that can arise (e.g. intersection between surfaces).
Here, we will see how this can be achieved by exporting the surfaces to the 3D
modeling program Blender <https://blender.org>_, editing them, and
re-importing them.
This tutorial is based on https://github.com/ezemikulan/blender_freesurfer by
Ezequiel Mikulan.
:depth: 2
End of explanation
# Put the converted surfaces in a separate 'conv' folder
conv_dir = op.join(subjects_dir, 'sample', 'conv')
os.makedirs(conv_dir, exist_ok=True)
# Load the inner skull surface and create a problem
# The metadata is empty in this example. In real study, we want to write the
# original metadata to the fixed surface file. Set read_metadata=True to do so.
coords, faces = mne.read_surface(op.join(bem_dir, 'inner_skull.surf'))
coords[0] *= 1.1 # Move the first vertex outside the skull
# Write the inner skull surface as an .obj file that can be imported by
# Blender.
mne.write_surface(op.join(conv_dir, 'inner_skull.obj'), coords, faces,
overwrite=True)
# Also convert the outer skull surface.
coords, faces = mne.read_surface(op.join(bem_dir, 'outer_skull.surf'))
mne.write_surface(op.join(conv_dir, 'outer_skull.obj'), coords, faces,
overwrite=True)
Explanation: Exporting surfaces to Blender
In this tutorial, we are working with the MNE-Sample set, for which the
surfaces have no issues. To demonstrate how to fix problematic surfaces, we
are going to manually place one of the inner-skull vertices outside the
outer-skull mesh.
We then convert the surfaces to .obj
<https://en.wikipedia.org/wiki/Wavefront_.obj_file>_ files and create a new
folder called conv inside the FreeSurfer subject folder to keep them in.
End of explanation
coords, faces = mne.read_surface(op.join(conv_dir, 'inner_skull.obj'))
coords[0] /= 1.1 # Move the first vertex back inside the skull
mne.write_surface(op.join(conv_dir, 'inner_skull_fixed.obj'), coords, faces,
overwrite=True)
Explanation: Editing in Blender
We can now open Blender and import the surfaces. Go to File > Import >
Wavefront (.obj). Navigate to the conv folder and select the file you
want to import. Make sure to select the Keep Vert Order option. You can
also select the Y Forward option to load the axes in the correct direction
(RAS):
<img src="file://../../_static/blender_import_obj/blender_import_obj1.jpg" width="800" alt="Importing .obj files in Blender">
For convenience, you can save these settings by pressing the + button
next to Operator Presets.
Repeat the procedure for all surfaces you want to import (e.g. inner_skull
and outer_skull).
You can now edit the surfaces any way you like. See the
Beginner Blender Tutorial Series
<https://www.youtube.com/playlist?list=PLxLGgWrla12dEW5mjO09kR2_TzPqDTXdw>
to learn how to use Blender. Specifically, part 2
<http://www.youtube.com/watch?v=RaT-uG5wgUw&t=5m30s> will teach you how to
use the basic editing tools you need to fix the surface.
<img src="file://../../_static/blender_import_obj/blender_import_obj2.jpg" width="800" alt="Editing surfaces in Blender">
Using the fixed surfaces in MNE-Python
In Blender, you can export a surface as an .obj file by selecting it and go
to File > Export > Wavefront (.obj). You need to again select the Y
Forward option and check the Keep Vertex Order box.
<img src="file://../../_static/blender_import_obj/blender_import_obj3.jpg" width="200" alt="Exporting .obj files in Blender">
Each surface needs to be exported as a separate file. We recommend saving
them in the conv folder and ending the file name with _fixed.obj,
although this is not strictly necessary.
In order to be able to run this tutorial script top to bottom, we here
simulate the edits you did manually in Blender using Python code:
End of explanation
# Read the fixed surface
coords, faces = mne.read_surface(op.join(conv_dir, 'inner_skull_fixed.obj'))
# Backup the original surface
shutil.copy(op.join(bem_dir, 'inner_skull.surf'),
op.join(bem_dir, 'inner_skull_orig.surf'))
# Overwrite the original surface with the fixed version
# In real study you should provide the correct metadata using ``volume_info=``
# This could be accomplished for example with:
#
# _, _, vol_info = mne.read_surface(op.join(bem_dir, 'inner_skull.surf'),
# read_metadata=True)
# mne.write_surface(op.join(bem_dir, 'inner_skull.surf'), coords, faces,
# volume_info=vol_info, overwrite=True)
Explanation: Back in Python, you can read the fixed .obj files and save them as
FreeSurfer .surf files. For the :func:mne.make_bem_model function to find
them, they need to be saved using their original names in the surf
folder, e.g. bem/inner_skull.surf. Be sure to first backup the original
surfaces in case you make a mistake!
End of explanation
# Load the fixed surface
coords, faces = mne.read_surface(op.join(bem_dir, 'outer_skin.surf'))
# Make sure we are in the correct directory
head_dir = op.dirname(bem_dir)
# Remember to backup the original head file in advance!
# Overwrite the original head file
#
# mne.write_head_bem(op.join(head_dir, 'sample-head.fif'), coords, faces,
# overwrite=True)
Explanation: Editing the head surfaces
Sometimes the head surfaces are faulty and require manual editing. We use
:func:mne.write_head_bem to convert the fixed surfaces to .fif files.
Low-resolution head
For EEG forward modeling, it is possible that outer_skin.surf would be
manually edited. In that case, remember to save the fixed version of
-head.fif from the edited surface file for coregistration.
End of explanation
# If ``-head-dense.fif`` does not exist, you need to run
# ``mne make_scalp_surfaces`` first.
# [0] because a list of surfaces is returned
surf = mne.read_bem_surfaces(op.join(head_dir, 'sample-head.fif'))[0]
# For consistency only
coords = surf['rr']
faces = surf['tris']
# Write the head as an .obj file for editing
mne.write_surface(op.join(conv_dir, 'sample-head.obj'),
coords, faces, overwrite=True)
# Usually here you would go and edit your meshes.
#
# Here we just use the same surface as if it were fixed
# Read in the .obj file
coords, faces = mne.read_surface(op.join(conv_dir, 'sample-head.obj'))
# Remember to backup the original head file in advance!
# Overwrite the original head file
#
# mne.write_head_bem(op.join(head_dir, 'sample-head.fif'), coords, faces,
# overwrite=True)
Explanation: High-resolution head
We use :func:mne.read_bem_surfaces to read the head surface files. After
editing, we again output the head file with :func:mne.write_head_bem.
Here we use -head.fif for speed.
End of explanation |
1,704 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: OpenCV
OpenCV is an open-source computer vision library. It comes packaged with many powerful computer vision tools, including image and video processing utilities. The library has a lot of the same functionality as the Python Image Library (PIL) but also includes some computer vision support that PIL doesn't include.
In this lesson we will learn how to use OpenCV to process images.
Load an Image
Start by downloading a small (640x360) version of this image of a car from Pixabay and then uploading it to this Colab.
Be sure to load the small 640x360 version of the image for this lab.
After loading the image, we can use matplotlib to view the image.
Step2: Color Ordering
Does something look off? Wasn't the car red when we downloaded the image?
OpenCV assumes the image is stored with blue-green-red (BGR) encoding instead of red-green-blue (RGB), but matplotlib assumes RGB. So, the reds and blues in the image are inverted when displayed.
Why does OpenCV assume images are BGR?
BGR was historically a popular storage format used by digital camera manufacturers and many software packages. At the time it was a good choice for a default. Defaults are difficult to change, so BGR is here to stay in OpenCV.
It doesn't really matter which format is used as long as the inputs to our model are consistent. However, it can be annoying to look at images with inverted colors. You just need to know how to tell OpenCV to fix it.
Luckily it is easy to change from BGR to RGB. We can just use cvtColor. There are scores of conversions possible.
Step3: Drawing on Images
Drawing Rectangles on Images
Suppose we want to draw a rectangle around objects we identify in an image. This can be done with the OpenCV rectangle method.
Step4: Drawing Text on Images
You can also draw text on images using putText.
Step5: Image Scaling
Models are trained with images scaled to a specific size and are sensitive to the input size being consistent. One solution is to simply scale the image to the required size using the resize method.
In the example below, we scale the image to 300x300 pixels. This creates a pretty distorted image, which might affect the training and predictions made by the model.
Step6: Cropping With Edge Detection
Another strategy is to crop the image using "edge detection", then scale the image after you have cropped it down. This strategy can be error-prone, but it can also be really helpful in isolating individual objects in an image.
In the case of the car image that we have loaded, cropping based on edge detection is both simple and effective. In images with more noise in the background, automatic cropping will be much more difficult.
To begin cropping, we'll rely on OpenCV's Canny detection algorithm.
Step7: The threshold parameter is a tuning value set to the images you are processing. More details can be found on Canny's Wikipedia page.
Let's see a few different thresholds in action.
Step8: None of these settings do too badly, though a threshold of 10 has a lot of noise, and a threshold of 500 barely outlines the car. We have to remember that our goal is to build a bounding box around the car and crop on that bounding box.
Another consideration is that the edge detection algorithm is often more effective if the image is grayscale and if there is some blurring.
First let's convert the image to grayscale.
Step9: And now we'll blur the image a bit.
Step10: Given this new grayscale and blurred image, we can run the edge detection algorithm again.
Step11: In this case our edges completely disappear at higher thresholds!
The threshold of 200 seemed to perform reasonably well in both situations, so let's stick with that.
Step12: We now need to find the bounding box around the item in the image that we want to crop. The first step in doing this is to utilize the findContours function. This function returns a list of contours found in the output of the Canny algorithm. The contours are defined by lists of $(x, y)$ values.
Step13: Given the contours, we can approximate the polygon that the contour forms and then create a bounding box around each contour.
Step14: Let's take a look at all of the bounding boxes on the car.
Step15: No single box seems to capture the entire car, but we can use the outer boundaries to find a unified box.
We'll use a very simple algorithm that simply finds the outer boundaries and doesn't care if the boxes overlap. In practice you'd likely want to use a more sophisticated algorithm.
Step16: And then we can draw the box.
Step17: The box does clip the car a bit, but for the most part, the car is within the box.
Now we need to crop the image to just the car itself.
Notice that the y coordinates index the rows of the image array (its height) and the x coordinates index the columns (its width), which is why the slice order is image[y1:y2, x1:x2].
Step18: Now we need to make the image into a square by padding the image. We find the longest side and then pad the shorter side with the necessary pixels to make the image a square.
To add the padding we use OpenCV's copyMakeBorder function.
Step19: And finally, we can scale the image down to a 300x300 image to feed to our model using OpenCV's resize function again.
Step20: Rotating Images
It is sometimes useful to rotate images before feeding them to your model. This increases the size of your training data, and it makes your model more resilient to subtle patterns that might exist within your base images.
For example, in a popular fashion image dataset, most boots are pointed in one direction and sandals in the other. When the model attempts to identify a boot pointed in the wrong direction, it will often predict 'sandal' based purely on the orientation of the object.
To flip an image on the horizontal or vertical axis, we can just use the flip function.
Here is an example of flipping an image on the horizontal axis.
Step21: And now the vertical axis.
Step22: And finally, both.
Step23: Resources
OpenCV Documentation on Edge Detection
Canny Edge Detector | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: <a href="https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/04_classification/06_images_and_video/01-open_cv.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2020 Google LLC.
End of explanation
import cv2 as cv
import matplotlib.pyplot as plt
image_file = 'car-49278_640.jpg'
image = cv.imread(image_file)
plt.imshow(image)
plt.show()
Explanation: OpenCV
OpenCV is an open-source computer vision library. It comes packaged with many powerful computer vision tools, including image and video processing utilities. The library has a lot of the same functionality as the Python Imaging Library (PIL) but also includes some computer vision support that PIL doesn't include.
In this lesson we will learn how to use OpenCV to process images.
Load an Image
Start by downloading a small (640x360) version of this image of a car from Pixabay and then uploading it to this Colab.
Be sure to load the small 640x360 version of the image for this lab.
After loading the image, we can use matplotlib to view the image.
End of explanation
image = cv.cvtColor(image, cv.COLOR_BGR2RGB)
plt.imshow(image)
plt.show()
Explanation: Color Ordering
Does something look off? Wasn't the car red when we downloaded the image?
OpenCV assumes the image is stored with blue-green-red (BGR) encoding instead of red-green-blue (RGB), but matplotlib assumes RGB. So, the reds and blues in the image are inverted when displayed.
Why does OpenCV assume images are BGR?
BGR was historically a popular storage format used by digital camera manufacturers and many software packages. At the time it was a good choice for a default. Defaults are difficult to change, so BGR is here to stay in OpenCV.
It doesn't really matter which format is used as long as the inputs to our model are consistent. However, it can be annoying to look at images with inverted colors. You just need to know how to tell OpenCV to fix it.
Luckily it is easy to change from BGR to RGB. We can just use cvtColor. There are scores of conversions possible.
End of explanation
left = 100
right = 580
top = 100
bottom = 300
r = 255
g = 0
b = 0
cv.rectangle(image, (left, top), (right, bottom), (r, g, b), thickness=2)
plt.imshow(image)
plt.show()
Explanation: Drawing on Images
Drawing Rectangles on Images
Suppose we want to draw a rectangle around objects we identify in an image. This can be done with the OpenCV rectangle method.
End of explanation
left = 150
top = 50
r = 0
g = 0
b = 0
scale = 1.0
thickness = 2
cv.putText(image, "It is a car!", (left, top), cv.FONT_HERSHEY_SIMPLEX, scale,
[r, g, b], thickness)
plt.imshow(image)
plt.show()
Explanation: Drawing Text on Images
You can also draw text on images using putText.
End of explanation
image_scaled = cv.resize(image, (300, 300))
plt.imshow(image_scaled)
plt.show()
Explanation: Image Scaling
Models are trained with images scaled to a specific size and are sensitive to the input size being consistent. One solution is to simply scale the image to the required size using the resize method.
In the example below, we scale the image to 300x300 pixels. This creates a pretty distorted image, which might affect the training and predictions made by the model.
End of explanation
threshold = 200
image = cv.imread(image_file)
image = cv.cvtColor(image, cv.COLOR_BGR2RGB)
edges = cv.Canny(image, threshold, threshold*2)
fig, (orig, edge) = plt.subplots(2)
orig.imshow(image, cmap='gray')
edge.imshow(edges, cmap='gray')
plt.show()
Explanation: Cropping With Edge Detection
Another strategy is to crop the image using "edge detection", then scale the image after you have cropped it down. This strategy can be error-prone, but it can also be really helpful in isolating individual objects in an image.
In the case of the car image that we have loaded, cropping based on edge detection is both simple and effective. In images with more noise in the background, automatic cropping will be much more difficult.
To begin cropping, we'll rely on OpenCV's Canny detection algorithm.
End of explanation
fig, (orig, t1, t50, t100, t200, t300, t500) = plt.subplots(7, figsize=(5, 25))
orig.imshow(image)
t1.imshow(cv.Canny(image, 10, 10*2), cmap='gray')
t50.imshow(cv.Canny(image, 50, 50*2), cmap='gray')
t100.imshow(cv.Canny(image, 100, 100*2), cmap='gray')
t200.imshow(cv.Canny(image, 200, 200*2), cmap='gray')
t300.imshow(cv.Canny(image, 300, 300*2), cmap='gray')
t500.imshow(cv.Canny(image, 500, 500*2), cmap='gray')
plt.show()
Explanation: The threshold parameter is a tuning value set to the images you are processing. More details can be found on Canny's Wikipedia page.
Let's see a few different thresholds in action.
End of explanation
img_gray = cv.cvtColor(image, cv.COLOR_RGB2GRAY)
_ = plt.imshow(img_gray, cmap='gray')
Explanation: None of these settings do too badly, though a threshold of 10 has a lot of noise, and a threshold of 500 barely outlines the car. We have to remember that our goal is to build a bounding box around the car and crop on that bounding box.
Another consideration is that the edge detection algorithm is often more effective if the image is grayscale and if there is some blurring.
First let's convert the image to grayscale.
End of explanation
img_gray = cv.blur(img_gray, (3,3))
_ = plt.imshow(img_gray, cmap='gray')
Explanation: And now we'll blur the image a bit.
End of explanation
fig, (orig, t1, t50, t100, t200, t300, t500) = plt.subplots(7, figsize=(5, 25))
orig.imshow(img_gray, cmap='gray')
t1.imshow(cv.Canny(img_gray, 10, 10*2), cmap='gray')
t50.imshow(cv.Canny(img_gray, 50, 50*2), cmap='gray')
t100.imshow(cv.Canny(img_gray, 100, 100*2), cmap='gray')
t200.imshow(cv.Canny(img_gray, 200, 200*2), cmap='gray')
t300.imshow(cv.Canny(img_gray, 300, 300*2), cmap='gray')
t500.imshow(cv.Canny(img_gray, 500, 500*2), cmap='gray')
plt.show()
Explanation: Given this new grayscale and blurred image, we can run the edge detection algorithm again.
End of explanation
img_canny = cv.Canny(img_gray, 200, 200*2)
plt.imshow(img_canny, cmap='gray')
plt.show()
Explanation: In this case our edges completely disappear at higher thresholds!
The threshold of 200 seemed to perform reasonably well in both situations, so let's stick with that.
End of explanation
contours, _ = cv.findContours(img_canny, cv.RETR_TREE,
cv.CHAIN_APPROX_SIMPLE)
print(len(contours))
print(contours[0])
Explanation: We now need to find the bounding box around the item in the image that we want to crop. The first step in doing this is to utilize the findContours function. This function returns a list of contours found in the output of the Canny algorithm. The contours are defined by lists of $(x, y)$ values.
End of explanation
bounding_boxes = []
contours_poly = []
for contour in contours:
polygon = cv.approxPolyDP(contour, 3, True)
contours_poly.append(polygon)
bounding_boxes.append(cv.boundingRect(polygon))
print(len(contours_poly))
print(len(bounding_boxes))
print(bounding_boxes)
Explanation: Given the contours, we can approximate the polygon that the contour forms and then create a bounding box around each contour.
End of explanation
import numpy as np
image_copy = np.copy(image)
for box in bounding_boxes:
cv.rectangle(image_copy,
(box[0], box[1]), (box[0]+box[2], box[1]+box[3]),
[0, 0, 255],
2)
_ = plt.imshow(image_copy)
Explanation: Let's take a look at all of the bounding boxes on the car.
End of explanation
x1, y1, x2, y2 = 640, 640, 0, 0
for box in bounding_boxes:
if box[0] < x1:
x1 = box[0]
if box[1] < y1:
y1 = box[1]
if box[0] + box[2] > x2:
x2 = box[0] + box[2]
if box[1] + box[3] > y2:
y2 = box[1] + box[3]
x1, y1, x2, y2
Explanation: No single box seems to capture the entire car, but we can use the outer boundaries to find a unified box.
We'll use a very simple algorithm that simply finds the outer boundaries and doesn't care if the boxes overlap. In practice you'd likely want to use a more sophisticated algorithm.
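As an aside (a sketch, not part of the original lesson), a more compact alternative stacks every contour's points and asks OpenCV for a single bounding box in one call:
```python
# Stack all contour points and compute one bounding box over all of them
all_points = np.vstack(contours)
x_u, y_u, w_u, h_u = cv.boundingRect(all_points)
print(x_u, y_u, x_u + w_u, y_u + h_u)
```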
End of explanation
import numpy as np
image_copy = np.copy(image)
cv.rectangle(image_copy,
(x1, y1), (x2, y2),
[0, 0, 255],
2)
_ = plt.imshow(image_copy)
Explanation: And then we can draw the box.
End of explanation
cropped_img = image[y1:y2, x1:x2]
_ = plt.imshow(cropped_img)
Explanation: The box does clip the car a bit, but for the most part, the car is within the box.
Now we need to crop the image to just the car itself.
Notice that the y coordinates index the rows of the image array (its height) and the x coordinates index the columns (its width), which is why the slice order is image[y1:y2, x1:x2].
End of explanation
height = cropped_img.shape[0]
width = cropped_img.shape[1]
left_pad, right_pad, top_pad, bottom_pad = 0, 0, 0, 0
if height > width:
left_pad = int((height-width) / 2)
right_pad = height-width-left_pad
elif width > height:
top_pad = int((width-height) / 2)
bottom_pad = width-height-top_pad
img_square = cv.copyMakeBorder(
cropped_img,
top_pad,
bottom_pad,
left_pad,
right_pad,
cv.BORDER_CONSTANT,
value=(255,255,255))
_ = plt.imshow(img_square)
Explanation: Now we need to make the image into a square by padding the image. We find the longest side and then pad the shorter side with the necessary pixels to make the image a square.
To add the padding we use OpenCV's copyMakeBorder function.
End of explanation
image_scaled = cv.resize(img_square, (300, 300))
plt.imshow(image_scaled)
plt.show()
Explanation: And finally, we can scale the image down to a 300x300 image to feed to our model using OpenCV's resize function again.
End of explanation
horizontal_img = cv.flip(image_scaled, 0)
plt.imshow(horizontal_img)
plt.show()
Explanation: Rotating Images
It is sometimes useful to rotate images before feeding them to your model. This increases the size of your training data, and it makes your model more resilient to subtle patterns that might exist within your base images.
For example, in a popular fashion image dataset, most boots are pointed in one direction and sandals in the other. When the model attempts to identify a boot pointed in the wrong direction, it will often predict 'sandal' based purely on the orientation of the object.
To flip an image on the horizontal or vertical axis, we can just use the flip function.
Here is an example of flipping an image on the horizontal axis.
End of explanation
vertical_img = cv.flip(image_scaled, 1)
plt.imshow(vertical_img)
plt.show()
Explanation: And now the vertical axis.
End of explanation
horizontal_and_vertical_img = cv.flip(image_scaled, -1)
plt.imshow(horizontal_and_vertical_img)
plt.show()
Explanation: And finally, both.
End of explanation
# Your answer goes here
Explanation: Resources
OpenCV Documentation on Edge Detection
Canny Edge Detector: Wikipedia, OpenCV Documentation
Exercises
Exercise 1
We have seen how to rotate an image on its horizontal and vertical axes. This technique works well for increasing the size of your training set and the capabilities of your model, while also providing resiliency to biases that might be hidden in your data.
It is also possible to rotate an image by different angles.
Use OpenCV to take our image_scaled image from above and rotate it so that the car is angled at 45 degrees. Do this for every corner of the squared image.
There should be eight images in total. The order of the images isn't important, but the variety is. There should be one image for each case below:
Car pointed to the top-left corner of the image
Upside-down car pointed to the top-left corner of the image
Car pointed to the top-right corner of the image
Upside-down car pointed to the top-right corner of the image
Car pointed to the bottom-left corner of the image
Upside-down car pointed to the bottom-left corner of the image
Car pointed to the bottom-right corner of the image
Upside-down car pointed to the bottom-right corner of the image
Display the images using matplotlib.pyplot.
Hint: Check out the getRotationMatrix2D and warpAffine methods.
Student Solution
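A possible sketch for one of the eight images: rotate the scaled image 45 degrees about its center, then combine with the flips shown earlier for the remaining orientations. The white border value is an assumption chosen to match the earlier padding:
```python
# Rotate the scaled image 45 degrees about its center (one of the eight cases)
(h, w) = image_scaled.shape[:2]
M = cv.getRotationMatrix2D((w // 2, h // 2), 45, 1.0)  # center, angle, scale
rotated = cv.warpAffine(image_scaled, M, (w, h), borderValue=(255, 255, 255))
plt.imshow(rotated)
plt.show()
```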
End of explanation |
1,705 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This model was developed by the Permamodel workgroup.
The basic theory is Kudryavtsev's method.
Reference
Step1: Spatially visualize active layer thickness
Step2: Spatially visualize mean annual ground temperature | Python Code:
import os,sys
sys.path.append('../../permamodel/')
from permamodel.components import bmi_Ku_component
from permamodel import examples_directory
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap, addcyclic
import matplotlib as mpl
print examples_directory
cfg_file = os.path.join(examples_directory, 'Ku_method_2D.cfg')
x = bmi_Ku_component.BmiKuMethod()
x.initialize(cfg_file)
y0 = x.get_value('datetime__start')
y1 = x.get_value('datetime__end')
for i in np.linspace(y0,y1,y1-y0+1):
x.update()
print i
x.finalize()
ALT = x.get_value('soil__active_layer_thickness')
TTOP = x.get_value('soil__temperature')
LAT = x.get_value('latitude')
LON = x.get_value('longitude')
SND = x.get_value('snowpack__depth')
LONS, LATS = np.meshgrid(LON, LAT)
#print np.shape(ALT)
#print np.shape(LONS)
Explanation: This model was developed by the Permamodel workgroup.
The basic theory is Kudryavtsev's method.
Reference:
Anisimov, O. A., Shiklomanov, N. I., & Nelson, F. E. (1997).
Global warming and active-layer thickness: results from transient general circulation models.
Global and Planetary Change, 15(3), 61-77.
End of explanation
fig=plt.figure(figsize=(8,4.5))
ax = fig.add_axes([0.05,0.05,0.9,0.85])
m = Basemap(llcrnrlon=-145.5,llcrnrlat=1.,urcrnrlon=-2.566,urcrnrlat=46.352,\
rsphere=(6378137.00,6356752.3142),\
resolution='l',area_thresh=1000.,projection='lcc',\
lat_1=50.,lon_0=-107.,ax=ax)
X, Y = m(LONS, LATS)
m.drawcoastlines(linewidth=1.25)
# m.fillcontinents(color='0.8')
m.drawparallels(np.arange(-80,81,20),labels=[1,1,0,0])
m.drawmeridians(np.arange(0,360,60),labels=[0,0,0,1])
clev = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
cs = m.contourf(X, Y, ALT, clev, cmap=plt.cm.PuBu_r, extend='both')
cbar = m.colorbar(cs)
cbar.set_label('m')
plt.show()
# print x._values["ALT"][:]
ALT2 = np.reshape(ALT, np.size(ALT))
ALT2 = ALT2[np.where(~np.isnan(ALT2))]
print 'Simulated ALT:'
print 'Max:', np.nanmax(ALT2),'m', '75% = ', np.percentile(ALT2, 75)
print 'Min:', np.nanmin(ALT2),'m', '25% = ', np.percentile(ALT2, 25)
plt.hist(ALT2)
Explanation: Spatially visualize active layer thickness:
End of explanation
fig2=plt.figure(figsize=(8,4.5))
ax2 = fig2.add_axes([0.05,0.05,0.9,0.85])
m2 = Basemap(llcrnrlon=-145.5,llcrnrlat=1.,urcrnrlon=-2.566,urcrnrlat=46.352,\
rsphere=(6378137.00,6356752.3142),\
resolution='l',area_thresh=1000.,projection='lcc',\
lat_1=50.,lon_0=-107.,ax=ax2)
X, Y = m2(LONS, LATS)
m2.drawcoastlines(linewidth=1.25)
# m.fillcontinents(color='0.8')
m2.drawparallels(np.arange(-80,81,20),labels=[1,1,0,0])
m2.drawmeridians(np.arange(0,360,60),labels=[0,0,0,1])
clev = np.linspace(start=-10, stop=0, num =11)
cs2 = m2.contourf(X, Y, TTOP, clev, cmap=plt.cm.seismic, extend='both')
cbar2 = m2.colorbar(cs2)
cbar2.set_label('Ground Temperature ($^\circ$C)')
plt.show()
# # print x._values["ALT"][:]
TTOP2 = np.reshape(TTOP, np.size(TTOP))
TTOP2 = TTOP2[np.where(~np.isnan(TTOP2))]
# Hist plot:
plt.hist(TTOP2)
mask = x._model.mask
print np.shape(mask)
plt.imshow(mask)
print np.nanmin(x._model.tot_percent)
Explanation: Spatially visualize mean annual ground temperature:
End of explanation |
1,706 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Run a Web Server in a Notebook
In this notebook, we show how to run a Tornado or Flask web server within a notebook, and access it from the public Internet. It sounds hacky, but the technique can prove useful
Step1: We can test our function by showing its output inline using the Image utility from IPython.
Step2: Create a Simple Dashboard Page
Now we'll craft a simple dashboard page that includes our plot. We don't have to do anything fancy here other than use an <img> tag and a <button>. But to demonstate what's possible, we'll make it pretty with Bootstrap and jQuery, and use a Jinja template that accepts the demo title as a parameter.
Note that the image tag points to a /plot resource on the server. Nothing dictates that we must fetch the plot image from our dashboard page. Another application could treat our web server as an API and use it in other ways.
Step3: We can now expose both the plotting function and the template via our web servers (Tornado first, then Flask) using the following endpoints
Step4: Next we import the Tornado models we need.
Step5: Then we define the request handlers for our two endpoints.
Step6: Now we define the application object which maps the web paths to the handlers.
Step7: Finally, we create a new HTTP server bound to a publicly exposed port on our notebook server (e.g., 9000) and using the self-signed certificate with corresponding key.
<div class="alert" style="border
Step8: To see the result, we need to visit the public IP address of our notebook server. For example, if our IP address is 192.168.11.10, we would visit https
Step9: Run Flask in a Notebook
The same technique works with Flask, albeit with different pros and cons. First, we need to install Flask since it does not come preinstalled in the notebook environment by default.
Step10: Now we import our Flask requirements, define our app, and create our route mappings.
Step11: Finally, we run the Flask web server. Flask supports the generation of an ad-hoc HTTP certificate and key so we don't need to explicitly put one on disk like we did in the case of Tornado.
Step12: Unlike in the Tornado case, the run command above blocks the notebook kernel from returning for as long as the web server is running. To stop the server, we need to interrupt the kernel (Kernel → Interrupt).
Run Flask in a Tornado WSGIContainer
If we are in love with Flask syntax, but miss the cool, non-blocking ability of Tornado, we can run the Flask application in a Tornado WSGIContainer like so.
Step13: And once we do, we can view the dashboard in a web browser even while executing cells in the notebook. When we're done, we can cleanup with the same logic as in the pure Tornado case. | Python Code:
import matplotlib.pyplot as plt
import pandas as pd
import numpy
import io
pd.options.display.mpl_style = 'default'
def plot_random_numbers(n=50):
'''
Plot random numbers as a line graph.
'''
fig, ax = plt.subplots()
# generate some random numbers
arr = numpy.random.randn(n)
ax.plot(arr)
ax.set_title('Random numbers!')
# fetch the plot bytes
output = io.BytesIO()
plt.savefig(output, format='png')
png = output.getvalue()
plt.close()
return png
Explanation: Run a Web Server in a Notebook
In this notebook, we show how to run a Tornado or Flask web server within a notebook, and access it from the public Internet. It sounds hacky, but the technique can prove useful:
To quickly prototype a REST API for an external web application to consume
To quickly expose a simple web dashboard to select external users
In this notebook, we'll demonstrate the technique using both Tornado and Flask as the web server. In both cases, the servers will listen for HTTPS connections and use a self-signed certificate. The servers will not authenticate connecting users / clients. (We want to keep things simple for this demo, but such authentication is an obvious next step in securing the web service for real-world use.)
Define the Demo Scenario
Suppose we have completed a notebook that, among other things, can plot a point-in-time sample of data from an external source. Assume we now want to surface this plot in a very simple UI that has:
The title of the demo
The current plot
A refresh button that takes a new sample and updates the plot
Create the Plotting Function
Suppose we have a function that generates a plot and returns the image as a PNG in a Python string.
End of explanation
from IPython.display import Image
Image(plot_random_numbers())
Explanation: We can test our function by showing its output inline using the Image utility from IPython.
End of explanation
import jinja2
page = jinja2.Template('''\
<!doctype html>
<html>
<head>
<link rel="stylesheet" type="text/css" href="//maxcdn.bootstrapcdn.com/bootstrap/3.3.2/css/bootstrap.min.css" />
<title>{{ title }}</title>
</head>
<body>
<nav class="navbar navbar-default">
<div class="container-fluid">
<div class="navbar-header">
<a class="navbar-brand" href="#">{{ title }}</a>
</div>
</div>
</nav>
<div class="container text-center">
<div class="row">
<img src="/plot" alt="Random numbers for a plot" />
</div>
<div class="row">
<button class="btn btn-primary">Refresh Plot</button>
</div>
</div>
<script type="text/javascript" src="//code.jquery.com/jquery-2.1.3.min.js"></script>
<script type="text/javascript">
console.debug('running');
$('button').on('click', function() {
$('img').attr('src', '/plot?'+(new Date().getTime()));
});
</script>
</body>
</html>''')
Explanation: Create a Simple Dashboard Page
Now we'll craft a simple dashboard page that includes our plot. We don't have to do anything fancy here other than use an <img> tag and a <button>. But to demonstrate what's possible, we'll make it pretty with Bootstrap and jQuery, and use a Jinja template that accepts the demo title as a parameter.
Note that the image tag points to a /plot resource on the server. Nothing dictates that we must fetch the plot image from our dashboard page. Another application could treat our web server as an API and use it in other ways.
End of explanation
%%bash
mkdir -p -m 700 ~/.ssh
openssl req -new -newkey rsa:4096 -days 365 -nodes -x509 \
-subj "/C=XX/ST=Unknown/L=Somewhere/O=None/CN=None" \
-keyout /home/notebook/.ssh/notebook.key -out /home/notebook/.ssh/notebook.crt
Explanation: We can now expose both the plotting function and the template via our web servers (Tornado first, then Flask) using the following endpoints:
/ will serve the dashboard HTML.
/plot will serve the plot PNG.
Run Tornado in a Notebook
First we create a self-signed certificate using the openssl command line library. If we had a real cert, we could use it instead.
End of explanation
import tornado.ioloop
import tornado.web
import tornado.httpserver
Explanation: Next we import the Tornado models we need.
End of explanation
class MainHandler(tornado.web.RequestHandler):
def get(self):
'''Renders the template with a title on HTTP GET.'''
self.finish(page.render(title='Tornado Demo'))
class PlotHandler(tornado.web.RequestHandler):
def get(self):
'''Creates the plot and returns it on HTTP GET.'''
self.set_header('content-type', 'image/png')
png = plot_random_numbers()
self.finish(png)
Explanation: Then we define the request handlers for our two endpoints.
End of explanation
application = tornado.web.Application([
(r"/", MainHandler),
(r"/plot", PlotHandler)
])
Explanation: Now we define the application object which maps the web paths to the handlers.
End of explanation
server = tornado.httpserver.HTTPServer(application, ssl_options = {
"certfile": '/home/notebook/.ssh/notebook.crt',
"keyfile": '/home/notebook/.ssh/notebook.key'
})
server.listen(9000, '0.0.0.0')
Explanation: Finally, we create a new HTTP server bound to a publicly exposed port on our notebook server (e.g., 9000) and using the self-signed certificate with corresponding key.
<div class="alert" style="border: 1px solid #aaa; background: radial-gradient(ellipse at center, #ffffff 50%, #eee 100%);">
<div class="row">
<div class="col-sm-1"><img src="https://knowledgeanyhow.org/static/images/favicon_32x32.png" style="margin-top: -6px"/></div>
<div class="col-sm-11">In IBM Knowledge Anyhow Workbench, ports 9000 through 9004 are exposed on a public interface. We can bind our webserver to any of those ports.</div>
</div>
</div>
End of explanation
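# Optional helper (not in the original notebook): print the dashboard URL to open in a
# browser. HOST_PUBLIC_IP is the environment variable mentioned below; the fallback is
# only useful when browsing from the notebook host itself.
import os
print('Dashboard: https://{}:9000/'.format(os.getenv('HOST_PUBLIC_IP', '127.0.0.1')))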
server.close_all_connections()
server.stop()
Explanation: To see the result, we need to visit the public IP address of our notebook server. For example, if our IP address is 192.168.11.10, we would visit https://192.168.11.10:9000.
<div class="alert" style="border: 1px solid #aaa; background: radial-gradient(ellipse at center, #ffffff 50%, #eee 100%);">
<div class="row">
<div class="col-sm-1"><img src="https://knowledgeanyhow.org/static/images/favicon_32x32.png" style="margin-top: -6px"/></div>
<div class="col-sm-11">In IBM Knowledge Anyhow Workbench, we can get our public IP address from an environment variable by executing the code below in our notebook:
<pre style="background-color: transparent">import os
os.getenv('HOST_PUBLIC_IP')</pre>
</div>
</div>
</div>
When we visit the web server in a browser and accept the self-signed cert warning, we should see the resulting dashboard. Clicking Refresh Plot in the dashboard shows us a new plot.
Note that since IPython itself is based on Tornado, we are able to run other cells and get output while the web server is running. In fact, we can even modify the plotting function and template and see the changes the next time we refresh the dashboard in our browser.
When we want to shut the server down, we execute the lines below. Restarting the notebook kernel has the same net effect.
End of explanation
!pip install flask
Explanation: Run Flask in a Notebook
The same technique works with Flask, albeit with different pros and cons. First, we need to install Flask since it does not come preinstalled in the notebook environment by default.
End of explanation
from flask import Flask, make_response
flask_app = Flask('flask_demo')
@flask_app.route('/')
def index():
'''Renders the template with a title on HTTP GET.'''
return page.render(title='Flask Demo')
@flask_app.route('/plot')
def get_plot():
'''Creates the plot and returns it on HTTP GET.'''
response = make_response(plot_random_numbers())
response.mimetype = 'image/png'
return response
Explanation: Now we import our Flask requirements, define our app, and create our route mappings.
End of explanation
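# Optional check (not in the original notebook): Flask's test client exercises both
# routes in-process, without binding a port or starting a server.
with flask_app.test_client() as c:
    print(c.get('/').status_code, c.get('/plot').mimetype)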
flask_app.run(host='0.0.0.0', port=9000, ssl_context='adhoc')
Explanation: Finally, we run the Flask web server. Flask supports the generation of an ad-hoc HTTP certificate and key so we don't need to explicitly put one on disk like we did in the case of Tornado.
End of explanation
from tornado.wsgi import WSGIContainer
server = tornado.httpserver.HTTPServer(WSGIContainer(flask_app), ssl_options = {
"certfile": '/home/notebook/.ssh/notebook.crt',
"keyfile": '/home/notebook/.ssh/notebook.key'
})
server.listen(9000, '0.0.0.0')
Explanation: Unlike in the Tornado case, the run command above blocks the notebook kernel from returning for as long as the web server is running. To stop the server, we need to interrupt the kernel (Kernel → Interrupt).
Run Flask in a Tornado WSGIContainer
If we are in love with Flask syntax, but miss the cool, non-blocking ability of Tornado, we can run the Flask application in a Tornado WSGIContainer like so.
End of explanation
server.close_all_connections()
server.stop()
Explanation: And once we do, we can view the dashboard in a web browser even while executing cells in the notebook. When we're done, we can cleanup with the same logic as in the pure Tornado case.
End of explanation |
1,707 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Randomized LASSO
This selection algorithm allows the researcher to form a model
after observing the subgradient of this optimization problem
$$
\text{minimize}_{\beta} \frac{1}{2} \|y-X\beta\|^2_2 + \sum_j \lambda_j |\beta_j| - \omega^T\beta + \frac{\epsilon}{2} \|\beta\|^2_2
$$
where $\omega \sim N(0,\Sigma)$ is Gaussian randomization with a covariance specified by the user. Data splitting
is (asymptotically) a special case of this randomization mechanism.
Step1: Randomization mechanism
By default, isotropic Gaussian randomization is chosen with variance chosen based on
mean diagonal of $X^TX$ and the standard deviation of $y$.
Step2: We see that variables [1,6,17,18] are chosen here.
Inference
For inference, the user can in principle choose any target jointly normal with $\nabla \ell(\beta^*;X,y) =
X^T(X\beta^*-y)$ where $\beta^*$ is the population minimizer under the model $(X_i,y_i) \overset{IID}{\sim} F$.
For convenience, we have provided some targets, though our functions expect boolean representation of the active set.
Step3: Given our target $\widehat{\theta}$ and its estimated covariance $\Sigma$
as well as its joint covariance $\tilde{\Gamma}$ with $\nabla \ell(\beta^*; X,y)$ we use the linear
decomposition
$$
\begin{aligned}
\nabla \ell(\beta^*; X,y) &= \nabla \ell(\beta^*; X,y) - \tilde{\Gamma} \Sigma^{-1} \widehat{\theta} + \tilde{\Gamma} \Sigma^{-1} \widehat{\theta} \\
&= N + \Gamma \widehat{\theta}.
\end{aligned}
$$
We have arranged things so that (pre-selection) $N$ is uncorrelated (and asympotically independent of) $\widehat{\theta}$.
We can then form univariate tests of $H_{0,j} | Python Code:
import numpy as np
from selectinf.randomized.api import lasso
from selectinf.tests.instance import gaussian_instance
np.random.seed(0) # for reproducibility
X, y = gaussian_instance(n=100,
p=20,
s=5,
signal=3,
equicorrelated=False,
rho=0.4,
random_signs=True)[:2]
n, p = X.shape
n, p
Explanation: Randomized LASSO
This selection algorithm allows the researcher to form a model
after observing the subgradient of this optimization problem
$$
\text{minimize}_{\beta} \frac{1}{2} \|y-X\beta\|^2_2 + \sum_j \lambda_j |\beta_j| - \omega^T\beta + \frac{\epsilon}{2} \|\beta\|^2_2
$$
where $\omega \sim N(0,\Sigma)$ is Gaussian randomization with a covariance specified by the user. Data splitting
is (asymptotically) a special case of this randomization mechanism.
End of explanation
L = lasso.gaussian(X, y, 2 * np.diag(X.T.dot(X)) * np.std(y))
signs = L.fit()
active_set = np.nonzero(signs != 0)[0]
active_set
Explanation: Randomization mechanism
By default, isotropic Gaussian randomization is chosen with variance chosen based on
mean diagonal of $X^TX$ and the standard deviation of $y$.
End of explanation
from selectinf.randomized.lasso import selected_targets
active_bool = np.zeros(p, dtype=bool)
active_bool[active_set] = True
(observed_target,
cov_target,
cov_target_score,
alternatives) = selected_targets(L.loglike, np.ones(n), active_bool)
Explanation: We see that variables [1,6,17,18] are chosen here.
Inference
For inference, the user can in principle choose any target jointly normal with $\nabla \ell(\beta^*;X,y) =
X^T(X\beta^*-y)$ where $\beta^*$ is the population minimizer under the model $(X_i,y_i) \overset{IID}{\sim} F$.
For convenience, we have provided some targets, though our functions expect boolean representation of the active set.
End of explanation
observed_target.shape
Xsel_inv = np.linalg.pinv(X[:, active_set])
np.testing.assert_allclose(observed_target, Xsel_inv.dot(y))
dispersion = np.linalg.norm(y - X[:, active_set].dot(Xsel_inv.dot(y)))**2 / (n - len(active_set))
np.testing.assert_allclose(cov_target, dispersion * Xsel_inv.dot(Xsel_inv.T))
np.testing.assert_allclose(cov_target_score, - X.T.dot(X)[:,active_set].dot(cov_target).T, rtol=np.inf, atol=1.e-10) # some zeros so relative
pivots, pvals, intervals = L.summary(observed_target,
cov_target, # \Sigma
cov_target_score, # \tilde{\Gamma}
alternatives,
ndraw=10000,
burnin=2000,
compute_intervals=True)
pvals
intervals
Explanation: Given our target $\widehat{\theta}$ and its estimated covariance $\Sigma$
as well as its joint covariance $\tilde{\Gamma}$ with $\nabla \ell(\beta^*; X,y)$ we use the linear
decomposition
$$
\begin{aligned}
\nabla \ell(\beta^*; X,y) &= \nabla \ell(\beta^*; X,y) - \tilde{\Gamma} \Sigma^{-1} \widehat{\theta} + \tilde{\Gamma} \Sigma^{-1} \widehat{\theta} \\
&= N + \Gamma \widehat{\theta}.
\end{aligned}
$$
We have arranged things so that (pre-selection) $N$ is uncorrelated with (and asymptotically independent of) $\widehat{\theta}$.
We can then form univariate tests of $H_{0,j}:\theta_j=0$ based on this conditional distribution.
As the form is unknown, we approximate it using MCMC with ndraw steps after a burnin of burnin steps.
End of explanation |
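# Optional illustration (not part of the selectinf API): the matrix Gamma = tilde{Gamma} Sigma^{-1}
# appearing in the linear decomposition above can be formed directly from the covariances
# computed earlier; N is then the score minus Gamma times the observed target.
Gamma = cov_target_score.T.dot(np.linalg.inv(cov_target))
print(Gamma.shape)  # (p, number of selected variables)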
1,708 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
update changes to pypi
```bash
update pypi
rm -r dist # remove old source files
python setup.py sdist # make source distribution
python setup.py bdist_wheel # make build distribution with .whl file
twine upload dist/ # pip install twine
```
Step1: DevNotes
Step2: The Code
Step11: Main
Step18: Unit Tests
Step19: Testing
Step20: Repository set-up
setup.py
Format based on minimal example
ReadTheDocs setuptools
Step21: register & upload to PyPi
Docs on python wheels (needed for pip)
recommended way to register and upload
bash
python setup.py register # Not recommended, but did it this way. See guide
Create source distribution
python setup.py sdist
Create build distribution (python wheels for pip)
bash
python setup.py bdist_wheel
Upload distribution
bash
twine upload dist/* # pip install twine
All together
bash
python setup.py sdist
python setup.py bdist_wheel
twine upload dist/*
README.md
Step22: MANIFEST.in
packaging.python manifest.ini docs
Step23: Repository testing
bash
python setup.py test
Step24: TravisCI
For continuous integration testing
Hitchhiker's guide to Python | Python Code:
%ls dist
Explanation: update changes to pypi
```bash
update pypi
rm -r dist # remove old source files
python setup.py sdist # make source distribution
python setup.py bdist_wheel # make build distribution with .whl file
twine upload dist/ # pip install twine
```
End of explanation
import os.path
# create folder if doesn't exist
folders = ['ruxitools', 'tests']
for x in folders:
os.makedirs(x, exist_ok=True)
!tree | grep -v __pycache__ | grep -v .cpython #hides grep'd keywords
Explanation: DevNotes: XyDB.py
created: Fri Oct 21 13:16:57 CDT 2016
author: github.com/ruxi
This notebook was used to construct this repo
Purpose
XyDB is a database-like containers for derivative data
The intended usecase of XyDB is to store dervative data in a database-like
container and bind it as an attribute to the source data. It solves the
problem of namespace pollution by confining intermediate data forms to
the original dataset in a logical and structured manner. The limitation
of this object is that it exists in memory only. For more persistent storage
solutions, its recommended to use an actual database library such as
blaze, mongoDB, or SQLite. Conversely, the advantage is residual information
is not left over after a session.
Specifications
keys (list): list keywords for all records (names for intermediate data configurations)
push (func): Adds record to database
pull (func): Pulls record from database (ducktyped)
Records are accessible via attributes by keyname
Returns dictionary records
pull.<config keyword>
show (func): Show record from database. (ducktyped)
Records are accessible via attributes by keyname
Returns namedtuple objects based on db records.
show.<config keyword>.<attribute name>
Project architecture
Structure of the repository according to The Hitchhiker's Guide to Python
Target directory sturcture
|-- LISCENSE
|-- README.md
|-- setup.py
|-- requirements.txt
|-- Makefile
|-- .gitignore
|-- docs/
|-- notebooks/
|-- ruxitools/
|-- __init__.py
|-- xydb/
|-- __init__.py
|-- XyDB.py
|-- test/
|-- __init__.py
|-- test_XyDB.py
Writing guides
Python docstrings styles
Some resources on documentation conventions
Programs:
bash
pip install sphinxcontrib-napoleon
Guides:
* Google Python Style Guide
Unit Tests
Guides on how to write unit tests:
http://nedbatchelder.com/text/test0.html
https://cgoldberg.github.io/python-unittest-tutorial/
Packaging and distribution
packaging.python.org
python-packaging readthedocs
setup.py
See minimal example
Module implmentation
Create directory tree
End of explanation
# %load ruxitools/__init__.py
Explanation: The Code
End of explanation
# %load ruxitools/xydb.py
#!/usr/bin/env python
__author__ = "github.com/ruxi"
__copyright__ = "Copyright 2016, ruxitools"
__email__ = "[email protected]"
__license__ = "MIT"
__status__ = "Development"
__version__ = "0.1"
from collections import namedtuple
class XyDB(object):
XyDB is a database-like container for intermediate data
The intended use case of XyDB is to store intermediate data in a database-like
container and bind it as an attribute to the source data. It solves the
problem of namespace pollution by confining intermediate data forms to
the original dataset in a logical and structured manner. The limitation
of this object is that it exists in memory only. For more persistent storage
solutions, its recommended to use an actual database library such as
blaze, mongoDB, or SQLite. Conversely, the advantage is residual information
is not left over after a session.
Example:
Defined a namedtuple for input validation, then assign this function
as an attribute of your source data object, usually a pandas dataframe.
import XyDB
from collections import namedtuple
# define input validation schema
input_val = namedtuple("data", ['key','desc', 'X', 'y'])
# define data
myData = pd.DataFrame()
# assign class function
myData.Xy = XyDB(input_val, verbose = True)
# add data to DB
myRecord = dict(key='config1'
, desc='dummydata'
, X=[0,1,0]
, y=['a','b','a'])
myData.Xy.push(**myRecord)
# show data
myData.Xy.show.config1.desc
def __init__(self, schema = None, verbose=True, welcome=True):
Arguments:
schema (default: None | NamedTuple):
Accepts a NamedTuple subclass with a "key" field
which is used for input validation when records
are "push"ed
verbose (default: True | boolean)
If false, suppresses print commands. Including this message
welcome (default: True | boolean)
Suppresses printing of the docstring upon initialization
self._db = {}
self._show = lambda: None
self._pull = lambda: None
self._verbose = verbose
# print docstring
if welcome:
print (self.__doc__)
# Input Validation (optional) can be spec'd out by NameTuple.
# Input NamedTuple requires 'key' field
self._schema = False if schema is None else schema
if self._schema:
if "key" not in dir(self._schema):
raise Exception("namedtuple must have 'key' as a field")
#@db.setter
def push(self, key, *args, **kwargs):
Adds records (dict) to database
if not(type(key)==str):
raise Exception('key must be string')
# Create database record entry (a dict)
if self._schema: # is user-defined
self._input_validator = self._schema
record = self._input_validator(key, *args,**kwargs)
else: # the schema is inferred from every push
entry_dict = dict(key=key, *args,**kwargs)
self._input_validator = namedtuple('Data', list(entry_dict.keys()))
record = self._input_validator(**entry_dict)
# The record is added to the database.
self._db[record.key] = record
if self._verbose:
print('Record added {}'.format(record.key))
self._update()
def _update(self):
updates dynamic attribute access for self.show & self.pull
for key in self.keys:
# self.show.<key> = namedtuple
setattr(self._show
, key
, self._db[key]
)
# self.pull.<key> = dict
setattr(self._pull,
key,
self.db[key]._asdict()
)
@property
def db(self):
Intermediate data accessible by keyword. Returns a dict
return self._db
@property
def keys(self):
list configuration keywords
Returns:
list
return self.db.keys()
@property
def show(self):
Show record from database. Accessible by attribute via keyname
Returns:
namedtuple objects
Usage:
show.<config keyword>.<attribute name>
return self._show
@property
def pull(self):
Pull record from database. Accessible by attribute via keyname
Returns:
dictionary
Usage:
pull.<config keyword>
return self._pull
Explanation: Main
End of explanation
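# Quick smoke test (not part of the original repo): exercise XyDB directly.
xy = XyDB(verbose=False, welcome=False)
xy.push(key='demo', X=[2, 4, 6], desc='x doubled')
print(xy.keys)            # dict_keys(['demo'])
print(xy.show.demo.desc)  # 'x doubled'
print(xy.pull.demo)       # the same record in plain dict form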
# %load tests/test_xydb.py
__author__ = "github.com/ruxi"
__copyright__ = "Copyright 2016, ruxitools"
__email__ = "[email protected]"
__license__ = "MIT"
__status__ = "Development"
__version__ = "0.1"
import unittest
import collections
from ruxitools.xydb import XyDB
class TestXydb(unittest.TestCase):
test if unittest works
############
# set-up #
############
def dummycase(self):
# dummy record
key = 'dummy0'
desc = 'test case'
X = [1,2,3,4]
y = ['a','b','c','d']
return dict(key=key, desc=desc, X=X, y=y)
def badcase_nokey(self):
desc = 'test case'
X = [1,2,3,4]
return dict(desc=desc, X=X)
def badcase_KeyNotStr(self):
key = [1,2,3,4]
X = "x is a str"
return dict(jey=key, X=X)
def mockschema(self):
input_validation = collections.namedtuple("Xy", ['key','desc', 'X', 'y'])
return input_validation
def push_record_noschema(self, record):
xy = XyDB(verbose=False)
xy.push(**record)
return xy
def push_record_w_schema(self, record, schema):
xy = XyDB(schema=schema, verbose=False)
xy.push(**record)
return xy
###########
# TESTS #
###########
def test_positive_control(self):
self.assertTrue(True)
def test_init_args(self):
xy = XyDB()
xy = XyDB(verbose=False)
xy = XyDB(verbose=True)
def test_PushRecord_NoSchema(self):
record = self.dummycase()
self.push_record_noschema(record)
def test_PushRecord_WithSchema(self):
record = self.dummycase()
schema = self.mockschema()
self.push_record_w_schema(record=record, schema=schema)
def test_PushRecord_NoKey(self):
negative test
record = self.badcase_nokey()
with self.assertRaises(TypeError):
self.push_record_noschema(record)
def test_PushRecord_KeyNotStr(self):
negative test
record = self.badcase_KeyNotStr()
with self.assertRaises(TypeError):
self.push_record_noschema(record)
def test_ShowRecord(self):
record = self.dummycase()
xy = self.push_record_noschema(record)
getattr(xy.show, record['key'])
def test_ShowRecord_NonExistKey(self):
negative test
record = self.dummycase()
key = record['key'] + "spike"
xy = self.push_record_noschema(record)
with self.assertRaises(KeyError):
getattr(xy.show, record[key])
def test_PullRecord(self):
record = self.dummycase()
xy = self.push_record_noschema(record)
getattr(xy.pull, record['key'])
def test_PullRecord_NonExistKey(self):
negative test
record = self.dummycase()
key = record['key'] + "spike"
xy = self.push_record_noschema(record)
with self.assertRaises(KeyError):
getattr(xy.pull, record[key])
def test_keys_NoRecords(self):
is dict_keys returned
xy = XyDB()
xy.keys
self.assertTrue(type(xy.keys)==type({}.keys())
, "Expecting dict_keys, instead got {}".format(type(xy.keys))
)
def test_keys_WithRecords(self):
record = self.dummycase()
xy = XyDB()
xy.push(**record)
xy.keys
def test_db_IsDict(self):
record = self.dummycase()
xy = self.push_record_noschema(record)
self.assertTrue(type(xy.db)==dict)
def test_otherattributes(self):
record = self.dummycase()
schema = self.mockschema()
xy = self.push_record_w_schema(record, schema)
xy._update
if __name__ == '__main__':
unittest.main()
Explanation: Unit Tests
End of explanation
!nosetests --tests=tests --with-coverage #conda install nose, coverage
!coverage report -mi #conda install nose, coverage
Explanation: Testing
End of explanation
# %load setup.py
from setuptools import setup, find_packages
import sys
if sys.version_info[:2]<(3,5):
sys.exit("ruxitools requires python 3.5 or higher")
# defining variables
install_requires = []
tests_require = [
'mock'
, 'nose'
]
# How mature is this project? Common values are
# 3 - Alpha
# 4 - Beta
# 5 - Production/Stable
classifier = [
"Programming Language :: Python",
'Development Status :: 3 - Alpha',
'Intended Audience :: Developers',
'Intended Audience :: Science/Research',
'License :: OSI Approved :: MIT License',
'Natural Language :: English',
'Operating System :: Unix',
'Programming Language :: Python :: 3 :: Only'
]
keywords='ruxi tools ruxitools xydb intermediate data containers',
# setup
setup(
name='ruxitools'
, version="0.2.6"
, description="Misc general use functions. XyDB: container fo intermediate data. "
, url="http://github.com/ruxi/tools"
, author="ruxi"
, author_email="[email protected]"
, license="MIT"
, packages=find_packages()#['ruxitools']
, tests_require=tests_require
, test_suite= 'nose.collector'
, classifiers = classifier
, keywords=keywords
)
Explanation: Repository set-up
setup.py
Format based on minimal example
ReadTheDocs setuptools
End of explanation
# %load README.md
# ruxitools
Miscellaneous tools.
# Installation
method1:
pip install -e git+https://github.com/ruxi/tools.git
method2:
git clone https://github.com/ruxi/tools.git
cd tools
python setup.py install
python setup.py tests
# Modules
## XyDB: a container for intermediate data
XyDB is used to organize intermediate data by attaching it to the source dataset.
It solves the problem of namespace pollution, especially if many intermediate
datasets are derived from the source.
Usage:
```python
from ruxitools.xydb import XyDB
# attach container to source data
mydata.Xy = XyDB()
# store intermediate info & documentation into the containers
mydata.Xy.push(dict(
key="config1" # keyword
, X=[mydata*2] # intermediate data
, desc = "multiply by 2" # description of operation
))
# To retrieve intermediate data as a dict:
mydata.Xy.pull.config1
# To retrieve intermediate data as attributes:
mydata.Xy.show.config1.desc
# To show keys
mydata.Xy.keys
```
# TODO:
requirements.txt - not sure if it works
Explanation: register & upload to PyPi
Docs on python wheels (needed for pip)
recommended way to register and upload
bash
python setup.py register # Not recommended, but did it this way. See guide
Create source distribution
python setup.py sdist
Create build distribution (python wheels for pip)
bash
python setup.py bdist_wheel
Upload distribution
bash
twine upload dist/* # pip install twine
All together
bash
python setup.py sdist
python setup.py bdist_wheel
twine upload dist/*
README.md
End of explanation
# %load MANIFEST.in
include README.md
include LICENSE
# %load LICENSE
MIT License
Copyright (c) 2016 github.com/ruxi
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Explanation: MANIFEST.in
packaging.python manifest.ini docs
End of explanation
#!python setup.py test
Explanation: Repository testing
bash
python setup.py test
End of explanation
# %load .travis.yml
os: linux
language: python
python:
- 3.5
# command to install dependencies
install:
- "pip install -r requirements.txt"
- "pip install ."
# command to run tests
script: nosetests
Explanation: TravisCI
For continuous integration testing
Hitchhiker's guide to Python: Travis-CI
travisCI official docs
End of explanation |
1,709 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Class 18
Step1: Evaluation
Now we want to examine the statistical properties of the simulated model | Python Code:
# 1. Input model parameters and print
parameters = pd.Series()
parameters['rho'] = .75
parameters['sigma'] = 0.006
parameters['alpha'] = 0.35
parameters['delta'] = 0.025
parameters['beta'] = 0.99
print(parameters)
# 2. Compute the steady state of the model directly
A = 1
K = (parameters.alpha*A/(parameters.beta**-1+parameters.delta-1))**(1/(1-parameters.alpha))
C = A*K**parameters.alpha - parameters.delta*K
Y = A*K**parameters.alpha
I = parameters.delta*K
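# Optional check (not in the original notebook): the steady state should satisfy the
# resource constraint (C + delta*K = A*K**alpha) and the Euler equation exactly.
print('resource constraint gap:', A*K**parameters.alpha - parameters.delta*K - C)
print('Euler equation gap:', parameters.beta*(parameters.alpha*A*K**(parameters.alpha-1) + 1 - parameters.delta) - 1)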
# 3. Define a function that evaluates the equilibrium conditions
def equilibrium_equations(variables_forward,variables_current,parameters):
# Parameters
p = parameters
# Variables
fwd = variables_forward
cur = variables_current
# Resource constraint
resource = cur.a*cur.k**p.alpha + (1-p.delta)* cur.k - fwd.k - cur.c
# Exogenous tfp
tfp_proc = p.rho*np.log(cur.a) - np.log(fwd.a)
# Euler equation
euler = p.beta*(p.alpha*fwd.a*fwd.k**(p.alpha-1) + 1 - p.delta)/fwd.c - 1/cur.c
# Production function
production = cur.a*cur.k**p.alpha - cur.y
# Capital evolution
capital_evolution = cur.i + (1-p.delta)*cur.k - fwd.k
# Stack equilibrium conditions into a numpy array
return np.array([
resource,
tfp_proc,
euler,
production,
capital_evolution
])
# 4. Initialize the model
model = ls.model(equations = equilibrium_equations,
nstates=2,
varNames=['a','k','y','c','i'], # Any order as long as the state variables are named first
shockNames=['eA','eK'], # Name a shock for each state variable *even if there is no corresponding shock in the model*
parameters = parameters)
# 5. Set the steady state of the model directly. Input vars in same order as varNames above
model.set_ss([A,K,Y,C,I])
# 6. Find the log-linear approximation around the non-stochastic steady state and solve
model.approximate_and_solve()
# 7(a) Compute impulse responses and print the computed impulse responses
model.impulse(T=41,t0=5,shock=None,percent=True)
print(model.irs['eA'].head(10))
# 8(b) Plot the computed impulse responses to a TFP shock
fig = plt.figure(figsize=(12,12))
ax1 = fig.add_subplot(3,2,1)
model.irs['eA'][['y','i','k']].plot(lw=5,alpha=0.5,grid=True,ax = ax1).legend(loc='upper right',ncol=4)
ax1.set_title('Output, investment, capital')
ax1.set_ylabel('% dev')
ax1.set_xlabel('quarters')
ax2 = fig.add_subplot(3,2,2)
model.irs['eA'][['a','eA']].plot(lw=5,alpha=0.5,grid=True,ax = ax2).legend(loc='upper right',ncol=2)
ax2.set_title('TFP and TFP shock')
ax2.set_ylabel('% dev')
ax2.set_xlabel('quarters')
ax3 = fig.add_subplot(3,2,3)
model.irs['eA'][['y','c']].plot(lw=5,alpha=0.5,grid=True,ax = ax3).legend(loc='upper right',ncol=4)
ax3.set_title('Output and consumption')
ax3.set_ylabel('% dev')
ax3.set_xlabel('quarters')
plt.tight_layout()
# 9(a) Compute stochastic simulation and print the simulated values
model.stoch_sim(seed=192,covMat= [[parameters['sigma']**2,0],[0,0]])
print(model.simulated.head(10))
# 9(b) Plot the computed stochastic simulation
fig = plt.figure(figsize=(12,4))
ax1 = fig.add_subplot(1,2,1)
model.simulated[['k','c','y','i']].plot(lw=5,alpha=0.5,grid=True,ax = ax1).legend(loc='upper right',ncol=4)
ax2 = fig.add_subplot(1,2,2)
model.simulated[['eA','a']].plot(lw=5,alpha=0.5,grid=True,ax = ax2).legend(loc='upper right',ncol=2)
Explanation: Class 18: A Centralized Real Business Cycle Model without Labor (Continued)
The Model with Output and Investment
Setup
A representative household lives for an infinite number of periods. The expected present value of lifetime utility to the household from consuming $C_0, C_1, C_2, \ldots $ is denoted by $U_0$:
\begin{align}
U_0 & = \log (C_0) + \beta E_0 \log (C_1) + \beta^2 E_0 \log (C_2) + \cdots\
& = E_0\sum_{t = 0}^{\infty} \beta^t \log (C_t),
\end{align}
where $0<\beta<1$ is the household's subjective discount factor. $E_0$ denotes the expectation with respect to all information available as of date 0.
The household enters period 0 with capital $K_0>0$. Production in period $t$:
\begin{align}
F(A_t,K_t) & = A_t K_t^{\alpha}
\end{align}
where TFP $A_t$ is stochastic:
\begin{align}
\log A_{t+1} & = \rho \log A_t + \epsilon_{t+1}
\end{align}
Capital depreciates at the constant rate $\delta$ per period and so the household's resource constraint in each period $t$ is:
\begin{align}
C_t + K_{t+1} & = A_t K_{t}^{\alpha} + (1-\delta)K_t
\end{align}
Define output and investment:
\begin{align}
Y_t & = A_t K_{t}^{\alpha} \
I_t & = K_{t+1} - (1-\delta)K_t
\end{align}
Optimization problem
In period 0, the household solves:
\begin{align}
& \max_{C_0,K_1} \; E_0\sum_{t=0}^{\infty}\beta^t\log (C_t) \
& \; \; \; \; \; \; \; \; \text{s.t.} \; \; \; \; C_t + K_{t+1} = A_t K_{t}^{\alpha} + (1-\delta)K_t
\end{align}
which can be written as a choice of $K_1$ only:
\begin{align}
\max_{K_1} \; E_0\sum_{t=0}^{\infty}\beta^t\log \left( A_t K_{t}^{\alpha} + (1-\delta)K_t - K_{t+1}\right)
\end{align}
Equilibrium
So given $K_0>0$ and $A_0$, the equilibrium paths for consumption, capital, and TFP are described described by:
\begin{align}
\frac{1}{C_t} & = \beta E_t \left[\frac{\alpha A_{t+1}K_{t+1}^{\alpha - 1} + 1 - \delta}{C_{t+1}}\right]\
C_t + K_{t+1} & = A_{t} K_t^{\alpha} + (1-\delta) K_t\
Y_t & = A_t K_{t}^{\alpha} \
I_t & = K_{t+1} - (1-\delta)K_t\
\log A_{t+1} & = \rho \log A_t + \epsilon_{t+1}
\end{align}
Calibration
For computation purposes, assume the following values for the parameters of the model:
\begin{align}
\beta & = 0.99\
\rho & = .75\
\sigma & = 0.006\
\alpha & = 0.35\
\delta & = 0.025
\end{align}
Steady State
The steady state:
\begin{align}
A & = 1\
K & = \left(\frac{\alpha A}{\beta^{-1} - 1 + \delta} \right)^{\frac{1}{1-\alpha}}\
C & = AK^{\alpha} - \delta K\
Y & = AK^{\alpha} \
I & = \delta K
\end{align}
End of explanation
# Compute the standard deviations of Y, C, and I in model.simulated
print(model.simulated[['y','c','i']].std())
# Compute the coefficients of correlation for Y, C, and I
print(model.simulated[['y','c','i']].corr())
Explanation: Evaluation
Now we want to examine the statistical properties of the simulated model
End of explanation |
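# A common follow-up (not in the original notebook): volatility of consumption and
# investment relative to output, a standard business-cycle comparison.
print(model.simulated[['c','i']].std()/model.simulated['y'].std())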
1,710 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial on Causal Inference and its Connections to Machine Learning (Using DoWhy+EconML)
This tutorial presents a walk-through on using DoWhy+EconML libraries for causal inference. Along the way, we'll highlight the connections to machine learning---how machine learning helps in building causal effect estimators, and how causal reasoning can be help build more robust machine learning models.
Examples of data science questions that are fundamentally causal inference questions
Step1: I. Modeling
The first step is to encode our domain knowledge into a causal model, often represented as a graph. The final outcome of a causal inference analysis depends largely on the input assumptions, so this step is quite important. To estimate the causal effect, most common problems involve specifying two types of variables
Step2: To visualize the graph, we can write,
Step3: In general, you can specify a causal graph that describes the mechanisms of the data-generating process for a given dataset. Each arrow in the graph denotes a causal mechanism
Step4: II. Identification
Both ways of providing domain knowledge (either through named variable sets of confounders and instrumental variables, or through a causal graph) correspond to an underlying causal graph. Given a causal graph and a target quantity (e.g., effect of A on B), the process of identifcation is to check whether the target quantity can be estimated given the observed variables. Importantly, identification only considers the names of variables that are available in the observed data; it does not need access to the data itself. Related to the two kinds of variables above, there are two main identification methods for causal inference.
Backdoor criterion (or more generally, adjustment sets)
Step5: III. Estimation
As the name suggests, the estimation step involves building a statistical estimator that can compute the target estimand identified in the previous step. Many estimators have been proposed for causal inference. DoWhy implements a few of the standard estimators while EconML implements a powerful set of estimators that use machine learning.
We show an example of using Propensity Score Stratification using DoWhy, and a machine learning-based method called Double-ML using EconML.
Step6: IV. Refutation
Finally, checking robustness of the estimate is probably the most important step of a causal analysis. We obtained an estimate using Steps 1-3, but each step may have made certain assumptions that may not be true. Absent of a proper validation "test" set, this step relies on refutation tests that seek to refute the correctness of an obtained estimate using properties of a good estimator. For example, a refutation test (placebo_treatment_refuter) checks whether the estimator returns an estimate value of 0 when the action variable is replaced by a random variable, independent of all other variables.
Step7: The DoWhy+EconML solution
We will use the DoWhy+EconML libraries for causal inference. DoWhy provides a general API for the four steps and EconML provides advanced estimators for the Estimation step.
DoWhy allows you to visualize, formalize, and test the assumptions they are making, so that you can better understand the analysis and avoid reaching incorrect conclusions. It does so by focusing on assumptions explicitly and introducing automated checks on validity of assumptions to the extent possible. As you will see, the power of DoWhy is that it provides a formal causal framework to encode domain knowledge and it can run automated robustness checks to validate the causal estimate from any estimator method.
Additionally, as data becomes high-dimensional, we need specialized methods that can handle known confounding. Here we use EconML that implements many of the state-of-the-art causal estimation approaches. This package has a common API for all the techniques, and each technique is implemented as a sequence of machine learning tasks allowing for the use of any existing machine learning software to solve these subtasks, allowing you to plug-in the ML models that you are already familiar with rather than learning a new toolkit. The power of EconML is that you can now implement the state-of-the-art in causal inference just as easily as you can run a linear regression or a random forest.
Together, DoWhy+EconML make answering what if questions a whole lot easier by providing a state-of-the-art, end-to-end framework for causal inference, including the latest causal estimation and automated robustness procedures.
A mystery dataset
Step8: Below we create a dataset where the true causal effect is decided by random variable. It can be either 0 or 1.
Step9: Model assumptions about the data-generating process using a causal graph
Step10: Identify the correct estimand for the target quantity based on the causal model
Step11: Since this is observed data, the warning asks you if there are any unobserved confounders that are missing in this dataset. If there are, then ignoring them will lead to an incorrect estimate.
If you want to disable the warning, you can use proceed_when_unidentifiable=True as an additional parameter to identify_effect.
Estimate the target estimand
Step12: As you can see, for a non-linear data-generating process, the linear regression model is unable to distinguish the causal effect from the observed correlation.
If the DGP was linear, however, then simple linear regression would have worked. To see that, try setting is_linear=True in cell 10 above.
To model non-linear data (and data with high-dimensional confounders), we need more advanced methods. Below is an example using the double machine learning estimator from EconML. This estimator uses machine learning-based methods like gradient boosting trees to learn the relationship between the outcome and confounders, and the treatment and confounders, and then finally compares the residual variation between the outcome and treatment.
Step13: As you can see, the DML method obtains a better estimate, that is closer to the true causal effect of 1.
Check robustness of the estimate using refutation tests | Python Code:
# Required libraries
import dowhy
from dowhy import CausalModel
import dowhy.datasets
# Avoiding unnecessary log messages and warnings
import logging
logging.getLogger("dowhy").setLevel(logging.WARNING)
import warnings
from sklearn.exceptions import DataConversionWarning
warnings.filterwarnings(action='ignore', category=DataConversionWarning)
# Load some sample data
data = dowhy.datasets.linear_dataset(
beta=10,
num_common_causes=5,
num_instruments=2,
num_samples=10000,
treatment_is_binary=True,
stddev_treatment_noise=10)
Explanation: Tutorial on Causal Inference and its Connections to Machine Learning (Using DoWhy+EconML)
This tutorial presents a walk-through on using DoWhy+EconML libraries for causal inference. Along the way, we'll highlight the connections to machine learning---how machine learning helps in building causal effect estimators, and how causal reasoning can be help build more robust machine learning models.
Examples of data science questions that are fundamentally causal inference questions:
* A/B experiments: If I change the algorithm, will it lead to a higher success rate?
* Policy decisions: If we adopt this treatment/policy, will it lead to a healthier patient/more revenue/etc.?
* Policy evaluation: Knowing what I know now, did my policy help or hurt?
* Credit attribution: Are people buying because of the recommendation algorithm? Would they have bought anyway?
In this tutorial, you will:
* Learn how causal reasoning is necessary for decision-making, and the difference between a prediction and decision-making task.
<br>
Get hands-on with estimating causal effects using the four steps of causal inference: model, identify, estimate and refute.
<br>
See how DoWhy+EconML can help you estimate causal effects with 4 lines of code, using the latest methods from statistics and machine learning to estimate the causal effect and evaluate its robustness to modeling assumptions.
<br>
Work through real-world case-studies with Jupyter notebooks on applying causal reasoning in different scenarios including estimating impact of a customer loyalty program on future transactions, predicting which users will be positively impacted by an intervention (such as an ad), pricing products, and attributing which factors contribute most to an outcome.
<br>
Learn about the connections between causal inference and the challenges of modern machine learning models.
Why causal inference?
Many key data science tasks are about decision-making. Data scientists are regularly called upon to support decision-makers at all levels, helping them make the best use of data in support of achieving desired outcomes. For example, an executive making investment and resourcing decisions, a marketer determining discounting policies, a product team prioritizing which features to ship, or a doctor deciding which treatment to administer to a patient.
Each of these decision-makers is asking a what-if question. Data-driven answers to such questions require understanding the causes of an event and how to take action to improve future outcomes.
Defining a causal effect
Suppose that we want to find the causal effect of taking an action A on the outcome Y. To define the causal effect, consider two worlds:
1. World 1 (Real World): Where the action A was taken and Y observed
2. World 2 (Counterfactual World): Where the action A was not taken (but everything else is the same)
Causal effect is the difference between Y values attained in the real world versus the counterfactual world.
$${E}[Y_{real, A=1}] - E[Y_{counterfactual, A=0}]$$
In other words, A causes Y iff changing A leads to a change in Y,
keeping everything else constant. Changing A while keeping everything else constant is called an intervention, and represented by a special notation, $do(A)$.
Formally, causal effect is the magnitude by which Y is changed by a unit interventional change in A:
$$E[Y|do(A=1)] - E[Y|do(A=0)]$$
To estimate the effect, the gold standard is to conduct a randomized experiment where a randomized subset of units is acted upon ($A=1$) and the other subset is not ($A=0$). These subsets approximate the disjoint real and counterfactual worlds and randomization ensures that there is not systematic difference between the two subsets ("keeping everything else constant").
However, it is not always feasible to a run a randomized experiment. To answer causal questions, we often need to rely on observational or logged data. Such observed data is biased by correlations and unobserved confounding and thus there are systematic differences in which units were acted upon and which units were not. For example, a new marketing campaign may be deployed during the holiday season, a new feature may only have been applied to high-activity users, or the older patients may have been more likely to receive the new drug, and so on. The goal of causal inference methods is to remove such correlations and confounding from the data and estimate the true effect of an action, as given by the equation above.
The difference between prediction and causal inference
<table><tr>
<td> <img src="images/supervised_ml_schematic.png" alt="Drawing" style="width: 400px;"/> </td>
<td> <img src="images/causalinference_schematic.png" alt="Drawing" style="width: 400px;"/> </td>
</tr></table>
Two fundamental challenges for causal inference
We never observe the counterfactual world
Cannot directly calculate the causal effect
Must estimate the counterfactuals
Challenges in validation
Multiple causal mechanisms can be fit to a single data distribution
* Data alone is not enough for causal inference
* Need domain knowledge and assumptions
The four steps of causal inference
Since there is no ground-truth test dataset available that an estimate can be compared to, causal inference requires a series of principled steps to achieve a good estimator.
Let us illustrate the four steps through a sample dataset. This tutorial requires you to download two libraries: DoWhy and EconML. Both can be installed by the following command: pip install dowhy econml.
End of explanation
# I. Create a causal model from the data and domain knowledge.
model = CausalModel(
data=data["df"],
treatment=data["treatment_name"],
outcome=data["outcome_name"],
common_causes=data["common_causes_names"],
instruments=data["instrument_names"])
Explanation: I. Modeling
The first step is to encode our domain knowledge into a causal model, often represented as a graph. The final outcome of a causal inference analysis depends largely on the input assumptions, so this step is quite important. To estimate the causal effect, most common problems involve specifying two types of variables:
Confounders (common_causes): These are variables that cause both the action and the outcome. As a result, any observed correlation between the action and the outcome may simply be due to the confounder variables, and not due to any causal relationship from the action to the outcome.
Instrumental Variables (instruments): These are special variables that cause the action, but do not directly affect the outcome. In addition, they are not affected by any variable that affects the outcome. Instrumental variables can help reduce bias, if used in the correct way.
End of explanation
model.view_model(layout="dot")
from IPython.display import Image, display
display(Image(filename="causal_model.png"))
Explanation: To visualize the graph, we can write,
End of explanation
# I. Create a causal model from the data and given graph.
model = CausalModel(
data=data["df"],
treatment=data["treatment_name"][0],
outcome=data["outcome_name"][0],
graph=data["gml_graph"])
model.view_model(layout="dot")
Explanation: In general, you can specify a causal graph that describes the mechanisms of the data-generating process for a given dataset. Each arrow in the graph denotes a causal mechanism: "A->B" implies that the variable A causes variable B.
End of explanation
# II. Identify causal effect and return target estimands
identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)
print(identified_estimand)
Explanation: II. Identification
Both ways of providing domain knowledge (either through named variable sets of confounders and instrumental variables, or through a causal graph) correspond to an underlying causal graph. Given a causal graph and a target quantity (e.g., effect of A on B), the process of identifcation is to check whether the target quantity can be estimated given the observed variables. Importantly, identification only considers the names of variables that are available in the observed data; it does not need access to the data itself. Related to the two kinds of variables above, there are two main identification methods for causal inference.
Backdoor criterion (or more generally, adjustment sets): If all common causes of the action A and the outcome Y are observed, then the backdoor criterion implies that the causal effect can be identified by conditioning on all the common causes. This is a simplified definition (refer to Chapter 3 of the CausalML book for a formal definition).
$$ E[Y|do(A=a)] = E_W E[Y|A=a, W=w] $$
where $W$ refers to the set of common causes (confounders) of $A$ and $Y$.
Instrumental variable (IV) identification: If there is an instrumental variable available, then we can estimate the effect even when any (or none) of the common causes of action and outcome are unobserved. IV identification uses the fact that the instrument directly affects only the action, so the effect of the instrument on the outcome can be broken up into two sequential parts: the effect of the instrument on the action and the effect of the action on the outcome. It then relies on estimating the effect of the instrument on both the action and the outcome to estimate the effect of the action on the outcome. For a binary instrument, the effect estimate is given by,
$$ E[Y|do(A=1)] - E[Y|do(A=0)] = \frac{E[Y|Z=1] - E[Y|Z=0]}{E[A|Z=1] - E[A|Z=0]} $$
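For a binary instrument this estimate can be computed directly from the data. A minimal sketch, where the column names for the outcome, action and instrument are placeholders rather than the names used by this dataset:
y, a, z = data["df"]["Y"], data["df"]["A"], data["df"]["Z"]   # placeholder column names
iv_estimate = (y[z == 1].mean() - y[z == 0].mean()) / (a[z == 1].mean() - a[z == 0].mean())
print(iv_estimate)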
End of explanation
# III. Estimate the target estimand using a statistical method.
propensity_strat_estimate = model.estimate_effect(identified_estimand,
method_name="backdoor.dowhy.propensity_score_stratification")
print(propensity_strat_estimate)
import econml
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LassoCV
from sklearn.ensemble import GradientBoostingRegressor
dml_estimate = model.estimate_effect(identified_estimand,
method_name="backdoor.econml.dml.DML",
method_params={
'init_params': {'model_y':GradientBoostingRegressor(),
'model_t': GradientBoostingRegressor(),
'model_final':LassoCV(fit_intercept=False), },
'fit_params': {}
})
print(dml_estimate)
Explanation: III. Estimation
As the name suggests, the estimation step involves building a statistical estimator that can compute the target estimand identified in the previous step. Many estimators have been proposed for causal inference. DoWhy implements a few of the standard estimators while EconML implements a powerful set of estimators that use machine learning.
We show an example of using Propensity Score Stratification using DoWhy, and a machine learning-based method called Double-ML using EconML.
End of explanation
# IV. Refute the obtained estimate using multiple robustness checks.
refute_results = model.refute_estimate(identified_estimand, propensity_strat_estimate,
method_name="placebo_treatment_refuter")
print(refute_results)
Explanation: IV. Refutation
Finally, checking robustness of the estimate is probably the most important step of a causal analysis. We obtained an estimate using Steps 1-3, but each step may have made assumptions that may not be true. Absent a proper validation "test" set, this step relies on refutation tests that seek to refute the correctness of an obtained estimate using properties of a good estimator. For example, a refutation test (placebo_treatment_refuter) checks whether the estimator returns an estimate value of 0 when the action variable is replaced by a random variable that is independent of all other variables.
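The idea behind the placebo test can be sketched outside DoWhy with simulated data (all names below are illustrative): permute the action column and re-fit the same kind of estimator; the coefficient on the permuted action should be near zero, while the coefficient on the real action recovers the true effect.
import numpy as np
from sklearn.linear_model import LinearRegression
rng = np.random.default_rng(0)
W = rng.normal(size=(5000, 1))                         # observed common cause
a = 2.0 * W[:, 0] + rng.normal(size=5000)              # action
y = 1.0 * a + 3.0 * W[:, 0] + rng.normal(size=5000)    # outcome, true effect = 1
a_placebo = rng.permutation(a)                         # break any causal link
effect = LinearRegression().fit(np.column_stack([a, W]), y).coef_[0]
placebo = LinearRegression().fit(np.column_stack([a_placebo, W]), y).coef_[0]
print(effect, placebo)                                 # ~1 for the real action, ~0 for the placebo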
End of explanation
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import math
import dowhy.datasets, dowhy.plotter
Explanation: The DoWhy+EconML solution
We will use the DoWhy+EconML libraries for causal inference. DoWhy provides a general API for the four steps and EconML provides advanced estimators for the Estimation step.
DoWhy allows you to visualize, formalize, and test the assumptions you are making, so that you can better understand the analysis and avoid reaching incorrect conclusions. It does so by focusing on assumptions explicitly and by introducing automated checks on the validity of those assumptions to the extent possible. As you will see, the power of DoWhy is that it provides a formal causal framework to encode domain knowledge and can run automated robustness checks to validate the causal estimate from any estimator method.
Additionally, as data becomes high-dimensional, we need specialized methods that can handle known confounding. Here we use EconML, which implements many state-of-the-art causal estimation approaches. The package has a common API for all of its techniques, and each technique is implemented as a sequence of machine learning tasks, allowing any existing machine learning software to be used to solve these subtasks; you can plug in the ML models you are already familiar with rather than learning a new toolkit. The power of EconML is that you can now implement the state-of-the-art in causal inference just as easily as you can run a linear regression or a random forest.
Together, DoWhy+EconML make answering what-if questions a whole lot easier by providing a state-of-the-art, end-to-end framework for causal inference, including the latest causal estimation and automated robustness procedures.
A mystery dataset: Can you find out if there is a causal effect?
To walk through the four steps, let us consider the Mystery Dataset problem. Suppose you are given some data with a treatment and an outcome. Can you determine whether the treatment causes the outcome, or whether the correlation is purely due to another common cause?
End of explanation
rvar = 1 if np.random.uniform() > 0.2 else 0
is_linear = False # A non-linear dataset. Change to True to see results for a linear dataset.
data_dict = dowhy.datasets.xy_dataset(10000, effect=rvar,
num_common_causes=2,
is_linear=is_linear,
sd_error=0.2)
df = data_dict['df']
print(df.head())
dowhy.plotter.plot_treatment_outcome(df[data_dict["treatment_name"]], df[data_dict["outcome_name"]],
df[data_dict["time_val"]])
Explanation: Below we create a dataset where the true causal effect is decided by a random variable. It can be either 0 or 1.
End of explanation
model= CausalModel(
data=df,
treatment=data_dict["treatment_name"],
outcome=data_dict["outcome_name"],
common_causes=data_dict["common_causes_names"],
instruments=data_dict["instrument_names"])
model.view_model(layout="dot")
Explanation: Model assumptions about the data-generating process using a causal graph
End of explanation
identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)
print(identified_estimand)
Explanation: Identify the correct estimand for the target quantity based on the causal model
End of explanation
estimate = model.estimate_effect(identified_estimand,
method_name="backdoor.linear_regression")
print(estimate)
print("Causal Estimate is " + str(estimate.value))
# Plot Slope of line between action and outcome = causal effect
dowhy.plotter.plot_causal_effect(estimate, df[data_dict["treatment_name"]], df[data_dict["outcome_name"]])
Explanation: Since this is observed data, the warning asks you if there are any unobserved confounders that are missing in this dataset. If there are, then ignoring them will lead to an incorrect estimate.
If you want to disable the warning, you can use proceed_when_unidentifiable=True as an additional parameter to identify_effect.
Estimate the target estimand
End of explanation
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LassoCV
from sklearn.ensemble import GradientBoostingRegressor
dml_estimate = model.estimate_effect(identified_estimand, method_name="backdoor.econml.dml.DML",
control_value = 0,
treatment_value = 1,
confidence_intervals=False,
method_params={"init_params":{'model_y':GradientBoostingRegressor(),
'model_t': GradientBoostingRegressor(),
"model_final":LassoCV(fit_intercept=False),
'featurizer':PolynomialFeatures(degree=2, include_bias=True)},
"fit_params":{}})
print(dml_estimate)
Explanation: As you can see, for a non-linear data-generating process, the linear regression model is unable to distinguish the causal effect from the observed correlation.
If the DGP was linear, however, then simple linear regression would have worked. To see that, try setting is_linear=True in cell 10 above.
To model non-linear data (and data with high-dimensional confounders), we need more advanced methods. Below is an example using the double machine learning estimator from EconML. This estimator uses machine learning-based methods like gradient boosting trees to learn the relationship between the outcome and confounders, and the treatment and confounders, and then finally compares the residual variation between the outcome and treatment.
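The residual-on-residual idea behind DML can be sketched in a few lines with simulated data (a simplified version that leaves out the cross-fitting and final-stage details EconML handles for you; all names here are illustrative):
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict
rng = np.random.default_rng(0)
W = rng.normal(size=(5000, 2))                                        # confounders
a = np.sin(W[:, 0]) + W[:, 1] + rng.normal(scale=0.2, size=5000)      # action
y = 1.0 * a + np.exp(W[:, 0] / 2) + rng.normal(scale=0.2, size=5000)  # outcome, true effect = 1
y_res = y - cross_val_predict(GradientBoostingRegressor(), W, y, cv=2)  # partial out W from y
a_res = a - cross_val_predict(GradientBoostingRegressor(), W, a, cv=2)  # partial out W from a
theta = LinearRegression(fit_intercept=False).fit(a_res.reshape(-1, 1), y_res).coef_[0]
print(theta)   # should land close to the true effect of 1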
End of explanation
res_random=model.refute_estimate(identified_estimand, dml_estimate, method_name="random_common_cause")
print(res_random)
res_placebo=model.refute_estimate(identified_estimand, dml_estimate,
method_name="placebo_treatment_refuter", placebo_type="permute",
num_simulations=20)
print(res_placebo)
Explanation: As you can see, the DML method obtains a better estimate, that is closer to the true causal effect of 1.
Check robustness of the estimate using refutation tests
End of explanation |
1,711 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Derivatives fundamentals
This notebook will introduce you to the fundamentals of computing the derivative of the solution map to optimization problems. The derivative can be used for sensitvity analysis, to see how a solution would change given small changes to the parameters, and to compute gradients of scalar-valued functions of the solution.
In this notebook, we will consider a simple disciplined geometric program. The geometric program under consideration is
$$
\begin{equation}
\begin{array}{ll}
\mbox{minimize} & 1/(xyz) \
\mbox{subject to} & a(xy + xz + yz) \leq b\
& x \geq y^c,
\end{array}
\end{equation}
$$
where $x \in \mathbf{R}_{++}$, $y \in \mathbf{R}_{++}$, and $z \in \mathbf{R}_{++}$ are the variables, and $a \in \mathbf{R}_{++}$, $b \in \mathbf{R}_{++}$ and $c \in \mathbf{R}$ are the parameters. The vector
$$
\alpha = \begin{bmatrix} a \ b \ c \end{bmatrix}
$$
is the vector of parameters.
Step1: Notice the keyword argument dpp=True. The parameters must enter the DGP problem according to special rules, which we refer to as dpp. The DPP rules are described in an online tutorial.
Next, we solve the problem, setting the parameters $a$, $b$ and $c$ to $2$, $1$, and $0.5$.
Step2: Notice the keyword argument requires_grad=True; this is necessary to subsequently compute derivatives.
Solution map
The solution map of the above problem is a function
$$\mathcal{S}
Step3: The derivative method populates the delta attributes of the variables as a side-effect, with the predicted change in the variable. We can compare the predictions to the actual solution of the perturbed problem.
Step4: In this case, the predictions and the actual solutions are fairly close.
Gradient
We can compute gradient of a scalar-valued function of the solution with respect to the parameters. Let $f | Python Code:
import cvxpy as cp
x = cp.Variable(pos=True)
y = cp.Variable(pos=True)
z = cp.Variable(pos=True)
a = cp.Parameter(pos=True)
b = cp.Parameter(pos=True)
c = cp.Parameter()
objective_fn = 1/(x*y*z)
objective = cp.Minimize(objective_fn)
constraints = [a*(x*y + x*z + y*z) <= b, x >= y**c]
problem = cp.Problem(objective, constraints)
problem.is_dgp(dpp=True)
Explanation: Derivatives fundamentals
This notebook will introduce you to the fundamentals of computing the derivative of the solution map to optimization problems. The derivative can be used for sensitivity analysis, to see how a solution would change given small changes to the parameters, and to compute gradients of scalar-valued functions of the solution.
In this notebook, we will consider a simple disciplined geometric program. The geometric program under consideration is
$$
\begin{equation}
\begin{array}{ll}
\mbox{minimize} & 1/(xyz) \
\mbox{subject to} & a(xy + xz + yz) \leq b\
& x \geq y^c,
\end{array}
\end{equation}
$$
where $x \in \mathbf{R}_{++}$, $y \in \mathbf{R}_{++}$, and $z \in \mathbf{R}_{++}$ are the variables, and $a \in \mathbf{R}_{++}$, $b \in \mathbf{R}_{++}$ and $c \in \mathbf{R}$ are the parameters. The vector
$$
\alpha = \begin{bmatrix} a \ b \ c \end{bmatrix}
$$
is the vector of parameters.
End of explanation
a.value = 2.0
b.value = 1.0
c.value = 0.5
problem.solve(gp=True, requires_grad=True)
print(x.value)
print(y.value)
print(z.value)
Explanation: Notice the keyword argument dpp=True. The parameters must enter the DGP problem according to special rules, which we refer to as dpp. The DPP rules are described in an online tutorial.
Next, we solve the problem, setting the parameters $a$, $b$ and $c$ to $2$, $1$, and $0.5$.
End of explanation
da, db, dc = 1e-2, 1e-2, 1e-2
a.delta = da
b.delta = db
c.delta = dc
problem.derivative()
Explanation: Notice the keyword argument requires_grad=True; this is necessary to subsequently compute derivatives.
Solution map
The solution map of the above problem is a function
$$\mathcal{S} : \mathbf{R}^2_{++} \times \mathbf{R} \to \mathbf{R}^3_{++}$$
which maps the parameter vector to the vector of optimal solutions
$$
\mathcal S(\alpha) = \begin{bmatrix} x(\alpha) \ y(\alpha) \ z(\alpha)\end{bmatrix}.
$$
Here, $x(\alpha)$, $y(\alpha)$, and $z(\alpha)$ are the optimal values of the variables corresponding to the parameter vector.
As an example, we just saw that
$$
\mathcal S((2.0, 1.0, 0.5)) = \begin{bmatrix} 0.5612 \ 0.3150 \ 0.3690 \end{bmatrix}.
$$
Sensitivity analysis
When the solution map is differentiable, we can use its derivative
$$
\mathsf{D}\mathcal{S}(\alpha) \in \mathbf{R}^{3 \times 3}
$$
to perform a sensitivity analysis, which studies how the solution would change given small changes to the parameters.
Suppose we perturb the parameters by a vector of small magnitude $\mathsf{d}\alpha \in \mathbf{R}^3$. We can approximate the change $\Delta$ in the solution due to the perturbation using the derivative, as
$$
\Delta = \mathcal{S}(\alpha + \mathsf{d}\alpha) - \mathcal{S}(\alpha) \approx \mathsf{D}\mathcal{S}(\alpha) \mathsf{d}\alpha.
$$
We can compute this in CVXPY, as follows.
Partition the perturbation as
$$
\mathsf{d}\alpha = \begin{bmatrix} \mathsf{d}a \ \mathsf{d}b \ \mathsf{d}c\end{bmatrix}.
$$
We set the delta attributes of the parameters to their perturbations, and then call the derivative method.
End of explanation
x_hat = x.value + x.delta
y_hat = y.value + y.delta
z_hat = z.value + z.delta
a.value += da
b.value += db
c.value += dc
problem.solve(gp=True)
print('x: predicted {0:.5f} actual {1:.5f}'.format(x_hat, x.value))
print('y: predicted {0:.5f} actual {1:.5f}'.format(y_hat, y.value))
print('z: predicted {0:.5f} actual {1:.5f}'.format(z_hat, z.value))
a.value -= da
b.value -= db
c.value -= dc
Explanation: The derivative method populates the delta attributes of the variables as a side-effect, with the predicted change in the variable. We can compare the predictions to the actual solution of the perturbed problem.
End of explanation
problem.solve(gp=True, requires_grad=True)
def f(x, y, z):
return 1/2*(x**2 + y**2 + z**2)
original = f(x, y, z).value
x.gradient = x.value
y.gradient = y.value
z.gradient = z.value
problem.backward()
eta = 0.5
dalpha = cp.vstack([a.gradient, b.gradient, c.gradient])
predicted = float((original - eta*dalpha.T @ dalpha).value)
a.value -= eta*a.gradient
b.value -= eta*b.gradient
c.value -= eta*c.gradient
problem.solve(gp=True)
actual = f(x, y, z).value
print('original {0:.5f} predicted {1:.5f} actual {2:.5f}'.format(
original, predicted, actual))
Explanation: In this case, the predictions and the actual solutions are fairly close.
Gradient
We can compute gradient of a scalar-valued function of the solution with respect to the parameters. Let $f : \mathbf{R}^{3} \to \mathbf{R}$, and suppose we want to compute the gradient of the composition $f \circ \mathcal S$. By the chain rule,
$$
\nabla f(S(\alpha)) = \mathsf{D}^T\mathcal{S}(\alpha) \begin{bmatrix}\mathsf{d}x \ \mathsf{d}y \ \mathsf{d}z\end{bmatrix},
$$
where $\mathsf{D}^T\mathcal{S}$ is the adjoint (or transpose) of the derivative operator, and $\mathsf{d}x$, $\mathsf{d}y$, and $\mathsf{d}z$ are the partial derivatives of $f$ with respect to its arguments.
We can compute the gradient in CVXPY, using the backward method. As an example, suppose
$$
f(x, y, z) = \frac{1}{2}(x^2 + y^2 + z^2),
$$
so that $\mathsf{d}x = x$, $\mathsf{d}y = y$, and $\mathsf{d}z = z$. Let $\mathsf{d}\alpha = \nabla f(S(\alpha))$,
and suppose we subtract $\eta \mathsf{d}\alpha$ from the parameter, where $\eta$ is a positive constant. Using the following code, we can compare $f(\mathcal S(\alpha - \eta \mathsf{d}\alpha))$ with the value predicted by the gradient,
$$
f(\mathcal S(\alpha - \eta \mathsf{d}\alpha)) \approx f(\mathcal S(\alpha)) - \eta \mathsf{d}\alpha^T\mathsf{d}\alpha.
$$
End of explanation |
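As a small extension of this example (not part of the original notebook), the same solve/backward machinery can drive a crude descent loop on the parameters; the clamping of a and b below is only there to keep the positive parameters valid.
eta = 0.1
for _ in range(5):
    problem.solve(gp=True, requires_grad=True)
    x.gradient, y.gradient, z.gradient = x.value, y.value, z.value   # seed with df/dx, df/dy, df/dz
    problem.backward()                                               # populates a.gradient, b.gradient, c.gradient
    a.value = max(a.value - eta * a.gradient, 1e-3)                  # keep positive parameters positive
    b.value = max(b.value - eta * b.gradient, 1e-3)
    c.value = c.value - eta * c.gradient
    print(0.5 * (x.value**2 + y.value**2 + z.value**2))              # f(S(alpha)) should decrease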
1,712 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Please find jax implementation of this notebook here
Step1: Model
We use a slightly modified version of the LeNet CNN.
Step2: Copying parameters across devices
Step4: All-reduce will copy data (eg gradients) from all devices to device 0, add them, and then broadcast the result back to each device.
Step5: Distribute data across GPUs
Step7: Split data and labels.
Step20: Training on Fashion MNIST
Step22: Train function
Step23: Learning curve | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import math
from IPython import display
try:
import torch
except ModuleNotFoundError:
%pip install -qq torch
import torch
try:
import torchvision
except ModuleNotFoundError:
%pip install -qq torchvision
import torchvision
from torch import nn
from torch.nn import functional as F
from torch.utils import data
from torchvision import transforms
import random
import os
import time
np.random.seed(seed=1)
torch.manual_seed(1)
!mkdir figures # for saving plots
Explanation: Please find jax implementation of this notebook here: https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/book1/13/multi_gpu_training_jax.ipynb
<a href="https://colab.research.google.com/github/Nirzu97/pyprobml/blob/multi-gpu-training-torch/notebooks/multi_gpu_training_torch.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Train a CNN on multiple GPUs using data parallelism.
Based on sec 12.5 of http://d2l.ai/chapter_computational-performance/multiple-gpus.html.
Note: in colab, we only have access to 1 GPU, so the code below just simulates the effects of multiple GPUs, so it will not run faster. You may not see a speedup eveen on a machine which really does have multiple GPUs, because the model and data are too small. But the example should still illustrate the key ideas.
End of explanation
# Initialize model parameters
scale = 0.01
torch.random.manual_seed(0)
W1 = torch.randn(size=(20, 1, 3, 3)) * scale
b1 = torch.zeros(20)
W2 = torch.randn(size=(50, 20, 5, 5)) * scale
b2 = torch.zeros(50)
W3 = torch.randn(size=(800, 128)) * scale
b3 = torch.zeros(128)
W4 = torch.randn(size=(128, 10)) * scale
b4 = torch.zeros(10)
params = [W1, b1, W2, b2, W3, b3, W4, b4]
# Define the model
def lenet(X, params):
h1_conv = F.conv2d(input=X, weight=params[0], bias=params[1])
h1_activation = F.relu(h1_conv)
h1 = F.avg_pool2d(input=h1_activation, kernel_size=(2, 2), stride=(2, 2))
h2_conv = F.conv2d(input=h1, weight=params[2], bias=params[3])
h2_activation = F.relu(h2_conv)
h2 = F.avg_pool2d(input=h2_activation, kernel_size=(2, 2), stride=(2, 2))
h2 = h2.reshape(h2.shape[0], -1)
h3_linear = torch.mm(h2, params[4]) + params[5]
h3 = F.relu(h3_linear)
y_hat = torch.mm(h3, params[6]) + params[7]
return y_hat
# Cross-entropy loss function
loss = nn.CrossEntropyLoss(reduction="none")
Explanation: Model
We use a slightly modified version of the LeNet CNN.
End of explanation
def get_params(params, device):
new_params = [p.clone().to(device) for p in params]
for p in new_params:
p.requires_grad_()
return new_params
# Copy the params to GPU0
gpu0 = torch.device("cuda:0")
new_params = get_params(params, gpu0)
print("b1 weight:", new_params[1])
print("b1 grad:", new_params[1].grad)
# Copy the params to GPU1
gpu1 = torch.device("cuda:0") # torch.device('cuda:1')
new_params = get_params(params, gpu1)
print("b1 weight:", new_params[1])
print("b1 grad:", new_params[1].grad)
Explanation: Copying parameters across devices
End of explanation
def allreduce(data):
for i in range(1, len(data)):
data[0][:] += data[i].to(data[0].device)
for i in range(1, len(data)):
data[i] = data[0].to(data[i].device)
def try_gpu(i=0):
Return gpu(i) if exists, otherwise return cpu().
if torch.cuda.device_count() >= i + 1:
return torch.device(f"cuda:{i}")
return torch.device("cpu")
data_ = [torch.ones((1, 2), device=try_gpu(i)) * (i + 1) for i in range(2)]
print("before allreduce:\n", data_[0], "\n", data_[1])
allreduce(data_)
print("after allreduce:\n", data_[0], "\n", data_[1])
Explanation: All-reduce will copy data (eg gradients) from all devices to device 0, add them, and then broadcast the result back to each device.
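In production code this loop is usually replaced by a single collective call through torch.distributed; a sketch is shown below, left commented out because it assumes a process group has been initialized with one process per GPU, which this notebook does not set up.
# import torch.distributed as dist
# dist.init_process_group("nccl", rank=rank, world_size=world_size)  # done once per process
# dist.all_reduce(grad_tensor, op=dist.ReduceOp.SUM)                 # sums the tensor in place across processes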
End of explanation
data_ = torch.arange(20).reshape(4, 5)
# devices = [torch.device('cuda:0'), torch.device('cuda:1')]
devices = [torch.device("cuda:0"), torch.device("cuda:0")]
split = nn.parallel.scatter(data_, devices)
print("input :", data_)
print("load into", devices)
print("output:", split)
Explanation: Distribute data across GPUs
End of explanation
def split_batch(X, y, devices):
Split `X` and `y` into multiple devices.
assert X.shape[0] == y.shape[0]
return (nn.parallel.scatter(X, devices), nn.parallel.scatter(y, devices))
Explanation: Split data and labels.
End of explanation
def load_data_fashion_mnist(batch_size, resize=None):
Download the Fashion-MNIST dataset and then load it into memory.
trans = [transforms.ToTensor()]
if resize:
trans.insert(0, transforms.Resize(resize))
trans = transforms.Compose(trans)
mnist_train = torchvision.datasets.FashionMNIST(root="../data", train=True, transform=trans, download=True)
mnist_test = torchvision.datasets.FashionMNIST(root="../data", train=False, transform=trans, download=True)
return (
data.DataLoader(mnist_train, batch_size, shuffle=True, num_workers=4),
data.DataLoader(mnist_test, batch_size, shuffle=False, num_workers=4),
)
class Animator:
For plotting data in animation.
def __init__(
self,
xlabel=None,
ylabel=None,
legend=None,
xlim=None,
ylim=None,
xscale="linear",
yscale="linear",
fmts=("-", "m--", "g-.", "r:"),
nrows=1,
ncols=1,
figsize=(3.5, 2.5),
):
# Incrementally plot multiple lines
if legend is None:
legend = []
display.set_matplotlib_formats("svg")
self.fig, self.axes = plt.subplots(nrows, ncols, figsize=figsize)
if nrows * ncols == 1:
self.axes = [
self.axes,
]
# Use a lambda function to capture arguments
self.config_axes = lambda: set_axes(self.axes[0], xlabel, ylabel, xlim, ylim, xscale, yscale, legend)
self.X, self.Y, self.fmts = None, None, fmts
def add(self, x, y):
# Add multiple data points into the figure
if not hasattr(y, "__len__"):
y = [y]
n = len(y)
if not hasattr(x, "__len__"):
x = [x] * n
if not self.X:
self.X = [[] for _ in range(n)]
if not self.Y:
self.Y = [[] for _ in range(n)]
for i, (a, b) in enumerate(zip(x, y)):
if a is not None and b is not None:
self.X[i].append(a)
self.Y[i].append(b)
self.axes[0].cla()
for x, y, fmt in zip(self.X, self.Y, self.fmts):
self.axes[0].plot(x, y, fmt)
self.config_axes()
display.display(self.fig)
display.clear_output(wait=True)
class Timer:
Record multiple running times.
def __init__(self):
self.times = []
self.start()
def start(self):
Start the timer.
self.tik = time.time()
def stop(self):
Stop the timer and record the time in a list.
self.times.append(time.time() - self.tik)
return self.times[-1]
def avg(self):
Return the average time.
return sum(self.times) / len(self.times)
def sum(self):
Return the sum of time.
return sum(self.times)
def cumsum(self):
Return the accumulated time.
return np.array(self.times).cumsum().tolist()
class Accumulator:
For accumulating sums over `n` variables.
def __init__(self, n):
self.data = [0.0] * n
def add(self, *args):
self.data = [a + float(b) for a, b in zip(self.data, args)]
def reset(self):
self.data = [0.0] * len(self.data)
def __getitem__(self, idx):
return self.data[idx]
def set_axes(axes, xlabel, ylabel, xlim, ylim, xscale, yscale, legend):
Set the axes for matplotlib.
axes.set_xlabel(xlabel)
axes.set_ylabel(ylabel)
axes.set_xscale(xscale)
axes.set_yscale(yscale)
axes.set_xlim(xlim)
axes.set_ylim(ylim)
if legend:
axes.legend(legend)
axes.grid()
def accuracy(y_hat, y):
Compute the number of correct predictions.
if len(y_hat.shape) > 1 and y_hat.shape[1] > 1:
y_hat = torch.argmax(y_hat, axis=1)
cmp_ = y_hat.type(y.dtype) == y
return float(cmp_.type(y.dtype).sum())
def evaluate_accuracy_gpu(net, data_iter, device=None):
Compute the accuracy for a model on a dataset using a GPU.
if isinstance(net, torch.nn.Module):
net.eval() # Set the model to evaluation mode
if not device:
device = next(iter(net.parameters())).device
# No. of correct predictions, no. of predictions
metric = Accumulator(2)
for X, y in data_iter:
X = X.to(device)
y = y.to(device)
metric.add(accuracy(net(X), y), y.numel())
return metric[0] / metric[1]
Explanation: Training on Fashion MNIST
End of explanation
def sgd(params, lr, batch_size):
Minibatch stochastic gradient descent.
with torch.no_grad():
for param in params:
param -= lr * param.grad / batch_size
param.grad.zero_()
def train_batch(X, y, device_params, devices, lr):
X_shards, y_shards = split_batch(X, y, devices)
# Loss is calculated separately on each GPU
losses = [
loss(lenet(X_shard, device_W), y_shard).sum()
for X_shard, y_shard, device_W in zip(X_shards, y_shards, device_params)
]
for l in losses: # Back Propagation is performed separately on each GPU
l.backward()
# Sum all gradients from each GPU and broadcast them to all GPUs
with torch.no_grad():
for i in range(len(device_params[0])):
allreduce([device_params[c][i].grad for c in range(len(devices))])
# The model parameters are updated separately on each GPU
ndata = X.shape[0] # gradient is summed over the full minibatch
for param in device_params:
sgd(param, lr, ndata)
def train(num_gpus, batch_size, lr):
train_iter, test_iter = load_data_fashion_mnist(batch_size)
devices = [try_gpu(i) for i in range(num_gpus)]
# Copy model parameters to num_gpus GPUs
device_params = [get_params(params, d) for d in devices]
# num_epochs, times, acces = 10, [], []
num_epochs = 5
animator = Animator("epoch", "test acc", xlim=[1, num_epochs])
timer = Timer()
for epoch in range(num_epochs):
timer.start()
for X, y in train_iter:
# Perform multi-GPU training for a single minibatch
train_batch(X, y, device_params, devices, lr)
torch.cuda.synchronize()
timer.stop()
# Verify the model on GPU 0
animator.add(epoch + 1, (evaluate_accuracy_gpu(lambda x: lenet(x, device_params[0]), test_iter, devices[0]),))
print(f"test acc: {animator.Y[0][-1]:.2f}, {timer.avg():.1f} sec/epoch " f"on {str(devices)}")
Explanation: Train function
End of explanation
train(num_gpus=1, batch_size=256, lr=0.2)
Explanation: Learning curve
End of explanation |
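Not part of the original notebook, but for comparison: PyTorch's built-in nn.DataParallel wrapper automates the replicate/scatter/gather steps that were written out by hand above. A minimal sketch with a toy model:
import torch
from torch import nn
toy_net = nn.Sequential(nn.Conv2d(1, 20, 3), nn.ReLU(), nn.Flatten(), nn.Linear(20 * 26 * 26, 10))
if torch.cuda.device_count() > 1:
    toy_net = nn.DataParallel(toy_net)     # replicas, scatter and gradient reduction handled internally
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
toy_net = toy_net.to(device)
X_toy = torch.randn(8, 1, 28, 28, device=device)
print(toy_net(X_toy).shape)                # torch.Size([8, 10])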
1,713 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Baseline prediction for homework type
The baseline prediction method we use for predicting which homework the notebook came from uses the popular plagiarism detector JPlag.
We feed each noteboook through our pipeline to eliminate variable names, string declarations, comments, and import names
Step1: Running Jplag
To run jplag, we need to write all of our files to a directory, and then setup the command with the .jar file that needs to be run on the command line
Step2: After we run the JPlag command
While JPlag produces a nice report that is human readable, we want the pairwise similarities, which are printed out by JPlag as it runs. By parsing the output file we can get these similarities that we will use for prediction
Step3: Inter and Intra Similarities
The first measure that we can use to determine if something reasonable is happening is to look at, for each homework, the average similarity of two notebooks both pulled from that homework, and the average similarity of a notebook pulled from that homework and any notebook in the corpus not pulled from that homework. These are printed below
Step4: Actual Prediction
While the above results are helpful, it is better to use a classifier that uses more information. The setup is as follows
Step5: Results
Below are the results of the prediction. We can see a good deal of predictive power, though there is room for improvement | Python Code:
# First step is to load a balanced dataset of homeworks
import sys
home_directory = '/dfs/scratch2/fcipollone'
sys.path.append(home_directory)
import numpy as np
from nbminer.notebook_miner import NotebookMiner
hw_filenames = np.load('../homework_names_jplag_combined_per_student.npy')
min_val = min([len(temp) for temp in hw_filenames])
print(min_val)
hw_notebooks = [[NotebookMiner(filename) for filename in temp[:min_val]] for temp in hw_filenames]
# Now we do the transformation, storing the results into the variable hw_code
from nbminer.pipeline.pipeline import Pipeline
from nbminer.features.features import Features
from nbminer.preprocess.get_ast_features import GetASTFeatures
from nbminer.preprocess.get_imports import GetImports
import tqdm
hw_code = []
for corp in tqdm.tqdm(hw_notebooks):
temp = []
for nb in corp:
a = Features([nb])
gastf = GetASTFeatures()
gi = GetImports()
pipe = Pipeline([gastf, gi])
a = pipe.transform(a)
code = a.get_notebook(0).get_all_asts()
lines = code.split('\n')
lines = [line for line in lines if line != '']
temp.append('\n\n'.join(lines))
hw_code.append(temp)
# Print an example to see what the result of the transformation looks like.
print(hw_code[0][0])
Explanation: Baseline prediction for homework type
The baseline prediction method we use for predicting which homework the notebook came from uses the popular plagiarism detector JPlag.
We feed each noteboook through our pipeline to eliminate variable names, string declarations, comments, and import names
End of explanation
import os
for i in range(len(hw_code)):
base_name = 'plagiarism/homework_code_cleaned/hw' + str(i) + '_'
for j, code_body in enumerate(hw_code[i]):
fname = base_name + 'student_' + str(j) + ".py"
f = open(fname,'w')
f.write(code_body)
f.close
import os
jar_file = 'plagiarism/jplag-2.11.9-SNAPSHOT-jar-with-dependencies.jar'
lang = 'python3'
results = 'plagiarism/results_cleaned'
students = 'plagiarism/homework_code_cleaned'
command = "java -jar " + jar_file + " -l " + lang + " -r " + results + " -s " + students + " -m 200"
print("nohup",command,"> plagiarism/experiment_cleaned.out &")
Explanation: Running Jplag
To run jplag, we need to write all of our files to a directory, and then setup the command with the .jar file that needs to be run on the command line
End of explanation
output = open('plagiarism/experiment_cleaned.out','r')
lines = [line for line in output if line[:9] == 'Comparing']
output = open('plagiarism/experiment_cleaned.out','r')
lines = [line for line in output if line[:9] == 'Comparing']
len(lines)
# Create the dictionary of pairwise sims
my_dict = {}
for line in lines:
hw1 = line.split()[1].split('-')[0].split('.')[0]
hw2 = line.split()[1].split('-')[1].split('.')[0]
val = line.split()[2]
if hw1 not in my_dict:
my_dict[hw1] = {}
if hw2 not in my_dict:
my_dict[hw2] = {}
my_dict[hw1][hw2] = val
my_dict[hw2][hw1] = val
Explanation: After we run the JPlag command
While JPlag produces a nice report that is human readable, we want the pairwise similarities, which are printed out by JPlag as it runs. By parsing the output file we can get these similarities that we will use for prediction
End of explanation
import numpy as np
def get_avg_inter_intra_sims(sim_dict, hw):
cur_hw = 'hw' + str(hw)
in_vals = []
out_vals = []
for key in sim_dict.keys():
if key[:3] != cur_hw:
continue
for key2 in sim_dict[key].keys():
if key2[:3] != cur_hw:
out_vals.append(float(sim_dict[key][key2]))
else:
in_vals.append(float(sim_dict[key][key2]))
return in_vals, out_vals
for i in range(6):
intra_sims, inter_sims = get_avg_inter_intra_sims(my_dict, i)
print('Mean intra similarity for hw',i,'is',np.mean(intra_sims),'with std',np.std(intra_sims))
print('Mean inter similarity for hw',i,'is',np.mean(inter_sims),'with std',np.std(inter_sims))
print('----')
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = 5, 10
def get_all_sims(sim_dict, hw):
cur_hw = 'hw' + str(hw)
sims = []
for key in sim_dict.keys():
for key2 in sim_dict[key].keys():
if key[:3] != cur_hw and key2[:3] != cur_hw:
continue
sims.append(float(sim_dict[key][key2]))
return sims
fig, axes = plt.subplots(6)
for i in range(6):
axes[i].hist(get_all_sims(my_dict,i), bins=50)
Explanation: Inter and Intra Similarities
The first measure that we can use to determine if something reasonable is happening is to look at, for each homework, the average similarity of two notebooks both pulled from that homework, and the average similarity of a notebook pulled from that homework and any notebook in the corpus not pulled from that homework. These are printed below
End of explanation
from sklearn.model_selection import train_test_split
features = [key for key in my_dict]
feature_map = {}
test_features = set()
indices = [i for i in range(len(features))]
#import pdb; pdb.set_trace()
train, test = train_test_split(indices, test_size=.2)
for i in test:
test_features.add(features[i])
train_features = []
for i in train:
train_features.append(features[i])
for i, el in enumerate(train_features):
feature_map[el] = i
X = np.zeros((len(train),len(train)))
y = []
X_test = np.zeros((len(test), len(train)))
y_test = []
for i, el in enumerate(train_features):
for key in my_dict[el]:
if key not in feature_map:
continue
loc = feature_map[key]
X[i, loc] = my_dict[el][key]
y.append(int(el[2]))
for i, el in enumerate(test_features):
for key in my_dict[el]:
if key not in feature_map:
continue
loc = feature_map[key]
X_test[i, loc] = my_dict[el][key]
y_test.append(int(el[2]))
import sklearn
from sklearn.ensemble import RandomForestClassifier
clf = sklearn.ensemble.RandomForestClassifier(n_estimators=400, max_depth=4)
clf.fit(X, y)
clf.predict(X_test)
Explanation: Actual Prediction
While the above results are helpful, it is better to use a classifier that uses more information. The setup is as follows:
Split the data into train and test
For each notebook, generate a feature vector that is calculated as the similarity between the notebook and each notebook of the train set
Build a random forest classifier that uses this feature representation, and measure the performance
End of explanation
import numpy as np
np.sum(clf.predict(X_test)==y_test)/len(y_test)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(clf.predict(X_test),y_test)
import matplotlib.pyplot as plt
%matplotlib inline
plt.imshow(cm, cmap=plt.cm.Blues)
plt.show()
clfi = clf.feature_importances_
sa = []
for i in range(len(clfi)):
sa.append((clfi[i], train_features[i]))
sra = [el for el in reversed(sorted(sa))]
for i in range(100):
print(sra[i])
Explanation: Results
Below are the results of the prediction. We can see a good deal of predictive power, though there is room for improvement
End of explanation |
1,714 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Factor Risk Exposure
By Evgenia "Jenny" Nitishinskaya, Delaney Granizo-Mackenzie, and Maxwell Margenot.
Part of the Quantopian Lecture Series
Step2: How did each factor do over 2014?
Step3: Computing Risk Exposure
Now we can determine how exposed another return stream is to each of these factors. We can do this by running static or rolling linear regressions between our return stream and the factor portfolio returns. First we'll compute the active returns (returns - benchmark) of some random asset and then model that asset as a linear combination of our two factors. The more a factor contributes to the active returns, the more exposed the active returns are to that factor.
Step4: Using the formula from the start of the notebook, we can compute the factors' marginal contributions to active risk squared
Step5: The rest of the risk can be attributed to active specific risk, i.e. factors that we did not take into account or the asset's idiosyncratic risk.
However, as usual we will look at how the exposure to these factors changes over time. As we lose a tremendous amount of information by just looking at one data point. Let's look at what happens if we run a rolling regression over time.
Step6: Now we'll look at FMCAR as it changes over time.
Step7: Let's plot this.
Step8: Problems with using this data
Whereas it may be interesting to know how a portfolio was exposed to certain factors historically, it is really only useful if we can make predictions about how it will be exposed to risk factors in the future. It's not always a safe assumption to say that future exposure will be the current exposure. As you saw the exposure varies quite a bit, so taking the average is dangerous. We could put confidence intervals around that average, but that would only work if the distribution of exposures were normal or well behaved. Let's check using our old buddy, the Jarque-Bera test. | Python Code:
import numpy as np
import statsmodels.api as sm
import scipy.stats as stats
from statsmodels import regression
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data import morningstar
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.factors import CustomFactor, Returns
# Here's the raw data we need, everything else is derivative.
class MarketCap(CustomFactor):
# Here's the data we need for this factor
inputs = [morningstar.valuation.shares_outstanding, USEquityPricing.close]
# Only need the most recent values for both series
window_length = 1
def compute(self, today, assets, out, shares, close_price):
# Shares * price/share = total price = market cap
out[:] = shares * close_price
class BookToPrice(CustomFactor):
# pb = price to book, we'll need to take the reciprocal later
inputs = [morningstar.valuation_ratios.pb_ratio]
window_length = 1
def compute(self, today, assets, out, pb):
out[:] = 1 / pb
def make_pipeline():
Create and return our pipeline.
We break this piece of logic out into its own function to make it easier to
test and modify in isolation.
In particular, this function can be copy/pasted into research and run by itself.
pipe = Pipeline()
# Add our factors to the pipeline
market_cap = MarketCap()
# Raw market cap and book to price data gets fed in here
pipe.add(market_cap, "market_cap")
book_to_price = BookToPrice()
pipe.add(book_to_price, "book_to_price")
# We also get daily returns
returns = Returns(inputs=[USEquityPricing.close], window_length=2)
pipe.add(returns, "returns")
# We compute a daily rank of both factors, this is used in the next step,
# which is computing portfolio membership.
market_cap_rank = market_cap.rank()
pipe.add(market_cap_rank, 'market_cap_rank')
book_to_price_rank = book_to_price.rank()
pipe.add(book_to_price_rank, 'book_to_price_rank')
# Build Filters representing the top and bottom 1000 stocks by our combined ranking system.
biggest = market_cap_rank.top(1000)
smallest = market_cap_rank.bottom(1000)
highpb = book_to_price_rank.top(1000)
lowpb = book_to_price_rank.bottom(1000)
# Don't return anything not in this set, as we don't need it.
pipe.set_screen(biggest | smallest | highpb | lowpb)
# Add the boolean flags we computed to the output data
pipe.add(biggest, 'biggest')
pipe.add(smallest, 'smallest')
pipe.add(highpb, 'highpb')
pipe.add(lowpb, 'lowpb')
return pipe
pipe = make_pipeline()
start_date = '2014-1-1'
end_date = '2015-1-1'
from quantopian.research import run_pipeline
results = run_pipeline(pipe, start_date, end_date)
R_biggest = results[results.biggest]['returns'].groupby(level=0).mean()
R_smallest = results[results.smallest]['returns'].groupby(level=0).mean()
R_highpb = results[results.highpb]['returns'].groupby(level=0).mean()
R_lowpb = results[results.lowpb]['returns'].groupby(level=0).mean()
SMB = R_smallest - R_biggest
HML = R_highpb - R_lowpb
Explanation: Factor Risk Exposure
By Evgenia "Jenny" Nitishinskaya, Delaney Granizo-Mackenzie, and Maxwell Margenot.
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
Notebook released under the Creative Commons Attribution 4.0 License.
DISCLAIMER:
As always, this analysis is based on historical data, and risk exposures estimated on historical data may or may not affect the exposures going forward. As such, computing the risk exposure of to a factor is not enough. You must put confidence bounds on that risk exposure, and determine whether the risk exposure can even be modeled reasonably. For more information on this, please see our other lectures, especially Instability of Parameter Estimates.
Using Factor Models to Determine Risk Exposure
We can use factor models to analyze the sources of risks and returns in portfolios. Recall that a factor model expresses the returns as
$$R_i = a_i + b_{i1} F_1 + b_{i2} F_2 + \ldots + b_{iK} F_K + \epsilon_i$$
By modelling the historical returns, we can see how much of them is due to speculation on different factors and how much to asset-specific fluctuations ($\epsilon_p$). We can also examine what sources of risk the portfolio is exposed to.
In risk analysis, we often model active returns (returns relative to a benchmark) and active risk (standard deviation of active returns, also known as tracking error or tracking risk).
For instance, we can find a factor's marginal contribution to active risk squared (FMCAR). For factor $j$, this is
$$ \text{FMCAR}j = \frac{b_j^a \sum{i=1}^K b_i^a Cov(F_j, F_i)}{(\text{Active risk})^2} $$
where $b_i^a$ is the portfolio's active exposure to factor $i$. This tells us how much risk we incur by being exposed to factor $j$, given all the other factors we're already exposed to.
Fundamental factor models are often used to evaluate portfolios because they correspond directly to investment choices (e.g. whether we invest in small-cap or large-cap stocks, etc.). Below, we construct a model to evaluate a single asset; for more information on the model construction, check out the fundamental factor models notebook.
We'll use the canonical Fama-French factors for this example, which are the returns of portfolios constructred based on fundamental factors.
How many factors do you want?
In the Arbitrage Pricing Theory lecture we mention that for predictive models you want fewer parameters. However, this doesn't quite hold for risk exposure. Instead of trying to not overfit a predictive model, you are looking for any possible risk factor that could be influencing your returns. Therefore it's actually safer to estimate exposure to many many risk factors to see if any stick. Anything left over in our $\alpha$ is risk exposure that is currently unexplained by the selected factors. You want your strategy's return stream to be all alpha, and to be unexplained by as many parameters as possible. If you can show that your historical returns have little to no dependence on many factors, this is very positive. Certainly some unrelated risk factors might have spurious relationships over time in a large dataset, but those are not likely to be consistent.
Setup
The first thing we do is compute a year's worth of factor returns.
NOTE
The process for doing this is described in the Fundamental Factor Models lecture and uses pipeline. For more information please see that lecture.
End of explanation
SMB_CUM = np.cumprod(SMB+1)
HML_CUM = np.cumprod(HML+1)
plt.plot(SMB_CUM.index, SMB_CUM.values)
plt.plot(HML_CUM.index, HML_CUM.values)
plt.ylabel('Cumulative Return')
plt.legend(['SMB Portfolio Returns', 'HML Portfolio Returns']);
Explanation: How did each factor do over 2014?
End of explanation
# Get returns data for our portfolio
portfolio = get_pricing(['MSFT', 'AAPL', 'YHOO', 'FB', 'TSLA'],
fields='price', start_date=start_date, end_date=end_date).pct_change()[1:]
R = np.mean(portfolio, axis=1)
bench = get_pricing('SPY', fields='price', start_date=start_date, end_date=end_date).pct_change()[1:]
# The excess returns of our active management, in this case just holding a portfolio of our one asset
active = R - bench
# Define a constant to compute intercept
constant = pd.TimeSeries(np.ones(len(active.index)), index=active.index)
df = pd.DataFrame({'R': active,
'F1': SMB,
'F2': HML,
'Constant': constant})
df = df.dropna()
# Perform linear regression to get the coefficients in the model
b1, b2 = regression.linear_model.OLS(df['R'], df[['F1', 'F2']]).fit().params
# Print the coefficients from the linear regression
print 'Sensitivities of active returns to factors:\nSMB: %f\nHML: %f' % (b1, b2)
Explanation: Computing Risk Exposure
Now we can determine how exposed another return stream is to each of these factors. We can do this by running static or rolling linear regressions between our return stream and the factor portfolio returns. First we'll compute the active returns (returns - benchmark) of some random asset and then model that asset as a linear combination of our two factors. The more a factor contributes to the active returns, the more exposed the active returns are to that factor.
End of explanation
F1 = df['F1']
F2 = df['F2']
cov = np.cov(F1, F2)
ar_squared = (active.std())**2
fmcar1 = (b1*(b2*cov[0,1] + b1*cov[0,0]))/ar_squared
fmcar2 = (b2*(b1*cov[0,1] + b2*cov[1,1]))/ar_squared
print 'SMB Risk Contribution:', fmcar1
print 'HML Risk Contribution:', fmcar2
Explanation: Using the formula from the start of the notebook, we can compute the factors' marginal contributions to active risk squared:
End of explanation
# Compute the rolling betas
model = pd.stats.ols.MovingOLS(y = df['R'], x=df[['F1', 'F2']],
window_type='rolling',
window=100)
rolling_parameter_estimates = model.beta
rolling_parameter_estimates.plot();
plt.title('Computed Betas');
plt.legend(['F1 Beta', 'F2 Beta', 'Intercept']);
Explanation: The rest of the risk can be attributed to active specific risk, i.e. factors that we did not take into account or the asset's idiosyncratic risk.
However, as usual we will look at how the exposure to these factors changes over time. As we lose a tremendous amount of information by just looking at one data point. Let's look at what happens if we run a rolling regression over time.
End of explanation
# Remove the first 99, which are all NaN for each case
# Compute covariances
covariances = pd.rolling_cov(df[['F1', 'F2']], window=100)[99:]
# Compute active risk squared
active_risk_squared = pd.rolling_std(active, window = 100)[99:]**2
# Compute betas
betas = rolling_parameter_estimates[['F1', 'F2']]
# Set up empty dataframe
FMCAR = pd.DataFrame(index=betas.index, columns=betas.columns)
# For each factor
for factor in betas.columns:
# For each bar in our data
for t in betas.index:
# Compute the sum of the betas and covariances
s = np.sum(betas.loc[t] * covariances[t][factor])
# Get the beta
b = betas.loc[t][factor]
# Get active risk squared
AR = active_risk_squared.loc[t]
# Put them all together to estimate FMCAR on that date
FMCAR[factor][t] = b * s / AR
Explanation: Now we'll look at FMCAR as it changes over time.
End of explanation
plt.plot(FMCAR['F1'].index, FMCAR['F1'].values)
plt.plot(FMCAR['F2'].index, FMCAR['F2'].values)
plt.ylabel('Marginal Contribution to Active Risk Squared')
plt.legend(['F1 FMCAR', 'F2 FMCAR']);
Explanation: Let's plot this.
End of explanation
from statsmodels.stats.stattools import jarque_bera
_, pvalue1, _, _ = jarque_bera(FMCAR['F1'].dropna().values)
_, pvalue2, _, _ = jarque_bera(FMCAR['F2'].dropna().values)
print 'p-value F1_FMCAR is normally distributed', pvalue1
print 'p-value F2_FMCAR is normally distributed', pvalue2
Explanation: Problems with using this data
Whereas it may be interesting to know how a portfolio was exposed to certain factors historically, it is really only useful if we can make predictions about how it will be exposed to risk factors in the future. It's not always a safe assumption to say that future exposure will be the current exposure. As you saw the exposure varies quite a bit, so taking the average is dangerous. We could put confidence intervals around that average, but that would only work if the distribution of exposures were normal or well behaved. Let's check using our old buddy, the Jarque-Bera test.
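A related check (a sketch, not part of the original lecture): before trusting any summary statistic of these exposures, test whether the rolling FMCAR series is even stationary, for example with an augmented Dickey-Fuller test.
from statsmodels.tsa.stattools import adfuller
adf_stat, adf_pvalue = adfuller(FMCAR['F1'].dropna().values.astype(float))[:2]
print('ADF p-value (null hypothesis: unit root / non-stationary):', adf_pvalue)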
End of explanation |
1,715 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Experiment
Run Hebbian pruning with non-binary activations.
Motivation
Attempt pruning given intuition offered up in "Memory Aware Synapses" paper
Step1: Dense Model
Step2: Static Sparse
Step3: Weighted Magnitude
Step4: SET
Step5: Hebbien | Python Code:
from IPython.display import Markdown, display
%load_ext autoreload
%autoreload 2
import sys
import itertools
sys.path.append("../../")
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import glob
import tabulate
import pprint
import click
import numpy as np
import pandas as pd
from ray.tune.commands import *
from nupic.research.frameworks.dynamic_sparse.common.browser import *
base = 'gsc-trials-2019-10-07'
exp_names = [
'gsc-BaseModel',
'gsc-Static',
'gsc-Heb-nonbinary',
'gsc-WeightedMag-nonbinary',
'gsc-WeightedMag',
'gsc-SET',
]
exps = [
os.path.join(base, exp) for exp in exp_names
]
paths = [os.path.expanduser("~/nta/results/{}".format(e)) for e in exps]
for p in paths:
print(os.path.exists(p), p)
df = load_many(paths)
# remove nans where appropriate
df['hebbian_prune_perc'] = df['hebbian_prune_perc'].replace(np.nan, 0.0, regex=True)
df['weight_prune_perc'] = df['weight_prune_perc'].replace(np.nan, 0.0, regex=True)
# distill certain values
df['on_perc'] = df['on_perc'].replace('None-None-0.1-None', 0.1, regex=True)
df['on_perc'] = df['on_perc'].replace('None-None-0.4-None', 0.4, regex=True)
df['on_perc'] = df['on_perc'].replace('None-None-0.02-None', 0.02, regex=True)
df['prune_methods'] = df['prune_methods'].replace('None-None-dynamic-linear-None', 'dynamic-linear', regex=True)
# def model_name(row):
# col = 'Experiment Name'
# for exp in exp_names:
# if exp in row[col]:
# return exp
# # if row[col] == 'DSNNWeightedMag':
# # return 'DSNN-WM'
# # elif row[col] == 'DSNNMixedHeb':
# # if row['hebbian_prune_perc'] == 0.3:
# # return 'SET'
# # elif row['weight_prune_perc'] == 0.3:
# # return 'DSNN-Heb'
# # elif row[col] == 'SparseModel':
# # return 'Static'
# assert False, "This should cover all cases. Got {}".format(row[col])
# df['model2'] = df.apply(model_name, axis=1)
df.iloc[34]
df.groupby('experiment_base_path')['experiment_base_path'].count()
# Did anything fail?
df[df["epochs"] < 30]["epochs"].count()
# helper functions
def mean_and_std(s):
return "{:.3f} ยฑ {:.3f}".format(s.mean(), s.std())
def round_mean(s):
return "{:.0f}".format(round(s.mean()))
stats = ['min', 'max', 'mean', 'std']
def agg(columns, filter=None, round=3):
if filter is None:
return (df.groupby(columns)
.agg({'val_acc_max_epoch': round_mean,
'val_acc_max': stats,
'model': ['count']})).round(round)
else:
return (df[filter].groupby(columns)
.agg({'val_acc_max_epoch': round_mean,
'val_acc_max': stats,
'model': ['count']})).round(round)
type(np.nan)
df['on_perc'][0] is nan
Explanation: Experiment
Run Hebbian pruning with non-binary activations.
Motivation
Attempt pruning given intuition offered up in "Memory Aware Synapses" paper:
* The weights with higher coactivations computed as $x_i \times x_j$
have a greater effect on the L2 norm of the layer's output. Here $x_i$ and $x_j$ are
the input and output activations respectively.
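A schematic of that intuition for a single linear layer (a sketch, not the implementation used in these experiments): accumulate the input/output coactivations over a batch and keep only the weights with the largest values.
import torch
x_in = torch.randn(32, 128)                         # input activations (batch, in_features)
x_out = torch.relu(x_in @ torch.randn(128, 64))     # output activations (batch, out_features)
coact = (x_out.t() @ x_in).abs() / x_in.shape[0]    # (out_features, in_features), aligned with the weight matrix
k = int(0.98 * coact.numel())
threshold = coact.flatten().kthvalue(k).values
keep_mask = coact >= threshold                      # keep roughly the top 2% of connections, echoing the 2%-sparse runs
print(keep_mask.float().mean())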
End of explanation
fltr = (df['experiment_base_path'] == 'gsc-BaseModel')
agg(['model'], fltr)
Explanation: Dense Model
End of explanation
# 2% sparse
fltr = (df['experiment_base_path'] == 'gsc-Static')
agg(['model'], fltr)
Explanation: Static Sparse
End of explanation
# 2% sparse
# 2% sparse
combos = {
'experiment_base_path': ['gsc-WeightedMag', 'gsc-WeightedMag-nonbinary'],
'hebbian_grow': [True, False],
}
combos = [[(k, v_i) for v_i in v] for k, v in combos.items()]
combos = list(itertools.product(*combos))
for c in combos:
fltr = None
summary = []
for restraint in c:
rname = restraint[0]
rcond = restraint[1]
summary.append("{}={} ".format(rname, rcond))
new_fltr = df[rname] == rcond
if fltr is not None:
fltr = fltr & new_fltr
else:
fltr = new_fltr
summary = Markdown("### " + " / ".join(summary))
display(summary)
display(agg(['experiment_base_path'], fltr))
print('\n\n\n\n')
Explanation: Weighted Magnitude
End of explanation
# 2% sparse
fltr = (df['experiment_base_path'] == 'gsc-SET')
display(agg(['model'], fltr))
Explanation: SET
End of explanation
# 2% sparse
combos = {
'hebbian_grow': [True, False],
'moving_average_alpha': [0.6, 0.8, 1.0],
'reset_coactivations': [True, False],
}
combos = [[(k, v_i) for v_i in v] for k, v in combos.items()]
combos = list(itertools.product(*combos))
for c in combos:
fltr = None
summary = []
for restraint in c:
rname = restraint[0]
rcond = restraint[1]
summary.append("{}={} ".format(rname, rcond))
new_fltr = df[rname] == rcond
if fltr is not None:
fltr = fltr & new_fltr
else:
fltr = new_fltr
summary = Markdown("### " + " / ".join(summary))
display(summary)
display(agg(['experiment_base_path'], fltr))
print('\n\n\n\n')
d = {'b':4}
'b' in d
Explanation: Hebbien
End of explanation |
1,716 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using nbtlib
The Named Binary Tag (NBT) file format is a simple structured binary format that is mainly used by the game Minecraft (see the official specification for more details). This short documentation will show you how you can manipulate nbt data using the nbtlib module.
Loading a file
Step1: By default nbtlib.load will figure out by itself if the specified file is gzipped, but you can also use the gzipped= keyword only argument if you know in advance whether the file is gzipped or not.
Step2: The nbtlib.load function also accepts the byteorder= keyword only argument. It lets you specify whether the file is big-endian or little-endian. The default value is 'big', which means that the file is interpreted as big-endian by default. You can set it to 'little' to use the little-endian format.
Step3: Objects returned by the nbtlib.load function are instances of the nbtlib.File class. The nbtlib.load function is actually a small helper around the File.load classmethod. If you need to load files from an already opened file-like object, you can use the File.parse class method.
Step4: The File class inherits from Compound, which inherits from dict. This means that you can use standard dict operations to access data inside of the file.
Step5: Modifying files
Step6: If you don't want to use a context manager, you can call the .save method manually to overwrite the original file or make a copy by specifying a different path. The .save method also accepts the gzipped= keyword only argument. By default, the copy will be gzipped if the original file is gzipped. Similarly, you can use the byteorder= keyword only argument to specify whether the file should be saved using the big-endian or little-endian format. By default, the copy will be saved using the same format as the original file.
Step7: You can also write nbt data to an already opened file-like object using the .write method.
Step8: Creating files
Step9: New files are uncompressed by default. You can use the gzipped= keyword only argument to create a gzipped file. New files are also big-endian by default. You can use the byteorder= keyword only argument to set the endianness of the file to either 'big' or 'little'.
Step10: Performing operations on tags
With the exception of ByteArray, IntArray and LongArray tags, every tag type inherits from a python builtin, allowing you to make use of their rich and familiar interfaces. ByteArray, IntArray and LongArray tags on the other hand, inherit from numpy arrays instead of the builtin array type in order to benefit from numpy's efficiency.
| Base type | Associated nbt tags |
| ------------------- | ------------------------------------ |
| int | Byte, Short, Int, Long |
| float | Float, Double |
| str | String |
| numpy.ndarray | ByteArray, IntArray, LongArray |
| list | List |
| dict | Compound |
All the methods and operations that are usually available on the base types can be used on the associated tags.
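For instance, here is a minimal illustration (a sketch, assuming only that the tag classes are imported from nbtlib.tag as shown later in this document):
from nbtlib.tag import Int, List, String
value = Int(5)
print(value + 3)               # arithmetic works because Int subclasses int
names = List[String](['a', 'b'])
names.append(String('c'))      # list methods work because List subclasses list
print(names)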
Step11: Serializing nbt tags to snbt
While using repr() on nbt tags outputs a python representation of the tag, calling str() on nbt tags (or simply printing them) will return the nbt literal representing that tag.
Step12: Converting nbt tags to strings will serialize them to snbt. If you want more control over the way nbt tags are serialized, you can use the nbtlib.serialize_tag function. In fact, using str on nbt tags simply calls nbtlib.serialize_tag on the specified tag.
Step13: You might have noticed that by default, the nbtlib.serialize_tag function will render strings with single ' or double " quotes based on their content to avoid escaping quoting characters. The string is serialized such that the type of quotes used is different from the first quoting character found in the string. If the string doesn't contain any quoting character, the nbtlib.serialize_tag function will render the string as a double " quoted string.
Step14: You can overwrite this behavior by setting the quote= keyword only argument to either a single ' or a double " quote.
Step15: The nbtlib.serialize_tag function can be used with the compact= keyword only argument to remove all the extra whitespace from the output.
Step16: If you'd rather have something a bit more readable, you can use the indent= keyword only argument to tell the nbtlib.serialize_tag function to output indented snbt. The argument can be either a string or an integer and will be used to define how to render each indentation level.
Step17: If you need the output to be indented with tabs instead, you can set the indent= argument to '\t'.
Step18: Note that the indent= keyword only argument can be set to any string, not just '\t'.
Step19: Creating tags from nbt literals
nbtlib supports creating nbt tags from their literal representation. The nbtlib.parse_nbt function can parse snbt and return the appropriate tag.
Step21: Note that the parser ignores whitespace.
Step22: Defining schemas
In order to avoid wrapping values manually every time you edit a compound tag, you can define a schema that will take care of converting python types to predefined nbt tags automatically.
Step23: By default, you can interact with keys that are not defined in the schema. However, if you use the strict= keyword only argument, the schema instance will raise a TypeError whenever you try to access a key that wasn't defined in the original schema.
Step24: The schema function is a helper that creates a class that inherits from CompoundSchema. This means that you can also inherit from the class manually.
Step25: You can also set the strict class attribute to True to create a strict schema type.
Step26: Combining schemas and custom file types
If you need to deal with files that always have a particular structure, you can create a specialized file type by combining it with a schema. For instance, this is how you would create a file type that opens minecraft structure files.
First, we need to define what a minecraft structure is, so we create a schema that matches the tag hierarchy.
Step27: Now let's test our schema by creating a structure. We can see that all the types are automatically applied.
Step28: Now we can create a custom file type that wraps our structure schema. Since structure files are always gzipped we can override the load method to default the gzipped argument to True. We also overwrite the constructor so that it can take directly an instance of our structure schema as argument.
Step29: We can now use the custom file type to load, edit and save structure files without having to specify the tags manually.
Step30: So now let's try to edit the structure. We're going to replace all the dirt blocks with stone blocks.
Step31: As you can see we didn't need to specify any tag to edit the file. | Python Code:
import nbtlib
nbt_file = nbtlib.load('nbt_files/bigtest.nbt')
nbt_file['stringTest']
Explanation: Using nbtlib
The Named Binary Tag (NBT) file format is a simple structured binary format that is mainly used by the game Minecraft (see the official specification for more details). This short documentation will show you how you can manipulate nbt data using the nbtlib module.
Loading a file
End of explanation
uncompressed_file = nbtlib.load('nbt_files/hello_world.nbt', gzipped=False)
uncompressed_file.gzipped
Explanation: By default nbtlib.load will figure out by itself if the specified file is gzipped, but you can also use the gzipped= keyword only argument if you know in advance whether the file is gzipped or not.
End of explanation
little_endian_file = nbtlib.load('nbt_files/hello_world_little.nbt', byteorder='little')
little_endian_file.byteorder
Explanation: The nbtlib.load function also accepts the byteorder= keyword only argument. It lets you specify whether the file is big-endian or little-endian. The default value is 'big', which means that the file is interpreted as big-endian by default. You can set it to 'little' to use the little-endian format.
End of explanation
from nbtlib import File
with open('nbt_files/hello_world.nbt', 'rb') as f:
hello_world = File.parse(f)
hello_world
Explanation: Objects returned by the nbtlib.load function are instances of the nbtlib.File class. The nbtlib.load function is actually a small helper around the File.load classmethod. If you need to load files from an already opened file-like object, you can use the File.parse class method.
End of explanation
nbt_file.keys()
Explanation: The File class inherits from Compound, which inherits from dict. This means that you can use standard dict operations to access data inside of the file.
End of explanation
from nbtlib.tag import *
with nbtlib.load('nbt_files/demo.nbt') as demo:
demo['counter'] = Int(demo['counter'] + 1)
demo
Explanation: Modifying files
End of explanation
demo = nbtlib.load('nbt_files/demo.nbt')
...
demo.save() # overwrite
demo.save('nbt_files/demo_copy.nbt', gzipped=True) # make a gzipped copy
demo.save('nbt_files/demo_little.nbt', byteorder='little') # convert the file to little-endian
nbtlib.load('nbt_files/demo_copy.nbt')['counter']
nbtlib.load('nbt_files/demo_little.nbt', byteorder='little')['counter']
Explanation: If you don't want to use a context manager, you can call the .save method manually to overwrite the original file or make a copy by specifying a different path. The .save method also accepts the gzipped= keyword only argument. By default, the copy will be gzipped if the original file is gzipped. Similarly, you can use the byteorder= keyword only argument to specify whether the file should be saved using the big-endian or little-endian format. By default, the copy will be saved using the same format as the original file.
End of explanation
with open('nbt_files/demo_copy.nbt', 'wb') as f:
demo.write(f)
Explanation: You can also write nbt data to an already opened file-like object using the .write method.
End of explanation
new_file = File({
'foo': String('bar'),
'spam': IntArray([1, 2, 3]),
'egg': List[String](['hello', 'world'])
})
new_file.save('nbt_files/new_file.nbt')
loaded_file = nbtlib.load('nbt_files/new_file.nbt')
loaded_file.gzipped
loaded_file.byteorder
Explanation: Creating files
End of explanation
new_file = File(
{'thing': LongArray([1, 2, 3])},
gzipped=True,
byteorder='little'
)
new_file.save('nbt_files/new_file_gzipped_little.nbt')
loaded_file = nbtlib.load('nbt_files/new_file_gzipped_little.nbt', byteorder='little')
loaded_file.gzipped
loaded_file.byteorder
Explanation: New files are uncompressed by default. You can use the gzipped= keyword only argument to create a gzipped file. New files are also big-endian by default. You can use the byteorder= keyword only argument to set the endianness of the file to either 'big' or 'little'.
End of explanation
my_list = List[String](char.upper() for char in 'hello')
my_list.reverse()
my_list[3:]
my_array = IntArray([1, 2, 3])
my_array + 100
my_pizza = Compound({
'name': String('Margherita'),
'price': Double(5.7),
'size': String('medium')
})
my_pizza.update({'name': String('Calzone'), 'size': String('large')})
my_pizza['price'] = Double(my_pizza['price'] + 2.5)
my_pizza
Explanation: Performing operations on tags
With the exception of ByteArray, IntArray and LongArray tags, every tag type inherits from a python builtin, allowing you to make use of their rich and familiar interfaces. ByteArray, IntArray and LongArray tags on the other hand, inherit from numpy arrays instead of the builtin array type in order to benefit from numpy's efficiency.
| Base type | Associated nbt tags |
| ------------------- | ------------------------------------ |
| int | Byte, Short, Int, Long |
| float | Float, Double |
| str | String |
| numpy.ndarray | ByteArray, IntArray, LongArray |
| list | List |
| dict | Compound |
All the methods and operations that are usually available on the base types can be used on the associated tags.
End of explanation
example_tag = Compound({
'numbers': IntArray([1, 2, 3]),
'foo': String('bar'),
'syntax breaking': Float(42),
'spam': String('{"text":"Hello, world!\\n"}')
})
print(repr(example_tag))
print(str(example_tag))
print(example_tag)
Explanation: Serializing nbt tags to snbt
While using repr() on nbt tags outputs a python representation of the tag, calling str() on nbt tags (or simply printing them) will return the nbt literal representing that tag.
End of explanation
from nbtlib import serialize_tag
print(serialize_tag(example_tag))
serialize_tag(example_tag) == str(example_tag)
Explanation: Converting nbt tags to strings will serialize them to snbt. If you want more control over the way nbt tags are serialized, you can use the nbtlib.serialize_tag function. In fact, using str on nbt tags simply calls nbtlib.serialize_tag on the specified tag.
End of explanation
print(String("contains 'single' quotes"))
print(String('contains "double" quotes'))
print(String('''contains 'single' and "double" quotes'''))
Explanation: You might have noticed that by default, the nbtlib.serialize_tag function will render strings with single ' or double " quotes based on their content to avoid escaping quoting characters. The string is serialized such that the type of quotes used is different from the first quoting character found in the string. If the string doesn't contain any quoting character, the nbtlib.serialize_tag function will render the string as a double " quoted string.
End of explanation
print(serialize_tag(String('forcing "double" quotes'), quote='"'))
Explanation: You can overwrite this behavior by setting the quote= keyword only argument to either a single ' or a double " quote.
End of explanation
print(serialize_tag(example_tag, compact=True))
Explanation: The nbtlib.serialize_tag function can be used with the compact= keyword only argument to remove all the extra whitespace from the output.
End of explanation
nested_tag = Compound({
'foo': List[Int]([1, 2, 3]),
'bar': String('name'),
'values': List[Compound]([
{'test': String('a'), 'thing': ByteArray([32, 32, 32])},
{'test': String('b'), 'thing': ByteArray([64, 64, 64])}
])
})
print(serialize_tag(nested_tag, indent=4))
Explanation: If you'd rather have something a bit more readable, you can use the indent= keyword only argument to tell the nbtlib.serialize_tag function to output indented snbt. The argument can be either a string or an integer and will be used to define how to render each indentation level.
End of explanation
print(serialize_tag(nested_tag, indent='\t'))
Explanation: If you need the output to be indented with tabs instead, you can set the indent= argument to '\t'.
End of explanation
print(serialize_tag(nested_tag, indent='. '))
Explanation: Note that the indent= keyword only argument can be set to any string, not just '\t'.
End of explanation
from nbtlib import parse_nbt
parse_nbt('hello')
parse_nbt('{foo:[{bar:[I;1,2,3]},{spam:6.7f}]}')
Explanation: Creating tags from nbt literals
nbtlib supports creating nbt tags from their literal representation. The nbtlib.parse_nbt function can parse snbt and return the appropriate tag.
End of explanation
parse_nbt("""{
    foo: [1, 2, 3],
    bar: "name",
    values: [
        {
            test: "a",
            thing: [B; 32B, 32B, 32B]
        },
        {
            test: "b",
            thing: [B; 64B, 64B, 64B]
        }
    ]
}""")
Explanation: Note that the parser ignores whitespace.
End of explanation
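# Extra sanity check (not from the original docs): parsing the serialized form
# of a simple tag gives back an equivalent tag.
from nbtlib import parse_nbt, serialize_tag
from nbtlib.tag import String
roundtrip = parse_nbt(serialize_tag(String('hello world')))
print(roundtrip == 'hello world')   # True, since String subclasses str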
from nbtlib import schema
MySchema = schema('MySchema', {
'foo': String,
'bar': Short
})
my_object = MySchema({'foo': 'hello world', 'bar': 21})
my_object['bar'] *= 2
my_object
Explanation: Defining schemas
In order to avoid wrapping values manually every time you edit a compound tag, you can define a schema that will take care of converting python types to predefined nbt tags automatically.
End of explanation
MyStrictSchema = schema('MyStrictSchema', {
'foo': String,
'bar': Short
}, strict=True)
strict_instance = MyStrictSchema()
strict_instance.update({'foo': 'hello world'})
strict_instance
try:
strict_instance['something'] = List[String](['this', 'raises', 'an', 'error'])
except TypeError as exc:
print(exc)
Explanation: By default, you can interact with keys that are not defined in the schema. However, if you use the strict= keyword only argument, the schema instance will raise a TypeError whenever you try to access a key that wasn't defined in the original schema.
End of explanation
from nbtlib import CompoundSchema
class MySchema(CompoundSchema):
schema = {
'foo': String,
'bar': Short
}
MySchema({'foo': 'hello world', 'bar': 42})
Explanation: The schema function is a helper that creates a class that inherits from CompoundSchema. This means that you can also inherit from the class manually.
End of explanation
class MyStrictSchema(CompoundSchema):
schema = {
'foo': String,
'bar': Short
}
strict = True
try:
MyStrictSchema({'something': Byte(5)})
except TypeError as exc:
print(exc)
Explanation: You can also set the strict class attribute to True to create a strict schema type.
End of explanation
Structure = schema('Structure', {
'DataVersion': Int,
'author': String,
'size': List[Int],
'palette': List[schema('State', {
'Name': String,
'Properties': Compound,
})],
'blocks': List[schema('Block', {
'state': Int,
'pos': List[Int],
'nbt': Compound,
})],
'entities': List[schema('Entity', {
'pos': List[Double],
'blockPos': List[Int],
'nbt': Compound,
})],
})
Explanation: Combining schemas and custom file types
If you need to deal with files that always have a particular structure, you can create a specialized file type by combining it with a schema. For instance, this is how you would create a file type that opens minecraft structure files.
First, we need to define what a minecraft structure is, so we create a schema that matches the tag hierarchy.
End of explanation
new_structure = Structure({
'DataVersion': 1139,
'author': 'dinnerbone',
'size': [1, 2, 1],
'palette': [
{'Name': 'minecraft:dirt'}
],
'blocks': [
{'pos': [0, 0, 0], 'state': 0},
{'pos': [0, 1, 0], 'state': 0}
],
'entities': [],
})
type(new_structure['blocks'][0]['pos'])
type(new_structure['entities'])
Explanation: Now let's test our schema by creating a structure. We can see that all the types are automatically applied.
End of explanation
class StructureFile(File, Structure):
def __init__(self, structure_data=None):
super().__init__(structure_data or {})
self.gzipped = True
@classmethod
def load(cls, filename, gzipped=True):
return super().load(filename, gzipped)
Explanation: Now we can create a custom file type that wraps our structure schema. Since structure files are always gzipped we can override the load method to default the gzipped argument to True. We also overwrite the constructor so that it can take directly an instance of our structure schema as argument.
End of explanation
structure_file = StructureFile(new_structure)
structure_file.save('nbt_files/new_structure.nbt') # you can load it in a minecraft world!
Explanation: We can now use the custom file type to load, edit and save structure files without having to specify the tags manually.
End of explanation
with StructureFile.load('nbt_files/new_structure.nbt') as structure_file:
structure_file['palette'][0]['Name'] = 'minecraft:stone'
Explanation: So now let's try to edit the structure. We're going to replace all the dirt blocks with stone blocks.
End of explanation
print(serialize_tag(StructureFile.load('nbt_files/new_structure.nbt'), indent=4))
Explanation: As you can see we didn't need to specify any tag to edit the file.
End of explanation |
1,717 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="http
Step1: Risk Factor Models
The first step is to define a model for the risk-neutral discounting.
Step2: We then define a market environment containing the major parameter specifications needed,
Step3: Next, the model object for the first risk factor, based on the geometric Brownian motion (Black-Scholes-Merton (1973) model).
Step4: Some paths visualized.
Step5: Second risk factor with higher volatility. We overwrite the respective value in the market environment.
Step6: Valuation Models
Based on the risk factors, we can then define derivatives models for valuation. To this end, we need to add at least one (the maturity), in general two (maturity and strike), parameters to the market environments.
Step7: The first derivative is an American put option on the first risk factor gbm_1.
Step8: Let us calculate a Monte Carlo present value estimate and estimates for the major Greeks.
Step9: The second derivative is a European call option on the second risk factor gbm_2.
Step10: Valuation and Greek estimation for this option.
Step11: Options Portfolio
Modeling
In a portfolio context, we need to add information about the model class(es) to be used to the market environments of the risk factors.
Step12: To compose a portfolio consisting of our just defined options, we need to define derivatives positions. Note that this step is independent from the risk factor model and option model definitions. We only use the market environment data and some additional information needed (e.g. payoff functions).
Step13: Let us define the relevant market by 2 Python dictionaries, the correlation between the two risk factors and a valuation environment.
Step14: These are used to define the derivatives portfolio.
Step15: Simulation and Valuation
Now, we can get the position values for the portfolio via the get_values method.
Step16: Via the get_statistics method, delta and vega values are provided as well.
Step17: Much more complex scenarios are possible with DX Analytics
Risk Reports
Having modeled the derivatives portfolio, risk reports are only two method calls away. | Python Code:
import dx
import datetime as dt
import pandas as pd
import seaborn as sns; sns.set()
Explanation: <img src="http://hilpisch.com/tpq_logo.png" alt="The Python Quants" width="45%" align="right" border="4">
Quickstart
This brief first part illustrates---without much explanation---the usage of the DX Analytics library. It models two risk factors, two derivatives instruments and values these in a portfolio context.
End of explanation
r = dx.constant_short_rate('r', 0.01)
Explanation: Risk Factor Models
The first step is to define a model for the risk-neutral discounting.
End of explanation
me_1 = dx.market_environment('me', dt.datetime(2015, 1, 1))
me_1.add_constant('initial_value', 100.)
# starting value of simulated processes
me_1.add_constant('volatility', 0.2)
# volatiltiy factor
me_1.add_constant('final_date', dt.datetime(2016, 6, 30))
# horizon for simulation
me_1.add_constant('currency', 'EUR')
# currency of instrument
me_1.add_constant('frequency', 'W')
# frequency for discretization
me_1.add_constant('paths', 10000)
# number of paths
me_1.add_curve('discount_curve', r)
# number of paths
Explanation: We then define a market environment containing the major parameter specifications needed,
End of explanation
gbm_1 = dx.geometric_brownian_motion('gbm_1', me_1)
Explanation: Next, the model object for the first risk factor, based on the geometric Brownian motion (Black-Scholes-Merton (1973) model).
End of explanation
pdf = pd.DataFrame(gbm_1.get_instrument_values(), index=gbm_1.time_grid)
%matplotlib inline
pdf.ix[:, :10].plot(legend=False, figsize=(10, 6))
Explanation: Some paths visualized.
End of explanation
me_2 = dx.market_environment('me_2', me_1.pricing_date)
me_2.add_environment(me_1) # add complete environment
me_2.add_constant('volatility', 0.5) # overwrite value
gbm_2 = dx.geometric_brownian_motion('gbm_2', me_2)
pdf = pd.DataFrame(gbm_2.get_instrument_values(), index=gbm_2.time_grid)
pdf.ix[:, :10].plot(legend=False, figsize=(10, 6))
Explanation: Second risk factor with higher volatility. We overwrite the respective value in the market environment.
End of explanation
me_opt = dx.market_environment('me_opt', me_1.pricing_date)
me_opt.add_environment(me_1)
me_opt.add_constant('maturity', dt.datetime(2016, 6, 30))
me_opt.add_constant('strike', 110.)
Explanation: Valuation Models
Based on the risk factors, we can then define derivatives models for valuation. To this end, we need to add at least one (the maturity), in general two (maturity and strike), parameters to the market environments.
End of explanation
am_put = dx.valuation_mcs_american_single(
name='am_put',
underlying=gbm_1,
mar_env=me_opt,
payoff_func='np.maximum(strike - instrument_values, 0)')
Explanation: The first derivative is an American put option on the first risk factor gbm_1.
End of explanation
am_put.present_value()
am_put.delta()
am_put.vega()
Explanation: Let us calculate a Monte Carlo present value estimate and estimates for the major Greeks.
End of explanation
eur_call = dx.valuation_mcs_european_single(
name='eur_call',
underlying=gbm_2,
mar_env=me_opt,
payoff_func='np.maximum(maturity_value - strike, 0)')
Explanation: The second derivative is a European call option on the second risk factor gbm_2.
End of explanation
eur_call.present_value()
eur_call.delta()
eur_call.vega()
Explanation: Valuation and Greek estimation for this option.
End of explanation
me_1.add_constant('model', 'gbm')
me_2.add_constant('model', 'gbm')
Explanation: Options Portfolio
Modeling
In a portfolio context, we need to add information about the model class(es) to be used to the market environments of the risk factors.
End of explanation
put = dx.derivatives_position(
name='put',
quantity=2,
underlyings=['gbm_1'],
mar_env=me_opt,
otype='American single',
payoff_func='np.maximum(strike - instrument_values, 0)')
call = dx.derivatives_position(
name='call',
quantity=3,
underlyings=['gbm_2'],
mar_env=me_opt,
otype='European single',
payoff_func='np.maximum(maturity_value - strike, 0)')
Explanation: To compose a portfolio consisting of our just defined options, we need to define derivatives positions. Note that this step is independent from the risk factor model and option model definitions. We only use the market environment data and some additional information needed (e.g. payoff functions).
End of explanation
risk_factors = {'gbm_1': me_1, 'gbm_2' : me_2}
correlations = [['gbm_1', 'gbm_2', -0.4]]
positions = {'put' : put, 'call' : call}
val_env = dx.market_environment('general', dt.datetime(2015, 1, 1))
val_env.add_constant('frequency', 'W')
val_env.add_constant('paths', 10000)
val_env.add_constant('starting_date', val_env.pricing_date)
val_env.add_constant('final_date', val_env.pricing_date)
val_env.add_curve('discount_curve', r)
Explanation: Let us define the relevant market by 2 Python dictionaries, the correlation between the two risk factors and a valuation environment.
End of explanation
port = dx.derivatives_portfolio(
name='portfolio', # name
positions=positions, # derivatives positions
val_env=val_env, # valuation environment
risk_factors=risk_factors, # relevant risk factors
correlations=correlations, parallel=True) # correlation between risk factors
Explanation: These are used to define the derivatives portfolio.
End of explanation
port.get_values()
Explanation: Simulation and Valuation
Now, we can get the position values for the portfolio via the get_values method.
End of explanation
port.get_statistics()
Explanation: Via the get_statistics methods delta and vega values are provided as well.
End of explanation
deltas, benchvalue = port.get_port_risk(Greek='Delta')
dx.risk_report(deltas)
dx.risk_report(deltas.ix[:, :, 'value'] - benchvalue)
vegas, benchvalue = port.get_port_risk(Greek='Vega', step=0.05)
dx.risk_report(vegas)
dx.risk_report(vegas.ix[:, :, 'value'] - benchvalue)
Explanation: Much more complex scenarios are possible with DX Analytics
Risk Reports
Having modeled the derivatives portfolio, risk reports are only two method calls away.
End of explanation |
1,718 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Streaming Sourmash
This notebook demonstrates how to use goetia to perform a streaming analysis of sourmash minhash signatures. Goetia includes the sourmash C++ header and exposes it with cpppy, and wraps it so it can be used with goetia's sequence processors. This enables a simple way to perform fast streaming signature analysis in Python.
Step1: The Signature Class
The goetia SourmashSignature.Signature is derived from sourmash::MinHash, and so follows the same interface. This signature will contain 10000 hashes at a $K$ of 31.
Step2: The Signature Processor
SourmashSignature contains its own sequence processor. This processor is iterable; it will process the given sequences in chunks given by fine_interval. The default fine interval is 10000. Alternatively, we can call process to consume the entire sample.
Step3: Let's get some data. We'll start with something small, an ecoli sequencing run.
Step4: Consuming Files
The processor can handle single or paired mode natively. There's nothing special to be done with paired reads for sourmash, so the paired mode just consumes each sequence in the pair one after the other. This is a full sequencing run of ~3.5 million sequences; it should take about ten-to-twenty seconds to consume.
Step5: If sourmash is installed, we can convert the signature to a sourmash.MinHash object to do further analysis.
Step6: Chunked Processing
Now let's do it in chunked mode. We'll print info on the medium interval, which is contained in the state object.
Step7: The similarity should be exact...
Step8: Streaming Distance Calculation
Using the chunked processor for reporting is a bit boring. A more interesting use-case is to track within-sample distances -- streaming analysis. We'll write a quick function to perform this analysis.
Step9: We can see that the similarity saturates at ~2 million sequences, with the exception of a dip later on -- this could be an instrument error. If we increase the size of the signature, the saturation curve will smooth out.
Step10: Smaller minhashes are more susceptible to sequence error, and so the saturation curve is noisy; additionally, the larger minhash detects saturation more clearly, and the distribution of distances leans more heavily toward 1.0.
Some Signal Processing
There are a number of metrics we could use to detect "saturation" | Python Code:
# First, import the necessary libraries
from goetia import libgoetia
from goetia.alphabets import DNAN_SIMPLE
from goetia.signatures import SourmashSignature
from sourmash import load_one_signature, MinHash
import screed
from ficus import FigureManager
import seaborn as sns
import numpy as np
Explanation: Streaming Sourmash
This notebook demonstrates how to use goetia to perform a streaming analysis of sourmash minhash signatures. Goetia includes the sourmash C++ header and exposes it with cpppy, and wraps it so it can be used with goetia's sequence processors. This enables a simple way to perform fast streaming signature analysis in Python.
End of explanation
all_signature = SourmashSignature.Signature.build(10000, 31, False, False, False, 42, 0)
Explanation: The Signature Class
The goetia SourmashSignature.Signature is derived from sourmash::MinHash, and so follows the same interface. This signature will contain 10000 hashes at a $K$ of 31.
End of explanation
processor = SourmashSignature.Processor.build(all_signature)
Explanation: The Signature Processor
SourmashSignature contains its own sequence processor. This processor is iterable; it will process the given sequences in chunks given by fine_interval. The default fine interval is 10000. Alternatively, we can call process to consume the entire sample.
End of explanation
!curl -L https://osf.io/wa57n/download > ecoli.1.fastq.gz
!curl -L https://osf.io/khqaz/download > ecoli.2.fastq.gz
Explanation: Let's get some data. We'll start with something small, an ecoli sequencing run.
End of explanation
%time processor.process('ecoli.1.fastq.gz', 'ecoli.2.fastq.gz')
Explanation: Consuming Files
The processor can handle single or paired mode natively. There's nothing special to be done with paired reads for sourmash, so the paired mode just consumes each sequence in the pair one after the other. This is a full sequencing run of ~3.5 million sequences; it should take about ten-to-twenty seconds to consume.
End of explanation
all_mh = all_signature.to_sourmash()
all_mh
Explanation: If sourmash is installed, we can convert the signature to a sourmash.MinHash object to do further analysis.
End of explanation
chunked_signature = SourmashSignature.Signature.build(10000, 31, False, False, False, 42, 0)
processor = SourmashSignature.Processor.build(chunked_signature, 10000, 250000)
for n_reads, n_skipped, state in processor.chunked_process('ecoli.1.fastq.gz', 'ecoli.2.fastq.gz'):
if state.medium:
print('Processed', n_reads, 'sequences.')
Explanation: Chunked Processing
Now let's do it in chunked mode. We'll print info on the medium interval, which is contained in the state object.
End of explanation
chunked_mh = chunked_signature.to_sourmash()
chunked_mh.similarity(all_mh)
Explanation: The similarity should be exact...
End of explanation
def sourmash_stream(left, right, N=1000, K=31):
distances = []
times = []
streaming_sig = SourmashSignature.Signature.build(N, K, False, False, False, 42, 0)
# We'll set the medium_interval to 250000
processor = SourmashSignature.Processor.build(streaming_sig, 10000, 250000)
# Calculate a distance at each interval. The iterator is over fine chunks.
prev_mh = None
for n_reads, _, state in processor.chunked_process(left, right):
curr_mh = streaming_sig.to_sourmash()
if prev_mh is not None:
distances.append(prev_mh.similarity(curr_mh))
times.append(n_reads)
prev_mh = curr_mh
if state.medium:
print(n_reads, 'reads.')
return np.array(distances), np.array(times), streaming_sig
distances_small, times_small, mh_small = sourmash_stream('ecoli.1.fastq.gz', 'ecoli.2.fastq.gz', N=1000)
with FigureManager(show=True, figsize=(12,8)) as (fig, ax):
sns.lineplot(times_small, distances_small, ax=ax)
ax.set_title('Streaming Minhash Distance')
ax.set_xlabel('Sequence')
ax.set_ylabel('Minhash (Jaccard) Similarity')
Explanation: Streaming Distance Calculation
Using the chunked processor for reporting is a bit boring. A more interesting use-case is to track within-sample distances -- streaming analysis. We'll write a quick function to perform this analysis.
End of explanation
distances_large, times_large, mh_large = sourmash_stream('ecoli.1.fastq.gz', 'ecoli.2.fastq.gz', N=50000)
with FigureManager(show=True, figsize=(12,8)) as (fig, ax):
sns.lineplot(times_small, distances_small, label='$N$=1,000', ax=ax)
sns.lineplot(times_large, distances_large, label='$N$=50,000', ax=ax)
ax.set_title('Streaming Minhash Distance')
ax.set_xlabel('Sequence')
ax.set_ylabel('Minhash (Jaccard) Similarity')
ax.set_ylim(bottom=.8)
with FigureManager(show=True, figsize=(10,5)) as (fig, ax):
ax.set_title('Distribution of Distances')
sns.distplot(distances_small, vertical=True, hist=False, ax=ax, label='$N$=1,000')
sns.distplot(distances_large, vertical=True, hist=False, ax=ax, label='$N$=50,000')
Explanation: We can see that the similarity saturates at ~2 million sequences, with the exception of a dip later on -- this could be an instrument error. If we increase the size of the signature, the saturation curve will smooth out.
End of explanation
def sliding_window_stddev(distances, window_size=10):
stddevs = np.zeros(len(distances) - window_size + 1)
for i in range(0, len(distances) - window_size + 1):
stddevs[i] = np.std(distances[i:i+window_size])
return stddevs
with FigureManager(show=True, figsize=(12,8)) as (fig, ax):
cutoff = .0005
window_size = 10
std_small = sliding_window_stddev(distances_small, window_size=window_size)
sat_small = None
for i, val in enumerate(std_small):
if val < cutoff:
sat_small = i
break
std_large = sliding_window_stddev(distances_large, window_size=window_size)
sat_large = None
for i, val in enumerate(std_large):
if val < cutoff:
sat_large = i
break
ax = sns.lineplot(times_small[:-window_size + 1], std_small, label='$N$=1,000', color=sns.xkcd_rgb['purple'], ax=ax)
ax = sns.lineplot(times_large[:-window_size + 1], std_large, label='$N$=50,000', ax=ax, color=sns.xkcd_rgb['gold'])
if sat_small is not None:
ax.axvline(times_small[sat_small + window_size // 2], alpha=.5, color=sns.xkcd_rgb['light purple'],
label='$N$=1,000 Saturation')
if sat_large is not None:
ax.axvline(times_large[sat_large + window_size // 2], alpha=.5, color=sns.xkcd_rgb['goldenrod'],
label='$N$=50,000 Saturation')
ax.set_ylabel('Rolling stdev of Distance')
ax.set_xlabel('Sequence')
Explanation: Smaller minhashes are more susceptible to sequence error, and so the saturation curve is noisy; additionally, the larger minhash detects saturation more clearly, and the distribution of distances leans more heavily toward 1.0.
Some Signal Processing
There are a number of metrics we could use to detect "saturation": what exactly we count as such is a user decision. A simplistic approach would be to measure standard deviation of the distance over a window and consider the sample saturated when it drops below a threshold.
End of explanation |
1,719 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bandicoot
bandicoot is an open-source python toolbox to analyze mobile phone metadata.
For more information, see: http://bandicoot.mit.edu/
Step1: Input files
<img src="mini-mockups-01.png" width="80%" style="border
Step2: Loading a user
Step3: Individual indicators
active_days
number_of_contacts
number_of_interactions
duration_of_calls
percent_nocturnal
percent_initiated_conversations
percent_initiated_interactions
response_delay_text
response_rate_text
entropy_of_contacts
balance_of_contacts
interactions_per_contact
interevent_time
Step4: Spatial indicators
number_of_antennas
entropy_of_antennas
percent_at_home
radius_of_gyration
Step5: Weekly aggregation
By default, bandicoot computes the indicators on a weekly basis and returns the mean over all the weeks available and its standard error (sem) in a nested dictionary.
The groupby='week' or groupby=None keyword controls the aggregation.
<img src="mini-mockups-02.png" width="80%" style="border
Step6: Summary
Some indicators, such as active_days, return a single number. Others, such as duration_of_calls, return a distribution.
The summary keyword can take three values
Step7: Exporting indicators | Python Code:
%pylab inline
import seaborn as sns
Explanation: Bandicoot
bandicoot is an open-source python toolbox to analyze mobile phone metadata.
For more information, see: http://bandicoot.mit.edu/
<hr>
End of explanation
!head -n 5 data/ego.csv
!head -n 5 data/antennas.csv
Explanation: Input files
<img src="mini-mockups-01.png" width="80%" style="border: 1px solid #aaa" />
Scheme for read_csv user records:
interaction,direction,correspondent_id,datetime,call_duration,antenna_id
Scheme for read_orange:
call_record_type;basic_service;user_msisdn;call_partner_identity;datetime;call_duration;longitude;latitude
End of explanation
import bandicoot as bc
U = bc.read_csv('ego', 'data/', 'data/antennas.csv')
bc.special.demo.export_antennas(U, 'viz/mobility_view')
bc.special.demo.export_transitions(U, 'viz/mobility_view')
bc.special.demo.export_timeline(U, 'viz/event_timeline')
bc.special.demo.export_network(U, 'viz/network_view')
from IPython.display import IFrame
IFrame("viz/network_view/index.html", "100%", 400)
from IPython.display import IFrame
IFrame("viz/mobility_view/index.html", "100%", 400)
from IPython.display import IFrame
IFrame("viz/event_timeline/index.html", "100%", 400)
U.records[:3]
Explanation: Loading a user
End of explanation
bc.individual.entropy_of_contacts(U, groupby=None)
bc.individual.percent_initiated_conversations(U, groupby=None)
interevent = bc.individual.interevent_time(U, groupby=None, summary=None)
f, axes = plt.subplots(figsize=(12, 5))
sns.distplot(np.log(interevent['allweek']['allday']['call']), norm_hist=True)
title('Distribution of interevent time', fontsize=15)
plt.xlabel('Interevent time (second)')
plt.ylabel('PDF')
_ = plt.xticks(plt.xticks()[0], [int(np.exp(i)) for i in plt.xticks()[0]])
call_durations = bc.individual.call_duration(U, groupby=None, summary=None)
f, ax = plt.subplots(figsize=(12, 5))
sns.distplot(np.log(call_durations['allweek']['allday']['call']), kde=True)
title('Distribution of call durations', fontsize=15)
plt.xlabel('Call duration (second)')
plt.ylabel('PDF')
_ = plt.xticks(plt.xticks()[0], [int(np.exp(i)) for i in plt.xticks()[0]])
Explanation: Individual indicators
active_days
number_of_contacts
number_of_interactions
duration_of_calls
percent_nocturnal
percent_initiated_conversations
percent_initiated_interactions
response_delay_text
response_rate_text
entropy_of_contacts
balance_of_contacts
interactions_per_contact
interevent_time
End of explanation
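# Illustration (not in the original notebook): indicators come back as nested
# dicts, so a single value can be pulled out by indexing with the same keys
# used above, e.g. ['allweek']['allday']['call'].
entropy = bc.individual.entropy_of_contacts(U, groupby=None)
print(entropy['allweek']['allday']['call'])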
print(bc.spatial.number_of_antennas(U, groupby=None))
bc.spatial.radius_of_gyration(U, groupby=None)
print("Home:", U.home)
print("Percent at home: {0:.0f}%".format(100 * bc.spatial.percent_at_home(U, groupby=None)['allweek']['allday']))
Explanation: Spatial indicators
number_of_antennas
entropy_of_antennas
percent_at_home
radius_of_gyration
End of explanation
bc.individual.active_days(U, groupby=False)
bc.individual.active_days(U)
bc.individual.active_days(U, summary=None)
Explanation: Weekly aggregation
By default, bandicoot computes the indicators on a weekly basis and returns the mean over all the weeks available and its standard error (sem) in a nested dictionary.
The groupby='week' or groupby=None keyword controls the aggregation.
<img src="mini-mockups-02.png" width="80%" style="border: 1px solid #aaa" />
End of explanation
bc.individual.call_duration(U, summary='extended', groupby=None)
print(bc.individual.call_duration(U, summary=None, groupby=None))
Explanation: Summary
Some indicators, such as active_days, return a single number. Others, such as duration_of_calls, return a distribution.
The summary keyword can take three values:
default: to return mean and sem;
extended for the second type of indicators, to return mean, sem, median, skewness and std of the distribution;
None: to return the full distribution.
End of explanation
bc.utils.all(U, groupby=None)
bc.io.to_csv([bc.utils.all(U, groupby=None)], 'demo_export_user.csv')
bc.io.to_json([bc.utils.all(U, groupby=None)], 'demo_export_user.json')
!head -n 5 demo_export_user.csv
Explanation: Exporting indicators
End of explanation |
1,720 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Attention Based Classification Tutorial
Recommended time: 30 minutes
Step1: Load & Explore Data
Let's begin by downloading the data from Figshare and cleaning and splitting it for use in training.
Step2: We then load these splits as pandas dataframes.
Step3: We display the top few rows of the dataframe to see what we're dealing with. The key columns are 'comment' which contains the text of a comment from a Wikipedia talk page and 'toxicity' which contains the fraction of annotators who found this comment to be toxic. More information about the other fields and how this data was collected can be found on this wiki and research paper.
Step4: Hyperparameters
Hyperparameters are used to specify various aspects of our model's architecture. In practice, these are often critical to model performance and are carefully tuned using some type of hyperparameter search. For this tutorial, we will choose a reasonable set of hyperparameters and treat them as fixed.
Step5: Step 0
Step6: Step 1
Step7: Step 2
Step8: Step 3
Step10: Step 4
Step11: The predict component of our graph then just takes the output of our attention step, i.e. the weighted average of the bi-RNN hidden layers, and adds one more fully connected layer to compute the logits. These logits are fed into a our estimator_spec which uses a softmax to get the final class probabilties and a softmax_cross_entropy to build a loss function.
Step13: Step 5
Step14: Step 6
Step15: The estimator framework also requires us to define an input function. This will take the input data and provide it during model training in batches. We will use the provided numpy_input_function, which takes numpy arrays as features and labels. We also specify the batch size and whether we want to shuffle the data between epochs.
Step16: Now, it's finally time to train our model! With estimator, this is as easy as calling the train function and specifying how long we'd like to train for.
Step17: Step 7
Step18: These predictions are returned to us as a generator. The code below gives an example of how we can extract the class and attention weights for each prediction.
Step19: To evaluate our model, we can use the evaluate function provided by estimator to get the accuracy and ROC-AUC scores as we defined them in our estimator_spec.
Step20: Step 8 | Python Code:
%load_ext autoreload
%autoreload 2
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import pandas as pd
import tensorflow as tf
import numpy as np
import time
import os
from sklearn import metrics
from visualize_attention import attentionDisplay
from process_figshare import download_figshare, process_figshare
tf.set_random_seed(1234)
Explanation: Attention Based Classification Tutorial
Recommended time: 30 minutes
Contributors: nthain, martin-gorner
This tutorial provides an introduction to building text classification models in tensorflow that use attention to provide insight into how classification decisions are being made. We will build our tensorflow graph following the Embed - Encode - Attend - Predict paradigm introduced by Matthew Honnibal. For more information about this approach, you can refer to:
Slides: https://goo.gl/BYT7au
Video: https://youtu.be/pzOzmxCR37I
Figure 1 below provides a representation of the full tensorflow graph we will build in this tutorial. The green squares represent RNN cells and the blue trapezoids represent neural networks for computing attention weights which will be discussed in more detail below. We will implement each piece of this model graph in a seperate function. The whole model will then simply be calling all of these functions in turn.
This tutorial was created in collaboration with the Tensorflow without a PhD series. To check out more episodes, tutorials, and codelabs from this series, please visit:
https://github.com/GoogleCloudPlatform/tensorflow-without-a-phd
Imports
End of explanation
download_figshare()
process_figshare()
Explanation: Load & Explore Data
Let's begin by downloading the data from Figshare and cleaning and splitting it for use in training.
End of explanation
SPLITS = ['train', 'dev', 'test']
wiki = {}
for split in SPLITS:
wiki[split] = pd.read_csv('data/wiki_%s.csv' % split)
Explanation: We then load these splits as pandas dataframes.
End of explanation
wiki['train'].head()
Explanation: We display the top few rows of the dataframe to see what we're dealing with. The key columns are 'comment' which contains the text of a comment from a Wikipedia talk page and 'toxicity' which contains the fraction of annotators who found this comment to be toxic. More information about the other fields and how this data was collected can be found on this wiki and research paper.
End of explanation
hparams = {'max_document_length': 60,
'embedding_size': 50,
'rnn_cell_size': 128,
'batch_size': 256,
'attention_size': 32,
'attention_depth': 2}
MAX_LABEL = 2
WORDS_FEATURE = 'words'
NUM_STEPS = 300
Explanation: Hyperparameters
Hyperparameters are used to specify various aspects of our model's architecture. In practice, these are often critical to model performance and are carefully tuned using some type of hyperparameter search. For this tutorial, we will choose a reasonable set of hyperparameters and treat them as fixed.
End of explanation
# Initialize the vocabulary processor
vocab_processor = tf.contrib.learn.preprocessing.VocabularyProcessor(hparams['max_document_length'])
def process_inputs(vocab_processor, df, train_label = 'train', test_label = 'test'):
# For simplicity, we call our features x and our outputs y
x_train = df['train'].comment
y_train = df['train'].is_toxic
x_test = df['test'].comment
y_test = df['test'].is_toxic
# Train the vocab_processor from the training set
x_train = vocab_processor.fit_transform(x_train)
# Transform our test set with the vocabulary processor
x_test = vocab_processor.transform(x_test)
# We need these to be np.arrays instead of generators
x_train = np.array(list(x_train))
x_test = np.array(list(x_test))
y_train = np.array(y_train).astype(int)
y_test = np.array(y_test).astype(int)
n_words = len(vocab_processor.vocabulary_)
print('Total words: %d' % n_words)
# Return the transformed data and the number of words
return x_train, y_train, x_test, y_test, n_words
x_train, y_train, x_test, y_test, n_words = process_inputs(vocab_processor, wiki)
Explanation: Step 0: Text Preprocessing
Before we can build a neural network on comment strings, we first have to complete a number of preprocessing steps. In particular, it is important that we "tokenize" the string, splitting it into an array of tokens. In our case, each token will be a word in our sentence and they will be seperated by spaces and punctuation. Many alternative tokenizers exist, some of which use characters as tokens, and others which include punctuation, emojis, or even cleverly handle misspellings.
Once we've tokenized the sentences, each word will be replaced with an integer representative. This will make the embedding (Step 1) much easier.
Happily the tensorflow function VocabularyProcessor takes care of both the tokenization and integer mapping. We only have to give it the max_document_length argument which will determine the length of the output arrays. If sentences are shorter than this length, they will be padded and if they are longer, they will be trimmed. The VocabularyProcessor is then trained on the training set to build the initial vocabulary and map the words to integers.
End of explanation
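# Small illustration (not in the original tutorial): transforming one sentence
# with the vocab_processor fitted above yields a fixed-length array of word
# ids, padded/trimmed to max_document_length, with 0 for out-of-vocabulary words.
example_ids = list(vocab_processor.transform(['you are a great editor']))[0]
print(example_ids.shape, example_ids[:10])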
def embed(features):
word_vectors = tf.contrib.layers.embed_sequence(
features[WORDS_FEATURE],
vocab_size=n_words,
embed_dim=hparams['embedding_size'])
return word_vectors
Explanation: Step 1: Embed
Neural networks at their core are a composition of operators from linear algebra and non-linear activation functions. In order to perform these computations on our input sentences, we must first embed them as a vector of numbers. There are two main approaches to perform this embedding:
Pre-trained: It is often beneficial to initialize our embedding matrix using pre-trained embeddings like Word2Vec or GloVe. These embeddings are trained on a huge corpus of text with a general purpose problem so that they incorporate syntactic and semantic properties of the words being embedded and are amenable to transfer learning on new problems. Once initialized, you can optionally train them further for your specific problem by allowing the embedding matrix in the graph to be a trainable variable in our tensorflow graph.
Random: Alternatively, embeddings can be "trained from scratch" by initializing the embedding matrix randomly and then training it like any other parameter in the tensorflow graph.
In this notebook, we will be using a random initialization. To perform this embedding we use the embed_sequence function from the layers package. This will take our input features, which are the arrays of integers we produced in Step 0, and will randomly initialize a matrix to embed them into. The parameters of this matrix will then be trained with the rest of the graph.
End of explanation
def encode(word_vectors):
# Create a Gated Recurrent Unit cell with hidden size of RNN_SIZE.
# Since the forward and backward RNNs will have different parameters, we instantiate two seperate GRUS.
rnn_fw_cell = tf.contrib.rnn.GRUCell(hparams['rnn_cell_size'])
rnn_bw_cell = tf.contrib.rnn.GRUCell(hparams['rnn_cell_size'])
# Create an unrolled Bi-Directional Recurrent Neural Networks to length of
# max_document_length and passes word_list as inputs for each unit.
outputs, _ = tf.nn.bidirectional_dynamic_rnn(rnn_fw_cell,
rnn_bw_cell,
word_vectors,
dtype=tf.float32,
time_major=False)
return outputs
Explanation: Step 2: Encode
A recurrent neural network is a deep learning architecture that is useful for encoding sequential information like sentences. They are built around a single cell which contains one of several standard neural network architectures (e.g. simple RNN, GRU, or LSTM). We will not focus on the details of the architectures, but at each point in time the cell takes in two inputs and produces two outputs. The inputs are the input token for that step in the sequence and some state from the previous steps in the sequence. The outputs produced are the encoded vectors for the current sequence step and a state to pass on to the next step of the sequence.
Figure 2 shows what this looks like for an unrolled RNN. Each cell (represented by a green square) has two input arrows and two output arrrows. Note that all of the green squares represent the same cell and share parameters. One major advantage of this cell replication is that, at inference time, it allows us to deal with arbitrary length input and not be restricted by the input sizes of our training set.
For our model, we will use a bi-directional RNN. This is simply the concatentation of two RNNs, one which processes the sequence from left to right (the "forward" RNN) and one which process from right to left (the "backward" RNN). By using both directions, we get a stronger encoding as each word can be encoded using the context of its neighbors on boths sides rather than just a single side. For our cells, we use gated recurrent units (GRUs). Figure 3 gives a visual representation of this.
End of explanation
def attend(inputs, attention_size, attention_depth):
inputs = tf.concat(inputs, axis = 2)
inputs_shape = inputs.shape
sequence_length = inputs_shape[1].value
final_layer_size = inputs_shape[2].value
x = tf.reshape(inputs, [-1, final_layer_size])
for _ in range(attention_depth-1):
x = tf.layers.dense(x, attention_size, activation = tf.nn.relu)
x = tf.layers.dense(x, 1, activation = None)
logits = tf.reshape(x, [-1, sequence_length, 1])
alphas = tf.nn.softmax(logits, dim = 1)
output = tf.reduce_sum(inputs * alphas, 1)
return output, alphas
Explanation: Step 3: Attend
There are a number of ways to use the encoded states of a recurrent neural network for prediction. One traditional approach is to simply use the final encoded state of the network, as seen in Figure 2. However, this could lose some useful information encoded in the previous steps of the sequence. In order to keep that information, one could instead use an average of the encoded states outputted by the RNN. There is no reason to believe, though, that all of the encoded states of the RNN are equally valuable. Thus, we arrive at the idea of using a weighted sum of these encoded states to make our prediction.
We will call the weights of this weighted sum "attention weights" as we will see below that they correspond to how important our model thinks each token of the sequence is in making a prediction decision. We compute these attention weights simply by building a small fully connected neural network on top of each encoded state. This network will have a single unit final layer which will correspond to the attention weight we will assign. As for RNNs, the parameters of this network will be the same for each step of the sequence, allowing us to accomodate variable length inputs. Figure 4 shows us what the graph would look like if we applied attention to a uni-directional RNN.
Again, as our model uses a bi-directional RNN, we first concatenate the hidden states from each RNN before computing the attention weights and applying the weighted sum. Figure 5 below visualizes this step.
End of explanation
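# Tiny numpy sketch of the attend step (illustration only, separate from the
# TensorFlow graph built below): the attention weights are a softmax over the
# time steps, and the output is the weighted sum of the per-step hidden states.
import numpy as np
hidden = np.random.rand(5, 2 * 128)      # (sequence_length, 2 * rnn_cell_size)
scores = np.random.rand(5)               # one scalar score per time step
alphas = np.exp(scores) / np.exp(scores).sum()
context = (hidden * alphas[:, None]).sum(axis=0)
print(alphas.sum(), context.shape)       # ~1.0 and (256,)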
def estimator_spec_for_softmax_classification(
logits, labels, mode, alphas):
  """Returns EstimatorSpec instance for softmax classification."""
predicted_classes = tf.argmax(logits, 1)
if mode == tf.estimator.ModeKeys.PREDICT:
return tf.estimator.EstimatorSpec(
mode=mode,
predictions={
'class': predicted_classes,
'prob': tf.nn.softmax(logits),
'attention': alphas
})
onehot_labels = tf.one_hot(labels, MAX_LABEL, 1, 0)
loss = tf.losses.softmax_cross_entropy(
onehot_labels=onehot_labels, logits=logits)
if mode == tf.estimator.ModeKeys.TRAIN:
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train_op = optimizer.minimize(loss,
global_step=tf.train.get_global_step())
return tf.estimator.EstimatorSpec(mode,
loss=loss,
train_op=train_op)
eval_metric_ops = {
'accuracy': tf.metrics.accuracy(
labels=labels, predictions=predicted_classes),
'auc': tf.metrics.auc(
labels=labels, predictions=predicted_classes),
}
return tf.estimator.EstimatorSpec(
mode=mode, loss=loss, eval_metric_ops=eval_metric_ops)
Explanation: Step 4: Predict
To generate a class prediction about whether a comment is toxic or not, the final part of our tensorflow graph takes the weighted average of hidden states generated in the attention step and uses a fully connected layer with a softmax activation function to generate probability scores for each of our prediction classes. While training, the model will use the cross-entropy loss function to train its parameters.
As we will use the estimator framework to train our model, we write an estimator_spec function to specify how our model is trained and what values to return during the prediction stage. We also specify the evaluation metrics of accuracy and auc, which we will use to evaluate our model in Step 7.
End of explanation
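# Tiny numpy sketch (illustration only) of the softmax + cross-entropy used here.
import numpy as np
logits = np.array([2.0, 0.5])                  # scores for [non-toxic, toxic]
probs = np.exp(logits) / np.exp(logits).sum()  # softmax -> class probabilities
loss = -np.log(probs[1])                       # cross-entropy if the true class is 1
print(probs, loss)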
def predict(encoding, labels, mode, alphas):
logits = tf.layers.dense(encoding, MAX_LABEL, activation=None)
return estimator_spec_for_softmax_classification(
logits=logits, labels=labels, mode=mode, alphas=alphas)
Explanation: The predict component of our graph then just takes the output of our attention step, i.e. the weighted average of the bi-RNN hidden layers, and adds one more fully connected layer to compute the logits. These logits are fed into our estimator_spec, which uses a softmax to get the final class probabilities and a softmax_cross_entropy to build a loss function.
End of explanation
def bi_rnn_model(features, labels, mode):
  """RNN model to predict from sequence of words to a class."""
word_vectors = embed(features)
outputs = encode(word_vectors)
encoding, alphas = attend(outputs,
hparams['attention_size'],
hparams['attention_depth'])
return predict(encoding, labels, mode, alphas)
Explanation: Step 5: Complete Model Architecture
We are now ready to put it all together. As you can see from the bi_rnn_model function below, once you have the components for embed, encode, attend, and predict, putting the whole graph together is extremely simple!
End of explanation
current_time = str(int(time.time()))
model_dir = os.path.join('checkpoints', current_time)
classifier = tf.estimator.Estimator(model_fn=bi_rnn_model,
model_dir=model_dir)
Explanation: Step 6: Train Model
We will use the estimator framework to train our model. To define our classifier, we just provide it with the complete model graph (i.e. the bi_rnn_model function) and a directory where the models will be saved.
End of explanation
# Train.
train_input_fn = tf.estimator.inputs.numpy_input_fn(
x={WORDS_FEATURE: x_train},
y=y_train,
batch_size=hparams['batch_size'],
num_epochs=None,
shuffle=True)
Explanation: The estimator framework also requires us to define an input function. This will take the input data and provide it during model training in batches. We will use the provided numpy_input_function, which takes numpy arrays as features and labels. We also specify the batch size and whether we want to shuffle the data between epochs.
End of explanation
classifier.train(input_fn=train_input_fn,
steps=NUM_STEPS)
Explanation: Now, it's finally time to train our model! With estimator, this is as easy as calling the train function and specifying how long we'd like to train for.
End of explanation
# Predict.
test_input_fn = tf.estimator.inputs.numpy_input_fn(
x={WORDS_FEATURE: x_test},
y=y_test,
num_epochs=1,
shuffle=False)
predictions = classifier.predict(input_fn=test_input_fn)
Explanation: Step 7: Predict and Evaluate Model
To evaluate the model, we will use it to predict the values of examples from our test set. Again, we define a numpy_input_fn, for the test data in this case, and then have the classifier run predictions on this input function.
End of explanation
y_predicted = []
alphas_predicted = []
for p in predictions:
y_predicted.append(p['class'])
alphas_predicted.append(p['attention'])
Explanation: These predictions are returned to us as a generator. The code below gives an example of how we can extract the class and attention weights for each prediction.
End of explanation
scores = classifier.evaluate(input_fn=test_input_fn)
print('Accuracy: {0:f}'.format(scores['accuracy']))
print('AUC: {0:f}'.format(scores['auc']))
Explanation: To evaluate our model, we can use the evaluate function provided by estimator to get the accuracy and ROC-AUC scores as we defined them in our estimator_spec.
End of explanation
display = attentionDisplay(vocab_processor, classifier)
display.display_prediction_attention("Fuck off, you idiot.")
display.display_prediction_attention("Thanks for your help editing this.")
display.display_prediction_attention("You're such an asshole. But thanks anyway.")
display.display_prediction_attention("I'm going to shoot you!")
display.display_prediction_attention("Oh shoot. Well alright.")
display.display_prediction_attention("First of all who the fuck died and made you the god.")
display.display_prediction_attention("Gosh darn it!")
display.display_prediction_attention("God damn it!")
display.display_prediction_attention("You're not that smart are you?")
Explanation: Step 8: Display Attention
Now that we have a trained attention-based toxicity model, let's use it to visualize how our model makes its classification decisions. We use the helpful attentionDisplay class from the visualize_attention package. Given any sentence, this class uses our trained classifier to determine whether the sentence is toxic and also returns a representation of the attention weights. In the arrays below, the more red a word is, the more weight the classifier puts on that encoded word. Try it out on some sentences of your own and see what patterns you can find!
Note: If you are viewing this on Github, the colors in the cells won't display properly. We recommend viewing it locally or with nbviewer to see the correct rendering of the attention weights.
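As a rough illustration of the idea (a hypothetical helper sketched here, not the actual visualize_attention implementation), per-token attention weights can be mapped to a colour-coded HTML string:
# Hypothetical sketch: shade each token red in proportion to its normalised attention weight.
def render_attention_html(tokens, alphas):
    max_alpha = max(alphas) if max(alphas) > 0 else 1.0
    spans = []
    for token, alpha in zip(tokens, alphas):
        level = int(255 * (1.0 - alpha / max_alpha))  # 255 -> white background, 0 -> strong red
        spans.append('<span style="background-color: rgb(255,{0},{0})">{1}</span>'.format(level, token))
    return ' '.join(spans)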
End of explanation |
1,721 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Artificial Intelligence & Machine Learning
Final Semester Exam
Mechanism
You are only required to submit this file to the uploader provided at https
Step1: Problem 1.2.a (2 points)
Given $\pi(main) = A$. Formulate $V_{\pi}(main)$ and $V_{\pi}(selesai)$.
$$
V_{\pi}(main) = ...
$$
$$
V_{\pi}(selesai) = ...
$$
Problem 1.2.b (2 points)
Implement the value iteration algorithm from the formulas above to obtain $V_{\pi}(main)$ and $V_{\pi}(selesai)$.
Step2: Problem 1.3 (2 points)
With $\pi(main) = A$, write down the formula for $Q_{\pi}(main, B)$ and determine its value.
Your answer here
Problem 1.4 (1 point)
What is the value of $\pi_{opt}(main)$?
Your answer here
2. Game Playing
Consider the game described below.
Given a threshold $N$, the game starts from the value 1.
The players take turns choosing either to add 2 to the value or to multiply the value by 1.1.
The player who exceeds the threshold loses.
Step3: Problem 2.1 (2 points)
Implement a random policy that picks an action with a probability ratio of 50%
Step4: Problem 2.2 (3 points)
Implement the minimax policy function.
Step5: Problem 2.3 (2 points)
Implement the expectimax policy function to play against the random policy defined in problem 2.1.
Step6: Problem 2.4 (3 points)
State the best policy to play against
Step7: Problem 3.1 (2 points)
Given that
\begin{align}
P(1|H) = 0.2 \\
P(2|H) = 0.4 \\
P(3|H) = 0.4
\end{align}
and
\begin{align}
P(1|C) = 0.5 \\
P(2|C) = 0.4 \\
P(3|C) = 0.1
\end{align}
Define the emission probabilities.
Step8: Problem 3.2 (2 points)
It is known that
\begin{align}
P(Q_t=H|Q_{t-1}=H) &= 0.6 \\
P(Q_t=C|Q_{t-1}=H) &= 0.4 \\
P(Q_t=H|Q_{t-1}=C) &= 0.5 \\
P(Q_t=C|Q_{t-1}=C) &= 0.5 \\
\end{align}
Define the transition probabilities.
Step9: Problem 3.3 (2 points)
It is known that
$$
P(Q_1 = H) = 0.8
$$
Define the initial probability.
Step10: Problem 3.4 (2 points)
What is the log probability of the observation sequence (observed) above?
Step11: Problem 3.5 (2 points)
Tunjukkan urutan $Q$ yang paling mungkin. | Python Code:
import networkx as nx
# Your code here
Explanation: Artificial Intelligence & Machine Learning
Final Semester Exam
Mechanism
You are only required to submit this file to the uploader provided at https://elearning.uai.ac.id/. Rename this file to uas_NIM.ipynb when you submit it.
Lateness: Exam submissions past the stated deadline will not be accepted. Late submission will result in a score of zero for this exam.
Collaboration: You are not allowed to discuss with your classmates. Copying code or text from your classmates is strictly forbidden. Cheating will result in a score of zero for this exam.
Instructions
For your convenience, use Python 3 in this exam. You may (if you find it necessary) import additional modules for this task. However, the modules already available should be enough for your needs. For any code taken from another source, include the URL of the reference if it was taken from the internet!
1. Markov Decision Processes
The game is defined as follows:
For every round $r = 1, 2, ...$
You may choose option A or B.
If you choose A, you receive \$7 and a six-sided die is rolled.
If it shows 1, 2, 3, or 4, the game stops.
If it shows 5 or 6, we continue to the next round.
If you choose B, you receive \$3 and a coin with a number side and a picture side is flipped.
If it lands on the number side, the game stops.
If it lands on the picture side, we continue to the next round.
Your task is to earn as much money as possible.
Problem 1.1 (3 points)
Draw the MDP, i.e. its states, actions, and rewards, using the networkx library. The states you can control are main (playing) and selesai (finished).
Tip: you can make use of the MultiDiGraph class from networkx.
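An illustrative sketch of how such an MDP can be expressed with a MultiDiGraph is shown below; it only demonstrates the networkx API, it is not the graded answer, and the edge attribute names action, prob and reward are chosen freely:
# Sketch: states as nodes, one edge per (action, outcome) carrying its probability and reward.
G_sketch = nx.MultiDiGraph()
G_sketch.add_edge('main', 'main', action='A', prob=2/6, reward=7)
G_sketch.add_edge('main', 'selesai', action='A', prob=4/6, reward=7)
G_sketch.add_edge('main', 'main', action='B', prob=1/2, reward=3)
G_sketch.add_edge('main', 'selesai', action='B', prob=1/2, reward=3)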
End of explanation
# Your code here
Explanation: Problem 1.2.a (2 points)
Given $\pi(main) = A$. Formulate $V_{\pi}(main)$ and $V_{\pi}(selesai)$.
$$
V_{\pi}(main) = ...
$$
$$
V_{\pi}(selesai) = ...
$$
Problem 1.2.b (2 points)
Implement the value iteration algorithm from the formulas above to obtain $V_{\pi}(main)$ and $V_{\pi}(selesai)$.
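Purely as an illustration of the iteration structure (not the official solution; it assumes no discounting and the transition probabilities from the game description above):
# Sketch: fixed-point iteration for the fixed policy pi(main) = A.
V_main, V_selesai = 0.0, 0.0
for _ in range(100):
    V_selesai = 0.0                                      # terminal state, no further reward
    V_main = 7 + (2 / 6) * V_main + (4 / 6) * V_selesai  # option A: $7 now, continue with prob 2/6
print(V_main, V_selesai)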
End of explanation
import numpy as np
class ExplodingGame(object):
def __init__(self, N):
self.N = N
# state = (player, number)
def start(self):
return (+1, 1)
def actions(self, state):
player, number = state
return ['+', '*']
def succ(self, state, action):
player, number = state
if action == '+':
return (-player, number + 2)
elif action == '*':
return (-player, np.ceil(number * 1.1))
assert False
def is_end(self, state):
player, number = state
return number > self.N
def utility(self, state):
player, number = state
assert self.is_end(state)
return player * float('inf')
def player(self, state):
player, number = state
return player
def add_policy(game, state):
action = '+'
print(f"add policy: state {state} => action {action}")
return action
def multiply_policy(game, state):
action = '*'
print(f"multiply policy: state {state} => action {action}")
return action
Explanation: Problem 1.3 (2 points)
With $\pi(main) = A$, write down the formula for $Q_{\pi}(main, B)$ and determine its value.
Your answer here
Problem 1.4 (1 point)
What is the value of $\pi_{opt}(main)$?
Your answer here
2. Game Playing
Consider the game described below.
Given a threshold $N$, the game starts from the value 1.
The players take turns choosing either to add 2 to the value or to multiply the value by 1.1.
The player who exceeds the threshold loses.
End of explanation
def random_policy(game, state):
pass
Explanation: Problem 2.1 (2 points)
Implement a random policy that picks an action with a 50%:50% probability ratio.
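A minimal sketch (with a hypothetical name, not the graded answer) that uses the game's own action list for a 50:50 choice:
import random
def random_policy_sketch(game, state):
    # '+' and '*' are returned by game.actions(state) and are chosen with equal probability
    return random.choice(game.actions(state))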
End of explanation
def minimax_policy(game, state):
pass
Explanation: Problem 2.2 (3 points)
Implement the minimax policy function.
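A compact sketch of the minimax recursion (a hypothetical helper for illustration only, not the graded answer); it relies on the ExplodingGame API defined above (actions, succ, is_end, utility, player):
def minimax_policy_sketch(game, state):
    def value(s):
        if game.is_end(s):
            return game.utility(s)
        vals = [value(game.succ(s, a)) for a in game.actions(s)]
        # player +1 maximizes the utility, player -1 minimizes it
        return max(vals) if game.player(s) == +1 else min(vals)
    best = max if game.player(state) == +1 else min
    return best(game.actions(state), key=lambda a: value(game.succ(state, a)))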
End of explanation
def expectimax_policy(game, state):
pass
# Test case
game = ExplodingGame(N=10)
policies = {
+1: add_policy,
-1: multiply_policy
}
state = game.start()
while not game.is_end(state):
# Who controls this state?
player = game.player(state)
policy = policies[player]
# Ask policy to make a move
action = policy(game, state)
# Advance state
state = game.succ(state, action)
print(f"Utility di akhir permainan {game.utility(state)}")
Explanation: Problem 2.3 (2 points)
Implement the expectimax policy function to play against the random policy defined in problem 2.1.
End of explanation
!pip install pomegranate
from pomegranate import *
observed = [2,3,3,2,3,2,3,2,2,3,1,3,3,1,1,1,2,1,1,1,3,1,2,1,1,1,2,3,3,2,3,2,2]
Explanation: Problem 2.4 (3 points)
State the best policy to play against:
random policy
expectimax policy
minimax policy
Your answer here
3. Bayesian Network
Imagine you are a climatologist working for BMKG in the year 3021 who is studying global warming. You do not know the 2021 weather records for Jakarta, but you have Mr. Ali's diary, which notes how many ice creams Mr. Ali ate each day during the dry season. Your goal is to estimate the day-to-day weather - hot (H) or cool (C). In other words,
Given the observations $O$ (integers representing the number of ice creams Mr. Ali ate on a given day), find the sequence of weather conditions $Q$ for those days.
The variables are defined as follows: $Q \in {H, C}$ and $O \in {1, 2, 3}$. For this part, you are asked to implement the code using the pomegranate library.
This problem is adapted from the paper by Eisner (2002).
Hint: Look back at your assignment 4. Using the pomegranate library is very similar to the implementation in that assignment.
End of explanation
# Your code here
Explanation: Problem 3.1 (2 points)
Given that
\begin{align}
P(1|H) = 0.2 \\
P(2|H) = 0.4 \\
P(3|H) = 0.4
\end{align}
and
\begin{align}
P(1|C) = 0.5 \\
P(2|C) = 0.4 \\
P(3|C) = 0.1
\end{align}
Define the emission probabilities.
End of explanation
# Your code here
Explanation: Problem 3.2 (2 points)
It is known that
\begin{align}
P(Q_t=H|Q_{t-1}=H) &= 0.6 \\
P(Q_t=C|Q_{t-1}=H) &= 0.4 \\
P(Q_t=H|Q_{t-1}=C) &= 0.5 \\
P(Q_t=C|Q_{t-1}=C) &= 0.5 \\
\end{align}
Define the transition probabilities.
End of explanation
# Your code here
Explanation: Problem 3.3 (2 points)
It is known that
$$
P(Q_1 = H) = 0.8
$$
Define the initial probability.
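For orientation only, a sketch of wiring emissions, transitions and start probabilities together, assuming the classic pre-1.0 pomegranate API exposed by the wildcard import above (names such as hmm_sketch are made up here, method names may differ in newer releases, and this is not the graded answer):
# Sketch: two hidden states H (hot) and C (cool) with the probabilities given above.
d_hot = DiscreteDistribution({1: 0.2, 2: 0.4, 3: 0.4})
d_cool = DiscreteDistribution({1: 0.5, 2: 0.4, 3: 0.1})
s_hot, s_cool = State(d_hot, name='H'), State(d_cool, name='C')
hmm_sketch = HiddenMarkovModel()
hmm_sketch.add_states(s_hot, s_cool)
hmm_sketch.add_transition(hmm_sketch.start, s_hot, 0.8)  # P(Q_1 = H)
hmm_sketch.add_transition(hmm_sketch.start, s_cool, 0.2)
hmm_sketch.add_transition(s_hot, s_hot, 0.6)
hmm_sketch.add_transition(s_hot, s_cool, 0.4)
hmm_sketch.add_transition(s_cool, s_hot, 0.5)
hmm_sketch.add_transition(s_cool, s_cool, 0.5)
hmm_sketch.bake()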
End of explanation
# Your code here
Explanation: Problem 3.4 (2 points)
What is the log probability of the observation sequence (observed) above?
End of explanation
# Your code here
Explanation: Problem 3.5 (2 points)
Show the most likely sequence $Q$.
End of explanation |
1,722 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">Regression with Categorical Variables</h1>
Step2: The BirthSmokers Data
Researchers interested in answering the above research question collected the following data (birthsmokers.txt) on a random sample of n = 32 births
Step3: Let's plot how the birth weight is related to the gestation period for the two groups
Step4: Let's also find out the min, max and average birth weights for smokers and non-smokers.
Step5: And the obligatory residual histogram
Step7: Does the effect of the gestation length on mean birth weight depend on whether or not the mother is a smoker? The answer is no! Regardless of whether or not the mother is a smoker, for each additional one-week of gestation, the mean birth weight is predicted to increase by 143 grams. This lack of interaction between the two predictors is exhibited by the parallelism of the two lines.
Does the effect of smoking on mean birth weight depend on the length of gestation? The answer is no! For a fixed length of gestation, the mean birth weight of babies born to smoking mothers is predicted to be 245 grams lower than the mean birth weight of babies born to non-smoking mothers. Again, this lack of interaction between the two predictors is exhibited by the parallelism of the two lines.
When two predictors do not interact, we say that each predictor has an "additive effect" on the response. More formally, a regression model contains additive effects if the response function can be written as a sum of functions of the predictor variables | Python Code:
%pylab inline
pylab.style.use('ggplot')
import numpy as np
import pandas as pd
Explanation: <h1 align="center">Regression with Categorical Variables</h1>
End of explanation
smoking_txt = """Wgt Gest Smoke
2940 38 yes
3130 38 no
2420 36 yes
2450 34 no
2760 39 yes
2440 35 yes
3226 40 no
3301 42 yes
2729 37 no
3410 40 no
2715 36 yes
3095 39 no
3130 39 yes
3244 39 no
2520 35 no
2928 39 yes
3523 41 no
3446 42 yes
2920 38 no
2957 39 yes
3530 42 no
2580 38 yes
3040 37 no
3500 42 yes
3200 41 yes
3322 39 no
3459 40 no
3346 42 yes
2619 35 no
3175 41 yes
2740 38 yes
2841 36 no
"""
from io import StringIO
smoking_df = pd.read_csv(StringIO(smoking_txt), sep='\t')
smoking_df.head()
Explanation: The BirthSmokers Data
Researchers interested in answering the above research question collected the following data (birthsmokers.txt) on a random sample of n = 32 births:
Response (y): birth weight (Weight) in grams of baby
Potential predictor (x1): Smoking status of mother (yes or no)
Potential predictor (x2): length of gestation (Gest) in weeks
The distinguishing feature of this data set is that one of the predictor variables, Smoking, is a qualitative predictor. To be more precise, smoking is a "binary variable" with only two possible values (yes or no). The other predictor variable (Gest) is, of course, quantitative.
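As a small aside (a sketch, not part of the original text), the binary predictor can be made explicit as a 0/1 indicator column, which is roughly what the C() coding in the model formula used below does behind the scenes:
# Sketch: hand-rolled 0/1 indicator for the binary Smoke variable.
smoking_df['smoke_indicator'] = (smoking_df['Smoke'] == 'yes').astype(int)
smoking_df[['Smoke', 'smoke_indicator']].head()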
End of explanation
ax = smoking_df[smoking_df.Smoke == 'yes'].plot(kind='scatter', x='Gest', y='Wgt')
smoking_df[smoking_df.Smoke == 'no'].plot(kind='scatter', color='green', ax=ax, x='Gest', y='Wgt')
ax.legend(['Smokers', 'Non-smokers'], loc='upper left')
Explanation: Let's plot how the birth weight is related to the gestation period for the two groups: smokers and non-smokers.
End of explanation
smoking_df.Wgt.groupby(smoking_df.Smoke).agg({'min': np.min, 'max': np.max, 'mean': np.mean})
import statsmodels.formula.api as sm
result = sm.ols(formula='Wgt ~ C(Smoke) + Gest', data=smoking_df).fit()
result.summary()
Explanation: Let's also find out the min, max and average birth weights for smokers and non-smokers.
End of explanation
result.resid.plot(kind='hist', bins=20)
#smoking_df['Smoke'] = smoking_df['Smoke'].astype('category', values=['no', 'yes']).cat.codes
predictions = result.predict(smoking_df)
smoking_df['fit_values'] = predictions.astype(np.int)
ax = smoking_df[smoking_df.Smoke == 'yes'].plot(kind='scatter', color='blue', x='Gest', y='Wgt')
ax = smoking_df[smoking_df.Smoke == 'no'].plot(kind='scatter', color='green', ax=ax, x='Gest', y='Wgt')
# Also the regression line by group
# smoking_df[smoking_df.Smoke == 'yes'].plot(kind='line', color='blue', x='Gest', y='fit_values')
ax = smoking_df[smoking_df.Smoke == 'yes'].sort_values(by='Gest').plot(
kind='line', ax=ax, color='blue', x='Gest', y='fit_values')
ax = smoking_df[smoking_df.Smoke == 'no'].sort_values(by='Gest').plot(
kind='line', ax=ax, color='green', x='Gest', y='fit_values')
ax.legend(['Smokers', 'Non-smokers'], loc='upper left')
Explanation: And the obligatory residual histogram:
End of explanation
depression_txt = """y age x2 x3 TRT
56 21 1 0 A
41 23 0 1 B
40 30 0 1 B
28 19 0 0 rrrr
55 28 1 0 A
25 23 0 0 C
46 33 0 1 B
71 67 0 0 C
48 42 0 1 B
63 33 1 0 A
52 33 1 0 A
62 56 0 0 C
50 45 0 0 C
45 43 0 1 B
58 38 1 0 A
46 37 0 0 C
58 43 0 1 B
34 27 0 0 C
65 43 1 0 A
55 45 0 1 B
57 48 0 1 B
59 47 0 0 C
64 48 1 0 A
61 53 1 0 A
62 58 0 1 B
36 29 0 0 C
69 53 1 0 A
47 29 0 1 B
73 58 1 0 A
64 66 0 1 B
60 67 0 1 B
62 63 1 0 A
71 59 0 0 C
62 51 0 0 C
70 67 1 0 A
71 63 0 0 C
"""
depression_df = pd.read_csv(StringIO(depression_txt), sep='\t')
depression_df.head()
Explanation: Does the effect of the gestation length on mean birth weight depend on whether or not the mother is a smoker? The answer is no! Regardless of whether or not the mother is a smoker, for each additional one-week of gestation, the mean birth weight is predicted to increase by 143 grams. This lack of interaction between the two predictors is exhibited by the parallelism of the two lines.
Does the effect of smoking on mean birth weight depend on the length of gestation? The answer is no! For a fixed length of gestation, the mean birth weight of babies born to smoking mothers is predicted to be 245 grams lower than the mean birth weight of babies born to non-smoking mothers. Again, this lack of interaction between the two predictors is exhibited by the parallelism of the two lines.
When two predictors do not interact, we say that each predictor has an "additive effect" on the response. More formally, a regression model contains additive effects if the response function can be written as a sum of functions of the predictor variables:
$$y = f_1(x_1) + f_2(x_2) + ... + f_{p-1}(x_{p-1})$$
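As a hedged aside (a sketch, not part of the original analysis), a non-additive model with an interaction term can be fit through the same statsmodels formula interface by replacing + with *, which expands to both main effects plus their product:
# Sketch: allow the gestation slope to differ between smokers and non-smokers.
interaction_result = sm.ols(formula='Wgt ~ C(Smoke) * Gest', data=smoking_df).fit()
interaction_result.summary()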
Treatment for Depressions Data
Some researchers were interested in comparing the effectiveness of three treatments for severe depression. For the sake of simplicity, we denote the three treatments A, B, and C. The researchers collected the following data (depression.txt) on a random sample of n = 36 severely depressed individuals.
y = measure of the effectiveness of the treatment for individual i
x1 = age (in years) of individual i
x2 = 1 if individual i received treatment A and 0, if not
x3 = 1 if individual i received treatment B and 0, if not
End of explanation |
1,723 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Flux sampling
Basic usage
The easiest way to get started with flux sampling is using the sample function in the flux_analysis submodule. sample takes at least two arguments
Step1: By default sample uses the optgp method based on the method presented here as it is suited for larger models and can run in parallel. By default the sampler uses a single process. This can be changed by using the processes argument.
Step2: Alternatively you can also use Artificial Centering Hit-and-Run for sampling by setting the method to achr. achr does not support parallel execution but has good convergence and is almost Markovian.
Step3: In general setting up the sampler is expensive since initial search directions are generated by solving many linear programming problems. Thus, we recommend to generate as many samples as possible in one go. However, this might require finer control over the sampling procedure as described in the following section.
Advanced usage
Sampler objects
The sampling process can be controlled on a lower level by using the sampler classes directly.
Step4: Both sampler classes have standardized interfaces and take some additional arguments, for instance the thinning factor. "Thinning" means only recording samples every n iterations. A higher thinning factor means less correlated samples but also larger computation times. By default the samplers use a thinning factor of 100, which creates roughly uncorrelated samples. If you want fewer samples but better mixing, feel free to increase this parameter. If you want to study convergence for your own model you might want to set it to 1 to obtain all iterates.
Step5: OptGPSampler has an additional processes argument specifying how many processes are used to create parallel sampling chains. This should be in the order of your CPU cores for maximum efficiency. As noted before class initialization can take up to a few minutes due to generation of initial search directions. Sampling on the other hand is quick.
Step6: Sampling and validation
Both samplers have a sample function that generates samples from the initialized object and act like the sample function described above, only that this time it will only accept a single argument, the number of samples. For OptGPSampler the number of samples should be a multiple of the number of processes, otherwise it will be increased to the nearest multiple automatically.
Step7: You can call sample repeatedly and both samplers are optimized to generate large amount of samples without falling into "numerical traps". All sampler objects have a validate function in order to check if a set of points are feasible and give detailed information about feasibility violations in a form of a short code denoting feasibility. Here the short code is a combination of any of the following letters
Step8: And for our generated samples
Step9: Batch sampling
Sampler objects are made for generating billions of samples, however using the sample function might quickly fill up your RAM when working with genome-scale models. Here, the batch method of the sampler objects might come in handy. batch takes two arguments, the number of samples in each batch and the number of batches. This will make sense with a small example.
Let's assume we want to quantify what proportion of our samples will grow. For that we might want to generate 10 batches of 50 samples each and measure what percentage of the individual 100 samples show a growth rate larger than 0.1. Finally, we want to calculate the mean and standard deviation of those individual percentages.
Step10: Adding constraints
Flux sampling will respect additional constraints defined in the model. For instance, we can add a constraint enforcing growth in a similar manner to the section before.
Step11: Note that this is only for demonstration purposes. usually you could set the lower bound of the reaction directly instead of creating a new constraint. | Python Code:
from cobra.test import create_test_model
from cobra.flux_analysis import sample
model = create_test_model("textbook")
s = sample(model, 100)
s.head()
Explanation: Flux sampling
Basic usage
The easiest way to get started with flux sampling is using the sample function in the flux_analysis submodule. sample takes at least two arguments: a cobra model and the number of samples you want to generate.
End of explanation
print("One process:")
%time s = sample(model, 1000)
print("Two processes:")
%time s = sample(model, 1000, processes=2)
Explanation: By default sample uses the optgp method based on the method presented here as it is suited for larger models and can run in parallel. By default the sampler uses a single process. This can be changed by using the processes argument.
End of explanation
s = sample(model, 100, method="achr")
Explanation: Alternatively you can also use Artificial Centering Hit-and-Run for sampling by setting the method to achr. achr does not support parallel execution but has good convergence and is almost Markovian.
End of explanation
from cobra.flux_analysis.sampling import OptGPSampler, ACHRSampler
Explanation: In general setting up the sampler is expensive since initial search directions are generated by solving many linear programming problems. Thus, we recommend to generate as many samples as possible in one go. However, this might require finer control over the sampling procedure as described in the following section.
Advanced usage
Sampler objects
The sampling process can be controlled on a lower level by using the sampler classes directly.
End of explanation
achr = ACHRSampler(model, thinning=10)
Explanation: Both sampler classes have standardized interfaces and take some additional arguments, for instance the thinning factor. "Thinning" means only recording samples every n iterations. A higher thinning factor means less correlated samples but also larger computation times. By default the samplers use a thinning factor of 100, which creates roughly uncorrelated samples. If you want fewer samples but better mixing, feel free to increase this parameter. If you want to study convergence for your own model you might want to set it to 1 to obtain all iterates.
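For a convergence study, a minimal sketch (reusing the constructor shown above) is simply:
# Keep every iterate instead of every 100th one; slower, but shows the chain's mixing.
achr_unthinned = ACHRSampler(model, thinning=1)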
End of explanation
optgp = OptGPSampler(model, processes=4)
Explanation: OptGPSampler has an additional processes argument specifying how many processes are used to create parallel sampling chains. This should be in the order of your CPU cores for maximum efficiency. As noted before class initialization can take up to a few minutes due to generation of initial search directions. Sampling on the other hand is quick.
End of explanation
s1 = achr.sample(100)
s2 = optgp.sample(100)
Explanation: Sampling and validation
Both samplers have a sample function that generates samples from the initialized object and act like the sample function described above, only that this time it will only accept a single argument, the number of samples. For OptGPSampler the number of samples should be a multiple of the number of processes, otherwise it will be increased to the nearest multiple automatically.
End of explanation
import numpy as np
bad = np.random.uniform(-1000, 1000, size=len(model.reactions))
achr.validate(np.atleast_2d(bad))
Explanation: You can call sample repeatedly and both samplers are optimized to generate large amount of samples without falling into "numerical traps". All sampler objects have a validate function in order to check if a set of points are feasible and give detailed information about feasibility violations in a form of a short code denoting feasibility. Here the short code is a combination of any of the following letters:
"v" - valid point
"l" - lower bound violation
"u" - upper bound violation
"e" - equality violation (meaning the point is not a steady state)
For instance for a random flux distribution (should not be feasible):
End of explanation
achr.validate(s1)
Explanation: And for our generated samples:
End of explanation
counts = [np.mean(s.Biomass_Ecoli_core > 0.1) for s in optgp.batch(100, 10)]
print("Usually {:.2f}% +- {:.2f}% grow...".format(
np.mean(counts) * 100.0, np.std(counts) * 100.0))
Explanation: Batch sampling
Sampler objects are made for generating billions of samples, however using the sample function might quickly fill up your RAM when working with genome-scale models. Here, the batch method of the sampler objects might come in handy. batch takes two arguments, the number of samples in each batch and the number of batches. This will make sense with a small example.
Let's assume we want to quantify what proportion of our samples will grow. For that we might want to generate 10 batches of 50 samples each and measure what percentage of the individual 100 samples show a growth rate larger than 0.1. Finally, we want to calculate the mean and standard deviation of those individual percentages.
End of explanation
co = model.problem.Constraint(model.reactions.Biomass_Ecoli_core.flux_expression, lb=0.1)
model.add_cons_vars([co])
Explanation: Adding constraints
Flux sampling will respect additional constraints defined in the model. For instance, we can add a constraint enforcing growth in a similar manner to the section before.
End of explanation
s = sample(model, 10)
print(s.Biomass_Ecoli_core)
Explanation: Note that this is only for demonstration purposes. Usually you could set the lower bound of the reaction directly instead of creating a new constraint.
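A minimal sketch of that more direct route (using the standard cobrapy reaction bound attribute):
# Sketch: bound the reaction itself instead of adding a separate constraint.
model.reactions.Biomass_Ecoli_core.lower_bound = 0.1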
End of explanation |
1,724 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Opening and previewing
This uses the tiny excel spreadsheet example1.xls. It is small enough to preview inline in this notebook. But for bigger spreadsheet tables you will want to open them up in a separate window.
Step1: Selecting cell bags
A table is also a "bag of cells", which just so happens to be a set of all the cells in the table.
A "bag of cells" is like a Python set (and looks like one when you print it), but it has extra selection functions that help you navigate around the table.
We will learn these as we go along, but you can see the full list on the tutorial_reference notebook.
Step2: Note
Step3: Observations and dimensions
Let's get on with some actual work. In our terminology, an "Observation" is a numerical measure (eg anything in the 3x4 array of numbers in the example table), and a "Dimension" is one of the headings.
Both are made up of a bag of cells, however a Dimension also needs to know how to "look up" from the Observation to its dimensional value.
Step4: Note the value of h1.cellvalobs(ob) is actually a pair composed of the heading cell and its value. This is is because we can over-ride its output value without actually rewriting the original table, as we shall see.
Step5: Conversion segments and output
A ConversionSegment is a collection of Dimensions with an Observation set that is going to be processed and output as a table all at once.
You can preview them in HTML (just like the cell bags and dimensions), only this time the observation cells can be clicked on interactively to show how they look up.
Step6: WDA Technical CSV
The ONS uses their own data system for publishing their time-series data known as WDA.
If you need to output to it, then this next section is for you.
The function which outputs to the WDA format is writetechnicalCSV(filename, [conversionsegments]). The format is very verbose because it repeats each dimension name and its value twice in each row, and every row begins with the following list of column entries, whether or not they exist.
observation, data_marking, statistical_unit_eng, statistical_unit_cym, measure_type_eng, measure_type_cym, observation_type, obs_type_value, unit_multiplier, unit_of_measure_eng, unit_of_measure_cym, confidentuality, geographic_area
The writetechnicalCSV() function accepts a single conversion segment, a list of conversion segments, or equivalently a pandas dataframe.
Step7: Note If you were wondering what the processTIMEUNIT=False was all about in the ConversionSegment constructor, it's a feature to help the WDA output automatically set the TIMEUNIT column according to whether it should be Year, Month, or Quarter.
You will note that the TIME column above is 2014.0 when it really should be 2014 with the TIMEUNIT set to Year.
By setting it to True the ConversionSegment object will identify the timeunit from the value of the TIME column and then force its format to conform. | Python Code:
# Load in the functions
from databaker.framework import *
# Load the spreadsheet
tabs = loadxlstabs("example1.xls")
# Select the first table
tab = tabs[0]
print("The unordered bag of cells for this table looks like:")
print(tab)
Explanation: Opening and previewing
This uses the tiny excel spreadsheet example1.xls. It is small enough to preview inline in this notebook. But for bigger spreadsheet tables you will want to open them up in a separate window.
End of explanation
# Preview the table as a table inline
savepreviewhtml(tab)
bb = tab.is_bold()
print("The cells with bold font are", bb)
print("The", len(bb), "cells immediately below these bold font cells are", bb.shift(DOWN))
cc = tab.filter("Cars")
print("The single cell with the text 'Cars' is", cc)
cc.assert_one() # proves there is only one cell in this bag
print("Everything in the column below the 'Cars' cell is", cc.fill(DOWN))
hcc = tab.filter("Cars").expand(DOWN)
print("If you wanted to include the 'Cars' heading, then use expand", hcc)
print("You can print the cells in row-column order if you don't mind unfriendly code")
shcc = sorted(hcc.unordered_cells, key=lambda Cell:(Cell.y, Cell.x))
print(shcc)
print("It can be easier to see the set of cells coloured within the table")
savepreviewhtml(hcc)
Explanation: Selecting cell bags
A table is also a "bag of cells", which just so happens to be a set of all the cells in the table.
A "bag of cells" is like a Python set (and looks like one when you print it), but it has extra selection functions that help you navigate around the table.
We will learn these as we go along, but you can see the full list on the tutorial_reference notebook.
End of explanation
"All the cells that have an 'o' in them:", tab.regex(".*?o")
Explanation: Note: As you work through this tutorial, do please feel free to temporarily insert new Jupyter-Cells in order to give yourself a place to experiment with any of the functions that are available. (Remember, the value of the last line in a Jupyter-Cell is always printed out -- in addition to any earlier print-statements.)
End of explanation
# We get the array of observations by selecting its corner and expanding down and to the right
obs = tab.excel_ref('B4').expand(DOWN).expand(RIGHT)
savepreviewhtml(obs)
# the two main headings are in a row and a column
r1 = tab.excel_ref('B3').expand(RIGHT)
r2 = tab.excel_ref('A3').fill(DOWN)
# here we pass in a list containing two cell bags and get two colours
savepreviewhtml([r1, r2])
# HDim is made from a bag of cells, a name, and an instruction on how to look it up
# from an observation cell.
h1 = HDim(r1, "Vehicles", DIRECTLY, ABOVE)
# Here is an example cell
cc = tab.excel_ref('C5')
# You can preview a dimension as well as just a cell bag
savepreviewhtml([h1, cc])
# !!! This is the important look-up stage from a cell into a dimension
print("Cell", cc, "matches", h1.cellvalobs(cc), "in dimension", h1.label)
# You can start to see through to the final result of all this work when you
# print out the lookup values for every observation in the table at once.
for ob in obs:
print("Obs", ob, "maps to", h1.cellvalobs(ob))
Explanation: Observations and dimensions
Let's get on with some actual work. In our terminology, an "Observation" is a numerical measure (eg anything in the 3x4 array of numbers in the example table), and a "Dimension" is one of the headings.
Both are made up of a bag of cells, however a Dimension also needs to know how to "look up" from the Observation to its dimensional value.
End of explanation
# You can change an output value like this:
h1.AddCellValueOverride("Cars", "Horses")
for ob in obs:
print("Obs", ob, "maps to", h1.cellvalobs(ob))
# Alternatively, you can override by the reference to a single cell to a value
# (This will work even if the cell C3 is empty, which helps with filling in blank headings)
h1.AddCellValueOverride(tab.excel_ref('C3'), "Submarines")
for ob in obs:
print("Obs", ob, "maps to", h1.cellvalobs(ob))
# You can override the header value for an individual observation element.
b4cell = tab.excel_ref('B4')
h1.AddCellValueOverride(b4cell, "Clouds")
for ob in obs:
print("Obs", ob, "maps to", h1.cellvalobs(ob))
# The preview table shows how things have changed
savepreviewhtml([h1, obs])
wob = tab.excel_ref('A1')
print("Wrong-Obs", wob, "maps to", h1.cellvalobs(wob), " <--- ie Nothing")
h1.AddCellValueOverride(None, "Who knows?")
print("After giving a default value Wrong-Obs", wob, "now maps to", h1.cellvalobs(wob))
# The default even works if the cell bag set is empty. In which case we have a special
# constant case that maps every observation to the same value
h3 = HDimConst("Category", "Beatles")
for ob in obs:
print("Obs", ob, "maps to", h3.cellvalobs(ob))
Explanation: Note the value of h1.cellvalobs(ob) is actually a pair composed of the heading cell and its value. This is because we can over-ride its output value without actually rewriting the original table, as we shall see.
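Because the return value is a plain pair, it can be unpacked directly; a small sketch using the h1 and cc objects defined above:
# cellvalobs returns (heading_cell, output_value); unpack to use either part.
header_cell, header_value = h1.cellvalobs(cc)
print(header_cell, header_value)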
End of explanation
dimensions = [
HDim(tab.excel_ref('B1'), TIME, CLOSEST, ABOVE),
HDim(r1, "Vehicles", DIRECTLY, ABOVE),
HDim(r2, "Name", DIRECTLY, LEFT),
HDimConst("Category", "Beatles")
]
c1 = ConversionSegment(obs, dimensions, processTIMEUNIT=False)
savepreviewhtml(c1)
# If the table is too big, we can preview it in another file is openable in another browser window.
# (It's very useful if you are using two computer screens.)
savepreviewhtml(c1, "preview.html", verbose=False)
print("Looking up all the observations against all the dimensions and print them out")
for ob in c1.segment:
print(c1.lookupobs(ob))
df = c1.topandas()
df
Explanation: Conversion segments and output
A ConversionSegment is a collection of Dimensions with an Observation set that is going to be processed and output as a table all at once.
You can preview them in HTML (just like the cell bags and dimensions), only this time the observation cells can be clicked on interactively to show how they look up.
End of explanation
print(writetechnicalCSV(None, c1))
# This is how to write to a file
writetechnicalCSV("exampleWDA.csv", c1)
# We can read this file back in to a list of pandas dataframes
dfs = readtechnicalCSV("exampleWDA.csv")
print(dfs[0])
Explanation: WDA Technical CSV
The ONS uses their own data system for publishing their time-series data known as WDA.
If you need to output to it, then this next section is for you.
The function which outputs to the WDA format is writetechnicalCSV(filename, [conversionsegments]). The format is very verbose because it repeats each dimension name and its value twice in each row, and every row begins with the following list of column entries, whether or not they exist.
observation, data_marking, statistical_unit_eng, statistical_unit_cym, measure_type_eng, measure_type_cym, observation_type, obs_type_value, unit_multiplier, unit_of_measure_eng, unit_of_measure_cym, confidentuality, geographic_area
The writetechnicalCSV() function accepts a single conversion segment, a list of conversion segments, or equivalently a pandas dataframe.
End of explanation
# See that the `2014` no longer ends with `.0`
c1 = ConversionSegment(obs, dimensions, processTIMEUNIT=True)
c1.topandas()
Explanation: Note If you were wondering what the processTIMEUNIT=False was all about in the ConversionSegment constructor, it's a feature to help the WDA output automatically set the TIMEUNIT column according to whether it should be Year, Month, or Quarter.
You will note that the TIME column above is 2014.0 when it really should be 2014 with the TIMEUNIT set to Year.
By setting it to True the ConversionSegment object will identify the timeunit from the value of the TIME column and then force its format to conform.
End of explanation |
1,725 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python for Bioinformatics
This Jupyter notebook is intended to be used alongside the book Python for Bioinformatics
Chapter 5
Step1: Once the previous commands are executed, you can open the file
Step2: Listing 5.1
Step3: Listing 5.2
Step4: Listing 5.3
Step5: Listing 5.4
Step6: Listing 5.5
Step7: Listing 5.6
Step8: Listing 5.7
Step9: Listing 5.8
Step10: Listing 5.9
Step11: Listing 5.10 | Python Code:
!curl https://raw.githubusercontent.com/Serulab/Py4Bio/master/samples/samples.tar.bz2 -o samples.tar.bz2
!mkdir samples
!tar xvfj samples.tar.bz2 -C samples
Explanation: Python for Bioinformatics
This Jupyter notebook is intended to be used alongside the book Python for Bioinformatics
Chapter 5: Handling Files
Note: Before opening the file, this file should be accessible from this Jupyter notebook. In order to do so, the following commands will download these files from Github and extract them into a directory called samples.
End of explanation
file_handle = open('samples/readme.txt', 'r')
file_handle
file_handle = open('samples/seqA.fas', 'r')
file_handle.read()
file_handle = open('samples/readme.txt', 'r')
# do something with the file
file_handle.read()
file_handle.close()
with open('samples/readme.txt', 'r') as file_handle:
print(file_handle.read())
with open('samples/seqA.fas', 'r') as file_handle:
print(file_handle.read())
Explanation: Once the previous commands are executed, you can open the file
End of explanation
with open('samples/seqA.fas') as fh:
my_file = fh.read()
name = my_file.split('\n')[0][1:]
sequence = ''.join(my_file.split('\n')[1:])
print('The name is : {0}'.format(name))
print('The sequence is: {0}'.format(sequence))
Explanation: Listing 5.1: firstread.py: First try to read a FASTA file
End of explanation
sequence = ' '
with open('samples/seqA.fas') as fh:
name = fh.readline()[1:-1]
for line in fh:
sequence += line.replace('\n','')
print('The name is : {0}'.format(name))
print('The sequence is: {0}'.format(sequence))
Explanation: Listing 5.2: fastaRead.py: Reads FASTA file, sequentially
End of explanation
sequence = ''
charge = -0.002
aa_charge = {'C':-.045, 'D':-.999, 'E':-.998, 'H':.091,
'K':1, 'R':1, 'Y':-.001}
with open('samples/prot.fas') as fh:
fh.readline()
for line in fh:
sequence += line[:-1].upper()
for aa in sequence:
charge += aa_charge.get(aa,0)
print(charge)
fh = open('samples/newfile.txt','w')
fh = open('samples/error.log','a')
Explanation: Listing 5.3: netchargefile.py: Calculate the net charge, reading the input from
a file
End of explanation
with open('samples/numbers.txt','w') as fh:
fh.write('1\n2\n3\n4\n5')
Explanation: Listing 5.4: Newfile.py: Write numbers to a file.
End of explanation
sequence = ' '
charge = -0.002
aa_charge = {'C':-.045, 'D':-.999, 'E':-.998, 'H':.091,
'K':1, 'R':1, 'Y':-.001}
with open('samples/prot.fas') as fh:
next(fh)
for line in fh:
sequence += line[:-1].upper()
for aa in sequence:
charge += aa_charge.get(aa, 0)
with open('samples/out.txt','w') as file_out:
file_out.write(str(charge))
Explanation: Listing 5.5: nettofile.py Net charge calculation, saving results in a file
End of explanation
total_len = 0
with open('samples/B1.csv') as fh:
next(fh)
for n, line in enumerate(fh):
data = line.split(',')
total_len += int(data[1])
print(total_len/(n+1))
Explanation: Listing 5.6: csvwocsv.py: Reading data from a CSV file
End of explanation
import csv
total_len=0
lines = csv.reader(open('samples/B1.csv'))
next(lines)
for n, line in enumerate(lines):
total_len += int(line[1])
print(total_len/(n+1))
data = list(csv.reader(open('samples/B1.csv')))
data[0][2]
data[1][1]
data[1][2]
data[3][0]
rows = csv.reader(open('/etc/passwd'), delimiter=':')
rows = csv.reader(open('samples/data.csv'), dialect='excel')
dialect = csv.Sniffer().sniff(open('samples/data.csv').read())
rows = csv.reader(open('samples/data.csv'), dialect=dialect)
print(next(rows))
print(next(rows))
Explanation: Listing 5.7: csv1.py: Reading data from a CSV file, using csv module
End of explanation
import xlrd
iedb = {}
book = xlrd.open_workbook('samples/sampledata.xlsx')
sh = book.sheet_by_index(0)
for row_index in range(1, sh.nrows): #skips fist line.
iedb[int(sh.cell_value(rowx=row_index, colx=0))] = \
sh.cell_value(rowx=row_index, colx=2)
print(iedb)
Explanation: Listing 5.8: excel1.py: Reading an xlsx file with xlrd
End of explanation
import xlwt
list1 = [1,2,3,4,5]
list2 = [234,267,281,301,331]
wb = xlwt.Workbook()
ws = wb.add_sheet('First sheet')
ws.write(0,0,'Column A')
ws.write(0,1,'Column B')
i = 1
for x,y in zip(list1,list2): #Walk two list at the same time.
ws.write(i,0,x) # Row, Column, Data.
ws.write(i,1,y)
i += 1
wb.save('mynewfile.xls')
Explanation: Listing 5.9: excel2.py: Write an XLS file with xlwt
End of explanation
import pickle
sp_dict = {'one':'uno', 'two':'dos', 'three':'tres'}
with open('spdict.data', 'wb') as fh:
pickle.dump(sp_dict, fh)
import pickle
pickle.load(open('spdict.data','rb'))
{'one':'uno', 'two':'dos', 'three':'tres'}
Explanation: Listing 5.10: picklesample.py: Basic pickle sample
End of explanation |
1,726 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A bit of madness
checking whether attention models and other models work
Step2: Helper functions
Step3: Plots
Step4: Plain DecisionTrain + inclusive
Step5: For comparison
filtering the training sample
Step6: A longer DecisionTrain
Step7: If we simply reweight the training sample
that is, renormalize the track weights within each event; we get a substantial gain when these weights are used at prediction time.
Using the weights during training, however, is questionable: with a small number of events it works, with a large number it does not
Step8: Let's look at the quality as a function of the number of tracks
Step9: Let's try to distinguish the correctly predicted tracks.
Step10: An additional check that the weight correction is valid
we test whether a linear correction of the weight works.
Yes it does, and the shift term is important, i.e. a normalization of order $e^2$ is present
Step11: Attention (went into the presentation)
a check that the joint training of the attention model and the classifier works.
Step12: A small comparison of feature importances for the attention and classifier models
Step13: Comparison of models (for the presentation)
Step14: neural networks
Step15: For comparison we take a simple MLP from keras
84 sec/ iteration
Step16: comparison of ROC AUCs
Step17: Real data
Step18: A mathematical model of further actions
GroupLogLoss and DropoutLoss give a gain at all sample sizes, but they did not go into the presentation
$$ d(B+) = \sum_{\text{track}} d(B+ | \text{track is tagging}) 1_\text{track is tagging} $$
Step19: Idea
Step20: A check on AdaLoss
maybe it just works as is?
Step21: GroupLogLoss
needs to be compared with ExpLoss | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy
import root_numpy
# import pandas - no pandas today
from astropy.table import Table
from sklearn.metrics import roc_auc_score
from scipy.special import logit
from decisiontrain import DecisionTrainClassifier
from collections import OrderedDict
# theano imports
import theano
from theano import tensor as T
from theano.tensor.nnet import softplus
from theano.tensor.extra_ops import bincount
import sys
sys.path.insert(0, '../')
from folding_group import FoldingGroupClassifier
features = [
# track itself
'eta', 'partPt', 'partP',
# track and B
'cos_diff_phi', 'proj', 'diff_eta', 'ptB', 'R_separation', 'proj_T', 'proj_T2',
# PID
'PIDNNe', 'PIDNNk', 'PIDNNm', 'ghostProb',
# IP
'IP', 'IPerr', 'IPs', 'IPPU',
# Other
'veloch', 'partlcs', 'EOverP',
# deleted as probably inappropriate:
# 'phi',
# 'diff_pt', 'nnkrec',
# 'max_PID_mu_e', 'max_PID_mu_k', 'sum_PID_k_e', 'sum_PID_mu_e', 'max_PID_k_e', 'sum_PID_mu_k',
]
data = Table(root_numpy.root2array('./Bcharged_MC.root', stop=20000000))
def preprocess(data):
data['label'] = (data['signB'] * data['signTrack']) > 0
data['cos_diff_phi'] = numpy.cos(data['diff_phi'])
data['diff_pt'] = data['ptB'] - data['partPt']
data['R_separation'] = numpy.sqrt(data['diff_eta'] ** 2 + (1 - data['cos_diff_phi']) ** 2)
# projection in transverse plane
data['proj_T'] = data['cos_diff_phi'] * data['partPt']
data['proj_T2'] = data['cos_diff_phi'] * data['partPt'] * data['ptB']
data = data[data['ghostProb'] < 0.4]
data = data[numpy.isfinite(data['IPs'])]
data.rename_column('N_sig_sw', 'sweight')
# for real data, weight is also added
data = data[features + ['event_id', 'label', 'signTrack', 'signB', 'sweight']]
preprocess(data)
_, data['event_id'] = numpy.unique(data['event_id'], return_inverse=True)
N = 10 * 10 ** 6
check_data = data[N:].copy()
data = data[:N].copy()
_, check_data['event_id'] = numpy.unique(check_data['event_id'], return_inverse=True)
_, data['event_id'] = numpy.unique(data['event_id'], return_inverse=True)
Explanation: A bit of madness
checking whether attention models and other models work
End of explanation
def compute_weights(data, attention):
"""Weights are normalized over events. Higher convenience - higher weights."""
assert len(numpy.shape(attention)) == 1
weights = numpy.exp(attention)
sum_weights = numpy.bincount(data['event_id'], weights=weights)
return weights / (sum_weights[data['event_id']] + 1)
def compute_simple_auc(data, track_proba):
assert track_proba.shape == (len(data), 2)
event_predictions = numpy.bincount(
data['event_id'], weights=logit(track_proba[:, 1]) * data['signTrack'])
B_signs = data['signB'].group_by(data['event_id']).groups.aggregate(numpy.mean)
return roc_auc_score(B_signs, event_predictions)
def compute_auc_with_attention(data, track_proba, track_attention):
assert track_proba.shape == (len(data), 2)
assert len(track_attention) == len(data)
tracks_weights = compute_weights(data, track_attention)
event_predictions = numpy.bincount(
data['event_id'], weights=logit(track_proba[:, 1]) * data['signTrack'] * tracks_weights)
B_signs = data['signB'].group_by(data['event_id']).groups.aggregate(numpy.mean)
return roc_auc_score(B_signs, event_predictions)
Explanation: Helper functions
End of explanation
plt.figure(figsize=[15, 8])
plt.hist(numpy.bincount(data['event_id']), range=[0.5, 70.5], bins=70, normed=True);
plt.xlim(0, 71)
plt.xlabel('n_tracks in event', fontsize=20)
plt.ylabel('fraction of events', fontsize=20)
plt.xticks(fontsize=20)
plt.savefig('./n_tracks.png', bbox_inches='tight')
base_clf = DecisionTrainClassifier(n_estimators=1000, learning_rate=0.03, n_threads=len(features),
train_features=features, max_features=0.9)
plt.hist(numpy.bincount(data['event_id']), bins=61, range=(0, 60), alpha=0.5, normed=True)
plt.hist(numpy.bincount(check_data['event_id']), bins=61, range=(0, 60), alpha=0.5, normed=True);
Explanation: Plots
End of explanation
%%time
dt = FoldingGroupClassifier(base_clf, n_folds=3, group_feature='event_id')
_ = dt.fit(data[features + ['event_id']].to_pandas(), data['label'])
# raw quality
print compute_simple_auc(check_data, dt.predict_proba(check_data.to_pandas()))
print compute_auc_with_attention(check_data,
track_proba=dt.predict_proba(check_data.to_pandas()),
track_attention=numpy.zeros(len(check_data)))
# raw quality
print compute_simple_auc(data, dt.predict_proba(data.to_pandas()))
print compute_auc_with_attention(data,
track_proba=dt.predict_proba(data.to_pandas()),
track_attention=numpy.zeros(len(data)))
Explanation: Plain DecisionTrain + inclusive
End of explanation
%%time
_n_tracks = numpy.bincount(data['event_id'])[data['event_id']]
_weights = (_n_tracks > 5) & (_n_tracks < 40)
dt_on_filtered = FoldingGroupClassifier(base_clf, n_folds=2, group_feature='event_id')
_ = dt_on_filtered.fit(data[features + ['event_id']].to_pandas(), data['label'], sample_weight=_weights)
# on filtered dataset
compute_auc_with_attention(data,
track_proba=dt_on_filtered.predict_proba(data.to_pandas()),
track_attention=numpy.zeros(len(data)) - 2)
sorted(zip(dt.estimators[0].feature_importances_ + dt.estimators[1].feature_importances_, features))
Explanation: For comparison
filtering the training sample
End of explanation
long_dt = FoldingGroupClassifier(
DecisionTrainClassifier(n_estimators=3000, learning_rate=0.02, max_features=0.9,
n_threads=len(features), train_features=features),
n_folds=2, group_feature='event_id')
_ = long_dt.fit(data[features + ['event_id']].to_pandas(),
data['label'],
sample_weight=compute_weights(data, numpy.zeros(len(data)) - 2.))
for i, p in enumerate(long_dt.staged_predict_proba(data.to_pandas()), 1):
if i % 5 == 0:
print compute_auc_with_attention(data, track_proba=p, track_attention=numpy.zeros(len(data)) - 2.)
Explanation: A longer DecisionTrain
End of explanation
weights = compute_weights(data, attention=numpy.zeros(len(data)))
dt_simpleweights = FoldingGroupClassifier(base_clf, n_folds=2, group_feature='event_id')
dt_simpleweights.fit(data[features + ['event_id']].to_pandas(), data['label'], sample_weight=weights);
compute_auc_with_attention(data,
track_proba=dt_simpleweights.predict_proba(data.to_pandas()),
track_attention=numpy.zeros(len(data)))
Explanation: If we simply reweight the training sample
that is, renormalize the track weights within each event; we get a substantial gain when these weights
are used at prediction time.
Using the weights during training, however, is questionable: with a small number of events it works, with a large number it does not
End of explanation
n_tracks = numpy.bincount(data['event_id'])
plt.plot(numpy.bincount(n_tracks) * numpy.arange(max(n_tracks) + 1))
plt.xlabel('n_tracks in event')
plt.ylabel('n_tracks in total')
# plt.plot(* zip(*(
# [i, roc_auc_score(B_signs[n_tracks == i], predictions[n_tracks == i])]
# for i in range(2, 60)
# )))
Explanation: Let's look at the quality as a function of the number of tracks
End of explanation
# correctness = logit(dt.predict_proba(data.to_pandas())[:, 1]) * (2 * data['label'] - 1)
# for percentile in [50, 60, 70, 80]:
# dt_attention = FoldingGroupClassifier(base_clf, n_folds=2, group_feature='event_id')
# dt_attention.fit(data[features + ['event_id']].to_pandas(),
# correctness > numpy.percentile(correctness, percentile))
# attention = logit(dt_attention.predict_proba(data[features + ['event_id']].to_pandas())[:, 1])
# if percentile == 70:
# stable_attention = attention.copy()
# attention_weights = compute_weights(data, attention)
# dt_classifier = FoldingGroupClassifier(base_clf, n_folds=2, group_feature='event_id')
# dt_classifier.fit(data[features + ['event_id']].to_pandas(),
# data['label'], sample_weight=attention_weights)
# print percentile, compute_auc_with_attention(data, dt_classifier.predict_proba(data.to_pandas()), attention)
Explanation: Let's try to distinguish the correctly predicted tracks.
End of explanation
_proba = dt.predict_proba(data.to_pandas())
for alpha in [-3, -2, -1, 0]:
for beta in numpy.linspace(0.5, 1.2, 5):
print alpha, '\t', beta, '\t', compute_auc_with_attention(data, _proba, alpha + beta * stable_attention)
Explanation: ะะพะฟะพะปะฝะธัะตะปัะฝะฐั ะฟัะพะฒะตัะบะฐ ะฒะตัะฝะพััะธ ะบะพััะตะบัะธัะพะฒะบะธ ะฒะตัะฐ
ะฟัะพะฑัะตะผ, ัะฐะฑะพัะฐะตั ะปะธ ะปะธะฝะตะนะฝะฐั ะบะพััะตะบัะธัะพะฒะบะฐ ะฒะตัะฐ.
ะะฐ, ะฟัะธ ััะพะผ ัะดะฒะธะณ ัะฒะปัะตััั ะฒะฐะถะฝัะผ, ัะพ ะตััั ะฝะฐะปะธัะธะต ะฝะพัะผะฐะปะธะทะฐัะธะธ ะฟะพััะดะบะฐ $e^2$
End of explanation
def train_on_data(data):
# lazy start
_attention = numpy.zeros(len(data))
_correctness = numpy.zeros(len(data))
for iteration in range(3):
dt_classifier = FoldingGroupClassifier(base_clf, n_folds=3, group_feature='event_id', random_state=iteration * 10)
dt_classifier.fit(data[features + ['event_id']].to_pandas(),
data['label'], sample_weight=compute_weights(data, _attention))
_correctness = logit(dt_classifier.predict_proba(data.to_pandas())[:, 1]) * (2 * data['label'] - 1)
# print compute_auc_with_attention(data, dt_classifier.predict_proba(data.to_pandas()), _attention)
dt_attention = FoldingGroupClassifier(base_clf, n_folds=3, group_feature='event_id', random_state=3 + iteration * 1222)
dt_attention.fit(data[features + ['event_id']].to_pandas(),
_correctness > numpy.percentile(_correctness, 70))
_attention = logit(dt_attention.predict_proba(data.to_pandas())[:, 1])
print compute_auc_with_attention(data, dt_classifier.predict_proba(data.to_pandas()), _attention)
return dt_classifier, dt_attention
models = {}
assert len(data) >= 10 * 10 ** 6
for train_size in [100000, 300000, 10 ** 6, 3 * 10 ** 6, 10 ** 7]: # , 1000000, 3000000
models[train_size] = train_on_data(data[:train_size])
models_simple_dt = OrderedDict()
for train_size in [100000, 300000, 10 ** 6, 3 * 10 ** 6, 10 ** 7]: # , 1000000, 3000000
_dt = FoldingGroupClassifier(base_clf, n_folds=3, group_feature='event_id')
_dt.fit(data[features + ['event_id']][:train_size].to_pandas(), data['label'][:train_size])
models_simple_dt[train_size] = _dt
# import cPickle
# with open('./attention_models.pkl', 'w') as f:
# cPickle.dump([models, models_simple_dt], f, protocol=2)
import cPickle
with open('./attention_models.pkl', 'r') as f:
models, models_simple_dt = cPickle.load(f)
def compute_auc_with_attention_and_error(data, track_proba, track_attention):
assert track_proba.shape == (len(data), 2)
assert len(track_attention) == len(data)
tracks_weights = compute_weights(data, track_attention)
event_predictions = numpy.bincount(
data['event_id'], weights=logit(track_proba[:, 1]) * data['signTrack'] * tracks_weights)
B_signs = data['signB'].group_by(data['event_id']).groups.aggregate(numpy.mean)
values = []
for i in range(20):
mask = numpy.random.RandomState(i).uniform(size=len(B_signs)) > 0.5
values.append(roc_auc_score(B_signs[mask], event_predictions[mask]))
return numpy.mean(values), numpy.std(values)
attention_aucs = OrderedDict()
for size, (dt_classifier, dt_attention) in sorted(models.iteritems()):
attention_aucs[size] = compute_auc_with_attention_and_error(
check_data,
dt_classifier.predict_proba(check_data.to_pandas()),
logit(dt_attention.predict_proba(check_data.to_pandas())[:, 1])
)
dt_aucs = OrderedDict()
for size, _dt_classifier in sorted(models_simple_dt.iteritems()):
dt_aucs[size] = compute_auc_with_attention_and_error(
check_data,
_dt_classifier.predict_proba(check_data.to_pandas()),
numpy.zeros(len(check_data))
)
plt.figure(figsize=[12, 9])
x_ticks = range(len(dt_aucs))
# plt.plot(dt_aucs.values(), 'o--', label='plain DT', markersize=10)
plt.errorbar(x_ticks,
[x for (x, _) in dt_aucs.values()],
yerr=[x for (_, x) in dt_aucs.values()], fmt='o--', label='DT, no attention', markersize=10)
# plt.plot(attention_aucs.values(), 'x--', label='with attention', markersize=10)
plt.errorbar(x_ticks,
[x for (x, _) in attention_aucs.values()],
yerr=[x for (_, x) in attention_aucs.values()], fmt='x--', label='with attention', markersize=10)
plt.xticks(x_ticks, dt_aucs.keys(), fontsize=13)
plt.legend(loc='lower right', fontsize=25)
plt.xlabel('size of training sample', fontsize=20)
plt.ylabel('tagging ROC AUC', fontsize=20)
plt.xlim(-0.5, 4.5)
plt.savefig('./attention_quality.png', bbox_inches='tight')
# orders = compute_orders(data['event_id'], correctness)
# for percentile in [60, 70, 80]:
# for n_tracks in [3, 5, 7]:
# # lazy start
# _attention = numpy.zeros(len(data))
# dt_classifier = FoldingGroupClassifier(base_clf, n_folds=2, group_feature='event_id')
# dt_classifier.fit(data[features + ['event_id']].to_pandas(),
# data['label'], sample_weight=compute_weights(data, _attention))
# _correctness = logit(dt_classifier.predict_proba(data.to_pandas())[:, 1]) * (2 * data['label'] - 1)
# dt_attention = FoldingGroupClassifier(base_clf, n_folds=2, group_feature='event_id')
# dt_attention.fit(data[features + ['event_id']].to_pandas(),
# (_correctness > numpy.percentile(_correctness, percentile)) * (orders < n_tracks) )
# _attention = logit(dt_attention.predict_proba(data.to_pandas())[:, 1])
# print n_tracks, '\t', percentile, '\t', \
# compute_auc_with_attention(data, dt_classifier.predict_proba(data.to_pandas()), _attention)
Explanation: Attention (this went into the presentation)
A check that the alternating training of the attention model and the classifier works.
End of explanation
# sorted(zip(dt_classifier.estimators[0].feature_importances_, features))
# sorted(zip(dt_attention.estimators[0].feature_importances_, features))
Explanation: A small comparison of feature importances for the attention and classifier models
End of explanation
%%time
from rep.estimators import XGBoostClassifier
n_train = 10 ** 6
xgb = FoldingGroupClassifier(XGBoostClassifier(n_estimators=100, eta=0.05, random_state=42),
n_folds=2, group_feature='event_id', train_features=features, random_state=1337)
_ = xgb.fit(data[features + ['event_id']][:n_train].to_pandas(), data['label'][:n_train])
# raw quality
for p in xgb.staged_predict_proba(check_data.to_pandas()):
print compute_simple_auc(check_data, p)
# raw quality
# for p in xgb.staged_predict_proba(data.to_pandas()):
# print compute_simple_auc(data, p)
%%time
dt_1m = FoldingGroupClassifier(
DecisionTrainClassifier(n_estimators=1000, learning_rate=0.01, n_threads=len(features), max_features=1.0),
n_folds=2, group_feature='event_id', train_features=features, random_state=1337)
dt_1m.fit(data[features + ['event_id']][:n_train].to_pandas(), data['label'][:n_train]);
# raw quality
for p in dt_1m.staged_predict_proba(check_data.to_pandas()):
print compute_simple_auc(check_data, p)
Explanation: Comparison of models (for the presentation)
End of explanation
from hep_ml.nnet import MLPClassifier
# 0.6204 - 400
for epochs in [50, 100, 200, 400]:
nn = FoldingGroupClassifier(MLPClassifier(layers=[30, 20], epochs=epochs, scaler='iron', random_state=42),
n_folds=2, group_feature='event_id', train_features=features, random_state=1337)
nn.fit(data[features + ['event_id']][:n_train].to_pandas(), data['label'][:n_train]);
print compute_simple_auc(check_data, nn.predict_proba(check_data.to_pandas()))
%%time
for epochs in [800]:
nn = FoldingGroupClassifier(MLPClassifier(layers=[30, 20], epochs=epochs, scaler='iron', random_state=42),
n_folds=2, group_feature='event_id', train_features=features, random_state=1337)
nn.fit(data[features + ['event_id']][:n_train].to_pandas(), data['label'][:n_train]);
print compute_simple_auc(check_data, nn.predict_proba(check_data.to_pandas()))
print compute_simple_auc(data, nn.predict_proba(data.to_pandas()))
Explanation: Neural networks
End of explanation
from keras.utils.np_utils import to_categorical
from keras.models import Sequential
from keras.layers import Dense, Dropout, BatchNormalization
from keras.optimizers import Adam
print "%f" % (1e-3 / 20 )
model = Sequential()
model.add(BatchNormalization(input_shape=(len(features),)))
model.add(Dense(300, activation='relu'))
model.add(Dropout(0.05))
model.add(Dense(300, activation='tanh'))
model.add(Dropout(0.1))
model.add(Dense(2, activation='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer=Adam(lr=1e-3 / 20),
metrics=['accuracy', ])
# from sklearn.metrics import log_loss
def train_nn(data, check_data, n_epochs=100):
numpy.random.seed(1337)
_X, _y = data[features].to_pandas().values, to_categorical(data['label'])
_check_X, _check_y = check_data[features].to_pandas().values, to_categorical(check_data['label'])
for _ in range(n_epochs):
history = model.fit(_X, _y,
nb_epoch=1, verbose=2,
)
_track_proba = model.predict_proba(_check_X, verbose=2)
histories.append(history)
nn_evaluations_simple.append(compute_simple_auc(check_data, _track_proba))
nn_evaluations_attention.append(compute_auc_with_attention(check_data, _track_proba,
numpy.zeros(len(check_data)) - 2))
nn_evaluations_testlosses.append(log_loss(_check_y[:, 1], _track_proba))
histories = []
nn_evaluations_simple = []
nn_evaluations_attention = []
nn_evaluations_testlosses = []
train_nn(data[:n_train], check_data, n_epochs=100)
with open('./nn_history.pkl', 'w') as f:
cPickle.dump([nn_evaluations_simple, nn_evaluations_attention, nn_evaluations_testlosses, histories], f)
train_losses = [h.history['loss'] for h in histories]
plt.figure(figsize=[9, 6])
plt.plot(train_losses, label='train')
plt.plot(nn_evaluations_testlosses, label='test')
plt.plot(numpy.zeros(len(nn_evaluations_simple)) + numpy.log(2), 'k--', label='all zeros prediction')
plt.xlim(0, 50)
plt.ylim(0.680, 0.710)
plt.xlabel('number of epochs', fontsize=25)
plt.ylabel('loss', fontsize=25)
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)
plt.legend(fontsize=20)
plt.text(25, 0.685, '(pay attention to the y-axis scale)', horizontalalignment='center', fontsize=15)
plt.savefig('./nns_losses.png', bbox_inches='tight')
plt.figure(figsize=[9, 6])
plt.plot(nn_evaluations_simple)
plt.xlabel('number of epochs', fontsize=25)
plt.ylabel('test ROC AUC (for mesons)', fontsize=25)
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)
plt.xlim(0, 50)
plt.savefig('./nns_rocauc.png', bbox_inches='tight')
_dt = DecisionTrainClassifier(n_estimators=1000, learning_rate=0.01, max_features=1., n_threads=len(features),
train_features=features)
_dt.fit(data[:10**6].to_pandas(), data['label'][:10**6])
_step = 50
_aucs = []
for p in _dt.staged_predict_proba(check_data.to_pandas(), step=_step):
_aucs.append(compute_simple_auc(check_data, p))
plt.figure(figsize=[9, 6])
plt.plot(numpy.arange(1, len(_aucs) + 1) * _step, _aucs, label='DecisionTrain')
plt.xlabel('number of epochs', fontsize=25)
plt.ylabel('test ROC AUC (for mesons)', fontsize=25)
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)
plt.xlim(0, 1000)
plt.legend(fontsize=20, loc='lower right')
plt.savefig('./dt1m_roc_history.png', bbox_inches='tight')
max(nn_evaluations_simple)
Explanation: For comparison, let's take a simple MLP from Keras
84 sec / iteration
End of explanation
from sklearn.metrics.ranking import auc, roc_curve
import cPickle
with open('../models/old-rocs-MC-copied', 'r') as f:
baseline_fpr, baseline_tpr, _ = cPickle.load(f)
%%time
dt_10m = DecisionTrainClassifier(n_estimators=1000, learning_rate=0.03, max_features=1., n_threads=len(features),
train_features=features)
dt_10m.fit(data.to_pandas(), data['label'])
for p in dt_10m.staged_predict_proba(check_data.to_pandas()):
print compute_simple_auc(check_data, p)
track_proba_10m = dt_10m.predict_proba(check_data.to_pandas())
_event_predictions = numpy.bincount(check_data['event_id'], weights=logit(track_proba_10m[:, 1]) * check_data['signTrack'])
_B_signs = check_data['signB'].group_by(check_data['event_id']).groups.aggregate(numpy.mean)
fpr_10m, tpr_10m, _ = roc_curve(_B_signs, _event_predictions)
plt.figure(figsize=[10, 9])
plt.plot(baseline_fpr, baseline_tpr, label='baseline OS, AUC={:.3}'.format(auc(baseline_fpr, baseline_tpr)))
plt.plot(fpr_10m, tpr_10m, label='inclusive tagging, AUC={:.3f}'.format(auc(fpr_10m, tpr_10m)) )
plt.plot([0,1], [0,1], 'k--', label='random guess')
plt.xlabel('FPR', fontsize=25)
plt.ylabel('TPR', fontsize=25)
plt.legend(loc='lower right', fontsize=20)
plt.savefig('./inclusive_vs_old_rocs.png', bbox_inches='tight')
Explanation: comparison of ROC AUCs
End of explanation
real_data = Table(root_numpy.root2array('./Bcharged_data.root', stop=10000000))
# real_data.colnames
def compute_simple_auc_with_weight(data, track_proba):
assert track_proba.shape == (len(data), 2)
event_predictions = numpy.bincount(
data['event_id'], weights=logit(track_proba[:, 1]) * data['signTrack'])
B_signs = data['signB'].group_by(data['event_id']).groups.aggregate(numpy.mean)
sweights = data['N_sig_sw'].group_by(data['event_id']).groups.aggregate(numpy.mean)
    # TODO delete this hack, it works around a problem in the data
sweights[B_signs != numpy.sign(B_signs)] = 0
B_signs[B_signs != numpy.sign(B_signs)] = -1
fpr, tpr, _ = roc_curve(B_signs, event_predictions, sample_weight=sweights)
return roc_auc_score(B_signs, event_predictions, sample_weight=sweights), fpr, tpr
_auc, _fpr_10mcheck, _tpr_10mcheck = \
compute_simple_auc_with_weight(real_data, track_proba=dt_10m.predict_proba(real_data.to_pandas()))
plt.figure(figsize=[10, 9])
plt.plot(fpr_10m, tpr_10m, label='MC, AUC={:.3}'.format(auc(fpr_10m, tpr_10m)))
plt.plot(_fpr_10mcheck, _tpr_10mcheck, label='RD, AUC={:.3f}'.format(_auc) )
plt.plot([0,1], [0,1], 'k--', label='random guess')
plt.xlabel('FPR', fontsize=25)
plt.ylabel('TPR', fontsize=25)
plt.legend(loc='lower right', fontsize=20)
plt.xlim(0, 1.)
plt.ylim(0, 1.)
plt.savefig('./inclusive_mc_vs_rd_rocs.png', bbox_inches='tight')
# for _p in dt_10m.staged_predict_proba(real_data.to_pandas()):
# print compute_simple_auc_with_weight(real_data, _p)
compute_simple_auc_with_weight(check_data, dt_10m.predict_proba(check_data.to_pandas()))
compute_simple_auc(check_data, dt_10m.predict_proba(check_data.to_pandas()))
Explanation: Real data
End of explanation
def loss_function(b_signs, event_indices, d_tracksign, d_is_tagging):
track_contributions = T.exp(d_is_tagging)
track_contributions /= (T.extra_ops.bincount(event_indices, track_contributions) + 1)[event_indices]
d_B = bincount(event_indices, d_tracksign * track_contributions)
return T.mean(T.exp(- b_signs * d_B))
# return T.mean(b_signs * d_B)
from sklearn.base import BaseEstimator
class AttentionLoss(BaseEstimator):
def __init__(self):
pass
def fit(self, X, y, sample_weight=None):
# self.sample_weight = numpy.require(sample_weight, dtype='float32')
self.sample_weight = numpy.ones(len(X))
self.y_signed = numpy.require(2 * y - 1, dtype='float32')
_, first_positions, event_indices = numpy.unique(X['event_id'].values, return_index=True, return_inverse=True)
self.b_signs = numpy.array(X['signB'].values)[first_positions]
d_is_tagging_var = T.vector()
self.Loss = loss_function(
self.b_signs, event_indices,
d_tracksign=numpy.array(X['trackpredictions'].values),
d_is_tagging=d_is_tagging_var
)
self.grad = theano.function([d_is_tagging_var], - T.grad(self.Loss, d_is_tagging_var))
return self
def prepare_tree_params(self, pred):
_grad = numpy.sign(self.grad(pred))
return _grad / numpy.std(_grad), self.sample_weight
data2 = data.copy()
data2['trackpredictions'] = logit(dt.predict_proba(data.to_pandas())[:, 1])
# _base = DecisionTrainClassifier(loss=AttentionLoss(), n_estimators=1000,
# learning_rate=0.03, n_threads=16, train_features=features)
# _features = features + ['event_id', 'signB', 'trackpredictions']
# dt_grouping = FoldingGroupClassifier(_base, n_folds=2, group_feature='event_id', train_features=_features)
# dt_grouping.fit(data2[_features].to_pandas(),
# data2['label']);
# attention = logit(dt_grouping.predict_proba(data2.to_pandas())[:, 1])
from rep.metaml import FoldingRegressor
from decisiontrain import DecisionTrainRegressor
simple_weights = compute_weights(data, attention=numpy.zeros(len(data)))
# Try training on the percentile target. So far this is the most stable option
# folding = FoldingRegressor(DecisionTrainRegressor(n_estimators=1000))
# folding.fit(data2[features].to_pandas(), correctness > numpy.percentile(correctness, 70), sample_weight=simple_weights);
# Training on the raw correctness works poorly
# folding = FoldingRegressor(DecisionTrainRegressor(n_estimators=1000))
# folding.fit(data2[features].to_pandas(), correctness / numpy.std(correctness), sample_weight=simple_weights);
# Try training on the track order (orders). Better than nothing, worse than the percentile
# folding = FoldingRegressor(DecisionTrainRegressor(n_estimators=1000))
# folding.fit(data2[features].to_pandas(), 1.4 ** - orders, sample_weight=simple_weights);
# Try training on the rank transform of the correctness
folding = FoldingRegressor(DecisionTrainRegressor(n_estimators=1000))
folding.fit(data2[features].to_pandas(),
numpy.argsort(numpy.argsort(correctness)) / float(len(correctness)) - 0.5,
sample_weight=simple_weights);
# Try training on the sign of the correctness
# folding = FoldingRegressor(DecisionTrainRegressor(n_estimators=1000))
# folding.fit(data2[features].to_pandas(), numpy.sign(correctness), sample_weight=simple_weights);
# _correctness
attention = folding.predict(data2.to_pandas())
plt.hist(attention, bins=40);
compute_auc_with_attention(data, track_proba=dt.predict_proba(data2.to_pandas()),
track_attention=attention - 1.5
)
compute_auc_with_attention(data, track_proba=dt.predict_proba(data2.to_pandas()),
track_attention=compute_weights(data, attention) - 2
)
compute_auc_with_attention(data, track_proba=dt.predict_proba(data2.to_pandas()), track_attention=attention * 0 - 3)
compute_auc_with_attention(data[:10**6],
track_proba=dt.predict_proba(data2.to_pandas())[:10 ** 6],
track_attention=attention[:10 ** 6] * 0 - 3)
Explanation: A mathematical model for the next steps.
GroupLogLoss and DropoutLoss give an improvement at all training-set sizes, but did not make it into the presentation.
$$ d(B+) = \sum_{\text{track}} d(B+ | \text{track is tagging}) 1_\text{track is tagging} $$
End of explanation
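# A minimal NumPy sketch of the aggregation formula above (toy arrays only, not the real data):
# every track contributes its decision d(B+ | track) multiplied by an "is tagging" indicator,
# and contributions are summed per event with bincount, mirroring the losses defined below.
_toy_event_ids = numpy.array([0, 0, 0, 1, 1]) # two events with 3 and 2 tracks
_toy_track_decisions = numpy.array([0.5, -0.2, 0.1, -0.4, 0.3]) # d(B+ | track is tagging)
_toy_is_tagging = numpy.array([1.0, 0.0, 1.0, 1.0, 1.0]) # 1_{track is tagging}
numpy.bincount(_toy_event_ids, weights=_toy_track_decisions * _toy_is_tagging) # -> [0.6, -0.1], one d(B+) per event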
from sklearn.base import BaseEstimator
class DropoutLoss(BaseEstimator):
def __init__(self, p=0.0):
self.p = p
def fit(self, X, y, sample_weight=None):
self.sample_weight = numpy.require(sample_weight, dtype='float32')
self.y_signed = numpy.require(2 * y - 1, dtype='float32')
_, first_positions, self.event_indices = numpy.unique(X['event_id'].values, return_index=True, return_inverse=True)
# self.b_signs = numpy.array(X['signB'].values)[first_positions]
# track_z = - w(track) * sign(track) * sign(B)
self.track_z = - self.sample_weight * self.y_signed
self.event_losses = numpy.ones(len(first_positions))
# just in case
self.sample_weight **= 0.0
return self
def prepare_tree_params(self, pred):
# normally, prediction is weight1 * pred1 * sign1 + weight2 * pred2 * sign2 ...
# loss is exp( - isloss * decision)
# in case of dropout
track_exponents = numpy.exp(pred * self.track_z)
track_multipliers = self.p + (1 - self.p) * track_exponents
self.event_losses[:] = 1
numpy.multiply.at(self.event_losses, self.event_indices, track_multipliers)
grad = - self.event_losses[self.event_indices] / track_multipliers * track_exponents * self.track_z
return grad, self.sample_weight
for p in [0.0]:
_dropout_dt = DecisionTrainClassifier(loss=DropoutLoss(p=p), n_estimators=1000,
learning_rate=0.03, n_threads=16, train_features=features)
_features = features + ['event_id']
dt_dropout = FoldingGroupClassifier(_dropout_dt, n_folds=2, group_feature='event_id', train_features=_features)
dt_dropout.fit(data[_features].to_pandas(), data['label'])
print None
for i, _p in enumerate(dt_dropout.staged_predict_proba(data.to_pandas()), 1):
if i % 2 == 0:
print compute_auc_with_attention(data, track_proba=_p, track_attention=numpy.zeros(len(data)) - 2),
print p
print compute_auc_with_attention(data, track_proba=dt.predict_proba(data.to_pandas()),
track_attention=numpy.zeros(len(data))
)
Explanation: Idea: dropout_loss
An exploss-style loss function for training an ordinary classifier.
The p=0 variant always turned out to be optimal;
on the 3-million training sample it beat everything else.
The assignment of (uniform) weights is, for some reason, not the same as elsewhere.
End of explanation
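# A small sanity check of the p=0 limit mentioned above (toy numbers only): in DropoutLoss the
# per-event loss is prod_i (p + (1 - p) * exp(pred_i * z_i)), and at p=0 this collapses to
# exp(sum_i pred_i * z_i), i.e. a plain exp-loss on the summed event decision.
_toy_pred = numpy.array([0.2, -0.5, 0.1])
_toy_z = numpy.array([-1.0, 1.0, -1.0])
print numpy.prod(numpy.exp(_toy_pred * _toy_z)), numpy.exp(numpy.sum(_toy_pred * _toy_z))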
from hep_ml.losses import AdaLossFunction
_base_ada = DecisionTrainClassifier(loss=AdaLossFunction(), n_estimators=2000,
learning_rate=0.06, n_threads=16, train_features=features)
dt_exploss = FoldingGroupClassifier(_base_ada, n_folds=2, group_feature='event_id', train_features=features + ['event_id'])
_ = dt_exploss.fit(data[features + ['event_id']].to_pandas(), data['label'],
sample_weight=compute_weights(data, numpy.zeros(len(data)) - 2. ) )
for i, _p in enumerate(dt_exploss.staged_predict_proba(data.to_pandas()), 1):
if i % 3 == 0:
print compute_auc_with_attention(data, track_proba=_p, track_attention=numpy.zeros(len(data)) - 2)
from scipy.special import logit, expit
print compute_auc_with_attention(data, track_proba=expit(logit(_p)) , track_attention=numpy.zeros(len(data)) - 2)
print compute_auc_with_attention(data, track_proba=expit(logit(_p) * 2.) , track_attention=numpy.zeros(len(data)) - 2)
print compute_auc_with_attention(data, track_proba=expit(logit(_p) * 0.5) , track_attention=numpy.zeros(len(data)) - 2)
Explanation: A check with AdaLoss:
maybe it simply works as is?
End of explanation
from sklearn.base import BaseEstimator
class GroupLogLoss(BaseEstimator):
def __init__(self):
pass
def fit(self, X, y, sample_weight=None):
self.sample_weight = numpy.require(sample_weight, dtype='float32')
self.y_signed = numpy.require(2 * y - 1, dtype='float32')
_, first_positions, self.event_indices = numpy.unique(X['event_id'].values,
return_index=True, return_inverse=True)
self.track_z = - self.sample_weight * self.y_signed
d = T.vector()
d_b = bincount(self.event_indices, weights=self.track_z * d)
self.Loss = T.sum(T.nnet.softplus(d_b)) * 2
self.grad = theano.function([d], -T.grad(self.Loss, d))
return self
def prepare_tree_params(self, pred):
return self.grad(pred), self.sample_weight
_base_dt_log = DecisionTrainClassifier(loss=GroupLogLoss(), n_estimators=2000,
learning_rate=0.03, n_threads=len(features), train_features=features)
dt_logloss = FoldingGroupClassifier(_base_dt_log, n_folds=2, group_feature='event_id',
train_features=features + ['event_id'])
_ = dt_logloss.fit(data[features + ['event_id']].to_pandas(), data['label'],
# sample_weight=compute_weights(data, numpy.zeros(len(data)) - 2. )
)
for i, _p in enumerate(dt_logloss.staged_predict_proba(data.to_pandas()), 1):
if i % 2 == 0:
print compute_auc_with_attention(data, track_proba=_p, track_attention=numpy.zeros(len(data)) - 2)
assert 0 == 1
# filtering did not help
# _n_tracks = numpy.bincount(data['event_id'])[data['event_id']]
# _weights = (_n_tracks > 5) & (_n_tracks < 40)
# dt_logloss_filtered = FoldingGroupClassifier(_base_dt_log, n_folds=2, group_feature='event_id',
# train_features=features + ['event_id'])
# _ = dt_logloss_filtered.fit(data[features + ['event_id']].to_pandas(), data['label'],
# sample_weight=_weights)
# for i, _p in enumerate(dt_logloss_filtered.staged_predict_proba(data.to_pandas()), 1):
# if i % 2 == 0:
# print compute_auc_with_attention(data, track_proba=_p, track_attention=numpy.zeros(len(data)) - 2)
Explanation: GroupLogLoss
needs to be compared with ExpLoss
End of explanation |
1,727 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SFR package example
Demonstrates functionality of Flopy SFR module using the example documented by Prudic and others (2004)
Step1: copy over the example files to the working directory
Step2: Load example dataset, skipping the SFR package
Step3: Read pre-prepared reach and segment data into numpy recarrays using numpy.genfromtxt()
Reach data (Item 2 in the SFR input instructions) are input and stored in a numpy record array
http
Step4: Segment Data structure
Segment data are input and stored in a dictionary of record arrays keyed by stress period
Step5: define dataset 6e (channel flow data) for segment 1
dataset 6e is stored in a nested dictionary keyed by stress period and segment,
with a list of the following lists defined for each segment with icalc == 4
FLOWTAB(1) FLOWTAB(2) ... FLOWTAB(NSTRPTS)
DPTHTAB(1) DPTHTAB(2) ... DPTHTAB(NSTRPTS)
WDTHTAB(1) WDTHTAB(2) ... WDTHTAB(NSTRPTS)
Step6: define dataset 6d (channel geometry data) for segments 7 and 8
dataset 6d is stored in a nested dictionary keyed by stress period and segment,
with a list of the following lists defined for each segment with icalc == 4
FLOWTAB(1) FLOWTAB(2) ... FLOWTAB(NSTRPTS)
DPTHTAB(1) DPTHTAB(2) ... DPTHTAB(NSTRPTS)
WDTHTAB(1) WDTHTAB(2) ... WDTHTAB(NSTRPTS)
Step7: Define SFR package variables
Step8: Instantiate SFR package
Input arguments generally follow the variable names defined in the Online Guide to MODFLOW
Step9: Plot the SFR segments
any column in the reach_data array can be plotted using the key argument
Step10: Check the SFR dataset for errors
Step11: Look at results
Step12: Read results into numpy array using genfromtxt
Step13: Read results into pandas dataframe
requires the pandas library
Step14: Plot streamflow and stream/aquifer interactions for a segment
Step15: Look at stage, model top, and streambed top
Step16: Get SFR leakage results from cell budget file
Step17: Plot leakage in plan view
Step18: Plot total streamflow | Python Code:
import sys
import platform
import os
import numpy as np
import glob
import shutil
import matplotlib as mpl
import matplotlib.pyplot as plt
import flopy
import flopy.utils.binaryfile as bf
#Set name of MODFLOW exe
# assumes executable is in users path statement
exe_name = 'mf2005'
if platform.system() == 'Windows':
exe_name = 'mf2005.exe'
% matplotlib inline
mpl.rcParams['figure.figsize'] = (11, 8.5)
Explanation: SFR package example
Demonstrates functionality of Flopy SFR module using the example documented by Prudic and others (2004):
Problem description:
Grid dimensions: 1 Layer, 15 Rows, 10 Columns
Stress periods: 1 steady
Flow package: LPF
Stress packages: SFR, GHB, EVT, RCH
Solver: SIP
<img src="./img/Prudic2004_fig6.png" width="400" height="500"/>
End of explanation
path = 'data'
gpth = os.path.join('..', 'data', 'mf2005_test', 'test1ss.*')
for f in glob.glob(gpth):
shutil.copy(f, path)
Explanation: copy over the example files to the working directory
End of explanation
m = flopy.modflow.Modflow.load('test1ss.nam', version='mf2005', exe_name=exe_name,
model_ws=path, load_only=['ghb', 'evt', 'rch', 'dis', 'bas6', 'oc', 'sip', 'lpf'])
Explanation: Load example dataset, skipping the SFR package
End of explanation
rpth = os.path.join('..', 'data', 'sfr_examples', 'test1ss_reach_data.csv')
reach_data = np.genfromtxt(rpth, delimiter=',', names=True)
reach_data
Explanation: Read pre-prepared reach and segment data into numpy recarrays using numpy.genfromtxt()
Reach data (Item 2 in the SFR input instructions) are input and stored in a numpy record array
http://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html
This allows for reach data to be indexed by their variable names, as described in the SFR input instructions.
For more information on Item 2, see the Online Guide to MODFLOW:
http://water.usgs.gov/nrp/gwsoftware/modflow2000/MFDOC/index.html?sfr.htm
End of explanation
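# A quick illustration of the record-array access described above: fields can be read by
# name (the names come from the CSV header; 'iseg' is one of the standard SFR2 columns
# used later for plotting, so it is assumed to be present in this file).
reach_data.dtype.names
reach_data['iseg'][:5]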
spth = os.path.join('..', 'data', 'sfr_examples', 'test1ss_segment_data.csv')
ss_segment_data = np.genfromtxt(spth, delimiter=',', names=True)
segment_data = {0: ss_segment_data}
segment_data[0][0:1]['width1']
Explanation: Segment Data structure
Segment data are input and stored in a dictionary of record arrays keyed by stress period
End of explanation
channel_flow_data = {0: {1: [[0.5, 1.0, 2.0, 4.0, 7.0, 10.0, 20.0, 30.0, 50.0, 75.0, 100.0],
[0.25, 0.4, 0.55, 0.7, 0.8, 0.9, 1.1, 1.25, 1.4, 1.7, 2.6],
[3.0, 3.5, 4.2, 5.3, 7.0, 8.5, 12.0, 14.0, 17.0, 20.0, 22.0]]}}
Explanation: define dataset 6e (channel flow data) for segment 1
dataset 6e is stored in a nested dictionary keyed by stress period and segment,
with a list of the following lists defined for each segment with icalc == 4
FLOWTAB(1) FLOWTAB(2) ... FLOWTAB(NSTRPTS)
DPTHTAB(1) DPTHTAB(2) ... DPTHTAB(NSTRPTS)
WDTHTAB(1) WDTHTAB(2) ... WDTHTAB(NSTRPTS)
End of explanation
channel_geometry_data = {0: {7: [[0.0, 10.0, 80.0, 100.0, 150.0, 170.0, 240.0, 250.0],
[20.0, 13.0, 10.0, 2.0, 0.0, 10.0, 13.0, 20.0]],
8: [[0.0, 10.0, 80.0, 100.0, 150.0, 170.0, 240.0, 250.0],
[25.0, 17.0, 13.0, 4.0, 0.0, 10.0, 16.0, 20.0]]}}
Explanation: define dataset 6d (channel geometry data) for segments 7 and 8
dataset 6d is stored in a nested dictionary keyed by stress period and segment,
with a list of the following lists defined for each segment with icalc == 4
FLOWTAB(1) FLOWTAB(2) ... FLOWTAB(NSTRPTS)
DPTHTAB(1) DPTHTAB(2) ... DPTHTAB(NSTRPTS)
WDTHTAB(1) WDTHTAB(2) ... WDTHTAB(NSTRPTS)
End of explanation
nstrm = len(reach_data) # number of reaches
nss = len(segment_data[0]) # number of segments
nsfrpar = 0 # number of parameters (not supported)
nparseg = 0
const = 1.486 # constant for manning's equation, units of cfs
dleak = 0.0001 # closure tolerance for stream stage computation
istcb1 = 53 # flag for writing SFR output to cell-by-cell budget (on unit 53)
istcb2 = 81 # flag for writing SFR output to text file
dataset_5 = {0: [nss, 0, 0]} # dataset 5 (see online guide)
Explanation: Define SFR package variables
End of explanation
sfr = flopy.modflow.ModflowSfr2(m, nstrm=nstrm, nss=nss, const=const, dleak=dleak, istcb1=istcb1, istcb2=istcb2,
reach_data=reach_data,
segment_data=segment_data,
channel_geometry_data=channel_geometry_data,
channel_flow_data=channel_flow_data,
dataset_5=dataset_5)
sfr.reach_data[0:1]
Explanation: Instantiate SFR package
Input arguments generally follow the variable names defined in the Online Guide to MODFLOW
End of explanation
sfr.plot(key='iseg');
Explanation: Plot the SFR segments
any column in the reach_data array can be plotted using the key argument
End of explanation
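# The valid values for the key argument are simply the reach_data column names;
# listing them is an easy way to see what can be plotted:
sfr.reach_data.dtype.names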
chk = sfr.check()
m.external_fnames = [os.path.split(f)[1] for f in m.external_fnames]
m.external_fnames
m.write_input()
m.run_model()
Explanation: Check the SFR dataset for errors
End of explanation
sfr_outfile = os.path.join('..', 'data', 'sfr_examples', 'test1ss.flw')
names = ["layer", "row", "column", "segment", "reach", "Qin",
"Qaquifer", "Qout", "Qovr", "Qprecip", "Qet", "stage", "depth", "width", "Cond", "gradient"]
Explanation: Look at results
End of explanation
sfrresults = np.genfromtxt(sfr_outfile, skip_header=8, names=names, dtype=None)
sfrresults[0:1]
Explanation: Read results into numpy array using genfromtxt
End of explanation
import pandas as pd
df = pd.read_csv(sfr_outfile, delim_whitespace=True, skiprows=8, names=names, header=None)
df
Explanation: Read results into pandas dataframe
requires the pandas library
End of explanation
inds = df.segment == 3
ax = df.ix[inds, ['Qin', 'Qaquifer', 'Qout']].plot(x=df.reach[inds])
ax.set_ylabel('Flow, in cubic feet per second')
ax.set_xlabel('SFR reach')
Explanation: Plot streamflow and stream/aquifer interactions for a segment
End of explanation
streambed_top = m.sfr.segment_data[0][m.sfr.segment_data[0].nseg == 3][['elevup', 'elevdn']][0]
streambed_top
df['model_top'] = m.dis.top.array[df.row.values - 1, df.column.values -1]
fig, ax = plt.subplots()
plt.plot([1, 6], list(streambed_top), label='streambed top')
ax = df.ix[inds, ['stage', 'model_top']].plot(ax=ax, x=df.reach[inds])
ax.set_ylabel('Elevation, in feet')
plt.legend()
Explanation: Look at stage, model top, and streambed top
End of explanation
bpth = os.path.join('data', 'test1ss.cbc')
cbbobj = bf.CellBudgetFile(bpth)
cbbobj.list_records()
sfrleak = cbbobj.get_data(text=' STREAM LEAKAGE')[0]
sfrleak[sfrleak == 0] = np.nan # remove zero values
Explanation: Get SFR leakage results from cell budget file
End of explanation
im = plt.imshow(sfrleak[0], interpolation='none', cmap='coolwarm', vmin = -3, vmax=3)
cb = plt.colorbar(im, label='SFR Leakage, in cubic feet per second');
Explanation: Plot leakage in plan view
End of explanation
sfrQ = sfrleak[0].copy()
sfrQ[sfrQ == 0] = np.nan
sfrQ[df.row.values-1, df.column.values-1] = df[['Qin', 'Qout']].mean(axis=1).values
im = plt.imshow(sfrQ, interpolation='none')
plt.colorbar(im, label='Streamflow, in cubic feet per second');
Explanation: Plot total streamflow
End of explanation |
1,728 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting Started
Tensors are similar to numpy's ndarrays, with the addition being that Tensors can also be used on a GPU to accelerate computing.
Step1: Numpy Bridge
The torch Tensor and numpy array will share their underlying memory locations, and changing one will change the other.
Converting torch Tensor to numpy Array
Step2: Converting numpy Array to torch Tensor
Step3: CUDA Tensors
Tensors can be moved onto GPU using the .cuda function.
Step4: Autograd
Step5: You should have got a matrix of 4.5.
Because PyTorch is a dynamic computation framework, we can take the gradients of all kinds of interesting computations, even loops!
Step6: Neural Networks
Neural networks can be constructed using the torch.nn package.
An nn.Module contains layers, and a method forward(input) that returns the output.
Step7: You just have to define the forward function, and the backward function (where gradients are computed) is automatically defined for you using autograd.
The learnable parameters of a model are returned by net.parameters()
Step8: The input to the forward is a Variable, and so is the output.
Step9: A loss function takes the (output, target) pair of inputs, and computes a value that estimates how far away the output is from the target. There are several different loss functions under the nn package. A simple loss is
Step10: Now, if you follow loss in the backward direction, using its .creator attribute, you will see a graph of computations that looks like this
Step11: Example complete process
For vision, there is a package called torchvision, which
has data loaders for common datasets such as Imagenet, CIFAR10, MNIST, etc. and data transformers for images.
For this tutorial, we will use the CIFAR10 dataset.
Training an image classifier
We will do the following steps in order
Step12: 2. Define a Convolution Neural Network
Step13: 3. Define a Loss function and optimizer
Step14: 4. Train the network
This is when things start to get interesting.
We simply have to loop over our data iterator, and feed the inputs to
the network and optimize
Step15: We will check what the model has learned by predicting the class label, and checking it against the ground-truth. If the prediction is correct, we add the sample to the list of correct predictions.
First, let's display an image from the test set to get familiar.
Step16: Okay, now let us see what the neural network thinks these examples above are
Step17: The results seem pretty good. Let us look at how the network performs on the whole dataset. | Python Code:
x = torch.Tensor(5, 3); x
x = torch.rand(5, 3); x
x.size()
y = torch.rand(5, 3)
x + y
torch.add(x, y)
result = torch.Tensor(5, 3)
torch.add(x, y, out=result)
result1 = torch.Tensor(5, 3)
result1 = x + y
result1
# anything ending in '_' is an in-place operation
y.add_(x) # adds x to y in-place
# standard numpy-like indexing with all bells and whistles
x[:,1]
Explanation: Getting Started
Tensors are similar to numpy's ndarrays, with the addition being that Tensors can also be used on a GPU to accelerate computing.
End of explanation
a = torch.ones(5)
a
b = a.numpy()
b
a.add_(1)
print(a)
print(b) # see how the numpy array changed in value
Explanation: Numpy Bridge
The torch Tensor and numpy array will share their underlying memory locations, and changing one will change the other.
Converting torch Tensor to numpy Array
End of explanation
a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)
print(a)
print(b) # see how changing the np array changed the torch Tensor automatically
Explanation: Converting numpy Array to torch Tensor
End of explanation
x = x.cuda()
y = y.cuda()
x+y
Explanation: CUDA Tensors
Tensors can be moved onto GPU using the .cuda function.
End of explanation
x = Variable(torch.ones(2, 2), requires_grad = True); x
y = x + 2; y
# y.creator # - creator seems not to be available with current pytorch version
z = y * y * 3; z
out = z.mean(); out
# You never have to look at these in practice - this is just showing how the
# computation graph is stored
# - creator seems not to be available with current pytorch version
# print(out.creator.previous_functions[0][0])
# print(out.creator.previous_functions[0][0].previous_functions[0][0])
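# In the PyTorch releases that removed .creator, the same graph can be walked via .grad_fn
# (shown here as a hedged alternative -- exact attribute names depend on the installed version):
print(out.grad_fn)
print(out.grad_fn.next_functions[0][0])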
out.backward()
# d(out)/dx
x.grad
Explanation: Autograd: automatic differentiation
Central to all neural networks in PyTorch is the autograd package.
The autograd package provides automatic differentiation for all operations on Tensors.
It is a define-by-run framework, which means that your backprop is defined by how your code is run, and that every single iteration can be different.
autograd.Variable is the central class of the package.
It wraps a Tensor, and supports nearly all of operations defined on it. Once you finish your computation you can call .backward() and have all the gradients computed automatically.
You can access the raw tensor through the .data attribute, while the gradient w.r.t. this variable is accumulated into .grad.
If you want to compute the derivatives, you can call .backward() on a Variable.
End of explanation
x = torch.randn(3)
x = Variable(x, requires_grad = True)
y = x * 2
while y.data.norm() < 1000:
y = y * 2
y
gradients = torch.FloatTensor([0.1, 1.0, 0.0001])
y.backward(gradients)
x.grad
Explanation: You should have got a matrix of 4.5.
Because PyTorch is a dynamic computation framework, we can take the gradients of all kinds of interesting computations, even loops!
End of explanation
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 6, 5) # 1 input channel, 6 output channels, 5x5 kernel
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16*5*5, 120) # like keras' Dense()
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
x = F.max_pool2d(F.relu(self.conv2(x)), 2)
x = x.view(-1, self.num_flat_features(x))
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
def num_flat_features(self, x):
return reduce(operator.mul, x.size()[1:])
net = Net(); net
Explanation: Neural Networks
Neural networks can be constructed using the torch.nn package.
An nn.Module contains layers, and a method forward(input) that returns the output.
End of explanation
net.cuda();
params = list(net.parameters())
len(params), params[0].size()
Explanation: You just have to define the forward function, and the backward function (where gradients are computed) is automatically defined for you using autograd.
The learnable parameters of a model are returned by net.parameters()
End of explanation
input = Variable(torch.randn(1, 1, 32, 32)).cuda()
out = net(input); out
net.zero_grad() # zeroes the gradient buffers of all parameters
out.backward(torch.randn(1, 10).cuda()) # backprops with random gradients
Explanation: The input to the forward is a Variable, and so is the output.
End of explanation
output = net(input)
target = Variable(torch.range(1, 10)).cuda() # a dummy target, for example
loss = nn.MSELoss()(output, target); loss
Explanation: A loss function takes the (output, target) pair of inputs, and computes a value that estimates how far away the output is from the target. There are several different loss functions under the nn package. A simple loss is: nn.MSELoss which computes the mean-squared error between the input and the target.
End of explanation
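# A tiny worked example of what MSELoss computes: the mean of the squared differences
# (small CPU tensors, separate from the network above):
example_out = Variable(torch.Tensor([1.0, 2.0, 3.0]))
example_target = Variable(torch.Tensor([1.0, 2.0, 4.0]))
nn.MSELoss()(example_out, example_target) # ((0)**2 + (0)**2 + (1)**2) / 3 = 0.3333...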
# now we shall call loss.backward(), and have a look at gradients before and after
net.zero_grad() # zeroes the gradient buffers of all parameters
print('conv1.bias.grad before backward')
print(net.conv1.bias.grad)
loss.backward()
print('conv1.bias.grad after backward')
print(net.conv1.bias.grad)
optimizer = optim.SGD(net.parameters(), lr = 0.01)
# in your training loop:
optimizer.zero_grad() # zero the gradient buffers
output = net(input)
loss = nn.MSELoss()(output, target)
loss.backward()
optimizer.step() # Does the update
Explanation: Now, if you follow loss in the backward direction, using its .creator attribute, you will see a graph of computations that looks like this:
input -> conv2d -> relu -> maxpool2d -> conv2d -> relu -> maxpool2d
-> view -> linear -> relu -> linear -> relu -> linear
-> MSELoss
-> loss
So, when we call loss.backward(), the whole graph is differentiated w.r.t. the loss, and all Variables in the graph will have their .grad Variable accumulated with the gradient.
End of explanation
import torchvision
from torchvision import transforms, datasets
# The output of torchvision datasets are PILImage images of range [0, 1].
# We transform them to Tensors of normalized range [-1, 1]
transform=transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
trainset = datasets.CIFAR10(root='./data/cifar10', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=32,
shuffle=True, num_workers=2)
testset = datasets.CIFAR10(root='./data/cifar10', train=False, download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=32,
shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
def imshow(img):
plt.imshow(np.transpose((img / 2 + 0.5).numpy(), (1,2,0)))
# show some random training images
dataiter = iter(trainloader)
images, labels = dataiter.next()
# print images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('{}'.format(classes[labels[j]]) for j in range(4)))
Explanation: Example complete process
For vision, there is a package called torchvision, which
has data loaders for common datasets such as Imagenet, CIFAR10, MNIST, etc. and data transformers for images.
For this tutorial, we will use the CIFAR10 dataset.
Training an image classifier
We will do the following steps in order:
Load and normalize the CIFAR10 training and test datasets using torchvision
Define a Convolution Neural Network
Define a loss function
Train the network on the training data
Test the network on the test data
1. Loading and normalizing CIFAR10
Using torchvision, it's extremely easy to load CIFAR10.
End of explanation
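# Side note on the Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) transform used above:
# each channel value x in [0, 1] is mapped to (x - 0.5) / 0.5, i.e. into [-1, 1],
# which is why imshow() above undoes it with img / 2 + 0.5.
(np.array([0.0, 0.5, 1.0]) - 0.5) / 0.5 # -> array([-1., 0., 1.])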
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2,2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16*5*5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16*5*5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net().cuda()
Explanation: 2. Define a Convolution Neural Network
End of explanation
criterion = nn.CrossEntropyLoss().cuda()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
Explanation: 3. Define a Loss function and optimizer
End of explanation
for epoch in range(2): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
# wrap them in Variable
inputs, labels = Variable(inputs.cuda()), Variable(labels.cuda())
# forward + backward + optimize
optimizer.zero_grad()
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
running_loss += loss.data[0]
if i % 2000 == 1999: # print every 2000 mini-batches
print('[{}, {}] loss: {}'.format(epoch+1, i+1, running_loss / 2000))
running_loss = 0.0
Explanation: 4. Train the network
This is when things start to get interesting.
We simply have to loop over our data iterator, and feed the inputs to
the network and optimize
End of explanation
dataiter = iter(testloader)
images, labels = dataiter.next()
# print images
imshow(torchvision.utils.make_grid(images))
' '.join('{}'.format(classes[labels[j]]) for j in range(4))
Explanation: We will check what the model has learned by predicting the class label, and checking it against the ground-truth. If the prediction is correct, we add the sample to the list of correct predictions.
First, let's display an image from the test set to get familiar.
End of explanation
outputs = net(Variable(images).cuda())
_, predicted = torch.max(outputs.data, 1)
# ' '.join('%5s'% classes[predicted[j][0]] for j in range(4)) # - "'int' object is not subscriptable" issue
' '.join('{}'.format(classes[predicted[j]]) for j in range(4))
Explanation: Okay, now let us see what the neural network thinks these examples above are:
End of explanation
correct,total = 0,0
for data in testloader:
images, labels = data
outputs = net(Variable(images).cuda())
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels.cuda()).sum()
print('Accuracy of the network on the 10000 test images: {} %'.format(100 * correct / total))
Explanation: The results seem pretty good. Let us look at how the network performs on the whole dataset.
End of explanation |
1,729 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Last updated
Step1: 1. Loading data
More details, see http
Step2: From a local text file
Let's first load some temperature data which covers all latitudes. Since read_table is supposed to do the job for a text file, let's just try it
Step3: There is only 1 column! Let's try again stating that values are separated by any number of spaces
Step4: There are columns but the column names are 1880 and -0.1591!
Step5: Since we only have 2 columns, one of which (the year of the record) would be nicer to use for accessing the data, let's try using the index_col option
Step6: Last step
Step7: From a chunked file
Since every dataset can contain mistakes, let's load a different file with temperature data. NASA's GISS dataset is written in chunks
Step8: QUIZ
Step9: From a remote text file
So far, we have only loaded temperature datasets. Climate change also affects the sea levels on the globe. Let's load some datasets with the sea levels. The University of Colorado posts updated timeseries for mean sea level globally, per
hemisphere, or even per ocean, sea, ... Let's download the global one, and the ones for the northern and southern hemisphere.
That will also illustrate that to load text files that are online, there is no more work than replacing the filepath by a URL in read_table
Step10: There are clearly lots of cleanup to be done on these datasets. See below...
From a local or remote HTML file
To be able to grab more local data about mean sea levels, we can download and extract data about mean sea level stations around the world from the PSMSL (http
Step11: That table can be used to search for a station in a region of the world we choose, extract an ID for it and download the corresponding time series with the URL http
Step12: Descriptors for the vertical axis (axis=0)
Step13: Descriptors for the horizontal axis (axis=1)
Step14: A lot of information at once including memory usage
Step15: Series, the pandas 1D structure
A series can be constructed with the pd.Series constructor (passing a list or array of values) or from a DataFrame, by extracting one of its columns.
Step16: Core attributes/information
Step17: Probably the most important attribute of a Series or DataFrame is its index since we will use that to, well, index into the structures to access the information
Step18: NumPy arrays as backend of Pandas
It is always possible to fall back to a good old NumPy array to pass on to scientific libraries that need them
Step19: Creating new DataFrames manually
DataFrames can also be created manually, by grouping several Series together. Let's make a new frame from the 3 sea level datasets we downloaded above. They will be displayed along the same index. Wait, does it make sense to do that?
Step20: So the northern hemisphere and southern hemisphere datasets are aligned. What about the global one?
Step21: For now, let's just build a DataFrame with the 2 hemisphere datasets then. We will come back to add the global one later...
Step22: Note
Step23: Now the fact that it is failing shows that Pandas does auto-alignment of values
Step24: 3. Cleaning and formatting data
The datasets that we obtain straight from the reading functions are pretty raw. A lot of pre-processing can be done during data read but we haven't used all the power of the reading functions. Let's learn to do a lot of cleaning and formatting of the data.
The GISS temperature dataset has a lot of issues too
Step25: We can also rename an index by setting its name. For example, the index of the mean_sea_level dataFrame could be called date since it contains more than just the year
Step26: Setting missing values
In the full globe dataset, -999.00 was used to indicate that there was no value for that year. Let's search for all these values and replace them with the missing value that Pandas understand
Step27: Choosing what is the index
Step28: Dropping rows and columns
Step29: Let's also set **** to a real missing value (np.nan). We can often do it using a boolean mask, but that may trigger pandas warning. Another way to assign based on a boolean condition is to use the where method
Step30: Adding columns
While building the mean_sea_level dataFrame earlier, we didn't include the values from global_sea_level since the years were not aligned. Adding a column to a dataframe is as easy as adding an entry to a dictionary. So let's try
Step31: The column is full of NaNs again because the auto-alignment feature of Pandas is searching for the index values like 1992.9323 in the index of global_sea_level["msl_ib_ns(mm)"] series and not finding them. Let's set its index to these years so that that auto-alignment can work for us and figure out which values we have and not
Step32: EXERCISE
Step33: Changing dtype of series
Now that the sea levels are looking pretty good, let's got back to the GISS temperature dataset. Because of the labels (strings) found in the middle of the timeseries, every column only assumed to contain strings (didn't convert them to floating point values)
Step34: That can be changed after the fact (and after the cleanup) with the astype method of a Series
Step35: An index has a dtype just like any Series and that can be changed after the fact too.
Step36: For now, let's change it to an integer so that values can at least be compared properly. We will learn below to change it to a datetime object.
Step37: Removing missing values
Removing missing values - once they have been converted to np.nan - is very easy. Entries that contain missing values can be removed (dropped), or filled with many strategies.
Step38: Let's also mention the .interpolate method on a Series
Step39: For now, we will leave the missing values in all our datasets, because it wouldn't be meaningful to fill them.
EXERCISE
Step40: Showing distributions information
Step41: QUIZ
Step42: Correlations
There are more plot options inside pandas.tools.plotting
Step43: We will confirm the correlations we think we see further down...
EXERCISE
Step44: In a dataframe
Step45: More complex queries rely on the same concepts. For example what are the names, and IDs of the sea level stations in the USA?
Step46: 6. Working with dates and times
More details at http
Step47: The advantage of having a real datetime index is that operations can be done efficiently on it. Let's add a flag to signal if the value is before or after the great depression's black Friday
Step48: Timestamps or periods?
Step49: See also to_timestamp to conver back to timestamps and its how method to specify when inside the range to set the timestamp.
Resampling
Another thing that can be done is to resample the series, downsample or upsample. Let's see the series converted to 10 year blocks or upscale to a monthly series
Step50: Generating DatetimeIndex objects
The index for giss_temp isn't an instance of datetimes so we may want to generate such DatetimeIndex objects. This can be done with date_range and period_range
Step51: Note that "A" by default means the end of the year. Other times in the year can be specified with "AS" (start), "A-JAN" or "A-JUN". Even more options can be imported from pandas.tseries.offsets
Step52: Actually we will convert that dataset to a 1D dataset, and build a monthly index, so lets build a monthly period index
Step53: 7. Transforming datasets
Step54: Apply
Step55: This apply method is very powerful and general. We have used it to do something we could have done with astype, but any custom function can be provided to apply.
Step56: EXERCISE
Step57: Now that we know the range of dates, to look at the data, sorting it following the dates is done with sort
Step58: Since many stations last updated on the same dates, it is logical to want to sort further, for example, by Country at constant date
Step59: Stack and unstack
Let's look at the GISS dataset differently. Instead of seeing the months along the axis 1, and the years along the axis 0, it would could be good to convert these into an outer and an inner axis along only 1 time dimension.
Stacking and unstacking allows to convert a dataframe into a series and vice-versa
Step60: The result is grouped in the wrong order since it sorts first the axis that was unstacked. Another transformation that would help us is transposing...
Step61: A side note
Step62: But this new multi-index isn't very good, because is it not viewed as 1 date, just as a tuple of values
Step63: To improve on this, let's reuse an index we generated above with date_range
Step64: 8. Statistical analysis
Descriptive statistics
Let's go back to the dataframe version of the GISS temperature dataset temporarily to analyze anomalies month per month. Like most functions on a dataframe, stats functions are computed column per column. They also ignore missing values
Step65: It is possible to apply stats functions across rows instead of columns using the axis keyword (just like in NumPy).
Step66: describe provides many descriptive stats computed at once
Step67: Rolling statistics
Let's remove high frequency signal and extract the trend
Step68: Describing categorical series
Let's look at our local_sea_level_stations dataset some more
Step69: .describe() only displays information about continuous Series. What about categorical ones?
Step70: We can also create categorical series from continuous ones with the cut function
Step71: QUIZ
Step72: What kind of object did we create?
Step73: What to do with that strange GroupBy object? We can first loop over it to get the labels and the sub-dataframes for each group
Step74: We could have done the same with less effort by grouping by the result of a custom function applied to the index. Let's reset the dataframe
Step75: So that we can do the groupby on the index
Step76: Something else that can be done with such an object is to look at its groups attribute to see the labels mapped to the rows involved
Step77: How to aggregate the results of this grouping depends on what we want to see
Step78: Another possibility is to transform each group separately, rather than aggregate. For example, here we group over decades and subtract to each value, the average over that decade
Step79: Pivot_table
Pivot table also allows to summarize the information, allowing to convert repeating columns into axes. For example, let's say that we would like to know how many sea level stations are in various european countries. And we would like to group the answers into 2 categories
Step80: The columns of our future table should have 2 values, whether the station was updated recently or not. Let's build a column to store that information
Step81: Finally, what value will be displayed inside the table. The values should be extracted from a column, pivot_table allowing an aggregation function to be applied when more than 1 value is found for a given case. Each station should count for 1, and we could aggregate multiple stations by summing these ones
Step82: QUIZ
Step83: EXERCISE
Step84: EXERCISE
Step85: Note
Step86: OLS
There are 2 objects constructors inside Pandas and inside statsmodels. There has been talks about merging the 2 into SM, but that hasn't happened yet. OLS in statsmodels allows more complex formulas
Step87: OLS in pandas requires to pass a y series and an x series to do a fit of the form y ~ x. But the formula can be more complex by providing a DataFrame for x and reproduce a formula of the form y ~ x1 + x2.
Also, OLS in pandas allows to do rolling and expanding OLS
Step89: An interlude
Step90: Now, how to align the 2 series? Is this one sampled regularly so that the month temperatures can be upscaled to that frequency?
Computing the difference between successive values
What is the frequency of that new index?
Step91: IMPORTANT Note
Step92: The alignment can even be done on an entire dataframe
Step93: Correlations between sea levels and temperatures
Step94: What if we had done the analysis yearly instead of monthly to remove seasonal variations?
Step95: 11. Predictions from auto regression models
An auto-regresssive model fits existing data and build a (potentially predictive) model of the data fitted. We use the timeseries analysis (tsa) submodule of statsmodels to make out-of-sample predictions for the upcoming decades
Step96: EXERCISE | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from pandas import set_option
set_option("display.max_rows", 16)
LARGE_FIGSIZE = (12, 8)
# Change this cell to the demo location on YOUR machine
%cd 'D:\\Git\\Pandas_Tutorial\\demos\\climate_timeseries'
%ls
Explanation: Last updated: Jul 06 2015
Climate data exploration: a journey through Pandas
Welcome to a demo of Python's data analysis package called Pandas. Our goal is to learn about Data Analysis and transformation using Pandas while exploring datasets used to analyze climate change.
The story
The global goal of this demo is to provide the tools to be able to try and reproduce some of the analysis done in the IPCC global climate reports published in the last decade (see for example https://www.ipcc.ch/pdf/assessment-report/ar5/syr/SYR_AR5_FINAL_full.pdf).
We are first going to load a few public datasets containing information about global temperatures, global and local sea levels, and global concentrations of greenhouse gases like CO2, to see if there are correlations and how the trends are to evolve, assuming no fundamental change in the system. For all these datasets, we will download them, visualize them, clean them, search through them, merge them, resample them, transform them and summarize them.
In the process, we will learn about:
1. Loading data
2. Pandas datastructures
3. Cleaning and formatting data
4. Basic visualization
5. Accessing data
6. Working with dates and times
7. Transforming datasets
8. Statistical analysis
9. Data agregation and summarization
10. Correlations and regressions
11. Predictions from auto regression models
Some initial setup
End of explanation
#pd.read_<TAB>
pd.read_table?
Explanation: 1. Loading data
More details, see http://pandas.pydata.org/pandas-docs/stable/io.html
To find all reading functions in pandas, ask ipython's tab completion:
End of explanation
filename = "data/temperatures/annual.land_ocean.90S.90N.df_1901-2000mean.dat"
full_globe_temp = pd.read_table(filename)
full_globe_temp
Explanation: From a local text file
Let's first load some temperature data which covers all latitudes. Since read_table is supposed to do the job for a text file, let's just try it:
End of explanation
full_globe_temp = pd.read_table(filename, sep="\s+")
full_globe_temp
Explanation: There is only 1 column! Let's try again stating that values are separated by any number of spaces:
End of explanation
full_globe_temp = pd.read_table(filename, sep="\s+", names=["year", "mean temp"])
full_globe_temp
Explanation: There are columns but the column names are 1880 and -0.1591!
End of explanation
full_globe_temp = pd.read_table(filename, sep="\s+", names=["year", "mean temp"],
index_col=0)
full_globe_temp
Explanation: Since we only have 2 columns, one of which would be nicer to access the data (the year of the record), let's try using the index_col option:
End of explanation
full_globe_temp = pd.read_table(filename, sep="\s+", names=["year", "mean temp"],
index_col=0, parse_dates=True)
full_globe_temp
Explanation: Last step: the index is made of dates. Let's make that explicit:
End of explanation
giss_temp = pd.read_table("data/temperatures/GLB.Ts+dSST.txt", sep="\s+", skiprows=7,
skip_footer=11, engine="python")
giss_temp
Explanation: From a chunked file
Since every dataset can contain mistakes, let's load a different file with temperature data. NASA's GISS dataset is written in chunks: look at it in data/temperatures/GLB.Ts+dSST.txt
End of explanation
# Your code here
Explanation: QUIZ: What happens if you remove the skiprows? skipfooter? engine?
EXERCISE: Load some readings of CO2 concentrations in the atmosphere from the data/greenhouse_gaz/co2_mm_global.txt data file.
End of explanation
# Local backup: data/sea_levels/sl_nh.txt
northern_sea_level = pd.read_table("http://sealevel.colorado.edu/files/current/sl_nh.txt",
sep="\s+")
northern_sea_level
# Local backup: data/sea_levels/sl_sh.txt
southern_sea_level = pd.read_table("http://sealevel.colorado.edu/files/current/sl_sh.txt",
sep="\s+")
southern_sea_level
# The 2015 version of the global dataset:
# Local backup: data/sea_levels/sl_ns_global.txt
url = "http://sealevel.colorado.edu/files/2015_rel2/sl_ns_global.txt"
global_sea_level = pd.read_table(url, sep="\s+")
global_sea_level
Explanation: From a remote text file
So far, we have only loaded temperature datasets. Climate change also affects the sea levels on the globe. Let's load some datasets with the sea levels. The University of Colorado posts updated timeseries for mean sea level globally, per
hemisphere, or even per ocean, sea, ... Let's download the global one, and the ones for the northern and southern hemisphere.
That will also illustrate that loading text files that are online takes no more work than replacing the filepath with a URL in read_table:
End of explanation
# Needs `lxml`, `beautifulSoup4` and `html5lib` python packages
# Local backup in data/sea_levels/Obtaining Tide Gauge Data.html
table_list = pd.read_html("http://www.psmsl.org/data/obtaining/")
# there is 1 table on that page which contains metadata about the stations where
# sea levels are recorded
local_sea_level_stations = table_list[0]
local_sea_level_stations
Explanation: There are clearly lots of cleanup to be done on these datasets. See below...
From a local or remote HTML file
To be able to grab more local data about mean sea levels, we can download and extract data about mean sea level stations around the world from the PSMSL (http://www.psmsl.org/). Again to download and parse all tables in a webpage, just give read_html the URL to parse:
End of explanation
# Type of the object?
type(giss_temp)
# Internal nature of the object
print(giss_temp.shape)
print(giss_temp.dtypes)
Explanation: That table can be used to search for a station in a region of the world we choose, extract an ID for it and download the corresponding time series with the URL http://www.psmsl.org/data/obtaining/met.monthly.data/< ID >.metdata
2. Pandas DataStructures
For more details, see http://pandas.pydata.org/pandas-docs/stable/dsintro.html
Now that we have used read_** functions to load datasets, we need to understand better what kind of objects we got from them to learn to work with them.
DataFrame, the pandas 2D structure
End of explanation
giss_temp.index
Explanation: Descriptors for the vertical axis (axis=0)
End of explanation
giss_temp.columns
Explanation: Descriptors for the horizontal axis (axis=1)
End of explanation
giss_temp.info()
Explanation: A lot of information at once including memory usage:
End of explanation
# Do we already have a series for the full_globe_temp?
type(full_globe_temp)
# Since there is only one column of values, we can make this a Series without
# losing information:
full_globe_temp = full_globe_temp["mean temp"]
Explanation: Series, the pandas 1D structure
A series can be constructed with the pd.Series constructor (passing a list or array of values) or from a DataFrame, by extracting one of its columns.
End of explanation
print(type(full_globe_temp))
print(full_globe_temp.dtype)
print(full_globe_temp.shape)
print(full_globe_temp.nbytes)
Explanation: Core attributes/information:
End of explanation
full_globe_temp.index
Explanation: Probably the most important attribute of a Series or DataFrame is its index since we will use that to, well, index into the structures to access the information:
End of explanation
full_globe_temp.values
type(full_globe_temp.values)
Explanation: NumPy arrays as backend of Pandas
It is always possible to fall back to a good old NumPy array to pass on to scientific libraries that need them: SciPy, scikit-learn, ...
End of explanation
# Are they aligned?
southern_sea_level.year == northern_sea_level.year
# So, are they aligned?
np.all(southern_sea_level.year == northern_sea_level.year)
Explanation: Creating new DataFrames manually
DataFrames can also be created manually, by grouping several Series together. Let's make a new frame from the 3 sea level datasets we downloaded above. They will be displayed along the same index. Wait, does it make sense to do that?
End of explanation
len(global_sea_level.year) == len(northern_sea_level.year)
Explanation: So the northern hemisphere and southern hemisphere datasets are aligned. What about the global one?
End of explanation
mean_sea_level = pd.DataFrame({"northern_hem": northern_sea_level["msl_ib(mm)"],
"southern_hem": southern_sea_level["msl_ib(mm)"],
"date": northern_sea_level.year})
mean_sea_level
Explanation: For now, let's just build a DataFrame with the 2 hemisphere datasets then. We will come back to add the global one later...
End of explanation
mean_sea_level = pd.DataFrame({"northern_hem": northern_sea_level["msl_ib(mm)"],
"southern_hem": southern_sea_level["msl_ib(mm)"]},
index = northern_sea_level.year)
mean_sea_level
Explanation: Note: there are other ways to create DataFrames manually, for example from a 2D numpy array.
There is still the date in a regular column and a numerical index that is not that meaningful. We can specify the index of a DataFrame at creation. Let's try:
End of explanation
mean_sea_level = pd.DataFrame({"northern_hem": northern_sea_level["msl_ib(mm)"].values,
"southern_hem": southern_sea_level["msl_ib(mm)"].values},
index = northern_sea_level.year)
mean_sea_level
Explanation: Now the fact that it is failing shows that Pandas does auto-alignment of values: for each value of the index, it searches for a value in each Series that maps to the same value. Since these series have a dumb numerical index, no values are found.
Since we know that the order of the values matches the index we chose, we can replace the Series by their values only at creation of the DataFrame:
End of explanation
# The columns of the local_sea_level_stations aren't clean: they contain spaces and dots.
local_sea_level_stations.columns
# Let's clean them up a bit:
local_sea_level_stations.columns = [name.strip().replace(".", "")
for name in local_sea_level_stations.columns]
local_sea_level_stations.columns
Explanation: 3. Cleaning and formatting data
The datasets that we obtain straight from the reading functions are pretty raw. A lot of pre-processing can be done during data read but we haven't used all the power of the reading functions. Let's learn to do a lot of cleaning and formatting of the data.
The GISS temperature dataset has a lot of issues too: useless numerical index, redundant columns, useless rows, placeholder (****) for missing values, and wrong type for the columns. Let's fix all this:
Renaming columns
End of explanation
mean_sea_level.index.name = "date"
mean_sea_level
Explanation: We can also rename an index by setting its name. For example, the index of the mean_sea_level dataFrame could be called date since it contains more than just the year:
End of explanation
full_globe_temp == -999.000
full_globe_temp[full_globe_temp == -999.000] = np.nan
full_globe_temp.tail()
Explanation: Setting missing values
In the full globe dataset, -999.00 was used to indicate that there was no value for that year. Let's search for all these values and replace them with the missing value that Pandas understands: np.nan
End of explanation
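As a quick aside, Series.replace gives the same cleanup in a single call; this is just an equivalent sketch of the step above, and nothing later depends on it.
# Equivalent to the boolean-mask assignment in the previous cell
full_globe_temp = full_globe_temp.replace(-999.000, np.nan)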
# We didn't set a column number of the index of giss_temp, we can do that afterwards:
giss_temp = giss_temp.set_index("Year")
giss_temp.head()
Explanation: Choosing what is the index
End of explanation
# 1 column is redundant with the index:
giss_temp.columns
# Let's drop it:
giss_temp = giss_temp.drop("Year.1", axis=1)
giss_temp
# We can also just select the columns we want to keep:
giss_temp = giss_temp[[u'Jan', u'Feb', u'Mar', u'Apr', u'May', u'Jun', u'Jul',
u'Aug', u'Sep', u'Oct', u'Nov', u'Dec']]
giss_temp
# Let's remove all these extra column names (Year Jan ...). They all correspond to the index "Year"
giss_temp = giss_temp.drop("Year")
giss_temp
Explanation: Dropping rows and columns
End of explanation
#giss_temp[giss_temp == "****"] = np.nan
giss_temp = giss_temp.where(giss_temp != "****", np.nan)
giss_temp.tail()
Explanation: Let's also set **** to a real missing value (np.nan). We can often do it using a boolean mask, but that may trigger a pandas warning. Another way to assign based on a boolean condition is to use the where method:
End of explanation
mean_sea_level["mean_global"] = global_sea_level["msl_ib_ns(mm)"]
mean_sea_level
Explanation: Adding columns
While building the mean_sea_level dataFrame earlier, we didn't include the values from global_sea_level since the years were not aligned. Adding a column to a dataframe is as easy as adding an entry to a dictionary. So let's try:
End of explanation
global_sea_level = global_sea_level.set_index("year")
global_sea_level["msl_ib_ns(mm)"]
mean_sea_level["mean_global"] = global_sea_level["msl_ib_ns(mm)"]
mean_sea_level
Explanation: The column is full of NaNs again because the auto-alignment feature of Pandas is searching for the index values like 1992.9323 in the index of the global_sea_level["msl_ib_ns(mm)"] series and not finding them. Let's set its index to these years so that auto-alignment can work for us and figure out which values we have and which we don't:
End of explanation
# Your code here
Explanation: EXERCISE: Create a new series containing the average of the 2 hemispheres minus the global value to see if that is close to 0. Work inside the mean_sea_level dataframe first. Then try with the original Series to see what happens with data alignment while doing computations.
End of explanation
giss_temp.dtypes
Explanation: Changing dtype of series
Now that the sea levels are looking pretty good, let's go back to the GISS temperature dataset. Because of the labels (strings) found in the middle of the timeseries, every column was assumed to contain only strings (pandas didn't convert them to floating point values):
End of explanation
giss_temp["Jan"].astype("float32")
for col in giss_temp.columns:
giss_temp.loc[:, col] = giss_temp[col].astype(np.float32)
Explanation: That can be changed after the fact (and after the cleanup) with the astype method of a Series:
End of explanation
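When every column should end up with the same dtype, the whole DataFrame can also be cast in one call; a small sketch equivalent to the column-by-column loop above, assuming all remaining values are numeric or NaN.
# One-shot alternative to the loop above
giss_temp = giss_temp.astype(np.float32)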
giss_temp.index.dtype
Explanation: An index has a dtype just like any Series and that can be changed after the fact too.
End of explanation
giss_temp.index = giss_temp.index.astype(np.int32)
Explanation: For now, let's change it to an integer so that values can at least be compared properly. We will learn below to change it to a datetime object.
End of explanation
full_globe_temp
full_globe_temp.dropna()
# This will remove any year that has a missing value. Use how='all' to keep partial years
giss_temp.dropna(how="any").tail()
giss_temp.fillna(value=0).tail()
# This fills them with the previous year. See also temp3.interpolate
giss_temp.fillna(method="ffill").tail()
Explanation: Removing missing values
Removing missing values - once they have been converted to np.nan - is very easy. Entries that contain missing values can be removed (dropped), or filled with many strategies.
End of explanation
giss_temp.Aug.interpolate().tail()
Explanation: Let's also mention the .interpolate method on a Series:
End of explanation
full_globe_temp.plot()
giss_temp.plot(figsize=LARGE_FIGSIZE)
mean_sea_level.plot(subplots=True, figsize=(16, 12));
Explanation: For now, we will leave the missing values in all our datasets, because it wouldn't be meaningful to fill them.
EXERCISE: Go back to the reading functions, and learn more about other options that could have allowed us to fold some of these pre-processing steps into the data loading.
4. Basic visualization
Now that the datasets have been formatted, visualizing them is the next logical step and is trivial with Pandas. The first thing to try is to invoke the .plot method to generate a basic visualization (it uses matplotlib under the covers).
Line plots
End of explanation
# Distributions of mean sean level globally and per hemisphere?
mean_sea_level.plot(kind="kde", figsize=(12, 8))
Explanation: Showing distributions information
End of explanation
# Distributions of temperature in each month since 1880
giss_temp.boxplot();
Explanation: QUIZ: How would you list the possible kinds of plots that the plot method supports?
End of explanation
# Is there correlations between the northern and southern sea level timeseries we loaded?
from pandas.tools.plotting import scatter_matrix
scatter_matrix(mean_sea_level, figsize=LARGE_FIGSIZE);
Explanation: Correlations
There are more plot options inside pandas.tools.plotting:
End of explanation
full_globe_temp
# By default [] on a series accesses values using the index, not the location in the series
# print(temp1[0]) # This would fail!!
# This index is non-trivial though (will talk more about these datetime objects further down):
full_globe_temp.index.dtype
first_date = full_globe_temp.index[0]
first_date == pd.Timestamp('1880')
# By default [] on a series accesses values using the index, not the location in the series
print(full_globe_temp[pd.Timestamp('1880')])
# print(temp1[0]) # This would fail!!
# Another more explicit way to do the same thing is to use loc
print(full_globe_temp.loc[pd.Timestamp('1990')])
print(full_globe_temp.iloc[0], full_globe_temp.iloc[-1])
# Year of the last record?
full_globe_temp.index[-1]
# New records can be added:
full_globe_temp[pd.Timestamp('2011')] = np.nan
Explanation: We will confirm the correlations we think we see further down...
EXERCISE: Refer to exercises/aapl_adj_close_plot/aapl_adj_close_plot.ipynb
5. Accessing data
The general philosophy for accessing values inside a Pandas datastructure is that, unlike a numpy array that only allows to index using integers a Series allows to index with the values inside the index. That makes the code more readable.
In a series
End of explanation
# In 2D, same idea, though in a DF [] accesses columns (Series)
giss_temp["Jan"]
# while .loc and .iloc allow to access individual values, slices or masked selections:
print(giss_temp.loc[1979, "Dec"])
# Slicing can be done with .loc and .iloc
print(giss_temp.loc[1979, "Jan":"Jun"]) # Note that the end point is included unlike NumPy!!!
print(giss_temp.loc[1979, ::2])
# Masking can also be used in one or more dimensions. For example, another way to grab every other month for the first year:
mask = [True, False] * 6
print(giss_temp.iloc[0, mask])
print(giss_temp.loc[1880, mask])
# We could also add a new column like a new entry in a dictionary
giss_temp["totals"] = giss_temp.sum(axis=1)
giss_temp
# Let's remove this new column, we will learn to do this differently
giss_temp = giss_temp.drop("totals", axis=1)
Explanation: In a dataframe
End of explanation
local_sea_level_stations.columns
american_stations = local_sea_level_stations["Country"] == "USA"
local_sea_level_stations.loc[american_stations, ["ID", "Station Name"]]
Explanation: More complex queries rely on the same concepts. For example what are the names, and IDs of the sea level stations in the USA?
End of explanation
# Its dtype is NumPy's new 'datetime64[ns]':
full_globe_temp.index.dtype
Explanation: 6. Working with dates and times
More details at http://pandas.pydata.org/pandas-docs/stable/timeseries.html
Let's work some more with full_globe_temp's index since we saw it is special.
End of explanation
black_friday = pd.to_datetime('1929-10-29')
full_globe_temp.index > black_friday
Explanation: The advantage of having a real datetime index is that operations can be done efficiently on it. Let's add a flag to signal if the value is before or after the great depression's black Friday:
End of explanation
# Convert its index from timestamp to period: it is more meaningful since it was measured and averaged over the year...
full_globe_temp.index = full_globe_temp.index.to_period()
full_globe_temp
Explanation: Timestamps or periods?
End of explanation
# Frequencies can be specified as strings: "us", "ms", "S", "T", "H", "D", "B", "W", "M", "A", "3min", "2h20", ...
# More aliases at http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases
full_globe_temp.resample("M")
full_globe_temp.resample("10A", how="mean")
Explanation: See also to_timestamp to convert back to timestamps, and its how argument to specify where inside the range to set the timestamp.
Resampling
Another thing that can be done is to resample the series, either downsampling or upsampling. Let's see the series converted to 10-year blocks or upscaled to a monthly series:
End of explanation
# Can specify a start date and a number of values desired. By default it will assume an interval of 1 day:
pd.date_range('1/1/1880', periods=4)
# Can also specify a start and a stop date, as well as a frequency
pd.date_range('1/1/1880', '1/1/2016', freq="A")
Explanation: Generating DatetimeIndex objects
The index for giss_temp isn't an instance of datetimes so we may want to generate such DatetimeIndex objects. This can be done with date_range and period_range:
End of explanation
from pandas.tseries.offsets import YearBegin
pd.date_range('1/1/1880', '1/1/2015', freq=YearBegin())
Explanation: Note that "A" by default means the end of the year. Other times in the year can be specified with "AS" (start), "A-JAN" or "A-JUN". Even more options can be imported from pandas.tseries.offsets:
End of explanation
giss_temp_index = pd.period_range('1/1/1880', '12/1/2015', freq="M")
giss_temp_index
Explanation: Actually we will convert that dataset to a 1D dataset and build a monthly index, so let's build a monthly period index:
End of explanation
# What about the range of dates?
local_sea_level_stations["Date"].min(), local_sea_level_stations["Date"].max(), local_sea_level_stations["Date"].iloc[-1]
local_sea_level_stations.dtypes
Explanation: 7. Transforming datasets: apply, sort, stack/unstack and transpose
Let's look at our local_sea_level_stations dataset some more, to learn more about it and also do some formatting. What is the range of dates and latitudes we have, the list of countries, the range of variations, ...
End of explanation
local_sea_level_stations["Date"].apply(pd.to_datetime)
Explanation: Apply: transforming Series
We don't see the range of dates because the dates are of dtype "Object", (usually meaning strings). Let's convert that using apply:
End of explanation
local_sea_level_stations["Date"] = local_sea_level_stations["Date"].apply(pd.to_datetime)
# Now we can really compare the dates, and therefore get a real range:
print(local_sea_level_stations["Date"].min(), local_sea_level_stations["Date"].max())
Explanation: This apply method is very powerful and general. We have used it to do something we could have done with astype, but any custom function can be provided to apply.
End of explanation
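For instance, now that the Date column holds real timestamps, any custom function (here a plain lambda, which assumes every station has a date) can be applied element-wise; a small illustration the rest of the analysis doesn't rely on.
# Extract just the year of the last update for each station
local_sea_level_stations["Date"].apply(lambda d: d.year)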
# Your code here
Explanation: EXERCISE: Use the apply method to search through the station names for a station in New York. What is the ID of the station?
End of explanation
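One possible sketch for this exercise (hedged: it uses a case-insensitive substring match so the exact capitalization of the station names doesn't matter).
ny_mask = local_sea_level_stations["Station Name"].apply(lambda name: "new york" in str(name).lower())
local_sea_level_stations.loc[ny_mask, ["ID", "Station Name", "Date"]]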
local_sea_level_stations.sort("Date")
Explanation: Now that we know the range of dates, we can look at the data sorted by date, which is done with sort:
End of explanation
local_sea_level_stations.sort(["Date", "Country"], ascending=False)
Explanation: Since many stations last updated on the same dates, it is logical to want to sort further, for example, by Country at constant date:
End of explanation
giss_temp.unstack?
unstacked = giss_temp.unstack()
unstacked
# Note the nature of the result:
type(unstacked)
Explanation: Stack and unstack
Let's look at the GISS dataset differently. Instead of seeing the months along axis 1 and the years along axis 0, it would be good to convert these into an outer and an inner axis along only 1 time dimension.
Stacking and unstacking allows to convert a dataframe into a series and vice-versa:
End of explanation
giss_temp.transpose()
giss_temp_series = giss_temp.transpose().unstack()
giss_temp_series.name = "Temp anomaly"
giss_temp_series
Explanation: The result is grouped in the wrong order since it sorts first the axis that was unstacked. Another transformation that would help us is transposing...
End of explanation
# Note the nature of the resulting index:
giss_temp_series.index
# It is an index made of 2 columns. Let's fix the fact that one of them doesn't have a name:
giss_temp_series.index = giss_temp_series.index.set_names(["year", "month"])
# We can now access deviations by specifying the year and month:
giss_temp_series[1980, "Jan"]
Explanation: A side note: Multi-indexes
End of explanation
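While we still have this multi-index, note that partial indexing works too: selecting only the outer level returns the 12 monthly values for that year (a quick illustration before we replace the index below).
giss_temp_series[1980]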
giss_temp_series.plot(figsize=LARGE_FIGSIZE)
Explanation: But this new multi-index isn't very good, because it is not viewed as 1 date, just as a tuple of values:
End of explanation
giss_temp_series.index = giss_temp_index
giss_temp_series.plot(figsize=LARGE_FIGSIZE)
Explanation: To improve on this, let's reuse an index we generated above with date_range:
End of explanation
monthly_averages = giss_temp.mean()
monthly_averages
Explanation: 8. Statistical analysis
Descriptive statistics
Let's go back to the dataframe version of the GISS temperature dataset temporarily to analyze anomalies month per month. Like most functions on a dataframe, stats functions are computed column per column. They also ignore missing values:
End of explanation
yearly_averages = giss_temp.mean(axis=1)
yearly_averages
Explanation: It is possible to apply stats functions across rows instead of columns using the axis keyword (just like in NumPy).
End of explanation
mean_sea_level.describe()
Explanation: describe provides many descriptive stats computed at once:
End of explanation
full_globe_temp.plot()
pd.rolling_mean(full_globe_temp, 10).plot(figsize=LARGE_FIGSIZE)
# To see what all can be done while rolling,
#pd.rolling_<TAB>
Explanation: Rolling statistics
Let's remove high frequency signal and extract the trend:
End of explanation
local_sea_level_stations.describe()
Explanation: Describing categorical series
Let's look at our local_sea_level_stations dataset some more:
End of explanation
local_sea_level_stations.columns
local_sea_level_stations["Country"]
local_sea_level_stations["Country"].describe()
# List of unique values:
local_sea_level_stations["Country"].unique()
local_sea_level_stations["Country"].value_counts()
# To save memory, we can convert it to a categorical column:
local_sea_level_stations["Country"] = local_sea_level_stations["Country"].astype("category")
Explanation: .describe() only displays information about continuous Series. What about categorical ones?
End of explanation
categorized = pd.cut(full_globe_temp, 3, labels=["L", "M", "H"])
categorized
# The advantage is that we can use labels and control the order they should be treated in (L < M < H)
categorized.cat.categories
Explanation: We can also create categorical series from continuous ones with the cut function:
End of explanation
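A related helper is qcut, which splits on quantiles instead of equal-width bins so that each category holds roughly the same number of observations; shown here only as a sketch.
pd.qcut(full_globe_temp.dropna(), 3, labels=["L", "M", "H"])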
mean_sea_level
mean_sea_level = mean_sea_level.reset_index()
mean_sea_level
# Groupby with pandas can be done on a column or by applying a custom function to the index.
# If we want to group the data by year, we can build a year column into the DF:
mean_sea_level["year"] = mean_sea_level["date"].apply(int)
mean_sea_level
sl_grouped_year = mean_sea_level.groupby("year")
Explanation: QUIZ: How much memory did we save? What if it was categorized but with dtype object instead of category?
9. Data Aggregation/summarization
Now that we have a good grasp on our datasets, let's transform and analyze them some more to prepare them for comparison. The 2 functionalities to learn about here are groupby and pivot_table.
GroupBy
Let's explore the sea levels, first splitting into calendar years to compute average sea levels for each year:
End of explanation
type(sl_grouped_year)
Explanation: What kind of object did we create?
End of explanation
for group_name, subdf in sl_grouped_year:
print(group_name)
print(subdf)
print("")
Explanation: What to do with that strange GroupBy object? We can first loop over it to get the labels and the sub-dataframes for each group:
End of explanation
mean_sea_level = mean_sea_level.drop(["year"], axis=1).set_index("date")
Explanation: We could have done the same with less effort by grouping by the result of a custom function applied to the index. Let's reset the dataframe:
End of explanation
sl_grouped_year = mean_sea_level.groupby(int)
Explanation: So that we can do the groupby on the index:
End of explanation
sl_grouped_year.groups
Explanation: Something else that can be done with such an object is to look at its groups attribute to see the labels mapped to the rows involved:
End of explanation
sl_grouped_year.mean()
# We can apply any other reduction function or even a dict of functions using aggregate:
sl_grouped_year.aggregate({"mean_global": np.std})
Explanation: How to aggregate the results of this grouping depends on what we want to see: do we want to see averages over the years? That is so common that it has been implemented directly as a method on the GroupBy object.
End of explanation
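aggregate also accepts a list of functions, producing one column per (column, function) pair; a quick sketch.
sl_grouped_year.aggregate([np.mean, np.std])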
sl_grouped_decade = mean_sea_level.groupby(lambda x: int(x/10.))
sl_grouped_decade.groups.keys()
sl_grouped_decade.transform(lambda subframe: subframe - subframe.mean())
Explanation: Another possibility is to transform each group separately, rather than aggregate. For example, here we group over decades and subtract from each value the average over that decade:
End of explanation
european_filter = ((local_sea_level_stations["Lat"] > 30) &
(local_sea_level_stations["Lat"] < 70) &
(local_sea_level_stations["Lon"] > -10) &
(local_sea_level_stations["Lon"] < 40)
)
# Let's make a copy to work with a new, clean block of memory
# (if you are interested, try and remove the copy to see the consequences further down...)
european_stations = local_sea_level_stations[european_filter].copy()
european_stations["Country"].unique()
Explanation: Pivot_table
Pivot_table also allows us to summarize the information by converting the repeating values of a column into axes. For example, let's say that we would like to know how many sea level stations there are in various European countries, and we would like to group the answers into 2 categories: the stations that have been updated recently (after 2000) and the others.
Let's first extract only entries located (roughly) in Europe.
End of explanation
european_stations["Recently updated"] = european_stations["Date"] > pd.to_datetime("2000")
Explanation: The columns of our future table should have 2 values, whether the station was updated recently or not. Let's build a column to store that information:
End of explanation
european_stations["Number of stations"] = np.ones(len(european_stations))
european_stations.sort("Country")
station_counts = pd.pivot_table(european_stations, index="Country", columns="Recently updated",
values="Number of stations", aggfunc=np.sum)
# Let's remove from the table the countries for which no station was found:
station_counts.dropna(how="all")
Explanation: Finally, which value will be displayed inside the table? The values should be extracted from a column, and pivot_table applies an aggregation function whenever more than 1 value is found for a given cell. Each station should count for 1, and we can aggregate multiple stations by summing these counts:
End of explanation
# Your code here
Explanation: QUIZ: Why are there still some countries with no entries?
EXERCISE: How many recently updated stations? Not recently updated stations? Which country has the most stations? Which country has the most recently updated stations?
End of explanation
# Your code here
Explanation: EXERCISE: How would we build the same dataframe with a groupby operation?
End of explanation
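One possible sketch for the exercise above: group on the two helper columns built earlier in this section, sum the counts, and unstack the inner level back into columns.
european_stations.groupby(["Country", "Recently updated"])["Number of stations"].sum().unstack()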
# Let's see how the various sea levels are correlated with each other:
mean_sea_level["northern_hem"].corr(mean_sea_level["southern_hem"])
# If series are already grouped into a DataFrame, computing all correlation coeff is trivial:
mean_sea_level.corr()
Explanation: EXERCISE: Refer to exercises/pivot_table/pivot_tables.py
10. Correlations and regressions
Correlation coefficients
Both Series and dataframes have a corr method to compute the correlation coefficient between series:
End of explanation
# Visualize the correlation matrix
plt.imshow(mean_sea_level.corr(), interpolation="nearest")
plt.yticks?
# let's make it a little better to confirm that learning about global sea level cannot be done from just
# looking at stations in the northern hemisphere:
plt.imshow(mean_sea_level.corr(), interpolation="nearest")
plt.xticks(np.arange(3), mean_sea_level.corr().columns)
plt.yticks(np.arange(3), mean_sea_level.corr().index)
plt.colorbar()
Explanation: Note: by default, the method used is the Pearson correlation coefficient (https://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient). Other methods are available (kendall, spearman using the method kwarg).
End of explanation
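For example, the rank-based alternative is a one-liner:
mean_sea_level.corr(method="spearman")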
import statsmodels.formula.api as sm
sm_model = sm.ols(formula="mean_global ~ northern_hem + southern_hem", data=mean_sea_level).fit()
sm_model.params
type(sm_model.params)
sm_model.summary()
mean_sea_level["mean_global"].plot()
sm_model.fittedvalues.plot(label="OLS prediction")
plt.legend(loc="upper left")
Explanation: OLS
There are 2 OLS constructors, one inside Pandas and one inside statsmodels. There have been talks about merging the 2 into statsmodels, but that hasn't happened yet. OLS in statsmodels allows more complex formulas:
End of explanation
from pandas.stats.api import ols as pdols
# Same fit as above:
pd_model = pdols(y=mean_sea_level["mean_global"], x=mean_sea_level[["northern_hem", "southern_hem"]])
pd_model
plt.figure(figsize=LARGE_FIGSIZE)
mean_sea_level["mean_global"].plot()
pd_model.predict().plot(label="OLS prediction")
plt.legend(loc="upper left")
Explanation: OLS in pandas requires passing a y series and an x series to do a fit of the form y ~ x. But the formula can be made more complex by providing a DataFrame for x, which reproduces a formula of the form y ~ x1 + x2.
Also, OLS in pandas supports rolling and expanding OLS:
End of explanation
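A hedged sketch of such a rolling fit with the old pandas interface used above; the window_type and window arguments (and the .beta attribute) are assumptions about that long-deprecated API, so treat this as illustrative only.
# Refit the y ~ northern_hem + southern_hem model over a moving window of 120 observations
rolling_model = pdols(y=mean_sea_level["mean_global"],
                      x=mean_sea_level[["northern_hem", "southern_hem"]],
                      window_type="rolling", window=120)
rolling_model.beta.plot(figsize=LARGE_FIGSIZE)  # coefficients through time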
mean_sea_level["mean_global"].index
giss_temp_series.index
DAYS_PER_YEAR = {}
import calendar
# Let's first convert the floating point dates in the sea level to timestamps:
def floating_year_to_timestamp(float_date):
    """Convert a date as a floating point year number to a pandas timestamp object."""
year = int(float_date)
days_per_year = 366 if calendar.isleap(year) else 365
remainder = float_date - year
daynum = 1 + remainder * (days_per_year - 1)
daynum = int(round(daynum))
# Convert day number to month and day
day = daynum
month = 1
while month < 13:
month_days = calendar.monthrange(year, month)[1]
if day <= month_days:
return pd.Timestamp(str(year)+"/"+str(month)+"/"+str(day))
day -= month_days
month += 1
raise ValueError('{} does not have {} days'.format(year, daynum))
floating_year_to_timestamp(1996.0), floating_year_to_timestamp(1996.5), floating_year_to_timestamp(1996.9999)
dt_index = pd.Series(mean_sea_level["mean_global"].index).apply(floating_year_to_timestamp)
dt_index
mean_sea_level = mean_sea_level.reset_index(drop=True)
mean_sea_level.index = dt_index
mean_sea_level
Explanation: An interlude: data alignment
Converting the floating point date to a timestamp
Now, we would like to look for correlations between our monthly temperatures and the sea levels we have. For this to be possible, some data alignment must be done since the time scales are very different for the 2 datasets.
End of explanation
dt_index.dtype
# What is the frequency of the new index? The numpy way to compute differences between all values doesn't work:
dt_index[1:] - dt_index[:-1]
Explanation: Now, how to align the 2 series? Is this one sampled regularly so that the month temperatures can be upscaled to that frequency?
Computing the difference between successive values
What is the frequency of that new index?
End of explanation
# There is a method for shifting values up/down the index:
dt_index.shift()
# So the distances can be computed with
dt_index - dt_index.shift()
# Not constant reads apparently. Let's downscale the frequency of the sea levels
# to monthly, like the temperature reads we have:
monthly_mean_sea_level = mean_sea_level.resample("MS").to_period()
monthly_mean_sea_level
monthly_mean_sea_level["mean_global"].align(giss_temp_series)
giss_temp_series.align?
# Now that the series are using the same type and frequency of indexes, to align them is trivial:
monthly_mean_sea_level["mean_global"].align(giss_temp_series, join='inner')
aligned_sl, aligned_temp = monthly_mean_sea_level["mean_global"].align(giss_temp_series, join='inner')
aligned_df = pd.DataFrame({"mean_sea_level": aligned_sl, "mean_global_temp": aligned_temp})
Explanation: IMPORTANT Note: The above failure is due to the fact that operations between series automatically align them based on their index.
End of explanation
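Pandas also has a shortcut for exactly this shift-and-subtract pattern:
# Same result as dt_index - dt_index.shift()
dt_index.diff()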
monthly_mean_sea_level.align(giss_temp_series, axis=0, join='inner')
aligned_sea_levels, aligned_temp = monthly_mean_sea_level.align(giss_temp_series, axis=0, join='inner')
aligned_monthly_data = aligned_sea_levels.copy()
aligned_monthly_data["global_temp"] = aligned_temp
aligned_monthly_data
Explanation: The alignment can even be done on an entire dataframe:
End of explanation
aligned_monthly_data.plot(figsize=LARGE_FIGSIZE)
aligned_monthly_data.corr()
model = sm.ols("southern_hem ~ global_temp", data=aligned_monthly_data).fit()
model.rsquared
Explanation: Correlations between sea levels and temperatures
End of explanation
aligned_yearly_data = aligned_monthly_data.resample("A")
aligned_yearly_data.plot()
aligned_yearly_data.corr()
model = sm.ols("southern_hem ~ global_temp", data=aligned_yearly_data).fit()
model.rsquared
Explanation: What if we had done the analysis yearly instead of monthly to remove seasonal variations?
End of explanation
import statsmodels as sm
# Let's remove seasonal variations by resampling annually
data = giss_temp_series.resample("A").to_timestamp()
ar_model = sm.tsa.ar_model.AR(data, freq='A')
ar_res = ar_model.fit(maxlag=60, disp=True)
plt.figure(figsize=LARGE_FIGSIZE)
pred = ar_res.predict(start='1950-1-1', end='2070')
data.plot(style='k', label="Historical Data")
pred.plot(style='r', label="Predicted Data")
plt.ylabel("Temperature variation (0.01 degC)")
plt.legend()
Explanation: 11. Predictions from auto regression models
An auto-regressive model fits existing data and builds a (potentially predictive) model of the fitted data. We use the timeseries analysis (tsa) submodule of statsmodels to make out-of-sample predictions for the upcoming decades:
End of explanation
# Your code here
Explanation: EXERCISE: Make another auto-regression on the sea level of the Atlantic ocean to estimate how much New York is going to flood in the coming century.
You can find the historical sea levels of the Atlantic ocean at http://sealevel.colorado.edu/files/current/sl_Atlantic_Ocean.txt or locally in data/sea_levels/sl_Atlantic_Ocean.txt.
A little more work but more precise: extract the ID of a station in New York from the local_sea_level_stations dataset, and use it to download timeseries in NY (URL would be http://www.psmsl.org/data/obtaining/met.monthly.data/< ID >.metdata).
End of explanation |
1,730 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter
Step1: Note
Step2: Lesson
Step3: Project 1
Step4: We'll create three Counter objects, one for words from positive reviews, one for words from negative reviews, and one for all the words.
Step5: TODO
Step6: Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
Step7: As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews.
TODO
Step8: Examine the ratios you've calculated for a few words
Step9: Looking closely at the values you just calculated, we see the following
Step10: Examine the new ratios you've calculated for the same words from before
Step11: If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above 1, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below -1. It's now clear that both of these words are associated with specific, opposing sentiments.
Now run the following cells to see more ratios.
The first cell displays all the words, ordered by how associated they are with positive reviews. (Your notebook will most likely truncate the output so you won't actually see all the words in the list.)
The second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write reversed(pos_neg_ratios.most_common()).)
You should continue to see values similar to the earlier ones we checked - neutral words will be close to 0, words will get more positive as their ratios approach and go above 1, and words will get more negative as their ratios approach and go below -1. That's why we decided to use the logs instead of the raw ratios.
Step12: End of Project 1.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Transforming Text into Numbers<a id='lesson_3'></a>
The cells here include code Andrew shows in the next video. We've included it so you can run the code along with the video without having to type in everything.
Step13: Project 2
Step14: Run the following cell to check your vocabulary size. If everything worked correctly, it should print 74074
Step15: Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. layer_0 is the input layer, layer_1 is a hidden layer, and layer_2 is the output layer.
Step16: TODO
Step17: Run the following cell. It should display (1, 74074)
Step18: layer_0 contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.
Step20: TODO
Step21: Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in layer_0.
Step23: TODO
Step24: Run the following two cells. They should print out'POSITIVE' and 1, respectively.
Step25: Run the following two cells. They should print out 'NEGATIVE' and 0, respectively.
Step29: End of Project 2.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Project 3
Step30: Run the following cell to create a SentimentNetwork that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of 0.1.
Step31: Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set).
We have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from.
Step32: Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.
Step33: That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, 0.01, and then train the new network.
Step34: That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, 0.001, and then train the new network.
Step35: With a learning rate of 0.001, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson.
End of Project 3.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Understanding Neural Noise<a id='lesson_4'></a>
The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.
Step39: Project 4
Step40: Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of 0.1.
Step41: That should have trained much better than the earlier attempts. It's still not wonderful, but it should have improved dramatically. Run the following cell to test your model with 1000 predictions.
Step42: End of Project 4.
Andrew's solution was actually in the previous video, so rewatch that video if you had any problems with that project. Then continue on to the next lesson.
Analyzing Inefficiencies in our Network<a id='lesson_5'></a>
The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.
Step46: Project 5
Step47: Run the following cell to recreate the network and train it once again.
Step48: That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
Step49: End of Project 5.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Further Noise Reduction<a id='lesson_6'></a>
Step53: Project 6
Step54: Run the following cell to train your network with a small polarity cutoff.
Step55: And run the following cell to test its performance. It should be
Step56: Run the following cell to train your network with a much larger polarity cutoff.
Step57: And run the following cell to test its performance.
Step58: End of Project 6.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Analysis | Python Code:
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter: @iamtrask
Blog: http://iamtrask.github.io
What You Should Already Know
neural networks, forward and back-propagation
stochastic gradient descent
mean squared error
and train/test splits
Where to Get Help if You Need it
Re-watch previous Udacity Lectures
Leverage the recommended Course Reading Material - Grokking Deep Learning (Check inside your classroom for a discount code)
Shoot me a tweet @iamtrask
Tutorial Outline:
Intro: The Importance of "Framing a Problem" (this lesson)
Curate a Dataset
Developing a "Predictive Theory"
PROJECT 1: Quick Theory Validation
Transforming Text to Numbers
PROJECT 2: Creating the Input/Output Data
Putting it all together in a Neural Network (video only - nothing in notebook)
PROJECT 3: Building our Neural Network
Understanding Neural Noise
PROJECT 4: Making Learning Faster by Reducing Noise
Analyzing Inefficiencies in our Network
PROJECT 5: Making our Network Train and Run Faster
Further Noise Reduction
PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary
Analysis: What's going on in the weights?
Lesson: Curate a Dataset<a id='lesson_1'></a>
The cells from here until Project 1 include code Andrew shows in the videos leading up to mini project 1. We've included them so you can run the code along with the videos without having to type in everything.
End of explanation
len(reviews)
reviews[0]
labels[0]
Explanation: Note: The data in reviews.txt we're using has already been preprocessed a bit and contains only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
End of explanation
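If we were working from raw, mixed-case text, that normalization step could look like this minimal sketch (a no-op for this dataset, shown only for completeness).
reviews = [review.lower() for review in reviews]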
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
Explanation: Lesson: Develop a Predictive Theory<a id='lesson_2'></a>
End of explanation
from collections import Counter
import numpy as np
Explanation: Project 1: Quick Theory Validation<a id='project_1'></a>
There are multiple ways to implement these projects, but in order to get your code closer to what Andrew shows in his solutions, we've provided some hints and starter code throughout this notebook.
You'll find the Counter class to be useful in this exercise, as well as the numpy library.
End of explanation
# Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
Explanation: We'll create three Counter objects, one for words from postive reviews, one for words from negative reviews, and one for all the words.
End of explanation
# TODO: Loop over all the words in all the reviews and increment the counts in the appropriate counter objects
for review, label in zip(reviews, labels):
words = review.split(' ')
if label == 'POSITIVE':
positive_counts.update(words)
elif label == 'NEGATIVE':
negative_counts.update(words)
total_counts.update(words)
Explanation: TODO: Examine all the reviews. For each word in a positive review, increase the count for that word in both your positive counter and the total words counter; likewise, for each word in a negative review, increase the count for that word in both your negative counter and the total words counter.
Note: Throughout these projects, you should use split(' ') to divide a piece of text (such as a review) into individual words. If you use split() instead, you'll get slightly different results than what the videos and solutions show.
End of explanation
# Examine the counts of the most common words in positive reviews
positive_counts.most_common()
# Examine the counts of the most common words in negative reviews
negative_counts.most_common()
Explanation: Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
End of explanation
# Create Counter object to store positive/negative ratios
pos_neg_ratios = Counter()
# TODO: Calculate the ratios of positive and negative uses of the most common words
# Consider words to be "common" if they've been used at least 100 times
for word, freq in total_counts.most_common():
if freq >= 100:
pos_neg_ratios[word] = positive_counts[word] / float(negative_counts[word] + 1)
Explanation: As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews.
TODO: Check all the words you've seen and calculate the ratio of positive to negative uses and store that ratio in pos_neg_ratios.
Hint: the positive-to-negative ratio for a given word can be calculated with positive_counts[word] / float(negative_counts[word]+1). Notice the +1 in the denominator - that ensures we don't divide by zero for words that are only seen in positive reviews.
End of explanation
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
Explanation: Examine the ratios you've calculated for a few words:
End of explanation
# TODO: Convert ratios to logs
for word, ratio in pos_neg_ratios.most_common():
pos_neg_ratios[word] = np.log(ratio)
Explanation: Looking closely at the values you just calculated, we see the following:
Words that you would expect to see more often in positive reviews - like "amazing" - have a ratio greater than 1. The more skewed a word is toward positive, the farther from 1 its positive-to-negative ratio will be.
Words that you would expect to see more often in negative reviews - like "terrible" - have positive values that are less than 1. The more skewed a word is toward negative, the closer to zero its positive-to-negative ratio will be.
Neutral words, which don't really convey any sentiment because you would expect to see them in all sorts of reviews - like "the" - have values very close to 1. A perfectly neutral word - one that was used in exactly the same number of positive reviews as negative reviews - would be almost exactly 1. The +1 we suggested you add to the denominator slightly biases words toward negative, but it won't matter because it will be a tiny bias and later we'll be ignoring words that are too close to neutral anyway.
Ok, the ratios tell us which words are used more often in positive or negative reviews, but the specific values we've calculated are a bit difficult to work with. A very positive word like "amazing" has a value above 4, whereas a very negative word like "terrible" has a value around 0.18. Those values aren't easy to compare for a couple of reasons:
Right now, 1 is considered neutral, but the absolute value of the positive-to-negative ratios of very positive words is larger than the absolute value of the ratios for the very negative words. So there is no way to directly compare two numbers and see if one word conveys the same magnitude of positive sentiment as another word conveys negative sentiment. So we should center all the values around neutral so the absolute value from neutral of the positive-to-negative ratio for a word would indicate how much sentiment (positive or negative) that word conveys.
When comparing absolute values it's easier to do that around zero than one.
To fix these issues, we'll convert all of our ratios to new values using logarithms.
TODO: Go through all the ratios you calculated and convert them to logarithms. (i.e. use np.log(ratio))
In the end, extremely positive and extremely negative words will have positive-to-negative ratios with similar magnitudes but opposite signs.
End of explanation
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
Explanation: Examine the new ratios you've calculated for the same words from before:
End of explanation
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
# Note: Above is the code Andrew uses in his solution video,
# so we've included it here to avoid confusion.
# If you explore the documentation for the Counter class,
# you will see you could also find the 30 least common
# words like this: pos_neg_ratios.most_common()[:-31:-1]
Explanation: If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above 1, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below -1. It's now clear that both of these words are associated with specific, opposing sentiments.
Now run the following cells to see more ratios.
The first cell displays all the words, ordered by how associated they are with positive reviews. (Your notebook will most likely truncate the output so you won't actually see all the words in the list.)
The second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write reversed(pos_neg_ratios.most_common()).)
You should continue to see values similar to the earlier ones we checked - neutral words will be close to 0, words will get more positive as their ratios approach and go above 1, and words will get more negative as their ratios approach and go below -1. That's why we decided to use the logs instead of the raw ratios.
End of explanation
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
Explanation: End of Project 1.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Transforming Text into Numbers<a id='lesson_3'></a>
The cells here include code Andrew shows in the next video. We've included it so you can run the code along with the video without having to type in everything.
End of explanation
# TODO: Create set named "vocab" containing all of the words from all of the reviews
vocab = set(total_counts.keys())
Explanation: Project 2: Creating the Input/Output Data<a id='project_2'></a>
TODO: Create a set named vocab that contains every word in the vocabulary.
End of explanation
vocab_size = len(vocab)
print(vocab_size)
Explanation: Run the following cell to check your vocabulary size. If everything worked correctly, it should print 74074
End of explanation
from IPython.display import Image
Image(filename='sentiment_network_2.png')
Explanation: Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. layer_0 is the input layer, layer_1 is a hidden layer, and layer_2 is the output layer.
End of explanation
# TODO: Create layer_0 matrix with dimensions 1 by vocab_size, initially filled with zeros
layer_0 = np.zeros((1, vocab_size))
Explanation: TODO: Create a numpy array called layer_0 and initialize it to all zeros. You will find the zeros function particularly helpful here. Be sure you create layer_0 as a 2-dimensional matrix with 1 row and vocab_size columns.
End of explanation
layer_0.shape
from IPython.display import Image
Image(filename='sentiment_network.png')
Explanation: Run the following cell. It should display (1, 74074)
End of explanation
# Create a dictionary of words in the vocabulary mapped to index positions
# (to be used in layer_0)
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
# display the map of words to indices
word2index
Explanation: layer_0 contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.
End of explanation
def update_input_layer(review):
Modify the global layer_0 to represent the vector form of review.
The element at a given index of layer_0 should represent
how many times the given word occurs in the review.
Args:
review(string) - the string of the review
Returns:
None
global layer_0
# clear out previous state by resetting the layer to be all 0s
layer_0 *= 0
# TODO: count how many times each word is used in the given review and store the results in layer_0
for word in review.split(' '):
layer_0[0][word2index[word]] += 1
Explanation: TODO: Complete the implementation of update_input_layer. It should count
how many times each word is used in the given review, and then store
those counts at the appropriate indices inside layer_0.
End of explanation
update_input_layer(reviews[0])
layer_0
Explanation: Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in layer_0.
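If the full vector is hard to read, one quick sanity check (an added illustrative line, not required by the project) is to count its non-zero entries, which equals the number of distinct words found in the review:
np.count_nonzero(layer_0)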
End of explanation
def get_target_for_label(label):
Convert a label to `0` or `1`.
Args:
label(string) - Either "POSITIVE" or "NEGATIVE".
Returns:
`0` or `1`.
# TODO: Your code here
return 1 if label == 'POSITIVE' else 0
Explanation: TODO: Complete the implementation of get_target_for_labels. It should return 0 or 1,
depending on whether the given label is NEGATIVE or POSITIVE, respectively.
End of explanation
labels[0]
get_target_for_label(labels[0])
Explanation: Run the following two cells. They should print out 'POSITIVE' and 1, respectively.
End of explanation
labels[1]
get_target_for_label(labels[1])
Explanation: Run the following two cells. They should print out 'NEGATIVE' and 0, respectively.
End of explanation
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
review_vocab = set(word for review in reviews for word in review.split(' '))
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
label_vocab = set(labels)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
for i, word in enumerate(self.label_vocab):
self.label2index[word] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((input_nodes, hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0, self.output_nodes ** -0.5, (hidden_nodes, output_nodes))
# TODO: Create the input layer, a two-dimensional matrix with shape
# 1 x input_nodes, with all values initialized to zero
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# TODO: You can copy most of the code you wrote for update_input_layer
# earlier in this notebook.
#
# However, MAKE SURE YOU CHANGE ALL VARIABLES TO REFERENCE
# THE VERSIONS STORED IN THIS OBJECT, NOT THE GLOBAL OBJECTS.
# For example, replace "layer_0 *= 0" with "self.layer_0 *= 0"
self.layer_0 *= 0
# TODO: count how many times each word is used in the given review and store the results in layer_0
for word in review.split(' '):
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] += 1
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
return 1 if label == 'POSITIVE' else 0
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid function
return output * (1 - output)
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
review, label = training_reviews[i], training_labels[i]
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
self.update_input_layer(review)
layer_1 = np.dot(self.layer_0, self.weights_0_1)
layer_2 = self.sigmoid(np.dot(layer_1, self.weights_1_2))
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
target = self.get_target_for_label(label)
error_2 = target - layer_2
error_2_term = error_2 * self.sigmoid_output_2_derivative(layer_2)
error_1 = error_2_term * self.weights_1_2.T
error_1_term = error_1
self.weights_1_2 += self.learning_rate * error_2_term * layer_1.T
self.weights_0_1 += self.learning_rate * error_1_term * self.layer_0.T
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
if np.abs(error_2) < 0.5:
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
Returns a POSITIVE or NEGATIVE prediction for the given review.
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
self.update_input_layer(review.lower())
layer_1 = np.dot(self.layer_0, self.weights_0_1)
layer_2 = self.sigmoid(np.dot(layer_1, self.weights_1_2))
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
return 'POSITIVE' if layer_2 >= 0.5 else 'NEGATIVE'
Explanation: End of Project 2.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Project 3: Building a Neural Network<a id='project_3'></a>
TODO: We've included the framework of a class called SentimentNetwork. Implement all of the items marked TODO in the code. These include doing the following:
- Create a basic neural network much like the networks you've seen in earlier lessons and in Project 1, with an input layer, a hidden layer, and an output layer.
- Do not add a non-linearity in the hidden layer. That is, do not use an activation function when calculating the hidden layer outputs.
- Re-use the code from earlier in this notebook to create the training data (see TODOs in the code)
- Implement the pre_process_data function to create the vocabulary for our training data generating functions
- Ensure train trains over the entire corpus
Where to Get Help if You Need it
Re-watch earlier Udacity lectures
Chapters 3-5 - Grokking Deep Learning - (Check inside your classroom for a discount code)
End of explanation
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
Explanation: Run the following cell to create a SentimentNetwork that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of 0.1.
End of explanation
mlp.test(reviews[-1000:],labels[-1000:])
Explanation: Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set).
We have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from.
End of explanation
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.
End of explanation
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, 0.01, and then train the new network.
End of explanation
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, 0.001, and then train the new network.
End of explanation
from IPython.display import Image
Image(filename='sentiment_network.png')
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
review_counter = Counter()
for word in reviews[0].split(" "):
review_counter[word] += 1
review_counter.most_common()
Explanation: With a learning rate of 0.001, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson.
End of Project 3.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Understanding Neural Noise<a id='lesson_4'></a>
The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.
End of explanation
# TODO: -Copy the SentimentNetwork class from Project 3 lesson
# -Modify it to reduce noise, like in the video
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
review_vocab = set(word for review in reviews for word in review.split(' '))
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
label_vocab = set(labels)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
for i, word in enumerate(self.label_vocab):
self.label2index[word] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((input_nodes, hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0, self.output_nodes ** -0.5, (hidden_nodes, output_nodes))
# TODO: Create the input layer, a two-dimensional matrix with shape
# 1 x input_nodes, with all values initialized to zero
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# TODO: You can copy most of the code you wrote for update_input_layer
# earlier in this notebook.
#
# However, MAKE SURE YOU CHANGE ALL VARIABLES TO REFERENCE
# THE VERSIONS STORED IN THIS OBJECT, NOT THE GLOBAL OBJECTS.
# For example, replace "layer_0 *= 0" with "self.layer_0 *= 0"
self.layer_0 *= 0
# TODO: count how many times each word is used in the given review and store the results in layer_0
for word in review.split(' '):
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] = 1
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
return 1 if label == 'POSITIVE' else 0
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid function
return output * (1 - output)
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
review, label = training_reviews[i], training_labels[i]
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
self.update_input_layer(review)
layer_1 = np.dot(self.layer_0, self.weights_0_1)
layer_2 = self.sigmoid(np.dot(layer_1, self.weights_1_2))
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
target = self.get_target_for_label(label)
error_2 = target - layer_2
error_2_term = error_2 * self.sigmoid_output_2_derivative(layer_2)
error_1 = error_2_term * self.weights_1_2.T
error_1_term = error_1
self.weights_1_2 += self.learning_rate * error_2_term * layer_1.T
self.weights_0_1 += self.learning_rate * error_1_term * self.layer_0.T
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
if np.abs(error_2) < 0.5:
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
Returns a POSITIVE or NEGATIVE prediction for the given review.
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
self.update_input_layer(review.lower())
layer_1 = np.dot(self.layer_0, self.weights_0_1)
layer_2 = self.sigmoid(np.dot(layer_1, self.weights_1_2))
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
return 'POSITIVE' if layer_2 >= 0.5 else 'NEGATIVE'
Explanation: Project 4: Reducing Noise in Our Input Data<a id='project_4'></a>
TODO: Attempt to reduce the noise in the input data like Andrew did in the previous video. Specifically, do the following:
* Copy the SentimentNetwork class you created earlier into the following cell.
* Modify update_input_layer so it does not count how many times each word is used, but rather just stores whether or not a word was used.
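The sketch below is a standalone illustration of the difference (it uses a made-up review string rather than the project data): word counts versus simple presence flags for the same text.
from collections import Counter
review = "this movie was great great great"
print(Counter(review.split(' ')))                      # counts: 'great' maps to 3
print({word: 1 for word in set(review.split(' '))})    # presence: every used word maps to 1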
End of explanation
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of 0.1.
End of explanation
mlp.test(reviews[-1000:],labels[-1000:])
Explanation: That should have trained much better than the earlier attempts. It's still not wonderful, but it should have improved dramatically. Run the following cell to test your model with 1000 predictions.
End of explanation
Image(filename='sentiment_network_sparse.png')
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_0.dot(weights_0_1)
indices = [4,9]
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (1 * weights_0_1[index])
layer_1
Image(filename='sentiment_network_sparse_2.png')
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (weights_0_1[index])
layer_1
Explanation: End of Project 4.
Andrew's solution was actually in the previous video, so rewatch that video if you had any problems with that project. Then continue on to the next lesson.
Analyzing Inefficiencies in our Network<a id='lesson_5'></a>
The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.
End of explanation
# TODO: -Copy the SentimentNetwork class from Project 4 lesson
# -Modify it according to the above instructions
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
review_vocab = set(word for review in reviews for word in review.split(' '))
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
label_vocab = set(labels)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
for i, word in enumerate(self.label_vocab):
self.label2index[word] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((input_nodes, hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0, self.output_nodes ** -0.5, (hidden_nodes, output_nodes))
# TODO: Create the input layer, a two-dimensional matrix with shape
# 1 x input_nodes, with all values initialized to zero
self.layer_1 = np.zeros((1, hidden_nodes))
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
return 1 if label == 'POSITIVE' else 0
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid fucntion
return output * (1 - output)
def train(self, training_reviews_raw, training_labels):
# make sure out we have a matching number of reviews and labels
assert(len(training_reviews_raw) == len(training_labels))
training_reviews = [
set(self.word2index[word] for word in review.split(' '))
for review in training_reviews_raw
]
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
review, label = training_reviews[i], training_labels[i]
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
layer_2 = self.sigmoid(np.dot(self.layer_1, self.weights_1_2))
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
target = self.get_target_for_label(label)
error_2 = target - layer_2
error_2_term = error_2 * self.sigmoid_output_2_derivative(layer_2)
error_1 = error_2_term * self.weights_1_2.T
error_1_term = error_1
self.weights_1_2 += self.learning_rate * error_2_term * self.layer_1.T
for index in review:
self.weights_0_1[index] += self.learning_rate * error_1_term[0]
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
if np.abs(error_2) < 0.5:
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
Returns a POSITIVE or NEGATIVE prediction for the given review.
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
self.layer_1 *= 0
indices = set(self.word2index[word]
for word in review.lower().split(' ')
if word in self.word2index.keys())
for index in indices:
self.layer_1 += self.weights_0_1[index]
layer_2 = self.sigmoid(np.dot(self.layer_1, self.weights_1_2))
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
return 'POSITIVE' if layer_2[0] >= 0.5 else 'NEGATIVE'
Explanation: Project 5: Making our Network More Efficient<a id='project_5'></a>
TODO: Make the SentimentNetwork class more efficient by eliminating unnecessary multiplications and additions that occur during forward and backward propagation. To do that, you can do the following:
* Copy the SentimentNetwork class from the previous project into the following cell.
* Remove the update_input_layer function - you will not need it in this version.
* Modify init_network:
You no longer need a separate input layer, so remove any mention of self.layer_0
You will be dealing with the old hidden layer more directly, so create self.layer_1, a two-dimensional matrix with shape 1 x hidden_nodes, with all values initialized to zero
Modify train:
Change the name of the input parameter training_reviews to training_reviews_raw. This will help with the next step.
At the beginning of the function, you'll want to preprocess your reviews to convert them to a list of indices (from word2index) that are actually used in the review. This is equivalent to what you saw in the video when Andrew set specific indices to 1. Your code should create a local list variable named training_reviews that should contain a list for each review in training_reviews_raw. Those lists should contain the indices for words found in the review.
Remove call to update_input_layer
Use self's layer_1 instead of a local layer_1 object.
In the forward pass, replace the code that updates layer_1 with new logic that only adds the weights for the indices used in the review.
When updating weights_0_1, only update the individual weights that were used in the forward pass.
Modify run:
Remove call to update_input_layer
Use self's layer_1 instead of a local layer_1 object.
Much like you did in train, you will need to pre-process the review so you can work with word indices, then update layer_1 by adding weights for the indices used in the review.
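As a rough standalone sketch of why this works (toy sizes and hypothetical variable names, not the class itself): when the input vector holds 1s at only a few indices, the full matrix product equals the sum of the corresponding rows of the weight matrix.
import numpy as np
np.random.seed(1)
weights_0_1 = np.random.randn(10, 5)        # toy vocabulary of 10 words, 5 hidden nodes
word_indices = [2, 7]                       # indices of the words present in one review
layer_0 = np.zeros((1, 10))
layer_0[0][word_indices] = 1                # dense input vector
dense = layer_0.dot(weights_0_1)            # full matrix product
sparse = np.zeros((1, 5))
for index in word_indices:
    sparse += weights_0_1[index]            # just add the rows that matter
print(np.allclose(dense, sparse))           # True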
End of explanation
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: Run the following cell to recreate the network and train it once again.
End of explanation
mlp.test(reviews[-1000:],labels[-1000:])
Explanation: That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
End of explanation
Image(filename='sentiment_network_sparse_2.png')
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
output_notebook()
hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="Word Positive/Negative Affinity Distribution")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
frequency_frequency = Counter()
for word, cnt in total_counts.most_common():
frequency_frequency[cnt] += 1
hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="The frequency distribution of the words in our corpus")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
Explanation: End of Project 5.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Further Noise Reduction<a id='lesson_6'></a>
End of explanation
# TODO: -Copy the SentimentNetwork class from Project 5 lesson
# -Modify it according to the above instructions
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1, min_count = 10, polarity_cutoff = 0.1):
Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels, min_count, polarity_cutoff)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels, min_count, polarity_cutoff):
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for review, label in zip(reviews, labels):
words = review.split(' ')
if label == 'POSITIVE':
positive_counts.update(words)
elif label == 'NEGATIVE':
negative_counts.update(words)
total_counts.update(words)
# Create Counter object to store positive/negative ratios
pos_neg_ratios = {}
for word, freq in total_counts.most_common():
if freq >= 50:
pos_neg_ratios[word] = np.log(positive_counts[word] / float(negative_counts[word] + 1))
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
review_vocab = set(
word for word, freq in total_counts.most_common()
if freq >= min_count
and word in pos_neg_ratios.keys()
and np.abs(pos_neg_ratios[word]) >= polarity_cutoff
)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
label_vocab = set(labels)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
for i, word in enumerate(self.label_vocab):
self.label2index[word] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((input_nodes, hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0, self.output_nodes ** -0.5, (hidden_nodes, output_nodes))
# TODO: Create the input layer, a two-dimensional matrix with shape
# 1 x input_nodes, with all values initialized to zero
self.layer_1 = np.zeros((1, hidden_nodes))
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
return 1 if label == 'POSITIVE' else 0
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid function
return output * (1 - output)
def train(self, training_reviews_raw, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews_raw) == len(training_labels))
training_reviews = [
set(
self.word2index[word]
for word in review.split(' ')
if word in self.word2index.keys()
)
for review in training_reviews_raw
]
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
review, label = training_reviews[i], training_labels[i]
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
layer_2 = self.sigmoid(np.dot(self.layer_1, self.weights_1_2))
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
target = self.get_target_for_label(label)
error_2 = target - layer_2
error_2_term = error_2 * self.sigmoid_output_2_derivative(layer_2)
error_1 = error_2_term * self.weights_1_2.T
error_1_term = error_1
self.weights_1_2 += self.learning_rate * error_2_term * self.layer_1.T
for index in review:
self.weights_0_1[index] += self.learning_rate * error_1_term[0]
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
if np.abs(error_2) < 0.5:
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
Returns a POSITIVE or NEGATIVE prediction for the given review.
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
self.layer_1 *= 0
indices = set(self.word2index[word]
for word in review.lower().split(' ')
if word in self.word2index.keys())
for index in indices:
self.layer_1 += self.weights_0_1[index]
layer_2 = self.sigmoid(np.dot(self.layer_1, self.weights_1_2))
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
return 'POSITIVE' if layer_2[0] >= 0.5 else 'NEGATIVE'
Explanation: Project 6: Reducing Noise by Strategically Reducing the Vocabulary<a id='project_6'></a>
TODO: Improve SentimentNetwork's performance by reducing more noise in the vocabulary. Specifically, do the following:
* Copy the SentimentNetwork class from the previous project into the following cell.
* Modify pre_process_data:
Add two additional parameters: min_count and polarity_cutoff
Calculate the positive-to-negative ratios of words used in the reviews. (You can use code you've written elsewhere in the notebook, but we are moving it into the class like we did with other helper code earlier.)
Andrew's solution only calculates a positive-to-negative ratio for words that occur at least 50 times. This keeps the network from attributing too much sentiment to rarer words. You can choose to add this to your solution if you would like.
Change so words are only added to the vocabulary if they occur in the vocabulary more than min_count times.
Change so words are only added to the vocabulary if the absolute value of their positive-to-negative ratio is at least polarity_cutoff
Modify __init__:
Add the same two parameters (min_count and polarity_cutoff) and use them when you call pre_process_data
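A standalone sketch of the filtering rule with made-up counts (not the class itself), showing how min_count and polarity_cutoff shrink the vocabulary:
from collections import Counter
import numpy as np
total_counts = Counter({'the': 500, 'great': 120, 'okay': 80, 'luminous': 3})
pos_neg_ratios = {'the': 0.02, 'great': 1.1, 'okay': 0.03, 'luminous': 2.0}
min_count, polarity_cutoff = 10, 0.05
vocab = set(word for word, freq in total_counts.items()
            if freq > min_count
            and word in pos_neg_ratios
            and np.abs(pos_neg_ratios[word]) >= polarity_cutoff)
print(vocab)   # {'great'}: frequent enough and clearly polarized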
End of explanation
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: Run the following cell to train your network with a small polarity cutoff.
End of explanation
mlp.test(reviews[-1000:],labels[-1000:])
Explanation: And run the following cell to test its performance. With this small polarity cutoff the accuracy should stay roughly in line with the previous project, while training completes noticeably faster thanks to the smaller vocabulary.
End of explanation
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: Run the following cell to train your network with a much larger polarity cutoff.
End of explanation
mlp.test(reviews[-1000:],labels[-1000:])
Explanation: And run the following cell to test its performance.
End of explanation
mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)
mlp_full.train(reviews[:-1000],labels[:-1000])
Image(filename='sentiment_network_sparse.png')
def get_most_similar_words(focus = "horrible"):
most_similar = Counter()
for word in mlp_full.word2index.keys():
most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]])
return most_similar.most_common()
get_most_similar_words("excellent")
get_most_similar_words("terrible")
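# An alternative worth trying (an added sketch, not part of Andrew's lesson): plain dot products
# favor words with large weight vectors, so a cosine (normalized) similarity can be a fairer
# comparison. This reuses mlp_full, Counter and np exactly as the cell above does.
def get_most_similar_words_cosine(focus="horrible"):
    most_similar = Counter()
    focus_vec = mlp_full.weights_0_1[mlp_full.word2index[focus]]
    focus_norm = np.linalg.norm(focus_vec) + 1e-10
    for word in mlp_full.word2index.keys():
        vec = mlp_full.weights_0_1[mlp_full.word2index[word]]
        most_similar[word] = np.dot(vec, focus_vec) / ((np.linalg.norm(vec) + 1e-10) * focus_norm)
    return most_similar.most_common()
get_most_similar_words_cosine("excellent")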
import matplotlib.colors as colors
words_to_visualize = list()
for word, ratio in pos_neg_ratios.most_common(500):
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
pos = 0
neg = 0
colors_list = list()
vectors_list = list()
for word in words_to_visualize:
if word in pos_neg_ratios.keys():
vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])
if(pos_neg_ratios[word] > 0):
pos+=1
colors_list.append("#00ff00")
else:
neg+=1
colors_list.append("#000000")
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
words_top_ted_tsne = tsne.fit_transform(vectors_list)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="vector T-SNE for most polarized words")
source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],
x2=words_top_ted_tsne[:,1],
names=words_to_visualize,
color=colors_list))
p.scatter(x="x1", y="x2", size=8, source=source, fill_color="color")
word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6,
text_font_size="8pt", text_color="#555555",
source=source, text_align='center')
p.add_layout(word_labels)
show(p)
# green indicates positive words, black indicates negative words
Explanation: End of Project 6.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Analysis: What's Going on in the Weights?<a id='lesson_7'></a>
End of explanation |
1,731 | Given the following text description, write Python code to implement the functionality described.
Description:
Count minimum factor jumps required to reach the end of an Array
vector to store factors of each integer ; dp array ; Precomputing all factors of integers from 1 to 100000 ; Function to count the minimum jumps ; If we reach the end of array , no more jumps are required ; If the jump results in out of index , return INT_MAX ; If the answer has been already computed , return it directly ; Else compute the answer using the recurrence relation ; Iterating over all choices of jumps ; Considering current factor as a jump ; Jump leads to the destination ; Return ans and memorize it ; Driver code ; pre - calculating the factors
| Python Code:
factors = [[] for i in range(100005)]
dp = [0 for i in range(100005)]
def precompute():
    for i in range(1, 100001):
        for j in range(i, 100001, i):
            factors[j].append(i)
def solve(arr, k, n):
    if k == n - 1:
        return 0
    if k >= n:
        return 1000000000
    if dp[k]:
        return dp[k]
    ans = 1000000000
    for j in factors[arr[k]]:
        res = solve(arr, k + j, n)
        if res != 1000000000:
            ans = min(ans, res + 1)
    dp[k] = ans
    return ans
if __name__ == '__main__':
    precompute()
    arr = [2, 8, 16, 55, 99, 100]
    n = len(arr)
print(solve(arr , 0 , n ) )
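# Added worked example: for arr = [2, 8, 16, 55, 99, 100] the call above prints 2, since from
# index 0 we can jump 1 (a factor of 2) to index 1 and then 4 (a factor of 8) to the last index.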
|
1,732 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ch5 Categorizing and Tagging Words
This chapter aims to answer a few questions about word categories and how to tag them automatically.
Step1: In the example above, CC is a coordinating conjunction, RB is an adverb, IN is a preposition, NN is a noun, and JJ is an adjective. For the full definition of any tag, query it with nltk.help.upenn_tagset('RB').
Tagged Corpora
By NLTK's convention a tagged token is represented as a tuple of the word and its tag, but the data stored in a corpus is a string containing the word and the tag separated by '/', for example 'fly/NN'. str2tuple converts such a corpus string into a tuple.
Step2: The corpus also provides tagged sentences via tagged_sents().
Step3: Mapping Words to Properties
The natural way to store mapping data is a dictionary, also known as an associative array or hash array. Ordinary sequences are indexed by integers, but a dictionary can be indexed by any hashable value, such as a string or a tuple.
Typical applications of mapping data include document indexes, thesauri, dictionaries, and comparative wordlists.
Step4: Default Dictionary
Looking up a key that does not exist in an ordinary dict raises an error. defaultdict instead creates the missing key automatically and gives it a default value.
Step5: Inverting a Dictionary
A dict is designed to look up values by key; looking up keys by value is slow. A simple workaround is to build a new dict from (value, key) pairs so that values can be used for lookup.
Step6: An even simpler approach: use NLTK's built-in nltk.Index helper.
Step7: Summary Dictionary Methods
d = {}
Step8: Default Tagger
The first step is to prepare a default tagger: find the tag that occurs most frequently across all parts of speech and use it as the default.
Step9: Regular Expression Tagger
Use experience to judge which word endings suggest which parts of speech, then write those conditions as regular expressions.
Step10: Unigram Tagger
Step11: A unigram tagger records the most frequent tag for each word, so the more training data it sees, the more accurate it becomes. For words it has never seen it returns None, so a backoff tagger is needed for the cases it cannot decide.
Step12: Two key points, backoff and training-set size, are demonstrated by the corresponding code.
Step13: Storing Taggers
Because training a tagger takes time, storing the result is essential. Use cPickle.dump to write the trained tagger out as a binary object.
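A minimal sketch of what the save/load step could look like (assumed code, not shown in this excerpt; on Python 3 the pickle module plays the role of cPickle):
from pickle import dump, load
with open('unigram_tagger.pkl', 'wb') as out:
    dump(unigram_tagger, out, -1)
with open('unigram_tagger.pkl', 'rb') as inp:
    restored_tagger = load(inp)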
Step14: Confusion Matrix | Python Code:
import nltk
text = nltk.word_tokenize("And now for something completely different")
nltk.pos_tag(text)
Explanation: Ch5 Categorizing and Tagging Words
This chapter aims to answer the following questions:
What are lexical categories, and how are they used in NLP?
What data structure is suitable for storing words together with their categories?
How can each word be tagged with its category automatically?
Along the way the chapter introduces several basic NLP techniques, such as sequence labeling, n-gram models, backoff, and evaluation.
The process of identifying a word's part-of-speech and labeling it accordingly is called tagging (also part-of-speech tagging or POS tagging). In a typical NLP pipeline, tagging comes right after tokenization. A part-of-speech is also called a word class or lexical category, and the set of tags available for a task is called its tagset.
Using a Tagger
End of explanation
nltk.tag.str2tuple('fly/NN')
# tagged_words() returns data already represented as (word, tag) tuples
nltk.corpus.brown.tagged_words()
# the tagset='universal' argument returns the simplified universal tags
nltk.corpus.brown.tagged_words(tagset='universal')
# use FreqDist to count how many times each tag occurs
tag_fd = nltk.FreqDist(tag for (word, tag) in nltk.corpus.brown.tagged_words(tagset='universal'))
tag_fd.most_common()
%matplotlib inline
tag_fd.plot()
tag_cd = nltk.ConditionalFreqDist(nltk.corpus.brown.tagged_words(tagset='universal'))
# look up the most common POS tags for a given word
tag_cd['yield']
Explanation: In the example above, CC is a coordinating conjunction, RB is an adverb, IN is a preposition, NN is a noun, and JJ is an adjective. For the full definition of any tag, query it with nltk.help.upenn_tagset('RB').
Tagged Corpora
By NLTK's convention a tagged token is represented as a tuple of the word and its tag, but the data stored in a corpus is a string containing the word and the tag separated by '/', for example 'fly/NN'. str2tuple converts such a corpus string into a tuple.
End of explanation
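As a side note (a call not executed in this excerpt), the tag definitions mentioned above can be printed directly, for example the adverb tag:
nltk.help.upenn_tagset('RB')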
nltk.corpus.brown.tagged_sents(tagset='universal')[0]
Explanation: The corpus also provides tagged sentences:
End of explanation
pos = {} # the simplest way to define a dictionary in Python
pos['hello'] = 'world'
pos['right'] = 'here'
pos
[w for w in pos] # iterating with for yields the keys
pos.keys()
pos.items()
pos.values()
pos = dict(hello = 'world', right = 'here') # another way to define a dict
pos
Explanation: Mapping Words to Properties
The most natural way to store mapping data is a dictionary, also known as an associative array or hash array. Ordinary sequences are indexed by integers, but a dictionary can be indexed by any hashable value, for example a string or a tuple.
Typical applications of such mappings:
Document index: map a word to the pages on which it occurs
Thesaurus: map a word sense to a set of synonyms
Dictionary: map a word to its definition
Comparative wordlist: map a word to its counterparts in several languages
End of explanation
f = nltk.defaultdict(int)
f['color'] = 4
f
f['dream'] # 'dream' does not exist, but looking it up adds it automatically
f # after the lookup, a 'dream' entry has been added with the default value
f = nltk.defaultdict(lambda: 'xxx')
f['hello'] = 'world'
f
f['here'] = f['here'] + 'comment'
f
Explanation: Default Dictionary
If you try to look up a key that does not exist, you get an error. defaultdict makes a missing key automatically receive a default value when it is first accessed.
End of explanation
old = dict(nltk.corpus.brown.tagged_words()[:100])
new = dict((value, key) for (key, value) in old.items())
new['JJ'] # The inversion works, but each tag can only map to the last word that was inserted
new2 = nltk.defaultdict(list) # When a key does not exist, treat it as an empty list
for (key, value) in old.items():
new2[value].append(key)
new2['JJ']
Explanation: Inverting a Dictionary
A dict is designed for looking up a value by key. Looking up a key by value is very slow; a simple workaround is to build a new dict from (value, key) pairs so that keys can be looked up by value.
End of explanation
new3 = nltk.Index((value, key) for (key, value) in old.items())
new3['JJ']
Explanation: An even simpler way: use the helper built into NLTK.
End of explanation
from nltk.corpus import brown
brown_tagged_sents = brown.tagged_sents(categories='news')
brown_sents = brown.sents(categories='news')
Explanation: Summary Dictionary Methods
d = {}: create an empty dict
d[key] = value: assign a new value to key
d.keys(): return the list of keys
list(d): return the list of keys
d.values(): return the list of values
sorted(d): return the sorted list of keys
key in d: return True if d contains key
for key in d: iterate over each key in turn
d1.update(d2): copy every item of d2 into d1
defaultdict(int): a dict whose default value is 0
defaultdict(list): a dict whose default value is []
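A tiny demo of a few of these operations, using throwaway names purely for illustration:
d1 = {'a': 1, 'b': 2}
d2 = {'b': 20, 'c': 3}
d1.update(d2)                 # d1 is now {'a': 1, 'b': 20, 'c': 3}
print(sorted(d1))             # ['a', 'b', 'c']
counts = nltk.defaultdict(int)
counts['unseen-key'] += 1     # a missing key starts from the default 0
print(counts['unseen-key'])   # 1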
Automatic Tagging
End of explanation
tags = [tag for (word, tag) in brown.tagged_words(categories='news')]
nltk.FreqDist(tags).max()
default_tagger = nltk.DefaultTagger('NN') # NN is the most frequent tag, so every unknown word is simply tagged as NN
default_tagger.tag(nltk.word_tokenize('i like my mother and dog'))
# The prediction accuracy is of course poor, because only about 13% of the tokens really are NN
default_tagger.evaluate(brown_tagged_sents)
Explanation: Default Tagger
The first step is to prepare a default tagger: count which part-of-speech occurs most frequently overall and use it as the default value.
End of explanation
patterns = [
(r'.*ing$', 'VBG'),
(r'.*ed$', 'VBD'),
(r'.*es$', 'VBZ'),
(r'.*ould$', 'MD'),
(r'.*\'s$', 'NN$'),
(r'.*s$', 'NNS'),
(r'^-?[0-9]+(.[0-9]+)?$', 'CD'),
(r'.*', 'NN')
]
regexp_tagger = nltk.RegexpTagger(patterns)
regexp_tagger.tag(nltk.word_tokenize('i could be sleeping in 9 AM'))
regexp_tagger.evaluate(brown_tagged_sents)
Explanation: Regular Expression Tagger
Use experience to judge which part of speech particular word endings are likely to indicate, and write those conditions as regular expressions.
End of explanation
unigram_tagger = nltk.UnigramTagger(brown.tagged_sents(categories='news')[:500])
unigram_tagger.tag(nltk.word_tokenize('i could be sleeping in 9 AM'))
Explanation: Unigram Tagger
End of explanation
unigram_tagger = nltk.UnigramTagger(brown.tagged_sents(categories='news')[:500],
backoff = regexp_tagger)
unigram_tagger.evaluate(brown_tagged_sents[500:])
unigram_tagger = nltk.UnigramTagger(brown.tagged_sents(categories='news')[:4000],
backoff = regexp_tagger)
unigram_tagger.evaluate(brown_tagged_sents[4000:])
Explanation: A unigram tagger records the most common tag for each word, so the larger the training data, the more accurate it becomes. For unseen words, however, it returns None, so a backoff is needed: when the unigram tagger cannot decide, another tagger takes over.
End of explanation
bigram_tagger = nltk.BigramTagger(brown.tagged_sents(categories='news')[:4000])
bigram_tagger.tag(nltk.word_tokenize('i could be sleeping in 9 AM'))
bigram_tagger = nltk.BigramTagger(brown.tagged_sents(categories='news')[:4000],
backoff=unigram_tagger)
bigram_tagger.evaluate(brown_tagged_sents[4000:])
Explanation: Two key points:
As the training data grows, accuracy improves; with a unigram tagger it can reach roughly 90%.
Remember to keep the training and testing data separate, otherwise the accuracy figure is not reliable.
Bigram Tagger
A bigram tagger is built from statistics over pairs of adjacent words. Precision is higher, but recall is very low: as soon as an unseen word appears it immediately returns None.
End of explanation
from cPickle import dump
output = open('t2.pkl', 'wb')
dump(bigram_tagger, output, -1)
output.close()
from cPickle import load
input = open('t2.pkl', 'rb')
tagger = load(input)
input.close()
tagger.evaluate(brown_tagged_sents[4000:])
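# A small alternative sketch for Python 3, where cPickle was folded into the standard
# pickle module (the code above uses Python 2's cPickle); the filename here is arbitrary.
import pickle
with open('t2_py3.pkl', 'wb') as f_out:
    pickle.dump(bigram_tagger, f_out, -1)
with open('t2_py3.pkl', 'rb') as f_in:
    tagger_py3 = pickle.load(f_in)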
Explanation: Storing Taggers
Because training a tagger takes a long time, it is worth storing the result. Use cPickle.dump to write the object out in binary format.
End of explanation
brown_sents = brown.sents()
brown_tagged_sents = brown.tagged_sents(tagset = 'universal')
default_tagger = nltk.DefaultTagger('NOUN')
unigram_tagger = nltk.UnigramTagger(brown_tagged_sents[:4000], backoff=default_tagger)
bigram_tagger = nltk.BigramTagger(brown_tagged_sents[:4000], backoff=unigram_tagger)
unigram_tagger.tag(nltk.word_tokenize('I like your mother'))
test = [tag for sent in brown_sents[4000:] for (word, tag) in bigram_tagger.tag(sent)]
gold = [tag for sent in brown_tagged_sents[4000:] for (word, tag) in sent]
print(nltk.ConfusionMatrix(gold, test))
Explanation: Confusion Matrix
End of explanation |
1,733 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Automate the ML process using pipelines
There are standard workflows in a machine learning project that can be automated. In Python scikit-learn, Pipelines help to clearly define and automate these workflows.
* Pipelines help overcome common problems like data leakage in your test harness.
* Python scikit-learn provides a Pipeline utility to help automate machine learning workflows.
* Pipelines work by allowing for a linear sequence of data transforms to be chained together culminating in a modeling process that can be evaluated.
Data Preparation and Modeling Pipeline
Step1: Evaluate Some Algorithms
Now it is time to create some models of the data and estimate their accuracy on unseen data. Here is what we are going to cover in this step
Step2: 2.0 Evaluate Algorithms
Step3: Observation
The results suggest That both Logistic Regression and LDA may be worth further study. These are just mean accuracy values. It is always wise to look at the distribution of accuracy values calculated across cross validation folds. We can do that graphically using box and whisker plots.
Step4: Observation
The results show a similarly tight distribution for all classifiers except SVM, which is encouraging and suggests low variance. The poor results for SVM are surprising.
It is possible the varied distribution of the attributes may have an effect on the accuracy of algorithms such as SVM. In the next section we will repeat this spot-check with a standardized copy of the training dataset.
2.1 Evaluate Algorithms
Step5: Observations
The results show that standardization of the data has lifted the skill of SVM to be the most accurate algorithm tested so far.
The results suggest digging deeper into the SVM and LDA and LR algorithms. It is very likely that configuration beyond the default may yield even more accurate models.
3.0 Algorithm Tuning
In this section we investigate tuning the parameters for three algorithms that show promise from the spot-checking in the previous section
Step6: Tuning the hyper-parameters - k-NN hyperparameters
For your standard k-NN implementation, there are two primary hyperparameters that you'll want to tune
Step7: Finalize Model | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
# Create a pipeline that standardizes the data then creates a model
#Load libraries for data processing
import pandas as pd #data processing, CSV file I/O (e.g. pd.read_csv)
import numpy as np
from scipy.stats import norm
from sklearn.model_selection import train_test_split
from sklearn.cross_validation import cross_val_score, KFold
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
# visualization
import seaborn as sns
plt.style.use('fivethirtyeight')
sns.set_style("white")
plt.rcParams['figure.figsize'] = (8,4)
#plt.rcParams['axes.titlesize'] = 'large'
Explanation: Automate the ML process using pipelines
There are standard workflows in a machine learning project that can be automated. In Python scikit-learn, Pipelines help to clearly define and automate these workflows.
* Pipelines help overcome common problems like data leakage in your test harness.
* Python scikit-learn provides a Pipeline utility to help automate machine learning workflows.
* Pipelines work by allowing for a linear sequence of data transforms to be chained together culminating in a modeling process that can be evaluated.
Data Preparation and Modeling Pipeline
End of explanation
#load data
data = pd.read_csv('data/clean-data.csv', index_col=False)
data.drop('Unnamed: 0',axis=1, inplace=True)
# Split-out validation dataset
array = data.values
X = array[:,1:31]
y = array[:,0]
# Divide records into training and testing sets.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=7)
#transform the class labels from their original string representation (M and B) into integers
le = LabelEncoder()
y = le.fit_transform(y)
Explanation: Evaluate Some Algorithms
Now it is time to create some models of the data and estimate their accuracy on unseen data. Here is what we are going to cover in this step:
1. Separate out a validation dataset.
2. Setup the test harness to use 10-fold cross validation.
3. Build 5 different models
4. Select the best model
1.0 Validation Dataset
End of explanation
# Spot-Check Algorithms
models = []
models.append(( 'LR' , LogisticRegression()))
models.append(( 'LDA' , LinearDiscriminantAnalysis()))
models.append(( 'KNN' , KNeighborsClassifier()))
models.append(( 'CART' , DecisionTreeClassifier()))
models.append(( 'NB' , GaussianNB()))
models.append(( 'SVM' , SVC()))
# Test options and evaluation metric
num_folds = 10
num_instances = len(X_train)
seed = 7
scoring = 'accuracy'
results = []
names = []
for name, model in models:
kfold = KFold(n=num_instances, n_folds=num_folds, random_state=seed)
cv_results = cross_val_score(model, X_train, y_train, cv=kfold, scoring=scoring)
results.append(cv_results)
names.append(name)
msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std())
print(msg)
print('-> 10-Fold cross-validation accuracy score for the training data for six classifiers')
len(X_train)
Explanation: 2.0 Evaluate Algorithms: Baseline
End of explanation
# Compare Algorithms
fig = plt.figure()
fig.suptitle( 'Algorithm Comparison' )
ax = fig.add_subplot(111)
plt.boxplot(results)
ax.set_xticklabels(names)
plt.show()
Explanation: Observation
The results suggest That both Logistic Regression and LDA may be worth further study. These are just mean accuracy values. It is always wise to look at the distribution of accuracy values calculated across cross validation folds. We can do that graphically using box and whisker plots.
End of explanation
# Standardize the dataset
pipelines = []
pipelines.append(( 'ScaledLR' , Pipeline([( 'Scaler' , StandardScaler()),( 'LR' ,
LogisticRegression())])))
pipelines.append(( 'ScaledLDA' , Pipeline([( 'Scaler' , StandardScaler()),( 'LDA' ,
LinearDiscriminantAnalysis())])))
pipelines.append(( 'ScaledKNN' , Pipeline([( 'Scaler' , StandardScaler()),( 'KNN' ,
KNeighborsClassifier())])))
pipelines.append(( 'ScaledCART' , Pipeline([( 'Scaler' , StandardScaler()),( 'CART' ,
DecisionTreeClassifier())])))
pipelines.append(( 'ScaledNB' , Pipeline([( 'Scaler' , StandardScaler()),( 'NB' ,
GaussianNB())])))
pipelines.append(( 'ScaledSVM' , Pipeline([( 'Scaler' , StandardScaler()),( 'SVM' , SVC())])))
results = []
names = []
for name, model in pipelines:
kfold = KFold(n=num_instances, n_folds=num_folds, random_state=seed)
cv_results = cross_val_score(model, X_train, y_train, cv=kfold,
scoring=scoring)
results.append(cv_results)
names.append(name)
msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std())
print(msg)
# Compare Algorithms
fig = plt.figure()
fig.suptitle( 'Scaled Algorithm Comparison' )
ax = fig.add_subplot(111)
plt.boxplot(results)
ax.set_xticklabels(names)
plt.show()
Explanation: Observation
The results show a similarly tight distribution for all classifiers except SVM, which is encouraging and suggests low variance. The poor results for SVM are surprising.
It is possible the varied distribution of the attributes may have an effect on the accuracy of algorithms such as SVM. In the next section we will repeat this spot-check with a standardized copy of the training dataset.
2.1 Evaluate Algorithms: Standardize Data
End of explanation
#Make Support Vector Classifier Pipeline
pipe_svc = Pipeline([('scl', StandardScaler()),
('pca', PCA(n_components=2)),
('clf', SVC(probability=True, verbose=False))])
#Fit Pipeline to training Data
pipe_svc.fit(X_train, y_train)
#print('--> Fitted Pipeline to training Data')
scores = cross_val_score(estimator=pipe_svc, X=X_train, y=y_train, cv=10, n_jobs=1, verbose=0)
print('--> Model Training Accuracy: %.3f +/- %.3f' %(np.mean(scores), np.std(scores)))
#Tune Hyperparameters
param_range = [0.0001, 0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0]
param_grid = [{'clf__C': param_range,'clf__kernel': ['linear']},
{'clf__C': param_range,'clf__gamma': param_range,
'clf__kernel': ['rbf']}]
gs_svc = GridSearchCV(estimator=pipe_svc,
param_grid=param_grid,
scoring='accuracy',
cv=10,
n_jobs=1)
gs_svc = gs_svc.fit(X_train, y_train)
print('--> Tuned Parameters Best Score: ', gs_svc.best_score_)
print('--> Best Parameters: \n', gs_svc.best_params_)
Explanation: Observations
The results show that standardization of the data has lifted the skill of SVM to be the most accurate algorithm tested so far.
The results suggest digging deeper into the SVM and LDA and LR algorithms. It is very likely that configuration beyond the default may yield even more accurate models.
3.0 Algorithm Tuning
In this section we investigate tuning the parameters for three algorithms that show promise from the spot-checking in the previous section: LR, LDA and SVM.
Tuning hyper-parameters - SVC estimator
End of explanation
from sklearn.neighbors import KNeighborsClassifier as KNN
pipe_knn = Pipeline([('scl', StandardScaler()),
('pca', PCA(n_components=2)),
('clf', KNeighborsClassifier())])
#Fit Pipeline to training Data
pipe_knn.fit(X_train, y_train)
scores = cross_val_score(estimator=pipe_knn,
X=X_train,
y=y_train,
cv=10,
n_jobs=1)
print('--> Model Training Accuracy: %.3f +/- %.3f' %(np.mean(scores), np.std(scores)))
#Tune Hyperparameters
param_range = range(1, 31)
param_grid = [{'clf__n_neighbors': param_range}]
# instantiate the grid
grid = GridSearchCV(estimator=pipe_knn,
param_grid=param_grid,
cv=10,
scoring='accuracy')
gs_knn = grid.fit(X_train, y_train)
print('--> Tuned Parameters Best Score: ', gs_knn.best_score_)
print('--> Best Parameters: \n', gs_knn.best_params_)
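# The notes on k-NN tuning mention a second hyperparameter family: the distance metric.
# A possible extension of the search (a sketch, not part of the original grid) that also
# tunes the weighting scheme and the metric; the 'clf__' prefix refers to the KNN step of pipe_knn.
param_grid_ext = [{'clf__n_neighbors': list(range(1, 31)),
                   'clf__weights': ['uniform', 'distance'],
                   'clf__metric': ['euclidean', 'manhattan']}]
grid_ext = GridSearchCV(estimator=pipe_knn, param_grid=param_grid_ext,
                        cv=10, scoring='accuracy', n_jobs=-1)
# grid_ext.fit(X_train, y_train) would evaluate 30 * 2 * 2 = 120 candidate settings.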
Explanation: Tuning the hyper-parameters - k-NN hyperparameters
For your standard k-NN implementation, there are two primary hyperparameters that you'll want to tune:
The number of neighbors k.
The distance metric/similarity function.
Both of these values can dramatically affect the accuracy of your k-NN classifier. Grid object is ready to do 10-fold cross validation on a KNN model using classification accuracy as the evaluation metric
In addition, there is a parameter grid to repeat the 10-fold cross validation process 30 times
Each time, the n_neighbors parameter should be given a different value from the list
We can't give GridSearchCV just a list
We have to specify that n_neighbors should take on the values 1 through 30
You can set n_jobs = -1 to run computations in parallel (if supported by your computer and OS)
End of explanation
#Use best parameters
clf_svc = gs_svc.best_estimator_
#Get Final Scores
clf_svc.fit(X_train, y_train)
scores = cross_val_score(estimator=clf_svc,
X=X_train,
y=y_train,
cv=10,
n_jobs=1)
print('--> Final Model Training Accuracy: %.3f +/- %.3f' %(np.mean(scores), np.std(scores)))
print('--> Final Accuracy on Test set: %.5f' % clf_svc.score(X_test,y_test))
clf_svc.fit(X_train, y_train)
y_pred = clf_svc.predict(X_test)
print(accuracy_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
Explanation: Finalize Model
End of explanation |
1,734 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
<p><div class="lev1 toc-item"><a href="#Chapter-1" data-toc-modified-id="Chapter-1-1"><span class="toc-item-num">1 </span>Chapter 1</a></div><div class="lev2 toc-item"><a href="#Ex-1.1" data-toc-modified-id="Ex-1.1-11"><span class="toc-item-num">1.1 </span>Ex 1.1</a></div><div class="lev2 toc-item"><a href="#Ans-1.1" data-toc-modified-id="Ans-1.1-12"><span class="toc-item-num">1.2 </span>Ans 1.1</a></div><div class="lev2 toc-item"><a href="#Ex-1.2" data-toc-modified-id="Ex-1.2-13"><span class="toc-item-num">1.3 </span>Ex 1.2</a></div><div class="lev3 toc-item"><a href="#Ans" data-toc-modified-id="Ans-131"><span class="toc-item-num">1.3.1 </span>Ans</a></div><div class="lev2 toc-item"><a href="#Ex-1.3" data-toc-modified-id="Ex-1.3-14"><span class="toc-item-num">1.4 </span>Ex 1.3</a></div><div class="lev3 toc-item"><a href="#Ans" data-toc-modified-id="Ans-141"><span class="toc-item-num">1.4.1 </span>Ans</a></div><div class="lev2 toc-item"><a href="#Ex-1.4" data-toc-modified-id="Ex-1.4-15"><span class="toc-item-num">1.5 </span>Ex 1.4</a></div><div class="lev3 toc-item"><a href="#Ans" data-toc-modified-id="Ans-151"><span class="toc-item-num">1.5.1 </span>Ans</a></div><div class="lev2 toc-item"><a href="#Ex-1.5" data-toc-modified-id="Ex-1.5-16"><span class="toc-item-num">1.6 </span>Ex 1.5</a></div><div class="lev3 toc-item"><a href="#Ans" data-toc-modified-id="Ans-161"><span class="toc-item-num">1.6.1 </span>Ans</a></div><div class="lev2 toc-item"><a href="#Ex-1.6" data-toc-modified-id="Ex-1.6-17"><span class="toc-item-num">1.7 </span>Ex 1.6</a></div><div class="lev3 toc-item"><a href="#Ans" data-toc-modified-id="Ans-171"><span class="toc-item-num">1.7.1 </span>Ans</a></div><div class="lev2 toc-item"><a href="#Ex-1.7." data-toc-modified-id="Ex-1.7.-18"><span class="toc-item-num">1.8 </span>Ex 1.7.</a></div><div class="lev3 toc-item"><a href="#Ans" data-toc-modified-id="Ans-181"><span class="toc-item-num">1.8.1 </span>Ans</a></div><div class="lev2 toc-item"><a href="#Ex-1.8" data-toc-modified-id="Ex-1.8-19"><span class="toc-item-num">1.9 </span>Ex 1.8</a></div><div class="lev3 toc-item"><a href="#Ans" data-toc-modified-id="Ans-191"><span class="toc-item-num">1.9.1 </span>Ans</a></div><div class="lev2 toc-item"><a href="#Ex-1.9" data-toc-modified-id="Ex-1.9-110"><span class="toc-item-num">1.10 </span>Ex 1.9</a></div><div class="lev3 toc-item"><a href="#Ans" data-toc-modified-id="Ans-1101"><span class="toc-item-num">1.10.1 </span>Ans</a></div><div class="lev2 toc-item"><a href="#Ex-1.10" data-toc-modified-id="Ex-1.10-111"><span class="toc-item-num">1.11 </span>Ex 1.10</a></div><div class="lev3 toc-item"><a href="#Ans" data-toc-modified-id="Ans-1111"><span class="toc-item-num">1.11.1 </span>Ans</a></div><div class="lev2 toc-item"><a href="#Ex-1.11" data-toc-modified-id="Ex-1.11-112"><span class="toc-item-num">1.12 </span>Ex 1.11</a></div><div class="lev2 toc-item"><a href="#Ex-1.12" data-toc-modified-id="Ex-1.12-113"><span class="toc-item-num">1.13 </span>Ex 1.12</a></div><div class="lev3 toc-item"><a href="#Ans" data-toc-modified-id="Ans-1131"><span class="toc-item-num">1.13.1 </span>Ans</a></div><div class="lev4 toc-item"><a href="#Part-I" data-toc-modified-id="Part-I-11311"><span class="toc-item-num">1.13.1.1 </span>Part I</a></div><div class="lev4 toc-item"><a href="#Part-II" data-toc-modified-id="Part-II-11312"><span class="toc-item-num">1.13.1.2 </span>Part II</a></div><div class="lev2 toc-item"><a href="#Ex-1.13" data-toc-modified-id="Ex-1.13-114"><span class="toc-item-num">1.14 </span>Ex 
1.13</a></div><div class="lev3 toc-item"><a href="#Ans" data-toc-modified-id="Ans-1141"><span class="toc-item-num">1.14.1 </span>Ans</a></div><div class="lev4 toc-item"><a href="#Ancestral-Sampling" data-toc-modified-id="Ancestral-Sampling-11411"><span class="toc-item-num">1.14.1.1 </span>Ancestral Sampling</a></div><div class="lev4 toc-item"><a href="#Through-Pandas" data-toc-modified-id="Through-Pandas-11412"><span class="toc-item-num">1.14.1.2 </span>Through Pandas</a></div><div class="lev2 toc-item"><a href="#Ex-1.14" data-toc-modified-id="Ex-1.14-115"><span class="toc-item-num">1.15 </span>Ex 1.14</a></div><div class="lev3 toc-item"><a href="#Ans" data-toc-modified-id="Ans-1151"><span class="toc-item-num">1.15.1 </span>Ans</a></div><div class="lev2 toc-item"><a href="#Ex-1.15" data-toc-modified-id="Ex-1.15-116"><span class="toc-item-num">1.16 </span>Ex 1.15</a></div><div class="lev2 toc-item"><a href="#Ex-1.16" data-toc-modified-id="Ex-1.16-117"><span class="toc-item-num">1.17 </span>Ex 1.16</a></div><div class="lev3 toc-item"><a href="#Ans" data-toc-modified-id="Ans-1171"><span class="toc-item-num">1.17.1 </span>Ans</a></div><div class="lev3 toc-item"><a href="#Ans" data-toc-modified-id="Ans-1172"><span class="toc-item-num">1.17.2 </span>Ans</a></div><div class="lev2 toc-item"><a href="#Ex-1.17" data-toc-modified-id="Ex-1.17-118"><span class="toc-item-num">1.18 </span>Ex 1.17</a></div><div class="lev3 toc-item"><a href="#1." data-toc-modified-id="1.-1181"><span class="toc-item-num">1.18.1 </span>1.</a></div><div class="lev3 toc-item"><a href="#2." data-toc-modified-id="2.-1182"><span class="toc-item-num">1.18.2 </span>2.</a></div><div class="lev2 toc-item"><a href="#Ex-1.18" data-toc-modified-id="Ex-1.18-119"><span class="toc-item-num">1.19 </span>Ex 1.18</a></div><div class="lev3 toc-item"><a href="#Ans" data-toc-modified-id="Ans-1191"><span class="toc-item-num">1.19.1 </span>Ans</a></div><div class="lev2 toc-item"><a href="#Ex-1.19" data-toc-modified-id="Ex-1.19-120"><span class="toc-item-num">1.20 </span>Ex 1.19</a></div><div class="lev3 toc-item"><a href="#Part-I" data-toc-modified-id="Part-I-1201"><span class="toc-item-num">1.20.1 </span>Part I</a></div><div class="lev4 toc-item"><a href="#Ans" data-toc-modified-id="Ans-12011"><span class="toc-item-num">1.20.1.1 </span>Ans</a></div><div class="lev3 toc-item"><a href="#Part-II" data-toc-modified-id="Part-II-1202"><span class="toc-item-num">1.20.2 </span>Part II</a></div><div class="lev3 toc-item"><a href="#Part-III" data-toc-modified-id="Part-III-1203"><span class="toc-item-num">1.20.3 </span>Part III</a></div><div class="lev4 toc-item"><a href="#Ans" data-toc-modified-id="Ans-12031"><span class="toc-item-num">1.20.3.1 </span>Ans</a></div><div class="lev2 toc-item"><a href="#Ex-1.20" data-toc-modified-id="Ex-1.20-121"><span class="toc-item-num">1.21 </span>Ex 1.20</a></div><div class="lev2 toc-item"><a href="#Ex-1.21" data-toc-modified-id="Ex-1.21-122"><span class="toc-item-num">1.22 </span>Ex 1.21</a></div><div class="lev2 toc-item"><a href="#Ex-1.22" data-toc-modified-id="Ex-1.22-123"><span class="toc-item-num">1.23 </span>Ex 1.22</a></div><div class="lev3 toc-item"><a href="#Part-I" data-toc-modified-id="Part-I-1231"><span class="toc-item-num">1.23.1 </span>Part I</a></div><div class="lev4 toc-item"><a href="#Ans" data-toc-modified-id="Ans-12311"><span class="toc-item-num">1.23.1.1 </span>Ans</a></div><div class="lev3 toc-item"><a href="#Part-II" data-toc-modified-id="Part-II-1232"><span class="toc-item-num">1.23.2 </span>Part 
II</a></div><div class="lev4 toc-item"><a href="#Ans" data-toc-modified-id="Ans-12321"><span class="toc-item-num">1.23.2.1 </span>Ans</a></div><div class="lev3 toc-item"><a href="#Part-III" data-toc-modified-id="Part-III-1233"><span class="toc-item-num">1.23.3 </span>Part III</a></div><div class="lev4 toc-item"><a href="#Ans" data-toc-modified-id="Ans-12331"><span class="toc-item-num">1.23.3.1 </span>Ans</a></div><div class="lev3 toc-item"><a href="#Part-IV" data-toc-modified-id="Part-IV-1234"><span class="toc-item-num">1.23.4 </span>Part IV</a></div><div class="lev4 toc-item"><a href="#Ans" data-toc-modified-id="Ans-12341"><span class="toc-item-num">1.23.4.1 </span>Ans</a></div>
# Chapter 1
## Ex 1.1
Prove
$
p(x,y \mid z) = p(x \mid z)p(y \mid x,z)
$
and also
$
p(x \mid y,z) = \frac{p(y \mid x, z)p(x \mid z)}{p(y \mid z)}
$
## Ans 1.1
$$
\begin{equation}
\begin{aligned}
p(x,y \mid z) &= p(x \mid z)p(y \mid x,z) \\
&= \frac{p(x,z)}{p(z)} \cdot \frac{p(y,x,z)}{p(x,z)} \\
&= \frac{p(y,x,z)}{p(z)} \\
&= p(y,x \mid z)
\end{aligned}
\end{equation}
$$
$$
\begin{equation}
\begin{aligned}
p(x \mid y,z) &= \frac{p(y \mid x, z)p(x \mid z)}{p(y \mid z)} \\
&= \frac{ \frac{p(y, x, z)}{p(x,z)} \cdot \frac{p(x, z)}{p(z)} }{\frac{p(y, z)}{p(z)}} \\
&= \frac{\frac{p(y, x, z)}{p(z)}}{\frac{p(y, z)}{p(z)}} \\
&= \frac{p(y,x,z)}{p(y,z)} \\
&= p(x \mid y,z)
\end{aligned}
\end{equation}
$$
## Ex 1.2
Prove the Bonferroni inequality.
$
p(a,b) \geq p(a) + p(b) - 1
$
### Ans
<img src='img/prove-bonferroni-ineq.jpg'>
## Ex 1.3
There are two boxes. Box 1 contains three red and five white balls and box 2 contains two red and five white balls. A box is chosen at random $p(box=1)=p(box=2) = 0.5$ and a ball chosen at random from this box turns out to be red. What is the posterior probability that the red ball came from box 1?
### Ans
$$
\begin{equation}
\begin{aligned}
p(box = 1 \mid ball = r) &= \frac{p(ball = r \mid box = 1) \cdot p(box = 1)}{p(ball=r)} \\
&= \frac{p(ball = r \mid box = 1) \cdot p(box = 1)}{p(ball = r \mid box = 1) \cdot p(box = 1) + p(ball = r \mid box = 2) \cdot p(box = 2)} \\
&= \frac{\frac{3}{8} \cdot \frac{1}{2}}{\frac{3}{8} \cdot \frac{1}{2} + \frac{2}{7} \cdot \frac{1}{2}} \\
&= \frac{\frac{3}{8}}{\frac{21 + 16}{56}} \\
&= \frac{\frac{3}{1}}{\frac{37}{7}} \\
&= \frac{21}{37}
\end{aligned}
\end{equation}
$$
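A quick numeric sanity check of this fraction, sketched in Python:
p_box1 = (3 / 8) * 0.5   # P(ball = r | box = 1) * P(box = 1)
p_box2 = (2 / 7) * 0.5   # P(ball = r | box = 2) * P(box = 2)
print(p_box1 / (p_box1 + p_box2), 21 / 37)   # both print ~0.5676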
## Ex 1.4
Two balls are placed in a box as follows
Step1: $$
\begin{equation}
\begin{aligned}
P(B_1 = r, B_2 = r \mid S_1 = r, S_2 = r, S_3 = r)
&= \frac{ P(S_1 = r, S_2 = r, S_3 = r \mid B_1 = r, B_2 = r)
\cdot P(B_1 = r, B_2 = r) }{ P(S_1=r, S_2=r, S_3=r)} \\
&= \frac{ P(S_1 = r \mid B_1 = r, B_2 = r) \cdot P(S_2 = r \mid B_1 = r, B_2 = r) \cdot P(S_3 = r \mid B_1 = r, B_2 = r) \cdot P(B_1 = r, B_2 = r) }{ P(S_1=r, S_2=r, S_3=r) }
&\text{Samples are independent from each other, given we know what the actual types of balls are.} \\
&= \frac{P(B_1=r, B_2=r)}{ P(S_1=r, S_2=r, S_3=r) } &\text{If both balls are red, then each sample will certainly be red} \\
&= \frac{P(B_1=r)P(B_2=r)}{ P(S_1=r, S_2=r, S_3=r) } &\text{$B_1$ is marginally independent from $B_2$.} \\
&= \frac{\frac{1}{2} \cdot \frac{1}{2}}{ P(S_1=r, S_2=r, S_3=r) } &\text{Ball type is determined by a fair coin flip.} \\
&= \frac{\frac{1}{4}}{ P(S_1=r, S_2=r, S_3=r) } &\text{Simplify} \\
&= \frac{\frac{1}{4}}{ P(S_1 = r \mid B_1 = r, B_2 = r) \cdot P(S_2 = r \mid B_1 = r, B_2 = r) \cdot P(S_3 = r \mid B_1 = r, B_2 = r) \cdot P(B_1=r, B_2=r)
+ 2 \cdot P(S_1 = r \mid B_1 = r, B_2 = w) \cdot P(S_2 = r \mid B_1 = r, B_2 = w) \cdot P(S_3 = r \mid B_1 = r, B_2 = w) \cdot P(B_1=r, B_2=w)
+ P(S_1 = r \mid B_1 = w, B_2 = w) \cdot P(S_2 = r \mid B_1 = w, B_2 = w) \cdot P(S_3 = r \mid B_1 = w, B_2 = w) \cdot P(B_1=w, B_2=w)
} \\
&= \frac{\frac{1}{4}}{ \frac{1}{4}
+ 2 \cdot P(S_1 = r \mid B_1 = r, B_2 = w) \cdot P(S_2 = r \mid B_1 = r, B_2 = w) \cdot P(S_3 = r \mid B_1 = r, B_2 = w) \cdot P(B_1=r, B_2=w)
+ P(S_1 = r \mid B_1 = w, B_2 = w) \cdot P(S_2 = r \mid B_1 = w, B_2 = w) \cdot P(S_3 = r \mid B_1 = w, B_2 = w) \cdot P(B_1=w, B_2=w)
} &\text{Same computation as before.} \\
&= \frac{\frac{1}{4}}{ \frac{1}{4}
+ 2 \cdot P(S_1 = r \mid B_1 = r, B_2 = w) \cdot P(S_2 = r \mid B_1 = r, B_2 = w) \cdot P(S_3 = r \mid B_1 = r, B_2 = w) \cdot P(B_1=r, B_2=w)
+ 0
} &\text{Impossible to get a red sample when both balls are white.} \\
&= \frac{\frac{1}{4}}{ \frac{1}{4}
+ 2 \cdot \frac{1}{2} \cdot \frac{1}{2} \cdot \frac{1}{2} \cdot \frac{1}{4}
+ 0
} &\text{$50\%$ chance to sample a red ball when one is red and the other is white.} \\
&= \frac{\frac{1}{4}}{\frac{1}{4} + \frac{1}{16}} \\
&= \frac{\frac{1}{4}}{\frac{5}{16}} \\
&= \frac{16}{4 \cdot 5} \\
&= \frac{4}{5}
\end{aligned}
\end{equation}
$$
Ex 1.5
A secret agency has developed a scanner which determines whether a person is a terrorist. The scanner is fairly reliable; 95% of all scanned terrorists are identified as terrorists, and 95% of all upstanding citizens are identified as such. An informant tells the agency that exactly one passenger of the 100 aboard an aeroplane in which you are seated is a terrorist. The police haul off the plane the first person for which the scanner tests positive. What is the probability that this person is a terrorist?
Ans
Let $T_k \in \{0,1\}$ describe whether or not person $k$ is a terrorist. Let $S_k \in \{0,1\}$ describe the result of the scanner for person $k$, with 0 as a negative result and 1 as a positive result. Let $H_k \in \{0,1\}$ describe the person $k$ having been hauled off. Similarly, let $H_{k-1}=0,...,H_1=0$ describe people $0,...,k-1$ not being hauled off.
Step2: Ex 1.6
Consider three variable distributions which admit the factorisation
$p(a, b, c) = p(a|b)p(b|c)p(c)$, where all variables are binary. How many parameters are needed to specify distributions of this form?
Ans
We would need 5 parameters to specify the whole distribution
Step3: The probability that the Butler was the killer and not the Maid, given that we observed that the knife was used, is 78.3%.
Ex 1.8
Prove $p(a,(b \text{ or } c))=p(a,b)+p(a,c)-p(a,b,c)$
Ans
$$
\begin{equation}
\begin{aligned}
P(a,(b \text{ or } c)) &= P(A=1, (B=1 \text{ or } C=1)) \\
&= P(A=1, B=1, C=1) + P(A=1, B=0, C=1) + P(A=1, B=1, C=0) \\
&= [P(A=1, B=1, C=1) + P(A=1, B=1, C=0)] + P(A=1, B=0, C=1) \\
&= P(A=1, B=1) + P(A=1, B=0, C=1) \\
&= P(A=1, B=1) + P(A=1, B=0, C=1) + P(A=1, B=1, C=1) - P(A=1, B=1, C=1) \\
&= P(A=1, B=1) + P(A=1, C=1) - P(A=1, B=1, C=1) \\
&= p(a,b) + p(a,c) - p(a,b,c)
\end{aligned}
\end{equation}
$$
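A quick simulation sketch of this identity, in the same style as the other exercises here; the particular dependence structure between a, b and c below is an arbitrary choice just for the check:
import numpy as np
num_sims = 1_000_000
a = np.random.binomial(n=1, p=0.6, size=num_sims)
b = np.random.binomial(n=1, p=0.3 + 0.4 * a)
c = np.random.binomial(n=1, p=0.2 + 0.5 * b)
lhs = np.mean(a & (b | c))
rhs = np.mean(a & b) + np.mean(a & c) - np.mean(a & b & c)
print(lhs, rhs)   # the two estimates agree up to Monte Carlo noise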
Ex 1.9
Prove $P(x \mid z) = \sum_y P(x \mid y, z) \cdot P(y \mid z) = \sum_{y,w} P(x \mid w, y, z) \cdot P(w \mid y, z) \cdot P(y \mid z)$
Ans
Part I
Step4: Ex 1.12
Implement the hamburgers, example(1.2) (both scenarios) using BRMLtoolbox. To do so you will need to define the joint distribution $p(\text{hamburgers},\text{KJ})$ in which $dom(\text{hamburgers}) = dom(\text{KJ}) = {tr, fa}$.
Ans
Consider the following fictitious scientific information
Step5: Part II
Assuming eating lots of hamburgers is rather widespread, say p(Hamburger Eater) = 0.001, what is the probability that a hamburger eater will have Kreuzfeld-Jacob disease?
Step6: Ex 1.13
Implement the two-dice example, section(1.3.1) using BRMLtoolbox.
Ans
Ancestral Sampling
Step7: Through Pandas
Step8: Ex 1.14
A redistribution lottery involves picking the correct four numbers from 1 to 9 (without replacement, so 3,4,4,1 for example is not possible). The order of the picked numbers is irrelevant. Every week a million people play this game, each paying £1 to enter, with the numbers 3,5,7,9 being the most popular (1 in every 100 people chooses these numbers). Given that the million pounds prize money is split equally between winners, and that any four (different) numbers come up at random, what is the expected amount of money each of the players choosing 3,5,7,9 will win each week? The least popular set of numbers is 1,2,3,4 with only 1 in 10,000 people choosing this. How much do they profit each week, on average? Do you think there is any "skill" involved in playing this lottery?
Ans
The probability of getting exactly 3,5,7,9 (in that order) is $P(\text{Lotto} = [3,5,7,9]) = \frac{1}{9} \cdot \frac{1}{8} \cdot \frac{1}{7} \cdot \frac{1}{6} = \frac{1}{3024}$. However, we need to take into account that order doesn't matter. There are $4 \cdot 3 \cdot 2 \cdot 1 = 24$ ways of reordering 3,5,7,9. Thus, $P(\text{Lotto} = \{3,5,7,9\}) = 24 \cdot \frac{1}{3024} \approx 0.007936$. If $\frac{1}{100}$ of the 1,000,000 people who enter the lottery every week pick these numbers, then there would be 10,000 people every week who use these numbers. The expected amount of money each of the players choosing 3,5,7,9 will win each week is $P(\text{Lotto} = \{3,5,7,9\}) \cdot \text{Prize money per person} = P(\text{Lotto} = \{3,5,7,9\}) \cdot \frac{1,000,000}{10,000} = 0.007936 \cdot 100 = £0.79$. On the other hand, the people choosing the set $\{1,2,3,4\}$ would win, on average, $P(\text{Lotto} = \{1,2,3,4\}) \cdot \frac{1,000,000}{100} = 0.007936 \cdot 10,000 = £79.$
The "skill" involved in playing this lottery is figuring out what set of numbers are least popular. You could exploit this information to maximize your payoff, on average.
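A quick arithmetic check of these numbers, sketched in Python:
p_win = 24 / 3024                    # 4! orderings over 9*8*7*6 ordered draws
print(p_win)                         # ~0.007936
print(p_win * 1_000_000 / 10_000)    # ~0.79 pounds expected for a 3,5,7,9 player
print(p_win * 1_000_000 / 100)       # ~79 pounds expected for a 1,2,3,4 player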
Ex 1.15
In a test of "psychometry" the car keys and wrist watches of 5 people are given to a medium. The medium then attempts to match the wrist watch with the car key of each person. What is the expected number of correct matches that the medium will make (by chance)? What is the probability that the medium will obtain at least 1 correct match?
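A small Monte Carlo sketch, under the assumption that the medium effectively guesses a uniformly random matching of the five watches to the five keys:
import numpy as np
num_sims = 200_000
matches = np.array([(np.random.permutation(5) == np.arange(5)).sum() for _ in range(num_sims)])
print(matches.mean())          # expected number of correct matches, close to 1
print((matches >= 1).mean())   # probability of at least one correct match, close to 0.63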
Step9: Ex 1.16
Show that for any function f
$\qquad\sum_x p(x \mid y) \cdot f(y) = f(y)$
Ans
$$
\begin{equation}
\begin{aligned}
\sum_x p(x \mid y) &= \sum_x \frac{p(x,y)}{p(y)} \\
&= \frac{1}{p(y)} \cdot \sum_x p(x,y) \\
&= \frac{1}{p(y)} \cdot p(y) \\
&= 1
\end{aligned}
\end{equation}
$$
Thus,
$$
\begin{equation}
\begin{aligned}
\sum_x p(x \mid y) \cdot f(y) = f(y)
\end{aligned}
\end{equation}
$$
Explain why, in general,
$\qquad \sum_x p(x \mid y) f(x,y) \neq \sum_x f(x,y)$
Ans
Because $\sum_x p(x \mid y) = 1$ with non-negative weights $p(x \mid y)$, the quantity $\sum_x p(x \mid y) f(x,y)$ is a weighted average of the values $f(x,y)$ over $x$, whereas $\sum_x f(x,y)$ simply adds them all up with weight 1:
$$
\begin{equation}
\begin{aligned}
\sum_x p(x \mid y) f(x,y) \neq \sum_x f(x,y) = f(x_1,y) + f(x_2,y) + \dots
\end{aligned}
\end{equation}
$$
Since the weights $p(x \mid y)$ are generally smaller than 1, the two expressions differ in general.
Ex 1.17
Seven friends decide to order pizzas by telephone from Pizza4U based on a flyer pushed through their letterbox. Pizza4U has only 4 kinds of pizzas, and each person chooses a pizza independently. Bob phones Pizza4U and places the combined pizza order, simply stating how many pizzas of each kind are required. Unfortunately, the precise order is lost, so the chef makes seven randomly chosen pizzas and then passes them to the delivery boy.
1.
How many different combined orders are possible?
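One way to count these, sketched below: a combined order is a multiset of size 7 drawn from 4 pizza kinds, so stars and bars gives $\binom{7+4-1}{4-1} = \binom{10}{3}$ possibilities.
print((10 * 9 * 8) // (3 * 2 * 1))   # 120 possible combined orders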
Step10: 2.
What is the probability that the delivery boy has the right order?
Step11: Ex 1.18
Sally is new to the area and listens to some friends discussing another female friend. Sally knows that they are talking about either Alice or Bella but doesn't know which. From previous conversations Sally knows some independent pieces of information
Step12: Part III
Use the result from part 2 above as a new prior probability of rain yesterday and recompute the probability that it was raining yesterday given that it's sunny today.
Ans
$$
\begin{equation}
\begin{aligned}
P(D_{yesterday}=r \mid D_{today}=s) &= \frac{P(D_{today}=s \mid D_{yesterday}=r) \cdot P(D_{yesterday}=r)}{P(D_{today}=s \mid D_{yesterday}=r) \cdot P(D_{yesterday}=r) + P(D_{today}=s \mid D_{yesterday}=s) \cdot P(D_{yesterday}=s)} \\
&= \frac{0.3 \cdot \frac{2}{3}}{0.3 \cdot \frac{2}{3} + 0.4 \cdot \frac{1}{3}} \\
&= 0.6
\end{aligned}
\end{equation}
$$
Ex 1.20
A game of Battleships is played on a 10 × 10 pixel grid. There are two 5-pixel length ships placed uniformly at random on the grid, subject to the constraints that (i) the ships cannot overlap and (ii) one ship is vertical and the other horizontal. After 10 unsuccessful "misses" in locations (1,10),(2,2),(3,8),(4,4),(5,6),(6,5),(7,4),(7,7),(9,2),(9,9) calculate which pixel has the highest probability of containing a ship. State this pixel and the value of the highest probability.
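A brute-force enumeration sketch, assuming 1-indexed (row, column) coordinates for the miss locations; the coordinate convention is an assumption, so the reported pixel may need translating to match the labels quoted below:
import numpy as np
misses = {(1, 10), (2, 2), (3, 8), (4, 4), (5, 6), (6, 5), (7, 4), (7, 7), (9, 2), (9, 9)}
horiz = [frozenset((r, c + k) for k in range(5)) for r in range(1, 11) for c in range(1, 7)]
vert = [frozenset((r + k, c) for k in range(5)) for c in range(1, 11) for r in range(1, 7)]
counts = np.zeros((10, 10))
total = 0
for h in horiz:
    if h & misses:
        continue
    for v in vert:
        if (v & misses) or (h & v):
            continue
        total += 1
        for (r, c) in h | v:
            counts[r - 1, c - 1] += 1
proba = counts / total                       # per-pixel P(pixel contains a ship | misses)
best = np.unravel_index(np.argmax(proba), proba.shape)
print((best[0] + 1, best[1] + 1), proba.max())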
Step13: The most likely spots are (1,6) and (5,10) with 20.18% probability.
Ex 1.21
A game of Battleships is played on an 8 × 8 grid. There are two 5-pixel length ships placed horizontally and two 5-pixel length ships placed vertically, subject to the same constraints as in the previous question. Given "misses" in locations (1, 1), (2, 2) and a "hit" in location (5, 5), which pixel most likely contains
a ship and what is that probability?
Step14: The most likely squares are (5,4), (4,5), each one with 56% probability.
Ex 1.22
We consider an extension of the explosion example. In this extension there are two explosions at locations s1 and s2 and the observed value at sensor i is
$\qquad v_i = \frac{1}{d_i^2(1) + 0.1} + \frac{1}{d_i^2(2) + 0.1} + \sigma \epsilon_i$
where $d_i(1)$ and $d_i(2)$ are the distances from explosions 1 and 2 to the sensor respectively; $\sigma$ is the standard deviation of the Gaussian sensor noise and the noise $\epsilon_i$ is drawn from a zero mean unit variance Gaussian independently for each sensor. The data in the file EarthquakeExerciseData.txt represents the observed sensor values $v_i$ and the coordinate system setup is given in earthquakeExerciseSetup.jl. Assuming that the prior locations of the explosions are independent and uniform (according to the spiral coordinate system)
Step15: Part II
Writing $H_2$ for the hypothesis that there are two explosions and $H_1$ for the hypothesis that there is only one explosion, report the value of $\log p(v|H_2) - \log p(v|H_1)$.
Ans | Python Code:
from graphviz import Digraph
dot = Digraph('Ex 1.4')
dot.edge('Coin 1', 'Ball 1')
dot.edge('Coin 2', 'Ball 2')
dot.edge('Ball 1', 'Sample 1=red')
dot.edge('Ball 2', 'Sample 1=red')
dot.edge('Ball 1', 'Sample 2=red')
dot.edge('Ball 2', 'Sample 2=red')
dot.edge('Ball 1', 'Sample 3=red')
dot.edge('Ball 2', 'Sample 3=red')
dot
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Chapter-1" data-toc-modified-id="Chapter-1-1"><span class="toc-item-num">1 </span>Chapter 1</a></div><div class="lev2 toc-item"><a href="#Ex-1.1" data-toc-modified-id="Ex-1.1-11"><span class="toc-item-num">1.1 </span>Ex 1.1</a></div><div class="lev2 toc-item"><a href="#Ans-1.1" data-toc-modified-id="Ans-1.1-12"><span class="toc-item-num">1.2 </span>Ans 1.1</a></div><div class="lev2 toc-item"><a href="#Ex-1.2" data-toc-modified-id="Ex-1.2-13"><span class="toc-item-num">1.3 </span>Ex 1.2</a></div><div class="lev3 toc-item"><a href="#Ans" data-toc-modified-id="Ans-131"><span class="toc-item-num">1.3.1 </span>Ans</a></div><div class="lev2 toc-item"><a href="#Ex-1.3" data-toc-modified-id="Ex-1.3-14"><span class="toc-item-num">1.4 </span>Ex 1.3</a></div><div class="lev3 toc-item"><a href="#Ans" data-toc-modified-id="Ans-141"><span class="toc-item-num">1.4.1 </span>Ans</a></div><div class="lev2 toc-item"><a href="#Ex-1.4" data-toc-modified-id="Ex-1.4-15"><span class="toc-item-num">1.5 </span>Ex 1.4</a></div><div class="lev3 toc-item"><a href="#Ans" data-toc-modified-id="Ans-151"><span class="toc-item-num">1.5.1 </span>Ans</a></div><div class="lev2 toc-item"><a href="#Ex-1.5" data-toc-modified-id="Ex-1.5-16"><span class="toc-item-num">1.6 </span>Ex 1.5</a></div><div class="lev3 toc-item"><a href="#Ans" data-toc-modified-id="Ans-161"><span class="toc-item-num">1.6.1 </span>Ans</a></div><div class="lev2 toc-item"><a href="#Ex-1.6" data-toc-modified-id="Ex-1.6-17"><span class="toc-item-num">1.7 </span>Ex 1.6</a></div><div class="lev3 toc-item"><a href="#Ans" data-toc-modified-id="Ans-171"><span class="toc-item-num">1.7.1 </span>Ans</a></div><div class="lev2 toc-item"><a href="#Ex-1.7." data-toc-modified-id="Ex-1.7.-18"><span class="toc-item-num">1.8 </span>Ex 1.7.</a></div><div class="lev3 toc-item"><a href="#Ans" data-toc-modified-id="Ans-181"><span class="toc-item-num">1.8.1 </span>Ans</a></div><div class="lev2 toc-item"><a href="#Ex-1.8" data-toc-modified-id="Ex-1.8-19"><span class="toc-item-num">1.9 </span>Ex 1.8</a></div><div class="lev3 toc-item"><a href="#Ans" data-toc-modified-id="Ans-191"><span class="toc-item-num">1.9.1 </span>Ans</a></div><div class="lev2 toc-item"><a href="#Ex-1.9" data-toc-modified-id="Ex-1.9-110"><span class="toc-item-num">1.10 </span>Ex 1.9</a></div><div class="lev3 toc-item"><a href="#Ans" data-toc-modified-id="Ans-1101"><span class="toc-item-num">1.10.1 </span>Ans</a></div><div class="lev2 toc-item"><a href="#Ex-1.10" data-toc-modified-id="Ex-1.10-111"><span class="toc-item-num">1.11 </span>Ex 1.10</a></div><div class="lev3 toc-item"><a href="#Ans" data-toc-modified-id="Ans-1111"><span class="toc-item-num">1.11.1 </span>Ans</a></div><div class="lev2 toc-item"><a href="#Ex-1.11" data-toc-modified-id="Ex-1.11-112"><span class="toc-item-num">1.12 </span>Ex 1.11</a></div><div class="lev2 toc-item"><a href="#Ex-1.12" data-toc-modified-id="Ex-1.12-113"><span class="toc-item-num">1.13 </span>Ex 1.12</a></div><div class="lev3 toc-item"><a href="#Ans" data-toc-modified-id="Ans-1131"><span class="toc-item-num">1.13.1 </span>Ans</a></div><div class="lev4 toc-item"><a href="#Part-I" data-toc-modified-id="Part-I-11311"><span class="toc-item-num">1.13.1.1 </span>Part I</a></div><div class="lev4 toc-item"><a href="#Part-II" data-toc-modified-id="Part-II-11312"><span class="toc-item-num">1.13.1.2 </span>Part II</a></div><div class="lev2 toc-item"><a href="#Ex-1.13" data-toc-modified-id="Ex-1.13-114"><span class="toc-item-num">1.14 </span>Ex 
1.13</a></div><div class="lev3 toc-item"><a href="#Ans" data-toc-modified-id="Ans-1141"><span class="toc-item-num">1.14.1 </span>Ans</a></div><div class="lev4 toc-item"><a href="#Ancestral-Sampling" data-toc-modified-id="Ancestral-Sampling-11411"><span class="toc-item-num">1.14.1.1 </span>Ancestral Sampling</a></div><div class="lev4 toc-item"><a href="#Through-Pandas" data-toc-modified-id="Through-Pandas-11412"><span class="toc-item-num">1.14.1.2 </span>Through Pandas</a></div><div class="lev2 toc-item"><a href="#Ex-1.14" data-toc-modified-id="Ex-1.14-115"><span class="toc-item-num">1.15 </span>Ex 1.14</a></div><div class="lev3 toc-item"><a href="#Ans" data-toc-modified-id="Ans-1151"><span class="toc-item-num">1.15.1 </span>Ans</a></div><div class="lev2 toc-item"><a href="#Ex-1.15" data-toc-modified-id="Ex-1.15-116"><span class="toc-item-num">1.16 </span>Ex 1.15</a></div><div class="lev2 toc-item"><a href="#Ex-1.16" data-toc-modified-id="Ex-1.16-117"><span class="toc-item-num">1.17 </span>Ex 1.16</a></div><div class="lev3 toc-item"><a href="#Ans" data-toc-modified-id="Ans-1171"><span class="toc-item-num">1.17.1 </span>Ans</a></div><div class="lev3 toc-item"><a href="#Ans" data-toc-modified-id="Ans-1172"><span class="toc-item-num">1.17.2 </span>Ans</a></div><div class="lev2 toc-item"><a href="#Ex-1.17" data-toc-modified-id="Ex-1.17-118"><span class="toc-item-num">1.18 </span>Ex 1.17</a></div><div class="lev3 toc-item"><a href="#1." data-toc-modified-id="1.-1181"><span class="toc-item-num">1.18.1 </span>1.</a></div><div class="lev3 toc-item"><a href="#2." data-toc-modified-id="2.-1182"><span class="toc-item-num">1.18.2 </span>2.</a></div><div class="lev2 toc-item"><a href="#Ex-1.18" data-toc-modified-id="Ex-1.18-119"><span class="toc-item-num">1.19 </span>Ex 1.18</a></div><div class="lev3 toc-item"><a href="#Ans" data-toc-modified-id="Ans-1191"><span class="toc-item-num">1.19.1 </span>Ans</a></div><div class="lev2 toc-item"><a href="#Ex-1.19" data-toc-modified-id="Ex-1.19-120"><span class="toc-item-num">1.20 </span>Ex 1.19</a></div><div class="lev3 toc-item"><a href="#Part-I" data-toc-modified-id="Part-I-1201"><span class="toc-item-num">1.20.1 </span>Part I</a></div><div class="lev4 toc-item"><a href="#Ans" data-toc-modified-id="Ans-12011"><span class="toc-item-num">1.20.1.1 </span>Ans</a></div><div class="lev3 toc-item"><a href="#Part-II" data-toc-modified-id="Part-II-1202"><span class="toc-item-num">1.20.2 </span>Part II</a></div><div class="lev3 toc-item"><a href="#Part-III" data-toc-modified-id="Part-III-1203"><span class="toc-item-num">1.20.3 </span>Part III</a></div><div class="lev4 toc-item"><a href="#Ans" data-toc-modified-id="Ans-12031"><span class="toc-item-num">1.20.3.1 </span>Ans</a></div><div class="lev2 toc-item"><a href="#Ex-1.20" data-toc-modified-id="Ex-1.20-121"><span class="toc-item-num">1.21 </span>Ex 1.20</a></div><div class="lev2 toc-item"><a href="#Ex-1.21" data-toc-modified-id="Ex-1.21-122"><span class="toc-item-num">1.22 </span>Ex 1.21</a></div><div class="lev2 toc-item"><a href="#Ex-1.22" data-toc-modified-id="Ex-1.22-123"><span class="toc-item-num">1.23 </span>Ex 1.22</a></div><div class="lev3 toc-item"><a href="#Part-I" data-toc-modified-id="Part-I-1231"><span class="toc-item-num">1.23.1 </span>Part I</a></div><div class="lev4 toc-item"><a href="#Ans" data-toc-modified-id="Ans-12311"><span class="toc-item-num">1.23.1.1 </span>Ans</a></div><div class="lev3 toc-item"><a href="#Part-II" data-toc-modified-id="Part-II-1232"><span class="toc-item-num">1.23.2 </span>Part 
II</a></div><div class="lev4 toc-item"><a href="#Ans" data-toc-modified-id="Ans-12321"><span class="toc-item-num">1.23.2.1 </span>Ans</a></div><div class="lev3 toc-item"><a href="#Part-III" data-toc-modified-id="Part-III-1233"><span class="toc-item-num">1.23.3 </span>Part III</a></div><div class="lev4 toc-item"><a href="#Ans" data-toc-modified-id="Ans-12331"><span class="toc-item-num">1.23.3.1 </span>Ans</a></div><div class="lev3 toc-item"><a href="#Part-IV" data-toc-modified-id="Part-IV-1234"><span class="toc-item-num">1.23.4 </span>Part IV</a></div><div class="lev4 toc-item"><a href="#Ans" data-toc-modified-id="Ans-12341"><span class="toc-item-num">1.23.4.1 </span>Ans</a></div>
# Chapter 1
## Ex 1.1
Prove
$
p(x,y \mid z) = p(x \mid z)p(y \mid x,z)
$
and also
$
p(x \mid y,z) = \frac{p(y \mid x, z)p(x \mid z)}{p(y \mid z)}
$
## Ans 1.1
$$
\begin{equation}
\begin{aligned}
p(x,y \mid z) &= p(x \mid z)p(y \mid x,z) \\
&= \frac{p(x,z)}{p(z)} \cdot \frac{p(y,x,z)}{p(x,z)} \\
&= \frac{p(y,x,z)}{p(z)} \\
&= p(y,x \mid z)
\end{aligned}
\end{equation}
$$
$$
\begin{equation}
\begin{aligned}
p(x \mid y,z) &= \frac{p(y \mid x, z)p(x \mid z)}{p(y \mid z)} \\
&= \frac{ \frac{p(y, x, z)}{p(x,z)} \cdot \frac{p(x, z)}{p(z)} }{\frac{p(y, z)}{p(z)}} \\
&= \frac{\frac{p(y, x, z)}{p(z)}}{\frac{p(y, z)}{p(z)}} \\
&= \frac{p(y,x,z)}{p(y,z)} \\
&= p(x \mid y,z)
\end{aligned}
\end{equation}
$$
## Ex 1.2
Prove the Bonferroni inequality.
$
p(a,b) \geq p(a) + p(b) - 1
$
### Ans
<img src='img/prove-bonferroni-ineq.jpg'>
## Ex 1.3
There are two boxes. Box 1 contains three red and five white balls and box 2 contains two red and five white balls. A box is chosen at random $p(box=1)=p(box=2) = 0.5$ and a ball chosen at random from this box turns out to be red. What is the posterior probability that the red ball came from box 1?
### Ans
$$
\begin{equation}
\begin{aligned}
p(box = 1 \mid ball = r) &= \frac{p(ball = r \mid box = 1) \cdot p(box = 1)}{p(ball=r)} \\
&= \frac{p(ball = r \mid box = 1) \cdot p(box = 1)}{p(ball = r \mid box = 1) \cdot p(box = 1) + p(ball = r \mid box = 2) \cdot p(box = 2)} \\
&= \frac{\frac{3}{8} \cdot \frac{1}{2}}{\frac{3}{8} \cdot \frac{1}{2} + \frac{2}{7} \cdot \frac{1}{2}} \\
&= \frac{\frac{3}{8}}{\frac{21 + 16}{56}} \\
&= \frac{\frac{3}{1}}{\frac{37}{7}} \\
&= \frac{21}{37}
\end{aligned}
\end{equation}
$$
## Ex 1.4
Two balls are placed in a box as follows: A fair coin is tossed and a white ball is placed in the box if a head occurs, otherwise a red ball is placed in the box. The coin is tossed again and a red ball is placed in the box if a tail occurs, otherwise a white ball is placed in the box. Balls are drawn from the box three times in succession (always with replacing the drawn ball in the box). It is found that on all three occasions a red ball is drawn. What is the probability that both balls in the box are red?
### Ans
End of explanation
from graphviz import Digraph
dot = Digraph('Ex 1.5')
dot.node('T_k') # Whether or not a person k is a terrorist.
dot.edge('T_k', 'S_k=1') #
dot.edge('S_k=1', 'H_k=1')
# H_j reprents whether or not the jth person got hauled off.
# 'H_k-1=0,...,H_1=0' represents the idea that people earlier
# in line did not get hauled off.
dot.edge('H_k-1=0,...,H_1=0', 'H_k=1')
dot
import numpy as np
import pandas as pd
num_sim = 10000000
num_passengers = 100
# 1 of the 100 is a terrorist. We assume any ordering of the 100 people is equally likely.
is_terrorist = np.random.multinomial(n=1, pvals=np.ones(num_passengers) * 0.01, size=num_sim)
# P(S_k=1 | T_k)
scan_positive_proba = is_terrorist * 0.95 + (is_terrorist == 0) * 0.05
# Simulate individuals going through the scanner.
scan_positive = np.random.binomial(n=1, p=scan_positive_proba)
# Find the indices of where individuals had a positive scan.
x, y = np.where(scan_positive == 1)
x_y = pd.DataFrame({'x': x, 'y': y})
# Only keep the first row
hauled_x_y = x_y.drop_duplicates(subset=['x'])
x_y
# The probability of being a terrorist, given the scanner resulted in a positive.
is_terrorist[x_y['x'], x_y['y']].mean()
# The probability of being a terrorist, given the individual got hauled off
is_terrorist[hauled_x_y['x'], hauled_x_y['y']].mean()
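# An exact Bayes check for the plain "positive scan" posterior estimated above,
# ignoring the first-person-hauled-off selection effect (a small sanity-check sketch):
print((0.01 * 0.95) / (0.01 * 0.95 + 0.99 * 0.05))   # ~0.161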
Explanation: $$
\begin{equation}
\begin{aligned}
P(B_1 = r, B_2 = r \mid S_1 = r, S_2 = r, S_3 = r)
&= \frac{ P(S_1 = r, S_2 = r, S_3 = r \mid B_1 = r, B_2 = r)
\cdot P(B_1 = r, B_2 = r) }{ P(S_1=r, S_2=r, S_3=r)} \\
&= \frac{ P(S_1 = r \mid B_1 = r, B_2 = r) \cdot P(S_2 = r \mid B_1 = r, B_2 = r) \cdot P(S_3 = r \mid B_1 = r, B_2 = r) \cdot P(B_1 = r, B_2 = r) }{ P(S_1=r, S_2=r, S_3=r) }
&\text{Samples are independent from each other, given we know what the actual types of balls are.} \\
&= \frac{P(B_1=r, B_2=r)}{ P(S_1=r, S_2=r, S_3=r) } &\text{If both balls are red, then each sample will certainly be red} \\
&= \frac{P(B_1=r)P(B_2=r)}{ P(S_1=r, S_2=r, S_3=r) } &\text{$B_1$ is marginally independent from $B_2$.} \\
&= \frac{\frac{1}{2} \cdot \frac{1}{2}}{ P(S_1=r, S_2=r, S_3=r) } &\text{Ball type is determined by a fair coin flip.} \\
&= \frac{\frac{1}{4}}{ P(S_1=r, S_2=r, S_3=r) } &\text{Simplify} \\
&= \frac{\frac{1}{4}}{ P(S_1 = r \mid B_1 = r, B_2 = r) \cdot P(S_2 = r \mid B_1 = r, B_2 = r) \cdot P(S_3 = r \mid B_1 = r, B_2 = r) \cdot P(B_1=r, B_2=r)
+ 2 \cdot P(S_1 = r \mid B_1 = r, B_2 = w) \cdot P(S_2 = r \mid B_1 = r, B_2 = w) \cdot P(S_3 = r \mid B_1 = r, B_2 = w) \cdot P(B_1=r, B_2=w)
+ P(S_1 = r \mid B_1 = w, B_2 = w) \cdot P(S_2 = r \mid B_1 = w, B_2 = w) \cdot P(S_3 = r \mid B_1 = w, B_2 = w) \cdot P(B_1=w, B_2=w)
} \\
&= \frac{\frac{1}{4}}{ \frac{1}{4}
+ 2 \cdot P(S_1 = r \mid B_1 = r, B_2 = w) \cdot P(S_2 = r \mid B_1 = r, B_2 = w) \cdot P(S_3 = r \mid B_1 = r, B_2 = w) \cdot P(B_1=r, B_2=w)
+ P(S_1 = r \mid B_1 = w, B_2 = w) \cdot P(S_2 = r \mid B_1 = w, B_2 = w) \cdot P(S_3 = r \mid B_1 = w, B_2 = w) \cdot P(B_1=w, B_2=w)
} &\text{Same computation as before.} \\
&= \frac{\frac{1}{4}}{ \frac{1}{4}
+ 2 \cdot P(S_1 = r \mid B_1 = r, B_2 = w) \cdot P(S_2 = r \mid B_1 = r, B_2 = w) \cdot P(S_3 = r \mid B_1 = r, B_2 = w) \cdot P(B_1=r, B_2=w)
+ 0
} &\text{Impossible to get a red sample when both balls are white.} \\
&= \frac{\frac{1}{4}}{ \frac{1}{4}
+ 2 \cdot \frac{1}{2} \cdot \frac{1}{2} \cdot \frac{1}{2} \cdot \frac{1}{4}
+ 0
} &\text{$50\%$ chance to sample a red ball when one is red and the other is white.} \\
&= \frac{\frac{1}{4}}{\frac{1}{4} + \frac{1}{16}} \\
&= \frac{\frac{1}{4}}{\frac{5}{16}} \\
&= \frac{16}{4 \cdot 5} \\
&= \frac{4}{5}
\end{aligned}
\end{equation}
$$
Ex 1.5
A secret agency has developed a scanner which determines whether a person is a terrorist. The scanner is fairly reliable; 95% of all scanned terrorists are identified as terrorists, and 95% of all upstanding citizens are identified as such. An informant tells the agency that exactly one passenger of the 100 aboard an aeroplane in which you are seated is a terrorist. The police haul off the plane the first person for which the scanner tests positive. What is the probability that this person is a terrorist?
Ans
Let $T_k \in \{0,1\}$ describe whether or not person $k$ is a terrorist. Let $S_k \in \{0,1\}$ describe the result of the scanner for person $k$, with 0 as a negative result and 1 as a positive result. Let $H_k \in \{0,1\}$ describe the person $k$ having been hauled off. Similarly, let $H_{k-1}=0,...,H_1=0$ describe people $0,...,k-1$ not being hauled off.
End of explanation
import numpy as np
num_simulations = 10000
# First probability is our prior belief that the Butler was the killer, and not the Maid
# Second probability is our prior belief that the Maid was the killer, and not the Butler
# Third probability is our prior belief that neither the Butler nor the Maid was the killer.
killer = np.random.multinomial(n=1, pvals=[0.6,0.2,0.2], size=num_simulations)
# If the Butler and not the Maid was the killer, then the probability of using the knife is 0.6
# If the Maid and not the Butler was the killer, then the probability of using the knife is 0.2
# If neither the Butler nor the Maid was the killer, then the probability of using the knife is 0.3
knife_used = np.random.binomial(n=1, p=killer[:,0] * 0.6 + killer[:,1] * 0.2 + killer[:,2] * 0.3)
given_knife_used = np.where(knife_used)[0]
# Out of all the situations where the knife was used, we look at how many of them were done by the Butler?
killer[given_knife_used][:, 0].sum() / killer[given_knife_used].shape[0]
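# An exact Bayes check (a small sanity-check sketch) to compare against the simulation above:
# P(Butler | knife) = P(knife | Butler) P(Butler) / sum over the three hypotheses.
print((0.6 * 0.6) / (0.6 * 0.6 + 0.2 * 0.2 + 0.2 * 0.3))   # ~0.783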
Explanation: Ex 1.6
Consider three variable distributions which admit the factorisation
$p(a, b, c) = p(a|b)p(b|c)p(c)$, where all variables are binary. How many parameters are needed to specify distributions of this form?
Ans
We would need 5 parameters to specify the whole distribution: 1 to specify $P(c)$. If we know $P(C=1)$, then we know $P(C=0)$ since all probabilities need to sum to 1. Likewise, for $P(b \mid c=1)$, if we know $P(B=1 \mid C=1)$, we know $P(B=0 \mid C=1)$. But we also have to consider the case when C=0. That would take another parameter. Now we're at 3 as total. Finally, we consider $P(A \mid B)$. Those would require 2 parameters. Once we know $P(A = 1 \mid B=1)$ and $P(A = 1 \mid B=0)$, then we know the other values. In sum, 3 variable distributions which admit the factorization above would require 5 parameters total.
Ex 1.7.
Repeat the Inspector Clouseau scenario, example(1.3), but with the restriction that either the maid or the butler is the murderer, but not both. Explicitly, the probability of the maid being the murderer and not the butler is 0.04, the probability of the butler being the murderer and not the maid is 0.64. Modify demoClouseau.m to implement this.
Ans
End of explanation
import numpy as np
num_sims = 100_000
A = np.random.binomial(n=1, p=0.65, size=num_sims)
B = np.random.binomial(n=1, p=0.77, size=num_sims)
C = np.random.binomial(n=1, p=0.1 * ((A == 0) & (B == 0)) + 0.99 * ((A == 0) & (B == 1)) + 0.8 * ((A == 1) & (B == 0)) + 0.25 * (A & B))
C_is_zero = np.where(C == 0)[0]
A[C_is_zero].sum() / A[C_is_zero].shape[0]
Explanation: The probability that the Butler was the killer and not the Maid, given that we observed that the knife was used, is 78.3%.
Ex 1.8
Prove $p(a,(b \text{ or } c))=p(a,b)+p(a,c)โp(a,b,c)$
Ans
$$
\begin{equation}
\begin{aligned}
P(a,(b \text{ or } c)) &= P(A=1, (B=1 \text{ or } C=1)) \\
&= P(A=1, B=1, C=1) + P(A=1, B=0, C=1) + P(A=1, B=1, C=0) \\
&= [P(A=1, B=1, C=1) + P(A=1, B=1, C=0)] + P(A=1, B=0, C=1) \\
&= P(A=1, B=1) + P(A=1, B=0, C=1) \\
&= P(A=1, B=1) + P(A=1, B=0, C=1) + P(A=1, B=1, C=1) - P(A=1, B=1, C=1) \\
&= P(A=1, B=1) + P(A=1, C=1) - P(A=1, B=1, C=1) \\
&= p(a,b) + p(a,c) - p(a,b,c)
\end{aligned}
\end{equation}
$$
Ex 1.9
Prove $P(x \mid z) = \sum_y P(x \mid y, z) \cdot P(y \mid z) = \sum_{y,w} P(x \mid w, y, z) \cdot P(w \mid y, z) \cdot P(y \mid z)$
Ans
Part I:
$$
\begin{equation}
\begin{aligned}
P(x \mid z) &= \sum_y P(x,y \mid z) \\
&= \sum_y \frac{P(x, y, z)}{P(z)} \\
&= \sum_y \frac{P(x, y, z)}{P(z)} \cdot \frac{P(y, z)}{P(y, z)} \\
&= \sum_y \frac{P(x, y, z)}{P(y, z)} \cdot \frac{P(y, z)}{P(z)} \\
&= \sum_y P(x \mid y, z) \cdot P(y \mid z)
\end{aligned}
\end{equation}
$$
Part II:
$$
\begin{equation}
\begin{aligned}
\sum_y P(x \mid y, z) \cdot P(y \mid z) &= \sum_{w,y} P(x,y,w \mid z) \\
&= \sum_{w,y} \frac{P(x,y,w,z)}{P(z)} \\
&= \sum_{w,y} \frac{P(x \mid y, w, z) \cdot P(y, w, z)}{P(z)} \\
&= \sum_{w,y} \frac{P(x \mid y, w, z) \cdot P(w \mid z, y) \cdot P(z, y)}{P(z)} \\
&= \sum_{w,y} P(x \mid y, w, z) \cdot P(w \mid z, y) \cdot P(y \mid z)
\end{aligned}
\end{equation}
$$
Ex 1.10
As a young man Mr Gott visits Berlin in 1969. He's surprised that he cannot cross into East Berlin since there is a wall separating the two halves of the city. He's told that the wall was erected 8 years previously. He reasons that: the wall will have a finite lifespan; his ignorance means that he arrives uniformly at random at some time in the lifespan of the wall. Since only 5% of the time one would arrive in the first or last 2.5% of the lifespan of the wall he asserts that with 95% confidence the wall will survive between 8/0.975 ≈ 8.2 and 8/0.025 = 320 years. In 1989 the now Professor Gott is pleased to find that his prediction was correct and promotes his prediction method in prestigious journals. This "delta-t" method is widely adopted and used to form predictions in a range of scenarios about which researchers are "totally ignorant". Would you "buy" a prediction from Prof. Gott? Explain carefully your reasoning.
Ans
The delta-t rule sounds sensible only if we really are totally ignorant about the quantity we are predicting. In practice, however, we are rarely totally ignorant: we usually have some knowledge, for example structural knowledge about which variables are likely to affect which others, that lets us make a better-informed guess. The internet also gives us access to other people's work and research, which we can use to further inform our predictions. So I would not simply "buy" a prediction from Prof. Gott when relevant prior knowledge is available.
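As a quick aside (this simulation is mine, with an arbitrary true lifespan, not part of the original answer), the frequentist coverage behind the delta-t rule is easy to check: for a fixed lifespan and a uniformly random arrival time a, the interval [a/0.975, a/0.025] contains the true lifespan about 95% of the time by construction.
import numpy as np
rng = np.random.default_rng(1)
T = 100.0                                  # arbitrary true lifespan
a = rng.uniform(0, T, size=100_000)        # uniformly random arrival times
covered = (T >= a / 0.975) & (T <= a / 0.025)
print(covered.mean())                      # approximately 0.95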
Ex 1.11
Implement the soft XOR gate, example(1.7) using BRMLtoolbox. You may find condpot.m of use.
End of explanation
import pandas as pd
kj = pd.DataFrame([
{'KJ': 1, 'proba': 1/100_000},
{'KJ': 0, 'proba': 99_999 / 100_000}
])
hamburger_given_kj = pd.DataFrame([
{'hamburger_eater': 1, 'proba': 0.9, 'KJ': 1},
{'hamburger_eater': 0, 'proba': 0.1, 'KJ': 1}
])
hamburger = pd.DataFrame([
{'hamburger_eater': 1, 'proba': 0.5},
{'hamburger_eater': 0, 'proba': 0.5},
])
merged = hamburger_given_kj.merge(kj, on='KJ', suffixes=('_hamburger_eater', '_KJ'))
merged['proba'] = merged['proba_hamburger_eater'] * merged['proba_KJ']
merged
blah = merged.merge(hamburger, on='hamburger_eater')
blah['proba'] = blah['proba_x'] / blah['proba_y']
blah[blah['hamburger_eater'] == 1]['proba']
Explanation: Ex 1.12
Implement the hamburgers, example(1.2) (both scenarios) using BRMLtoolbox. To do so you will need to define the joint distribution $p(\text{hamburgers},\text{KJ})$ in which $dom(\text{hamburgers}) = dom(\text{KJ}) = {tr, fa}$.
Ans
Consider the following fictitious scientific information: Doctors find that people with Kreuzfeld-Jacob disease (KJ) almost invariably ate hamburgers, thus $p(\text{Hamburger Eater} \mid \text{KJ} ) = 0.9$. The probability of an individual having KJ is currently rather low, about one in 100,000.
Part I
Assuming eating lots of hamburgers is rather widespread, say p(Hamburger Eater) = 0.5, what is the probability that a hamburger eater will have Kreuzfeld-Jacob disease?
End of explanation
hamburger = pd.DataFrame([
{'hamburger_eater': 1, 'proba': 0.001},
{'hamburger_eater': 0, 'proba': 0.999},
])
merged = hamburger_given_kj.merge(kj, on='KJ', suffixes=('_hamburger_eater', '_KJ'))
merged['proba'] = merged['proba_hamburger_eater'] * merged['proba_KJ']
merged
blah = merged.merge(hamburger, on='hamburger_eater')
blah['proba'] = blah['proba_x'] / blah['proba_y']
blah[blah['hamburger_eater'] == 1]['proba']
Explanation: Part II
Assuming instead that eating lots of hamburgers is rather rare, say p(Hamburger Eater) = 0.001, what is the probability that a hamburger eater will have Kreuzfeld-Jacob disease?
End of explanation
import numpy as np
import pandas as pd
num_samples = 100_000
dice_1 = np.where(np.random.multinomial(n=1, pvals=np.ones(6) * 1/6, size=num_samples))[1] + 1
dice_2 = np.where(np.random.multinomial(n=1, pvals=np.ones(6) * 1/6, size=num_samples))[1] + 1
summation = dice_1 + dice_2
joint = pd.DataFrame({'dice_1': dice_1, 'dice_2': dice_2, 'summation': summation}, index=list(range(num_samples)))
is_nine = joint[joint['summation'] == 9].copy()
(is_nine.groupby(['dice_1', 'dice_2']).count() / is_nine.groupby(['dice_1', 'dice_2']).count().sum()).rename(columns={'summation': 'posterior_proba'})
Explanation: Ex 1.13
Implement the two-dice example, section(1.3.1) using BRMLtoolbox.
Ans
Ancestral Sampling
End of explanation
dice_1 = pd.DataFrame({'dice_1': [1,2,3,4,5,6], 'proba': np.ones(6) / 6, 'key': 0})
dice_2 = pd.DataFrame({'dice_2': [1,2,3,4,5,6], 'proba': np.ones(6) / 6, 'key': 0})
cross_join = dice_1.merge(dice_2, on='key')
cross_join['sum'] = cross_join['dice_1'] + cross_join['dice_2']
cross_join['joint_proba'] = cross_join['proba_x'] * cross_join['proba_y']
sum_is_nine = cross_join[cross_join['sum'] == 9].copy()
sum_is_nine
sum_is_nine['posterior_proba'] = sum_is_nine['joint_proba'] / sum_is_nine['joint_proba'].sum()
sum_is_nine[['dice_1', 'dice_2', 'posterior_proba']]
Explanation: Through Pandas
End of explanation
num_sims = 10_000
actual = np.tile(np.array([1,2,3,4,5]), (num_sims,1))
medium_guess = np.tile(np.array([1,2,3,4,5]), (num_sims,1))
for i in range(num_sims):
np.random.shuffle(actual[i])
np.random.shuffle(medium_guess[i])
pd.DataFrame({'correct': (actual == medium_guess).sum(axis=1), 'count': 0}).groupby('correct').count()
# On average, we should expect the medium to get one guess correctly.
(actual == medium_guess).sum(axis=1).mean()
# The probability that the medium will obtain at least 1 correct match
((actual == medium_guess).sum(axis=1) >= 1).sum() / num_sims
Explanation: Ex 1.14
A redistribution lottery involves picking the correct four numbers from 1 to 9 (without replacement, so 3,4,4,1 for example is not possible). The order of the picked numbers is irrelevant. Every week a million people play this game, each paying ยฃ1 to enter, with the numbers 3,5,7,9 being the most popular (1 in every 100 people chooses these numbers). Given that the million pounds prize money is split equally between winners, and that any four (different) numbers come up at random, what is the expected amount of money each of the players choosing 3,5,7,9 will win each week? The least popular set of numbers is 1,2,3,4 with only 1 in 10,000 people choosing this. How much do they profit each week, on average? Do you think there is any โskillโ involved in playing this lottery?
Ans
The probability of getting exactly 3,5,7,9 (in that order) is $P(\text{Lotto} = [3,5,7,9]) = \frac{1}{9} \cdot \frac{1}{8} \cdot \frac{1}{7} \cdot \frac{1}{6} = \frac{1}{3024}$. However, we need to take into account that order doesn't matter. There are $4 \cdot 3 \cdot 2 \cdot 1 = 24$ ways of reordering 3,5,7,9. Thus, $P(\text{Lotto} = {3,5,7,9}) = 24 \cdot \frac{1}{3024} \approx 0.007936$. If $\frac{1}{100}$ of the 1,000,000 people who enter the lottery every week pick these numbers then there would be 10,000 people every week who use these numbers. Expected amount of money each of the players choosing 3,5,7,9 will win each week is $P(\text{Lotto} = {3,5,7,9}) \cdot \text{Prize money per person} = P(\text{Lotto} = {3,5,7,9}) \cdot \frac{1,000,000}{10,000} = 0.007936 \cdot 100 = ยฃ0.79$. On the other hand, the people choosing the set ${1,2,3,4}$ would win, on average, $P(\text{Lotto} = {1,2,3,4}) \cdot \frac{1,000,000}{100} = 0.007936 \cdot 10,000 = ยฃ79.$
The "skill" involved in playing this lottery is figuring out what set of numbers are least popular. You could exploit this information to maximize your payoff, on average.
Ex 1.15
In a test of โpsychometryโ the car keys and wrist watches of 5 people are given to a medium. The medium then attempts to match the wrist watch with the car key of each person. What is the expected number of correct matches that the medium will make (by chance)? What is the probability that the medium will obtain at least 1 correct match?
End of explanation
def process(data):
to_add = []
for num in data:
to_add += list(range(num, 0, -1))
return to_add
def process_data_num_times(num, data):
big_list = process(data)
for _ in range(num - 1):
big_list = process(big_list)
return big_list
# There are 120 different unique sets of orders, when each individual can order one out of 4 pizzas and there are 7 individuals.
sum(process_data_num_times(5, [4,3,2,1]))
Explanation: Ex 1.16
Show that for any function f
$\qquad\sum_x p(x \mid y) \cdot f(y) = f(y)$
Ans
$$
\begin{equation}
\begin{aligned}
\sum_x p(x \mid y) &= \sum_x \frac{p(x,y)}{p(y)} \
&= \frac{1}{p(y)} \cdot \sum_x p(x,y) \
&= \frac{1}{p(y)} \cdot p(y) \
&= 1 \
\end{aligned}
\end{equation}
$$
Thus,
$$
\begin{equation}
\begin{aligned}
\sum_x p(x \mid y) \cdot f(y) = f(y)
\end{aligned}
\end{equation}
$$
Explain why, in general,
$\qquad \sum_x p(x \mid y) f(x,y) \neq \sum_x f(x,y)$
Ans
The left-hand side is a weighted average of $f(x,y)$ over $x$, with weights $p(x \mid y)$ that are each at most 1 and that sum to 1. Because $f$ depends on $x$, it cannot be pulled out of the sum, so
$\qquad \sum_x p(x \mid y) f(x,y) = p(x_1 \mid y) f(x_1,y) + p(x_2 \mid y) f(x_2,y) + \ldots$
whereas the right-hand side is the unweighted sum $f(x_1,y) + f(x_2,y) + \ldots$, which effectively gives every term a weight of 1. The two coincide only in special cases, for instance when $f(x,y) = 0$ for every $x$.
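A tiny numeric illustration with made-up values makes the point concrete:
p_x_given_y = [0.2, 0.8]     # conditional probabilities, summing to 1
f_of_xy = [3.0, 5.0]         # f(x1, y), f(x2, y)
weighted = sum(p * f for p, f in zip(p_x_given_y, f_of_xy))   # 4.6, a weighted average
unweighted = sum(f_of_xy)                                     # 8.0, the plain sum
print(weighted, unweighted)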
Ex 1.17
Seven friends decide to order pizzas by telephone from Pizza4U based on a flyer pushed through their letterbox. Pizza4U has only 4 kinds of pizzas, and each person chooses a pizza independently. Bob phones Pizza4U and places the combined pizza order, simply stating how many pizzas of each kind are required. Unfortunately, the precise order is lost, so the chef makes seven randomly chosen pizzas and then passes them to the delivery boy.
1.
How many different combined orders are possible?
End of explanation
num_sims = 10_000
num_friends = 7
friend_1 = np.where(np.random.multinomial(1, pvals=[0.25, 0.25, 0.25, 0.25], size=num_sims))[1]
pizza_choices_by_friends = np.random.multinomial(1, pvals=[0.25, 0.25, 0.25, 0.25], size=(num_sims, num_friends))
pizza_choices_by_chef = np.random.multinomial(1, pvals=[0.25, 0.25, 0.25, 0.25], size=(num_sims, num_friends))
friends = np.where(pizza_choices_by_friends)[1]
pizza_choice = np.where(pizza_choices_by_friends)[2]
chef_pizza_choice = np.where(pizza_choices_by_chef)[2]
def compute_proba_chef_correct(pizza_choice, chef_pizza_choice, num_sims, num_friends):
count = 0
for i in range(0, num_sims, num_friends):
set_1 = set(pizza_choice[i:i+num_friends])
set_2 = set(chef_pizza_choice[i:i+num_friends])
if set_1 == set_2:
count += 1
return count / num_sims
# Probability that the delivery boy has the right order
compute_proba_chef_correct(pizza_choice, chef_pizza_choice, num_sims, num_friends)
Explanation: 2.
What is the probability that the delivery boy has the right order?
End of explanation
import numpy as np
num_samples = 100_000
def simulate_chain():
rain_given_rained = np.random.binomial(n=1, p=0.7, size=num_samples)
sunny_given_sunny = np.random.binomial(n=1, p=0.4, size=num_samples)
rain = np.random.binomial(n=1, p=0.5, size=num_samples)
for i in range(num_samples-1):
rain[i+1] = rain[i] * rain_given_rained[i] + (1 - rain[i]) * (1 - sunny_given_sunny[i])
return rain
rain_events = simulate_chain()
# Probability of raining on any given day
rain_events.mean()
Explanation: Ex 1.18
Sally is new to the area and listens to some friends discussing about another female friend. Sally knows that they are talking about either Alice or Bella but doesnโt know which. From previous conversations Sally knows some independent pieces of information: Sheโs 90% sure that Alice has a white car, but doesnโt know if Bellaโs car is white or black. Similarly, sheโs 90% sure that Bella likes sushi, but doesnโt know if Alice likes sushi. Sally hears from the conversation that the person being discussed hates sushi and drives a white car. What is the probability that the friends are talking about Alice? Assume maximal uncertainty in the absence of any knowledge of the probabilities.
Ans
$$
\begin{equation}
\begin{aligned}
P(A = 1 \mid C = w, S = h) &= \frac{P(C = w, S=h \mid A = 1) \cdot P(A=1)}{P(S=h, C = w)} \
&= \frac{P(C = w, S=h \mid A = 1) \cdot P(A=1)}{P(C = w, S=h \mid A = 1) \cdot P(A=1) + P(C = w, S=h \mid A = 0) \cdot P(A=0)} \
&= \frac{P(C = w, S=h \mid A = 1)}{P(C = w, S=h \mid A = 1) + P(C = w, S=h \mid A = 0)} \
&= \frac{P(C = w \mid A=1) \cdot P(S=h \mid A = 1)}{P(C = w \mid A=1) \cdot P(S=h \mid A = 1) + P(C = w \mid A= 0) \cdot P(S=h \mid A = 0)} & \text{Once we know if it's Alice, then the value of $S$ and the value of $C$ is independent from each other} \
&= \frac{0.9 \cdot 0.5}{0.9 \cdot 0.5 + 0.5 \cdot 0.1} \
&= \frac{0.45}{0.5} \
&= 90\% \
\end{aligned}
\end{equation}
$$
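A quick numeric check of the 90% result (a sketch; the 0.5 values encode the assumed maximal uncertainty):
prior_alice = 0.5
like_alice = 0.9 * 0.5    # P(white car | Alice) * P(hates sushi | Alice)
like_bella = 0.5 * 0.1    # P(white car | Bella) * P(hates sushi | Bella)
posterior_alice = like_alice * prior_alice / (like_alice * prior_alice + like_bella * (1 - prior_alice))
print(posterior_alice)    # 0.9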
Ex 1.19
The weather in London can be summarised as: if it rains one day thereโs a 70% chance it will rain the following day; if itโs sunny one day thereโs a 40% chance it will be sunny the following day.
Part I
Assuming that the prior probability it rained yesterday is 0.5, what is the probability that it was raining yesterday given that itโs sunny today?
Ans
$$
\begin{equation}
\begin{aligned}
P(D_{yesterday}=r \mid D_{today}=s) &= \frac{P(D_{today}=s \mid D_{yesterday}=r) \cdot P(D_{yesterday}=r)}{P(D_{today}=s \mid D_{yesterday}=r) \cdot P(D_{yesterday}=r) + P(D_{today}=s \mid D_{yesterday}=s) \cdot P(D_{yesterday}=s)} \
&= \frac{0.3}{0.3 + 0.4} \
&=\frac{3}{7}
\end{aligned}
\end{equation}
$$
Part II
If the weather follows the same pattern as above, day after day, what is the probability that it will rain on any day (based on an effectively infinite number of days of observing the weather)?
End of explanation
import numpy as np
PLACE_SHIP = 2
class Ship:
def __init__(self, length):
self.length = length
def place(self, x0, y0, vert, sea):
if vert:
x1 = x0 + self.length
y1 = y0
else:
x1 = x0
y1 = y0 + self.length
xs, ys = Ship.get_pixels(x0, y0, x1, y1)
sea[xs, ys] *= PLACE_SHIP
def generate_potential_location(width, height):
for i in range(width):
for j in range(height):
for vert in [True, False]:
yield (i, j, vert)
def get_pixels(x0, y0, x1, y1):
xs = []
ys = []
#import pdb; pdb.set_trace()
if x0 == x1:
y = y0
while y < y1:
xs.append(x0)
ys.append(y)
y += 1
if y0 == y1:
x = x0
while x < x1:
xs.append(x)
ys.append(y0)
x += 1
return xs, ys
class ProbaPixels:
DROP_BOMB = 3
PLACE_SHIP = 2
def __init__(self, size, attempts, ships):
self.attempts = attempts
self.ships = ships
self.size = size
def create(width, height, attempts):
sea = np.ones((width, height))
for attempt in attempts:
try:
sea[attempt[0]-1, attempt[1]-1] *= ProbaPixels.DROP_BOMB
except IndexError:
import pdb; pdb.set_trace()
return sea
def calculate(self, func):
legal_count = 0
proba_pixels = np.zeros((self.size[1], self.size[2]))
probs_pix, leg_count = func(self.size[1], self.size[2], legal_count, proba_pixels, self.attempts)
return probs_pix / leg_count
def sample_1(width, height, legal_count, proba_pixels, attempts):
ship_a = Ship(length=5)
ship_b = Ship(length=5)
for x0_a, y0_a, vert_a in Ship.generate_potential_location(width, height):
for x0_b, y0_b, vert_b in Ship.generate_potential_location(width, height):
sea = ProbaPixels.create(width, height, attempts)
try:
ship_a.place(x0=x0_a, y0=y0_a, vert=vert_a, sea=sea)
ship_b.place(x0=x0_b, y0=y0_b, vert=vert_b, sea=sea)
except IndexError:
continue
# If there are no hits
# and if no ships overlap
# and if no ships overlap and were hit,
# then it's a legal configuration
#import pdb; pdb.set_trace()
#if (sea[(5, 6, 7, 8, 9), (5, 5, 5, 5, 5)] == ProbaPixels.PLACE_SHIP).sum() == 5:
# import pdb; pdb.set_trace()
if ((sea == ProbaPixels.PLACE_SHIP * ProbaPixels.DROP_BOMB) \
| (sea == ProbaPixels.PLACE_SHIP * ProbaPixels.PLACE_SHIP) \
| (sea == ProbaPixels.PLACE_SHIP * ProbaPixels.PLACE_SHIP * ProbaPixels.DROP_BOMB)).sum() == 0:
# if (sea[(5, 6, 7, 8, 9), (5, 5, 5, 5, 5)] == ProbaPixels.PLACE_SHIP).sum() == 5:
# import pdb; pdb.set_trace()
legal_count += 1
x, y = np.where(sea == PLACE_SHIP)
proba_pixels[x,y] += 1
return proba_pixels, legal_count
ship_1 = Ship(length=5)
ship_2 = Ship(length=5)
ships = [ship_1, ship_2]
attempts = [(1,10),(2,2),(3,8),(4,4),(5,6),(6,5),(7,4),(7,7),(9,2),(9,9)]
size = (10_000, 10, 10)
proba_pixels = ProbaPixels(size=size, attempts=attempts, ships=ships)
probas = proba_pixels.calculate(sample_1)
import seaborn as sns
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(10,10))
sns.heatmap(probas, annot=True, ax=ax)
ax.set_title("Posterior Distribution")
ax.set_xlabel("Y-axis")
ax.set_ylabel("X-axis")
ax.set_xticklabels(list(range(1,11)))
ax.set_yticklabels(list(range(1,11)))
probas.max()
np.where(probas == probas.max())
Explanation: Part III
Use the result from part 2 above as a new prior probability of rain yesterday and recompute the probability that it was raining yesterday given that itโs sunny today.
Ans
$$
\begin{equation}
\begin{aligned}
P(D_{yesterday}=r \mid D_{today}=s) &= \frac{P(D_{today}=s \mid D_{yesterday}=r) \cdot P(D_{yesterday}=r)}{P(D_{today}=s \mid D_{yesterday}=r) \cdot P(D_{yesterday}=r) + P(D_{today}=s \mid D_{yesterday}=s) \cdot P(D_{yesterday}=s)} \
&= \frac{0.3 \cdot \frac{2}{3}}{0.3 \cdot \frac{2}{3} + 0.4 \cdot \frac{1}{3}} \
&= 0.6
\end{aligned}
\end{equation}
$$
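A short closed-form check (a sketch, independent of the simulation above): the stationary rain probability p solves p = 0.7 p + 0.6 (1 - p), giving p = 2/3, and redoing the Bayes update with that prior reproduces the 0.6.
p_rain = 0.6 / 0.9                                              # stationary probability of rain, 2/3
posterior = 0.3 * p_rain / (0.3 * p_rain + 0.4 * (1 - p_rain))
print(p_rain, posterior)                                        # 0.666..., 0.6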
Ex 1.20
A game of Battleships is played on a 10 ร 10 pixel grid. There are two 5-pixel length ships placed uniformly at random on the grid, subject to the constraints that (i) the ships cannot overlap and (ii) one ship is vertical and the other horizontal. After 10 unsuccessful โmissesโ in locations (1,10),(2,2),(3,8),(4,4),(5,6),(6,5),(7,4),(7,7),(9,2),(9,9) calculate which pixel has the highest proba- bility of containing a ship. State this pixel and the value of the highest probability.
End of explanation
def ex_1_21(width, height, legal_count, proba_pixels, attempts):
ship_a = Ship(length=5)
ship_b = Ship(length=5)
for x0_a, y0_a, vert_a in Ship.generate_potential_location(width, height):
for x0_b, y0_b, vert_b in Ship.generate_potential_location(width, height):
sea = ProbaPixels.create(width, height, attempts)
try:
ship_a.place(x0=x0_a, y0=y0_a, vert=vert_a, sea=sea)
ship_b.place(x0=x0_b, y0=y0_b, vert=vert_b, sea=sea)
except IndexError:
continue
mask = np.ones(sea.shape, bool)
mask[4,4] = False
# If there's a hit at (5,5) (where our indexing starts at 1),
# and there's no other hit,
# and there's no overlap and hit,
# and there's no overlap and no hit
if (sea[4,4] == ProbaPixels.PLACE_SHIP * ProbaPixels.DROP_BOMB) and \
((sea[mask] != ProbaPixels.PLACE_SHIP * ProbaPixels.DROP_BOMB).sum() == sea[mask].shape[0]) \
and ((sea == ProbaPixels.PLACE_SHIP * ProbaPixels.PLACE_SHIP * ProbaPixels.DROP_BOMB)).sum() == 0 \
and (sea == ProbaPixels.PLACE_SHIP * ProbaPixels.PLACE_SHIP).sum() == 0:
legal_count += 1
x, y = np.where(sea == PLACE_SHIP)
proba_pixels[x,y] += 1
return proba_pixels, legal_count
ship_1 = Ship(length=5)
ship_2 = Ship(length=5)
ships = [ship_1, ship_2]
attempts = [(1,1),(2,2),(5,5)]
size = (10_000, 8, 8)
proba_pixels = ProbaPixels(size=size, attempts=attempts, ships=ships)
probas_1_21 = proba_pixels.calculate(ex_1_21)
import seaborn as sns
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(10,10))
sns.heatmap(probas_1_21, annot=True, ax=ax)
ax.set_title("Posterior Distribution")
ax.set_xlabel("Y-axis")
ax.set_ylabel("X-axis")
ax.set_xticklabels(list(range(1,9)))
ax.set_yticklabels(list(range(1,9)))
np.where(probas_1_21 == probas_1_21.max())
probas_1_21.max()
Explanation: The most likely spots are (1,6) and (5,10) with 20.18% probability.
Ex 1.21
A game of Battleships is played on a 8 ร 8 grid. There are two 5-pixel length ships placed horizontally and two 5-pixel length ships placed vertically, subject to the same constraints as in the previous question. Given โmissesโ in locations (1, 1), (2, 2) and a โhitโ in location (5, 5), which pixel most likely contains
a ship and what is that probability?
End of explanation
import numpy as np
import scipy.stats as st
def sample_explosions(num_samples=10_000):
radius_mult = np.random.rand(num_samples)
deg = np.random.rand(num_samples) * 2 * np.pi
x = radius_mult * np.sin(deg)
y = radius_mult * np.cos(deg)
return (x,y)
def discretized_sample_explosions():
radius_mult_range = np.arange(0, 1, 0.04)
rotation_range = np.arange(0, 2 * np.pi, 2 * np.pi / 144)
xs = []
ys = []
for radius_mult in radius_mult_range:
for rot in rotation_range:
xs.append(radius_mult * np.sin(rot))
ys.append(radius_mult * np.cos(rot))
return (np.array(xs), np.array(ys))
real_explosions_x, real_explosions_y = sample_explosions(num_samples=2)
#real_explosions_x = [-0.5, -0.05] # similar to the example data
#real_explosions_y = [0.4, 0.3]
sensors_x = np.sin(2 * np.pi / 12 * np.arange(12))
sensors_y = np.cos(2 * np.pi / 12 * np.arange(12))
def distance(x0, y0, x1, y1):
return (x0 - x1) ** 2 + (y0 - y1) ** 2
def sensor_value_additive(explosions_x, explosions_y, sensor_x, sensor_y, sigma):
d1, d2 = distance(explosions_x, explosions_y, sensor_x, sensor_y)
# add back sigma later on?
return 1.0 / (d1 + 0.1) + 1.0 / (d2 + 0.1) + np.random.normal(loc=0, scale=0.2)
#return 1.0 / (d1 + 0.1) #+ sigma * np.random.normal(loc=0, scale=1)
recorded_sensor_values = [
sensor_value_additive(
real_explosions_x,
real_explosions_y,
sensors_x[i],
sensors_y[i],
sigma=0.2
) for i in range(12)
]
recorded_sensor_values
def likelihood_ex_1_22(d1, d2):
loc = 1.0 / (0.1 + d1) + 1.0 / (0.1 + d2)
scale = 0.2
return loc, scale
def likelihood_ex_1_12(d1, d2):
loc = 1.0 / (0.1 + d1)
scale = 0.2
return loc, scale
list(zip(sensors_x, sensors_y))
explosions_x0, explosions_y0 = sample_explosions(num_samples=100_000)
explosions_x1, explosions_y1 = sample_explosions(num_samples=100_000)
def compute_likelihood(
explosions_x0,
explosions_y0,
explosions_x1,
explosions_y1,
recorded_sensor_values,
likelihood_func
):
prod = 1.0
for sensor_x, sensor_y, sensor_val in zip(sensors_x, sensors_y, recorded_sensor_values):
d1 = distance(explosions_x0, explosions_y0, sensor_x, sensor_y)
d2 = distance(explosions_x1, explosions_y1, sensor_x, sensor_y)
loc, scale = likelihood_func(d1, d2)
normal = st.norm(loc=loc, scale=scale)
prod *= normal.cdf(sensor_val + 0.5) - normal.cdf(sensor_val - 0.5)
return prod
def compute_likelihood_example_1_12(
explosions_x0,
explosions_y0,
explosions_x1,
explosions_y1,
recorded_sensor_values,
likelihood_func
):
#summation = np.zeros(len(explosions_x0))
prod = 1.0
for sensor_x, sensor_y, sensor_val in zip(sensors_x, sensors_y, recorded_sensor_values):
d1 = distance(explosions_x0, explosions_y0, sensor_x, sensor_y)
loc, scale = likelihood_func(d1, d1)  # the one-explosion likelihood ignores its second argument
normal = st.norm(loc=loc, scale=scale)
#summation += np.log(normal.cdf(sensor_val + 0.1) - normal.cdf(sensor_val - 0.1))
prod *= normal.cdf(sensor_val + 0.5) - normal.cdf(sensor_val - 0.5)
return prod
compute_likelihood(
real_explosions_x[0],
real_explosions_y[0],
real_explosions_x[1],
real_explosions_y[1],
recorded_sensor_values,
likelihood_func=likelihood_ex_1_22
)
compute_likelihood(
real_explosions_x[0],
real_explosions_y[0],
real_explosions_x[1],
real_explosions_y[1],
recorded_sensor_values,
likelihood_func=likelihood_ex_1_12
)
compute_likelihood(
np.array([0.0]),  # use arrays so the vectorised distance() call works at the origin
np.array([0.0]),
np.array([0.0]),
np.array([0.0]),
recorded_sensor_values,
likelihood_func=likelihood_ex_1_22
)
len(recorded_sensor_values)
probas_1_22_hypo_2_exp = compute_likelihood(
explosions_x0,
explosions_y0,
explosions_x1,
explosions_y1,
recorded_sensor_values,
likelihood_func=likelihood_ex_1_22
)
import matplotlib.pyplot as plt
from matplotlib.patches import Circle
def plot_grid():
# Create a figure. Equal aspect so circles look circular
fig,ax = plt.subplots(1, figsize=(10,10))
ax.set_aspect('equal')
ax.set_xlim((-1.25,1.25))
ax.set_ylim((-1.25,1.25))
sensor_circle = Circle((0, 0), 1, fill=False)
ax.add_patch(sensor_circle)
# Show the location of the 12 sensors
for i in range(12):
ax.add_patch(Circle((np.sin(2 * np.pi / 12 * i), np.cos(2 * np.pi / 12 * i)), radius=0.02))
return fig, ax
def plot_explosions(real_explosions_x, real_explosions_y, ax):
# Mark the real explosions
for j in range(len(real_explosions_x)):
ax.plot(real_explosions_x[j], real_explosions_y[j], marker='x', color='r', markersize=12)
def plot_probas_2_explosions(probas, fig, ax):
rescaled_probas = probas #/ (probas.max())
for x0, y0, x1, y1, proba in zip(explosions_x0, explosions_y0, explosions_x1, explosions_y1, rescaled_probas):
# proba = 0.5
ax.add_patch(Circle((x0, y0), radius=0.02, color='black', alpha=proba))
ax.add_patch(Circle((x1, y1), radius=0.02, color='black', alpha=proba))
def plot_probas_1_explosion(probas, fig, ax):
rescaled_probas = probas / (2 * probas.max())
for x0, y0, proba in zip(explosions_x0, explosions_y0, rescaled_probas):
ax.add_patch(Circle((x0, y0), radius=0.02, color='black', alpha=proba))
fig, ax = plot_grid()
plot_probas_2_explosions(probas_1_22_hypo_2_exp, fig, ax)
plot_explosions(real_explosions_x, real_explosions_y, ax=ax)
ax.set_title("Most likely area of explosions, assuming 2 explosions")
Explanation: The most likely squares are (5,4), (4,5), each one with 56% probability.
Ex 1.22
We consider an extension of the explosion example. In this extension there are two explosions at locations s1 and s2 and the observed value at sensor i is
$\qquad v_i = \frac{1}{d_i^2(1) + 0.1} + \frac{1}{d_i^2(2) + 0.1} + \sigma \epsilon_i$
where $d_i(1)$, $d_i(2)$ is the distance from explosion 1, 2 to the sensor respectively; ฯ is the standard deviation of the Gaussian sensor noise and the noise $\epsilon_i$ is drawn from a zero mean unit variance Gaussian independently for each sensor. The data in the file EarthquakeExerciseData.txt represents the observed sensor values vi and the coordinate system setup is given in earthquakeExerciseSetup.jl. Assuming that the prior locations of the explosions are independent and uniform (according to the spiral coordinate system):
Part I
Calculate the posterior $p(s_1 \mid v)$ and draw an image similar to fig(1.3) that visualises the posterior.
Ans
End of explanation
probas_1_22_hypo_1_exp = compute_likelihood(
explosions_x0,
explosions_y0,
explosions_x1,
explosions_y1,
recorded_sensor_values,
likelihood_func=likelihood_ex_1_12
)
probas_1_22_hypo_2_exp.sum()
probas_1_22_hypo_1_exp.sum()
np.log(probas_1_22_hypo_2_exp.sum()) - np.log(probas_1_22_hypo_1_exp.sum())
Explanation: Part II
Writing $H_2$ for the hypothesis that there are two explosions and $H_1$ for the hypothesis that there is only one explosion, report the value of $log p(v|H_2) โ log p(v|H_1)$.
Ans
End of explanation |
1,735 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The business ID field has already been filtered for only restaurants
We want to filter the users collection for the following
Step1: Create a new dictionary with the following structure and then export as a json object | Python Code:
#Find a list of users with at least 20 reviews
user_list = []
for user in users.find():
if user['review_count'] >= 20:
user_list.append(user['_id'])
else:
pass
Explanation: The business ID field has already been filtered for only restaurants
We want to filter the users collection for the following:
1. User must have at least 20 reviews
2. For users with 20 reviews, identify the reviews which are for businesses
3. For each user, keep only those reviews which are related to a business in
the list of restaurant business IDs
4. Keep only users who have at least 20 reviews after finishing step 3
End of explanation
user_reviews = dict.fromkeys(user_list, 0)
for review in reviews.find():
try:
if user_reviews[review['_id']] == 0:
print review['_id']
print review
break
except KeyError:
pass
# user_reviews[review['_id']] = [review]
# else:
# user_reviews[review['_id']].append(review)
# except KeyError:
# pass
user_reviews[user_reviews.keys()[23]]
filtered_reviews = {}
for user in user_reviews.keys():
if user_reviews[user] != 0:
filtered_reviews[user] = user_reviews[user]
#We have this many users after our filtering
len(filtered_reviews)
#Dump file of cleaned up user data
with open('merged_user_reviews.json', 'w') as fp:
json.dump(user_reviews, fp)
Explanation: Create a new dictionary with the following structure and then export as a json object:
{user id: [review, review, review], ..., user id: [review, review, review]}
End of explanation |
1,736 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Think Bayes
Step1: Improving Reading Ability
From DASL(http
Step2: And use groupby to compute the means for the two groups.
Step4: The Normal class provides a Likelihood function that computes the likelihood of a sample from a normal distribution.
Step5: The prior distributions for mu and sigma are uniform.
Step6: I use itertools.product to enumerate all pairs of mu and sigma.
Step7: After the update, we can plot the probability of each mu-sigma pair as a contour plot.
Step8: And then we can extract the marginal distribution of mu
Step9: And the marginal distribution of sigma
Step10: Exercise
Step16: Paintball
Suppose you are playing paintball in an indoor arena 30 feet
wide and 50 feet long. You are standing near one of the 30 foot
walls, and you suspect that one of your opponents has taken cover
nearby. Along the wall, you see several paint spatters, all the same
color, that you think your opponent fired recently.
The spatters are at 15, 16, 18, and 21 feet, measured from the
lower-left corner of the room. Based on these data, where do you
think your opponent is hiding?
Here's the Suite that does the update. It uses MakeLocationPmf,
defined below.
Step17: The prior probabilities for alpha and beta are uniform.
Step18: To visualize the joint posterior, I take slices for a few values of beta and plot the conditional distributions of alpha. If the shooter is close to the wall, we can be somewhat confident of his position. The farther away he is, the less certain we are.
Step19: Here are the marginal posterior distributions for alpha and beta.
Step20: To visualize the joint posterior, I take slices for a few values of beta and plot the conditional distributions of alpha. If the shooter is close to the wall, we can be somewhat confident of his position. The farther away he is, the less certain we are.
Step21: Another way to visualize the posterio distribution
Step22: Here's another visualization that shows posterior credible regions.
Step23: Exercise
Step25: Now do some bayes
Step26: Exercise
Step27: Exercise | Python Code:
from __future__ import print_function, division
% matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import math
import numpy as np
from thinkbayes2 import Pmf, Cdf, Suite, Joint, EvalBinomialPmf
import thinkplot
Explanation: Think Bayes: Chapter 9
This notebook presents code and exercises from Think Bayes, second edition.
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
import pandas as pd
df = pd.read_csv('drp_scores.csv', skiprows=21, delimiter='\t')
df.head()
Explanation: Improving Reading Ability
From DASL(http://lib.stat.cmu.edu/DASL/Stories/ImprovingReadingAbility.html)
An educator conducted an experiment to test whether new directed reading activities in the classroom will help elementary school pupils improve some aspects of their reading ability. She arranged for a third grade class of 21 students to follow these activities for an 8-week period. A control classroom of 23 third graders followed the same curriculum without the activities. At the end of the 8 weeks, all students took a Degree of Reading Power (DRP) test, which measures the aspects of reading ability that the treatment is designed to improve.
Summary statistics on the two groups of children show that the average score of the treatment class was almost ten points higher than the average of the control class. A two-sample t-test is appropriate for testing whether this difference is statistically significant. The t-statistic is 2.31, which is significant at the .05 level.
I'll use Pandas to load the data into a DataFrame.
End of explanation
grouped = df.groupby('Treatment')
for name, group in grouped:
print(name, group.Response.mean())
Explanation: And use groupby to compute the means for the two groups.
End of explanation
from scipy.stats import norm
class Normal(Suite, Joint):
def Likelihood(self, data, hypo):
"""data: sequence of test scores
hypo: mu, sigma
"""
mu, sigma = hypo
likes = norm.pdf(data, mu, sigma)
return np.prod(likes)
Explanation: The Normal class provides a Likelihood function that computes the likelihood of a sample from a normal distribution.
End of explanation
mus = np.linspace(20, 80, 101)
sigmas = np.linspace(5, 30, 101)
Explanation: The prior distributions for mu and sigma are uniform.
End of explanation
from itertools import product
control = Normal(product(mus, sigmas))
data = df[df.Treatment=='Control'].Response
control.Update(data)
Explanation: I use itertools.product to enumerate all pairs of mu and sigma.
End of explanation
thinkplot.Contour(control, pcolor=True)
thinkplot.Config(xlabel='mu', ylabel='sigma')
Explanation: After the update, we can plot the probability of each mu-sigma pair as a contour plot.
End of explanation
pmf_mu0 = control.Marginal(0)
thinkplot.Pdf(pmf_mu0)
thinkplot.Config(xlabel='mu', ylabel='Pmf')
Explanation: And then we can extract the marginal distribution of mu
End of explanation
pmf_sigma0 = control.Marginal(1)
thinkplot.Pdf(pmf_sigma0)
thinkplot.Config(xlabel='sigma', ylabel='Pmf')
Explanation: And the marginal distribution of sigma
End of explanation
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# It looks like there is a high probability that the mean of
# the treatment group is higher, and the most likely size of
# the effect is 9-10 points.
# It looks like the variance of the treated group is substantially
# smaller, which suggests that the treatment might be helping
# low scorers more than high scorers.
Explanation: Exercise: Run this analysis again for the treated group. What is the distribution of the difference between the groups? What is the probability that the average "reading power" for the treatment group is higher? What is the probability that the variance of the treatment group is higher?
End of explanation
class Paintball(Suite, Joint):
"""Represents hypotheses about the location of an opponent."""
def __init__(self, alphas, betas, locations):
"""Makes a joint suite of parameters alpha and beta.
Enumerates all pairs of alpha and beta.
Stores locations for use in Likelihood.
alphas: possible values for alpha
betas: possible values for beta
locations: possible locations along the wall
"""
self.locations = locations
pairs = [(alpha, beta)
for alpha in alphas
for beta in betas]
Suite.__init__(self, pairs)
def Likelihood(self, data, hypo):
"""Computes the likelihood of the data under the hypothesis.
hypo: pair of alpha, beta
data: location of a hit
Returns: float likelihood
"""
alpha, beta = hypo
x = data
pmf = MakeLocationPmf(alpha, beta, self.locations)
like = pmf.Prob(x)
return like
def MakeLocationPmf(alpha, beta, locations):
"""Computes the Pmf of the locations, given alpha and beta.
Given that the shooter is at coordinates (alpha, beta),
the probability of hitting any spot is inversely proportionate
to the strafe speed.
alpha: x position
beta: y position
locations: x locations where the pmf is evaluated
Returns: Pmf object
"""
pmf = Pmf()
for x in locations:
prob = 1.0 / StrafingSpeed(alpha, beta, x)
pmf.Set(x, prob)
pmf.Normalize()
return pmf
def StrafingSpeed(alpha, beta, x):
"""Computes strafing speed, given location of shooter and impact.
alpha: x location of shooter
beta: y location of shooter
x: location of impact
Returns: derivative of x with respect to theta
"""
theta = math.atan2(x - alpha, beta)
speed = beta / math.cos(theta)**2
return speed
Explanation: Paintball
Suppose you are playing paintball in an indoor arena 30 feet
wide and 50 feet long. You are standing near one of the 30 foot
walls, and you suspect that one of your opponents has taken cover
nearby. Along the wall, you see several paint spatters, all the same
color, that you think your opponent fired recently.
The spatters are at 15, 16, 18, and 21 feet, measured from the
lower-left corner of the room. Based on these data, where do you
think your opponent is hiding?
Here's the Suite that does the update. It uses MakeLocationPmf,
defined below.
End of explanation
alphas = range(0, 31)
betas = range(1, 51)
locations = range(0, 31)
suite = Paintball(alphas, betas, locations)
suite.UpdateSet([15, 16, 18, 21])
Explanation: The prior probabilities for alpha and beta are uniform.
End of explanation
locations = range(0, 31)
alpha = 10
betas = [10, 20, 40]
thinkplot.PrePlot(num=len(betas))
for beta in betas:
pmf = MakeLocationPmf(alpha, beta, locations)
pmf.label = 'beta = %d' % beta
thinkplot.Pdf(pmf)
thinkplot.Config(xlabel='Distance',
ylabel='Prob')
Explanation: To visualize the joint posterior, I take slices for a few values of beta and plot the conditional distributions of alpha. If the shooter is close to the wall, we can be somewhat confident of his position. The farther away he is, the less certain we are.
End of explanation
marginal_alpha = suite.Marginal(0, label='alpha')
marginal_beta = suite.Marginal(1, label='beta')
print('alpha CI', marginal_alpha.CredibleInterval(50))
print('beta CI', marginal_beta.CredibleInterval(50))
thinkplot.PrePlot(num=2)
thinkplot.Cdf(Cdf(marginal_alpha))
thinkplot.Cdf(Cdf(marginal_beta))
thinkplot.Config(xlabel='Distance',
ylabel='Prob')
Explanation: Here are the marginal posterior distributions for alpha and beta.
End of explanation
betas = [10, 20, 40]
thinkplot.PrePlot(num=len(betas))
for beta in betas:
cond = suite.Conditional(0, 1, beta)
cond.label = 'beta = %d' % beta
thinkplot.Pdf(cond)
thinkplot.Config(xlabel='Distance',
ylabel='Prob')
Explanation: To visualize the joint posterior, I take slices for a few values of beta and plot the conditional distributions of alpha. If the shooter is close to the wall, we can be somewhat confident of his position. The farther away he is, the less certain we are.
End of explanation
thinkplot.Contour(suite.GetDict(), contour=False, pcolor=True)
thinkplot.Config(xlabel='alpha',
ylabel='beta',
axis=[0, 30, 0, 20])
Explanation: Another way to visualize the posterio distribution: a pseudocolor plot of probability as a function of alpha and beta.
End of explanation
d = dict((pair, 0) for pair in suite.Values())
percentages = [75, 50, 25]
for p in percentages:
interval = suite.MaxLikeInterval(p)
for pair in interval:
d[pair] += 1
thinkplot.Contour(d, contour=False, pcolor=True)
thinkplot.Text(17, 4, '25', color='white')
thinkplot.Text(17, 15, '50', color='white')
thinkplot.Text(17, 30, '75')
thinkplot.Config(xlabel='alpha',
ylabel='beta',
legend=False)
Explanation: Here's another visualization that shows posterior credible regions.
End of explanation
def shared_bugs(p1, p2, bugs):
k1 = np.random.random(bugs) < p1
k2 = np.random.random(bugs) < p2
return np.sum(k1 & k2)
p1 = .20
p2 = .15
bugs = 100
bug_pmf = Pmf()
for trial in range(1000):
bug_pmf[shared_bugs(p1, p2, bugs)] += 1
bug_pmf.Normalize()
bug_pmf.Print()
thinkplot.Hist(bug_pmf)
Explanation: Exercise: From John D. Cook
"Suppose you have a tester who finds 20 bugs in your program. You want to estimate how many bugs are really in the program. You know there are at least 20 bugs, and if you have supreme confidence in your tester, you may suppose there are around 20 bugs. But maybe your tester isn't very good. Maybe there are hundreds of bugs. How can you have any idea how many bugs there are? Thereโs no way to know with one tester. But if you have two testers, you can get a good idea, even if you donโt know how skilled the testers are.
Suppose two testers independently search for bugs. Let k1 be the number of errors the first tester finds and k2 the number of errors the second tester finds. Let c be the number of errors both testers find. The Lincoln Index estimates the total number of errors as k1 k2 / c [I changed his notation to be consistent with mine]."
So if the first tester finds 20 bugs, the second finds 15, and they find 3 in common, we estimate that there are about 100 bugs. What is the Bayesian estimate of the number of errors based on this data?
End of explanation
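For reference, the classical Lincoln-index point estimate for these numbers is a one-liner (this is just the frequentist estimate quoted above, not the Bayesian answer the exercise asks for):
k1, k2, c = 20, 15, 3
print(k1 * k2 / c)   # 100.0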
from scipy import special
class bugFinder(Suite, Joint):
def Likelihood(self, data, hypo):
"""data: (k1, k2, c)
hypo: (n, p1, p2)
"""
n = hypo[0]
p1 = hypo[1]
p2 = hypo[2]
k1 = data[0]
k2 = data[1]
c = data[2]
like1 = EvalBinomialPmf(k1, n, p1)
like2 = EvalBinomialPmf(k2, n, p2)
return like1 * like2
p1 = np.linspace(0, 1, 40)
p2 = np.linspace(0, 1, 40)
n = np.linspace(32, 300, 40)
hypos = []
for p1_ in p1:
for p2_ in p2:
for n_ in n:
hypos.append((n_, p1_, p2_))
bug_finder_suite = bugFinder(hypos)
bug_finder_suite.Update([20, 15, 3])
thinkplot.Contour(bug_finder_suite.GetDict(), contour=False, pcolor=True)
thinkplot.Config(xlabel='alpha',
ylabel='beta',
axis=[0, 30, 0, 20])
Explanation: Now do some bayes
End of explanation
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
Explanation: Exercise: The GPS problem. According to Wikipedia
GPS included a (currently disabled) feature called Selective Availability (SA) that adds intentional, time varying errors of up to 100 meters (328 ft) to the publicly available navigation signals. This was intended to deny an enemy the use of civilian GPS receivers for precision weapon guidance.
[...]
Before it was turned off on May 2, 2000, typical SA errors were about 50 m (164 ft) horizontally and about 100 m (328 ft) vertically.[10] Because SA affects every GPS receiver in a given area almost equally, a fixed station with an accurately known position can measure the SA error values and transmit them to the local GPS receivers so they may correct their position fixes. This is called Differential GPS or DGPS. DGPS also corrects for several other important sources of GPS errors, particularly ionospheric delay, so it continues to be widely used even though SA has been turned off. The ineffectiveness of SA in the face of widely available DGPS was a common argument for turning off SA, and this was finally done by order of President Clinton in 2000.
Suppose it is 1 May 2000, and you are standing in a field that is 200m square. You are holding a GPS unit that indicates that your location is 51m north and 15m west of a known reference point in the middle of the field.
However, you know that each of these coordinates has been perturbed by a "feature" that adds random errors with mean 0 and standard deviation 30m.
1) After taking one measurement, what should you believe about your position?
Note: Since the intentional errors are independent, you could solve this problem independently for X and Y. But we'll treat it as a two-dimensional problem, partly for practice and partly to see how we could extend the solution to handle dependent errors.
You can start with the code in gps.py.
2) Suppose that after one second the GPS updates your position and reports coordinates (48, 90). What should you believe now?
3) Suppose you take 8 more measurements and get:
(11.903060613102866, 19.79168669735705)
(77.10743601503178, 39.87062906535289)
(80.16596823095534, -12.797927542984425)
(67.38157493119053, 83.52841028148538)
(89.43965206875271, 20.52141889230797)
(58.794021026248245, 30.23054016065644)
(2.5844401241265302, 51.012041625783766)
(45.58108994142448, 3.5718287379754585)
At this point, how certain are you about your location?
End of explanation
import pandas as pd
df = pd.read_csv('flea_beetles.csv', delimiter='\t')
df.head()
# Solution goes here
Explanation: Exercise: The Flea Beetle problem from DASL
Datafile Name: Flea Beetles
Datafile Subjects: Biology
Story Names: Flea Beetles
Reference: Lubischew, A.A. (1962) On the use of discriminant functions in taxonomy. Biometrics, 18, 455-477. Also found in: Hand, D.J., et al. (1994) A Handbook of Small Data Sets, London: Chapman & Hall, 254-255.
Authorization: Contact Authors
Description: Data were collected on the genus of flea beetle Chaetocnema, which contains three species: concinna (Con), heikertingeri (Hei), and heptapotamica (Hep). Measurements were made on the width and angle of the aedeagus of each beetle. The goal of the original study was to form a classification rule to distinguish the three species.
Number of cases: 74
Variable Names:
Width: The maximal width of aedeagus in the forpart (in microns)
Angle: The front angle of the aedeagus (1 unit = 7.5 degrees)
Species: Species of flea beetle from the genus Chaetocnema
Suggestions:
Plot CDFs for the width and angle data, broken down by species, to get a visual sense of whether the normal distribution is a good model.
Use the data to estimate the mean and standard deviation for each variable, broken down by species.
Given a joint posterior distribution for mu and sigma, what is the likelihood of a given datum?
Write a function that takes a measured width and angle and returns a posterior PMF of species.
Use the function to classify each of the specimens in the table and see how many you get right.
End of explanation |
1,737 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: OT for image color adaptation
This example presents a way of transferring colors between two images
with Optimal Transport as introduced in [6]
[6] Ferradans, S., Papadakis, N., Peyre, G., & Aujol, J. F. (2014).
Regularized discrete optimal transport.
SIAM Journal on Imaging Sciences, 7(3), 1853-1882.
Step3: Generate data
Step4: Plot original image
Step5: Scatter plot of colors
Step6: Instantiate the different transport algorithms and fit them
Step7: Plot new images | Python Code:
# Authors: Remi Flamary <[email protected]>
# Stanislas Chambon <[email protected]>
#
# License: MIT License
import numpy as np
from scipy import ndimage
import matplotlib.pylab as pl
import ot
r = np.random.RandomState(42)
def im2mat(I):
"""Converts an image to matrix (one pixel per line)"""
return I.reshape((I.shape[0] * I.shape[1], I.shape[2]))
def mat2im(X, shape):
"""Converts back a matrix to an image"""
return X.reshape(shape)
def minmax(I):
return np.clip(I, 0, 1)
Explanation: OT for image color adaptation
This example presents a way of transferring colors between two images
with Optimal Transport as introduced in [6]
[6] Ferradans, S., Papadakis, N., Peyre, G., & Aujol, J. F. (2014).
Regularized discrete optimal transport.
SIAM Journal on Imaging Sciences, 7(3), 1853-1882.
End of explanation
# Loading images
I1 = ndimage.imread('../data/ocean_day.jpg').astype(np.float64) / 256
I2 = ndimage.imread('../data/ocean_sunset.jpg').astype(np.float64) / 256
X1 = im2mat(I1)
X2 = im2mat(I2)
# training samples
nb = 1000
idx1 = r.randint(X1.shape[0], size=(nb,))
idx2 = r.randint(X2.shape[0], size=(nb,))
Xs = X1[idx1, :]
Xt = X2[idx2, :]
Explanation: Generate data
End of explanation
pl.figure(1, figsize=(6.4, 3))
pl.subplot(1, 2, 1)
pl.imshow(I1)
pl.axis('off')
pl.title('Image 1')
pl.subplot(1, 2, 2)
pl.imshow(I2)
pl.axis('off')
pl.title('Image 2')
Explanation: Plot original image
End of explanation
pl.figure(2, figsize=(6.4, 3))
pl.subplot(1, 2, 1)
pl.scatter(Xs[:, 0], Xs[:, 2], c=Xs)
pl.axis([0, 1, 0, 1])
pl.xlabel('Red')
pl.ylabel('Blue')
pl.title('Image 1')
pl.subplot(1, 2, 2)
pl.scatter(Xt[:, 0], Xt[:, 2], c=Xt)
pl.axis([0, 1, 0, 1])
pl.xlabel('Red')
pl.ylabel('Blue')
pl.title('Image 2')
pl.tight_layout()
Explanation: Scatter plot of colors
End of explanation
# EMDTransport
ot_emd = ot.da.EMDTransport()
ot_emd.fit(Xs=Xs, Xt=Xt)
# SinkhornTransport
ot_sinkhorn = ot.da.SinkhornTransport(reg_e=1e-1)
ot_sinkhorn.fit(Xs=Xs, Xt=Xt)
# prediction between images (using out of sample prediction as in [6])
transp_Xs_emd = ot_emd.transform(Xs=X1)
transp_Xt_emd = ot_emd.inverse_transform(Xt=X2)
transp_Xs_sinkhorn = ot_sinkhorn.transform(Xs=X1)
transp_Xt_sinkhorn = ot_sinkhorn.inverse_transform(Xt=X2)
I1t = minmax(mat2im(transp_Xs_emd, I1.shape))
I2t = minmax(mat2im(transp_Xt_emd, I2.shape))
I1te = minmax(mat2im(transp_Xs_sinkhorn, I1.shape))
I2te = minmax(mat2im(transp_Xt_sinkhorn, I2.shape))
Explanation: Instantiate the different transport algorithms and fit them
End of explanation
pl.figure(3, figsize=(8, 4))
pl.subplot(2, 3, 1)
pl.imshow(I1)
pl.axis('off')
pl.title('Image 1')
pl.subplot(2, 3, 2)
pl.imshow(I1t)
pl.axis('off')
pl.title('Image 1 Adapt')
pl.subplot(2, 3, 3)
pl.imshow(I1te)
pl.axis('off')
pl.title('Image 1 Adapt (reg)')
pl.subplot(2, 3, 4)
pl.imshow(I2)
pl.axis('off')
pl.title('Image 2')
pl.subplot(2, 3, 5)
pl.imshow(I2t)
pl.axis('off')
pl.title('Image 2 Adapt')
pl.subplot(2, 3, 6)
pl.imshow(I2te)
pl.axis('off')
pl.title('Image 2 Adapt (reg)')
pl.tight_layout()
pl.show()
Explanation: Plot new images
End of explanation |
1,738 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fourier analysis & resonances
A great benefit of being able to call rebound from within python is the ability to directly apply sophisticated analysis tools from scipy and other python libraries. Here we will do a simple Fourier analysis of a reduced Solar System consisting of Jupiter and Saturn. Let's begin by setting our units and adding these planets using JPL's horizons database
Step1: Now let's set the integrator to whfast, and sacrificing accuracy for speed, set the timestep for the integration to about $10\%$ of Jupiter's orbital period.
Step2: The last line (moving to the center of mass frame) is important to take out the linear drift in positions due to the constant COM motion. Without it we would erase some of the signal at low frequencies.
Now let's run the integration, storing time series for the two planets' eccentricities (for plotting) and x-positions (for the Fourier analysis). Additionally, we store the mean longitudes and pericenter longitudes (varpi) for reasons that will become clear below. Having some idea of what the secular timescales are in the Solar System, we'll run the integration for $3\times 10^5$ yrs. We choose to collect $10^5$ outputs in order to resolve the planets' orbital periods ($\sim 10$ yrs) in the Fourier spectrum.
Step3: Let's see what the eccentricity evolution looks like with matplotlib
Step4: Now let's try to analyze the periodicities in this signal. Here we have a uniformly spaced time series, so we could run a Fast Fourier Transform, but as an example of the wider array of tools available through scipy, let's run a Lomb-Scargle periodogram (which allows for non-uniform time series). This could also be used when storing outputs at each timestep using the integrator IAS15 (which uses adaptive and therefore non-uniform timesteps).
Let's check for periodicities with periods logarithmically spaced between 10 and $10^5$ yrs. From the documentation, we find that the lombscargle function requires a list of corresponding angular frequencies (ws), and we obtain the appropriate normalization for the plot. To avoid conversions to orbital elements, we analyze the time series of Jupiter's x-position.
Step5: We pick out the obvious signal in the eccentricity plot with a period of $\approx 45000$ yrs, which is due to secular interactions between the two planets. There is quite a bit of power aliased into neighbouring frequencies due to the short integration duration, with contributions from the second secular timescale, which is out at $\sim 2\times10^5$ yrs and causes a slower, low-amplitude modulation of the eccentricity signal plotted above (we limited the time of integration so that the example runs in a few seconds).
Additionally, though it was invisible on the scale of the eccentricity plot above, we clearly see a strong signal at Jupiter's orbital period of about 12 years.
But wait! Even on this scale set by the dominant frequencies of the problem, we see an additional blip just below $10^3$ yrs. Such a periodicity is actually visible in the above eccentricity plot if you inspect the thickness of the lines. Let's investigate by narrowing the period range
Step6: This is the right timescale to be due to resonant perturbations between giant planets ($\sim 100$ orbits). In fact, Jupiter and Saturn are close to a 5
Step7: Now we construct $\phi_{5
Step8: We see that the resonant angle $\phi_{5 | Python Code:
import rebound
import numpy as np
sim = rebound.Simulation()
sim.units = ('AU', 'yr', 'Msun')
sim.add("Sun")
sim.add("Jupiter")
sim.add("Saturn")
Explanation: Fourier analysis & resonances
A great benefit of being able to call rebound from within python is the ability to directly apply sophisticated analysis tools from scipy and other python libraries. Here we will do a simple Fourier analysis of a reduced Solar System consisting of Jupiter and Saturn. Let's begin by setting our units and adding these planets using JPL's horizons database:
End of explanation
sim.integrator = "whfast"
sim.dt = 1. # in years. About 10% of Jupiter's period
sim.move_to_com()
Explanation: Now let's set the integrator to whfast, and sacrificing accuracy for speed, set the timestep for the integration to about $10\%$ of Jupiter's orbital period.
End of explanation
Nout = 100000
tmax = 3.e5
Nplanets = 2
x = np.zeros((Nplanets,Nout))
ecc = np.zeros((Nplanets,Nout))
longitude = np.zeros((Nplanets,Nout))
varpi = np.zeros((Nplanets,Nout))
times = np.linspace(0.,tmax,Nout)
ps = sim.particles
for i,time in enumerate(times):
sim.integrate(time)
# note we used above the default exact_finish_time = 1, which changes the timestep near the outputs to match
# the output times we want. This is what we want for a Fourier spectrum, but technically breaks WHFast's
# symplectic nature. Not a big deal here.
os = sim.calculate_orbits()
for j in range(Nplanets):
x[j][i] = ps[j+1].x # we use the 0 index in x for Jup and 1 for Sat, but the indices for ps start with the Sun at 0
ecc[j][i] = os[j].e
longitude[j][i] = os[j].l
varpi[j][i] = os[j].Omega + os[j].omega
Explanation: The last line (moving to the center of mass frame) is important to take out the linear drift in positions due to the constant COM motion. Without it we would erase some of the signal at low frequencies.
Now let's run the integration, storing time series for the two planets' eccentricities (for plotting) and x-positions (for the Fourier analysis). Additionally, we store the mean longitudes and pericenter longitudes (varpi) for reasons that will become clear below. Having some idea of what the secular timescales are in the Solar System, we'll run the integration for $3\times 10^5$ yrs. We choose to collect $10^5$ outputs in order to resolve the planets' orbital periods ($\sim 10$ yrs) in the Fourier spectrum.
End of explanation
%matplotlib inline
labels = ["Jupiter", "Saturn"]
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(12,5))
ax = plt.subplot(111)
plt.plot(times,ecc[0],label=labels[0])
plt.plot(times,ecc[1],label=labels[1])
ax.set_xlabel("Time (yrs)")
ax.set_ylabel("Eccentricity")
plt.legend();
Explanation: Let's see what the eccentricity evolution looks like with matplotlib:
End of explanation
from scipy import signal
Npts = 3000
logPmin = np.log10(10.)
logPmax = np.log10(1.e5)
Ps = np.logspace(logPmin,logPmax,Npts)
ws = np.asarray([2*np.pi/P for P in Ps])
periodogram = signal.lombscargle(times,x[0],ws)
fig = plt.figure(figsize=(12,5))
ax = plt.subplot(111)
ax.plot(Ps,np.sqrt(4*periodogram/Nout))
ax.set_xscale('log')
ax.set_xlim([10**logPmin,10**logPmax])
ax.set_ylim([0,0.15])
ax.set_xlabel("Period (yrs)")
ax.set_ylabel("Power")
Explanation: Now let's try to analyze the periodicities in this signal. Here we have a uniformly spaced time series, so we could run a Fast Fourier Transform, but as an example of the wider array of tools available through scipy, let's run a Lomb-Scargle periodogram (which allows for non-uniform time series). This could also be used when storing outputs at each timestep using the integrator IAS15 (which uses adaptive and therefore non-uniform timesteps).
Let's check for periodicities with periods logarithmically spaced between 10 and $10^5$ yrs. From the documentation, we find that the lombscargle function requires a list of corresponding angular frequencies (ws), and we obtain the appropriate normalization for the plot. To avoid conversions to orbital elements, we analyze the time series of Jupiter's x-position.
End of explanation
fig = plt.figure(figsize=(12,5))
ax = plt.subplot(111)
ax.plot(Ps,np.sqrt(4*periodogram/Nout))
ax.set_xscale('log')
ax.set_xlim([600,1600])
ax.set_ylim([0,0.003])
ax.set_xlabel("Period (yrs)")
ax.set_ylabel("Power")
Explanation: We pick out the obvious signal in the eccentricity plot with a period of $\approx 45000$ yrs, which is due to secular interactions between the two planets. There is quite a bit of power aliased into neighbouring frequencies due to the short integration duration, with contributions from the second secular timescale, which is out at $\sim 2\times10^5$ yrs and causes a slower, low-amplitude modulation of the eccentricity signal plotted above (we limited the time of integration so that the example runs in a few seconds).
Additionally, though it was invisible on the scale of the eccentricity plot above, we clearly see a strong signal at Jupiter's orbital period of about 12 years.
But wait! Even on this scale set by the dominant frequencies of the problem, we see an additional blip just below $10^3$ yrs. Such a periodicity is actually visible in the above eccentricity plot if you inspect the thickness of the lines. Let's investigate by narrowing the period range:
End of explanation
def zeroTo360(val):
while val < 0:
val += 2*np.pi
while val > 2*np.pi:
val -= 2*np.pi
return val*180/np.pi
Explanation: This is the right timescale to be due to resonant perturbations between giant planets ($\sim 100$ orbits). In fact, Jupiter and Saturn are close to a 5:2 mean-motion resonance. This is the famous great inequality that Laplace showed was responsible for slight offsets in the predicted positions of the two giant planets. Let's check whether this is in fact responsible for the peak.
In this case, we have that the mean longitude of Jupiter $\lambda_J$ cycles approximately 5 times for every 2 of Saturn's ($\lambda_S$). The game is to construct a slowly-varying resonant angle, which here could be $\phi_{5:2} = 5\lambda_S - 2\lambda_J - 3\varpi_J$, where $\varpi_J$ is Jupiter's longitude of pericenter. This last term is a much smaller contribution to the variation of $\phi_{5:2}$ than the first two, but ensures that the coefficients in the resonant angle sum to zero and therefore that the physics do not depend on your choice of coordinates.
To see a clear trend, we have to shift each value of $\phi_{5:2}$ into the range $[0,360]$ degrees, so we define a small helper function that does the wrapping and conversion to degrees:
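As an aside (not in the original), the same wrapping can be written in one line with numpy's modulo, which also works element-wise on whole arrays:
import numpy as np
def zeroTo360_np(val):
    return np.degrees(np.mod(val, 2*np.pi))    # equivalent wrap into [0, 360) degrees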
End of explanation
phi = [zeroTo360(5.*longitude[1][i] - 2.*longitude[0][i] - 3.*varpi[0][i]) for i in range(Nout)]
fig = plt.figure(figsize=(12,5))
ax = plt.subplot(111)
ax.plot(times,phi)
ax.set_xlim([0,5.e3])
ax.set_ylim([0,360.])
ax.set_xlabel("time (yrs)")
ax.set_ylabel(r"$\phi_{5:2}$")
Explanation: Now we construct $\phi_{5:2}$ and plot it over the first 5000 yrs.
End of explanation
phi2 = [zeroTo360(2*longitude[1][i] - longitude[0][i] - varpi[0][i]) for i in range(Nout)]
fig = plt.figure(figsize=(12,5))
ax = plt.subplot(111)
ax.plot(times,phi2)
ax.set_xlim([0,5.e3])
ax.set_ylim([0,360.])
ax.set_xlabel("time (yrs)")
ax.set_ylabel(r"$\phi_{2:1}$")
Explanation: We see that the resonant angle $\phi_{5:2}$ circulates, but with a long period of $\approx 900$ yrs (compared to the orbital periods of $\sim 10$ yrs), which precisely matches the blip we saw in the Lomb-Scargle periodogram. This is approximately the same oscillation period observed in the Solar System, despite our simplified setup!
This resonant angle is able to have a visible effect because its (small) effects build up coherently over many orbits. As a further illustration, other resonance angles like those at the 2:1 will circulate much faster (because Jupiter and Saturn's period ratio is not close to 2). We can easily plot this. Taking one of the 2:1 resonance angles $\phi_{2:1} = 2\lambda_S - \lambda_J - \varpi_J$,
End of explanation |
1,739 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: ไฝฟ็จ่ฟไผผๆ่ฟ้ปๅๆๆฌๅตๅ
ฅๅ้ๆๅปบ่ฏญไนๆ็ดข
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step2: ๅฏผๅ
ฅๆ้็ๅบใ
Step3: 1. ไธ่ฝฝๆ ทๆฌๆฐๆฎ
A Million News Headlines ๆฐๆฎ้ๅ
ๅซ่ๅ็ๆพณๅคงๅฉไบๅนฟๆญๅ
ฌๅธ (ABC) ๅจ 15 ๅนดๅ
ๅๅธ็ๆฐ้ปๆ ้ขใๆญคๆฐ้ปๆฐๆฎ้ๆฑๆปไบไป 2003 ๅนดๅ่ณ 2017 ๅนดๅบๅจๅ
จ็่ๅดๅ
ๅ็็้ๅคงไบไปถ็ๅๅฒ่ฎฐๅฝ๏ผๅ
ถไธญๅฏนๆพณๅคงๅฉไบ็ๅ
ณๆณจๆดไธบ็ป่ดใ
ๆ ผๅผ๏ผไปฅๅถ่กจ็ฌฆๅ้็ไธคๅๆฐๆฎ๏ผ1) ๅๅธๆฅๆๅ 2) ๆ ้ขๆๆฌใๆไปฌๅชๅฏนๆ ้ขๆๆฌๆๅ
ด่ถฃใ
Step4: ไธบไบ็ฎๅ่ตท่ง๏ผๆไปฌไป
ไฟ็ๆ ้ขๆๆฌๅนถ็งป้คๅๅธๆฅๆใ
Step5: 2. ไธบๆฐๆฎ็ๆๅตๅ
ฅๅ้
ๅจๆฌๆ็จไธญ๏ผๆไปฌไฝฟ็จ็ฅ็ป็ฝ็ป่ฏญ่จๆจกๅ (NNLM) ไธบๆ ้ขๆฐๆฎ็ๆๅตๅ
ฅๅ้ใไนๅ๏ผๅฏไปฅ่ฝปๆพๅฐไฝฟ็จๅฅๅญๅตๅ
ฅๅ้่ฎก็ฎๅฅๅญ็บงๅซ็ๅซไน็ธไผผๅบฆใๆไปฌไฝฟ็จ Apache Beam ๆฅ่ฟ่กๅตๅ
ฅๅ้็ๆ่ฟ็จใ
ๅตๅ
ฅๅ้ๆๅๆนๆณ
Step6: ่ฝฌๆขไธบ tf.Example ๆนๆณ
Step7: Beam ๆตๆฐด็บฟ
Step8: ็ๆ้ๆบๆๅฝฑๆ้็ฉ้ต
้ๆบๆๅฝฑๆฏไธ็ง็ฎๅ่ๅผบๅคง็ๆๆฏ๏ผ็จไบ้ไฝไฝไบๆฌงๅ ้ๅพ็ฉบ้ดไธญ็ไธ็ป็น็็ปดๆฐใๆๅ
ณ็่ฎบ่ๆฏ๏ผ่ฏทๅ้
็บฆ็ฟฐ้-ๆ็ปๆฏ็นๅณๆฏๅผ็ใ
ๅฉ็จ้ๆบๆๅฝฑ้ไฝๅตๅ
ฅๅ้็็ปดๆฐ๏ผ่ฟๆ ท๏ผๆๅปบๅๆฅ่ฏข ANN ็ดขๅผ้่ฆ็ๆถ้ดๅฐๅๅฐใ
ๅจๆฌๆ็จไธญ๏ผๆไปฌไฝฟ็จ Scikit-learn ๅบไธญ็้ซๆฏ้ๆบๆๅฝฑใ
Step9: ่ฎพ็ฝฎๅๆฐ
ๅฆๆ่ฆไฝฟ็จๅๅงๅตๅ
ฅๅ้็ฉบ้ดๆๅปบ็ดขๅผ่ไธ่ฟ่ก้ๆบๆๅฝฑ๏ผ่ฏทๅฐ projected_dim ๅๆฐ่ฎพ็ฝฎไธบ Noneใ่ฏทๆณจๆ๏ผ่ฟไผๅๆ
ข้ซ็ปดๅตๅ
ฅๅ้็็ดขๅผ็ผๅถๆญฅ้ชคใ
Step10: ่ฟ่กๆตๆฐด็บฟ
Step11: ่ฏปๅ็ๆ็้จๅๅตๅ
ฅๅ้โฆ
Step12: 3. ไธบๅตๅ
ฅๅ้ๆๅปบ ANN ็ดขๅผ
ANNOY๏ผ่ฟไผผๆ่ฟ้ป๏ผๆฏไธไธชๅ
ๅซ Python ็ปๅฎ็ C++ ๅบ๏ผ็จไบๆ็ดข็ฉบ้ดไธญไธ็ปๅฎๆฅ่ฏข็นๆฅ่ฟ็็นใๆญคๅค๏ผๅฎ่ฟไผๅๅปบๅบไบๆไปถ็ๅคงๅๅช่ฏปๆฐๆฎ็ปๆ๏ผ่ฟไบๆฐๆฎ็ปๆไผๆ ๅฐๅฐๅ
ๅญไธญใๅฎ็ฑ Spotify ๆๅปบๅนถ็จไบ้ณไนๆจ่ใๅฆๆๆจๆๅ
ด่ถฃ๏ผๅฏไปฅๅฐ่ฏไฝฟ็จ ANNOY ็ๅ
ถไปๆฟไปฃๅบ๏ผไพๅฆ NGTใFAISS ็ญใ
Step13: 4. ไฝฟ็จ็ดขๅผ่ฟ่ก็ธไผผๅบฆๅน้
็ฐๅจ๏ผๆไปฌๅฏไปฅไฝฟ็จ ANN ็ดขๅผๆฅๆพไธ่พๅ
ฅๆฅ่ฏข่ฏญไนๆฅ่ฟ็ๆฐ้ปๆ ้ขใ
ๅ ่ฝฝ็ดขๅผๅๆ ๅฐๆไปถ
Step14: ็ธไผผๅบฆๅน้
ๆนๆณ
Step15: ไป็ปๅฎๆฅ่ฏขไธญๆๅๅตๅ
ฅๅ้
Step16: ่พๅ
ฅๆฅ่ฏขไปฅๆฅๆพๆ็ธไผผ็ๆก็ฎ | Python Code:
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: Copyright 2019 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
!pip install apache_beam
!pip install 'scikit_learn~=0.23.0' # For gaussian_random_matrix.
!pip install annoy
Explanation: Semantic Search with Approximate Nearest Neighbors and Text Embeddings
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://tensorflow.google.cn/hub/tutorials/tf2_semantic_approximate_nearest_neighbors"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a> </td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/tf2_semantic_approximate_nearest_neighbors.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/tf2_semantic_approximate_nearest_neighbors.ipynb"> <img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png"> View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/hub/tutorials/tf2_semantic_approximate_nearest_neighbors.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a></td>
<td><a href="https://tfhub.dev/google/tf2-preview/nnlm-en-dim128/1"><img src="https://tensorflow.google.cn/images/hub_logo_32px.png">See TF Hub model</a></td>
</table>
This tutorial demonstrates how to generate embeddings from a TensorFlow Hub (TF-Hub) model given input data, and how to build an approximate nearest neighbours (ANN) index from the extracted embeddings. The index can then be used for real-time similarity matching and retrieval.
When dealing with a corpus containing a large amount of data, it is not efficient to perform exact matching by scanning the whole repository in real time to find the items most similar to a given query. We therefore use an approximate similarity matching algorithm, which trades a little accuracy in finding exact nearest-neighbour matches for a significant boost in speed.
In this tutorial, we show an example of real-time text search over a corpus of news headlines to find the headlines most similar to a query. Unlike keyword search, this captures the semantic similarity encoded in the text embeddings.
The steps of this tutorial are:
1. Download the sample data.
2. Generate embeddings for the data using a TF-Hub model.
3. Build an ANN index for the embeddings.
4. Use the index for similarity matching.
We use Apache Beam to generate the embeddings from the TF-Hub model. We also use Spotify's ANNOY library to build the approximate nearest neighbours index.
More models
For models that have the same architecture but were trained on a different language, refer to this collection. Here you can find all text embeddings currently hosted on tfhub.dev.
Setup
Install the required libraries.
End of explanation
import os
import sys
import pickle
from collections import namedtuple
from datetime import datetime
import numpy as np
import apache_beam as beam
from apache_beam.transforms import util
import tensorflow as tf
import tensorflow_hub as hub
import annoy
from sklearn.random_projection import gaussian_random_matrix
print('TF version: {}'.format(tf.__version__))
print('TF-Hub version: {}'.format(hub.__version__))
print('Apache Beam version: {}'.format(beam.__version__))
Explanation: Import the required libraries.
End of explanation
!wget 'https://dataverse.harvard.edu/api/access/datafile/3450625?format=tab&gbrecs=true' -O raw.tsv
!wc -l raw.tsv
!head raw.tsv
Explanation: 1. Download sample data
The A Million News Headlines dataset contains news headlines published over a period of 15 years, sourced from the reputable Australian Broadcasting Corporation (ABC). The dataset is a summarised historical record of noteworthy events around the globe from early 2003 to the end of 2017, with a more granular focus on Australia.
Format: tab-separated two-column data: 1) publication date and 2) headline text. We are only interested in the headline text.
End of explanation
!rm -r corpus
!mkdir corpus
with open('corpus/text.txt', 'w') as out_file:
with open('raw.tsv', 'r') as in_file:
for line in in_file:
headline = line.split('\t')[1].strip().strip('"')
out_file.write(headline+"\n")
!tail corpus/text.txt
Explanation: For simplicity, we only keep the headline text and remove the publication date.
End of explanation
embed_fn = None
def generate_embeddings(text, model_url, random_projection_matrix=None):
# Beam will run this function in different processes that need to
# import hub and load embed_fn (if not previously loaded)
global embed_fn
if embed_fn is None:
embed_fn = hub.load(model_url)
embedding = embed_fn(text).numpy()
if random_projection_matrix is not None:
embedding = embedding.dot(random_projection_matrix)
return text, embedding
Explanation: 2. Generate embeddings for the data
In this tutorial, we use the Neural Network Language Model (NNLM) to generate embeddings for the headline data. The sentence embeddings can then easily be used to compute sentence-level meaning similarity. We run the embedding generation process using Apache Beam.
Embedding extraction method
End of explanation
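As a quick supplementary check (not part of the original tutorial), the NNLM sentence embeddings can be compared directly with cosine similarity; semantically related headlines should score higher than unrelated ones. This reuses the same model URL that is set below.
embed = hub.load('https://tfhub.dev/google/nnlm-en-dim128/2')
v = embed(["global warming report", "climate change study", "football match result"]).numpy()
def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print(cosine(v[0], v[1]), cosine(v[0], v[2]))  # expect the first pair to be more similar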
def to_tf_example(entries):
examples = []
text_list, embedding_list = entries
for i in range(len(text_list)):
text = text_list[i]
embedding = embedding_list[i]
features = {
'text': tf.train.Feature(
bytes_list=tf.train.BytesList(value=[text.encode('utf-8')])),
'embedding': tf.train.Feature(
float_list=tf.train.FloatList(value=embedding.tolist()))
}
example = tf.train.Example(
features=tf.train.Features(
feature=features)).SerializeToString(deterministic=True)
examples.append(example)
return examples
Explanation: Convert to tf.Example method
End of explanation
def run_hub2emb(args):
'''Runs the embedding generation pipeline'''
options = beam.options.pipeline_options.PipelineOptions(**args)
args = namedtuple("options", args.keys())(*args.values())
with beam.Pipeline(args.runner, options=options) as pipeline:
(
pipeline
| 'Read sentences from files' >> beam.io.ReadFromText(
file_pattern=args.data_dir)
| 'Batch elements' >> util.BatchElements(
min_batch_size=args.batch_size, max_batch_size=args.batch_size)
| 'Generate embeddings' >> beam.Map(
generate_embeddings, args.model_url, args.random_projection_matrix)
| 'Encode to tf example' >> beam.FlatMap(to_tf_example)
| 'Write to TFRecords files' >> beam.io.WriteToTFRecord(
file_path_prefix='{}/emb'.format(args.output_dir),
file_name_suffix='.tfrecords')
)
Explanation: Beam pipeline
End of explanation
def generate_random_projection_weights(original_dim, projected_dim):
random_projection_matrix = None
random_projection_matrix = gaussian_random_matrix(
n_components=projected_dim, n_features=original_dim).T
print("A Gaussian random weight matrix was creates with shape of {}".format(random_projection_matrix.shape))
print('Storing random projection matrix to disk...')
with open('random_projection_matrix', 'wb') as handle:
pickle.dump(random_projection_matrix,
handle, protocol=pickle.HIGHEST_PROTOCOL)
return random_projection_matrix
Explanation: Generate the random projection weight matrix
Random projection is a simple yet powerful technique used to reduce the dimensionality of a set of points lying in Euclidean space. For the theoretical background, see the Johnson-Lindenstrauss lemma.
Reducing the dimensionality of the embeddings with random projection means less time is needed to build and query the ANN index.
In this tutorial we use Gaussian random projection from the Scikit-learn library.
End of explanation
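As a small illustration of why random projection is safe here (not part of the original tutorial), pairwise distances are approximately preserved when projecting with a Gaussian matrix scaled by 1/sqrt(k):
import numpy as np
rng = np.random.RandomState(0)
a, b = rng.normal(size=128), rng.normal(size=128)    # two synthetic 128-d embeddings
R = rng.normal(size=(128, 64)) / np.sqrt(64)         # Gaussian projection down to 64 dims
print(np.linalg.norm(a - b), np.linalg.norm(a.dot(R) - b.dot(R)))  # the two distances should be close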
model_url = 'https://tfhub.dev/google/nnlm-en-dim128/2' #@param {type:"string"}
projected_dim = 64 #@param {type:"number"}
Explanation: Set parameters
If you want to build an index using the original embedding space without random projection, set the projected_dim parameter to None. Note that this will slow down the indexing step for high-dimensional embeddings.
End of explanation
import tempfile
output_dir = tempfile.mkdtemp()
original_dim = hub.load(model_url)(['']).shape[1]
random_projection_matrix = None
if projected_dim:
random_projection_matrix = generate_random_projection_weights(
original_dim, projected_dim)
args = {
'job_name': 'hub2emb-{}'.format(datetime.utcnow().strftime('%y%m%d-%H%M%S')),
'runner': 'DirectRunner',
'batch_size': 1024,
'data_dir': 'corpus/*.txt',
'output_dir': output_dir,
'model_url': model_url,
'random_projection_matrix': random_projection_matrix,
}
print("Pipeline args are set.")
args
print("Running pipeline...")
%time run_hub2emb(args)
print("Pipeline is done.")
!ls {output_dir}
Explanation: Run the pipeline
End of explanation
embed_file = os.path.join(output_dir, 'emb-00000-of-00001.tfrecords')
sample = 5
# Create a description of the features.
feature_description = {
'text': tf.io.FixedLenFeature([], tf.string),
'embedding': tf.io.FixedLenFeature([projected_dim], tf.float32)
}
def _parse_example(example):
# Parse the input `tf.Example` proto using the dictionary above.
return tf.io.parse_single_example(example, feature_description)
dataset = tf.data.TFRecordDataset(embed_file)
for record in dataset.take(sample).map(_parse_example):
print("{}: {}".format(record['text'].numpy().decode('utf-8'), record['embedding'].numpy()[:10]))
Explanation: Read some of the generated embeddings...
End of explanation
def build_index(embedding_files_pattern, index_filename, vector_length,
metric='angular', num_trees=100):
'''Builds an ANNOY index'''
annoy_index = annoy.AnnoyIndex(vector_length, metric=metric)
# Mapping between the item and its identifier in the index
mapping = {}
embed_files = tf.io.gfile.glob(embedding_files_pattern)
num_files = len(embed_files)
print('Found {} embedding file(s).'.format(num_files))
item_counter = 0
for i, embed_file in enumerate(embed_files):
print('Loading embeddings in file {} of {}...'.format(i+1, num_files))
dataset = tf.data.TFRecordDataset(embed_file)
for record in dataset.map(_parse_example):
text = record['text'].numpy().decode("utf-8")
embedding = record['embedding'].numpy()
mapping[item_counter] = text
annoy_index.add_item(item_counter, embedding)
item_counter += 1
if item_counter % 100000 == 0:
print('{} items loaded to the index'.format(item_counter))
print('A total of {} items added to the index'.format(item_counter))
print('Building the index with {} trees...'.format(num_trees))
annoy_index.build(n_trees=num_trees)
print('Index is successfully built.')
print('Saving index to disk...')
annoy_index.save(index_filename)
print('Index is saved to disk.')
print("Index file size: {} GB".format(
round(os.path.getsize(index_filename) / float(1024 ** 3), 2)))
annoy_index.unload()
print('Saving mapping to disk...')
with open(index_filename + '.mapping', 'wb') as handle:
pickle.dump(mapping, handle, protocol=pickle.HIGHEST_PROTOCOL)
print('Mapping is saved to disk.')
print("Mapping file size: {} MB".format(
round(os.path.getsize(index_filename + '.mapping') / float(1024 ** 2), 2)))
embedding_files = "{}/emb-*.tfrecords".format(output_dir)
embedding_dimension = projected_dim
index_filename = "index"
!rm {index_filename}
!rm {index_filename}.mapping
%time build_index(embedding_files, index_filename, embedding_dimension)
!ls
Explanation: 3. Build an ANN index for the embeddings
ANNOY (Approximate Nearest Neighbors) is a C++ library with Python bindings for searching for points in space that are close to a given query point. It also creates large, read-only, file-based data structures that are mapped into memory. It is built and used by Spotify for music recommendations. If you are interested, you can also experiment with alternatives to ANNOY such as NGT, FAISS, etc.
End of explanation
index = annoy.AnnoyIndex(embedding_dimension)
index.load(index_filename, prefault=True)
print('Annoy index is loaded.')
with open(index_filename + '.mapping', 'rb') as handle:
mapping = pickle.load(handle)
print('Mapping file is loaded.')
Explanation: 4. Use the index for similarity matching
Now we can use the ANN index to find news headlines that are semantically close to an input query.
Load the index and the mapping files
End of explanation
def find_similar_items(embedding, num_matches=5):
'''Finds similar items to a given embedding in the ANN index'''
ids = index.get_nns_by_vector(
embedding, num_matches, search_k=-1, include_distances=False)
items = [mapping[i] for i in ids]
return items
Explanation: Similarity matching method
End of explanation
# Load the TF-Hub model
print("Loading the TF-Hub model...")
%time embed_fn = hub.load(model_url)
print("TF-Hub model is loaded.")
random_projection_matrix = None
if os.path.exists('random_projection_matrix'):
print("Loading random projection matrix...")
with open('random_projection_matrix', 'rb') as handle:
random_projection_matrix = pickle.load(handle)
print('random projection matrix is loaded.')
def extract_embeddings(query):
'''Generates the embedding for the query'''
query_embedding = embed_fn([query])[0].numpy()
if random_projection_matrix is not None:
query_embedding = query_embedding.dot(random_projection_matrix)
return query_embedding
extract_embeddings("Hello Machine Learning!")[:10]
Explanation: Extract the embedding from a given query
End of explanation
#@title { run: "auto" }
query = "confronting global challenges" #@param {type:"string"}
print("Generating embedding for the query...")
%time query_embedding = extract_embeddings(query)
print("")
print("Finding relevant items in the index...")
%time items = find_similar_items(query_embedding, 10)
print("")
print("Results:")
print("=========")
for item in items:
print(item)
Explanation: Enter a query to find the most similar items
End of explanation |
1,740 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<!--
The ipynb was auto-generated from markdown using notedown.
Instead of modifying the ipynb file modify the markdown source.
-->
<h1 class="tocheading">Spark</h1>
<div id="toc"></div>
<img src="images/spark-logo.png">
Apache Spark
Spark Intro
What is Spark?
Spark is a framework for distributed processing.
It is a streamlined alternative to Map-Reduce.
Spark applications can be written in Python, Scala, or Java.
Why Spark
Why learn Spark?
Spark enables you to analyze petabytes of data.
Spark skills are in high demand--http
Step1: Import random.
Step2: Notes
sc.parallelize creates an RDD.
map and filter are transformations.
They create new RDDs from existing RDDs.
count is an action and brings the data from the RDDs back to the
driver.
Spark Terminology
Term |Meaning
---- |-------
RDD |Resilient Distributed Dataset or a distributed sequence of records
Spark Job |Sequence of transformations on data with a final action
Spark Application |Sequence of Spark jobs and other code
Transformation |Spark operation that produces an RDD
Action |Spark operation that produces a local object
A Spark job consists of a series of transformations followed by an
action.
It pushes the data to the cluster, all computation happens on the
executors, then the result is sent back to the driver.
Pop Quiz
<details><summary>
In this Spark job what is the transformation is what is the action?
`sc.parallelize(xrange(10)).filter(lambda x
Step3: Use this to filter out non-primes.
Step4: Pop Quiz
<img src="images/spark-cluster.png">
<details><summary>
Q
Step5: Q
Step6: Q
Step7: What do you get when you run this code?
Step8: What about this?
Step9: Map vs FlatMap
Here's the difference between map and flatMap.
Map
Step10: FlatMap
Step11: Key Value Pairs
PairRDD
At this point we know how to aggregate values across an RDD. If we
have an RDD containing sales transactions we can find the total
revenue across all transactions.
Q
Step12: Read the file.
Step13: Split the lines.
Step14: Remove #.
Step15: Try again.
Step16: Pick off last field.
Step17: Convert to float and then sum.
Step18: ReduceByKey
Q
Step19: Now use reduceByKey to add them up.
Step20: Q
Step21: Pop Quiz
<details><summary>
Q
Step22: Count the words.
Step23: Making List Indexing Readable
While this code looks reasonable, the list indexes are cryptic and
hard to read.
Step24: We can make this more readable using Python's argument unpacking
feature.
Argument Unpacking
Q
Step25: What is the difference between getCity1 and getCity2?
Which is more readable?
What is the essence of argument unpacking?
Pop Quiz
<details><summary>
Q
Step26: Whenever you find yourself indexing into a tuple consider using
argument unpacking to make it more readable.
Here is what getCity looks like with tuple indexing.
Step27: Argument Unpacking In Spark
Q
Step28: Here is the code with argument unpacking.
Step29: In this case because we have a long list or tuple argument unpacking
is a judgement call.
GroupByKey
reduceByKey lets us aggregate values using sum, max, min, and other
associative operations. But what about non-associative operations like
average? How can we calculate them?
There are several ways to do this.
The first approach is to change the RDD tuples so that the operation
becomes associative.
Instead of (state, amount) use (state, (amount, count)).
The second approach is to use groupByKey, which is like
reduceByKey except it gathers together all the values in an
iterator.
The iterator can then be reduced in a map step immediately after
the groupByKey.
Q
Step30: Note the argument unpacking we are doing in reduceByKey to name
the elements of the tuples.
Approach 2
Step31: Note that we are using unpacking again.
Pop Quiz
<details><summary>
Q
Step32: Pop Quiz
<details><summary>
Q
Step33: Pop Quiz
<details><summary>
Q
Step34: Q
Step35: Pop Quiz
<details><summary>
Q
Step36: Create the RDD and then save it to squares.txt.
Step37: Now look at the output.
Step38: Looks like the output is a directory.
Step39: Lets take a look at the files.
Step40: Pop Quiz
<details><summary>
Q
Step41: Create the RDD and then save it to squares.txt.
Step42: Now look at the output.
Step43: Pop Quiz
<details><summary>
Q
Step44: Here is the program for finding the high of any stock that stores
the data in memory.
Step45: Notes
Spark is high-level like Hive and Pig.
At the same time it does not invent a new language.
This allows it to leverage the ecosystem of tools that Python,
Scala, and Java provide.
Caching and Persistence
RDD Caching
Consider this Spark job.
Step46: Lets time running count() on rdd2.
Step47: The RDD does no work until an action is called. And then when an
action is called it figures out the answer and then throws away all
the data.
If you have an RDD that you are going to reuse in your computation
you can use cache() to make Spark cache the RDD.
Lets cache it and try again.
Step48: Caching the RDD speeds up the job because the RDD does not have to
be computed from scratch again.
Notes
Calling cache() flips a flag on the RDD.
The data is not cached until an action is called.
You can uncache an RDD using unpersist().
Pop Quiz
<details><summary>
Q | Python Code:
from pyspark import SparkContext
sc = SparkContext()
Explanation: <!--
The ipynb was auto-generated from markdown using notedown.
Instead of modifying the ipynb file modify the markdown source.
-->
<h1 class="tocheading">Spark</h1>
<div id="toc"></div>
<img src="images/spark-logo.png">
Apache Spark
Spark Intro
What is Spark?
Spark is a framework for distributed processing.
It is a streamlined alternative to Map-Reduce.
Spark applications can be written in Python, Scala, or Java.
Why Spark
Why learn Spark?
Spark enables you to analyze petabytes of data.
Spark skills are in high demand--http://indeed.com/salary.
Spark is significantly faster than MapReduce.
Paradoxically, Spark's API is simpler than the MapReduce API.
Goals
By the end of this lecture, you will be able to:
Create RDDs to distribute data across a cluster
Use the Spark shell to compose and execute Spark commands
Use Spark to analyze stock market data
Spark Version History
Date |Version |Changes
---- |------- |-------
May 30, 2014 |Spark 1.0.0 |APIs stabilized
September 11, 2014 |Spark 1.1.0 |New functions in MLlib, Spark SQL
December 18, 2014 |Spark 1.2.0 |Python Streaming API and better streaming fault tolerance
March 13, 2015 |Spark 1.3.0 |DataFrame API, Kafka integration in Streaming
April 17, 2015 |Spark 1.3.1 |Bug fixes, minor changes
Matei Zaharia
<img style="width:50%" src="images/matei.jpg">
Essence of Spark
What is the basic idea of Spark?
Spark takes the Map-Reduce paradigm and changes it in some critical
ways.
Instead of writing single Map-Reduce jobs a Spark job consists of a
series of map and reduce functions.
However, the intermediate data is kept in memory instead of being
written to disk or written to HDFS.
Pop Quiz
<details><summary>
Q: Since Spark keeps intermediate data in memory to get speed, what
does it make us give up? Where's the catch?
</summary>
1. Spark does a trade-off between memory and performance.
<br>
2. While Spark apps are faster, they also consume more memory.
<br>
3. Spark outshines Map-Reduce in iterative algorithms where the
overhead of saving the results of each step to HDFS slows down
Map-Reduce.
<br>
4. For non-iterative algorithms Spark is comparable to Map-Reduce.
</details>
Spark Logging
Q: How can I make Spark logging less verbose?
By default Spark logs messages at the INFO level.
Here are the steps to make it only print out warnings and errors.
sh
cd $SPARK_HOME/conf
cp log4j.properties.template log4j.properties
Edit log4j.properties and replace rootCategory=INFO with rootCategory=ERROR
Spark Fundamentals
Spark Execution
<img src="images/spark-cluster.png">
Spark Terminology
Term |Meaning
---- |-------
Driver |Process that contains the Spark Context
Executor |Process that executes one or more Spark tasks
Master |Process which manages applications across the cluster
|E.g. Spark Master
Worker |Process which manages executors on a particular worker node
|E.g. Spark Worker
Spark Job
Q: Flip a coin 100 times using Python's random() function. What
fraction of the time do you get heads?
Initialize Spark.
End of explanation
import random
flips = 1000000
heads = sc.parallelize(xrange(flips)) \
.map(lambda i: random.random()) \
.filter(lambda r: r < 0.51) \
.count()
ratio = float(heads)/float(flips)
print(heads)
print(ratio)
Explanation: Import random.
End of explanation
def is_prime(number):
factor_min = 2
factor_max = int(number**0.5)+1
for factor in xrange(factor_min,factor_max):
if number % factor == 0:
return False
return True
Explanation: Notes
sc.parallelize creates an RDD.
map and filter are transformations.
They create new RDDs from existing RDDs.
count is an action and brings the data from the RDDs back to the
driver.
Spark Terminology
Term |Meaning
---- |-------
RDD |Resilient Distributed Dataset or a distributed sequence of records
Spark Job |Sequence of transformations on data with a final action
Spark Application |Sequence of Spark jobs and other code
Transformation |Spark operation that produces an RDD
Action |Spark operation that produces a local object
A Spark job consists of a series of transformations followed by an
action.
It pushes the data to the cluster, all computation happens on the
executors, then the result is sent back to the driver.
Pop Quiz
<details><summary>
In this Spark job what is the transformation is what is the action?
`sc.parallelize(xrange(10)).filter(lambda x: x % 2 == 0).collect()`
</summary>
1. `filter` is the transformation.
<br>
2. `collect` is the action.
</details>
Lambda vs Functions
Instead of lambda you can pass in fully defined functions into
map, filter, and other RDD transformations.
Use lambda for short functions.
Use def for more substantial functions.
Finding Primes
Q: Find all the primes less than 100.
Define function to determine if a number is prime.
End of explanation
numbers = xrange(2,100)
primes = sc.parallelize(numbers)\
.filter(is_prime)\
.collect()
print primes
Explanation: Use this to filter out non-primes.
End of explanation
sc.parallelize([1,3,2,2,1]).distinct().collect()
Explanation: Pop Quiz
<img src="images/spark-cluster.png">
<details><summary>
Q: Where does `is_prime` execute?
</summary>
On the executors.
</details>
<details><summary>
Q: Where does the RDD code execute?
</summary>
On the driver.
</details>
Transformations and Actions
Common RDD Constructors
Expression |Meaning
---------- |-------
sc.parallelize(list1) |Create RDD of elements of list
sc.textFile(path) |Create RDD of lines from file
Common Transformations
Expression |Meaning
---------- |-------
filter(lambda x: x % 2 == 0) |Discard non-even elements
map(lambda x: x * 2) |Multiply each RDD element by 2
map(lambda x: x.split()) |Split each string into words
flatMap(lambda x: x.split()) |Split each string into words and flatten sequence
sample(withReplacement=True,0.25) |Create sample of 25% of elements with replacement
union(rdd) |Append rdd to existing RDD
distinct() |Remove duplicates in RDD
sortBy(lambda x: x, ascending=False) |Sort elements in descending order
Common Actions
Expression |Meaning
---------- |-------
collect() |Convert RDD to in-memory list
take(3) |First 3 elements of RDD
top(3) |Top 3 elements of RDD
takeSample(withReplacement=True,3) |Create sample of 3 elements with replacement
sum() |Find element sum (assumes numeric elements)
mean() |Find element mean (assumes numeric elements)
stdev() |Find element deviation (assumes numeric elements)
Pop Quiz
Q: What will this output?
End of explanation
sc.parallelize([1,3,2,2,1]).sortBy(lambda x: x).collect()
Explanation: Q: What will this output?
End of explanation
%%writefile input.txt
hello world
another line
yet another line
yet another another line
Explanation: Q: What will this output?
Create this input file.
End of explanation
sc.textFile('input.txt') \
.map(lambda x: x.split()) \
.count()
Explanation: What do you get when you run this code?
End of explanation
sc.textFile('input.txt') \
.flatMap(lambda x: x.split()) \
.count()
Explanation: What about this?
End of explanation
sc.textFile('input.txt') \
.map(lambda x: x.split()) \
.collect()
Explanation: Map vs FlatMap
Here's the difference between map and flatMap.
Map:
End of explanation
sc.textFile('input.txt') \
.flatMap(lambda x: x.split()) \
.collect()
Explanation: FlatMap:
End of explanation
%%writefile sales.txt
#ID Date Store State Product Amount
101 11/13/2014 100 WA 331 300.00
104 11/18/2014 700 OR 329 450.00
102 11/15/2014 203 CA 321 200.00
106 11/19/2014 202 CA 331 330.00
103 11/17/2014 101 WA 373 750.00
105 11/19/2014 202 CA 321 200.00
Explanation: Key Value Pairs
PairRDD
At this point we know how to aggregate values across an RDD. If we
have an RDD containing sales transactions we can find the total
revenue across all transactions.
Q: Using the following sales data find the total revenue across all
transactions.
End of explanation
sc.textFile('sales.txt')\
.take(2)
Explanation: Read the file.
End of explanation
sc.textFile('sales.txt')\
.map(lambda x: x.split())\
.take(2)
Explanation: Split the lines.
End of explanation
sc.textFile('sales.txt')\
.map(lambda x: x.split())\
.filter(lambda x: x[0].startswith('#'))\
.take(2)
Explanation: Remove #.
End of explanation
sc.textFile('sales.txt')\
.map(lambda x: x.split())\
.filter(lambda x: not x[0].startswith('#'))\
.take(2)
Explanation: Try again.
End of explanation
sc.textFile('sales.txt')\
.map(lambda x: x.split())\
.filter(lambda x: not x[0].startswith('#'))\
.map(lambda x: x[-1])\
.take(2)
Explanation: Pick off last field.
End of explanation
sc.textFile('sales.txt')\
.map(lambda x: x.split())\
.filter(lambda x: not x[0].startswith('#'))\
.map(lambda x: float(x[-1]))\
.sum()
Explanation: Convert to float and then sum.
End of explanation
sc.textFile('sales.txt')\
.map(lambda x: x.split())\
.filter(lambda x: not x[0].startswith('#'))\
.map(lambda x: (x[-3],float(x[-1])))\
.collect()
Explanation: ReduceByKey
Q: Calculate revenue per state?
Instead of creating a sequence of revenue numbers we can create
tuples of states and revenue.
End of explanation
sc.textFile('sales.txt')\
.map(lambda x: x.split())\
.filter(lambda x: not x[0].startswith('#'))\
.map(lambda x: (x[-3],float(x[-1])))\
.reduceByKey(lambda amount1,amount2: amount1+amount2)\
.collect()
Explanation: Now use reduceByKey to add them up.
End of explanation
sc.textFile('sales.txt')\
.map(lambda x: x.split())\
.filter(lambda x: not x[0].startswith('#'))\
.map(lambda x: (x[-3],float(x[-1])))\
.reduceByKey(lambda amount1,amount2: amount1+amount2)\
.sortBy(lambda state_amount:state_amount[1],ascending=False) \
.collect()
Explanation: Q: Find the state with the highest total revenue.
You can either use the action top or the transformation sortBy.
End of explanation
%%writefile input.txt
hello world
another line
yet another line
yet another another line
Explanation: Pop Quiz
<details><summary>
Q: What does `reduceByKey` do?
</summary>
1. It is like a reducer.
<br>
2. If the RDD is made up of key-value pairs, it combines the values
across all tuples with the same key by using the function we pass
to it.
<br>
3. It only works on RDDs made up of key-value pairs or 2-tuples.
</details>
Notes
reduceByKey only works on RDDs made up of 2-tuples.
reduceByKey works as both a reducer and a combiner.
It requires that the operation is associative.
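For example (a supplementary sketch, not in the original notes), any associative function works the same way -- here the largest single sale per state from the (state, amount) pairs above:
sc.textFile('sales.txt')\
  .map(lambda x: x.split())\
  .filter(lambda x: not x[0].startswith('#'))\
  .map(lambda x: (x[-3], float(x[-1])))\
  .reduceByKey(max)\
  .collect()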
Word Count
Q: Implement word count in Spark.
Create some input.
End of explanation
sc.textFile('input.txt')\
.flatMap(lambda line: line.split())\
.map(lambda word: (word,1))\
.reduceByKey(lambda count1,count2: count1+count2)\
.collect()
Explanation: Count the words.
End of explanation
sc.textFile('sales.txt')\
.map(lambda x: x.split())\
.filter(lambda x: not x[0].startswith('#'))\
.map(lambda x: (x[-3],float(x[-1])))\
.reduceByKey(lambda amount1,amount2: amount1+amount2)\
.sortBy(lambda state_amount:state_amount[1],ascending=False) \
.collect()
Explanation: Making List Indexing Readable
While this code looks reasonable, the list indexes are cryptic and
hard to read.
End of explanation
client = ('Dmitri','Smith','SF')
def getCity1(client):
return client[2]
def getCity2((first,last,city)):
return city
print getCity1(client)
print getCity2(client)
Explanation: We can make this more readable using Python's argument unpacking
feature.
Argument Unpacking
Q: Which version of getCity is more readable and why?
Consider this code.
End of explanation
client = ('Dmitri','Smith',('123 Eddy','SF','CA'))
def getCity((first,last,(street,city,state))):
return city
getCity(client)
Explanation: What is the difference between getCity1 and getCity2?
Which is more readable?
What is the essence of argument unpacking?
Pop Quiz
<details><summary>
Q: Can argument unpacking work for deeper nested structures?
</summary>
Yes. It can work for arbitrarily nested tuples and lists.
</details>
<details><summary>
Q: How would you write `getCity` given
`client = ('Dmitri','Smith',('123 Eddy','SF','CA'))`
</summary>
`def getCity((first,last,(street,city,state))): return city`
</details>
Argument Unpacking
Lets test this out.
End of explanation
def badGetCity(client):
return client[2][1]
getCity(client)
Explanation: Whenever you find yourself indexing into a tuple consider using
argument unpacking to make it more readable.
Here is what getCity looks like with tuple indexing.
End of explanation
sc.textFile('sales.txt')\
.map(lambda x: x.split())\
.filter(lambda x: not x[0].startswith('#'))\
.map(lambda x: (x[-3],float(x[-1])))\
.reduceByKey(lambda amount1,amount2: amount1+amount2)\
.sortBy(lambda state_amount:state_amount[1],ascending=False) \
.collect()
Explanation: Argument Unpacking In Spark
Q: Rewrite the last Spark job using argument unpacking.
Here is the original version of the code.
End of explanation
sc.textFile('sales.txt')\
.map(lambda x: x.split())\
.filter(lambda x: not x[0].startswith('#'))\
.map(lambda (id,date,store,state,product,amount): (state,float(amount)))\
.reduceByKey(lambda amount1,amount2: amount1+amount2)\
.sortBy(lambda (state,amount):amount,ascending=False) \
.collect()
Explanation: Here is the code with argument unpacking.
End of explanation
sc.textFile('sales.txt')\
.map(lambda x: x.split())\
.filter(lambda x: not x[0].startswith('#'))\
.map(lambda x: (x[-3],(float(x[-1]),1)))\
.reduceByKey(lambda (amount1,count1),(amount2,count2): \
(amount1+amount2, count1+count2))\
.collect()
Explanation: In this case because we have a long list or tuple argument unpacking
is a judgement call.
GroupByKey
reduceByKey lets us aggregate values using sum, max, min, and other
associative operations. But what about non-associative operations like
average? How can we calculate them?
There are several ways to do this.
The first approach is to change the RDD tuples so that the operation
becomes associative.
Instead of (state, amount) use (state, (amount, count)).
The second approach is to use groupByKey, which is like
reduceByKey except it gathers together all the values in an
iterator.
The iterator can then be reduced in a map step immediately after
the groupByKey.
Q: Calculate the average sales per state.
Approach 1: Restructure the tuples.
End of explanation
def mean(iter):
total = 0.0; count = 0
for x in iter:
total += x; count += 1
return total/count
sc.textFile('sales.txt')\
.map(lambda x: x.split())\
.filter(lambda x: not x[0].startswith('#'))\
.map(lambda x: (x[-3],float(x[-1])))\
.groupByKey() \
.map(lambda (state,iter): mean(iter))\
.collect()
Explanation: Note the argument unpacking we are doing in reduceByKey to name
the elements of the tuples.
Approach 2: Use groupByKey.
End of explanation
# Employees: emp_id, loc_id, name
employee_data = [
(101, 14, 'Alice'),
(102, 15, 'Bob'),
(103, 14, 'Chad'),
(104, 15, 'Jen'),
(105, 13, 'Dee') ]
# Locations: loc_id, location
location_data = [
(14, 'SF'),
(15, 'Seattle'),
(16, 'Portland')]
employees = sc.parallelize(employee_data)
locations = sc.parallelize(location_data)
# Re-key employee records with loc_id
employees2 = employees.map(lambda (emp_id,loc_id,name):(loc_id,name));
# Now join.
employees2.join(locations).collect()
Explanation: Note that we are using unpacking again.
Pop Quiz
<details><summary>
Q: What would be the disadvantage of not using unpacking?
</summary>
1. We will need to drill down into the elements.
<br>
2. The code will be harder to read.
</details>
<details><summary>
Q: What are the pros and cons of `reduceByKey` vs `groupByKey`?
</summary>
1. `groupByKey` stores the values for particular key as an iterable.
<br>
2. This will take up space in memory or on disk.
<br>
3. `reduceByKey` therefore is more scalable.
<br>
4. However, `groupByKey` does not require associative reducer
operation.
<br>
5. For this reason `groupByKey` can be easier to program with.
</details>
Joins
Q: Given a table of employees and locations find the cities that the
employees live in.
The easiest way to do this is with a join.
End of explanation
count = 1000
list = [random.random() for _ in xrange(count)]
rdd = sc.parallelize(list)
print rdd.mean()
print rdd.variance()
print rdd.stdev()
Explanation: Pop Quiz
<details><summary>
Q: How can we keep employees that don't have a valid location ID in
the final result?
</summary>
1. Use `leftOuterJoin` to keep employees without location IDs.
<br>
2. Use `rightOuterJoin` to keep locations without employees.
<br>
3. Use `fullOuterJoin` to keep both.
<br>
</details>
RDD Details
RDD Statistics
Q: How would you calculate the mean, variance, and standard deviation of a sample
produced by Python's random() function?
Create an RDD and apply the statistical actions to it.
End of explanation
max = 10000000
%time sc.parallelize(xrange(max)).map(lambda x:x+1).count()
Explanation: Pop Quiz
<details><summary>
Q: What requirement does an RDD have to satisfy before you can apply
these statistical actions to it?
</summary>
The RDD must consist of numeric elements.
</details>
<details><summary>
Q: What is the advantage of using Spark vs Numpy to calculate mean or standard deviation?
</summary>
The calculation is distributed across different machines and will be
more scalable.
</details>
RDD Laziness
Q: What is this Spark job doing?
End of explanation
%time sc.parallelize(xrange(max)).map(lambda x:x+1)
Explanation: Q: How is the following job different from the previous one? How
long do you expect it to take?
End of explanation
!if [ -e squares.txt ] ; then rm -rf squares.txt ; fi
Explanation: Pop Quiz
<details><summary>
Q: Why did the second job complete so much faster?
</summary>
1. Because Spark is lazy.
<br>
2. Transformations produce new RDDs and do no operations on the data.
<br>
3. Nothing happens until an action is applied to an RDD.
<br>
4. An RDD is the *recipe* for a transformation, rather than the
*result* of the transformation.
</details>
<details><summary>
Q: What is the benefit of keeping the recipe instead of the result of
the action?
</summary>
1. It save memory.
<br>
2. It produces *resilience*.
<br>
3. If an RDD loses data on a machine, it always knows how to recompute it.
</details>
Writing Data
Besides reading data Spark and also write data out to a file system.
Q: Calculate the squares of integers from 1 to 100 and write them out
to squares.txt.
Make sure squares.txt does not exist.
End of explanation
rdd1 = sc.parallelize(xrange(10))
rdd2 = rdd1.map(lambda x: x*x)
rdd2.saveAsTextFile('squares.txt')
Explanation: Create the RDD and then save it to squares.txt.
End of explanation
!cat squares.txt
Explanation: Now look at the output.
End of explanation
!ls -l squares.txt
Explanation: Looks like the output is a directory.
End of explanation
!for i in squares.txt/part-*; do echo $i; cat $i; done
Explanation: Lets take a look at the files.
End of explanation
!if [ -e squares.txt ] ; then rm -rf squares.txt ; fi
Explanation: Pop Quiz
<details><summary>
Q: What's going on? Why are there two files in the output directory?
</summary>
1. There were two threads that were processing the RDD.
<br>
2. The RDD was split up in two partitions (by default).
<br>
3. Each partition was processed in a different task.
</details>
Partitions
Q: Can we control the number of partitions/tasks that Spark uses for
processing data? Solve the same problem as above but this time with 5
tasks.
Make sure squares.txt does not exist.
End of explanation
partitions = 5
rdd1 = sc.parallelize(xrange(10), partitions)
rdd2 = rdd1.map(lambda x: x*x)
rdd2.saveAsTextFile('squares.txt')
Explanation: Create the RDD and then save it to squares.txt.
End of explanation
!ls -l squares.txt
!for i in squares.txt/part-*; do echo $i; cat $i; done
Explanation: Now look at the output.
End of explanation
csv = [
"#Date,Open,High,Low,Close,Volume,Adj Close\n",
"2014-11-18,113.94,115.69,113.89,115.47,44200300,115.47\n",
"2014-11-17,114.27,117.28,113.30,113.99,46746700,113.99\n",
]
sc.parallelize(csv) \
.filter(lambda line: not line.startswith("#")) \
.map(lambda line: line.split(",")) \
.map(lambda fields: (float(fields[-1]),fields[0])) \
.sortBy(lambda (close, date): close, ascending=False) \
.take(1)
Explanation: Pop Quiz
<details><summary>
Q: How many partitions does Spark use by default?
</summary>
1. By default Spark uses 2 partitions.
<br>
2. If you read an HDFS file into an RDD Spark uses one partition per
block.
<br>
3. If you read a file into an RDD from S3 or some other source Spark
uses 1 partition per 32 MB of data.
</details>
<details><summary>
Q: If I read a file that is 200 MB into an RDD, how many partitions will that have?
</summary>
1. If the file is on HDFS that will produce 2 partitions (each is 128
MB).
<br>
2. If the file is on S3 or some other file system it will produce 7
partitions.
<br>
3. You can also control the number of partitions by passing in an
additional argument into `textFile`.
</details>
Spark Terminology
<img src="images/spark-cluster.png">
Term |Meaning
---- |-------
Task |Single thread in an executor
Partition |Data processed by a single task
Record |Records make up a partition that is processed by a single task
Notes
Every Spark application gets executors when you create a new SparkContext.
You can specify how many cores to assign to each executor.
A core is equivalent to a thread.
The number of cores determines how many tasks can run concurrently on
an executor.
Each task corresponds to one partition.
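A quick way to see this from the shell (supplementary sketch, not in the original): every RDD reports its partition count, and repartition changes it.
rdd = sc.parallelize(xrange(1000))
print rdd.getNumPartitions()                  # default partition count
print rdd.repartition(8).getNumPartitions()   # now 8 partitions, hence up to 8 tasks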
Pop Quiz
<details><summary>
Q: Suppose you have 2 executors, each with 2 cores--so a total of 4
cores. And you start a Spark job with 8 partitions. How many tasks
will run concurrently?
</summary>
4 tasks will execute concurrently.
</details>
<details><summary>
Q: What happens to the other partitions?
</summary>
1. The other partitions wait in queue until a task thread becomes
available.
<br>
2. Think of cores as turnstile gates at a train station, and
partitions as people .
<br>
3. The number of turnstiles determines how many people can get through
at once.
</details>
<details><summary>
Q: How many Spark jobs can you have in a Spark application?
</summary>
As many as you want.
</details>
<details><summary>
Q: How many Spark applications and Spark jobs are in this IPython Notebook?
</summary>
1. There is one Spark application because there is one `SparkContext`.
<br>
2. There are as many Spark jobs as we have invoked actions on RDDs.
</details>
Stock Quotes
Q: Find the date on which AAPL's stock price was the highest.
Suppose you have stock market data from Yahoo! for AAPL from
http://finance.yahoo.com/q/hp?s=AAPL+Historical+Prices. The data is
in CSV format and has these values.
Date |Open |High |Low |Close |Volume |Adj Close
---- |---- |---- |--- |----- |------ |---------
11-18-2014 |113.94 |115.69 |113.89 |115.47 |44,200,300 |115.47
11-17-2014 |114.27 |117.28 |113.30 |113.99 |46,746,700 |113.99
Here is what the CSV looks like:
csv = [
"#Date,Open,High,Low,Close,Volume,Adj Close\n",
"2014-11-18,113.94,115.69,113.89,115.47,44200300,115.47\n",
"2014-11-17,114.27,117.28,113.30,113.99,46746700,113.99\n",
]
Lets find the date on which the price was the highest.
<details><summary>
Q: What two fields do we need to extract?
</summary>
1. *Date* and *Adj Close*.
<br>
2. We want to use *Adj Close* instead of *High* so our calculation is
not affected by stock splits.
</details>
<details><summary>
Q: What field should we sort on?
</summary>
*Adj Close*
</details>
<details><summary>
Q: What sequence of operations would we need to perform?
</summary>
1. Use `filter` to remove the header line.
<br>
2. Use `map` to split each row into fields.
<br>
3. Use `map` to extract *Adj Close* and *Date*.
<br>
4. Use `sortBy` to sort descending on *Adj Close*.
<br>
5. Use `take(1)` to get the highest value.
</details>
Here is full source.
End of explanation
import urllib2
import re
def get_stock_high(symbol):
url = 'http://real-chart.finance.yahoo.com' + \
'/table.csv?s='+symbol+'&g=d&ignore=.csv'
csv = urllib2.urlopen(url).read()
csv_lines = csv.split('\n')
stock_rdd = sc.parallelize(csv_lines) \
.filter(lambda line: re.match(r'\d', line)) \
.map(lambda line: line.split(",")) \
.map(lambda fields: (float(fields[-1]),fields[0])) \
.sortBy(lambda (close, date): close, ascending=False)
return stock_rdd.take(1)
get_stock_high('AAPL')
Explanation: Here is the program for finding the high of any stock that stores
the data in memory.
End of explanation
import random
num_count = 500*1000
num_list = [random.random() for i in xrange(num_count)]
rdd1 = sc.parallelize(num_list)
rdd2 = rdd1.sortBy(lambda num: num)
Explanation: Notes
Spark is high-level like Hive and Pig.
At the same time it does not invent a new language.
This allows it to leverage the ecosystem of tools that Python,
Scala, and Java provide.
Caching and Persistence
RDD Caching
Consider this Spark job.
End of explanation
%time rdd2.count()
%time rdd2.count()
%time rdd2.count()
Explanation: Lets time running count() on rdd2.
End of explanation
rdd2.cache()
%time rdd2.count()
%time rdd2.count()
%time rdd2.count()
Explanation: The RDD does no work until an action is called. And then when an
action is called it figures out the answer and then throws away all
the data.
If you have an RDD that you are going to reuse in your computation
you can use cache() to make Spark cache the RDD.
Lets cache it and try again.
End of explanation
import pyspark
rdd = sc.parallelize(xrange(100))
rdd.persist(pyspark.StorageLevel.DISK_ONLY)
Explanation: Caching the RDD speeds up the job because the RDD does not have to
be computed from scratch again.
Notes
Calling cache() flips a flag on the RDD.
The data is not cached until an action is called.
You can uncache an RDD using unpersist().
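For example (not in the original notes), using the cached rdd2 from above:
rdd2.unpersist()       # drop the cached partitions; the RDD can still be recomputed
%time rdd2.count()     # timing returns to the slower, uncached case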
Pop Quiz
<details><summary>
Q: Will `unpersist` uncache the RDD immediately or does it wait for an
action?
</summary>
It unpersists immediately.
</details>
Caching and Persistence
Q: Persist RDD to disk instead of caching it in memory.
You can cache RDDs at different levels.
Here is an example.
End of explanation |
1,741 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Performing a full distance comparison using PSA
In this example, PSA is used to compute the mutual pairwise distances between a set of trajectories. In this notebook, we show how to perform a suitable alignment of (all frames of all) trajectories prior to a distance comparison using PSA. The purpose of this alignment step is to ensure unnecessary translations and rotations in trajectory frames are removed before distances are calculated. More details can be found in the article below
Step1: 1) Set up input data for PSA using MDAnalysis
Step2: A) Generate a reference structure for trajectory alignment
Read in closed/open AdK structures; work with C$_\alpha$ only
Step3: Move centers-of-mass of C$_\alpha$ of each structure's CORE domain to origin
Step4: Get C$_\alpha$ CORE coordinates for each structure
Step5: Compute rotation matrix, R, that minimizes rmsd between the C$_\alpha$ COREs
Step6: Rotate open structure to align its C$\alpha$ CORE to closed structure's C$\alpha$ CORE
Step7: Generate reference structure coordinates
Step8: Generate Universe for the reference structure (using reference coordinates from above)
Step9: B) Build list of simulations from topologies and trajectories
Initialize lists for the methods on which to perform PSA. PSA will be performed for four different simulations methods with three runs for each
Step10: For each method, get the topology and each of three total trajectories (per method). Each simulation is represented as a (topology, trajectory) pair of file names, which is appended to a master list of simulations.
Step11: Generate a list of universes from the list of simulations.
Step12: 2) Compute and plot all-pairs distances using PSA
Initialize a PSA comparison from the universe list using a C$_\alpha$-trajectory representation, then generate PSA Paths from the universes.
Step13: Computing mutual distances using Hausdorff and (discrete) Fréchet path metrics
Hausdorff
Step14: Plot clustered heat maps using Ward hierarchical clustering. The first heat map is plotted with the corresponding dendrogram and is fully labeled by the method names; the second heat map is annotated by the Hausdorff distances.
Step15: Fréchet
Step16: As above, plot heat maps for (discrete) Fréchet distances.
Step17: 3) Extract specific data from PSA
Get the Simulation IDs and PSA ID for the second DIMS simulation (DIMS 2) and third rTMD-F simulation (rTMD-F 3).
Step18: Use the Simulation IDs to locate Hausdorff and (discrete) Fréchet distances for the DIMS 2/rTMD-F 3 comparison
Step19: Use the PSA ID when the distances are in the form of a distance vector (see scipy.spatial.distance.squareform)
Step20: Check that data obtained from the distance matrix is the same as that accessed from the distance vector | Python Code:
%matplotlib inline
%load_ext autoreload
%autoreload 2
# Suppress FutureWarning about element-wise comparison to None
# Occurs when calling PSA plotting functions
import warnings
warnings.filterwarnings('ignore')
Explanation: Performing a full distance comparison using PSA
In this example, PSA is used to compute the mutual pairwise distances between a set of trajectories. In this notebook, we show how to perform a suitable alignment of (all frames of all) trajectories prior to a distance comparison using PSA. The purpose of this alignment step is to ensure unnecessary translations and rotations in trajectory frames are removed before distances are calculated. More details can be found in the article below:
S.L. Seyler, A. Kumar, M.F. Thorpe, and O. Beckstein, Path
Similarity Analysis: a Method for Quantifying Macromolecular
Pathways. arXiv:1505.04807v1 [q-bio.QM], 2015.
End of explanation
from MDAnalysis import Universe
from MDAnalysis.analysis.align import rotation_matrix
from MDAnalysis.analysis.psa import PSAnalysis
from pair_id import PairID
Explanation: 1) Set up input data for PSA using MDAnalysis
End of explanation
u_closed = Universe('structs/adk1AKE.pdb')
u_open = Universe('structs/adk4AKE.pdb')
ca_closed = u_closed.select_atoms('name CA')
ca_open = u_open.select_atoms('name CA')
Explanation: A) Generate a reference structure for trajectory alignment
Read in closed/open AdK structures; work with C$_\alpha$ only
End of explanation
adkCORE_resids = "(resid 1:29 or resid 60:121 or resid 160:214)"
u_closed.atoms.translate(-ca_closed.select_atoms(adkCORE_resids).center_of_mass())
u_open.atoms.translate(-ca_open.select_atoms(adkCORE_resids).center_of_mass())
Explanation: Move centers-of-mass of C$_\alpha$ of each structure's CORE domain to origin
End of explanation
closed_ca_core_coords = ca_closed.select_atoms(adkCORE_resids).positions
open_ca_core_coords = ca_open.select_atoms(adkCORE_resids).positions
Explanation: Get C$_\alpha$ CORE coordinates for each structure
End of explanation
R, rmsd_value = rotation_matrix(open_ca_core_coords, closed_ca_core_coords)
Explanation: Compute rotation matrix, R, that minimizes rmsd between the C$_\alpha$ COREs
End of explanation
u_open.atoms.rotate(R)
Explanation: Rotate open structure to align its C$\alpha$ CORE to closed structure's C$\alpha$ CORE
End of explanation
reference_coordinates = 0.5*(ca_closed.select_atoms(adkCORE_resids).positions
+ ca_open.select_atoms(adkCORE_resids).positions)
Explanation: Generate reference structure coordinates: take average positions of C$\alpha$ COREs of open and closed structures (after C$\alpha$ CORE alignment)
End of explanation
u_ref = Universe('structs/adk1AKE.pdb')
u_ref.atoms.translate(-u_ref.select_atoms(adkCORE_resids).CA.center_of_mass())
u_ref.select_atoms(adkCORE_resids).CA.set_positions(reference_coordinates)
Explanation: Generate Universe for the reference structure (using reference coordinates from above)
End of explanation
method_names = ['DIMS', 'FRODA', 'GOdMD', 'MDdMD', 'rTMD-F', 'rTMD-S',
'ANMP', 'iENM', 'MAP', 'MENM-SD', 'MENM-SP',
'Morph', 'LinInt']
labels = [] # Heat map labels
simulations = [] # List of simulation topology/trajectory filename pairs
universes = [] # List of MDAnalysis Universes representing simulations
Explanation: B) Build list of simulations from topologies and trajectories
Initialize lists for the methods on which to perform PSA. PSA will be performed for four different simulations methods with three runs for each: DIMS, FRODA, rTMD-F, and rTMD-S. Also initialize a PSAIdentifier object to keep track of the data corresponding to comparisons between pairs of simulations.
End of explanation
for method in method_names:
# Note: DIMS uses the PSF topology format
topname = 'top.psf' if 'DIMS' in method or 'TMD' in method else 'top.pdb'
pathname = 'path.dcd'
method_dir = 'methods/{}'.format(method)
if method is not 'LinInt':
for run in xrange(1, 4): # 3 runs per method
run_dir = '{}/{:03n}'.format(method_dir, run)
topology = '{}/{}'.format(method_dir, topname)
trajectory = '{}/{}'.format(run_dir, pathname)
labels.append(method + '(' + str(run) + ')')
simulations.append((topology, trajectory))
else: # only one LinInt trajectory
topology = '{}/{}'.format(method_dir, topname)
trajectory = '{}/{}'.format(method_dir, pathname)
labels.append(method)
simulations.append((topology, trajectory))
Explanation: For each method, get the topology and each of three total trajectories (per method). Each simulation is represented as a (topology, trajectory) pair of file names, which is appended to a master list of simulations.
End of explanation
for sim in simulations:
universes.append(Universe(*sim))
Explanation: Generate a list of universes from the list of simulations.
End of explanation
ref_selection = "name CA and " + adkCORE_resids
psa_full = PSAnalysis(universes,
reference=u_ref, ref_select=ref_selection,
path_select='name CA', labels=labels)
psa_full.generate_paths(align=True, store=True)
Explanation: 2) Compute and plot all-pairs distances using PSA
Initialize a PSA comparison from the universe list using a C$_\alpha$-trajectory representation, then generate PSA Paths from the universes.
End of explanation
psa_full.run(metric='hausdorff')
hausdorff_distances = psa_full.get_pairwise_distances()
Explanation: Computing mutual distances using Hausdorff and (discrete) Fréchet path metrics
Hausdorff: compute the Hausdorff distances between all unique pairs of Paths and store the distance matrix.
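As a small aside (not part of the original analysis), scipy's directed Hausdorff distance illustrates what the metric measures on two tiny point sets; the symmetric Hausdorff distance is the larger of the two directed values:
import numpy as np
from scipy.spatial.distance import directed_hausdorff
P = np.array([[0., 0.], [1., 0.], [2., 0.]])
Q = np.array([[0., 1.], [2., 1.]])
print(max(directed_hausdorff(P, Q)[0], directed_hausdorff(Q, P)[0]))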
End of explanation
psa_full.plot(filename='dh_ward_psa-full.pdf', linkage='ward');
psa_full.plot_annotated_heatmap(filename='dh_ward_psa-full_annot.pdf', linkage='ward');
Explanation: Plot clustered heat maps using Ward hierarchical clustering. The first heat map is plotted with the corresponding dendrogram and is fully labeled by the method names; the second heat map is annotated by the Hausdorff distances.
End of explanation
psa_full.run(metric='discrete_frechet')
frechet_distances = psa_full.get_pairwise_distances()
Explanation: Fréchet: compute the (discrete) Fréchet distances between all unique pairs of Paths and store the distance matrix.
End of explanation
psa_full.plot(filename='df_ward_psa-full.pdf', linkage='ward');
psa_full.plot_annotated_heatmap(filename='df_ward_psa-full_annot.pdf', linkage='ward');
Explanation: As above, plot heat maps for (discrete) Fréchet distances.
End of explanation
identifier = PairID()
for name in method_names:
run_ids = [1] if 'LinInt' in name else [1,2,3]
identifier.add_sim(name, run_ids)
sid1 = identifier.get_sim_id('DIMS 2')
sid2 = identifier.get_sim_id('rTMD-F 3')
pid = identifier.get_pair_id('DIMS 2', 'rTMD-F 3')
Explanation: 3) Extract specific data from PSA
Get the Simulation IDs and PSA ID for the second DIMS simulation (DIMS 2) and third rTMD-F simulation (rTMD-F 3).
End of explanation
print hausdorff_distances[sid1,sid2]
print frechet_distances[sid1,sid2]
Explanation: Use the Simulation IDs to locate the Hausdorff and (discrete) Fréchet distances for the DIMS 2/rTMD-F 3 comparison:
End of explanation
from scipy.spatial.distance import squareform
hausdorff_vectorform = squareform(hausdorff_distances)
frechet_vectorform = squareform(frechet_distances)
print hausdorff_vectorform[pid]
print frechet_vectorform[pid]
Explanation: Use the PSA ID when the distances are in the form of a distance vector (see scipy.spatial.distance.squareform)
End of explanation
print hausdorff_distances[sid1,sid2] == hausdorff_vectorform[pid]
print frechet_distances[sid1,sid2] == frechet_vectorform[pid]
Explanation: Check that data obtained from the distance matrix is the same as that accessed from the distance vector
End of explanation |
1,742 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute source power estimate by projecting the covariance with MNE
We can apply the MNE inverse operator to a covariance matrix to obtain
an estimate of source power. This is computationally more efficient than first
estimating the source timecourses and then computing their power. This
code is based on the code from
Step1: Compute empty-room covariance
First we compute an empty-room covariance, which captures noise from the
sensors and environment.
Step2: Epoch the data
Step3: Compute and plot covariances
In addition to the empty-room covariance above, we compute two additional
covariances
Step4: We can also look at the covariances using topomaps, here we just show the
baseline and data covariances, followed by the data covariance whitened
by the baseline covariance
Step5: Apply inverse operator to covariance
Finally, we can construct an inverse using the empty-room noise covariance
Step6: Project our data and baseline covariance to source space
Step7: And visualize power is relative to the baseline | Python Code:
# Author: Denis A. Engemann <[email protected]>
# Luke Bloy <[email protected]>
#
# License: BSD-3-Clause
import os.path as op
import numpy as np
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse_cov
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname)
Explanation: Compute source power estimate by projecting the covariance with MNE
We can apply the MNE inverse operator to a covariance matrix to obtain
an estimate of source power. This is computationally more efficient than first
estimating the source timecourses and then computing their power. This
code is based on the code from :footcite:Sabbagh2020 and has been useful to
correct for individual field spread using source localization in the context of
predictive modeling.
References
.. footbibliography::
End of explanation
raw_empty_room_fname = op.join(
data_path, 'MEG', 'sample', 'ernoise_raw.fif')
raw_empty_room = mne.io.read_raw_fif(raw_empty_room_fname)
raw_empty_room.crop(0, 30) # cropped just for speed
raw_empty_room.info['bads'] = ['MEG 2443']
raw_empty_room.add_proj(raw.info['projs'])
noise_cov = mne.compute_raw_covariance(raw_empty_room, method='shrunk')
del raw_empty_room
Explanation: Compute empty-room covariance
First we compute an empty-room covariance, which captures noise from the
sensors and environment.
End of explanation
raw.pick(['meg', 'stim', 'eog']).load_data().filter(4, 12)
raw.info['bads'] = ['MEG 2443']
events = mne.find_events(raw, stim_channel='STI 014')
event_id = dict(aud_l=1, aud_r=2, vis_l=3, vis_r=4)
tmin, tmax = -0.2, 0.5
baseline = (None, 0) # means from the first instant to t = 0
reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
proj=True, picks=('meg', 'eog'), baseline=None,
reject=reject, preload=True, decim=5, verbose='error')
del raw
Explanation: Epoch the data
End of explanation
base_cov = mne.compute_covariance(
epochs, tmin=-0.2, tmax=0, method='shrunk', verbose=True)
data_cov = mne.compute_covariance(
epochs, tmin=0., tmax=0.2, method='shrunk', verbose=True)
fig_noise_cov = mne.viz.plot_cov(noise_cov, epochs.info, show_svd=False)
fig_base_cov = mne.viz.plot_cov(base_cov, epochs.info, show_svd=False)
fig_data_cov = mne.viz.plot_cov(data_cov, epochs.info, show_svd=False)
Explanation: Compute and plot covariances
In addition to the empty-room covariance above, we compute two additional
covariances:
Baseline covariance, which captures signals not of interest in our
analysis (e.g., sensor noise, environmental noise, physiological
artifacts, and also resting-state-like brain activity / "noise").
Data covariance, which captures our activation of interest (in addition
to noise sources).
End of explanation
evoked = epochs.average().pick('meg')
evoked.drop_channels(evoked.info['bads'])
evoked.plot(time_unit='s')
evoked.plot_topomap(times=np.linspace(0.05, 0.15, 5), ch_type='mag')
noise_cov.plot_topomap(evoked.info, 'grad', title='Noise')
data_cov.plot_topomap(evoked.info, 'grad', title='Data')
data_cov.plot_topomap(evoked.info, 'grad', noise_cov=noise_cov,
title='Whitened data')
Explanation: We can also look at the covariances using topomaps, here we just show the
baseline and data covariances, followed by the data covariance whitened
by the baseline covariance:
End of explanation
# Read the forward solution and compute the inverse operator
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-oct-6-fwd.fif'
fwd = mne.read_forward_solution(fname_fwd)
# make an MEG inverse operator
info = evoked.info
inverse_operator = make_inverse_operator(info, fwd, noise_cov,
loose=0.2, depth=0.8)
Explanation: Apply inverse operator to covariance
Finally, we can construct an inverse using the empty-room noise covariance:
End of explanation
stc_data = apply_inverse_cov(data_cov, evoked.info, inverse_operator,
nave=len(epochs), method='dSPM', verbose=True)
stc_base = apply_inverse_cov(base_cov, evoked.info, inverse_operator,
nave=len(epochs), method='dSPM', verbose=True)
Explanation: Project our data and baseline covariance to source space:
End of explanation
stc_data /= stc_base
brain = stc_data.plot(subject='sample', subjects_dir=subjects_dir,
clim=dict(kind='percent', lims=(50, 90, 98)),
smoothing_steps=7)
Explanation: And visualize power is relative to the baseline:
End of explanation |
1,743 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multiple Traveling Salesman and the Problem of routing vehicles
Imagine that, instead of one salesman traveling to all the sites, the workload is shared among many salesmen. This generalization of the traveling salesman problem is called the multiple traveling salesman problem, or mTSP. In much of the literature it is studied under the name of the Vehicle Routing Problem (VRP), but it is equivalent. The problem goes back to the early 1960s, when it was applied to oil delivery issues [1]. This is another NP-hard problem, so for a large number of locations a solution might take a long time to find. We can solve it for small values with PuLP though.
[1]
Step1: 1. First lets make some fake data
Step2: 2. The model
With a few modifications, the original traveling salesman problem can support multiple salesmen. Instead of making each facility only be visited once, the origin facility will be visited multiple times. If we have two salesmen, then the origin is visited exactly twice, and so on.
For $K$ vehicles or sales people
Variables
Step3: Solve it!
Step4: And the result
Step5: The optimal tours | Python Code:
from pulp import *
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sn
Explanation: Multiple Traveling Salesman and the Problem of routing vehicles
Imagine that, instead of one salesman traveling to all the sites, the workload is shared among many salesmen. This generalization of the traveling salesman problem is called the multiple traveling salesman problem, or mTSP. In much of the literature it is studied under the name of the Vehicle Routing Problem (VRP), but it is equivalent. The problem goes back to the early 1960s, when it was applied to oil delivery issues [1]. This is another NP-hard problem, so for a large number of locations a solution might take a long time to find. We can solve it for small values with PuLP though.
[1] : https://andresjaquep.files.wordpress.com/2008/10/2627477-clasico-dantzig.pdf
End of explanation
#a handful of sites
sites = ['org','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P']
print(len(sites)-1)
#make some positions (so we can plot this)
positions = dict( ( a, (np.random.rand()-.5, np.random.rand()-.5)) for a in sites)
positions['org']=(0,0)
for s in positions:
p = positions[s]
plt.plot(p[0],p[1],'o')
plt.text(p[0]+.01,p[1],s,horizontalalignment='left',verticalalignment='center')
plt.gca().axis('off');
#straight line distance for simplicity
d = lambda p1,p2: np.sqrt( (p1[0]-p2[0])**2+ (p1[1]-p2[1])**2)
#calculate all the pairs
distances=dict( ((s1,s2), d(positions[s1],positions[s2])) for s1 in positions for s2 in positions if s1!=s2)
Explanation: 1. First lets make some fake data
End of explanation
K = 4 #the number of sales people
# create the problem
prob=LpProblem("vehicle",LpMinimize)
#indicator variable if site i is connected to site j in the tour
x = LpVariable.dicts('x',distances, 0,1,LpBinary)
#dummy vars to eliminate subtours
u = LpVariable.dicts('u', sites, 0, len(sites)-1, LpInteger)
#the objective
cost = lpSum([x[(i,j)]*distances[(i,j)] for (i,j) in distances])
prob+=cost
#constraints
for k in sites:
cap = 1 if k != 'org' else K
#inbound connection
prob+= lpSum([ x[(i,k)] for i in sites if (i,k) in x]) ==cap
#outbound connection
prob+=lpSum([ x[(k,i)] for i in sites if (k,i) in x]) ==cap
#subtour elimination
N=len(sites)/K
for i in sites:
for j in sites:
if i != j and (i != 'org' and j!= 'org') and (i,j) in x:
prob += u[i] - u[j] <= (N)*(1-x[(i,j)]) - 1
Explanation: 2. The model
With a few modifications, the original traveling salesman problem can support multiple salesmen. Instead of making each facility only be visited once, the origin facility will be visited multiple times. If we have two salesmen, then the origin is visited exactly twice, and so on.
For $K$ vehicles or sales people
Variables:
indicators:
$$x_{i,j} = \begin{cases}
1, & \text{if site i comes exactly before j in the tour} \\
0, & \text{otherwise}
\end{cases}
$$
order dummy variables:
$$u_{i} : \text{order site i is visited}$$
Minimize:
$$\sum_{i,j \space i \neq j} x_{i,j} Distance(i,j)$$
Subject to:
$$\sum_{i \neq j} x_{i,j} = 1 \space \forall j \text{ except the origin}$$
$$\sum_{i \neq origin} x_{i,origin} = K$$
$$\sum_{j \neq i} x_{i,j} = 1 \space \forall i \text{ except the origin}$$
$$\sum_{j \neq origin} x_{origin,j} = K$$
$$u_{i}-u_{j} \leq (N \div M)(1-x_{i,j}) - 1 \ \forall i,j \text{ except origins}$$
End of explanation
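For a feel for what the subtour-elimination bound works out to in this example (a quick check added here, not part of the original notebook; it reuses `sites` and `K` from the code above):

```python
# With 16 non-origin sites shared between K = 4 salesmen, the bound N used in the
# u_i - u_j constraints caps how many stops can be chained before returning to 'org'.
N = len(sites) / K
print(len(sites) - 1, "sites split between", K, "salesmen, N =", N)
```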
%time prob.solve()
#prob.solve(GLPK_CMD(options=['--simplex']))
print(LpStatus[prob.status])
Explanation: Solve it!
End of explanation
non_zero_edges = [ e for e in x if value(x[e]) != 0 ]
def get_next_site(parent):
'''helper function to get the next edge'''
edges = [e for e in non_zero_edges if e[0]==parent]
for e in edges:
non_zero_edges.remove(e)
return edges
tours = get_next_site('org')
tours = [ [e] for e in tours ]
for t in tours:
while t[-1][1] !='org':
t.append(get_next_site(t[-1][1])[-1])
Explanation: And the result:
End of explanation
for t in tours:
print(' -> '.join([ a for a,b in t]+['org']))
#draw the tours
colors = [np.random.rand(3) for i in range(len(tours))]
for t,c in zip(tours,colors):
for a,b in t:
p1,p2 = positions[a], positions[b]
plt.plot([p1[0],p2[0]],[p1[1],p2[1]], color=c)
#draw the map again
for s in positions:
p = positions[s]
plt.plot(p[0],p[1],'o')
plt.text(p[0]+.01,p[1],s,horizontalalignment='left',verticalalignment='center')
plt.gca().axis('off');
print(value(prob.objective))
Explanation: The optimal tours:
End of explanation |
1,744 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Naive Bayes Male or Female Multivariate
author
Step1: Since we are simply using two Multivariate Gaussian Distributions, our Naive Bayes model is very simple to initialize.
Step2: Of course currently our model is uninitialized and needs data in order to be able to classify people as male or female. So let's create the data. For multivariate distributions, the training data set has to be specified as a list of lists with each entry being a single case for the data set. We will specify males with a 0 and females with a 1.
Step3: Now we can fit our Naive Bayes model to the set of data.
Step4: Now let's test our model on the following sample.
Step5: First the probability of the data occurring under each model.
Step6: We can see that the probability that the sample is a female is significantly larger than the probability that it is male. Logically when we classify the data as either male (0) or female (1) we get the output | Python Code:
from pomegranate import *
import numpy as np
Explanation: Naive Bayes Male or Female Multivariate
author: Nicholas Farn [<a href="sendto:[email protected]">[email protected]</a>]
This example shows how to create a Multivariate Gaussian Naive Bayes Classifier using pomegranate. In this example we will use a set of data measuring a person's height (feet), weight (lbs), and foot size (inches) in order to classify them as male or female. This example is drawn from the example in the Wikipedia <a href="https://en.wikipedia.org/wiki/Naive_Bayes_classifier#Examples">article</a> on Naive Bayes Classifiers.
End of explanation
model = NaiveBayes( MultivariateGaussianDistribution, n_components=2 )
Explanation: Since we are simply using two Multivariate Gaussian Distributions, our Naive Bayes model is very simple to initialize.
End of explanation
X = np.array([[ 6, 180, 12 ],
[ 5.92, 190, 11 ],
[ 5.58, 170, 12 ],
[ 5.92, 165, 10 ],
[ 6, 160, 9 ],
[ 5, 100, 6 ],
[ 5.5, 100, 8 ],
[ 5.42, 130, 7 ],
[ 5.75, 150, 9 ],
[ 5.5, 140, 8 ]])
y = np.array([ 0, 0, 0, 0, 0, 1, 1, 1, 1, 1 ])
Explanation: Of course currently our model is uninitialized and needs data in order to be able to classify people as male or female. So let's create the data. For multivariate distributions, the training data set has to be specified as a list of lists with each entry being a single case for the data set. We will specify males with a 0 and females with a 1.
End of explanation
model.fit( X, y )
Explanation: Now we can fit our Naive Bayes model to the set of data.
End of explanation
data = np.array([[ 5.75, 130, 8 ]])
Explanation: Now let's test our model on the following sample.
End of explanation
for sample, probs in zip( data, model.predict_proba( data ) ):
print "Height {}, weight {}, and foot size {} is {:.3}% male, {:.3}% female.".format( sample[0], sample[1], sample[2], 100*probs[0], 100*probs[1] )
Explanation: First the probability of the data occurring under each model.
End of explanation
for sample, result in zip( data, model.predict( data ) ):
print "Person with height {}, weight {}, and foot size {} is {}".format( sample[0], sample[1], sample[2], "female" if result else "male" )
Explanation: We can see that the probability that the sample is a female is significantly larger than the probability that it is male. Logically when we classify the data as either male (0) or female (1) we get the output: female.
End of explanation |
1,745 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Graded = 10/11
Homework #4
These problem sets focus on list comprehensions, string operations and regular expressions.
Problem set #1
Step1: In the following cell, complete the code with an expression that evaluates to a list of integers derived from the raw numbers in numbers_str, assigning the value of this expression to a variable numbers. If you do everything correctly, executing the cell should produce the output 985 (not '985').
Step2: Great! We'll be using the numbers list you created above in the next few problems.
In the cell below, fill in the square brackets so that the expression evaluates to a list of the ten largest values in numbers. Expected output
Step3: In the cell below, write an expression that evaluates to a list of the integers from numbers that are evenly divisible by three, sorted in numerical order. Expected output
Step4: Okay. You're doing great. Now, in the cell below, write an expression that evaluates to a list of the square roots of all the integers in numbers that are less than 100. In order to do this, you'll need to use the sqrt function from the math module, which I've already imported for you. Expected output
Step5: Problem set #2
Step6: Now, in the cell below, write a list comprehension that evaluates to a list of names of the planets that have a diameter greater than four earth radii. Expected output
Step7: In the cell below, write a single expression that evaluates to the sum of the mass of all planets in the solar system. Expected output
Step8: Good work. Last one with the planets. Write an expression that evaluates to the names of the planets that have the word giant anywhere in the value for their type key. Expected output
Step9: EXTREME BONUS ROUND
Step10: In the cell above, I defined a variable poem_lines which has a list of lines in the poem, and imported the re library.
In the cell below, write a list comprehension (using re.search()) that evaluates to a list of lines that contain two words next to each other (separated by a space) that have exactly four characters. (Hint
Step11: Good! Now, in the following cell, write a list comprehension that evaluates to a list of lines in the poem that end with a five-letter word, regardless of whether or not there is punctuation following the word at the end of the line. (Hint
Step12: Okay, now a slightly trickier one. In the cell below, I've created a string all_lines which evaluates to the entire text of the poem in one string. Execute this cell.
Step13: Now, write an expression that evaluates to all of the words in the poem that follow the word 'I'. (The strings in the resulting list should not include the I.) Hint
Step14: Finally, something super tricky. Here's a list of strings that contains a restaurant menu. Your job is to wrangle this plain text, slightly-structured data into a list of dictionaries.
Step15: You'll need to pull out the name of the dish and the price of the dish. The v after the hyphen indicates that the dish is vegetarian---you'll need to include that information in your dictionary as well. I've included the basic framework; you just need to fill in the contents of the for loop.
Expected output | Python Code:
numbers_str = '496,258,332,550,506,699,7,985,171,581,436,804,736,528,65,855,68,279,721,120'
Explanation: Graded = 10/11
Homework #4
These problem sets focus on list comprehensions, string operations and regular expressions.
Problem set #1: List slices and list comprehensions
Let's start with some data. The following cell contains a string with comma-separated integers, assigned to a variable called numbers_str:
End of explanation
new_list = numbers_str.split(",")
numbers = [int(item) for item in new_list]
max(numbers)
Explanation: In the following cell, complete the code with an expression that evaluates to a list of integers derived from the raw numbers in numbers_str, assigning the value of this expression to a variable numbers. If you do everything correctly, executing the cell should produce the output 985 (not '985').
End of explanation
#len(numbers)
sorted(numbers)[10:]
Explanation: Great! We'll be using the numbers list you created above in the next few problems.
In the cell below, fill in the square brackets so that the expression evaluates to a list of the ten largest values in numbers. Expected output:
[506, 528, 550, 581, 699, 721, 736, 804, 855, 985]
(Hint: use a slice.)
End of explanation
sorted([item for item in numbers if item % 3 == 0])
Explanation: In the cell below, write an expression that evaluates to a list of the integers from numbers that are evenly divisible by three, sorted in numerical order. Expected output:
[120, 171, 258, 279, 528, 699, 804, 855]
End of explanation
from math import sqrt
# your code here
squared = []
for item in numbers:
if item < 100:
squared_numbers = sqrt(item)
squared.append(squared_numbers)
squared
Explanation: Okay. You're doing great. Now, in the cell below, write an expression that evaluates to a list of the square roots of all the integers in numbers that are less than 100. In order to do this, you'll need to use the sqrt function from the math module, which I've already imported for you. Expected output:
[2.6457513110645907, 8.06225774829855, 8.246211251235321]
(These outputs might vary slightly depending on your platform.)
End of explanation
planets = [
{'diameter': 0.382,
'mass': 0.06,
'moons': 0,
'name': 'Mercury',
'orbital_period': 0.24,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 0.949,
'mass': 0.82,
'moons': 0,
'name': 'Venus',
'orbital_period': 0.62,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 1.00,
'mass': 1.00,
'moons': 1,
'name': 'Earth',
'orbital_period': 1.00,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 0.532,
'mass': 0.11,
'moons': 2,
'name': 'Mars',
'orbital_period': 1.88,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 11.209,
'mass': 317.8,
'moons': 67,
'name': 'Jupiter',
'orbital_period': 11.86,
'rings': 'yes',
'type': 'gas giant'},
{'diameter': 9.449,
'mass': 95.2,
'moons': 62,
'name': 'Saturn',
'orbital_period': 29.46,
'rings': 'yes',
'type': 'gas giant'},
{'diameter': 4.007,
'mass': 14.6,
'moons': 27,
'name': 'Uranus',
'orbital_period': 84.01,
'rings': 'yes',
'type': 'ice giant'},
{'diameter': 3.883,
'mass': 17.2,
'moons': 14,
'name': 'Neptune',
'orbital_period': 164.8,
'rings': 'yes',
'type': 'ice giant'}]
Explanation: Problem set #2: Still more list comprehensions
Still looking good. Let's do a few more with some different data. In the cell below, I've defined a data structure and assigned it to a variable planets. It's a list of dictionaries, with each dictionary describing the characteristics of a planet in the solar system. Make sure to run the cell before you proceed.
End of explanation
[item['name'] for item in planets if item['diameter'] > 2]
#I got one more planet!
#Ta-Stephan: We asked for greater than 4, not greater than 2.
Explanation: Now, in the cell below, write a list comprehension that evaluates to a list of names of the planets that have a diameter greater than four earth radii. Expected output:
['Jupiter', 'Saturn', 'Uranus']
End of explanation
#sum([int(item['mass']) for item in planets])
sum([item['mass'] for item in planets])
Explanation: In the cell below, write a single expression that evaluates to the sum of the mass of all planets in the solar system. Expected output: 446.79
End of explanation
import re
planet_with_giant= [item['name'] for item in planets if re.search(r'\bgiant\b', item['type'])]
planet_with_giant
Explanation: Good work. Last one with the planets. Write an expression that evaluates to the names of the planets that have the word giant anywhere in the value for their type key. Expected output:
['Jupiter', 'Saturn', 'Uranus', 'Neptune']
End of explanation
import re
poem_lines = ['Two roads diverged in a yellow wood,',
'And sorry I could not travel both',
'And be one traveler, long I stood',
'And looked down one as far as I could',
'To where it bent in the undergrowth;',
'',
'Then took the other, as just as fair,',
'And having perhaps the better claim,',
'Because it was grassy and wanted wear;',
'Though as for that the passing there',
'Had worn them really about the same,',
'',
'And both that morning equally lay',
'In leaves no step had trodden black.',
'Oh, I kept the first for another day!',
'Yet knowing how way leads on to way,',
'I doubted if I should ever come back.',
'',
'I shall be telling this with a sigh',
'Somewhere ages and ages hence:',
'Two roads diverged in a wood, and I---',
'I took the one less travelled by,',
'And that has made all the difference.']
Explanation: EXTREME BONUS ROUND: Write an expression below that evaluates to a list of the names of the planets in ascending order by their number of moons. (The easiest way to do this involves using the key parameter of the sorted function, which we haven't yet discussed in class! That's why this is an EXTREME BONUS question.) Expected output:
['Mercury', 'Venus', 'Earth', 'Mars', 'Neptune', 'Uranus', 'Saturn', 'Jupiter']
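One possible way to get there (an illustrative answer, not part of the graded submission), using the key parameter from the hint:

```python
# Sort the planet dictionaries by moon count, then keep just the names.
[p['name'] for p in sorted(planets, key=lambda planet: planet['moons'])]
```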
Problem set #3: Regular expressions
In the following section, we're going to do a bit of digital humanities. (I guess this could also be journalism if you were... writing an investigative piece about... early 20th century American poetry?) We'll be working with the following text, Robert Frost's The Road Not Taken. Make sure to run the following cell before you proceed.
End of explanation
[item for item in poem_lines if re.search(r'\b[a-zA-Z]{4}\b \b[a-zA-Z]{4}\b', item)]
Explanation: In the cell above, I defined a variable poem_lines which has a list of lines in the poem, and imported the re library.
In the cell below, write a list comprehension (using re.search()) that evaluates to a list of lines that contain two words next to each other (separated by a space) that have exactly four characters. (Hint: use the \b anchor. Don't overthink the "two words in a row" requirement.)
Expected result:
['Then took the other, as just as fair,',
'Had worn them really about the same,',
'And both that morning equally lay',
'I doubted if I should ever come back.',
'I shall be telling this with a sigh']
End of explanation
[item for item in poem_lines if re.search(r'\b[a-zA-Z]{5}\b.?$',item)]
Explanation: Good! Now, in the following cell, write a list comprehension that evaluates to a list of lines in the poem that end with a five-letter word, regardless of whether or not there is punctuation following the word at the end of the line. (Hint: Try using the ? quantifier. Is there an existing character class, or a way to write a character class, that matches non-alphanumeric characters?) Expected output:
['And be one traveler, long I stood',
'And looked down one as far as I could',
'And having perhaps the better claim,',
'Though as for that the passing there',
'In leaves no step had trodden black.',
'Somewhere ages and ages hence:']
End of explanation
all_lines = " ".join(poem_lines)
Explanation: Okay, now a slightly trickier one. In the cell below, I've created a string all_lines which evaluates to the entire text of the poem in one string. Execute this cell.
End of explanation
re.findall(r'[I] (\b\w+\b)', all_lines)
Explanation: Now, write an expression that evaluates to all of the words in the poem that follow the word 'I'. (The strings in the resulting list should not include the I.) Hint: Use re.findall() and grouping! Expected output:
['could', 'stood', 'could', 'kept', 'doubted', 'should', 'shall', 'took']
End of explanation
entrees = [
"Yam, Rosemary and Chicken Bowl with Hot Sauce $10.95",
"Lavender and Pepperoni Sandwich $8.49",
"Water Chestnuts and Peas Power Lunch (with mayonnaise) $12.95 - v",
"Artichoke, Mustard Green and Arugula with Sesame Oil over noodles $9.95 - v",
"Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce $19.95",
"Rutabaga And Cucumber Wrap $8.49 - v"
]
Explanation: Finally, something super tricky. Here's a list of strings that contains a restaurant menu. Your job is to wrangle this plain text, slightly-structured data into a list of dictionaries.
End of explanation
#Ta-Stephan: Careful - price should be an int, not a string.
menu = []
for item in entrees:
entrees_dictionary= {}
match = re.search(r'(.*) .(\d*\d\.\d{2})\ ?( - v+)?$', item)
if match:
name = match.group(1)
price= match.group(2)
#vegetarian= match.group(3)
if match.group(3):
entrees_dictionary['vegetarian']= True
else:
entrees_dictionary['vegetarian']= False
entrees_dictionary['name']= name
entrees_dictionary['price']= price
menu.append(entrees_dictionary)
menu
Explanation: You'll need to pull out the name of the dish and the price of the dish. The v after the hyphen indicates that the dish is vegetarian---you'll need to include that information in your dictionary as well. I've included the basic framework; you just need to fill in the contents of the for loop.
Expected output:
[{'name': 'Yam, Rosemary and Chicken Bowl with Hot Sauce ',
'price': 10.95,
'vegetarian': False},
{'name': 'Lavender and Pepperoni Sandwich ',
'price': 8.49,
'vegetarian': False},
{'name': 'Water Chestnuts and Peas Power Lunch (with mayonnaise) ',
'price': 12.95,
'vegetarian': True},
{'name': 'Artichoke, Mustard Green and Arugula with Sesame Oil over noodles ',
'price': 9.95,
'vegetarian': True},
{'name': 'Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce ',
'price': 19.95,
'vegetarian': False},
{'name': 'Rutabaga And Cucumber Wrap ', 'price': 8.49, 'vegetarian': True}]
Great work! You are done. Go cavort in the sun, or whatever it is you students do when you're done with your homework
End of explanation |
1,746 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Generate Features And Target Data
Step2: Create Logistic Regression
Step3: Cross-Validate Model Using Precision | Python Code:
# Load libraries
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification
Explanation: Title: Precision
Slug: precision
Summary: How to evaluate a Python machine learning model using precision.
Date: 2017-09-15 12:00
Category: Machine Learning
Tags: Model Evaluation
Authors: Chris Albon
<a alt="Precision" href="https://machinelearningflashcards.com">
<img src="precision/Precision_print.png" class="flashcard center-block">
</a>
Preliminaries
End of explanation
# Generate features matrix and target vector
X, y = make_classification(n_samples = 10000,
n_features = 3,
n_informative = 3,
n_redundant = 0,
n_classes = 2,
random_state = 1)
Explanation: Generate Features And Target Data
End of explanation
# Create logistic regression
logit = LogisticRegression()
Explanation: Create Logistic Regression
End of explanation
# Cross-validate model using precision
cross_val_score(logit, X, y, scoring="precision")
Explanation: Cross-Validate Model Using Precision
End of explanation |
1,747 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
For high dpi displays.
Step1: 0. General note
This example compares pressure calculated from pytheos and original publication for the gold scale by Dorogokupets 2007.
1. Global setup
Step2: 3. Compare
Step3: <img src='./tables/Dorogokupets2007_Au.png'> | Python Code:
%config InlineBackend.figure_format = 'retina'
Explanation: For high dpi displays.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
from uncertainties import unumpy as unp
import pytheos as eos
Explanation: 0. General note
This example compares pressure calculated from pytheos and original publication for the gold scale by Dorogokupets 2007.
1. Global setup
End of explanation
eta = np.linspace(1., 0.65, 8)
print(eta)
dorogokupets2007_au = eos.gold.Dorogokupets2007()
help(dorogokupets2007_au)
dorogokupets2007_au.print_equations()
dorogokupets2007_au.print_equations()
dorogokupets2007_au.print_parameters()
v0 = 67.84742110765599
dorogokupets2007_au.three_r
v = v0 * (eta)
temp = 2500.
p = dorogokupets2007_au.cal_p(v, temp * np.ones_like(v))
Explanation: 3. Compare
End of explanation
print('for T = ', temp)
for eta_i, p_i in zip(eta, p):
print("{0: .3f} {1: .2f} ".format(eta_i, p_i))
v = dorogokupets2007_au.cal_v(p, temp * np.ones_like(p), min_strain=0.6)
print(1.-(v/v0))
Explanation: <img src='./tables/Dorogokupets2007_Au.png'>
End of explanation |
1,748 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Testing averaging methods
From this post
The equation is
Step2: $$\frac{\partial\phi}{\partial t}+\nabla . \left(-D\left(\phi_{0}\right)\nabla \phi\right)+\nabla.\left(-\nabla \phi_{0}\left(\frac{\partial D}{\partial \phi}\right){\phi{0,face}}\phi\right) =\nabla.\left(-\nabla \phi_{0}\left(\frac{\partial D}{\partial \phi}\right){\phi{0,face}}\phi_{0,face}\right)$$
Step3: $$\frac{\partial\phi}{\partial t}+\nabla . \left(-D\left(\phi_{0}\right)\nabla \phi\right)+\nabla.\left(-\nabla \phi_{0}\left(\frac{\partial D}{\partial \phi}\right){\phi{0,face}}\phi\right) =\nabla.\left(-\nabla \phi_{0}\left(\frac{\partial D}{\partial \phi}\right){\phi{0,face}}\phi_{0,face}\right)$$
Step4: The above figure shows how the upwind convection term is not consistent with the linear averaging. | Python Code:
from fipy import Grid2D, CellVariable, FaceVariable
import numpy as np
def upwindValues(mesh, field, velocity):
    """Calculate the upwind face values for a field variable
Note that the mesh.faceNormals point from `id1` to `id2` so if velocity is in the same
direction as the `faceNormal`s then we take the value from `id1`s and visa-versa.
Args:
mesh: a fipy mesh
field: a fipy cell variable or equivalent numpy array
velocity: a fipy face variable (rank 1) or equivalent numpy array
Returns:
        numpy array shaped as a fipy face variable
    """
# direction is over faces (rank 0)
direction = np.sum(np.array(mesh.faceNormals * velocity), axis=0)
# id1, id2 are shaped as faces but contains cell index values
id1, id2 = mesh._adjacentCellIDs
return np.where(direction >= 0, field[id1], field[id2])
from fipy import *
import numpy as np
Explanation: Testing averaging methods
From this post
The equation is: $$\frac{\partial\phi}{\partial t}+\nabla . (-D(\phi)\nabla \phi) =0$$
End of explanation
L= 1.0 # domain length
Nx= 100
dx_min=L/Nx
x=np.array([0.0, dx_min])
while x[-1]<L:
x=np.append(x, x[-1]+1.05*(x[-1]-x[-2]))
x[-1]=L
dx = np.diff(x)  # cell widths from the graded point array; dx was otherwise undefined
mesh = Grid1D(dx=dx)
phi = CellVariable(mesh=mesh, name="phi", hasOld=True, value = 0.0)
phi.constrain(5.0, mesh.facesLeft)
phi.constrain(0., mesh.facesRight)
# D(phi)=D0*(1.0+phi.^2)
# dD(phi)=2.0*D0*phi
D0 = 1.0
dt= 0.01*L*L/D0 # a proper time step for diffusion process
eq = TransientTerm(var=phi) - DiffusionTerm(var=phi, coeff=D0*(1+phi.faceValue**2))
for i in range(4):
for i in range(5):
c_res = eq.sweep(dt = dt)
phi.updateOld()
Viewer(vars = phi, datamax=5.0, datamin=0.0);
# viewer.plot()
Explanation: $$\frac{\partial\phi}{\partial t}+\nabla . \left(-D\left(\phi_{0}\right)\nabla \phi\right)+\nabla.\left(-\nabla \phi_{0}\left(\frac{\partial D}{\partial \phi}\right)_{\phi_{0,face}}\phi\right) =\nabla.\left(-\nabla \phi_{0}\left(\frac{\partial D}{\partial \phi}\right)_{\phi_{0,face}}\phi_{0,face}\right)$$
End of explanation
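A quick note on where the coefficient used below comes from: with $D(\phi)=D_{0}\left(1+\phi^{2}\right)$ (as in the comments above), the derivative evaluated at the face value is

$$\left(\frac{\partial D}{\partial \phi}\right)_{\phi_{0,face}} = 2 D_{0}\,\phi_{0,face},$$

which is why the linearized convection and source terms in the next cell use -2*D0*phi2.faceValue*phi2.faceGrad.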
phi2 = CellVariable(mesh=mesh, name="phi", hasOld=True, value = 0.0)
phi2.constrain(5.0, mesh.facesLeft)
phi2.constrain(0., mesh.facesRight)
# D(phi)=D0*(1.0+phi.^2)
# dD(phi)=2.0*D0*phi
D0 = 1.0
dt= 0.01*L*L/D0 # a proper time step for diffusion process
eq2 = TransientTerm(var=phi2)-DiffusionTerm(var=phi2, coeff=D0*(1+phi2.faceValue**2))+ \
UpwindConvectionTerm(var=phi2, coeff=-2*D0*phi2.faceValue*phi2.faceGrad)== \
(-2*D0*phi2.faceValue*phi2.faceGrad*phi2.faceValue).divergence
for i in range(4):
for i in range(5):
c_res = eq2.sweep(dt = dt)
phi2.updateOld()
viewer = Viewer(vars = [phi, phi2], datamax=5.0, datamin=0.0)
Explanation: $$\frac{\partial\phi}{\partial t}+\nabla . \left(-D\left(\phi_{0}\right)\nabla \phi\right)+\nabla.\left(-\nabla \phi_{0}\left(\frac{\partial D}{\partial \phi}\right)_{\phi_{0,face}}\phi\right) =\nabla.\left(-\nabla \phi_{0}\left(\frac{\partial D}{\partial \phi}\right)_{\phi_{0,face}}\phi_{0,face}\right)$$
End of explanation
phi3 = CellVariable(mesh=mesh, name="phi", hasOld=True, value = 0.0)
phi3.constrain(5.0, mesh.facesLeft)
phi3.constrain(0., mesh.facesRight)
# D(phi)=D0*(1.0+phi.^2)
# dD(phi)=2.0*D0*phi
D0 = 1.0
dt= 0.01*L*L/D0 # a proper time step for diffusion process
u = -2*D0*phi3.faceValue*phi3.faceGrad
eq3 = TransientTerm(var=phi3)-DiffusionTerm(var=phi3, coeff=D0*(1+phi3.faceValue**2))+ \
UpwindConvectionTerm(var=phi3, coeff=-2*D0*phi3.faceValue*phi3.faceGrad)== \
(-2*D0*phi3.faceValue*phi3.faceGrad*phi3.faceValue).divergence
for i in range(4):
for i in range(5):
c_res = eq3.sweep(dt = dt)
phi_face = FaceVariable(mesh, upwindValues(mesh, phi3, u))
u = -2*D0*phi_face*phi3.faceGrad
eq3 = TransientTerm(var=phi3)-DiffusionTerm(var=phi3, coeff=D0*(1+phi3.faceValue**2))+ \
UpwindConvectionTerm(var=phi3, coeff=u)== \
(u*phi_face).divergence
phi3.updateOld()
viewer = Viewer(vars = [phi, phi3], datamax=5.0, datamin=0.0)
Explanation: The above figure shows how the upwind convection term is not consistent with the linear averaging.
End of explanation |
1,749 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Trees and Forests
NOTE
Step1: Decision Tree Classification
Step2: Random Forests
Step3: Selecting the Optimal Estimator via Cross-Validation
Step4: Fit the forest manually | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
Explanation: Trees and Forests
NOTE: This module code was partly taken from Andreas Mueller's Advanced scikit-learn O'Reilly course
It is just used to explore the scikit-learn random forest object in a systematic manner
I've added more code to it to understand how to generate tree plots for random forests
End of explanation
%%bash
pwd
ls
from figures import plot_interactive_tree
plot_interactive_tree.plot_tree_interactive()
Explanation: Decision Tree Classification
End of explanation
from figures import plot_interactive_forest
plot_interactive_forest.plot_forest_interactive()
Explanation: Random Forests
End of explanation
from sklearn import grid_search
from sklearn import tree
from sklearn.datasets import load_digits
from sklearn.cross_validation import train_test_split
from sklearn.ensemble import RandomForestClassifier
digits = load_digits()
X, y = digits.data, digits.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
rf = RandomForestClassifier(n_estimators=200, n_jobs=-1)
parameters = {'max_features':['sqrt', 'log2'],
'max_depth':[5, 7, 9]}
clf_grid = grid_search.GridSearchCV(rf, parameters)
clf_grid.fit(X_train, y_train)
clf_grid.score(X_train, y_train)
clf_grid.score(X_test, y_test)
clf_grid.best_params_
clf_grid.best_estimator_
Explanation: Selecting the Optimal Estimator via Cross-Validation
End of explanation
rf = RandomForestClassifier(n_estimators=5, n_jobs=-1)
rf.fit(X_train, y_train)
rf.score(X_test, y_test)
print([estimator.tree_.max_depth for estimator in rf.estimators_])
for idx, dec_tree in enumerate(rf.estimators_):
if idx == 0:
print(dec_tree.tree_.max_depth)
else:
pass
for idx, dec_tree in enumerate(rf.estimators_):
if idx == 0:
tree.export_graphviz(dec_tree)
from sklearn import tree
i_tree = 0
for tree_in_forest in rf.estimators_:
if i_tree ==0:
with open('tree_' + str(i_tree) + '.png', 'w') as my_file:
my_file = tree.export_graphviz(tree_in_forest, out_file = my_file)
i_tree = i_tree + 1
else:
pass
import io
from scipy import misc
from sklearn import tree
import pydot
def show_tree(decisionTree, file_path):
dotfile = io.StringIO()
tree.export_graphviz(decisionTree, out_file=dotfile)
(graph,)=pydot.graph_from_dot_data(dotfile.getvalue())
#pydot.graph_from_dot_data(dotfile.getvalue()).write_png(file_path)
graph.write_png(file_path)
i = misc.imread(file_path)
plt.imshow(i)
from sklearn import tree
i_tree = 0
for tree_in_forest in rf.estimators_:
if i_tree ==0:
show_tree(tree_in_forest, 'test.png')
Explanation: Fit the forest manually
End of explanation |
1,750 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intro to Cython
Why Cython
Outline
Step1: Now, let's time this
Step2: Not too bad, but this can add up. Let's see if Cython can do better
Step3: That's a little bit faster, which is nice since all we did was to call Cython on the exact same code. But can we do better?
Step4: The final bit of "easy" Cython optimization is "declaring" the variables inside the function
Step5: 4X speedup with so little effort is pretty nice. What else can we do?
Cython has a nice "-a" flag (for annotation) that can provide clues about why your code is slow.
Step6: That's a lot of yellow still! How do we reduce this?
Exercise
Step7: Rubbish! How do we fix this?
Exercise
Step8: Exercise (if time)
Write a parallel matrix multiplication routine.
Part 4
Step9: Example
Step10: Using Cython in production code
Use setup.py to build your Cython files.
```python
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
import numpy as np
setup(
cmdclass = {'build_ext' | Python Code:
def f(x):
y = x**4 - 3*x
return y
def integrate_f(a, b, n):
dx = (b - a) / n
dx2 = dx / 2
s = f(a) * dx2
for i in range(1, n):
s += f(a + i * dx) * dx
s += f(b) * dx2
return s
Explanation: Intro to Cython
Why Cython
Outline:
Speed up Python code
Interact with NumPy arrays
Release GIL and get parallel performance
Wrap C/C++ code
Part 1: speed up your Python code
We want to integrate the function $f(x) = x^4 - 3x$.
End of explanation
%timeit integrate_f(-100, 100, int(1e5))
Explanation: Now, let's time this:
End of explanation
%load_ext cython
%%cython
def f2(x):
y = x**4 - 3*x
return y
def integrate_f2(a, b, n):
dx = (b - a) / n
dx2 = dx / 2
s = f2(a) * dx2
for i in range(1, n):
s += f2(a + i * dx) * dx
s += f2(b) * dx2
return s
%timeit integrate_f2(-100, 100, int(1e5))
Explanation: Not too bad, but this can add up. Let's see if Cython can do better:
End of explanation
%%cython
def f3(double x):
y = x**4 - 3*x
return y
def integrate_f3(double a, double b, int n):
dx = (b - a) / n
dx2 = dx / 2
s = f3(a) * dx2
for i in range(1, n):
s += f3(a + i * dx) * dx
s += f3(b) * dx2
return s
%timeit integrate_f3(-100, 100, int(1e5))
Explanation: That's a little bit faster, which is nice since all we did was to call Cython on the exact same code. But can we do better?
End of explanation
%%cython
def f4(double x):
y = x**4 - 3*x
return y
def integrate_f4(double a, double b, int n):
cdef:
double dx = (b - a) / n
double dx2 = dx / 2
double s = f4(a) * dx2
int i = 0
for i in range(1, n):
s += f4(a + i * dx) * dx
s += f4(b) * dx2
return s
%timeit integrate_f4(-100, 100, int(1e5))
Explanation: The final bit of "easy" Cython optimization is "declaring" the variables inside the function:
End of explanation
%%cython -a
def f4(double x):
y = x**4 - 3*x
return y
def integrate_f4(double a, double b, int n):
cdef:
double dx = (b - a) / n
double dx2 = dx / 2
double s = f4(a) * dx2
int i = 0
for i in range(1, n):
s += f4(a + i * dx) * dx
s += f4(b) * dx2
return s
Explanation: 4X speedup with so little effort is pretty nice. What else can we do?
Cython has a nice "-a" flag (for annotation) that can provide clues about why your code is slow.
End of explanation
import numpy as np
def mean3filter(arr):
arr_out = np.empty_like(arr)
for i in range(1, arr.shape[0] - 1):
arr_out[i] = np.sum(arr[i-1 : i+1]) / 3
arr_out[0] = (arr[0] + arr[1]) / 2
arr_out[-1] = (arr[-1] + arr[-2]) / 2
return arr_out
%timeit mean3filter(np.random.rand(1e5))
%%cython
import cython
import numpy as np
@cython.boundscheck(False)
def mean3filter2(double[::1] arr):
cdef double[::1] arr_out = np.empty_like(arr)
cdef int i
for i in range(1, arr.shape[0]-1):
arr_out[i] = np.sum(arr[i-1 : i+1]) / 3
arr_out[0] = (arr[0] + arr[1]) / 2
arr_out[-1] = (arr[-1] + arr[-2]) / 2
return np.asarray(arr_out)
%timeit mean3filter2(np.random.rand(1e5))
Explanation: That's a lot of yellow still! How do we reduce this?
Exercise: change the f4 declaration to C
Part 2: work with NumPy arrays
This is a very small subset of Python. Most scientific applications deal not with single values, but with arrays of data.
End of explanation
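One possible take on the exercise above — declaring f4 as a C-level cdef function so the integration loop never goes through Python call machinery (a sketch; the f5 name is just to avoid clobbering the earlier cell):

```python
%%cython
cdef double f5(double x):
    # C-level helper: no Python-object boxing on each call
    return x**4 - 3*x

def integrate_f5(double a, double b, int n):
    cdef:
        double dx = (b - a) / n
        double dx2 = dx / 2
        double s = f5(a) * dx2
        int i = 0
    for i in range(1, n):
        s += f5(a + i * dx) * dx
    s += f5(b) * dx2
    return s
```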
%%cython -a
import cython
from cython.parallel import prange
import numpy as np
@cython.boundscheck(False)
def mean3filter3(double[::1] arr, double[::1] out):
cdef int i, j, k = arr.shape[0]-1
with nogil:
for i in prange(1, k-1, schedule='static',
chunksize=(k-2) // 2, num_threads=2):
for j in range(i-1, i+1):
out[i] += arr[j]
out[i] /= 3
out[0] = (arr[0] + arr[1]) / 2
out[-1] = (arr[-1] + arr[-2]) / 2
return np.asarray(out)
rin = np.random.rand(1e7)
rout = np.empty_like(rin)
%timeit mean3filter2(rin, rout)
%timeit mean3filter3(rin, rout)
Explanation: Rubbish! How do we fix this?
Exercise: use %%cython -a to speed up the code
Part 3: write parallel code
Warning:: Dragons afoot.
End of explanation
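Before going parallel, here is one way to attack the -a exercise above — replace the per-element np.sum call (a Python-level call on a tiny slice) with a plain typed loop (a sketch; note the original slice arr[i-1 : i+1] only covers two points, so a true three-point mean also needs arr[i+1]):

```python
%%cython -a
import cython
import numpy as np

@cython.boundscheck(False)
def mean3filter2b(double[::1] arr):
    cdef double[::1] arr_out = np.empty_like(arr)
    cdef int i
    for i in range(1, arr.shape[0] - 1):
        # accumulate in C instead of calling np.sum on a tiny slice
        arr_out[i] = (arr[i-1] + arr[i] + arr[i+1]) / 3
    arr_out[0] = (arr[0] + arr[1]) / 2
    arr_out[-1] = (arr[-1] + arr[-2]) / 2
    return np.asarray(arr_out)
```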
%%cython -a
# distutils: language=c++
import cython
from libcpp.vector cimport vector
@cython.boundscheck(False)
def build_list_with_vector(double[::1] in_arr):
cdef vector[double] out
cdef int i
for i in range(in_arr.shape[0]):
out.push_back(in_arr[i])
return out
build_list_with_vector(np.random.rand(10))
Explanation: Exercise (if time)
Write a parallel matrix multiplication routine.
Part 4: interact with C/C++ code
End of explanation
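A minimal sketch for the suggested exercise — a prange-based matrix multiply (illustrative only: no blocking, and for real thread-level speedup the cell generally needs OpenMP compile/link flags, e.g. --compile-args=-fopenmp --link-args=-fopenmp):

```python
%%cython
import cython
from cython.parallel import prange
import numpy as np

@cython.boundscheck(False)
@cython.wraparound(False)
def matmul_prange(double[:, ::1] A, double[:, ::1] B):
    cdef int n = A.shape[0], k = A.shape[1], m = B.shape[1]
    cdef double[:, ::1] C = np.zeros((n, m))
    cdef int i, j, p
    with nogil:
        for i in prange(n, schedule='static'):
            for j in range(m):
                for p in range(k):
                    # each row of C is owned by one thread, so this is race-free
                    C[i, j] += A[i, p] * B[p, j]
    return np.asarray(C)
```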
%%cython -a
#distutils: language=c++
from cython.operator cimport dereference as deref, preincrement as inc
from libcpp.vector cimport vector
from libcpp.map cimport map as cppmap
cdef class Graph:
cdef cppmap[int, vector[int]] _adj
cpdef int has_node(self, int node):
return self._adj.find(node) != self._adj.end()
cdef void add_node(self, int new_node):
cdef vector[int] out
if not self.has_node(new_node):
self._adj[new_node] = out
def add_edge(self, int u, int v):
self.add_node(u)
self.add_node(v)
self._adj[u].push_back(v)
self._adj[v].push_back(u)
def __getitem__(self, int u):
return self._adj[u]
cdef vector[int] _degrees(self):
cdef vector[int] deg
cdef int first = 0
cdef vector[int] edges
cdef cppmap[int, vector[int]].iterator it = self._adj.begin()
while it != self._adj.end():
deg.push_back(deref(it).second.size())
it = inc(it)
return deg
def degrees(self):
return self._degrees()
g0 = Graph()
g0.add_edge(1, 5)
g0.add_edge(1, 6)
g0[1]
g0.has_node(1)
g0.degrees()
import networkx as nx
g = nx.barabasi_albert_graph(100000, 6)
with open('graph.txt', 'w') as fout:
for u, v in g.edges_iter():
fout.write('%i,%i\n' % (u, v))
%timeit list(g.degree())
myg = Graph()
def line2edges(line):
u, v = map(int, line.rstrip().split(','))
return u, v
edges = map(line2edges, open('graph.txt'))
for u, v in edges:
myg.add_edge(u, v)
%timeit mydeg = myg.degrees()
Explanation: Example: C++ int graph
End of explanation
from mean3 import mean3filter
mean3filter(np.random.rand(10))
Explanation: Using Cython in production code
Use setup.py to build your Cython files.
```python
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
import numpy as np
setup(
cmdclass = {'build_ext': build_ext},
ext_modules = [
Extension("prange_demo", ["prange_demo.pyx"],
include_dirs=[np.get_include()],
extra_compile_args=['-fopenmp'],
extra_link_args=['-fopenmp', '-lgomp']),
]
)
```
Exercise
Write a Cython module with a setup.py to run the mean-3 filter, then import from the notebook.
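A minimal sketch of what that pair of files could look like (the mean3 module and mean3filter names simply follow the import used above; the filter body mirrors mean3filter2 from earlier):

```python
# mean3.pyx
import numpy as np
cimport cython

@cython.boundscheck(False)
def mean3filter(double[::1] arr):
    cdef double[::1] arr_out = np.empty_like(arr)
    cdef int i
    for i in range(1, arr.shape[0] - 1):
        arr_out[i] = (arr[i-1] + arr[i] + arr[i+1]) / 3
    arr_out[0] = (arr[0] + arr[1]) / 2
    arr_out[-1] = (arr[-1] + arr[-2]) / 2
    return np.asarray(arr_out)

# setup.py -- build in place with: python setup.py build_ext --inplace
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
import numpy as np

setup(
    cmdclass={'build_ext': build_ext},
    ext_modules=[Extension("mean3", ["mean3.pyx"],
                           include_dirs=[np.get_include()])],
)
```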
End of explanation |
1,751 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PyLadies and local Python User Groups
Last updated
Step1: Part 1
Step2: The Meetup API limits requests, however their documentation isn't exactly helpful. Using their headers, I saw that I was limited to 30 requests per 10 seconds. Therefore, I'll sleep 1 second in between each request to be safe.
Step5: Part 2
Step6: Part 3
Step7: Sanity check (I have a tree command installed via brew install tree)
Step8: Part 4 | Python Code:
from __future__ import print_function
from collections import defaultdict
import json
import os
import time
import requests
Explanation: PyLadies and local Python User Groups
Last updated: August 4, 2015
I am not a statistician by trade; far from it. I did take a few stats & econometrics courses in college, but I won't even consider myself an armchair statistician here.
I am not making any suggestions about causation, just merely exploring what the Meetup API has to offer.
This also isn't how I code in general; but I love ~~IPython~~ Jupyter Notebooks, and I wanted an excuse to use it with Pandas (first time I'm using Pandas too!).
This data was used in my EuroPython 2015 talk, Diversity: We're not done yet. (Slides, video soon)
End of explanation
def save_output(data, output_file):
with open(output_file, "w") as f:
json.dump(data, f)
# Set some global variables
MEETUP_API_KEY = "yeah right"
MEETUP_GROUPS_URL = "https://api.meetup.com/2/groups"
PARAMS = {
"signed": True,
"key": MEETUP_API_KEY,
"topic": "python",
"category_id": 34, # 34 = Tech, there are only ~35 categories
"order": "members",
"page": 200, # max allowed
"omit": "group_photo" # no need for photos in response
}
TOTAL_PAGES = 6 # looked on the API console, 1117 meetup groups as of 7/17, 200 groups per page = 6 pages
Explanation: Part 1: Grabbing all Python-centric meetup groups
NOTE
This repository includes all the data files that I used (latest update: Aug 4, 2015). You may skip this part if you don't want to call the Meetup API to get new/fresh data.
TIP
Take a look at Meetup's API Console; I used it when forming API requests as well as getting an idea of pagination for some requests.
What we're doing
We'll call a few different endpoints from the Meetup API and save the data locally in a json file for us to use later.
To get your own Meetup API key, you'll need a regular Meetup user account. Once you're logged in, you can navigate to the API Key portion of the API docs to reveal your API key.
API Endpoint docs:
Groups
End of explanation
def get_meetup_groups():
meetup_groups = []
for i in xrange(TOTAL_PAGES):
PARAMS["offset"] = i
print("GROUPS: Getting page {0} of {1}".format(i+1, TOTAL_PAGES+1))
response = requests.get(MEETUP_GROUPS_URL, params=PARAMS)
if response.ok:
meetup_groups.extend(response.json().get("results"))
time.sleep(1) # don't bombard the Meetup API
print("GROUPS: Collected {0} Meetup groups".format(len(meetup_groups)))
return meetup_groups
meetup_groups = get_meetup_groups()
# Create a directory to save everything
data_dir = "meetup_data"
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# Save meetup groups data
output = os.path.join(data_dir, "meetup_groups.json")
save_output(meetup_groups, output)
# inspect one for funsies
meetup_groups[0]
Explanation: The Meetup API limits requests, however their documentation isn't exactly helpful. Using their headers, I saw that I was limited to 30 requests per 10 seconds. Therefore, I'll sleep 1 second in between each request to be safe.
End of explanation
search = ["python", "pydata", "pyramid", "py", "django", "flask", "plone"]
omit = ["happy"] # I realize that a group could be called "happy python user group" or something...
def is_pug(group):
    """Return `True` if the group name contains a `search` keyword and no `omit` keyword."""
group_name = group.get("name").lower()
for o in omit:
if o in group_name:
return False
for s in search:
if s in group_name:
return True
return False
def sort_groups(groups):
    """Sort groups by 'pyladies' and 'python user groups'."""
pyladies = []
user_groups = []
for g in groups:
if "pyladies" in g.get("name").lower():
pyladies.append(g)
else:
if is_pug(g):
user_groups.append(g)
return user_groups, pyladies
user_groups, pyladies = sort_groups(meetup_groups)
# Let's spot check the UGs to see if what we're left with makes sense
# Note: I took a peek at a few (not shown here) and for the most part,
# all seems okay
for g in user_groups:
print(g.get("name"))
Explanation: Part 2: Narrow down & sort the meetup groups
We got a lot returned from searching the /groups endpoint with just the "python" topic. So we should narrow it down a bit, as well as sort out PyLadies groups.
My process is to just narrow down by actual name of the group (e.g. python, py, django, etc).
Spot checking the results will definitely be needed, but will come a bit later.
End of explanation
from math import sin, cos, asin, degrees, radians, atan2, sqrt
RADIUS = 3958.75 # Earth's radius in miles
def is_within_50_miles(pyladies_coords, python_coords):
pyladies_lat, pyladies_lon = pyladies_coords[0], pyladies_coords[1]
python_lat, python_lon = python_coords[0], python_coords[1]
d_lat = radians(pyladies_lat - python_lat)
d_lon = radians(pyladies_lon - python_lon)
sin_d_lat = sin(d_lat / 2)
sin_d_lon = sin(d_lon / 2)
a = (sin_d_lat ** 2 + sin_d_lon ** 2 ) * cos(radians(pyladies_lat)) * cos(radians(python_lat))
c = 2 * atan2(sqrt(a), sqrt(1-a))
dist = RADIUS * c
return dist <= 50
def get_coords(group):
return group.get("lat"), group.get("lon")
def get_nearby_python_groups(pyl, collect):
pyl_coords = get_coords(pyl)
nearby = []
for group in user_groups:
pyt_coords = get_coords(group)
if is_within_50_miles(pyl_coords, pyt_coords):
nearby.append(group)
collect[pyl.get("name")] = nearby
return collect
collect = {}
for pylady in pyladies:
collect = get_nearby_python_groups(pylady, collect)
for item in collect.items():
print(item[0], len(item[1]))
# Save data into pyladies-specific directories
def pylady_dir(pyl):
_dir = pyl.split()
_dir = "".join(_dir)
outdir = os.path.join(data_dir, _dir)
if not os.path.exists(outdir):
os.makedirs(outdir)
return _dir
def save_pyladies():
for pylady in pyladies:
name = pylady.get("name")
subdir = pylady_dir(name)
outputdir = os.path.join(data_dir, subdir)
output = os.path.join(outputdir, subdir + ".json")
save_output(pylady, output)
groups = collect.get(name)
for g in groups:
group_link = g.get("link")
group_name = group_link.split(".com/")[1][:-1]
group_name = "".join(group_name)
outfile = group_name + ".json"
ug_output = os.path.join(outputdir, outfile)
save_output(g, ug_output)
save_pyladies()
Explanation: Part 3: Find all Python meetup groups with a PyLadies within 50 miles
I've adapted this from a Java implementation to find if a point is within a radius of another point. Geo-math is hard.
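As a rough sanity check of the distance helper (coordinates below are approximate and only added here for illustration):

```python
# Approximate (lat, lon) pairs -- illustrative values, not from the Meetup data.
sf = (37.77, -122.42)       # San Francisco
oakland = (37.80, -122.27)  # roughly 10 miles from SF
la = (34.05, -118.24)       # roughly 350 miles from SF

print(is_within_50_miles(sf, oakland))  # expect True
print(is_within_50_miles(sf, la))       # expect False
```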
End of explanation
!tree
Explanation: Sanity check (I have a tree command installed via brew install tree):
End of explanation
MEETUP_MEMBER_URL = "https://api.meetup.com/2/members"
PARAMS = {
"signed": True,
"key": MEETUP_API_KEY,
}
def get_members(group):
PARAMS["group_id"] = group.get("id")
members_count = group.get("members")
print(u"MEMBERS: Getting {0} members for group {1}".format(members_count, group.get("name")))
pages = members_count / 200
remainder = members_count % 200
if remainder > 0:
pages += 1
members = []
for i in xrange(pages):
print("MEMBERS: Iteration {0} out of {1}".format(i+1, pages+1))
PARAMS["offset"] = i
resp = requests.get(MEETUP_MEMBER_URL, PARAMS)
if resp.ok:
results = resp.json().get("results")
members.extend(results)
time.sleep(1)
print("MEMBERS: Got {0} members".format(len(members)))
return members
def get_members_collection(pylady, groups):
pylady_members = get_members(pylady)
pug_members = defaultdict(list)
for g in groups:
pg_mbrs = get_members(g)
pug_members[g.get("name")].append(pg_mbrs)
return pylady_members, pug_members
# NOTE: this takes *FOREVER*.
start = time.time()
for i, item in enumerate(collect.items()):
print("COLLECTING: {0} out of {1}".format(i+1, len(collect)+1))
pylady = [p for p in pyladies if p.get("name") == item[0]][0]
pylady_members, pug_members = get_members_collection(pylady, item[1])
print("COLLECTING: Saving all the data!")
pylady_name = pylady.get("name")
outdir = pylady_dir(pylady_name)
outdir = os.path.join(data_dir, outdir)
outfile = os.path.join(outdir, "pyladies_members.json")
save_output(pylady_members, outfile)
outfile = os.path.join(outdir, "pug_members.json")
save_output(pug_members, outfile)
end = time.time()
delta_s = end - start
delta_m = delta_s / 60
print("**DONE**")
print("Completed in {:.0f} minutes".format(delta_m))
Explanation: Part 4: Membership join history
Note
If getting members from an endpoint returns 0, despite the member count in the group data being a positive number, then the group is set to private & accessible only to members (you can join that group to be able to access that data, but I did not; I already have too much email).
Note
There's a "pseudo" race condition where the group data member # may be one number, but you actually receive a different number (+/- ~3), it's (probably) due to people leaving or joining the group between the group API call and the members API call.
API endpoint docs:
Members
End of explanation |
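For reference, here is the paging arithmetic the member loop above relies on (it pages in blocks of 200 per request); the member count below is made up purely for illustration:

```python
# Hypothetical example: a group reporting 450 members needs 3 offset pages.
members_count = 450
pages = members_count / 200      # integer division under Python 2 (this notebook uses xrange) -> 2
remainder = members_count % 200  # 50 members left over
if remainder > 0:
    pages += 1                   # -> 3 requests, with offset = 0, 1, 2
print(pages)
```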
1,752 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GPyTorch Regression Tutorial
Introduction
In this notebook, we demonstrate many of the design features of GPyTorch using the simplest example, training an RBF kernel Gaussian process on a simple function. We'll be modeling the function
\begin{align}
y &= \sin(2\pi x) + \epsilon \
\epsilon &\sim \mathcal{N}(0, 0.04)
\end{align}
with 100 training examples, and testing on 51 test examples.
Note
Step1: Set up training data
In the next cell, we set up the training data for this example. We'll be using 100 regularly spaced points on [0,1] which we evaluate the function on and add Gaussian noise to get the training labels.
Step2: Setting up the model
The next cell demonstrates the most critical features of a user-defined Gaussian process model in GPyTorch. Building a GP model in GPyTorch is different in a number of ways.
First in contrast to many existing GP packages, we do not provide full GP models for the user. Rather, we provide the tools necessary to quickly construct one. This is because we believe, analogous to building a neural network in standard PyTorch, it is important to have the flexibility to include whatever components are necessary. As can be seen in more complicated examples, this allows the user great flexibility in designing custom models.
For most GP regression models, you will need to construct the following GPyTorch objects
Step3: Model modes
Like most PyTorch modules, the ExactGP has a .train() and .eval() mode.
- .train() mode is for optimizing model hyperparameters.
- .eval() mode is for computing predictions through the model posterior.
Training the model
In the next cell, we handle using Type-II MLE to train the hyperparameters of the Gaussian process.
The most obvious difference here compared to many other GP implementations is that, as in standard PyTorch, the core training loop is written by the user. In GPyTorch, we make use of the standard PyTorch optimizers as from torch.optim, and all trainable parameters of the model should be of type torch.nn.Parameter. Because GP models directly extend torch.nn.Module, calls to methods like model.parameters() or model.named_parameters() function as you might expect coming from PyTorch.
In most cases, the boilerplate code below will work well. It has the same basic components as the standard PyTorch training loop
Step4: Make predictions with the model
In the next cell, we make predictions with the model. To do this, we simply put the model and likelihood in eval mode, and call both modules on the test data.
Just as a user defined GP model returns a MultivariateNormal containing the prior mean and covariance from forward, a trained GP model in eval mode returns a MultivariateNormal containing the posterior mean and covariance. Thus, getting the predictive mean and variance, and then sampling functions from the GP at the given test points could be accomplished with calls like
Step5: Plot the model fit
In the next cell, we plot the mean and confidence region of the Gaussian process model. The confidence_region method is a helper method that returns 2 standard deviations above and below the mean. | Python Code:
import math
import torch
import gpytorch
from matplotlib import pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
Explanation: GPyTorch Regression Tutorial
Introduction
In this notebook, we demonstrate many of the design features of GPyTorch using the simplest example, training an RBF kernel Gaussian process on a simple function. We'll be modeling the function
\begin{align}
y &= \sin(2\pi x) + \epsilon \
\epsilon &\sim \mathcal{N}(0, 0.04)
\end{align}
with 100 training examples, and testing on 51 test examples.
Note: this notebook is not necessarily intended to teach the mathematical background of Gaussian processes, but rather how to train a simple one and make predictions in GPyTorch. For a mathematical treatment, Chapter 2 of Gaussian Processes for Machine Learning provides a very thorough introduction to GP regression (this entire text is highly recommended): http://www.gaussianprocess.org/gpml/chapters/RW2.pdf
End of explanation
# Training data is 100 points in [0,1] inclusive regularly spaced
train_x = torch.linspace(0, 1, 100)
# True function is sin(2*pi*x) with Gaussian noise
train_y = torch.sin(train_x * (2 * math.pi)) + torch.randn(train_x.size()) * math.sqrt(0.04)
Explanation: Set up training data
In the next cell, we set up the training data for this example. We'll be using 100 regularly spaced points on [0,1] which we evaluate the function on and add Gaussian noise to get the training labels.
End of explanation
# We will use the simplest form of GP model, exact inference
class ExactGPModel(gpytorch.models.ExactGP):
def __init__(self, train_x, train_y, likelihood):
super(ExactGPModel, self).__init__(train_x, train_y, likelihood)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
# initialize likelihood and model
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = ExactGPModel(train_x, train_y, likelihood)
Explanation: Setting up the model
The next cell demonstrates the most critical features of a user-defined Gaussian process model in GPyTorch. Building a GP model in GPyTorch is different in a number of ways.
First in contrast to many existing GP packages, we do not provide full GP models for the user. Rather, we provide the tools necessary to quickly construct one. This is because we believe, analogous to building a neural network in standard PyTorch, it is important to have the flexibility to include whatever components are necessary. As can be seen in more complicated examples, this allows the user great flexibility in designing custom models.
For most GP regression models, you will need to construct the following GPyTorch objects:
A GP Model (gpytorch.models.ExactGP) - This handles most of the inference.
A Likelihood (gpytorch.likelihoods.GaussianLikelihood) - This is the most common likelihood used for GP regression.
A Mean - This defines the prior mean of the GP. (If you don't know which mean to use, a gpytorch.means.ConstantMean() is a good place to start.)
A Kernel - This defines the prior covariance of the GP. (If you don't know which kernel to use, a gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel()) is a good place to start.)
A MultivariateNormal Distribution (gpytorch.distributions.MultivariateNormal) - This is the object used to represent multivariate normal distributions.
The GP Model
The components of a user built (Exact, i.e. non-variational) GP model in GPyTorch are, broadly speaking:
An __init__ method that takes the training data and a likelihood, and constructs whatever objects are necessary for the model's forward method. This will most commonly include things like a mean module and a kernel module.
A forward method that takes in some $n \times d$ data x and returns a MultivariateNormal with the prior mean and covariance evaluated at x. In other words, we return the vector $\mu(x)$ and the $n \times n$ matrix $K_{xx}$ representing the prior mean and covariance matrix of the GP.
This specification leaves a large amount of flexibility when defining a model. For example, to compose two kernels via addition, you can either add the kernel modules directly:
python
self.covar_module = ScaleKernel(RBFKernel() + WhiteNoiseKernel())
Or you can add the outputs of the kernel in the forward method:
python
covar_x = self.rbf_kernel_module(x) + self.white_noise_module(x)
End of explanation
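As a small illustration of the kernel-composition point above, here is a hypothetical variant of the model defined above (my sketch, not part of the tutorial) that adds an RBF and a linear kernel inside the ScaleKernel:

```python
# Sketch only: same structure as ExactGPModel, but with a composed kernel.
class CompositeGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super(CompositeGPModel, self).__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        # RBF + linear, wrapped in a ScaleKernel, via direct kernel addition
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.RBFKernel() + gpytorch.kernels.LinearKernel()
        )

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
```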
# this is for running the notebook in our testing framework
import os
smoke_test = ('CI' in os.environ)
training_iter = 2 if smoke_test else 50
# Find optimal model hyperparameters
model.train()
likelihood.train()
# Use the adam optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.1) # Includes GaussianLikelihood parameters
# "Loss" for GPs - the marginal log likelihood
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
for i in range(training_iter):
# Zero gradients from previous iteration
optimizer.zero_grad()
# Output from model
output = model(train_x)
# Calc loss and backprop gradients
loss = -mll(output, train_y)
loss.backward()
print('Iter %d/%d - Loss: %.3f lengthscale: %.3f noise: %.3f' % (
i + 1, training_iter, loss.item(),
model.covar_module.base_kernel.lengthscale.item(),
model.likelihood.noise.item()
))
optimizer.step()
Explanation: Model modes
Like most PyTorch modules, the ExactGP has a .train() and .eval() mode.
- .train() mode is for optimizing model hyperparameters.
- .eval() mode is for computing predictions through the model posterior.
Training the model
In the next cell, we handle using Type-II MLE to train the hyperparameters of the Gaussian process.
The most obvious difference here compared to many other GP implementations is that, as in standard PyTorch, the core training loop is written by the user. In GPyTorch, we make use of the standard PyTorch optimizers as from torch.optim, and all trainable parameters of the model should be of type torch.nn.Parameter. Because GP models directly extend torch.nn.Module, calls to methods like model.parameters() or model.named_parameters() function as you might expect coming from PyTorch.
In most cases, the boilerplate code below will work well. It has the same basic components as the standard PyTorch training loop:
Zero all parameter gradients
Call the model and compute the loss
Call backward on the loss to fill in gradients
Take a step on the optimizer
However, defining custom training loops allows for greater flexibility. For example, it is easy to save the parameters at each step of training, or use different learning rates for different parameters (which may be useful in deep kernel learning for example).
End of explanation
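To make the "greater flexibility" point concrete, here is a hedged sketch (my own, not from the tutorial) of the same loop with a separate learning rate for the likelihood noise and per-step hyperparameter tracking; it assumes the model, likelihood, mll and training data defined above:

```python
# Sketch: two learning rates via optimizer parameter groups + hyperparameter history.
param_history = []
optimizer = torch.optim.Adam([
    {"params": model.mean_module.parameters()},
    {"params": model.covar_module.parameters()},
    {"params": model.likelihood.parameters(), "lr": 0.01},  # smaller step for the noise
], lr=0.1)

for i in range(training_iter):
    optimizer.zero_grad()
    loss = -mll(model(train_x), train_y)
    loss.backward()
    # record the hyperparameters at every step
    param_history.append({
        "lengthscale": model.covar_module.base_kernel.lengthscale.item(),
        "noise": model.likelihood.noise.item(),
    })
    optimizer.step()
```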
# Get into evaluation (predictive posterior) mode
model.eval()
likelihood.eval()
# Test points are regularly spaced along [0,1]
# Make predictions by feeding model through likelihood
with torch.no_grad(), gpytorch.settings.fast_pred_var():
test_x = torch.linspace(0, 1, 51)
observed_pred = likelihood(model(test_x))
Explanation: Make predictions with the model
In the next cell, we make predictions with the model. To do this, we simply put the model and likelihood in eval mode, and call both modules on the test data.
Just as a user defined GP model returns a MultivariateNormal containing the prior mean and covariance from forward, a trained GP model in eval mode returns a MultivariateNormal containing the posterior mean and covariance. Thus, getting the predictive mean and variance, and then sampling functions from the GP at the given test points could be accomplished with calls like:
```python
f_preds = model(test_x)
y_preds = likelihood(model(test_x))
f_mean = f_preds.mean
f_var = f_preds.variance
f_covar = f_preds.covariance_matrix
f_samples = f_preds.sample(sample_shape=torch.Size((1000,)))
```
The gpytorch.settings.fast_pred_var context is not needed, but here we are giving a preview of using one of our cool features, getting faster predictive distributions using LOVE.
End of explanation
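As a small follow-up (my addition), the snippet below draws a handful of posterior sample functions at the test points; the expected shape assumes the 51 test points defined above:

```python
with torch.no_grad(), gpytorch.settings.fast_pred_var():
    f_preds = model(test_x)
    f_samples = f_preds.sample(sample_shape=torch.Size((5,)))
print(f_samples.shape)  # expected: torch.Size([5, 51])
```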
with torch.no_grad():
# Initialize plot
f, ax = plt.subplots(1, 1, figsize=(4, 3))
# Get upper and lower confidence bounds
lower, upper = observed_pred.confidence_region()
# Plot training data as black stars
ax.plot(train_x.numpy(), train_y.numpy(), 'k*')
# Plot predictive means as blue line
ax.plot(test_x.numpy(), observed_pred.mean.numpy(), 'b')
# Shade between the lower and upper confidence bounds
ax.fill_between(test_x.numpy(), lower.numpy(), upper.numpy(), alpha=0.5)
ax.set_ylim([-3, 3])
ax.legend(['Observed Data', 'Mean', 'Confidence'])
Explanation: Plot the model fit
In the next cell, we plot the mean and confidence region of the Gaussian process model. The confidence_region method is a helper method that returns 2 standard deviations above and below the mean.
End of explanation |
1,753 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear algebra
Step1: Matrix and vector products
Q1. Predict the results of the following code.
Step2: Q2. Predict the results of the following code.
Step3: Q3. Predict the results of the following code.
Step4: Q4. Predict the results of the following code.
Step5: Decompositions
Q5. Get the lower-triangular L in the Cholesky decomposition of x and verify it.
Step6: Q6. Compute the qr factorization of x and verify it.
Step7: Q7. Factor x by Singular Value Decomposition and verify it.
Step8: Matrix eigenvalues
Q8. Compute the eigenvalues and right eigenvectors of x. (Name them eigenvals and eigenvecs, respectively)
Step9: Q9. Predict the results of the following code.
Step10: Norms and other numbers
Q10. Calculate the Frobenius norm and the condition number of x.
Step11: Q11. Calculate the determinant of x.
Step12: Q12. Calculate the rank of x.
Step13: Q13. Compute the sign and natural logarithm of the determinant of x.
Step14: Q14. Return the sum along the diagonal of x.
Step15: Solving equations and inverting matrices
Q15. Compute the inverse of x. | Python Code:
import numpy as np
np.__version__
Explanation: Linear algebra
End of explanation
x = [1,2]
y = [[4, 1], [2, 2]]
print np.dot(x, y)
print np.dot(y, x)
print np.matmul(x, y)
print np.inner(x, y)
print np.inner(y, x)
Explanation: Matrix and vector products
Q1. Predict the results of the following code.
End of explanation
x = [[1, 0], [0, 1]]
y = [[4, 1], [2, 2], [1, 1]]
print np.dot(y, x)
print np.matmul(y, x)
Explanation: Q2. Predict the results of the following code.
End of explanation
x = np.array([[1, 4], [5, 6]])
y = np.array([[4, 1], [2, 2]])
print np.vdot(x, y)
print np.vdot(y, x)
print np.dot(x.flatten(), y.flatten())
print np.inner(x.flatten(), y.flatten())
print (x*y).sum()
Explanation: Q3. Predict the results of the following code.
End of explanation
x = np.array(['a', 'b'], dtype=object)
y = np.array([1, 2])
print np.inner(x, y)
print np.inner(y, x)
print np.outer(x, y)
print np.outer(y, x)
Explanation: Q4. Predict the results of the following code.
End of explanation
x = np.array([[4, 12, -16], [12, 37, -43], [-16, -43, 98]], dtype=np.int32)
L = np.linalg.cholesky(x)
print L
assert np.array_equal(np.dot(L, L.T.conjugate()), x)
Explanation: Decompositions
Q5. Get the lower-triangular L in the Cholesky decomposition of x and verify it.
End of explanation
x = np.array([[12, -51, 4], [6, 167, -68], [-4, 24, -41]], dtype=np.float32)
q, r = np.linalg.qr(x)
print "q=\n", q, "\nr=\n", r
assert np.allclose(np.dot(q, r), x)
Explanation: Q6. Compute the qr factorization of x and verify it.
End of explanation
x = np.array([[1, 0, 0, 0, 2], [0, 0, 3, 0, 0], [0, 0, 0, 0, 0], [0, 2, 0, 0, 0]], dtype=np.float32)
U, s, V = np.linalg.svd(x, full_matrices=False)
print "U=\n", U, "\ns=\n", s, "\nV=\n", v
assert np.allclose(np.dot(U, np.dot(np.diag(s), V)), x)
Explanation: Q7. Factor x by Singular Value Decomposition and verify it.
End of explanation
x = np.diag((1, 2, 3))
eigenvals = np.linalg.eig(x)[0]
eigenvals_ = np.linalg.eigvals(x)
assert np.array_equal(eigenvals, eigenvals_)
print "eigenvalues are\n", eigenvals
eigenvecs = np.linalg.eig(x)[1]
print "eigenvectors are\n", eigenvecs
Explanation: Matrix eigenvalues
Q8. Compute the eigenvalues and right eigenvectors of x. (Name them eigenvals and eigenvecs, respectively)
End of explanation
print np.array_equal(np.dot(x, eigenvecs), eigenvals * eigenvecs)
Explanation: Q9. Predict the results of the following code.
End of explanation
x = np.arange(1, 10).reshape((3, 3))
print np.linalg.norm(x, 'fro')
print np.linalg.cond(x, 'fro')
Explanation: Norms and other numbers
Q10. Calculate the Frobenius norm and the condition number of x.
End of explanation
x = np.arange(1, 5).reshape((2, 2))
out1 = np.linalg.det(x)
out2 = x[0, 0] * x[1, 1] - x[0, 1] * x[1, 0]
assert np.allclose(out1, out2)
print out1
Explanation: Q11. Calculate the determinant of x.
End of explanation
x = np.eye(4)
out1 = np.linalg.matrix_rank(x)
out2 = np.linalg.svd(x)[1].size
assert out1 == out2
print out1
Explanation: Q12. Calculate the rank of x.
End of explanation
x = np.arange(1, 5).reshape((2, 2))
sign, logdet = np.linalg.slogdet(x)
det = np.linalg.det(x)
assert sign == np.sign(det)
assert logdet == np.log(np.abs(det))
print sign, logdet
Explanation: Q13. Compute the sign and natural logarithm of the determinant of x.
End of explanation
x = np.eye(4)
out1 = np.trace(x)
out2 = x.diagonal().sum()
assert out1 == out2
print out1
Explanation: Q14. Return the sum along the diagonal of x.
End of explanation
x = np.array([[1., 2.], [3., 4.]])
out1 = np.linalg.inv(x)
assert np.allclose(np.dot(x, out1), np.eye(2))
print out1
Explanation: Solving equations and inverting matrices
Q15. Compute the inverse of x.
End of explanation |
1,754 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Inverse Kinematics Problem
In this example, we are going to use the pyswarms library to solve a 6-DOF (Degrees of Freedom) Inverse Kinematics (IK) problem by treating it as an optimization problem. We will use the pyswarms library to find an optimal solution from a set of candidate solutions.
Inverse Kinematics is one of the most challenging problems in robotics. The problem involves finding an optimal pose for a manipulator given the position of the end-tip effector as opposed to forward kinematics, where the end-tip position is sought given the pose or joint configuration. Normally, this position is expressed as a point in a coordinate system (e.g., in a Cartesian system with $x$, $y$ and $z$ coordinates). However, the pose of the manipulator can also be expressed as the collection of joint variables that describe the angle of bending or twist (in revolute joints) or length of extension (in prismatic joints).
IK is particularly difficult because an abundance of solutions can arise. Intuitively, one can imagine that a robotic arm can have multiple ways of reaching through a certain point. It's the same when you touch the table and move your arm without moving the point you're touching the table at. Moreover, the calculation of these positions can be very difficult. Simple solutions can be found for 3-DOF manipulators but trying to solve the problem for 6 or even more DOF can lead to challenging algebraic problems.
Step1: IK as an Optimization Problem
In this implementation, we are going to use a 6-DOF Stanford Manipulator with 5 revolute joints and 1 prismatic joint. Furthermore, the constraints of the joints are going to be as follows
Step2: We are going to use the distance function to compute the cost, the further away the more costly the position is.
The optimization algorithm needs some parameters (the swarm size, $c_1$, $c_2$ and $\epsilon$). For the options ($c_1$,$c_2$ and $w$) we have to create a dictionary and for the constraints a tuple with a list of the respective minimal values and a list of the respective maximal values. The rest can be handled with variables. Additionally, we define the joint lengths to be 3 units long
Step3: In order to obtain the current position, we need to calculate the matrices of rotation and translation for every joint. Here we use the Denavit-Hartenberg parameters for that. So we define a function that calculates these. The function uses the rotation angle and the extension $d$ of a prismatic joint as input
Step4: Now we can calculate the transformation matrix to obtain the end tip position. For this we create another function that takes our vector $\mathbf{X}$ with the joint variables as input
Step5: The last thing we need to prepare in order to run the algorithm is the actual function that we want to optimize. We just need to calculate the distance between the position of each swarm particle and the target point
Step6: Running the algorithm
Braced with these preparations we can finally start using the algorithm
Step7: Now let's see if the algorithm really worked and test the output for joint_vars | Python Code:
# Import modules
import numpy as np
# Import PySwarms
import pyswarms as ps
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
Explanation: Inverse Kinematics Problem
In this example, we are going to use the pyswarms library to solve a 6-DOF (Degrees of Freedom) Inverse Kinematics (IK) problem by treating it as an optimization problem. We will use the pyswarms library to find an optimal solution from a set of candidate solutions.
Inverse Kinematics is one of the most challenging problems in robotics. The problem involves finding an optimal pose for a manipulator given the position of the end-tip effector as opposed to forward kinematics, where the end-tip position is sought given the pose or joint configuration. Normally, this position is expressed as a point in a coordinate system (e.g., in a Cartesian system with $x$, $y$ and $z$ coordinates). However, the pose of the manipulator can also be expressed as the collection of joint variables that describe the angle of bending or twist (in revolute joints) or length of extension (in prismatic joints).
IK is particularly difficult because an abundance of solutions can arise. Intuitively, one can imagine that a robotic arm can have multiple ways of reaching through a certain point. It's the same when you touch the table and move your arm without moving the point you're touching the table at. Moreover, the calculation of these positions can be very difficult. Simple solutions can be found for 3-DOF manipulators but trying to solve the problem for 6 or even more DOF can lead to challenging algebraic problems.
End of explanation
def distance(query, target):
x_dist = (target[0] - query[0])**2
y_dist = (target[1] - query[1])**2
z_dist = (target[2] - query[2])**2
dist = np.sqrt(x_dist + y_dist + z_dist)
return dist
Explanation: IK as an Optimization Problem
In this implementation, we are going to use a 6-DOF Stanford Manipulator with 5 revolute joints and 1 prismatic joint. Furthermore, the constraints of the joints are going to be as follows:
| Parameters | Lower Boundary | Upper Boundary |
|:---:|:----------------:|:----------------:|
|$\theta_1$ | $-\pi$ | $\pi$ |
|$\theta_2$ |$-\frac{\pi}{2}$| $\frac{\pi}{2}$|
|$d_3$ | $1$ | $3$ |
|$\theta_4$ | $-\pi$ | $\pi$ |
|$\theta_5$ |$-\frac{5\pi}{36}$|$\frac{5\pi}{36}$|
|$\theta_6$ | $-\pi$ | $\pi$ |
Now, if we are given an end-tip position (in this case a $xyz$ coordinate) we need to find the optimal parameters with the constraints imposed in Table 1. These conditions are then sufficient in order to treat this problem as an optimization problem. We define our parameter vector $\mathbf{X}$ as follows:
$$\mathbf{X}\,:=\, [ \, \theta_1 \quad \theta_2 \quad d_3 \quad \theta_4 \quad \theta_5 \quad \theta_6 \, ]$$
And for our end-tip position we define the target vector $\mathbf{T}$ as:
$$\mathbf{T}\,:=\, [\, T_x \quad T_y \quad T_z \,]$$
We can then start implementing our optimization algorithm.
Initializing the Swarm
The main idea for PSO is that we set a swarm $\mathbf{S}$ composed of particles $\mathbf{P}_n$ into a search space in order to find the optimal solution. The movement of the swarm depends on the cognitive ($c_1$) and social ($c_2$) components of all the particles.
We define our particle $\mathbf{P}$ as:
$$\mathbf{P}\,:=\,\mathbf{X}$$
And the swarm as being composed of $N$ particles with certain positions at a timestep $t$:
$$\mathbf{S}_t\,:=\,[\,\mathbf{P}_1\quad\mathbf{P}_2\quad ... \quad\mathbf{P}_N\,]$$
In this implementation, we designate $\mathbf{P}_1$ as the initial configuration of the manipulator at the zero-position. This means that the angles are equal to 0 and the link offset is also zero. We then generate the $N-1$ particles using a uniform distribution which is controlled by the hyperparameter $\epsilon$.
Finding the global optimum
In order to find the global optimum, the swarm must be moved. This movement is then translated by an update of the current position given the swarm's velocity $\mathbf{V}$. That is:
$$\mathbf{S}_{t+1} = \mathbf{S}_t + \mathbf{V}_{t+1}$$
The velocity is then computed as follows:
$$\mathbf{V}_{t+1} = w\mathbf{V}_t + c_1 r_1 (\mathbf{p}_{best} - \mathbf{p}) + c_2 r_2(\mathbf{g}_{best} - \mathbf{p})$$
Where $r_1$ and $r_2$ denote random values in the interval $[0,1]$, $\mathbf{p}_{best}$ is the best and $\mathbf{p}$ is the current personal position, and $\mathbf{g}_{best}$ is the best position of all the particles.
Preparations
Let us now see how this works with the pyswarms library. We use the point $[-2,2,3]$ as our target for which we want to find an optimal pose of the manipulator. We start by defining a function to get the distance from the current position to the target position:
End of explanation
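The update equations above are handled internally by pyswarms, but a minimal NumPy sketch of one iteration (my own illustration, using hypothetical pbest/gbest arrays) looks like this:

```python
import numpy as np

def pso_step(positions, velocities, pbest, gbest, w=0.5, c1=1.5, c2=1.5):
    # r1, r2 ~ U[0, 1], drawn independently per particle and dimension
    r1 = np.random.rand(*positions.shape)
    r2 = np.random.rand(*positions.shape)
    velocities = (w * velocities
                  + c1 * r1 * (pbest - positions)
                  + c2 * r2 * (gbest - positions))
    return positions + velocities, velocities
```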
swarm_size = 20
dim = 6 # Dimension of X
epsilon = 1.0
options = {'c1': 1.5, 'c2':1.5, 'w':0.5}
constraints = (np.array([-np.pi , -np.pi/2 , 1 , -np.pi , -5*np.pi/36 , -np.pi]),
np.array([np.pi , np.pi/2 , 3 , np.pi , 5*np.pi/36 , np.pi]))
d1 = d2 = d3 = d4 = d5 = d6 = 3
Explanation: We are going to use the distance function to compute the cost, the further away the more costly the position is.
The optimization algorithm needs some parameters (the swarm size, $c_1$, $c_2$ and $\epsilon$). For the options ($c_1$,$c_2$ and $w$) we have to create a dictionary and for the constraints a tuple with a list of the respective minimal values and a list of the respective maximal values. The rest can be handled with variables. Additionally, we define the joint lengths to be 3 units long:
End of explanation
def getTransformMatrix(theta, d, a, alpha):
T = np.array([[np.cos(theta) , -np.sin(theta)*np.cos(alpha) , np.sin(theta)*np.sin(alpha) , a*np.cos(theta)],
[np.sin(theta) , np.cos(theta)*np.cos(alpha) , -np.cos(theta)*np.sin(alpha) , a*np.sin(theta)],
[0 , np.sin(alpha) , np.cos(alpha) , d ],
[0 , 0 , 0 , 1 ]
])
return T
Explanation: In order to obtain the current position, we need to calculate the matrices of rotation and translation for every joint. Here we use the Denavit-Hartenberg parameters for that. So we define a function that calculates these. The function uses the rotation angle and the extension $d$ of a prismatic joint as input:
End of explanation
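A quick sanity check on the transform (my own, not part of the tutorial): with all Denavit-Hartenberg parameters set to zero it should reduce to the 4x4 identity, and a pure joint offset should only shift the z translation:

```python
print(getTransformMatrix(0, 0, 0, 0))   # expect the 4x4 identity matrix
print(getTransformMatrix(0, 2, 0, 0))   # expect identity with a z-translation of 2
```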
def get_end_tip_position(params):
# Create the transformation matrices for the respective joints
t_00 = np.array([[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]])
t_01 = getTransformMatrix(params[0] , d2 , 0 , -np.pi/2)
t_12 = getTransformMatrix(params[1] , d2 , 0 , -np.pi/2)
t_23 = getTransformMatrix(0 , params[2] , 0 , -np.pi/2)
t_34 = getTransformMatrix(params[3] , d4 , 0 , -np.pi/2)
t_45 = getTransformMatrix(params[4] , 0 , 0 , np.pi/2)
t_56 = getTransformMatrix(params[5] , d6 ,0 , 0)
# Get the overall transformation matrix
end_tip_m = t_00.dot(t_01).dot(t_12).dot(t_23).dot(t_34).dot(t_45).dot(t_56)
# The coordinates of the end tip are the 3 upper entries in the 4th column
pos = np.array([end_tip_m[0,3],end_tip_m[1,3],end_tip_m[2,3]])
return pos
Explanation: Now we can calculate the transformation matrix to obtain the end tip position. For this we create another function that takes our vector $\mathbf{X}$ with the joint variables as input:
End of explanation
def opt_func(X):
n_particles = X.shape[0] # number of particles
target = np.array([-2,2,3])
dist = [distance(get_end_tip_position(X[i]), target) for i in range(n_particles)]
return np.array(dist)
Explanation: The last thing we need to prepare in order to run the algorithm is the actual function that we want to optimize. We just need to calculate the distance between the position of each swarm particle and the target point:
End of explanation
%%time
# Call an instance of PSO
optimizer = ps.single.GlobalBestPSO(n_particles=swarm_size,
dimensions=dim,
options=options,
bounds=constraints)
# Perform optimization
cost, joint_vars = optimizer.optimize(opt_func, iters=1000)
Explanation: Running the algorithm
Braced with these preparations we can finally start using the algorithm:
End of explanation
print(get_end_tip_position(joint_vars))
Explanation: Now let's see if the algorithm really worked and test the output for joint_vars:
End of explanation |
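One more check (my addition): measure how far the optimized pose lands from the [-2, 2, 3] target, reusing the distance helper defined earlier. The value should match the final cost reported by the optimizer.

```python
residual = distance(get_end_tip_position(joint_vars), [-2, 2, 3])
print(residual)
```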
1,755 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Text classification with a RNN Tutorial in Tensorflow 2.0
Step1: Set up input pipeline
The IMDB large movie review dataset is a binary classification dataset: all the reviews have either a positive or negative sentiment.
We will download the dataset using TensorFlow Datasets.
Step2: This text encoder will reversibly encode any string, falling back to byte-encoding if necessary
Step3: Prepare data for training
We will use the padded_batch introduce in the Word embeddings tutorial to create batches of these encoded strings.
Step4: Create the model
We will use a Sequential model that starts with an Embedding layer and then goes straight to a bi-directional LSTM. Finally, there is a dense layer with 64 units that is connected to the final dense layer with a single neuron, used for the classification task.
Step5: NOTE
Step6: Train the model
Step7: We will now create the inference function for a given sample
Step8: Let's now plot the training and evaluation graphs
Step9: Stack two or more LSTM layers
We can set up the LSTM layers so that they return the full sequences using the return_sequences options. | Python Code:
import tensorflow as tf
import tensorflow_datasets as tfds
import matplotlib.pyplot as plt
import time
Explanation: Text classification with an RNN Tutorial in TensorFlow 2.0
End of explanation
dataset, info = tfds.load("imdb_reviews/subwords8k", with_info=True, as_supervised=True)
train_dataset, test_dataset = dataset["train"], dataset["test"]
encoder = info.features["text"].encoder
print(f"Vocabulary size: {encoder.vocab_size}")
Explanation: Set up input pipeline
The IMDB large movie review dataset is a binary classification dataset: all the reviews have either a positive or negative sentiment.
We will download the dataset using TensorFlow Datasets.
End of explanation
encoded_string = encoder.encode("Hello Tensorflow, let's see how you encode this sentence.")
print(encoded_string)
decoded_string = encoder.decode(encoded_string)
print(decoded_string)
for index in encoded_string:
print(f"{index} --> {encoder.decode([index])}")
Explanation: This text encoder will reversibly encode any string, falling back to byte-encoding if necessary
End of explanation
BUFFER_SIZE = 10000
BATCH_SIZE = 64
train_dataset = train_dataset.shuffle(BUFFER_SIZE).padded_batch(BATCH_SIZE)
test_dataset = test_dataset.padded_batch(BATCH_SIZE)
Explanation: Prepare data for training
We will use the padded_batch introduce in the Word embeddings tutorial to create batches of these encoded strings.
End of explanation
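To see what padded_batch produces, a quick optional peek (my addition) at a couple of batches shows that each batch is padded to the length of its own longest review:

```python
for example_batch, label_batch in train_dataset.take(2):
    print(example_batch.shape, label_batch.shape)  # e.g. (64, <longest review in batch>) and (64,)
```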
model = tf.keras.models.Sequential([
tf.keras.layers.Embedding(input_dim=encoder.vocab_size, output_dim=64, mask_zero=True),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(units=64)),
tf.keras.layers.Dense(units=64, activation="relu"),
tf.keras.layers.Dense(units=1)
])
model.summary()
Explanation: Create the model
We will use a Sequential model that starts with an Embedding layer and then goes straight to a bi-directional LSTM. Finally, there is a dense layer with 64 units that is connected to the final dense layer with a single neuron, used for the classification task.
End of explanation
model.compile(
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(1e-4),
metrics=['accuracy'],
)
Explanation: NOTE: If we wanted to use the stateful RNN layer, we should have built the model with Keras functional API or model subclassing so that the RNN layer states can be retrieved and used. Please check Keras RNN guide for more details.
End of explanation
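For reference, here is a sketch of the same architecture written with the Keras functional API, which the note above recommends when you need access to the RNN states (this model is only defined here, not trained):

```python
inputs = tf.keras.Input(shape=(None,), dtype="int64")
x = tf.keras.layers.Embedding(encoder.vocab_size, 64, mask_zero=True)(inputs)
x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64))(x)
x = tf.keras.layers.Dense(64, activation="relu")(x)
outputs = tf.keras.layers.Dense(1)(x)
functional_model = tf.keras.Model(inputs, outputs)
functional_model.summary()
```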
history = model.fit(
train_dataset,
epochs=10,
validation_data=test_dataset,
validation_steps=30,
)
test_loss, test_acc = model.evaluate(test_dataset)
Explanation: Train the model
End of explanation
def encode_sample(sample_pred_text):
encoded_sample = encoder.encode(sample_pred_text)
encoded_sample_tensor = tf.constant(encoded_sample, dtype=tf.float32)
return encoded_sample_tensor
@tf.function
def model_predict(encoded_sample):
logits = model(tf.expand_dims(encoded_sample, axis=0))
predicted_value = tf.sigmoid(logits)
return predicted_value
# Predict on a positive sample
sample_pred_text = "The movie was cool. The animation and the graphics" \
" were out of this world and, because of that, " \
"I would definitely recommend this movie!"
print(f"Text to predict: \n{sample_pred_text}")
encoded_sample = encode_sample(sample_pred_text)
start = time.time()
prediction = model_predict(encoded_sample)
print(f"Prediction: {prediction[0][0]} [took {time.time() - start} s]")
# Predict on a negative sample
sample_pred_text = "What an awful movie! I expected much more from this director." \
" The plot was ok, but the actors and graphics were terrible..."
print(f"Text to predict: \n{sample_pred_text}")
encoded_sample = encode_sample(sample_pred_text)
start = time.time()
prediction = model_predict(encoded_sample)
print(f"Prediction: {prediction[0][0]} [took {time.time() - start} s]")
# Prediction on a neutral sample
sample_pred_text = "I would say this is an ok movie"
print(f"Text to predict: \n{sample_pred_text}")
encoded_sample = encode_sample(sample_pred_text)
start = time.time()
prediction = model_predict(encoded_sample)
print(f"Prediction: {prediction[0][0]} [took {time.time() - start} s]")
Explanation: We will now create the inference function for a given sample
End of explanation
def plot_graphs(history, metric):
plt.plot(history.history[metric])
plt.plot(history.history['val_'+metric], '')
plt.xlabel("Epochs")
plt.ylabel(metric)
plt.legend([metric, 'val_'+metric])
plt.show()
plot_graphs(history, "loss")
plot_graphs(history, "accuracy")
Explanation: Let's now plot the training and evaluation graphs
End of explanation
model_v2 = tf.keras.Sequential([
tf.keras.layers.Embedding(encoder.vocab_size, 64, mask_zero=True),
tf.keras.layers.LSTM(units=64, return_sequences=True),
tf.keras.layers.LSTM(units=32),
tf.keras.layers.Dense(64, activation="relu"),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(1, activation="sigmoid"),
])
model_v2.summary()
model_v2.compile(
loss=tf.keras.losses.BinaryCrossentropy(from_logits=False),
optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
metrics=[
tf.keras.metrics.Precision(),
tf.keras.metrics.Recall()
],
)
history = model_v2.fit(
train_dataset,
epochs=10,
validation_data=test_dataset,
validation_steps=30
)
test_loss, test_precision, test_recall = model_v2.evaluate(test_dataset)
print(f"Test Loss: {test_loss}")
print(f"Test Precision: {test_precision}")
print(f"Test Recall: {test_recall}")
@tf.function
def model_v2_predict(encoded_sample):
return model_v2(tf.expand_dims(encoded_sample, axis=0))
# Predict on a positive sample
sample_pred_text = "bla bla bla bla bla"
print(f"Text to predict: \n{sample_pred_text}")
encoded_sample = encode_sample(sample_pred_text)
start = time.time()
prediction_v2 = model_v2_predict(encoded_sample)
print(f"Prediction: {prediction_v2[0][0]} [took {time.time() - start} s]")
plot_graphs(history, "loss")
plot_graphs(history, "precision_1")
plot_graphs(history, "recall_1")
Explanation: Stack two or more LSTM layers
We can set up the LSTM layers so that they return the full sequences using the return_sequences options.
End of explanation |
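A tiny shape check (my own illustration with random data) makes the effect of return_sequences explicit: it keeps the time dimension, so the next LSTM layer receives a sequence rather than a single vector.

```python
demo_batch = tf.random.uniform((2, 7, 8))  # (batch, timesteps, features)
print(tf.keras.layers.LSTM(4)(demo_batch).shape)                         # (2, 4)
print(tf.keras.layers.LSTM(4, return_sequences=True)(demo_batch).shape)  # (2, 7, 4)
```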
1,756 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introducciรณn a Python para Ciencias Biรณlogicas
Curso de Biofรญsica - Universidad de Antioquia
Daniel Mejรญa Raigosa (email
Step1: Probemos creando una variable que inicialice a una palabra
Esto produce la siguiente salida,
```Python
palabra=hola
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-12-fda39d79c7ea> in <module>()
----> 1 palabra=hola
NameError
Step2: El historial de comandos ingresados
Mediante el comando,
Step3: Es posible visualizar la lista de los comandos ingresados durante una sesiรณn de trabajo de IPython. Tambiรฉn es posible crear
un archivo que almacene los comandos de la sesiรณn aรฑadiendo la opciรณn -f nombre_archivo_destino.py.
Por ejemplo, el comando
Step4: Crea el archivo archivo_destino.py en el directorio de trabajo actual. Los contenidos de dicho archivo son los comandos ingresados durante la sesiรณn de trabajo.
Notebook Jupyter
<div id="ch
Step5: Las variables pueden clasificarse segรบn los siguientes tipos de datos,
Booleanos
Step6: Enteros int
Almacenan valores numรฉricos enteros, tanto positivos como negativos,
Step7: Punto flotante float
Almacenan valores numรฉricos de punto flotante,
Step8: Cadenas de texto string
Almacenan el contenido de texto o caracteres,
Step9: Listas list
Las listas son un tipo especial de variable que permite almacenar una secuencia de varios objetos. Por ejemplo la lista,
Step10: Las listas permiten acceder a cada elemento utilizando un sistema de indices que van desde el 0 hasta el N-1 donde N
es el tamaรฑo de la lista, es decir, la cantidad de elementos que almacena
Step11: Tambiรฉn es posible mostrar los contenidos de una lista utilizando print
Step12: Algunos mรฉtodos de las listas
Step13: Interacciรณn con el usuario
Mostrar mensajes en pantalla
Para presentar comandos en pantalla se utiliza el comando print() colocando el contenido de lo que se desea mostrar
al interior de los ().
Por ejemplo
Step14: Al interior tenemos la cadena de caracteres "Hola".
Step15: Es posible ingresar varios argumentos a la funciรณn print,
Step16: Secuencias de escape en texto
Las secuencias de escape son un conjunto de caracteres especiales que cotienen informaciรณn sobre el formateo de texto,
\n significa nueva lรญnea.
\t significa un espacio de tabulaciรณn.
\' permite colocar comillas simples en el texto.
\" permite colocar comillas dobles en el texto.
Un cรณdigo de ejemplo ilustra mejor su uso,
Step17: Leer informaciรณn ingresada por el teclado
Se puede leer informaciรณn ingresada por el teclado mediante el comando input el cual debe ir almacenado en una variable
dentro del parรฉntesis se puede ingresar un mensaje (opcional).
Step18: Es equivalente a tener,
Step19: Estructuras de control
Las estructuras de control determinan el comportamiento de un programa con base en desiciones. Los criterios de decisiรณn
suelen ser condiciones de lรณgica Booleana, es decir, Verdadero o Falso.
Operaciones lรณgicas and, or
Corresponde a las operaciones lรณgicas de conjunciรณn y disyunciรณn. Las tablas de verdad son las siguientes,
Conjunciรณn and
<table border="1">
<thead>
<tr><th align="center">A</th> <th align="center">B</th> <th align="center">A <code>and</code> B</th> </tr>
</thead>
<tbody>
<tr><td align="center"> V </td> <td align="center"> V </td> <td align="center"> V </td> </tr>
<tr><td align="center"> V </td> <td align="center"> F </td> <td align="center"> F </td> </tr>
<tr><td align="center"> F </td> <td align="center"> V </td> <td align="center"> F </td> </tr>
<tr><td align="center"> F </td> <td align="center"> F </td> <td align="center"> F </td> </tr>
</tbody>
</table>
Step20: Disyunciรณn or
<table border="1">
<thead>
<tr><th align="center">A</th> <th align="center">B</th> <th align="center">A <code>or</code> B</th> </tr>
</thead>
<tbody>
<tr><td align="center"> V </td> <td align="center"> V </td> <td align="center"> F </td> </tr>
<tr><td align="center"> V </td> <td align="center"> F </td> <td align="center"> V </td> </tr>
<tr><td align="center"> F </td> <td align="center"> V </td> <td align="center"> V </td> </tr>
<tr><td align="center"> F </td> <td align="center"> F </td> <td align="center"> V </td> </tr>
</tbody>
</table>
Step21: Operadores lรณgicos de comparaciรณn <=, >=, <, >,y ==
Los operadores lรณgicos de comparaciรณn retornan los valores de verdad correspondientes al resultado de la comparaciรณn
Step22: Sentencia if
La sentencia if es la que nos permite evaluar condiciones booleanas y controlar el flujo de software,
Step23: La sentencia permite aรฑadir la instrucciรณn else que se ocupa del caso excluyente del if
Step24: Se pueden anidar sentencias if que aรฑadan condiciones adicionales mediante la instrucciรณn elif (de else if).
Por ejemplo, considere lo que pasarรญa segรบn los distintos valores de la variable A
Step25: Ciclos o Bucles
Un ciclo es una porciรณn de cรณdigo que se ejecuta repetidamente, una y otra vez, un nรบmero de veces determinado.
El ciclo se detiene cuando se satisface una condiciรณn de parada
Ciclo for
Este ciclo es muy รบtil cuando se conoce de antemano la cantidad de veces que se necesita repetir
una acciรณn determinada dentro del programa. Para controlar el ciclo se pueden utlizar las diferentes secuencias vistas anteriormente
Step26: Tambiรฉn se puede incluรญr un "paso" en el ciclo
Step27: Tambiรฉn es posible iterar a lo largo de los elementos de una lista,
Step28: O a lo largo de los elementos de una cadena de caracteres,
Step29: Notemos que en este caso i contiene momentรกneamente el i-รฉsimo caracter de la cadena frase.
Ciclo while
El ciclo while se utiliza para ejecutar un conjunto de instrucciones repetidamente hasta que se
cumpla cierta condiciรณn que nosotros inducimos.
Veamos cรณmo generar los puntos que describen la trayectoria de un movimiento parabรณlico con velocidad inicial
de 20 m/s con un รกngulo de tiro de 60 grados.
Utilizaremos las ecuaciones de movimiento,
$$x = x_{0} + v_{0}\cos\theta\,t$$
$$y = y_{0} + v_{0}\sin\theta\, t - \frac{1}{2}g\,t^{2}$$
Step30: Funciones
En programaciรณn es comรบn que existan tareas que se realicen recurrentemente. Las funciones sirven para disminuir la complejidad
del cรณdigo, organizarlo mejor, ademรกs de facilitar su depuraciรณn. Al igual que las fuciones en matemรกticas, las funciones en
programaciรณn pueden recibir argumentos los cuales son procesados para retornar algรบn resultado. Por otro lado, las fuciones tambiรฉn
pueden existir sin requerir argumentos y ejecutar un conjunto de ordenes sin necesidad de devolver resultado alguno.
Declaraciรณn de una funciรณn
Step31: Tambiรฉn puedo tener argumentos que no retornen resultado alguno
Step32: Mรณdulos (o librerรญas)
Los mรณdulos o librerรญas son conjuntos de funciones que han sido creadas por grupos de programadores y que permiten ahorrar
tiempo a la hora de programar.
Los mรณdulos de interรฉs para este curso son
math.
numpy (Numerical Python).
matplotlib (Mathematical Plot Library).
Mรณdulo math
Step33: Numpy
<div id="ch
Step34: Matplotlib
<div id="ch | Python Code:
print("Hola mundo!")
print("1+1=",2)
print("Hola, otra vez","1+1=",2)
print("Hola, otra vez.","Sabias que 1+1 =",2,"?")
numero=3
print(numero)
numero=3.1415
print(numero)
Explanation: Introducciรณn a Python para Ciencias Biรณlogicas
Curso de Biofรญsica - Universidad de Antioquia
Daniel Mejรญa Raigosa (email: [email protected])
Grupo de Biofรญsica
Universidad de Antioquia
Date: Abril 27, 2016
Acerca de.
Materiales de trabajo para el curso corto de Python para ciencias biolรณgicas presentado en la Universidad
de Antioquia el Miรฉrcoles 27 de Abril de 2016.
Revisiรณn del documento: Versiรณn 1.3.0
Contenidos
Motivaciรณn: programaciรณn en Ciencias Biolรณgicas?
Instalaciรณn de Python Anaconda
Consola de IPython
Notebook Jupyter
Elementos de Python
Numpy
Matplotlib
Motivaciรณn: programaciรณn en Ciencias Biolรณgicas?
<div id="ch:motivacion"></div>
Los sistemas biolรณgicos exhiben gran complejidad. El desarrollo de tรฉcnicas experimentales y tecnologรญas nuevas
ha causado que se disponga de datos experimentales cada vez mรกs masivos, requiriรฉndose el uso de herramientas complejas
que pueda manipular la cantidad de informaciรณn disponible con facilidad. La mejor manera de exponer razones por las cuales
un biรณgolo o profesional de las ciencias biolรณgicas deberรญa aprender a programar como una herramienta se puede hacer
a travรฉs de casos reales de aplicaciรณn.
Dentro de los casos de รฉxito tenemos,
รmicas: genรณmica, proteรณmica, metabolรณmica, secuenciaciรณn...
Minerรญa de datos en bases de datos (Proteรญnas, Georeferenciaciรณn, ...).
Dinรกmica de poblaciones (Ecosistรฉmica?).
Anรกlisis estadรญstico masivo.
Anรกlisis de datos con inteligencia artificial (Redes Neuronales, Algoritmos Adaptativos y Evolutivos, ...).
Simulaciรณn en general (Demasiados casos de รฉxito...).
<!-- [Why biology students should learn to program](http://www.wired.com/2009/03/why-biology-students-should-learn-how-to-program/) -->
Quรฉ es Python?
<div id="ch:motivacion:python"></div>
Python es un lenguaje de programaciรณn interpretado que se ha popularizado mucho debido a su simpleza y
relativa facilidad de aprendizaje. Es un lenguaje de programaciรณn de scripting, es decir, el cรณdigo fuente se ejecuta lรญnea a lรญnea,
y no requiere de su compilaciรณn para producir aplicaciones.
La diferencia entre un lenguaje de programaciรณn compilado y uno interpretado es que en el primer caso el cรณdigo fuente debe ser
compilado para producir un archivo ejecutable. Para el lenguaje interpretado se requiere de la existencia de una aplicaciรณn
conocida como interprete que se encarga de ejecutar las instrucciones descritas en el cรณdigo fuente o script.
<!-- ======= ======= -->
<!-- <div id="ch:motivacion:"></div> -->
Instalaciรณn de Python Anaconda
<div id="ch:instalacion"></div>
Durante el desarrollo de este curso estaremos trabajando con una distribuciรณn de Python conocida como Anaconda.
Anaconda es un empaquetado de Python que incluye varios paquetes de uso frecuente en ciencias y anรกlisis de datos, entre
ellos Matplotlib y Numpy de interรฉs para este curso y cuya instalaciรณn independiente es bastante tediosa.
Enlace de descarga Anaconda para Windows
Tamaรฑo aproximado: 350 MB.
Tiempo de instalaciรณn aproximado: 15 minutos.
Mรณdulo Python Visual (VPython)
El mรณdulo de python-visual es รบtil para el dibujado de objetos 3D y animaciones.
El mรณdulo VisualPython (VPython) funciona bajo Python 2.7 que es la versiรณn estรกndar anterior a Python 3.x.
Para instalar vpython con Anaconda 3.5 se deben seguir los siguientes pasos
Crear un entorno con Python 2.7 en Anaconda, mediante el comando
conda create --name oldpython python=2.7
el nombre oldpython puede ser cambiado por cualquier otro. Posteriormente, es necesario activar el nuevo entorno
activate oldpython
Ya se puede instalar vpython desde la consola de Anaconda mediante el comando,
conda install -c https://conda.binstar.org/mwcraig vpython
Opcionalmente, para usar vpython con IPython es necesario instalar una versiรณn de ipython compatible con python 2.7.
Esto se logra mediante el comando
conda install ipython
Siempre y cuando tengamos el entorno oldpython activado.
Mรกs informaciรณn y otros enlaces de descarga
Para usar, vpython con las versiones mรกs nuevas de python se recomienda incluir la siguiente lรญnea al inicio del cรณdigo
fuente,
from __future__ import print_function, division
Editor ATOM (Opcional)
Cualquier editor de texto es รบtil para escribir cรณdigo fuente en python, el รบnico requisito es guardar el cรณdigo fuente con extensiรณn .py.
Un editor que podrรญa ser รบtil debido al resaltado de sintaxis es ATOM el cual puede descargarse para windows desde
este enlace https://github.com/atom/atom/releases/download/v1.7.2/atom-windows.zip
Tamaรฑo aproximado: 100 MB.
Consola de IPython
<div id="ch:ipython"></div>
<!-- dom:FIGURE: [figures/ipython-console.png, width=640 frac=0.8] Consola de IPython en Linux <div id="ch:ipython:fig:consola_ejemplo"></div> -->
<!-- begin figure -->
<div id="ch:ipython:fig:consola_ejemplo"></div>
<p>Consola de IPython en Linux</p>
<img src="figures/ipython-console.png" width=640>
<!-- end figure -->
Para iniciar una consola de IPython es necesario buscar la carpeta de instalaciรณn de Anaconda en el menรบ de inicio
de windows, y abrir el acceso directo IPyton correspondiente.
Al abrir la consola de IPython nos debe aparecer el siguiente mensaje (tal como en la figura mรกs arriba ),
Esta consola o prompt nos permite ingresar lรญneas de cรณdigo python que se ejecutan despuรฉs de ser ingresadas.
End of explanation
palabra="hola"
print(palabra)
Explanation: Probemos creando una variable que inicialice a una palabra
Esto produce la siguiente salida,
```Python
palabra=hola
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-12-fda39d79c7ea> in <module>()
----> 1 palabra=hola
NameError: name 'hola' is not defined
```
Lo que ocurriรณ es que la expresiรณn palabra=hola corresponde a la asignaciรณn del contenido de una variable llamada
hola en una llamada palabra. La variable hola no existe (no fue definida previamente) por lo tanto se arrojรณ el error.
Para asignar el contenido de una palabra o frase se debe hacer uso de comillas dobles,
End of explanation
%history
Explanation: El historial de comandos ingresados
Mediante el comando,
End of explanation
%history -f archivo_destino.py
Explanation: Es posible visualizar la lista de los comandos ingresados durante una sesiรณn de trabajo de IPython. Tambiรฉn es posible crear
un archivo que almacene los comandos de la sesiรณn aรฑadiendo la opciรณn -f nombre_archivo_destino.py.
Por ejemplo, el comando
End of explanation
Cualquier_Cosa="Almaceno contenido"
print(Cualquier_Cosa)
Explanation: Crea el archivo archivo_destino.py en el directorio de trabajo actual. Los contenidos de dicho archivo son los comandos ingresados durante la sesiรณn de trabajo.
Notebook Jupyter
<div id="ch:jupyternb"></div>
Jupyter es la evoluciรณn de lo que se conocรญa como notebooks de IPython, una interfaz en un navegador web que interacciona con un
kernel o nรบcleo de computaciรณn de IPython el cual se encarga de ejecutar los comandos ingresados en el notebook.
Actualmente Jupyter soporta varios nรบcleos de computaciรณn, entre ellos Python, R, Perl, entre otros.
<!-- dom:FIGURE: [figures/jupyter-console.png, width=640 frac=0.8] Notebook de Jupyter <div id="ch:jupyternb:fig:notebook_ejemplo"></div> -->
<!-- begin figure -->
<div id="ch:jupyternb:fig:notebook_ejemplo"></div>
<p>Notebook de Jupyter</p>
<img src="figures/jupyter-console.png" width=640>
<!-- end figure -->
Iniciando el notebook
Para iniciar un notebook de Jupyter en Anaconda es necesario ejecutar el enlace directo con nombre Jupyter
en el menรบ de inicio, bajo el directorio de Anaconda
Esto iniciarรก el kernel establecido por defecto segรบn la instalaciรณn de Anaconda que hayamos elegido,
y se abrirรก una ventana del navegador web del sistema presentando el notebook de manera similar que vemos
en la figura mรกs arriba
Elementos de Python
<div id="ch:elementospython"></div>
Sintaxis
Variables y tipos de datos
En programaciรณn existe el concepto de variable. Una variable puede verse como una etiqueta que le damos a una regiรณn
de memoria donde almacenamos informaciรณn. La utilidad de tener etiquetadas regiones de memoria es que podemos hacer llamadas
a ella en cualquier parte de nuestros programas.
End of explanation
verdadera = True
falsa = False
print(verdadera)
print(falsa)
Explanation: Las variables pueden clasificarse segรบn los siguientes tipos de datos,
Booleanos: almacenan valores de verdad, verdaero, falso, 1, 0.
Enteros: almacenan valores numรฉricos enteros, como 2, -2, 3, 1000.
Punto flotante: almacenan valores numรฉricos de punto flotante es decir, nรบmeros con decimales o en notaciรณn cientรญfica.
Cadenas: almacenan valores tipo cadena de caracteres como las oraciรณn "Hola Mundo!"
Variables Booleanas bool
Almacenan valores de verdad verdadero o falso que en Python corresponden a true y false
End of explanation
numero=19881129
print(numero)
numeronegativo=-19881129
print(numero)
Explanation: Enteros int
Almacenan valores numรฉricos enteros, tanto positivos como negativos,
End of explanation
numero=3.19881129
print(numero)
numeronegativo=-3.19881129
print(numero)
Explanation: Punto flotante float
Almacenan valores numรฉricos de punto flotante,
End of explanation
palabra="Alicia"
frase="ยฟEn quรฉ se parecen un cuervo y un escritorio?"
print(palabra)
print(frase)
Explanation: Cadenas de texto string
Almacenan el contenido de texto o caracteres,
End of explanation
lista = [2,3.5,True, "perro feliz"] #existe en esta lista diferentes tipos de datos
Explanation: Listas list
Las listas son un tipo especial de variable que permite almacenar una secuencia de varios objetos. Por ejemplo la lista,
End of explanation
print(lista[0])
print(lista[1])
print(lista[2])
print(lista[3])
Explanation: Las listas permiten acceder a cada elemento utilizando un sistema de indices que van desde el 0 hasta el N-1 donde N
es el tamaรฑo de la lista, es decir, la cantidad de elementos que almacena
End of explanation
print(lista)
Explanation: Tambiรฉn es posible mostrar los contenidos de una lista utilizando print
End of explanation
lista=[]
print(lista)
lista.append(1)
lista.append(":D")
lista.append("0211")
print(lista)
lista.remove("0211")
print(lista)
barranquero={"Reino":"Animalia","Filo":"Chordata","Clase":"Aves","Orden":"Coraciiformes","Familia":"Momotidae","Gรฉnero":"Momotus"}
velociraptor={"Reino":"Animalia","Filo":"Chordata","Clase":"Sauropsida","Orden":"Saurischia","Familia":"Dromaeosauridae","Gรฉnero":"Velociraptor"}
print(barranquero)
Meses={"Enero":1,"Febrero":2}
Meses["Enero"]
Meses={1:"Enero",2:"Febrero"}
Meses[1]
Explanation: Algunos mรฉtodos de las listas
End of explanation
print("Hola")
Explanation: Interacciรณn con el usuario
Mostrar mensajes en pantalla
Para presentar comandos en pantalla se utiliza el comando print() colocando el contenido de lo que se desea mostrar
al interior de los ().
Por ejemplo
End of explanation
print(5.5)
Explanation: Al interior tenemos la cadena de caracteres "Hola".
End of explanation
print("Voy a tener varios argumentos","separados","por",1,"coma","entre ellos")
Explanation: Es posible ingresar varios argumentos a la funciรณn print,
End of explanation
print("Esto es un \"texto\" que se divide por\nUna lรญnea nueva")
print("Tambiรฉn puedo tener texto\tseparado por un tabulador")
print("Las secuencias de escape\n\t pueden ser combinadas\ncuantas veces se quiera")
Explanation: Secuencias de escape en texto
Las secuencias de escape son un conjunto de caracteres especiales que cotienen informaciรณn sobre el formateo de texto,
\n significa nueva lรญnea.
\t significa un espacio de tabulaciรณn.
\' permite colocar comillas simples en el texto.
\" permite colocar comillas dobles en el texto.
Un cรณdigo de ejemplo ilustra mejor su uso,
End of explanation
nombre=input("Que quieres saber?: ")
print(barranquero[nombre])
Explanation: Reading input from the keyboard
Information typed at the keyboard can be read with the input command, whose result should be stored in a variable;
an optional message can be placed inside the parentheses.
End of explanation
print("Hola, cual es tu nombre?")
nombre=input() # input() returns the entered text as a string
print("Tu nombre es",nombre)
Explanation: This is equivalent to having,
End of explanation
print("Tabla de verdad \'and\'")
A = True
B = True
print(A,"and",B,"=",A or B)
A = True
B = False
print(A,"and",B,"=",A or B)
A = False
B = True
print(A,"and",B,"=",A or B)
A = False
B = False
print(A,"and",B,"=",A or B)
Explanation: Control structures
Control structures determine the behavior of a program based on decisions. The decision criteria
are usually Boolean logic conditions, that is, True or False.
Logical operations and, or
These correspond to the logical operations of conjunction and disjunction. Their truth tables are the following,
Conjunction and
<table border="1">
<thead>
<tr><th align="center">A</th> <th align="center">B</th> <th align="center">A <code>and</code> B</th> </tr>
</thead>
<tbody>
<tr><td align="center"> T </td> <td align="center"> T </td> <td align="center"> T </td> </tr>
<tr><td align="center"> T </td> <td align="center"> F </td> <td align="center"> F </td> </tr>
<tr><td align="center"> F </td> <td align="center"> T </td> <td align="center"> F </td> </tr>
<tr><td align="center"> F </td> <td align="center"> F </td> <td align="center"> F </td> </tr>
</tbody>
</table>
End of explanation
print("Tabla de verdad \'or\'")
A = True
B = True
print(A,"or",B,"=",A or B)
A = True
B = False
print(A,"or",B,"=",A or B)
A = False
B = True
print(A,"or",B,"=",A or B)
A = False
B = False
print(A,"or",B,"=",A or B)
Explanation: Disjunction or
<table border="1">
<thead>
<tr><th align="center">A</th> <th align="center">B</th> <th align="center">A <code>or</code> B</th> </tr>
</thead>
<tbody>
<tr><td align="center"> T </td> <td align="center"> T </td> <td align="center"> T </td> </tr>
<tr><td align="center"> T </td> <td align="center"> F </td> <td align="center"> T </td> </tr>
<tr><td align="center"> F </td> <td align="center"> T </td> <td align="center"> T </td> </tr>
<tr><td align="center"> F </td> <td align="center"> F </td> <td align="center"> F </td> </tr>
</tbody>
</table>
End of explanation
A=5
print(5>2)
print(5<2)
print(5==10)
print(5==A)
print(5>=2)
print(5<=2)
print(5<=A)
Explanation: Logical comparison operators <=, >=, <, >, and ==
Comparison operators return the truth values corresponding to the result of the comparison
End of explanation
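Python also provides the inequality operator !=, not listed above; a minimal example for completeness:
A=5
print(5 != 2)   # True, the values differ
print(5 != A)   # False, the values are equal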
A=2
if( 5 > A ):
print("5 es mayor que",A)
Explanation: The if statement
The if statement lets us evaluate Boolean conditions and control the flow of the program,
End of explanation
A=2
if( 5 > A ):
print("5 es mayor que",A)
print("Hey")
else:
print("5 es menor que",A)
Explanation: The statement also allows an else clause, which handles the case excluded by the if
End of explanation
A=5
if( 5 > A ):
print("5 es mayor que",A)
elif( 5 == A):
print("5 es igual",A)
else:
print("5 es menor que",A)
peticion=input("Que desea saber? ")
if(peticion=="Familia"):
print(barranquero["Familia"])
elif(peticion=="Orden"):
print(barranquero["Orden"])
else:
print("No entiendo tu peticiรณn")
Explanation: if statements can be chained to add extra conditions with the elif instruction (short for else if).
For example, consider what happens for different values of the variable A
End of explanation
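To see all three branches fire, the same test can be repeated for several values of A; a small illustrative sketch:
for A in [2, 5, 8]:
    if( 5 > A ):
        print("5 es mayor que",A)
    elif( 5 == A):
        print("5 es igual",A)
    else:
        print("5 es menor que",A)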
for x in range(1,6):
print("Este mensaje aparece por",x,"vez")
Explanation: Loops
A loop is a portion of code that is executed repeatedly, over and over, a certain number of times.
The loop stops when a stopping condition is satisfied
The for loop
This loop is very useful when the number of times a given action needs to be repeated
inside the program is known in advance. The loop can be controlled with the different sequences seen earlier
End of explanation
for x in range(1,20,2):
print(x)
for x in range(1,30,3):
print(x**2)
Explanation: A "step" can also be included in the loop
End of explanation
lista=[1,2,3,4,5,6]
animales=["perro","gato","elefante"]
for x in animales:
print(x)
Explanation: It is also possible to iterate over the elements of a list,
End of explanation
frase="Alicia\nยฟEn quรฉ se parecen un cuervo y un escritorio?"
for i in frase:
print(i)
frase[0:6]
Explanation: Or over the characters of a text string,
End of explanation
# This is known as importing a module
# Here we import the math module, which contains the mathematical sine and cosine
# operations that we will use to compute the horizontal and vertical components
# of the initial velocity
import math
t=0.0
dt=0.2
x=0.0
y=0.0
# Components of the initial velocity (20 m/s at a 60 degree launch angle)
vx=20*math.cos(math.radians(60))
vy=20*math.sin(math.radians(60))
while y>=0.0:
    print(t,"\t",x,"\t",y)
    t=t+dt
    # Positions follow directly from the equations of motion below
    x=vx*t
    y=vy*t-(9.8/2)*t**2
Explanation: Note that in this case i momentarily holds the i-th character of the string frase.
The while loop
The while loop is used to execute a set of instructions repeatedly until a certain
condition that we impose is met.
Let us see how to generate the points that describe the trajectory of projectile motion with an initial speed
of 20 m/s and a launch angle of 60 degrees.
We will use the equations of motion,
$$x = x_{0} + v_{0}\cos\theta\,t$$
$$y = y_{0} + v_{0}\sin\theta\, t - \frac{1}{2}g\,t^{2}$$
End of explanation
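As a quick sanity check on the loop above, the closed-form flight time and horizontal range for the same launch conditions can be computed directly (standard kinematics, not part of the original notebook):
import math
v0, theta, g = 20.0, math.radians(60), 9.8
t_flight = 2*v0*math.sin(theta)/g          # time until y returns to zero
x_range = v0**2*math.sin(2*theta)/g        # total horizontal distance
print("flight time:", t_flight, "s  range:", x_range, "m")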
def producto(argumento1,argumento2):
return argumento1*argumento2
print(producto(2,3))
def busqueda(diccionario,peticion):
if(peticion=="Familia"):
print(diccionario["Familia"])
elif(peticion=="Orden"):
print(diccionario["Orden"])
else:
print("No entiendo tu peticiรณn")
peticion=input("Que desea saber? ")
busqueda(barranquero,peticion)
peticion=input("Que desea saber? ")
busqueda(velociraptor,peticion)
animales=[barranquero,velociraptor]
for animal in animales:
print(animal["Orden"])
for x in animales:
    busqueda(x,"Orden") # function defined previously
Explanation: Functions
In programming it is common for certain tasks to be performed over and over. Functions reduce the complexity
of the code and organize it better, besides making debugging easier. Just like functions in mathematics, functions in
programming can receive arguments, which are processed to return some result. Functions can also
exist without requiring arguments, executing a set of instructions without needing to return any result.
Declaring a function
End of explanation
def mensaje():
print("Hola clase")
mensaje()
def informacion(diccionario):
for i in diccionario.keys():
print(i,"\t",diccionario[i])
informacion(velociraptor)
Explanation: Functions can also take no arguments, or take arguments and return no result at all
End of explanation
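Closely related, arguments can also be given default values; a minimal sketch (the function name saludo is just an illustrative choice):
def saludo(nombre="clase"):
    print("Hola", nombre)
saludo()            # uses the default value
saludo("Alicia")    # overrides the default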
import math
print("math.fabs(-1)=",math.fabs(-1))
print("math.ceil(3.67)=",math.ceil(3.67))
print("math.floor(3.37)=",math.floor(3.37))
print("math.ceil(3.67)=",math.ceil(3.67))
print("math.floor(3.37)=",math.floor(3.37))
print("math.factorial(4)=",math.factorial(4))
print("math.exp(1)=",math.exp(1))
print("math.log(math.exp(1))=",math.log(math.exp(1)))
print("math.log(10)=",math.log(10))
print("math.log(10,10)=",math.log(10,10))
print("math.sqrt(2)=",math.sqrt(2))
print("math.degrees(3.141592)=",math.degrees(3.141592))
print("math.radians(2*math.pi)=",math.radians(2*math.pi))
print("math.cos(1)=",math.cos(1))
print("math.sin(1)=",math.sin(1))
print("math.tan(1)=",math.tan(1))
print("math.acos(1)=",math.acos(1))
print("math.asin(1)=",math.asin(1))
print("math.atan(1)=",math.atan(1))
Explanation: Modules (or libraries)
Modules or libraries are collections of functions created by groups of programmers that save
time when programming.
The modules of interest for this course are
math.
numpy (Numerical Python).
matplotlib (Mathematical Plot Library).
The math module
End of explanation
import numpy as np
a = np.arange(15).reshape(3, 5)
print(a)
a = np.array([[ 0, 1, 2, 3, 4],[ 5, 6, 7, 8, 9],[10, 11, 12, 13, 14]])
a.shape
a.ndim
np.zeros( (3,4) )
np.ones( (3,4) )
np.arange( 10, 30, 5 )
random = np.random.random((2,3))
Explanation: Numpy
<div id="ch:intro-numpy"></div>
End of explanation
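The cells above only create arrays; element-wise arithmetic and slicing are the other operations used constantly with NumPy. A minimal sketch with arbitrary values:
import numpy as np
v = np.array([1.0, 2.0, 3.0, 4.0])
w = np.arange(4)
print(v + w)          # element-wise addition
print(v * 2)          # multiplication by a scalar
print(v ** 2)         # element-wise power
print(v[1:3])         # slicing works just like with lists
print(v.mean(), v.sum())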
%matplotlib inline
import math
import matplotlib.pyplot as plt
t=0.0
dt=0.1
# Initial position
x=0.0
y=0.0
# Initial speed
vo=20
# Empty lists that will store the x, y points
# of the trajectory
puntos_x=[]
puntos_y=[]
# Components of the initial velocity
vx=vo*math.cos(math.radians(60))
vy=vo*math.sin(math.radians(60))
while y>=0.0:
    # Append the coordinates to the lists
    puntos_x.append(x)
    puntos_y.append(y)
    t=t+dt
    # Positions follow directly from the equations of motion
    x=vx*t
    y=vy*t-(9.8/2)*t**2
plt.title("Movimiento Parabólico")
plt.xlabel("Posición horizontal(m)")
plt.ylabel("Altura (m)")
plt.plot(puntos_x,puntos_y)
plt.show()
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
def f(t):
return np.exp(-t) * np.cos(2*np.pi*t)
t1 = np.arange(0.0, 5.0, 0.1)
t2 = np.arange(0.0, 5.0, 0.02)
plt.figure(1)
plt.subplot(211)
plt.plot(t1, f(t1), 'bo', t2, f(t2), 'k')
plt.subplot(212)
plt.plot(t2, np.cos(2*np.pi*t2), 'r--')
plt.show()
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
mu, sigma = 100, 15
x = mu + sigma * np.random.randn(10000)
# the histogram of the data
n, bins, patches = plt.hist(x, 50, density=True, facecolor='g', alpha=0.75)
plt.xlabel('Smarts')
plt.ylabel('Probability')
plt.title('Histogram of IQ')
plt.text(60, .025, r'$\mu=100,\ \sigma=15$')
plt.axis([40, 160, 0, 0.03])
plt.grid(True)
plt.show()
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# make up some data in the interval ]0, 1[
y = np.random.normal(loc=0.5, scale=0.4, size=1000)
y = y[(y > 0) & (y < 1)]
y.sort()
x = np.arange(len(y))
# plot with various axes scales
plt.figure(1)
# linear
plt.subplot(221)
plt.plot(x, y)
plt.yscale('linear')
plt.title('linear')
plt.grid(True)
# log
plt.subplot(222)
plt.plot(x, y)
plt.yscale('log')
plt.title('log')
plt.grid(True)
# symmetric log
plt.subplot(223)
plt.plot(x, y - y.mean())
plt.yscale('symlog', linthreshy=0.05)
plt.title('symlog')
plt.grid(True)
# logit
plt.subplot(224)
plt.plot(x, y)
plt.yscale('logit')
plt.title('logit')
plt.grid(True)
plt.show()
Explanation: Matplotlib
<div id="ch:intro-matplotlib"></div>
End of explanation |
1,757 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Outlier Detection with bqplot
In this notebook, we create a class DNA that leverages the new bqplot canvas based HeatMap along with the ipywidgets Range Slider to help us detect and clean outliers in our data. The class accepts a DataFrame and allows you to visually and programmatically filter your outliers. The cleaned DataFrame can then be retrieved through a simple convenience function.
Step2: We define the size of our matrix here. Larger matrices require a larger height.
Step3: Instead of setting the quantiles by the sliders, we can also set them programmatically. Using a range of (5, 95) restricts the data considerably.
Step4: Now, we can use the convenience function to extract a clean DataFrame.
Step5: The DNA fills outliers with the median of the column. Alternatively, we can fill the outliers with the mean.
Step6: We can also visualize the new DataFrame the same way to test how our outliers look now. | Python Code:
from bqplot import (
DateScale,
ColorScale,
HeatMap,
Figure,
LinearScale,
OrdinalScale,
Axis,
)
from scipy.stats import percentileofscore
from scipy.interpolate import interp1d
import bqplot.pyplot as plt
from traitlets import List, Float, observe
from ipywidgets import IntRangeSlider, Layout, VBox, HBox, jslink
from pandas import DatetimeIndex
import numpy as np
import pandas as pd
def quantile_space(x, q1=0.1, q2=0.9):
    """Returns a function that squashes quantiles between q1 and q2"""
q1_x, q2_x = np.percentile(x, [q1, q2])
qs = np.percentile(x, np.linspace(0, 100, 100))
def get_quantile(t):
return np.interp(t, qs, np.linspace(0, 100, 100))
def f(y):
return np.interp(get_quantile(y), [0, q1, q2, 100], [-1, 0, 0, 1])
return f
class DNA(VBox):
colors = List()
q1 = Float()
q2 = Float()
def __init__(self, data, **kwargs):
self.data = data
date_x, date_y = False, False
transpose = kwargs.pop("transpose", False)
if transpose is True:
if type(data.index) is DatetimeIndex:
self.x_scale = DateScale()
if type(data.columns) is DatetimeIndex:
self.y_scale = DateScale()
x, y = list(data.columns.values), data.index.values
else:
if type(data.index) is DatetimeIndex:
date_x = True
if type(data.columns) is DatetimeIndex:
date_y = True
x, y = data.index.values, list(data.columns.values)
self.q1, self.q2 = kwargs.pop("quantiles", (1, 99))
self.quant_func = quantile_space(
self.data.values.flatten(), q1=self.q1, q2=self.q2
)
self.colors = kwargs.pop("colors", ["Red", "Black", "Green"])
self.x_scale = DateScale() if date_x is True else LinearScale()
self.y_scale = DateScale() if date_y is True else OrdinalScale(padding_y=0)
self.color_scale = ColorScale(colors=self.colors)
self.heat_map = HeatMap(
color=self.quant_func(self.data.T),
x=x,
y=y,
scales={"x": self.x_scale, "y": self.y_scale, "color": self.color_scale},
)
self.x_ax = Axis(scale=self.x_scale)
self.y_ax = Axis(scale=self.y_scale, orientation="vertical")
show_axes = kwargs.pop("show_axes", True)
self.axes = [self.x_ax, self.y_ax] if show_axes is True else []
self.height = kwargs.pop("height", "800px")
self.layout = kwargs.pop(
"layout", Layout(width="100%", height=self.height, flex="1")
)
self.fig_margin = kwargs.pop(
"fig_margin", {"top": 60, "bottom": 60, "left": 150, "right": 0}
)
kwargs.setdefault("padding_y", 0.0)
self.create_interaction(**kwargs)
self.figure = Figure(
marks=[self.heat_map],
axes=self.axes,
fig_margin=self.fig_margin,
layout=self.layout,
min_aspect_ratio=0.0,
**kwargs
)
super(VBox, self).__init__(
children=[self.range_slider, self.figure],
layout=Layout(align_items="center", width="100%", height="100%"),
**kwargs
)
def create_interaction(self, **kwargs):
self.range_slider = IntRangeSlider(
description="Filter Range",
value=(self.q1, self.q2),
layout=Layout(width="100%"),
)
self.range_slider.observe(self.slid_changed, "value")
self.observe(self.changed, ["q1", "q2"])
def slid_changed(self, new):
self.q1 = self.range_slider.value[0]
self.q2 = self.range_slider.value[1]
def changed(self, new):
self.range_slider.value = (self.q1, self.q2)
self.quant_func = quantile_space(
self.data.values.flatten(), q1=self.q1, q2=self.q2
)
self.heat_map.color = self.quant_func(self.data.T)
def get_filtered_df(self, fill_type="median"):
q1_x, q2_x = np.percentile(self.data, [self.q1, self.q2])
if fill_type == "median":
return self.data[(self.data >= q1_x) & (self.data <= q2_x)].apply(
lambda x: x.fillna(x.median())
)
elif fill_type == "mean":
return self.data[(self.data >= q1_x) & (self.data <= q2_x)].apply(
lambda x: x.fillna(x.mean())
)
else:
raise ValueError("fill_type must be one of ('median', 'mean')")
Explanation: Outlier Detection with bqplot
In this notebook, we create a class DNA that leverages the new bqplot canvas based HeatMap along with the ipywidgets Range Slider to help us detect and clean outliers in our data. The class accepts a DataFrame and allows you to visually and programmatically filter your outliers. The cleaned DataFrame can then be retrieved through a simple convenience function.
End of explanation
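A quick way to see what quantile_space does is to apply it to some random data and check that the squashed values stay inside [-1, 1]; a small illustrative check, not part of the original notebook:
sample = np.random.randn(1000)
squash = quantile_space(sample, q1=5, q2=95)
squashed = squash(sample)
print(squashed.min(), squashed.max())   # both should lie within [-1, 1]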
size = 100
def num_to_col_letters(num):
letters = ""
while num:
mod = (num - 1) % 26
letters += chr(mod + 65)
num = (num - 1) // 26
return "".join(reversed(letters))
letters = []
for i in range(1, size + 1):
letters.append(num_to_col_letters(i))
data = pd.DataFrame(np.random.randn(size, size), columns=letters)
data_dna = DNA(
data, title="DNA of our Data", height="1400px", colors=["Red", "White", "Green"]
)
data_dna
Explanation: We define the size of our matrix here. Larger matrices require a larger height.
End of explanation
data_dna.q1, data_dna.q2 = 5, 95
Explanation: Instead of setting the quantiles by the sliders, we can also set them programmatically. Using a range of (5, 95) restricts the data considerably.
End of explanation
data_clean = data_dna.get_filtered_df()
Explanation: Now, we can use the convenience function to extract a clean DataFrame.
End of explanation
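The same filtering can be reproduced by hand with NumPy percentiles, which is a useful sanity check on what get_filtered_df returns; a sketch assuming the (5, 95) quantile range set above:
q1_x, q2_x = np.percentile(data.values, [5, 95])
manual_clean = data[(data >= q1_x) & (data <= q2_x)].apply(lambda c: c.fillna(c.median()))
print(manual_clean.equals(data_clean))   # expected to be True when the quantiles match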
data_mean = data_dna.get_filtered_df(fill_type="mean")
Explanation: The DNA fills outliers with the median of the column. Alternatively, we can fill the outliers with the mean.
End of explanation
DNA(data_clean, title="Cleaned Data", height="1200px", colors=["Red", "White", "Green"])
Explanation: We can also visualize the new DataFrame the same way to test how our outliers look now.
End of explanation |
1,758 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Accessing and Plotting Meshes
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step2: The 'Mesh' Dataset
NOTE
Step3: Note that we can now manually set the times of the mesh AND/OR reference the times for existing non-mesh datasets (such as the light curve we just added) as well as any of the various t0s in the system.
Step4: By default, the mesh only exposes the geometric columns of the triangles
Step5: But we can also specify other columns to be included (by setting the columns parameter before calling run_compute)
Step6: Any of the exposed columns are then available for plotting the mesh, via b.plot. | Python Code:
!pip install -I "phoebe>=2.1,<2.2"
Explanation: Accessing and Plotting Meshes
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
End of explanation
b.add_dataset('lc', times=np.linspace(0,1,6))
b.add_dataset('mesh')
print b['times@mesh']
print b['include_times@mesh']
Explanation: The 'Mesh' Dataset
NOTE: the "pbmesh" and "protomesh" have been removed as of PHOEBE 2.1+.
You must create a mesh dataset and specify the times and columns which you'd like exposed. For more information, see the tutorial on the MESH dataset.
The mesh will be exposed at the times specified by the 'times' Parameter, as well as any times referenced by the 'include_times' SelectParameter.
So let's add a LC and MESH dataset.
End of explanation
b['times@mesh'] = [10]
b['include_times@mesh'] = ['lc01']
b.run_compute()
print b['mesh@model'].times
Explanation: Note that we can now manually set the times of the mesh AND/OR reference the times for existing non-mesh datasets (such as the light curve we just added) as well as any of the various t0s in the system.
End of explanation
print b['mesh@model'].qualifiers
Explanation: By default, the mesh only exposes the geometric columns of the triangles
End of explanation
print b['columns@mesh']
b['columns@mesh'] = ['teffs']
b.run_compute()
print b['mesh@model'].qualifiers
print b.get_value('teffs', time=0.0, component='primary')
Explanation: But we can also specify other columns to be included (by setting the columns parameter before calling run_compute)
End of explanation
afig, mplfig = b['mesh@model'].plot(time=0.2, fc='teffs', ec='none', show=True)
Explanation: Any of the exposed columns are then available for plotting the mesh, via b.plot.
End of explanation |
1,759 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<H1>Multivariate regression</H1>
Step1: Let's evaluate how much the membrane potential depends on the input resistance,
membrane time constant and the sag ratio. We will create the following multivariate function | Python Code:
%pylab inline
import pandas as pd
mypath = 'Cell_types.xlsx'
xls = pd.read_excel(mypath)
xls.head()
xls.InputR
xls['Vrest'].mean()
xls['Vrest'].unique() # get NumPy array
Explanation: <H1>Multivariate regression</H1>
End of explanation
x = xls[['InputR', 'SagRatio','mbTau']]
y = xls[['Vrest']]
# import standard regression models (sm)
import statsmodels.api as sm
X1 = sm.add_constant(x) # k0, k1, k2 and k3...
# get estimation
est = sm.OLS(y, X1).fit() # ordinary least square regression
est.summary()
Explanation: Let's evaluate how much the membrane potential depends on the input resistance,
the membrane time constant, and the sag ratio. We will create the following multivariate function:
$f(k;x) = k_0 + k_1x_1 + k_2x_2 + k_3x_3$
where k is a vector of parameters (constants) and x is a vector of independent variables (i.e. x_1 is the input resistance, x_2 the sag ratio, and x_3 the membrane time constant, matching the column order used in the code)
End of explanation |
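Once the model is fitted, the estimated coefficients and the fitted values can be inspected directly; a short sketch using standard statsmodels attributes:
# Estimated coefficients: const (k0), InputR (k1), SagRatio (k2), mbTau (k3)
print(est.params)
# Fitted membrane potential next to the observed values for the first few cells
predicted = est.predict(X1)
print(predicted.head())
print(y.head())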
1,760 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cross Validation
Step1: cross_val_score uses the KFold or StratifiedKFold strategies by default
Step2: Cross Validation Iterator
K-Fold - KFold divides all the samples in k groups of samples, called folds (if k = n, this is equivalent to the Leave One Out strategy), of equal sizes (if possible). The prediction function is learned using k - 1 folds, and the fold left out is used for test.
Step3: Leave One Out (LOO) - LeaveOneOut (or LOO) is a simple cross-validation. Each learning set is created by taking all the samples except one, the test set being the sample left out. Thus, for n samples, we have n different training sets and n different test sets. This cross-validation procedure does not waste much data as only one sample is removed from the training set
Step4: Leave P Out (LPO) - LeavePOut is very similar to LeaveOneOut as it creates all the possible training/test sets by removing p samples from the complete set. For n samples, this produces $\binom{n}{p}$ train-test pairs. Unlike LeaveOneOut and KFold, the test sets will overlap for p > 1
Step5: Random permutations cross-validation a.k.a. Shuffle & Split - The ShuffleSplit iterator will generate a user defined number of independent train / test dataset splits. Samples are first shuffled and then split into a pair of train and test sets.
It is possible to control the randomness for reproducibility of the results by explicitly seeding the random_state pseudo random number generator.
Step6: Some classification problems can exhibit a large imbalance in the distribution of the target classes | Python Code:
# import
from sklearn.datasets import load_iris
from sklearn.cross_validation import cross_val_score, KFold, train_test_split, cross_val_predict, LeaveOneOut, LeavePOut
from sklearn.cross_validation import ShuffleSplit, StratifiedKFold, StratifiedShuffleSplit
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC
from scipy.stats import sem
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
iris = load_iris()
X, y = iris.data, iris.target
# splitting the data into train and test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=27)
print(X_train.shape, X_test.shape, X_train.shape[0])
Explanation: Cross Validation
End of explanation
# define cross_val func
def xVal_score(clf, X, y, K):
# creating K using KFold
cv = KFold(n=X.shape[0], n_folds=K, shuffle=True, random_state=True)
# Can use suffle as well
# cv = ShuffleSplit(n_splits=3, test_size=0.3, random_state=0)
# doing cross validation
scores = cross_val_score(clf, X, y, cv=cv)
print(scores)
print("Accuracy Mean : %0.3f" %np.mean(scores))
print("Std : ", np.std(scores)*2)
print("Standard Err : +/- {0:0.3f} ".format(sem(scores)))
svc1 = SVC()
xVal_score(svc1, X_train, y_train, 10)
# define cross_val predict
# The function cross_val_predict has a similar interface to cross_val_score, but returns,
# for each element in the input, the prediction that was obtained for that element when it
# was in the test set. Only cross-validation strategies that assign all elements to a test
# set exactly once can be used (otherwise, an exception is raised).
def xVal_predict(clf, X, y, K):
# creating K using KFold
cv = KFold(n=X.shape[0], n_folds=K, shuffle=True, random_state=True)
# Can use suffle as well
# cv = ShuffleSplit(n_splits=3, test_size=0.3, random_state=0)
# doing cross validation prediction
predicted = cross_val_predict(clf, X, y, cv=cv)
print(predicted)
print("Accuracy Score : %0.3f" % accuracy_score(y, predicted))
xVal_predict(svc1, X_train, y_train, 10)
Explanation: cross_val_score uses the KFold or StratifiedKFold strategies by default
End of explanation
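For reference, the default strategy mentioned above can be exercised by simply passing an integer number of folds; a minimal sketch:
# Passing an integer lets cross_val_score build the folds itself
scores_default = cross_val_score(SVC(), X_train, y_train, cv=5)
print(scores_default, scores_default.mean())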
X = [1,2,3,4,5]
kf = KFold(n=len(X), n_folds=2)
print(kf)
for i in kf:
print(i)
Explanation: Cross Validation Iterator
K-Fold - KFold divides all the samples in k groups of samples, called folds (if k = n, this is equivalent to the Leave One Out strategy), of equal sizes (if possible). The prediction function is learned using k - 1 folds, and the fold left out is used for test.
End of explanation
X = [1,2,3,4,5]
loo = LeaveOneOut(len(X))
print(loo)
for i in loo:
print(i)
Explanation: Leave One Out (LOO) - LeaveOneOut (or LOO) is a simple cross-validation. Each learning set is created by taking all the samples except one, the test set being the sample left out. Thus, for n samples, we have n different training sets and n different test sets. This cross-validation procedure does not waste much data as only one sample is removed from the training set:
End of explanation
X = [1,2,3,4,5]
loo = LeavePOut(len(X), p=3)
print(loo)
for i in loo:
print(i)
Explanation: Leave P Out (LPO) - LeavePOut is very similar to LeaveOneOut as it creates all the possible training/test sets by removing p samples from the complete set. For n samples, this produces $\binom{n}{p}$ train-test pairs. Unlike LeaveOneOut and KFold, the test sets will overlap for p > 1
End of explanation
X = [1,2,3,4,5]
loo = ShuffleSplit(len(X))
print(loo)
for i in loo:
print(i)
Explanation: Random permutations cross-validation a.k.a. Shuffle & Split - The ShuffleSplit iterator will generate a user defined number of independent train / test dataset splits. Samples are first shuffled and then split into a pair of train and test sets.
It is possible to control the randomness for reproducibility of the results by explicitly seeding the random_state pseudo random number generator.
End of explanation
X = np.ones(10)
y = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
skf = StratifiedKFold(n_folds=4, y=y)
for i in skf:
print(i)
Explanation: Some classification problems can exhibit a large imbalance in the distribution of the target classes: for instance there could be several times more negative samples than positive samples. In such cases it is recommended to use stratified sampling as implemented in StratifiedKFold and StratifiedShuffleSplit to ensure that relative class frequencies is approximately preserved in each train and validation fold.
Stratified k-fold
StratifiedKFold is a variation of k-fold which returns stratified folds: each set contains approximately the same percentage of samples of each target class as the complete set.
End of explanation |
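StratifiedShuffleSplit (already imported above) combines shuffling with stratification; a short sketch using the older sklearn.cross_validation signature, which takes the labels directly (check the exact signature against your installed version):
y = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
sss = StratifiedShuffleSplit(y, n_iter=3, test_size=0.3, random_state=0)
for train_index, test_index in sss:
    print("train:", train_index, "test:", test_index)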
1,761 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<small><i>This notebook was put together by Jake Vanderplas. Source and license info is on GitHub.</i></small>
Density Estimation
Step1: Introducing Gaussian Mixture Models
We previously saw an example of K-Means, which is a clustering algorithm which is most often fit using an expectation-maximization approach.
Here we'll consider an extension to this which is suitable for both clustering and density estimation.
For example, imagine we have some one-dimensional data in a particular distribution
Step2: Gaussian mixture models will allow us to approximate this density
Step3: Note that this density is fit using a mixture of Gaussians, which we can examine by looking at the means_, covars_, and weights_ attributes
Step4: These individual Gaussian distributions are fit using an expectation-maximization method, much as in K means, except that rather than explicit cluster assignment, the posterior probability is used to compute the weighted mean and covariance.
Somewhat surprisingly, this algorithm provably converges to the optimum (though the optimum is not necessarily global).
How many Gaussians?
Given a model, we can use one of several means to evaluate how well it fits the data.
For example, there is the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC)
Step5: Let's take a look at these as a function of the number of gaussians
Step6: It appears that for both the AIC and BIC, 4 components is preferred.
Example
Step7: Now let's evaluate the log-likelihood of each point under the model, and plot these as a function of y
Step8: The algorithm misses a few of these points, which is to be expected (some of the "outliers" actually land in the middle of the distribution!)
Here are the outliers that were missed
Step9: And here are the non-outliers which were spuriously labeled outliers
Step10: Finally, we should note that although all of the above is done in one dimension, GMM does generalize to multiple dimensions, as we'll see in the breakout session.
Other Density Estimators
The other main density estimator that you might find useful is Kernel Density Estimation, which is available via sklearn.neighbors.KernelDensity. In some ways, this can be thought of as a generalization of GMM where there is a gaussian placed at the location of every training point! | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
plt.style.use('seaborn')
Explanation: <small><i>This notebook was put together by Jake Vanderplas. Source and license info is on GitHub.</i></small>
Density Estimation: Gaussian Mixture Models
Here we'll explore Gaussian Mixture Models, which is an unsupervised clustering & density estimation technique.
We'll start with our standard set of initial imports
End of explanation
np.random.seed(2)
a=np.random.normal(0, 2, 2000)
b=np.random.normal(5, 5, 2000)
c=np.random.normal(3, 0.5, 600)
x = np.concatenate([a,
b,
c])
ax=plt.figure().gca()
ax.hist(x, 80, density=True,color='r')
# ax.hist(c, 80, density=True,color='g')
ax.set_xlim(-10, 20);
Explanation: Introducing Gaussian Mixture Models
We previously saw an example of K-Means, which is a clustering algorithm which is most often fit using an expectation-maximization approach.
Here we'll consider an extension to this which is suitable for both clustering and density estimation.
For example, imagine we have some one-dimensional data in a particular distribution:
End of explanation
from sklearn.mixture import GaussianMixture as GMM
X = x[:, np.newaxis]
clf = GMM(5, max_iter=500, random_state=3).fit(X)
xpdf = np.linspace(-10, 20, 1000)
density = np.array([np.exp(clf.score([[xp]])) for xp in xpdf])
plt.hist(x, 80, density=True, alpha=0.5)
plt.plot(xpdf, density, '-r')
plt.xlim(-10, 20);
Explanation: Gaussian mixture models will allow us to approximate this density:
End of explanation
clf.means_
clf.covariances_
clf.weights_
plt.hist(x, 80, density=True, alpha=0.3)
plt.plot(xpdf, density, '-r')
for i in range(clf.n_components):
pdf = clf.weights_[i] * stats.norm(clf.means_[i, 0],
np.sqrt(clf.covariances_[i, 0])).pdf(xpdf)
plt.fill(xpdf, pdf, facecolor='gray',
edgecolor='none', alpha=0.3)
plt.xlim(-10, 20);
Explanation: Note that this density is fit using a mixture of Gaussians, which we can examine by looking at the means_, covars_, and weights_ attributes:
End of explanation
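Because the model is generative, it can also be used for clustering: each point can be assigned to the component with the highest responsibility. A brief sketch using the fitted model above:
# Hard cluster assignments and per-component responsibilities
labels = clf.predict(X)
probs = clf.predict_proba(X)
print(labels[:10])
print(probs[:3].round(3))   # each row sums to 1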
print(clf.bic(X))
print(clf.aic(X))
Explanation: These individual Gaussian distributions are fit using an expectation-maximization method, much as in K means, except that rather than explicit cluster assignment, the posterior probability is used to compute the weighted mean and covariance.
Somewhat surprisingly, this algorithm provably converges to the optimum (though the optimum is not necessarily global).
How many Gaussians?
Given a model, we can use one of several means to evaluate how well it fits the data.
For example, there is the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC)
End of explanation
n_estimators = np.arange(1, 10)
clfs = [GMM(n, max_iter=1000).fit(X) for n in n_estimators]
bics = [clf.bic(X) for clf in clfs]
aics = [clf.aic(X) for clf in clfs]
plt.plot(n_estimators, bics, label='BIC')
plt.plot(n_estimators, aics, label='AIC')
plt.legend();
Explanation: Let's take a look at these as a function of the number of gaussians:
End of explanation
np.random.seed(0)
# Add 20 outliers
true_outliers = np.sort(np.random.randint(0, len(x), 20))
y = x.copy()
y[true_outliers] += 50 * np.random.randn(20)
clf = GMM(4, max_iter=500, random_state=0).fit(y[:, np.newaxis])
xpdf = np.linspace(-10, 20, 1000)
density_noise = np.array([np.exp(clf.score([[xp]])) for xp in xpdf])
plt.hist(y, 80, density=True, alpha=0.5)
plt.plot(xpdf, density_noise, '-r')
plt.xlim(-15, 30);
Explanation: It appears that for both the AIC and BIC, 4 components is preferred.
Example: GMM For Outlier Detection
GMM is what's known as a Generative Model: it's a probabilistic model from which a dataset can be generated.
One thing that generative models can be useful for is outlier detection: we can simply evaluate the likelihood of each point under the generative model; the points with a suitably low likelihood (where "suitable" is up to your own bias/variance preference) can be labeled outliers.
Let's take a look at this by defining a new dataset with some outliers:
End of explanation
log_likelihood = np.array([clf.score_samples([[yy]]) for yy in y])
# log_likelihood = clf.score_samples(y[:, np.newaxis])[0]
plt.plot(y, log_likelihood, '.k');
detected_outliers = np.where(log_likelihood < -9)[0]
print("true outliers:")
print(true_outliers)
print("\ndetected outliers:")
print(detected_outliers)
Explanation: Now let's evaluate the log-likelihood of each point under the model, and plot these as a function of y:
End of explanation
set(true_outliers) - set(detected_outliers)
Explanation: The algorithm misses a few of these points, which is to be expected (some of the "outliers" actually land in the middle of the distribution!)
Here are the outliers that were missed:
End of explanation
set(detected_outliers) - set(true_outliers)
Explanation: And here are the non-outliers which were spuriously labeled outliers:
End of explanation
from sklearn.neighbors import KernelDensity
kde = KernelDensity(0.15).fit(x[:, None])
density_kde = np.exp(kde.score_samples(xpdf[:, None]))
plt.hist(x, 80, density=True, alpha=0.5)
plt.plot(xpdf, density, '-b', label='GMM')
plt.plot(xpdf, density_kde, '-r', label='KDE')
plt.xlim(-10, 20)
plt.legend();
Explanation: Finally, we should note that although all of the above is done in one dimension, GMM does generalize to multiple dimensions, as we'll see in the breakout session.
Other Density Estimators
The other main density estimator that you might find useful is Kernel Density Estimation, which is available via sklearn.neighbors.KernelDensity. In some ways, this can be thought of as a generalization of GMM where there is a gaussian placed at the location of every training point!
End of explanation |
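The KDE bandwidth (0.15 above) was chosen by hand; a common alternative is to pick it by cross-validation. A minimal sketch using scikit-learn's GridSearchCV (module path may differ between sklearn versions):
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity
# Search over a small grid of bandwidths using 5-fold cross-validation
grid = GridSearchCV(KernelDensity(), {'bandwidth': np.linspace(0.05, 1.0, 20)}, cv=5)
grid.fit(x[:, None])
print(grid.best_params_)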
1,762 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Chemistry Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 1.8. Coupling With Chemical Reactivity
Is Required
Step12: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step13: 2.2. Code Version
Is Required
Step14: 2.3. Code Languages
Is Required
Step15: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required
Step16: 3.2. Split Operator Advection Timestep
Is Required
Step17: 3.3. Split Operator Physical Timestep
Is Required
Step18: 3.4. Split Operator Chemistry Timestep
Is Required
Step19: 3.5. Split Operator Alternate Order
Is Required
Step20: 3.6. Integrated Timestep
Is Required
Step21: 3.7. Integrated Scheme Type
Is Required
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required
Step23: 4.2. Convection
Is Required
Step24: 4.3. Precipitation
Is Required
Step25: 4.4. Emissions
Is Required
Step26: 4.5. Deposition
Is Required
Step27: 4.6. Gas Phase Chemistry
Is Required
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required
Step30: 4.9. Photo Chemistry
Is Required
Step31: 4.10. Aerosols
Is Required
Step32: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required
Step33: 5.2. Global Mean Metrics Used
Is Required
Step34: 5.3. Regional Metrics Used
Is Required
Step35: 5.4. Trend Metrics Used
Is Required
Step36: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required
Step37: 6.2. Matches Atmosphere Grid
Is Required
Step38: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required
Step39: 7.2. Canonical Horizontal Resolution
Is Required
Step40: 7.3. Number Of Horizontal Gridpoints
Is Required
Step41: 7.4. Number Of Vertical Levels
Is Required
Step42: 7.5. Is Adaptive Grid
Is Required
Step43: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required
Step44: 8.2. Use Atmospheric Transport
Is Required
Step45: 8.3. Transport Details
Is Required
Step46: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required
Step47: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required
Step48: 10.2. Method
Is Required
Step49: 10.3. Prescribed Climatology Emitted Species
Is Required
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step51: 10.5. Interactive Emitted Species
Is Required
Step52: 10.6. Other Emitted Species
Is Required
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required
Step54: 11.2. Method
Is Required
Step55: 11.3. Prescribed Climatology Emitted Species
Is Required
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step57: 11.5. Interactive Emitted Species
Is Required
Step58: 11.6. Other Emitted Species
Is Required
Step59: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required
Step60: 12.2. Prescribed Upper Boundary
Is Required
Step61: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required
Step62: 13.2. Species
Is Required
Step63: 13.3. Number Of Bimolecular Reactions
Is Required
Step64: 13.4. Number Of Termolecular Reactions
Is Required
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required
Step67: 13.7. Number Of Advected Species
Is Required
Step68: 13.8. Number Of Steady State Species
Is Required
Step69: 13.9. Interactive Dry Deposition
Is Required
Step70: 13.10. Wet Deposition
Is Required
Step71: 13.11. Wet Oxidation
Is Required
Step72: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required
Step73: 14.2. Gas Phase Species
Is Required
Step74: 14.3. Aerosol Species
Is Required
Step75: 14.4. Number Of Steady State Species
Is Required
Step76: 14.5. Sedimentation
Is Required
Step77: 14.6. Coagulation
Is Required
Step78: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required
Step79: 15.2. Gas Phase Species
Is Required
Step80: 15.3. Aerosol Species
Is Required
Step81: 15.4. Number Of Steady State Species
Is Required
Step82: 15.5. Interactive Dry Deposition
Is Required
Step83: 15.6. Coagulation
Is Required
Step84: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required
Step85: 16.2. Number Of Reactions
Is Required
Step86: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required
Step87: 17.2. Environmental Conditions
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nims-kma', 'sandbox-2', 'atmoschem')
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: NIMS-KMA
Source ID: SANDBOX-2
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:28
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry transport scheme turbulence is couple with chemical reactivity?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
*Does the atmospheric chemistry grid match the atmosphere grid?*
End of explanation
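As a hedged illustration of completing a boolean property such as the one above, a model whose chemistry shares the atmosphere grid would pick one of the two listed choices, e.g.
DOC.set_value(True)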
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
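For a string property like the resolution name above, a completed cell might look like the line below; the grid name is a made-up placeholder, not an actual model grid.
DOC.set_value("T42L39")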
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, e.g. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Emissions Concentrations --> Surface Emissions
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry gas phase chemistry
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation |
1,763 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
Step1: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
Step2: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise
Step3: Training
Step4: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here; we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
Step5: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts. | Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
Explanation: A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
End of explanation
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
Explanation: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
End of explanation
# Size of the encoding layer (the hidden layer)
encoding_dim = 32 # feel free to change this value
inputs_ = tf.placeholder(tf.float32, [None, 784])
targets_ = tf.placeholder(tf.float32, [None, 784])
# Output of hidden layer
encoded = tf.layers.dense(inputs_, encoding_dim, tf.nn.relu)
# Output layer logits
logits = tf.layers.dense(encoded, 784)
# Sigmoid output from logits
decoded = tf.nn.sigmoid(logits)
# Sigmoid cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Mean of the loss
cost = tf.reduce_mean(loss)
# Adam optimizer
opt = tf.train.AdamOptimizer().minimize(cost)
Explanation: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.
End of explanation
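As an optional sanity check on the wiring above (assuming the 32-unit encoding size), the static shapes of the intermediate tensors can be printed:
print(encoded.get_shape().as_list())  # expect [None, 32]
print(logits.get_shape().as_list())   # expect [None, 784]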
# Create the session
sess = tf.Session()
Explanation: Training
End of explanation
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
Explanation: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here; we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
End of explanation
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
Explanation: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
End of explanation |
1,764 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial Part 5
Step1: There are actually two different approaches you can take to using TensorFlow or PyTorch models with DeepChem. It depends on whether you want to use TensorFlow/PyTorch APIs or DeepChem APIs for training and evaluating your model. For the former case, DeepChem's Dataset class has methods for easily adapting it to use with other frameworks. make_tf_dataset() returns a tensorflow.data.Dataset object that iterates over the data. make_pytorch_dataset() returns a torch.utils.data.IterableDataset that iterates over the data. This lets you use DeepChem's datasets, loaders, featurizers, transformers, splitters, etc. and easily integrate them into your existing TensorFlow or PyTorch code.
But DeepChem also provides many other useful features. The other approach, which lets you use those features, is to wrap your model in a DeepChem Model object. Let's look at how to do that.
KerasModel
KerasModel is a subclass of DeepChem's Model class. It acts as a wrapper around a tensorflow.keras.Model. Let's see an example of using it. For this example, we create a simple sequential model consisting of two dense layers.
Step2: For this example, we used the Keras Sequential class. Our model consists of a dense layer with ReLU activation, 50% dropout to provide regularization, and a final layer that produces a scalar output. We also need to specify the loss function to use when training the model, in this case L<sub>2</sub> loss. We can now train and evaluate the model exactly as we would with any other DeepChem model. For example, let's load the Delaney solubility dataset. How does our model do at predicting the solubilities of molecules based on their extended-connectivity fingerprints (ECFPs)?
Step3: TorchModel
TorchModel works just like KerasModel, except it wraps a torch.nn.Module. Let's use PyTorch to create another model just like the previous one and train it on the same data.
Step4: Computing Losses
Now let's see a more advanced example. In the above models, the loss was computed directly from the model's output. Often that is fine, but not always. Consider a classification model that outputs a probability distribution. While it is possible to compute the loss from the probabilities, it is more numerically stable to compute it from the logits.
To do this, we create a model that returns multiple outputs, both probabilities and logits. KerasModel and TorchModel let you specify a list of "output types". If a particular output has type 'prediction', that means it is a normal output that should be returned when you call predict(). If it has type 'loss', that means it should be passed to the loss function in place of the normal outputs.
Sequential models do not allow multiple outputs, so instead we use a subclassing style model.
Step5: We can train our model on the BACE dataset. This is a binary classification task that tries to predict whether a molecule will inhibit the enzyme BACE-1. | Python Code:
!curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py
import conda_installer
conda_installer.install()
!/root/miniconda/bin/conda info -e
!pip install --pre deepchem
Explanation: Tutorial Part 5: Creating Models with TensorFlow and PyTorch
In the tutorials so far, we have used standard models provided by DeepChem. This is fine for many applications, but sooner or later you will want to create an entirely new model with an architecture you define yourself. DeepChem provides integration with both TensorFlow (Keras) and PyTorch, so you can use it with models from either of these frameworks.
Colab
This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
Setup
To run DeepChem within Colab, you'll need to run the following installation commands. This will take about 5 minutes to run to completion and install your environment. You can of course run this tutorial locally if you prefer. In that case, don't run these cells since they will download and install Anaconda on your local machine.
End of explanation
import deepchem as dc
import tensorflow as tf
keras_model = tf.keras.Sequential([
tf.keras.layers.Dense(1000, activation='relu'),
tf.keras.layers.Dropout(rate=0.5),
tf.keras.layers.Dense(1)
])
model = dc.models.KerasModel(keras_model, dc.models.losses.L2Loss())
Explanation: There are actually two different approaches you can take to using TensorFlow or PyTorch models with DeepChem. It depends on whether you want to use TensorFlow/PyTorch APIs or DeepChem APIs for training and evaluating your model. For the former case, DeepChem's Dataset class has methods for easily adapting it to use with other frameworks. make_tf_dataset() returns a tensorflow.data.Dataset object that iterates over the data. make_pytorch_dataset() returns a torch.utils.data.IterableDataset that iterates over the data. This lets you use DeepChem's datasets, loaders, featurizers, transformers, splitters, etc. and easily integrate them into your existing TensorFlow or PyTorch code.
But DeepChem also provides many other useful features. The other approach, which lets you use those features, is to wrap your model in a DeepChem Model object. Let's look at how to do that.
KerasModel
KerasModel is a subclass of DeepChem's Model class. It acts as a wrapper around a tensorflow.keras.Model. Let's see an example of using it. For this example, we create a simple sequential model consisting of two dense layers.
End of explanation
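As an aside, here is a minimal sketch of the first approach described above, assuming a DeepChem dataset object such as the train_dataset loaded below; the keyword arguments follow my reading of the DeepChem API and should be checked against the current documentation.
# Iterate over a DeepChem Dataset with native TensorFlow / PyTorch tooling.
tf_data = train_dataset.make_tf_dataset(batch_size=64, epochs=1)
for batch in tf_data:
    pass  # each element bundles the features, labels and weights for one batch
torch_data = train_dataset.make_pytorch_dataset(epochs=1, batch_size=64)
for batch in torch_data:
    pass  # plug these batches into an ordinary PyTorch training loop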
tasks, datasets, transformers = dc.molnet.load_delaney(featurizer='ECFP', splitter='random')
train_dataset, valid_dataset, test_dataset = datasets
model.fit(train_dataset, nb_epoch=50)
metric = dc.metrics.Metric(dc.metrics.pearson_r2_score)
print('training set score:', model.evaluate(train_dataset, [metric]))
print('test set score:', model.evaluate(test_dataset, [metric]))
Explanation: For this example, we used the Keras Sequential class. Our model consists of a dense layer with ReLU activation, 50% dropout to provide regularization, and a final layer that produces a scalar output. We also need to specify the loss function to use when training the model, in this case L<sub>2</sub> loss. We can now train and evaluate the model exactly as we would with any other DeepChem model. For example, let's load the Delaney solubility dataset. How does our model do at predicting the solubilities of molecules based on their extended-connectivity fingerprints (ECFPs)?
End of explanation
import torch
pytorch_model = torch.nn.Sequential(
torch.nn.Linear(1024, 1000),
torch.nn.ReLU(),
torch.nn.Dropout(0.5),
torch.nn.Linear(1000, 1)
)
model = dc.models.TorchModel(pytorch_model, dc.models.losses.L2Loss())
model.fit(train_dataset, nb_epoch=50)
print('training set score:', model.evaluate(train_dataset, [metric]))
print('test set score:', model.evaluate(test_dataset, [metric]))
Explanation: TorchModel
TorchModel works just like KerasModel, except it wraps a torch.nn.Module. Let's use PyTorch to create another model just like the previous one and train it on the same data.
End of explanation
class ClassificationModel(tf.keras.Model):
def __init__(self):
super(ClassificationModel, self).__init__()
self.dense1 = tf.keras.layers.Dense(1000, activation='relu')
self.dense2 = tf.keras.layers.Dense(1)
def call(self, inputs, training=False):
y = self.dense1(inputs)
if training:
y = tf.nn.dropout(y, 0.5)
logits = self.dense2(y)
output = tf.nn.sigmoid(logits)
return output, logits
keras_model = ClassificationModel()
output_types = ['prediction', 'loss']
model = dc.models.KerasModel(keras_model, dc.models.losses.SigmoidCrossEntropy(), output_types=output_types)
Explanation: Computing Losses
Now let's see a more advanced example. In the above models, the loss was computed directly from the model's output. Often that is fine, but not always. Consider a classification model that outputs a probability distribution. While it is possible to compute the loss from the probabilities, it is more numerically stable to compute it from the logits.
To do this, we create a model that returns multiple outputs, both probabilities and logits. KerasModel and TorchModel let you specify a list of "output types". If a particular output has type 'prediction', that means it is a normal output that should be returned when you call predict(). If it has type 'loss', that means it should be passed to the loss function in place of the normal outputs.
Sequential models do not allow multiple outputs, so instead we use a subclassing style model.
End of explanation
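For completeness, a rough PyTorch counterpart of the same idea is sketched below. It is an untested mirror of the Keras version (the 1024 input size assumes the default ECFP fingerprint length); the point is simply that TorchModel accepts the same output_types argument.
import torch.nn.functional as F

class TorchClassificationModel(torch.nn.Module):
    def __init__(self):
        super(TorchClassificationModel, self).__init__()
        self.dense1 = torch.nn.Linear(1024, 1000)
        self.dense2 = torch.nn.Linear(1000, 1)
    def forward(self, inputs):
        y = F.relu(self.dense1(inputs))
        y = F.dropout(y, p=0.5, training=self.training)
        logits = self.dense2(y)
        output = torch.sigmoid(logits)
        return output, logits

torch_classifier = dc.models.TorchModel(
    TorchClassificationModel(), dc.models.losses.SigmoidCrossEntropy(),
    output_types=['prediction', 'loss'])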
tasks, datasets, transformers = dc.molnet.load_bace_classification(featurizer='ECFP', splitter='scaffold')
train_dataset, valid_dataset, test_dataset = datasets
model.fit(train_dataset, nb_epoch=100)
metric = dc.metrics.Metric(dc.metrics.roc_auc_score)
print('training set score:', model.evaluate(train_dataset, [metric]))
print('test set score:', model.evaluate(test_dataset, [metric]))
Explanation: We can train our model on the BACE dataset. This is a binary classification task that tries to predict whether a molecule will inhibit the enzyme BACE-1.
End of explanation |
1,765 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
You are currently looking at version 1.0 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.
Applied Machine Learning
Step1: Dummy Classifiers
DummyClassifier is a classifier that makes predictions using simple rules, which can be useful as a baseline for comparison against actual classifiers, especially with imbalanced classes.
Step2: Confusion matrices
Binary (two-class) confusion matrix
Step3: Evaluation metrics for binary classification
Step4: Decision functions
Step5: Precision-recall curves
Step6: ROC curves, Area-Under-Curve (AUC)
Step7: Evaluation measures for multi-class classification
Multi-class confusion matrix
Step8: Multi-class classification report
Step9: Micro- vs. macro-averaged metrics
Step10: Regression evaluation metrics
Step11: Model selection using evaluation metrics
Cross-validation example
Step12: Grid search example
Step13: Evaluation metrics supported for model selection
Step14: Two-feature classification example using the digits dataset
Optimizing a classifier using different evaluation metrics
Step15: Precision-recall curve for the default SVC classifier (with balanced class weights) | Python Code:
%matplotlib notebook
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_digits
dataset = load_digits()
X, y = dataset.data, dataset.target
for class_name, class_count in zip(dataset.target_names, np.bincount(dataset.target)):
print(class_name,class_count)
# Creating a dataset with imbalanced binary classes:
# Negative class (0) is 'not digit 1'
# Positive class (1) is 'digit 1'
y_binary_imbalanced = y.copy()
y_binary_imbalanced[y_binary_imbalanced != 1] = 0
print('Original labels:\t', y[1:30])
print('New binary labels:\t', y_binary_imbalanced[1:30])
np.bincount(y_binary_imbalanced) # Negative class (0) is the most frequent class
X_train, X_test, y_train, y_test = train_test_split(X, y_binary_imbalanced, random_state=0)
# Accuracy of Support Vector Machine classifier
from sklearn.svm import SVC
svm = SVC(kernel='rbf', C=1).fit(X_train, y_train)
svm.score(X_test, y_test)
Explanation: You are currently looking at version 1.0 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.
Applied Machine Learning: Module 3 (Evaluation)
Evaluation for Classification
Preamble
End of explanation
from sklearn.dummy import DummyClassifier
# Negative class (0) is most frequent
dummy_majority = DummyClassifier(strategy = 'most_frequent').fit(X_train, y_train)
# Therefore the dummy 'most_frequent' classifier always predicts class 0
y_dummy_predictions = dummy_majority.predict(X_test)
y_dummy_predictions
dummy_majority.score(X_test, y_test)
svm = SVC(kernel='linear', C=1).fit(X_train, y_train)
svm.score(X_test, y_test)
Explanation: Dummy Classifiers
DummyClassifier is a classifier that makes predictions using simple rules, which can be useful as a baseline for comparison against actual classifiers, especially with imbalanced classes.
End of explanation
from sklearn.metrics import confusion_matrix
# Negative class (0) is most frequent
dummy_majority = DummyClassifier(strategy = 'most_frequent').fit(X_train, y_train)
y_majority_predicted = dummy_majority.predict(X_test)
confusion = confusion_matrix(y_test, y_majority_predicted)
print('Most frequent class (dummy classifier)\n', confusion)
# produces random predictions w/ same class proportion as training set
dummy_classprop = DummyClassifier(strategy='stratified').fit(X_train, y_train)
y_classprop_predicted = dummy_classprop.predict(X_test)
confusion = confusion_matrix(y_test, y_classprop_predicted)
print('Random class-proportional prediction (dummy classifier)\n', confusion)
svm = SVC(kernel='linear', C=1).fit(X_train, y_train)
svm_predicted = svm.predict(X_test)
confusion = confusion_matrix(y_test, svm_predicted)
print('Support vector machine classifier (linear kernel, C=1)\n', confusion)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression().fit(X_train, y_train)
lr_predicted = lr.predict(X_test)
confusion = confusion_matrix(y_test, lr_predicted)
print('Logistic regression classifier (default settings)\n', confusion)
from sklearn.tree import DecisionTreeClassifier
dt = DecisionTreeClassifier(max_depth=2).fit(X_train, y_train)
tree_predicted = dt.predict(X_test)
confusion = confusion_matrix(y_test, tree_predicted)
print('Decision tree classifier (max_depth = 2)\n', confusion)
Explanation: Confusion matrices
Binary (two-class) confusion matrix
End of explanation
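One detail worth keeping in mind when reading these matrices: scikit-learn orders the binary confusion matrix with the negative class first, so the layout is [[TN, FP], [FN, TP]]. A convenient way to unpack it, using the logistic regression predictions from above:
tn, fp, fn, tp = confusion_matrix(y_test, lr_predicted).ravel()
print('TN: {}, FP: {}, FN: {}, TP: {}'.format(tn, fp, fn, tp))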
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
# Accuracy = TP + TN / (TP + TN + FP + FN)
# Precision = TP / (TP + FP)
# Recall = TP / (TP + FN) Also known as sensitivity, or True Positive Rate
# F1 = 2 * Precision * Recall / (Precision + Recall)
print('Accuracy: {:.2f}'.format(accuracy_score(y_test, tree_predicted)))
print('Precision: {:.2f}'.format(precision_score(y_test, tree_predicted)))
print('Recall: {:.2f}'.format(recall_score(y_test, tree_predicted)))
print('F1: {:.2f}'.format(f1_score(y_test, tree_predicted)))
# Combined report with all above metrics
from sklearn.metrics import classification_report
print(classification_report(y_test, tree_predicted, target_names=['not 1', '1']))
print('Random class-proportional (dummy)\n',
classification_report(y_test, y_classprop_predicted, target_names=['not 1', '1']))
print('SVM\n',
classification_report(y_test, svm_predicted, target_names = ['not 1', '1']))
print('Logistic regression\n',
classification_report(y_test, lr_predicted, target_names = ['not 1', '1']))
print('Decision tree\n',
classification_report(y_test, tree_predicted, target_names = ['not 1', '1']))
Explanation: Evaluation metrics for binary classification
End of explanation
X_train, X_test, y_train, y_test = train_test_split(X, y_binary_imbalanced, random_state=0)
y_scores_lr = lr.fit(X_train, y_train).decision_function(X_test)
y_score_list = list(zip(y_test[0:20], y_scores_lr[0:20]))
# show the decision_function scores for first 20 instances
y_score_list
X_train, X_test, y_train, y_test = train_test_split(X, y_binary_imbalanced, random_state=0)
y_proba_lr = lr.fit(X_train, y_train).predict_proba(X_test)
y_proba_list = list(zip(y_test[0:20], y_proba_lr[0:20,1]))
# show the probability of positive class for first 20 instances
y_proba_list
Explanation: Decision functions
End of explanation
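These scores only become class predictions once a threshold is chosen, and moving that threshold is exactly what trades precision against recall in the next section. A quick illustration (the threshold of -20 is an arbitrary example value):
y_pred_lower_threshold = (y_scores_lr > -20).astype(int)
print('Confusion matrix at threshold -20:\n',
      confusion_matrix(y_test, y_pred_lower_threshold))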
from sklearn.metrics import precision_recall_curve
precision, recall, thresholds = precision_recall_curve(y_test, y_scores_lr)
closest_zero = np.argmin(np.abs(thresholds))
closest_zero_p = precision[closest_zero]
closest_zero_r = recall[closest_zero]
plt.figure()
plt.xlim([0.0, 1.01])
plt.ylim([0.0, 1.01])
plt.plot(precision, recall, label='Precision-Recall Curve')
plt.plot(closest_zero_p, closest_zero_r, 'o', markersize = 12, fillstyle = 'none', c='r', mew=3)
plt.xlabel('Precision', fontsize=16)
plt.ylabel('Recall', fontsize=16)
plt.axes().set_aspect('equal')
plt.show()
Explanation: Precision-recall curves
End of explanation
from sklearn.metrics import roc_curve, auc
X_train, X_test, y_train, y_test = train_test_split(X, y_binary_imbalanced, random_state=0)
y_score_lr = lr.fit(X_train, y_train).decision_function(X_test)
fpr_lr, tpr_lr, _ = roc_curve(y_test, y_score_lr)
roc_auc_lr = auc(fpr_lr, tpr_lr)
plt.figure()
plt.xlim([-0.01, 1.00])
plt.ylim([-0.01, 1.01])
plt.plot(fpr_lr, tpr_lr, lw=3, label='LogRegr ROC curve (area = {:0.2f})'.format(roc_auc_lr))
plt.xlabel('False Positive Rate', fontsize=16)
plt.ylabel('True Positive Rate', fontsize=16)
plt.title('ROC curve (1-of-10 digits classifier)', fontsize=16)
plt.legend(loc='lower right', fontsize=13)
plt.plot([0, 1], [0, 1], color='navy', lw=3, linestyle='--')
plt.axes().set_aspect('equal')
plt.show()
from matplotlib import cm
X_train, X_test, y_train, y_test = train_test_split(X, y_binary_imbalanced, random_state=0)
plt.figure()
plt.xlim([-0.01, 1.00])
plt.ylim([-0.01, 1.01])
for g in [0.01, 0.1, 0.20, 1]:
svm = SVC(gamma=g).fit(X_train, y_train)
y_score_svm = svm.decision_function(X_test)
fpr_svm, tpr_svm, _ = roc_curve(y_test, y_score_svm)
roc_auc_svm = auc(fpr_svm, tpr_svm)
accuracy_svm = svm.score(X_test, y_test)
print("gamma = {:.2f} accuracy = {:.2f} AUC = {:.2f}".format(g, accuracy_svm,
roc_auc_svm))
plt.plot(fpr_svm, tpr_svm, lw=3, alpha=0.7,
label='SVM (gamma = {:0.2f}, area = {:0.2f})'.format(g, roc_auc_svm))
plt.xlabel('False Positive Rate', fontsize=16)
plt.ylabel('True Positive Rate (Recall)', fontsize=16)
plt.plot([0, 1], [0, 1], color='k', lw=0.5, linestyle='--')
plt.legend(loc="lower right", fontsize=11)
plt.title('ROC curve: (1-of-10 digits classifier)', fontsize=16)
plt.axes().set_aspect('equal')
plt.show()
Explanation: ROC curves, Area-Under-Curve (AUC)
End of explanation
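A convenient way to read the AUC values printed above: $AUC = P(\text{score}(x^{+}) > \text{score}(x^{-}))$ for a randomly chosen positive example $x^{+}$ and negative example $x^{-}$, so 0.5 corresponds to the dashed diagonal (random guessing) and 1.0 to a perfect ranking.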
dataset = load_digits()
X, y = dataset.data, dataset.target
X_train_mc, X_test_mc, y_train_mc, y_test_mc = train_test_split(X, y, random_state=0)
svm = SVC(kernel = 'linear').fit(X_train_mc, y_train_mc)
svm_predicted_mc = svm.predict(X_test_mc)
confusion_mc = confusion_matrix(y_test_mc, svm_predicted_mc)
df_cm = pd.DataFrame(confusion_mc,
index = [i for i in range(0,10)], columns = [i for i in range(0,10)])
plt.figure(figsize=(5.5,4))
sns.heatmap(df_cm, annot=True)
plt.title('SVM Linear Kernel \nAccuracy:{0:.3f}'.format(accuracy_score(y_test_mc,
svm_predicted_mc)))
plt.ylabel('True label')
plt.xlabel('Predicted label')
svm = SVC(kernel = 'rbf').fit(X_train_mc, y_train_mc)
svm_predicted_mc = svm.predict(X_test_mc)
confusion_mc = confusion_matrix(y_test_mc, svm_predicted_mc)
df_cm = pd.DataFrame(confusion_mc, index = [i for i in range(0,10)],
columns = [i for i in range(0,10)])
plt.figure(figsize = (5.5,4))
sns.heatmap(df_cm, annot=True)
plt.title('SVM RBF Kernel \nAccuracy:{0:.3f}'.format(accuracy_score(y_test_mc,
svm_predicted_mc)))
plt.ylabel('True label')
plt.xlabel('Predicted label');
Explanation: Evaluation measures for multi-class classification
Multi-class confusion matrix
End of explanation
print(classification_report(y_test_mc, svm_predicted_mc))
Explanation: Multi-class classification report
End of explanation
print('Micro-averaged precision = {:.2f} (treat instances equally)'
.format(precision_score(y_test_mc, svm_predicted_mc, average = 'micro')))
print('Macro-averaged precision = {:.2f} (treat classes equally)'
.format(precision_score(y_test_mc, svm_predicted_mc, average = 'macro')))
print('Micro-averaged f1 = {:.2f} (treat instances equally)'
.format(f1_score(y_test_mc, svm_predicted_mc, average = 'micro')))
print('Macro-averaged f1 = {:.2f} (treat classes equally)'
.format(f1_score(y_test_mc, svm_predicted_mc, average = 'macro')))
Explanation: Micro- vs. macro-averaged metrics
End of explanation
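To make the difference concrete, here is a small hand-worked example with made-up counts (not taken from the digits data):
# class A: 90 TP, 10 FP  -> per-class precision 0.90
# class B:  1 TP,  9 FP  -> per-class precision 0.10
macro_precision = (0.90 + 0.10) / 2               # 0.50: every class counts equally
micro_precision = (90 + 1) / (90 + 10 + 1 + 9)    # ~0.83: every instance counts equally
print(macro_precision, micro_precision)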
%matplotlib notebook
import matplotlib.pyplot as plt
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn import datasets
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.dummy import DummyRegressor
diabetes = datasets.load_diabetes()
X = diabetes.data[:, None, 6]
y = diabetes.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
lm = LinearRegression().fit(X_train, y_train)
lm_dummy_mean = DummyRegressor(strategy = 'mean').fit(X_train, y_train)
y_predict = lm.predict(X_test)
y_predict_dummy_mean = lm_dummy_mean.predict(X_test)
print('Linear model, coefficients: ', lm.coef_)
print("Mean squared error (dummy): {:.2f}".format(mean_squared_error(y_test,
y_predict_dummy_mean)))
print("Mean squared error (linear model): {:.2f}".format(mean_squared_error(y_test, y_predict)))
print("r2_score (dummy): {:.2f}".format(r2_score(y_test, y_predict_dummy_mean)))
print("r2_score (linear model): {:.2f}".format(r2_score(y_test, y_predict)))
# Plot outputs
plt.scatter(X_test, y_test, color='black')
plt.plot(X_test, y_predict, color='green', linewidth=2)
plt.plot(X_test, y_predict_dummy_mean, color='red', linestyle = 'dashed',
linewidth=2, label = 'dummy')
plt.show()
Explanation: Regression evaluation metrics
End of explanation
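For reference, the two metrics reported above are $MSE = \frac{1}{n}\sum_i (y_i - \hat{y}_i)^2$ and $R^2 = 1 - \sum_i (y_i - \hat{y}_i)^2 / \sum_i (y_i - \bar{y})^2$; a dummy regressor that always predicts the mean therefore ends up with an $R^2$ near zero, which is what the output above shows.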
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
dataset = load_digits()
# again, making this a binary problem with 'digit 1' as positive class
# and 'not 1' as negative class
X, y = dataset.data, dataset.target == 1
clf = SVC(kernel='linear', C=1)
# accuracy is the default scoring metric
print('Cross-validation (accuracy)', cross_val_score(clf, X, y, cv=5))
# use AUC as scoring metric
print('Cross-validation (AUC)', cross_val_score(clf, X, y, cv=5, scoring = 'roc_auc'))
# use recall as scoring metric
print('Cross-validation (recall)', cross_val_score(clf, X, y, cv=5, scoring = 'recall'))
Explanation: Model selection using evaluation metrics
Cross-validation example
End of explanation
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import roc_auc_score
dataset = load_digits()
X, y = dataset.data, dataset.target == 1
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC(kernel='rbf')
grid_values = {'gamma': [0.001, 0.01, 0.05, 0.1, 1, 10, 100]}
# default metric to optimize over grid parameters: accuracy
grid_clf_acc = GridSearchCV(clf, param_grid = grid_values)
grid_clf_acc.fit(X_train, y_train)
y_decision_fn_scores_acc = grid_clf_acc.decision_function(X_test)
print('Grid best parameter (max. accuracy): ', grid_clf_acc.best_params_)
print('Grid best score (accuracy): ', grid_clf_acc.best_score_)
# alternative metric to optimize over grid parameters: AUC
grid_clf_auc = GridSearchCV(clf, param_grid = grid_values, scoring = 'roc_auc')
grid_clf_auc.fit(X_train, y_train)
y_decision_fn_scores_auc = grid_clf_auc.decision_function(X_test)
print('Test set AUC: ', roc_auc_score(y_test, y_decision_fn_scores_auc))
print('Grid best parameter (max. AUC): ', grid_clf_auc.best_params_)
print('Grid best score (AUC): ', grid_clf_auc.best_score_)
Explanation: Grid search example
End of explanation
from sklearn.metrics.scorer import SCORERS
print(sorted(list(SCORERS.keys())))
Explanation: Evaluation metrics supported for model selection
End of explanation
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from adspy_shared_utilities import plot_class_regions_for_classifier_subplot
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
dataset = load_digits()
X, y = dataset.data, dataset.target == 1
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# Create a two-feature input vector matching the example plot above
# We jitter the points (add a small amount of random noise) in case there are areas
# in feature space where many instances have the same features.
jitter_delta = 0.25
X_twovar_train = X_train[:,[20,59]]+ np.random.rand(X_train.shape[0], 2) - jitter_delta
X_twovar_test = X_test[:,[20,59]] + np.random.rand(X_test.shape[0], 2) - jitter_delta
clf = SVC(kernel = 'linear').fit(X_twovar_train, y_train)
grid_values = {'class_weight':['balanced', {1:2},{1:3},{1:4},{1:5},{1:10},{1:20},{1:50}]}
plt.figure(figsize=(9,6))
for i, eval_metric in enumerate(('precision','recall', 'f1','roc_auc')):
grid_clf_custom = GridSearchCV(clf, param_grid=grid_values, scoring=eval_metric)
grid_clf_custom.fit(X_twovar_train, y_train)
print('Grid best parameter (max. {0}): {1}'
.format(eval_metric, grid_clf_custom.best_params_))
print('Grid best score ({0}): {1}'
.format(eval_metric, grid_clf_custom.best_score_))
plt.subplots_adjust(wspace=0.3, hspace=0.3)
plot_class_regions_for_classifier_subplot(grid_clf_custom, X_twovar_test, y_test, None,
None, None, plt.subplot(2, 2, i+1))
plt.title(eval_metric+'-oriented SVC')
plt.tight_layout()
plt.show()
Explanation: Two-feature classification example using the digits dataset
Optimizing a classifier using different evaluation metrics
End of explanation
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_curve
from adspy_shared_utilities import plot_class_regions_for_classifier
from sklearn.svm import SVC
dataset = load_digits()
X, y = dataset.data, dataset.target == 1
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# create a two-feature input vector matching the example plot above
jitter_delta = 0.25
X_twovar_train = X_train[:,[20,59]]+ np.random.rand(X_train.shape[0], 2) - jitter_delta
X_twovar_test = X_test[:,[20,59]] + np.random.rand(X_test.shape[0], 2) - jitter_delta
clf = SVC(kernel='linear', class_weight='balanced').fit(X_twovar_train, y_train)
y_scores = clf.decision_function(X_twovar_test)
precision, recall, thresholds = precision_recall_curve(y_test, y_scores)
closest_zero = np.argmin(np.abs(thresholds))
closest_zero_p = precision[closest_zero]
closest_zero_r = recall[closest_zero]
plot_class_regions_for_classifier(clf, X_twovar_test, y_test)
plt.title("SVC, class_weight = 'balanced', optimized for accuracy")
plt.show()
plt.figure()
plt.xlim([0.0, 1.01])
plt.ylim([0.0, 1.01])
plt.title ("Precision-recall curve: SVC, class_weight = 'balanced'")
plt.plot(precision, recall, label = 'Precision-Recall Curve')
plt.plot(closest_zero_p, closest_zero_r, 'o', markersize=12, fillstyle='none', c='r', mew=3)
plt.xlabel('Precision', fontsize=16)
plt.ylabel('Recall', fontsize=16)
plt.axes().set_aspect('equal')
plt.show()
print('At zero threshold, precision: {:.2f}, recall: {:.2f}'
.format(closest_zero_p, closest_zero_r))
Explanation: Precision-recall curve for the default SVC classifier (with balanced class weights)
End of explanation |
1,766 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The intensity is $\theta * X$ where $X$ is a row vector.
Step1: We consider different shapes for the intensity | Python Code:
theta = np.array([2])
Explanation: The intensity is $\theta * X$ where $X$ is a row vector.
End of explanation
X = 0.1*np.random.normal(size = (d,N))
X = np.reshape(np.ones(N,),(1,N))
X = np.reshape(np.sin(np.arange(N)),(1,N))
dt = 0.1 # discretization step
l = np.exp(np.dot(X.T,theta))
u = np.random.uniform(size = len(l))
y = 1*(l*dt>u)
print(y)
model = PP.PPModel(X,dt = dt)
res = model.fit(y)
print('The estimated parameter is '+ str(res.x[0])+ '. The true parameter is '+str(theta[0])+'.')
Explanation: We consider different shapes for the intensity: random, constant,sinusoidal:
End of explanation |
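# Quick visual check (added sketch, not in the original notebook): compare the true intensity
# with the intensity implied by the estimated parameter.
import matplotlib.pyplot as plt
l_hat = np.exp(np.dot(X.T, res.x))
plt.plot(l, label='true intensity')
plt.plot(l_hat, '--', label='estimated intensity')
plt.legend()
plt.show()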
1,767 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plot the histogram of the number of trajectories over queries.
Step1: Plot the histogram of the length of trajectory given a start point.
Step2: Compute the ratio of multi-label when query=(start, length).
Step3: Compute the ratio of multi-label when query=(start, user).
Step4: Compute the ratio of multi-label when query=(start, user, length). | Python Code:
plt.figure(figsize=[15, 5])
ax = plt.subplot()
ax.set_xlabel('#Trajectories')
ax.set_ylabel('#Queries')
ax.set_title('Histogram of #Trajectories')
queries = sorted(dat_obj.TRAJID_GROUP_DICT.keys())
X = [len(dat_obj.TRAJID_GROUP_DICT[q]) for q in queries]
pd.Series(X).hist(ax=ax, bins=20)
Explanation: Plot the histogram of the number of trajectories over queries.
End of explanation
dat_obj.poi_all.index
startPOI = 20
X = [len(dat_obj.traj_dict[tid]) for tid in dat_obj.trajid_set_all \
if dat_obj.traj_dict[tid][0] == startPOI and len(dat_obj.traj_dict[tid]) >= 2]
if len(X) > 0:
plt.figure(figsize=[15, 5])
ax = plt.subplot()
ax.set_xlabel('Trajectory Length')
ax.set_ylabel('#Trajectories')
ax.set_title('Histogram of Trajectory Length (startPOI: %d)' % startPOI)
pd.Series(X).hist(ax=ax, bins=20)
print('Trajectory Length:', X)
Explanation: Plot the histogram of the length of trajectory given a start point.
End of explanation
multi_label_queries = [q for q in dat_obj.TRAJID_GROUP_DICT if len(dat_obj.TRAJID_GROUP_DICT[q]) > 1]
nqueries = len(dat_obj.TRAJID_GROUP_DICT)
print('%d/%d ~ %.1f%%' % (len(multi_label_queries), nqueries, 100 * len(multi_label_queries) / nqueries))
Explanation: Compute the ratio of multi-label when query=(start, length).
End of explanation
dat_obj.traj_user['userID'].unique().shape
query_dict = dict()
for tid in dat_obj.trajid_set_all:
t = dat_obj.traj_dict[tid]
if len(t) >= 2:
query = (t[0], dat_obj.traj_user.loc[tid, 'userID'])
try: query_dict[query].add(tid)
except: query_dict[query] = set({tid})
multi_label_queries = [q for q in query_dict.keys() if len(query_dict[q]) > 1]
print('%d/%d ~ %.1f%%' % (len(multi_label_queries), len(query_dict), 100 * len(multi_label_queries) / len(query_dict)))
Explanation: Compute the ratio of multi-label when query=(start, user).
End of explanation
query_dict = dict()
for tid in dat_obj.trajid_set_all:
t = dat_obj.traj_dict[tid]
if len(t) >= 2:
query = (t[0], dat_obj.traj_user.loc[tid, 'userID'], len(t))
try: query_dict[query].add(tid)
except: query_dict[query] = set({tid})
multi_label_queries = [q for q in query_dict.keys() if len(query_dict[q]) > 1]
print('%d/%d ~ %.1f%%' % (len(multi_label_queries), len(query_dict), 100 * len(multi_label_queries) / len(query_dict)))
Explanation: Compute the ratio of multi-label when query=(start, user, length).
End of explanation |
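# The three "ratio of multi-label" computations above share the same structure; a small helper
# (added sketch) makes it easy to experiment with other query definitions.
def multi_label_ratio(make_query):
    query_dict = dict()
    for tid in dat_obj.trajid_set_all:
        t = dat_obj.traj_dict[tid]
        if len(t) >= 2:
            query_dict.setdefault(make_query(t, tid), set()).add(tid)
    n_multi = sum(1 for q in query_dict if len(query_dict[q]) > 1)
    return n_multi, len(query_dict)

n_multi, n_queries = multi_label_ratio(
    lambda t, tid: (t[0], dat_obj.traj_user.loc[tid, 'userID'], len(t)))
print('%d/%d ~ %.1f%%' % (n_multi, n_queries, 100 * n_multi / n_queries))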
1,768 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Big Data Applications and Analytics - Term Project
Sean M. Shiverick Fall 2017
Classification of Prescription Opioid Misuse
Step1: Part 1. Load Project dataset
Delete first two columns and SUICIDATT; examine dataframe keys
Identify features and target variable PRLMISEVR
Split data into Training and Test sets
Step2: Part 2. Logistic Regression Classifier
Decision boundary is a linear function of input
Binary linear classifier separates two classes using a line, plane, or hyperplane
Regularization Parameter C
Low values of C
Step3: 2.2 Evaluate Classifier Model on Test set
Step4: 2.3 Adjust Model Parameter settings
Default setting C=1 provides good performance for train and test sets
Very likely UNDERFITTING the test data
Higher value of C fits more 'Flexible' model
C=100 generally gives higher training set accuracy and slightly higher Test set accuracy
Step5: Lower value of C fits more 'regularized' model
Setting C=0.01 leads model to try to adjust to 'majority' of data points
Step6: 2.4 Plot Coefficients of Logistic Regression Classifier
Main difference between linear models for classification is penalty parameter
LogisticRegression applies L2 (Ridge) regularization by default
Penalty Parameter (L)
L2 penalty (Ridge) uses all available features, regularization C pushes toward zero
L1 penalty (Lasso) sets coefficients for most features to zero, uses only a subset
Improved interpretability with L1 penalty (Lasso)
L1 Regularization (Lasso)
Model is limited to using only a few features, more interpretable
Legend
Step7: Part 3. Decision Trees Classifier
Building decision tree
Continues until all leaves are pure leads to models that are complex, overfit
Presence of pure leaves means the tree is 100% accurate on the training set
Each data point in training set is in a leaf that has the correct majority class
Pre-pruning
Step8: 3.2 Evaluate Tree Classifier model on Test set
Step9: 3.3 Adjust Parameter Settings
Training set accuracy is 100% because leaves are pure
Trees can become arbitrarily deep and complex if the depth of the tree is not limited
Unpruned trees are prone to overfitting and not generalizing well to new data
Pruning
Step10: 3.4 Visualizing Decision Tree Classifier
Visualize the tree using export_graphviz function from trees module
Set an option to color the nodes to reflect majority class in each node
First, install graphviz at terminal using brew install graphviz
Step11: Feature Importance in Trees
Relates how important each feature is for the decision a tree makes
Values range from 0 = "Not at all", to 1 = perfectly predicts target"
Feature importances always sum to a total of 1.
3.5 Visualize Feature Importancee
Similar to how we visualized coefficients in linear model
Features used in top split ("worst radius") is most important feature
Features with low importance may be redundant with another feature that encodes same info
Step12: Part 4. Random Forests Classifier
Random forest gives better accuracy than linear models or single decision tree, without tuning any parameters
Building Random Forests
First, need to decide how many trees to build
Step13: 3.2 Evaluate Random Forests Classifier on Test set
Step14: 3.3 Features Importance for Random Forest
Computed by aggregating the feature importances over the trees in the forest
Feature importances provided by Random Forest are more reliable than provided by single tree
Many more features have non-zero importance than single tree, chooses similar features
Step15: Part 5. Gradient Boosted Classifier Trees
Works by building trees in a serial manner, where each tree tries to correct for mistakes of previous ones
Main idea
Step16: 5.2 Feature Importance
With 100 trees, cannot inspect them all, even if maximum depth is only 1 | Python Code:
import sklearn
import mglearn
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Big Data Applications and Analytics - Term Project
Sean M. Shiverick Fall 2017
Classification of Prescription Opioid Misuse: PRL
Logistic Regression Classifier, Decision Tree Classifier, Random Forests
from Introduction to Machine Learning by Andreas Mueller and Sarah Guido
Ch. 2 Supervised Learning, Classification Models
Dataset: NSDUH-2015
National Survey of Drug Use and Health 2015
Substance Abuse and Mental Health Data Archive
Import packages
Load forge dataset and assign variables
End of explanation
file = pd.read_csv('project-data.csv')
opioids = pd.DataFrame(file)
opioids.drop(opioids.columns[[0,1]], axis=1, inplace=True)
del opioids['SUICATT']
opioids.shape
print(opioids.keys())
opioids['PRLMISEVR'].value_counts()
features = ['AGECAT', 'SEX', 'MARRIED', 'EDUCAT', 'EMPLOY18',
'CTYMETRO', 'HEALTH','MENTHLTH','HEROINEVR','HEROINUSE','HEROINFQY',
'TRQLZRS', 'SEDATVS','COCAINE','AMPHETMN','TRTMENT','MHTRTMT']
opioids.data = pd.DataFrame(opioids, columns=['AGECAT', 'SEX', 'MARRIED', 'EDUCAT', 'EMPLOY18',
'CTYMETRO', 'HEALTH','MENTHLTH','HEROINEVR',
'HEROINUSE','HEROINFQY','TRQLZRS','SEDATVS',
'COCAINE', 'AMPHETMN', 'TRTMENT','MHTRTMT'])
opioids.target = opioids['PRLMISEVR']
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
opioids.data, opioids.target, stratify=opioids.target, random_state=42)
Explanation: Part 1. Load Project dataset
Delete first two columns and SUICIDATT; examine dataframe keys
Identify features and target variable PRLMISEVR
Split data into Training and Test sets
End of explanation
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression().fit(X_train, y_train)
Explanation: Part 2. Logistic Regression Classifier
Decision boundary is a linear function of input
Binary linear classifier separates two classes using a line, plane, or hyperplane
Regularization Parameter C
Low values of C:
Model tries to correctly classify all points with a straight line
Cause models to try to adjust to the 'majority' of data points
May not capture overall layout of classes well; model is likely UNDERFITTING!
High values of C:
Correspond to less regularization, models will fit training set as best as possible
Stresses importance of each individual data point to be classified correctly
2.1 Built Classifier Model on Training set
End of explanation
print("Training set score: {:.3f}".format(logreg.score(X_train, y_train)))
print("Test set score: {:.3f}".format(logreg.score(X_test, y_test)))
Explanation: 2.2 Evaluate Classifier Model on Test set
End of explanation
logreg100 = LogisticRegression(C=100).fit(X_train, y_train)
print("Training set score: {:.3f}".format(logreg100.score(X_train, y_train)))
print("Test set score: {:.3f}".format(logreg100.score(X_test, y_test)))
Explanation: 2.3 Adjust Model Parameter settings
Default setting C=1 provides good performance for train and test sets
Very likely UNDERFITTING the test data
Higher value of C fits more 'Flexible' model
C=100 generally gives higher training set accuracy and slightly higher Test set accuracy
End of explanation
logreg001 = LogisticRegression(C=0.01).fit(X_train, y_train)
print("Training set score: {:.3f}".format(logreg001.score(X_train, y_train)))
print("Test set score: {:.3f}".format(logreg001.score(X_test, y_test)))
Explanation: Lower value of C fits more 'regularized' model
Setting C=0.01 leads model to try to adjust to 'majority' of data points
End of explanation
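# A compact way (added sketch) to see the regularization trade-off across several values of C:
for C_value in [0.01, 0.1, 1, 10, 100]:
    lr = LogisticRegression(C=C_value).fit(X_train, y_train)
    print("C={:>6}: train {:.3f}, test {:.3f}".format(
        C_value, lr.score(X_train, y_train), lr.score(X_test, y_test)))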
for C, marker in zip([0.01, 1, 100], ['v', 'o', '^']):
lr_l1 = LogisticRegression(C=C, penalty="l1").fit(X_train, y_train)
print("Training accuracy of L1 logreg with C={:.3f}: {:.3f}".format(
C, lr_l1.score(X_train, y_train)))
print("Test accuracy of L1 logreg with C={:.3f}: {:.3f}".format(
C, lr_l1.score(X_test, y_test)))
plt.plot(lr_l1.coef_.T, marker, label="C={:.3f}".format(C))
plt.xticks(range(opioids.data.shape[1]), features, rotation=90)
plt.hlines(0,0, opioids.data.shape[1])
plt.xlabel("Coefficient Index")
plt.xlabel("Coefficient Magnitude")
plt.ylim(-2, 2)
plt.legend()
Explanation: 2.4 Plot Coefficients of Logistic Regression Classifier
Main difference between linear models for classification is penalty parameter
LogisticRegression applies L2 (Ridge) regularization by default
Penalty Parameter (L)
L2 penalty (Ridge) uses all available features, regularization C pushes toward zero
L1 penalty (Lasso) sets coefficients for most features to zero, uses only a subset
Improved interpretability with L1 penalty (Lasso)
L1 Regularization (Lasso)
Model is limited to using only a few features, more interpretable
Legend: Different values of Parameter C
Stronger regularization pushes coefficients closer to zero
Parameter values can influence values of Coefficients
End of explanation
from IPython.display import Image, display
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier(random_state=0)
tree.fit(X_train, y_train)
Explanation: Part 3. Decision Trees Classifier
Building the decision tree
Building continues until all leaves are pure, which leads to models that are complex and overfit
Presence of pure leaves means the tree is 100% accurate on the training set
Each data point in training set is in a leaf that has the correct majority class
Pre-pruning: to Prevent Overfitting
Stopping creation of tree early
Limiting the maximum depth of the tree, or limiting maximum number of leaves
Requiring a minimum number of points in a node to keep splitting it
3.1 Build Decision Trees Classifier Model for PRL
Import package: DecisionTreeClassifier
Build model using default setting that fully develops tree until all leaves are pure
Fix the random_state in the tree, for breaking ties internally
End of explanation
print("Accuracy on the training: {:.3f}".format(tree.score(X_train, y_train)))
print("Accuracy on the test set: {:.3f}".format(tree.score(X_test, y_test)))
Explanation: 3.2 Evaluate Tree Classifier model on Test set
End of explanation
tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(X_train, y_train)
print("Accuracy on the training: {:.3f}".format(tree.score(X_train, y_train)))
print("Accuracy on the test set: {:.3f}".format(tree.score(X_test, y_test)))
Explanation: 3.3 Adjust Parameter Settings
Training set accuracy is 100% because leaves are pure
Trees can become arbitrarily deep and complex if the depth of the tree is not limited
Unpruned trees are prone to overfitting and not generalizing well to new data
Pruning: Set max_depth=4
Tree depth is limited to 4 branches
Limiting depth of the tree decreases overfitting
Results in lower accuracy on training set, but improvement on test set
End of explanation
from sklearn.tree import export_graphviz
export_graphviz(tree, out_file="tree.dot", class_names=["No", "Yes"],
feature_names=features, impurity=False, filled=True)
from IPython.display import display
import graphviz
with open('tree.dot') as f:
dot_graph = f.read()
display(graphviz.Source(dot_graph))
Explanation: 3.4 Visualizing Decision Tree Classifier
Visualize the tree using export_graphviz function from trees module
Set an option to color the nodes to reflect majority class in each node
First, install graphviz at terminal using brew install graphviz
End of explanation
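# Alternative without graphviz (assumption: scikit-learn >= 0.21 is installed), using the
# built-in matplotlib-based renderer; kept commented out so older environments still run.
# from sklearn.tree import plot_tree
# plt.figure(figsize=(20, 10))
# plot_tree(tree, feature_names=features, class_names=["No", "Yes"], filled=True)
# plt.show()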
print("Feature importances:\n{}".format(tree.feature_importances_))
def plot_feature_importances_prl(model):
n_features = opioids.data.shape[1]
plt.barh(range(n_features), model.feature_importances_, align='center')
plt.yticks(np.arange(n_features), features)
plt.xlabel("Feature importance")
plt.ylabel("Feature")
plot_feature_importances_prl(tree)
Explanation: Feature Importance in Trees
Relates how important each feature is for the decision a tree makes
Values range from 0 = "not at all" to 1 = "perfectly predicts the target"
Feature importances always sum to a total of 1.
3.5 Visualize Feature Importance
Similar to how we visualized coefficients in linear model
The feature used in the top split is typically the most important feature
Features with low importance may be redundant with another feature that encodes same info
End of explanation
from sklearn.ensemble import RandomForestClassifier
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)
Explanation: Part 4. Random Forests Classifier
Random forest gives better accuracy than linear models or single decision tree, without tuning any parameters
Building Random Forests
First, need to decide how many trees to build: n_estimators parameter
Trees are built independently of each other, based on "bootstrap" samples: n_samples
Algorithm randomly selects subset of features, number determined by max_features paramter
Each node of tree makes decision involving different subset of features
Bootstrap Sampling and Subsets of Features
Each decision tree in random forest is built on slightly different dataset
Each split in each tree operates on different subset of features
Critical Parameter: max_features
If max_features set to n_features, each split can look at all features inthe dataset, no randomness in selection
High max_features means the trees in a random forest will be quite similar
Low max_feature means trees in random forest will be quite different
Prediction with Random Forests
Algorithm first makes prediction for every tree in the forest
For classification, a "soft voting" is applied, probabilities for all trees are then averaged, and class with highest probability is predicted
3.1 Build Random Forests Classifier: Heroin
Split data into train and test sets;
set n_estimators to 100 trees; build model on the training set
End of explanation
print("Accuracy on training set: {:.3f}".format(forest.score(X_train, y_train)))
print("Accuracy on test set: {:.3f}".format(forest.score(X_test, y_test)))
Explanation: 4.2 Evaluate Random Forests Classifier on Test set
End of explanation
plot_feature_importances_prl(forest)
Explanation: 4.3 Feature Importance for Random Forest
Computed by aggregating the feature importances over the trees in the forest
Feature importances provided by the Random Forest are more reliable than those provided by a single tree
Many more features have non-zero importance than in a single tree, though it tends to choose similar features
End of explanation
from sklearn.ensemble import GradientBoostingClassifier
gbrt = GradientBoostingClassifier(random_state=0, n_estimators=100, max_depth=3, learning_rate=0.01)
gbrt.fit(X_train, y_train)
Explanation: Part 5. Gradient Boosted Classifier Trees
Works by building trees in a serial manner, where each tree tries to correct for mistakes of previous ones
Main idea: combine many simple models (shallow trees, 'weak learners'); adding more trees iteratively improves performance
Parameters
Pre-pruning, and number of trees in ensemble
learning_rate parameter controls how strongly each tree tries to correct mistakes of previous trees
Add more trees to model with n_estimators, increases model complexity
5.1 Build Gradient Boosting Classifier for PRL
With 100 trees, a maximum depth of 3, and a learning rate of 0.01
End of explanation
gbrt = GradientBoostingClassifier(random_state=0, max_depth=1)
gbrt.fit(X_train, y_train)
plot_feature_importances_prl(gbrt)
Explanation: 5.2 Feature Importance
With 100 trees, cannot inspect them all, even if maximum depth is only 1
End of explanation |
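# Added sketch: the boosted model is never scored on the held-out data above; comparing both
# sets shows whether a smaller learning_rate or max_depth is needed to curb overfitting.
print("GBT accuracy on training set: {:.3f}".format(gbrt.score(X_train, y_train)))
print("GBT accuracy on test set: {:.3f}".format(gbrt.score(X_test, y_test)))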
1,769 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LAB 2b
Step1: Import necessary libraries.
Step2: Lab Task #1
Step3: The source dataset
Our dataset is hosted in BigQuery. The CDC's Natality data has details on US births from 1969 to 2008 and is a publicly available dataset, meaning anyone with a GCP account has access. Click here to access the dataset.
The natality dataset is relatively large at almost 138 million rows and 31 columns, but simple to understand. weight_pounds is the target, the continuous value we'll train a model to predict.
Create a BigQuery Dataset and Google Cloud Storage Bucket
A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called babyweight if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too.
Step4: Create the training and evaluation data tables
Since there is already a publicly available dataset, we can simply create the training and evaluation data tables using this raw input data. First we are going to create a subset of the data limiting our columns to weight_pounds, is_male, mother_age, plurality, and gestation_weeks as well as some simple filtering and a column to hash on for repeatable splitting.
Note
Step5: Lab Task #3
Step6: Lab Task #4
Step7: Split augmented dataset into eval dataset
Exercise
Step8: Verify table creation
Verify that you created the dataset and training data table.
Step9: Lab Task #5
Step10: Verify CSV creation
Verify that we correctly created the CSV files in our bucket. | Python Code:
%%bash
sudo pip freeze | grep google-cloud-bigquery==1.6.1 || \
sudo pip install google-cloud-bigquery==1.6.1
Explanation: LAB 2b: Prepare babyweight dataset.
Learning Objectives
Setup up the environment
Preprocess natality dataset
Augment natality dataset
Create the train and eval tables in BigQuery
Export data from BigQuery to GCS in CSV format
Introduction
In this notebook, we will prepare the babyweight dataset for model development and training to predict the weight of a baby before it is born. We will use BigQuery to perform data augmentation and preprocessing which will be used for AutoML Tables, BigQuery ML, and Keras models trained on Cloud AI Platform.
In this lab, we will set up the environment, create the project dataset, preprocess and augment natality dataset, create the train and eval tables in BigQuery, and export data from BigQuery to GCS in CSV format.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Set up environment variables and load necessary libraries
Check that the Google BigQuery library is installed and if not, install it.
End of explanation
import os
from google.cloud import bigquery
Explanation: Import necessary libraries.
End of explanation
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
# TODO: Change environment variables
PROJECT = "cloud-training-demos" # REPLACE WITH YOUR PROJECT NAME
BUCKET = "BUCKET" # REPLACE WITH YOUR BUCKET NAME, DEFAULT BUCKET WILL BE PROJECT ID
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["BUCKET"] = PROJECT if BUCKET == "BUCKET" else BUCKET # DEFAULT BUCKET WILL BE PROJECT ID
os.environ["REGION"] = REGION
if PROJECT == "cloud-training-demos":
print("Don't forget to update your PROJECT name! Currently:", PROJECT)
Explanation: Lab Task #1: Set environment variables.
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
End of explanation
%%bash
## Create a BigQuery dataset for babyweight if it doesn't exist
datasetexists=$(bq ls -d | grep -w # TODO: Add dataset name)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: babyweight"
bq --location=US mk --dataset \
--description "Babyweight" \
$PROJECT:# TODO: Add dataset name
echo "Here are your current datasets:"
bq ls
fi
## Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${BUCKET}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET}
echo "Here are your current buckets:"
gsutil ls
fi
Explanation: The source dataset
Our dataset is hosted in BigQuery. The CDC's Natality data has details on US births from 1969 to 2008 and is a publicly available dataset, meaning anyone with a GCP account has access. Click here to access the dataset.
The natality dataset is relatively large at almost 138 million rows and 31 columns, but simple to understand. weight_pounds is the target, the continuous value we'll train a model to predict.
Create a BigQuery Dataset and Google Cloud Storage Bucket
A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called babyweight if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too.
End of explanation
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data AS
SELECT
# TODO: Add selected raw features and preprocessed features
FROM
publicdata.samples.natality
WHERE
# TODO: Add filters
Explanation: Create the training and evaluation data tables
Since there is already a publicly available dataset, we can simply create the training and evaluation data tables using this raw input data. First we are going to create a subset of the data limiting our columns to weight_pounds, is_male, mother_age, plurality, and gestation_weeks as well as some simple filtering and a column to hash on for repeatable splitting.
Note: The dataset in the create table code below is the one created previously, e.g. "babyweight".
Lab Task #2: Preprocess and filter dataset
We have some preprocessing and filtering we would like to do to get our data in the right format for training.
Preprocessing:
* Cast is_male from BOOL to STRING
* Cast plurality from INTEGER to STRING where [1, 2, 3, 4, 5] becomes ["Single(1)", "Twins(2)", "Triplets(3)", "Quadruplets(4)", "Quintuplets(5)"]
* Add hashcolumn hashing on year and month
Filtering:
* Only want data for years later than 2000
* Only want baby weights greater than 0
* Only want mothers whose age is greater than 0
* Only want plurality to be greater than 0
* Only want the number of weeks of gestation to be greater than 0
End of explanation
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_augmented_data AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
hashmonth
FROM
babyweight.babyweight_data
UNION ALL
SELECT
# TODO: Replace is_male and plurality as indicated above
FROM
babyweight.babyweight_data
Explanation: Lab Task #3: Augment dataset to simulate missing data
Now we want to augment our dataset with our simulated babyweight data by setting all gender information to Unknown and setting plurality of all non-single births to Multiple(2+).
End of explanation
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data_train AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_augmented_data
WHERE
# TODO: Modulo hashmonth to be approximately 75% of the data
Explanation: Lab Task #4: Split augmented dataset into train and eval sets
Using hashmonth, apply a modulo to get approximately a 75/25 train/eval split.
Split augmented dataset into train dataset
Exercise: RUN the query to create the training data table.
End of explanation
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data_eval AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_augmented_data
WHERE
# TODO: Modulo hashmonth to be approximately 25% of the data
Explanation: Split augmented dataset into eval dataset
Exercise: RUN the query to create the evaluation data table.
End of explanation
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_train
LIMIT 0
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_eval
LIMIT 0
Explanation: Verify table creation
Verify that you created the dataset and training data table.
End of explanation
# Construct a BigQuery client object.
client = bigquery.Client()
dataset_name = # TODO: Add dataset name
# Create dataset reference object
dataset_ref = client.dataset(
dataset_id=dataset_name, project=client.project)
# Export both train and eval tables
for step in [# TODO: Loop over train and eval]:
destination_uri = os.path.join(
"gs://", BUCKET, dataset_name, "data", "{}*.csv".format(step))
table_name = "babyweight_data_{}".format(step)
table_ref = dataset_ref.table(table_name)
extract_job = client.extract_table(
table_ref,
destination_uri,
# Location must match that of the source table.
location="US",
) # API request
extract_job.result() # Waits for job to complete.
print("Exported {}:{}.{} to {}".format(
client.project, dataset_name, table_name, destination_uri))
Explanation: Lab Task #5: Export from BigQuery to CSVs in GCS
Use BigQuery Python API to export our train and eval tables to Google Cloud Storage in the CSV format to be used later for TensorFlow/Keras training. We'll want to use the dataset we've been using above as well as repeat the process for both training and evaluation data.
End of explanation
%%bash
gsutil ls gs://${BUCKET}/babyweight/data/*.csv
%%bash
gsutil cat gs://${BUCKET}/babyweight/data/train000000000000.csv | head -5
%%bash
gsutil cat gs://${BUCKET}/babyweight/data/eval000000000000.csv | head -5
Explanation: Verify CSV creation
Verify that we correctly created the CSV files in our bucket.
End of explanation |
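# Optional sanity check (added sketch): row counts of the exported train/eval tables.
for step in ["train", "eval"]:
    count_query = "SELECT COUNT(*) AS n FROM babyweight.babyweight_data_{}".format(step)
    n_rows = client.query(count_query).to_dataframe()["n"][0]
    print("{} rows: {}".format(step, n_rows))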
1,770 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Handwritten Number Recognition with TFLearn and MNIST
In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9.
This kind of neural network is used in a variety of real-world applications including
Step1: Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has
Step2: Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with it's corresponding label in the title.
Step3: Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define
Step4: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
Step5: Testing
After you're satisified with the training output and accuracy, you can then run the network on the test data set to measure it's performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be higher than 98% accuracy! Some simple models have been known to get up to 99.7% accuracy. | Python Code:
# Import Numpy, TensorFlow, TFLearn, and MNIST data
import numpy as np
import tensorflow as tf
import tflearn
import tflearn.datasets.mnist as mnist
Explanation: Handwritten Number Recognition with TFLearn and MNIST
In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9.
This kind of neural network is used in a variety of real-world applications including: recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the MNIST data set, which consists of images of handwritten numbers and their correct labels 0-9.
We'll be using TFLearn, a high-level library built on top of TensorFlow to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network.
End of explanation
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
Explanation: Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has:
1. an image of a handwritten digit and
2. a corresponding label (a number 0-9 that identifies the image)
We'll call the images, which will be the input to our neural network, X and their corresponding labels Y.
We're going to want our labels as one-hot vectors, which are vectors that holds mostly 0's and one 1. It's easiest to see this in a example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0].
Flattened data
For this example, we'll be using flattened data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values.
Flattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network.
End of explanation
# Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline
# Function for displaying a training image by it's index in the MNIST set
def show_digit(index):
label = trainY[index].argmax(axis=0)
# Reshape 784 array into 28x28 image
image = trainX[index].reshape([28,28])
plt.title('Training data, index: %d, Label: %d' % (index, label))
plt.imshow(image, cmap='gray_r')
plt.show()
# Display the first (index 0) training image
show_digit(0)
show_digit(10)
Explanation: Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with it's corresponding label in the title.
End of explanation
# Define the neural network
def build_model(learning_rate):
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
# Include the input layer, hidden layer(s), and set how you want to train the model
# Input layer
net = tflearn.input_data([None, 784])
# Hidden layers
net = tflearn.fully_connected(net, 200, activation='ReLU')
net = tflearn.fully_connected(net, 40, activation='ReLU')
# Output layers
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=learning_rate, loss='categorical_crossentropy')
# This model assumes that your network is named "net"
model = tflearn.DNN(net)
return model
# Build the model
model = build_model(learning_rate=0.1)
Explanation: Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define:
The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data.
Hidden layers, which recognize patterns in data and connect the input to the output layer, and
The output layer, which defines how the network learns and outputs a label for a given image.
Let's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need 784 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call, it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling tflearn.fully_connected(net, n_units).
Then, to set how you train the network, use:
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with categorical cross-entropy.
Finally, you put all this together to create the model with tflearn.DNN(net).
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
Hint: The final output layer must have 10 output nodes (one for each digit 0-9). It's also recommended to use a softmax activation layer as your final output layer.
End of explanation
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=10)
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
End of explanation
# Compare the labels that our model predicts with the actual labels (take the most likely class)
predictions = np.array(model.predict(testX)).argmax(axis=1)
# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels
test_accuracy = np.mean(predictions == testY.argmax(axis=1), axis=0)
# Print out the result
print("Test accuracy: ", test_accuracy)
Explanation: Testing
After you're satisified with the training output and accuracy, you can then run the network on the test data set to measure it's performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be higher than 98% accuracy! Some simple models have been known to get up to 99.7% accuracy.
End of explanation |
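# Extra sanity check (not part of the original notebook): predict a single test image and
# compare against its label.
single_prediction = np.array(model.predict([testX[0]])).argmax(axis=1)[0]
print("Predicted: {}, actual: {}".format(single_prediction, testY[0].argmax()))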
1,771 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Correlation Matrix
Calling df.corr() on a full pandas DataFrame returns a square matrix containing all pairs of correlations.
By plotting them as a heatmap, you can visualize many correlations more efficiently.
Correlation matrix with two perfectly correlated features
Step1: Correlation matrix with mildly-correlated features
Step2: Correlation matrix with not-very-correlated features | Python Code:
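import numpy as np
import pandas as pd
import seaborn as sns

# `x_plus_noise` is defined earlier in the original notebook; the version below is a guessed,
# minimal stand-in (assumption) so the cells in this excerpt run on their own: feature 'y'
# equals 'x' mixed with uniform noise, with `randomness` controlling the mix.
def x_plus_noise(randomness, n=500):
    x = np.random.uniform(size=n)
    y = (1 - randomness) * x + randomness * np.random.uniform(size=n)
    return pd.DataFrame({'x': x, 'y': y})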
df = x_plus_noise(randomness=0)
sns.heatmap(df.corr(), vmin=0, vmax=1)
df.corr()
Explanation: Correlation Matrix
Calling df.corr() on a full pandas DataFrame returns a square matrix containing all pairs of correlations.
By plotting them as a heatmap, you can visualize many correlations more efficiently.
Correlation matrix with two perfectly correlated features
End of explanation
df = x_plus_noise(randomness=0.5)
sns.heatmap(df.corr(), vmin=0, vmax=1)
df.corr()
Explanation: Correlation matrix with mildly-correlated features
End of explanation
df = x_plus_noise(randomness=1)
sns.heatmap(df.corr(), vmin=0, vmax=1)
df.corr()
Explanation: Correlation matrix with not-very-correlated features
End of explanation |
1,772 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gravitational Redshift (rv_grav)
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
Step1: As always, let's do imports and initialize a logger and a new bundle.
Step2: Relevant Parameters
Gravitational redshifts are only accounted for flux-weighted RVs (dynamical RVs literally only return the z-component of the velocity of the center-of-mass of each star).
First let's run a model with the default radii for our stars.
Step3: Note that gravitational redshift effects for RVs (rv_grav) are disabled by default. We could call add_compute and then set them to be true, or just temporarily override them by passing rv_grav to the run_compute call.
Step4: Now let's run another model but with much smaller stars (but with the same masses).
Step5: Now let's run another model, but with gravitational redshift effects disabled
Step6: Influence on Radial Velocities
Step7: Besides the obvious change in the Rossiter-McLaughlin effect (not due to gravitational redshift), we can see that making the radii smaller shifts the entire RV curve up (the spectra are redshifted as they have to climb out of a steeper potential at the surface of the stars). | Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
Explanation: Gravitational Redshift (rv_grav)
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.add_dataset('rv', times=np.linspace(0,1,101), dataset='rv01')
b.set_value_all('ld_mode', 'manual')
b.set_value_all('ld_func', 'logarithmic')
b.set_value_all('ld_coeffs', [0.0, 0.0])
b.set_value_all('atm', 'blackbody')
Explanation: As always, let's do imports and initialize a logger and a new bundle.
End of explanation
print(b['value@requiv@primary@component'], b['value@requiv@secondary@component'])
Explanation: Relevant Parameters
Gravitational redshifts are only accounted for in flux-weighted RVs (dynamical RVs literally only return the z-component of the velocity of the center of mass of each star).
First let's run a model with the default radii for our stars.
End of explanation
b.run_compute(rv_method='flux-weighted', rv_grav=True, irrad_method='none', model='defaultradii_true')
Explanation: Note that gravitational redshift effects for RVs (rv_grav) are disabled by default. We could call add_compute and then set them to be true, or just temporarily override them by passing rv_grav to the run_compute call.
End of explanation
b['requiv@primary'] = 0.4
b['requiv@secondary'] = 0.4
b.run_compute(rv_method='flux-weighted', rv_grav=True, irrad_method='none', model='smallradii_true')
Explanation: Now let's run another model but with much smaller stars (but with the same masses).
End of explanation
b.run_compute(rv_method='flux-weighted', rv_grav=False, irrad_method='none', model='smallradii_false')
Explanation: Now let's run another model, but with gravitational redshift effects disabled
End of explanation
afig, mplfig = b.filter(model=['defaultradii_true', 'smallradii_true']).plot(legend=True, show=True)
afig, mplfig = b.filter(model=['smallradii_true', 'smallradii_false']).plot(legend=True, show=True)
Explanation: Influence on Radial Velocities
End of explanation
print(b['rvs@rv01@primary@defaultradii_true'].get_value().min())
print(b['rvs@rv01@primary@smallradii_true'].get_value().min())
print(b['rvs@rv01@primary@smallradii_false'].get_value().min())
print(b['rvs@rv01@primary@defaultradii_true'].get_value().max())
print(b['rvs@rv01@primary@smallradii_true'].get_value().max())
print(b['rvs@rv01@primary@smallradii_false'].get_value().max())
Explanation: Besides the obvious change in the Rossiter-McLaughlin effect (not due to gravitational redshift), we can see that making the radii smaller shifts the entire RV curve up (the spectra are redshifted as they have to climb out of a steeper potential at the surface of the stars).
End of explanation |
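# Rough follow-up (added sketch): the size of the gravitational-redshift contribution can be
# read off by differencing the small-radii models with rv_grav enabled vs. disabled.
rv_on = b['rvs@rv01@primary@smallradii_true'].get_value()
rv_off = b['rvs@rv01@primary@smallradii_false'].get_value()
print("mean RV shift from rv_grav: {:.3f} km/s".format((rv_on - rv_off).mean()))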
1,773 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Algorithms Exercise 3
Imports
Step2: Character counting and entropy
Write a function char_probs that takes a string and computes the probabilities of each character in the string
Step4: The entropy is a quantiative measure of the disorder of a probability distribution. It is used extensively in Physics, Statistics, Machine Learning, Computer Science and Information Science. Given a set of probabilities $P_i$, the entropy is defined as
Step5: Use IPython's interact function to create a user interface that allows you to type a string into a text box and see the entropy of the character probabilities of the string. | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact
Explanation: Algorithms Exercise 3
Imports
End of explanation
def char_probs(s):
    """Find the probabilities of the unique characters in the string s.

    Parameters
    ----------
    s : str
        A string of characters.

    Returns
    -------
    probs : dict
        A dictionary whose keys are the unique characters in s and whose values
        are the probabilities of those characters.
    """
    counts = {}
    for c in s:
        counts[c] = counts.get(c, 0) + 1
    # Normalize each count by the total number of characters.
    return {c: count / len(s) for c, count in counts.items()}
test1 = char_probs('aaaa')
assert np.allclose(test1['a'], 1.0)
test2 = char_probs('aabb')
assert np.allclose(test2['a'], 0.5)
assert np.allclose(test2['b'], 0.5)
test3 = char_probs('abcd')
assert np.allclose(test3['a'], 0.25)
assert np.allclose(test3['b'], 0.25)
assert np.allclose(test3['c'], 0.25)
assert np.allclose(test3['d'], 0.25)
Explanation: Character counting and entropy
Write a function char_probs that takes a string and computes the probabilities of each character in the string:
First do a character count and store the result in a dictionary.
Then divide each character counts by the total number of character to compute the normalized probabilties.
Return the dictionary of characters (keys) and probabilities (values).
End of explanation
lst=char_probs('abcd')
a=np.array(lst['a'])
a
def entropy(d):
    """Compute the entropy of a dict d whose values are probabilities."""
    probs = np.array(list(d.values()))
    return -np.sum(probs * np.log2(probs))
assert np.allclose(entropy({'a': 0.5, 'b': 0.5}), 1.0)
assert np.allclose(entropy({'a': 1.0}), 0.0)
Explanation: The entropy is a quantitative measure of the disorder of a probability distribution. It is used extensively in Physics, Statistics, Machine Learning, Computer Science and Information Science. Given a set of probabilities $P_i$, the entropy is defined as:
$$H = - \Sigma_i P_i \log_2(P_i)$$
In this expression $\log_2$ is the base 2 log (np.log2), which is commonly used in information science. In Physics the natural log is often used in the definition of entropy.
Write a funtion entropy that computes the entropy of a probability distribution. The probability distribution will be passed as a Python dict: the values in the dict will be the probabilities.
To compute the entropy, you should:
First convert the values (probabilities) of the dict to a Numpy array of probabilities.
Then use other Numpy functions (np.log2, etc.) to compute the entropy.
Don't use any for or while loops in your code.
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
assert True # use this for grading the pi digits histogram
Explanation: Use IPython's interact function to create a user interface that allows you to type a string into a text box and see the entropy of the character probabilities of the string.
End of explanation |
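# One possible solution (added sketch) for the exercise above, kept separate from the graded cell:
def show_entropy(s='abcd'):
    if len(s) > 0:
        print(entropy(char_probs(s)))

interact(show_entropy, s='abcd');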
1,774 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The notebook interface
The IPython -- being rebranded as Jupyter -- notebook interface is becoming a standard for a number of languages other than Python
Step1: For instance, you can benchmark the speed of a function
Step2: You also have direct access to the operating system calls
Step3: You can, of course, always access the result of the previous command
Step4: Some other cool features to explore
Step5: We also want nicely formatted output
Step6: Then we bulk-import everything we might need
Step7: We define some symbols
Step8: It renders nicely as $\frac{\sin\pi x}{\cos\pi x}$. Now a symbolic integral
Step9: Evaluate this for a particular value of x and y
Step10: SymPy, just like Mathematica, keeps results symbolic as long as possible. If you want a numerical result, you specifically have to request it
Step11: The quantum physics module is especially helpful. The noncommutative algebra is better than the respective package in Mathematica, especially when it comes to non-Hermitian variables
Step12: We can easily define Hamiltonians. For instance, the Hubbard model on a chain is as follows
Step13: List comprehensions
Python was retrofitted with some elements of functional programming, mainly building on ideas coming from Haskell. It nevertheless remains an object-oriented language, but it is highly opportunistic. This approach is a lot like Mathematica, which is quintessentially functional, but you can follow any programming paradigm when using it.
List comprehensions are probably the most used construct in Python from functional programming. In fact, it is considered a Pythonesque way of doing things. It is a quick way of generating transformed lists from other lists
Step14: The expression on the left inside the list comprehension is similar to a pure function and we can have more complex forms
Step15: We can also include conditionals in the list comprehension
Step16: It is easy to emulate the MapIndex function of Mathematica
Step17: List comprehension does not actually return a list, it returns an iterator. Iterators are essentially generating functions for list and they are very important to functional programming in Python. Iterators have a next() function to retrieve subsequent elements, making them very easy to loop over. For instance
Step18: This is, of course, not very useful, as we could have just used the list itself. List comprehensions are a more sensible way of getting iterators. Another way of creating an iterator is by defining a function that returns values through yield rather than through return. This allows an internal state for the function and lets it continue where it left it off. For example
Step19: There is no shortage of useful examples for using iterators. Take combinatoric functions, for instance | Python Code:
%quickref
Explanation: The notebook interface
The IPython -- being rebranded as Jupyter -- notebook interface is becoming a standard for a number of languages other than Python: Julia, Scala, R, Haskell, bash are all getting their kernels in IPython. Since Python allows you to call MATLAB anyway, you can also use the notebook interface for MATLAB if you wish so.
Many features of IPython are independent of the underlying language. The so-called magic functions make it extremely powerful. These are prefixed by a percentage sign. For a quick reference, try
End of explanation
%timeit range(1000000)
Explanation: For instance, you can benchmark the speed of a function:
End of explanation
!uname -a
Explanation: You also have direct access to the operating system calls:
End of explanation
print(_)
Explanation: You can, of course, always access the result of the previous command:
End of explanation
from __future__ import print_function, division
Explanation: Some other cool features to explore: LaTeX export of notebooks with support for Bibtex -- also works for HTML -- and launching parallel computations in Python interpreters distributed across a cluster.
Symbolic operations
First, let us bypass the debate over Python 2 and 3 by forcing us to write code that functions identically in either version:
End of explanation
from sympy.interactive import printing
printing.init_printing(use_latex='mathjax')
Explanation: We also want nicely formatted output:
End of explanation
from sympy import *
Explanation: Then we bulk-import everything we might need:
End of explanation
x, y = symbols('x y')
sin(pi*x)/cos(pi*y)
Explanation: We define some symbols:
End of explanation
integrate(pi*sin(x*y), x)
Explanation: It renders nicely as $\frac{\sin\pi x}{\cos\pi y}$. Now a symbolic integral:
End of explanation
integrate(pi*sin(x*y), x).subs([(x, pi), (y, 1)])
Explanation: Evaluate this for a particular value of x and y:
End of explanation
N(integrate(pi*sin(x*y), x).subs([(x, pi), (y, 1)]))
Explanation: SymPy, just like Mathematica, keeps results symbolic as long as possible. If you want a numerical result, you specifically have to request it:
End of explanation
from sympy.physics.quantum import *
X = HermitianOperator('X')
Y = Operator('Y')
Dagger(X*Y)
Explanation: The quantum physics module is especially helpful. The noncommutative algebra is better than the respective package in Mathematica, especially when it comes to non-Hermitian variables:
End of explanation
t = 1.0
U = 4.0
n_sites = 2
cu = [Operator("%s_%s_u" % ("c", i + 1)) for i in range(n_sites)]
cd = [Operator("%s_%s_d" % ("c", i + 1)) for i in range(n_sites)]
hamiltonian = sum(U*Dagger(cu[r])*cu[r]*Dagger(cd[r])*cd[r] for r in range(n_sites))
hamiltonian += sum(-t*(Dagger(cu[r])*cu[r+1]+Dagger(cu[r+1])*cu[r]
+Dagger(cd[r])*cd[r+1]+Dagger(cd[r+1])*cd[r]) for r in range(n_sites-1))
expand(hamiltonian)
Explanation: We can easily define Hamiltonians. For instance, the Hubbard model on a chain is as follows:
End of explanation
[i**2 for i in range(5)]
Explanation: List comprehensions
Python was retrofitted with some elements of functional programming, mainly building on ideas coming from Haskell. It nevertheless remains an object-oriented language, but it is highly opportunistic. This approach is a lot like Mathematica, which is quintessentially functional, but you can follow any programming paradigm when using it.
List comprehensions are probably the most used construct in Python from functional programming. In fact, it is considered a Pythonesque way of doing things. It is a quick way of generating transformed lists from other lists: list comprehension is a simple map function in disguise.
End of explanation
[i*j for i in range(1, 4) for j in range(1, 4)]
Explanation: The expression on the left inside the list comprehension is similar to a pure function and we can have more complex forms:
End of explanation
[i for i in range(30) if i%3 == 0]
Explanation: We can also include conditionals in the list comprehension:
End of explanation
[[a, i] for i, a in enumerate([sqrt(2), pi, x])]
Explanation: It is easy to emulate the MapIndexed function of Mathematica:
End of explanation
l = iter([1, 2, 3])
next(l)
Explanation: A list comprehension itself returns a list; its close cousin, the generator expression (written with parentheses instead of brackets), returns an iterator. Iterators are essentially generating functions for lists and they are very important to functional programming in Python. Iterators have a next() function to retrieve subsequent elements, making them very easy to loop over. For instance:
End of explanation
def squares(N):
for i in range(N):
yield(i**2)
for j in squares(5):
print(j)
Explanation: This is, of course, not very useful, as we could have just used the list itself. Generator expressions are a more sensible way of getting iterators. Another way of creating an iterator is by defining a function that returns values through yield rather than through return. This allows an internal state for the function and lets it continue where it left off. For example:
End of explanation
import itertools
for combination in itertools.combinations([1, 2, 3, 4, 5], 2):
print(combination)
Explanation: There is no shortage of useful examples for using iterators. Take combinatoric functions, for instance:
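itertools offers several related generators that work the same way, for example:
list(itertools.permutations([1, 2, 3], 2))   # [(1, 2), (1, 3), (2, 1), (2, 3), (3, 1), (3, 2)]
list(itertools.product([0, 1], repeat=2))    # [(0, 0), (0, 1), (1, 0), (1, 1)]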
End of explanation |
1,775 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Peak finder
Can one break a peak into several gaussian peaks using pymc?
Step1: Simulate data
Step2: Now we know that the answer is two overlaid gaussians. So model it that way and see what we can get. | Python Code:
# http://onlinelibrary.wiley.com/doi/10.1002/2016JA022652/epdf
import datetime
import pymc
from pprint import pprint
import numpy as np
import matplotlib.pyplot as plt
import spacepy.plot as spp
print(datetime.datetime.now().isoformat())
Explanation: Peak finder
Can one break a peak into several gaussian peaks using pymc?
End of explanation
p1 = np.asarray([pymc.Normal('P1', 2, 1/0.04).value for v in range(1000)])
p2 = np.asarray([pymc.Normal('P2', 2.5, 1/0.04).value for v in range(1000)])
dat = np.hstack((p1,p2))
plt.hist(dat)
plt.hist(p1, alpha=0.5)
plt.hist(p2, alpha=0.5)
Explanation: Simulate data
End of explanation
# cent = pymc.Container([pymc.Uniform('cent1', 1.5, 2.5), pymc.Uniform('cent2', cent1, 3.0)])  # leftover from the commented-out model below; cent1 is never defined, so keep this disabled
# # cent1 = pymc.Uniform('cent1', 1.5, 2.5)
# # cent2 = pymc.Uniform('cent2', cent1, 3.0)
# w = pymc.Container([pymc.Uniform('w1', 0, 30), pymc.Uniform('w2', 0, 30)])
# # w1 = pymc.Uniform('w1', 0, 30)
# # w2 = pymc.Uniform('w2', 0, 30)
# I = pymc.Container([pymc.Categorical('g1', [0.5]*len(cent)), pymc.Categorical('g2', [0.5]*len(cent))])
# p1 = pymc.Uniform('p1', 0, 1)
# p2 = 1-p1
# val = pymc.Container([pymc.Normal('val1', cent, w), pymc.Normal('val2', cent, w)])
# obsval = pymc.Normal('obs', val, w, observed=True, value=dat)
sigmas = pymc.Normal('sigmas', mu=2.25, tau=1000, size=2)
centers = pymc.Normal('centers', [1.8, 2.25], [20, 20], size=2)
alpha = pymc.Beta('alpha', alpha=2, beta=3)
category = pymc.Container([pymc.Categorical("category%i" % i, [alpha, 1 - alpha])
for i in range(len(dat))])
observations = pymc.Container([pymc.Normal('samples_model%i' % i,
mu=centers[category[i]], tau=1/(sigmas[category[i]]**2),
value=dat[i], observed=True) for i in range(len(dat))])
model = pymc.Model([observations, category, alpha, sigmas, centers])
mcmc = pymc.MCMC(model)
# initialize in a good place to reduce the number of steps required
centers.value = [1.5, 3]
# set a custom proposal for centers, since the default is bad
mcmc.use_step_method(pymc.Metropolis, centers, proposal_sd=1.5/np.sqrt(len(dat)))
# set a custom proposal for category, since the default is bad
for i in range(len(dat)):
mcmc.use_step_method(pymc.DiscreteMetropolis, category[i], proposal_distribution='Prior')
mcmc.sample(100) # beware sampling takes much longer now
# check the acceptance rates
print(mcmc.step_method_dict[category[0]][0].ratio)
print(mcmc.step_method_dict[centers][0].ratio)
print(mcmc.step_method_dict[alpha][0].ratio)
pymc.Matplot.plot(mcmc, centers)
Explanation: Now we know that the answer is two overlaid gaussians. So model it that way and see what we can get.
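Once sampling has finished, the fitted parameters can be pulled out of the trace; a minimal sketch, assuming the mcmc object above and pymc2's trace-by-name access:
center_samples = mcmc.trace('centers')[:]   # posterior samples of the two means
print(center_samples.mean(axis=0))          # should land near the true values 2.0 and 2.5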
End of explanation |
1,776 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reading an event file
Read events from a file. For a more detailed guide on how to read events
using MNE-Python, see tut_epoching_and_averaging.
Step1: Reading events
Below we'll read in an events file. We suggest that this file end in
-eve.fif. Note that we can read in the entire events file, or only
events corresponding to particular event types with the include and
exclude parameters.
Step2: Events objects are essentially numpy arrays with three columns
Step3: Plotting events
We can also plot events in order to visualize how events occur over the
course of our recording session. Below we'll plot our three event types
to see which ones were included.
Step4: Writing events
Finally, we can write events to disk. Remember to use the naming convention
-eve.fif for your file. | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
# Chris Holdgraf <[email protected]>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
Explanation: Reading an event file
Read events from a file. For a more detailed guide on how to read events
using MNE-Python, see tut_epoching_and_averaging.
End of explanation
events_1 = mne.read_events(fname, include=1)
events_1_2 = mne.read_events(fname, include=[1, 2])
events_not_4_32 = mne.read_events(fname, exclude=[4, 32])
Explanation: Reading events
Below we'll read in an events file. We suggest that this file end in
-eve.fif. Note that we can read in the entire events file, or only
events corresponding to particular event types with the include and
exclude parameters.
End of explanation
print(events_1[:5], '\n\n---\n\n', events_1_2[:5], '\n\n')
for ind, before, after in events_1[:5]:
print("At sample %d stim channel went from %d to %d"
% (ind, before, after))
Explanation: Events objects are essentially numpy arrays with three columns:
event_sample | previous_event_id | event_id
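For instance, such an array can be built by hand from plain integers (the sample numbers and ids here are purely illustrative):
import numpy as np
my_events = np.array([[1000, 0, 1],
                      [2000, 0, 2]])   # two events, with ids 1 and 2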
End of explanation
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
mne.viz.plot_events(events_1, axes=axs[0], show=False)
axs[0].set(title="restricted to event 1")
mne.viz.plot_events(events_1_2, axes=axs[1], show=False)
axs[1].set(title="restricted to event 1 or 2")
mne.viz.plot_events(events_not_4_32, axes=axs[2], show=False)
axs[2].set(title="keep all but 4 and 32")
plt.setp([ax.get_xticklabels() for ax in axs], rotation=45)
plt.tight_layout()
plt.show()
Explanation: Plotting events
We can also plot events in order to visualize how events occur over the
course of our recording session. Below we'll plot our three event types
to see which ones were included.
End of explanation
mne.write_events('example-eve.fif', events_1)
Explanation: Writing events
Finally, we can write events to disk. Remember to use the naming convention
-eve.fif for your file.
End of explanation |
1,777 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visual Comparison Between Different Classification Methods in Shogun
Notebook by Youssef Emad El-Din (Github ID
Step1: <a id = "section1">Data Generation and Visualization</a>
Transformation of features to Shogun format using <a href="http
Step5: Data visualization methods.
Step6: <a id="section2" href="http
Step7: SVM - Kernels
Shogun provides many options for using kernel functions. Kernels in Shogun are based on two classes which are <a href="http
Step8: <a id ="section2c" href="http
Step9: <a id ="section2d" href="http
Step10: <a id ="section3" href="http
Step11: <a id ="section4" href="http
Step12: <a id ="section5" href="http
Step13: <a id ="section6" href="http
Step14: <a id ="section7" href="http
Step15: <a id ="section7b">Probit Likelihood model</a>
Shogun's <a href="http
Step16: <a id="section8">Putting It All Together</a> | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
from shogun import *
import shogun as sg
#Needed lists for the final plot
classifiers_linear = []*10
classifiers_non_linear = []*10
classifiers_names = []*10
fadings = []*10
Explanation: Visual Comparison Between Different Classification Methods in Shogun
Notebook by Youssef Emad El-Din (Github ID: <a href="https://github.com/youssef-emad/">youssef-emad</a>)
This notebook demonstrates different classification methods in Shogun. The point is to compare and visualize the decision boundaries of different classifiers on two different datasets, where one is linearly separable and one is not.
<a href ="#section1">Data Generation and Visualization</a>
<a href ="#section2">Support Vector Machine</a>
<a href ="#section2a">Linear SVM</a>
<a href ="#section2b">Gaussian Kernel</a>
<a href ="#section2c">Sigmoid Kernel</a>
<a href ="#section2d">Polynomial Kernel</a>
<a href ="#section3">Naive Bayes</a>
<a href ="#section4">Nearest Neighbors</a>
<a href ="#section5">Linear Discriminant Analysis</a>
<a href ="#section6">Quadratic Discriminat Analysis</a>
<a href ="#section7">Gaussian Process</a>
<a href ="#section7a">Logit Likelihood model</a>
<a href ="#section7b">Probit Likelihood model</a>
<a href ="#section8">Putting It All Together</a>
End of explanation
shogun_feats_linear = features(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'toy/classifier_binary_2d_linear_features_train.dat')))
shogun_labels_linear = BinaryLabels(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'toy/classifier_binary_2d_linear_labels_train.dat')))
shogun_feats_non_linear = features(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'toy/classifier_binary_2d_nonlinear_features_train.dat')))
shogun_labels_non_linear = BinaryLabels(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'toy/classifier_binary_2d_nonlinear_labels_train.dat')))
feats_linear = shogun_feats_linear.get('feature_matrix')
labels_linear = shogun_labels_linear.get('labels')
feats_non_linear = shogun_feats_non_linear.get('feature_matrix')
labels_non_linear = shogun_labels_non_linear.get('labels')
Explanation: <a id = "section1">Data Generation and Visualization</a>
Transformation of features to Shogun format using <a href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1DenseFeatures.html">RealFeatures</a> and <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1BinaryLabels.html">BinaryLables</a> classes.
End of explanation
def plot_binary_data(plot,X_train, y_train):
This function plots 2D binary data with different colors for different labels.
plot.xlabel(r"$x$")
plot.ylabel(r"$y$")
plot.plot(X_train[0, np.argwhere(y_train == 1)], X_train[1, np.argwhere(y_train == 1)], 'ro')
plot.plot(X_train[0, np.argwhere(y_train == -1)], X_train[1, np.argwhere(y_train == -1)], 'bo')
def compute_plot_isolines(classifier,feats,size=200,fading=True):
This function computes the classification of points on the grid
to get the decision boundaries used in plotting
x1 = np.linspace(1.2*min(feats[0]), 1.2*max(feats[0]), size)
x2 = np.linspace(1.2*min(feats[1]), 1.2*max(feats[1]), size)
x, y = np.meshgrid(x1, x2)
plot_features=features(np.array((np.ravel(x), np.ravel(y))))
if fading == True:
plot_labels = classifier.apply(plot_features).get('current_values')
else:
plot_labels = classifier.apply(plot_features).get('labels')
z = plot_labels.reshape((size, size))
return x,y,z
def plot_model(plot,classifier,features,labels,fading=True):
This function plots an input classification model
x,y,z = compute_plot_isolines(classifier,features,fading=fading)
plot.pcolor(x,y,z,cmap='RdBu_r')
plot.contour(x, y, z, linewidths=1, colors='black')
plot_binary_data(plot,features, labels)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("Linear Features")
plot_binary_data(plt,feats_linear, labels_linear)
plt.subplot(122)
plt.title("Non Linear Features")
plot_binary_data(plt,feats_non_linear, labels_non_linear)
Explanation: Data visualization methods.
End of explanation
plt.figure(figsize=(15,5))
c = 0.5
epsilon =1e-3
svm_linear = LibLinear(c,shogun_feats_linear,shogun_labels_linear)
svm_linear.put('liblinear_solver_type', L2R_L2LOSS_SVC)
svm_linear.put('epsilon', epsilon)
svm_linear.train()
classifiers_linear.append(svm_linear)
classifiers_names.append("SVM Linear")
fadings.append(True)
plt.subplot(121)
plt.title("Linear SVM - Linear Features")
plot_model(plt,svm_linear,feats_linear,labels_linear)
svm_non_linear = LibLinear(c,shogun_feats_non_linear,shogun_labels_non_linear)
svm_non_linear.put('liblinear_solver_type', L2R_L2LOSS_SVC)
svm_non_linear.put('epsilon', epsilon)
svm_non_linear.train()
classifiers_non_linear.append(svm_non_linear)
plt.subplot(122)
plt.title("Linear SVM - Non Linear Features")
plot_model(plt,svm_non_linear,feats_non_linear,labels_non_linear)
Explanation: <a id="section2" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1SVM.html">Support Vector Machine</a>
<a id="section2a" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1LibLinear.html">Linear SVM</a>
Shogun provides <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1LibLinear.html">Liblinear</a>, a library for large-scale linear learning focused on SVMs for classification.
End of explanation
gaussian_c=0.7
gaussian_kernel_linear=sg.kernel("GaussianKernel", log_width=np.log(100))
gaussian_svm_linear=sg.machine('LibSVM', C1=gaussian_c, C2=gaussian_c, kernel=gaussian_kernel_linear, labels=shogun_labels_linear)
gaussian_svm_linear.train(shogun_feats_linear)
classifiers_linear.append(gaussian_svm_linear)
fadings.append(True)
gaussian_kernel_non_linear=sg.kernel("GaussianKernel", log_width=np.log(100))
gaussian_svm_non_linear=sg.machine('LibSVM', C1=gaussian_c, C2=gaussian_c, kernel=gaussian_kernel_non_linear, labels=shogun_labels_non_linear)
gaussian_svm_non_linear.train(shogun_feats_non_linear)
classifiers_non_linear.append(gaussian_svm_non_linear)
classifiers_names.append("SVM Gaussian Kernel")
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("SVM Gaussian Kernel - Linear Features")
plot_model(plt,gaussian_svm_linear,feats_linear,labels_linear)
plt.subplot(122)
plt.title("SVM Gaussian Kernel - Non Linear Features")
plot_model(plt,gaussian_svm_non_linear,feats_non_linear,labels_non_linear)
Explanation: SVM - Kernels
Shogun provides many options for using kernel functions. Kernels in Shogun are built on two base classes, <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1Kernel.html">Kernel</a> and <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1KernelMachine.html">KernelMachine</a>.
<a id ="section2b" href = "http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1GaussianKernel.html">Gaussian Kernel</a>
End of explanation
sigmoid_c = 0.9
sigmoid_kernel_linear = SigmoidKernel(shogun_feats_linear,shogun_feats_linear,200,1,0.5)
sigmoid_svm_linear = sg.machine('LibSVM', C1=sigmoid_c, C2=sigmoid_c, kernel=sigmoid_kernel_linear, labels=shogun_labels_linear)
sigmoid_svm_linear.train()
classifiers_linear.append(sigmoid_svm_linear)
classifiers_names.append("SVM Sigmoid Kernel")
fadings.append(True)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("SVM Sigmoid Kernel - Linear Features")
plot_model(plt,sigmoid_svm_linear,feats_linear,labels_linear)
sigmoid_kernel_non_linear = SigmoidKernel(shogun_feats_non_linear,shogun_feats_non_linear,400,2.5,2)
sigmoid_svm_non_linear = sg.machine('LibSVM', C1=sigmoid_c, C2=sigmoid_c, kernel=sigmoid_kernel_non_linear, labels=shogun_labels_non_linear)
sigmoid_svm_non_linear.train()
classifiers_non_linear.append(sigmoid_svm_non_linear)
plt.subplot(122)
plt.title("SVM Sigmoid Kernel - Non Linear Features")
plot_model(plt,sigmoid_svm_non_linear,feats_non_linear,labels_non_linear)
Explanation: <a id ="section2c" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CSigmoidKernel.html">Sigmoid Kernel</a>
End of explanation
poly_c = 0.5
degree = 4
poly_kernel_linear = sg.kernel('PolyKernel', degree=degree, c=1.0)
poly_kernel_linear.init(shogun_feats_linear, shogun_feats_linear)
poly_svm_linear = sg.machine('LibSVM', C1=poly_c, C2=poly_c, kernel=poly_kernel_linear, labels=shogun_labels_linear)
poly_svm_linear.train()
classifiers_linear.append(poly_svm_linear)
classifiers_names.append("SVM Polynomial kernel")
fadings.append(True)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("SVM Polynomial Kernel - Linear Features")
plot_model(plt,poly_svm_linear,feats_linear,labels_linear)
poly_kernel_non_linear = sg.kernel('PolyKernel', degree=degree, c=1.0)
poly_kernel_non_linear.init(shogun_feats_non_linear, shogun_feats_non_linear)
poly_svm_non_linear = sg.machine('LibSVM', C1=poly_c, C2=poly_c, kernel=poly_kernel_non_linear, labels=shogun_labels_non_linear)
poly_svm_non_linear.train()
classifiers_non_linear.append(poly_svm_non_linear)
plt.subplot(122)
plt.title("SVM Polynomial Kernel - Non Linear Features")
plot_model(plt,poly_svm_non_linear,feats_non_linear,labels_non_linear)
Explanation: <a id ="section2d" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CPolyKernel.html">Polynomial Kernel</a>
End of explanation
multiclass_labels_linear = shogun_labels_linear.get('labels')
for i in range(0,len(multiclass_labels_linear)):
if multiclass_labels_linear[i] == -1:
multiclass_labels_linear[i] = 0
multiclass_labels_non_linear = shogun_labels_non_linear.get('labels')
for i in range(0,len(multiclass_labels_non_linear)):
if multiclass_labels_non_linear[i] == -1:
multiclass_labels_non_linear[i] = 0
shogun_multiclass_labels_linear = MulticlassLabels(multiclass_labels_linear)
shogun_multiclass_labels_non_linear = MulticlassLabels(multiclass_labels_non_linear)
naive_bayes_linear = GaussianNaiveBayes()
naive_bayes_linear.put('features', shogun_feats_linear)
naive_bayes_linear.put('labels', shogun_multiclass_labels_linear)
naive_bayes_linear.train()
classifiers_linear.append(naive_bayes_linear)
classifiers_names.append("Naive Bayes")
fadings.append(False)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("Naive Bayes - Linear Features")
plot_model(plt,naive_bayes_linear,feats_linear,labels_linear,fading=False)
naive_bayes_non_linear = GaussianNaiveBayes()
naive_bayes_non_linear.put('features', shogun_feats_non_linear)
naive_bayes_non_linear.put('labels', shogun_multiclass_labels_non_linear)
naive_bayes_non_linear.train()
classifiers_non_linear.append(naive_bayes_non_linear)
plt.subplot(122)
plt.title("Naive Bayes - Non Linear Features")
plot_model(plt,naive_bayes_non_linear,feats_non_linear,labels_non_linear,fading=False)
Explanation: <a id ="section3" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1GaussianNaiveBayes.html">Naive Bayes</a>
End of explanation
number_of_neighbors = 10
distances_linear = sg.distance('EuclideanDistance')
distances_linear.init(shogun_feats_linear, shogun_feats_linear)
knn_linear = KNN(number_of_neighbors,distances_linear,shogun_labels_linear)
knn_linear.train()
classifiers_linear.append(knn_linear)
classifiers_names.append("Nearest Neighbors")
fadings.append(False)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("Nearest Neighbors - Linear Features")
plot_model(plt,knn_linear,feats_linear,labels_linear,fading=False)
distances_non_linear = sg.distance('EuclideanDistance')
distances_non_linear.init(shogun_feats_non_linear, shogun_feats_non_linear)
knn_non_linear = KNN(number_of_neighbors,distances_non_linear,shogun_labels_non_linear)
knn_non_linear.train()
classifiers_non_linear.append(knn_non_linear)
plt.subplot(122)
plt.title("Nearest Neighbors - Non Linear Features")
plot_model(plt,knn_non_linear,feats_non_linear,labels_non_linear,fading=False)
Explanation: <a id ="section4" href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1KNN.html">Nearest Neighbors</a>
End of explanation
gamma = 0.1
lda_linear = sg.machine('LDA', gamma=gamma, labels=shogun_labels_linear)
lda_linear.train(shogun_feats_linear)
classifiers_linear.append(lda_linear)
classifiers_names.append("LDA")
fadings.append(True)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("LDA - Linear Features")
plot_model(plt,lda_linear,feats_linear,labels_linear)
lda_non_linear = sg.machine('LDA', gamma=gamma, labels=shogun_labels_non_linear)
lda_non_linear.train(shogun_feats_non_linear)
classifiers_non_linear.append(lda_non_linear)
plt.subplot(122)
plt.title("LDA - Non Linear Features")
plot_model(plt,lda_non_linear,feats_non_linear,labels_non_linear)
Explanation: <a id ="section5" href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1CLDA.html">Linear Discriminant Analysis</a>
End of explanation
qda_linear = QDA(shogun_feats_linear, shogun_multiclass_labels_linear)
qda_linear.train()
classifiers_linear.append(qda_linear)
classifiers_names.append("QDA")
fadings.append(False)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("QDA - Linear Features")
plot_model(plt,qda_linear,feats_linear,labels_linear,fading=False)
qda_non_linear = QDA(shogun_feats_non_linear, shogun_multiclass_labels_non_linear)
qda_non_linear.train()
classifiers_non_linear.append(qda_non_linear)
plt.subplot(122)
plt.title("QDA - Non Linear Features")
plot_model(plt,qda_non_linear,feats_non_linear,labels_non_linear,fading=False)
Explanation: <a id ="section6" href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1QDA.html">Quadratic Discriminant Analysis</a>
End of explanation
# create Gaussian kernel with width = 2.0
kernel = sg.kernel("GaussianKernel", log_width=np.log(2))
# create zero mean function
zero_mean = ZeroMean()
# create logit likelihood model
likelihood = LogitLikelihood()
# specify EP approximation inference method
inference_model_linear = EPInferenceMethod(kernel, shogun_feats_linear, zero_mean, shogun_labels_linear, likelihood)
# create and train GP classifier, which uses Laplace approximation
gaussian_logit_linear = GaussianProcessClassification(inference_model_linear)
gaussian_logit_linear.train()
classifiers_linear.append(gaussian_logit_linear)
classifiers_names.append("Gaussian Process Logit")
fadings.append(True)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("Gaussian Process - Logit - Linear Features")
plot_model(plt,gaussian_logit_linear,feats_linear,labels_linear)
inference_model_non_linear = EPInferenceMethod(kernel, shogun_feats_non_linear, zero_mean,
shogun_labels_non_linear, likelihood)
gaussian_logit_non_linear = GaussianProcessClassification(inference_model_non_linear)
gaussian_logit_non_linear.train()
classifiers_non_linear.append(gaussian_logit_non_linear)
plt.subplot(122)
plt.title("Gaussian Process - Logit - Non Linear Features")
plot_model(plt,gaussian_logit_non_linear,feats_non_linear,labels_non_linear)
Explanation: <a id ="section7" href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1GaussianProcessBinaryClassification.html">Gaussian Process</a>
<a id ="section7a">Logit Likelihood model</a>
Shogun's <a href= "http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1LogitLikelihood.html">LogitLikelihood</a> and <a href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1EPInferenceMethod.html">EPInferenceMethod</a> classes are used.
End of explanation
likelihood = ProbitLikelihood()
inference_model_linear = EPInferenceMethod(kernel, shogun_feats_linear, zero_mean, shogun_labels_linear, likelihood)
gaussian_probit_linear = GaussianProcessClassification(inference_model_linear)
gaussian_probit_linear.train()
classifiers_linear.append(gaussian_probit_linear)
classifiers_names.append("Gaussian Process Probit")
fadings.append(True)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("Gaussian Process - Probit - Linear Features")
plot_model(plt,gaussian_probit_linear,feats_linear,labels_linear)
inference_model_non_linear = EPInferenceMethod(kernel, shogun_feats_non_linear,
zero_mean, shogun_labels_non_linear, likelihood)
gaussian_probit_non_linear = GaussianProcessClassification(inference_model_non_linear)
gaussian_probit_non_linear.train()
classifiers_non_linear.append(gaussian_probit_non_linear)
plt.subplot(122)
plt.title("Gaussian Process - Probit - Non Linear Features")
plot_model(plt,gaussian_probit_non_linear,feats_non_linear,labels_non_linear)
Explanation: <a id ="section7b">Probit Likelihood model</a>
Shogun's <a href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1ProbitLikelihood.html">ProbitLikelihood</a> class is used.
End of explanation
figure = plt.figure(figsize=(30,9))
plt.subplot(2,11,1)
plot_binary_data(plt,feats_linear, labels_linear)
for i in range(0,10):
plt.subplot(2,11,i+2)
plt.title(classifiers_names[i])
plot_model(plt,classifiers_linear[i],feats_linear,labels_linear,fading=fadings[i])
plt.subplot(2,11,12)
plot_binary_data(plt,feats_non_linear, labels_non_linear)
for i in range(0,10):
plt.subplot(2,11,13+i)
plot_model(plt,classifiers_non_linear[i],feats_non_linear,labels_non_linear,fading=fadings[i])
Explanation: <a id="section8">Putting It All Together</a>
End of explanation |
1,778 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Finding similar documents with Word2Vec and WMD
Word Mover's Distance is a promising new tool in machine learning that allows us to submit a query and return the most relevant documents. For example, in a blog post OpenTable uses WMD on restaurant reviews. Using this approach, they are able to mine different aspects of the reviews. In part 2 of this tutorial, we show how you can use Gensim's WmdSimilarity to do something similar to what OpenTable did. Part 1 shows how you can compute the WMD distance between two documents using wmdistance. Part 1 is optional if you only want to use WmdSimilarity, but is also useful in its own right.
First, however, we go through the basics of what WMD is.
Word Mover's Distance basics
WMD is a method that allows us to assess the "distance" between two documents in a meaningful way, even when they have no words in common. It uses word2vec [4] vector embeddings of words. It has been shown to outperform many of the state-of-the-art methods in k-nearest neighbors classification [3].
WMD is illustrated below for two very similar sentences (illustration taken from Vlad Niculae's blog). The sentences have no words in common, but by matching the relevant words, WMD is able to accurately measure the (dis)similarity between the two sentences. The method also uses the bag-of-words representation of the documents (simply put, the word's frequencies in the documents), noted as $d$ in the figure below. The intuition behind the method is that we find the minimum "traveling distance" between documents, in other words the most efficient way to "move" the distribution of document 1 to the distribution of document 2.
<img src='https
Step1: These sentences have very similar content, and as such the WMD should be low. Before we compute the WMD, we want to remove stopwords ("the", "to", etc.), as these do not contribute a lot to the information in the sentences.
Step2: Now, as mentioned earlier, we will be using some downloaded pre-trained embeddings. We load these into a Gensim Word2Vec model class. Note that the embeddings we have chosen here require a lot of memory.
Step3: So let's compute WMD using the wmdistance method.
Step4: Let's try the same thing with two completely unrelated sentences. Notice that the distance is larger.
Step5: Normalizing word2vec vectors
When using the wmdistance method, it is beneficial to normalize the word2vec vectors first, so they all have equal length. To do this, simply call model.init_sims(replace=True) and Gensim will take care of that for you.
Usually, one measures the distance between two word2vec vectors using the cosine distance (see cosine similarity), which measures the angle between vectors. WMD, on the other hand, uses the Euclidean distance. The Euclidean distance between two vectors might be large because their lengths differ, but the cosine distance is small because the angle between them is small; we can mitigate some of this by normalizing the vectors.
Note that normalizing the vectors can take some time, especially if you have a large vocabulary and/or large vectors.
Usage is illustrated in the example below. It just so happens that the vectors we have downloaded are already normalized, so it won't make any difference in this case.
Step6: Part 2
Step7: Below is a plot with a histogram of document lengths and includes the average document length as well. Note that these are the pre-processed documents, meaning stopwords are removed, punctuation is removed, etc. Document lengths have a high impact on the running time of WMD, so when comparing running times with this experiment, the number of documents in query corpus (about 4000) and the length of the documents (about 62 words on average) should be taken into account.
Step8: Now we want to initialize the similarity class with a corpus and a word2vec model (which provides the embeddings and the wmdistance method itself).
Step9: The num_best parameter decides how many results the queries return. Now let's try making a query. The output is a list of indices and similarities of documents in the corpus, sorted by similarity.
Note that the output format is slightly different when num_best is None (i.e. not assigned). In this case, you get an array of similarities, corresponding to each of the documents in the corpus.
The query below is taken directly from one of the reviews in the corpus. Let's see if there are other reviews that are similar to this one.
Step10: The query and the most similar documents, together with the similarities, are printed below. We see that the retrieved documents are discussing the same thing as the query, although using different words. The query talks about getting a seat "outdoor", while the results talk about sitting "outside", and one of them says the restaurant has a "nice view".
Step11: Let's try a different query, also taken directly from one of the reviews in the corpus.
Step12: This time around, the results are more straightforward; the retrieved documents basically contain the same words as the query.
WmdSimilarity normalizes the word embeddings by default (using init_sims(), as explained before), but you can overwrite this behaviour by calling WmdSimilarity with normalize_w2v_and_replace=False. | Python Code:
from time import time
start_nb = time()
# Initialize logging.
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s')
sentence_obama = 'Obama speaks to the media in Illinois'
sentence_president = 'The president greets the press in Chicago'
sentence_obama = sentence_obama.lower().split()
sentence_president = sentence_president.lower().split()
Explanation: Finding similar documents with Word2Vec and WMD
Word Mover's Distance is a promising new tool in machine learning that allows us to submit a query and return the most relevant documents. For example, in a blog post OpenTable uses WMD on restaurant reviews. Using this approach, they are able to mine different aspects of the reviews. In part 2 of this tutorial, we show how you can use Gensim's WmdSimilarity to do something similar to what OpenTable did. Part 1 shows how you can compute the WMD distance between two documents using wmdistance. Part 1 is optional if you only want to use WmdSimilarity, but is also useful in its own right.
First, however, we go through the basics of what WMD is.
Word Mover's Distance basics
WMD is a method that allows us to assess the "distance" between two documents in a meaningful way, even when they have no words in common. It uses word2vec [4] vector embeddings of words. It has been shown to outperform many of the state-of-the-art methods in k-nearest neighbors classification [3].
WMD is illustrated below for two very similar sentences (illustration taken from Vlad Niculae's blog). The sentences have no words in common, but by matching the relevant words, WMD is able to accurately measure the (dis)similarity between the two sentences. The method also uses the bag-of-words representation of the documents (simply put, the word's frequencies in the documents), noted as $d$ in the figure below. The intuition behind the method is that we find the minimum "traveling distance" between documents, in other words the most efficient way to "move" the distribution of document 1 to the distribution of document 2.
<img src='https://vene.ro/images/wmd-obama.png' height='600' width='600'>
This method was introduced in the article "From Word Embeddings To Document Distances" by Matt Kusner et al. (link to PDF). It is inspired by the "Earth Mover's Distance", and employs a solver of the "transportation problem".
In this tutorial, we will learn how to use Gensim's WMD functionality, which consists of the wmdistance method for distance computation, and the WmdSimilarity class for corpus based similarity queries.
Note:
If you use this software, please consider citing [1], [2] and [3].
Running this notebook
You can download this iPython Notebook, and run it on your own computer, provided you have installed Gensim, PyEMD, NLTK, and downloaded the necessary data.
The notebook was run on an Ubuntu machine with an Intel core i7-4770 CPU 3.40GHz (8 cores) and 32 GB memory. Running the entire notebook on this machine takes about 3 minutes.
Part 1: Computing the Word Mover's Distance
To use WMD, we need some word embeddings first of all. You could train a word2vec (see tutorial here) model on some corpus, but we will start by downloading some pre-trained word2vec embeddings. Download the GoogleNews-vectors-negative300.bin.gz embeddings here (warning: 1.5 GB, file is not needed for part 2). Training your own embeddings can be beneficial, but to simplify this tutorial, we will be using pre-trained embeddings at first.
Let's take some sentences to compute the distance between.
End of explanation
# Import and download stopwords from NLTK.
from nltk.corpus import stopwords
from nltk import download
download('stopwords') # Download stopwords list.
# Remove stopwords.
stop_words = stopwords.words('english')
sentence_obama = [w for w in sentence_obama if w not in stop_words]
sentence_president = [w for w in sentence_president if w not in stop_words]
Explanation: These sentences have very similar content, and as such the WMD should be low. Before we compute the WMD, we want to remove stopwords ("the", "to", etc.), as these do not contribute a lot to the information in the sentences.
End of explanation
start = time()
import os
from gensim.models import KeyedVectors
if not os.path.exists('data/GoogleNews-vectors-negative300.bin.gz'):
raise ValueError("SKIP: You need to download the google news model")
model = KeyedVectors.load_word2vec_format('data/GoogleNews-vectors-negative300.bin.gz', binary=True)
print('Cell took %.2f seconds to run.' % (time() - start))
Explanation: Now, as mentioned earlier, we will be using some downloaded pre-trained embeddings. We load these into a Gensim Word2Vec model class. Note that the embeddings we have chosen here require a lot of memory.
End of explanation
distance = model.wmdistance(sentence_obama, sentence_president)
print("distance = {0:.4f}".format(distance))
Explanation: So let's compute WMD using the wmdistance method.
End of explanation
sentence_orange = 'Oranges are my favorite fruit'
sentence_orange = sentence_orange.lower().split()
sentence_orange = [w for w in sentence_orange if w not in stop_words]
distance = model.wmdistance(sentence_obama, sentence_orange)
print("distance = {0:.4f}".format(distance))
Explanation: Let's try the same thing with two completely unrelated sentences. Notice that the distance is larger.
End of explanation
# Normalizing word2vec vectors.
start = time()
model.init_sims(replace=True) # Normalizes the vectors in the word2vec class.
distance = model.wmdistance(sentence_obama, sentence_president) # Compute WMD as normal.
print ('Cell took %.2f seconds to run.' %(time() - start))
Explanation: Normalizing word2vec vectors
When using the wmdistance method, it is beneficial to normalize the word2vec vectors first, so they all have equal length. To do this, simply call model.init_sims(replace=True) and Gensim will take care of that for you.
Usually, one measures the distance between two word2vec vectors using the cosine distance (see cosine similarity), which measures the angle between vectors. WMD, on the other hand, uses the Euclidean distance. The Euclidean distance between two vectors might be large because their lengths differ, but the cosine distance is small because the angle between them is small; we can mitigate some of this by normalizing the vectors.
Note that normalizing the vectors can take some time, especially if you have a large vocabulary and/or large vectors.
Usage is illustrated in the example below. It just so happens that the vectors we have downloaded are already normalized, so it won't make any difference in this case.
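A tiny NumPy illustration of the length-versus-angle point:
import numpy as np
u, v = np.array([3.0, 4.0]), np.array([30.0, 41.0])   # nearly parallel, very different lengths
print(np.linalg.norm(u - v))                          # large Euclidean distance
un, vn = u / np.linalg.norm(u), v / np.linalg.norm(v)
print(np.linalg.norm(un - vn))                        # small once both are unit length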
End of explanation
# Pre-processing a document.
from nltk import word_tokenize
download('punkt') # Download data for tokenizer.
def preprocess(doc):
doc = doc.lower() # Lower the text.
doc = word_tokenize(doc) # Split into words.
doc = [w for w in doc if not w in stop_words] # Remove stopwords.
doc = [w for w in doc if w.isalpha()] # Remove numbers and punctuation.
return doc
start = time()
import json
# Business IDs of the restaurants.
ids = ['4bEjOyTaDG24SY5TxsaUNQ', '2e2e7WgqU1BnpxmQL5jbfw', 'zt1TpTuJ6y9n551sw9TaEg',
'Xhg93cMdemu5pAMkDoEdtQ', 'sIyHTizqAiGu12XMLX3N3g', 'YNQgak-ZLtYJQxlDwN-qIg']
w2v_corpus = [] # Documents to train word2vec on (all 6 restaurants).
wmd_corpus = [] # Documents to run queries against (only one restaurant).
documents = [] # wmd_corpus, with no pre-processing (so we can see the original documents).
with open('data/yelp/yelp_academic_dataset_review.json') as data_file:
for line in data_file:
json_line = json.loads(line)
# if json_line['business_id'] not in ids:
# # Not one of the 6 restaurants.
# continue
# Pre-process document.
text = json_line['text'] # Extract text from JSON object.
text = preprocess(text)
# Add to corpus for training Word2Vec.
w2v_corpus.append(text)
# print (text)
if json_line['business_id'] == ids[0]:
# Add to corpus for similarity queries.
wmd_corpus.append(text)
documents.append(json_line['text'])
# print (w2v_corpus)
print ('Cell took %.2f seconds to run.' %(time() - start))
Explanation: Part 2: Similarity queries using WmdSimilarity
You can use WMD to get the most similar documents to a query, using the WmdSimilarity class. Its interface is similar to what is described in the Similarity Queries Gensim tutorial.
Important note:
WMD is a measure of distance. The similarities in WmdSimilarity are simply the negative distance. Be careful not to confuse distances and similarities. Two similar documents will have a high similarity score and a small distance; two very different documents will have low similarity score, and a large distance.
Yelp data
Let's try similarity queries using some real world data. For that we'll be using Yelp reviews, available at http://www.yelp.com/dataset_challenge. Specifically, we will be using reviews of a single restaurant, namely the Mon Ami Gabi.
To get the Yelp data, you need to register by name and email address. The data is 775 MB.
This time around, we are going to train the Word2Vec embeddings on the data ourselves. One restaurant is not enough to train Word2Vec properly, so we use 6 restaurants for that, but only run queries against one of them. In addition to the Mon Ami Gabi, mentioned above, we will be using:
Earl of Sandwich.
Wicked Spoon.
Serendipity 3.
Bacchanal Buffet.
The Buffet.
The restaurants we chose were those with the highest number of reviews in the Yelp dataset. Incidentally, they all are on the Las Vegas Boulevard. The corpus we trained Word2Vec on has 18957 documents (reviews), and the corpus we used for WmdSimilarity has 4137 documents.
Below a JSON file with Yelp reviews is read line by line, the text is extracted, tokenized, and stopwords and punctuation are removed.
End of explanation
from matplotlib import pyplot as plt
%matplotlib inline
# Document lengths.
lens = [len(doc) for doc in wmd_corpus]
# print (w2v_corpus)
# Plot.
plt.rc('figure', figsize=(8,6))
plt.rc('font', size=14)
plt.rc('lines', linewidth=2)
plt.rc('axes', color_cycle=('#377eb8','#e41a1c','#4daf4a',
'#984ea3','#ff7f00','#ffff33'))
# Histogram.
plt.hist(lens, bins=20)
plt.hold(True)
# Average length.
avg_len = sum(lens) / float(len(lens))
plt.axvline(avg_len, color='#e41a1c')
plt.hold(False)
plt.title('Histogram of document lengths.')
plt.xlabel('Length')
plt.text(100, 800, 'mean = %.2f' % avg_len)
plt.show()
Explanation: Below is a plot with a histogram of document lengths and includes the average document length as well. Note that these are the pre-processed documents, meaning stopwords are removed, punctuation is removed, etc. Document lengths have a high impact on the running time of WMD, so when comparing running times with this experiment, the number of documents in query corpus (about 4000) and the length of the documents (about 62 words on average) should be taken into account.
End of explanation
# Train Word2Vec on all the restaurants.
model = Word2Vec(w2v_corpus, workers=3, size=100)
# Initialize WmdSimilarity.
from gensim.similarities import WmdSimilarity
num_best = 10
instance = WmdSimilarity(wmd_corpus, model, num_best=10)
Explanation: Now we want to initialize the similarity class with a corpus and a word2vec model (which provides the embeddings and the wmdistance method itself).
End of explanation
start = time()
sent = 'Very good, you should seat outdoor.'
query = preprocess(sent)
sims = instance[query] # A query is simply a "look-up" in the similarity class.
print('Cell took %.2f seconds to run.' % (time() - start))
Explanation: The num_best parameter decides how many results the queries return. Now let's try making a query. The output is a list of indices and similarities of documents in the corpus, sorted by similarity.
Note that the output format is slightly different when num_best is None (i.e. not assigned). In this case, you get an array of similarities, corresponding to each of the documents in the corpus.
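For instance, a sketch reusing the corpus and model above (with a preprocessed query like the one built in the next cell): omitting num_best returns one similarity per corpus document:
instance_all = WmdSimilarity(wmd_corpus, model)   # num_best left at its default of None
all_sims = instance_all[query]                    # len(all_sims) == len(wmd_corpus)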
The query below is taken directly from one of the reviews in the corpus. Let's see if there are other reviews that are similar to this one.
End of explanation
# Print the query and the retrieved documents, together with their similarities.
print('Query:')
print(sent)
for i in range(num_best):
    print()
    print('sim = %.4f' % sims[i][1])
    print(documents[sims[i][0]])
Explanation: The query and the most similar documents, together with the similarities, are printed below. We see that the retrieved documents are discussing the same thing as the query, although using different words. The query talks about getting a seat "outdoor", while the results talk about sitting "outside", and one of them says the restaurant has a "nice view".
End of explanation
start = time()
sent = 'I felt that the prices were extremely reasonable for the Strip'
query = preprocess(sent)
sims = instance[query] # A query is simply a "look-up" in the similarity class.
print('Query:')
print(sent)
for i in range(num_best):
    print()
    print('sim = %.4f' % sims[i][1])
    print(documents[sims[i][0]])
print('\nCell took %.2f seconds to run.' % (time() - start))
Explanation: Let's try a different query, also taken directly from one of the reviews in the corpus.
End of explanation
print('Notebook took %.2f seconds to run.' % (time() - start_nb))
Explanation: This time around, the results are more straightforward; the retrieved documents basically contain the same words as the query.
WmdSimilarity normalizes the word embeddings by default (using init_sims(), as explained before), but you can overwrite this behaviour by calling WmdSimilarity with normalize_w2v_and_replace=False.
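For example:
instance_raw = WmdSimilarity(wmd_corpus, model, num_best=10, normalize_w2v_and_replace=False)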
End of explanation |
1,779 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Soundscape Analysis by Shift-Invariant Latent Components</h1>
<h2>Michael Casey - Bregman Labs, Dartmouth College</h2>
A toolkit for matrix factorization of soundscape spectrograms into independent streams of sound objects, possibly representing individual species or independent group behaviours.
The method employs shift-invariant probabilistic latent component analysis (SIPLCA) for factoring a time-frequency matrix (2D array) into a convolution of 2D kernels (patches) with sparse activation functions.
Methods are based on the following
Step1: <h2>Example audio</h2>
Load an example audio file from the 'sounds' directory, 44.1kHz, stereo, 60 seconds duration.
Step2: <h2>Spectrum Analysis Parameters</h2>
Inspect the soundfile by loading it (using wavread) and printing some useful parameters.
A window size of 4096 with a hop of 1024 translates to 92ms and 23ms respectively for an audio samplerate of 44100Hz
Step3: <h2>SoundscapeEcology Toolkit</h2>
Analyze species-specific patterns in environmental recordings
SoundscapeEcology methods | Python Code:
from pylab import * # numpy, matplotlib, plt
from bregman.suite import * # Bregman audio feature extraction library
from soundscapeecology import * # 2D time-frequency shift-invariant convolutive matrix factorization
%matplotlib inline
rcParams['figure.figsize'] = (15.0, 9.0)
Explanation: <h1>Soundscape Analysis by Shift-Invariant Latent Components</h1>
<h2>Michael Casey - Bregman Labs, Dartmouth College</h2>
A toolkit for matrix factorization of soundscape spectrograms into independent streams of sound objects, possibly representing individual species or independent group behaviours.
The method employs shift-invariant probabilistic latent component analysis (SIPLCA) for factoring a time-frequency matrix (2D array) into a convolution of 2D kernels (patches) with sparse activation functions.
Methods are based on the following:
Smaragdis, P, B. Raj, and M.V. Shashanka, 2008. Sparse and shift-invariant feature extraction from non-negative data. In proceedings IEEE International Conference on Audio and Speech Signal Processing, Las Vegas, Nevada, USA.
Smaragdis, P. and Raj, B. 2007. Shift-Invariant Probabilistic Latent Component Analysis, MERL technical report, Cambridge, MA.
A. C. Eldridge, M. Casey, P. Moscoso, and M. Peck (2015) A New Method for Ecoacoustics? Toward the Extraction and Evaluation of Ecologically-Meaningful Sound Objects using Sparse Coding Methods. PeerJ PrePrints, 3(e1855) 1407v2 [In Review]
End of explanation
sound_path = 'sounds'
sounds = os.listdir(sound_path)
print "sounds:", sounds
Explanation: <h2>Example audio</h2>
Load an example audio file from the 'sounds' directory, 44.1kHz, stereo, 60 seconds duration.
End of explanation
N=4096; H=N/4
x,sr,fmt = wavread(os.path.join(sound_path,sounds[0]))
print "sample_rate:", sr, "(Hz), fft size:", (1000*N)/sr, "(ms), hop size:", (1000*H)/sr, "(ms)"
Explanation: <h2>Spectrum Analysis Parameters</h2>
Inspect the soundfile by loading it (using wavread) and printing some useful parameters.
A window size of 4096 with a hop of 1024 translates to 92ms and 23ms respectively for an audio samplerate of 44100Hz
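The conversion is just samples divided by the sample rate:
N, H, sr = 4096, 1024, 44100
win_ms, hop_ms = 1000 * N // sr, 1000 * H // sr   # 92 ms and 23 ms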
End of explanation
# 1. Instantiate a new SoundscapeEcology object using the spectral analysis parameters defined above
S = SoundscapeEcology(nfft=N, wfft=N/2, nhop=H)
# Inspect the contents of this object
print S.__dict__
# 2. load_audio() - sample segments of the soundfile without replacement, to speed up analysis
# The computational complexity of the analysis is high, and the information in a soundscape is largely redundant
# So, draw 25 random segments in time order, each consisting of 20 STFT frames (~500ms) of audio data
S.load_audio(os.path.join(sound_path,sounds[0]), num_samples=25, frames_per_sample=20) # num_samples=None means analyze the whole sound file
# 3. analyze() into shift-invariant kernels
# The STFT spectrum will be converted to a constant-Q transform by averaging over logarithmically spaced bins
# The shift-invariant kernels will have shift and time-extent dimensions
# The default kernel shape yields 1-octave of shift (self.feature_params['nbpo']),
# and its duration is frames_per_sample. Here, the num_components and win parameters are illustrated.
S.analyze(num_components=7, win=(S.feature_params['nbpo'], S.frames_per_sample))
# 4. visualize() - visualize the spectrum reconstruction and the individual components
# inputs:
# plotXi - visualize individual reconstructed component spectra [True]
# plotX - visualize original (pre-analysis) spectrum and reconstruction [False]
# plotW - visualize component time-frequency kernels [False]
# plotH - visualize component shift-time activation functions [False]
# **pargs - plotting key word arguments [**self.plt_args]
S.visualize(plotX=True, plotXi=True, plotW=True, plotH=True)
# 5. resynthesize() - sonify the results
# First, listen to the original (inverse STFT) and the full component reconstruction (inverse CQFT with random phases)
x_orig = S.F.inverse(S.X)
x_recon = S.F.inverse(S.X_hat, Phi_hat=(np.random.rand(*S.F.STFT.shape)*2-1)*np.pi) # random phase reconstruction
play(balance_signal(x_orig))
play(balance_signal(x_recon))
# First, listen to the original (inverse CQFT with original phases in STFT reconstruction)
# and the all-components reconstruction (inverse CQFT with random phases)
# Second, listen to the individual component reconstructions
# Use the notebook's "interrupt kernel" button (stop button) if this is too long (n_comps x audio sequence)
# See above plots for the individual component spectrograms
for k in range(S.n_components):
x_hat = S.resynthesize(k) # resynthesize individual component
play(balance_signal(x_hat)) # play it back
Explanation: <h2>SoundscapeEcology Toolkit</h2>
Analyze species-specific patterns in environmental recordings
SoundscapeEcology methods:
load_audio() - load sample of a soundscape recording
sample_audio_dir() - load group sample from multiple recordings
analyze() - extract per-species?? time-frequency partitioning from loaded audio
visualize() - show component spectrograms
resynthesize() - reconstruct audio for component spectrograms to sonify
model_fit_resynhesize() - generative statistical model of time-shift kernels
summarize() - show soundscape ecology entropy statistics
SoundscapeEcology static methods:
batch_analyze() - multiple analyses for a list of recordings
entropy() - compute entropy (in nats) of an acoustic feature distribution
gen_test_data() - generate an artificial soundscape for testing
Workflows:
[load_audio(), sample_audio_dir()] -> analyze() -> [visualize(), resynthesize(), summarize()]
In the following example we will:
1. Instantiate a new SoundscapeEcology object
2. load_audio() and sample segments of it without replacement
3. analyze() extract Constant-Q Frequency Transform (CQFT) and extract shift-invariant kernels
4. visualize() - reconstruct individual component features (CQFT) and make subplots
5. resynthesize() - invert individual feature reconstructions back to audio for sonifying
End of explanation |
1,780 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Drag from Tides
This adds a constant time lag model (Hut 1981) to tides raised on either the primary and/or the orbiting bodies.
As an example, we'll add the tides raised on a post-main sequence Sun near its tip-RGB phase by the Earth.
For a more advanced example implementation that includes stellar evolution using "Parameter Interpolation" and extended integration of terrestrial-like planets, see §4.2 and Fig. 5 of Baronett et al. (2021) and https
Step1: We specify the primary and secondaries' equilibrium gravitational response to the tidal field acting on them through the tctl_k2 potential Love number of degree 2. If we additionally give the primary a physical radius, then any (massive) orbiting body will raise equilibrium tides on the primary. Similarly, if we add a physical radius and tctl_k2 to any of the orbiting bodies, the primary will raise tides on that particle (but note that orbiting bodies will not raise tides on one another)
Step2: If we stop here and don't add a time lag, we will get the instantaneous equilibrium tide, which provides a conservative, radial non-Keplerian potential. The total energy will be conserved, but the pericenter will precess.
Step3: Constant Time Lag
If we additionally set the tctl_tau constant time lag parameter, this delayed response introduces dissipation, which will typically cause eccentricity damping, and will migrate the orbiting bodies either inward or outward depending on whether they orbit faster or slower than the spin of the tidally deformed body. We set the spin rate of each body with the Omega parameter. If it is not set, Omega is assumed to be zero.
We note that this implementation assumes bodies' spins are fixed, so consider whether more angular momentum is being changed in the system than is available in the spins! We additionally assume that bodies spins are aligned with the reference z axis.
As an example, for a highly-evolved RGB Sun, tidal friction in the outer convective envelope will retard tidal bulges on the solar photosphere (Schröder & Smith 2008), resulting in a non-zero constant time lag.
From Eq. 8 of Baronett et al. (2021),
$$
\tau = \dfrac{2R^3}{GMt_f},
$$
where $\tau$ is the constant time lag parameter (tctl_tau),
$R$ and $M$ are the physical radius and mass of the tidally deformed body respectively,
$G$ is the gravitational constant, and
$t_f(t) = (M(t)R(t)^2/L(t))^{1/3} \approx \mathcal{O}(1 \textrm{yr}$) is the convective friction time (Zahn 1989, Eq. 7).
For this simulation's values (i.e., $R = 0.85\,\text{au}$, $G = 4\pi^2\,\text{au}^3\cdot\text{yr}^{-2}\cdot M_\odot^{-1}$, $M = 0.86\,M_\odot$, and $t_f = 1\,\text{yr}$),
$$
\tau \approx 0.04\,\text{yr}.
$$
Step4: We can compare our numerical integration to the theoretical prediction assuming a circular orbit (see Baronett et al. 2021, Eq. 7). We'll integrate for 250 kyr and store the Earth's semi-major axis and eccentricity.
Step5: Note the small eccentricity we originally initialized for the Earth causes our numerical result to diverge only slightly from the circular, theoretical prediction.
In fact, we can also check that the eccentricity decays | Python Code:
import rebound
import reboundx
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
def getsim():
sim = rebound.Simulation()
sim.units = ('yr', 'AU', 'Msun')
sim.add(m=0.86) # post-MS Sun
sim.add(m=3.e-6, a=1., e=0.03) # Earth
sim.move_to_com()
rebx = reboundx.Extras(sim)
tides = rebx.load_force("tides_constant_time_lag")
rebx.add_force(tides)
return sim, rebx, tides
Explanation: Drag from Tides
This adds a constant time lag model (Hut 1981) to tides raised on either the primary and/or the orbiting bodies.
As an example, we'll add the tides raised on a post-main sequence Sun near its tip-RGB phase by the Earth.
For a more advanced example implementation that includes stellar evolution using "Parameter Interpolation" and extended integration of terrestrial-like planets, see §4.2 and Fig. 5 of Baronett et al. (2021) and https://github.com/sabaronett/REBOUNDxPaper.
End of explanation
sim, rebx, tides = getsim()
ps = sim.particles
ps[0].r = 0.85 # AU
ps[0].params["tctl_k2"] = 0.03
Explanation: We specify the primary and secondaries' equilibrium gravitational response to the tidal field acting on them through the tctl_k2 potential Love number of degree 2. If we additionally give the primary a physical radius, then any (massive) orbiting body will raise equilibrium tides on the primary. Similarly, if we add a physical radius and tctl_k2 to any of the orbiting bodies, the primary will raise tides on that particle (but note that orbiting bodies will not raise tides on one another):
End of explanation
H0 = sim.calculate_energy() + rebx.tides_constant_time_lag_potential(tides)
tmax = 5000
Nout=1000
pomega, Eerr = np.zeros(Nout), np.zeros(Nout)
times = np.linspace(0,tmax,Nout)
for i, time in enumerate(times):
sim.integrate(time)
pomega[i] = ps[1].pomega
H = sim.calculate_energy() + rebx.tides_constant_time_lag_potential(tides)
Eerr[i] = abs((H-H0)/H0)
%matplotlib inline
import matplotlib.pyplot as plt
fig, axarr = plt.subplots(nrows=2, figsize=(12,8))
axarr[0].plot(times, pomega)
axarr[0].set_ylabel("Pericenter", fontsize='xx-large')
axarr[1].plot(times, Eerr, '.')
axarr[1].set_xscale('log')
axarr[1].set_yscale('log')
axarr[1].set_xlabel('Time', fontsize='xx-large')
axarr[1].set_ylabel('Energy Error', fontsize='xx-large')
Explanation: If we stop here and don't add a time lag, we will get the instantaneous equilibrium tide, which provides a conservative, radial non-Keplerian potential. The total energy will be conserved, but the pericenter will precess.
End of explanation
sim, rebx, tides = getsim()
ps = sim.particles
ps[0].r = 0.85 # AU
ps[0].params["tctl_k2"] = 0.03
ps[0].params["tctl_tau"] = 0.04
ps[0].params["Omega"] = 0 # explicitly set to 0 (would be 0 by default)
Explanation: Constant Time Lag
If we additionally set the tctl_tau constant time lag parameter, this delayed response introduces dissipation, which will typically cause eccentricity damping, and will migrate the orbiting bodies either inward or outward depending on whether they orbit faster or slower than the spin of the tidally deformed body. We set the spin rate of each body with the Omega parameter. If it is not set, Omega is assumed to be zero.
We note that this implementation assumes bodies' spins are fixed, so consider whether more angular momentum is being changed in the system than is available in the spins! We additionally assume that bodies' spins are aligned with the reference z axis.
As an example, for a highly-evolved RGB Sun, tidal friction in the outer convective envelope will retard tidal bulges on the solar photosphere (Schröder & Smith 2008), resulting in a non-zero constant time lag.
From Eq. 8 of Baronett et al. (2021),
$$
\tau = \dfrac{2R^3}{GMt_f},
$$
where $\tau$ is the constant time lag parameter (tctl_tau),
$R$ and $M$ are the physical radius and mass of the tidally deformed body respectively,
$G$ is the gravitational constant, and
$t_f(t) = (M(t)R(t)^2/L(t))^{1/3} \approx \mathcal{O}(1\,\textrm{yr})$ is the convective friction time (Zahn 1989, Eq. 7).
For this simulation's values (i.e., $R = 0.85\,\text{au}$, $G = 4\pi^2\,\text{au}^3\cdot\text{yr}^{-2}\cdot M_\odot^{-1}$, $M = 0.86\,M_\odot$, and $t_f = 1\,\text{yr}$),
$$
\tau \approx 0.04\,\text{yr}.
$$
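As a quick sanity check, this value can be evaluated directly from the quantities above (a minimal sketch in the simulation's units):
python
import numpy as np
R, M, t_f = 0.85, 0.86, 1.    # AU, Msun, yr
G = 4*np.pi**2                # AU^3 yr^-2 Msun^-1
tau = 2*R**3/(G*M*t_f)
print(tau)                    # ~0.036 yr, i.e. approximately 0.04 yr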
End of explanation
import numpy as np
tmax = 2.5e5
Nout=1000
a, e = np.zeros(Nout), np.zeros(Nout)
times = np.linspace(0,tmax,Nout)
# to plot physical radius of the Sun
R0 = 0*times + ps[0].r
q = (ps[1].m/ps[0].m)
T = ps[0].r**3/sim.G/ps[0].m/ps[0].params["tctl_tau"]
apred = ps[0].r*((ps[1].a/ps[0].r)**8 - 48.*ps[0].params["tctl_k2"]*q*(1+q)*times/T)**(1./8.)
%%time
for i, time in enumerate(times):
sim.integrate(time)
a[i] = ps[1].a
e[i] = ps[1].e
fig, ax = plt.subplots(figsize=(12,4))
ax.plot(times/1e3, a, label='$a_{\oplus}$')
ax.plot(times/1e3, R0, label='$R_{\odot}$')
ax.plot(times/1e3, apred, '--', label='$a_{\oplus}$ predicted')
ax.set_xlabel('$t$ / kyr', fontsize='xx-large')
ax.set_ylabel('(AU)', fontsize='xx-large')
ax.legend(fontsize='xx-large', loc='best')
Explanation: We can compare our numerical integration to the theoretical prediction assuming a circular orbit (see Baronett et al. 2021, Eq. 7). We'll integrate for 250 kyr and store the Earth's semi-major axis and eccentricity.
End of explanation
fig, ax = plt.subplots(figsize=(12,4))
ax.plot(times/1e3, e, label='$e_{\oplus}$')
ax.set_xlabel('$t$ / kyr', fontsize='xx-large')
ax.set_ylabel('e', fontsize='xx-large')
ax.legend(fontsize='xx-large', loc='best')
Explanation: Note the small eccentricity we originally initialized for the Earth causes our numerical result to diverge only slightly from the circular, theoretical prediction.
In fact, we can also check that the eccentricity decays:
End of explanation |
1,781 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Storage Commands
Google Cloud Datalab provides a set of commands for working with data stored in Google Cloud Storage. They can help you work with data files containing data that is not stored in BigQuery or manage data imported into or exported from BigQuery.
This notebook introduces several Cloud Storage commands that Datalab introduces into the notebook environment.
The Commands
The commands can list storage buckets and their contained objects, manage those objects, and read from and write to those objects.
Step1: Buckets and Objects
Items or files held in Cloud Storage are called objects. These objects are immutable once written. They are organized into buckets.
Listing
First, a couple of commands to list Datalab sample data. Try %%gcs list without arguments to list all buckets within the current project
Step2: You can also use wildcards to list all objects matching a pattern
Step3: Creating
Step4: NOTE
Step5: Reading and Writing
Step6: Deleting | Python Code:
%%gcs --help
Explanation: Storage Commands
Google Cloud Datalab provides a set of commands for working with data stored in Google Cloud Storage. They can help you work with data files containing data that is not stored in BigQuery or manage data imported into or exported from BigQuery.
This notebook introduces several Cloud Storage commands that Datalab introduces into the notebook environment.
The Commands
The commands can list storage buckets and their contained objects, manage those objects, and read from and write to those objects.
End of explanation
%%gcs list
%%gcs list --objects gs://cloud-datalab-samples
Explanation: Buckets and Objects
Items or files held in Cloud Storage are called objects. These objects are immutable once written. They are organized into buckets.
Listing
First, a couple of commands to list Datalab sample data. Try %%gcs list without arguments to list all buckets within the current project:
End of explanation
%%gcs list --objects gs://cloud-datalab-samples/udf*
Explanation: You can also use wildcards to list all objects matching a pattern:
End of explanation
# Some code to determine a unique bucket name for the purposes of the sample
from google.datalab import Context
import random, string
project = Context.default().project_id
suffix = ''.join(random.choice(string.lowercase) for _ in range(5))
sample_bucket_name = project + '-datalab-samples-' + suffix
sample_bucket_path = 'gs://' + sample_bucket_name
sample_bucket_object = sample_bucket_path + '/Hello.txt'
print('Bucket: ' + sample_bucket_path)
print('Object: ' + sample_bucket_object)
Explanation: Creating
End of explanation
%%gcs create --bucket $sample_bucket_path
%%gcs list --objects $sample_bucket_path
%%gcs copy --source gs://cloud-datalab-samples/hello.txt --destination $sample_bucket_object
%%gcs list --objects $sample_bucket_path
Explanation: NOTE: In the examples below, the variables are referenced in the command using $ syntax since the names are determined based on the current project. In your scenarios, you may be able to use literal values if they are constant instead of creating and using variables.
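For instance, using a literal destination path (with a hypothetical bucket name) the copy above could equivalently be written as:
%%gcs copy --source gs://cloud-datalab-samples/hello.txt --destination gs://my-hypothetical-bucket/Hello.txt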
End of explanation
%%gcs view --object $sample_bucket_object
%%gcs read --object $sample_bucket_object --variable text
print(text)
text = 'Hello World!\n====\n'
%%gcs write --variable text --object $sample_bucket_object
%%gcs list --objects $sample_bucket_path
Explanation: Reading and Writing
End of explanation
%%gcs delete --object $sample_bucket_object
%%gcs delete --bucket $sample_bucket_path
Explanation: Deleting
End of explanation |
1,782 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Continuous renal replacement therapy (CRRT)
This notebook overviews the process of defining CRRT
Step2: Step 1
Step4: The above gives us some hints to expand our initial search
Step6: Manually label above itemid
The above is a list of all the potential data elements which could be used to define CRRT. The next step is to identify the specific elements which can be used to define start/stop time. This process requires clinical expertise in the area.
The following tables are a result of reviewing all ITEMID labels and flagging them as "consider for further review" or "not relevant".
Links to CHARTEVENTS
itemid | label | category | linksto | Included/comment
--- | --- | --- | --- | ---
225740 | Dialysis Catheter Discontinued | Access Lines - Invasive | chartevents | No - access line
227357 | Dialysis Catheter Dressing Occlusive | Access Lines - Invasive | chartevents | No - access line
225776 | Dialysis Catheter Dressing Type | Access Lines - Invasive | chartevents | No - access line
226118 | Dialysis Catheter placed in outside facility | Access Lines - Invasive | chartevents | No - access line
227753 | Dialysis Catheter Placement Confirmed by X-ray | Access Lines - Invasive | chartevents | No - access line
225323 | Dialysis Catheter Site Appear | Access Lines - Invasive | chartevents | No - access line
225725 | Dialysis Catheter Tip Cultured | Access Lines - Invasive | chartevents | No - access line
227124 | Dialysis Catheter Type | Access Lines - Invasive | chartevents | No - access line
225126 | Dialysis patient | Adm History/FHPA | chartevents | No - admission information
224149 | Access Pressure | Dialysis | chartevents | Yes - CRRT setting
224404 | ART Lumen Volume | Dialysis | chartevents | Yes - CRRT setting
224144 | Blood Flow (ml/min) | Dialysis | chartevents | Yes - CRRT setting
228004 | Citrate (ACD-A) | Dialysis | chartevents | Yes - CRRT setting
227290 | CRRT mode | Dialysis | chartevents | Yes - CRRT setting
225183 | Current Goal | Dialysis | chartevents | Yes - CRRT setting
225977 | Dialysate Fluid | Dialysis | chartevents | Yes - CRRT setting
224154 | Dialysate Rate | Dialysis | chartevents | Yes - CRRT setting
224135 | Dialysis Access Site | Dialysis | chartevents | No - access line
225954 | Dialysis Access Type | Dialysis | chartevents | No - access line
224139 | Dialysis Site Appearance | Dialysis | chartevents | No - access line
225810 | Dwell Time (Peritoneal Dialysis) | Dialysis | chartevents | No - peritoneal dialysis
224151 | Effluent Pressure | Dialysis | chartevents | Yes - CRRT setting
224150 | Filter Pressure | Dialysis | chartevents | Yes - CRRT setting
226499 | Hemodialysis Output | Dialysis | chartevents | No - hemodialysis
225958 | Heparin Concentration (units/mL) | Dialysis | chartevents | Yes - CRRT setting
224145 | Heparin Dose (per hour) | Dialysis | chartevents | Yes - CRRT setting
224191 | Hourly Patient Fluid Removal | Dialysis | chartevents | Yes - CRRT setting
225952 | Medication Added #1 (Peritoneal Dialysis) | Dialysis | chartevents | No - peritoneal dialysis
227638 | Medication Added #2 (Peritoneal Dialysis) | Dialysis | chartevents | No - peritoneal dialysis
225959 | Medication Added Amount #1 (Peritoneal Dialysis) | Dialysis | chartevents | No - peritoneal dialysis
227639 | Medication Added Amount #2 (Peritoneal Dialysis) | Dialysis | chartevents | No - peritoneal dialysis
225961 | Medication Added Units #1 (Peritoneal Dialysis) | Dialysis | chartevents | No - peritoneal dialysis
227640 | Medication Added Units #2 (Peritoneal Dialysis) | Dialysis | chartevents | No - peritoneal dialysis
228005 | PBP (Prefilter) Replacement Rate | Dialysis | chartevents | Yes - CRRT setting
225965 | Peritoneal Dialysis Catheter Status | Dialysis | chartevents | No - peritoneal dialysis
225963 | Peritoneal Dialysis Catheter Type | Dialysis | chartevents | No - peritoneal dialysis
225951 | Peritoneal Dialysis Fluid Appearance | Dialysis | chartevents | No - peritoneal dialysis
228006 | Post Filter Replacement Rate | Dialysis | chartevents | Yes - CRRT setting
225956 | Reason for CRRT Filter Change | Dialysis | chartevents | Yes - CRRT setting
225976 | Replacement Fluid | Dialysis | chartevents | Yes - CRRT setting
224153 | Replacement Rate | Dialysis | chartevents | Yes - CRRT setting
224152 | Return Pressure | Dialysis | chartevents | Yes - CRRT setting
225953 | Solution (Peritoneal Dialysis) | Dialysis | chartevents | No - peritoneal dialysis
224146 | System Integrity | Dialysis | chartevents | Yes - CRRT setting
226457 | Ultrafiltrate Output | Dialysis | chartevents | Yes - CRRT setting
224406 | VEN Lumen Volume | Dialysis | chartevents | Yes - CRRT setting
225806 | Volume In (PD) | Dialysis | chartevents | No - peritoneal dialysis
227438 | Volume not removed | Dialysis | chartevents | No - peritoneal dialysis
225807 | Volume Out (PD) | Dialysis | chartevents | No - peritoneal dialysis
Links to DATETIMEEVENTS
itemid | label | category | linksto | Included/comment
--- | --- | --- | --- | ---
225318 | Dialysis Catheter Cap Change | Access Lines - Invasive | datetimeevents | No - access lines
225319 | Dialysis Catheter Change over Wire Date | Access Lines - Invasive | datetimeevents | No - access lines
225321 | Dialysis Catheter Dressing Change | Access Lines - Invasive | datetimeevents | No - access lines
225322 | Dialysis Catheter Insertion Date | Access Lines - Invasive | datetimeevents | No - access lines
225324 | Dialysis CatheterTubing Change | Access Lines - Invasive | datetimeevents | No - access lines
225128 | Last dialysis | Adm History/FHPA | datetimeevents | No - admission information
Links to INPUTEVENTS_MV
itemid | label | category | linksto | Included/comment
--- | --- | --- | --- | ---
227525 | Calcium Gluconate (CRRT) | Medications | inputevents_mv | Yes - CRRT setting
227536 | KCl (CRRT) | Medications | inputevents_mv | Yes - CRRT setting
Links to PROCEDUREEVENTS_MV
itemid | label | category | linksto | Included/comment
--- | --- | --- | --- | ---
225441 | Hemodialysis | 4-Procedures | procedureevents_mv | No - hemodialysis
224270 | Dialysis Catheter | Access Lines - Invasive | procedureevents_mv | No - access lines
225436 | CRRT Filter Change | Dialysis | procedureevents_mv | Yes - CRRT setting
225802 | Dialysis - CRRT | Dialysis | procedureevents_mv | Yes - CRRT setting
225803 | Dialysis - CVVHD | Dialysis | procedureevents_mv | Yes - CRRT setting
225809 | Dialysis - CVVHDF | Dialysis | procedureevents_mv | Yes - CRRT setting
225955 | Dialysis - SCUF | Dialysis | procedureevents_mv | Yes - CRRT setting
225805 | Peritoneal Dialysis | Dialysis | procedureevents_mv | No - peritoneal dialysis
Reasons for inclusion/exclusion
CRRT Setting - yes (included) - these settings are only documented when a patient is receiving CRRT.
Access lines- no (excluded) - these ITEMIDs are not included as the presence of an access line does not guarantee that CRRT is being delivered. While having an access line is a requirement of performing CRRT, these lines are present even when a patient is not actively being hemodialysed.
Peritoneal dialysis - no (excluded) - Peritoneal dialysis is a different form of dialysis, and is not CRRT
Hemodialysis - no (excluded) - Similar as above, hemodialysis is a different form of dialysis and is not CRRT
Define rules based upon ITEMIDs
Above, we acquired a list of itemid which we determined to be related to administration of CRRT. The next step is to determine how these itemid relate to CRRT
Step10: Above we can see that ART Lumen Volume and VEN Lumen Volume are documented at a drastically different time than the other settings. Upon discussion with a clinical expert, they confirmed that this is expected, as these volumes indicate settings to keep open the line and are not directly relevant to the administration of CRRT - at best they are superfluous and at worst they can mislead the start/stop times. As a result ART Lumen Volume and VEN Lumen Volume are excluded. This leaves us with the final set of ITEMIDs
Step11: 224146 - System Integrity
Step12: In discussion with a clinical expert, these settings indicate different stages of the CRRT treatment. We can simplify them into three modes
Step13: The above is a stop time as the filter needed to be changed at this time. Any subsequent CRRT would be a restart of CRRT - and not a continuation of an ongoing CRRT session.
225958 - Heparin Concentration (units/mL)
Step14: The above is a normal setting and can be combined with the numeric fields.
225976 - Replacement Fluid
Step15: The above is a normal setting and can be combined with the numeric fields.
225977 - Dialysate Fluid
Step16: The above is a normal setting and can be combined with the numeric fields.
227290 - CRRT mode
Step18: While all of this looks good, it's feasible that the documentation of the CRRT mode is not done directly concurrent to the actual administration of CRRT. We thus investigate whether CRRT mode is available for all patients with a CRRT setting.
Step20: We can take this analysis a bit further and ask
Step21: As CRRT mode is relatively redundant, doesn't necessarily indicate CRRT is being actively performed, and documentation for it is not 100% compliant, we exclude it from the list of ITEMID.
CHARTEVENTS wrap up
The following is the final set of ITEMID from CHARTEVENTS which indicate CRRT is started/ongoing
Step22: To make sure we don't display data we don't have to, we define a function which
Step25: Aggregating INPUTEVENTS_MV
First, let's look at INPUTEVENTS_MV. Each entry is stored with a starttime and an endtime. Note we have to exclude statusdescription = 'Rewritten' as these are undelivered medications which have been rewritten (useful for auditing purposes but does not give you information about drugs delivered to the patient).
Step28: Normally linkorderid links together administrations which are consecutive but may have changes in rate, but from the above we can note that linkorderid seems to rarely group entries. Rows 8-10 and 16-18 are grouped (i.e. they are sequential administrations where the rate may or may not have changed), but many aren't even though they occur sequentially. We'd like to merge together sequential events to simplify the durations - and it appears we can greatly simplify this data by merging two rows if endtime(row-1) == starttime(row).
We can do this in four steps
Step32: Note we have added the endtime_lag column to give a clearer idea of how the query is working. We can see the first row starts with new_event_flag = 1 since endtime_lag is null. Next, the endtime_lag != starttime, so new_event_flag is again = 1.
Finally, for row 2 (marked by 2 on the far left), the endtime_lag == starttime - and so new_event_flag is 0. This continues all the way until row 9, where we can again see endtime_lag != starttime. Note that the statusdescription on row 8 even informs us why
Step35: The above (hopefully) makes it clear how a unique partition for each continuous segment of KCl administration can be delineated by cumulatively summing new_event_flag to create time_partition.
Step 3
Step38: Step 4
Step40: The above looks good - so we save the query to query_inputevents without the clause that isolates the data to one patient.
Step43: Conclusion
We now have a good method of combining contiguous events from INPUTEVENTS_MV. Note that this is usually not required, as the linkorderid is meant to partition these events for us. For example, let's look at a very common sedative agent used in the ICU, propofol
Step46: Here we see that linkorderid nicely delineates contiguous events without us having to put in the effort above. It also separates distinct administrations. Above, at row 6, we can see a "1 minute" delivery of propofol. This is how MetaVision tables (those which end in _mv) mark "instant" events - in the case of drug delivery, these are boluses of drugs administered to the patient.
When using this data, we can group like events on a partition (as we did above), but we don't have to create the partition
Step50: It's also worth noting that bolus administrations do not have a rate. They only have an amount.
Convert CHARTEVENTS into durations
Step53: Extract durations from PROCEDUREEVENTS_MV
PROCEDUREEVENTS_MV contains entries for dialysis. As a reminder from the above, we picked the following itemid
Step55: Note that the above documentation is quite diligent
Step56: Roundup
Step57: Compare durations
We now need to merge together the above durations into a single, master set of CRRT administrations. | Python Code:
# Import libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import psycopg2
from IPython.display import display, HTML # used to print out pretty pandas dataframes
import matplotlib.dates as dates
import matplotlib.lines as mlines
%matplotlib inline
plt.style.use('ggplot')
# specify user/password/where the database is
sqluser = 'postgres'
sqlpass = 'postgres'
dbname = 'mimic'
schema_name = 'mimiciii'
host = 'localhost'
query_schema = 'SET search_path to ' + schema_name + ';'
# connect to the database
con = psycopg2.connect(dbname=dbname, user=sqluser, password=sqlpass, host=host)
Explanation: Continuous renal replacement therapy (CRRT)
This notebook overviews the process of defining CRRT: a treatment used to dialyse or filter a patient's blood continuously. Key to CRRT is its lower speed compared to conventional dialysis: avoidance of rapid solute/fluid loss is suspected to be the main reason why CRRT tends to be tolerated better than intermittent hemodialysis.
The primary aim of this notebook is to define the start and end times of CRRT for patients in the MIMIC-III database v1.4.
A secondary aim of this notebook is to provide insight into how to extract clinical concepts from the MIMIC-III database.
Many thanks to Sharon O'Donoghue for her invaluable advice in the creation of this notebook.
Outline
The main steps in defining a clinical concept in MIMIC-III are as follows:
Identification of key terms and phrases which describe the concept
Search for these terms in D_ITEMS (or D_LABITEMS if searching for a laboratory measurement)
Extraction of the data from tables specified in the LINKSTO column of D_ITEMS
Definition of the concept using rules applied to the data extracted
Validation of the concepts by individual inspection and aggregate statistics
This process is iterative and not as clear cut as the above - validation may lead you to redefine data extraction, and so on. Furthermore, in the case of MIMIC-III v1.4, this process must be repeated twice: once for Metavision, once for CareVue.
MetaVision vs. CareVue
One issue in MIMIC-III is that it is a combination of two ICU database systems. As a result, concepts are split among different ITEMID values. For example, a patient's heart rate is a relatively simple concept to extract, however, if we look in the D_ITEMS table for labels matching 'heart rate', we find at least two ITEMID:
itemid | label | abbreviation | dbsource | linksto
--------|-------------------------|-----------------|------------|-------------
211 | Heart Rate | | carevue | chartevents
220045 | Heart Rate | HR | metavision | chartevents
Both these ITEMID values capture heart rate - but one is used for the CareVue database system (dbsource = 'carevue') and one is used for the MetaVision database system (dbsource = 'metavision'). The data extraction step must be repeated twice: once for dbsource = 'carevue' and once for dbsource = 'metavision'. In general, it is recommended to extract data from MetaVision first, as the data is better structured and provides useful information for what data elements to include. For example, ITEMID values in MetaVision have abbreviations with each label - these abbreviations can then be used to search for data elements in CareVue.
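For instance, the table above can be reproduced directly from D_ITEMS (a quick sketch reusing the query_schema and con objects defined in this notebook):
python
query = query_schema + ("select itemid, label, abbreviation, dbsource, linksto "
                        "from d_items where lower(label) = 'heart rate'")
pd.read_sql_query(query, con)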
Step 0: import libraries, connect to the database
End of explanation
query = query_schema +
select itemid, label, category, linksto
from d_items
where dbsource = 'metavision'
and lower(label) like '%crrt%'
df = pd.read_sql_query(query,con)
df
Explanation: Step 1: Identification of key terms
We are interested in continuous renal replacement therapy (CRRT). First, we look for 'CRRT' in the database, isolating ourselves to metavision data:
End of explanation
query = query_schema +
select itemid, label, category, linksto
from d_items di
where dbsource = 'metavision'
and (lower(label) like '%dialy%'
or category = 'Dialysis'
or lower(label) like '%crrt%'
)
order by linksto, category, label
df = pd.read_sql_query(query,con)
HTML(df.head().to_html().replace('NaN', ''))
Explanation: The above gives us some hints to expand our initial search:
category = 'Dialysis'
lower(label) like '%dialysis%'
Step 2: Extraction of ITEMIDs from tables
Get list of itemid related to CRRT
End of explanation
query = query_schema +
select
ce.icustay_id, di.label, ce.charttime
, ce.value
, ce.valueuom
from chartevents ce
inner join d_items di
on ce.itemid = di.itemid
where ce.icustay_id = 246866
and ce.itemid in
(
224404, -- | ART Lumen Volume
224406, -- | VEN Lumen Volume
228004, -- | Citrate (ACD-A)
224145, -- | Heparin Dose (per hour)
225183, -- | Current Goal
224149, -- | Access Pressure
224144, -- | Blood Flow (ml/min)
224154, -- | Dialysate Rate
224151, -- | Effluent Pressure
224150, -- | Filter Pressure
224191, -- | Hourly Patient Fluid Removal
228005, -- | PBP (Prefilter) Replacement Rate
228006, -- | Post Filter Replacement Rate
224153, -- | Replacement Rate
224152, -- | Return Pressure
226457 -- | Ultrafiltrate Output
)
order by ce.icustay_id, ce.charttime, di.label;
df = pd.read_sql_query(query,con)
HTML(df.head().to_html().replace('NaN', ''))
Explanation: Manually label above itemid
The above is a list of all the potential data elements which could be used to define CRRT. The next step is to identify the specific elements which can be used to define start/stop time. This process requires clinical expertise in the area.
The following tables are a result of reviewing all ITEMID labels and flagging them as "consider for further review" or "not relevant".
Links to CHARTEVENTS
itemid | label | category | linksto | Included/comment
--- | --- | --- | --- | ---
225740 | Dialysis Catheter Discontinued | Access Lines - Invasive | chartevents | No - access line
227357 | Dialysis Catheter Dressing Occlusive | Access Lines - Invasive | chartevents | No - access line
225776 | Dialysis Catheter Dressing Type | Access Lines - Invasive | chartevents | No - access line
226118 | Dialysis Catheter placed in outside facility | Access Lines - Invasive | chartevents | No - access line
227753 | Dialysis Catheter Placement Confirmed by X-ray | Access Lines - Invasive | chartevents | No - access line
225323 | Dialysis Catheter Site Appear | Access Lines - Invasive | chartevents | No - access line
225725 | Dialysis Catheter Tip Cultured | Access Lines - Invasive | chartevents | No - access line
227124 | Dialysis Catheter Type | Access Lines - Invasive | chartevents | No - access line
225126 | Dialysis patient | Adm History/FHPA | chartevents | No - admission information
224149 | Access Pressure | Dialysis | chartevents | Yes - CRRT setting
224404 | ART Lumen Volume | Dialysis | chartevents | Yes - CRRT setting
224144 | Blood Flow (ml/min) | Dialysis | chartevents | Yes - CRRT setting
228004 | Citrate (ACD-A) | Dialysis | chartevents | Yes - CRRT setting
227290 | CRRT mode | Dialysis | chartevents | Yes - CRRT setting
225183 | Current Goal | Dialysis | chartevents | Yes - CRRT setting
225977 | Dialysate Fluid | Dialysis | chartevents | Yes - CRRT setting
224154 | Dialysate Rate | Dialysis | chartevents | Yes - CRRT setting
224135 | Dialysis Access Site | Dialysis | chartevents | No - access line
225954 | Dialysis Access Type | Dialysis | chartevents | No - access line
224139 | Dialysis Site Appearance | Dialysis | chartevents | No - access line
225810 | Dwell Time (Peritoneal Dialysis) | Dialysis | chartevents | No - peritoneal dialysis
224151 | Effluent Pressure | Dialysis | chartevents | Yes - CRRT setting
224150 | Filter Pressure | Dialysis | chartevents | Yes - CRRT setting
226499 | Hemodialysis Output | Dialysis | chartevents | No - hemodialysis
225958 | Heparin Concentration (units/mL) | Dialysis | chartevents | Yes - CRRT setting
224145 | Heparin Dose (per hour) | Dialysis | chartevents | Yes - CRRT setting
224191 | Hourly Patient Fluid Removal | Dialysis | chartevents | Yes - CRRT setting
225952 | Medication Added #1 (Peritoneal Dialysis) | Dialysis | chartevents | No - peritoneal dialysis
227638 | Medication Added #2 (Peritoneal Dialysis) | Dialysis | chartevents | No - peritoneal dialysis
225959 | Medication Added Amount #1 (Peritoneal Dialysis) | Dialysis | chartevents | No - peritoneal dialysis
227639 | Medication Added Amount #2 (Peritoneal Dialysis) | Dialysis | chartevents | No - peritoneal dialysis
225961 | Medication Added Units #1 (Peritoneal Dialysis) | Dialysis | chartevents | No - peritoneal dialysis
227640 | Medication Added Units #2 (Peritoneal Dialysis) | Dialysis | chartevents | No - peritoneal dialysis
228005 | PBP (Prefilter) Replacement Rate | Dialysis | chartevents | Yes - CRRT setting
225965 | Peritoneal Dialysis Catheter Status | Dialysis | chartevents | No - peritoneal dialysis
225963 | Peritoneal Dialysis Catheter Type | Dialysis | chartevents | No - peritoneal dialysis
225951 | Peritoneal Dialysis Fluid Appearance | Dialysis | chartevents | No - peritoneal dialysis
228006 | Post Filter Replacement Rate | Dialysis | chartevents | Yes - CRRT setting
225956 | Reason for CRRT Filter Change | Dialysis | chartevents | Yes - CRRT setting
225976 | Replacement Fluid | Dialysis | chartevents | Yes - CRRT setting
224153 | Replacement Rate | Dialysis | chartevents | Yes - CRRT setting
224152 | Return Pressure | Dialysis | chartevents | Yes - CRRT setting
225953 | Solution (Peritoneal Dialysis) | Dialysis | chartevents | No - peritoneal dialysis
224146 | System Integrity | Dialysis | chartevents | Yes - CRRT setting
226457 | Ultrafiltrate Output | Dialysis | chartevents | Yes - CRRT setting
224406 | VEN Lumen Volume | Dialysis | chartevents | Yes - CRRT setting
225806 | Volume In (PD) | Dialysis | chartevents | No - peritoneal dialysis
227438 | Volume not removed | Dialysis | chartevents | No - peritoneal dialysis
225807 | Volume Out (PD) | Dialysis | chartevents | No - peritoneal dialysis
Links to DATETIMEEVENTS
itemid | label | category | linksto | Included/comment
--- | --- | --- | --- | ---
225318 | Dialysis Catheter Cap Change | Access Lines - Invasive | datetimeevents | No - access lines
225319 | Dialysis Catheter Change over Wire Date | Access Lines - Invasive | datetimeevents | No - access lines
225321 | Dialysis Catheter Dressing Change | Access Lines - Invasive | datetimeevents | No - access lines
225322 | Dialysis Catheter Insertion Date | Access Lines - Invasive | datetimeevents | No - access lines
225324 | Dialysis CatheterTubing Change | Access Lines - Invasive | datetimeevents | No - access lines
225128 | Last dialysis | Adm History/FHPA | datetimeevents | No - admission information
Links to INPUTEVENTS_MV
itemid | label | category | linksto | Included/comment
--- | --- | --- | --- | ---
227525 | Calcium Gluconate (CRRT) | Medications | inputevents_mv | Yes - CRRT setting
227536 | KCl (CRRT) | Medications | inputevents_mv | Yes - CRRT setting
Links to PROCEDUREEVENTS_MV
itemid | label | category | linksto | Included/comment
--- | --- | --- | --- | ---
225441 | Hemodialysis | 4-Procedures | procedureevents_mv | No - hemodialysis
224270 | Dialysis Catheter | Access Lines - Invasive | procedureevents_mv | No - access lines
225436 | CRRT Filter Change | Dialysis | procedureevents_mv | Yes - CRRT setting
225802 | Dialysis - CRRT | Dialysis | procedureevents_mv | Yes - CRRT setting
225803 | Dialysis - CVVHD | Dialysis | procedureevents_mv | Yes - CRRT setting
225809 | Dialysis - CVVHDF | Dialysis | procedureevents_mv | Yes - CRRT setting
225955 | Dialysis - SCUF | Dialysis | procedureevents_mv | Yes - CRRT setting
225805 | Peritoneal Dialysis | Dialysis | procedureevents_mv | No - peritoneal dialysis
Reasons for inclusion/exclusion
CRRT Setting - yes (included) - these settings are only documented when a patient is receiving CRRT.
Access lines- no (excluded) - these ITEMIDs are not included as the presence of an access line does not guarantee that CRRT is being delivered. While having an access line is a requirement of performing CRRT, these lines are present even when a patient is not actively being hemodialysed.
Peritoneal dialysis - no (excluded) - Peritoneal dialysis is a different form of dialysis, and is not CRRT
Hemodialysis - no (excluded) - Similar as above, hemodialysis is a different form of dialysis and is not CRRT
Define rules based upon ITEMIDs
Above, we acquired a list of itemid which we determined to be related to administration of CRRT. The next step is to determine how these itemid relate to CRRT: do they indicate it is started, stopped, continuing, or something else.
We will evaluate itemid from three tables, in turn: CHARTEVENTS, INPUTEVENTS_MV, and PROCEDUREEVENTS_MV. Note that the _MV subscript indicates that the table only has data from MetaVision (half the patients), while _CV indicates the table only has data from CareVue (the other half of patients). Note that after we extract data from MetaVision patients, we will repeat this exercise for CareVue patients.
table 1 of 3: itemid from CHARTEVENTS
These are the included CRRT settings in CHARTEVENTS:
itemid | label | param_type
--------|----------------------------------|------------
224144 | Blood Flow (ml/min) | Numeric
224145 | Heparin Dose (per hour) | Numeric
224146 | System Integrity | Text
224149 | Access Pressure | Numeric
224150 | Filter Pressure | Numeric
224151 | Effluent Pressure | Numeric
224152 | Return Pressure | Numeric
224153 | Replacement Rate | Numeric
224154 | Dialysate Rate | Numeric
224191 | Hourly Patient Fluid Removal | Numeric
224404 | ART Lumen Volume | Numeric
224406 | VEN Lumen Volume | Numeric
225183 | Current Goal | Numeric
225956 | Reason for CRRT Filter Change | Text
225958 | Heparin Concentration (units/mL) | Text
225976 | Replacement Fluid | Text
225977 | Dialysate Fluid | Text
226457 | Ultrafiltrate Output | Numeric
227290 | CRRT mode | Text
228004 | Citrate (ACD-A) | Numeric
228005 | PBP (Prefilter) Replacement Rate | Numeric
228006 | Post Filter Replacement Rate | Numeric
First, we examine the numeric fields. These fields are the core CRRT settings which, according to clinical advice, should be documented hourly for patients actively on CRRT:
End of explanation
def print_itemid_info(con, itemid):
# get name of itemid
query = query_schema +
select label
from d_items
where itemid = + str(itemid)
df = pd.read_sql_query(query,con)
print('Values for {} - {}...'.format(itemid, df['label'][0]))
query = query_schema +
select value
, count(distinct icustay_id) as number_of_patients
, count(icustay_id) as number_of_observations
from chartevents
where itemid = + str(itemid) +
group by value
order by value
df = pd.read_sql_query(query,con)
display(HTML(df.to_html().replace('NaN', '')))
Explanation: Above we can see that ART Lumen Volume and VEN Lumen Volume are documented at a drastically different time than the other settings. Upon discussion with a clinical expert, they confirmed that this is expected, as these volumes indicate settings to keep open the line and are not directly relevant to the administration of CRRT - at best they are superfluous and at worst they can mislead the start/stop times. As a result ART Lumen Volume and VEN Lumen Volume are excluded. This leaves us with the final set of ITEMIDs:
sql
224149, -- Access Pressure
224144, -- Blood Flow (ml/min)
228004, -- Citrate (ACD-A)
225183, -- Current Goal
224154, -- Dialysate Rate
224151, -- Effluent Pressure
224150, -- Filter Pressure
224145, -- Heparin Dose (per hour)
224191, -- Hourly Patient Fluid Removal
228005, -- PBP (Prefilter) Replacement Rate
228006, -- Post Filter Replacement Rate
224153, -- Replacement Rate
224152, -- Return Pressure
226457 -- Ultrafiltrate Output
The next step is to examine the remaining text based ITEMID:
itemid | label | param_type
--------|----------------------------------|------------
224146 | System Integrity | Text
225956 | Reason for CRRT Filter Change | Text
225958 | Heparin Concentration (units/mL) | Text
225976 | Replacement Fluid | Text
225977 | Dialysate Fluid | Text
227290 | CRRT mode | Text
We define a helper function which prints out the number of observations for a given itemid:
End of explanation
print_itemid_info(con, 224146)
Explanation: 224146 - System Integrity
End of explanation
print_itemid_info(con, 225956)
Explanation: In discussion with a clinical expert, these settings indicate different stages of the CRRT treatment. We can simplify them into three modes: started, stopped, or active. Since active implies that the CRRT is running, the first active event could also be a start time, so we call it "active/started". Here we list the manually curated mapping:
value | count | interpretation
--- | --- | ---
Active | 539 | CRRT active/started
Clots Increasing | 245 | CRRT active/started
Clots Present | 427 | CRRT active/started
Clotted | 233 | CRRT stopped
Discontinued | 339 | CRRT stopped
Line pressure inconsistent | 127 | CRRT active/started
New Filter | 357 | CRRT started
No Clot Present | 275 | CRRT active/started
Recirculating | 172 | CRRT stopped
Reinitiated | 336 | CRRT started
Later on we will code special rules to incorporate this itemid.
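For reference, the interpretation column above can be written out as a simple Python mapping (this just restates the table; the actual start/stop logic is implemented in SQL further below):
python
# manually curated interpretation of the System Integrity (itemid 224146) values
system_integrity_map = {
    'Active':                     'CRRT active/started',
    'Clots Increasing':           'CRRT active/started',
    'Clots Present':              'CRRT active/started',
    'Clotted':                    'CRRT stopped',
    'Discontinued':               'CRRT stopped',
    'Line pressure inconsistent': 'CRRT active/started',
    'New Filter':                 'CRRT started',
    'No Clot Present':            'CRRT active/started',
    'Recirculating':              'CRRT stopped',
    'Reinitiated':                'CRRT started',
}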
225956 - Reason for CRRT Filter Change
End of explanation
print_itemid_info(con, 225958)
Explanation: The above is a stop time as the filter needed to be changed at this time. Any subsequent CRRT would be a restart of CRRT - and not a continuation of an ongoing CRRT session.
225958 - Heparin Concentration (units/mL)
End of explanation
print_itemid_info(con, 225976)
Explanation: The above is a normal setting and can be combined with the numeric fields.
225976 - Replacement Fluid
End of explanation
print_itemid_info(con, 225977)
Explanation: The above is a normal setting and can be combined with the numeric fields.
225977 - Dialysate Fluid
End of explanation
print_itemid_info(con, 227290)
Explanation: The above is a normal setting and can be combined with the numeric fields.
227290 - CRRT mode
End of explanation
# Examining CRRT mode
query = query_schema +
with t1 as
(
select icustay_id,
max(case when itemid = 227290 then 1 else 0 end) as HasMode
from chartevents ce
where itemid in
(
227290, -- CRRT mode
228004, -- Citrate (ACD-A)
225958, -- Heparin Concentration (units/mL)
224145, -- Heparin Dose (per hour)
225183, -- Current Goal -- always there
224149, -- Access Pressure
224144, -- Blood Flow (ml/min)
225977, -- Dialysate Fluid
224154, -- Dialysate Rate
224151, -- Effluent Pressure
224150, -- Filter Pressure
224191, -- Hourly Patient Fluid Removal
228005, -- PBP (Prefilter) Replacement Rate
228006, -- Post Filter Replacement Rate
225976, -- Replacement Fluid
224153, -- Replacement Rate
224152, -- Return Pressure
226457 -- Ultrafiltrate Output
)
group by icustay_id
)
select count(icustay_id) as Num_ICUSTAY_ID
, sum(hasmode) as Num_With_Mode
from t1
df = pd.read_sql_query(query,con)
HTML(df.to_html().replace('NaN', ''))
Explanation: While all of this looks good, it's feasible that the documentation of the CRRT mode is not done directly concurrent to the actual administration of CRRT. We thus investigate whether CRRT mode is available for all patients with a CRRT setting.
End of explanation
query = query_schema +
with t1 as
(
select icustay_id, charttime
, max(case when itemid = 227290 then 1 else 0 end) as HasCRRTMode
, max(case when itemid != 227290 then 1 else 0 end) as OtherITEMID
from chartevents ce
where itemid in
(
227290, -- CRRT mode
228004, -- Citrate (ACD-A)
225958, -- Heparin Concentration (units/mL)
224145, -- Heparin Dose (per hour)
225183, -- Current Goal -- always there
224149, -- Access Pressure
224144, -- Blood Flow (ml/min)
225977, -- Dialysate Fluid
224154, -- Dialysate Rate
224151, -- Effluent Pressure
224150, -- Filter Pressure
224191, -- Hourly Patient Fluid Removal
228005, -- PBP (Prefilter) Replacement Rate
228006, -- Post Filter Replacement Rate
225976, -- Replacement Fluid
224153, -- Replacement Rate
224152, -- Return Pressure
226457 -- Ultrafiltrate Output
)
group by icustay_id, charttime
)
select count(icustay_id) as NumObs
, sum(case when HasCRRTMode = 1 and OtherITEMID = 1 then 1 else 0 end) as Both
, sum(case when HasCRRTMode = 1 and OtherITEMID = 0 then 1 else 0 end) as OnlyCRRTMode
, sum(case when HasCRRTMode = 0 and OtherITEMID = 1 then 1 else 0 end) as NoCRRTMode
from t1
df = pd.read_sql_query(query,con)
HTML(df.to_html().replace('NaN', ''))
Explanation: We can take this analysis a bit further and ask: is CRRT mode present when none of the other settings are present?
End of explanation
# define the example ICUSTAY_ID for the below code
# originally, this was 246866 - if changed, the interpretation provided will no longer make sense
query_where_clause = "and icustay_id = 246866"
Explanation: As CRRT mode is relatively redundant, doesn't necessarily indicate CRRT is being actively performed, and documentation for it is not 100% compliant, we exclude it from the list of ITEMID.
CHARTEVENTS wrap up
The following is the final set of ITEMID from CHARTEVENTS which indicate CRRT is started/ongoing:
sql
224149, -- Access Pressure
224144, -- Blood Flow (ml/min)
228004, -- Citrate (ACD-A)
225183, -- Current Goal
225977, -- Dialysate Fluid
224154, -- Dialysate Rate
224151, -- Effluent Pressure
224150, -- Filter Pressure
225958, -- Heparin Concentration (units/mL)
224145, -- Heparin Dose (per hour)
224191, -- Hourly Patient Fluid Removal
228005, -- PBP (Prefilter) Replacement Rate
228006, -- Post Filter Replacement Rate
225976, -- Replacement Fluid
224153, -- Replacement Rate
224152, -- Return Pressure
226457 -- Ultrafiltrate Output
The following ITEMID are the final set which indicate CRRT is started/stopped/ongoing (i.e. require special rules):
sql
224146, -- System Integrity
225956 -- Reason for CRRT Filter Change
table 2 of 3: INPUTEVENTS_MV
The following is the final set of ITEMID from INPUTEVENTS_MV:
sql
227525,-- Calcium Gluconate (CRRT)
227536 -- KCl (CRRT)
No special examination is required for these fields - they are guaranteed to be CRRT (as verified by a clinician) - we can use these to indicate that CRRT is active/started.
table 3 of 3: PROCEDUREEVENTS_MV
The following are the set of itemid from above related to PROCEDUREEVENTS_MV:
itemid | label
--- | ---
225436 | CRRT Filter Change
225802 | Dialysis - CRRT
225803 | Dialysis - CVVHD
225809 | Dialysis - CVVHDF
225955 | Dialysis - SCUF
The only contentious ITEMID is 225436 (CRRT Filter Change). This ITEMID indicates a break from CRRT, and it reinitiates at the end of this change. While in principle this could be used as an end time, documentation on it is not 100%, and as recommended by staff it's easier to ignore this and use the filter change field from CHARTEVENTS to define the end of CRRT events.
The final set of ITEMID used for CRRT are:
sql
225802, -- Dialysis - CRRT
225803, -- Dialysis - CVVHD
225809, -- Dialysis - CVVHDF
225955 -- Dialysis - SCUF
Step 4: definition of concept using rules
Let's review the goal of this notebook. We would like to define the duration of CRRT for each patient. Concretely, this means we must define, for each ICUSTAY_ID:
a STARTTIME
an ENDTIME
As CRRT can be started/stopped throughout a patient's stay, there may be multiple STARTTIME and ENDTIME for a single ICUSTAY_ID - but they should not overlap.
Recall that CHARTEVENTS stores data at charted times (CHARTTIME), and as a result the settings are stored at a single point in time. For CHARTEVENTS, the main task thus becomes converting a series of CHARTTIME into pairs of STARTTIME and ENDTIME. Intuitively this can be done by looking for consecutive settings each hour, and combining these into a single CRRT event. The first observed CHARTTIME becomes the STARTTIME, and the last observed CHARTTIME becomes the ENDTIME. However, CHARTEVENTS is not the only source of data. To improve the accuracy of our calculation, we also include data from INPUTEVENTS_MV and PROCEDUREEVENTS_MV. For INPUTEVENTS_MV, this does not complicate things too much. Each observation in INPUTEVENTS_MV is also stored at a single CHARTTIME, and so we simply need to combine this table with CHARTEVENTS before proceeding (likely by using the SQL UNION command).
PROCEDUREEVENTS_MV is more complicated as it actually stores data with a STARTTIME and an ENDTIME column already. We need to merge the extracted data from CHARTEVENTS/INPUTEVENTS_MV with this already nicely formatted data from PROCEDUREEVENTS_MV.
With the task laid out, let's get started. We will:
Aggregate INPUTEVENTS_MV into durations
Convert CHARTEVENTS into durations
Compare these durations with PROCEDUREEVENTS_MV and decide on a rule for merging the two
Merge PROCEDUREEVENTS_MV with INPUTEVENTS_MV/CHARTEVENTS for a final durations table for Metavision
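As a rough illustration of step 2, the intuition of collapsing consecutive hourly CHARTTIME values into start/stop pairs could be sketched in pandas as follows (this is only a sketch with an arbitrary one-hour gap threshold; the actual implementation below is done in SQL and also uses the explicit start/stop markers):
python
# ce: hypothetical DataFrame of the relevant CHARTEVENTS rows, with icustay_id and charttime columns
ce = ce.sort_values(['icustay_id', 'charttime'])
new_event = (ce.groupby('icustay_id')['charttime'].diff() > pd.Timedelta(hours=1)).astype(int)
ce['crrt_num'] = new_event.groupby(ce['icustay_id']).cumsum()
durations = (ce.groupby(['icustay_id', 'crrt_num'])['charttime']
               .agg(starttime='min', endtime='max')
               .reset_index())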
End of explanation
def display_df(df):
col = [x for x in df.columns if x != 'icustay_id']
df_tmp = df[col].copy()
for c in df_tmp.columns:
if '[ns]' in str(df_tmp[c].dtype):
df_tmp[c] = df_tmp[c].dt.strftime('Day %d, %H:%M')
display(HTML(df_tmp.to_html().replace('NaN', '')))
Explanation: To make sure we don't display data we don't have to, we define a function which: (i) doesn't display icustay_id, and (ii) simplifies the date by removing the month/year.
End of explanation
print("Durations from INPUTEVENTS for one patient with KCl...")
query = query_schema +
select
linkorderid
, orderid
, case when itemid = 227525 then 'Calcium' else 'KCl' end as label
, starttime, endtime
, rate, rateuom
, statusdescription
from inputevents_mv
where itemid in
(
--227525,-- Calcium Gluconate (CRRT)
227536 -- KCl (CRRT)
)
and statusdescription != 'Rewritten'
+ query_where_clause +
order by starttime, endtime
ie = pd.read_sql_query(query,con)
display_df(ie)
Explanation: Aggregating INPUTEVENTS_MV
First, let's look at INPUTEVENTS_MV. Each entry is stored with a starttime and an endtime. Note we have to exclude statusdescription = 'Rewritten' as these are undelivered medications which have been rewritten (useful for auditing purposes but does not give you information about drugs delivered to the patient).
End of explanation
print("Durations from INPUTEVENTS_MV, new events noted with time_partition...")
query = query_schema +
with t1 as
(
select
icustay_id
, case when itemid = 227525 then 'Calcium' else 'KCl' end as label
, starttime, endtime
, lag(endtime) over (partition by icustay_id, itemid order by starttime, endtime) as endtime_lag
, case
when lag(endtime) over (partition by icustay_id, itemid order by starttime, endtime) = starttime
then 0
else 1 end
as new_event_flag
, rate, rateuom
, statusdescription
from inputevents_mv
where itemid in
(
--227525,-- Calcium Gluconate (CRRT)
227536 -- KCl (CRRT)
)
and statusdescription != 'Rewritten'
+ query_where_clause +
)
select
label
, starttime
, endtime
, endtime_lag
, new_event_flag
, rate, rateuom
, statusdescription
from t1
ie = pd.read_sql_query(query,con)
display_df(ie)
Explanation: Normally linkorderid links together administrations which are consecutive but may have changes in rate, but from the above we can note that linkorderid seems to rarely group entries. Rows 8-10 and 16-18 are grouped (i.e. they are sequential administrations where the rate may or may not have changed), but many aren't even though they occur sequentially. We'd like to merge together sequential events to simplify the durations - and it appears we can greatly simplify this data by merging two rows if endtime(row-1) == starttime(row).
We can do this in four steps:
Create a binary flag that indicates when new "events" occur, where an "event" is defined as a continuous segment of administration, i.e. the binary flag is 1 if the row does not immediately follow the previous row, and 0 if the row does immediately follow the previous row
Aggregate this binary flag so each individual event is assigned a unique integer (i.e. create a partition over these events)
Create an integer to identify the last row in the event (so we can get useful information from this row)
Group the data based on the partition to result in a single starttime and endtime for each contiguous medication administration
Now we'll go through the code for doing this step by step.
Step 1: create a binary flag for new events
sql
with t1 as
(
select icustay_id
, case when itemid = 227525 then 'Calcium' else 'KCl' end as label
, starttime
, endtime
, case
when lag(endtime) over (partition by icustay_id, itemid order by starttime, endtime) = starttime
then 0
else 1 end
as new_event_flag
, rate, rateuom
, statusdescription
from inputevents_mv
where itemid in
(
--227525,-- Calcium Gluconate (CRRT)
227536 -- KCl (CRRT)
)
and statusdescription != 'Rewritten'
+ query_where_clause +
This selects data from INPUTEVENTS_MV for just KCl using a single patient specified by the query_where_clause (this is so it can act as an example - you can omit the single patient and it will work on all the data).
The key code block is here:
sql
, case
when lag(endtime) over (partition by icustay_id, itemid order by starttime, endtime) = starttime
then 0
else 1 end
as new_event_flag
This creates a boolean flag which is 1 every time the current starttime is not equal to the previous endtime, i.e. it marks new "events". We can see it in action here:
End of explanation
print("Durations from INPUTEVENTS for one patient with KCl...")
query = query_schema +
with t1 as
(
select icustay_id
, case when itemid = 227525 then 'Calcium' else 'KCl' end as label
, starttime
, endtime
, case
when lag(endtime) over (partition by icustay_id, itemid order by starttime, endtime) = starttime
then 0
else 1 end
as new_event_flag
, rate, rateuom
, statusdescription
from inputevents_mv
where itemid in
(
--227525,-- Calcium Gluconate (CRRT)
227536 -- KCl (CRRT)
)
and statusdescription != 'Rewritten'
+ query_where_clause +
)
, t2 as
(
select
icustay_id
, label
, starttime, endtime
, new_event_flag
, SUM(new_event_flag) OVER (partition by icustay_id, label order by starttime, endtime) as time_partition
, rate, rateuom, statusdescription
from t1
)
select
label
, starttime
, endtime
, new_event_flag
, time_partition
, rate, rateuom, statusdescription
from t2
order by starttime, endtime
ie = pd.read_sql_query(query,con)
display_df(ie)
Explanation: Note we have added the endtime_lag column to give a clearer idea of how the query is working. We can see the first row starts with new_event_flag = 1 since endtime_lag is null. Next, the endtime_lag != starttime, so new_event_flag is again = 1.
Finally, for row 2 (marked by 2 on the far left), the endtime_lag == starttime - and so new_event_flag is 0. This continues all the way until row 9, where we can again see endtime_lag != starttime. Note that the statusdescription on row 8 even informs us why: it states that the administration has been "Paused". This is why we mentioned earlier that we were interested in the last row from an event.
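As an aside, the first two steps (the flag and the cumulative-sum partition) can also be expressed in pandas on the DataFrame returned above (a sketch; ie here holds a single patient and a single drug, so no extra grouping is needed):
python
ie = ie.sort_values(['starttime', 'endtime'])
ie['new_event_flag'] = (ie['starttime'] != ie['endtime'].shift()).astype(int)  # step 1: flag new events
ie['time_partition'] = ie['new_event_flag'].cumsum()                           # step 2: unique id per event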
Step 2: aggregate the binary flag into a unique partition for each event
With SQL, in order to aggregate groups of rows, we need a partition. That is, we need some key (usually an integer) which is unique for that set of rows. Once we have this unique key, we can do all the standard SQL aggregations like max(), min(), and so on (note: SQL "window" functions operate on the same principle, except you define the partition explicitly from a combination of columns).
With this in mind, our next step is to use this flag to create a unique integer for each set of rows we'd like grouped. Since we'd like to group new events together, we can run a cumulative sum along new_event_flag: every time a new event occurs, the integer will increase and consequently that event will all have the same unique key. The code to do this is:
sql
SUM(new_event_flag) OVER (partition by icustay_id, label order by starttime, endtime) as time_partition
Let's see this in action:
End of explanation
print("Durations from INPUTEVENTS for one patient with KCl...")
query = query_schema +
with t1 as
(
select icustay_id
, case when itemid = 227525 then 'Calcium' else 'KCl' end as label
, starttime
, endtime
, case
when lag(endtime) over (partition by icustay_id, itemid order by starttime, endtime) = starttime
then 0
else 1 end
as new_event_flag
, rate, rateuom
, statusdescription
from inputevents_mv
where itemid in
(
--227525,-- Calcium Gluconate (CRRT)
227536 -- KCl (CRRT)
)
and statusdescription != 'Rewritten'
+ query_where_clause +
)
, t2 as
(
select
icustay_id
, label
, starttime, endtime
, SUM(new_event_flag) OVER (partition by icustay_id, label order by starttime, endtime) as time_partition
, rate, rateuom, statusdescription
from t1
)
, t3 as
(
select
icustay_id
, label
, starttime, endtime
, time_partition
, rate, rateuom, statusdescription
, ROW_NUMBER() over (PARTITION BY icustay_id, label, time_partition order by starttime desc, endtime desc) as lastrow
from t2
)
select
label
, starttime
, endtime
, time_partition
, rate, rateuom
, statusdescription
, lastrow
from t3
order by starttime, endtime
ie = pd.read_sql_query(query,con)
display_df(ie)
Explanation: The above (hopefully) makes it clear how a unique partition for each continuous segment of KCl administration can be delineated by cumulatively summing new_event_flag to create time_partition.
Step 3: create an integer to mark the last row of an event
From above, it appears as though the last statusdescription would provide us useful debugging information as to why the administration event stopped - so we have another inline view where we create an integer which is 1 for the last statusdescription.
End of explanation
print("Durations from INPUTEVENTS for one patient with KCl...")
query = query_schema +
with t1 as
(
select icustay_id
, case when itemid = 227525 then 'Calcium' else 'KCl' end as label
, starttime
, endtime
, case
when lag(endtime) over (partition by icustay_id, itemid order by starttime, endtime) = starttime
then 0
else 1 end
as new_event_flag
, rate, rateuom
, statusdescription
from inputevents_mv
where itemid in
(
227525,-- Calcium Gluconate (CRRT)
227536 -- KCl (CRRT)
)
and statusdescription != 'Rewritten'
+ query_where_clause +
)
, t2 as
(
select
icustay_id
, label
, starttime, endtime
, SUM(new_event_flag) OVER (partition by icustay_id, label order by starttime, endtime) as time_partition
, rate, rateuom, statusdescription
from t1
)
, t3 as
(
select
icustay_id
, label
, starttime, endtime
, time_partition
, rate, rateuom, statusdescription
, ROW_NUMBER() over (PARTITION BY icustay_id, label, time_partition order by starttime desc, endtime desc) as lastrow
from t2
)
select
label
--, time_partition
, min(starttime) AS starttime
, max(endtime) AS endtime
, min(rate) AS rate_min
, max(rate) AS rate_max
, min(rateuom) AS rateuom
, min(case when lastrow = 1 then statusdescription else null end) as statusdescription
from t3
group by icustay_id, label, time_partition
order by starttime, endtime
ie = pd.read_sql_query(query,con)
display_df(ie)
Explanation: Step 4: aggregate to merge together contiguous start/end times
Now we aggregate the starttime and endtime together by grouping by time_partition, as follows:
we want the first starttime, so we use min(starttime)
we want the last endtime, so we use max(endtime)
we want the statusdescription at the last row, so we aggregate a column where all rows except the last are null
To give more detail on the last step, let's look at the SQL code:
sql
, min(case when lastrow = 1 then statusdescription else null end) as statusdescription
Aggregate functions ignore null values, so if we set the column to null for all but lastrow = 1, then the aggregate function is guaranteed to only return the value at lastrow = 1. The aggregate function could be either min() or max(), since it only effectively operates on a single value.
Tying it all together, we have the final query:
End of explanation
query_inputevents = query_schema +
with t1 as
(
select icustay_id
, case when itemid = 227525 then 'Calcium' else 'KCl' end as label
, starttime
, endtime
, case
when lag(endtime) over (partition by icustay_id, itemid order by starttime, endtime) = starttime
then 0
else 1 end
as new_event_flag
, rate, rateuom
, statusdescription
from inputevents_mv
where itemid in
(
227525,-- Calcium Gluconate (CRRT)
227536 -- KCl (CRRT)
)
and statusdescription != 'Rewritten'
)
, t2 as
(
select
icustay_id
, label
, starttime, endtime
, SUM(new_event_flag) OVER (partition by icustay_id, label order by starttime, endtime) as time_partition
, rate, rateuom, statusdescription
from t1
)
, t3 as
(
select
icustay_id
, label
, starttime, endtime
, time_partition
, rate, rateuom, statusdescription
, ROW_NUMBER() over (PARTITION BY icustay_id, label, time_partition order by starttime desc, endtime desc) as lastrow
from t2
)
select
icustay_id
, time_partition as num
, min(starttime) AS starttime
, max(endtime) AS endtime
, label
--, min(rate) AS rate_min
--, max(rate) AS rate_max
--, min(rateuom) AS rateuom
--, min(case when lastrow = 1 then statusdescription else null end) as statusdescription
from t3
group by icustay_id, label, time_partition
order by starttime, endtime
Explanation: The above looks good - so we save the query to query_inputevents without the clause that isolates the data to one patient.
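It can then be run across all MetaVision patients in the usual way, for example:
python
crrt_meds = pd.read_sql_query(query_inputevents, con)
crrt_meds.head()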
End of explanation
print("Durations from INPUTEVENTS for one patient given propofol...")
query = query_schema +
with t1 as
(
select icustay_id
, di.label
, mv.linkorderid, mv.orderid
, starttime
, endtime
, rate, rateuom
, amount, amountuom
from inputevents_mv mv
inner join d_items di
on mv.itemid = di.itemid
and statusdescription != 'Rewritten'
+ query_where_clause +
and mv.itemid = 222168
)
select
label
, linkorderid, orderid
, starttime
, endtime
, rate, rateuom
, amount, amountuom
from t1
order by starttime, endtime
ie = pd.read_sql_query(query,con)
display_df(ie)
Explanation: Conclusion
We now have a good method of combining contiguous events from INPUTEVENTS_MV. Note that this is usually not required, as the linkorderid is meant to partition these events for us. For example, let's look at a very common sedative agent used in the ICU, propofol:
End of explanation
print("Grouped durations from INPUTEVENTS for one patient given propofol...")
query = query_schema +
with t1 as
(
select icustay_id
, di.itemid, di.label
, mv.linkorderid, mv.orderid
, starttime
, endtime
, amount, amountuom
, rate, rateuom
from inputevents_mv mv
inner join d_items di
on mv.itemid = di.itemid
and statusdescription != 'Rewritten'
+ query_where_clause +
and mv.itemid = 222168
)
select icustay_id
, label
, linkorderid
, min(starttime) as starttime
, max(endtime) as endtime
, min(rate) as rate_min
, max(rate) as rate_max
, max(rateuom) as rateuom
, min(amount) as amount_min
, max(amount) as amount_max
, max(amountuom) as amountuom
from t1
group by icustay_id, itemid, label, linkorderid
order by starttime, endtime
ie = pd.read_sql_query(query,con)
display_df(ie)
Explanation: Here we see that linkorderid nicely delineates contiguous events without us having to put in the effort of above. It also separates distinct administrations. Above, at row 6, we can see a "1 minute" delivery of propofol. This is how MetaVision tables (those which end in _mv) mark "instant" events - in the case of drug delivery, these are boluses of drugs administered to the patient.
When using this data, we can group like events on a partition (as we did above), but we don't have to create the partition: it already exists with linkorderid.
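As a rough illustration of both points in pandas - a sketch that assumes ie is the propofol dataframe loaded above, with datetime columns as returned by read_sql_query:
# flag the "instant" (bolus) rows, which are documented as 1-minute events
ie['duration_min'] = (ie['endtime'] - ie['starttime']).dt.total_seconds() / 60.0
print(ie.loc[ie['duration_min'] <= 1, ['linkorderid', 'starttime', 'amount', 'amountuom']])
# contiguous rows share a linkorderid, so a plain groupby reproduces the SQL partition
print(ie.groupby('linkorderid').agg({'starttime': 'min', 'endtime': 'max', 'amount': 'sum'}))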
End of explanation
# convert CHARTEVENTS into durations
# NOTE: we only look at a single patient as an exemplar
print("Durations from CHARTEVENTS...")
query = query_schema +
with crrt_settings as
(
select ce.icustay_id, ce.charttime
, max(
case
when ce.itemid in
(
224149, -- Access Pressure
224144, -- Blood Flow (ml/min)
228004, -- Citrate (ACD-A)
225183, -- Current Goal
225977, -- Dialysate Fluid
224154, -- Dialysate Rate
224151, -- Effluent Pressure
224150, -- Filter Pressure
225958, -- Heparin Concentration (units/mL)
224145, -- Heparin Dose (per hour)
224191, -- Hourly Patient Fluid Removal
228005, -- PBP (Prefilter) Replacement Rate
228006, -- Post Filter Replacement Rate
225976, -- Replacement Fluid
224153, -- Replacement Rate
224152, -- Return Pressure
226457 -- Ultrafiltrate Output
) then 1
else 0 end)
as RRT
-- Below indicates that a new instance of CRRT has started
, max(
case
-- System Integrity
when ce.itemid = 224146 and value in ('New Filter','Reinitiated')
then 1
else 0
end ) as RRT_start
-- Below indicates that the current instance of CRRT has ended
, max(
case
-- System Integrity
when ce.itemid = 224146 and value in ('Discontinued','Recirculating')
then 1
when ce.itemid = 225956
then 1
else 0
end ) as RRT_end
from chartevents ce
where ce.itemid in
(
-- MetaVision ITEMIDs
-- Below require special handling
224146, -- System Integrity
225956, -- Reason for CRRT Filter Change
-- Below are settings which indicate CRRT is started/continuing
224149, -- Access Pressure
224144, -- Blood Flow (ml/min)
228004, -- Citrate (ACD-A)
225183, -- Current Goal
225977, -- Dialysate Fluid
224154, -- Dialysate Rate
224151, -- Effluent Pressure
224150, -- Filter Pressure
225958, -- Heparin Concentration (units/mL)
224145, -- Heparin Dose (per hour)
224191, -- Hourly Patient Fluid Removal
228005, -- PBP (Prefilter) Replacement Rate
228006, -- Post Filter Replacement Rate
225976, -- Replacement Fluid
224153, -- Replacement Rate
224152, -- Return Pressure
226457 -- Ultrafiltrate Output
)
and ce.value is not null
+ query_where_clause +
group by icustay_id, charttime
)
-- create the durations for each CRRT instance
select icustay_id
, ROW_NUMBER() over (partition by icustay_id order by num) as num
, min(charttime) as starttime
, max(charttime) as endtime
from
(
select vd1.*
-- create a cumulative sum of the instances of new CRRT
-- this results in a monotonically increasing integer assigned to each CRRT
, case when RRT_start = 1 or RRT=1 or RRT_end = 1 then
SUM( NewCRRT )
OVER ( partition by icustay_id order by charttime )
else null end
as num
--- now we convert CHARTTIME of CRRT settings into durations
from ( -- vd1
select
icustay_id
-- this carries over the previous charttime
, case
when RRT=1 then
LAG(CHARTTIME, 1) OVER (partition by icustay_id, RRT order by charttime)
else null
end as charttime_lag
, charttime
, RRT
, RRT_start
, RRT_end
-- calculate the time since the last event
, case
-- non-null iff the current observation indicates settings are present
when RRT=1 then
CHARTTIME -
(
LAG(CHARTTIME, 1) OVER
(
partition by icustay_id, RRT
order by charttime
)
)
else null
end as CRRT_duration
-- now we determine if the current event is a new instantiation
, case
when RRT_start = 1
then 1
-- if there is an end flag, we mark any subsequent event as new
when RRT_end = 1
-- note the end is *not* a new event, the *subsequent* row is
-- so here we output 0
then 0
when
LAG(RRT_end,1)
OVER
(
partition by icustay_id, case when RRT=1 or RRT_end=1 then 1 else 0 end
order by charttime
) = 1
then 1
-- if there is less than 2 hours between CRRT settings, we do not treat this as a new CRRT event
when (CHARTTIME - (LAG(CHARTTIME, 1)
OVER
(
partition by icustay_id, case when RRT=1 or RRT_end=1 then 1 else 0 end
order by charttime
))) <= interval '2' hour
then 0
else 1
end as NewCRRT
-- use the temp table with only settings from chartevents
FROM crrt_settings
) AS vd1
-- now we can isolate to just rows with settings
-- (before we had rows with start/end flags)
-- this removes any null values for NewCRRT
where
RRT_start = 1 or RRT = 1 or RRT_end = 1
) AS vd2
group by icustay_id, num
having min(charttime) != max(charttime)
order by icustay_id, num
ce = pd.read_sql_query(query,con)
display_df(ce)
# happy with the above query - repeat it without the isolation to a single ICUSTAY_ID
query_chartevents = query_schema +
with crrt_settings as
(
select ce.icustay_id, ce.charttime
, max(
case
when ce.itemid in
(
224149, -- Access Pressure
224144, -- Blood Flow (ml/min)
228004, -- Citrate (ACD-A)
225183, -- Current Goal
225977, -- Dialysate Fluid
224154, -- Dialysate Rate
224151, -- Effluent Pressure
224150, -- Filter Pressure
225958, -- Heparin Concentration (units/mL)
224145, -- Heparin Dose (per hour)
224191, -- Hourly Patient Fluid Removal
228005, -- PBP (Prefilter) Replacement Rate
228006, -- Post Filter Replacement Rate
225976, -- Replacement Fluid
224153, -- Replacement Rate
224152, -- Return Pressure
226457 -- Ultrafiltrate Output
) then 1
else 0 end)
as RRT
-- Below indicates that a new instance of CRRT has started
, max(
case
-- System Integrity
when ce.itemid = 224146 and value in ('New Filter','Reinitiated')
then 1
else 0
end ) as RRT_start
-- Below indicates that the current instance of CRRT has ended
, max(
case
-- System Integrity
when ce.itemid = 224146 and value in ('Discontinued','Recirculating')
then 1
when ce.itemid = 225956
then 1
else 0
end ) as RRT_end
from chartevents ce
where ce.itemid in
(
-- MetaVision ITEMIDs
-- Below require special handling
224146, -- System Integrity
225956, -- Reason for CRRT Filter Change
-- Below are settings which indicate CRRT is started/continuing
224149, -- Access Pressure
224144, -- Blood Flow (ml/min)
228004, -- Citrate (ACD-A)
225183, -- Current Goal
225977, -- Dialysate Fluid
224154, -- Dialysate Rate
224151, -- Effluent Pressure
224150, -- Filter Pressure
225958, -- Heparin Concentration (units/mL)
224145, -- Heparin Dose (per hour)
224191, -- Hourly Patient Fluid Removal
228005, -- PBP (Prefilter) Replacement Rate
228006, -- Post Filter Replacement Rate
225976, -- Replacement Fluid
224153, -- Replacement Rate
224152, -- Return Pressure
226457 -- Ultrafiltrate Output
)
and ce.value is not null
group by icustay_id, charttime
)
-- create the durations for each CRRT instance
select icustay_id
, ROW_NUMBER() over (partition by icustay_id order by num) as num
, min(charttime) as starttime
, max(charttime) as endtime
from
(
select vd1.*
-- create a cumulative sum of the instances of new CRRT
-- this results in a monotonically increasing integer assigned to each CRRT
, case when RRT_start = 1 or RRT=1 or RRT_end = 1 then
SUM( NewCRRT )
OVER ( partition by icustay_id order by charttime )
else null end
as num
--- now we convert CHARTTIME of CRRT settings into durations
from ( -- vd1
select
icustay_id
-- this carries over the previous charttime
, case
when RRT=1 then
LAG(CHARTTIME, 1) OVER (partition by icustay_id, RRT order by charttime)
else null
end as charttime_lag
, charttime
, RRT
, RRT_start
, RRT_end
-- calculate the time since the last event
, case
-- non-null iff the current observation indicates settings are present
when RRT=1 then
CHARTTIME -
(
LAG(CHARTTIME, 1) OVER
(
partition by icustay_id, RRT
order by charttime
)
)
else null
end as CRRT_duration
-- now we determine if the current event is a new instantiation
, case
when RRT_start = 1
then 1
-- if there is an end flag, we mark any subsequent event as new
when RRT_end = 1
-- note the end is *not* a new event, the *subsequent* row is
-- so here we output 0
then 0
when
LAG(RRT_end,1)
OVER
(
partition by icustay_id, case when RRT=1 or RRT_end=1 then 1 else 0 end
order by charttime
) = 1
then 1
-- if there is less than 2 hours between CRRT settings, we do not treat this as a new CRRT event
when (CHARTTIME - (LAG(CHARTTIME, 1)
OVER
(
partition by icustay_id, case when RRT=1 or RRT_end=1 then 1 else 0 end
order by charttime
))) <= interval '2' hour
then 0
else 1
end as NewCRRT
-- use the temp table with only settings from chartevents
FROM crrt_settings
) AS vd1
-- now we can isolate to just rows with settings
-- (before we had rows with start/end flags)
-- this removes any null values for NewCRRT
where
RRT_start = 1 or RRT = 1 or RRT_end = 1
) AS vd2
group by icustay_id, num
having min(charttime) != max(charttime)
order by icustay_id, num
Explanation: It's also worth noting that bolus administrations do not have a rate. They only have an amount.
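A quick hedged check of that claim on the grouped propofol dataframe above (bolus rows should have an amount but no rate):
bolus = ie[ie['rate_min'].isnull() & ie['rate_max'].isnull()]
print(bolus[['starttime', 'endtime', 'amount_min', 'amount_max', 'amountuom']])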
Convert CHARTEVENTS into durations
End of explanation
# extract the durations from PROCEDUREEVENTS_MV
# NOTE: we only look at a single patient as an exemplar
print("Durations from PROCEDUREEVENTS_MV...")
query = query_schema +
select icustay_id
, ROW_NUMBER() over (partition by icustay_id order by starttime, endtime) as num
, starttime, endtime
from procedureevents_mv
where itemid in
(
225802 -- Dialysis - CRRT
, 225803 -- Dialysis - CVVHD
, 225809 -- Dialysis - CVVHDF
, 225955 -- Dialysis - SCUF
)
+ query_where_clause +
order by icustay_id, num
pe = pd.read_sql_query(query,con)
display_df(pe)
Explanation: Extract durations from PROCEDUREEVENTS_MV
PROCEDUREEVENTS_MV contains entries for dialysis. As a reminder from the above, we picked the following itemids:
225802 -- Dialysis - CRRT
225803 -- Dialysis - CVVHD
225809 -- Dialysis - CVVHDF
225955 -- Dialysis - SCUF
Extracting data for these entries is straightforward. Each instance of CRRT is documented with a single starttime and a single stoptime, with no need to merge together different rows.
End of explanation
# happy with above query
query_procedureevents = query_schema +
select icustay_id
, ROW_NUMBER() over (partition by icustay_id order by starttime, endtime) as num
, starttime, endtime
from procedureevents_mv
where itemid in
(
225802 -- Dialysis - CRRT
, 225803 -- Dialysis - CVVHD
, 225809 -- Dialysis - CVVHDF
, 225955 -- Dialysis - SCUF
)
order by icustay_id, num
Explanation: Note that the above documentation is quite diligent: there is a 1-hour gap between the first and second rows, representing an actual pause in the administration of CRRT.
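One way to see such pauses programmatically is to compute the gap between consecutive rows - a small sketch, assuming pe is the dataframe loaded above:
pe = pe.sort_values(['icustay_id', 'starttime'])
pe['gap_to_next'] = pe.groupby('icustay_id')['starttime'].shift(-1) - pe['endtime']
print(pe[['icustay_id', 'num', 'starttime', 'endtime', 'gap_to_next']])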
End of explanation
print("Durations from INPUTEVENTS...")
ie = pd.read_sql_query(query_inputevents,con)
print("Durations from CHARTEVENTS...")
ce = pd.read_sql_query(query_chartevents,con)
print("Durations from PROCEDUREEVENTS...")
pe = pd.read_sql_query(query_procedureevents,con)
Explanation: Roundup: data from INPUTEVENTS_MV, CHARTEVENTS, and PROCEDUREEVENTS_MV
End of explanation
# how many PROCEDUREEVENTS_MV dialysis events encapsulate CHARTEVENTS/INPUTEVENTS_MV?
# vice-versa?
iid = 205508
# compare the above durations
ce['source'] = 'chartevents'
ie['source'] = 'inputevents_kcl'
ie.loc[ie['label']=='Calcium','source'] = 'inputevents_ca'
pe['source'] = 'procedureevents'
df = pd.concat([ie[['icustay_id','num','starttime','endtime','source']], ce, pe])
idxDisplay = df['icustay_id'] == iid
display_df(df.loc[idxDisplay, :])
# 2) how many have no overlap whatsoever?
col_dict = {'chartevents': [247,129,191],
'inputevents_kcl': [255,127,0],
'inputevents_ca': [228,26,28],
'procedureevents': [55,126,184]}
for c in col_dict:
col_dict[c] = [x/256.0 for x in col_dict[c]]
fig, ax = plt.subplots(figsize=[16,10])
m = 0.
M = np.sum(idxDisplay)
# dummy plots for legend
legend_handle = list()
for c in col_dict:
legend_handle.append(mlines.Line2D([], [], color=col_dict[c], marker='o',
markersize=15, label=c))
for row in df.loc[idxDisplay,:].iterrows():
# row is a tuple: [index, actual_data], so we use row[1]
plt.plot([row[1]['starttime'].to_pydatetime(), row[1]['endtime'].to_pydatetime()], [0+m/M,0+m/M],
'o-',color=col_dict[row[1]['source']],
markersize=15, linewidth=2)
m=m+1
ax.xaxis.set_minor_locator(dates.HourLocator(byhour=[0,12],interval=1))
ax.xaxis.set_minor_formatter(dates.DateFormatter('%H:%M'))
ax.xaxis.grid(True, which="minor")
ax.xaxis.set_major_locator(dates.DayLocator(interval=1))
ax.xaxis.set_major_formatter(dates.DateFormatter('\n%d\n%a'))
ax.set_ylim([-0.1,1.0])
plt.legend(handles=legend_handle,loc='best')
plt.show()
# print out the above for 10 examples
# compare the above durations
ce['source'] = 'chartevents'
ie['source'] = 'inputevents_kcl'
ie.loc[ie['label']=='Calcium','source'] = 'inputevents_ca'
pe['source'] = 'procedureevents'
df = pd.concat([ie[['icustay_id','num','starttime','endtime','source']], ce, pe])
for iid in np.sort(df.icustay_id.unique()[0:10]):
iid = int(iid)
# how many PROCEDUREEVENTS_MV dialysis events encapsulate CHARTEVENTS/INPUTEVENTS_MV?
# vice-versa?
idxDisplay = df['icustay_id'] == iid
# no need to display here
#display_df(df.loc[idxDisplay, :])
# 2) how many have no overlap whatsoever?
col_dict = {'chartevents': [247,129,191],
'inputevents_kcl': [255,127,0],
'inputevents_ca': [228,26,28],
'procedureevents': [55,126,184]}
for c in col_dict:
col_dict[c] = [x/256.0 for x in col_dict[c]]
fig, ax = plt.subplots(figsize=[16,10])
m = 0.
M = np.sum(idxDisplay)
# dummy plots for legend
legend_handle = list()
for c in col_dict:
legend_handle.append(mlines.Line2D([], [], color=col_dict[c], marker='o',
markersize=15, label=c))
for row in df.loc[idxDisplay,:].iterrows():
# row is a tuple: [index, actual_data], so we use row[1]
plt.plot([row[1]['starttime'].to_pydatetime(), row[1]['endtime'].to_pydatetime()], [0+m/M,0+m/M],
'o-',color=col_dict[row[1]['source']],
markersize=15, linewidth=2)
m=m+1
ax.xaxis.set_minor_locator(dates.HourLocator(byhour=[0,6,12,18],interval=1))
ax.xaxis.set_minor_formatter(dates.DateFormatter('%H:%M'))
ax.xaxis.grid(True, which="minor")
ax.xaxis.set_major_locator(dates.DayLocator(interval=1))
ax.xaxis.set_major_formatter(dates.DateFormatter('\n%d-%m-%Y'))
ax.set_ylim([-0.1,1.0])
plt.legend(handles=legend_handle,loc='best')
# if you want to save the figures, uncomment the line below
#plt.savefig('crrt_' + str(iid) + '.png')
Explanation: Compare durations
We now need to merge together the above durations into a single, master set of CRRT administrations.
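One simple way to build that master set is an interval-union sweep over the concatenated df built above; a hedged sketch (the gap tolerance is an assumption, not something fixed by the data):
import pandas as pd
def merge_intervals(group, tol=pd.Timedelta(hours=0)):
    # union of overlapping (or near-touching) intervals for one icustay_id
    group = group.sort_values('starttime')
    merged = []
    for _, row in group.iterrows():
        if merged and row['starttime'] <= merged[-1]['endtime'] + tol:
            merged[-1]['endtime'] = max(merged[-1]['endtime'], row['endtime'])
        else:
            merged.append({'starttime': row['starttime'], 'endtime': row['endtime']})
    return pd.DataFrame(merged)
master = df.groupby('icustay_id').apply(merge_intervals).reset_index(level=0).reset_index(drop=True)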
End of explanation |
1,783 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quick introduction to GRASS GIS Temporal Framework
The GRASS GIS Temporal Framework implements temporal GIS functionality at user level and provides additionally an API to implement new spatio-temporal processing modules.
The temporal framework introduces space time datasets to represent time series of raster, 3D raster or vector maps.
It provides the following functionalities
Step1: Creating a new temporal dataset
First, we initialize temporal database
Step2: Next, we create an empty space-time raster dataset. We specify its name, its type (here
Step3: Check if the temporal dataset was created
Step4: Register maps into temporal dataset
Create a set of random maps of temperature using min and max values
Step5: Now we register the created maps within the temporal dataset
Step6: Next we update the information of temporal dataset and print its metadata
Step7: Query an existing temporal dataset
During this session you will learn how to extract values for a point from a temporal dataset.
Step8: We get the temporal dataset object
Step9: Now it is possible to obtain all the registered maps
Step10: Get useful info (as name, starting time) about registered maps and query the data using the GRASS GIS command r.what
Step11: Write CSV file
To write out a CSV file in Python, the CSV module should be used. So, import the CSV and the tempfile modules to then create a new CSV file in the temporal directory
Step12: Now, for each record read from the temporal dataset, the procedure stores the respective record in the CSV file
Step13: For verification, we can simply print the CSV file to the terminal using the cat shell command
Step14: Plot data using Matplotlib
To print the data using Matplotlib some libraries have to be imported first
Step15: Next we create the list of values for the x and y axes
Step16: Finally we plot the temperature values over time
Step17: Some other tips
Unregistering maps
To remove maps from a temporal dataset (they will not be truly deleted but just unregistered from the temporal dataset), first a map object should be created. Then, using this new dataset object, we remove the selected map(s) using the unregister_map function
Step18: Deleting a temporal dataset
Removing a space time datasets from temporal database (again, the contained maps remain in GRASS GIS) can be done directly from the same object using the delete function | Python Code:
import grass.temporal as tgis
import grass.script as gscript
Explanation: Quick introduction to GRASS GIS Temporal Framework
The GRASS GIS Temporal Framework implements temporal GIS functionality at user level and provides additionally an API to implement new spatio-temporal processing modules.
The temporal framework introduces space time datasets to represent time series of raster, 3D raster or vector maps.
It provides the following functionalities:
Assign time stamp to maps and register maps in the temporal database
Modification of time stamps
Creation, renaming and deletion of space time datasets
Registration and un-registration of maps in space time datasets
Query of maps that are registered in space time datasets using SQL 'WHERE' statements
Analysis of the spatio-temporal topology of space time datasets
Sampling of space time datasets
Computation of temporal and spatial relationships between registered maps
Higher level functions that are shared between modules
Most of the functions described above are member functions of dedicated map layer and space time dataset classes.
Three related datatypes are available:
* Space time raster datasets (strds) are designed to manage raster map time series. Modules that process strds have the naming prefix t.rast
* Space time 3D raster datasets (str3ds) are designed to manage 3D raster map time series. Modules that process str3ds have the naming prefix t.rast3d
* Space time vector datasets (stvds) are designed to manage vector map time series. Modules that process stvds have the naming prefix t.vect
Reference: Gebbert, S., Pebesma, E., 2014. TGRASS: A temporal GIS for field based environmental modeling. Environmental Modelling & Software 53, 1-12. http://dx.doi.org/10.1016/j.envsoft.2013.11.001
End of explanation
tgis.init()
Explanation: Creating a new temporal dataset
First, we initialize temporal database:
End of explanation
dataset_name = 'temperature'
dataset = tgis.open_new_stds(name=dataset_name, type='strds', temporaltype='absolute',
title="Temperature in Raleigh", descr="Created for test purposes",
semantic='mean', overwrite=True)
Explanation: Next, we create an empty space-time raster dataset. We specify its name, its type (here: strds), temporal type (absolute, relative), title and description. You can imagine a temporal dataset as a container for selected data which puts them into order, describes their space-time relationships and saves all kind of metadata. The maps themselves remain standard GRASS GIS maps.
End of explanation
# Print some info about the new dataset
dataset.print_shell_info()
Explanation: Check if the temporal dataset was created:
End of explanation
# monthly mean Raleigh temperature
nc_temp_data = {1:[30, 51], 2: [32, 54], 3: [40, 63], 4: [48, 72],
5:[57, 80], 6: [66, 87], 7: [70, 90], 8: [69, 88],
9:[62, 82], 10:[50, 73], 11:[41, 64], 12:[32, 54]}
# list of maps to add into temporal dataset
maps = []
gscript.run_command('g.region', raster='elevation')
for month, values in nc_temp_data.iteritems():
map_name = "temp_{mon}".format(mon=month)
gscript.run_command('r.random.surface', output=map_name, seed=values, high=values[1], overwrite=True)
maps.append(map_name)
print maps
Explanation: Register maps into temporal dataset
Create a set of random maps of temperature using min and max values
End of explanation
tgis.register_maps_in_space_time_dataset(type='raster', name=dataset_name, maps=','.join(maps), start='2014-01-01',
increment="1 month", interval=True, update_cmd_list=True)
Explanation: Now we register the created maps within the temporal dataset:
End of explanation
dataset.update_from_registered_maps()
dataset.print_shell_info()
Explanation: Next we update the information of temporal dataset and print its metadata:
End of explanation
coors = (638000, 222800.0)
Explanation: Query an existing temporal dataset
During this session you will learn how to extract values for a point from a temporal dataset.
End of explanation
strds = tgis.open_old_stds(dataset_name, "strds")
Explanation: We get the temporal dataset object:
End of explanation
rows = strds.get_registered_maps(columns="name,mapset,start_time,end_time",
where=None, order="start_time")
Explanation: Now it is possible to obtain all the registered maps:
End of explanation
from collections import OrderedDict
infos = OrderedDict()
for row in rows:
name = row["name"] + "@" + row["mapset"]
values = gscript.read_command('r.what', map=name, coordinates=coors).strip().split('|')
infos[name] = {'date': row["start_time"], 'value': values[3]}
print infos
Explanation: Get useful info (as name, starting time) about registered maps and query the data using the GRASS GIS command r.what:
End of explanation
import csv
import tempfile
fil = tempfile.NamedTemporaryFile(delete=False)
fil.name = fil.name + '.csv'
print fil.name
Explanation: Write CSV file
To write out a CSV file in Python, the CSV module should be used. So, import the CSV and the tempfile modules to then create a new CSV file in the temporal directory:
End of explanation
with open(fil.name, 'wb') as csvfile:
spamwriter = csv.writer(csvfile, delimiter=';',
quoting=csv.QUOTE_MINIMAL)
spamwriter.writerow(['Map_name', 'Date', 'Temp'])
for mapp, vals in infos.iteritems():
spamwriter.writerow([mapp, vals['date'].strftime('%Y-%m-%d'), vals['value']])
fil.close()
Explanation: Now, for each record read from the temporal dataset, the procedure stores the respective record in the CSV file:
End of explanation
!cat {fil.name}
Explanation: For verification, we can simply print the CSV file to the terminal using the cat shell command:
End of explanation
%matplotlib notebook
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
Explanation: Plot data using Matplotlib
To print the data using Matplotlib some libraries have to be imported first:
End of explanation
x = []
y = []
for mapp, vals in infos.iteritems():
x.append(vals['date'])
y.append(vals['value'])
print x
print y
Explanation: Next we create the list of values for the x and y axes:
End of explanation
# create the plot
fig, ax = plt.subplots()
# create the plot line
ax.plot(x,y, label='2014 monthly temperature', color='red')
# set the format of X axis label
ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m'))
# set the title of graph
plt.title('Monthly temperature');
# add legend and set it lower-center position
ax.legend(loc='lower center')
# fix the position/rotation of X lavel
fig.autofmt_xdate()
# set the label for X and Y axis
ax.set_xlabel('Month')
ax.set_ylabel('Fahrenheit temp')
# show the graph
plt.show()
Explanation: Finally we plot the temperature values over time:
End of explanation
remove_map = tgis.RasterDataset('temp_12@{mapset}'.format(mapset=gscript.gisenv()['MAPSET']))
dataset.unregister_map(remove_map)
dataset.update_from_registered_maps()
Explanation: Some other tips
Unregistering maps
To remove maps from a temporal dataset (they will not be truly deleted, just unregistered from the temporal dataset), first a map object should be created. Then, using the dataset object, we remove the selected map(s) with the unregister_map function:
End of explanation
dataset.delete()
Explanation: Deleting a temporal dataset
Removing a space time dataset from the temporal database (again, the contained maps remain in GRASS GIS) can be done directly from the same object using the delete function:
End of explanation |
1,784 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DAT210x - Programming with Python for DS
Module5- Lab4
Step1: You can experiment with these parameters
Step2: Some Convenience Functions
Step3: Load up the dataset. It may or may not have nans in it. Make sure you catch them and destroy them, by setting them to 0. This is valid for this dataset, since if the value is missing, you can assume no money was spent on it.
Step4: As instructed, get rid of the Channel and Region columns, since you'll be investigating as if this were a single location wholesaler, rather than a national / international one. Leaving these fields in here would cause KMeans to examine and give weight to them
Step5: Before unitizing / standardizing / normalizing your data in preparation for K-Means, it's a good idea to get a quick peek at it. You can do this using the .describe() method, or even by using the built-in pandas df.plot.hist()
Step6: Having checked out your data, you may have noticed there's a pretty big gap between the top customers in each feature category and the rest. Some feature scaling algorithms won't get rid of outliers for you, so it's a good idea to handle that manually---particularly if your goal is NOT to determine the top customers.
After all, you can do that with a simple Pandas .sort_values() and not a machine learning clustering algorithm. From a business perspective, you're probably more interested in clustering your +/- 2 standard deviation customers, rather than the top and bottom customers.
Remove top 5 and bottom 5 samples for each column
Step7: Drop rows by index. We do this all at once in case there is a collision. This way, we don't end up dropping more rows than we have to, if there is a single row that satisfies the drop for multiple columns. Since there are 6 columns, if we end up dropping < 5*6*2 = 60 rows, that means there indeed were collisions
Step8: What are you interested in?
Depending on what you're interested in, you might take a different approach to normalizing/standardizing your data.
You should note that all columns left in the dataset are of the same unit. You might ask yourself, do I even need to normalize / standardize the data? The answer depends on what you're trying to accomplish. For instance, although all the units are the same (generic money unit), the price per item in your store isn't. There may be some cheap items and some expensive one. If your goal is to find out what items people tend to buy together but you didn't "unitize" properly before running kMeans, the contribution of the lesser priced item would be dwarfed by the more expensive item. This is an issue of scale.
For a great overview on a few of the normalization methods supported in SKLearn, please check out
Step9: Sometimes people perform PCA before doing KMeans, so that KMeans only operates on the most meaningful features. In our case, there are so few features that doing PCA ahead of time isn't really necessary, and you can do KMeans in feature space. But keep in mind you have the option to transform your data to bring down its dimensionality. If you take that route, then your Clusters will already be in PCA-transformed feature space, and you won't have to project them again for visualization.
Step10: Print out your centroids. They're currently in feature-space, which is good. Print them out before you transform them into PCA space for viewing
Step11: Now that we've clustered our KMeans, let's do PCA, using it as a tool to visualize the results. Project the centroids as well as the samples into the new 2D feature space for visualization purposes
Step12: Visualize all the samples. Give them the color of their cluster label
Step13: Plot the Centroids as X's, and label them | Python Code:
import math
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
from sklearn import preprocessing
from sklearn.decomposition import PCA
# You might need to import more modules here..
# .. your code here ..
matplotlib.style.use('ggplot') # Look Pretty
c = ['red', 'green', 'blue', 'orange', 'yellow', 'brown']
Explanation: DAT210x - Programming with Python for DS
Module5- Lab4
End of explanation
PLOT_TYPE_TEXT = False # If you'd like to see indices
PLOT_VECTORS = True # If you'd like to see your original features in P.C.-Space
Explanation: You can experiment with these parameters:
End of explanation
def drawVectors(transformed_features, components_, columns, plt):
num_columns = len(columns)
# This function will project your *original* feature (columns)
# onto your principal component feature-space, so that you can
# visualize how "important" each one was in the
# multi-dimensional scaling
# Scale the principal components by the max value in
# the transformed set belonging to that component
xvector = components_[0] * max(transformed_features[:,0])
yvector = components_[1] * max(transformed_features[:,1])
## Visualize projections
# Sort each column by its length. These are your *original*
# columns, not the principal components.
important_features = { columns[i] : math.sqrt(xvector[i]**2 + yvector[i]**2) for i in range(num_columns) }
important_features = sorted(zip(important_features.values(), important_features.keys()), reverse=True)
print("Projected Features by importance:\n", important_features)
ax = plt.axes()
for i in range(num_columns):
# Use an arrow to project each original feature as a
# labeled vector on your principal component axes
plt.arrow(0, 0, xvector[i], yvector[i], color='b', width=0.0005, head_width=0.02, alpha=0.75, zorder=600000)
plt.text(xvector[i]*1.2, yvector[i]*1.2, list(columns)[i], color='b', alpha=0.75, zorder=600000)
return ax
def doPCA(data, dimensions=2):
model = PCA(n_components=dimensions, svd_solver='randomized', random_state=7)
model.fit(data)
return model
def doKMeans(data, num_clusters=0):
# TODO: Do the KMeans clustering here, passing in the # of clusters parameter
# and fit it against your data. Then, return a tuple containing the cluster
# centers and the labels.
#
# Hint: Just like with doPCA above, you will have to create a variable called
# `model`, which will be a SKLearn K-Means model for this to work.
# .. your code here ..
return model.cluster_centers_, model.labels_
Explanation: Some Convenience Functions
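The body of doKMeans is intentionally left for you to fill in; one possible completion, sketched with scikit-learn's KMeans (default settings, not necessarily the graded answer):
from sklearn.cluster import KMeans
def doKMeans(data, num_clusters=0):
    # fit K-Means with the requested number of clusters, then return (centers, labels)
    model = KMeans(n_clusters=num_clusters)
    model.fit(data)
    return model.cluster_centers_, model.labels_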
End of explanation
# .. your code here ..
Explanation: Load up the dataset. It may or may not have nans in it. Make sure you catch them and destroy them, by setting them to 0. This is valid for this dataset, since if the value is missing, you can assume no money was spent on it.
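A possible fill-in for the cell above; the CSV path is an assumption about where the course stores the file:
import pandas as pd
df = pd.read_csv('Datasets/Wholesale customers data.csv')  # hypothetical path
df = df.fillna(0)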
End of explanation
# .. your code here ..
Explanation: As instructed, get rid of the Channel and Region columns, since you'll be investigating as if this were a single location wholesaler, rather than a national / international one. Leaving these fields in here would cause KMeans to examine and give weight to them:
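A possible fill-in, dropping the two columns as instructed:
df = df.drop(labels=['Channel', 'Region'], axis=1)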
End of explanation
# .. your code here ..
Explanation: Before unitizing / standardizing / normalizing your data in preparation for K-Means, it's a good idea to get a quick peek at it. You can do this using the .describe() method, or even by using the built-in pandas df.plot.hist():
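A possible fill-in for the quick peek:
print(df.describe())
df.plot.hist()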
End of explanation
drop = {}
for col in df.columns:
# Bottom 5
sort = df.sort_values(by=col, ascending=True)
if len(sort) > 5: sort=sort[:5]
for index in sort.index: drop[index] = True # Just store the index once
# Top 5
sort = df.sort_values(by=col, ascending=False)
if len(sort) > 5: sort=sort[:5]
for index in sort.index: drop[index] = True # Just store the index once
Explanation: Having checked out your data, you may have noticed there's a pretty big gap between the top customers in each feature category and the rest. Some feature scaling algorithms won't get rid of outliers for you, so it's a good idea to handle that manually---particularly if your goal is NOT to determine the top customers.
After all, you can do that with a simple Pandas .sort_values() and not a machine learning clustering algorithm. From a business perspective, you're probably more interested in clustering your +/- 2 standard deviation customers, rather than the top and bottom customers.
Remove top 5 and bottom 5 samples for each column:
End of explanation
print("Dropping {0} Outliers...".format(len(drop)))
df.drop(inplace=True, labels=drop.keys(), axis=0)
df.describe()
Explanation: Drop rows by index. We do this all at once in case there is a collision. This way, we don't end up dropping more rows than we have to, if there is a single row that satisfies the drop for multiple columns. Since there are 6 columns, if we end up dropping < 5*6*2 = 60 rows, that means there indeed were collisions:
End of explanation
#T = preprocessing.StandardScaler().fit_transform(df)
#T = preprocessing.MinMaxScaler().fit_transform(df)
#T = preprocessing.MaxAbsScaler().fit_transform(df)
#T = preprocessing.Normalizer().fit_transform(df)
T = df # No Change
Explanation: What are you interested in?
Depending on what you're interested in, you might take a different approach to normalizing/standardizing your data.
You should note that all columns left in the dataset are of the same unit. You might ask yourself, do I even need to normalize / standardize the data? The answer depends on what you're trying to accomplish. For instance, although all the units are the same (generic money unit), the price per item in your store isn't. There may be some cheap items and some expensive one. If your goal is to find out what items people tend to buy together but you didn't "unitize" properly before running kMeans, the contribution of the lesser priced item would be dwarfed by the more expensive item. This is an issue of scale.
For a great overview on a few of the normalization methods supported in SKLearn, please check out: https://stackoverflow.com/questions/30918781/right-function-for-normalizing-input-of-sklearn-svm
Suffice to say, at the end of the day, you're going to have to know what question you want answered and what data you have available in order to select the best method for your purpose. Luckily, SKLearn's interfaces are easy to switch out so in the mean time, you can experiment with all of them and see how they alter your results.
5-sec summary before you dive deeper online:
Normalization
Let's say your user spend a LOT. Normalization divides each item by the average overall amount of spending. Stated differently, your new feature is = the contribution of overall spending going into that particular item: \$spent on feature / \$overall spent by sample.
MinMax
What % in the overall range of $spent by all users on THIS particular feature is the current sample's feature at? When you're dealing with all the same units, this will produce a near face-value amount. Be careful though: if you have even a single outlier, it can cause all your data to get squashed up in lower percentages.
Imagine your buyers usually spend \$100 on wholesale milk, but today only spent \$20. This is the relationship you're trying to capture with MinMax. NOTE: MinMax doesn't standardize (std. dev.); it only normalizes / unitizes your feature, in the mathematical sense. MinMax can be used as an alternative to zero mean, unit variance scaling. [(sampleFeatureValue-min) / (max-min)] * (max-min) + min Where min and max are for the overall feature values for all samples.
Back to The Assignment
Un-comment just ONE of the lines at a time and see how it alters your results. Pay attention to the direction of the arrows, as well as their LENGTHS:
End of explanation
# Do KMeans
n_clusters = 3
centroids, labels = doKMeans(T, n_clusters)
Explanation: Sometimes people perform PCA before doing KMeans, so that KMeans only operates on the most meaningful features. In our case, there are so few features that doing PCA ahead of time isn't really necessary, and you can do KMeans in feature space. But keep in mind you have the option to transform your data to bring down its dimensionality. If you take that route, then your Clusters will already be in PCA-transformed feature space, and you won't have to project them again for visualization.
End of explanation
# .. your code here ..
Explanation: Print out your centroids. They're currently in feature-space, which is good. Print them out before you transform them into PCA space for viewing
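A possible fill-in - wrapping the centroids in a DataFrame with the original column names makes them easier to read (a sketch):
print(pd.DataFrame(centroids, columns=df.columns))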
End of explanation
display_pca = doPCA(T)
T = display_pca.transform(T)
CC = display_pca.transform(centroids)
Explanation: Now that we've clustered our KMeans, let's do PCA, using it as a tool to visualize the results. Project the centroids as well as the samples into the new 2D feature space for visualization purposes:
End of explanation
fig = plt.figure()
ax = fig.add_subplot(111)
if PLOT_TYPE_TEXT:
# Plot the index of the sample, so you can further investigate it in your dset
for i in range(len(T)): ax.text(T[i,0], T[i,1], df.index[i], color=c[labels[i]], alpha=0.75, zorder=600000)
ax.set_xlim(min(T[:,0])*1.2, max(T[:,0])*1.2)
ax.set_ylim(min(T[:,1])*1.2, max(T[:,1])*1.2)
else:
# Plot a regular scatter plot
sample_colors = [ c[labels[i]] for i in range(len(T)) ]
ax.scatter(T[:, 0], T[:, 1], c=sample_colors, marker='o', alpha=0.2)
Explanation: Visualize all the samples. Give them the color of their cluster label
End of explanation
ax.scatter(CC[:, 0], CC[:, 1], marker='x', s=169, linewidths=3, zorder=1000, c=c)
for i in range(len(centroids)):
ax.text(CC[i, 0], CC[i, 1], str(i), zorder=500010, fontsize=18, color=c[i])
# Display feature vectors for investigation:
if PLOT_VECTORS:
drawVectors(T, display_pca.components_, df.columns, plt)
# Add the cluster label back into the dataframe and display it:
df['label'] = pd.Series(labels, index=df.index)
df
plt.show()
Explanation: Plot the Centroids as X's, and label them
End of explanation |
1,785 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Magic functions
You can enable magic functions by loading pandas_td.ipython
Step1: It can be loaded automatically by the following configuration in "~/.ipython/profile_default/ipython_config.py"
Step2: After loading the extension, type "%td" and press TAB to list magic functions
Step3: %td_tables returns the list of tables
Step4: %td_jobs returns the list of recently executed jobs
Step5: Use database
%td_use is a special function that has side effects. First, it pushes table names into the current namespace
Step6: By printing a table name, you can describe column names
Step7: Tab completion is also supported
Step8: The result of the query can be stored in a variable by -o
Step9: Or you can save the result into a file by -O
Step10: Python-style variable substition is supported
Step11: You can preview the actual query by --dry-run (or -n)
Step12: Time-series index
With magic functions, "time" column is converted into time-series index automatically. You can use td_date_trunc() or td_time_format() in combination with GROUP BY for aggregation
Step13: Plotting
--plot is a convenient option for plotting. The first column represents x-axis. Other columns represent y-axis
Step14: In practice, however, it is more efficient to execute rough calculation on the server side and store the result into a variable for further analysis
Step15: --plot provides a shortcut way of plotting "pivot charts", as a combination of pivot() and plot(). If the query result contains non-numeric columns, or column names ending with "_id", they are used as columns parameter
Step16: Pivot tables
--pivot creates a pivot table from the result of query. Like --plot, the first column represents index and other non-numeric columns represents new columns
Step17: Verbose output
By passing -v (--verbose) option, you can print pseudo Python code that was executed by the magic function. | Python Code:
%load_ext pandas_td.ipython
Explanation: Magic functions
You can enable magic functions by loading pandas_td.ipython:
End of explanation
c = get_config()
c.InteractiveShellApp.extensions = [
'pandas_td.ipython',
]
Explanation: It can be loaded automatically by the following configuration in "~/.ipython/profile_default/ipython_config.py":
End of explanation
%td_databases
Explanation: After loading the extension, type "%td" and press TAB to list magic functions:
List functions
%td_databases returns the list of databases:
End of explanation
%td_tables sample
Explanation: %td_tables returns the list of tables:
End of explanation
%td_jobs
Explanation: %td_jobs returns the list of recently executed jobs:
End of explanation
%td_use sample_datasets
Explanation: Use database
%td_use is a special function that has side effects. First, it pushes table names into the current namespace:
End of explanation
nasdaq
Explanation: By printing a table name, you can describe column names:
End of explanation
%%td_presto
select count(1) cnt
from nasdaq
Explanation: Tab completion is also supported:
As the second effect of %td_use, it implicitly changes "default database", which is used when you write queries without database names.
Query functions
%%td_hive, %%td_pig, and %%td_presto are cell magic functions that run queries:
End of explanation
%%td_presto -o df
select count(1) cnt
from nasdaq
df
Explanation: The result of the query can be stored in a variable by -o:
End of explanation
%%td_presto -O './output.csv'
select count(1) cnt
from nasdaq
Explanation: Or you can save the result into a file by -O:
End of explanation
start = '2010-01-01'
end = '2011-01-01'
%%td_presto
select count(1) cnt
from nasdaq
where td_time_range(time, '{start}', '{end}')
Explanation: Python-style variable substitution is supported:
End of explanation
%%td_presto -n
select count(1) cnt
from nasdaq
where td_time_range(time, '{start}', '{end}')
Explanation: You can preview the actual query by --dry-run (or -n):
End of explanation
%%td_presto
select
-- Time-series index (yearly)
td_date_trunc('year', time) time,
-- Same as above
-- td_time_format(time, 'yyyy-01-01') time,
count(1) cnt
from
nasdaq
group by
1
limit
3
Explanation: Time-series index
With magic functions, "time" column is converted into time-series index automatically. You can use td_date_trunc() or td_time_format() in combination with GROUP BY for aggregation:
End of explanation
%matplotlib inline
%%td_presto --plot
select
-- x-axis
td_date_trunc('year', time) time,
-- y-axis
min(low) low,
max(high) high
from
nasdaq
where
symbol = 'AAPL'
group by
1
Explanation: Plotting
--plot is a convenient option for plotting. The first column represents the x-axis; the other columns represent the y-axis:
End of explanation
%%td_presto -o df
select
-- daily summary
td_date_trunc('day', time) time,
min(low) low,
max(high) high,
sum(volume) volume
from
nasdaq
where
symbol = 'AAPL'
group by
1
# Use resample for local calculation
df['high'].resample('1m', how='max').plot()
Explanation: In practice, however, it is more efficient to execute rough calculation on the server side and store the result into a variable for further analysis:
End of explanation
%%td_presto --plot
select
-- x-axis
td_date_trunc('month', time) time,
-- columns
symbol,
-- y-axis
avg(close) close
from
nasdaq
where
symbol in ('AAPL', 'MSFT')
group by
1, 2
Explanation: --plot provides a shortcut way of plotting "pivot charts", as a combination of pivot() and plot(). If the query result contains non-numeric columns, or column names ending with "_id", they are used as columns parameter:
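Roughly the same thing done by hand in pandas, assuming the result were stored with -o df and has time, symbol and close columns (a sketch):
df.pivot(index='time', columns='symbol', values='close').plot()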
End of explanation
%%td_presto --pivot
select
td_date_trunc('year', time) time,
symbol,
avg(close) close
from
nasdaq
where
td_time_range(time, '2010', '2015')
and symbol like 'AA%'
group by
1, 2
Explanation: Pivot tables
--pivot creates a pivot table from the result of the query. Like --plot, the first column becomes the index and the other non-numeric columns become new columns:
End of explanation
%%td_presto -v --plot
select
td_date_trunc('year', time) time,
sum(volume) volume
from
nasdaq
group by
1
Explanation: Verbose output
By passing -v (--verbose) option, you can print pseudo Python code that was executed by the magic function.
End of explanation |
1,786 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Repaso (Mรณdulo 1)
Recordar que el tema principal del mรณdulo 1 son las ecuaciones diferenciales. Entonces, al finalizar este mรณdulo, las competencias principales que deben tener ustedes es
- Resolver de forma numรฉrica ecuaciones diferenciales ordinarias (EDO) de cualquier orden.
- Graficar soluciones de dichas EDO en diferentes representaciones.
- Interpretar o concluir acerca de las grรกficas que se obtuvieron.
Ejemplo 1. Recordemos la ecuaciรณn logรญstica.
Un modelo popular de crecimiento poblacional de organismos es la llamada ecuaciรณn lรณgistica, publicada por Pierre Verhulst en 1838.
$$\frac{dx}{dt} = \mu(x) \; x = r\; (1- x)\; x.$$
En este modelo, $x$ es una variable que representa cualitativamente la poblaciรณn. El valor $x=1$ representa la capacidad mรกxima de poblaciรณn y el valor $x=0$ representa extinciรณn.
Ademรกs, $r$ es la tasa de crecimiento mรกxima de la poblaciรณn.
Dibuje $\mu(x)=r(1-x)$ para $r=1$, e interprete su significado.
Para $r=1$ y $x(0)=x_0=0.1$ resuelva numรฉricamente esta ecuaciรณn y grafique $x$ vs. $t$. ยฟQuรฉ se puede decir de la poblaciรณn cuando $t\to\infty$?
Haga un barrido de $-1\leq r\leq 1$ con pasos de $0.5$, resolviendo la ecuaciรณn logรญstica numรฉricamente cada vez y graficando los resultados en una misma grรกfica. ยฟPara quรฉ valores de $r$ se tiene crecimiento de la poblaciรณn? ยฟPara quรฉ valores se tiene extinciรณn?
Step1: According to the logistic equation, $\mu(x)$ represents a growth rate of the population. From the plot, when the population is small this rate is at its maximum, and when the population is at its cap this rate is zero.
Step2: Under these conditions, the numerical solution shows that the population tends to its maximum capacity as $t\to\infty$.
Step3: From the plot we can infer that the population grows to its maximum capacity for $r>0$, goes extinct for $r<0$, and remains constant for $r=0$.
Example 2. Rabbits vs. Sheep.
Imagine that rabbits and sheep share the same ecosystem. Suppose, moreover, that both compete for the same food (grass) and that the total amount of food is limited. Other factors such as predators, seasonal effects, and other food sources are ignored. The Lotka-Volterra two-species competition model describes this phenomenon.
Two important phenomena
Step4: With these initial conditions, we see that the rabbits go extinct and the sheep reach their maximum population capacity.
Step5: With these initial conditions, we see that the sheep go extinct and the rabbits reach their maximum population capacity.
Step6: With these initial conditions, we see that the sheep go extinct and the rabbits reach their maximum population capacity.
Step7: With these initial conditions, we see that the rabbits go extinct and the sheep reach their maximum population capacity. | Python Code:
# Numeral 1
# Importar librerรญas necesarias
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Definimos funcion mu
def mu(x, r):
return r*(1-x)
# Definimos conjunto de valores en x
x = np.linspace(0, 1.2, 50)
# Valor del parametro solicitado
r = 1
# Conjunto de valores en y
y = mu(x, r)
# Graficamos
plt.figure(figsize=(6,4))
plt.plot(x, y, 'r')
plt.xlabel('Poblacion $x$')
plt.ylabel('$\mu(x)$')
plt.grid()
plt.show()
Explanation: Review (Module 1)
Recall that the main topic of Module 1 is differential equations. By the end of this module, the main competencies you should have are:
- Numerically solving ordinary differential equations (ODEs) of any order.
- Plotting the solutions of those ODEs in different representations.
- Interpreting or drawing conclusions from the resulting plots.
Example 1. Recall the logistic equation.
A popular model of population growth is the so-called logistic equation, published by Pierre Verhulst in 1838.
$$\frac{dx}{dt} = \mu(x) \; x = r\; (1- x)\; x.$$
In this model, $x$ is a variable that qualitatively represents the population. The value $x=1$ represents the maximum population capacity and the value $x=0$ represents extinction.
In addition, $r$ is the maximum growth rate of the population.
Plot $\mu(x)=r(1-x)$ for $r=1$ and interpret its meaning.
For $r=1$ and $x(0)=x_0=0.1$, solve this equation numerically and plot $x$ vs. $t$. What can be said about the population as $t\to\infty$?
Sweep $-1\leq r\leq 1$ in steps of $0.5$, solving the logistic equation numerically each time and plotting the results on the same figure. For which values of $r$ does the population grow? For which values does it go extinct?
End of explanation
# Numeral 2
# Importamos librerรญa para soluciรณn numรฉrica de ecuaciones diferenciales
from scipy.integrate import odeint
# Definimos la funciรณn que nos pide odeint
def logistica(x, t):
return r*(1-x)*x
x0 = 0.1 # Condiciรณn inicial
tt = np.linspace(0, 10) # Vector de tiempo
xx = odeint(logistica, x0, tt) # Soluciรณn numรฉrica
# Graficamos soluciรณn
plt.figure(figsize=(6,4))
plt.plot(tt, xx, '--y', linewidth = 3)
plt.xlabel('Tiempo $t$')
plt.ylabel('Poblacion $x(t)$')
plt.grid()
plt.show()
Explanation: According to the logistic equation, $\mu(x)$ represents a growth rate of the population. From the plot, when the population is small this rate is at its maximum, and when the population is at its cap this rate is zero.
End of explanation
# Numeral 3
plt.figure(figsize=(6,4))
for r in np.arange(-1,1.1,0.5):
xx = odeint(logistica, x0, tt)
plt.plot(tt, xx, linewidth = 3, label = 'r=%f'%r)
plt.xlabel('Tiempo $t$')
plt.ylabel('Poblacion $x(t)$')
plt.legend(loc='center left', bbox_to_anchor=(1.05,0.5))
plt.grid()
plt.show()
Explanation: Under these conditions, the numerical solution shows that the population tends to its maximum capacity as $t\to\infty$.
End of explanation
# importar librerรญas
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
# definimos modelo lotka volterra
def lotka_volterra(x, t):
x1 = x[0]
x2 = x[1]
return [x1*(3-x1-2*x2), x2*(2-x2-x1)]
# primer condiciรณn inicial
x0 = [0.5, 1]
tt = np.linspace(0, 10, 100)
# solucion numerica
xx = odeint(lotka_volterra, x0, tt)
xx.shape
x1 = xx[:, 0]
x2 = xx[:, 1]
# Graficas
plt.figure(figsize=(10,5))
plt.subplot(1,2,1)
plt.plot(tt, x1, '*g', label = 'conejos $x_1$')
plt.plot(tt, x2, '--r', label = 'ovejas $x_2$')
plt.grid()
plt.legend(loc = 'best')
plt.xlabel('$t$')
plt.ylabel('Poblaciรณn')
plt.subplot(1,2,2)
plt.plot(x1, x2, 'b', label = '(conejos,ovejas)')
plt.plot(x1[0], x2[0], 'oy', lw = 3, label = 'inicial')
plt.plot(x1[-1], x2[-1], 'ok', lw = 3, label = 'final')
plt.grid()
plt.legend(loc = 'best')
plt.xlabel('$x_1$ (conejos)')
plt.ylabel('$x_2$ (ovejas)')
plt.show()
Explanation: From the plot we can infer that the population grows to its maximum capacity for $r>0$, goes extinct for $r<0$, and remains constant for $r=0$.
Example 2. Rabbits vs. Sheep.
Imagine that rabbits and sheep share the same ecosystem. Suppose, moreover, that both compete for the same food (grass) and that the total amount of food is limited. Other factors such as predators, seasonal effects, and other food sources are ignored. The Lotka-Volterra two-species competition model describes this phenomenon.
Two important phenomena:
- Each species would grow to its maximum capacity in the absence of the other. This can be modeled with the logistic equation for each species. Rabbits have a remarkable ability to reproduce, so comparatively they should grow more.
- When rabbits and sheep meet, competition begins. Sometimes the rabbits get to eat, but the sheep (being larger) will win the right to the food most of the time. We will assume that these conflicts occur at a rate proportional to the size of each population (if there are twice as many sheep, the probability that a rabbit meets a sheep is doubled). We will assume that this competition lowers the growth rate of each species, with a stronger effect on the rabbits.
With the above considerations, a specific model is:
\begin{align}
\frac{dx_1}{dt} &= x_1(3-x_1-2x_2)\\
\frac{dx_2}{dt} &= x_2(2-x_2-x_1),
\end{align}
where $x_1(t)\geq 0$ is the rabbit population at time $t$ and $x_2(t)\geq 0$ is the sheep population at time $t$. We define $x=\left[x_1\quad x_2\right]^T$.
This choice of coefficients recreates the scenario described above. However, the model can be used to study competition between species in general, and the coefficients will change in each case.
Simulate the system for each of the following initial conditions. For each case, obtain plots of $x_1$ vs. $t$, $x_2$ vs. $t$, and $x_2$ vs. $x_1$. What happens to the rabbit and sheep populations as $t\to\infty$? Can they coexist?
$x(0)=\left[x_1(0)\quad x_2(0)\right]^T = [0.5 \quad 1]^T$.
$x(0)=\left[x_1(0)\quad x_2(0)\right]^T = [1 \quad 0.5]^T$.
$x(0)=\left[x_1(0)\quad x_2(0)\right]^T = [1.5 \quad 1]^T$.
$x(0)=\left[x_1(0)\quad x_2(0)\right]^T = [1 \quad 1.5]^T$.
$x(0)=\left[x_1(0)\quad x_2(0)\right]^T = [1 \quad 1]^T$.
End of explanation
# segunda condiciรณn inicial
x0 = [1, 0.5]
tt = np.linspace(0, 10, 100)
xx = odeint(lotka_volterra, x0, tt)
x1 = xx[:, 0]
x2 = xx[:, 1]
# Graficas
plt.figure(figsize=(10,5))
plt.subplot(1,2,1)
plt.plot(tt, x1, '*g', label = 'conejos $x_1$')
plt.plot(tt, x2, '--r', label = 'ovejas $x_2$')
plt.grid()
plt.legend(loc = 'best')
plt.xlabel('$t$')
plt.ylabel('Poblaciรณn')
plt.subplot(1,2,2)
plt.plot(x1, x2, 'b', label = '(conejos,ovejas)')
plt.plot(x1[0], x2[0], 'oy', lw = 3, label = 'inicial')
plt.plot(x1[-1], x2[-1], 'ok', lw = 3, label = 'final')
plt.grid()
plt.legend(loc = 'best')
plt.xlabel('$x_1$ (conejos)')
plt.ylabel('$x_2$ (ovejas)')
plt.show()
Explanation: With these initial conditions, we see that the rabbits go extinct and the sheep reach their maximum population capacity.
End of explanation
# tercer condiciรณn inicial
x0 = [1.5, 1]
tt = np.linspace(0, 10, 100)
xx = odeint(lotka_volterra, x0, tt)
x1 = xx[:, 0]
x2 = xx[:, 1]
# Graficas
plt.figure(figsize=(10,5))
plt.subplot(1,2,1)
plt.plot(tt, x1, '*g', label = 'conejos $x_1$')
plt.plot(tt, x2, '--r', label = 'ovejas $x_2$')
plt.grid()
plt.legend(loc = 'best')
plt.xlabel('$t$')
plt.ylabel('Poblaciรณn')
plt.subplot(1,2,2)
plt.plot(x1, x2, 'b', label = '(conejos,ovejas)')
plt.plot(x1[0], x2[0], 'oy', lw = 3, label = 'inicial')
plt.plot(x1[-1], x2[-1], 'ok', lw = 3, label = 'final')
plt.grid()
plt.legend(loc = 'best')
plt.xlabel('$x_1$ (conejos)')
plt.ylabel('$x_2$ (ovejas)')
plt.show()
Explanation: With these initial conditions, we see that the sheep go extinct and the rabbits reach their maximum population capacity.
End of explanation
# fourth initial condition
x0 = [1, 1.5]
tt = np.linspace(0, 10, 100)
xx = odeint(lotka_volterra, x0, tt)
x1 = xx[:, 0]
x2 = xx[:, 1]
# Plots
plt.figure(figsize=(10,5))
plt.subplot(1,2,1)
plt.plot(tt, x1, '*g', label = 'rabbits $x_1$')
plt.plot(tt, x2, '--r', label = 'sheep $x_2$')
plt.grid()
plt.legend(loc = 'best')
plt.xlabel('$t$')
plt.ylabel('Population')
plt.subplot(1,2,2)
plt.plot(x1, x2, 'b', label = '(rabbits,sheep)')
plt.plot(x1[0], x2[0], 'oy', lw = 3, label = 'initial')
plt.plot(x1[-1], x2[-1], 'ok', lw = 3, label = 'final')
plt.grid()
plt.legend(loc = 'best')
plt.xlabel('$x_1$ (rabbits)')
plt.ylabel('$x_2$ (sheep)')
plt.show()
Explanation: With these initial conditions, we see that the sheep go extinct and the rabbits reach their maximum population capacity.
End of explanation
# fifth initial condition
x0 = [1, 1]
tt = np.linspace(0, 10, 100)
xx = odeint(lotka_volterra, x0, tt)
x1 = xx[:, 0]
x2 = xx[:, 1]
# Plots
plt.figure(figsize=(10,5))
plt.subplot(1,2,1)
plt.plot(tt, x1, '*g', label = 'rabbits $x_1$')
plt.plot(tt, x2, '--r', label = 'sheep $x_2$')
plt.grid()
plt.legend(loc = 'best')
plt.xlabel('$t$')
plt.ylabel('Population')
plt.subplot(1,2,2)
plt.plot(x1, x2, 'b', label = '(rabbits,sheep)')
plt.plot(x1[0], x2[0], 'oy', lw = 3, label = 'initial')
plt.plot(x1[-1], x2[-1], 'ok', lw = 3, label = 'final')
plt.grid()
plt.legend(loc = 'best')
plt.xlabel('$x_1$ (rabbits)')
plt.ylabel('$x_2$ (sheep)')
plt.show()
Explanation: With these initial conditions, we see that the rabbits go extinct and the sheep reach their maximum population capacity.
End of explanation |
1,787 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
<p><div class="lev1 toc-item"><a href="#Rotations" data-toc-modified-id="Rotations-1"><span class="toc-item-num">1 </span>Rotations</a></div><div class="lev1 toc-item"><a href="#PCA" data-toc-modified-id="PCA-2"><span class="toc-item-num">2 </span>PCA</a></div><div class="lev1 toc-item"><a href="#FastFourier-Transformation" data-toc-modified-id="FastFourier-Transformation-3"><span class="toc-item-num">3 </span>FastFourier Transformation</a></div><div class="lev1 toc-item"><a href="#Save-python-object-with-pickle" data-toc-modified-id="Save-python-object-with-pickle-4"><span class="toc-item-num">4 </span>Save python object with pickle</a></div><div class="lev1 toc-item"><a href="#Progress-Bar" data-toc-modified-id="Progress-Bar-5"><span class="toc-item-num">5 </span>Progress Bar</a></div><div class="lev1 toc-item"><a href="#Check-separations-by-histogram-and-scatter-plot" data-toc-modified-id="Check-separations-by-histogram-and-scatter-plot-6"><span class="toc-item-num">6 </span>Check separations by histogram and scatter plot</a></div><div class="lev1 toc-item"><a href="#Plot-Cumulative-Lift" data-toc-modified-id="Plot-Cumulative-Lift-7"><span class="toc-item-num">7 </span>Plot Cumulative Lift</a></div><div class="lev1 toc-item"><a href="#GBM-skitlearn" data-toc-modified-id="GBM-skitlearn-8"><span class="toc-item-num">8 </span>GBM skitlearn</a></div><div class="lev1 toc-item"><a href="#Xgboost" data-toc-modified-id="Xgboost-9"><span class="toc-item-num">9 </span>Xgboost</a></div><div class="lev1 toc-item"><a href="#LightGBM" data-toc-modified-id="LightGBM-10"><span class="toc-item-num">10 </span>LightGBM</a></div><div class="lev1 toc-item"><a href="#Control-plots
Step3: Rotations
Step5: PCA
Step8: FastFourier Transformation
Step9: Save python object with pickle
Step10: Progress Bar
There are many packages to create a progress bar in python, the one I use is tqdm
- tqdm
Step11: Check separations by histogram and scatter plot
Step13: Plot Cumulative Lift
Step15: GBM skitlearn
Step17: Xgboost
To install xgboost
Step18: LightGBM
New way to install the package
Step21: Control plots
Step25: Tuning parameters of a model
Grid search with skitlearn
Random search with skitlearn
Bayesian Optimization Search
https | Python Code:
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import xgboost as xgb
from sklearn.metrics import roc_curve, auc
from sklearn.metrics import precision_recall_curve
df = pd.read_csv("iris.csv")
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Rotations" data-toc-modified-id="Rotations-1"><span class="toc-item-num">1 </span>Rotations</a></div><div class="lev1 toc-item"><a href="#PCA" data-toc-modified-id="PCA-2"><span class="toc-item-num">2 </span>PCA</a></div><div class="lev1 toc-item"><a href="#FastFourier-Transformation" data-toc-modified-id="FastFourier-Transformation-3"><span class="toc-item-num">3 </span>FastFourier Transformation</a></div><div class="lev1 toc-item"><a href="#Save-python-object-with-pickle" data-toc-modified-id="Save-python-object-with-pickle-4"><span class="toc-item-num">4 </span>Save python object with pickle</a></div><div class="lev1 toc-item"><a href="#Progress-Bar" data-toc-modified-id="Progress-Bar-5"><span class="toc-item-num">5 </span>Progress Bar</a></div><div class="lev1 toc-item"><a href="#Check-separations-by-histogram-and-scatter-plot" data-toc-modified-id="Check-separations-by-histogram-and-scatter-plot-6"><span class="toc-item-num">6 </span>Check separations by histogram and scatter plot</a></div><div class="lev1 toc-item"><a href="#Plot-Cumulative-Lift" data-toc-modified-id="Plot-Cumulative-Lift-7"><span class="toc-item-num">7 </span>Plot Cumulative Lift</a></div><div class="lev1 toc-item"><a href="#GBM-skitlearn" data-toc-modified-id="GBM-skitlearn-8"><span class="toc-item-num">8 </span>GBM skitlearn</a></div><div class="lev1 toc-item"><a href="#Xgboost" data-toc-modified-id="Xgboost-9"><span class="toc-item-num">9 </span>Xgboost</a></div><div class="lev1 toc-item"><a href="#LightGBM" data-toc-modified-id="LightGBM-10"><span class="toc-item-num">10 </span>LightGBM</a></div><div class="lev1 toc-item"><a href="#Control-plots:-ROC,-Precision-Recall,-ConfusionMatrix,-top_k,-classification-report," data-toc-modified-id="Control-plots:-ROC,-Precision-Recall,-ConfusionMatrix,-top_k,-classification-report,-11"><span class="toc-item-num">11 </span>Control plots: ROC, Precision-Recall, ConfusionMatrix, top_k, classification report,</a></div><div class="lev1 toc-item"><a href="#Tuning-parameters-of-a-model" data-toc-modified-id="Tuning-parameters-of-a-model-12"><span class="toc-item-num">12 </span>Tuning parameters of a model</a></div><div class="lev2 toc-item"><a href="#Grid-search-with-skitlearn" data-toc-modified-id="Grid-search-with-skitlearn-121"><span class="toc-item-num">12.1 </span>Grid search with skitlearn</a></div><div class="lev2 toc-item"><a href="#Random-search-with-skitlearn" data-toc-modified-id="Random-search-with-skitlearn-122"><span class="toc-item-num">12.2 </span>Random search with skitlearn</a></div><div class="lev2 toc-item"><a href="#Bayesian-Optimization--Search" data-toc-modified-id="Bayesian-Optimization--Search-123"><span class="toc-item-num">12.3 </span>Bayesian Optimization Search</a></div>
End of explanation
def rotMat3D(a,r):
    """Return the matrix that rotates vector a onto vector r. numpy arrays are required."""
a = a/np.linalg.norm(a)
r = r/np.linalg.norm(r)
I = np.eye(3)
v = np.cross(a,r)
c = np.inner(a,r)
v_x = np.array([[0,-v[2],v[1]],[v[2],0,-v[0]],[-v[1],v[0],0]])
return I + v_x + np.matmul(v_x,v_x)/(1+c)
# example usage
z_old = np.array([0, 0, 1])
z = np.array([1, 1, 1])
R = rotMat3D(z, z_old)
print(z, R.dot(z))
print(z_old, R.dot(z_old))
print(np.linalg.norm(z), np.linalg.norm(R.dot(z)))
def createR2D(vector):
    """Return the 2D rotation matrix that rotates the given vector onto [0,1]; a numpy array is required."""
m = np.linalg.norm(vector)
c, s = vector[1]/m , vector[0]/m
R2 = np.array([c, -s, s, c]).reshape(2,2)
return R2
# example usage
y_old = np.array([3,4])
R2 = createR2D(y_old)
print(y_old, R2.dot(y_old))
Explanation: Rotations
End of explanation
from sklearn import decomposition
def pca_decomposition(df):
    """Perform sklearn PCA. The returned components are already ordered by the explained variance."""
pca = decomposition.PCA()
pca.fit(df)
return pca
def pca_stats(pca):
print("variance explained:\n", pca.explained_variance_ratio_)
print("pca components:\n", pca.components_)
def plot_classcolor(df, x='y', y='x', hue=None):
sns.lmplot(x, y, data=df, hue=hue, fit_reg=False)
sns.plt.title("({} vs {})".format(y, x))
plt.show()
def add_pca_to_df(df, allvars, pca):
df[["pca_" + str(i) for i, j in enumerate(pca.components_)
]] = pd.DataFrame(pca.fit_transform(df[allvars]))
pca = pca_decomposition( df[['sepal_length', 'sepal_width', 'petal_length', 'petal_width']] )
pca_stats(pca)
add_pca_to_df(df, ['sepal_length', 'sepal_width', 'petal_length', 'petal_width'], pca)
plot_classcolor(df, 'pca_0', 'pca_1', 'species_id')
Explanation: PCA
End of explanation
from scipy.fftpack import fft, rfft, irfft, fftfreq
def rfourier_transformation(df, var, pass_high=-1, pass_low=-1, verbose=True, plot=True):
    """Return the signal after the low- and high-pass filters have been applied.
    Use verbose and plot to print stats and plot the signal before and after the filter.
    """
low = pass_high
high = pass_low
if (high < low) and (high>0):
print("Cannot be pass_low < pass_high!!")
return -1
time = pd.Series(df.index.values[1:10] -
df.index.values[:10 - 1]) # using the first 10 data
dt = time.describe()['50%']
if (verbose):
        print(
            """sampling time: {0} s
            sampling frequency: {1} hz
            max freq in rfft: {2} hz""".format(dt, 1 / dt, 1 / (dt * 2), 1 / (dt)))
signal = df[var]
freq = fftfreq(signal.size, d=dt)
f_signal = rfft(signal)
m = {}
if (low > 0):
f_signal_lowcut = f_signal.copy()
f_signal_lowcut[(freq < low)] = 0
cutted_signal_low = irfft(f_signal_lowcut)
m['low'] = 1
if (high > 0):
f_signal_highcut = f_signal.copy()
f_signal_highcut[(freq > high)] = 0
cutted_signal_high = irfft(f_signal_highcut)
m['high'] = 1
if (high > 0) & (low > 0):
f_signal_bwcut = f_signal.copy()
f_signal_bwcut[(freq < low) | (freq > high)] = 0
cutted_signal_bw = irfft(f_signal_bwcut)
m['bw'] = 1
m['low'] = 2
m['high'] = 3
n = len(freq)
if (plot):
f, axarr = plt.subplots(len(m) + 1, 1, sharex=True, figsize=(18,15))
f.canvas.set_window_title(var)
# time plot
axarr[0].plot(signal)
axarr[0].set_title('Signal')
if 'bw' in m:
axarr[m['bw']].plot(df.index, cutted_signal_bw)
axarr[m['bw']].set_title('Signal after low-high cut')
if 'low' in m:
axarr[m['low']].plot(df.index, cutted_signal_low)
axarr[m['low']].set_title('Signal after high filter (low frequencies rejected)')
if 'high' in m:
axarr[m['high']].plot(df.index, cutted_signal_high)
axarr[m['high']].set_title('Signal after low filter (high frequencies rejected)')
plt.show()
# spectrum
f = plt.figure(figsize=(18,8))
plt.plot(freq[0:n // 2], f_signal[:n // 2])
f.suptitle('Frequency spectrum')
if 'low' in m:
plt.axvline(x=low, ymin=0., ymax=1, linewidth=2, color='red')
if 'high' in m:
plt.axvline(x=high, ymin=0., ymax=1, linewidth=2, color='red')
plt.show()
if 'bw' in m:
return cutted_signal_bw
elif 'low' in m:
return cutted_signal_low
elif 'high' in m:
return cutted_signal_high
else:
return signal
acc = pd.read_csv('accelerations.csv')
signal = rfourier_transformation(acc, 'x', pass_high=0.1, pass_low=0.5, verbose=True, plot=True)
Explanation: FastFourier Transformation
End of explanation
# save in pickle with gzip compression
import pickle
import gzip
def save(obj, filename, protocol=0):
file = gzip.GzipFile(filename, 'wb')
file.write(pickle.dumps(obj, protocol))
file.close()
def load(filename):
file = gzip.GzipFile(filename, 'rb')
    buffer = b""  # pickle data is binary, so accumulate bytes (not str) in Python 3
    while True:
        data = file.read()
        if not data:
break
buffer += data
obj = pickle.loads(buffer)
file.close()
return obj
Explanation: Save python object with pickle
End of explanation
# Simple bar, the one to be used in a general python code
import tqdm
import time
for i in tqdm.tqdm(range(0, 1000)):
pass
# Bar to be used in a jupyter notebook
for i in tqdm.tqdm_notebook(range(0, 1000)):
pass
# custom update bar
tot = 4000
bar = tqdm.tqdm_notebook(desc='Status ', total=tot, mininterval=0.5, miniters=5, unit='cm', unit_scale=True)
# with the file option you can send the progress bar to a file
# mininterval: time in seconds between visible updates of the progressbar. tqdm always gets updated in the background, but it will display only every mininterval.
# miniters: Tweak this and `mininterval` to get very efficient loops; if 0 it will only use mininterval
# unit_scale: use the international scale for the units (k, M, m, etc...)
# bar_format: specify the bar format, default is '{l_bar}{bar}{r_bar}'. It can impact performance if you ask for a complicated bar format
# unit_divisor: [default: 1000], ignored unless `unit_scale` is True
# ncols: The width of the entire output message. If specified, dynamically resizes the progressbar to stay within this bound.
for l in range(0, tot):
if ((l-1) % 10) == 0:
bar.update(10)
if l % 1000 == 0:
bar.write('to print something without duplicate the progress bar (if you are using tqdm.tqdm instead of tqdm.tqdm_notebook)')
print('or use the simple print if you are using tqdm.tqdm_notebook')
time.sleep(0.001)
# Some notes from the web page on how not to slow down your code while keeping the progress bar refreshing at the right frequency
# mininterval is more intuitive to configure than miniters.
# clever adjustment system dynamic_miniters will automatically adjust miniters to the amount of iterations that fit into time mininterval.
# Essentially, tqdm will check if it's time to print without actually checking time. This behavior can still be bypassed by manually setting miniters.
# However, consider a case with a combination of fast and slow iterations. After a few fast iterations, dynamic_miniters will set miniters to a large number.
# When the iteration rate subsequently slows, miniters will remain large and thus reduce display update frequency. To address this:
# maxinterval defines the maximum time between display refreshes. A concurrent monitoring thread checks for overdue updates and forces one where necessary.
# you can use tqdm as bash command too (e.g. for compression/decompression of a file, cut, sed, awk operations etc...)
!seq 9999999 | tqdm --unit_scale | wc -l
# use trange instead of range, it's faster with progressbar
for i in tqdm.trange(100):
pass
# use tqdm.tnrange instead of trange in jupyter notebook
for i in tqdm.tnrange(100):
pass
# change the prefix and postfix of the bar during executions
from random import random, randint
t = tqdm.trange(100)
for i in t:
# Description will be displayed on the left
t.set_description('GEN %i' % i)
# Postfix will be displayed on the right, and will format automatically based on argument's datatype
t.set_postfix(loss=random(), gen=randint(1,999), str='h', lst=[1, 2])
time.sleep(0.1)
# nested progress bar
for i in tqdm.tnrange(3, desc='first progressbar'):
for j in tqdm.tnrange(20, desc='\tsecond progressbar', leave=True):
        time.sleep(0.05)
# with this extension you can use tqdm_notebook().pandas(...) instead of tqdm.pandas(...)
from tqdm import tqdm_notebook
!jupyter nbextension enable --py --sys-prefix widgetsnbextension
# pandas apply & groupby operations with progressbar (tqdm state that it will not noticeably slow pandas down)
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randint(0, int(1e8), (100, 3)))
# Create and register a new `tqdm` instance with `pandas`
# (can use tqdm_gui, optional kwargs, etc.)
print('set tqdm_notebook for pandas, show the bar')
tqdm_notebook().pandas()
# Now you can use `progress_apply` instead of `apply`
print('example usage of progressbar in a groupby pandas statement')
df_g = df.groupby(0).progress_apply(lambda x: time.sleep(0.01))
print('example usage of progressbar in an apply pandas statement')
df_a = df.progress_apply(lambda x: time.sleep(0.01))
Explanation: Progress Bar
There are many packages to create a progress bar in python, the one I use is tqdm
- tqdm: https://pypi.python.org/pypi/tqdm
others are:
- progressbar: with each iterable (mainly for) https://pypi.python.org/pypi/progressbar2
- https://github.com/niltonvolpato/python-progressbar
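As a rough sketch of the progressbar2 alternative mentioned above (not used elsewhere in this notebook; the max_value keyword is the progressbar2 spelling):
import progressbar
bar = progressbar.ProgressBar(max_value=100)
for i in range(100):
    bar.update(i + 1)   # advance the bar one step per iteration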
End of explanation
def plot_classcolor(df, x='y', y='x', hue=None):
sns.lmplot(x, y, data=df, hue=hue, fit_reg=False)
sns.plt.title("({} vs {})".format(y, x))
plt.show()
plot_classcolor(df, 'sepal_length', 'sepal_width', hue='species')
def plot_histo_per_class(df, var, target):
t_list = df[target].unique()
for t in t_list:
sns.distplot(
df[df[target] == t][var], kde=False, norm_hist=True, label=str(t))
sns.plt.legend()
sns.plt.show()
plot_histo_per_class(df, 'sepal_length', "species_id")
Explanation: Check separations by histogram and scatter plot
End of explanation
def plotLift(df, features, target, ascending=False, multiclass_level=None):
    """Plot the Lift function for all the features.
    Ascending can be a list of the same feature length or a single boolean value.
    For the multiclass case you can give the value of a class and the lift is calculated
    considering the selected class vs all the others.
    """
    if multiclass_level is not None:
        df = df[features+[target]].copy()
        # recode the target as "selected class vs rest"; a boolean mask keeps the
        # two assignments from overwriting each other
        mask = df[target] == multiclass_level
        df.loc[mask, target] = 1
        df.loc[~mask, target] = 0
npoints = 100
n = len(df)
st = n / npoints
df_shuffled = df.sample(frac=1)
flat = np.array([[(i * st) / n, df_shuffled[0:int(i * st)][target].sum()]
for i in range(1, npoints + 1)])
flat = flat.transpose()
to_leg = []
if not isinstance(features, list):
features = [features]
if not isinstance(ascending, list):
ascending = [ascending for i in features]
for f, asc in zip(features, ascending):
a = df[[f, target]].sort_values(f, ascending=asc)
b = np.array([[(i * st) / n, a[0:int(i * st)][target].sum()]
for i in range(1, npoints + 1)])
b = b.transpose()
to_leg.append(plt.plot(b[0], b[1], label=f)[0])
to_leg.append(plt.plot(flat[0], flat[1], label="no_gain")[0])
plt.legend(handles=to_leg, loc=4)
plt.xlabel('faction of data', fontsize=18)
plt.ylabel(target+' (cumulative sum)', fontsize=16)
plt.show()
# Lift for regression
titanic = sns.load_dataset("titanic")
plotLift(titanic, ['sibsp', 'survived', 'class'], 'fare', ascending=[False,False, True])
# Lift plot example for multiclass
plotLift(
df, ['sepal_length', 'sepal_width', 'petal_length'],
'species_id',
ascending=[False, True, False],
multiclass_level=3)
Explanation: Plot Cumulative Lift
End of explanation
def plot_var_imp_skitlearn(features, clf_fit):
    """Plot the variable importances for a fitted scikit-learn model."""
my_ff = np.array(features)
importances = clf_fit.feature_importances_
indices = np.argsort(importances)
pos = np.arange(len(my_ff[indices])) + .5
plt.figure(figsize=(20, 0.75 * len(my_ff[indices])))
plt.barh(pos, importances[indices], align='center')
plt.yticks(pos, my_ff[indices], size=25)
plt.xlabel('rank')
plt.title('Feature importances', size=25)
plt.grid(True)
plt.show()
importance_dict = dict(zip(my_ff[indices], importances[indices]))
return importance_dict
Explanation: GBM skitlearn
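A minimal sketch of how a gradient-boosting model could be fitted and passed to the plot_var_imp_skitlearn helper above; the feature and target column names are the iris columns used elsewhere in this notebook, and the parameter values are only illustrative:
from sklearn.ensemble import GradientBoostingClassifier
features = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1, max_depth=3)
clf_fit = clf.fit(df[features], df['species_id'])
importance_dict = plot_var_imp_skitlearn(features, clf_fit)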
End of explanation
import xgboost
import operator  # needed for operator.itemgetter below
#### CORRECT VERSION USED AT WORK
def plot_var_imp_xgboost(model, mode='gain', ntop=-1):
    """Plot the variable importances for an xgboost model, where mode = ['weight','gain','cover']:
    'weight' - the number of times a feature is used to split the data across all trees.
    'gain' - the average gain of the feature when it is used in trees.
    'cover' - the average coverage of the feature when it is used in trees.
    """
importance = model.get_score(importance_type=mode)
importance = sorted(
importance.items(), key=operator.itemgetter(1), reverse=True)
if ntop == -1: ntop = len(importance)
importance = importance[0:ntop]
my_ff = np.array([i[0] for i in importance])
imp = np.array([i[1] for i in importance])
indices = np.argsort(imp)
pos = np.arange(len(my_ff[indices])) + .5
plt.figure(figsize=(20, 0.75 * len(my_ff[indices])))
plt.barh(pos, imp[indices], align='center')
plt.yticks(pos, my_ff[indices], size=25)
plt.xlabel('rank')
plt.title('Feature importances (' + mode + ')', size=25)
plt.grid(True)
plt.show()
return
Explanation: Xgboost
To install xgboost: https://github.com/dmlc/xgboost/tree/master/python-package
- conda install py-xgboost (pip install xgboost)
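A minimal sketch of training a booster that the plot_var_imp_xgboost helper above can inspect; X_train, y_train, X_test, y_test are placeholders for your own split and the parameter values are only illustrative:
xg_train = xgb.DMatrix(X_train, label=y_train)
xg_test = xgb.DMatrix(X_test, label=y_test)
params = {'eta': 0.1, 'max_depth': 5, 'objective': 'binary:logistic', 'eval_metric': 'auc', 'silent': 1}
model = xgb.train(params, xg_train, num_boost_round=100,
                  evals=[(xg_train, 'train'), (xg_test, 'test')],
                  early_stopping_rounds=15, verbose_eval=False)
plot_var_imp_xgboost(model, mode='gain')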
End of explanation
import lightgbm as lgb
Explanation: LightGBM
New way to install the package:
- pip install lightgbm
Old way to install the package
- build the package from https://github.com/Microsoft/LightGBM/wiki/Installation-Guide
- add the library to python using: python setup.py install inside the python package of the github clone
Problem: when you compile a package with the linux compiler and then you use it with the anaconda compiler you need to have the same compiler on both.
This cannot be the default and to fix this do:
- cd ~/anaconda3/lib
- mv -vf libstdc++.so.6 libstdc++.so.6.old
- ln -s /usr/lib/x86_64-linux-gnu/libstdc++.so.6 ./libstdc++.so.6
So now the shared library seen from anaconda is the same one that was used when the package was compiled.
These steps were necessary to install lightgbm
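A minimal sketch of how the imported lightgbm package could be used for a binary classifier; X_train, y_train, X_test, y_test are placeholders and the parameter values are only illustrative:
lgb_train = lgb.Dataset(X_train, label=y_train)
lgb_test = lgb.Dataset(X_test, label=y_test, reference=lgb_train)
params = {'objective': 'binary', 'metric': 'auc', 'learning_rate': 0.1, 'num_leaves': 31}
gbm = lgb.train(params, lgb_train, num_boost_round=100,
                valid_sets=[lgb_test], early_stopping_rounds=15, verbose_eval=False)
y_pred = gbm.predict(X_test, num_iteration=gbm.best_iteration)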
End of explanation
### CORRECTED VERSION USED AT WORK
def plot_ROC_PrecisionRecall(y_test, y_pred):
    """Plot the ROC curve and the Precision-Recall curve.
    numpy arrays are required.
    """
fpr_clf, tpr_clf, _ = roc_curve(y_test, y_pred)
precision, recall, thresholds = precision_recall_curve(y_test, y_pred)
f1 = np.array([2 * p * r / (p + r) for p, r in zip(precision, recall)])
f1[np.isnan(f1)] = 0
t_best_f1 = thresholds[np.argmax(f1)]
roc_auc = auc(fpr_clf, tpr_clf)
plt.figure(figsize=(25, 25))
# plot_ROC
plt.subplot(221)
plt.plot(
fpr_clf,
tpr_clf,
color='r',
lw=2,
label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='-')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
# plot_PrecisionRecall
plt.subplot(222)
plt.plot(
recall, precision, color='r', lw=2, label='Precision-Recall curve')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.title('Precison-Recall curve')
plt.legend(loc="lower right")
plt.show()
return {"roc_auc": roc_auc, "t_best_f1": t_best_f1}
def plot_ROC_PR_test_train(y_train, y_test, y_test_pred, y_train_pred):
    """Plot the ROC and Precision-Recall curves for test and train.
    Return the AUC for test and train.
    """
roc_auc_test = plot_ROC_PrecisionRecall(y_test, y_test_pred)
roc_auc_train = plot_ROC_PrecisionRecall(y_train, y_train_pred)
return roc_auc_test, roc_auc_train
Explanation: Control plots: ROC, Precision-Recall, ConfusionMatrix, top_k, classification report,
End of explanation
### Bayesian Optimization
# https://github.com/fmfn/BayesianOptimization
from bayes_opt import BayesianOptimization
def xgb_evaluate_gen(xg_train, xg_test, watchlist, num_rounds):
    """Create the function to be optimized (example for xgboost)."""
params = { 'eta': 0.1, 'objective':'binary:logistic','silent': 1, 'eval_metric': 'auc' }
def xgb_evaluate(min_child_weight,colsample_bytree,max_depth,subsample,gamma,alpha):
        """Return the value to be maximized by the Bayesian Optimization: the inputs are the
        parameters to be optimized and the output is the evaluation metric on the test set.
        """
params['min_child_weight'] = int(round(min_child_weight))
        params['colsample_bytree'] = max(min(colsample_bytree, 1), 0)
params['max_depth'] = int(round(max_depth))
params['subsample'] = max(min(subsample, 1), 0)
params['gamma'] = max(gamma, 0)
params['alpha'] = max(alpha, 0)
#cv_result = xgb.cv(params, xg_train, num_boost_round=num_rounds, nfold=5,
# seed=random_state, callbacks=[xgb.callback.early_stop(25)]
model_temp = xgb.train(params, dtrain=xg_train, num_boost_round=num_rounds,
evals=watchlist, early_stopping_rounds=15, verbose_eval=False)
# return -cv_result['test-merror-mean'].values[-1]
return float(str(model_temp.eval(xg_test)).split(":")[1][0:-1])
return xgb_evaluate
def go_with_BayesianOptimization(xg_train, xg_test, watchlist, num_rounds = 1,
num_iter = 10, init_points = 10, acq='ucb'):
    """Run the Bayesian Optimization for xgboost. acq = 'ucb', 'ei', 'poi'."""
xgb_func = xgb_evaluate_gen(xg_train, xg_test, watchlist, num_rounds)
xgbBO = BayesianOptimization(xgb_func, {'min_child_weight': (1, 50),
'colsample_bytree': (0.5, 1),
'max_depth': (5, 15),
'subsample': (0.5, 1),
'gamma': (0, 2),
'alpha': (0, 2),
})
xgbBO.maximize(init_points=init_points, n_iter=num_iter, acq=acq) # poi, ei, ucb
Explanation: Tuning parameters of a model
Grid search with skitlearn
Random search with skitlearn
Bayesian Optimization Search
https://github.com/fmfn/BayesianOptimization
use params, pars to pass the different parameters to the train function.
try to make it without xgboost or with other things in general or a simple example
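A minimal sklearn sketch of the grid and random searches named above (the estimator, the grids and the data split are only illustrative placeholders):
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.ensemble import GradientBoostingClassifier
param_grid = {'max_depth': [3, 5, 7], 'learning_rate': [0.05, 0.1]}
grid = GridSearchCV(GradientBoostingClassifier(), param_grid, scoring='roc_auc', cv=5)
grid.fit(X_train, y_train)
print(grid.best_params_, grid.best_score_)
rand = RandomizedSearchCV(GradientBoostingClassifier(), param_grid, n_iter=4, scoring='roc_auc', cv=5)
rand.fit(X_train, y_train)
print(rand.best_params_, rand.best_score_)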
End of explanation |
1,788 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
User Defined Functions
User defined functions make for neater and more efficient programming.
We have already made use of several library functions in the math, scipy and numpy libraries.
Step1: Link to What's in Scipy.constants
Step2: User Defined Functions
Here we'll practice writing our own functions.
Functions start with
python
def name(input)
Step3: Finding distance to the origin in cylindrical co-ordinates
Step4: Another Example
Step5: The reason these are useful are for things like the below, where you want to make the same calculation many times. This finds all the prime numbers (only divided by 1 and themselves) from 2 to 100. | Python Code:
import numpy as np
import scipy.constants as constants
print('Pi = ', constants.pi)
h = float(input("Enter the height of the tower (in metres): "))
t = float(input("Enter the time interval (in seconds): "))
s = constants.g*t**2/2
print("The height of the ball is",h-s,"meters")
Explanation: User Defined Functions
User defined functions make for neater and more efficient programming.
We have already made use of several library functions in the math, scipy and numpy libraries.
End of explanation
x = 4**0.5
print(x)
x = np.sqrt(4)
print(x)
Explanation: Link to What's in Scipy.constants: https://docs.scipy.org/doc/scipy/reference/constants.html
Library Functions in Maths
(and numpy)
End of explanation
def factorial(n):
f = 1.0
for k in range(1,n+1):
f *= k
return f
print("This programme calculates n!")
n = int(input("Enter n:"))
a = factorial(n)
print("n! = ", a)
Explanation: User Defined Functions
Here we'll practice writing our own functions.
Functions start with
python
def name(input):
and must end with a statement to return the value calculated
python
return x
To run a function your code would look like this:
```python
import numpy as np
def name(input)
```
FUNCTION CODE HERE
```python
return D
y=int(input("Enter y:"))
D = name(y)
print(D)
```
First - write a function to calculate n factorial. Reminder:
$n! = \prod_{k=1}^{n} k$
~
~
~
~
~
~
~
~
~
~
~
~
~
~
End of explanation
from math import sqrt, cos, sin
def distance(r,theta,z):
x = r*cos(theta)
y = r*sin(theta)
d = sqrt(x**2+y**2+z**2)
return d
D = distance(2.0,0.1,1.5)
print(D)
Explanation: Finding distance to the origin in cylindrical co-ordinates:
End of explanation
def factors(n):
factorlist=[]
k = 2
while k<=n:
while n%k==0:
        factorlist.append(k)
n //= k
k += 1
return factorlist
list=factors(12)
print(list)
print(factors(17556))
print(factors(23))
Explanation: Another Example: Prime Factors and Prime Numbers
Reminder: prime factors are the numbers which divide another number exactly.
Factors of the integer n can be found by dividing by all integers from 2 up to n and checking to see which remainders are zero.
Remainder in python calculated using
python
n % k
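For example, the remainder operator can be used to test divisibility directly:
print(15 % 4)       # 3, the remainder of 15 divided by 4
print(15 % 5 == 0)  # True, so 5 divides 15 exactly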
End of explanation
for n in range(2,100):
if len(factors(n))==1:
print(n)
Explanation: The reason these are useful are for things like the below, where you want to make the same calculation many times. This finds all the prime numbers (only divided by 1 and themselves) from 2 to 100.
End of explanation |
1,789 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this short tutorial, we will build and expand on the previous tutorials by computing the dynamic connectivity, using Time-Varying Functional Connectivity Graphs.
In the near future, the standard method of "sliding window" will be supported.
Load data
Step1: Dynamic connectivity
Prepare and configure the estimator object
Step2: Process condition "eyes open"
Step3: Process condition "eyes closed"
Step4: FCฮผstates / Clustering
Step5: Separate the encoded symbols based on their original groupings
Step6: Plot
Step7: Convert state prototypes to symmetric matrices and plot them
Step8: Separate symbols per subject
Now we would like to analyze the symbols per subject, per group.
Step11: Examine the first subject | Python Code:
import numpy as np
import tqdm
raw_eeg_eyes_open = np.load("data/eeg_eyes_opened.npy")
raw_eeg_eyes_closed = np.load("data/eeg_eyes_closed.npy")
num_trials, num_channels, num_samples = np.shape(raw_eeg_eyes_open)
read_trials = 10
eeg_eyes_open = raw_eeg_eyes_open[0:read_trials, ...]
eeg_eyes_closed = raw_eeg_eyes_closed[0:read_trials, ...]
Explanation: In this short tutorial, we will build and expand on the previous tutorials by computing the dynamic connectivity, using Time-Varying Functional Connectivity Graphs.
In the near future, the standard method of "sliding window" will be supported.
Load data
End of explanation
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
from dyconnmap import tvfcg
from dyconnmap.fc import IPLV
fb = [7.0, 13.0]
cc = 4.0
fs = 160.0
step = 80
estimator = IPLV(fb, fs)
Explanation: Dynamic connectivity
Prepare and configure the estimator object
End of explanation
X = np.squeeze(eeg_eyes_open[0])
fcgs = tvfcg(X, estimator, fb, fs, cc, step)
fcgs_eyes_open = np.array(np.real(fcgs))
for i in tqdm.tqdm(range(1, read_trials)):
X = np.squeeze(eeg_eyes_open[i])
fcgs = tvfcg(X, estimator, fb, fs, cc, step)
fcgs_eyes_open = np.vstack([fcgs_eyes_open, np.real(fcgs)])
Explanation: Process condition "eyes open"
End of explanation
X = np.squeeze(eeg_eyes_closed[0])
fcgs = tvfcg(X, estimator, fb, fs, cc, step)
fcgs_eyes_closed = np.array(np.real(fcgs))
for i in tqdm.tqdm(range(1, read_trials)):
X = np.squeeze(eeg_eyes_closed[i])
fcgs = tvfcg(X, estimator, fb, fs, cc, step)
fcgs_eyes_closed = np.vstack([fcgs_eyes_closed, np.real(fcgs)])
Explanation: Process condition "eyes closed"
End of explanation
from dyconnmap.cluster import NeuralGas
num_fcgs_eo, _, _ = np.shape(fcgs_eyes_open)
num_fcgs_ec, _, _ = np.shape(fcgs_eyes_closed)
fcgs = np.vstack([fcgs_eyes_open, fcgs_eyes_closed])
num_fcgs, num_channels, num_channels = np.shape(fcgs)
triu_ind = np.triu_indices_from(np.squeeze(fcgs[0, ...]), k=1)
fcgs = fcgs[:, triu_ind[0], triu_ind[1]]
rng = np.random.RandomState(0)
mdl = NeuralGas(n_protos=5, rng=rng).fit(fcgs)
encoding, symbols = mdl.encode(fcgs)
Explanation: FCฮผstates / Clustering
End of explanation
grp_dist_eo = symbols[0:num_fcgs_eo]
grp_dist_ec = symbols[num_fcgs_eo:]
Explanation: Separate the encoded symbols based on their original groupings
End of explanation
h_grp_dist_eo = np.histogram(grp_dist_eo, bins=mdl.n_protos, normed=True)
h_grp_dist_ec = np.histogram(grp_dist_ec, bins=mdl.n_protos, normed=True)
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(12, 6))
ind = np.arange(mdl.n_protos)
p1 = ax.bar(ind - 0.125, h_grp_dist_ec[0], 0.25, label='Eyes Closed')
p2 = ax.bar(ind + 0.125, h_grp_dist_eo[0], 0.25, label='Eyes Open')
ax.legend()
ax.set_xlabel('Symbol Index')
ax.set_ylabel('Hits %')
ax.set_xticks(np.arange(mdl.n_protos))
plt.show()
Explanation: Plot
End of explanation
protos_mtx = np.zeros((mdl.n_protos, 64, 64))
for i in range(mdl.n_protos):
symbol_state = np.zeros((64, 64))
symbol_state[triu_ind] = mdl.protos[i, :]
symbol_state = symbol_state + symbol_state.T
np.fill_diagonal(symbol_state, 1.0)
protos_mtx[i, :, :] = symbol_state
mtx_min = np.min(protos_mtx)
mtx_max = np.max(protos_mtx)
f, ax = plt.subplots(ncols=mdl.n_protos, figsize=(12, 12))
for i in range(mdl.n_protos):
cax = ax[i].imshow(np.squeeze(protos_mtx[i,...]), vmin=mtx_min, vmax=mtx_max, cmap=plt.cm.Spectral)
ax[i].set_title('#{0}'.format(i))
# move the colorbar to the side ;)
f.subplots_adjust(right=0.8)
cbar_ax = f.add_axes([0.82, 0.445, 0.0125, 0.115])
cb = f.colorbar(cax, cax=cbar_ax)
cb.set_label('Imaginary PLV')
Explanation: Convert state prototypes to symmetric matrices and plot them
End of explanation
grp_sym_eo = np.array_split(grp_dist_eo, 10, axis=0)
grp_sym_ec = np.array_split(grp_dist_ec, 10, axis=0)
Explanation: Separate symbols per subject
Now we would like to analyze the symbols per subject, per group.
End of explanation
subj1_eyes_open = grp_sym_eo[0]
subj1_eyes_closed = grp_sym_ec[0]
from dyconnmap.ts import markov_matrix
markov_matrix_eo = markov_matrix(subj1_eyes_open)
markov_matrix_ec = markov_matrix(subj1_eyes_closed)
from mpl_toolkits.axes_grid1 import ImageGrid
f = plt.figure(figsize=(8, 6))
grid = ImageGrid(f, 111,
nrows_ncols=(1,2),
axes_pad=0.15,
share_all=True,
cbar_location="right",
cbar_mode="single",
cbar_size="7%",
cbar_pad=0.15,
)
im = grid[0].imshow(markov_matrix_eo, vmin=0.0, vmax=1.0, cmap=plt.cm.Spectral)
grid[0].set_xlabel('Prototype')
grid[0].set_ylabel('Prototype')
grid[0].set_title('Eyes Open')
im = grid[1].imshow(markov_matrix_ec, vmin=0.0, vmax=1.0, cmap=plt.cm.Spectral)
grid[1].set_xlabel('Prototype')
grid[1].set_ylabel('Prototype')
grid[1].set_title('Eyes Close')
cb = grid[1].cax.colorbar(im)
cax = grid.cbar_axes[0]
axis = cax.axis[cax.orientation]
axis.label.set_text("Transition Probability")
plt.show()
from dyconnmap.ts import transition_rate, occupancy_time
tr_eo = transition_rate(subj1_eyes_open)
tr_ec = transition_rate(subj1_eyes_closed)
print(f"""
Transition rate
===============
Eyes open: {tr_eo:.3f}
Eyes closed: {tr_ec:.3f}
""")
occ_eo = occupancy_time(subj1_eyes_open)[0]
occ_ec = occupancy_time(subj1_eyes_closed)[0]
print("""
Occupancy time
==============
State \t 0 \t 1 \t 2 \t 3 \t 4
-----
Eyes open \t {0:.3f} \t {1:.3f} \t {2:.3f} \t {3:.3f} \t {4:.3f}
Eyes closed \t {5:.3f} \t {6:.3f} \t {7:.3f} \t {8:.3f} \t {9:.3f}
""".format(*occ_eo, *occ_ec))
Explanation: Examine the first subject
End of explanation |
1,790 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Random Variable and its Distribution
This notebook covers the following topics:
1. Random Variable
2. Bernoulli Distribution
3. Binomial Distribution
4. Poisson Distribution
5. Uniform Distribution
6. Exponential Distribution
7. Normal Distribution
Import the scientific-computing and plotting packages
Step1: 1. Random Variable
Definition: let the sample space of a random experiment be S = {e}. X = X(e) is a real-valued, single-valued function defined on the sample space S. Then X = X(e) is called a random variable.
Example: toss a coin three times and observe the pattern of heads (H) and tails (T). The sample space is
S = {HHH, HHT, HTH, THH, HTT, THT, TTH, TTT}.
Let X denote the total number of heads H obtained in the three tosses. Then, for the sample space S = {e} (we use e to denote an element of the sample space, and write the sample space as {e}), every sample point e has a number associated with it. X is a real-valued, single-valued function defined on the sample space S. Its domain is the sample space S and its range is the set of real numbers {0, 1, 2, 3}. In function notation X can be written as
$$ X = X(e) =\left\{
\begin{aligned}
3 & , e = HHH, \\
2 & , e = HHT, HTH, THH, \\
1 & , e = HTT, THT, TTH, \\
0 & , e = TTT.
\end{aligned}
\right.
$$
In many random experiments the outcome itself is a number, i.e. the sample point e itself is a number. If we set X = X(e) = e, then X is a random variable. For example, let Y be the number of absent workers in a workshop in one day, W the rainfall of a region in the first quarter, Z the power consumption of a factory in one day, and N the number of patients registered at a hospital in one day. Then Y, W, Z and N are all random variables.
Random variables are usually denoted by capital letters such as X, Y, Z, W, ..., while lowercase letters x, y, z, w, ... denote real numbers.
The value taken by a random variable depends on the outcome of the experiment, and since each outcome occurs with a certain probability, the values of the random variable also occur with certain probabilities. For example, in the example above, X taking the value 2, written {X = 2}, corresponds to the set of sample points A = {HHT, HTH, THH}; this is an event, and {X = 2} holds if and only if the event A occurs. We call the probability P(A) = P{HHT, HTH, THH} the probability of {X = 2}, i.e. P{X = 2} = P(A) = 3 / 8. From now on we will also refer to the event A = {HHT, HTH, THH} as the event {X = 2}. Similarly,
$$ P\{X \leq 1\} = P\{HTT, THT, TTH, TTT\} = \frac{1}{2} $$
In general, if L is a set of real numbers, the event that X takes a value in L is written {X ∈ L}. It denotes the event B = {e | X(e) ∈ L}, i.e. B is the event consisting of all sample points e in S for which X(e) ∈ L; in that case
$$ P\{X \in L \} = P(B) = P\{ e | X(e) \in L\} $$
1.1 Discrete Random Variable
Some random variables can take only finitely many, or countably infinitely many, values; such random variables are called discrete random variables.
It is easy to see that, to pin down the statistical behaviour of a discrete random variable X, it is necessary and sufficient to know all the possible values of X and the probability with which it takes each of them.
Let the possible values of the discrete random variable X be $x_k$ (k = 1, 2, ...), and let the probability that X takes each possible value, i.e. the probability of the event {X = $x_k$}, be
$$ P\{X = x_k \} = p_k, \quad k = 1,2, ... $$
By the definition of probability, the $p_k$ satisfy the following two conditions:
$$ p_k \geq 0, k = 1,2, ...; $$
$$ \begin{equation}
\sum_{k=1}^\infty p_k = 1
\end{equation}
$$
Condition two holds because $\{X = x_1\} \cup \{X = x_2\} \cup ... $ is a certain event and $\{X = x_k\} \cap \{X = x_j\} = \emptyset $ for $ k \neq j $, so
$ 1 = P[\bigcup_{k=1}^\infty \{X = x_k\}] = \sum_{k=1}^\infty P\{X = x_k\} $, i.e. $ \sum_{k=1}^\infty p_k = 1 $.
We call $ P\{X = x_k \} = p_k, k = 1,2, ... $ the distribution law of the discrete random variable X. The distribution law can also be written as a table
$$\begin{array}{rr} \hline
X &x_1 &x_2 &... &x_n &... \\ \hline
P_k &p_1 &p_2 &... &p_n &... \\ \hline
\end{array}$$
Written in the form of a function that gives the probability of each particular value of a discrete random variable, this function is called the probability mass function (pmf).
1.2 Distribution Function of a Random Variable
For a non-discrete random variable X the possible values cannot be listed one by one, so it cannot be described by a distribution law in the way a discrete random variable can. Moreover, for the non-discrete random variables we usually meet, the probability of taking any single specified real value equals 0. Furthermore, in practice we are not interested in the probability of such a variable taking one particular value, but rather in the probability that it falls in an interval $(x_1, x_2]$: $P\{x_1 < X \leq x_2 \}$. Since $ P\{x_1 < X \leq x_2 \} = P\{X \leq x_2\} - P\{X \leq x_1\} $, it is enough to know $ P\{X \leq x_2 \} $ and $ P\{X \leq x_1 \} $. This leads to the notion of the distribution function of a random variable.
Definition: let X be a random variable and x an arbitrary real number. The function $ F(x) = P\{X \leq x \}, -\infty < x < \infty $ is called the distribution function of X.
For arbitrary real numbers $x_1, x_2(x_1 < x_2)$ we have $P\{x_1 < X \leq x_2\} = P\{X \leq x_2\}-P\{X \leq x_1\} = F(x_2) - F(x_1)$. Therefore, if the distribution function of X is known, we know the probability that X falls in any interval $(x_1, x_2]$; in this sense the distribution function completely describes the statistical behaviour of the random variable.
The distribution function is an ordinary function, and it is precisely through it that we can study random variables with the tools of mathematical analysis.
If X is viewed as the coordinate of a random point on the number line, then the value of the distribution function F(x) at x is the probability that X falls in the interval $(-\infty, x]$.
The distribution function F(x) has the following basic properties:
F(x) is a non-decreasing function. Indeed, for arbitrary real numbers $x_1, x_2(x_1 < x_2)$,
$$F(x_2) - F(x_1) = P\{x_1 < X \leq x_2\} \geq 0$$
$0 \leq F(x) \leq 1$, and
$$F(-\infty) = \lim_{x \to -\infty} F(x) = 0, \quad F(\infty) = \lim_{x \to \infty} F(x) = 1$$
$F(x+0)=F(x)$, i.e. F(x) is right-continuous.
Conversely, it can be shown that any function F(x) with the above properties is the distribution function of some random variable.
1.3 Continuous Random Variable and its Probability Density
If for the distribution function F(x) of a random variable X there exists a non-negative integrable function f(x) such that for every real number x
$$ F(x) = \int_{-\infty}^x f(t)dt $$
then X is called a continuous random variable, and f(x) is called the probability density function of X, or simply the probability density.
From mathematical analysis we know that the distribution function of a continuous random variable is a continuous function.
By definition, the probability density f(x) has the following properties:
$f(x) \geq 0$;
$\int_{-\infty}^{\infty} f(x)dx = 1$;
for arbitrary real numbers $x_1, x_2(x_1 \leq x_2)$,
$$ P\{x_1 < X \leq x_2\} = F(x_2) - F(x_1) = \int_{x_1}^{x_2} f(x)dx $$
if f(x) is continuous at the point x, then $F'(x) = f(x)$.
Conversely, if f(x) satisfies properties 1 and 2, define $G(x) = \int_{-\infty}^x f(t)dt$; then G is the distribution function of some random variable X, and f(x) is the probability density of X.
Property 2 says that the area between the curve y=f(x) and the Ox axis equals 1, and property 3 says that the probability that X falls in the interval $(x_1, x_2]$, namely $P\{x_1 < X \leq x_2\}$, equals the area of the curvilinear trapezoid under the curve y=f(x) over the interval $(x_1, x_2]$.
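As an illustration of these two sections (scipy.stats is not used elsewhere in this notebook; it is shown here only to make the relation $P\{x_1 < X \leq x_2\} = F(x_2) - F(x_1) = \int_{x_1}^{x_2} f(x)dx$ concrete for a standard normal variable):
from scipy import stats, integrate
x1, x2 = -1.0, 1.0
cdf_diff = stats.norm.cdf(x2) - stats.norm.cdf(x1)        # F(x2) - F(x1)
pdf_integral, _ = integrate.quad(stats.norm.pdf, x1, x2)  # integral of f over (x1, x2]
print(cdf_diff, pdf_integral)                             # both approximately 0.6827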
2. Bernoulli Distribution
The Bernoulli distribution is also called the (0 - 1) distribution.
Suppose the random variable X can take only the two values 0 and 1, and its distribution law is
$$ P\{X=k\} = p^k(1-p)^{1-k}, k=0,1 \ (0 < p < 1) $$
Then X is said to follow the (0 - 1) distribution, or two-point distribution, with parameter p.
The distribution law of the (0 - 1) distribution can also be written as
$$\begin{array}{rr} \hline
X &0 &1 \\ \hline
P_k &1-p &p \\ \hline
\end{array}$$
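A quick way to sample from a (0 - 1) distribution is to use the binomial sampler introduced in the next section with n = 1 (this snippet is only an illustration):
import numpy as np
p = 0.3
samples = np.random.binomial(1, p, 10000)  # Bernoulli(p) = binomial with n = 1
print(samples.mean())                      # close to p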
3. Binomial Distribution
Suppose the experiment E has only two possible outcomes, $A$ and $\overline{A}$; then E is called a Bernoulli trial. Let $P(A)=p(0<p<1)$, so that $P(\overline{A})=1-p$. Repeating E independently n times is called a sequence of n Bernoulli trials.
Here "repeated" means that $P(A)=p$ stays the same in every trial, and "independent" means that the outcomes of the trials do not affect each other: if $C_i$ denotes the outcome of the i-th trial, with $C_i$ equal to $A$ or $\overline{A}$, i=1,2,...,n, then "independent" means
$$ P(C_{1}C_{2}...C_{n}) = P(C_1)P(C_2)...P(C_n) $$
Let X denote the number of times the event A occurs in n Bernoulli trials. X is a random variable whose possible values are 0, 1, 2, ..., n. Since the trials are mutually independent, the probability that A occurs in a specified set of $k(0\leq k \leq n)$ trials and does not occur in the other n - k trials is
$$ \underbrace{\left({p \cdot p \cdot ... \cdot p}\right)}_k \cdot \underbrace{\left({(1-p) \cdot (1-p) \cdot ... \cdot (1-p)}\right)}_{n-k} = p^{k}(1-p)^{n-k}$$
There are $\binom{n}{k}$ such ways of specifying the trials, and they are pairwise mutually exclusive, so the probability that A occurs k times in n trials is $\binom{n}{k}p^{k}(1-p)^{n-k}$. Writing $q=1-p$, we have
$$ P\{X=k\} = \binom{n}{k}p^{k}q^{n-k}, k=0,1,2,..,n $$
We say that the random variable X follows the binomial distribution with parameters n and p, written $X \sim b(n, p)$.
In particular, when n=1 the binomial distribution reduces to $P\{X=k\}=p^{k}q^{1-k}, k=0,1$, which is exactly the (0 - 1) distribution.
numpy.random.binomial can draw samples from the binomial distribution:
Step2: A real-world example. A drilling company explores nine oil wells, each with an estimated probability of success of 0.1. What is the probability that all nine wells fail?
According to the formula: $n = 9, p = 0.1, P\{X = 0\} = \binom{9}{0} \cdot 0.1^{0} \cdot 0.9^{9} \approx 0.3874$
We run 20,000 trials of the model and compute the fraction that yield 0 successes:
Step3: Increasing the number of trials simulates a result that is even closer to the exact value.
4. Poisson Distribution
Suppose the random variable X can take the values 0, 1, 2, ..., and the probability of each value is
$$P\{X=k\} = \frac{\lambda^ke^{-\lambda}}{k!}, k=0,1,2,...,$$
where $\lambda > 0$ is a constant; then $X$ is said to follow the Poisson distribution with parameter $\lambda$, written $X \sim \pi(\lambda)$.
Clearly $P\{X=k\}\geq0, k=0,1,2,...$, and
$$ \sum_{k=0}^\infty P\{X=k\} = \sum_{k=0}^\infty \frac{\lambda^{k}e^{-\lambda}}{k!} = e^{-\lambda}\sum_{k=0}^\infty \frac{\lambda^k}{k!} = e^{-\lambda} \cdot e^{\lambda} = 1 $$
Random variables with a Poisson distribution are very common in applications. For example, the number of misprints on one page of a book, the number of letters lost in the mail in a district in one day, the number of emergency patients arriving at a hospital in one day, the number of traffic accidents in a region during a time interval, and the number of $\alpha$ particles emitted by a radioactive substance and registered by a counter during a time interval all follow Poisson distributions.
numpy.random.poisson can draw samples from the Poisson distribution:
Step4: 5. Uniform Distribution
If the continuous random variable X has probability density
$$ f(x) =\left\{
\begin{aligned}
& \frac{1}{b-a}, & a < x < b, \\
& 0, & \text{otherwise} \\
\end{aligned}
\right.
$$
then X is said to be uniformly distributed on the interval (a, b), written $X \sim U(a, b)$.
numpy.random.uniform can draw samples from the uniform distribution:
Step5: 6. Exponential Distribution
If the continuous random variable X has probability density
$$ f(x) =\left\{
\begin{aligned}
& \frac{1}{\theta}e^{-\frac{x}{\theta}}, & x > 0, \\
& 0, & \text{otherwise} \\
\end{aligned}
\right.
$$
where $\theta > 0$ is a constant, then X is said to follow the exponential distribution with parameter $\theta$.
numpy.random.exponential can draw samples from the exponential distribution:
Step6: 7. Normal Distribution
If the probability density of the continuous random variable X is
$$ f(x) = \frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^2}{2\sigma^2}}, -\infty < x < \infty $$
where $\mu, \sigma(\sigma > 0)$ are constants, then X is said to follow the normal (or Gaussian) distribution with parameters $\mu, \sigma$, written $X \sim N(\mu, \sigma^2)$.
The graph of f(x) has the following properties:
The curve is symmetric about $x = \mu$. This means that for any $h > 0$
$$ P\{\mu - h < X \leq \mu \} = P\{\mu < X \leq \mu + h\} $$
The maximum value is attained at $x = \mu$:
$$ f(\mu) = \frac{1}{\sqrt{2\pi}\sigma} $$
The farther x is from $\mu$, the smaller f(x) is. This means that for intervals of the same length, the farther the interval is from $\mu$, the smaller the probability that X falls in it.
The curve has inflection points at $x = \mu \pm \sigma$, and the Ox axis is its asymptote.
If $\sigma$ is fixed and $\mu$ is changed, the graph shifts along the Ox axis without changing its shape; the position of the normal density curve y=f(x) is completely determined by the parameter $\mu$, which is therefore called the location parameter.
If $\mu$ is fixed and $\sigma$ is changed, then since the maximum value is $f(\mu)=\frac{1}{\sqrt{2\pi}\sigma}$, the smaller $\sigma$ is, the sharper the graph becomes, because X falls near $\mu$ with higher probability.
In particular, when $\mu = 0, \sigma = 1$, X is said to follow the standard normal distribution. Its probability density and distribution function are denoted $\varphi(x)$ and $\Phi(x)$ respectively:
$$ \varphi(x) = \frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}} $$
$$ \Phi(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x} e^{-\frac{t^2}{2}}dt $$
It is easy to see that
$$ \Phi(-x) = 1 - \Phi(x) $$
numpy.random.normal can draw samples from the normal distribution: | Python Code:
import math
import numpy as np
import pandas as pd
from pandas import Series, DataFrame
# import the plotting packages
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('whitegrid')
%matplotlib inline
Explanation: Random Variable and its Distribution
This notebook covers the following topics:
1. Random Variable
2. Bernoulli Distribution
3. Binomial Distribution
4. Poisson Distribution
5. Uniform Distribution
6. Exponential Distribution
7. Normal Distribution
Import the scientific-computing and plotting packages
End of explanation
# flip a coin 10 times and count the number of heads; repeat 100 times
n, p = 10, .5
np.random.binomial(n, p, 100)
Explanation: 1. Random Variable
Definition: let the sample space of a random experiment be S = {e}. X = X(e) is a real-valued, single-valued function defined on the sample space S. Then X = X(e) is called a random variable.
Example: toss a coin three times and observe the pattern of heads (H) and tails (T). The sample space is
S = {HHH, HHT, HTH, THH, HTT, THT, TTH, TTT}.
Let X denote the total number of heads H obtained in the three tosses. Then, for the sample space S = {e} (we use e to denote an element of the sample space, and write the sample space as {e}), every sample point e has a number associated with it. X is a real-valued, single-valued function defined on the sample space S. Its domain is the sample space S and its range is the set of real numbers {0, 1, 2, 3}. In function notation X can be written as
$$ X = X(e) =\left\{
\begin{aligned}
3 & , e = HHH, \\
2 & , e = HHT, HTH, THH, \\
1 & , e = HTT, THT, TTH, \\
0 & , e = TTT.
\end{aligned}
\right.
$$
In many random experiments the outcome itself is a number, i.e. the sample point e itself is a number. If we set X = X(e) = e, then X is a random variable. For example, let Y be the number of absent workers in a workshop in one day, W the rainfall of a region in the first quarter, Z the power consumption of a factory in one day, and N the number of patients registered at a hospital in one day. Then Y, W, Z and N are all random variables.
Random variables are usually denoted by capital letters such as X, Y, Z, W, ..., while lowercase letters x, y, z, w, ... denote real numbers.
The value taken by a random variable depends on the outcome of the experiment, and since each outcome occurs with a certain probability, the values of the random variable also occur with certain probabilities. For example, in the example above, X taking the value 2, written {X = 2}, corresponds to the set of sample points A = {HHT, HTH, THH}; this is an event, and {X = 2} holds if and only if the event A occurs. We call the probability P(A) = P{HHT, HTH, THH} the probability of {X = 2}, i.e. P{X = 2} = P(A) = 3 / 8. From now on we will also refer to the event A = {HHT, HTH, THH} as the event {X = 2}. Similarly,
$$ P\{X \leq 1\} = P\{HTT, THT, TTH, TTT\} = \frac{1}{2} $$
In general, if L is a set of real numbers, the event that X takes a value in L is written {X ∈ L}. It denotes the event B = {e | X(e) ∈ L}, i.e. B is the event consisting of all sample points e in S for which X(e) ∈ L; in that case
$$ P\{X \in L \} = P(B) = P\{ e | X(e) \in L\} $$
1.1 Discrete Random Variable
Some random variables can take only finitely many, or countably infinitely many, values; such random variables are called discrete random variables.
It is easy to see that, to pin down the statistical behaviour of a discrete random variable X, it is necessary and sufficient to know all the possible values of X and the probability with which it takes each of them.
Let the possible values of the discrete random variable X be $x_k$ (k = 1, 2, ...), and let the probability that X takes each possible value, i.e. the probability of the event {X = $x_k$}, be
$$ P\{X = x_k \} = p_k, \quad k = 1,2, ... $$
By the definition of probability, the $p_k$ satisfy the following two conditions:
$$ p_k \geq 0, k = 1,2, ...; $$
$$ \begin{equation}
\sum_{k=1}^\infty p_k = 1
\end{equation}
$$
Condition two holds because $\{X = x_1\} \cup \{X = x_2\} \cup ... $ is a certain event and $\{X = x_k\} \cap \{X = x_j\} = \emptyset $ for $ k \neq j $, so
$ 1 = P[\bigcup_{k=1}^\infty \{X = x_k\}] = \sum_{k=1}^\infty P\{X = x_k\} $, i.e. $ \sum_{k=1}^\infty p_k = 1 $.
We call $ P\{X = x_k \} = p_k, k = 1,2, ... $ the distribution law of the discrete random variable X. The distribution law can also be written as a table
$$\begin{array}{rr} \hline
X &x_1 &x_2 &... &x_n &... \\ \hline
P_k &p_1 &p_2 &... &p_n &... \\ \hline
\end{array}$$
Written in the form of a function that gives the probability of each particular value of a discrete random variable, this function is called the probability mass function (pmf).
1.2 Distribution Function of a Random Variable
For a non-discrete random variable X the possible values cannot be listed one by one, so it cannot be described by a distribution law in the way a discrete random variable can. Moreover, for the non-discrete random variables we usually meet, the probability of taking any single specified real value equals 0. Furthermore, in practice we are not interested in the probability of such a variable taking one particular value, but rather in the probability that it falls in an interval $(x_1, x_2]$: $P\{x_1 < X \leq x_2 \}$. Since $ P\{x_1 < X \leq x_2 \} = P\{X \leq x_2\} - P\{X \leq x_1\} $, it is enough to know $ P\{X \leq x_2 \} $ and $ P\{X \leq x_1 \} $. This leads to the notion of the distribution function of a random variable.
Definition: let X be a random variable and x an arbitrary real number. The function $ F(x) = P\{X \leq x \}, -\infty < x < \infty $ is called the distribution function of X.
For arbitrary real numbers $x_1, x_2(x_1 < x_2)$ we have $P\{x_1 < X \leq x_2\} = P\{X \leq x_2\}-P\{X \leq x_1\} = F(x_2) - F(x_1)$. Therefore, if the distribution function of X is known, we know the probability that X falls in any interval $(x_1, x_2]$; in this sense the distribution function completely describes the statistical behaviour of the random variable.
The distribution function is an ordinary function, and it is precisely through it that we can study random variables with the tools of mathematical analysis.
If X is viewed as the coordinate of a random point on the number line, then the value of the distribution function F(x) at x is the probability that X falls in the interval $(-\infty, x]$.
The distribution function F(x) has the following basic properties:
F(x) is a non-decreasing function. Indeed, for arbitrary real numbers $x_1, x_2(x_1 < x_2)$,
$$F(x_2) - F(x_1) = P\{x_1 < X \leq x_2\} \geq 0$$
$0 \leq F(x) \leq 1$, and
$$F(-\infty) = \lim_{x \to -\infty} F(x) = 0, \quad F(\infty) = \lim_{x \to \infty} F(x) = 1$$
$F(x+0)=F(x)$, i.e. F(x) is right-continuous.
Conversely, it can be shown that any function F(x) with the above properties is the distribution function of some random variable.
1.3 Continuous Random Variable and its Probability Density
If for the distribution function F(x) of a random variable X there exists a non-negative integrable function f(x) such that for every real number x
$$ F(x) = \int_{-\infty}^x f(t)dt $$
then X is called a continuous random variable, and f(x) is called the probability density function of X, or simply the probability density.
From mathematical analysis we know that the distribution function of a continuous random variable is a continuous function.
By definition, the probability density f(x) has the following properties:
$f(x) \geq 0$;
$\int_{-\infty}^{\infty} f(x)dx = 1$;
for arbitrary real numbers $x_1, x_2(x_1 \leq x_2)$,
$$ P\{x_1 < X \leq x_2\} = F(x_2) - F(x_1) = \int_{x_1}^{x_2} f(x)dx $$
if f(x) is continuous at the point x, then $F'(x) = f(x)$.
Conversely, if f(x) satisfies properties 1 and 2, define $G(x) = \int_{-\infty}^x f(t)dt$; then G is the distribution function of some random variable X, and f(x) is the probability density of X.
Property 2 says that the area between the curve y=f(x) and the Ox axis equals 1, and property 3 says that the probability that X falls in the interval $(x_1, x_2]$, namely $P\{x_1 < X \leq x_2\}$, equals the area of the curvilinear trapezoid under the curve y=f(x) over the interval $(x_1, x_2]$.
2. Bernoulli Distribution
The Bernoulli distribution is also called the (0 - 1) distribution.
Suppose the random variable X can take only the two values 0 and 1, and its distribution law is
$$ P\{X=k\} = p^k(1-p)^{1-k}, k=0,1 \ (0 < p < 1) $$
Then X is said to follow the (0 - 1) distribution, or two-point distribution, with parameter p.
The distribution law of the (0 - 1) distribution can also be written as
$$\begin{array}{rr} \hline
X &0 &1 \\ \hline
P_k &1-p &p \\ \hline
\end{array}$$
3. Binomial Distribution
Suppose the experiment E has only two possible outcomes, $A$ and $\overline{A}$; then E is called a Bernoulli trial. Let $P(A)=p(0<p<1)$, so that $P(\overline{A})=1-p$. Repeating E independently n times is called a sequence of n Bernoulli trials.
Here "repeated" means that $P(A)=p$ stays the same in every trial, and "independent" means that the outcomes of the trials do not affect each other: if $C_i$ denotes the outcome of the i-th trial, with $C_i$ equal to $A$ or $\overline{A}$, i=1,2,...,n, then "independent" means
$$ P(C_{1}C_{2}...C_{n}) = P(C_1)P(C_2)...P(C_n) $$
Let X denote the number of times the event A occurs in n Bernoulli trials. X is a random variable whose possible values are 0, 1, 2, ..., n. Since the trials are mutually independent, the probability that A occurs in a specified set of $k(0\leq k \leq n)$ trials and does not occur in the other n - k trials is
$$ \underbrace{\left({p \cdot p \cdot ... \cdot p}\right)}_k \cdot \underbrace{\left({(1-p) \cdot (1-p) \cdot ... \cdot (1-p)}\right)}_{n-k} = p^{k}(1-p)^{n-k}$$
There are $\binom{n}{k}$ such ways of specifying the trials, and they are pairwise mutually exclusive, so the probability that A occurs k times in n trials is $\binom{n}{k}p^{k}(1-p)^{n-k}$. Writing $q=1-p$, we have
$$ P\{X=k\} = \binom{n}{k}p^{k}q^{n-k}, k=0,1,2,..,n $$
We say that the random variable X follows the binomial distribution with parameters n and p, written $X \sim b(n, p)$.
In particular, when n=1 the binomial distribution reduces to $P\{X=k\}=p^{k}q^{1-k}, k=0,1$, which is exactly the (0 - 1) distribution.
numpy.random.binomial can draw samples from the binomial distribution:
End of explanation
sum(np.random.binomial(9, 0.1, 20000) == 0) / 20000
Explanation: A real-world example. A drilling company explores nine oil wells, each with an estimated probability of success of 0.1. What is the probability that all nine wells fail?
According to the formula: $n = 9, p = 0.1, P\{X = 0\} = \binom{9}{0} \cdot 0.1^{0} \cdot 0.9^{9} \approx 0.3874$
We run 20,000 trials of the model and compute the fraction that yield 0 successes:
End of explanation
lb = 5
s = np.random.poisson(lb, 10000)
count, bins, ignored = plt.hist(s, 14, normed=True)
Explanation: Increasing the number of trials simulates a result that is even closer to the exact value.
4. Poisson Distribution
Suppose the random variable X can take the values 0, 1, 2, ..., and the probability of each value is
$$P\{X=k\} = \frac{\lambda^ke^{-\lambda}}{k!}, k=0,1,2,...,$$
where $\lambda > 0$ is a constant; then $X$ is said to follow the Poisson distribution with parameter $\lambda$, written $X \sim \pi(\lambda)$.
Clearly $P\{X=k\}\geq0, k=0,1,2,...$, and
$$ \sum_{k=0}^\infty P\{X=k\} = \sum_{k=0}^\infty \frac{\lambda^{k}e^{-\lambda}}{k!} = e^{-\lambda}\sum_{k=0}^\infty \frac{\lambda^k}{k!} = e^{-\lambda} \cdot e^{\lambda} = 1 $$
Random variables with a Poisson distribution are very common in applications. For example, the number of misprints on one page of a book, the number of letters lost in the mail in a district in one day, the number of emergency patients arriving at a hospital in one day, the number of traffic accidents in a region during a time interval, and the number of $\alpha$ particles emitted by a radioactive substance and registered by a counter during a time interval all follow Poisson distributions.
numpy.random.poisson can draw samples from the Poisson distribution:
End of explanation
# take a = -1, b = 0, and 10000 samples
a, b = -1, 0
s = np.random.uniform(a, b, 10000)
# all sample values are greater than or equal to a
np.all(s >= a)
# all sample values are smaller than b
np.all(s < b)
# plot the sample histogram and the density function
count, bins, ignored = plt.hist(s, 15, normed=True)
plt.plot(bins, np.ones_like(bins) / (b - a), linewidth=2, color='r')
plt.show()
Explanation: 5. Uniform Distribution
If the continuous random variable X has probability density
$$ f(x) =\left\{
\begin{aligned}
& \frac{1}{b-a}, & a < x < b, \\
& 0, & \text{otherwise} \\
\end{aligned}
\right.
$$
then X is said to be uniformly distributed on the interval (a, b), written $X \sim U(a, b)$.
numpy.random.uniform can draw samples from the uniform distribution:
End of explanation
# take theta = 1, plot the sample histogram and the density function
theta = 1
f = lambda x: math.e ** (-x / theta) / theta
s = np.random.exponential(theta, 10000)
count, bins, ignored = plt.hist(s, 100, normed=True)
plt.plot(bins, f(bins), linewidth=2, color='r')
plt.show()
Explanation: 6. Exponential Distribution
If the continuous random variable X has probability density
$$ f(x) =\left\{
\begin{aligned}
& \frac{1}{\theta}e^{-\frac{x}{\theta}}, & x > 0, \\
& 0, & \text{otherwise} \\
\end{aligned}
\right.
$$
where $\theta > 0$ is a constant, then X is said to follow the exponential distribution with parameter $\theta$.
numpy.random.exponential can draw samples from the exponential distribution:
End of explanation
# take mean 0 and standard deviation 0.1
mu, sigma = 0, 0.1
s = np.random.normal(mu, sigma, 1000)
# check the sample mean
abs(mu - np.mean(s)) < 0.01
# check the sample standard deviation
abs(sigma - np.std(s, ddof=1)) < 0.01
# plot the sample histogram and the density function
count, bins, ignored = plt.hist(s, 30, normed=True)
plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) * np.exp( - (bins - mu)**2 / (2 * sigma**2) ), linewidth=2, color='r')
plt.show()
Explanation: 7. Normal Distribution
If the probability density of the continuous random variable X is
$$ f(x) = \frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^2}{2\sigma^2}}, -\infty < x < \infty $$
where $\mu, \sigma(\sigma > 0)$ are constants, then X is said to follow the normal (or Gaussian) distribution with parameters $\mu, \sigma$, written $X \sim N(\mu, \sigma^2)$.
The graph of f(x) has the following properties:
The curve is symmetric about $x = \mu$. This means that for any $h > 0$
$$ P\{\mu - h < X \leq \mu \} = P\{\mu < X \leq \mu + h\} $$
The maximum value is attained at $x = \mu$:
$$ f(\mu) = \frac{1}{\sqrt{2\pi}\sigma} $$
The farther x is from $\mu$, the smaller f(x) is. This means that for intervals of the same length, the farther the interval is from $\mu$, the smaller the probability that X falls in it.
The curve has inflection points at $x = \mu \pm \sigma$, and the Ox axis is its asymptote.
If $\sigma$ is fixed and $\mu$ is changed, the graph shifts along the Ox axis without changing its shape; the position of the normal density curve y=f(x) is completely determined by the parameter $\mu$, which is therefore called the location parameter.
If $\mu$ is fixed and $\sigma$ is changed, then since the maximum value is $f(\mu)=\frac{1}{\sqrt{2\pi}\sigma}$, the smaller $\sigma$ is, the sharper the graph becomes, because X falls near $\mu$ with higher probability.
In particular, when $\mu = 0, \sigma = 1$, X is said to follow the standard normal distribution. Its probability density and distribution function are denoted $\varphi(x)$ and $\Phi(x)$ respectively:
$$ \varphi(x) = \frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}} $$
$$ \Phi(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x} e^{-\frac{t^2}{2}}dt $$
It is easy to see that
$$ \Phi(-x) = 1 - \Phi(x) $$
numpy.random.normal can draw samples from the normal distribution:
End of explanation |
1,791 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
use spearman correlation between OTUs and 5 VitD variables (with BH FDR corrected p-val <= 0.05 as threshold)
use lasso regression on all OTUs vs. 5 VitD variables (need Cross-validation to choose tuning parameter)
Step1: Spearman correlation
merge biomtable with mapping file (output
Step2: correlation
Step3: OHVD3
Step4: OHV1D3
Step5: OHV24D3
Step6: ratio_activation
Step7: ratio_catabolism
Step8: Lasso Regression
reference on lasso
Step9: OHVD3
Step10: OHV1D3
Step11: OHV24D3
Step12: ratio_activation
Step13: ratio_catabolism | Python Code:
import warnings
warnings.filterwarnings("ignore")
import pandas as pd
import numpy as np
from scipy.stats import spearmanr, pearsonr
from statsmodels.sandbox.stats.multicomp import multipletests
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LassoLarsCV
from sklearn.preprocessing import StandardScaler
import matplotlib.pylab as plt
import seaborn as sns
%matplotlib inline
Explanation: use spearman correlation between OTUs and 5 VitD variables (with BH FDR corrected p-val <= 0.05 as threshold)
use lasso regression on all OTUs vs. 5 VitD variables (need Cross-validation to choose tuning parameter)
End of explanation
mf = pd.read_csv('../data/mapping_cleaned_MrOS.txt', sep='\t', dtype=str, index_col='#SampleID')
bt = pd.read_csv('../data/biomtable.txt', sep='\t', dtype=str, index_col='#OTU ID')
bt = bt.transpose()
print(mf.shape, bt.shape) # bt has an additional row of 'taxonomy'
mf.head()
bt.head()
dat = pd.merge(mf, bt, left_index=True, right_index=True)
dat.shape
dat.head()
vars_vd = np.array(['OHVD3', 'OHV1D3', 'OHV24D3', 'ratio_activation', 'ratio_catabolism'])
dat[vars_vd] = dat[vars_vd].apply(pd.to_numeric, errors='coerce')
dat[vars_vd].describe()
Explanation: Spearman correlation
merge biomtable with mapping file (output: dat)
End of explanation
otu_cols = dat.columns[mf.shape[1]:dat.shape[1]]
len(otu_cols)
Explanation: correlation
End of explanation
results= []
i = 0
for j in range(len(otu_cols)):
tmp = dat[[vars_vd[i], otu_cols[j]]].dropna(axis=0, how='any')
rho, pval = spearmanr(tmp[vars_vd[i]], tmp[otu_cols[j]])
tax = bt.loc['taxonomy'][otu_cols[j]]
results.append([vars_vd[i], otu_cols[j], tax, rho, pval])
# output table
results = pd.DataFrame(results, columns=['vars', 'otu',
'taxonomy', 'rho', 'pval']).dropna(axis=0, how='any')
results['fdr pval'] = multipletests(results['pval'], method = 'fdr_bh')[1]
results = results.sort_values(['fdr pval'], ascending=True)
# specific bacteria
index = results.loc[results['fdr pval'] <= 0.05].index
for i in range(len(index)):
print(results.taxonomy[index[i]], results['fdr pval'][index[i]])
# check
results.head(5)
Explanation: OHVD3
End of explanation
results_spearman_OHV1D3 = []
i = 1
for j in range(len(otu_cols)):
tmp = dat[[vars_vd[i], otu_cols[j]]].dropna(axis=0, how='any')
rho, pval = spearmanr(tmp[vars_vd[i]], tmp[otu_cols[j]])
tax = bt.loc['taxonomy'][otu_cols[j]]
results_spearman_OHV1D3.append([vars_vd[i], otu_cols[j], tax, rho, pval])
# output table
results_spearman_OHV1D3 = pd.DataFrame(results_spearman_OHV1D3, columns=['vars', 'otu', 'taxonomy', 'rho',
'pval']).dropna(axis=0, how='any')
results_spearman_OHV1D3['fdr pval'] = multipletests(results_spearman_OHV1D3['pval'], method = 'fdr_bh')[1]
results_spearman_OHV1D3 = results_spearman_OHV1D3.sort_values(['fdr pval'], ascending=True)
# specific bacteria
index_OHV1D3 = results_spearman_OHV1D3.loc[results_spearman_OHV1D3['fdr pval'] <= 0.05].index
for i in range(len(index_OHV1D3)):
print(results_spearman_OHV1D3.taxonomy[index_OHV1D3[i]],
results_spearman_OHV1D3['rho'][index_OHV1D3[i]],
results_spearman_OHV1D3['fdr pval'][index_OHV1D3[i]])
# check
results_spearman_OHV1D3.head(10)
Explanation: OHV1D3
End of explanation
results= []
i = 2
for j in range(len(otu_cols)):
tmp = dat[[vars_vd[i], otu_cols[j]]].dropna(axis=0, how='any')
rho, pval = spearmanr(tmp[vars_vd[i]], tmp[otu_cols[j]])
tax = bt.loc['taxonomy'][otu_cols[j]]
results.append([vars_vd[i], otu_cols[j], tax, rho, pval])
# output table
results = pd.DataFrame(results, columns=['vars', 'otu',
'taxonomy', 'rho', 'pval']).dropna(axis=0, how='any')
results['fdr pval'] = multipletests(results['pval'], method = 'fdr_bh')[1]
results = results.sort_values(['fdr pval'], ascending=True)
# specific bacteria
index = results.loc[results['fdr pval'] <= 0.05].index
for i in range(len(index)):
print(results.taxonomy[index[i]],
results['rho'][index[i]],
results['fdr pval'][index[i]])
# check
results.head(3)
Explanation: OHV24D3
End of explanation
results= []
i = 3
for j in range(len(otu_cols)):
tmp = dat[[vars_vd[i], otu_cols[j]]].dropna(axis=0, how='any')
rho, pval = spearmanr(tmp[vars_vd[i]], tmp[otu_cols[j]])
tax = bt.loc['taxonomy'][otu_cols[j]]
results.append([vars_vd[i], otu_cols[j], tax, rho, pval])
# output table
results = pd.DataFrame(results, columns=['vars', 'otu',
'taxonomy', 'rho', 'pval']).dropna(axis=0, how='any')
results['fdr pval'] = multipletests(results['pval'], method = 'fdr_bh')[1]
results = results.sort_values(['fdr pval'], ascending=True)
# specific bacteria
index = results.loc[results['fdr pval'] <= 0.05].index
for i in range(len(index)):
print(results.taxonomy[index[i]],
results['rho'][index[i]],
results['fdr pval'][index[i]])
# store result for future check with lasso
results_spearman_activation = results
# check
results.head(10)
Explanation: ratio_activation
End of explanation
results= []
i = 4
for j in range(len(otu_cols)):
tmp = dat[[vars_vd[i], otu_cols[j]]].dropna(axis=0, how='any')
rho, pval = spearmanr(tmp[vars_vd[i]], tmp[otu_cols[j]])
tax = bt.loc['taxonomy'][otu_cols[j]]
results.append([vars_vd[i], otu_cols[j], tax, rho, pval])
# output table
results = pd.DataFrame(results, columns=['vars', 'otu',
'taxonomy', 'rho', 'pval']).dropna(axis=0, how='any')
results['fdr pval'] = multipletests(results['pval'], method = 'fdr_bh')[1]
results = results.sort_values(['fdr pval'], ascending=True)
# specific bacteria
index = results.loc[results['fdr pval'] <= 0.05].index
for i in range(len(index)):
print(results.taxonomy[index[i]],
results['fdr pval'][index[i]])
# check
results.head(3)
Explanation: ratio_catabolism
End of explanation
tmp = dat[np.append(vars_vd, otu_cols)].dropna()
print(tmp.shape)
tmp.head()
Explanation: Lasso Regression
reference on lasso: https://www.coursera.org/learn/machine-learning-data-analysis/supplement/MMg4w/python-code-lasso-regression
reference on standardize data before lasso: https://chrisalbon.com/machine-learning/lasso_regression_in_scikit.html
no need for standardization on OTUs, as they are on the same scale already
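If standardization were ever needed (e.g., for predictors on different scales), one hedged sketch is to fold the scaler into a pipeline so it is re-fit inside each cross-validation split; this is illustrative only and is not used in the analysis below:
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LassoCV
# the scaler is fit on the training folds only, so no information leaks into the choice of alpha
scaled_lasso = make_pipeline(StandardScaler(), LassoCV(cv=10))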
End of explanation
X = tmp[otu_cols]
y = tmp[vars_vd[0]]
# split data into train and test sets
pred_train, pred_test, tar_train, tar_test = train_test_split(X, y, test_size=.3, random_state=123)
# specify the lasso regression model
model=LassoLarsCV(cv=10, precompute=True).fit(pred_train,tar_train)
np.sum(model.coef_)
Explanation: OHVD3
End of explanation
X = tmp[otu_cols]
y = tmp[vars_vd[1]]
# split data into train and test sets
pred_train, pred_test, tar_train, tar_test = train_test_split(X, y, test_size=.3, random_state=123)
# specify the lasso regression model
model=LassoLarsCV(cv=10, precompute=True).fit(pred_train,tar_train)
np.sum(model.coef_)
Explanation: OHV1D3
End of explanation
X = tmp[otu_cols]
y = tmp[vars_vd[2]]
# split data into train and test sets
pred_train, pred_test, tar_train, tar_test = train_test_split(X, y, test_size=.3, random_state=123)
# specify the lasso regression model
model=LassoLarsCV(cv=10, precompute=True).fit(pred_train,tar_train)
np.sum(model.coef_)
Explanation: OHV24D3
End of explanation
X = tmp[otu_cols]
y = tmp[vars_vd[3]]
# split data into train and test sets
pred_train, pred_test, tar_train, tar_test = train_test_split(X, y, test_size=.3, random_state=123)
# specify the lasso regression model
model=LassoLarsCV(cv=20, precompute=True).fit(pred_train,tar_train)
np.sum(model.coef_)
reg = dict(zip(X.columns, model.coef_))
reg = pd.DataFrame.from_dict(reg, orient='index').rename(
columns={0: 'lasso coef'})
reg['taxonomy'] = bt.loc['taxonomy'][reg.index]
subset = reg.loc[reg['lasso coef'] != 0]
print(subset.shape)
print(subset.taxonomy.values, subset['lasso coef'].values)
# store result for future need
subset_activation = subset
subset_activation
# check whether the same as in spearman result
same = results_spearman_activation.loc[results_spearman_activation['otu'].isin (subset_activation.index)]
print(same.taxonomy.values)
print(same['fdr pval'])
results_spearman_activation.loc[results_spearman_activation['fdr pval'] <= 0.05].index
# plot coefficient progression
m_log_alphas = -np.log10(model.alphas_)
ax = plt.gca()
plt.plot(m_log_alphas, model.coef_path_.T)
plt.axvline(-np.log10(model.alpha_), linestyle='--', color='k',
label='alpha CV')
plt.ylabel('Regression Coefficients')
plt.xlabel('-log(alpha)')
plt.title('Regression Coefficients Progression for Lasso Paths')
#plt.savefig('../figures/lasso_coef.png', bbox_inches='tight')
# plot mean square error for each fold
m_log_alphascv = -np.log10(model.cv_alphas_)
plt.figure()
plt.plot(m_log_alphascv, model.cv_mse_path_, ':')
plt.plot(m_log_alphascv, model.cv_mse_path_.mean(axis=-1), 'k',
label='Average across the folds', linewidth=2)
plt.axvline(-np.log10(model.alpha_), linestyle='--', color='k',
label='alpha CV')
plt.legend()
plt.xlabel('-log(alpha)')
plt.ylabel('Mean squared error')
plt.title('Mean squared error on each fold')
#plt.savefig('../figures/lasso_mse.png', bbox_inches='tight')
# MSE from training and test data
from sklearn.metrics import mean_squared_error
train_error = mean_squared_error(tar_train, model.predict(pred_train))
test_error = mean_squared_error(tar_test, model.predict(pred_test))
print ('training data MSE')
print(train_error)
print ('test data MSE')
print(test_error)
# R-square from training and test data
rsquared_train=model.score(pred_train,tar_train)
rsquared_test=model.score(pred_test,tar_test)
print ('training data R-square')
print(rsquared_train)
print ('test data R-square')
print(rsquared_test)
Explanation: ratio_activation
End of explanation
X = tmp[otu_cols]
y = tmp[vars_vd[4]]
# split data into train and test sets
pred_train, pred_test, tar_train, tar_test = train_test_split(X, y, test_size=.3, random_state=123)
# specify the lasso regression model
model=LassoLarsCV(cv=10, precompute=True).fit(pred_train,tar_train)
np.sum(model.coef_)
# check
reg = dict(zip(X.columns, model.coef_))
reg = pd.DataFrame.from_dict(reg, orient='index').rename(
columns={0: 'lasso coef'})
reg['taxonomy'] = bt.loc['taxonomy'][reg.index]
subset = reg.loc[reg['lasso coef'] != 0]
print(subset.shape)
#### previous scratch on lasso
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler
small= tmp[np.append(vars_vd[1], otu_cols)].dropna(axis=0, how='any')
X = scaler.fit_transform(small[otu_cols])
Y = small[vars_vd[1]]
# drop all missing values
tmp = dat[np.append(vars_vd, otu_cols)]
tmp.head()
scaler = StandardScaler()
small= tmp[np.append(vars_vd[0], otu_cols)].dropna(axis=0, how='any')
X = scaler.fit_transform(small[otu_cols])
Y = small[vars_vd[0]]
names = otu_cols
# Create a function called lasso,
def lasso(alphas):
'''
Takes in a list of alphas. Outputs a dataframe containing the coefficients of lasso regressions from each alpha.
'''
# Create an empty data frame
df = pd.DataFrame()
# Create a column of feature names
df['OTU names'] = names
# For each alpha value in the list of alpha values,
for alpha in alphas:
# Create a lasso regression with that alpha value,
lasso = Lasso(alpha=alpha)
# Fit the lasso regression
lasso.fit(X, Y)
# Create a column name for that alpha value
column_name = 'Alpha = %f' % alpha
# Create a column of coefficient values
df[column_name] = lasso.coef_
# Return the datafram
return df
table = lasso([0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5])
table.head()
table.loc[table['Alpha = 0.500000'] != 0]
list = np.array([63, 131, 188, 237, 384, 505, 2116, 2545, 3484, 3598])
for i in range(len(list)):
print(bt.loc['taxonomy'][list[i]])
scaler = StandardScaler()
small= tmp[np.append(vars_vd[1], otu_cols)].dropna(axis=0, how='any')
X = scaler.fit_transform(small[otu_cols])
Y = small[vars_vd[1]]
names = otu_cols
table = lasso([0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5])
table.loc[table['Alpha = 3.000000'] != 0]
bt.loc['taxonomy'][3323]
scaler = StandardScaler()
small= tmp[np.append(vars_vd[2], otu_cols)].dropna(axis=0, how='any')
X = scaler.fit_transform(small[otu_cols])
Y = small[vars_vd[2]]
names = otu_cols
table = lasso([0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5])
table.loc[table['Alpha = 2.000000'] != 0]
scaler = StandardScaler()
small= tmp[np.append(vars_vd[3], otu_cols)].dropna(axis=0, how='any')
X = scaler.fit_transform(small[otu_cols])
Y = small[vars_vd[3]]
names = otu_cols
table = lasso([0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5])
table.loc[table['Alpha = 2.000000'] != 0]
scaler = StandardScaler()
small= tmp[np.append(vars_vd[3], otu_cols)].dropna(axis=0, how='any')
X = scaler.fit_transform(small[otu_cols])
Y = small[vars_vd[3]]
names = otu_cols
table = lasso([0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5])
table.loc[table['Alpha = 1.000000'] != 0]
scaler = StandardScaler()
small= tmp[np.append(vars_vd[4], otu_cols)].dropna(axis=0, how='any')
X = scaler.fit_transform(small[otu_cols])
Y = small[vars_vd[4]]
names = otu_cols
table = lasso([0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5])
table.loc[table['Alpha = 1.000000'] != 0]
clf = Lasso(alpha=0.5)
clf.fit(X, Y)
Lasso(alpha=0.5, copy_X=True, fit_intercept=True, max_iter=1000,
normalize=False, positive=False, precompute=False, random_state=None,
selection='cyclic', tol=0.0001, warm_start=False)
print(clf.coef_)
sum(np.abs(clf.coef_))
print(clf.intercept_)
Explanation: ratio_catabolism
End of explanation |
1,792 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chap 3 Linear Regression (ML)
Problem setup
Given data consisting of $N$ observations ${\bf x}_n$, $(n=1, ..., N)$ and the corresponding target values ${\bf t}_n$,
we model the relationship between ${\bf x}$ and ${\bf t}$.
In linear regression, the model is a linear combination of $M$ weight coefficients $w_j$, $(j=1, ..., M)$ and basis functions ${\phi_j({\bf x})}$,
$$
y({\bf x}, {\bf w}) = \sum_{j=1}^{M} w_j \phi_j({\bf x}) = {\bf w}^T {\bf \phi(x)}
$$
and the $w_j$ are estimated.
Step1: Basis functions
Identity
$$\phi_j(x) = x$$
Polynomial
$$\phi_j(x) = x^j$$
Gaussian
$$\phi_j(x) = \exp\left(-\frac{(x-\mu_j)^2}{2s^2}\right)$$
Logistic sigmoid
$$\phi_j(x) = \frac{1}{1 + \exp\left((x - \mu_j)/s\right)}$$
(Figure 3.1, p. 137)
Step2: Example
Step3: Maximum likelihood estimation of ${\bf w}$
The maximum likelihood estimate of the weights is
$$
{\bf w}_{ML} = (\Phi^T\Phi)^{-1} \Phi^T{\bf t}
$$
where ${\bf \Phi} \in R^{N \times M}$ is the design matrix whose $(i, j)$ element is $\phi_j(x_i)$.
Step4: Regularization term
Add a regularization term $E_W({\bf w})$ with regularization coefficient $\lambda$ to the error function $E_D({\bf w})$:
$$
\underset{\bf w}{\operatorname{argmin}}\; E_D({\bf w}) + \lambda E_W({\bf w}) \\
E_D({\bf w}) = \frac{1}{2} \sum_{i=1}^{N} \left(t_i - {\bf w}^T\phi({\bf x}_i)\right)^2 \\
E_W({\bf w}) = \frac{1}{2} {\bf w}^T{\bf w}
$$
The regularized maximum likelihood estimate is then
$$
{\bf w}_{ML} = (\lambda{\bf I} + \Phi^T\Phi)^{-1} \Phi^T{\bf t}
$$ | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: Chap 3 Linear Regression (ML)
Problem setup
Given data consisting of $N$ observations ${\bf x}_n$, $(n=1, ..., N)$ and the corresponding target values ${\bf t}_n$,
we model the relationship between ${\bf x}$ and ${\bf t}$.
In linear regression, the model is a linear combination of $M$ weight coefficients $w_j$, $(j=1, ..., M)$ and basis functions ${\phi_j({\bf x})}$,
$$
y({\bf x}, {\bf w}) = \sum_{j=1}^{M} w_j \phi_j({\bf x}) = {\bf w}^T {\bf \phi(x)}
$$
and the $w_j$ are estimated.
End of explanation
def basisNone(x, dmy1=None, dmy2=None, dmy3=None):
return x
def basisPoly(x, j, dmy1=None, dmy2=None):
return x ** j
def basisGauss(x, j, mu, s):
return np.exp(- (x-mu[j]) ** 2 / (2 * s ** 2))
def basisSigmoid(x, j, mu, s):
return 1.0 / (1 + np.exp(-(x - mu[j])/s))
basis = {
"Linear": basisNone,
"Polynomial": basisPoly,
"Gauss": basisGauss,
"Sigmoid": basisSigmoid,
}
x = np.linspace(-1, 1, 100)
mu = np.linspace(-1, 1, 10)
plt.figure(figsize=(16, 3))
for ikey, key in enumerate(basis):
plt.subplot(1, 4, ikey + 1)
plt.title(key + " Kernel")
for j in range(len(mu)):
plt.plot(x, [basis[key](tmpx, j, mu, 0.1) for tmpx in x])
def y(x, w, basisName, mu=None, s=None):
ret = w[0]
for index in range(1, len(w)):
ret += w[index] * basis[basisName](x, index, mu, s)
return ret
Explanation: Basis functions
Identity
$$\phi_j(x) = x$$
Polynomial
$$\phi_j(x) = x^j$$
Gaussian
$$\phi_j(x) = \exp\left(-\frac{(x-\mu_j)^2}{2s^2}\right)$$
Logistic sigmoid
$$\phi_j(x) = \frac{1}{1 + \exp\left((x - \mu_j)/s\right)}$$
(Figure 3.1, p. 137)
End of explanation
N = 100 # number of samples
M = 5 # model dimension (number of basis functions)
b = 0.5 # noise precision
x_opt = np.linspace(-5, 5, N)
y_opt = 2 * np.sin(x_opt) + 3
y_obs = (y_opt + np.random.normal(0, 1.0/b, len(x_opt)))[:, np.newaxis]
plt.plot(x_opt, y_obs, "k.", label="Observation")
plt.plot(x_opt, y_opt, "r-", linewidth=3, label="Truth")
plt.xlim(-5, 5)
plt.legend()
Explanation: Example: regress the following relationship, built from a $\sin$ function, with an $M$-dimensional linear model.
The observations contain additive noise $\epsilon$:
$$
y(x) = 2 \sin(x) + 3 + \epsilon \\
\epsilon \sim N(0, \beta^{-1})
$$
* $x \in [-5, 5]$
* $M$: number of basis functions (dimension) of the linear regression
* $N$: number of samples
* $\beta$: precision of the additive noise
End of explanation
# build the design matrix
# s is fixed to 1 for every basis function
def makeDesignMatrix(x, basis, mu, s):
ret = np.zeros((len(x), len(mu)))
for i in range(len(x)):
for j in range(len(mu)):
ret[i][ j] = basis(x[i], j, mu, s)
return ret
mu = np.linspace(min(x_opt), max(x_opt), M)
s = 1
designMatrix = {}
plt.figure(figsize=(16, 3))
for ikey, key in enumerate(basis):
designMatrix[key] = makeDesignMatrix(x_opt, basis[key], mu, s)
plt.subplot(1, 4, ikey+1)
plt.title(key + "(M=%d)" % M)
plt.imshow(designMatrix[key], aspect="auto", interpolation="nearest")
# perform maximum-likelihood estimation for each basis function and plot the result
for M in [1, 3, 5, 7, 10]:
mu = np.linspace(min(x_opt), max(x_opt), M)
s = 1.0
plt.figure(figsize=(16, 3))
for ikey, key in enumerate(basis):
phi = makeDesignMatrix(x_opt, basis[key], mu, s) + np.random.uniform(0, 0.001, (N, M))
phit = phi.transpose()
wml = np.dot(np.dot(np.linalg.inv(np.dot(phit, phi)), phit), y_obs)
bml = 1.0 / (1.0 / N + sum([(y_obs[i] - np.dot(wml.transpose(), phi[i, :]))**2 for i in range(N)]))
plt.subplot(1, 4, ikey+1)
plt.title(key + "(M=%d)" % M)
plt.plot(x_opt, [np.dot(wml.transpose(), phi[i, :]) for i in range(N)], "k", label="Estimated")
plt.plot(x_opt, y_opt, "r", label="Oracle")
plt.plot(x_opt, y_obs, "r.", markersize=2)
plt.ylim([-3, 10])
plt.xlim([min(x_opt), max(x_opt)])
plt.legend()
Explanation: Maximum likelihood estimation of ${\bf w}$
The maximum likelihood estimate of the weights is
$$
{\bf w}_{ML} = (\Phi^T\Phi)^{-1} \Phi^T{\bf t}
$$
where ${\bf \Phi} \in R^{N \times M}$ is the design matrix whose $(i, j)$ element is $\phi_j(x_i)$.
End of explanation
# for each basis function, vary M and lambda, estimate w by (regularized) maximum likelihood, and plot each result
for M in [1, 5, 10, 50]:
mu = np.linspace(min(x_opt), max(x_opt), M)
s = 1.0
plt.figure(figsize=(16, 3))
for ikey, key in enumerate(basis):
phi = makeDesignMatrix(x_opt, basis[key], mu, s) + np.random.uniform(0, 0.001, (N, M))
phit = phi.transpose()
plt.subplot(1, 4, ikey+1)
plt.title(key + "(M=%d)" % M)
for lmd in np.linspace(0, 5, 5):
wml = np.dot(np.dot(np.linalg.inv(lmd * np.eye(M) + np.dot(phit, phi)), phit), y_obs)
plt.plot(x_opt, [np.dot(wml.transpose(), phi[i, :]) for i in range(N)], "k", label="Estimated" if lmd == 0 else "")
plt.plot(x_opt, y_opt, "r", label="Oracle")
plt.plot(x_opt, y_obs, "r.", markersize=2)
plt.ylim([-3, 10])
plt.xlim([min(x_opt), max(x_opt)])
plt.legend()
mu = np.linspace(min(x_opt), max(x_opt), M)
phi = makeDesignMatrix(x_opt, basis["Gauss"], mu, s)
phit = phi.transpose()
wml = np.dot(np.dot(np.linalg.inv(lmd * np.eye(M) + np.dot(phit, phi)), phit), y_obs)
plt.figure(figsize=(16, 3))
plt.subplot(1, 4, 1)
plt.imshow(phi, aspect="auto", interpolation="nearest")
plt.title("Design Matrix")
plt.ylabel("x")
plt.xlabel("weight")
plt.subplot(1, 4, 2)
out = np.zeros((N, M))
plt.plot(wml)
plt.title("Weight Vector")
#plt.imshow(out, aspect="auto", interpolation="nearest")
plt.subplot(1, 4, 3)
dw = (wml.transpose() * phi).transpose()
plt.imshow(dw.transpose(), aspect="auto", interpolation="nearest")
plt.title("Design x Weight")
plt.subplot(1, 4, 4)
plt.plot(x_opt, dw.sum(0))
plt.plot(x_opt, y_opt, "r:", linewidth=3)
plt.title("sum(Design x Weight)")
Explanation: Regularization term
Add a regularization term $E_W({\bf w})$ with regularization coefficient $\lambda$ to the error function $E_D({\bf w})$:
$$
\underset{\bf w}{\operatorname{argmin}}\; E_D({\bf w}) + \lambda E_W({\bf w}) \\
E_D({\bf w}) = \frac{1}{2} \sum_{i=1}^{N} \left(t_i - {\bf w}^T\phi({\bf x}_i)\right)^2 \\
E_W({\bf w}) = \frac{1}{2} {\bf w}^T{\bf w}
$$
The regularized maximum likelihood estimate is then
$$
{\bf w}_{ML} = (\lambda{\bf I} + \Phi^T\Phi)^{-1} \Phi^T{\bf t}
$$
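A minimal sketch of evaluating this regularized estimate directly (it assumes phi, y_obs and a chosen lmd from the cells above; np.linalg.solve is used instead of an explicit inverse, which is the numerically safer route):
lmd = 1.0   # example regularization coefficient
# solve (lambda*I + Phi^T Phi) w = Phi^T t as a linear system
w_reg = np.linalg.solve(lmd * np.eye(phi.shape[1]) + np.dot(phi.T, phi), np.dot(phi.T, y_obs))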
End of explanation |
1,793 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
๊ฐ์ค๊ฒ์
Step1: ์ค๋์ ์ฃผ์ ์์
Step2: sp.factorial() ํจ์๋ฅผ ์ด์ฉํ์ฌ ์กฐํฉ์ ๊ฒฝ์ฐ์ ์์ธ $\binom{n}{r}$์ ๊ณ์ฐํ๋ ํจ์๋ฅผ ์ ์ํ๋ค.
Step3: ์ด์ ์ดํญ๋ถํฌ ํ๋ฅ ๋ฅผ ๊ตฌํ๋ ํจ์๋ ๋ค์๊ณผ ๊ฐ๋ค.
n, r, p ์ธ ๊ฐ์ ์ธ์๋ฅผ ์ฌ์ฉํ๋ฉฐ p๋ ํ ๋ฒ ์คํํ ๋ ํน์ ์ฌ๊ฑด์ด ๋ฐ์ํ ํ๋ฅ ์ด๋ค.
Step4: ์ ํจ์๋ฅผ ์ด์ฉํ์ฌ ๋์ ์ 30๋ฒ ๋์ ธ์ ์๋ฉด์ด 24๋ฒ ๋์ฌ ํ๋ฅ ์ ๊ณ์ฐํ ์ ์๋ค.
Step5: ์ด์ ๋์ ์ 30๋ฒ ๋์ ธ์ ์๋ฉด์ด 24๋ฒ ์ด์ ๋์ฌ ํ๋ฅ ์ ๋ค์๊ณผ ๊ฐ๋ค.
Step6: ์ ๊ณ์ฐ๊ฒฐ๊ณผ์ ์ํ๋ฉด ์ฃผ์ฌ์๋ฅผ 30๋ฒ ๋์ ธ์ ์๋ฉด์ด 24๋ฒ ์ด์ ๋์ฌ ํ๋ฅ ์ 0.07%์ด๋ค.
์ฆ, ์ฃผ์ฌ์๋ฅผ ์๋ฅธ ๋ฒ ๋์ง๋ ์คํ์ ๋ง ๋ฒ ํ๋ฉด 7๋ฒ ์ ๋, ์๋ฉด์ด 24๋ฒ ์ด์ ๋์จ๋ค๋ ์๋ฏธ์ด๋ค.
๋ฌ๋ฆฌ ๋งํ๋ฉด, ์๋ฅธ ๋ฒ์ฉ ๋์ง๋ ๋ชจ์์คํ์ 1500๋ฒ ์ ๋ ํด์ผ ์๋ฉด์ด 24๋ฒ ์ด์ ๋์ค๋ ๊ฒฝ์ฐ๋ฅผ ๋ณผ ์ ์๋ค๋ ๋ง์ด๋ค.
์ด๋ฅผ ์คํ์ ์ผ๋ก ํ์ธํด๋ณผ ์ ์๋ค.
ํ๋ก๊ทธ๋๋ฐ์ ์ด์ฉํ ์ดํญ๋ถํฌ ๋ชจ์์คํ
๋ชจ์์คํ์ ์ด์ฉํ์ฌ ๋์ ์ 30๋ฒ ๋์ ธ์ ์๋ฉด์ด 24๋ฒ ์ด์ ๋์ฌ ํ๋ฅ ์ด ์ผ๋ง๋ ๋๋์ง ํ์ธํด๋ณด์.
์ผ๋ฐ์ ์ผ๋ก ๊ทธ ํ๋ฅ ์ด 5% ์ดํ๋ผ๊ณ ๋ฐํ์ง๋ฉด ์์ ์ธ๊ธํ ๋์ ์ ํธํฅ๋ ๋์ ์ด์๋ค๊ณ ๊ฒฐ๋ก ์ง์ ์ ์๋ค.
์์ ์ด๋ฏธ ์ด๋ก ์ ์ผ๋ก ๊ทธ ํ๋ฅ ์ด 5%์ ํจ์ฌ ๋ฏธ์น์ง ์๋๋ค๋ ๊ฒ์ ํ์ธํ์๋ค.
์ฌ๊ธฐ์๋ ๋ชจ์์คํ์ ์ด์ฉํ์ฌ ๋์ผํ ๊ฒฐ๋ก ์ ๋๋ฌํ ์ ์์์ ๋ณด์ด๊ณ ์ ํ๋ค.
๋ชจ์์คํ
Step7: ๋์ ์ 30๋ฒ ๋์ง๋ ๊ฒ์ ๊ตฌํํ๊ธฐ ์ํด ์ด์ 0๊ณผ 1์ ๋ฌด์์์ ์ผ๋ก 30๊ฐ ์์ฑํ์.
Step8: ์๋ฉด ๋์จ ๊ฒ๋ง ๋ชจ์์ ๋์ง์ด ๋ด๊ธฐ ์ํด ๋ง์คํฌ๋ฅผ ์ด์ฉํ ํฌ์ ์ธ๋ฑ์ฑ์ ํ์ฉํ๋ค.
Step9: ์๋ฉด์ด ๋์จ ํ์๋ ์ ์ด๋ ์ด์ ๊ธธ์ด์ ํด๋นํ๋ค.
Step10: ๊ธธ์ด ์ ๋ณด๋ฅผ ์๋์ ๊ฐ์ด ๋ชจ์์ ์ ๋ณด๋ก๋ถํฐ ๊ฐ์ ธ์ฌ ์ ์๋ค.
Step11: (13,)์ ๊ธธ์ด๊ฐ 1์ธ ํํ ์๋ฃํ์ด๋ฉฐ, ์ด๋ heads๊ฐ 1์ฐจ์ ์ด๋ ์ด์ด๋ฉฐ, ์ด๋ ์ด์ ๊ธธ์ด๊ฐ 13์ด๋ ์๋ฏธ์ด๋ค.
ํํ์ ์ธ๋ฑ์ฑ์ ์ด์ฉํ์ฌ heads์ ๊ธธ์ด, ์ฆ, ์๋ฉด์ด ๋์จ ํ์๋ฅผ ์ ์ ์๋ค.
Step12: ๋ชจ์์คํ
Step13: ์ ์ฝ๋๋ฅผ ์ค๋ช
ํ๋ฉด ๋ค์๊ณผ ๊ฐ๋ค.
๋จผ์ num_repeat ๋งํผ ๋ฐ๋ณตํ 30๋ฒ ๋์ ๋์ง๊ธฐ์ ๊ฒฐ๊ณผ๋ฅผ ์ ์ฅํ (num_repeat, 1) ๋ชจ์์
2์ฐจ์ ์ด๋ ์ด๋ฅผ ์์ฑํ๋ค.
์ด๋ฅผ ์ํด np.empty ํจ์๋ฅผ ํ์ฉํ๋ค.
np.zeros ํจ์์ ์ ์ฌํ๊ฒ ์๋ํ์ง๋ง ์์ฑ๋ ๋ ํญ๋ชฉ ๊ฐ๋ค์ ์๋ฏธ ์์ด ์์๋ก ์์ฑ๋๋ค.
์ดํ์ ๊ฐ๊ฐ์ ํญ๋ชฉ์ ๋ฐ๋์ ์ง์ ๋์ด์ผ ํ๋ฉฐ, ๊ทธ๋ ์ง ์์ผ๋ฉด ์ค๋ฅ๊ฐ ๋ฐ์ํ๋ค.
heads_count_array = np.empty([num_repeat,1], dtype=int)
num_repeat ๋งํผ 30๋ฒ ๋์ ๋์ง๊ธฐ ๋ชจ์์คํ์ ์คํํ์ฌ ์์ ์์ฑํ 2์ฐจ์ ์ด๋ ์ด์ ์ฐจ๋ก๋๋ก ์ ์ฅํ๋ค.
์ ์ฅ์ ์ด๋ ์ด ์ธ๋ฑ์ฑ์ ํ์ฉํ๋ค.
for times in np.arange(num_repeat)
Step14: ์์
30๋ฒ ๋์ ๋์ง๊ธฐ๋ฅผ 1000๋ฒ ๋ชจ์์คํํ ๊ฒฐ๊ณผ๋ฅผ ์์ ๋ก ์ดํด๋ณด์.
Step15: 100๋ฒ์ ๋ชจ์์คํ ๊ฒฐ๊ณผ ์ค์ ์ฒ์ 10๊ฐ์ ๊ฒฐ๊ณผ๋ฅผ ํ์ธํด๋ณด์.
Step16: ๋ชจ์์คํ ๊ฒฐ๊ณผ ๊ทธ๋ํ๋ก ํ์ธํ๊ธฐ
๋ชจ์์คํ ๊ฒฐ๊ณผ๋ฅผ ํ์คํธ๊ทธ๋จ์ผ๋ก ํ์ธํด๋ณผ ์ ์๋ค.
์ฌ๊ธฐ์๋ seaborn ์ด๋ ๋ชจ๋์ ํ์ฉํ์ฌ ๋ณด๋ค ๋ฉ์ง ๊ทธ๋ํ๋ฅผ ๊ทธ๋ฆฌ๋ ๋ฒ์ ๊ธฐ์ตํด๋๋ฉด ์ข๋ค.
Step17: ์๋ ๊ทธ๋ํ๋ ๋์ 30๋ฒ ๋์ง๊ธฐ๋ฅผ 100๋ฒ ๋ชจ์์คํํ์ ๋ ์๋ฉด์ด ๋์จ ํ์๋ฅผ ํ์คํ ๊ทธ๋จ์ผ๋ก ๋ณด์ฌ์ค๋ค.
Step18: ์๋ ๊ทธ๋ํ๋ ์ปค๋๋ฐ๋์ถ์ (kde = kernel density estimation) ๊ธฐ๋ฒ์ ์ ์ฉํ์ฌ ๋ฐ์ดํฐ๋ฅผ
๋ณด๋ค ์ดํดํ๊ธฐ ์ฝ๋๋ก ๋์์ฃผ๋ ๊ทธ๋ํ๋ฅผ ํจ๊ป ๋ณด์ฌ์ค๋ค.
Step19: 1000๋ฒ์ ๋ชจ์์คํ์์ ์๋ฉด์ด 24ํ ์ด์ ๋์จ ์คํ์ด ๋ช ๋ฒ์ธ์ง๋ฅผ ํ์ธํด๋ณด์.
์์ ์ฌ์ฉํ ๊ธฐ์ ์ธ ๋ง์คํฌ ์ธ๋ฑ์ฑ ๊ธฐ์ ์ ํ์ฉํ๋ค.
์ฃผ์
Step20: ์ ๋ชจ์์คํ์์๋ ํ ๋ฒ ์๋ฉด์ด 24ํ ์ด์ ๋์๋ค.
์ด์ ์ ๋ชจ์์คํ์ 10,000๋ฒ ๋ฐ๋ณตํด๋ณด์.
Step21: ์ ๋ชจ์์คํ์์๋ ๋์ 30๋ฒ ๋์ง๊ธฐ๋ฅผ 10,000๋ฒ ๋ฐ๋ณตํ์ ๋ 24๋ฒ ์ด์ ์๋ฉด์ด ๋์จ ๊ฒฝ์ฐ๊ฐ 6๋ฒ ์์๋ค.
์์ ํ๋ฅ ์ ์ผ๋ก 0.0007%, ์ฆ 10,000๋ฒ์ 7๋ฒ ์ ๋ ๋์์ผ ํ๋ค๋ ๊ฒ๊ณผ ๊ฑฐ์ ์ผ์นํ๋ค.
์ ์์ ์ธ ๋์ ์ธ๊ฐ?
๋ชจ์์คํ์ ๊ฒฐ๊ณผ ์ญ์ ๋์ ์ 30๋ฒ ๋์ ธ์ 24๋ฒ ์ด์ ์๋ฉด์ด ๋์ฌ ํ๋ฅ ์ด 5%์ ํฌ๊ฒ ๋ฏธ์น์ง ๋ชปํ๋ค.
์ด๋ฐ ๊ฒฝ์ฐ ์ฐ๋ฆฌ๋ ์ฌ์ฉํ ๋์ ์ด ์ ์์ ์ธ ๋์ ์ด๋ผ๋ ์๊ฐ์ค(H0)์ ๋ฐ์๋ค์ผ ์ ์๋ค๊ณ ๋งํ๋ค.
์ฆ, ๊ธฐ๊ฐํด์ผ ํ๋ค.
๊ฐ์ค๊ฒ์ ์ ์ํด ์ง๊ธ๊น์ง ๋ค๋ฃฌ ๋ด์ฉ์ ์ ๋ฆฌํ๋ฉด ๋ค์๊ณผ ๊ฐ๋ค.
๊ฐ์ค๊ฒ์ 6๋จ๊ณ
1) ๊ฒ์ ํ ๊ฐ์ค์ ๊ฒฐ์ ํ๋ค.
* ์๊ฐ์ค
Step22: ์ฐ์ต
numpy.random ๋ชจ๋์ ์ง๊ธ๊น์ง ๋ค๋ฃฌ ์ดํญ๋ถํฌ ํ๋ฅ ์ ๊ณ์ฐํด์ฃผ๋ ํจ์์ธ binomial์ด ์ด๋ฏธ ๊ตฌํ๋์ด ์๋ค.
Step23: ์๋ ์ฝ๋๋ B(30, 1.5)๋ฅผ ๋ฐ๋ฅด๋ ํ๋ฅ ๋ณ์๋ฅผ 10,000๋ฒ ๋ฐ๋ณตํ ๊ฒฐ๊ณผ๋ฅผ ๋ณด์ฌ์ค๋ค.
Step24: ์ ๊ฒฐ๊ณผ๋ฅผ ์ด์ฉํ์ฌ ์์ ๋ถ์ํ ๊ฒฐ๊ณผ์ ์ ์ฌํ ๊ฒฐ๊ณผ๋ฅผ ์ป๋๋ค๋ ๊ฒ์ ํ์ธํ ์ ์๋ค. | Python Code:
import numpy as np
from __future__ import print_function, division
Explanation: Hypothesis Testing
End of explanation
import sympy as sp
sp.factorial(5)
Explanation: ์ค๋์ ์ฃผ์ ์์ : ๋์ ๋์ง๊ธฐ
๋์ ์ 30๋ฒ ๋์ ธ์ ์๋ฉด(Head)์ด 24๋ฒ ๋์์ ๋, ์ ์์ ์ธ ๋์ ์ด๋ผ ํ ์ ์์๊น?
์๊ฐ์ค(H0): ์ ์์ ์ธ ๋์ ์ด๋ผ๋ฉด 30๋ฒ ์ค์ ๋ณดํต์ 15๋ฒ์ ์๋ฉด(Head), 15๋ฒ์ ๋ท๋ฉด(Tail)์ด ๋์จ๋ค.
๋ฐ๋ผ์ ์ ์์ ์ธ ๋์ ์ด ์๋๋ค.
๋๋ฆฝ๊ฐ์ค(H1): ์ ์์ ์ธ ๋์ ์ด๋ผ๋ 24๋ฒ ์๋ฉด์ด ๋์ฌ ์ ์๋ค. ์ฐ์ฐํ๊ฒ ๋ฐ์ํ ์ฌ๊ฑด์ด๋ค.
์ด๋ ๊ฐ์ค์ด ๋ง์๊น? ์๊ฐ์ค์ ๊ธฐ๊ฐํด์ผ ํ๋๊ฐ?
์ดํญ๋ถํฌ
๋์ผํ ์๋๋ฅผ n๋ฒ ์๋ํ ๋ ํน์ ์ฌ๊ฑด์ด r๋ฒ ๋์ฌ ํ๋ฅ ์ ๊ณ์ฐํ ๋ ์ดํญ๋ถํฌ๋ฅผ ์ด์ฉํ๋ค.
์กฐ๊ฑด
๊ฐ๊ฐ์ ์๋๋ ์ํธ ๋
๋ฆฝ์ ์ด๋ค.
๊ฐ๊ฐ์ ์๋์์ ํน์ ์ฌ๊ฑด์ด ๋ฐ์ํ ํ๋ฅ p๋ ์ธ์ ๋ ๋์ผํ๋ค.
์์
๋์ ์ n๋ฒ ๋์ ธ ์๋ฉด์ด r๋ฒ ๋์ฌ ํ๋ฅ ๋ค์ ๋ถํฌ: p = 1/2
์ฃผ์ฌ์๋ฅผ n๋ฒ ๋์ ธ 2๋ณด๋ค๋ ํฐ ์์๊ฐ r๋ฒ ๋์ฌ ํ๋ฅ ๋ค์ด ๋ถํฌ: p = 1/3
์ดํญ๋ถํฌ๋ฅผ ๋ฐ๋ฅด๋ ํ๋ฅ ๋ถํฌ๋ฅผ $B(n, p)$๋ก ํ๊ธฐํ๋ค.
์ด๋, n๋ฒ ์๋ํด์ r๋ฒ ์ฑ๊ณตํ ํ๋ฅ $P(r)$์ ์๋ ์์ผ๋ก ๊ตฌํ ์ ์๋ค.
$$P(r) = \binom{n}{r} \cdot p^r \cdot (1-p)^{n-r}$$
์ ์์์ $\binom{n}{r}$์ n๊ฐ์์ r๊ฐ๋ฅผ ์ ํํ๋ ์กฐํฉ์ ๊ฒฝ์ฐ์ ์๋ฅผ ๋ํ๋ด๋ฉฐ, ์๋์ ์์ผ๋ก ๊ตฌํด์ง๋ค.
$$\binom{n}{r} = \frac{n!}{(n-r)!\cdot r!}$$
์์
์ ์์ ์ธ ๋์ผํ ๋์ ์ ๋ฐ๋ณต์ ์ผ๋ก ๋์ง๋ ํ์๋ ์ดํญ๋ถํฌ๋ฅผ ๋ฐ๋ฅธ๋ค, ์ฆ, n๋ฒ ๋์ง ๊ฒฝ์ฐ $B(n, 1/2)$๊ฐ ์ฑ๋ฆฝํ๋ค.
์๋ ํ๋ฅ ์ ๊ตฌํ๋ผ.
๋์ ์ ํ ๋ฒ ๋์ ธ ์๋ฉด(Head)์ด ๋์ฌ ํ๋ฅ P(H)๋?
P(H) = 1/2
๋์ ์ ๋ ๋ฒ ๋์ ธ ๋ ๋ฒ ๋ชจ๋ ์๋ฉด(Head)์ด ๋์ฌ ํ๋ฅ P(HH)๋?
P(HH) = (1/2) * (1/2)
๋์ ์ ์ธ ๋ฒ ๋์ ธ ์ฒ์์ ์๋ฉด, ๋๋จธ์ง ๋ ๋ฒ์ ๋ชจ๋ ๋ท๋ฉด(Tail)์ด ๋์ฌ ํ๋ฅ P(HTT)๋?
P(HTT) = (1/2) * (1/2) * (1/2)
๋์ ์ ์ธ ๋ฒ ๋์ ธ ์๋ฉด์ด ๋ ๋ฒ, ๋ท๋ฉด์ด ํ ๋ฒ ๋์ฌ ํ๋ฅ P(2H, 1T)๋?
์ด ๊ฒฝ์ฐ์ ์ดํญ๋ถํฌ B(3, 1/2)๋ฅผ ์ด์ฉํ๋ฉด ๋๋ค.
์ฌ๊ธฐ์ 1/2๋ ์๋ฉด์ด ๋์ค๋ ์ฌ๊ฑด์ ํ๋ฅ ์ด๋ค.
์ดํญ๋ถํฌ ์์ ์ด์ฉํ๋ฉด P(2H, 1T)๋ ์๋์ ๊ฐ๋ค.
$$P(2) = \binom{3}{2}\cdot (\frac 1 2)^2 \cdot (\frac 1 2) = 3/8$$
์ฐ์ต
๋์ ์ 30๋ฒ ๋์ ธ์ ์๋ฉด์ด 24๋ฒ ์ด์ ๋์ฌ ํ๋ฅ ์?
์ด ๋ฌธ์ ๋ฅผ ํ๊ธฐ ์ํด ์ดํญ๋ถํฌ ๊ณต์์ ํจ์๋ก ์ ์ธํ์.
๋จผ์ ํฉํ ๋ฆฌ์ผ ํจ์๊ฐ ํ์ํ๋ฐ, sympy ๋ชจ๋์ ์ ์๋์ด ์๋ค.
End of explanation
def binom(n, r):
return sp.factorial(n) / (sp.factorial(r) * sp.factorial(n-r))
Explanation: Using sp.factorial(), define a function that computes the number of combinations $\binom{n}{r}$.
End of explanation
def binom_distribution(n, r, p):
return binom(n, r) * p**r * (1-p)**(n-r)
Explanation: The function that computes the binomial probability is then as follows.
It takes three arguments n, r, and p, where p is the probability that the event of interest occurs in a single trial.
End of explanation
binom_distribution(30, 24, 1/2.)
Explanation: Using the function above, we can compute the probability of getting exactly 24 heads out of 30 coin tosses.
End of explanation
probability = 0.0
for x in range(24, 31):
probability += binom_distribution(30, x, 1/2)
print(probability)
Explanation: The probability of getting 24 or more heads in 30 tosses is then computed as follows.
End of explanation
np.random.randint(0, 10, 5)
Explanation: ์ ๊ณ์ฐ๊ฒฐ๊ณผ์ ์ํ๋ฉด ์ฃผ์ฌ์๋ฅผ 30๋ฒ ๋์ ธ์ ์๋ฉด์ด 24๋ฒ ์ด์ ๋์ฌ ํ๋ฅ ์ 0.07%์ด๋ค.
์ฆ, ์ฃผ์ฌ์๋ฅผ ์๋ฅธ ๋ฒ ๋์ง๋ ์คํ์ ๋ง ๋ฒ ํ๋ฉด 7๋ฒ ์ ๋, ์๋ฉด์ด 24๋ฒ ์ด์ ๋์จ๋ค๋ ์๋ฏธ์ด๋ค.
๋ฌ๋ฆฌ ๋งํ๋ฉด, ์๋ฅธ ๋ฒ์ฉ ๋์ง๋ ๋ชจ์์คํ์ 1500๋ฒ ์ ๋ ํด์ผ ์๋ฉด์ด 24๋ฒ ์ด์ ๋์ค๋ ๊ฒฝ์ฐ๋ฅผ ๋ณผ ์ ์๋ค๋ ๋ง์ด๋ค.
์ด๋ฅผ ์คํ์ ์ผ๋ก ํ์ธํด๋ณผ ์ ์๋ค.
ํ๋ก๊ทธ๋๋ฐ์ ์ด์ฉํ ์ดํญ๋ถํฌ ๋ชจ์์คํ
๋ชจ์์คํ์ ์ด์ฉํ์ฌ ๋์ ์ 30๋ฒ ๋์ ธ์ ์๋ฉด์ด 24๋ฒ ์ด์ ๋์ฌ ํ๋ฅ ์ด ์ผ๋ง๋ ๋๋์ง ํ์ธํด๋ณด์.
์ผ๋ฐ์ ์ผ๋ก ๊ทธ ํ๋ฅ ์ด 5% ์ดํ๋ผ๊ณ ๋ฐํ์ง๋ฉด ์์ ์ธ๊ธํ ๋์ ์ ํธํฅ๋ ๋์ ์ด์๋ค๊ณ ๊ฒฐ๋ก ์ง์ ์ ์๋ค.
์์ ์ด๋ฏธ ์ด๋ก ์ ์ผ๋ก ๊ทธ ํ๋ฅ ์ด 5%์ ํจ์ฌ ๋ฏธ์น์ง ์๋๋ค๋ ๊ฒ์ ํ์ธํ์๋ค.
์ฌ๊ธฐ์๋ ๋ชจ์์คํ์ ์ด์ฉํ์ฌ ๋์ผํ ๊ฒฐ๋ก ์ ๋๋ฌํ ์ ์์์ ๋ณด์ด๊ณ ์ ํ๋ค.
๋ชจ์์คํ: ๋์ 30๋ฒ ๋์ง๊ธฐ
๋จผ์ ์ ์์ ์ธ ๋์ ์ 30๋ฒ ๋์ง๋ ๋ชจ์์คํ์ ์ฝ๋๋ก ๊ตฌํํ๊ธฐ ์ํด ์๋ ์์ด๋์ด๋ฅผ ํ์ฉํ๋ค.
๋ชจ์์คํ์์ 1์ ์๋ฉด(H)๋ฅผ, 0์ ๋ท๋ฉด(T)์ ์๋ฏธํ๋ค.
์ ์์ ์ธ ๋์ ์ ๋์ง ๊ฒฐ๊ณผ๋ ์์์ ์ผ๋ก ๊ฒฐ์ ๋๋ค.
np.random ๋ชจ๋์ randint ํจ์๋ฅผ ์ด์ฉํ์ฌ ๋ฌด์์์ ์ผ๋ก 0๊ณผ 1๋ก ๊ตฌ์ฑ๋, ๊ธธ์ด๊ฐ 30์ธ ์ด๋ ์ด๋ฅผ ์์ฑํ ์ ์๋ค.
np.random.randint ํจ์๋ ์ฃผ์ด์ง ๊ตฌ๊ฐ์์ ์ ์๋ฅผ ์ง์ ๋ ๊ธธ์ด๋งํผ ์์ฑํด์ ์ด๋ ์ด๋ก ๋ฆฌํดํ๋ค.
์๋ ์ฝ๋๋ 0๊ณผ 10 ์ฌ์ด์ ์ ์๋ฅผ ๋ฌด์์์ ์ผ๋ก 5๊ฐ ์์ฑํ์ฌ ์ด๋ ์ด๋ก ๋ฆฌํดํ๋ค.
์ฃผ์:
0์ ํฌํจ๋จ
10์ ํฌํจ๋์ง ์์
End of explanation
num_tosses = 30
num_heads = 24
head_prob = 0.5
experiment = np.random.randint(0, 2, num_tosses)
experiment
Explanation: To implement tossing the coin 30 times, we now generate 30 random values of 0 and 1.
End of explanation
mask = experiment == 1
mask
heads = experiment[mask]
heads
Explanation: To collect only the tosses that came up heads, we use mask-based (boolean) indexing.
End of explanation
len(heads)
Explanation: ์๋ฉด์ด ๋์จ ํ์๋ ์ ์ด๋ ์ด์ ๊ธธ์ด์ ํด๋นํ๋ค.
End of explanation
heads.shape
Explanation: ๊ธธ์ด ์ ๋ณด๋ฅผ ์๋์ ๊ฐ์ด ๋ชจ์์ ์ ๋ณด๋ก๋ถํฐ ๊ฐ์ ธ์ฌ ์ ์๋ค.
End of explanation
heads.shape[0]
Explanation: (13,)์ ๊ธธ์ด๊ฐ 1์ธ ํํ ์๋ฃํ์ด๋ฉฐ, ์ด๋ heads๊ฐ 1์ฐจ์ ์ด๋ ์ด์ด๋ฉฐ, ์ด๋ ์ด์ ๊ธธ์ด๊ฐ 13์ด๋ ์๋ฏธ์ด๋ค.
ํํ์ ์ธ๋ฑ์ฑ์ ์ด์ฉํ์ฌ heads์ ๊ธธ์ด, ์ฆ, ์๋ฉด์ด ๋์จ ํ์๋ฅผ ์ ์ ์๋ค.
End of explanation
def coin_experiment(num_repeat):
heads_count_array = np.empty([num_repeat,1], dtype=int)
for times in np.arange(num_repeat):
experiment = np.random.randint(0,2,num_tosses)
heads_count_array[times] = experiment[experiment==1].shape[0]
return heads_count_array
Explanation: ๋ชจ์์คํ: ๋์ 30๋ฒ ๋์ง๊ธฐ ๋ชจ์์คํ ๋ฐ๋ณตํ๊ธฐ
์์ ๊ตฌํํ ๋์ 30๋ฒ ๋์ง๊ธฐ๋ฅผ ๊ณ์ํด์ ๋ฐ๋ณต์ํค๋ ๋ชจ์์คํ์ ๊ตฌํํ์.
End of explanation
heads_count_10 = coin_experiment(10)
heads_count_10
Explanation: ์ ์ฝ๋๋ฅผ ์ค๋ช
ํ๋ฉด ๋ค์๊ณผ ๊ฐ๋ค.
๋จผ์ num_repeat ๋งํผ ๋ฐ๋ณตํ 30๋ฒ ๋์ ๋์ง๊ธฐ์ ๊ฒฐ๊ณผ๋ฅผ ์ ์ฅํ (num_repeat, 1) ๋ชจ์์
2์ฐจ์ ์ด๋ ์ด๋ฅผ ์์ฑํ๋ค.
์ด๋ฅผ ์ํด np.empty ํจ์๋ฅผ ํ์ฉํ๋ค.
np.zeros ํจ์์ ์ ์ฌํ๊ฒ ์๋ํ์ง๋ง ์์ฑ๋ ๋ ํญ๋ชฉ ๊ฐ๋ค์ ์๋ฏธ ์์ด ์์๋ก ์์ฑ๋๋ค.
์ดํ์ ๊ฐ๊ฐ์ ํญ๋ชฉ์ ๋ฐ๋์ ์ง์ ๋์ด์ผ ํ๋ฉฐ, ๊ทธ๋ ์ง ์์ผ๋ฉด ์ค๋ฅ๊ฐ ๋ฐ์ํ๋ค.
heads_count_array = np.empty([num_repeat,1], dtype=int)
num_repeat ๋งํผ 30๋ฒ ๋์ ๋์ง๊ธฐ ๋ชจ์์คํ์ ์คํํ์ฌ ์์ ์์ฑํ 2์ฐจ์ ์ด๋ ์ด์ ์ฐจ๋ก๋๋ก ์ ์ฅํ๋ค.
์ ์ฅ์ ์ด๋ ์ด ์ธ๋ฑ์ฑ์ ํ์ฉํ๋ค.
for times in np.arange(num_repeat):
experiment = np.random.randint(0,2,num_tosses)
heads_count_array[times] = experiment[experiment==1].shape[0]
์์
๋์ 30๋ฒ ๋์ง๊ธฐ๋ฅผ 10๋ฒ ๋ชจ์์คํํ ๊ฒฐ๊ณผ๋ฅผ ์์ ๋ก ์ดํด๋ณด์.
End of explanation
heads_count_1000 = coin_experiment(1000)
Explanation: ์์
30๋ฒ ๋์ ๋์ง๊ธฐ๋ฅผ 1000๋ฒ ๋ชจ์์คํํ ๊ฒฐ๊ณผ๋ฅผ ์์ ๋ก ์ดํด๋ณด์.
End of explanation
heads_count_1000[:10]
Explanation: 100๋ฒ์ ๋ชจ์์คํ ๊ฒฐ๊ณผ ์ค์ ์ฒ์ 10๊ฐ์ ๊ฒฐ๊ณผ๋ฅผ ํ์ธํด๋ณด์.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set(color_codes = True)
Explanation: ๋ชจ์์คํ ๊ฒฐ๊ณผ ๊ทธ๋ํ๋ก ํ์ธํ๊ธฐ
๋ชจ์์คํ ๊ฒฐ๊ณผ๋ฅผ ํ์คํธ๊ทธ๋จ์ผ๋ก ํ์ธํด๋ณผ ์ ์๋ค.
์ฌ๊ธฐ์๋ seaborn ์ด๋ ๋ชจ๋์ ํ์ฉํ์ฌ ๋ณด๋ค ๋ฉ์ง ๊ทธ๋ํ๋ฅผ ๊ทธ๋ฆฌ๋ ๋ฒ์ ๊ธฐ์ตํด๋๋ฉด ์ข๋ค.
End of explanation
sns.distplot(heads_count_1000, kde=False)
Explanation: ์๋ ๊ทธ๋ํ๋ ๋์ 30๋ฒ ๋์ง๊ธฐ๋ฅผ 100๋ฒ ๋ชจ์์คํํ์ ๋ ์๋ฉด์ด ๋์จ ํ์๋ฅผ ํ์คํ ๊ทธ๋จ์ผ๋ก ๋ณด์ฌ์ค๋ค.
End of explanation
sns.distplot(heads_count_1000, kde=True)
Explanation: ์๋ ๊ทธ๋ํ๋ ์ปค๋๋ฐ๋์ถ์ (kde = kernel density estimation) ๊ธฐ๋ฒ์ ์ ์ฉํ์ฌ ๋ฐ์ดํฐ๋ฅผ
๋ณด๋ค ์ดํดํ๊ธฐ ์ฝ๋๋ก ๋์์ฃผ๋ ๊ทธ๋ํ๋ฅผ ํจ๊ป ๋ณด์ฌ์ค๋ค.
End of explanation
# array of the runs in which 24 or more heads appeared
mask = heads_count_1000>=24
heads_count_1000[mask]
Explanation: 1000๋ฒ์ ๋ชจ์์คํ์์ ์๋ฉด์ด 24ํ ์ด์ ๋์จ ์คํ์ด ๋ช ๋ฒ์ธ์ง๋ฅผ ํ์ธํด๋ณด์.
์์ ์ฌ์ฉํ ๊ธฐ์ ์ธ ๋ง์คํฌ ์ธ๋ฑ์ฑ ๊ธฐ์ ์ ํ์ฉํ๋ค.
์ฃผ์: ์์ ์ด๋ก ์ ์ผ๋ก ์ดํด๋ณด์์ ๋ 1500๋ฒ ์ ๋ ์คํํด์ผ ํ ๋ฒ ์ ๋ ๋ณผ ์ ์๋ค๊ณ ๊ฒฐ๋ก ์ง์์์ ๊ธฐ์ตํ๋ผ.
End of explanation
heads_count_10000 = coin_experiment(10000)
sns.distplot(heads_count_10000, kde=False)
# array of the runs in which 24 or more heads appeared
mask = heads_count_10000>=24
heads_count_10000[mask].shape[0]
Explanation: ์ ๋ชจ์์คํ์์๋ ํ ๋ฒ ์๋ฉด์ด 24ํ ์ด์ ๋์๋ค.
์ด์ ์ ๋ชจ์์คํ์ 10,000๋ฒ ๋ฐ๋ณตํด๋ณด์.
End of explanation
def coin_experiment_2(num_repeat):
experiment = np.random.randint(0,2,[num_repeat, num_tosses])
return experiment.sum(axis=1)
heads_count = coin_experiment_2(100000)
sns.distplot(heads_count, kde=False)
mask = heads_count>=24
heads_count[mask].shape[0]/100000
Explanation: ์ ๋ชจ์์คํ์์๋ ๋์ 30๋ฒ ๋์ง๊ธฐ๋ฅผ 10,000๋ฒ ๋ฐ๋ณตํ์ ๋ 24๋ฒ ์ด์ ์๋ฉด์ด ๋์จ ๊ฒฝ์ฐ๊ฐ 6๋ฒ ์์๋ค.
์์ ํ๋ฅ ์ ์ผ๋ก 0.0007%, ์ฆ 10,000๋ฒ์ 7๋ฒ ์ ๋ ๋์์ผ ํ๋ค๋ ๊ฒ๊ณผ ๊ฑฐ์ ์ผ์นํ๋ค.
์ ์์ ์ธ ๋์ ์ธ๊ฐ?
๋ชจ์์คํ์ ๊ฒฐ๊ณผ ์ญ์ ๋์ ์ 30๋ฒ ๋์ ธ์ 24๋ฒ ์ด์ ์๋ฉด์ด ๋์ฌ ํ๋ฅ ์ด 5%์ ํฌ๊ฒ ๋ฏธ์น์ง ๋ชปํ๋ค.
์ด๋ฐ ๊ฒฝ์ฐ ์ฐ๋ฆฌ๋ ์ฌ์ฉํ ๋์ ์ด ์ ์์ ์ธ ๋์ ์ด๋ผ๋ ์๊ฐ์ค(H0)์ ๋ฐ์๋ค์ผ ์ ์๋ค๊ณ ๋งํ๋ค.
์ฆ, ๊ธฐ๊ฐํด์ผ ํ๋ค.
๊ฐ์ค๊ฒ์ ์ ์ํด ์ง๊ธ๊น์ง ๋ค๋ฃฌ ๋ด์ฉ์ ์ ๋ฆฌํ๋ฉด ๋ค์๊ณผ ๊ฐ๋ค.
๊ฐ์ค๊ฒ์ 6๋จ๊ณ
1) ๊ฒ์ ํ ๊ฐ์ค์ ๊ฒฐ์ ํ๋ค.
* ์๊ฐ์ค: ์ฌ๊ธฐ์๋ "์ ์์ ์ธ ๋์ ์ด๋ค" ๋ผ๋ ๊ฐ์ค ์ฌ์ฉ
2) ๊ฐ์ค์ ๊ฒ์ฆํ ๋ ์ฌ์ฉํ ํต๊ณ๋ฐฉ์์ ์ ํํ๋ค.
* ๊ธฐ์๋ ์ดํญ๋ถํฌ ํ๋ฅ ์ ํ
3) ๊ธฐ๊ฐ์ญ์ ์ ํ๋ค.
* ์ฌ๊ธฐ์๋ ์๋ฉด์ด ๋์ฌ ํ์๋ฅผ ๊ธฐ์ค์ผ๋ก ์์ 5%๋ก ์ ํจ
* ์๋ฉด์ด 24๋ฒ ๋์ฌ ํ๋ฅ ์ด 5% ์ด์๋์ด์ผ ์ธ์ ํ๋ค๋ ์๋ฏธ์.
4) ๊ฒ์ ํต๊ณ๋ฅผ ์ํ p-๊ฐ์ ์ฐพ๋๋ค.
* ์ฌ๊ธฐ์๋ ๋ชจ์์คํ์ ์ด์ฉํ์ฌ ๊ฐ์ค์ ์ฌ์ฉ๋ ์ฌ๊ฑด์ด ๋ฐ์ํ ํ๋ฅ ์ ๊ณ์ฐ.
* ๊ฒฝ์ฐ์ ๋ฐ๋ผ ์ด๋ก ์ ์ผ๋ก๋ ๊ณ์ฐ ๊ฐ๋ฅ
5) ํ๋ณธ๊ฒฐ๊ณผ๊ฐ ๊ธฐ๊ฐ์ญ ์์ ๋ค์ด์ค๋์ง ํ์ธํ๋ค.
* ์ฌ๊ธฐ์๋ 5% ์ดํ์ธ์ง ํ์ธ
6) ๊ฒฐ์ ์ ๋ด๋ฆฐ๋ค.
* ์ฌ๊ธฐ์๋ "์ ์์ ์ธ ๋์ ์ด๋ค" ๋ผ๋ ์๊ฐ์ค์ ๊ธฐ๊ฐํจ.
์ฐ์ต๋ฌธ์
์ฐ์ต
๋ชจ์์คํ ๋ฐ๋ณต์ ๊ตฌํํ๋ coin_experiment ํจ์๋ฅผ for๋ฌธ์ ์ฌ์ฉํ์ง ์๊ณ ๊ตฌํํด๋ณด์.
๊ฒฌ๋ณธ๋ต์:
End of explanation
from numpy.random import binomial
Explanation: ์ฐ์ต
numpy.random ๋ชจ๋์ ์ง๊ธ๊น์ง ๋ค๋ฃฌ ์ดํญ๋ถํฌ ํ๋ฅ ์ ๊ณ์ฐํด์ฃผ๋ ํจ์์ธ binomial์ด ์ด๋ฏธ ๊ตฌํ๋์ด ์๋ค.
End of explanation
an_experiment = binomial(30, 0.5, 10000)
an_experiment
Explanation: ์๋ ์ฝ๋๋ B(30, 1.5)๋ฅผ ๋ฐ๋ฅด๋ ํ๋ฅ ๋ณ์๋ฅผ 10,000๋ฒ ๋ฐ๋ณตํ ๊ฒฐ๊ณผ๋ฅผ ๋ณด์ฌ์ค๋ค.
End of explanation
an_experiment[an_experiment>=24].shape[0]
Explanation: ์ ๊ฒฐ๊ณผ๋ฅผ ์ด์ฉํ์ฌ ์์ ๋ถ์ํ ๊ฒฐ๊ณผ์ ์ ์ฌํ ๊ฒฐ๊ณผ๋ฅผ ์ป๋๋ค๋ ๊ฒ์ ํ์ธํ ์ ์๋ค.
End of explanation |
1,794 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Computation
The labels associated with DataArray and Dataset objects enable some powerful shortcuts for computation, notably including aggregation and broadcasting by dimension names.
Basic array math
Arithmetic operations with a single DataArray automatically vectorize (like numpy) over all array values
Step1: You can also use any of numpy's or scipy's many ufunc functions directly on a DataArray
Step2: Data arrays also implement many numpy.ndarray methods
Step3: Missing values
xarray objects borrow the isnull(), notnull(), count(), dropna() and fillna() methods for working with missing data from pandas
Step4: Like pandas, xarray uses the float value np.nan (not-a-number) to represent missing values.
Aggregation
Aggregation methods have been updated to take a dim argument instead of axis. This allows for very intuitive syntax for aggregation methods that are applied along particular dimension(s)
Step5: If you need to figure out the axis number for a dimension yourself (say, for wrapping code designed to work with numpy arrays), you can use the get_axis_num() method
Step6: These operations automatically skip missing values, like in pandas
Step7: If desired, you can disable this behavior by invoking the aggregation method with skipna=False.
Rolling window operations
DataArray objects include a rolling() method. This method supports rolling window aggregation
Step8: rolling() is applied along one dimension using the name of the dimension as a key (e.g. y) and the window size as the value (e.g. 3). We get back a Rolling object
Step9: The label position and minimum number of periods in the rolling window are controlled by the center and min_periods arguments
Step10: Note that rolling window aggregations are much faster (both asymptotically and because they avoid a loop in Python) when bottleneck is installed. Otherwise, we fall back to a slower, pure Python implementation.
Finally, we can manually iterate through Rolling objects
Step11: Broadcasting by dimension name
DataArray objects automatically align themselves ("broadcasting" in the numpy parlance) by dimension name instead of axis order. With xarray, you do not need to transpose arrays or insert dimensions of length 1 to get array operations to work, as commonly done in numpy with np.reshape() or np.newaxis.
This is best illustrated by a few examples. Consider two one-dimensional arrays with different sizes aligned along different dimensions
Step12: With xarray, we can apply binary mathematical operations to these arrays, and their dimensions are expanded automatically
Step13: Moreover, dimensions are always reordered to the order in which they first appeared
Step14: This means, for example, that you always subtract an array from its transpose | Python Code:
%matplotlib inline
import numpy as np
import xarray as xr
arr = xr.DataArray(np.random.randn(2, 3), [('x', ['a', 'b']), ('y', [10, 20, 30])])
arr - 3
abs(arr)
Explanation: Computation
The labels associated with DataArray and Dataset objects enable some powerful shortcuts for computation, notably including aggregation and broadcasting by dimension names.
Basic array math
Arithmetic operations with a single DataArray automatically vectorize (like numpy) over all array values:
End of explanation
np.sin(arr)
Explanation: You can also use any of numpy's or scipy's many ufunc functions directly on a DataArray:
End of explanation
arr.round(2)
arr.T
Explanation: Data arrays also implement many numpy.ndarray methods:
End of explanation
x = xr.DataArray([0, 1, np.nan, np.nan, 2], dims=['x'])
x.isnull()
x.notnull()
x.count()
x.dropna(dim='x')
x.fillna(-1)
Explanation: Missing values
xarray objects borrow the isnull(), notnull(), count(), dropna() and fillna() methods for working with missing data from pandas:
End of explanation
arr.sum(dim='x')
arr.std(['x', 'y'])
arr.min()
Explanation: Like pandas, xarray uses the float value np.nan (not-a-number) to represent missing values.
Aggregation
Aggregation methods have been updated to take a dim argument instead of axis. This allows for very intuitive syntax for aggregation methods that are applied along particular dimension(s):
End of explanation
arr.get_axis_num('y')
Explanation: If you need to figure out the axis number for a dimension yourself (say, for wrapping code designed to work with numpy arrays), you can use the get_axis_num() method:
End of explanation
xr.DataArray([1, 2, np.nan, 3]).mean()
Explanation: These operations automatically skip missing values, like in pandas:
End of explanation
arr = xr.DataArray(np.arange(0, 7.5, 0.5).reshape(3, 5), dims=('x', 'y'))
arr
Explanation: If desired, you can disable this behavior by invoking the aggregation method with skipna=False.
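For example, a one-line sketch using the same small array as above:
xr.DataArray([1, 2, np.nan, 3]).mean(skipna=False)
# the result is nan because the missing value is no longer skipped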
Rolling window operations
DataArray objects include a rolling() method. This method supports rolling window aggregation:
End of explanation
arr.rolling(y=3)
Explanation: rolling() is applied along one dimension using the name of the dimension as a key (e.g. y) and the window size as the value (e.g. 3). We get back a Rolling object:
End of explanation
arr.rolling(y=3, min_periods=2, center=True)
r = arr.rolling(y=3)
r.mean()
r.reduce(np.std)
Explanation: The label position and minimum number of periods in the rolling window are controlled by the center and min_periods arguments:
End of explanation
for label, arr_window in r:
print('==============================\n', label, '\n-------------------------\n', arr_window)
Explanation: Note that rolling window aggregations are much faster (both asymptotically and because they avoid a loop in Python) when bottleneck is installed. Otherwise, we fall back to a slower, pure Python implementation.
Finally, we can manually iterate through Rolling objects:
End of explanation
a = xr.DataArray([1, 2], [('x', ['a', 'b'])])
a
b = xr.DataArray([-1, -2, -3], [('y', [10, 20, 30])])
b
Explanation: Broadcasting by dimension name
DataArray objects automatically align themselves ("broadcasting" in the numpy parlance) by dimension name instead of axis order. With xarray, you do not need to transpose arrays or insert dimensions of length 1 to get array operations to work, as commonly done in numpy with np.reshape() or np.newaxis.
This is best illustrated by a few examples. Consider two one-dimensional arrays with different sizes aligned along different dimensions:
End of explanation
a * b
Explanation: With xarray, we can apply binary mathematical operations to these arrays, and their dimensions are expanded automatically:
End of explanation
c = xr.DataArray(np.arange(6).reshape(3, 2), [b['y'], a['x']])
c
a + c
Explanation: Moreover, dimensions are always reordered to the order in which they first appeared:
End of explanation
c - c.T
Explanation: This means, for example, that you always subtract an array from its transpose:
End of explanation |
1,795 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classification
Class MLPClassifier implements a multi-layer perceptron (MLP) algorithm that trains using Backpropagation.
MLP trains on two arrays
Step1: MLP can fit a non-linear model to the training data. clf.coefs_ contains the weight matrices that constitute the model parameters
Step2: Currently, MLPClassifier supports only the Cross-Entropy loss function, which allows probability estimates by running the predict_proba method.
MLP trains using Backpropagation. More precisely, it trains using some form of gradient descent and the gradients are calculated using Backpropagation. For classification, it minimizes the Cross-Entropy loss function, giving a vector of probability estimates P(y|x) per sample x
Step3: MLPClassifier supports multi-class classification by applying Softmax as the output function.
Further, the model supports multi-label classification in which a sample can belong to more than one class. For each class, the raw output passes through the logistic function. Values larger or equal to 0.5 are rounded to 1, otherwise to 0. For a predicted output of a sample, the indices where the value is 1 represents the assigned classes of that sample
Step4: Regularization
Both MLPRegressor and MLPClassifier use the parameter alpha for L2 regularization
Step5: Tips on Practical Use
Multi-layer Perceptron is sensitive to feature scaling, so it is highly recommended to scale your data. For example, scale each attribute on the input vector X to [0, 1] or [-1, +1], or standardize it to have mean 0 and variance 1. Note that you must apply the same scaling to the test set for meaningful results. You can use StandardScaler for standardization.
Step6: An alternative and recommended approach is to use StandardScaler in a Pipeline
* Finding a reasonable regularization parameter \alpha is best done using GridSearchCV, usually in the range 10.0 ** -np.arange(1,7).
* Empirically, we observed that L-BFGS converges faster and with better solutions on small datasets. For relatively large datasets, however, Adam is very robust. It usually converges quickly and gives pretty good performance. SGD with momentum or Nesterov's momentum, on the other hand, can perform better than those two algorithms if learning rate is correctly tuned.
More control with warm_start
If you want more control over stopping criteria or learning rate in SGD, or want to do additional monitoring, using warm_start=True and max_iter=1 and iterating yourself can be helpful
Step7: Compare Stochastic learning strategies for MLPClassifier
This example visualizes some training loss curves for different stochastic learning strategies, including SGD and Adam. Because of time-constraints, we use several small datasets, for which L-BFGS might be more suitable. The general trend shown in these examples seems to carry over to larger datasets, however.
Note that those results can be highly dependent on the value of learning_rate_init. | Python Code:
## Input
X = [[0., 0.], [1., 1.]]
## Labels
y = [0, 1]
## Create Model
clf = MLPClassifier(solver='lbfgs', alpha=1e-5,
hidden_layer_sizes=(5, 2), random_state=1)
## Fit
clf.fit(X, y)
## Make Predictions
clf.predict([[2., 2.], [-1., -2.]])
Explanation: Classification
Class MLPClassifier implements a multi-layer perceptron (MLP) algorithm that trains using Backpropagation.
MLP trains on two arrays: array X of size (n_samples, n_features), which holds the training samples represented as floating point feature vectors; and array y of size (n_samples,), which holds the target values (class labels) for the training samples:
End of explanation
## Weight matrices/ model parameters
[coef.shape for coef in clf.coefs_]
## Coefficents of classifier
clf.coefs_
Explanation: MLP can fit a non-linear model to the training data. clf.coefs_ contains the weight matrices that constitute the model parameters:
End of explanation
clf.predict_proba([[0, 0.], [0., 0.]])
Explanation: Currently, MLPClassifier supports only the Cross-Entropy loss function, which allows probability estimates by running the predict_proba method.
MLP trains using Backpropagation. More precisely, it trains using some form of gradient descent and the gradients are calculated using Backpropagation. For classification, it minimizes the Cross-Entropy loss function, giving a vector of probability estimates P(y|x) per sample x:
End of explanation
X = [[0., 0.], [1., 1.]]
y = [[0, 1], [1, 1]]
clf = MLPClassifier(solver='lbfgs', alpha=1e-5,
hidden_layer_sizes=(15,), random_state=1)
clf.fit(X, y)
clf.predict([[1., 2.]])
clf.predict([[0., 0.]])
Explanation: MLPClassifier supports multi-class classification by applying Softmax as the output function.
Further, the model supports multi-label classification in which a sample can belong to more than one class. For each class, the raw output passes through the logistic function. Values larger or equal to 0.5 are rounded to 1, otherwise to 0. For a predicted output of a sample, the indices where the value is 1 represents the assigned classes of that sample:
End of explanation
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_moons, make_circles, make_classification
from sklearn.neural_network import MLPClassifier
h = .02 # step size in the mesh
alphas = np.logspace(-5, 3, 5)
names = []
for i in alphas:
names.append('alpha ' + str(i))
classifiers = []
for i in alphas:
classifiers.append(MLPClassifier(alpha=i, random_state=1))
X, y = make_classification(n_features=2, n_redundant=0, n_informative=2,
random_state=0, n_clusters_per_class=1)
rng = np.random.RandomState(2)
X += 2 * rng.uniform(size=X.shape)
linearly_separable = (X, y)
datasets = [make_moons(noise=0.3, random_state=0),
make_circles(noise=0.2, factor=0.5, random_state=1),
linearly_separable]
figure = plt.figure(figsize=(17, 9))
i = 1
# iterate over datasets
for X, y in datasets:
# preprocess dataset, split into training and test part
X = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.4)
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
# just plot the dataset first
cm = plt.cm.RdBu
cm_bright = ListedColormap(['#FF0000', '#0000FF'])
ax = plt.subplot(len(datasets), len(classifiers) + 1, i)
# Plot the training points
ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright)
# and testing points
ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.6)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xticks(())
ax.set_yticks(())
i += 1
# iterate over classifiers
for name, clf in zip(names, classifiers):
ax = plt.subplot(len(datasets), len(classifiers) + 1, i)
clf.fit(X_train, y_train)
score = clf.score(X_test, y_test)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
if hasattr(clf, "decision_function"):
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
else:
Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
# Put the result into a color plot
Z = Z.reshape(xx.shape)
ax.contourf(xx, yy, Z, cmap=cm, alpha=.8)
# Plot also the training points
ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright)
# and testing points
ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright,
alpha=0.6)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xticks(())
ax.set_yticks(())
ax.set_title(name)
ax.text(xx.max() - .3, yy.min() + .3, ('%.2f' % score).lstrip('0'),
size=15, horizontalalignment='right')
i += 1
figure.subplots_adjust(left=.02, right=.98)
plt.show()
Explanation: Regularization
Both MLPRegressor and MLPClassifier use the parameter alpha for the regularization (L2 regularization) term, which helps avoid overfitting by penalizing weights with large magnitudes. The following plot displays how the decision function varies with the value of alpha.
Varying regularization in Multi-layer Perceptron
A comparison of different values for the regularization parameter 'alpha' on synthetic datasets. The plot shows that different alphas yield different decision functions.
Alpha is a parameter for regularization term, aka penalty term, that combats overfitting by constraining the size of the weights. Increasing alpha may fix high variance (a sign of overfitting) by encouraging smaller weights, resulting in a decision boundary plot that appears with lesser curvatures. Similarly, decreasing alpha may fix high bias (a sign of underfitting) by encouraging larger weights, potentially resulting in a more complicated decision boundary.
End of explanation
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
# Don't cheat - fit only on training data
scaler.fit(X_train)
X_train = scaler.transform(X_train)
# apply same transformation to test data
X_test = scaler.transform(X_test)
Explanation: Tips on Practical Use
Multi-layer Perceptron is sensitive to feature scaling, so it is highly recommended to scale your data. For example, scale each attribute on the input vector X to [0, 1] or [-1, +1], or standardize it to have mean 0 and variance 1. Note that you must apply the same scaling to the test set for meaningful results. You can use StandardScaler for standardization.
End of explanation
X = [[0., 0.], [1., 1.]]
y = [0, 1]
clf = MLPClassifier(hidden_layer_sizes=(15,), random_state=1, max_iter=1, warm_start=True)
for i in range(10):
clf.fit(X, y)
# additional monitoring / inspection
Explanation: An alternative and recommended approach is to use StandardScaler in a Pipeline
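A minimal sketch of that recommendation (the layer sizes and the X_train/y_train variables from the earlier example are placeholders here):
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
pipe = make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(15,), random_state=1))
pipe.fit(X_train, y_train)   # scaling statistics are learned from the training data only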
* Finding a reasonable regularization parameter \alpha is best done using GridSearchCV, usually in the range 10.0 ** -np.arange(1,7).
* Empirically, we observed that L-BFGS converges faster and with better solutions on small datasets. For relatively large datasets, however, Adam is very robust. It usually converges quickly and gives pretty good performance. SGD with momentum or Nesterov's momentum, on the other hand, can perform better than those two algorithms if learning rate is correctly tuned.
More control with warm_start
If you want more control over stopping criteria or learning rate in SGD, or want to do additional monitoring, using warm_start=True and max_iter=1 and iterating yourself can be helpful:
End of explanation
import matplotlib.pyplot as plt
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import MinMaxScaler
from sklearn import datasets
# different learning rate schedules and momentum parameters
params = [{'solver': 'sgd', 'learning_rate': 'constant', 'momentum': 0,
'learning_rate_init': 0.2},
{'solver': 'sgd', 'learning_rate': 'constant', 'momentum': .9,
'nesterovs_momentum': False, 'learning_rate_init': 0.2},
{'solver': 'sgd', 'learning_rate': 'constant', 'momentum': .9,
'nesterovs_momentum': True, 'learning_rate_init': 0.2},
{'solver': 'sgd', 'learning_rate': 'invscaling', 'momentum': 0,
'learning_rate_init': 0.2},
{'solver': 'sgd', 'learning_rate': 'invscaling', 'momentum': .9,
'nesterovs_momentum': True, 'learning_rate_init': 0.2},
{'solver': 'sgd', 'learning_rate': 'invscaling', 'momentum': .9,
'nesterovs_momentum': False, 'learning_rate_init': 0.2},
{'solver': 'adam', 'learning_rate_init': 0.01}]
labels = ["constant learning-rate", "constant with momentum",
"constant with Nesterov's momentum",
"inv-scaling learning-rate", "inv-scaling with momentum",
"inv-scaling with Nesterov's momentum", "adam"]
plot_args = [{'c': 'red', 'linestyle': '-'},
{'c': 'green', 'linestyle': '-'},
{'c': 'blue', 'linestyle': '-'},
{'c': 'red', 'linestyle': '--'},
{'c': 'green', 'linestyle': '--'},
{'c': 'blue', 'linestyle': '--'},
{'c': 'black', 'linestyle': '-'}]
def plot_on_dataset(X, y, ax, name):
# for each dataset, plot learning for each learning strategy
print("\nlearning on dataset %s" % name)
ax.set_title(name)
X = MinMaxScaler().fit_transform(X)
mlps = []
if name == "digits":
# digits is larger but converges fairly quickly
max_iter = 15
else:
max_iter = 400
for label, param in zip(labels, params):
print("training: %s" % label)
mlp = MLPClassifier(verbose=0, random_state=0,
max_iter=max_iter, **param)
mlp.fit(X, y)
mlps.append(mlp)
print("Training set score: %f" % mlp.score(X, y))
print("Training set loss: %f" % mlp.loss_)
for mlp, label, args in zip(mlps, labels, plot_args):
ax.plot(mlp.loss_curve_, label=label, **args)
fig, axes = plt.subplots(2, 2, figsize=(15, 10))
# load / generate some toy datasets
iris = datasets.load_iris()
digits = datasets.load_digits()
data_sets = [(iris.data, iris.target),
(digits.data, digits.target),
datasets.make_circles(noise=0.2, factor=0.5, random_state=1),
datasets.make_moons(noise=0.3, random_state=0)]
for ax, data, name in zip(axes.ravel(), data_sets, ['iris', 'digits',
'circles', 'moons']):
plot_on_dataset(*data, ax=ax, name=name)
fig.legend(ax.get_lines(), labels=labels, ncol=3, loc="upper center")
plt.show()
Explanation: Compare Stochastic learning strategies for MLPClassifier
This example visualizes some training loss curves for different stochastic learning strategies, including SGD and Adam. Because of time-constraints, we use several small datasets, for which L-BFGS might be more suitable. The general trend shown in these examples seems to carry over to larger datasets, however.
Note that those results can be highly dependent on the value of learning_rate_init.
End of explanation |
1,796 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have a dataframe, e.g: | Problem:
import pandas as pd
df = pd.DataFrame({'Date': ['20.07.2018', '20.07.2018', '21.07.2018', '21.07.2018'],
'B': [10, 1, 0, 1],
'C': [8, 0, 1, 0]})
def g(df):
df1 = df.groupby('Date').agg(lambda x: (x%2==0).sum())
df2 = df.groupby('Date').agg(lambda x: (x%2==1).sum())
return df1, df2
result1, result2 = g(df.copy()) |
1,797 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MultiGraph
Create 3 views
Step1: Code the user can supply to view the streaming data
Given the streams with the 3 different moving averages, create 3 separate views to obtain the data.
Step2: Submit To Distributed Streams Install
Step3: Graph The Stock Price & Moving Averages | Python Code:
from streamsx.topology.topology import Topology
from streamsx.topology import context
from some_module import jsonRandomWalk, movingAverage
#from streamsx import rest
import json
# Define operators
rw = jsonRandomWalk()
ma_150 = movingAverage(150)
ma_50 = movingAverage(50)
# Define topology & submit
top = Topology("myTop")
ticker_price = top.source(rw)
ma_150_stream = ticker_price.map(ma_150)
ma_50_stream = ticker_price.map(ma_50)
Explanation: MultiGraph
Create 3 views:
* A view on a randomly generated stock price
* A view on a moving average of the last 50 stock prices
* A view on a moving average of the last 150 stock prices
End of explanation
ticker_view = ticker_price.view()
ma_150_view = ma_150_stream.view()
ma_50_view = ma_50_stream.view()
Explanation: Code the user can supply to view the streaming data
Given the streams with the 3 different moving averages, create 3 separate views to obtain the data.
End of explanation
context.submit("DISTRIBUTED", top.graph, username = "streamsadmin", password = "passw0rd")
Explanation: Submit To Distributed Streams Install
End of explanation
%matplotlib inline
%matplotlib notebook
from streamsx.rest import multi_graph_every
l = [ticker_view, ma_150_view, ma_50_view]
multi_graph_every(l, 'val', 1.0)
Explanation: Graph The Stock Price & Moving Averages
End of explanation |
1,798 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regular Expressions
20. Reading JSON data
Read the JSON file of Wikipedia articles and display the body text of the article about the UK.
For problems 21-29, run them against the article text extracted here.
Step1: 21. Extract lines containing category names
Extract the lines in the article that declare category names. | Python Code:
import pandas as pd
import json
def get_article(title):
for line in open('jawiki-country.json', 'r'):
data = json.loads(line)
if data['title'] == title:
return data['text'].split('\n')
England = get_article('イギリス')
print(type(England), England)
Explanation: Regular Expressions
20. Reading JSON data
Read the JSON file of Wikipedia articles and display the body text of the article about the UK.
For problems 21-29, run them against the article text extracted here.
End of explanation
categorys = [line for line in England if 'Category:' in line]
categorys
Explanation: 21. Extract lines containing category names
Extract the lines in the article that declare category names.
End of explanation |
1,799 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Class Coding Lab
Step1: Part 1
Step2: Testing Out your API
The documentation for the API can be found here
Step3: Next we setup the headers and the rest is like calling any other API...
Step4: The response
Notice the service does a nice job recognizing $5 as a quantity and next week as a date range.
It also figures out that Syracuse, Rochester and Buffalo are all locations. VERY COOL
Name Entity Recognition has applications in identifying personally identifiable information in text such as Names, emails, phone numbers, credit cards, social-security numbers, etc.
It can also be used to identify places, quantities, time, and is useful for providing context to news headlines.
For each recognized entity, you are provided with a score (confidence score between 0-1), a type, and sub-type as appropriate.
1.1 You Code
Re-write the example above to perform named entity extraction on the following text
Step5: Curious as to what you can detect?
Check out
Step6: 1.2 You Code
Step7: Now that you know how to use it, what can you do with it?
Text Analytics technologies such as named entity recognition, key phrase extraction and sentiment analysis are best used when combined with another service. For example
Step8: Key Phrases is to Sentiment as Peanut Butter is to Jelly!
From our key phrases analysis, it looks like customers are talking about eggs but are they speaking positively or negatively about their egg experiences? This is why sentiment analysis accompanies key phrase analysis. Key phrase identifies what they are talking about, and sentiment provides the context around it.
1.4 You Code
Perform sentiment analysis over the reviews to determine who likes the eggs and who does not. This time you must build the payload variable with the documents key yourself.
Step9: Putting it all together.
Let's put together a feature which might be useful for automating a social media account or chat-bot. If you input some text that says something positive about any location, the program should respond
Step10: Metacognition
Rate your comfort level with this week's material so far.
1 ==> I don't understand this at all yet and need extra help. If you choose this please try to articulate that which you do not understand to the best of your ability in the questions and comments section below.
2 ==> I can do this with help or guidance from other people or resources. If you choose this level, please indicate HOW this person helped you in the questions and comments section below.
3 ==> I can do this on my own without any help.
4 ==> I can do this on my own and can explain/teach how to do it to others.
--== Double-Click Here then Enter a Number 1 through 4 Below This Line ==--
Questions And Comments
Record any questions or comments you have about this lab that you would like to discuss in your recitation. It is expected you will have questions if you did not complete the code sections correctly. Learning how to articulate what you do not understand is an important skill of critical thinking. Write them down here so that you remember to ask them in your recitation. We expect you will take responsibility for your learning and ask questions in class.
--== Double-click Here then Enter Your Questions Below this Line ==-- | Python Code:
# Run this to make sure you have the pre-requisites!
!pip install -q requests
# start by importing the modules we will need
import requests
import json
Explanation: Class Coding Lab: Web Services and APIs
Overview
The web has long evolved from user-consumption to device consumption. In the early days of the web when you wanted to check the weather, you opened up your browser and visited a website. Nowadays your smart watch / smart phone retrieves the weather for you and displays it on the device. Your device can't predict the weather. It's simply consuming a weather based service.
The key to making device consumption work is API's (Application Program Interfaces). Products we use every day like smartphones, Amazon's Alexa, and gaming consoles all rely on API's. They seem "smart" and "powerful" but in actuality they're only interfacing with smart and powerful services in the cloud.
API consumption is the new reality of programming; it is why we cover it in this course. Once you understand how to consume API's you can write a program to do almost anything and harness the power of the internet to make your own programs look "smart" and "powerful."
This lab will be a walk-through for how to use a Web API. Specifically, we will use the Microsoft Azure Text Analytics API to add features like sentiment and entity recognition to our programs.
End of explanation
# record these values in code, too
key = "key-here"
endpoint = "endpoint-url-here"
Explanation: Part 1: Configure the Azure Text Analytics API
First, sign up for Azure for Students
First you will need to sign up for Microsoft Azure for Students. This is free for all Syracuse University Students. Azure is a cloud provider from Microsoft.
Go to https://azure.microsoft.com/en-us/free/students/
Click Activate Now
Login with your SU email [email protected]
Use your NetID Password.
When you log-in it should take you to the Azure Portal https://portal.azure.com
Next Add Text Analytics
From inside the Azure portal:
Click Create a Resource
Choose Text Analytics
Select Create
Fill out the form:
Subscription: Azure for Students
Resource Group: ist256-yournetid (for example: ist256-mafudge)
Location: Central US
Name: ist256-yournetid-text-analytics (eg. ist256-mafudge-text-analytics)
Pricing Tier: F0 (Important: Select the free tier!)
Click create.
When the deployment is done, click Go To Resource.
You are now on the quickstart screen
Click on Keys And Endpoint
Record your Key and Endpoint here:
Key1:
Endpoint:
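As a quick, optional sanity check (a sketch, assuming the key and endpoint values above have been filled in and that requests is already imported), a call to the language-detection endpoint should return HTTP 200 if the resource was created correctly:
# Optional sanity check for the key/endpoint (sketch)
check = requests.post(f'{endpoint}text/analytics/v3.0/languages',
                      headers={'Ocp-Apim-Subscription-Key': key},
                      json={"documents": [{"id": "1", "text": "Hello world"}]})
print(check.status_code)   # 200 means the key and endpoint work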
End of explanation
payload = { "documents": [
{"id": "1", "text": "I would not pay $5 to see that Star Wars movie next week." },
{"id": "2", "text": "Syracuse and Rochester are nicer cities than Buffalo." }
]
}
Explanation: Testing Out your API
The documentation for the API can be found here: https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/quickstarts/client-libraries-rest-api?tabs=version-3-1&pivots=rest-api
While there is a Python module, we are going to use requests and the REST API. Why? Practice learning how to call Web API's of course!
Let's give it a try by using named entity recognition (NER). This attempts to extract meaning from text and is quite useful in applications which require natual language understanding.
For all requests, you provide:
- Your subscription key in the header, under the dictionary key Ocp-Apim-Subscription-Key
- A list of documents you wish the API to act upon. This is delivered via HTTP POST and in JSON format.
All of this is easy to do in Python, of course!
For example, let's extract entities from the following phrases:
1. "I would not pay $5 to see that Star Wars movie next week."
2. "Syracuse and Rochester are nicer cities than Buffalo."
First we create the POST payload, a dictionary. There is a documents key and a list of values which have keys id and text so we can order the documents.
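If the texts start out in a plain Python list, the same documents structure can be built with a comprehension — a small illustrative sketch:
# Equivalent way to build the same payload from a plain list of strings (illustrative sketch)
texts = ["I would not pay $5 to see that Star Wars movie next week.",
         "Syracuse and Rochester are nicer cities than Buffalo."]
payload = {"documents": [{"id": str(i + 1), "text": t} for i, t in enumerate(texts)]}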
End of explanation
url = f'{endpoint}text/analytics/v3.0/entities/recognition/general'
headers = { 'Ocp-Apim-Subscription-Key' : key}
response = requests.post(url,headers=headers, json=payload)
response.raise_for_status()
entities = response.json()
entities
Explanation: Next we setup the headers and the rest is like calling any other API...
End of explanation
# TODO Write code here
Explanation: The response
Notice the service does a nice job recognizing $5 as a quantity and next week as a date range.
It also figures out that Syracuse, Rochester and Buffalo are all locations. VERY COOL
Named Entity Recognition has applications in identifying personally identifiable information in text such as Names, emails, phone numbers, credit cards, social-security numbers, etc.
It can also be used to identify places, quantities, time, and is useful for providing context to news headlines.
For each recognized entity, you are provided with a score (confidence score between 0-1), a type, and sub-type as appropriate.
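A small sketch of how you might list each recognized entity with its category and confidence (field names follow the v3.0 response shown above; confidenceScore is assumed from that API version):
# Sketch: list each recognized entity with its category and confidence
for doc in entities['documents']:
    for entity in doc['entities']:
        print(doc['id'], entity['text'], entity['category'], entity.get('confidenceScore'))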
1.1 You Code
Re-write the example above to perform named entity extraction on the following text:
According to yesterday's news, four out of five New York City coders prefer Google to Microsoft.
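If you get stuck, here is a minimal sketch of one possible solution, reusing the key, endpoint, and requests import from earlier cells (try it yourself first):
# One possible approach (sketch), reusing key and endpoint defined earlier
payload = {"documents": [
    {"id": "1", "text": "According to yesterday's news, four out of five New York City coders prefer Google to Microsoft."}
]}
url = f'{endpoint}text/analytics/v3.0/entities/recognition/general'
headers = {'Ocp-Apim-Subscription-Key': key}
response = requests.post(url, headers=headers, json=payload)
response.raise_for_status()
response.json()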
End of explanation
text = "As of this year, my primary email address is [email protected] but I also use [email protected] and [email protected] from time to time."
url = f'{endpoint}text/analytics/v3.0/entities/recognition/general'
header = { 'Ocp-Apim-Subscription-Key' : key}
payload = {"documents": [
{"id": "1", "text": text }
]
}
response = requests.post(url,headers=header, json=payload)
response.raise_for_status()
entities = response.json()
for entity in entities['documents'][0]['entities']:
if entity['category'] == 'Email':
print( entity['text'])
Explanation: Curious as to what you can detect?
Check out: https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-entity-linking#supported-types-for-named-entity-recognition-v2
Let's use this service to write our own user-defined function to extract email addresses from any text.
The API call is the same; the difference is what we do with the results. We loop through the entities and if the entity category is of type Email we print it out.
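The same logic can be wrapped into a reusable user-defined function — a sketch, assuming key and endpoint are already defined in the earlier cells:
# Sketch: the same email extraction wrapped as a reusable function
def extract_emails(text):
    url = f'{endpoint}text/analytics/v3.0/entities/recognition/general'
    headers = {'Ocp-Apim-Subscription-Key': key}
    payload = {"documents": [{"id": "1", "text": text}]}
    response = requests.post(url, headers=headers, json=payload)
    response.raise_for_status()
    entities = response.json()
    # keep only the entities the service categorized as Email
    return [e['text'] for e in entities['documents'][0]['entities'] if e['category'] == 'Email']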
End of explanation
# TODO Fix this code
text = input("Enter some text with locations in it. ")
payload = payload = { "documents": [ {"id": "1", "text": text } ] }
url = f'{endpoint}text/analytics/v3.ok/entities/recognition/general'
header = { 'Ocp-Apim-Subscription-Key' : 'key'}
response = requests.post(url,headers=header, json=payload)
response.raise_for_status()
entities = response.json()
for entity in entities['documents'][0]['entities']:
if entity['type'] == 'Location':
print( entity['text'])
Explanation: 1.2 You Code: Debug
Get this code working!
The following code will extract all of the entities with category Location from the input text.
If you are looking for some sample text, try: London is better than Paris.
End of explanation
review1 = "I don't think I will ever order the eggs again. Not very good."
review2 = "Went there last Wednesday. It was croweded and the pancakes and eggs were spot on! I enjoyed my meal."
review3 = "Not sure who is running the place but the eggs benedict were not that great. On the other hand I was happy with my toast."
url = f'{endpoint}text/analytics/v3.0/keyphrases'
header = { 'Ocp-Apim-Subscription-Key' : key}
payload = {"documents": [
{"id": "1", "text": review1 },
{"id": "2", "text": review2 },
{"id": "3", "text": review3 }
]
}
# TODO Write your code here to call the API, deserialize the json, and display the output
Explanation: Now that you know how to use it, what can you do with it?
Text Analytics technologies such as named entity recognition, key phrase extraction and sentiment analysis are best used when combined with another service. For example:
You can take a list of customer reviews from a diner restaurant and run sentiment analysis to determine how customers feel about it. Do they like this place or not?
Use named entity recognition to determine where those customers are from or when they visited.
Use key phrase extraction to determine what they are talking about: pancakes, breakfast sandwiches, eggs, etc.
1.3 You Code
Write a program to extract key phrases from the three reviews provided below. Make 1 api call to the url endpoint that has been provided for you. It's up to you to print out the key phrases!
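If you need a starting point, a minimal sketch of the call and output, continuing from the url, header, and payload defined in the cell above (key phrases come back under the keyPhrases field in v3.0):
# Sketch: call the key phrases endpoint and print each review's phrases
response = requests.post(url, headers=header, json=payload)
response.raise_for_status()
results = response.json()
for doc in results['documents']:
    print(f"Review {doc['id']}: {doc['keyPhrases']}")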
End of explanation
review1 = "I don't think I will ever order the eggs again. Not very good."
review2 = "Went there last Wednesday. It was croweded and the pancakes and eggs were spot on! I enjoyed my meal."
review3 = "Not sure who is running the place but the eggs benedict were not that great. On the other hand I was happy with my toast."
url = f'{endpoint}text/analytics/v3.0/sentiment'
header = { 'Ocp-Apim-Subscription-Key' : key}
# TODO: Write code here to build the documents structure then perform the sentiment analysis via the api
Explanation: Key Phrases is to Sentiment as Peanut Butter is to Jelly!
From our key phrases analysis, it looks like customers are talking about eggs but are they speaking positively or negatively about their egg experiences? This is why sentiment analysis accompanies key phrase analysis. Key phrase identifies what they are talking about, and sentiment provides the context around it.
1.4 You Code
Perform sentiment analysis over the reviews to determine who likes the eggs and who does not. This time you must build the payload variable with the documents key yourself.
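A minimal sketch of one possible approach, continuing from the url, header, and reviews defined in the cell above (each document in the v3.0 response carries a sentiment label):
# Sketch: build the documents payload yourself, then call the sentiment endpoint
payload = {"documents": [
    {"id": "1", "text": review1},
    {"id": "2", "text": review2},
    {"id": "3", "text": review3}
]}
response = requests.post(url, headers=header, json=payload)
response.raise_for_status()
results = response.json()
for doc in results['documents']:
    print(f"Review {doc['id']} sentiment: {doc['sentiment']}")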
End of explanation
# TODO Write code
Explanation: Putting it all together.
Let's put together a feature which might be useful for automating a social media account or chat-bot. If you input some text that says something positive about any location, the program should respond:
How kind of you to say good things about {location}
For example, if I input: I had the best barbeque in Memphis this summer.
It will respond: How kind of you to say good things about Memphis.
ALGORITHM
input text
call sentiment api for text
call entity recognition api for text
determine if sentiment is positive
determine if entity has a location
if positive sentiment and a location, then
print response
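A minimal sketch following the algorithm above, reusing the key and endpoint variables from earlier cells (field names follow the v3.0 responses used earlier in the lab):
# Sketch: positive sentiment + a Location entity triggers the friendly response
text = input("Enter some text: ")
headers = {'Ocp-Apim-Subscription-Key': key}
payload = {"documents": [{"id": "1", "text": text}]}
sentiment = requests.post(f'{endpoint}text/analytics/v3.0/sentiment',
                          headers=headers, json=payload).json()
entities = requests.post(f'{endpoint}text/analytics/v3.0/entities/recognition/general',
                         headers=headers, json=payload).json()
is_positive = sentiment['documents'][0]['sentiment'] == 'positive'
locations = [e['text'] for e in entities['documents'][0]['entities'] if e['category'] == 'Location']
if is_positive and locations:
    print(f"How kind of you to say good things about {locations[0]}")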
End of explanation
# run this code to turn in your work!
from coursetools.submission import Submission
Submission().submit()
Explanation: Metacognition
Rate your comfort level with this week's material so far.
1 ==> I don't understand this at all yet and need extra help. If you choose this please try to articulate that which you do not understand to the best of your ability in the questions and comments section below.
2 ==> I can do this with help or guidance from other people or resources. If you choose this level, please indicate HOW this person helped you in the questions and comments section below.
3 ==> I can do this on my own without any help.
4 ==> I can do this on my own and can explain/teach how to do it to others.
--== Double-Click Here then Enter a Number 1 through 4 Below This Line ==--
Questions And Comments
Record any questions or comments you have about this lab that you would like to discuss in your recitation. It is expected you will have questions if you did not complete the code sections correctly. Learning how to articulate what you do not understand is an important skill of critical thinking. Write them down here so that you remember to ask them in your recitation. We expect you will take responsibility for your learning and ask questions in class.
--== Double-click Here then Enter Your Questions Below this Line ==--
End of explanation |