markdown | code | output | license | path | repo_name
---|---|---|---|---|---|
Play the video inline, or, if you prefer, find the video in your filesystem (it should be in the same directory) and play it in your video player of choice.
|
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
|
_____no_output_____
|
MIT
|
P1.ipynb
|
owennottank/CarND-LaneLines-P1
|
Improve the draw_lines() function

**At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".**

**Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.**

Now for the one with the solid yellow lane on the left. This one's more tricky!
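One way to realize the averaging/extrapolation described above is sketched below. This is only an illustrative sketch, not the project's reference solution: the slope thresholds (±0.5), the assumed region-of-interest top at 60% of the image height, and the helper's exact signature are assumptions you would tune for your own pipeline.

```python
import numpy as np
import cv2

def draw_lines(img, lines, color=(255, 0, 0), thickness=10):
    """Average and extrapolate Hough segments into one solid line per lane."""
    if lines is None:
        return
    left, right = [], []  # (slope, intercept) pairs for each side
    for line in lines:
        for x1, y1, x2, y2 in line:
            if x1 == x2:
                continue  # skip vertical segments (undefined slope)
            slope = (y2 - y1) / (x2 - x1)
            intercept = y1 - slope * x1
            if slope < -0.5:      # left lane: negative slope in image coordinates
                left.append((slope, intercept))
            elif slope > 0.5:     # right lane: positive slope
                right.append((slope, intercept))
    y_bottom = img.shape[0]          # bottom edge of the image
    y_top = int(img.shape[0] * 0.6)  # assumed top of the region of interest
    for side in (left, right):
        if not side:
            continue
        slope, intercept = np.mean(side, axis=0)   # average the segments
        x_bottom = int((y_bottom - intercept) / slope)
        x_top = int((y_top - intercept) / slope)
        cv2.line(img, (x_bottom, y_bottom), (x_top, y_top), color, thickness)
```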
|
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## The line below currently processes only a subclip of the first 5 seconds
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
## Uncomment the following line (and comment out the one above) to process the full video
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
|
_____no_output_____
|
MIT
|
P1.ipynb
|
owennottank/CarND-LaneLines-P1
|
Writeup and Submission

If you're satisfied with your video outputs, it's time to write the report in a PDF or Markdown file. Once you have this IPython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file.

Optional Challenge

Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
|
challenge_output = 'test_videos_output/challenge.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
|
_____no_output_____
|
MIT
|
P1.ipynb
|
owennottank/CarND-LaneLines-P1
|
In-Class Coding Lab: Data Analysis with Pandas

In this lab, we will perform a data analysis on the **RMS Titanic** passenger list. The RMS Titanic is one of the most famous ocean liners in history. On April 15, 1912 it sank after colliding with an iceberg in the North Atlantic Ocean. To learn more, read here: https://en.wikipedia.org/wiki/RMS_Titanic

Our goal today is to perform a data analysis on a subset of the passenger list. We're looking for insights as to which types of passengers did and didn't survive. Women? Children? 1st class passengers? 3rd class? Etc. I'm sure you've heard the expression often said during emergencies: "Women and children first." Let's explore this data set and find out if that's true!

Before we begin, you should read up on what each of the columns means in the data dictionary. You can find this information on this page: https://www.kaggle.com/c/titanic/data

Loading the data set

First we load the dataset into a Pandas `DataFrame` variable. The `sample(10)` method takes a random sample of 10 passengers from the data set.
|
import pandas as pd
import numpy as np
# this turns off warning messages
import warnings
warnings.filterwarnings('ignore')
passengers = pd.read_csv('CCL-titanic.csv')
passengers.sample(10)
|
_____no_output_____
|
MIT
|
content/lessons/12/Class-Coding-Lab/CCL-Data-Analysis-With-Pandas.ipynb
|
MahopacHS/spring2019-DavisGrimm
|
How many survived?

One of the first things we should do is figure out how many of the passengers in this data set survived. Let's start by isolating just the `'Survived'` column into a series:
|
passengers['Survived'].sample(10)
|
_____no_output_____
|
MIT
|
content/lessons/12/Class-Coding-Lab/CCL-Data-Analysis-With-Pandas.ipynb
|
MahopacHS/spring2019-DavisGrimm
|
There are too many to display, so we just show a random sample of 10 passengers.

- 1 means the passenger survived
- 0 means the passenger died

What we really want is to count the number of survivors and deaths. We do this by calling `value_counts()` on the `['Survived']` column, which returns a `Series` of counts, like this:
|
passengers['Survived'].value_counts()
|
_____no_output_____
|
MIT
|
content/lessons/12/Class-Coding-Lab/CCL-Data-Analysis-With-Pandas.ipynb
|
MahopacHS/spring2019-DavisGrimm
|
Only 342 passengers survived, and 549 perished. Let's observe this same data as percentages of the whole. We do this by adding the `normalize=True` named argument to the `value_counts()` method.
|
passengers['Survived'].value_counts(normalize=True)
|
_____no_output_____
|
MIT
|
content/lessons/12/Class-Coding-Lab/CCL-Data-Analysis-With-Pandas.ipynb
|
MahopacHS/spring2019-DavisGrimm
|
**Just 38% of passengers in this dataset survived.**

Now you try it!

**FIRST** Write a Pandas expression to display counts of male and female passengers using the `Sex` variable:
|
passengers['Sex'].value_counts()
|
_____no_output_____
|
MIT
|
content/lessons/12/Class-Coding-Lab/CCL-Data-Analysis-With-Pandas.ipynb
|
MahopacHS/spring2019-DavisGrimm
|
**NEXT** Write a Pandas expression to display male/female passenger counts as percentages of the whole number of passengers in the data set.
|
passengers['Sex'].value_counts(normalize=True)
|
_____no_output_____
|
MIT
|
content/lessons/12/Class-Coding-Lab/CCL-Data-Analysis-With-Pandas.ipynb
|
MahopacHS/spring2019-DavisGrimm
|
If you got things working, you now know that **35% of passengers were female**.

Who survives? Men or women?

We now know that 35% of the passengers were female and 65% were male. **The next thing to think about is how survival rates affect these numbers.** If the ratio is about the same for survivors only, then we can conclude that your **Sex** did not play a role in your survival on the RMS Titanic. Let's find out.
|
survivors = passengers[passengers['Survived'] ==1]
survivors['PassengerId'].count()
|
_____no_output_____
|
MIT
|
content/lessons/12/Class-Coding-Lab/CCL-Data-Analysis-With-Pandas.ipynb
|
MahopacHS/spring2019-DavisGrimm
|
Still **342** like we discovered originally. Now let's check the **Sex** split among survivors only:
|
survivors['Sex'].value_counts()
|
_____no_output_____
|
MIT
|
content/lessons/12/Class-Coding-Lab/CCL-Data-Analysis-With-Pandas.ipynb
|
MahopacHS/spring2019-DavisGrimm
|
WOW! That is a huge difference! But you probably can't see it easily. Let's represent it in a `DataFrame`, so that it's easier to visualize:
|
sex_all_series = passengers['Sex'].value_counts()
sex_survivor_series = survivors['Sex'].value_counts()
sex_comparision_df = pd.DataFrame({ 'AllPassengers' : sex_all_series, 'Survivors' : sex_survivor_series })
sex_comparision_df['SexSurvivialRate'] = sex_comparision_df['Survivors'] / sex_comparision_df['AllPassengers']
sex_comparision_df
|
_____no_output_____
|
MIT
|
content/lessons/12/Class-Coding-Lab/CCL-Data-Analysis-With-Pandas.ipynb
|
MahopacHS/spring2019-DavisGrimm
|
**So, females had a 74% survival rate. Much better than the overall rate of 38%.**

We should probably briefly explain the code above.

- The first two lines get a series count of all passengers by Sex (male / female) and a count of survivors by Sex.
- The third line creates a DataFrame. Recall that a pandas DataFrame is just a dict of Series. We have two keys, 'AllPassengers' and 'Survivors'.
- The fourth line creates a new column in the DataFrame, which is just the survivors / all passengers, to get the rate of survival for that Sex.

Feature Engineering: Adults and Children

Sometimes the variable we want to analyze is not readily available, but it can be created from existing data. This is commonly referred to as **feature engineering**. The name comes from machine learning, where we use data called *features* to predict an outcome. Let's create a new feature called `'AgeCat'` as follows:

- When **Age** <= 18, then 'Child'
- When **Age** > 18, then 'Adult'

This is easy to do in pandas. First we create the column and set all values to `np.nan`, which means 'Not a Number'. This is Pandas' way of saying no value. Then we set the values based on the rules we defined for the feature.
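As an aside (not part of the lab), the same feature can be built in a single step with `numpy.where`; the only subtlety is that passengers with a missing `Age` have to be handled explicitly, whereas the `np.nan`-then-assign approach used in the lab's cell below leaves them as `NaN` automatically. A sketch:

```python
import numpy as np
import pandas as pd

passengers = pd.read_csv('CCL-titanic.csv')

# One-step alternative: label everyone, then blank out rows with an unknown Age
# so the result matches the np.nan-then-assign approach used in the lab cell below.
passengers['AgeCat'] = np.where(passengers['Age'] <= 18, 'Child', 'Adult')
passengers.loc[passengers['Age'].isna(), 'AgeCat'] = np.nan
passengers.sample(5)
```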
|
passengers['AgeCat'] = np.nan # Not a number
passengers.loc[passengers['Age'] <= 18, 'AgeCat'] = 'Child'
passengers.loc[passengers['Age'] > 18, 'AgeCat'] = 'Adult'
passengers.sample(5)
|
_____no_output_____
|
MIT
|
content/lessons/12/Class-Coding-Lab/CCL-Data-Analysis-With-Pandas.ipynb
|
MahopacHS/spring2019-DavisGrimm
|
Let's get the counts and distribution of Adults and Children on the passenger list.
|
passengers['AgeCat'].value_counts()
|
_____no_output_____
|
MIT
|
content/lessons/12/Class-Coding-Lab/CCL-Data-Analysis-With-Pandas.ipynb
|
MahopacHS/spring2019-DavisGrimm
|
And here are the percentages of the whole:
|
passengers['AgeCat'].value_counts(normalize=True)
|
_____no_output_____
|
MIT
|
content/lessons/12/Class-Coding-Lab/CCL-Data-Analysis-With-Pandas.ipynb
|
MahopacHS/spring2019-DavisGrimm
|
So close to **80%** of the passengers were adults. Once again, let's look at the ratio of `AgeCat` for survivors only. If your age has no bearing on survival, then the rates should be the same. Here are the counts of Adults / Children among the survivors only:
|
survivors = passengers[passengers['Survived'] ==1]
survivors['AgeCat'].value_counts()
|
_____no_output_____
|
MIT
|
content/lessons/12/Class-Coding-Lab/CCL-Data-Analysis-With-Pandas.ipynb
|
MahopacHS/spring2019-DavisGrimm
|
Now you try it!

Calculate the `AgeCat` survival rate, similar to how we did for the `SexSurvivalRate`.
|
agecat_all_series = passengers['AgeCat'].value_counts()
agecat_survivor_series = survivors['AgeCat'].value_counts()
# todo make a data frame, add AgeCatSurvivialRate column, display dataframe
agecat_comparision_df = pd.DataFrame({ 'AllPassengers' : agecat_all_series, 'Survivors' : agecat_survivor_series })
agecat_comparision_df['AgeCatSurvivialRate'] = agecat_comparision_df['Survivors'] / agecat_comparision_df['AllPassengers']
agecat_comparision_df
|
_____no_output_____
|
MIT
|
content/lessons/12/Class-Coding-Lab/CCL-Data-Analysis-With-Pandas.ipynb
|
MahopacHS/spring2019-DavisGrimm
|
**So, children had a 50% survival rate, better than the overall rate of 38%.**

So, women and children first?

It looks like the RMS Titanic really did follow the motto: "Women and children first." Here are our insights. We know:

- If you were a passenger, you had a 38% chance of survival.
- If you were a female passenger, you had a 74% chance of survival.
- If you were a child passenger, you had a 50% chance of survival.

Now you try it for Passenger Class

Repeat this process for `Pclass`, the passenger class variable. Display the survival rates for each passenger class. What does the information tell you about passenger class and survival rates? I'll give you a hint... "Money talks."
|
# todo: repeat the analysis in the previous cell for Pclass
pclass_all_series = passengers['Pclass'].value_counts()
pclass_survivor_series = survivors['Pclass'].value_counts()
pclass_comparision_df = pd.DataFrame({ 'AllPassengers' : pclass_all_series, 'Survivors' : pclass_survivor_series })
pclass_comparision_df['PclassSurvivialRate'] = pclass_comparision_df['Survivors'] / pclass_comparision_df['AllPassengers']
pclass_comparision_df
|
_____no_output_____
|
MIT
|
content/lessons/12/Class-Coding-Lab/CCL-Data-Analysis-With-Pandas.ipynb
|
MahopacHS/spring2019-DavisGrimm
|
Generate a small plot
|
# Standard imports used in this cell; the project-specific helper modules
# (make_spheroids, graph_generation_func, graph_plot, pr, graph, gp) are
# assumed to be imported in earlier cells of this notebook.
import os
import pandas
import networkx as nx
import matplotlib.pyplot as plt

cells = make_spheroids.generate_artificial_spheroid(10)['cells']
spheroid = {}
spheroid['cells'] = cells
G = graph_generation_func.generate_voronoi_graph(spheroid, dCells = 0.6)
for ind in G.nodes():
if ind % 2 == 0:
G.add_node(ind, color = 'r')
else:
G.add_node(ind, color = 'b')
graph_plot.network_plot_3D(G)
#plt.savefig('example_code.pdf')
path = r'/Users/gustaveronteix/Documents/Projets/Projets Code/3D-Segmentation-Sebastien/data'
spheroid_data = pandas.read_csv(os.path.join(path, 'spheroid_table_3.csv'))
mapper = {"centroid-0": "z", "centroid-1": "x", "centroid-2": "y"}
spheroid_data = spheroid_data.rename(columns = mapper)
spheroid = pr.single_spheroid_process(spheroid_data)
G = graph.generate_voronoi_graph(spheroid, zRatio = 1, dCells = 20)
for ind in G.nodes():
G.add_node(ind, color ='g')
pos =nx.get_node_attributes(G,'pos')
gp.network_plot_3D(G, 5)
#plt.savefig('Example_image.pdf')
path = r'/Volumes/Multicell/Sebastien/Gustave_Jeremie/spheroid_sample_Francoise.csv'
spheroid_data = pandas.read_csv(path)
spheroid = pr.single_spheroid_process(spheroid_data)
G = graph.generate_voronoi_graph(spheroid, zRatio = 2, dCells = 35)
for ind in G.nodes():
G.add_node(ind, color = 'r')
pos =nx.get_node_attributes(G,'pos')
gp.network_plot_3D(G, 20)
plt.savefig('/Volumes/Multicell/Sebastien/Gustave_Jeremie/spheroid_sample_Francoise.pdf', transparent=True)
|
_____no_output_____
|
MIT
|
Examples/.ipynb_checkpoints/Example-checkpoint.ipynb
|
microfluidix/HMRF
|
Batch analyze the data
|
spheroid_path = './utility/spheroid_sample_1.csv'
spheroid_data = pandas.read_csv(spheroid_path)
spheroid = pr.single_spheroid_process(spheroid_data[spheroid_data['area'] > 200])
G = graph.generate_voronoi_graph(spheroid, zRatio = 2, dCells = 35)
import glob
import collections
from collections import defaultdict
degree_frame_Vor = pandas.DataFrame()
i = 0
degree_frame_Geo = pandas.DataFrame()
j = 0
deg_Vor = []
deg_Geo = []
for fname in glob.glob('./utility/*.csv'):
spheroid_data = pandas.read_csv(fname)
spheroid_data['x'] *= 1.25
spheroid_data['y'] *= 1.25
spheroid_data['z'] *= 1.25
spheroid_data = spheroid_data[spheroid_data['area']>200]
spheroid = pr.single_spheroid_process(spheroid_data)
G = graph.generate_voronoi_graph(spheroid, zRatio = 1, dCells = 55)
degree_sequence = sorted([d for n, d in G.degree()], reverse=True)
degreeCount = collections.Counter(degree_sequence)
for key in degreeCount.keys():
N_tot = 0
for k in degreeCount.keys():
N_tot += degreeCount[k]
degree_frame_Vor.loc[i, 'degree'] = key
degree_frame_Vor.loc[i, 'p'] = degreeCount[key]/N_tot
degree_frame_Vor.loc[i, 'fname'] = fname
i += 1
deg_Vor += list(degree_sequence)
G = graph.generate_geometric_graph(spheroid, zRatio = 1, dCells = 26)
degree_sequence = sorted([d for n, d in G.degree()], reverse=True)
degreeCount = collections.Counter(degree_sequence)
for key in degreeCount.keys():
N_tot = 0
for k in degreeCount.keys():
N_tot += degreeCount[k]
degree_frame_Geo.loc[j, 'degree'] = key
degree_frame_Geo.loc[j, 'p'] = degreeCount[key]/N_tot
degree_frame_Geo.loc[j, 'fname'] = fname
j += 1
deg_Geo.append(degreeCount[key])
indx = degree_frame_Vor.pivot(index = 'degree', columns = 'fname', values = 'p').fillna(0).mean(axis = 1).index
mean = degree_frame_Vor.pivot(index = 'degree', columns = 'fname', values = 'p').fillna(0).mean(axis = 1).values
std = degree_frame_Vor.pivot(index = 'degree', columns = 'fname', values = 'p').fillna(0).std(axis = 1).values
indx_geo = degree_frame_Geo.pivot(index = 'degree', columns = 'fname', values = 'p').fillna(0).mean(axis = 1).index
mean_geo = degree_frame_Geo.pivot(index = 'degree', columns = 'fname', values = 'p').fillna(0).mean(axis = 1).values
std_geo = degree_frame_Geo.pivot(index = 'degree', columns = 'fname', values = 'p').fillna(0).std(axis = 1).values
import seaborn as sns
sns.set_style('white')
plt.errorbar(indx+0.3, mean, yerr=std,
marker = 's', linestyle = ' ', color = 'b',
label = 'Voronoi')
plt.errorbar(indx_geo-0.3, mean_geo, yerr=std_geo,
marker = 'o', linestyle = ' ', color = 'r',
label = 'Geometric')
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy.special import factorial
from scipy.stats import poisson
# the bins should be of integer width, because poisson is an integer distribution
bins = np.arange(25)-0.5
entries, bin_edges, patches = plt.hist(deg_Vor, bins=bins, density=True, label='Data')
# calculate bin centres
bin_middles = 0.5 * (bin_edges[1:] + bin_edges[:-1])
def fit_function(k, lamb):
'''poisson function, parameter lamb is the fit parameter'''
return poisson.pmf(k, lamb)
# fit with curve_fit
parameters, cov_matrix = curve_fit(fit_function, bin_middles, entries)
# plot poisson-deviation with fitted parameter
x_plot = np.arange(0, 25)
plt.plot(
x_plot,
fit_function(x_plot, *parameters),
marker='o', linestyle='',
label='Fit result',
)
plt.legend()
plt.show()
parameters
|
_____no_output_____
|
MIT
|
Examples/.ipynb_checkpoints/Example-checkpoint.ipynb
|
microfluidix/HMRF
|
Model Explainer Example

In this example we will:

* [Describe the project structure](#Project-Structure)
* [Train some models](#Train-Models)
* [Create Tempo artifacts](#Create-Tempo-Artifacts)
* [Run unit tests](#Unit-Tests)
* [Save python environment for our classifier](#Save-Classifier-Environment)
* [Test Locally on Docker](#Test-Locally-on-Docker)
* [Production on Kubernetes via Tempo](#Production-Option-1-(Deploy-to-Kubernetes-with-Tempo))
* [Production on Kubernetes via GitOps](#Production-Option-2-(Gitops))

Prerequisites

This notebook needs to be run in the `tempo-examples` conda environment defined below. Create it from the project root folder:

```bash
conda env create --name tempo-examples --file conda/tempo-examples.yaml
```

Project Structure
|
!tree -P "*.py" -I "__init__.py|__pycache__" -L 2
|
.
├── artifacts
│   ├── explainer
│   └── model
├── k8s
│   └── rbac
└── src
├── constants.py
├── data.py
├── explainer.py
├── model.py
└── tempo.py
6 directories, 5 files
|
Apache-2.0
|
docs/examples/explainer/README.ipynb
|
outerbounds/tempo
|
Train Models

* This section is where, as a data scientist, you do your work of training models and creating artifacts.
* For this example we train an sklearn classifier on the Adult income dataset along with an Alibi explainer for it.
|
import os
import logging
import numpy as np
import json
import tempo
from tempo.utils import logger
from src.constants import ARTIFACTS_FOLDER
logger.setLevel(logging.ERROR)
logging.basicConfig(level=logging.ERROR)
from src.data import AdultData
data = AdultData()
from src.model import train_model
adult_model = train_model(ARTIFACTS_FOLDER, data)
from src.explainer import train_explainer
train_explainer(ARTIFACTS_FOLDER, data, adult_model)
|
_____no_output_____
|
Apache-2.0
|
docs/examples/explainer/README.ipynb
|
outerbounds/tempo
|
Create Tempo Artifacts
|
from src.tempo import create_explainer, create_adult_model
sklearn_model = create_adult_model()
Explainer = create_explainer(sklearn_model)
explainer = Explainer()
# %load src/tempo.py
import os
import dill
import numpy as np
from alibi.utils.wrappers import ArgmaxTransformer
from src.constants import ARTIFACTS_FOLDER, EXPLAINER_FOLDER, MODEL_FOLDER
from tempo.serve.metadata import ModelFramework
from tempo.serve.model import Model
from tempo.serve.pipeline import PipelineModels
from tempo.serve.utils import pipeline, predictmethod
def create_adult_model() -> Model:
sklearn_model = Model(
name="income-sklearn",
platform=ModelFramework.SKLearn,
local_folder=os.path.join(ARTIFACTS_FOLDER, MODEL_FOLDER),
uri="gs://seldon-models/test/income/model",
)
return sklearn_model
def create_explainer(model: Model):
@pipeline(
name="income-explainer",
uri="s3://tempo/explainer/pipeline",
local_folder=os.path.join(ARTIFACTS_FOLDER, EXPLAINER_FOLDER),
models=PipelineModels(sklearn=model),
)
class ExplainerPipeline(object):
def __init__(self):
pipeline = self.get_tempo()
models_folder = pipeline.details.local_folder
explainer_path = os.path.join(models_folder, "explainer.dill")
with open(explainer_path, "rb") as f:
self.explainer = dill.load(f)
def update_predict_fn(self, x):
if np.argmax(self.models.sklearn(x).shape) == 0:
self.explainer.predictor = self.models.sklearn
self.explainer.samplers[0].predictor = self.models.sklearn
else:
self.explainer.predictor = ArgmaxTransformer(self.models.sklearn)
self.explainer.samplers[0].predictor = ArgmaxTransformer(self.models.sklearn)
@predictmethod
def explain(self, payload: np.ndarray, parameters: dict) -> str:
print("Explain called with ", parameters)
self.update_predict_fn(payload)
explanation = self.explainer.explain(payload, **parameters)
return explanation.to_json()
# explainer = ExplainerPipeline()
# return sklearn_model, explainer
return ExplainerPipeline
|
_____no_output_____
|
Apache-2.0
|
docs/examples/explainer/README.ipynb
|
outerbounds/tempo
|
Save Explainer
|
!ls artifacts/explainer/conda.yaml
tempo.save(Explainer)
|
Collecting packages...
Packing environment at '/home/clive/anaconda3/envs/tempo-d87b2b65-e7d9-4e82-9c0d-0f83f48c07a3' to '/home/clive/work/mlops/fork-tempo/docs/examples/explainer/artifacts/explainer/environment.tar.gz'
[########################################] | 100% Completed | 1min 13.1s
|
Apache-2.0
|
docs/examples/explainer/README.ipynb
|
outerbounds/tempo
|
Test Locally on Docker

Here we test our models using production images but running locally on Docker. This allows us to ensure the model will behave as expected when deployed to production.
|
from tempo import deploy_local
remote_model = deploy_local(explainer)
r = json.loads(remote_model.predict(payload=data.X_test[0:1], parameters={"threshold":0.90}))
print(r["data"]["anchor"])
r = json.loads(remote_model.predict(payload=data.X_test[0:1], parameters={"threshold":0.99}))
print(r["data"]["anchor"])
remote_model.undeploy()
|
_____no_output_____
|
Apache-2.0
|
docs/examples/explainer/README.ipynb
|
outerbounds/tempo
|
Production Option 1 (Deploy to Kubernetes with Tempo)

* Here we illustrate how to run the final models in "production" on Kubernetes by using Tempo to deploy.

Prerequisites

Create a Kind Kubernetes cluster with MinIO and Seldon Core installed using Ansible, as described [here](https://tempo.readthedocs.io/en/latest/overview/quickstart.html#kubernetes-cluster-with-seldon-core).
|
!kubectl apply -f k8s/rbac -n production
from tempo.examples.minio import create_minio_rclone
import os
create_minio_rclone(os.getcwd()+"/rclone-minio.conf")
tempo.upload(sklearn_model)
tempo.upload(explainer)
from tempo.serve.metadata import SeldonCoreOptions
runtime_options = SeldonCoreOptions(**{
"remote_options": {
"namespace": "production",
"authSecretName": "minio-secret"
}
})
from tempo import deploy_remote
remote_model = deploy_remote(explainer, options=runtime_options)
r = json.loads(remote_model.predict(payload=data.X_test[0:1], parameters={"threshold":0.95}))
print(r["data"]["anchor"])
remote_model.undeploy()
|
_____no_output_____
|
Apache-2.0
|
docs/examples/explainer/README.ipynb
|
outerbounds/tempo
|
Production Option 2 (Gitops)

* We create YAML to provide to our DevOps team to deploy to a production cluster.
* We add Kustomize patches to modify the base Kubernetes YAML created by Tempo.
|
from tempo import manifest
from tempo.serve.metadata import SeldonCoreOptions
runtime_options = SeldonCoreOptions(**{
"remote_options": {
"namespace": "production",
"authSecretName": "minio-secret"
}
})
yaml_str = manifest(explainer, options=runtime_options)
with open(os.getcwd()+"/k8s/tempo.yaml","w") as f:
f.write(yaml_str)
!kustomize build k8s
|
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
annotations:
seldon.io/tempo-description: ""
seldon.io/tempo-model: '{"model_details": {"name": "income-explainer", "local_folder":
"/home/clive/work/mlops/fork-tempo/docs/examples/explainer/artifacts/explainer",
"uri": "s3://tempo/explainer/pipeline", "platform": "tempo", "inputs": {"args":
[{"ty": "numpy.ndarray", "name": "payload"}, {"ty": "builtins.dict", "name":
"parameters"}]}, "outputs": {"args": [{"ty": "builtins.str", "name": null}]},
"description": ""}, "protocol": "tempo.kfserving.protocol.KFServingV2Protocol",
"runtime_options": {"runtime": "tempo.seldon.SeldonKubernetesRuntime", "state_options":
{"state_type": "LOCAL", "key_prefix": "", "host": "", "port": ""}, "insights_options":
{"worker_endpoint": "", "batch_size": 1, "parallelism": 1, "retries": 3, "window_time":
0, "mode_type": "NONE", "in_asyncio": false}, "ingress_options": {"ingress":
"tempo.ingress.istio.IstioIngress", "ssl": false, "verify_ssl": true}, "replicas":
1, "minReplicas": null, "maxReplicas": null, "authSecretName": "minio-secret",
"serviceAccountName": null, "add_svc_orchestrator": false, "namespace": "production"}}'
labels:
seldon.io/tempo: "true"
name: income-explainer
namespace: production
spec:
predictors:
- annotations:
seldon.io/no-engine: "true"
componentSpecs:
- spec:
containers:
- name: classifier
resources:
limits:
cpu: 1
memory: 1Gi
requests:
cpu: 500m
memory: 500Mi
graph:
envSecretRefName: minio-secret
implementation: TEMPO_SERVER
modelUri: s3://tempo/explainer/pipeline
name: income-explainer
serviceAccountName: tempo-pipeline
type: MODEL
name: default
replicas: 1
protocol: kfserving
---
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
annotations:
seldon.io/tempo-description: ""
seldon.io/tempo-model: '{"model_details": {"name": "income-sklearn", "local_folder":
"/home/clive/work/mlops/fork-tempo/docs/examples/explainer/artifacts/model",
"uri": "gs://seldon-models/test/income/model", "platform": "sklearn", "inputs":
{"args": [{"ty": "numpy.ndarray", "name": null}]}, "outputs": {"args": [{"ty":
"numpy.ndarray", "name": null}]}, "description": ""}, "protocol": "tempo.kfserving.protocol.KFServingV2Protocol",
"runtime_options": {"runtime": "tempo.seldon.SeldonKubernetesRuntime", "state_options":
{"state_type": "LOCAL", "key_prefix": "", "host": "", "port": ""}, "insights_options":
{"worker_endpoint": "", "batch_size": 1, "parallelism": 1, "retries": 3, "window_time":
0, "mode_type": "NONE", "in_asyncio": false}, "ingress_options": {"ingress":
"tempo.ingress.istio.IstioIngress", "ssl": false, "verify_ssl": true}, "replicas":
1, "minReplicas": null, "maxReplicas": null, "authSecretName": "minio-secret",
"serviceAccountName": null, "add_svc_orchestrator": false, "namespace": "production"}}'
labels:
seldon.io/tempo: "true"
name: income-sklearn
namespace: production
spec:
predictors:
- annotations:
seldon.io/no-engine: "true"
graph:
envSecretRefName: minio-secret
implementation: SKLEARN_SERVER
modelUri: gs://seldon-models/test/income/model
name: income-sklearn
type: MODEL
name: default
replicas: 1
protocol: kfserving
|
Apache-2.0
|
docs/examples/explainer/README.ipynb
|
outerbounds/tempo
|
Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License").

Neural Machine Translation with Attention

This notebook is still under construction! Please come back later.

This notebook trains a sequence-to-sequence (seq2seq) model for Spanish-to-English translation using TF 2.0 APIs. This is an advanced example that assumes some knowledge of sequence-to-sequence models.

After training the model in this notebook, you will be able to input a Spanish sentence, such as *"¿todavia estan en casa?"*, and return the English translation: *"are you still at home?"*

The translation quality is reasonable for a toy example, but the generated attention plot is perhaps more interesting. It shows which parts of the input sentence have the model's attention while translating.

Note: This example takes approximately 10 minutes to run on a single P100 GPU.
|
import collections
import io
import itertools
import os
import random
import re
import time
import unicodedata
import numpy as np
import tensorflow as tf
assert tf.__version__.startswith('2')
import matplotlib.pyplot as plt
print(tf.__version__)
|
_____no_output_____
|
Apache-2.0
|
community/en/nmt.ipynb
|
thezwick/examples
|
Download and prepare the dataset

We'll use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the format:

```
May I borrow this book?    ¿Puedo tomar prestado este libro?
```

There are a variety of languages available, but we'll use the English-Spanish dataset. For convenience, we've hosted a copy of this dataset on Google Cloud, but you can also download your own copy. After downloading the dataset, here are the steps we'll take to prepare the data:

1. Clean the sentences by removing special characters.
1. Add a *start* and *end* token to each sentence.
1. Create a word index and reverse word index (dictionaries mapping from word → id and id → word).
1. Pad each sentence to a maximum length.
|
# TODO(brianklee): This preprocessing should ideally be implemented in TF
# because preprocessing should be exported as part of the SavedModel.
# Converts the unicode file to ascii
# https://stackoverflow.com/a/518232/2809427
def unicode_to_ascii(s):
return ''.join(c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn')
START_TOKEN = u'<start>'
END_TOKEN = u'<end>'
def preprocess_sentence(w):
# remove accents; lowercase everything
w = unicode_to_ascii(w.strip()).lower()
# creating a space between a word and the punctuation following it
# eg: "he is a boy." => "he is a boy ."
# https://stackoverflow.com/a/3645931/3645946
w = re.sub(r'([?.!,¿])', r' \1 ', w)
# replacing everything with space except (a-z, '.', '?', '!', ',')
w = re.sub(r'[^a-z?.!,¿]+', ' ', w)
# adding a start and an end token to the sentence
# so that the model know when to start and stop predicting.
w = '<start> ' + w + ' <end>'
return w
en_sentence = u"May I borrow this book?"
sp_sentence = u"¿Puedo tomar prestado este libro?"
print(preprocess_sentence(en_sentence))
print(preprocess_sentence(sp_sentence))
|
_____no_output_____
|
Apache-2.0
|
community/en/nmt.ipynb
|
thezwick/examples
|
Training on the complete dataset of >100,000 sentences will take a long time. To train faster, we can limit the size of the dataset (of course, translation quality degrades with less data).
|
def load_anki_data(num_examples=None):
# Download the file
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin='http://download.tensorflow.org/data/spa-eng.zip',
extract=True)
path_to_file = os.path.dirname(path_to_zip) + '/spa-eng/spa.txt'
with io.open(path_to_file, 'rb') as f:
lines = f.read().decode('utf8').strip().split('\n')
# Data comes as tab-separated strings; one per line.
eng_spa_pairs = [[preprocess_sentence(w) for w in line.split('\t')] for line in lines]
# The translations file is ordered from shortest to longest, so slicing from
# the front will select the shorter examples. This also speeds up training.
if num_examples is not None:
eng_spa_pairs = eng_spa_pairs[:num_examples]
eng_sentences, spa_sentences = zip(*eng_spa_pairs)
eng_tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='')
spa_tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='')
eng_tokenizer.fit_on_texts(eng_sentences)
spa_tokenizer.fit_on_texts(spa_sentences)
return (eng_spa_pairs, eng_tokenizer, spa_tokenizer)
NUM_EXAMPLES = 30000
sentence_pairs, english_tokenizer, spanish_tokenizer = load_anki_data(NUM_EXAMPLES)
# Turn our english/spanish pairs into TF Datasets by mapping words -> integers.
def make_dataset(eng_spa_pairs, eng_tokenizer, spa_tokenizer):
eng_sentences, spa_sentences = zip(*eng_spa_pairs)
eng_ints = eng_tokenizer.texts_to_sequences(eng_sentences)
spa_ints = spa_tokenizer.texts_to_sequences(spa_sentences)
padded_eng_ints = tf.keras.preprocessing.sequence.pad_sequences(
eng_ints, padding='post')
padded_spa_ints = tf.keras.preprocessing.sequence.pad_sequences(
spa_ints, padding='post')
dataset = tf.data.Dataset.from_tensor_slices((padded_eng_ints, padded_spa_ints))
return dataset
# Train/test split
train_size = int(len(sentence_pairs) * 0.8)
random.shuffle(sentence_pairs)
train_sentence_pairs, test_sentence_pairs = sentence_pairs[:train_size], sentence_pairs[train_size:]
# Show length
len(train_sentence_pairs), len(test_sentence_pairs)
_english, _spanish = train_sentence_pairs[0]
_eng_ints, _spa_ints = english_tokenizer.texts_to_sequences([_english])[0], spanish_tokenizer.texts_to_sequences([_spanish])[0]
print("Source language: ")
print('\n'.join('{:4d} ----> {}'.format(i, word) for i, word in zip(_eng_ints, _english.split())))
print("Target language: ")
print('\n'.join('{:4d} ----> {}'.format(i, word) for i, word in zip(_spa_ints, _spanish.split())))
# Set up datasets
BATCH_SIZE = 64
train_ds = make_dataset(train_sentence_pairs, english_tokenizer, spanish_tokenizer)
test_ds = make_dataset(test_sentence_pairs, english_tokenizer, spanish_tokenizer)
train_ds = train_ds.shuffle(len(train_sentence_pairs)).batch(BATCH_SIZE, drop_remainder=True)
test_ds = test_ds.batch(BATCH_SIZE, drop_remainder=True)
print("Dataset outputs elements with shape ({}, {})".format(
*train_ds.output_shapes))
|
_____no_output_____
|
Apache-2.0
|
community/en/nmt.ipynb
|
thezwick/examples
|
Write the encoder and decoder model

Here, we'll implement an encoder-decoder model with attention. The following diagram shows that each input word is assigned a weight by the attention mechanism, which is then used by the decoder to predict the next word in the sentence.

The input is put through an encoder model, which gives us the encoder output of shape *(batch_size, max_length, hidden_size)* and the encoder hidden state of shape *(batch_size, hidden_size)*.
|
ENCODER_SIZE = DECODER_SIZE = 1024
EMBEDDING_DIM = 256
MAX_OUTPUT_LENGTH = train_ds.output_shapes[1][1]
def gru(units):
return tf.keras.layers.GRU(units,
return_sequences=True,
return_state=True,
recurrent_activation='sigmoid',
recurrent_initializer='glorot_uniform')
class Encoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, encoder_size):
super(Encoder, self).__init__()
self.embedding_dim = embedding_dim
self.encoder_size = encoder_size
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = gru(encoder_size)
def call(self, x, hidden):
x = self.embedding(x)
output, state = self.gru(x, initial_state=hidden)
return output, state
def initial_hidden_state(self, batch_size):
return tf.zeros((batch_size, self.encoder_size))
|
_____no_output_____
|
Apache-2.0
|
community/en/nmt.ipynb
|
thezwick/examples
|
For the decoder, we're using *Bahdanau attention*. Here are the equations that are implemented. Let's decide on notation before writing the simplified form:

* FC = Fully connected (dense) layer
* EO = Encoder output
* H = hidden state
* X = input to the decoder

And the pseudo-code:

* `score = FC(tanh(FC(EO) + FC(H)))`
* `attention weights = softmax(score, axis = 1)`. Softmax by default is applied on the last axis, but here we want to apply it on the *1st axis*, since the shape of score is *(batch_size, max_length, 1)*. `max_length` is the length of our input. Since we are trying to assign a weight to each input word, softmax should be applied on that axis.
* `context vector = sum(attention weights * EO, axis = 1)`. Same reason as above for choosing axis 1.
* `embedding output` = the input to the decoder X is passed through an embedding layer.
* `merged vector = concat(embedding output, context vector)`
* This merged vector is then given to the GRU.

The shapes of all the vectors at each step have been specified in the comments in the code:
|
class BahdanauAttention(tf.keras.Model):
def __init__(self, units):
super(BahdanauAttention, self).__init__()
self.W1 = tf.keras.layers.Dense(units)
self.W2 = tf.keras.layers.Dense(units)
self.V = tf.keras.layers.Dense(1)
def call(self, hidden_state, enc_output):
# enc_output shape = (batch_size, max_length, hidden_size)
# (batch_size, hidden_size) -> (batch_size, 1, hidden_size)
hidden_with_time = tf.expand_dims(hidden_state, 1)
# score shape == (batch_size, max_length, 1)
score = self.V(tf.nn.tanh(self.W1(enc_output) + self.W2(hidden_with_time)))
# attention_weights shape == (batch_size, max_length, 1)
attention_weights = tf.nn.softmax(score, axis=1)
# context_vector shape after sum = (batch_size, hidden_size)
context_vector = attention_weights * enc_output
context_vector = tf.reduce_sum(context_vector, axis=1)
return context_vector, attention_weights
class Decoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, decoder_size):
super(Decoder, self).__init__()
self.vocab_size = vocab_size
self.embedding_dim = embedding_dim
self.decoder_size = decoder_size
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = gru(decoder_size)
self.fc = tf.keras.layers.Dense(vocab_size)
self.attention = BahdanauAttention(decoder_size)
def call(self, x, hidden, enc_output):
context_vector, attention_weights = self.attention(hidden, enc_output)
# x shape after passing through embedding == (batch_size, 1, embedding_dim)
x = self.embedding(x)
# x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)
# passing the concatenated vector to the GRU
output, state = self.gru(x)
# output shape == (batch_size, hidden_size)
output = tf.reshape(output, (-1, output.shape[2]))
# output shape == (batch_size, vocab)
x = self.fc(output)
return x, state, attention_weights
|
_____no_output_____
|
Apache-2.0
|
community/en/nmt.ipynb
|
thezwick/examples
|
Define a translate function

Now, let's put the encoder and decoder halves together. The encoder step is fairly straightforward; we'll just reuse Keras's dynamic unroll. For the decoder, we have to make some choices about how to feed the decoder RNN. Overall, the process goes as follows:

1. Pass the *input* through the *encoder*, which returns the *encoder output* and the *encoder hidden state*.
2. The encoder output, encoder hidden state and the <START> token are passed to the decoder.
3. The decoder returns the *predictions* and the *decoder hidden state*.
4. The encoder output, hidden state and next token are then fed back into the decoder repeatedly. This has two different behaviors under training and inference:
   - During training, we use *teacher forcing*, where the correct next token is fed into the decoder, regardless of what the decoder emitted.
   - During inference, we use `tf.argmax(predictions)` to select the most likely continuation and feed it back into the decoder. Another strategy that yields more robust results is called *beam search*.
5. Repeat step 4 until either the decoder emits an <END> token, indicating that it's done translating, or we run into a hardcoded length limit.
|
class NmtTranslator(tf.keras.Model):
def __init__(self, encoder, decoder, start_token_id, end_token_id):
super(NmtTranslator, self).__init__()
self.encoder = encoder
self.decoder = decoder
# (The token_id should match the decoder's language.)
# Uses start_token_id to initialize the decoder.
self.start_token_id = tf.constant(start_token_id)
# Check for sequence completion using this token_id
self.end_token_id = tf.constant(end_token_id)
@tf.function
def call(self, inp, target=None, max_output_length=MAX_OUTPUT_LENGTH):
'''Translate an input.
If target is provided, teacher forcing is used to generate the translation.
'''
batch_size = inp.shape[0]
hidden = self.encoder.initial_hidden_state(batch_size)
enc_output, enc_hidden = self.encoder(inp, hidden)
dec_hidden = enc_hidden
if target is not None:
output_length = target.shape[1]
else:
output_length = max_output_length
predictions_array = tf.TensorArray(tf.float32, size=output_length - 1)
attention_array = tf.TensorArray(tf.float32, size=output_length - 1)
# Feed <START> token to start decoder.
dec_input = tf.cast([self.start_token_id] * batch_size, tf.int32)
# Keep track of which sequences have emitted an <END> token
is_done = tf.zeros([batch_size], dtype=tf.bool)
for i in tf.range(output_length - 1):
dec_input = tf.expand_dims(dec_input, 1)
predictions, dec_hidden, attention_weights = self.decoder(dec_input, dec_hidden, enc_output)
predictions = tf.where(is_done, tf.zeros_like(predictions), predictions)
# Write predictions/attention for later visualization.
predictions_array = predictions_array.write(i, predictions)
attention_array = attention_array.write(i, attention_weights)
# Decide what to pass into the next iteration of the decoder.
if target is not None:
# if target is known, use teacher forcing
dec_input = target[:, i + 1]
else:
# Otherwise, pick the most likely continuation
dec_input = tf.argmax(predictions, axis=1, output_type=tf.int32)
# Figure out which sentences just completed.
is_done = tf.logical_or(is_done, tf.equal(dec_input, self.end_token_id))
# Exit early if all our sentences are done.
if tf.reduce_all(is_done):
break
# [time, batch, predictions] -> [batch, time, predictions]
return tf.transpose(predictions_array.stack(), [1, 0, 2]), tf.transpose(attention_array.stack(), [1, 0, 2, 3])
|
_____no_output_____
|
Apache-2.0
|
community/en/nmt.ipynb
|
thezwick/examples
|
Define the loss function

Our loss function is a word-for-word comparison between the true answer and the model prediction.

    real = ['<start>', 'This', 'is', 'the', 'correct', 'answer', '.', '<end>', '<oov>']
    pred = ['This', 'is', 'what', 'the', 'model', 'emitted', '.', '<end>']

results in comparing

    This/This, is/is, the/what, correct/the, answer/model, ./emitted, <end>/.

and ignoring the rest of the prediction.
|
def loss_fn(real, pred):
# The prediction doesn't include the <start> token.
real = real[:, 1:]
# Cut down the prediction to the correct shape (We ignore extra words).
pred = pred[:, :real.shape[1]]
# If real == <OOV>, then mask out the loss.
mask = 1 - np.equal(real, 0)
loss_ = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=real, logits=pred) * mask
# Sum loss over the time dimension, but average it over the batch dimension.
return tf.reduce_mean(tf.reduce_sum(loss_, axis=1))
|
_____no_output_____
|
Apache-2.0
|
community/en/nmt.ipynb
|
thezwick/examples
|
Configure model directory

We'll use one directory to save all of our relevant artifacts (summary logs, checkpoints, SavedModel exports, etc.).
|
# Where to save checkpoints, tensorboard summaries, etc.
MODEL_DIR = '/tmp/tensorflow/nmt_attention'
def apply_clean():
if tf.io.gfile.exists(MODEL_DIR):
print('Removing existing model dir: {}'.format(MODEL_DIR))
tf.io.gfile.rmtree(MODEL_DIR)
# Optional: remove existing data
apply_clean()
# Summary writers
train_summary_writer = tf.summary.create_file_writer(
os.path.join(MODEL_DIR, 'summaries', 'train'), flush_millis=10000)
test_summary_writer = tf.summary.create_file_writer(
os.path.join(MODEL_DIR, 'summaries', 'eval'), flush_millis=10000, name='test')
# Set up all stateful objects
encoder = Encoder(len(english_tokenizer.word_index) + 1, EMBEDDING_DIM, ENCODER_SIZE)
decoder = Decoder(len(spanish_tokenizer.word_index) + 1, EMBEDDING_DIM, DECODER_SIZE)
start_token_id = spanish_tokenizer.word_index[START_TOKEN]
end_token_id = spanish_tokenizer.word_index[END_TOKEN]
model = NmtTranslator(encoder, decoder, start_token_id, end_token_id)
# TODO(brianklee): Investigate whether Adam defaults have changed and whether it affects training.
optimizer = tf.keras.optimizers.Adam(epsilon=1e-8)# tf.keras.optimizers.SGD(learning_rate=0.01)#Adam()
# Checkpoints
checkpoint_dir = os.path.join(MODEL_DIR, 'checkpoints')
checkpoint_prefix = os.path.join(checkpoint_dir, 'ckpt')
checkpoint = tf.train.Checkpoint(
encoder=encoder, decoder=decoder, optimizer=optimizer)
# Restore variables on creation if a checkpoint exists.
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
# SavedModel exports
export_path = os.path.join(MODEL_DIR, 'export')
|
_____no_output_____
|
Apache-2.0
|
community/en/nmt.ipynb
|
thezwick/examples
|
Visualize the model's output

Let's visualize our model's output. (It hasn't been trained yet, so it will output gibberish.) We'll use this visualization to check on the model's progress.
|
from matplotlib import ticker  # needed for MultipleLocator below

def plot_attention(attention, sentence, predicted_sentence):
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(1, 1, 1)
ax.matshow(attention, cmap='viridis')
fontdict = {'fontsize': 14}
ax.set_xticklabels([''] + sentence.split(), fontdict=fontdict, rotation=90)
ax.set_yticklabels([''] + predicted_sentence.split(), fontdict=fontdict)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
plt.show()
def ints_to_words(tokenizer, ints):
return ' '.join(tokenizer.index_word[int(i)] if int(i) != 0 else '<OOV>' for i in ints)
def sentence_to_ints(tokenizer, sentence):
sentence = preprocess_sentence(sentence)
return tf.constant(tokenizer.texts_to_sequences([sentence])[0])
def translate_and_plot_ints(model, english_tokenizer, spanish_tokenizer, ints, target_ints=None):
"""Run translation on a sentence and plot an attention matrix.
Sentence should be passed in as list of integers.
"""
ints = tf.expand_dims(ints, 0)
predictions, attention = model(ints)
prediction_ids = tf.squeeze(tf.argmax(predictions, axis=-1))
attention = tf.squeeze(attention)
sentence = ints_to_words(english_tokenizer, ints[0])
predicted_sentence = ints_to_words(spanish_tokenizer, prediction_ids)
print(u'Input: {}'.format(sentence))
print(u'Predicted translation: {}'.format(predicted_sentence))
if target_ints is not None:
print(u'Correct translation: {}'.format(ints_to_words(spanish_tokenizer, target_ints)))
plot_attention(attention, sentence, predicted_sentence)
def translate_and_plot_words(model, english_tokenizer, spanish_tokenizer, sentence, target_sentence=None):
"""Same as translate_and_plot_ints, but pass in a sentence as a string."""
english_ints = sentence_to_ints(english_tokenizer, sentence)
spanish_ints = sentence_to_ints(spanish_tokenizer, target_sentence) if target_sentence is not None else None
translate_and_plot_ints(model, english_tokenizer, spanish_tokenizer, english_ints, target_ints=spanish_ints)
translate_and_plot_words(model, english_tokenizer, spanish_tokenizer, u"it's really cold here", u'hace mucho frio aqui')
|
_____no_output_____
|
Apache-2.0
|
community/en/nmt.ipynb
|
thezwick/examples
|
Train the model
|
def train(model, optimizer, dataset):
"""Trains model on `dataset` using `optimizer`."""
start = time.time()
avg_loss = tf.keras.metrics.Mean('loss', dtype=tf.float32)
for inp, target in dataset:
with tf.GradientTape() as tape:
predictions, _ = model(inp, target=target)
loss = loss_fn(target, predictions)
avg_loss(loss)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
if tf.equal(optimizer.iterations % 10, 0):
tf.summary.scalar('loss', avg_loss.result(), step=optimizer.iterations)
avg_loss.reset_states()
rate = 10 / (time.time() - start)
print('Step #%d\tLoss: %.6f (%.2f steps/sec)' % (optimizer.iterations, loss, rate))
start = time.time()
if tf.equal(optimizer.iterations % 100, 0):
# translate_and_plot_words(model, english_index, spanish_index, u"it's really cold here.", u'hace mucho frio aqui.')
translate_and_plot_ints(model, english_tokenizer, spanish_tokenizer, inp[0], target[0])
def test(model, dataset, step_num):
"""Perform an evaluation of `model` on the examples from `dataset`."""
avg_loss = tf.keras.metrics.Mean('loss', dtype=tf.float32)
for inp, target in dataset:
predictions, _ = model(inp)
loss = loss_fn(target, predictions)
avg_loss(loss)
print('Model test set loss: {:0.4f}'.format(avg_loss.result()))
tf.summary.scalar('loss', avg_loss.result(), step=step_num)
NUM_TRAIN_EPOCHS = 10
for i in range(NUM_TRAIN_EPOCHS):
start = time.time()
with train_summary_writer.as_default():
train(model, optimizer, train_ds)
end = time.time()
print('\nTrain time for epoch #{} ({} total steps): {}'.format(
i + 1, optimizer.iterations, end - start))
with test_summary_writer.as_default():
test(model, test_ds, optimizer.iterations)
checkpoint.save(checkpoint_prefix)
# TODO(brianklee): This seems to be complaining about input shapes not being set?
# tf.saved_model.save(model, export_path)
|
_____no_output_____
|
Apache-2.0
|
community/en/nmt.ipynb
|
thezwick/examples
|
Next steps

* [Download a different dataset](http://www.manythings.org/anki/) to experiment with translations, for example, English to German, or English to French. A sketch of how the download step might be adapted follows this list.
* Experiment with training on a larger dataset, or using more epochs.
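For the first suggestion, only the download/parse step of `load_anki_data()` needs to change. The sketch below is a hypothetical adaptation: the archive name `deu-eng.zip`, the extracted file name `deu.txt`, and the exact column layout are assumptions to verify against http://www.manythings.org/anki/ before use.

```python
import os
import tensorflow as tf

def download_anki_pairs(zip_name='deu-eng.zip', txt_name='deu.txt',
                        base_url='http://www.manythings.org/anki/'):
    """Download an Anki archive and return raw (English, translation) string pairs."""
    path_to_zip = tf.keras.utils.get_file(
        zip_name, origin=base_url + zip_name, extract=True)
    path_to_file = os.path.join(os.path.dirname(path_to_zip), txt_name)
    with open(path_to_file, encoding='utf-8') as f:
        lines = f.read().strip().split('\n')
    # Lines are tab-separated; some archives carry an extra attribution column,
    # so keep only the first two fields of each line.
    return [line.split('\t')[:2] for line in lines]
```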
|
_____no_output_____
|
Apache-2.0
|
community/en/nmt.ipynb
|
thezwick/examples
|
|
OpenAI Gym

We're going to spend the next several weeks learning algorithms that solve decision processes, so we need some interesting decision problems to test our algorithms on.

That's where OpenAI Gym comes into play. It's a Python library that wraps many classical decision problems, including robot control, video games and board games.

So here's how it works:
|
import gym
import matplotlib.pyplot as plt

env = gym.make("MountainCar-v0")
plt.imshow(env.render('rgb_array'))
print("Observation space:", env.observation_space)
print("Action space:", env.action_space)
|
Observation space: Box(2,)
Action space: Discrete(3)
|
MIT
|
gym_basics_mountain_car_v0.ipynb
|
PratikSavla/Reinformemt-Learning-Examples
|
Note: if you're running this on your local machine, you'll see a window pop up with the image above. Don't close it, just alt-tab away.

Gym interface

The three main methods of an environment are:

* __reset()__ - reset the environment to its initial state and _return the first observation_
* __render()__ - show the current environment state (a more colorful version :) )
* __step(a)__ - commit action __a__ and return (new observation, reward, is done, info)
  * _new observation_ - the observation right after committing the action __a__
  * _reward_ - a number representing your reward for committing action __a__
  * _is done_ - True if the MDP has just finished, False if still in progress
  * _info_ - some auxiliary stuff about what just happened. Ignore it ~~for now~~.

A minimal episode loop using these three methods is sketched after this list.
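To make the interface concrete, here is a minimal sketch (not part of the assignment) of a complete episode loop with random actions, using the same 4-tuple `step()` API as the cells in this notebook:

```python
import gym

env = gym.make("MountainCar-v0")
obs = env.reset()                               # start a new episode
total_reward, done = 0.0, False
while not done:
    action = env.action_space.sample()          # pick a random action
    obs, reward, done, info = env.step(action)  # advance the MDP by one step
    total_reward += reward
print("Episode finished, total reward:", total_reward)
```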
|
obs0 = env.reset()
print("initial observation code:", obs0)
# Note: in MountainCar, observation is just two numbers: car position and velocity
print("taking action 2 (right)")
new_obs, reward, is_done, _ = env.step(2)
print("new observation code:", new_obs)
print("reward:", reward)
print("is game over?:", is_done)
# Note: as you can see, the car has moved slightly to the right (by around 0.0005)
|
taking action 2 (right)
new observation code: [ -4.02551631e-01 1.12759220e-04]
reward: -1.0
is game over?: False
|
MIT
|
gym_basics_mountain_car_v0.ipynb
|
PratikSavla/Reinformemt-Learning-Examples
|
Play with it

Below is the code that drives the car to the right. However, it doesn't reach the flag at the far right due to gravity. __Your task__ is to fix it. Find a strategy that reaches the flag. You're not required to build any sophisticated algorithms for now; feel free to hard-code :)

_Hint: your action at each step should depend either on __t__ or on __s__._
|
# create env manually to set time limit. Please don't change this.
TIME_LIMIT = 250
env = gym.wrappers.TimeLimit(gym.envs.classic_control.MountainCarEnv(),
max_episode_steps=TIME_LIMIT + 1)
s = env.reset()
actions = {'left': 0, 'stop': 1, 'right': 2}
# prepare "display"
%matplotlib notebook
fig = plt.figure()
ax = fig.add_subplot(111)
fig.show()
def policy(t):
if t>50 and t<100:
return actions['left']
else:
return actions['right']
for t in range(TIME_LIMIT):
s, r, done, _ = env.step(policy(t))
#draw game image on display
ax.clear()
ax.imshow(env.render('rgb_array'))
fig.canvas.draw()
if done:
print("Well done!")
break
else:
print("Time limit exceeded. Try again.")
|
WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.
|
MIT
|
gym_basics_mountain_car_v0.ipynb
|
PratikSavla/Reinformemt-Learning-Examples
|
Submit to Coursera
|
from submit import submit_interface
submit_interface(policy, "[email protected]", "IT3M0zwksnBtCJXV")
|
WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.
Submitted to Coursera platform. See results on assignment page!
|
MIT
|
gym_basics_mountain_car_v0.ipynb
|
PratikSavla/Reinformemt-Learning-Examples
|
Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Kyle T. Mandli
|
%matplotlib inline
import numpy
import matplotlib.pyplot as plt
|
_____no_output_____
|
CC0-1.0
|
04_BVP_problems.ipynb
|
lihu8918/numerical-methods-pdes
|
Boundary Value Problems: Discretization

Model Problems

The simplest boundary value problem (BVP) we will run into is the one-dimensional version of Poisson's equation
$$ u''(x) = f(x).$$
Usually we solve this equation on a finite interval with either Dirichlet or Neumann boundary conditions. Because there are two derivatives in the equation, we need two boundary conditions to solve the PDE (really an ODE in this case) uniquely.

To start, let us consider the following basic problem
$$\begin{aligned} u''(x) = f(x) ~~~ \Omega = [a, b] \\ u(a) = \alpha ~~~ u(b) = \beta.\end{aligned}$$

BVPs of this sort are often the result of looking at the steady-state form of a time-dependent PDE. For instance, if we were considering the steady-state solution to the heat equation
$$ u_t(x,t) = \kappa u_{xx}(x,t) + \Psi(x,t) ~~~~ \Omega = [0, T] \times [a, b] \\ u(x, 0) = u^0(x) ~~~ u(a, t) = \alpha(t) ~~~ u(b, t) = \beta(t)$$
we would solve the equation where $u_t = 0$ and arrive at
$$ u''(x) = - \Psi / \kappa,$$
a version of Poisson's equation above.

In higher spatial dimensions the second derivative turns into a Laplacian. Notation varies for this, but all these are equivalent statements:
$$\begin{aligned} \nabla^2 u(\vec{x}) &= f(\vec{x}) \\ \Delta u(\vec{x}) &= f(\vec{x}) \\ \sum^N_{i=1} u_{x_i x_i} &= f(\vec{x}).\end{aligned}$$

One-Dimensional Discretization

As a first approach to solving the one-dimensional Poisson's equation, let's break up the domain into `m` points, often called a *mesh* or *grid*. Our goal is to approximate the unknown function $u(x)$ at the mesh points $x_i$. First we can relate the number of mesh points `m` to the spacing between them with
$$ \Delta x = \frac{b - a}{m + 1}.$$
The mesh points $x_i$ can be written as
$$ x_i = a + i \Delta x.$$
We can let $\Delta x$ vary, and many of the formulas above need only minor modifications, but we will leave that for homework. Notationally we will also adopt
$$ U_i \approx u(x_i)$$
so that the $U_i$ are the approximate solution at the grid points, and we retain the lower-case $u$ to denote the true solution.

To simplify our discussion, let's consider the ODE
$$ u''(x) = f(x) ~~~ \Omega = [0, 1] \\ u(0) = \alpha ~~~ u(1) = \beta.$$

Applying the 2nd order, centered difference approximation for the 2nd derivative, we have
$$ D^2 U_i = \frac{1}{\Delta x^2} (U_{i+1} - 2 U_i + U_{i-1})$$
so that we end up with the approximate algebraic expression at every grid point of
$$ \frac{1}{\Delta x^2} (U_{i+1} - 2 U_i + U_{i-1}) = f(x_i) ~~~ i = 1, 2, 3, \ldots, m.$$

Note at this point that these algebraic equations are coupled, as each $U_i$ depends on its neighbors. This means we can write them as a system of coupled equations
$$ A U = F.$$

Write the system of equations
$$ \frac{1}{\Delta x^2} (U_{i+1} - 2 U_i + U_{i-1}) = f(x_i) ~~~ i = 1, 2, 3, \ldots, m.$$
Note the boundary conditions!

$$ \frac{1}{\Delta x^2} \begin{bmatrix} -2 & 1 & & & \\ 1 & -2 & 1 & & \\ & 1 & -2 & 1 & \\ & & 1 & -2 & 1 \\ & & & 1 & -2 \\ \end{bmatrix} \begin{bmatrix} U_1 \\ U_2 \\ U_3 \\ U_4 \\ U_5 \end{bmatrix} = \begin{bmatrix} f(x_1) - \frac{\alpha}{\Delta x^2} \\ f(x_2) \\ f(x_3) \\ f(x_4) \\ f(x_5) - \frac{\beta}{\Delta x^2} \\ \end{bmatrix}.$$

Example

Want to solve the BVP
$$ u_{xx} = e^x, ~~~~ x \in [0, 1] ~~~~ \text{with} ~~~~ u(0) = 0.0, \text{ and } u(1) = 3$$
via the construction of a linear system of equations.
$$\begin{aligned} u_{xx} &= e^x \\ u_x &= A + e^x \\ u &= Ax + B + e^x\\ u(0) &= B + 1 = 0 \Rightarrow B = -1 \\ u(1) &= A - 1 + e^{1} = 3 \Rightarrow A = 4 - e\\ ~\\ u(x) &= (4 - e) x - 1 + e^x\end{aligned}$$
|
# Problem setup
a = 0.0
b = 1.0
u_a = 0.0
u_b = 3.0
f = lambda x: numpy.exp(x)
u_true = lambda x: (4.0 - numpy.exp(1.0)) * x - 1.0 + numpy.exp(x)
# Discretization
m = 10
x_bc = numpy.linspace(a, b, m + 2)
x = x_bc[1:-1]
delta_x = (b - a) / (m + 1)
# Construct matrix A
A = numpy.zeros((m, m))
diagonal = numpy.ones(m) / delta_x**2
A += numpy.diag(diagonal * -2.0, 0)
A += numpy.diag(diagonal[:-1], 1)
A += numpy.diag(diagonal[:-1], -1)
# Construct RHS
b = f(x)
b[0] -= u_a / delta_x**2
b[-1] -= u_b / delta_x**2
# Solve system
U = numpy.empty(m + 2)
U[0] = u_a
U[-1] = u_b
U[1:-1] = numpy.linalg.solve(A, b)
# Plot result
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x_bc, U, 'o', label="Computed")
axes.plot(x_bc, u_true(x_bc), 'k', label="True")
axes.set_title("Solution to $u_{xx} = e^x$")
axes.set_xlabel("x")
axes.set_ylabel("u(x)")
plt.show()
|
_____no_output_____
|
CC0-1.0
|
04_BVP_problems.ipynb
|
lihu8918/numerical-methods-pdes
|
Error Analysis A natural question to ask given our approximation $U_i$ is how close this is to the true solution $u(x)$ at the grid points $x_i$. To address this we will define the error $E$ as$$ E = U - \hat{U}$$where $U$ is the vector of the approximate solution and $\hat{U}$ is the vector composed of the $u(x_i)$. This leaves $E$ as a vector, so often we ask how the norm of $E$ behaves given a particular $\Delta x$. For the $\infty$-norm we would have$$ ||E||_\infty = \max_{1 \leq i \leq m} |E_i| = \max_{1 \leq i \leq m} |U_i - u(x_i)|.$$ If we can show that $||E||_\infty$ goes to zero as $\Delta x \rightarrow 0$ we can then claim that the error in the approximate solution at every grid point goes to zero, i.e. $E_i \rightarrow 0$. If we would like to use other norms we often define slightly modified versions of the norms that also contain the grid width $\Delta x$, namely$$\begin{aligned} ||E||_1 &= \Delta x \sum^m_{i=1} |E_i| \\ ||E||_2 &= \left( \Delta x \sum^m_{i=1} |E_i|^2 \right )^{1/2}.\end{aligned}$$These are referred to as *grid function norms*. The $E$ defined above is known as the *global error*. One of our main goals throughout this course is to understand how $E$ behaves given other factors that we will define later. Local Truncation Error The *local truncation error* (LTE) can be defined by replacing the approximate solution $U_i$ by the true solution $u(x_i)$. Since the algebraic equations are an approximation to the original BVP, we do not expect that the true solution will exactly satisfy these equations; the resulting difference is the LTE. For our one-dimensional finite difference approximation from above we have$$ \frac{1}{\Delta x^2} (U_{i+1} - 2 U_i + U_{i-1}) = f(x_i).$$ Replacing $U_i$ with $u(x_i)$ in this equation leads to$$ \tau_i = \frac{1}{\Delta x^2} (u(x_{i+1}) - 2 u(x_i) + u(x_{i-1})) - f(x_i).$$ In this form the LTE is not as useful, but if we assume $u(x)$ is smooth we can replace the $u(x_i)$ with their Taylor series counterparts, similar to what we did for finite differences. The relevant Taylor series are$$ u(x_{i \pm 1}) = u(x_i) \pm u'(x_i) \Delta x + \frac{1}{2} u''(x_i) \Delta x^2 \pm \frac{1}{6} u'''(x_i) \Delta x^3 + \frac{1}{24} u^{(4)}(x_i) \Delta x^4 + \mathcal{O}(\Delta x^5).$$ This leads to an expression for $\tau_i$ of$$\begin{aligned} \tau_i &= \frac{1}{\Delta x^2} \left [u''(x_i) \Delta x^2 + \frac{1}{12} u^{(4)}(x_i) \Delta x^4 + \mathcal{O}(\Delta x^6) \right ] - f(x_i) \\ &= u''(x_i) + \frac{1}{12} u^{(4)}(x_i) \Delta x^2 + \mathcal{O}(\Delta x^4) - f(x_i) \\ &= \frac{1}{12} u^{(4)}(x_i) \Delta x^2 + \mathcal{O}(\Delta x^4)\end{aligned}$$where we note that the true solution satisfies $u''(x) = f(x)$ and that the odd-order terms of the two Taylor expansions cancel. As long as $ u^{(4)}(x_i) $ remains finite (i.e. $u$ is smooth enough) we know that $\tau_i \rightarrow 0$ as $\Delta x \rightarrow 0$. We can also write the vector of LTEs as$$ \tau = A \hat{U} - F$$which implies$$ A\hat{U} = F + \tau.$$ Global Error What we really want to bound is the global error $E$. To relate the global error and LTE we can substitute $E = U - \hat{U}$ into our expression for the LTE to find$$ A E = -\tau.$$This means that the global error is the solution to the same system of equations we defined for the approximation, except with $-\tau$ as the forcing function rather than $F$!
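To make the grid function norms concrete, here is a minimal sketch (assuming the arrays `U`, `x_bc`, the lambda `u_true`, and `delta_x` from the example cell above are still in scope, along with the notebook's numpy import):

# Grid function norms of the global error at the interior grid points
E = U[1:-1] - u_true(x_bc[1:-1])
error_1 = delta_x * numpy.sum(numpy.abs(E))
error_2 = numpy.sqrt(delta_x * numpy.sum(numpy.abs(E)**2))
error_inf = numpy.max(numpy.abs(E))
print(error_1, error_2, error_inf)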
The relation $A E = -\tau$ also implies that the global error $E$ can be thought of as an approximation to the solution of a BVP similar to the one we started with, where$$ e''(x) = -\tau(x) ~~~ \Omega = [0, 1] \\ e(0) = 0 ~~~ e(1) = 0.$$ We can solve this ODE directly by integrating twice to find, to leading order,$$\begin{aligned} e(x) &\approx -\frac{1}{12} \Delta x^2 u''(x) + \frac{1}{12} \Delta x^2 (u''(0) + x (u''(1) - u''(0))) \\ &= \mathcal{O}(\Delta x^2) \\ &\rightarrow 0 ~~~ \text{as} ~~~ \Delta x \rightarrow 0.\end{aligned}$$ Stability We showed that the continuous analog to $E$, $e(x)$, does in fact go to zero as $\Delta x \rightarrow 0$, but what about $E$ itself? Instead of showing something based on $e(x)$ let's look back at the original system of equations for the global error$$ A^{\Delta x} E^{\Delta x} = - \tau^{\Delta x}$$where we now denote a particular realization of the system by the corresponding grid spacing $\Delta x$. If we could invert $A^{\Delta x}$ we could compute $E^{\Delta x}$ directly. Assuming that we can and taking an appropriate norm we find$$\begin{aligned} E^{\Delta x} &= -(A^{\Delta x})^{-1} \tau^{\Delta x} \\ ||E^{\Delta x}|| &= ||(A^{\Delta x})^{-1} \tau^{\Delta x}|| \\ & \leq ||(A^{\Delta x})^{-1} ||~|| \tau^{\Delta x}||.\end{aligned}$$ We already know that $\tau^{\Delta x} \rightarrow 0$ as $\Delta x \rightarrow 0$ for our example, so if we can bound the norm of the matrix $(A^{\Delta x})^{-1}$ by some constant $C$ for sufficiently small $\Delta x$ we can then write a bound on the global error of$$ ||E^{\Delta x}|| \leq C ||\tau^{\Delta x}||$$demonstrating that $E^{\Delta x} \rightarrow 0 $ at least as fast as $\tau^{\Delta x} \rightarrow 0$. We can generalize this observation to all linear BVP problems by supposing that we have a finite difference approximation to a linear BVP of the form$$ A^{\Delta x} U^{\Delta x} = F^{\Delta x},$$where $\Delta x$ is the grid spacing. We say the approximation is *stable* if $(A^{\Delta x})^{-1}$ exists $\forall \Delta x < \Delta x_0$ and there is a constant $C$ such that$$ ||(A^{\Delta x})^{-1}|| \leq C ~~~~ \forall \Delta x < \Delta x_0.$$ Consistency A related and important idea for the discretization of any PDE is that it be consistent with the equation we are approximating. If$$ ||\tau^{\Delta x}|| \rightarrow 0 ~~\text{as}~~ \Delta x \rightarrow 0$$then we say an approximation is *consistent* with the differential equation. Convergence We now have all the pieces to say something about the global error $E$. A method is said to be *convergent* if$$ ||E^{\Delta x}|| \rightarrow 0 ~~~ \text{as} ~~~ \Delta x \rightarrow 0.$$ If an approximation is both consistent ($||\tau^{\Delta x}|| \rightarrow 0 ~~\text{as}~~ \Delta x \rightarrow 0$) and stable ($||(A^{\Delta x})^{-1}|| \leq C$, so that $||E^{\Delta x}|| \leq C ||\tau^{\Delta x}||$) then the approximation is convergent. We have only derived this in the case of linear BVPs but in fact these criteria for convergence are often found to be true for any finite difference approximation (and beyond for that matter). This statement of convergence can also often be strengthened to say$$ \mathcal{O}(\Delta x^p) ~\text{LTE}~ + ~\text{stability} ~ \Rightarrow \mathcal{O}(\Delta x^p) ~\text{global error}.$$ It turns out the most difficult part of this process is usually the statement regarding stability. In the next section we will see for our simple example how we can prove stability in the 2-norm.
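Before that proof, a quick empirical sanity check of the convergence claim (a sketch that reuses the lambdas `f` and `u_true` from the example above and the notebook's numpy import); the $\infty$-norm of the global error should drop by roughly a factor of four each time $\Delta x$ is halved:

def solve_poisson_dirichlet(m, a=0.0, b=1.0, alpha=0.0, beta=3.0):
    # Second order centered discretization of u'' = f with Dirichlet data
    x_bc = numpy.linspace(a, b, m + 2)
    delta_x = (b - a) / (m + 1)
    A = numpy.zeros((m, m))
    diagonal = numpy.ones(m) / delta_x**2
    A += numpy.diag(diagonal * -2.0, 0)
    A += numpy.diag(diagonal[:-1], 1)
    A += numpy.diag(diagonal[:-1], -1)
    rhs = f(x_bc[1:-1])
    rhs[0] -= alpha / delta_x**2
    rhs[-1] -= beta / delta_x**2
    U = numpy.empty(m + 2)
    U[0], U[-1] = alpha, beta
    U[1:-1] = numpy.linalg.solve(A, rhs)
    return x_bc, U

for m in (10, 20, 40, 80):
    x_check, U_check = solve_poisson_dirichlet(m)
    print(m, numpy.max(numpy.abs(U_check - u_true(x_check))))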
Stability in the 2-Norm Recalling our definition of stability, we need to show for our previously defined $A$ that$$ (A^{\Delta x})^{-1}$$exists and$$ ||(A^{\Delta x})^{-1}|| \leq C ~~~ \forall \Delta x < \Delta x_0$$for some $C$. We can show that $A$ is in fact invertible, but can we bound the norm of the inverse? Recall that the 2-norm of a symmetric matrix is equal to its spectral radius, i.e.$$ ||A||_2 = \rho(A) = \max_{1\leq p \leq m} |\lambda_p|.$$ Since the inverse of $A$ is also symmetric, the eigenvalues of $A^{-1}$ are the inverses of the eigenvalues of $A$, implying that$$ ||A^{-1}||_2 = \rho(A^{-1}) = \max_{1\leq p \leq m} \left| \frac{1}{\lambda_p} \right| = \frac{1}{\min_{1\leq p \leq m} \left| \lambda_p \right|}.$$ If the eigenvalues $\lambda_p$ of $A$ stay bounded away from zero as $\Delta x \rightarrow 0$ we have therefore shown the stability of the approximation. The eigenvalues of the matrix $A$ from above can be written as$$ \lambda_p = \frac{2}{\Delta x^2} (\cos(p \pi \Delta x) - 1)$$with the corresponding eigenvectors $v^p$ given by$$ v^p_j = \sin(p \pi j \Delta x)$$as the $j$th component with $j = 1, \ldots, m$. Check that these are in fact the eigenpairs of the matrix $A$:$$ \lambda_p = \frac{2}{\Delta x^2} (\cos(p \pi \Delta x) - 1)$$$$ v^p_j = \sin(p \pi j \Delta x)$$ $$\begin{aligned} (A v^p)_j &= \frac{1}{\Delta x^2} (v^p_{j-1} - 2 v^p_j + v^p_{j+1} ) \\ &= \frac{1}{\Delta x^2} (\sin(p \pi (j-1) \Delta x) - 2 \sin(p \pi j \Delta x) + \sin(p \pi (j+1) \Delta x) ) \\ &= \frac{1}{\Delta x^2} (\sin(p \pi j \Delta x) \cos(p \pi \Delta x) - 2 \sin(p \pi j \Delta x) + \sin(p \pi j \Delta x) \cos(p \pi \Delta x)) \\ &= \lambda_p v^p_j.\end{aligned}$$ Compute the smallest eigenvalue If we can show that the eigenvalues stay away from the origin then we know $||A^{-1}||_2$ will be bounded. In this case the eigenvalues are negative so we need to show that they are always strictly less than zero.$$ \lambda_p = \frac{2}{\Delta x^2} (\cos(p \pi \Delta x) - 1)$$Use a Taylor series to get an idea of how this behaves with respect to $\Delta x$. From these expressions we know that the smallest eigenvalue in magnitude is$$\begin{aligned} \lambda_1 &= \frac{2}{\Delta x^2} (\cos(\pi \Delta x) - 1) \\ &= \frac{2}{\Delta x^2} \left (-\frac{1}{2} \pi^2 \Delta x^2 + \frac{1}{24} \pi^4 \Delta x^4 + \mathcal{O}(\Delta x^6) \right ) \\ &= -\pi^2 + \mathcal{O}(\Delta x^2).\end{aligned}$$Note that this also gives us an error bound, as this eigenvalue leads to the largest eigenvalue (in magnitude) of the inverse matrix. We can therefore say$$ ||E^{\Delta x}||_2 \leq ||(A^{\Delta x})^{-1}||_2 ||\tau^{\Delta x}||_2 \approx \frac{1}{\pi^2} ||\tau^{\Delta x}||_2.$$ Stability in the $\infty$-Norm The straightforward approach to show that $||E||_\infty \rightarrow 0$ as $\Delta x \rightarrow 0$ would be to use the matrix bound$$ ||E||_\infty \leq \frac{1}{\sqrt{\Delta x}} ||E||_2.$$ For our example problem we showed that $||E||_2 = \mathcal{O}(\Delta x^2)$, so this implies that we at least know that $||E||_\infty = \mathcal{O}(\Delta x^{3/2})$. This is unfortunate as we expect $||E||_\infty = \mathcal{O}(\Delta x^{2})$ due to the discretization. In order to alleviate this problem let's go back and consider our definition of stability, but this time consider the $\infty$-norm. It turns out that each column of the matrix $A^{-1}$ can be seen as a discrete approximation to a *Green's function*.
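Returning briefly to the 2-norm result, the eigenvalue formula above is easy to check numerically; a minimal, self-contained sketch (assuming only the notebook's numpy import):

# Verify lambda_p = 2/delta_x^2 (cos(p pi delta_x) - 1) against numpy's eigensolver
m_check = 10
dx_check = 1.0 / (m_check + 1)
A_check = (numpy.diag(numpy.ones(m_check) * -2.0)
           + numpy.diag(numpy.ones(m_check - 1), 1)
           + numpy.diag(numpy.ones(m_check - 1), -1)) / dx_check**2
p = numpy.arange(1, m_check + 1)
lambda_exact = 2.0 / dx_check**2 * (numpy.cos(p * numpy.pi * dx_check) - 1.0)
print(numpy.allclose(numpy.sort(lambda_exact), numpy.linalg.eigvalsh(A_check)))  # True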
Green's functions are more broadly applicable later on, so we will spend some time reviewing their theory and apply them to our simple example problem. Green's Functions Consider the BVP with Dirichlet boundary conditions$$ u''(x) = f(x) ~~~~ \Omega = [0, 1] \\ u(0) = \alpha ~~~~ u(1) = \beta.$$Pick a fixed point $\bar{x} \in \Omega$; the Green's function $G(x ; \bar{x})$ solves the BVP above with$$ f(x) = \delta(x - \bar{x})$$and $\alpha = \beta = 0$. You could think of this as the steady state of the heat equation with a point loss of heat somewhere in the domain. To find the Green's function for our particular problem we can integrate just around the point $\bar{x}$ near the $\delta$ function source to find$$\begin{aligned} \int^{\bar{x} + \epsilon}_{\bar{x} - \epsilon} u''(x) dx &= \int^{\bar{x} + \epsilon}_{\bar{x} - \epsilon} \delta(x - \bar{x}) dx \\ u'(\bar{x} + \epsilon) - u'(\bar{x} - \epsilon) &= 1\end{aligned}$$recalling that by definition the integral of the $\delta$ function must be 1 if the interval of integration includes $\bar{x}$. We see that the jump in the derivative at $\bar{x}$ from the left to the right should be 1. After a bit of algebra we can solve for the Green's function for our model BVP as$$ G(x; \bar{x}) = \left \{ \begin{aligned} (\bar{x} - 1) x & & 0 \leq x \leq \bar{x} \\ \bar{x} (x - 1) & & \bar{x} \leq x \leq 1 \end{aligned} \right . .$$ One important property of linear PDEs (or ODEs) in general is that they exhibit the principle of superposition. The reason we care about this with Green's functions is that if we have an $f(x)$ composed of two $\delta$ functions, the solution is simply the sum of the corresponding two Green's functions. For instance if$$ f(x) = \delta(x - 0.25) + 2 \delta(x - 0.5)$$then$$ u(x) = G(x ; 0.25) + 2 G(x ; 0.5).$$ This of course can be extended to an infinite number of $\delta$ functions so that$$ f(x) = \int^1_0 f(\bar{x}) \delta(x - \bar{x}) d\bar{x}$$and therefore$$ u(x) = \int^1_0 f(\bar{x}) G(x ; \bar{x}) d\bar{x}.$$ To incorporate the effects of boundary conditions we can continue to add terms to the solution to find the general solution of our original BVP as$$ u(x) = \alpha (1 - x) + \beta x + \int^1_0 f(\bar{x}) G(x ; \bar{x}) d\bar{x}.$$ So why did we do all this? Well, the Green's function solution representation above can be thought of as a linear operator on the function $f(x)$. Written in perhaps more familiar terms we have$$ \mathcal{A} u = f ~~~~ u = \mathcal{A}^{-1} f.$$We see now that our linear operator $\mathcal{A}$ may be the continuous analog to our discrete matrix $A$. To proceed we will modify our original matrix $A$ into a slightly different version based on the same discretization. Instead of moving the boundary terms to the right-hand side of the equation we will introduce two new "unknowns", called *ghost cells*, that will be placed at the edges of the grid. We will label these $U_0$ and $U_{m+1}$. In reality we know the values at these points, they are the boundary conditions!
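Before writing down the modified system, here is a minimal sketch of the Green's function and the superposition formula above (assuming only the notebook's numpy import):

def greens_function(x, x_bar):
    # G(x; x_bar) for u'' = delta(x - x_bar) on [0, 1] with u(0) = u(1) = 0
    x = numpy.asarray(x, dtype=float)
    return numpy.where(x <= x_bar, (x_bar - 1.0) * x, x_bar * (x - 1.0))

# Superposition for f(x) = delta(x - 0.25) + 2 delta(x - 0.5)
x_plot = numpy.linspace(0.0, 1.0, 101)
u_plot = greens_function(x_plot, 0.25) + 2.0 * greens_function(x_plot, 0.5)
# For general f and boundary data the solution formula above becomes
# u(x) = alpha * (1 - x) + beta * x + integral of f(x_bar) G(x; x_bar) d x_bar.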
The modified system then looks like$$ A = \frac{1}{\Delta x^2} \begin{bmatrix} \Delta x^2 & 0 \\ 1 & -2 & 1 \\ & 1 & -2 & 1 \\ & & \ddots & \ddots & \ddots \\ & & & 1 & -2 & 1 \\ & & & & 1 & -2 & 1 \\ & & & & & 0 & \Delta x^2 \end{bmatrix} ~~~ U = \begin{bmatrix} U_0 \\ U_1 \\ \vdots \\ U_m \\ U_{m+1} \end{bmatrix}~~~~~ F = \begin{bmatrix} \alpha \\ f(x_1) \\ \vdots \\ f(x_{m}) \\ \beta \end{bmatrix} $$ This has the advantage later that we can implement more general boundary conditions, and it isolates the algebraic dependence on the boundary conditions. The drawback is that the matrix no longer has as simple a form as before. Let's finally turn to the form of the matrix $A^{-1}$. Introducing a bit more notation, let $A_{j}$ denote the $j$th column and $A_{ij}$ denote the $(i,j)$th element of the matrix $A$. We know that $$ A A^{-1}_j = e_j$$where $e_j$ is the unit vector with $1$ in the $j$th row ($j$th column of the identity matrix). Note that the above system has some similarities to a discretized version of the Green's function problem. Here $e_j$ represents the $\delta$ function, $A$ the original operator, and $A^{-1}_j$ the effect that the $j$th $\delta$ function (corresponding to $\bar{x} = x_j$) has on the full solution. It turns out that we can write down the inverse matrix directly using Green's functions (see LeVeque for the details) and we end up with$$ A^{-1}_{ij} = \Delta x \, G(x_i ; x_j) = \left \{ \begin{aligned} \Delta x (x_j - 1) x_i, & & i &= 1, 2, \ldots, j \\ \Delta x (x_i - 1) x_j, & & i &= j, j+1, \ldots , m \end{aligned} \right . .$$ We can also write the effective right-hand side of our system as$$ F = \alpha e_0 + \beta e_{m+1} + \sum^m_{j=1} f_j e_j$$and finally the solution as$$ U = \alpha A^{-1}_{0} + \beta A^{-1}_{m+1} + \sum^m_{j=1} f_j A^{-1}_{j}$$whose elements are$$ U_i = \alpha(1 - x_i) + \beta x_i + \Delta x \sum^m_{j=1} f_j G(x_i ; x_j).$$ Alright, where has all this gotten us? Well, since we now know the form of $A^{-1}$ we may be able to get at the $\infty$-norm of this matrix. Recall that the $\infty$-norm of a matrix (induced by the $\infty$-norm for vectors) is$$ || C ||_\infty = \max_{0\leq i \leq m+1} \sum^{m+1}_{j=0} |C_{ij}|.$$ Note that due to the form of the matrix $A^{-1}$ the first row's sum is $$ \sum^{m+1}_{j=0} A_{0j}^{-1} = 1$$as is the last row's. We also know that for the other rows $A^{-1}_{i,0} < 1$ and $A^{-1}_{i,m+1} < 1$. The intermediate rows are also all bounded, as$$ \sum^{m+1}_{j=0} |A^{-1}_{ij}| \leq 1 + 1 + m \Delta x < 3$$using the fact that$$ \Delta x = \frac{1}{m+1}.$$This completes our stability wanderings, as we can now say definitively that$$ ||A^{-1}||_\infty < 3 ~~~ \forall \Delta x.$$ Neumann Boundary Conditions As mentioned before, we can incorporate other types of boundary conditions into our discretization using the modified version of our matrix. Let's try to do this for our original problem but with one side having Neumann boundary conditions:$$ u''(x) = f(x) ~~~ \Omega = [-1, 1] \\ u(-1) = \alpha ~~~ u'(1) = \sigma.$$ **Group Work**$$ u''(x) = f(x) ~~~ \Omega = [-1, 1] \\ u(-1) = \alpha ~~~ u'(1) = \sigma.$$ The true solution for the data used below is $u(x) = -(5 + e) x - (2 + e + e^{-1}) + e^x$. Explore implementing the Neumann boundary condition by 1. using a one-sided 1st order expression, 2. using a centered 2nd order expression, and 3. using a one-sided 2nd order expression.
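Before working through the group exercise in the cells that follow, a quick numerical confirmation of the $\infty$-norm bound derived above (a sketch assuming only the notebook's numpy import):

def ghost_cell_matrix(m, a=0.0, b=1.0):
    # Ghost-point form of the Dirichlet matrix from the discussion above
    delta_x = (b - a) / (m + 1)
    A = numpy.zeros((m + 2, m + 2))
    A[0, 0] = 1.0       # row enforcing U_0 = alpha
    A[-1, -1] = 1.0     # row enforcing U_{m+1} = beta
    for i in range(1, m + 1):
        A[i, i - 1:i + 2] = numpy.array([1.0, -2.0, 1.0]) / delta_x**2
    return A

for m_test in (10, 50, 100):
    A_test = ghost_cell_matrix(m_test)
    # Induced infinity-norm of the inverse; it should stay well below 3
    print(m_test, numpy.linalg.norm(numpy.linalg.inv(A_test), ord=numpy.inf))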
|
def solve_mixed_1st_order_one_sided(m):
# Problem setup
a = -1.0
b = 1.0
alpha = 3.0
sigma = -5.0
f = lambda x: numpy.exp(x)
# Discretization
x_bc = numpy.linspace(a, b, m + 2)
x = x_bc[1:-1]
delta_x = (b - a) / (m + 1)
# Construct matrix A
A = numpy.zeros((m + 2, m + 2))
diagonal = numpy.ones(m + 2) / delta_x**2
A += numpy.diag(diagonal * -2.0, 0)
A += numpy.diag(diagonal[:-1], 1)
A += numpy.diag(diagonal[:-1], -1)
# Construct RHS
b = f(x_bc)
# Boundary conditions
A[0, 0] = 1.0
A[0, 1] = 0.0
A[-1, -1] = 1.0 / (delta_x)
A[-1, -2] = -1.0 / (delta_x)
b[0] = alpha
b[-1] = sigma
# Solve system
U = numpy.linalg.solve(A, b)
return x_bc, U
u_true = lambda x: -(5.0 + numpy.exp(1.0)) * x - (2.0 + numpy.exp(1.0) + numpy.exp(-1.0)) + numpy.exp(x)
x_bc, U = solve_mixed_1st_order_one_sided(10)
# Plot result
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x_bc, U, 'o', label="Computed")
axes.plot(x_bc, u_true(x_bc), 'k', label="True")
axes.set_title("Solution to $u_{xx} = e^x$")
axes.set_xlabel("x")
axes.set_ylabel("u(x)")
plt.show()
def solve_mixed_2nd_order_centered(m):
# Problem setup
a = -1.0
b = 1.0
alpha = 3.0
sigma = -5.0
f = lambda x: numpy.exp(x)
# Discretization
x_bc = numpy.linspace(a, b, m + 2)
x = x_bc[1:-1]
delta_x = (b - a) / (m + 1)
# Construct matrix A
A = numpy.zeros((m + 2, m + 2))
diagonal = numpy.ones(m + 2) / delta_x**2
A += numpy.diag(diagonal * -2.0, 0)
A += numpy.diag(diagonal[:-1], 1)
A += numpy.diag(diagonal[:-1], -1)
# Construct RHS
b = f(x_bc)
# Boundary conditions
A[0, 0] = 1.0
A[0, 1] = 0.0
A[-1, -1] = -1.0 / (delta_x)
A[-1, -2] = 1.0 / (delta_x)
b[0] = alpha
b[-1] = delta_x / 2.0 * f(x_bc[-1]) - sigma
# Solve system
U = numpy.linalg.solve(A, b)
return x_bc, U
x_bc, U = solve_mixed_2nd_order_centered(10)
# Plot result
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x_bc, U, 'o', label="Computed")
axes.plot(x_bc, u_true(x_bc), 'k', label="True")
axes.set_title("Solution to $u_{xx} = e^x$")
axes.set_xlabel("x")
axes.set_ylabel("u(x)")
plt.show()
def solve_mixed_2nd_order_one_sided(m):
# Problem setup
a = -1.0
b = 1.0
alpha = 3.0
sigma = -5.0
f = lambda x: numpy.exp(x)
# Discretization
x_bc = numpy.linspace(a, b, m + 2)
x = x_bc[1:-1]
delta_x = (b - a) / (m + 1)
# Construct matrix A
A = numpy.zeros((m + 2, m + 2))
diagonal = numpy.ones(m + 2) / delta_x**2
A += numpy.diag(diagonal * -2.0, 0)
A += numpy.diag(diagonal[:-1], 1)
A += numpy.diag(diagonal[:-1], -1)
# Construct RHS
b = f(x_bc)
# Boundary conditions
A[0, 0] = 1.0
A[0, 1] = 0.0
A[-1, -1] = 3.0 / (2.0 * delta_x)
A[-1, -2] = -4.0 / (2.0 * delta_x)
A[-1, -3] = 1.0 / (2.0 * delta_x)
b[0] = alpha
b[-1] = sigma
# Solve system
U = numpy.linalg.solve(A, b)
return x_bc, U
x_bc, U = solve_mixed_2nd_order_one_sided(10)
# Plot result
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x_bc, U, 'o', label="Computed")
axes.plot(x_bc, u_true(x_bc), 'k', label="True")
axes.set_title("Solution to $u_{xx} = e^x$")
axes.set_xlabel("x")
axes.set_ylabel("u(x)")
plt.show()
# Problem setup
a = -1.0
b = 1.0
alpha = 3.0
sigma = -5.0
f = lambda x: numpy.exp(x)
u_true = lambda x: -(5.0 + numpy.exp(1.0)) * x - (2.0 + numpy.exp(1.0) + numpy.exp(-1.0)) + numpy.exp(x)
# Compute the error as a function of delta_x
m_range = numpy.arange(10, 200, 20)
delta_x = numpy.empty(m_range.shape)
error = numpy.empty((m_range.shape[0], 3))
for (i, m) in enumerate(m_range):
x = numpy.linspace(a, b, m + 2)
delta_x[i] = (b - a) / (m + 1)
# Compute solution
_, U = solve_mixed_1st_order_one_sided(m)
error[i, 0] = numpy.linalg.norm(U - u_true(x), ord=numpy.infty)
_, U = solve_mixed_2nd_order_one_sided(m)
error[i, 1] = numpy.linalg.norm(U - u_true(x), ord=numpy.infty)
_, U = solve_mixed_2nd_order_centered(m)
error[i, 2] = numpy.linalg.norm(U - u_true(x), ord=numpy.infty)
titles = ["1st Order, One-Sided", "2nd Order, One-Sided", "2nd Order, Centered"]  # order matches the error columns above
order_C = lambda delta_x, error, order: numpy.exp(numpy.log(error) - order * numpy.log(delta_x))
for i in range(3):
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.loglog(delta_x, error[:, i], 'ko', label="Approx. Derivative")
axes.loglog(delta_x, order_C(delta_x[0], error[0,i], 1.0) * delta_x**1.0, 'r--', label="1st Order")
axes.loglog(delta_x, order_C(delta_x[0], error[0,i], 2.0) * delta_x**2.0, 'b--', label="2nd Order")
axes.legend(loc=4)
axes.set_title(titles[i])
axes.set_xlabel("$\Delta x$")
axes.set_ylabel("$|u(x) - U|$")
plt.show()
U = solve_mixed_1st_order_one_sided(10)
U = solve_mixed_2nd_order_one_sided(10)
U = solve_mixed_2nd_order_centered(10)
|
_____no_output_____
|
CC0-1.0
|
04_BVP_problems.ipynb
|
lihu8918/numerical-methods-pdes
|
Existence and Uniqueness One question that should be asked before embarking upon a numerical solution to any equation is whether the original problem is *well-posed*. Well-posedness means that the problem has a unique solution that depends continuously on the input data (initial conditions and boundary conditions are examples). Consider the BVP we have been exploring but now with purely Neumann boundary conditions$$ u''(x) = f(x) ~~~ \Omega = [0, 1] \\ u'(0) = \sigma_0 ~~~ u'(1) = \sigma_1.$$We can easily discretize this using one of the methods developed above, but we run into problems.
|
# Problem setup
a = -1.0
b = 1.0
alpha = 3.0
sigma = -5.0
f = lambda x: numpy.exp(x)
# Discretization
m = 50
x_bc = numpy.linspace(a, b, m + 2)
x = x_bc[1:-1]
delta_x = (b - a) / (m + 1)
# Construct matrix A
A = numpy.zeros((m + 2, m + 2))
diagonal = numpy.ones(m + 2) / delta_x**2
A += numpy.diag(diagonal * -2.0, 0)
A += numpy.diag(diagonal[:-1], 1)
A += numpy.diag(diagonal[:-1], -1)
# Construct RHS
b = f(x_bc)
# Boundary conditions
A[0, 0] = -1.0 / delta_x
A[0, 1] = 1.0 / delta_x
A[-1, -1] = -1.0 / (delta_x)
A[-1, -2] = 1.0 / (delta_x)
b[0] = alpha
b[-1] = delta_x / 2.0 * f(x_bc[-1]) - sigma
# Solve system
U = numpy.linalg.solve(A, b)
|
_____no_output_____
|
CC0-1.0
|
04_BVP_problems.ipynb
|
lihu8918/numerical-methods-pdes
|
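A quick numerical check (a sketch assuming the matrix `A` and `m` from the cell above are still in scope; the solve there typically fails with a `LinAlgError` precisely because `A` is singular):

e = numpy.ones(m + 2)
print(numpy.max(numpy.abs(A.dot(e))))   # ~0: the constant vector lies in the null space
print(numpy.linalg.matrix_rank(A))      # m + 1, one short of full rank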
We can see why $A$ is singular: the constant vector $e = [1, 1, 1, 1, 1,\ldots, 1]^T$ is in fact in the null-space of $A$. Our numerical method has actually demonstrated that this problem is *ill-posed*! Indeed, since the boundary conditions are only on the derivatives there are an infinite number of solutions to the BVP (depending on the data there could instead be no solutions at all). Another way to understand why this is the case is to examine this problem again as the steady-state problem originating with the heat equation. Consider the heat equation with $\sigma_0 = \sigma_1 = 0$ and $f(x) = 0$. This setup would preserve any heat in the rod as none can escape through the ends of the rod. In fact, the solution to the steady-state problem would simply redistribute the heat evenly across the rod based on the initial condition. We then would have the solution$$ u(x) = \int^1_0 u^0(x) dx = C.$$ The problem comes from the fact that the steady-state problem does not know about this bit of information by itself. This means that the BVP as it stands could pick out any $C$ and it would be a solution. The situation is similar if we have the same setup except $f(x) \neq 0$. Now we are either adding or subtracting heat in the rod. In this case there may not be a steady state at all! You can actually show that if the addition and subtraction of heat exactly cancels, i.e. if$$ \int^1_0 f(x) dx = 0,$$then we may in fact have a solution, which again leads to an infinite number of solutions. General Linear Second Order Discretization Let's now describe a method for solving the equation$$ a(x) u''(x) + b(x) u'(x) + c(x) u(x) = f(x) ~~~~ \Omega = [a, b] \\ u(a) = \alpha ~~~~ u(b) = \beta.$$ Try discretizing this using second order finite differences and write the system for$$ a(x) u''(x) + b(x) u'(x) + c(x) u(x) = f(x) ~~~~ \Omega = [a, b] \\ u(a) = \alpha ~~~~ u(b) = \beta.$$ The general, second order finite difference approximation to the above equation can be written as$$ a_i \frac{U_{i+1} - 2 U_i + U_{i-1}}{\Delta x^2} + b_i \frac{U_{i+1} - U_{i-1}}{2 \Delta x} + c_i U_i = f_i$$leading to the matrix entries$$ A_{i,i} = -\frac{2 a_i}{\Delta x^2} + c_i$$on the diagonal and$$ A_{i,i\pm1} = \frac{a_i}{\Delta x^2} \pm \frac{b_i}{2 \Delta x}$$on the sub-diagonals. We can take care of the boundary conditions by either using the ghost-points approach or by incorporating them into the right-hand-side evaluation. Example: Consider the steady-state heat conduction problem with a variable $\kappa(x)$ so that$$ (\kappa(x) u'(x))' = f(x), ~~~~ \Omega = [0, 1] \\ u(0) = \alpha ~~~~ u(1) = \beta.$$ By the chain rule we know$$ \kappa(x) u''(x) + \kappa'(x) u'(x) = f(x).$$ It turns out that in this case this is not really the best approach to solving the problem. In many cases it is best to discretize the original form of the physics rather than a perhaps equivalent formulation. To demonstrate this let's try to construct a system to solve the original equation$$ (\kappa(x) u'(x))' = f(x).$$ First we will approximate the expression$$ \kappa(x) u'(x)$$but at the points half-way in between the points $x_i$, i.e. $x_{i + 1/2}$.
We will center this difference at $x_{i+1/2}$, using the two neighboring grid values that lie a distance $\Delta x / 2$ to either side, and find$$ \kappa(x_{i+1/2}) u'(x_{i+1/2}) \approx \kappa_{i+1/2} \frac{U_{i+1} - U_i}{\Delta x}.$$ Now taking this approximation and differencing it with the same difference centered at $x_{i-1/2}$ leads to$$\begin{aligned} (\kappa(x_i) u'(x_i))' &\approx \frac{1}{\Delta x} \left [ \kappa_{i+1/2} \frac{U_{i+1} - U_i}{\Delta x} - \kappa_{i-1/2} \frac{U_{i} - U_{i-1}}{\Delta x} \right ] \\ &= \frac{\kappa_{i+1/2}U_{i+1} - \kappa_{i+1/2} U_i -\kappa_{i-1/2} U_{i} + \kappa_{i-1/2} U_{i-1}}{\Delta x^2} \\ &= \frac{\kappa_{i+1/2}U_{i+1} - (\kappa_{i+1/2} + \kappa_{i-1/2}) U_i + \kappa_{i-1/2} U_{i-1}}{\Delta x^2}.\end{aligned}$$ Note that the two formulations are actually equivalent to $\mathcal{O}(\Delta x^2)$. The matrix entries are$$\begin{aligned} A_{i,i} = -\frac{\kappa_{i+1/2} + \kappa_{i-1/2}}{\Delta x^2} \\ A_{i,i \pm 1} = \frac{\kappa_{i\pm 1/2}}{\Delta x^2}.\end{aligned}$$Note that this latter discretization is symmetric. This will have consequences as to how well or quickly we can solve the resulting system of linear equations. Non-Linear Equations Our model problem, Poisson's equation, is a linear BVP. How would we approach a non-linear problem? As a new model problem let's consider the non-linear pendulum problem. The physical system is a mass $m$ connected to a rigid, massless rod of length $L$ which is allowed to swing about a point. The angle $\theta(t)$ is taken with reference to the stable at-rest point with the mass hanging downwards. This system can be described by$$ \theta''(t) = \frac{-g}{L} \sin(\theta(t)).$$We will take $\frac{g}{L} = 1$ for convenience. Looking at the Taylor series of $\sin$ we can approximate this equation for small $\theta$ as$$ \sin(\theta) \approx \theta - \frac{\theta^3}{6} + \mathcal{O}(\theta^5)$$so that$$ \theta'' = -\theta.$$ We know that this equation has solutions of the form$$ \theta(t) = C_1 \cos t + C_2 \sin t.$$We clearly need two conditions to uniquely specify the solution, which can be a bit awkward since the independent variable here is time rather than space. We can specify the initial position of the pendulum $\theta(0) = \alpha$; the second condition, however, would then specify where the pendulum is at some time in the future, say $\theta(T) = \beta$. We could instead specify another initial condition such as the angular velocity $\theta'(0) = \sigma$.
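Before exploring the pendulum below, a minimal sketch of assembling the symmetric matrix for the conservative form $(\kappa u')' = f$ derived just above (assumptions: the notebook's numpy import, a user-supplied callable `kappa`, and Dirichlet data handled with ghost-point rows as before):

def conservative_matrix(kappa, m, a=0.0, b=1.0):
    # Second order flux-form discretization of (kappa(x) u'(x))'
    delta_x = (b - a) / (m + 1)
    x_bc = numpy.linspace(a, b, m + 2)
    A = numpy.zeros((m + 2, m + 2))
    A[0, 0] = 1.0    # U_0 = alpha
    A[-1, -1] = 1.0  # U_{m+1} = beta
    for i in range(1, m + 1):
        kappa_minus = kappa(x_bc[i] - 0.5 * delta_x)  # kappa_{i-1/2}
        kappa_plus = kappa(x_bc[i] + 0.5 * delta_x)   # kappa_{i+1/2}
        A[i, i - 1] = kappa_minus / delta_x**2
        A[i, i] = -(kappa_plus + kappa_minus) / delta_x**2
        A[i, i + 1] = kappa_plus / delta_x**2
    return A

# Example: kappa(x) = 1 + x gives a symmetric interior block
A_kappa = conservative_matrix(lambda x: 1.0 + x, m=5)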
|
# Simple linear pendulum solutions
def linear_pendulum(t, alpha=0.01, beta=0.01, T=1.0):
C_1 = alpha
C_2 = (beta - alpha * numpy.cos(T)) / numpy.sin(T)
return C_1 * numpy.cos(t) + C_2 * numpy.sin(t)
alpha = [0.1, -0.1, -1.0]
beta = [0.1, 0.1, 0.0]
T = [1.0, 1.0, 1.0]
t = numpy.linspace(0, 10.0, 100)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
for i in range(len(alpha)):
axes.plot(t, linear_pendulum(t, alpha[i], beta[i], T[i]))
axes.set_title("Solutions to the Linear Pendulum Problem")
axes.set_xlabel("t")
axes.set_ylabel(r"$\theta$")
plt.show()
|
_____no_output_____
|
CC0-1.0
|
04_BVP_problems.ipynb
|
lihu8918/numerical-methods-pdes
|
But how would we go about handling the fully non-linear problem? First let's discretize using our approach to date with the second order, centered second derivative finite difference approximation to find$$ \frac{1}{\Delta t^2}(\theta_{i+1} - 2 \theta_i + \theta_{i-1}) + \sin (\theta_i) = 0.$$ The most common approach to solving a non-linear BVP like this (and many non-linear PDEs for that matter) is to use Newton's method. Recall that if we have a non-linear function $G(\theta)$ and we want to find $\theta$ such that$$ G(\theta) = 0$$we can expand $G(\theta)$ in a Taylor series to find$$ G(\theta^{[k+1]}) = G(\theta^{[k]}) + G'(\theta^{[k]}) (\theta^{[k+1]} - \theta^{[k]}) + \mathcal{O}((\theta^{[k+1]} - \theta^{[k]})^2).$$ If we want $G(\theta^{[k+1]}) = 0$ we can set the left-hand side above to zero (this is also a fixed point iteration), drop the higher order terms, and solve for $\theta^{[k+1]}$ to find$$\begin{aligned} 0 &= G(\theta^{[k]}) + G'(\theta^{[k]}) (\theta^{[k+1]} - \theta^{[k]} )\\ G'(\theta^{[k]}) \theta^{[k+1]} &= G'(\theta^{[k]}) \theta^{[k]} - G(\theta^{[k]}).\end{aligned}$$ At this point we need to be careful: if we have a system of equations we cannot simply divide through by $G'(\theta^{[k]})$ (which is now a matrix) to find our new value $\theta^{[k+1]}$. Instead we need to solve a linear system involving the matrix $G'(\theta^{[k]})$. Another way to write this is as an update$$ \theta^{[k+1]} = \theta^{[k]} + \delta^{[k]}$$where$$ J(\theta^{[k]}) \delta^{[k]} = -G(\theta^{[k]}).$$ Here we have introduced notation for the **Jacobian matrix** whose elements are$$ J_{ij}(\theta) = \frac{\partial}{\partial \theta_j} G_i(\theta).$$ So how do we compute the Jacobian matrix? Since we know the system of equations in this case we can write down in general what the entries of $J$ are. For$$ \frac{1}{\Delta t^2}(\theta_{i+1} - 2 \theta_i + \theta_{i-1}) + \sin (\theta_i) = 0$$we have$$ J_{ij}(\theta) = \left \{ \begin{aligned} &\frac{1}{\Delta t^2} & & j = i - 1, ~ j = i + 1 \\ &-\frac{2}{\Delta t^2} + \cos(\theta_i) & & j = i \\ &0 & & \text{otherwise.} \end{aligned} \right .$$ With the Jacobian in hand we can solve the BVP by iterating until some stopping criterion is met (we have converged to our satisfaction). Example Solve the linear and non-linear pendulum problem with $T=2\pi$, $\alpha = \beta = 0.7$. - Does the linear equation have a unique solution? - Do you expect the original problem to have a unique solution (i.e. does the non-linear problem have a unique solution)?
|
def solve_nonlinear_pendulum(m, alpha, beta, T, max_iterations=100, tolerance=1e-3, verbose=False):
# Discretization
t_bc = numpy.linspace(0.0, T, m + 2)
t = t_bc[1:-1]
delta_t = T / (m + 1)
diagonal = numpy.ones(t.shape)
G = numpy.empty(t_bc.shape)
# Initial guess
theta = 0.7 * numpy.cos(t_bc)
theta[0] = alpha
theta[-1] = beta
# Main iteration loop
success = False
for num_step in range(1, max_iterations):
# Construct Jacobian matrix
J = numpy.diag(diagonal * -2.0 / delta_t**2 + numpy.cos(theta[1:-1]), 0)
J += numpy.diag(diagonal[:-1] / delta_t**2, -1)
J += numpy.diag(diagonal[:-1] / delta_t**2, 1)
# Construct vector G
G = (theta[:-2] - 2.0 * theta[1:-1] + theta[2:]) / delta_t**2 + numpy.sin(theta[1:-1])
# Take care of BCs
G[0] = (alpha - 2.0 * theta[1] + theta[2]) / delta_t**2 + numpy.sin(theta[1])
G[-1] = (theta[-3] - 2.0 * theta[-2] + beta) / delta_t**2 + numpy.sin(theta[-2])
# Solve
delta = numpy.linalg.solve(J, -G)
theta[1:-1] += delta
if verbose:
print " (%s) Step size: %s" % (num_step, numpy.linalg.norm(delta))
if numpy.linalg.norm(delta) < tolerance:
success = True
break
if not success:
print(numpy.linalg.norm(delta))
raise ValueError("Reached maximum allowed steps before convergence criteria met.")
return t_bc, theta
t, theta = solve_nonlinear_pendulum(100, 0.7, 0.7, 2.0 * numpy.pi, tolerance=1e-9, verbose=True)
plt.plot(t, theta)
plt.show()
# Linear Problem
alpha = 0.7
beta = 0.7
T = 2.0 * numpy.pi
t = numpy.linspace(0, T, 100)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, linear_pendulum(t, alpha, beta, T), 'r-', label="Linear")
# Non-linear problem
t, theta = solve_nonlinear_pendulum(100, alpha, beta, T)
axes.plot(t, theta, 'b-', label="Non-Linear")
axes.set_title("Solutions to the Pendulum Problem")
axes.set_xlabel("t")
axes.set_ylabel(r"$\theta$")
plt.show()
|
_____no_output_____
|
CC0-1.0
|
04_BVP_problems.ipynb
|
lihu8918/numerical-methods-pdes
|
Many to Many Classification A simple example of many-to-many classification (a simple POS tagger) with Recurrent Neural Networks - Creating the **data pipeline** with `tf.data` - Preprocessing word sequences (variable input sequence length) using a padding technique via the user function `pad_seq` - Using `tf.nn.embedding_lookup` to get vectors for tokens (e.g. word, character) - Training **many to many classification** with `tf.contrib.seq2seq.sequence_loss` - Masking invalid tokens with `tf.sequence_mask` - Creating the model as a **Class** - Reference - https://github.com/aisolab/sample_code_of_Deep_learning_Basics/blob/master/DLEL/DLEL_12_2_RNN_(toy_example).ipynb Setup
|
import os, sys
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import string
%matplotlib inline
slim = tf.contrib.slim
print(tf.__version__)
|
1.10.0
|
Apache-2.0
|
week07/02_many_to_many_classification_exercise.ipynb
|
modulabs/modu-tensorflow
|
Prepare example data
|
sentences = [['I', 'feel', 'hungry'],
['tensorflow', 'is', 'very', 'difficult'],
['tensorflow', 'is', 'a', 'framework', 'for', 'deep', 'learning'],
['tensorflow', 'is', 'very', 'fast', 'changing']]
pos = [['pronoun', 'verb', 'adjective'],
['noun', 'verb', 'adverb', 'adjective'],
['noun', 'verb', 'determiner', 'noun', 'preposition', 'adjective', 'noun'],
['noun', 'verb', 'adverb', 'adjective', 'verb']]
# word dic
word_list = []
for elm in sentences:
word_list += elm
word_list = list(set(word_list))
word_list.sort()
word_list = ['<pad>'] + word_list
word_dic = {word : idx for idx, word in enumerate(word_list)}
print(word_dic)
# pos dic
pos_list = []
for elm in pos:
pos_list += elm
pos_list = list(set(pos_list))
pos_list.sort()
pos_list = ['<pad>'] + pos_list
print(pos_list)
pos_dic = {pos : idx for idx, pos in enumerate(pos_list)}
pos_dic
pos_idx_to_dic = {elm[1] : elm[0] for elm in pos_dic.items()}
pos_idx_to_dic
|
_____no_output_____
|
Apache-2.0
|
week07/02_many_to_many_classification_exercise.ipynb
|
modulabs/modu-tensorflow
|
Create pad_seq function
|
def pad_seq(sequences, max_len, dic):
seq_len, seq_indices = [], []
for seq in sequences:
seq_len.append(len(seq))
seq_idx = [dic.get(char) for char in seq]
seq_idx += (max_len - len(seq_idx)) * [dic.get('<pad>')] # 0 is idx of meaningless token "<pad>"
seq_indices.append(seq_idx)
return seq_len, seq_indices
|
_____no_output_____
|
Apache-2.0
|
week07/02_many_to_many_classification_exercise.ipynb
|
modulabs/modu-tensorflow
|
Pre-process data
|
max_length = 10
X_length, X_indices = pad_seq(sequences = sentences, max_len = max_length, dic = word_dic)
print(X_length, np.shape(X_indices))
y = [elm + ['<pad>'] * (max_length - len(elm)) for elm in pos]
y = [list(map(lambda el : pos_dic.get(el), elm)) for elm in y]
print(np.shape(y))
y
|
_____no_output_____
|
Apache-2.0
|
week07/02_many_to_many_classification_exercise.ipynb
|
modulabs/modu-tensorflow
|
Define SimPosRNN
|
class SimPosRNN:
def __init__(self, X_length, X_indices, y, n_of_classes, hidden_dim, max_len, word_dic):
# Data pipeline
with tf.variable_scope('input_layer'):
# Implement the input layer here
# Use tf.get_variable
# Use tf.nn.embedding_lookup
self._X_length = X_length
self._X_indices = X_indices
self._y = y
# RNN cell (many to many)
with tf.variable_scope('rnn_cell'):
# Implement the RNN cell here
# Use tf.contrib.rnn.BasicRNNCell
# Use tf.nn.dynamic_rnn
# Use tf.contrib.rnn.OutputProjectionWrapper
with tf.variable_scope('seq2seq_loss'):
# Define masks with tf.sequence_mask
# Pass masks as the weights argument of tf.contrib.seq2seq.sequence_loss
with tf.variable_scope('prediction'):
# Use tf.argmax
def predict(self, sess, X_length, X_indices):
# Implement the predict instance method
return sess.run(self._prediction, feed_dict = feed_prediction)
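# This is not the exercise solution, just an isolated illustration (assuming the
# TensorFlow 1.x import from the setup cell) of how tf.sequence_mask builds the
# `weights` tensor that tf.contrib.seq2seq.sequence_loss expects, so that '<pad>'
# positions do not contribute to the loss.
example_lengths = tf.constant([3, 4])
example_masks = tf.sequence_mask(example_lengths, maxlen=10, dtype=tf.float32)
with tf.Session() as demo_sess:
    print(demo_sess.run(example_masks))
# Rows look like [1. 1. 1. 0. ... 0.] and [1. 1. 1. 1. 0. ... 0.]; a tensor like
# this is what gets passed as the `weights` argument of sequence_loss.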
|
_____no_output_____
|
Apache-2.0
|
week07/02_many_to_many_classification_exercise.ipynb
|
modulabs/modu-tensorflow
|
Create a model of SimPosRNN
|
# hyper-parameters
lr = .003
epochs = 100
batch_size = 2
total_step = int(np.shape(X_indices)[0] / batch_size)
print(total_step)
## create data pipeline with tf.data
# Build the data pipeline yourself using tf.data
# The model is then created by the code below.
sim_pos_rnn = SimPosRNN(X_length = X_length_mb, X_indices = X_indices_mb, y = y_mb,
n_of_classes = 8, hidden_dim = 16, max_len = max_length, word_dic = word_dic)
|
_____no_output_____
|
Apache-2.0
|
week07/02_many_to_many_classification_exercise.ipynb
|
modulabs/modu-tensorflow
|
Create training op and train model
|
## create training op
opt = tf.train.AdamOptimizer(learning_rate = lr)
training_op = opt.minimize(loss = sim_pos_rnn.seq2seq_loss)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
tr_loss_hist = []
for epoch in range(epochs):
avg_tr_loss = 0
tr_step = 0
sess.run(tr_iterator.initializer)
try:
while True:
# Implement the training step here yourself.
except tf.errors.OutOfRangeError:
pass
avg_tr_loss /= tr_step
tr_loss_hist.append(avg_tr_loss)
if (epoch + 1) % 10 == 0:
print('epoch : {:3}, tr_loss : {:.3f}'.format(epoch + 1, avg_tr_loss))
yhat = sim_pos_rnn.predict(sess = sess, X_length = X_length, X_indices = X_indices)
yhat
y
yhat = [list(map(lambda elm : pos_idx_to_dic.get(elm), row)) for row in yhat]
for elm in yhat:
print(elm)
|
_____no_output_____
|
Apache-2.0
|
week07/02_many_to_many_classification_exercise.ipynb
|
modulabs/modu-tensorflow
|
Check the distribution of the true and false trials
|
# Imports (these may duplicate an earlier setup cell; `file` is the DataFrame of trial data loaded earlier)
import os
import numpy as np
import pandas as pd
import scipy as sc
import scipy.stats
import matplotlib.pyplot as plt
from sklearn import svm

mu, sigma = 0, 0.1 # mean and standard deviation
s = np.random.normal(mu, sigma, 1000)
k2_test, p_test = sc.stats.normaltest(s, axis=0, nan_policy='omit')
print("p = {:g}".format(p_test))
if p_test < 0.05: # null hypothesis - the distribution is normally distributed; less than alpha - reject null hypothesis
print('This random distribution is not normally distributed')
else:
print('This random distribution is normally distributed')
trueTrials = file.FramesInView[file.TrialStatus == 1]
k2_true, p_true = sc.stats.normaltest(np.log(trueTrials), axis=0, nan_policy='omit')
print("p = {:g}".format(p_true))
if p_true < 0.05: # null hypothesis - the distribution is normally distributed; less than alpha - reject null hypothesis
print('the true trials are not normally distributed')
else:
print('The true trials are normally distributed')
falseTrials = file.FramesInView[file.TrialStatus == 0]
k2_false, p_false = sc.stats.normaltest(np.log(falseTrials), axis=0, nan_policy='omit')
print("p = {:g}".format(p_false))
if p_false < 0.05: # null hypothesis - the distribution is normally distributed; less than alpha - reject null hypothesis
print('the false trials are not normally distributed')
else:
print('The false trials are normally distributed')
x = np.asarray(file.FramesInView)
y = np.zeros(len(x))
data = np.transpose(np.array([x,y]))
Manual_Label = np.asarray(file.TrialStatus)
plt.scatter(data[:,0],data[:,1], c = Manual_Label) #see what the data looks like
# build the linear classifier
clf = svm.SVC(kernel = 'linear', C = 1.0)
clf.fit(data,Manual_Label)
w = clf.coef_[0]
y0 = clf.intercept_
new_line = w[0]*data[:,0] - y0
new_line.shape
# see what the classifier did to the labels - find a way to draw a line along the "point" and draw "margin"
plt.hist(trueTrials, bins =10**np.linspace(0, 4, 40), color = 'lightyellow', label = 'true trials', zorder=0)
plt.hist(falseTrials, bins =10**np.linspace(0, 4, 40), color = 'mediumpurple', alpha=0.35, label = 'false trials', zorder=5)
annotation = []
for x,_ in data:
YY = clf.predict([[x,0]])[0]
annotation.append(YY)
plt.scatter(data[:,0],data[:,1]+10, c = annotation,
alpha=0.3, edgecolors='none', zorder=10, label = 'post-classification')
# plt.plot(new_line)
plt.xscale("log")
plt.yscale('linear')
plt.xlabel('Trial length (in frame Number)')
plt.title('Using a Classifier to identify true trials')
plt.legend()
# plt.savefig(r'C:\Users\Daniellab\Desktop\Light_level_videos_c-10\Data\Step3\Annotation\Figuers_3.svg')
plt.tight_layout()
# run the predictor for all dataset and annotate them
direc = r'C:\Users\Daniellab\Desktop\Light_level_videos_second_batch\Data\Step2_Tanvi_Method'
new_path = r'C:\Users\Daniellab\Desktop\Light_level_videos_second_batch\Data\Step3'
file = [file for file in os.listdir(direc) if file.endswith('.csv')]
# test = file[0]
for item in file:
print(item)
df = pd.read_csv(direc + '/' + item)
label = []
# run the classifer on this
for xx in df.Frames_In_View:
YY = clf.predict([[xx,0]])[0]
label.append(YY)
df1 = pd.DataFrame({'label': label})
new_df = pd.concat([df, df1], axis = 1)
# new_df.to_csv(new_path + '/' + item[:-4] + '_labeled.csv')
|
L0.1_c-3_m10_MothInOut.csv
L0.1_c-3_m12_MothInOut.csv
L0.1_c-3_m20_MothInOut.csv
L0.1_c-3_m21_MothInOut.csv
L0.1_c-3_m22_MothInOut.csv
L0.1_c-3_m23_MothInOut.csv
L0.1_c-3_m24_MothInOut.csv
L0.1_c-3_m25_MothInOut.csv
L0.1_c-3_m27_MothInOut.csv
L0.1_c-3_m2_MothInOut.csv
L0.1_c-3_m32_MothInOut.csv
L0.1_c-3_m34_MothInOut.csv
L0.1_c-3_m37_MothInOut.csv
L0.1_c-3_m38_MothInOut.csv
L0.1_c-3_m39_MothInOut.csv
L0.1_c-3_m40_MothInOut.csv
L0.1_c-3_m41_MothInOut.csv
L0.1_c-3_m43_MothInOut.csv
L0.1_c-3_m44_MothInOut.csv
L0.1_c-3_m45_MothInOut.csv
L0.1_c-3_m46_MothInOut.csv
L0.1_c-3_m47_MothInOut.csv
L0.1_c-3_m48_MothInOut.csv
L0.1_c-3_m49_MothInOut.csv
L0.1_c-3_m50_MothInOut.csv
L0.1_c-3_m54_MothInOut.csv
L0.1_c-3_m57_MothInOut.csv
L0.1_c-3_m5_MothInOut.csv
L0.1_c-3_m8_MothInOut.csv
L50_c-3_m10_MothInOut.csv
L50_c-3_m12_MothInOut.csv
L50_c-3_m13_MothInOut.csv
L50_c-3_m14_MothInOut.csv
L50_c-3_m15_MothInOut.csv
L50_c-3_m21_MothInOut.csv
L50_c-3_m22_MothInOut.csv
L50_c-3_m24_MothInOut.csv
L50_c-3_m25_MothInOut.csv
L50_c-3_m26_MothInOut.csv
L50_c-3_m2_MothInOut.csv
L50_c-3_m30_MothInOut.csv
L50_c-3_m32_MothInOut.csv
L50_c-3_m33_MothInOut.csv
L50_c-3_m34_MothInOut.csv
L50_c-3_m35_MothInOut.csv
L50_c-3_m37_MothInOut.csv
L50_c-3_m38_MothInOut.csv
L50_c-3_m39_MothInOut.csv
L50_c-3_m45_MothInOut.csv
L50_c-3_m49_MothInOut.csv
L50_c-3_m50_MothInOut.csv
L50_c-3_m51_MothInOut.csv
L50_c-3_m58_MothInOut.csv
L50_c-3_m6_MothInOut.csv
L50_c-3_m9_MothInOut.csv
|
MIT
|
Step3_SVM-ClassifierTo-LabelVideoData.ipynb
|
itsMahad/MothAbdominalRestriction
|
Info: comparison of VHGPR and SGPR
|
import sys
sys.path.append("../../")
import numpy as np
import matplotlib.pyplot as plt
import scipy.io as sio
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import WhiteKernel, RBF, ConstantKernel as C
from core import VHGPR
plt.rcParams.update({'font.size': 16})
|
_____no_output_____
|
MIT
|
HGPextreme/examples/motorcycle/func_prediction.ipynb
|
umbrellagong/HGPextreme
|
data and test points
|
Data = sio.loadmat('motorcycle.mat')
DX = Data['X']
DY = Data['y'].flatten()
x = np.atleast_2d(np.linspace(0,60,100)).T # Test points
|
_____no_output_____
|
MIT
|
HGPextreme/examples/motorcycle/func_prediction.ipynb
|
umbrellagong/HGPextreme
|
VHGPR
|
kernelf = C(10.0, (1e-1, 5*1e3)) * RBF(5, (1e-1, 1e2)) # mean kernel
kernelg = C(10.0, (1e-1, 1e2)) * RBF(5, (1e-1, 1e2)) # variance kernel
model_v = VHGPR(kernelf, kernelg)
results_v = model_v.fit(DX, DY).predict(x)
|
_____no_output_____
|
MIT
|
HGPextreme/examples/motorcycle/func_prediction.ipynb
|
umbrellagong/HGPextreme
|
Standard GPR
|
kernel = C(1e1, (1e-1, 1e4)) * RBF(1e1, (1e-1, 1e2)) + WhiteKernel(1e1, (1e-1, 1e4))
model_s = GaussianProcessRegressor(kernel, n_restarts_optimizer = 5)
results_s = model_s.fit(DX, DY).predict(x, return_std = True)
|
_____no_output_____
|
MIT
|
HGPextreme/examples/motorcycle/func_prediction.ipynb
|
umbrellagong/HGPextreme
|
Comparison
|
plt.figure(figsize = (6,4))
plt.plot(DX,DY,"o")
plt.plot(x, results_v[0],'r', label='vhgpr')
plt.plot(x, results_v[0] + 2 * np.sqrt(np.exp(results_v[2])), 'r--')
plt.plot(x, results_v[0] - 2 * np.sqrt(np.exp(results_v[2])),'r--')
plt.plot(x, results_s[0],'k', label='sgpr')
plt.plot(x, results_s[0] + 2* np.sqrt(np.exp(model_s.kernel_.theta[2])),'k--')
plt.plot(x, results_s[0] - 2* np.sqrt(np.exp(model_s.kernel_.theta[2])),'k--')
plt.xlim(0,60)
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.grid()
plt.show()
|
_____no_output_____
|
MIT
|
HGPextreme/examples/motorcycle/func_prediction.ipynb
|
umbrellagong/HGPextreme
|
$EXERCISE_PREAMBLE$ As always, run the setup code below before working on the questions (and if you leave this notebook and come back later, remember to run the setup code again).
|
from learntools.core import binder; binder.bind(globals())
from learntools.python.ex5 import *
print('Setup complete.')
|
_____no_output_____
|
Apache-2.0
|
notebooks/python/raw/ex_5.ipynb
|
NeuroLaunch/learntools
|
Exercises 1. Have you ever felt debugging involved a bit of luck? The following program has a bug. Try to identify the bug and fix it.
|
def has_lucky_number(nums):
"""Return whether the given list of numbers is lucky. A lucky list contains
at least one number divisible by 7.
"""
for num in nums:
if num % 7 == 0:
return True
else:
return False
|
_____no_output_____
|
Apache-2.0
|
notebooks/python/raw/ex_5.ipynb
|
NeuroLaunch/learntools
|
Try to identify the bug and fix it in the cell below:
|
def has_lucky_number(nums):
"""Return whether the given list of numbers is lucky. A lucky list contains
at least one number divisible by 7.
"""
for num in nums:
if num % 7 == 0:
return True
else:
return False
q1.check()
#_COMMENT_IF(PROD)_
q1.hint()
#_COMMENT_IF(PROD)_
q1.solution()
|
_____no_output_____
|
Apache-2.0
|
notebooks/python/raw/ex_5.ipynb
|
NeuroLaunch/learntools
|
2. a. Look at the Python expression below. What do you think we'll get when we run it? When you've made your prediction, uncomment the code and run the cell to see if you were right.
|
#[1, 2, 3, 4] > 2
|
_____no_output_____
|
Apache-2.0
|
notebooks/python/raw/ex_5.ipynb
|
NeuroLaunch/learntools
|
b. R and Python have some libraries (like numpy and pandas) that compare each element of the list to 2 (i.e. do an 'element-wise' comparison) and give us a list of booleans like `[False, False, True, True]`. Implement a function that reproduces this behaviour, returning a list of booleans corresponding to whether each element is greater than the given threshold.
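As a small illustration of the numpy behaviour described above (not the exercise solution, which asks for plain Python):

import numpy as np
print(np.array([1, 2, 3, 4]) > 2)  # [False False  True  True]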
|
def elementwise_greater_than(L, thresh):
"""Return a list with the same length as L, where the value at index i is
True if L[i] is greater than thresh, and False otherwise.
>>> elementwise_greater_than([1, 2, 3, 4], 2)
[False, False, True, True]
"""
pass
q2.check()
#_COMMENT_IF(PROD)_
q2.solution()
|
_____no_output_____
|
Apache-2.0
|
notebooks/python/raw/ex_5.ipynb
|
NeuroLaunch/learntools
|
3. Complete the body of the function below according to its docstring.
|
def menu_is_boring(meals):
"""Given a list of meals served over some period of time, return True if the
same meal has ever been served two days in a row, and False otherwise.
"""
pass
q3.check()
#_COMMENT_IF(PROD)_
q3.hint()
#_COMMENT_IF(PROD)_
q3.solution()
|
_____no_output_____
|
Apache-2.0
|
notebooks/python/raw/ex_5.ipynb
|
NeuroLaunch/learntools
|
4. 🌶️ Next to the Blackjack table, the Python Challenge Casino has a slot machine. You can get a result from the slot machine by calling `play_slot_machine()`. The number it returns is your winnings in dollars. Usually it returns 0. But sometimes you'll get lucky and get a big payday. Try running it below:
|
play_slot_machine()
|
_____no_output_____
|
Apache-2.0
|
notebooks/python/raw/ex_5.ipynb
|
NeuroLaunch/learntools
|
By the way, did we mention that each play costs $1? Don't worry, we'll send you the bill later. On average, how much money can you expect to gain (or lose) every time you play the machine? The casino keeps it a secret, but you can estimate the average value of each pull using a technique called the **Monte Carlo method**. To estimate the average outcome, we simulate the scenario many times, and return the average result. Complete the following function to calculate the average value per play of the slot machine.
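As a reminder of the pattern (a generic sketch, not the answer to this question): to estimate an expected value with the Monte Carlo method, run many independent simulations and average the outcomes. For a fair six-sided die the estimate approaches 3.5:

import random

def estimate_average_die_roll(n_runs):
    # Average of n_runs simulated rolls of a fair die
    return sum(random.randint(1, 6) for _ in range(n_runs)) / n_runs

estimate_average_die_roll(100000)  # close to 3.5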
|
def estimate_average_slot_payout(n_runs):
"""Run the slot machine n_runs times and return the average net profit per run.
Example calls (note that return value is nondeterministic!):
>>> estimate_average_slot_payout(1)
-1
>>> estimate_average_slot_payout(1)
0.5
"""
pass
|
_____no_output_____
|
Apache-2.0
|
notebooks/python/raw/ex_5.ipynb
|
NeuroLaunch/learntools
|
When you think you know the expected value per spin, uncomment the line below to see how close you were.
|
#_COMMENT_IF(PROD)_
q4.solution()
|
_____no_output_____
|
Apache-2.0
|
notebooks/python/raw/ex_5.ipynb
|
NeuroLaunch/learntools
|
Your first neural network In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
|
%matplotlib inline
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
|
_____no_output_____
|
Apache-2.0
|
deep_learning_v2_pytorch/project-bikesharing/Predicting_bike_sharing_data.ipynb
|
TeoZosa/deep-learning-v2-pytorch
|
Load and prepare the data A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
|
data_path = "Bike-Sharing-Dataset/hour.csv"
rides = pd.read_csv(data_path)
rides.head()
|
_____no_output_____
|
Apache-2.0
|
deep_learning_v2_pytorch/project-bikesharing/Predicting_bike_sharing_data.ipynb
|
TeoZosa/deep-learning-v2-pytorch
|
Checking out the data This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the `cnt` column. You can see the first few rows of the data above. Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
|
rides[: 24 * 10].plot(x="dteday", y="cnt")
|
_____no_output_____
|
Apache-2.0
|
deep_learning_v2_pytorch/project-bikesharing/Predicting_bike_sharing_data.ipynb
|
TeoZosa/deep-learning-v2-pytorch
|
Dummy variables Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to `get_dummies()`.
|
dummy_fields = ["season", "weathersit", "mnth", "hr", "weekday"]
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = [
"instant",
"dteday",
"season",
"weathersit",
"weekday",
"atemp",
"mnth",
"workingday",
"hr",
]
data = rides.drop(fields_to_drop, axis=1)
data.head()
|
_____no_output_____
|
Apache-2.0
|
deep_learning_v2_pytorch/project-bikesharing/Predicting_bike_sharing_data.ipynb
|
TeoZosa/deep-learning-v2-pytorch
|
Scaling target variables To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1. The scaling factors are saved so we can go backwards when we use the network for predictions.
|
quant_features = ["casual", "registered", "cnt", "temp", "hum", "windspeed"]
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean) / std
|
_____no_output_____
|
Apache-2.0
|
deep_learning_v2_pytorch/project-bikesharing/Predicting_bike_sharing_data.ipynb
|
TeoZosa/deep-learning-v2-pytorch
|
Splitting the data into training, testing, and validation sets We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
|
# Save data for approximately the last 21 days
test_data = data[-21 * 24 :]
# Now remove the test data from the data set
data = data[: -21 * 24]
# Separate the data into features and targets
target_fields = ["cnt", "casual", "registered"]
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = (
test_data.drop(target_fields, axis=1),
test_data[target_fields],
)
|
_____no_output_____
|
Apache-2.0
|
deep_learning_v2_pytorch/project-bikesharing/Predicting_bike_sharing_data.ipynb
|
TeoZosa/deep-learning-v2-pytorch
|
We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
|
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[: -60 * 24], targets[: -60 * 24]
val_features, val_targets = features[-60 * 24 :], targets[-60 * 24 :]
|
_____no_output_____
|
Apache-2.0
|
deep_learning_v2_pytorch/project-bikesharing/Predicting_bike_sharing_data.ipynb
|
TeoZosa/deep-learning-v2-pytorch
|
Time to build the network Below you'll build your network. We've built out the structure. You'll implement both the forward pass and backwards pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes. The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression; the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called *forward propagation*. We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called *backpropagation*. > **Hint:** You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$. Below, you have these tasks: 1. Implement the sigmoid function to use as the activation function. Set `self.activation_function` in `__init__` to your sigmoid function. 2. Implement the forward pass in the `train` method. 3. Implement the backpropagation algorithm in the `train` method, including calculating the output error. 4. Implement the forward pass in the `run` method.
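For reference, a small sketch of the two activation pieces discussed above (it does not replace filling in `my_answers.py`): the sigmoid used in the hidden layer and the identity used at the output, whose derivative is 1 everywhere.

def sigmoid(x):
    # Hidden layer activation
    return 1 / (1 + np.exp(-x))

def identity(x):
    # Output layer activation f(x) = x; its derivative is 1, which is the
    # term that shows up in the backpropagation step for the output layer.
    return x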
|
#############
# In the my_answers.py file, fill out the TODO sections as specified
#############
from my_answers import NeuralNetwork
def MSE(y, Y):
return np.mean((y - Y) ** 2)
|
_____no_output_____
|
Apache-2.0
|
deep_learning_v2_pytorch/project-bikesharing/Predicting_bike_sharing_data.ipynb
|
TeoZosa/deep-learning-v2-pytorch
|
Unit testsRun these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
|
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2], [0.4, 0.5], [-0.3, 0.2]])
test_w_h_o = np.array([[0.3], [-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == "bike-sharing-dataset/hour.csv")
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(
np.all(network.activation_function(0.5) == 1 / (1 + np.exp(-0.5)))
)
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(
np.allclose(
network.weights_hidden_to_output,
np.array([[0.37275328], [-0.03172939]]),
)
)
self.assertTrue(
np.allclose(
network.weights_input_to_hidden,
np.array(
[
[0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801],
]
),
)
)
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
|
_____no_output_____
|
Apache-2.0
|
deep_learning_v2_pytorch/project-bikesharing/Predicting_bike_sharing_data.ipynb
|
TeoZosa/deep-learning-v2-pytorch
|
Training the networkHere you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later. Choose the number of iterationsThis is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, this process can have sharply diminishing returns and can waste computational resources if you use too many iterations. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. The ideal number of iterations would be a level that stops shortly after the validation loss is no longer decreasing. Choose the learning rateThis scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. Normally a good choice to start at is 0.1; however, if you effectively divide the learning rate by n_records, try starting out with a learning rate of 1. In either case, if the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge. Choose the number of hidden nodesIn a model where all the weights are optimized, the more hidden nodes you have, the more accurate the predictions of the model will be. (A fully optimized model could have weights of zero, after all.) However, the more hidden nodes you have, the harder it will be to optimize the weights of the model, and the more likely it will be that suboptimal weights will lead to overfitting. With overfitting, the model will memorize the training data instead of learning the true pattern, and won't generalize well to unseen data. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. You'll generally find that the best number of hidden nodes to use ends up being between the number of input and output nodes.
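The hyperparameter values themselves are set in `my_answers.py`, which isn't reproduced here. Purely as a hedged illustration of the guidance above, a plausible starting point might look like the sketch below; the specific numbers are assumptions to tune against the validation loss, not the settings used to produce this notebook's results.

```python
# my_answers.py (illustrative values only)
iterations = 3000      # stop shortly after the validation loss stops decreasing
learning_rate = 0.8    # assumes the weight update divides by n_records; otherwise start near 0.1
hidden_nodes = 10      # usually somewhere between the number of input and output nodes
output_nodes = 1       # a single regression output ('cnt')
```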
|
import sys
####################
### Set the hyperparameters in your my_answers.py file ###
####################
from my_answers import iterations, learning_rate, hidden_nodes, output_nodes
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {"train": [], "validation": []}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.iloc[batch].values, train_targets.iloc[batch]["cnt"]
network.train(X, y)
# Printing out the training progress
train_loss = MSE(
np.array(network.run(train_features)).T, train_targets["cnt"].values
)
val_loss = MSE(np.array(network.run(val_features)).T, val_targets["cnt"].values)
sys.stdout.write(
"\rProgress: {:2.1f}".format(100 * ii / float(iterations))
+ "% ... Training loss: "
+ str(train_loss)[:5]
+ " ... Validation loss: "
+ str(val_loss)[:5]
)
sys.stdout.flush()
losses["train"].append(train_loss)
losses["validation"].append(val_loss)
plt.plot(losses["train"], label="Training loss")
plt.plot(losses["validation"], label="Validation loss")
plt.legend()
_ = plt.ylim()
|
_____no_output_____
|
Apache-2.0
|
deep_learning_v2_pytorch/project-bikesharing/Predicting_bike_sharing_data.ipynb
|
TeoZosa/deep-learning-v2-pytorch
|
Check out your predictionsHere, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
|
fig, ax = plt.subplots(figsize=(8, 4))
mean, std = scaled_features["cnt"]
predictions = np.array(network.run(test_features)).T * std + mean
ax.plot(predictions[0], label="Prediction")
ax.plot((test_targets["cnt"] * std + mean).values, label="Data")
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.iloc[test_data.index]["dteday"])
dates = dates.apply(lambda d: d.strftime("%b %d"))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
|
_____no_output_____
|
Apache-2.0
|
deep_learning_v2_pytorch/project-bikesharing/Predicting_bike_sharing_data.ipynb
|
TeoZosa/deep-learning-v2-pytorch
|
Introduction to Colab setup
|
# install rlberry library
!git clone https://github.com/rlberry-py/rlberry.git
!cd rlberry && git pull && pip install -e . > /dev/null 2>&1
# install ffmpeg-python for saving videos
!pip install ffmpeg-python > /dev/null 2>&1
# install optuna for hyperparameter optimization
!pip install optuna > /dev/null 2>&1
# packages required to show video
!pip install pyvirtualdisplay > /dev/null 2>&1
!apt-get install -y xvfb python-opengl ffmpeg > /dev/null 2>&1
print("")
print(" ~~~ Libraries installed, please restart the runtime! ~~~ ")
print("")
# Create directory for saving videos
!mkdir videos > /dev/null 2>&1
# Initialize display and import function to show videos
import rlberry.colab_utils.display_setup
from rlberry.colab_utils.display_setup import show_video
|
_____no_output_____
|
MIT
|
notebooks/introduction_to_rlberry.ipynb
|
antoine-moulin/rlberry
|
Interacting with a simple environment
|
from rlberry.envs import GridWorld
# A grid world is a simple environment with finite states and actions, on which
# we can test simple algorithms.
# -> The reward function can be accessed by: env.R[state, action]
# -> And the transitions: env.P[state, action, next_state]
env = GridWorld(nrows=3, ncols=10,
reward_at = {(1,1):0.1, (2, 9):1.0},
walls=((1,4),(2,4), (1,5)),
success_probability=0.9)
# Let's visualize a random policy in this environment!
env.enable_rendering()
env.reset()
for tt in range(20):
action = env.action_space.sample()
next_state, reward, is_terminal, info = env.step(action)
# save video and clear buffer
env.save_video('./videos/gw.mp4', framerate=5)
env.clear_render_buffer()
# show video
show_video('./videos/gw.mp4')
|
videos/gw.mp4
|
MIT
|
notebooks/introduction_to_rlberry.ipynb
|
antoine-moulin/rlberry
|
Creating an agentLet's create an agent that runs value iteration to find a near-optimal policy.This is possible in our GridWorld, because we have access to the transitions `env.P` and the rewards `env.R`.An Agent must implement at least two methods, **fit()** and **policy()**.It can also implement **sample_parameters()** used for hyperparameter optimization with [Optuna](https://optuna.org/).
|
import numpy as np
from rlberry.agents import Agent
class ValueIterationAgent(Agent):
name = 'ValueIterationAgent'
def __init__(self, env, gamma=0.99, epsilon=1e-5, **kwargs): # it's important to put **kwargs to ensure compatibility with the base class
"""
gamma: discount factor
epsilon: precision of value iteration
"""
Agent.__init__(self, env, **kwargs) # self.env is initialized in the base class
self.gamma = gamma
self.epsilon = epsilon
self.Q = None # Q function to be computed in fit()
def fit(self, **kwargs):
"""
Run value iteration.
"""
S, A = env.observation_space.n, env.action_space.n
Q = np.zeros((S, A))
V = np.zeros(S)
while True:
TQ = np.zeros((S, A))
for ss in range(S):
for aa in range(A):
TQ[ss, aa] = env.R[ss, aa] + self.gamma*env.P[ss, aa, :].dot(V)
V = TQ.max(axis=1)
if np.abs(TQ-Q).max() < self.epsilon:
break
Q = TQ
self.Q = Q
def policy(self, observation, **kwargs):
return self.Q[observation, :].argmax()
@classmethod
def sample_parameters(cls, trial):
"""
Sample hyperparameters for hyperparam optimization using Optuna (https://optuna.org/)
"""
gamma = trial.suggest_categorical('gamma', [0.1, 0.25, 0.5, 0.75, 0.99])
return {'gamma':gamma}
# Now, let's fit and test the agent!
agent = ValueIterationAgent(env)
agent.fit()
# Run agent's policy
env.enable_rendering()
state = env.reset()
for tt in range(20):
action = agent.policy(state)
state, reward, is_terminal, info = env.step(action)
# save video and clear buffer
env.save_video('./videos/gw.mp4', framerate=5)
env.clear_render_buffer()
# show video
show_video('./videos/gw.mp4')
|
videos/gw.mp4
|
MIT
|
notebooks/introduction_to_rlberry.ipynb
|
antoine-moulin/rlberry
|
`AgentStats`: A powerful class for hyperparameter optimization, training and evaluating agents.
|
# Create random agent as a baseline
class RandomAgent(Agent):
name = 'RandomAgent'
def __init__(self, env, gamma=0.99, epsilon=1e-5, **kwargs): # it's important to put **kwargs to ensure compatibility with the base class
"""
gamma, epsilon: unused by this random baseline; kept only for a consistent constructor signature
"""
Agent.__init__(self, env, **kwargs) # self.env is initialized in the base class
def fit(self, **kwargs):
pass
def policy(self, observation, **kwargs):
return self.env.action_space.sample()
from rlberry.stats import AgentStats, compare_policies
# Define parameters
vi_params = {'gamma':0.1, 'epsilon':1e-3}
# Create AgentStats to fit 4 agents using 1 job
vi_stats = AgentStats(ValueIterationAgent, env, eval_horizon=20, init_kwargs=vi_params, n_fit=4, n_jobs=1)
vi_stats.fit()
# Create AgentStats for baseline
baseline_stats = AgentStats(RandomAgent, env, eval_horizon=20, n_fit=1)
# Compare policies using 10 Monte Carlo simulations
output = compare_policies([vi_stats, baseline_stats], n_sim=10)
# The value of gamma above makes our VI agent quite bad! Let's optimize it.
vi_stats.optimize_hyperparams(n_trials=15, timeout=30, n_sim=5, n_fit=1, n_jobs=1, sampler_method='random', pruner_method='none')
# fit with optimized params
vi_stats.fit()
# ... and see the results
output = compare_policies([vi_stats, baseline_stats], n_sim=10)
|
[32m[I 2020-11-22 15:33:24,381][0m A new study created in memory with name: no-name-853cb971-a1ac-45a3-8d33-82ccd07b32c1[0m
[32m[I 2020-11-22 15:33:24,406][0m Trial 0 finished with value: 0.9 and parameters: {'gamma': 0.25}. Best is trial 0 with value: 0.9.[0m
[32m[I 2020-11-22 15:33:24,427][0m Trial 1 finished with value: 0.8799999999999999 and parameters: {'gamma': 0.5}. Best is trial 0 with value: 0.9.[0m
[32m[I 2020-11-22 15:33:24,578][0m Trial 2 finished with value: 2.0 and parameters: {'gamma': 0.99}. Best is trial 2 with value: 2.0.[0m
[32m[I 2020-11-22 15:33:24,595][0m Trial 3 finished with value: 0.9399999999999998 and parameters: {'gamma': 0.25}. Best is trial 2 with value: 2.0.[0m
[32m[I 2020-11-22 15:33:24,612][0m Trial 4 finished with value: 0.96 and parameters: {'gamma': 0.1}. Best is trial 2 with value: 2.0.[0m
[32m[I 2020-11-22 15:33:24,630][0m Trial 5 finished with value: 0.8599999999999998 and parameters: {'gamma': 0.5}. Best is trial 2 with value: 2.0.[0m
[32m[I 2020-11-22 15:33:24,650][0m Trial 6 finished with value: 0.8599999999999998 and parameters: {'gamma': 0.1}. Best is trial 2 with value: 2.0.[0m
[32m[I 2020-11-22 15:33:24,667][0m Trial 7 finished with value: 0.8799999999999999 and parameters: {'gamma': 0.1}. Best is trial 2 with value: 2.0.[0m
[32m[I 2020-11-22 15:33:24,683][0m Trial 8 finished with value: 0.9399999999999998 and parameters: {'gamma': 0.25}. Best is trial 2 with value: 2.0.[0m
[32m[I 2020-11-22 15:33:24,704][0m Trial 9 finished with value: 1.4200000000000002 and parameters: {'gamma': 0.75}. Best is trial 2 with value: 2.0.[0m
[32m[I 2020-11-22 15:33:24,725][0m Trial 10 finished with value: 1.2599999999999998 and parameters: {'gamma': 0.75}. Best is trial 2 with value: 2.0.[0m
[32m[I 2020-11-22 15:33:24,741][0m Trial 11 finished with value: 0.78 and parameters: {'gamma': 0.1}. Best is trial 2 with value: 2.0.[0m
[32m[I 2020-11-22 15:33:24,890][0m Trial 12 finished with value: 2.02 and parameters: {'gamma': 0.99}. Best is trial 12 with value: 2.02.[0m
[32m[I 2020-11-22 15:33:24,909][0m Trial 13 finished with value: 0.96 and parameters: {'gamma': 0.25}. Best is trial 12 with value: 2.02.[0m
[32m[I 2020-11-22 15:33:24,937][0m Trial 14 finished with value: 0.8400000000000001 and parameters: {'gamma': 0.75}. Best is trial 12 with value: 2.02.[0m
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.1s remaining: 0.0s
|
MIT
|
notebooks/introduction_to_rlberry.ipynb
|
antoine-moulin/rlberry
|
Setup Imports
|
import sys
sys.path.append('../')
del sys
%reload_ext autoreload
%autoreload 2
from toolbox.parsers import standard_parser, add_task_arguments, add_model_arguments
from toolbox.utils import load_task, get_pretrained_model, to_class_name
import modeling.models as models
|
_____no_output_____
|
Apache-2.0
|
notebooks/run_model.ipynb
|
clementjumel/master_thesis
|
Notebook functions
|
from numpy import argmax, mean
def run_models(model_names, word2vec, bart, args, train=False):
args.word2vec = word2vec
args.bart = bart
pretrained_model = get_pretrained_model(args)
for model_name in model_names:
args.model = model_name
print(model_name)
model = getattr(models, to_class_name(args.model))(args=args, pretrained_model=pretrained_model)
model.play(task=task, args=args)
if train:
valid_scores = model.valid_scores['average_precision']
test_scores = model.test_scores['average_precision']
valid_scores = [mean(epoch_scores) for epoch_scores in valid_scores]
test_scores = [mean(epoch_scores) for epoch_scores in test_scores]
i_max = argmax(valid_scores)
print("max for epoch %i" % (i_max+1))
print("valid score: %.5f" % valid_scores[i_max])
print("test score: %.5f" % test_scores[i_max])
|
_____no_output_____
|
Apache-2.0
|
notebooks/run_model.ipynb
|
clementjumel/master_thesis
|
Parameters
|
ap = standard_parser()
add_task_arguments(ap)
add_model_arguments(ap)
args = ap.parse_args(["-m", "",
"--root", ".."])
|
_____no_output_____
|
Apache-2.0
|
notebooks/run_model.ipynb
|
clementjumel/master_thesis
|
Load the data
|
task = load_task(args)
|
Task loaded from ../results/modeling_task/context-dependent-same-type_50-25-25_rs24_bs4_cf-v0_tf-v0.pkl.
|
Apache-2.0
|
notebooks/run_model.ipynb
|
clementjumel/master_thesis
|
Basic baselines
|
run_models(model_names=["random",
"frequency"],
word2vec=False,
bart=False,
args=args)
|
random
Evaluation on the valid loader...
|
Apache-2.0
|
notebooks/run_model.ipynb
|
clementjumel/master_thesis
|
Basic baselines
|
run_models(model_names=["summaries-count",
"summaries-unique-count",
"summaries-overlap",
"activated-summaries",
"context-count",
"context-unique-count",
"summaries-context-count",
"summaries-context-unique-count",
"summaries-context-overlap"],
word2vec=False,
bart=False,
args=args)
|
summaries-count
Evaluation on the valid loader...
|
Apache-2.0
|
notebooks/run_model.ipynb
|
clementjumel/master_thesis
|
Embedding baselines
|
run_models(model_names=["summaries-average-embedding",
"summaries-overlap-average-embedding",
"context-average-embedding",
"summaries-context-average-embedding",
"summaries-context-overlap-average-embedding"],
word2vec=True,
bart=False,
args=args)
|
Word2Vec embedding loaded.
summaries-average-embedding
Evaluation on the valid loader...
|
Apache-2.0
|
notebooks/run_model.ipynb
|
clementjumel/master_thesis
|
Custom classifier
|
run_models(model_names=["custom-classifier"],
word2vec=True,
bart=False,
args=args,
train=True)
|
Word2Vec embedding loaded.
custom-classifier
Learning answers counts...
|
Apache-2.0
|
notebooks/run_model.ipynb
|
clementjumel/master_thesis
|
CountVectorizer with Multinomial Naive Bayes (Benchmark Model)
|
from sklearn.feature_extraction.text import CountVectorizer,TfidfVectorizer
from sklearn.naive_bayes import BernoulliNB, MultinomialNB
countVect = CountVectorizer()
X_train_countVect = countVect.fit_transform(X_train_cleaned)
print("Number of features : %d \n" %len(countVect.get_feature_names())) #6378
print("Show some feature names : \n", countVect.get_feature_names()[::1000])
# Train MultinomialNB classifier
mnb = MultinomialNB()
mnb.fit(X_train_countVect, y_train)
import pickle
pickle.dump(countVect,open('countVect_imdb.pkl','wb'))
from sklearn import metrics
from sklearn.metrics import accuracy_score,roc_auc_score
def modelEvaluation(predictions):
'''
Print evaluation metrics for the predicted results
'''
print ("\nAccuracy on validation set: {:.4f}".format(accuracy_score(y_test, predictions)))
print("\nAUC score : {:.4f}".format(roc_auc_score(y_test, predictions)))
print("\nClassification report : \n", metrics.classification_report(y_test, predictions))
print("\nConfusion Matrix : \n", metrics.confusion_matrix(y_test, predictions))
predictions = mnb.predict(countVect.transform(X_test_cleaned))
modelEvaluation(predictions)
import pickle
pickle.dump(mnb,open('Naive_Bayes_model_imdb.pkl','wb'))
|
_____no_output_____
|
MIT
|
IMDB Reviews NLP.ipynb
|
gsingh1629/SentAnalysis
|
TfidfVectorizer with Logistic Regression
|
from sklearn.linear_model import LogisticRegression
tfidf = TfidfVectorizer(min_df=5) #minimum document frequency of 5
X_train_tfidf = tfidf.fit_transform(X_train)
print("Number of features : %d \n" %len(tfidf.get_feature_names())) #1722
print("Show some feature names : \n", tfidf.get_feature_names()[::1000])
# Logistic Regression
lr = LogisticRegression()
lr.fit(X_train_tfidf, y_train)
feature_names = np.array(tfidf.get_feature_names())
sorted_coef_index = lr.coef_[0].argsort()
print('\nTop 10 features with smallest coefficients :\n{}\n'.format(feature_names[sorted_coef_index[:10]]))
print('Top 10 features with largest coefficients : \n{}'.format(feature_names[sorted_coef_index[:-11:-1]]))
predictions = lr.predict(tfidf.transform(X_test_cleaned))
modelEvaluation(predictions)
from sklearn.model_selection import GridSearchCV
from sklearn import metrics
from sklearn.metrics import roc_auc_score, accuracy_score
from sklearn.pipeline import Pipeline
estimators = [("tfidf", TfidfVectorizer()), ("lr", LogisticRegression())]
model = Pipeline(estimators)
params = {"lr__C":[0.1, 1, 10],
"tfidf__min_df": [1, 3],
"tfidf__max_features": [1000, None],
"tfidf__ngram_range": [(1,1), (1,2)],
"tfidf__stop_words": [None, "english"]}
grid = GridSearchCV(estimator=model, param_grid=params, scoring="accuracy", n_jobs=-1)
grid.fit(X_train_cleaned, y_train)
print("The best paramenter set is : \n", grid.best_params_)
# Evaluate on the validaton set
predictions = grid.predict(X_test_cleaned)
modelEvaluation(predictions)
|
The best parameter set is :
{'lr__C': 10, 'tfidf__max_features': None, 'tfidf__min_df': 3, 'tfidf__ngram_range': (1, 2), 'tfidf__stop_words': None}
Accuracy on validation set: 0.8720
AUC score : 0.8720
Classification report :
precision recall f1-score support
0 0.87 0.87 0.87 249
1 0.87 0.88 0.87 251
accuracy 0.87 500
macro avg 0.87 0.87 0.87 500
weighted avg 0.87 0.87 0.87 500
Confusion Matrix :
[[216 33]
[ 31 220]]
|
MIT
|
IMDB Reviews NLP.ipynb
|
gsingh1629/SentAnalysis
|
Word2Vec**Step 1 : Parse review text to sentences (Word2Vec model takes a list of sentences as input)****Step 2 : Create vocabulary list using Word2Vec model.****Step 3 : Transform each review into a numerical representation by computing average feature vectors of the words therein.****Step 4 : Fit the average feature vectors to a Random Forest classifier.**
|
tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
def parseSent(review, tokenizer, remove_stopwords=False):
raw_sentences = tokenizer.tokenize(review.strip())
sentences = []
for raw_sentence in raw_sentences:
if len(raw_sentence) > 0:
sentences.append(cleanText(raw_sentence, remove_stopwords, split_text=True))
return sentences
# Parse each review in the training set into sentences
sentences = []
for review in X_train_cleaned:
sentences += parseSent(review, tokenizer,remove_stopwords=False)
print('%d parsed sentence in the training set\n' %len(sentences))
print('Show a parsed sentence in the training set : \n', sentences[10])
|
4500 parsed sentence in the training set
Show a parsed sentence in the training set :
['the', 'crimson', 'rivers', 'is', 'one', 'of', 'the', 'most', 'over', 'directed', 'over', 'the', 'top', 'over', 'everything', 'mess', 'i', 've', 'ever', 'seen', 'come', 'out', 'of', 'france', 'there', 's', 'nothing', 'worse', 'than', 'a', 'french', 'production', 'trying', 'to', 'out', 'do', 'films', 'made', 'in', 'hollywood', 'and', 'cr', 'is', 'a', 'perfect', 'example', 'of', 'such', 'a', 'wannabe', 'horror', 'action', 'buddy', 'flick', 'i', 'almost', 'stopped', 'it', 'halfway', 'through', 'because', 'i', 'knew', 'it', 'wouldn', 't', 'amount', 'to', 'anything', 'but', 'french', 'guys', 'trying', 'to', 'show', 'off', 'the', 'film', 'starts', 'off', 'promisingly', 'like', 'some', 'sort', 'of', 'expansive', 'horror', 'film', 'but', 'it', 'quickly', 'shifts', 'genres', 'from', 'horror', 'to', 'action', 'to', 'x', 'files', 'type', 'to', 'buddy', 'flick', 'that', 'in', 'the', 'end', 'cr', 'is', 'all', 'of', 'it', 'and', 'also', 'none', 'of', 'it', 'it', 's', 'so', 'full', 'of', 'clich', 's', 'that', 'at', 'one', 'point', 'i', 'thought', 'the', 'whole', 'thing', 'was', 'a', 'comedy', 'the', 'painful', 'dialogue', 'and', 'those', 'silent', 'pauses', 'with', 'fades', 'outs', 'and', 'fades', 'ins', 'just', 'at', 'the', 'right', 'expositionary', 'moments', 'made', 'me', 'groan', 'i', 'thought', 'only', 'films', 'made', 'in', 'hollywood', 'used', 'this', 'hackneyed', 'technique', 'the', 'chase', 'scene', 'with', 'vincent', 'cassel', 'running', 'after', 'the', 'killer', 'is', 'so', 'over', 'directed', 'and', 'over', 'done', 'that', 'it', 's', 'almost', 'a', 'thing', 'of', 'beauty', 'the', 'climax', 'on', 'top', 'of', 'the', 'mountain', 'with', 'the', 'stupid', 'revelation', 'about', 'the', 'killer', 's', 'with', 'cassel', 'and', 'reno', 'playing', 'buddies', 'like', 'nolte', 'and', 'murphy', 'in', 'hrs', 'completely', 'derailed', 'what', 'little', 'credibility', 'the', 'film', 'had', 'by', 'then', 'it', 's', 'difficult', 'to', 'believe', 'that', 'the', 'director', 'of', 'the', 'crimson', 'rivers', 'also', 'directed', 'gothika', 'which', 'though', 'had', 'its', 'share', 'of', 'problems', 'doesn', 't', 'even', 'come', 'close', 'to', 'the', 'awfulness', 'of', 'this', 'overbaked', 'confused', 'film']
|
MIT
|
IMDB Reviews NLP.ipynb
|
gsingh1629/SentAnalysis
|
Creating Vocabulary List using Word2Vec Model
|
from wordcloud import WordCloud
from gensim.models import word2vec
from gensim.models.keyedvectors import KeyedVectors
num_features = 300 #embedding dimension
min_word_count = 10
num_workers = 4
context = 10
downsampling = 1e-3
print("Training Word2Vec model ...\n")
w2v = word2vec.Word2Vec(sentences, size=num_features, workers=num_workers,
    min_count=min_word_count, window=context, sample=downsampling)  # size=num_features (gensim 3.x API) so dimensions match the averaging code below
w2v.init_sims(replace=True)
w2v.save("w2v_300features_10minwordcounts_10context") #save trained word2vec model
print("Number of words in the vocabulary list : %d \n" %len(w2v.wv.index2word)) #4016
print("Show first 10 words in the vocalbulary list vocabulary list: \n", w2v.wv.index2word[0:10])
|
Training Word2Vec model ...
|
MIT
|
IMDB Reviews NLP.ipynb
|
gsingh1629/SentAnalysis
|
Averaging Feature Vectors
|
def makeFeatureVec(review, model, num_features):
'''
Transform a review into a feature vector by averaging the feature vectors of words
that appear both in the review and in the vocabulary list created above
'''
featureVec = np.zeros((num_features,),dtype="float32")
nwords = 0.
index2word_set = set(model.wv.index2word) #index2word is the vocabulary list of the Word2Vec model
isZeroVec = True
for word in review:
if word in index2word_set:
nwords = nwords + 1.
featureVec = np.add(featureVec, model[word])
isZeroVec = False
if isZeroVec == False:
featureVec = np.divide(featureVec, nwords)
return featureVec
def getAvgFeatureVecs(reviews, model, num_features):
'''
Transform all reviews to feature vectors using makeFeatureVec()
'''
counter = 0
reviewFeatureVecs = np.zeros((len(reviews),num_features),dtype="float32")
for review in reviews:
reviewFeatureVecs[counter] = makeFeatureVec(review, model,num_features)
counter = counter + 1
return reviewFeatureVecs
X_train_cleaned = []
for review in X_train:
X_train_cleaned.append(cleanText(review, remove_stopwords=True, split_text=True))
trainVector = getAvgFeatureVecs(X_train_cleaned, w2v, num_features)
print("Training set : %d feature vectors with %d dimensions" %trainVector.shape)
# Get feature vectors for validation set
X_test_cleaned = []
for review in X_test:
X_test_cleaned.append(cleanText(review, remove_stopwords=True, split_text=True))
testVector = getAvgFeatureVecs(X_test_cleaned, w2v, num_features)
print("Validation set : %d feature vectors with %d dimensions" %testVector.shape)
|
_____no_output_____
|
MIT
|
IMDB Reviews NLP.ipynb
|
gsingh1629/SentAnalysis
|
Random Forest Classifer
|
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_estimators=1000)
rf.fit(trainVector, y_train)
predictions = rf.predict(testVector)
modelEvaluation(predictions)
|
_____no_output_____
|
MIT
|
IMDB Reviews NLP.ipynb
|
gsingh1629/SentAnalysis
|
LSTM**Step 1 : Prepare X_train and X_test to 2D tensor.** **Step 2 : Train a simple LSTM (embeddign layer => LSTM layer => dense layer).** **Step 3 : Compile and fit the model using log loss function and ADAM optimizer.**
|
from keras.preprocessing import sequence
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Lambda
from keras.layers.embeddings import Embedding
from keras.layers.recurrent import LSTM, SimpleRNN, GRU
from keras.preprocessing.text import Tokenizer
from collections import defaultdict
from keras.layers.convolutional import Convolution1D
from keras import backend as K
from keras.layers.embeddings import Embedding
top_words = 40000
maxlen = 200
batch_size = 62
nb_classes = 4
nb_epoch = 6
# Vectorize X_train and X_test to 2D tensor
tokenizer = Tokenizer(nb_words=top_words) # only consider the top 40,000 words in the corpus
tokenizer.fit_on_texts(X_train)
# tokenizer.word_index #access word-to-index dictionary of trained tokenizer
sequences_train = tokenizer.texts_to_sequences(X_train)
sequences_test = tokenizer.texts_to_sequences(X_test)
X_train_seq = sequence.pad_sequences(sequences_train, maxlen=maxlen)
X_test_seq = sequence.pad_sequences(sequences_test, maxlen=maxlen)
# one-hot encoding of y_train and y_test
y_train_seq = np_utils.to_categorical(y_train, nb_classes)
y_test_seq = np_utils.to_categorical(y_test, nb_classes)
print('X_train shape:', X_train_seq.shape)
print("========================================")
print('X_test shape:', X_test_seq.shape)
print("========================================")
print('y_train shape:', y_train_seq.shape)
print("========================================")
print('y_test shape:', y_test_seq.shape)
print("========================================")
model1 = Sequential()
model1.add(Embedding(top_words, 128, dropout=0.2))
model1.add(LSTM(128, dropout_W=0.2, dropout_U=0.2))
model1.add(Dense(nb_classes))
model1.add(Activation('softmax'))
model1.summary()
model1.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model1.fit(X_train_seq, y_train_seq, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1)
# Model evluation
score = model1.evaluate(X_test_seq, y_test_seq, batch_size=batch_size)
print('Test loss : {:.4f}'.format(score[0]))
print('Test accuracy : {:.4f}'.format(score[1]))
len(X_train_seq),len(y_train_seq)
print("Size of weight matrix in the embedding layer : ", \
model1.layers[0].get_weights()[0].shape)
# get weight matrix of the hidden layer
print("Size of weight matrix in the hidden layer : ", \
model1.layers[1].get_weights()[0].shape)
# get weight matrix of the output layer
print("Size of weight matrix in the output layer : ", \
model1.layers[2].get_weights()[0].shape)
import pickle
pickle.dump(model1,open('model1.pkl','wb'))
|
_____no_output_____
|
MIT
|
IMDB Reviews NLP.ipynb
|
gsingh1629/SentAnalysis
|