repo_name | path | license | content
---|---|---|---|
ocelot-collab/ocelot | demos/ipython_tutorials/6_coupler_kick.ipynb | gpl-3.0 | # the output of plotting commands is displayed inline within frontends,
# directly below the code cell that produced it
%matplotlib inline
from time import time
# this python library provides generic shallow (copy)
# and deep copy (deepcopy) operations
from copy import deepcopy
# import from Ocelot main modules and functions
from ocelot import *
# extra function to track the Particle through a lattice
from ocelot.cpbd.track import lattice_track
# import from Ocelot graphical modules
from ocelot.gui.accelerator import *
# import lattice
from xfel_l1 import *
tws0 = Twiss()
tws0.E = 0.005
tws0.beta_x = 7.03383607232
tws0.beta_y = 4.83025657816
tws0.alpha_x = 0.981680481977
tws0.alpha_y = -0.524776086698
tws0.E = 0.1300000928
lat = MagneticLattice(cell_l1, start=bpmf_103_i1, stop=qd_210_b1)
# twiss parameters without coupler kick
tws1 = twiss(lat, tws0)
# adding coupler coefficients in [1/m]
for elem in lat.sequence:
if elem.__class__ == Cavity:
if not(".AH1." in elem.id):
# 1.3 GHz cavities
elem.vx_up = (-56.813 + 10.751j) * 1e-6
elem.vy_up = (-41.091 + 0.5739j) * 1e-6
elem.vxx_up = (0.99943 - 0.81401j) * 1e-3
elem.vxy_up = (3.4065 - 0.4146j) * 1e-3
elem.vx_down = (-24.014 + 12.492j) * 1e-6
elem.vy_down = (36.481 + 7.9888j) * 1e-6
elem.vxx_down = (-4.057 - 0.1369j) * 1e-3
elem.vxy_down = (2.9243 - 0.012891j) * 1e-3
else:
# AH1 cavity (3.9 GHz) module names are 'C3.AH1.1.1.I1', 'C3.AH1.1.2.I1', ...
# Modules with odd and even number X 'C3.AH1.1.X.I1' have different coefficients
module_number = float(elem.id.split(".")[-2])
if module_number % 2 == 1:
elem.vx_up = -0.00057076 - 1.3166e-05j
elem.vy_up = -3.5079e-05 + 0.00012636j
elem.vxx_up = -0.026045 - 0.042918j
elem.vxy_up = 0.0055553 - 0.023455j
elem.vx_down = -8.8766e-05 - 0.00024852j
elem.vy_down = 2.9889e-05 + 0.00014486j
elem.vxx_down = -0.0050593 - 0.013491j
elem.vxy_down = 0.0051488 + 0.024771j
else:
elem.vx_up = 0.00057076 + 1.3166e-05j
elem.vy_up = 3.5079e-05 - 0.00012636j
elem.vxx_up = -0.026045 - 0.042918j
elem.vxy_up = 0.0055553 - 0.023455j
elem.vx_down = 8.8766e-05 + 0.00024852j
elem.vy_down = -2.9889e-05 - 0.00014486j
elem.vxx_down = -0.0050593 - 0.013491j
elem.vxy_down = 0.0051488 + 0.024771j
# update transfer maps
lat.update_transfer_maps()
tws = twiss(lat, tws0)
"""
Explanation: This notebook was created by Sergey Tomin ([email protected]). Source and license info is on GitHub. April 2020.
Tutorial N6. Coupler Kick.
Second order tracking of 200k particles with coupler kicks in TESLA-type cavities.
As an example, we will use linac L1 of the European XFEL Injector.
The input coupler and the higher order mode couplers of the RF cavities distort the axial symmetry of the electromagnetic (EM) field and affect the electron beam. This effect can be calculated by direct tracking of the particles in the asymmetric (due to the couplers) 3D EM field using a tracking code (e.g. ASTRA). For a fast estimation of the coupler effect, a discrete coupler model (as described, for example, in M. Dohlus et al., Coupler Kick for Very Short Bunches and its Compensation, Proc. of EPAC08, MOPP013, or T. Hellert and M. Dohlus, Detuning related coupler kick variation of a superconducting nine-cell 1.3 GHz cavity) was implemented in OCELOT. Coefficients for the 1.3 GHz modules are given in M. Dohlus, Effects of RF coupler kicks in L1 of EXFEL. The first order part of the model includes time and offset dependency; the offset dependency has a skew component. To include the effect of all couplers, the kicks are applied at the entrance and the exit of each cavity.
The zeroth and first order kick $\vec k$ on a bunch induced by a coupler can be expressed as
\begin{equation}
\vec k(x, y) \approx \frac{eV_0}{E_0} \Re \left\{ \left(
\begin{matrix}
V_{x0}\\
V_{y0}
\end{matrix} \right) + \left(
\begin{matrix}
V_{xx} & V_{xy} \\
V_{yx} & V_{yy}
\end{matrix}\right)
\left(
\begin{matrix}
x\\
y
\end{matrix} \right) e^{i \phi}\right\}
\end{equation}
with $E_0$ being the beam energy at the corresponding coupler region, $V_0$ and $\phi$ the amplitude and phase of the accelerating field, respectively, $e$ the elementary charge, and $x$ and $y$ the transverse beam position at the coupler location. From Maxwell's equations it follows that $V_{yy} = -V_{xx}$ and $V_{xy} = V_{yx}$. Thus, coupler kicks are, up to first order, well described by the four normalized coupler kick coefficients $[V_{x0}, V_{y0}, V_{xx}, V_{xy}]$.
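As a small numerical illustration of the formula above (a sketch added here, not part of the original tutorial), the normalized kick for an assumed offset of $x = y = 1$ mm can be evaluated directly from the 1.3 GHz upstream coefficients used in the code cell above; the prefactor $eV_0/E_0$ is left out and the phase is set to $\phi = 0$:
```python
# Minimal sketch (not from the original tutorial): evaluate the zeroth plus
# first order part of the kick at phi = 0, using V_yy = -V_xx and V_yx = V_xy.
import numpy as np

vx0, vy0 = (-56.813 + 10.751j) * 1e-6, (-41.091 + 0.5739j) * 1e-6
vxx, vxy = (0.99943 - 0.81401j) * 1e-3, (3.4065 - 0.4146j) * 1e-3
x, y = 1e-3, 1e-3  # assumed 1 mm transverse offsets

kick = np.real(np.array([vx0, vy0])
               + np.array([[vxx, vxy], [vxy, -vxx]]) @ np.array([x, y]))
print(kick)  # normalized [kx, ky]; multiply by e*V0/E0 for the physical kick
```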
In OCELOT one can define coupler kick coefficients for the upstream and downstream couplers.
python
Cavity(l=0., v=0., phi=0., freq=0., vx_up=0, vy_up=0, vxx_up=0, vxy_up=0,
vx_down=0, vy_down=0, vxx_down=0, vxy_down=0, eid=None)
This example will cover the following topics:
Defining the coupler coefficients for Cavity
Second order tracking with the coupler kick effect.
Details of implementation in the code
New in version 20.04.0
The coupler kicks are implemented in the code the same way as it was done for the Edge elements. When a MagneticLattice is initialized, CouplerKick elements are created around each Cavity element: the CouplerKick placed before the Cavity uses the coefficients with the suffix "_up" (upstream), and the CouplerKick placed after the Cavity uses the coefficients with the suffix "_down" (downstream). The CouplerKick elements are created even if the coupler kick coefficients are zero.
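A quick way to see this in practice (a sketch added here, not part of the original tutorial) is to print the element class names in the lattice sequence and look for the CouplerKick entries surrounding each Cavity:
```python
# Hedged sketch: list the first elements of the sequence; apart from `id`,
# which is used elsewhere in this notebook, no extra attributes are assumed.
for elem in lat.sequence[:15]:
    print(type(elem).__name__, getattr(elem, "id", ""))
```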
End of explanation
"""
bx0 = [tw.beta_x for tw in tws1]
by0 = [tw.beta_y for tw in tws1]
s0 = [tw.s for tw in tws1]
bx = [tw.beta_x for tw in tws]
by = [tw.beta_y for tw in tws]
s = [tw.s for tw in tws]
fig, ax = plot_API(lat, legend=False)
ax.plot(s0, bx0, "b", lw=1, label=r"$\beta_x$")
ax.plot(s, bx, "b--", lw=1, label=r"$\beta_x$, CK")
ax.plot(s0, by0, "r", lw=1, label=r"$\beta_y$")
ax.plot(s, by, "r--", lw=1, label=r"$\beta_y$, CK")
ax.set_ylabel(r"$\beta_{x,y}$, m")
ax.legend()
plt.show()
"""
Explanation: Twiss parameters with and without coupler kick
End of explanation
"""
def plot_trajectories(lat):
f, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
for a in np.arange(-0.6, 0.6, 0.1):
cix_118_i1.angle = a*0.001
lat.update_transfer_maps()
p = Particle(px=0, E=0.130)
plist = lattice_track(lat, p)
s = [p.s for p in plist]
x = [p.x for p in plist]
y = [p.y for p in plist]
px = [p.px for p in plist]
py = [p.py for p in plist]
ax1.plot(s, x)
ax2.plot(s, y)
plt.xlabel("z [m]")
plt.show()
plot_trajectories(lat)
"""
Explanation: Trajectories with Coupler Kick
End of explanation
"""
for elem in lat.sequence:
if elem.__class__ == Cavity:
if not(".AH1." in elem.id):
# 1.3 GHz cavities
elem.vx_up = 0.
elem.vy_up = 0.
elem.vxx_up = (0.99943 - 0.81401j) * 1e-3
elem.vxy_up = (3.4065 - 0.4146j) * 1e-3
elem.vx_down = 0.
elem.vy_down = 0.
elem.vxx_down = (-4.057 - 0.1369j) * 1e-3
elem.vxy_down = (2.9243 - 0.012891j) * 1e-3
# update transfer maps
lat.update_transfer_maps()
# plot the trajectories
plot_trajectories(lat)
"""
Explanation: Horizontal and vertical emittances
Before starting, we remove the zero order terms (dipole kicks) from the coupler kick coefficients
and check whether any asymmetry remains.
End of explanation
"""
# create ParticleArray with "one slice"
parray = generate_parray(sigma_tau=0., sigma_p=0.0, chirp=0.0)
print(parray)
# track the beam through the lattice
navi = Navigator(lat)
tws_track, _ = track(lat, parray, navi)
# plot emittances
emit_x = np.array([tw.emit_x for tw in tws_track])
emit_y = np.array([tw.emit_y for tw in tws_track])
gamma = np.array([tw.E for tw in tws_track])/m_e_GeV
s = [tw.s for tw in tws_track]
fig, ax = plot_API(lat, legend=False)
ax.plot(s, emit_x * gamma * 1e6, "b", lw=1, label=r"$\varepsilon_x$ [mm $\cdot$ mrad]")
ax.plot(s, emit_y * gamma * 1e6, "r", lw=1, label=r"$\varepsilon_y$ [mm $\cdot$ mrad]")
ax.set_ylabel(r"$\varepsilon_{x,y}$ [mm $\cdot$ mrad]")
ax.legend()
plt.show()
"""
Explanation: Tracking of the particles through the lattice with coupler kicks
Steps:
* create ParticleArray with zero length and zero energy spread and chirp
* track the Particle array through the lattice
* plot the emittances
End of explanation
"""
# plot emittances
emit_x = np.array([tw.eigemit_1 for tw in tws_track])
emit_y = np.array([tw.eigemit_2 for tw in tws_track])
gamma = np.array([tw.E for tw in tws_track])/m_e_GeV
s = [tw.s for tw in tws_track]
fig, ax = plot_API(lat, legend=False)
ax.plot(s, emit_x * gamma * 1e6, "b", lw=1, label=r"$\varepsilon_x$ [mm $\cdot$ mrad]")
ax.plot(s, emit_y * gamma * 1e6, "r", lw=1, label=r"$\varepsilon_y$ [mm $\cdot$ mrad]")
ax.set_ylabel(r"$\varepsilon_{x,y}$ [mm $\cdot$ mrad]")
ax.legend()
plt.show()
"""
Explanation: Eigenemittance
As we can see, the projected emittances are not preserved, although all transfer matrices are symplectic. The reason is that the coupler kicks introduce coupling between the $X$ and $Y$ planes, while the projected emittances are invariant only under linear uncoupled (with respect to the laboratory coordinate system) symplectic transport.
However, there are invariants under arbitrary (possibly coupled) linear symplectic transformations: the eigenemittances. Details can be found in V. Balandin and N. Golubeva, "Notes on Linear Theory of Coupled Particle Beams with Equal Eigenemittances", and V. Balandin et al., "Twiss Parameters of Coupled Particle Beams with Equal Eigenemittances".
End of explanation
"""
|
feststelltaste/software-analytics | notebooks/Read in semi-structured data with pandas.ipynb | gpl-3.0 | !cp ../../joa_spring-petclinic/git_log_numstat.log datasets/git_log_raw_stats_spring_petclinic.log
import pandas as pd
log = pd.read_csv(
"datasets/git_log_raw_stats_spring_petclinic.log",
sep="\n",
names=['raw'])
log.head()
"""
Explanation: Read in semi-structured data with pandas
When analyzing software systems in a Software Analytics style with pandas, you might face data that isn't yet in a tabular format you can easily read. In this notebook, I'll show you how you can read in semi-structured data. It's a set of tips and tricks on how you can
transform a list of data into separate columns
split information in one entry into multiple columns
merge information across multiple rows into one row
So let's dive in!
Dataset
In our case, we want to analyze data from a version control system. The dataset was generated from the Git repository JavaOnAutobahn/spring-petclinic with the command git log --numstat > git_log_numstat.log.
This exports the history of the Git repository, including some information about the file changes per commit. Here is an excerpt from this created dataset:
```
commit 4d3d9de655faa813781027d8b1baed819c6a56fe
Author: Markus Harrer feststelltaste@googlemail.com
Date: Tue Mar 5 22:32:20 2019 +0100
add virtual bounded contexts
20 1 jqassistant/business.adoc
```
For each commit, we have this text fragment. The dataset isn't structured data in a tabular way but a more row-based style of data. Each row contains a different kind of information, e.g., the commit id, the author's name, the commit date, the commit message (in the worst case: spread across multiple lines!), as well as the changed files with the number of added and deleted lines of code.
The question is: Can we get this kind of data into a pandas DataFrame?
Let's see!
Note: You can also export data from Git with the --format options to create a tabular output. Use this to save you some headaches. But there might be data sources that don't have this option. So it's a good idea to be prepared!
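For reference, here is a hedged sketch of what such a tabular export could look like; the exact placeholders and output file name below are an illustration and are not taken from the original notebook:
```python
# Run inside the repository you want to analyze.
# %h = abbreviated hash, %an = author name, %ad = author date, %s = subject.
# The resulting semicolon-separated file can be read with pd.read_csv(..., sep=';').
!git log --pretty=format:'%h;%an;%ad;%s' --date=iso > datasets/git_log_tabular.csv
```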
Feedback: This notebook shows my brute force approach for handling semi-structured data with pandas. I would be very happy if you have some suggestions on how to improve this in a simpler way!
Read in the data
We first load this semi-structured data into a DataFrame. We use a little trick for doing this. Using the newline symbol as separator reads that data in line by line.
End of explanation
"""
log['sha'] = log.loc[log['raw'].str.startswith("commit ")]['raw'].str.split("commit ").str[1]
log['author'] = log.loc[log['raw'].str.startswith("Author: ")]['raw'].str.split("Author: ").str[1]
log['timestamp'] = log.loc[log['raw'].str.startswith("Date: ")]['raw'].str.split("Date: ").str[1]
log.head()
"""
Explanation: Information Extraction Adventure
Now we have to extract each bit of information accordingly. It's a thankless job. But it works quite well in most cases.
Extract a row to a separate column
We start with the rows that contain information which can be put into separate columns relatively easily. For this, we look for markers, e.g., at the beginning of a row. We can then use these markers to find the rows we want to extract and apply custom string splitting. This approach works for the information about the commit id, the author's name, and the commit date.
End of explanation
"""
log['message'] = log.loc[log['raw'].str.startswith(" "*4)]['raw'].str[4:]
log.head()
"""
Explanation: Extract further rows to one column
Next, we want to handle the multiline commit messages. These are also marked, in a way, by four consecutive spaces at the beginning. We can also extract them with the same approach as above (ugly, but it works!).
End of explanation
"""
log['no_entry'] = \
log['sha'].isna() & \
log['author'].isna() & \
log['timestamp'].isna() & \
log['message'].isna()
log.head()
"""
Explanation: Note: We still have to treat commit messages that span across multiple rows. We have to care about that later on.
Extract multiple columns from multiple row
Now for the remaining rows: The information about the additions and deletions per filename. This is a little bit tricky in three ways:
There is no dedicated marker for the file statistics
There are multiple information about the modified file in one row (added & deleted lines as well as the filename)
There are multiple rows for all the changed files within one commit
We can handle this step by step. First, we mark the rows that haven't been extracted yet into separate columns by creating a new column no_entry with True entries for those.
End of explanation
"""
log['sha'] = log['sha'].fillna(method="ffill")
log.head()
"""
Explanation: In the next step, we need to signal which file statistics information belongs to which commit. Luckily, there is a marker from this that we've already extracted: the sha column. This information is also the start of a commit entry. So we can use this entry to mark all the follow up entries of a commit to signal that these rows belong together.
End of explanation
"""
sha_files = log[log['no_entry']][['sha', 'raw']]
sha_files = sha_files.set_index('sha')
sha_files.head()
"""
Explanation: OK, we see, this seems to get somehow complicated. So let's create a separate DataFrame for this called sha_files, were we just treat the file statistics. This DataFrame contains now for each commit all the change information for each changed file.
End of explanation
"""
sha_files[['additions', 'deletions', 'filename']] = sha_files['raw'].str.split("\t", expand=True)
del(sha_files['raw'])
sha_files.head()
"""
Explanation: We are now able to focus on the files statistics. We can split the raw entries with the tabular symbol and throw away the raw data. This fives as us nicely formatted DataFrame with the files statistics' information.
End of explanation
"""
meta_data = log.groupby('sha')[['author', 'timestamp']].first()
meta_data.head()
"""
Explanation: Next, we want to join this data with the other, bigger log DataFrame that contains all the other information about the commits. This means we have to arrange the other DataFrame so that we can join our newly created sha_files DataFrame. We can accomplish this by groupby by the sha columns. We also try to reduce complexity by just preserving the meta information with the author and the timestamp for now.
End of explanation
"""
changes = meta_data.join(sha_files, how='right')
changes.head()
"""
Explanation: With both DataFrames having the same index column sha, we can now join DataFrames. We set the join method to right because we have multiple file statistics entries for each commit. This expands the meta_data DataFrame, i.e., duplicates each meta data entry for a file statistics entry.
End of explanation
"""
sha_msg = log.dropna(subset=['message']).groupby('sha')['message'].apply(' '.join)
sha_msg.head()
"""
Explanation: Alright, we are almost done. Hang in there!
Combine multiple rows to one entry in a column
We still have to treat the commit messages that span across multiple lines. So back to the message information. Thanks to the sha column, we can concatenate all the messages that belong to one commit and join the message parts into one single row.
End of explanation
"""
changes = changes.join(sha_msg)
changes.head()
"""
Explanation: Combining commit messages and change information
Finally, we also join this separate Series with the main DataFrame. Done!
End of explanation
"""
|
r-shekhar/NYC-transport | 06_repartition/repartition_all_spark.ipynb | bsd-3-clause | # standard imports
funcs = pyspark.sql.functions
types = pyspark.sql.types
sqlContext.sql("set spark.sql.shuffle.partitions=32")
bike = spark.read.parquet('/data/citibike.parquet')
bike.registerTempTable('bike')
spark.sql('select * from bike limit 5').toPandas()
bike = (bike
.withColumn('start_time',
funcs.from_unixtime((bike.start_time/1000000).cast(types.IntegerType()))
.cast(types.TimestampType()))
.withColumn('stop_time',
funcs.from_unixtime((bike.stop_time/1000000).cast(types.IntegerType()))
.cast(types.TimestampType()))
# .withColumn('start_time',
# (funcs.substring(bike.start_time, 1, 20)).cast(types.TimestampType()))
# .withColumn('stop_time', bike.stop_time.cast(types.TimestampType())) \
.withColumn('start_taxizone_id', bike.start_taxizone_id.cast(types.FloatType()))
.withColumn('end_taxizone_id', bike.end_taxizone_id.cast(types.FloatType()))
)
bike.registerTempTable('bike2')
spark.sql('select * from bike2 limit 5').toPandas()
bike.sort('start_time') \
.write.parquet('/data/citibike_spark.parquet', compression='snappy', mode='overwrite')
"""
Explanation: This notebook must be run under PySpark (2.0.2 +)
I had better luck when I restarted the notebook kernel in between different parquet groups.
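If the notebook is instead opened in a plain Jupyter kernel, the spark and sqlContext handles used in the code cells are not predefined; a minimal sketch of creating them (an assumption, not part of the original notebook, using Spark 2.x syntax) is:
```python
# Only needed outside a pyspark-launched notebook.
from pyspark.sql import SparkSession, SQLContext

spark = SparkSession.builder.appName('repartition_all').getOrCreate()
sqlContext = SQLContext(spark.sparkContext)
```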
Convert and repartition Citibike DataFrame using Spark
End of explanation
"""
# standard imports
funcs = pyspark.sql.functions
types = pyspark.sql.types
sqlContext.sql("set spark.sql.shuffle.partitions=32")
subway = spark.read.parquet('/data/subway.parquet')
subway.registerTempTable("subway")
spark.sql('select * from subway limit 5').toPandas()
subway = \
subway.withColumn('endtime', funcs.from_unixtime((subway.endtime/1000000).cast(types.IntegerType())) \
.cast(types.TimestampType()))
subway.registerTempTable("subway2")
spark.sql('select * from subway2 limit 5').toPandas()
subway = subway.sort("ca", "unit", "scp", "endtime")
subway.write.parquet('/data/subway_spark.parquet', compression='snappy', mode='overwrite')
"""
Explanation: Convert and repartition Subway Dataframe using PySpark
End of explanation
"""
# standard imports
funcs = pyspark.sql.functions
types = pyspark.sql.types
sqlContext.sql("set spark.sql.shuffle.partitions=800")
taxi = spark.read.parquet('/data/all_trips_unprocessed.parquet')
taxi = (
taxi.withColumn('dropoff_datetime',
funcs.from_unixtime((taxi.dropoff_datetime/1000000).cast(types.IntegerType()))
.cast(types.TimestampType())) \
.withColumn('pickup_datetime',
funcs.from_unixtime((taxi.pickup_datetime/1000000).cast(types.IntegerType()))
.cast(types.TimestampType())) \
.withColumn('dropoff_taxizone_id', taxi.dropoff_taxizone_id.cast(types.IntegerType())) \
.withColumn('pickup_taxizone_id', taxi.pickup_taxizone_id.cast(types.IntegerType())) \
.withColumn('dropoff_latitude', taxi.dropoff_latitude.cast(types.FloatType())) \
.withColumn('dropoff_longitude', taxi.dropoff_longitude.cast(types.FloatType())) \
.withColumn('ehail_fee', taxi.ehail_fee.cast(types.FloatType())) \
.withColumn('extra', taxi.extra.cast(types.FloatType())) \
.withColumn('fare_amount', taxi.fare_amount.cast(types.FloatType())) \
.withColumn('improvement_surcharge', taxi.improvement_surcharge.cast(types.FloatType())) \
.withColumn('mta_tax', taxi.mta_tax.cast(types.FloatType())) \
.withColumn('pickup_latitude', taxi.pickup_latitude.cast(types.FloatType())) \
.withColumn('pickup_longitude', taxi.pickup_longitude.cast(types.FloatType())) \
.withColumn('tip_amount', taxi.tip_amount.cast(types.FloatType())) \
.withColumn('tolls_amount', taxi.tolls_amount.cast(types.FloatType())) \
.withColumn('total_amount', taxi.total_amount.cast(types.FloatType())) \
.withColumn('trip_distance', taxi.trip_distance.cast(types.FloatType())) \
.withColumn('passenger_count', taxi.passenger_count.cast(types.IntegerType())) \
.withColumn('rate_code_id', taxi.rate_code_id.cast(types.IntegerType()))
# .withColumn('trip_id', funcs.monotonically_increasing_id())
)
taxi.sort('pickup_datetime').withColumn('trip_id', funcs.monotonically_increasing_id()) \
.write.parquet('/data/all_trips_spark.parquet', compression='snappy', mode='overwrite')
"""
Explanation: Convert, repartition, and sort Taxi Dataframe using PySpark
End of explanation
"""
import pandas as pd
import numpy as np
pd.options.display.max_rows = 100
pd.options.display.max_columns = 100
import dask.dataframe as dd
import dask.distributed
client = dask.distributed.Client()
trips = dd.read_parquet('/data/all_trips_spark.parquet', engine='arrow')
trips.head()
trips.tail()
# arrow engine adds quotes to all string fields for some reason. Strip them out.
dtypedict = dict(trips.dtypes)
for k in dtypedict:
if dtypedict[k] == np.dtype('O'):
trips[k] = trips[k].str.strip('"')
trips = trips.set_index('pickup_datetime', npartitions=trips.npartitions, sorted=True, compute=False)
trips.to_parquet('/data/all_trips.parquet', has_nulls=True, compression="SNAPPY", object_encoding='json')
"""
Explanation: Using Dask, read and set the index on the Taxi DataFrame produced using PySpark, then write to disk for easy reading in Dask
For some reason I don't yet understand, this code seems dependent on Dask 0.14.3, and breaks in 0.15.
End of explanation
"""
|
tensorflow/docs-l10n | site/zh-cn/hub/tutorials/cross_lingual_similarity_with_tf_hub_multilingual_universal_encoder.ipynb | apache-2.0 | # Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""
Explanation: Copyright 2019 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
%%capture
#@title Setup Environment
# Install the latest Tensorflow version.
!pip install tensorflow_text
!pip install bokeh
!pip install simpleneighbors[annoy]
!pip install tqdm
#@title Setup common imports and functions
import bokeh
import bokeh.models
import bokeh.plotting
import numpy as np
import os
import pandas as pd
import tensorflow.compat.v2 as tf
import tensorflow_hub as hub
from tensorflow_text import SentencepieceTokenizer
import sklearn.metrics.pairwise
from simpleneighbors import SimpleNeighbors
from tqdm import tqdm
from tqdm import trange
def visualize_similarity(embeddings_1, embeddings_2, labels_1, labels_2,
plot_title,
plot_width=1200, plot_height=600,
xaxis_font_size='12pt', yaxis_font_size='12pt'):
assert len(embeddings_1) == len(labels_1)
assert len(embeddings_2) == len(labels_2)
# arccos based text similarity (Yang et al. 2019; Cer et al. 2019)
sim = 1 - np.arccos(
sklearn.metrics.pairwise.cosine_similarity(embeddings_1,
embeddings_2))/np.pi
embeddings_1_col, embeddings_2_col, sim_col = [], [], []
for i in range(len(embeddings_1)):
for j in range(len(embeddings_2)):
embeddings_1_col.append(labels_1[i])
embeddings_2_col.append(labels_2[j])
sim_col.append(sim[i][j])
df = pd.DataFrame(zip(embeddings_1_col, embeddings_2_col, sim_col),
columns=['embeddings_1', 'embeddings_2', 'sim'])
mapper = bokeh.models.LinearColorMapper(
palette=[*reversed(bokeh.palettes.YlOrRd[9])], low=df.sim.min(),
high=df.sim.max())
p = bokeh.plotting.figure(title=plot_title, x_range=labels_1,
x_axis_location="above",
y_range=[*reversed(labels_2)],
plot_width=plot_width, plot_height=plot_height,
tools="save",toolbar_location='below', tooltips=[
('pair', '@embeddings_1 ||| @embeddings_2'),
('sim', '@sim')])
p.rect(x="embeddings_1", y="embeddings_2", width=1, height=1, source=df,
fill_color={'field': 'sim', 'transform': mapper}, line_color=None)
p.title.text_font_size = '12pt'
p.axis.axis_line_color = None
p.axis.major_tick_line_color = None
p.axis.major_label_standoff = 16
p.xaxis.major_label_text_font_size = xaxis_font_size
p.xaxis.major_label_orientation = 0.25 * np.pi
p.yaxis.major_label_text_font_size = yaxis_font_size
p.min_border_right = 300
bokeh.io.output_notebook()
bokeh.io.show(p)
"""
Explanation: Exploring cross-lingual similarity and building a semantic search engine with the Multilingual Universal Sentence Encoder
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/hub/tutorials/cross_lingual_similarity_with_tf_hub_multilingual_universal_encoder"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/cross_lingual_similarity_with_tf_hub_multilingual_universal_encoder.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/cross_lingual_similarity_with_tf_hub_multilingual_universal_encoder.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/hub/tutorials/cross_lingual_similarity_with_tf_hub_multilingual_universal_encoder.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a></td>
<td> <a href="https://tfhub.dev/google/universal-sentence-encoder-multilingual/3"><img src="https://tensorflow.google.cn/images/hub_logo_32px.png">See TF Hub model</a> </td>
</table>
This notebook illustrates how to access the Multilingual Universal Sentence Encoder module and use it to study sentence similarity across multiple languages. This module is an extension of the original Universal Sentence Encoder module.
The notebook is divided in two parts:
The first part shows visualizations of sentences between pairs of languages. This is a more academic exercise.
In the second part, we show how to build a semantic search engine from a sample of a Wikipedia corpus in multiple languages.
Citation
Research papers that make use of the models explored in this Colab should cite:
Multilingual Universal Sentence Encoder for Semantic Retrieval
Setup
This section sets up the environment for accessing the Multilingual Universal Sentence Encoder module and prepares a set of English sentences and their translations. In the following sections, the multilingual module will be used to compute cross-lingual similarity.
End of explanation
"""
# The 16-language multilingual module is the default but feel free
# to pick others from the list and compare the results.
module_url = 'https://tfhub.dev/google/universal-sentence-encoder-multilingual/3' #@param ['https://tfhub.dev/google/universal-sentence-encoder-multilingual/3', 'https://tfhub.dev/google/universal-sentence-encoder-multilingual-large/3']
model = hub.load(module_url)
def embed_text(input):
return model(input)
"""
Explanation: This is additional boilerplate code where we import the pre-trained ML model that we will use to encode text throughout this notebook.
End of explanation
"""
# Some texts of different lengths in different languages.
arabic_sentences = ['كلب', 'الجراء لطيفة.', 'أستمتع بالمشي لمسافات طويلة على طول الشاطئ مع كلبي.']
chinese_sentences = ['狗', '小狗很好。', '我喜欢和我的狗一起沿着海滩散步。']
english_sentences = ['dog', 'Puppies are nice.', 'I enjoy taking long walks along the beach with my dog.']
french_sentences = ['chien', 'Les chiots sont gentils.', 'J\'aime faire de longues promenades sur la plage avec mon chien.']
german_sentences = ['Hund', 'Welpen sind nett.', 'Ich genieße lange Spaziergänge am Strand entlang mit meinem Hund.']
italian_sentences = ['cane', 'I cuccioli sono carini.', 'Mi piace fare lunghe passeggiate lungo la spiaggia con il mio cane.']
japanese_sentences = ['犬', '子犬はいいです', '私は犬と一緒にビーチを散歩するのが好きです']
korean_sentences = ['개', '강아지가 좋다.', '나는 나의 개와 해변을 따라 길게 산책하는 것을 즐긴다.']
russian_sentences = ['собака', 'Милые щенки.', 'Мне нравится подолгу гулять по пляжу со своей собакой.']
spanish_sentences = ['perro', 'Los cachorros son agradables.', 'Disfruto de dar largos paseos por la playa con mi perro.']
# Multilingual example
multilingual_example = ["Willkommen zu einfachen, aber", "verrassend krachtige", "multilingüe", "compréhension du langage naturel", "модели.", "大家是什么意思" , "보다 중요한", ".اللغة التي يتحدثونها"]
multilingual_example_in_en = ["Welcome to simple yet", "surprisingly powerful", "multilingual", "natural language understanding", "models.", "What people mean", "matters more than", "the language they speak."]
# Compute embeddings.
ar_result = embed_text(arabic_sentences)
en_result = embed_text(english_sentences)
es_result = embed_text(spanish_sentences)
de_result = embed_text(german_sentences)
fr_result = embed_text(french_sentences)
it_result = embed_text(italian_sentences)
ja_result = embed_text(japanese_sentences)
ko_result = embed_text(korean_sentences)
ru_result = embed_text(russian_sentences)
zh_result = embed_text(chinese_sentences)
multilingual_result = embed_text(multilingual_example)
multilingual_in_en_result = embed_text(multilingual_example_in_en)
"""
Explanation: Visualize text similarity between languages
Now that we have the sentence embeddings, we can visualize the semantic similarity across different languages.
Computing text embeddings
We first define a set of sentences translated into various languages in parallel. Then we precompute the embeddings for all of our sentences.
End of explanation
"""
visualize_similarity(multilingual_in_en_result, multilingual_result,
multilingual_example_in_en, multilingual_example, "Multilingual Universal Sentence Encoder for Semantic Retrieval (Yang et al., 2019)")
"""
Explanation: Visualizing the similarity
With the text embeddings in hand, we can use their dot products to visualize how similar sentences are between languages. A darker color indicates the embeddings are semantically more similar.
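As a minimal sketch of the arccos-based similarity used inside visualize_similarity (added for illustration, not part of the original notebook; it assumes eager TensorFlow 2 so that .numpy() is available):
```python
import numpy as np
import sklearn.metrics.pairwise

e1 = embed_text(['dog']).numpy()
e2 = embed_text(['chien']).numpy()
# angular similarity: 1.0 means identical direction, 0.5 means orthogonal
sim = 1 - np.arccos(np.clip(
    sklearn.metrics.pairwise.cosine_similarity(e1, e2), -1.0, 1.0)) / np.pi
print(float(sim[0][0]))
```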
Multilingual similarity
End of explanation
"""
visualize_similarity(en_result, ar_result, english_sentences, arabic_sentences, 'English-Arabic Similarity')
"""
Explanation: English-Arabic similarity
End of explanation
"""
visualize_similarity(en_result, ru_result, english_sentences, russian_sentences, 'English-Russian Similarity')
"""
Explanation: English-Russian similarity
End of explanation
"""
visualize_similarity(en_result, es_result, english_sentences, spanish_sentences, 'English-Spanish Similarity')
"""
Explanation: English-Spanish similarity
End of explanation
"""
visualize_similarity(en_result, it_result, english_sentences, italian_sentences, 'English-Italian Similarity')
"""
Explanation: English-Italian similarity
End of explanation
"""
visualize_similarity(it_result, es_result, italian_sentences, spanish_sentences, 'Italian-Spanish Similarity')
"""
Explanation: Italian-Spanish similarity
End of explanation
"""
visualize_similarity(en_result, zh_result, english_sentences, chinese_sentences, 'English-Chinese Similarity')
"""
Explanation: English-Chinese similarity
End of explanation
"""
visualize_similarity(en_result, ko_result, english_sentences, korean_sentences, 'English-Korean Similarity')
"""
Explanation: English-Korean similarity
End of explanation
"""
visualize_similarity(zh_result, ko_result, chinese_sentences, korean_sentences, 'Chinese-Korean Similarity')
"""
Explanation: Chinese-Korean similarity
End of explanation
"""
corpus_metadata = [
('ar', 'ar-en.txt.zip', 'News-Commentary.ar-en.ar', 'Arabic'),
('zh', 'en-zh.txt.zip', 'News-Commentary.en-zh.zh', 'Chinese'),
('en', 'en-es.txt.zip', 'News-Commentary.en-es.en', 'English'),
('ru', 'en-ru.txt.zip', 'News-Commentary.en-ru.ru', 'Russian'),
('es', 'en-es.txt.zip', 'News-Commentary.en-es.es', 'Spanish'),
]
language_to_sentences = {}
language_to_news_path = {}
for language_code, zip_file, news_file, language_name in corpus_metadata:
zip_path = tf.keras.utils.get_file(
fname=zip_file,
origin='http://opus.nlpl.eu/download.php?f=News-Commentary/v11/moses/' + zip_file,
extract=True)
news_path = os.path.join(os.path.dirname(zip_path), news_file)
language_to_sentences[language_code] = pd.read_csv(news_path, sep='\t', header=None)[0][:1000]
language_to_news_path[language_code] = news_path
print('{:,} {} sentences'.format(len(language_to_sentences[language_code]), language_name))
"""
Explanation: And more...
The examples above can be extended to any language pair among English, Arabic, Chinese, Dutch, French, German, Italian, Japanese, Korean, Polish, Portuguese, Russian, Spanish, Thai and Turkish. Happy coding!
Creating a multilingual semantic-similarity search engine
Whereas in the previous example we visualized a handful of sentences, in this section we will build a semantic-search index of about 200,000 sentences from a Wikipedia corpus sample. About half will be in English and the other half in Spanish, to demonstrate the multilingual capabilities of the Universal Sentence Encoder.
Download data to index
First, we will download news sentences in multiple languages from the News-Commentary corpus [1]. Without loss of generality, this recipe should also work for indexing the rest of the supported languages.
To speed up the demo, we limit it to 1,000 sentences per language.
End of explanation
"""
# Takes about 3 minutes
batch_size = 2048
language_to_embeddings = {}
for language_code, zip_file, news_file, language_name in corpus_metadata:
print('\nComputing {} embeddings'.format(language_name))
with tqdm(total=len(language_to_sentences[language_code])) as pbar:
for batch in pd.read_csv(language_to_news_path[language_code], sep='\t',header=None, chunksize=batch_size):
language_to_embeddings.setdefault(language_code, []).extend(embed_text(batch[0]))
pbar.update(len(batch))
"""
Explanation: Convert sentences into vectors using the pre-trained model
We compute the embeddings in batches so that they fit in the GPU's RAM.
End of explanation
"""
%%time
# Takes about 8 minutes
num_index_trees = 40
language_name_to_index = {}
embedding_dimensions = len(list(language_to_embeddings.values())[0][0])
for language_code, zip_file, news_file, language_name in corpus_metadata:
print('\nAdding {} embeddings to index'.format(language_name))
index = SimpleNeighbors(embedding_dimensions, metric='dot')
for i in trange(len(language_to_sentences[language_code])):
index.add_one(language_to_sentences[language_code][i], language_to_embeddings[language_code][i])
print('Building {} index with {} trees...'.format(language_name, num_index_trees))
index.build(n=num_index_trees)
language_name_to_index[language_name] = index
%%time
# Takes about 13 minutes
num_index_trees = 60
print('Computing mixed-language index')
combined_index = SimpleNeighbors(embedding_dimensions, metric='dot')
for language_code, zip_file, news_file, language_name in corpus_metadata:
print('Adding {} embeddings to mixed-language index'.format(language_name))
for i in trange(len(language_to_sentences[language_code])):
annotated_sentence = '({}) {}'.format(language_name, language_to_sentences[language_code][i])
combined_index.add_one(annotated_sentence, language_to_embeddings[language_code][i])
print('Building mixed-language index with {} trees...'.format(num_index_trees))
combined_index.build(n=num_index_trees)
"""
Explanation: Build an index of semantic vectors
We use the SimpleNeighbors library (which is a wrapper for the Annoy library) to efficiently look up results from the corpus.
End of explanation
"""
sample_query = 'The stock market fell four points.' #@param ["Global warming", "Researchers made a surprising new discovery last week.", "The stock market fell four points.", "Lawmakers will vote on the proposal tomorrow."] {allow-input: true}
index_language = 'English' #@param ["Arabic", "Chinese", "English", "French", "German", "Russian", "Spanish"]
num_results = 10 #@param {type:"slider", min:0, max:100, step:10}
query_embedding = embed_text(sample_query)[0]
search_results = language_name_to_index[index_language].nearest(query_embedding, n=num_results)
print('{} sentences similar to: "{}"\n'.format(index_language, sample_query))
search_results
"""
Explanation: Verify that the semantic-similarity search engine works
In this section we will demonstrate the following:
Semantic-search capability: retrieving sentences from the corpus that are semantically similar to a given query.
Multilingual capability: doing so in multiple languages when the query language and the index language match.
Cross-lingual capability: issuing queries in a language different from the indexed corpus.
Mixed-language corpus: all of the above on a single index containing entries from all languages.
Semantic-search cross-lingual capability
In this section we show how to retrieve sentences related to a set of sample English sentences. Things to try (a cross-lingual query sketch follows this list):
Try a few different sample sentences.
Try changing the number of returned results (they are returned in order of similarity).
Try the cross-lingual capability by returning results in different languages (you might want to use Google Translate on some results to your native language for a sanity check).
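A short extra example (added here, not part of the original notebook): issue a Spanish query against the English index to exercise the cross-lingual capability directly. The Spanish sentence is an assumed translation of the sample query above.
```python
spanish_query = 'La bolsa de valores cayó cuatro puntos.'
spanish_query_embedding = embed_text([spanish_query])[0]
language_name_to_index['English'].nearest(spanish_query_embedding, n=5)
```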
End of explanation
"""
sample_query = 'The stock market fell four points.' #@param ["Global warming", "Researchers made a surprising new discovery last week.", "The stock market fell four points.", "Lawmakers will vote on the proposal tomorrow."] {allow-input: true}
num_results = 40 #@param {type:"slider", min:0, max:100, step:10}
query_embedding = embed_text(sample_query)[0]
search_results = language_name_to_index[index_language].nearest(query_embedding, n=num_results)
print('{} sentences similar to: "{}"\n'.format(index_language, sample_query))
search_results
"""
Explanation: Mixed-corpus capability
Now we will issue a query in English, but the results will come from any of the indexed languages.
End of explanation
"""
query = 'The stock market fell four points.' #@param {type:"string"}
num_results = 30 #@param {type:"slider", min:0, max:100, step:10}
query_embedding = embed_text(query)[0]
search_results = combined_index.nearest(query_embedding, n=num_results)
print('{} sentences similar to: "{}"\n'.format(index_language, query))
search_results
"""
Explanation: Try your own queries:
End of explanation
"""
|
2015fallhw/user9999 | content/notebook/.ipynb_checkpoints/Solving the TSP with GAs-checkpoint.ipynb | agpl-3.0 | import matplotlib.pyplot as plt
import matplotlib.colors as colors
import matplotlib.cm as cmx
import random, operator
import time
import itertools
import numpy
import math
%matplotlib inline
random.seed(time.time()) # planting a random seed
"""
Explanation: <img src='http://www.puc-rio.br/sobrepuc/admin/vrd/brasao/download/ass_vertpb_reduz4.jpg' align='left'/>
Demonstration Class 03
Using genetic algorithms to solve the traveling salesperson problem
Luis Martí, LIRA/DEE/PUC-Rio
http://lmarti.com; [email protected]
Advanced Evolutionary Computation: Theory and Practice
The notebook is better viewed rendered as slides. You can convert it to slides and view them by:
- using nbconvert with a command like:
bash
$ ipython nbconvert --to slides --post serve <this-notebook-name.ipynb>
- installing Reveal.js - Jupyter/IPython Slideshow Extension
- using the online IPython notebook slide viewer (some slides of the notebook might not be properly rendered).
This and other related IPython notebooks can be found at the course github repository:
* https://github.com/lmarti/evolutionary-computation-course
Traveling Salesperson Problem (TSP):
Given a set of cities, and the distances between each pair of cities, find a tour of the cities with the minimum total distance. A tour means you start at one city, visit every other city exactly once, and then return to the starting city.
This notebook relies on Peter Norvig's IPython notebook on the traveling salesperson problem.
I will be showing how to apply evolutionary algorithms to solve the TSP.
This is a well-known [intractable](http://en.wikipedia.org/wiki/Intractability_(complexity)) problem, meaning that there are no efficient solutions that work for a large number of cities.
We can create an inefficient algorithm that works fine for a small number of cites (about a dozen).
We can also find a nearly-shortest tour over thousands of cities.
Actually, the fact there is no efficient algorithm is liberating:
This means that we can use a very simple, inefficient algorithm and not feel too bad about it.
The vocabulary of the problem:
City: For the purpose of this exercise, a city is "atomic" in the sense that we don't have to know anything about the components or attributes of a city, just how far it is from other cities.
Cities: We will need to represent a set of cities; Python's set datatype might be appropriate for that.
Distance: We will need the distance between two cities. If A and B are cities. This could be done with a function, distance(A, B), or with a dict, distance[A][B] or distance[A, B], or with an array if A and B are integer indexes. The resulting distance will be a real number (which Python calls a float).
Tour: A tour is an ordered list of cities; Python's list or tuple datatypes would work.
Total distance: The sum of the distances of adjacent cities in the tour. We will probably have a function, total_distance(tour).
We are doing this demonstration as an IPython notebook. Therefore, we need to perform some initialization.
End of explanation
"""
def exact_TSP(cities):
"Generate all possible tours of the cities and choose the shortest one."
return shortest(alltours(cities))
def shortest(tours):
"Return the tour with the minimum total distance."
return min(tours, key=total_distance)
"""
Explanation: First algorithm: find the tour with shortest total distance fro all possible tours
Generate all the possible tours of the cities, and choose the shortest one (the tour with the minimum total distance).
We can implement this as the Python function exact_TSP (TSP is the standard abbreviation for Traveling Salesperson Problem, and "exact" means that it finds the shortest tour, exactly, not just an approximation to the shortest tour). Here's the design philosophy we will use:
Write Python code that closely mirrors the English description of the algorithm. This will probably require
some auxilliary functions and data structures; just assume we will be able to define them as well, using the same design philosophy.
End of explanation
"""
alltours = itertools.permutations # The permutation function is already defined in the itertools module
cities = {1, 2, 3}
list(alltours(cities))
"""
Explanation: Note 1: We have not yet defined the function total_distance, nor alltours.
Note 2: In Python min(collection, key=function) means to find the element x that is a member of collection such that function(x) is minimized. So shortest finds the tour whose total_distance is minimal among the tours. So our Python code implements (and closely mimics) our English description of the algorithm. Now we need to define what a tour is, and how to measure total distance.
Representing Tours
A tour starts in one city, and then visits each of the other cities in order, before finally returning to the start.
A natural representation of the set of available cities is a Python set, and a natural representation of a tour is a sequence that is a permutation of the set.
The tuple (1, 2, 3), for example, represents a tour that starts in city 1, moves to 2, then 3, and then returns to 1 to finish the tour.
End of explanation
"""
def total_distance(tour):
"The total distance between each pair of consecutive cities in the tour."
return sum(distance(tour[i], tour[i-1])
for i in range(len(tour)))
"""
Explanation: Representing Cities and Distance
Now for the notion of distance. We define total_distance(tour) as the sum of the distances between consecutive cities in the tour; that part is shown below and is easy (with one Python-specific trick: when i is 0, then distance(tour[0], tour[-1]) gives us the wrap-around distance between the first and last cities, because tour[-1] is the last element of tour).
End of explanation
"""
City = complex # Constructor for new cities, e.g. City(300, 400)
def distance(A, B):
"The Euclidean distance between two cities."
return abs(A - B)
A = City(300, 0)
B = City(0, 400)
distance(A, B)
def generate_cities(n):
"Make a set of n cities, each with random coordinates."
return set(City(random.randrange(10, 890),
random.randrange(10, 590))
for c in range(n))
cities8, cities10, cities100, cities1000 = generate_cities(8), generate_cities(10), generate_cities(100), generate_cities(1000)
cities8
"""
Explanation: Distance between cities
Before we can define distance(A, B), the distance between two cities, we have to make a choice. In the fully general version of the TSP problem, the distance between two cities could be anything: it could be the amount of time it takes to travel between cities, the number of dollars it costs, or anything else.
How will we represent a two-dimensional point? Here are some choices, with their pros and cons:
Tuple: A point (or city) is a two-tuple of (x, y) coordinates, for example, (300, 0).
Pro: Very simple, easy to break a point down into components. Reasonably efficient.
Con: doesn't distinguish points from other two-tuples. If p is a point, can't do p.x or p.y.
class: Define City as a custom class with x and y fields.
Pro: explicit, gives us p.x accessors.
Con: less efficient because of the overhead of creating user-defined objects.
Distance between cities (contd)
complex: Python already has the two-dimensional point as a built-in numeric data type, but in a non-obvious way: as complex numbers, which inhabit the two-dimensional (real × complex) plane. We can make this use more explicit by defining "City = complex", meaning that we can construct the representation of a city using the same constructor that makes complex numbers.
Pro: most efficient, because it uses a builtin type that is already a pair of numbers. The distance between two points is simple: the absolute value of their difference.
Con: it may seem confusing to bring complex numbers into play; can't say p.x.
subclass: Define "class Point(complex): pass", meaning that points are a subclass of complex numbers.
Pro: All the pros of using complex directly, with the added protection of making it more explicit that these are treated as points, not as complex numbers.
Con: less efficient than using complex directly; still can't do p.x or p.y.
subclass with properties: Define "class Point(complex): x, y = property(lambda p: p.real), property(lambda p: p.imag)".
Pro: All the pros of previous approach, and we can finally say p.x.
Con: less efficient than using complex directly.
From possible alternatives Peter chose to go with complex numbers:
End of explanation
"""
def plot_tour(tour, alpha=1, color=None):
# Plot the tour as blue lines between blue circles, and the starting city as a red square.
plotline(list(tour) + [tour[0]], alpha=alpha, color=color)
plotline([tour[0]], 'rs', alpha=alpha)
# plt.show()
def plotline(points, style='bo-', alpha=1, color=None):
"Plot a list of points (complex numbers) in the 2-D plane."
X, Y = XY(points)
if color:
plt.plot(X, Y, style, alpha=alpha, color=color)
else:
plt.plot(X, Y, style, alpha=alpha)
def XY(points):
"Given a list of points, return two lists: X coordinates, and Y coordinates."
return [p.real for p in points], [p.imag for p in points]
"""
Explanation: A cool thing is to be able to plot a tour
End of explanation
"""
tour = exact_TSP(cities8)
plot_tour(tour)
"""
Explanation: We are ready to test our algorithm
End of explanation
"""
def all_non_redundant_tours(cities):
"Return a list of tours, each a permutation of cities, but each one starting with the same city."
start = first(cities)
return [[start] + list(tour)
for tour in itertools.permutations(cities - {start})]
def first(collection):
"Start iterating over collection, and return the first element."
for x in collection: return x
def exact_non_redundant_TSP(cities):
"Generate all possible tours of the cities and choose the shortest one."
return shortest(all_non_redundant_tours(cities))
all_non_redundant_tours({1, 2, 3})
"""
Explanation: Improving the algorithm: Try All Non-Redundant Tours
The permutation (1, 2, 3) represents the tour that goes from 1 to 2 to 3 and back to 1. You may have noticed that there aren't really six different tours of three cities: the cities 1, 2, and 3 form a triangle; any tour must connect the three points of the triangle; and there are really only two ways to do this: clockwise or counterclockwise. In general, with $n$ cities, there are $n!$ (that is, $n$ factorial) permutations, but only $(n-1)!$, tours that are distinct: the tours 123, 231, and 312 are three ways of representing the same tour.
So we can make our TSP program $n$ times faster by never considering redundant tours. Arbitrarily, we will say that all tours must start with the "first" city in the set of cities. We don't have to change the definition of TSP—just by making alltours return only nonredundant tours, the whole program gets faster.
(While we're at it, we'll make tours be represented as lists, rather than the tuples that are returned by permutations. It doesn't matter now, but later on we will want to represent partial tours, to which we will want to append cities one by one; that can only be done to lists, not tuples.)
End of explanation
"""
%timeit exact_TSP(cities8)
%timeit exact_non_redundant_TSP(cities8)
%timeit exact_non_redundant_TSP(cities10)
"""
Explanation: Results of the improvement
End of explanation
"""
def greedy_TSP(cities):
"At each step, visit the nearest neighbor that is still unvisited."
start = first(cities)
tour = [start]
unvisited = cities - {start}
while unvisited:
C = nearest_neighbor(tour[-1], unvisited)
tour.append(C)
unvisited.remove(C)
return tour
def nearest_neighbor(A, cities):
"Find the city in cities that is nearest to city A."
return min(cities, key=lambda x: distance(x, A))
"""
Explanation: It takes a few seconds on my machine to solve this problem. In general, the function exact_non_redundant_TSP() looks at $(n-1)!$ tours for an $n$-city problem, and each tour has $n$ cities, so the time for $n$ cities should be roughly proportional to $n!$. This means that the time grows rapidly with the number of cities; we'd need longer than the age of the Universe to run exact_non_redundant_TSP() on just 24 cities:
<table>
<tr><th>n cities<th>time
<tr><td>10<td>3 secs
<tr><td>12<td>3 secs × 12 × 11 = 6.6 mins
<tr><td>14<td>6.6 mins × 13 × 14 = 20 hours
<tr><td>24<td>3 secs × 24! / 10! = <a href="https://www.google.com/search?q=3+seconds+*+24!+%2F+10!+in+years">16 billion years</a>
</table>
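The numbers in the table follow directly from the factorial growth; a quick arithmetic check (added here, not part of the original notebook) reproduces them:
```python
# time(n) ~ 3 seconds * n! / 10!
import math
for n in (10, 12, 14, 24):
    seconds = 3 * math.factorial(n) / math.factorial(10)
    print('{} cities: ~{:.3g} seconds (~{:.3g} years)'.format(
        n, seconds, seconds / 3.15e7))
```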
There must be a better way... or at least we need to look for it until quantum computing comes around.
Approximate (Heuristic) Algorithms
The general, exact Traveling Salesperson Problem is intractable;
there is no efficient algorithm to find the tour with minimum total distance.
But if we restrict ourselves to Euclidean distance and if we are willing to settle for a tour that is reasonably short but not the shortest, then the news is much better.
We will consider several approximate algorithms, which find tours that are usually within 10 or 20% of the shortest possible and can handle thousands of cities in a few seconds.
Greedy Nearest Neighbor (greedy_TSP)
Here is our first approximate algorithm:
Start at any city; at each step extend the tour by moving from the previous city to its nearest neighbor that has not yet been visited.
This is called a greedy algorithm, because it greedily takes what looks best in the short term (the nearest neighbor) even when that won't always be the best in the long term.
To implement the algorithm I need to represent all the noun phrases in the English description:
* start: a city which is arbitrarily the first city;
* the tour: a list of cities, initialy just the start city);
* previous city: the last element of tour, that is, tour[-1]);
* nearest neighbor: a function that, when given a city, A, and a list of other cities, finds the one with minimal distance from A); and
* not yet visited: we will keep a set of unvisited cities; initially all cities but the start city are unvisited).
Once these are initialized, we repeatedly find the nearest unvisited neighbor, C, and add it to the tour and remove it from unvisited.
End of explanation
"""
cities = generate_cities(9)
%timeit exact_non_redundant_TSP(cities)
plot_tour(exact_non_redundant_TSP(cities))
%timeit greedy_TSP(cities)
plot_tour(greedy_TSP(cities))
"""
Explanation: (In Python, as in the formal mathematical theory of computability, lambda is the symbol for function, so "lambda x: distance(x, A)" means the function of x that computes the distance from x to the city A. The name lambda comes from the Greek letter λ.)
We can compare the fast approximate greedy_TSP algorithm to the slow exact_TSP algorithm on a small map, as shown below. (If you have this page in a IPython notebook you can repeatedly run the cell, and see how the algorithms compare. Cities(9) will return a different set of cities each time. I ran it 20 times, and only once did the greedy algorithm find the optimal solution, but half the time it was within 10% of optimal, and it was never more than 25% worse than optimal.)
End of explanation
"""
%timeit greedy_TSP(cities100)
plot_tour(greedy_TSP(cities100))
%timeit greedy_TSP(cities1000)
plot_tour(greedy_TSP(cities1000))
"""
Explanation: greedy_TSP() can handle bigger problems
End of explanation
"""
from deap import algorithms, base, creator, tools
"""
Explanation: But... don't be greedy!
A greedy algorithm is an algorithm that follows the problem solving heuristic of making the locally optimal choice at each stage with the hope of finding a global optimum. In many problems, a greedy strategy does not in general produce an optimal solution, but nonetheless a greedy heuristic may yield locally optimal solutions that approximate a global optimal solution in a reasonable time.
For many problems greedy algorithms fail to produce the optimal solution, and may even produce the unique worst possible solution. One example is the traveling salesman problem mentioned above: for each number of cities, there is an assignment of distances between the cities for which the nearest neighbor heuristic produces the unique worst possible tour.
A thought on computational complexity
<img src='http://imgs.xkcd.com/comics/travelling_salesman_problem.png' align='center' width='65%'/>
from XKCD
Check out Peter Norvig's IPython notebook on the traveling salesperson problem for more alternatives for the TSP.
Nature-inspired metaheuristics
We have seen in class some examples of nature-inspired metaheuristics.
They are an option in which we dedicate a little more computational effort in order to produce better solutions than greedy_TSP().
We will be using the DEAP library to code this tackle this problem using a genetic algorithm.
<img src='https://raw.githubusercontent.com/DEAP/deap/master/doc/_static/deap_long.png' width='29%' align='center'/>
End of explanation
"""
num_cities = 30
cities = generate_cities(num_cities)
"""
Explanation: Elements to take into account when solving problems with genetic algorithms
Individual representation (binary, floating-point, etc.);
evaluation and fitness assignment;
selection, that establishes a partial order of individuals in the population using their fitness function value as reference and determines the degree to which individuals in the population will take part in the generation of new (offspring) individuals.
variation, that applies a range of evolution-inspired operators, like crossover, mutation, etc., to synthesize offspring individuals from the current (parent) population. This process is supposed to prime the fittest individuals so they play a bigger role in the generation of the offspring.
stopping criterion, that determines when the algorithm should be stopped, either because the optimum was reached or because the optimization process is not progressing.
Hence a 'general' evolutionary algorithm can be described as
```
def evolutionary_algorithm():
'Pseudocode of an evolutionary algorithm'
populations = [] # a list with all the populations
populations[0] = initialize_population(pop_size)
t = 0
while not stop_criterion(populations[t]):
fitnesses = evaluate(populations[t])
offspring = mating_and_variation(populations[t],
fitnesses)
populations[t+1] = environmental_selection(
populations[t],
offspring)
t = t+1
```
Some preliminaries for the experiment
We will carry out our tests with a 30-city problem.
End of explanation
"""
toolbox = base.Toolbox()
"""
Explanation: The toolbox stores the setup of the algorithm. It describes the different elements to take into account.
End of explanation
"""
creator.create("FitnessMin", base.Fitness, weights=(-1.0,))
creator.create("Individual", list, fitness=creator.FitnessMin)
"""
Explanation: Individual representation and evaluation
Individuals represent possible solutions to the problem.
In the TSP case, it looks like the tour itself can be a suitable representation.
For simplicity, an individual can be a list with the indexes corresponding to each city.
This will simplify the crossover and mutation operators.
We can rely on the total_distance() function for evaluation and set the fitness assignment as to minimize it.
End of explanation
"""
toolbox.register("indices", numpy.random.permutation, len(cities))
toolbox.register("individual", tools.initIterate, creator.Individual,
toolbox.indices)
toolbox.register("population", tools.initRepeat, list,
toolbox.individual)
"""
Explanation: Let's now define that our individuals are composed by indexes that referr to elements of cities and, correspondingly, the population is composed by individuals.
End of explanation
"""
toolbox.register("mate", tools.cxOrdered)
toolbox.register("mutate", tools.mutShuffleIndexes, indpb=0.05)
"""
Explanation: Defining the crossover and mutation operators can be a challenging task.
There are various <a href='http://en.wikipedia.org/wiki/Crossover_(genetic_algorithm)#Crossover_for_Ordered_Chromosomes'>crossover operators</a> that have been devised to deal with ordered individuals like ours.
We will be using DEAP's deap.tools.cxOrdered() crossover.
For mutation we will swap elements from two points of the individual.
This is performed by deap.tools.mutShuffleIndexes().
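To make their effect concrete, here is a hedged sketch (assuming DEAP is installed; the variable names are only illustrative) showing that both operators keep individuals valid permutations:
```
import random
from deap import tools

random.seed(0)
parent1, parent2 = list(range(8)), list(reversed(range(8)))
child1, child2 = tools.cxOrdered(parent1[:], parent2[:])
print(child1, child2)   # both children are still permutations of 0..7
mutant, = tools.mutShuffleIndexes(list(range(8)), indpb=0.5)
print(mutant)           # each index exchanged with another with probability indpb
```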
End of explanation
"""
def create_tour(individual):
return [list(cities)[e] for e in individual]
def evaluation(individual):
'''Evaluates an individual by converting it into
a list of cities and passing that list to total_distance'''
return (total_distance(create_tour(individual)),)
toolbox.register("evaluate", evaluation)
"""
Explanation: Evaluation can be easily defined from the total_distance() definition.
End of explanation
"""
toolbox.register("select", tools.selTournament, tournsize=3)
"""
Explanation: We will employ tournament selection with size 3.
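Conceptually, tournament selection repeatedly samples a few individuals at random and keeps the best of each sample. A rough sketch of the idea (DEAP's tools.selTournament implements this for us on fitness-aware individuals; the function below is only illustrative):
```
import random

def tournament_select(population, fitnesses, k, tournsize=3):
    'Pick k individuals, each the winner of a random tournament of size tournsize.'
    chosen = []
    for _ in range(k):
        aspirants = random.sample(range(len(population)), tournsize)
        best = min(aspirants, key=lambda i: fitnesses[i])  # shorter tour wins
        chosen.append(population[best])
    return chosen
```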
End of explanation
"""
pop = toolbox.population(n=100)
%%time
result, log = algorithms.eaSimple(pop, toolbox,
cxpb=0.8, mutpb=0.2,
ngen=400, verbose=False)
"""
Explanation: Let's run the algorithm with a population of 100 individuals and 400 generations.
End of explanation
"""
best_individual = tools.selBest(result, k=1)[0]
print('Fitness of the best individual: ', evaluation(best_individual)[0])
plot_tour(create_tour(best_individual))
"""
Explanation: We can now review the results
The best individual of the last population:
End of explanation
"""
fit_stats = tools.Statistics(key=operator.attrgetter("fitness.values"))
fit_stats.register('mean', numpy.mean)
fit_stats.register('min', numpy.min)
"""
Explanation: It is interesting to assess how the fitness of the population changed as the evolution process took place.
We can prepare a deap.tools.Statistics instance to specify what data to collect.
End of explanation
"""
result, log = algorithms.eaSimple(toolbox.population(n=100), toolbox,
cxpb=0.5, mutpb=0.2,
ngen=400, verbose=False,
stats=fit_stats)
"""
Explanation: We are all set now, but let's run the genetic algorithm again, configured to collect the statistics that we want to gather:
End of explanation
"""
plt.figure(1, figsize=(11, 4), dpi=500)
plots = plt.plot(log.select('min'),'c-', log.select('mean'), 'b-', antialiased=True)
plt.legend(plots, ('Minimum fitness', 'Mean fitness'))
plt.ylabel('Fitness')
plt.xlabel('Iterations')
"""
Explanation: Plotting mean and minimum fitness as evolution took place.
End of explanation
"""
pop_stats = tools.Statistics(key=numpy.copy)
pop_stats.register('pop', numpy.copy) # -- copies the populations themselves
pop_stats.register('fitness', # -- computes and stores the fitnesses
lambda x : [evaluation(a) for a in x])
"""
Explanation: How has the population evolved?
OK, but how did the population evolve? As TSP solutions are easy to visualize, we can plot the individuals of each population as the evolution progressed. We need a new Statistics instance prepared for that.
End of explanation
"""
result, log = algorithms.eaSimple(toolbox.population(n=100), toolbox,
cxpb=0.5, mutpb=0.2,
ngen=400, verbose=False,
stats=pop_stats)
"""
Explanation: Note: I am aware that this could be done in a more efficient way.
End of explanation
"""
def plot_population(record, min_fitness, max_fitness):
'''
Plots all individuals in a population.
Darker individuals have a better fitness.
'''
pop = record['pop']
fits = record['fitness']
index = sorted(range(len(fits)), key=lambda k: fits[k])
norm=colors.Normalize(vmin=min_fitness,
vmax=max_fitness)
sm = cmx.ScalarMappable(norm=norm,
cmap=plt.get_cmap('PuBu'))
for i in range(len(index)):
color = sm.to_rgba(max_fitness - fits[index[i]][0])
plot_tour(create_tour(pop[index[i]]), alpha=0.5, color=color)
min_fitness = numpy.min(log.select('fitness'))
max_fitness = numpy.max(log.select('fitness'))
"""
Explanation: Plotting the individuals and their fitness (color-coded)
End of explanation
"""
plt.figure(1, figsize=(11,11), dpi=500)
for i in range(0, 12):
plt.subplot(4,3,i+1)
it = int(math.ceil((len(log)-1.)/15))
plt.title('t='+str(it*i))
plot_population(log[it*i], min_fitness, max_fitness)
"""
Explanation: We can now plot the population as the evolutionary process progressed. Darker blue colors imply better fitness.
End of explanation
"""
%timeit total_distance(greedy_TSP(cities))
print('greedy_TSP() distance: ', total_distance(greedy_TSP(cities)))
print('Genetic algorithm best distance: ', evaluation(best_individual)[0])
"""
Explanation: Comparison with greedy_TSP()
End of explanation
"""
from JSAnimation import IPython_display
from matplotlib import animation
def update_plot_tour(plot, points, alpha=1, color='blue'):
'A function for updating a plot with an individual'
X, Y = XY(list(points) + [points[0]])
plot.set_data(X, Y)
plot.set_color(color)
return plot
def init():
'Initialization of all plots to empty data'
for p in list(tour_plots):
p.set_data([], [])
return tour_plots
def animate(i):
'Updates all plots to match frame _i_ of the animation'
pop = log[i]['pop']
fits = log[i]['fitness']
index = sorted(range(len(fits)), key=lambda k: fits[k])
norm=colors.Normalize(vmin=min_fitness,
vmax=max_fitness)
sm = cmx.ScalarMappable(norm=norm,
cmap=plt.get_cmap('PuBu'))
for j in range(len(tour_plots)):
color = sm.to_rgba(max_fitness - fits[index[j]][0])
update_plot_tour(tour_plots[j],
create_tour(pop[index[j]]),
alpha=0.5, color=color)
return tour_plots
"""
Explanation: The genetic algorithm outperformed the greedy approach at a viable computational cost.
Note 1: Viable depends on the particular problem, of course.
Note 2: These results depend on the cities that were randomly generated. Your mileage may vary.
Homework
We have just performed one run of the experiment, but genetic algorithms are stochastic algorithms and their performance should be assessed in statistical terms. Modify the genetic algorithm code in order to be able to report the comparison with greedy_TSP() in statistically sound terms.
Population size should have an impact on the performance of the algorithm. Make an experiment regarding that.
What is the influence of the mutation and crossover probabilities in the performance of the genetic algorithm?
Extra credit
The population of the previous experiment can be better appreciated in animated form. We are going to use matplotlib.animation and the JSAnimation library (you need to install it if you plan to run this notebook locally). Similarly, this functionality needs an HTML5 capable browser.
Part of this code has also been inspired by A Simple Animation: The Magic Triangle.
End of explanation
"""
fig = plt.figure()
ax = plt.axes(xlim=(0, 900), ylim=(0, 600))
tour_plots = [ax.plot([], [], 'bo-', alpha=0.1) for i in range(len(log[0]['pop']))]
tour_plots = [p[0] for p in tour_plots]
animation.FuncAnimation(fig, animate, init_func=init,
frames=200, interval=60, blit=True)
"""
Explanation: The next step takes some time to execute. Use the video controls to see the evolution in animated form.
End of explanation
"""
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=200, interval=60, blit=True)
anim.save('tsp-populations.gif', writer='imagemagick')
"""
Explanation: Embedding the previous animation in the online notebook makes it really big. I have removed the result of the previous cell and created a .gif version of the animation for online viewing.
End of explanation
"""
|
reachtarunhere/aima-python | csp.ipynb | mit | from csp import *
"""
Explanation: Constraint Satisfaction Problems (CSPs)
This IPy notebook acts as supporting material for topics covered in Chapter 6 Constraint Satisfaction Problems of the book Artificial Intelligence: A Modern Approach. We make use of the implementations in the csp.py module. Even though this notebook includes a brief summary of the main topics, familiarity with the material presented in the book is expected. We will look at some visualizations and solve some of the CSP problems described in the book. Let us import everything from the csp module to get started.
End of explanation
"""
%psource CSP
"""
Explanation: Review
CSPs are a special kind of search problem. Here we don't treat the state space as a black box: the state has a particular form, and we use that to our advantage to tweak our algorithms so they are better suited to these problems. A CSP state is defined by a set of variables which can take values from corresponding domains. These variables can take only certain values in their domains in order to satisfy the constraints. A set of assignments which satisfies all constraints passes the goal test. Let us start by exploring the CSP class, which we will use to model our CSPs. You can keep the popup open and read the main page to get a better idea of the code.
End of explanation
"""
s = UniversalDict(['R','G','B'])
s[5]
"""
Explanation: The __init__ method parameters specify the CSP. Variables can be passed as a list of strings or integers; if an empty list is passed, they are extracted from the keys of the domains dictionary. Domains are passed as a dict whose keys are the variables and whose values are the corresponding domains. Neighbors is a dict of variables that essentially describes the constraint graph: each variable key has as its value the list of variables it is constrained with. The constraints parameter should be a function f(A, a, B, b) that returns true if neighbors A, B satisfy the constraint when they have values A=a, B=b. We have additional attributes like nassigns, which is incremented each time an assignment is made when calling the assign method. You can read more about the methods and parameters in the class docstring. We will talk more about them as we encounter their use. Let us jump to an example.
Graph Coloring
We use the graph coloring problem as our running example for demonstrating the different algorithms in the csp module. The idea of the map coloring problem is that adjacent nodes (those connected by edges) should not have the same color anywhere in the graph. The graph can be colored using a fixed number of colors. Here each node is a variable and the values are the colors that can be assigned to them. Given that the domain will be the same for all our nodes, we use a custom dict defined by the UniversalDict class. The UniversalDict class takes in a parameter which it returns as the value for all keys of the dict. It is very similar to defaultdict in Python except that it does not support item assignment.
End of explanation
"""
%psource different_values_constraint
"""
Explanation: For our CSP we also need to define a constraint function f(A, a, B, b). In this case what we need is that neighbors must not have the same color. This is defined in the function different_values_constraint of the module.
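In essence the constraint boils down to a single comparison; a minimal sketch of what such a function looks like (the _sketch suffix is only to avoid shadowing the module's own definition):
```
def different_values_constraint_sketch(A, a, B, b):
    'Neighbors A and B satisfy the constraint only if their colors a and b differ.'
    return a != b
```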
End of explanation
"""
%pdoc parse_neighbors
"""
Explanation: The CSP class takes neighbors in the form of a dict. The module specifies a simple helper function named parse_neighbors which allows us to take input in the form of strings and returns a dict of a form compatible with the CSP class.
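A small usage sketch of the expected string format (the exact ordering inside the lists may differ, but the symmetric links are always added):
```
neighbors = parse_neighbors('X: Y Z; Y: Z')
# roughly {'X': ['Y', 'Z'], 'Y': ['X', 'Z'], 'Z': ['X', 'Y']}, returned as a defaultdict
```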
End of explanation
"""
%psource MapColoringCSP
australia, usa, france
"""
Explanation: The MapColoringCSP function creates and returns a CSP with the above constraint function and states. The variables are the keys of the neighbors dict and the constraint is the one specified by the different_values_constraint function. australia, usa and france are three CSPs that have been created using MapColoringCSP. australia corresponds to Figure 6.1 in the book.
End of explanation
"""
%psource queen_constraint
"""
Explanation: NQueens
The N-queens puzzle is the problem of placing N chess queens on an N×N chessboard so that no two queens threaten each other. Here N is a natural number. Like the graph coloring problem, NQueens is also implemented in the csp module. The NQueensCSP class inherits from the CSP class. It makes some modifications in the methods to suit the particular problem. The queens are assumed to be placed one per column, from left to right. That means position (x, y) represents (var, val) in the CSP. The constraint that needs to be passed on the CSP is defined in the queen_constraint function. The constraint is satisfied (true) if A, B are really the same variable, or if they are not in the same row, down diagonal, or up diagonal.
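The check itself is essentially a one-liner; a sketch of the logic (queen_constraint in csp.py expresses exactly this idea):
```
def queen_constraint_sketch(A, a, B, b):
    'Queens in columns A, B and rows a, b do not attack each other.'
    # same row: a == b; up diagonal: A + a == B + b; down diagonal: A - a == B - b
    return A == B or (a != b and A + a != B + b and A - a != B - b)
```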
End of explanation
"""
%psource NQueensCSP
"""
Explanation: The NQueensCSP class implements methods that support solving the problem via min_conflicts, which is one of the techniques for solving CSPs. Because min_conflicts hill-climbs on the number of conflicts to solve the CSP, assign and unassign are modified to record conflicts. More details about the structures rows, downs and ups, which help in recording conflicts, are explained in the docstring.
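For reference, the min-conflicts idea itself fits in a few lines. A conceptual sketch under simplifying assumptions (csp.py's min_conflicts additionally starts from a greedy least-conflict initial assignment and breaks ties at random, so this is not the library's exact code):
```
import random

def min_conflicts_sketch(csp, max_steps=10000):
    assignment = {}
    for var in csp.variables:                      # random initial assignment
        csp.assign(var, random.choice(csp.domains[var]), assignment)
    for _ in range(max_steps):
        conflicted = [v for v in csp.variables
                      if csp.nconflicts(v, assignment[v], assignment) > 0]
        if not conflicted:
            return assignment                      # no conflicts left: solution found
        var = random.choice(conflicted)
        val = min(csp.domains[var],
                  key=lambda v: csp.nconflicts(var, v, assignment))
        csp.assign(var, val, assignment)
    return None
```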
End of explanation
"""
eight_queens = NQueensCSP(8)
"""
Explanation: The __init__ method takes only one parameter, n, the size of the problem. To create an instance we just pass the required n into the constructor.
End of explanation
"""
import copy
class InstruCSP(CSP):
def __init__(self, variables, domains, neighbors, constraints):
super().__init__(variables, domains, neighbors, constraints)
self.assingment_history = []
def assign(self, var, val, assignment):
super().assign(var,val, assignment)
self.assingment_history.append(copy.deepcopy(assignment))
def unassign(self, var, assignment):
super().unassign(var,assignment)
self.assingment_history.append(copy.deepcopy(assignment))
"""
Explanation: Helper Functions
We will now implement a few helper functions that will help us visualize the coloring problem. We will make some modifications to the existing classes and functions for additional bookkeeping. To begin with, we modify the assign and unassign methods of the CSP to add a copy of the assignment to the assingment_history. We call this new class InstruCSP. This allows us to see how the assignment evolves over time.
End of explanation
"""
def make_instru(csp):
return InstruCSP(csp.variables, csp.domains, csp.neighbors,
csp.constraints)
"""
Explanation: Next, we define make_instru which takes an instance of CSP and returns an InstruCSP instance.
End of explanation
"""
neighbors = {
0: [6, 11, 15, 18, 4, 11, 6, 15, 18, 4],
1: [12, 12, 14, 14],
2: [17, 6, 11, 6, 11, 10, 17, 14, 10, 14],
3: [20, 8, 19, 12, 20, 19, 8, 12],
4: [11, 0, 18, 5, 18, 5, 11, 0],
5: [4, 4],
6: [8, 15, 0, 11, 2, 14, 8, 11, 15, 2, 0, 14],
7: [13, 16, 13, 16],
8: [19, 15, 6, 14, 12, 3, 6, 15, 19, 12, 3, 14],
9: [20, 15, 19, 16, 15, 19, 20, 16],
10: [17, 11, 2, 11, 17, 2],
11: [6, 0, 4, 10, 2, 6, 2, 0, 10, 4],
12: [8, 3, 8, 14, 1, 3, 1, 14],
13: [7, 15, 18, 15, 16, 7, 18, 16],
14: [8, 6, 2, 12, 1, 8, 6, 2, 1, 12],
15: [8, 6, 16, 13, 18, 0, 6, 8, 19, 9, 0, 19, 13, 18, 9, 16],
16: [7, 15, 13, 9, 7, 13, 15, 9],
17: [10, 2, 2, 10],
18: [15, 0, 13, 4, 0, 15, 13, 4],
19: [20, 8, 15, 9, 15, 8, 3, 20, 3, 9],
20: [3, 19, 9, 19, 3, 9]
}
"""
Explanation: We will now use a graph defined as a dictionary for plotting purposes in our graph coloring problem. The keys are the nodes and their corresponding values are the nodes they are connected to.
End of explanation
"""
coloring_problem = MapColoringCSP('RGBY', neighbors)
coloring_problem1 = make_instru(coloring_problem)
"""
Explanation: Now we are ready to create an InstruCSP instance for our problem. We are doing this for an instance created by MapColoringCSP, which returns a CSP, so our make_instru function will work perfectly for it.
End of explanation
"""
result = backtracking_search(coloring_problem1)
result  # A dictionary of assignments.
"""
Explanation: Backtracking Search
For solving a CSP, the main issue with naive search algorithms is that they can continue expanding obviously wrong paths. In backtracking search, we check constraints as we go. Backtracking is just the above idea combined with the fact that we are dealing with one variable at a time. Backtracking search is implemented in the repository as the function backtracking_search. This is the same as Figure 6.5 in the book. The function takes as input a CSP and a few other optional parameters which can be used to further speed it up. The function returns the correct assignment if it satisfies the goal. We will discuss these later. Let us solve our coloring_problem1 with backtracking_search.
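The core of the algorithm is compact enough to sketch here (a conceptual version only; the library's backtracking_search adds configurable variable/value ordering and inference on top of this):
```
def backtrack_sketch(csp, assignment=None):
    'Assign variables one at a time, undoing assignments that lead to dead ends.'
    if assignment is None:
        assignment = {}
    if len(assignment) == len(csp.variables):
        return assignment
    var = next(v for v in csp.variables if v not in assignment)
    for value in csp.domains[var]:
        if csp.nconflicts(var, value, assignment) == 0:
            csp.assign(var, value, assignment)
            result = backtrack_sketch(csp, assignment)
            if result is not None:
                return result
            csp.unassign(var, assignment)
    return None
```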
End of explanation
"""
coloring_problem1.nassigns
"""
Explanation: Let us also check the number of assignments made.
End of explanation
"""
len(coloring_problem1.assingment_history)
"""
Explanation: Now let us check the total number of assignments and unassignments, which is the length of our assignment history.
End of explanation
"""
%psource mrv
%psource num_legal_values
%psource CSP.nconflicts
"""
Explanation: Now let us explore the optional keyword arguments that the backtracking_search function takes. These optional arguments help speed up the assignment further. Along with these, we will also point out methods in the CSP class that help make this work.
The first of these is select_unassigned_variable. It takes in a function that helps in deciding the order in which variables will be selected for assignment. We use the Minimum Remaining Values heuristic (also known as the most constrained variable heuristic), which is implemented by the function mrv. The idea behind mrv is to choose the variable with the fewest legal values left in its domain. The intuition is that it allows us to encounter failure quickly, before going too deep into the tree, if we have made a wrong choice earlier. The mrv implementation makes use of another function, num_legal_values, to rank the variables by the number of legal values left in their domains. This function, in turn, calls the nconflicts method of the CSP to compute such values.
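A hedged sketch of the mrv idea (the library version also accounts for domains pruned by inference and breaks ties at random):
```
def mrv_sketch(assignment, csp):
    'Choose the unassigned variable with the fewest non-conflicting values left.'
    unassigned = [v for v in csp.variables if v not in assignment]
    return min(unassigned,
               key=lambda var: sum(csp.nconflicts(var, val, assignment) == 0
                                   for val in csp.domains[var]))
```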
End of explanation
"""
%psource lcv
"""
Explanation: Another ordering-related parameter, order_domain_values, governs the value ordering. Here we select the Least Constraining Value, which is implemented by the function lcv. The idea is to select the value which rules out the fewest values in the remaining variables. The intuition behind selecting the lcv is that it leaves a lot of freedom to assign values later. Choosing mrv for variables and lcv for values makes sense because we have to assign every variable eventually, so we face the hard ones first, whereas for values we only need one that works, so we try the most promising ones first.
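A corresponding sketch of the lcv idea (the library version orders the current, possibly pruned, domain of the variable in the same way):
```
def lcv_sketch(var, assignment, csp):
    'Order the values of var so the least constraining ones come first.'
    return sorted(csp.domains[var],
                  key=lambda val: csp.nconflicts(var, val, assignment))
```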
End of explanation
"""
solve_simple = copy.deepcopy(usa)
solve_parameters = copy.deepcopy(usa)
backtracking_search(solve_simple)
backtracking_search(solve_parameters, order_domain_values=lcv, select_unassigned_variable=mrv, inference=mac )
solve_simple.nassigns
solve_parameters.nassigns
"""
Explanation: Finally, the third parameter inference can make use of one of two techniques called Arc Consistency and Forward Checking. The details of these methods can be found in Section 6.3.2 of the book. In short, the idea of inference is to detect possible failure before it occurs and to look ahead so as not to make mistakes. mac and forward_checking implement these two techniques. The CSP methods support_pruning, suppose, prune, choices, infer_assignment and restore help in using these techniques. You can learn more about these by looking at the source code.
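To give a flavour of the simpler of the two, here is a hedged sketch of forward checking built on the pruning helpers mentioned above (not the library's exact code):
```
def forward_check_sketch(csp, var, value, assignment, removals):
    'After assigning var=value, prune conflicting values from unassigned neighbors.'
    csp.support_pruning()
    for B in csp.neighbors[var]:
        if B not in assignment:
            for b in list(csp.choices(B)):
                if not csp.constraints(var, value, B, b):
                    csp.prune(B, b, removals)
            if not csp.choices(B):
                return False   # neighbor B has no values left: failure detected early
    return True
```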
Now let us compare the performance with these parameters enabled vs the default parameters. We will use the Graph Coloring problem instance usa for comparison. We will call the instances solve_simple and solve_parameters and solve them using backtracking and compare the number of assignments.
End of explanation
"""
%matplotlib inline
import networkx as nx
import matplotlib.pyplot as plt
import matplotlib
import time
"""
Explanation: Graph Coloring Visualization
Next, we define some functions to create the visualisation from the assingment_history of coloring_problem1. The reader need not be concerned with the code that immediately follows, as it is just the usage of Matplotlib with IPython widgets. If you are interested in reading more about these, visit ipywidgets.readthedocs.io. We will be using the networkx library to generate graphs. These graphs can be treated as the graph that needs to be colored or as a constraint graph for this problem. If interested you can read a dead simple tutorial here. We start by importing the necessary libraries and initializing matplotlib inline.
End of explanation
"""
def make_update_step_function(graph, instru_csp):
def draw_graph(graph):
# create networkx graph
G=nx.Graph(graph)
# draw graph
pos = nx.spring_layout(G,k=0.15)
return (G, pos)
G, pos = draw_graph(graph)
def update_step(iteration):
# here iteration is the index of the assingment_history we want to visualize.
current = instru_csp.assingment_history[iteration]
# We convert the particular assingment to a default dict so that the color for nodes which
# have not been assigned defaults to black.
current = defaultdict(lambda: 'Black', current)
# Now we use colors in the list and default to black otherwise.
colors = [current[node] for node in G.node.keys()]
# Finally drawing the nodes.
nx.draw(G, pos, node_color=colors, node_size=500)
labels = {label:label for label in G.node}
# Labels shifted by offset so as to not overlap nodes.
label_pos = {key:[value[0], value[1]+0.03] for key, value in pos.items()}
nx.draw_networkx_labels(G, label_pos, labels, font_size=20)
# show graph
plt.show()
return update_step # <-- this is a function
def make_visualize(slider):
''' Takes an input a slider and returns
callback function for timer and animation
'''
def visualize_callback(Visualize, time_step):
if Visualize is True:
for i in range(slider.min, slider.max + 1):
slider.value = i
time.sleep(float(time_step))
return visualize_callback
"""
Explanation: The ipython widgets we will be using require the plots in the form of a step function such that there is a graph corresponding to each value. We define make_update_step_function, which returns such a function. It takes as inputs the neighbors/graph along with an instance of InstruCSP. This will become clearer with the example below. If this sounds confusing, do not worry: this is not part of the core material and our only goal is to help you visualize how the process works.
End of explanation
"""
step_func = make_update_step_function(neighbors, coloring_problem1)
"""
Explanation: Finally let us plot our problem. We first use the function above to obtain a step function.
End of explanation
"""
matplotlib.rcParams['figure.figsize'] = (18.0, 18.0)
"""
Explanation: Next we set the canvas size.
End of explanation
"""
import ipywidgets as widgets
from IPython.display import display
iteration_slider = widgets.IntSlider(min=0, max=len(coloring_problem1.assingment_history)-1, step=1, value=0)
w=widgets.interactive(step_func,iteration=iteration_slider)
display(w)
visualize_callback = make_visualize(iteration_slider)
visualize_button = widgets.ToggleButton(description = "Visualize", value = False)
time_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])
a = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select)
display(a)
"""
Explanation: Finally our plot using the ipywidget slider and matplotlib. You can move the slider to experiment and see the coloring change. It is also possible to move the slider using arrow keys or to jump to the value by directly editing the number with a double click. The Visualize Button will automatically animate the slider for you. The Extra Delay Box allows you to set a time delay in seconds, up to one second, for each time step.
End of explanation
"""
def label_queen_conflicts(assingment,grid):
''' Mark grid with queens that are under conflict. '''
for col, row in assingment.items(): # check each queen for conflict
row_conflicts = {temp_col:temp_row for temp_col,temp_row in assingment.items()
if temp_row == row and temp_col != col}
up_conflicts = {temp_col:temp_row for temp_col,temp_row in assingment.items()
if temp_row+temp_col == row+col and temp_col != col}
down_conflicts = {temp_col:temp_row for temp_col,temp_row in assingment.items()
if temp_row-temp_col == row-col and temp_col != col}
# Now marking the grid.
for col, row in row_conflicts.items():
grid[col][row] = 3
for col, row in up_conflicts.items():
grid[col][row] = 3
for col, row in down_conflicts.items():
grid[col][row] = 3
return grid
def make_plot_board_step_function(instru_csp):
    '''ipywidgets' interactive function supports a
    single parameter as input. This function
    creates and returns such a step function, taking
    the other parameters as input.
'''
n = len(instru_csp.variables)
def plot_board_step(iteration):
''' Add Queens to the Board.'''
data = instru_csp.assingment_history[iteration]
grid = [[(col+row+1)%2 for col in range(n)] for row in range(n)]
grid = label_queen_conflicts(data, grid) # Update grid with conflict labels.
# color map of fixed colors
cmap = matplotlib.colors.ListedColormap(['white','lightsteelblue','red'])
bounds=[0,1,2,3] # 0 for white 1 for black 2 onwards for conflict labels (red).
norm = matplotlib.colors.BoundaryNorm(bounds, cmap.N)
fig = plt.imshow(grid, interpolation='nearest', cmap = cmap,norm=norm)
plt.axis('off')
fig.axes.get_xaxis().set_visible(False)
fig.axes.get_yaxis().set_visible(False)
# Place the Queens Unicode Symbol
for col, row in data.items():
fig.axes.text(row, col, u"\u265B", va='center', ha='center', family='Dejavu Sans', fontsize=32)
plt.show()
return plot_board_step
"""
Explanation: NQueens Visualization
Just like for the graph coloring problem, we will start by defining a few helper functions to help us visualize the assignments as they evolve over time. make_plot_board_step_function behaves similarly to the make_update_step_function introduced earlier. It initializes a chess board in the form of a 2D grid with alternating 0s and 1s. This is used by the plot_board_step function, which draws the board using matplotlib and adds queens to it. This function also calls label_queen_conflicts, which modifies the grid by placing a 3 in positions where there is a conflict.
End of explanation
"""
twelve_queens_csp = NQueensCSP(12)
backtracking_instru_queen = make_instru(twelve_queens_csp)
result = backtracking_search(backtracking_instru_queen)
backtrack_queen_step = make_plot_board_step_function(backtracking_instru_queen) # Step Function for Widgets
"""
Explanation: Now let us visualize a solution obtained via backtracking. We make use of the previously defined make_instru function for keeping a history of steps.
End of explanation
"""
matplotlib.rcParams['figure.figsize'] = (8.0, 8.0)
matplotlib.rcParams['font.family'].append(u'Dejavu Sans')
iteration_slider = widgets.IntSlider(min=0, max=len(backtracking_instru_queen.assingment_history)-1, step=0, value=0)
w=widgets.interactive(backtrack_queen_step,iteration=iteration_slider)
display(w)
visualize_callback = make_visualize(iteration_slider)
visualize_button = widgets.ToggleButton(description = "Visualize", value = False)
time_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])
a = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select)
display(a)
"""
Explanation: Now finally we set some matplotlib parameters to adjust how our plot will look. The font is necessary because the Black Queen Unicode character is not a part of all fonts. You can move the slider to experiment and observe how the queens are assigned. It is also possible to move the slider using arrow keys or to jump to the value by directly editing the number with a double click. The Visualize Button will automatically animate the slider for you. The Extra Delay Box allows you to set a time delay in seconds, up to one second, for each time step.
End of explanation
"""
conflicts_instru_queen = make_instru(twelve_queens_csp)
result = min_conflicts(conflicts_instru_queen)
conflicts_step = make_plot_board_step_function(conflicts_instru_queen)
"""
Explanation: Now let us finally repeat the above steps for the min_conflicts solution.
End of explanation
"""
iteration_slider = widgets.IntSlider(min=0, max=len(conflicts_instru_queen.assingment_history)-1, step=0, value=0)
w=widgets.interactive(conflicts_step,iteration=iteration_slider)
display(w)
visualize_callback = make_visualize(iteration_slider)
visualize_button = widgets.ToggleButton(description = "Visualize", value = False)
time_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])
a = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select)
display(a)
"""
Explanation: The visualization has the same features as the one above. But here it also highlights conflicts by labeling the conflicted queens with a red background.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/mohc/cmip6/models/sandbox-1/ocean.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'sandbox-1', 'ocean')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: MOHC
Source ID: SANDBOX-1
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:15
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
"""
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
"""
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
"""
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
"""
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different from that of active tracers ? If so, describe.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
"""
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e. Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean*
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient, (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean*
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient, (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean*
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean*
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e. is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient, (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean*
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e. is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient, (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embeded in the ocean model (instead of levitating) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinction depths for sunlight penetration scheme (if applicable).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation
"""
|
JasonNK/udacity-dlnd | intro-to-rnns/Anna_KaRNNa.ipynb | mit | import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
"""
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
"""
with open('anna.txt', 'r') as f:
text=f.read()
vocab = sorted(set(text))
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
"""
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
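A quick round-trip check of the two dictionaries (illustrative only; it reuses the vocab_to_int and int_to_vocab mappings built above):
```python
sample = "Happy"
ints = [vocab_to_int[c] for c in sample]
assert ''.join(int_to_vocab[i] for i in ints) == sample
```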
End of explanation
"""
text[:100]
"""
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
"""
encoded[:100]
"""
Explanation: And we can see the characters encoded as integers.
End of explanation
"""
len(vocab)
"""
Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
End of explanation
"""
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the number of characters per batch and number of batches we can make
characters_per_batch = n_seqs * n_steps
n_batches = len(arr)//characters_per_batch
# Keep only enough characters to make full batches
arr = arr[:n_batches * characters_per_batch]
# Reshape into n_seqs rows
arr = arr.reshape((n_seqs, -1))
for n in range(0, arr.shape[1], n_steps):
# The features
x = arr[:, n:n+n_steps]
# print("x: ", x)
# The targets, shifted by one
y = np.zeros_like(x)
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
# print("y:", y)
yield x, y
"""
Explanation: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
<img src="assets/[email protected]" width=500px>
<br>
We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.
The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep.
After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.
Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this:
```python
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
```
where x is the input batch and y is the target batch.
The way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
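To make the bookkeeping concrete, here is a small worked example with toy numbers (not the real corpus): a 1000-element array with 8 sequences of 50 steps per batch gives 8*50 = 400 characters per batch, so only 2 full batches (800 characters) are kept.
```python
arr = np.arange(1000)
n_seqs, n_steps = 8, 50
characters_per_batch = n_seqs * n_steps          # 400
n_batches = len(arr) // characters_per_batch     # 2
arr = arr[:n_batches * characters_per_batch].reshape((n_seqs, -1))
print(arr.shape)                                 # (8, 100), i.e. 2 windows of 50 steps each
```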
End of explanation
"""
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
"""
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
"""
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, keep_prob
"""
Explanation: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
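A programmatic version of that check (a small sketch using the x and y arrays printed above):
```python
assert np.array_equal(y[:, :-1], x[:, 1:])   # targets are the inputs shifted left by one
assert np.array_equal(y[:, -1], x[:, 0])     # last target wraps around to the first input
```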
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob.
End of explanation
"""
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
def build_cell(lstm_size, keep_prob):
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
return drop
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([build_cell(lstm_size, keep_prob) for _ in range(num_layers)])
initial_state = cell.zero_state(batch_size, tf.float32)
return cell, initial_state
"""
Explanation: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
```python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
```
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
```python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
```
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this
```python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
```
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like
```python
def build_cell(num_units, keep_prob):
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
return drop
tf.contrib.rnn.MultiRNNCell([build_cell(num_units, keep_prob) for _ in range(num_layers)])
```
Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
```python
initial_state = cell.zero_state(batch_size, tf.float32)
```
Below, we implement the build_lstm function to create these LSTM cells and the initial state.
End of explanation
"""
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
x: Input tensor
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# That is, the shape should be batch_size*num_steps rows by lstm_size columns
seq_output = tf.concat(lstm_output, axis=1)
x = tf.reshape(seq_output, [-1, in_size])
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(out_size))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits = tf.matmul(x, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
out = tf.nn.softmax(logits, name='predictions')
return out, logits
"""
Explanation: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
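As a rough shape check with toy dimensions (numpy stand-ins rather than TensorFlow tensors, just to follow the bookkeeping):
```python
N, M, L, C = 10, 50, 128, 83                # batch size, steps, LSTM units, classes
lstm_output = np.zeros((N, M, L))
rows = lstm_output.reshape(-1, L)           # (500, 128): one row per sequence step
logits = rows @ np.zeros((L, C))            # (500, 83): one row of logits per step
print(rows.shape, logits.shape)
```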
End of explanation
"""
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
print(logits.shape, targets.shape, lstm_size, num_classes) # (10000, 83) (100, 100) 512 83
# One-hot encode targets and reshape to match logits, one row per batch_size per step
y_one_hot = tf.one_hot(targets, num_classes)
print(y_one_hot.shape) # (100, 100, 83)
y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
    print(y_reshaped.shape)  # (10000, 83)
# Softmax cross entropy loss
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
    loss = tf.reduce_mean(loss)  # average over all sequence steps to get a scalar loss
return loss
"""
Explanation: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
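For intuition, the per-character softmax cross-entropy is just the negative log-probability the network assigns to the correct character; the toy numpy calculation below mirrors what the TensorFlow op computes for a single row (made-up numbers):
```python
logits_row = np.array([2.0, 0.5, -1.0])
target = 0                                         # index of the true character
probs = np.exp(logits_row) / np.exp(logits_row).sum()
loss_row = -np.log(probs[target])                  # about 0.24 for these numbers
```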
End of explanation
"""
def build_optimizer(loss, learning_rate, grad_clip):
''' Build optmizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
"""
Explanation: Optimizer
Here we build the optimizer. Normal RNNs have issues with exploding and vanishing gradients. LSTMs fix the vanishing problem, but the gradients can still grow without bound. To fix this, we clip the gradients: if the overall norm of the gradients exceeds some threshold, we rescale them so the norm equals the threshold. This ensures the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
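Concretely, tf.clip_by_global_norm rescales every gradient by the same factor grad_clip / max(global_norm, grad_clip), so the update direction is preserved. A small numpy sketch of that rule:
```python
grads = [np.array([3.0, 4.0]), np.array([12.0])]            # global norm = 13
grad_clip = 5.0
global_norm = np.sqrt(sum((g**2).sum() for g in grads))
scale = grad_clip / max(global_norm, grad_clip)
clipped = [g * scale for g in grads]                        # new global norm = 5
```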
End of explanation
"""
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
else:
batch_size, num_steps = batch_size, num_steps
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
# Build the LSTM cell
cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot = tf.one_hot(self.inputs, num_classes)
# Run each sequence step through the RNN and collect the outputs
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
# Loss and optimizer (with gradient clipping)
self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
"""
Explanation: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
End of explanation
"""
batch_size = 100 # Sequences per batch
num_steps = 100 # Number of sequence steps per batch
lstm_size = 512 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.001 # Learning rate
keep_prob = 0.5 # Dropout keep probability
"""
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
Best models strategy
The winning strategy for obtaining very good models (if you have the compute time) is to always err on the side of making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0 and 1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
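If you want a rough sense of the parameter count mentioned above without running the code, here is a back-of-the-envelope sketch. It assumes TensorFlow's BasicLSTMCell layout (one [inputs + units, 4 * units] kernel plus a 4 * units bias per layer) and ignores the final softmax layer, so treat it as an estimate only:
```python
def approx_lstm_params(num_classes, lstm_size, num_layers):
    total = 0
    input_dim = num_classes                  # the first layer sees one-hot characters
    for _ in range(num_layers):
        total += (input_dim + lstm_size) * 4 * lstm_size + 4 * lstm_size
        input_dim = lstm_size                # deeper layers see the previous layer's output
    return total

print(approx_lstm_params(83, 512, 2))        # roughly 3.3 million for the settings above
```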
End of explanation
"""
epochs = 20
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
"""
Explanation: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
End of explanation
"""
tf.train.get_checkpoint_state('checkpoints')
"""
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
"""
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
"""
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
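A toy run of that top-N idea with made-up probabilities (numpy only), mirroring what pick_top_n does:
```python
preds = np.array([[0.5, 0.3, 0.1, 0.06, 0.04]])
p = np.squeeze(preds).copy()
p[np.argsort(p)[:-2]] = 0          # keep only the top 2 characters
p = p / p.sum()                    # renormalise to [0.625, 0.375, 0, 0, 0]
c = np.random.choice(5, p=p)       # sample from the truncated distribution
```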
End of explanation
"""
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
"""
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation
"""
|
eford/rebound | ipython_examples/SaturnsRings.ipynb | gpl-3.0 | import rebound
import numpy as np
sim = rebound.Simulation()
"""
Explanation: Simulating Saturn's rings
In this example, we will simulate a small patch of Saturn's rings. The simulation is similar to the C example in examples/shearing_sheet.
We first import REBOUND and numpy, then create an instance of the Simulation class to work with.
End of explanation
"""
OMEGA = 0.00013143527 # [1/s]
"""
Explanation: Next up, setting up several constants. We will be simulating a shearing sheet, a box with shear-periodic boundary conditions. This is a local approximation which assumes that the epicyclic frequency $\Omega$ is the same for all particles.
We work with a value of $\Omega$ that corresponds to a semi-major axis of $a\sim 130000$ km.
End of explanation
"""
sim.ri_sei.OMEGA = OMEGA
"""
Explanation: Next, we need to let REBOUND know about $\Omega$. Within REBOUND $\Omega$ is used by the integrator SEI, the Symplectic Epicycle Integrator (see Rein and Tremaine 2012).
End of explanation
"""
surface_density = 400. # kg/m^2
particle_density = 400. # kg/m^3
"""
Explanation: Finally, let us define the surface density of the ring and the particle density.
End of explanation
"""
sim.G = 6.67428e-11 # N m^2 / kg^2
"""
Explanation: The gravitational constant in SI units is
End of explanation
"""
sim.dt = 1e-3*2.*np.pi/OMEGA
"""
Explanation: We choose a timestep of 1/1000th of the orbital period.
End of explanation
"""
sim.softening = 0.2 # [m]
"""
Explanation: We enable gravitational softening to smear out any potential numerical artefacts at very small scales.
End of explanation
"""
boxsize = 200. # [m]
sim.configure_box(boxsize)
"""
Explanation: Next up, we configure the simulation box. By default REBOUND uses no boundary conditions, but here we have shear-periodic boundaries and a finite simulation domain, so we need to let REBOUND know about the simulation boxsize (note that it is significantly smaller than $a$, so our local approximation is very good). In this example we'll work in SI units.
End of explanation
"""
sim.configure_ghostboxes(2,2,0)
"""
Explanation: Because we have shear-periodic boundary conditions, we use ghost boxes to simulate the gravity of neighbouring ring patches. The more ghost boxes we use, the smoother the gravitational force across the boundary. Here, two layers of ghost boxes in the x and y direction are enough (this is a total of 24 ghost boxes). We don't need ghost boxes in the z direction because a ring is a two-dimensional system.
End of explanation
"""
sim.integrator = "sei"
sim.boundary = "shear"
sim.gravity = "tree"
sim.collision = "tree"
"""
Explanation: We can now setup which REBOUND modules we want to use for our simulation. Besides the SEI integrator and the shear-periodic boundary conditions mentioned above, we select the tree modules for both gravity and collisions. This speeds up the code from $O(N^2)$ to $O(N \log(N))$ for large numbers of particles $N$.
End of explanation
"""
def cor_bridges(r, v):
eps = 0.32*pow(abs(v)*100.,-0.234)
if eps>1.:
eps=1.
if eps<0.:
eps=0.
return eps
sim.coefficient_of_restitution = cor_bridges
"""
Explanation: When two ring particles collide, they lose energy during the bounce. Here we use a velocity-dependent Bridges et al. coefficient of restitution. It is implemented as a Python function (a C implementation would be faster!). We let REBOUND know which function we want to use by setting the coefficient_of_restitution function pointer in the simulation instance.
End of explanation
"""
def powerlaw(slope, min_v, max_v):
y = np.random.uniform()
pow_max = pow(max_v, slope+1.)
pow_min = pow(min_v, slope+1.)
return pow((pow_max-pow_min)*y + pow_min, 1./(slope+1.))
"""
Explanation: To initialize the particles, we will draw random numbers from a power law distribution.
End of explanation
"""
total_mass = 0.
while total_mass < surface_density*(boxsize**2):
radius = powerlaw(slope=-3, min_v=1, max_v=4) # [m]
mass = particle_density*4./3.*np.pi*(radius**3)
x = np.random.uniform(low=-boxsize/2., high=boxsize/2.)
sim.add(
m=mass,
r=radius,
x=x,
y=np.random.uniform(low=-boxsize/2., high=boxsize/2.),
z=np.random.normal(),
vx = 0.,
vy = -3./2.*x*OMEGA,
vz = 0.)
total_mass += mass
"""
Explanation: Now we can finally add particles to REBOUND. Note that we initialize the particles so that they initially have no velocity relative to the mean shear flow.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.patches as patches
def plotParticles(sim):
fig = plt.figure(figsize=(8,8))
ax = plt.subplot(111,aspect='equal')
ax.set_ylabel("radial coordinate [m]")
ax.set_xlabel("azimuthal coordinate [m]")
ax.set_ylim(-boxsize/2.,boxsize/2.)
ax.set_xlim(-boxsize/2.,boxsize/2.)
for i, p in enumerate(sim.particles):
circ = patches.Circle((p.y, p.x), p.r, facecolor='darkgray', edgecolor='black')
ax.add_patch(circ)
plotParticles(sim)
"""
Explanation: To see what is going on in our simulation, we create a function to plot the current positions of particles and call it once to visualise the initial conditions.
End of explanation
"""
sim.integrate(2.*np.pi/OMEGA)
"""
Explanation: We now integrate for one orbital period $P=2\pi/\Omega$.
End of explanation
"""
plotParticles(sim)
"""
Explanation: The integration takes a few seconds, then we can visualise the final particle positions.
End of explanation
"""
|
jdamiani27/DataSciUF-Tutorial-Student | DataSciUF - Python II.ipynb | mit | # Function to sum up numbers in a dictionary
"""
Explanation: iPython Magics
iPython does a lot of neat things. The % and %% symbols are used to indicate a line that is not a Python statement but a command for iPython to interpret. These commands are called magics and can change the behavior of iPython, interact with the operating system, change the display of items, etc.
One trick is to tell iPython to interpret cells using different interpreters besides Python. This lets you embed many kinds of code in an iPython notebook.
We're going to use this trick to install a Python module we'll need later as part of the astronomy workbook.
First, you need a separate cell.
Python Review
Let's start with a warmup excercise to refresh what you learned in Python I.
Write a function that takes one argument, a dictionary. The dictionary can contain different types of objects. Use a for loop to look at the objects in the dictionary and if the object is a number, add that number to a running total. Return the total at the end of the function.
Call your function with a test dictionary that contains at least one string, one float, and one integer.
Here are some reminders about the syntax of Python:
Dictionaries are created with curly braces {}
Use print to look at variables
Functions definitions start with the key word def
Code blocks start with a colon : and continue as long as lines are indented
For loops iterate over collections like lists and dictionaries
If statements only execute when a condition is true
You can determine the type of a variable with the isinstance() function, types include int and float
Use return to return a value from a function
End of explanation
"""
# def download file
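# A hedged sketch of the download helper described below; the name download_file
# and the use of the requests library are assumptions, not the tutorial's own code.
import requests

def download_file(url, local_filename):
    """Download the content at url and save it to local_filename."""
    response = requests.get(url)
    with open(local_filename, 'wb') as outfile:
        outfile.write(response.content)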
"""
Explanation: Data Science Tutorial
Now that we've covered some Python basics, we will begin a tutorial going through many tasks a data scientist may perform. We will obtain real world data and go through the process of auditing, analyzing, visualing, and building classifiers from the data.
We will use a database of breast cancer data obtained from the University of Wisconsin Hospitals, Madison from Dr. William H. Wolberg. The data is a collection of samples from Dr. Wolberg's clinical cases with attributes pertaining to tumors and a class labeling the sample as benign or malignant.
| Attribute | Domain |
|--------------------------------|---------------------------------|
| 1. Sample code number | id number |
| 2. Clump Thickness | 1 - 10 |
| 3. Uniformity of Cell Size | 1 - 10 |
| 4. Uniformity of Cell Shape | 1 - 10 |
| 5. Marginal Adhesion | 1 - 10 |
| 6. Single Epithelial Cell Size | 1 - 10 |
| 7. Bare Nuclei | 1 - 10 |
| 8. Bland Chromatin | 1 - 10 |
| 9. Normal Nucleoli | 1 - 10 |
| 10. Mitoses | 1 - 10 |
| 11. Class | (2 for benign, 4 for malignant) |
For more information on this data set:
https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29
Obtaining the Data
Lets begin by programmatically obtaining the data. Here I'll define a function we can use to make HTTP requests and download the data
End of explanation
"""
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.data'
filename = 'breast-cancer-wisconsin.csv'
"""
Explanation: Now we'll specify the url of the file and the file name we will save to
End of explanation
"""
# execute download file
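# Sketch: call the helper sketched above with the url and filename from the previous cell.
download_file(url, filename)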
"""
Explanation: And make a call to <code>download_file</code>
End of explanation
"""
# pandas, read
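# Sketch of the first (naive) read described below; the alias pd and the variable
# name cancer_data match how they are used in later cells of this notebook.
import pandas as pd

cancer_data = pd.read_csv(filename)
cancer_data.head()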
"""
Explanation: Now this might seem like overkill for downloading a single, small csv file, but we can use this same function to access countless APIs available on the World Wide Web by building an API request in the url.
Wrangling the Data
Now that we have some data, lets get it into a useful form. For this task we will use a package called pandas. pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for Python. The most fundamental data structure in pandas is the dataframe, which is similar to the data.frame data structure found in the R statistical programming language.
For more information: http://pandas.pydata.org
pandas dataframes are a 2-dimensional labeled data structures with columns of potentially different types. Dataframes can be thought of as similar to a spreadsheet or SQL table.
There are numerous ways to build a dataframe with pandas. Since we have already attained a csv file, we can use a parser built into pandas called <code>read_csv</code> which will read the contents of a csv file directly into a data frame.
For more information: http://pandas.pydata.org/pandas-docs/dev/generated/pandas.io.parsers.read_csv.html
End of explanation
"""
# \ allows multi line wrapping
cancer_header = [ \
'sample_code_number', \
'clump_thickness', \
'uniformity_cell_size', \
'uniformity_cell_shape', \
'marginal_adhesion', \
'single_epithelial_cell_size', \
'bare_nuclei', \
'bland_chromatin', \
'normal_nucleoli', \
'mitoses', \
'class']
"""
Explanation: Whoops, looks like our csv file did not contain a header row. <code>read_csv</code> assumes the first row of the csv is the header by default.
Lets check out the file located here: https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.names
This contains information about the data set including the names of the attributes.
Lets create a list of these attribute names to use when reading the csv file
End of explanation
"""
# read csv
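# Sketch mirroring the full read cell that appears later in this notebook,
# this time passing the attribute names and header=None.
cancer_data = pd.read_csv(filename, header=None, names=cancer_header)
cancer_data.head()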
"""
Explanation: Lets try the import again, this time specifying the names. When specifying names, the <code>read_csv</code> function requires us to set the <code>header</code> row number to <code>None</code>
End of explanation
"""
# describe data
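# Sketch: summary statistics for a single column.
cancer_data['clump_thickness'].describe()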
"""
Explanation: Lets take a look at some simple statistics for the clump_thickness column
End of explanation
"""
# describe data
"""
Explanation: Referring to the documentation link above about the data, the count, range of values (min = 1, max = 10), and data type (dtype = float64) look correct.
Lets take a look at another column, this time bare_nuclei
End of explanation
"""
# unique values
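# Sketch: list the distinct values present in the problematic column.
cancer_data['bare_nuclei'].unique()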
"""
Explanation: Well at least the count is correct. We were expecting no more than 10 unique values and now the data type is an object.
Whats up with our data?
We have arrived at arguably the most important part of performing data science: dealing with messy data. One of most important tools in a data scientist's toolbox is the ability to audit, clean, and reshape data. The real world is full of messy data and your sources may not always have data in the exact format you desire.
In this case we are working with csv data, which is a relatively straightforward format, but this will not always be the case when performing real world data science. Data comes in all varieties from csv all the way to something as unstructured as a collection of emails or documents. A data scientist must be versed in a wide variety of technologies and methodologies in order to be successful.
Now, lets do a little bit of digging into why we are not getting a numeric pandas column
End of explanation
"""
# convert to numeric
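# Sketch mirroring the conversion used later in this notebook (convert_objects was
# the pandas API at the time; in modern pandas pd.to_numeric(..., errors='coerce')
# would be the equivalent).
cancer_data['bare_nuclei'] = cancer_data['bare_nuclei'].convert_objects(convert_numeric=True)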
"""
Explanation: Using <code>unique</code> we can see that '?' is one of the distinct values that appears in this series. Looking again at the documentation for this data set, we find the following:
Missing attribute values: 16
There are 16 instances in Groups 1 to 6 that contain a single missing
(i.e., unavailable) attribute value, now denoted by "?".
It was so nice of them to tell us to expect these missing values, but as a data scientist that will almost never be the case. Lets see what we can do with these missing values.
End of explanation
"""
cancer_data["bare_nuclei"].unique()
"""
Explanation: Here we have attempted to convert the bare_nuclei series to a numeric type. Lets see what the unique values are now.
End of explanation
"""
# fillna
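# Sketch: impute the missing values in this column with the rounded column mean,
# following the strategy described below.
cancer_data['bare_nuclei'].fillna(cancer_data['bare_nuclei'].mean().round(), inplace=True)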
"""
Explanation: The decimal point after each number means that it is an integer value being represented by a floating point number. Now instead of our pesky '?' we have <code>nan</code> (not a number). <code>nan</code> is a construct used by pandas to represent the absence of value. It is a data type that comes from the package numpy, used internally by pandas, and is not part of the standard Python library.
Now that we have <code>nan</code> values in place of '?', we can use some nice features in pandas to deal with these missing values.
What we are about to do is what is called "imputing" or providing a replacement for missing values so the data set becomes easier to work with. There are a number of strategies for imputing missing values, all with their own pitfalls. In general, imputation introduces some degree of bias to the data, so the imputation strategy taken should be in an attempt to minimize that bias.
Here, we will simply use the mean of all of the non-nan values in the series as a replacement. Since we already know that the data is integer in possible values, we will round the mean to the nearest whole number.
End of explanation
"""
cancer_data.mean().round()
"""
Explanation: <code>fillna</code> is a dataframe function that replaces all nan values with either a scalar value, a series of values with the same indices as found in the dataframe, or a dataframe that is indexed by the columns of the target dataframe.
<code>cancer_data.mean().round()</code> will take the mean of each column (this computation ignores the currently present nan values), then round, and return a series indexed by the columns of the original dataframe:
End of explanation
"""
cancer_data = pd.read_csv('breast-cancer-wisconsin.csv', header=None, names=cancer_header)
cancer_data = cancer_data.convert_objects(convert_numeric=True)
cancer_data.fillna(cancer_data.mean().round(), inplace=True)
cancer_data["bare_nuclei"].describe()
cancer_data["bare_nuclei"].unique()
"""
Explanation: <code>inplace=True</code> allows us to make this modification directly on the dataframe, without having to do any assignment.
Now that we have figured out how to impute these missing values in a single column, lets start over and quickly apply this technique to the entire dataframe.
End of explanation
"""
# describe
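# Sketch: summary statistics for every column at once.
cancer_data.describe()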
"""
Explanation: Structurally, Pandas dataframes are a collection of Series objects sharing a common index. In general, the Series object and Dataframe object share a large number of functions with some behavioral differences. In other words, whatever computation you can do on a single column can generally be applied to the entire dataframe.
Now we can use the dataframe version of <code>describe</code> to get an overview of all of our data
End of explanation
"""
# The following line is NOT Python code, but a special syntax for enabling inline plotting in IPython
%matplotlib inline
from ggplot import *
import warnings
# ggplot usage of pandas throws a future warning
warnings.filterwarnings('ignore')
"""
Explanation: Visualizing the Data
Another important tool in the data scientist's toolbox is the ability to create visualizations from data. Visualizing data is often the most logical place to start getting a deeper intuition of the data. This intuition will shape and drive your analysis.
Even more important than visualizing data for your own personal benefit, it is often the job of the data scientist to use the data to tell a story. Creating illustrative visuals that succinctly convey an idea are the best way to tell that story, especially to stakeholders with less technical skillsets.
Here we will be using a Python package called ggplot (https://ggplot.yhathq.com). The ggplot package is an attempt to bring visuals following the guidelines outlined in the grammar of graphics (http://vita.had.co.nz/papers/layered-grammar.html) to Python. It is based on and intended to mimic the features of the ggplot2 library found in R. Additionally, ggplot is designed to work with Pandas dataframes, making things nice and simple.
We'll start by doing a bit of setup
End of explanation
"""
# ggplot cancer_data
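# Sketch of a first ggplot histogram; the annotated version with a mean line
# appears a couple of cells below, so this is just the basic pattern.
plt = ggplot(aes(x = 'clump_thickness'), data = cancer_data) + \
    geom_histogram(binwidth = 1)
print plt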
"""
Explanation: So we enabled plotting in IPython and imported everything from the ggplot package. Now we'll create a plot and then break down the components
End of explanation
"""
plt = ggplot(aes(x = 'clump_thickness'), data = cancer_data) + \
geom_histogram(binwidth = 1, fill = 'steelblue') + \
geom_vline(xintercept = [cancer_data['clump_thickness'].mean()], linetype='dashed')
print plt
"""
Explanation: A plot begins with the <code>ggplot</code> function. Here, we pass in the cancer_data pandas dataframe and a special function called <code>aes</code> (short for aesthetic). The values provided to <code>aes</code> change depending on which type of plot is being used. Here we are going to make a histogram from the clump_thickness column in cancer_data, so that column name needs to be passed as the x parameter to <code>aes</code>.
The grammar of graphics is based off of a concept of "geoms" (short for geometric objects). These geoms provide granular control of the plot and are progressively added to the base call to <code>ggplot</code> with + syntax.
Lets say we wanted to show the mean clump_thickness on this plot. We could do something like the following
End of explanation
"""
# scatter plot
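# Sketch of the basic scatter plot described below, using the same two columns
# as the later annotated plots in this notebook.
plt = ggplot(aes(x = 'uniformity_cell_shape', y = 'bare_nuclei'), data = cancer_data) + \
    geom_point()
print plt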
"""
Explanation: As you can see, each geom has its own set of parameters specific to the appearance of that geom (also called aesthetics).
Lets try a scatter plot to get some multi-variable action
End of explanation
"""
# scatter with jitter
"""
Explanation: Sometimes when working with integer data, or data that takes on a limited range of values, it is easier to visualize the plot with added jitter to the points. We can do that by adding an aesthetic to <code>geom_point</code>.
End of explanation
"""
# colored scatter
"""
Explanation: With a simple aesthetic addition, we can see how these two variables play into our cancer classification
End of explanation
"""
plt = ggplot(aes(x = 'uniformity_cell_shape', y = 'bare_nuclei', color = 'class'), data = cancer_data) + \
geom_point(position = 'jitter') + \
ggtitle("The Effect of the Bare Nuclei and Cell Shape Uniformity on Classification") + \
ylab("Amount of Bare Nuclei") + \
xlab("Uniformity in Cell shape")
print plt
"""
Explanation: By adding <code>color = 'class'</code> as a parameter to the aes function, we now give a color to each unique value found in that column and automatically get a legend. Remember, 2 is benign and 4 is malignant.
We can also do things such as add a title or change the axis labeling with geoms
End of explanation
"""
plt = ggplot(aes(x = 'uniformity_cell_shape', y = 'bare_nuclei'), data = cancer_data) + \
geom_point(position = 'jitter') + \
ggtitle("The Effect of the Bare Nuclei and Cell Shape Uniformity on Classification") + \
facet_grid('class')
print plt
"""
Explanation: There is definitely some patterning going on in that plot.
A slightly different way to convey this idea is to use faceting. Faceting is the creation of multiple related plots arranged by the values of a given faceted variable
End of explanation
"""
plt = ggplot(aes(x = 'uniformity_cell_shape', y = 'bare_nuclei', color = 'class'), data = cancer_data) + \
geom_point(position = 'jitter') + \
ggtitle("The Effect of the Bare Nuclei and Cell Shape Uniformity on Classification") + \
facet_grid('clump_thickness', 'marginal_adhesion')
print plt
"""
Explanation: Rather than set the color equal to the class, we have created two plots based off of the class. With a facet, we can get very detailed. Lets through some more variables into the mix
End of explanation
"""
# cancer features
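# Sketch: keep only the class column and the two features used later in this
# notebook (uniformity_cell_shape and bare_nuclei); the variable names
# cancer_features and cancer_class are assumptions.
cancer_features = cancer_data[['uniformity_cell_shape', 'bare_nuclei']]
cancer_class = cancer_data['class']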
"""
Explanation: Unfortunately, legends for faceting are not yet implemented in the Python ggplot package. In this example we faceted on the x-axis with clump_thickness and along the y-axis with marginal_adhesion, then created 100 plots of uniformity_cell_shape vs. bare_nuclei effect on class.
I highly encourage you to check out https://ggplot.yhathq.com/docs/index.html to see all of the available geoms. The best way to learn is to play with and visualize the data with many different plots and aesthetics.
Machine Learning
So now that we've acquired, audited, cleaned, and visualized our data, we have arrived at machine learning. By formal definition from Tom Mitchell:
A computer program is said to learn from experience E with
respect to some task T and some performance measure P if its performance
on T as measured by P improves with experience E.
Okay, thats a bit ridiculous. Essentially machine learning is the science of building algorithms that learn from data in order make predictions about the data. There are two main classes of machine learning: supervised and unsupervised.
In supervised learning, an algorithm will use the features of the data given to make a prediction about a known label. For example, we will use supervised learning here to take features such as bare_nuclei and uniformity_cell_shape and predict a tumor class (benign or malignant). This type of machine learning is called supervised because the class labels (benign or malignant) are a known quantity during learning, so we are supervising the algorithm with the "correct" answer.
In unsupervised learning, an algorithm will use the features of the data to discover what types of labels there could be. The "correct" answer is not known.
In this session we will be mostly focused on supervised learning as we attempt to predict whether a tumor is benign or malignant. We will also be focused on doing some practical machine learning, and will glaze over the algorithmic details.
The first thing we have to do is to extract the class labels and features from <code>cancer_data</code> and store them as separate arrays. In our first classifier we will only choose two features from <code>cancer_data</code> to keep things simple
End of explanation
"""
# labels and features
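# Sketch: extract plain numpy arrays with values; the names features and labels
# match how they are used in the train_test_split cell below.
features = cancer_features.values
labels = cancer_class.values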
"""
Explanation: Here we call <code>values</code> on the dataframe to extract the values stored in the dataframe as an array of numpy arrays with the same dimensions as our subsetted dataframe. Numpy is a powerful, high performance scientific computing package that implements arrays. It is used internally by pandas. We will use <code>labels</code> and <code>features</code> later on in our machine learning classifier
End of explanation
"""
from sklearn.cross_validation import train_test_split
features_train, features_test, labels_train, labels_test = train_test_split(features,
labels,
test_size = 0.3,
random_state = 42)
"""
Explanation: An important concept in machine learning is to split the data set into training data and testing data. The machine learning algorithm will use the subset of training data to build a classifier to predict labels. We then test the accuracy of this classifier on the subset of testing data. This is done in order to prevent overfitting the classifier to one given set of data.
Overfitting is a major concern in the design of machine learning algorithms. Conceptually, overfitting is when a classifier is really good at predicting the data used to build it, but isn't robust or general enough to predict new, yet unseen data all that well.
To perform machine learning, we will use a package called sci-kit learn (sklearn for short). The sklearn cross_validation module contains a function called <code>train_test_split</code> that will take in features and labels, and randomly select values into the training and testing subsets
End of explanation
"""
# import decision trees
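# Sketch of the import described below.
from sklearn.tree import DecisionTreeClassifier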
"""
Explanation: For this example, we will build a Decision Tree Classifier. The goal of a decision tree is to create a prediction by outlining a simple tree of decision rules. These rules are built from the training data by slicing the data on simple boundaries and trying to minimize the prediction error of that boundary. More details on decision trees can be found here: http://scikit-learn.org/stable/modules/tree.html
The first step is to import the classifier from the <code>sklearn.tree</code> module.
End of explanation
"""
# create decision tree
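# Sketch: instantiate the classifier; the variable name clf is an assumption.
clf = DecisionTreeClassifier()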
"""
Explanation: Next, we create a variable to store the classifier
End of explanation
"""
#fit classifier
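# Sketch: fit the tree on the training features and labels.
clf.fit(features_train, labels_train)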
"""
Explanation: Then we have to fit the classifier to the training data. Both the training features (uniformity_cell_shape and bare_nuclei) and the labels (benign vs. malignant) are passed to the fit function
End of explanation
"""
# test accuracy
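# Sketch: accuracy of the fitted tree on the held-out test data.
clf.score(features_test, labels_test)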
"""
Explanation: The classifier is now ready to make some predictions. We can use the score function to see how accurate the classifier is on the test data. The score function will take the data in <code>features_test</code>, make a prediction of benign or malignant based on the decision tree that was fit to the training data, and compare that prediction to the true values in <code>labels_test</code>
End of explanation
"""
# plot decisions
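# A hedged sketch of a decision-boundary plot; the tutorial's own helper is not
# included in this student version, so the function below is an illustrative
# stand-in (it assumes the clf, features_test and labels_test names used above).
import numpy as np
import matplotlib.pyplot as mplt

def plot_decision_boundary(classifier, features, labels):
    x_min, x_max = features[:, 0].min() - 1, features[:, 0].max() + 1
    y_min, y_max = features[:, 1].min() - 1, features[:, 1].max() + 1
    xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.1),
                         np.arange(y_min, y_max, 0.1))
    grid_predictions = classifier.predict(np.c_[xx.ravel(), yy.ravel()])
    mplt.contourf(xx, yy, grid_predictions.reshape(xx.shape), alpha=0.3)
    mplt.scatter(features[:, 0], features[:, 1], c=labels)
    mplt.xlabel('uniformity_cell_shape')
    mplt.ylabel('bare_nuclei')
    mplt.show()

plot_decision_boundary(clf, features_test, labels_test)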
"""
Explanation: Nearly all classifiers, decision trees included, will have parameters that can be tuned to build a more accurate model. Without any parameter tuning and using just two features we have made a pretty accurate prediction. Good job!
To get a better idea of what is going on, I have included a helper function to plot our test data along with the decision boundary
End of explanation
"""
|
rafburzy/Statistics | 06_KNN.ipynb | mit | # importing all required modules
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import numpy as np
"""
Explanation: K Nearest Neighbors method used on Iris dataset
End of explanation
"""
# importing datasets
from sklearn import datasets
iris = datasets.load_iris()
"""
Explanation: Scikit learn contains a database of pre-loaded datasets that can be accessed in the following way:
End of explanation
"""
type(iris)
"""
Explanation: However the type is not a typical Pandas dataframe or Numpy array
End of explanation
"""
iris.keys()
# displaying the set first ten rows
iris.data[:10]
# assigning data and target to X and y variables that will be used in machine learning
X = iris.data
y = iris.target
y
"""
Explanation: The content of the data set can be accessed in the following way:
End of explanation
"""
iris.target_names
"""
Explanation: 0 = iris-setosa <br>
1 = iris-versicolor <br>
2 = iris-virginica
End of explanation
"""
df = pd.DataFrame(X, columns=iris.feature_names)
df.head()
df.info()
"""
Explanation: In order to facilitate the display of the data, a data frame can be created
End of explanation
"""
# Import necessary modules
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
# spliting the dataset between test and training data (using 40% for test data because of small size of dataset)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.4, random_state=42, stratify=y)
# Creating the knn classifier with 6 neighbors
knn = KNeighborsClassifier(n_neighbors=6)
# fitting the data
knn.fit(X_train, y_train)
# predicting the outcomes
y_pred = knn.predict(X_test)
y_pred
# model accuracy
knn.score(X_test, y_test)
"""
Explanation: KNN Method applied to Iris dataset
End of explanation
"""
# Setup arrays to store train and test accuracies
neighbors = np.arange(1, 15)
train_accuracy = np.empty(len(neighbors))
test_accuracy = np.empty(len(neighbors))
# Loop over different values of k
for i, k in enumerate(neighbors):
# Setup a k-NN Classifier with k neighbors: knn
knn = KNeighborsClassifier(n_neighbors=k)
# Fit the classifier to the training data
knn.fit(X_train, y_train)
#Compute accuracy on the training set
train_accuracy[i] = knn.score(X_train, y_train)
#Compute accuracy on the testing set
test_accuracy[i] = knn.score(X_test, y_test)
# Generate plot
plt.title('k-NN: Varying Number of Neighbors')
plt.plot(neighbors, test_accuracy, label = 'Testing Accuracy')
plt.plot(neighbors, train_accuracy, label = 'Training Accuracy')
plt.legend()
plt.xlabel('Number of Neighbors')
plt.ylabel('Accuracy');
"""
Explanation: Looking for the best number of neighbors for the model
End of explanation
"""
# using sklearn to obtain other validation of the model
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, y_pred)
"""
Explanation: It can be concluded that the best accuracies are obtained with 3 or from 7 to 10 neighbors
End of explanation
"""
print(classification_report(y_test, y_pred))
"""
Explanation: The confusion matrix above shows that iris-setosa (0) is classified well, and the same holds for iris-versicolor (1), but in the case of iris-virginica (2) only 16 samples are properly classified and 4 are misclassified as versicolor.
End of explanation
"""
from sklearn.model_selection import GridSearchCV
# parameters grid (in our case just one parameter = n_neighbours)
param_grid = {'n_neighbors': np.arange(1,50)}
knn2 = KNeighborsClassifier()
knn_cv = GridSearchCV(knn2, param_grid, cv=5)
knn_cv.fit(X, y)
knn_cv.best_params_
knn_cv.best_score_
"""
Explanation: precision = TP/(TP+FP), recall = TP/(TP+FN), f1-score = 2 * precision * recall/(precision + recall)
Hyperparameter tuning with sklearn
End of explanation
"""
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
# setting up pipeline steps
steps =[('scaler', StandardScaler()), ('knn3', KNeighborsClassifier())]
pipeline = Pipeline(steps)
# the parameter grid is set up in the cell above, but it must be redefined with the pipeline step prefix ('knn3__')
parameters = {'knn3__n_neighbors': np.arange(1,50)}
# using Grid search to build the model
cv = GridSearchCV(pipeline, param_grid=parameters, cv=5)
# Fit to the training set
cv.fit(X_train, y_train)
# Predict the labels of the test set: y_pred_cv
y_pred_cv = cv.predict(X_test)
# Compute and print metrics
print("Accuracy: {}".format(cv.score(X_test, y_test)))
print(classification_report(y_test, y_pred))
print("Tuned Model Parameters: {}".format(cv.best_params_))
"""
Explanation: Using pipeline for classification
End of explanation
"""
sns.heatmap(df.corr(), square=True, cmap='RdYlGn');
np.arange(1,50)
"""
Explanation: Use of seaborn heat map to show correlation in the dataset
End of explanation
"""
|
mari-linhares/tensorflow-workshop | code_samples/estimators-for-free/.ipynb_checkpoints/estimators_for_free-checkpoint.ipynb | apache-2.0 | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# our model
import model as m
# tensorflow
import tensorflow as tf
print(tf.__version__) #tested with tf v1.2
from tensorflow.contrib import learn
from tensorflow.contrib.learn.python.learn import learn_runner
from tensorflow.python.estimator.inputs import numpy_io
# MNIST data
from tensorflow.examples.tutorials.mnist import input_data
# Numpy
import numpy as np
# Enable TensorFlow logs
tf.logging.set_verbosity(tf.logging.INFO)
"""
Explanation: Before you start: make sure you have deleted the output_dir folder from this path
Some things we get for free by using Estimators
Estimators are a high-level abstraction (interface) that supports all the basic operations you need to support an ML model on top of TensorFlow.
Estimators:
* provide a simple interface for users of canned model architectures: Training, evaluation, prediction, export for serving.
* provide a standard interface for model developers
* drastically reduce the amount of user code required. This avoids bugs and speeds up development significantly.
* enable building production services against a standard interface.
* using the Experiment abstraction gives you data-parallelism for free (more here)
The Estimator interface includes training, evaluation, prediction, and export for serving.
Image from Effective TensorFlow for Non-Experts (Google I/O '17)
You can use an already implemented estimator (canned estimator) or implement your own (custom estimator).
This tutorial is not focused on how to build your own estimator; we're using a custom estimator that implements a CNN classifier for the MNIST dataset, defined in the model.py file, but we're not going into details about how it's implemented.
Here we're going to show how Estimators make your life easier: once you have an Estimator model it is very simple to change your model and compare results.
Having a look at the code and running the experiment
Dependencies
End of explanation
"""
# Import the MNIST dataset
mnist = input_data.read_data_sets("/tmp/MNIST/", one_hot=True)
x_train = np.reshape(mnist.train.images, (-1, 28, 28, 1))
y_train = mnist.train.labels
x_test = np.reshape(mnist.test.images, (-1, 28, 28, 1))
y_test = mnist.test.labels
"""
Explanation: Getting the data
We're not going into details here
End of explanation
"""
BATCH_SIZE = 128
x_train_dict = {'x': x_train }
train_input_fn = numpy_io.numpy_input_fn(
x_train_dict, y_train, batch_size=BATCH_SIZE,
shuffle=True, num_epochs=None,
queue_capacity=1000, num_threads=4)
x_test_dict = {'x': x_test }
test_input_fn = numpy_io.numpy_input_fn(
x_test_dict, y_test, batch_size=BATCH_SIZE, shuffle=False, num_epochs=1)
"""
Explanation: Defining the input function
If we look at the image above we can see that there're two main parts in the diagram, a input function interacting with data files and the Estimator interacting with the input function and checkpoints.
This means that the estimator doesn't know about data files, it knows about input functions. So if we want to interact with a data set we need to creat an input function that interacts with it, in this example we are creating a input function for the train and test data set.
You can learn more about input functions here
End of explanation
"""
# parameters
LEARNING_RATE = 0.01
STEPS = 1000
# create experiment
def generate_experiment_fn():
def _experiment_fn(run_config, hparams):
del hparams # unused, required by signature.
# create estimator
model_params = {"learning_rate": LEARNING_RATE}
estimator = tf.estimator.Estimator(model_fn=m.get_model(),
params=model_params,
config=run_config)
train_input = train_input_fn
test_input = test_input_fn
return tf.contrib.learn.Experiment(
estimator,
train_input_fn=train_input,
eval_input_fn=test_input,
train_steps=STEPS
)
return _experiment_fn
"""
Explanation: Creating an experiment
After an experiment is created (by passing an Estimator and inputs for training and evaluation), an Experiment instance knows how to invoke training and eval loops in a sensible fashion for distributed training. More about it here
End of explanation
"""
OUTPUT_DIR = 'output_dir/model1'
learn_runner.run(generate_experiment_fn(), run_config=tf.contrib.learn.RunConfig(model_dir=OUTPUT_DIR))
"""
Explanation: Run the experiment
End of explanation
"""
STEPS = STEPS + 1000
learn_runner.run(generate_experiment_fn(), run_config=tf.contrib.learn.RunConfig(model_dir=OUTPUT_DIR))
"""
Explanation: Running a second time
Okay, the model is definitely not good... But, check OUTPUT_DIR path, you'll see that a output_dir folder was created and that there are a lot of files there that were created automatically by TensorFlow!
So, most of these files are actually checkpoints, this means that if we run the experiment again with the same model_dir it will just load the checkpoint and start from there instead of starting all over again!
This means that:
If we have a problem while training we can just restore from where we stopped instead of starting all over again
If we didn't train enough we can just continue to train
If you have a big file you can just break it into small files and train for a while with each small file and the model will continue from where it stopped at each time :)
This is all true as long as you use the same model_dir!
So, let's run the experiment again for 1000 more steps to see if we can improve the accuracy. Notice that the first step in this run will actually be step 1001, so we need to change the number of steps to 2000 (otherwise the experiment will find the checkpoint and think it already finished training)
End of explanation
"""
LEARNING_RATE = 0.05
OUTPUT_DIR = 'output_dir/model2'
learn_runner.run(generate_experiment_fn(), run_config=tf.contrib.learn.RunConfig(model_dir=OUTPUT_DIR))
"""
Explanation: Tensorboard
Another thing we get for free is tensorboard.
If you run: tensorboard --logdir=OUTPUT_DIR
You'll see that we get the graph and some scalars, also if you use an embedding layer you'll get an embedding visualization in tensorboard as well!
So, we can make small changes and we'll have an easy (and totally for free) way to compare the models.
Let's make these changes:
1. change the learning rate to 0.05
2. change the OUTPUT_DIR to some path in output_dir/
The 2. is must be inside output_dir/ because we can run: tensorboard --logdir=output_dir/
And we'll get both models visualized at the same time in tensorboard.
You'll notice that the model will start from step 1, because there's no existing checkpoint in this path.
End of explanation
"""
|
thiank/Projects-with-Ning | T-Test vs Permutation Test, Sunday (Aug 27) .ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from scipy import stats
from sklearn.model_selection import permutation_test_score, StratifiedKFold
from sklearn.linear_model import LogisticRegression
from random import shuffle
"""
Explanation: Today's topic: T-tests vs. Permutation Tests
<br />So. Which one is better?
<br />Who knows, so let the BATTLE ROYALE begin.
<br />Or more correctly put - which test provides a better characterization of the data? How do we show what these datasets tell us better?
What we did here:
<br />Say we have two datasets that are borderline different / significant at p = 0.05.
<br />Our question is - are these datasets significantly different really/like srsly/IRL?
<br />So we're going to simulate an extreme case with small sample sizes.
First, we do a t-test. Well, we simulate a t-test:
<br />We have a dataset A that has a mean of 1.24 and a standard deviation of 1.75, with a N = 10.
<br />Then we have another dataset B that has a mean of 1.78 and a standard deviation of 1.1, with a N = 15.
p.s. Obv. SPSS can't do this, because it don't do different Ns. #python-FTW
When doing t-tests, we have to assume:
independent sampling (every single subject is randomly sampled, aka, subject 1 and subject 2 and subject 3 are independently sampled. but they're not a lot of the times. but we can't do anything about dis. meh, aka in psychology terms, wutever.)
normalization, aka normal distribution (personal note: I don't think any of the stuff we test is normal a lot of the times, but let's thank Fisher's gargantuan legacy and the lazy posterity children that we are for just going with the flow)
equal variance (we can assume both cases: equal and non-equal variance, so we will below (because we be thorough))
/beginrant
<br />So what this means is that, in using t-tests, we generally and literally violate every single assumption in any psych/neuro/#FAKE-scientific study we do.
<br />So why do we use t-tests?
<br />Because there are no better alternatives. p.s. but it don't have to be. We are the purveyors of dis light, the deus ex machina of your statistical nonchalance.
<br />/endrant
End of explanation
"""
import IPython.display as dis
dis.YouTubeVideo("Jwjj5gowpLA",start=41)
"""
Explanation: T-test
Simulating datasets A and B, and making sure that they match the criteria:
p value be close to like .05, like reallllllllllllly #fracking close (less than .001 difference)
all the data points in dataset A are positive
all the data points in dataset B are positive
Criteria 2 and 3 make sure that we don't have a normal distribution. Explanation: when mean is 1.24 and the standard deviation is 1.75, how. do. they. not. have. some. negative. data. points. amirite?
The reason why we did this is to make sure that we have data that violate the assumptions of doing t tests.
Reason why Ning was forced to use blue and yellow instead of blue and red like a normal human being:
End of explanation
"""
p = 1
for iii in range(int(1e20)):
dataset_A = np.random.normal(1.24, 1.75, 10)
dataset_B = np.random.normal(1.78, 1.1, 15)
t_stats , p = stats.ttest_ind(dataset_A, dataset_B, equal_var = False)
if (abs(p - 0.05) < 0.001) & ((dataset_A > 0).all()) & ((dataset_B > 0).all()):
break
a,b = stats.ttest_ind(dataset_A, dataset_B, equal_var = False)
c,d = stats.ttest_ind(dataset_A, dataset_B, equal_var = True)
print('Results:','\n','non-equal variance, t(9) = %.2f, p = %.3f'%(a,b),'\n',
'equal variance, t(14) = %.2f, p = %.3f'%(c,d))
fig,ax = plt.subplots()
ax.hist(dataset_A,color = "blue", alpha = .3, label = 'dataset_A')
ax.hist(dataset_B,color = "yellow", alpha = .3, label = 'dataset_B')
ax.axvline(dataset_A.mean(), label = 'Mean of dataset_A', color = 'blue')
ax.axvline(dataset_B.mean(), label = 'Mean of dataset_B', color = 'yellow')
ax.legend(loc = "best")
ax.set(title = "Histogram of Datasets A and B", xlabel = "Value of generated data points",
ylabel = "Count / Frequency of X")
"""
Explanation: ANYWAY, results, results, results.
<br />You can see on the Results + Blue-And-Yellow coloured figure that we have generated two sets of data (datasets A and B) and t-tested them.
<br />You can also see that assuming equal variance and non-equal variance changes the statistical significance of the difference between the datasets A and B. #told-you-so #assumptions-matter
End of explanation
"""
# preprocessing
label1 = np.zeros(dataset_A.shape)
label2 = np.ones(dataset_B.shape)
data = np.concatenate([dataset_A,dataset_B])
label = np.concatenate([label1,label2])
# https://youtu.be/KQ6zr6kCPj8?t=3m39s
idx = np.arange(len(data))
for ii in range(100):
shuffle(idx)
data = data[idx]
label = label[idx]
#let's define the machine learning model and the cross validation method
cv = StratifiedKFold(n_splits = 5, shuffle = True, random_state = 12345)
clf = LogisticRegression(random_state = 12345)
score, permutation_score, pval = permutation_test_score(clf, data.reshape(-1,1), label,
n_permutations = 1000, cv = cv,
random_state = 12345, scoring = 'roc_auc')
fig,ax = plt.subplots()
ax.hist(permutation_score, label = 'randomized scores', color = "red")
ax.axvline(score, label = 'true score : %.2f , pval : %.3f'%(score, pval))
ax.legend()
ax.set(title = "Histogram of Permutated Datasets A and B", xlabel = "Score",
ylabel = "Count / Frequency of Permutation Scores")
"""
Explanation: Permutation Test
Second, we do a permutation test (using scikit-learn).
huh and wat? Permutation test = resampling = get all the data points from datasets A and B and exchange the labels on each data point (e.g. say a data point from dataset A is 1.1, this data point could be labeled as being in dataset B or A when resampled) many many times.
This will generate a distribution of randomized labels on the data points. It's like exploring the "What if the data points weren't labeled as belonging to dataset A or B?"
Another frame of reference in understanding just what in the hell we're doing is through the combinations formula that we slept through in high school:
$$C(n,r) = C(25,10)$$
$$ $$
$$= \frac{25!}{10!(25-10)!}$$
$$ $$
$$=3,268,760$$
Given that dataset A has a N = 10 and dataset B has a N = 15, we have a total n (objects) = 10 + 15 = 25 , and a r (sample) of 10 , which means that we choose 10 of them to be label A. Watever is left is gonna be label B. The order doesn't matter, thus the combinations formula, and not the permutation formula.
But I only have a macbook, and not some Watson so ima just run 1,000 permutations and not 3 million and change permutations. They don't make much difference, and if you don't believe me, bug Ning, not me.
Note. As we increase group sizes: from only 2 groups to 3 or more, we don't necessarily increase the number of permutations a lot (like from 1,000 to 1,000,000) only for approximation. Rather, we increase the number or permutations based on both computation power and the log(combination formula) * 100.
The significance of all this rambling is that we have one assumption - independent sampling, which is the foundation of modern maths.
Unlike most parametric tests (which includes the high school quarterback t-test that we all admired but secretly wanted to ditch in a river bank), we don't assume any gender, sexual orientation, and most importantly normalization and equal variance.
So how do we do dis?
<br />First, we have to do some preprocessing.
1. p.s. Yo. So, a small note. Many peeps don't know or realize that 99.9999% of the work involved in ML (machine learning) is p.r.e.p.r.o.c.es.s.ing. t.eh. da.t.a. This is super important. credit: Ning "It's simple, but important."
2. The idea behind preprocessing for dis is that we formulate a statistical problem into a classification problem. In classification, the feature will be the data we have, and the label is what we generate for each group.
3. In principle, we don't have to shuffle the data, but to make the classification test more robust, we could shuffle our small sample size dataset 100 times as we did and feed them into the machine learning model (logistic regression). p.s. feel free to use other models. Idc. aka random forest etc etc.
4. Before we define a machine learning model, we need to define how we are going to cross validate the model and how we are going to measure the cross validation results. Here, we use StratifiedKFold for cross validation with 5 folds. This cross-validation method is a variation of KFold that returns stratified folds. The folds are made by preserving the percentage of samples for each class.
5. For simplicity, we use logistic regression for the machine learning model. Thus my p.s. comment in #3.
6. Cross validation is not something I can explain now, because it's Sunday and I'm really tired doing this instead of I dunno, doing anything else? now Kapish? Gud.
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
#let's define the machine learning model and the cross validation method
cv = StratifiedKFold(n_splits = 5, shuffle = True, random_state = 12345)
clf = RandomForestClassifier(random_state = 12345)
score, permutation_score, pval = permutation_test_score(clf, data.reshape(-1,1), label,
n_permutations = 1000, cv = cv,
random_state = 12345, scoring = 'roc_auc')
fig,ax = plt.subplots()
ax.hist(permutation_score, label = 'randomized scores', color = "red")
ax.axvline(score, label = 'true score : %.2f , pval : %.3f'%(score, pval))
ax.legend()
ax.set(title = "Histogram of Permutated Datasets A and B", xlabel = "Score",
ylabel = "Count / Frequency of Permutation Scores")
"""
Explanation: So. Results for permutating away my free Sunday night:
T-test results from above:
non-equal variance, t(9) = -2.07, p = 0.050
equal variance, t(14) = -1.99, p = 0.059
Compared to the T-test results, what we see here is that the classification is not significant at the p = 0.05 level in telling the difference between datasets A and B. The blue line is at AUC = 0.70, which means that it can't classify Mr. Jack aka the permutated data points don't separate out datasets A and B with logistic regression to a satisfying post-Thanksgiving level.
Conclusion: it could be that
<br />Explanation A. the logistic regression is not powerful enough, or
<br />Explanation B. the data itself is #C.R.A.P.py (Commonly Recorded Artifactual Potentials), and t test is doing a very unreliable job.
So let's try Random Forest:
End of explanation
"""
|
a-slide/iPython-Notebook | Notebooks/2015_04_16_AL_Analyse_cross_conta_data_Pierre.ipynb | gpl-2.0 | with open('./jeter.tsv', 'r') as file:
for i in range (10):
print (next(file))
"""
Explanation: Calculate the percentage of incorrectly attributed reads in the following file for sample 1 and sample2
reads_sample1_supporting_sample2 vs all reads of sample1
reads_sample2_supporting_sample1 vs all reads of sample2
The percentage of incorrectly attributed reads is then plotted with matplotlib according to the number of reads found
End of explanation
"""
%pylab inline
import csv
import matplotlib.pyplot as plt
"""
Explanation: Import packages
End of explanation
"""
res_s1_sup_s2 = [0 for i in range (26)]
res_s2_sup_s1 = [0 for i in range (26)]
res_s1_sup_other = [0 for i in range (26)]
res_s2_sup_other = [0 for i in range (26)]
"""
Explanation: Create lists to store results for different thresholds from 0 to 25%
End of explanation
"""
with open('./jeter.tsv', 'r') as csvfile:
reader = csv.reader(csvfile, delimiter='\t')
# remove header
next(reader)
# iterate over rows
for R in reader:
s1_sup_s2 = float(R[6])*100/(float(R[5])+float(R[6])+float(R[7]))
s2_sup_s1 = float(R[9])*100/(float(R[8])+float(R[9])+float(R[10]))
s1_sup_other = float(R[7])*100/(float(R[5])+float(R[6])+float(R[7]))
s2_sup_other = float(R[10])*100/(float(R[8])+float(R[9])+float(R[10]))
for seuil in range (26):
if s1_sup_s2 >= seuil:
res_s1_sup_s2[seuil]+=1
if s2_sup_s1 >= seuil:
res_s2_sup_s1[seuil]+=1
if s1_sup_other >= seuil:
res_s1_sup_other[seuil]+=1
if s2_sup_other >= seuil:
res_s2_sup_other[seuil]+=1
print (res_s1_sup_s2)
print (res_s2_sup_s1)
print (res_s1_sup_other)
print (res_s2_sup_other)
plt.figure(figsize=(20, 10))
plt.title("percentage of samples with a number of read with incorect an genotype corresponding to another sample from the same lane or a randon error")
plt.xlabel("Percentage of sample")
plt.ylabel("Number of read with incorrect genotype")
plt.ylim(1,77869)
line1 = plt.semilogy(res_s1_sup_s2, 'b', label = 'reads_sample1_supporting_sample2')
line2 = plt.semilogy(res_s2_sup_s1, 'g', label = 'reads_sample2_supporting_sample1')
line3 = plt.semilogy(res_s1_sup_other, 'r', label = 'reads_sample1_supporting_others')
line4 = plt.semilogy(res_s2_sup_other, 'm', label = 'reads_sample2_supporting_others')
plt.legend(loc='best')
"""
Explanation: parse file and populate the list
End of explanation
"""
|
AllenDowney/ThinkStats2 | examples/auroc.ipynb | gpl-3.0 | # Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('white')
sns.set_context('talk')
"""
Explanation: Area under the receiver operating curve
Copyright 2019 Allen Downey
License: http://creativecommons.org/licenses/by/4.0/
End of explanation
"""
mu1 = 0
sigma = 1
d = 1
mu2 = mu1 + d;
"""
Explanation: Area under ROC
As a way of understanding AUC ROC, let's look at the relationship between AUC and Cohen's effect size.
Cohen's effect size, d, expresses the difference between two groups as the number of standard deviations between the means.
As d increases, we expect it to be easier to distinguish between groups, so we expect AUC to increase.
I'll start in one dimension and then generalize to multiple dimensions.
Here are the means and standard deviations for two hypothetical groups.
End of explanation
"""
n = 1000
sample1 = np.random.normal(mu1, sigma, n)
sample2 = np.random.normal(mu2, sigma, n);
"""
Explanation: I'll generate two random samples with these parameters.
End of explanation
"""
thresh = (mu1 + mu2) / 2
np.mean(sample1 > thresh)
"""
Explanation: If we put a threshold at the midpoint between the means, we can compute the fraction of Group 0 that would be above the threshold.
I'll call that the false positive rate.
End of explanation
"""
np.mean(sample2 < thresh)
"""
Explanation: And here's the fraction of Group 1 that would be below the threshold, which I'll call the false negative rate.
End of explanation
"""
from scipy.stats import gaussian_kde
def make_kde(sample):
"""Kernel density estimate.
sample: sequence
returns: Series
"""
xs = np.linspace(-4, 4, 101)
kde = gaussian_kde(sample)
ys = kde.evaluate(xs)
return pd.Series(ys, index=xs)
def plot_kde(kde, clipped, color):
"""Plot a KDE and fill under the clipped part.
kde: Series
clipped: Series
color: string
"""
plt.plot(kde.index, kde, color=color)
plt.fill_between(clipped.index, clipped, color=color, alpha=0.3)
def plot_misclassification(sample1, sample2, thresh):
"""Plot KDEs and shade the areas of misclassification.
sample1: sequence
sample2: sequence
thresh: number
"""
kde1 = make_kde(sample1)
clipped = kde1[kde1.index>=thresh]
plot_kde(kde1, clipped, 'C0')
kde2 = make_kde(sample2)
clipped = kde2[kde2.index<=thresh]
plot_kde(kde2, clipped, 'C1')
"""
Explanation: Plotting misclassification
To see what these overlapping distributions look like, I'll plot a kernel density estimate (KDE).
End of explanation
"""
plot_misclassification(sample1, sample2, 0)
"""
Explanation: Here's what it looks like with the threshold at 0. There are many false positives, shown in blue, and few false negatives, in orange.
End of explanation
"""
plot_misclassification(sample1, sample2, 1)
"""
Explanation: With a higher threshold, we get fewer false positives, at the cost of more false negatives.
End of explanation
"""
def fpr_tpr(sample1, sample2, thresh):
"""Compute false positive and true positive rates.
sample1: sequence
sample2: sequence
thresh: number
returns: tuple of (fpr, tpf)
"""
fpr = np.mean(sample1>thresh)
tpr = np.mean(sample2>thresh)
return fpr, tpr
"""
Explanation: The receiver operating curve
The receiver operating curve (ROC) represents this tradeoff.
To plot the ROC, we have to compute the false positive rate (which we saw in the figure above), and the true positive rate (not shown in the figure).
The following function computes these metrics.
End of explanation
"""
fpr_tpr(sample1, sample2, 1)
"""
Explanation: When the threshold is high, the false positive rate is low, but so is the true positive rate.
End of explanation
"""
fpr_tpr(sample1, sample2, 0)
"""
Explanation: As we decrease the threshold, the true positive rate increases, but so does the false positive rate.
End of explanation
"""
from scipy.integrate import trapz
def plot_roc(sample1, sample2, label):
"""Plot the ROC curve and return the AUC.
sample1: sequence
sample2: sequence
label: string
returns: AUC
"""
threshes = np.linspace(5, -3)
roc = [fpr_tpr(sample1, sample2, thresh)
for thresh in threshes]
fpr, tpr = np.transpose(roc)
plt.plot(fpr, tpr, label=label)
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
auc = trapz(tpr, fpr)
return auc
"""
Explanation: The ROC shows this tradeoff over a range of thresholds.
I sweep thresholds from high to low so the ROC goes from left to right.
End of explanation
"""
auc = plot_roc(sample1, sample2, '')
"""
Explanation: Here's the ROC for the samples.
With d=1, the area under the curve is about 0.75. That might be a good number to remember.
End of explanation
"""
mu1 = 0
sigma = 1
n = 1000
res = []
for mu2 in [3, 2, 1.5, 0.75, 0.25]:
sample1 = np.random.normal(mu1, sigma, n)
sample2 = np.random.normal(mu2, sigma, n)
d = (mu2-mu1) / sigma
label = 'd = %0.2g' % d
auc = plot_roc(sample1, sample2, label)
res.append((d, auc))
plt.legend();
"""
Explanation: Now let's see what that looks like for a range of d.
End of explanation
"""
def plot_auc_vs_d(res, label):
d, auc = np.transpose(res)
plt.plot(d, auc, label=label, alpha=0.8)
plt.xlabel('Cohen effect size')
plt.ylabel('Area under ROC')
"""
Explanation: This function computes AUC as a function of d.
End of explanation
"""
plot_auc_vs_d(res, '')
"""
Explanation: The following figure shows AUC as a function of d.
End of explanation
"""
from scipy.stats import multivariate_normal
d = 1
mu1 = [0, 0]
mu2 = [d, d]
rho = 0
sigma = [[1, rho], [rho, 1]]
sample1 = multivariate_normal(mu1, sigma).rvs(n)
sample2 = multivariate_normal(mu2, sigma).rvs(n);
"""
Explanation: Not surprisingly, AUC increases as d increases.
Multivariate distributions
Now let's see what happens if we have more than one variable, with a difference in means along more than one dimension.
First, I'll generate a 2-D sample with d=1 along both dimensions.
End of explanation
"""
np.mean(sample1, axis=0)
"""
Explanation: The mean of sample1 should be near 0 for both features.
End of explanation
"""
np.mean(sample2, axis=0)
"""
Explanation: And the mean of sample2 should be near 1.
End of explanation
"""
x, y = sample1.transpose()
plt.plot(x, y, '.', alpha=0.3)
x, y = sample2.transpose()
plt.plot(x, y, '.', alpha=0.3)
plt.xlabel('X')
plt.ylabel('Y')
plt.title('Scatter plot for samples with d=1 in both dimensions');
"""
Explanation: The following scatterplot shows what this looks like in 2-D.
End of explanation
"""
# Based on an example at https://plot.ly/ipython-notebooks/2d-kernel-density-distributions/
def kde_scipy(sample):
"""Use KDE to compute an array of densities.
sample: sequence
returns: tuple of matrixes, (X, Y, Z)
"""
x = np.linspace(-4, 4)
y = x
X, Y = np.meshgrid(x, y)
positions = np.vstack([Y.ravel(), X.ravel()])
kde = gaussian_kde(sample.T)
kde(positions)
Z = np.reshape(kde(positions).T, X.shape)
return [X, Y, Z]
X, Y, Z = kde_scipy(sample1)
plt.contour(X, Y, Z, cmap=plt.cm.Blues, alpha=0.7)
X, Y, Z = kde_scipy(sample2)
plt.contour(X, Y, Z, cmap=plt.cm.Oranges, alpha=0.7)
plt.xlabel('X')
plt.ylabel('Y')
plt.title('KDE for samples with d=1 in both dimensions');
"""
Explanation: Some points are clearly classifiable, but there is substantial overlap in the distributions.
We can see the same thing if we estimate a 2-D density function and make a contour plot.
End of explanation
"""
df1 = pd.DataFrame(sample1)
df1['label'] = 1
df1.describe()
df1[[0,1]].corr()
df2 = pd.DataFrame(sample2)
df2['label'] = 2
df2.describe()
df2[[0,1]].corr()
df = pd.concat([df1, df2], ignore_index=True)
df.label.value_counts()
"""
Explanation: Classification with logistic regression
To see how distinguishable the samples are, I'll use logistic regression.
To get the data into the right shape, I'll make two DataFrames, label them, concatenate them, and then extract the labels and the features.
End of explanation
"""
X = df[[0, 1]]
y = df.label;
"""
Explanation: X is the array of features; y is the vector of labels.
End of explanation
"""
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(solver='lbfgs').fit(X, y);
"""
Explanation: Now we can fit the model.
End of explanation
"""
from sklearn.metrics import roc_auc_score
y_pred_prob = model.predict_proba(X)[:,1]
auc = roc_auc_score(y, y_pred_prob)
"""
Explanation: And compute the AUC.
End of explanation
"""
def multivariate_normal_auc(d, rho=0):
"""Generate multivariate normal samples and classify them.
d: Cohen's effect size along each dimension
rho: correlation between the two features (default 0)
returns: AUC
"""
mu1 = [0, 0]
mu2 = [d, d]
sigma = [[1, rho], [rho, 1]]
# generate the samples
sample1 = multivariate_normal(mu1, sigma).rvs(n)
sample2 = multivariate_normal(mu2, sigma).rvs(n)
# label the samples and extract the features and labels
df1 = pd.DataFrame(sample1)
df1['label'] = 1
df2 = pd.DataFrame(sample2)
df2['label'] = 2
df = pd.concat([df1, df2], ignore_index=True)
X = df.drop(columns='label')
y = df.label
# run the model
model = LogisticRegression(solver='lbfgs').fit(X, y)
y_pred_prob = model.predict_proba(X)[:,1]
# compute AUC
auc = roc_auc_score(y, y_pred_prob)
return auc
"""
Explanation: With two features, we can do better than with just one.
AUC as a function of rho
The following function contains the code from the previous section, with rho as a parameter.
End of explanation
"""
res = [(rho, multivariate_normal_auc(d=1, rho=rho))
for rho in np.linspace(-0.9, 0.9)]
rhos, aucs = np.transpose(res)
plt.plot(rhos, aucs)
plt.xlabel('Correlation (rho)')
plt.ylabel('Area under ROC')
plt.title('AUC as a function of correlation');
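# A theoretical comparison (a sketch of the expected trend): with two features, each shifted
# by d=1 and correlated with coefficient rho, the Mahalanobis separation between the classes
# is d*sqrt(2/(1+rho)), so the best achievable AUC is the normal CDF of d/sqrt(1+rho).
# The empirical curve should follow this, apart from sampling noise and a bit of
# training-set optimism.
from scipy.stats import norm as normal_dist
plt.plot(rhos, normal_dist.cdf(1 / np.sqrt(1 + rhos)), color='gray', label='theory (d=1)')
plt.legend();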
"""
Explanation: Now we can sweep a range of values for rho.
End of explanation
"""
def multivariate_normal_auc(d, num_dims=2):
"""Generate multivariate normal samples and classify them.
d: Cohen's effect size along each dimension
num_dims: number of dimensions
returns: AUC
"""
# compute the mus
mu1 = np.zeros(num_dims)
mu2 = np.full(num_dims, d)
# and sigma
sigma = np.identity(num_dims)
# generate the samples
sample1 = multivariate_normal(mu1, sigma).rvs(n)
sample2 = multivariate_normal(mu2, sigma).rvs(n)
# label the samples and extract the features and labels
df1 = pd.DataFrame(sample1)
df1['label'] = 1
df2 = pd.DataFrame(sample2)
df2['label'] = 2
df = pd.concat([df1, df2], ignore_index=True)
X = df.drop(columns='label')
y = df.label
# run the model
model = LogisticRegression(solver='lbfgs').fit(X, y)
y_pred_prob = model.predict_proba(X)[:,1]
# compute AUC
auc = roc_auc_score(y, y_pred_prob)
return auc
"""
Explanation: AUC as a function of d
The following function contains the code from the previous section, generalized to handle more than 2 dimensions.
End of explanation
"""
multivariate_normal_auc(d=1, num_dims=1)
multivariate_normal_auc(d=1, num_dims=2)
"""
Explanation: Confirming what we have seen before:
End of explanation
"""
def compute_auc_vs_d(num_dims):
"""Sweep a range of effect sizes and compute AUC.
num_dims: number of dimensions
returns: list of (d, AUC) pairs
"""
effect_sizes = np.linspace(0, 4)
return [(d, multivariate_normal_auc(d, num_dims))
for d in effect_sizes]
res1 = compute_auc_vs_d(1)
res2 = compute_auc_vs_d(2)
res3 = compute_auc_vs_d(3)
res4 = compute_auc_vs_d(4);
"""
Explanation: Now we can sweep a range of effect sizes.
End of explanation
"""
plot_auc_vs_d(res4, 'num_dim=4')
plot_auc_vs_d(res3, 'num_dim=3')
plot_auc_vs_d(res2, 'num_dim=2')
plot_auc_vs_d(res1, 'num_dim=1')
plt.title('AUC vs d for different numbers of features')
plt.legend();
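# A theoretical comparison (a sketch of the expected curves): with num_dims independent
# features, each shifted by d, the combined separation is d*sqrt(num_dims), so the best
# achievable AUC is the normal CDF of d*sqrt(num_dims)/sqrt(2). The gray curves should
# roughly track the colored empirical curves, up to sampling noise and a bit of
# training-set optimism.
from scipy.stats import norm as normal_dist
d_grid = np.linspace(0, 4)
for k in (1, 2, 3, 4):
    plt.plot(d_grid, normal_dist.cdf(d_grid * np.sqrt(k) / np.sqrt(2)), color='gray', lw=0.8)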
"""
Explanation: And plot the results.
End of explanation
"""
|
joelagnel/lisa | ipynb/examples/energy_meter/EnergyMeter_AEP.ipynb | apache-2.0 | import logging
from conf import LisaLogging
LisaLogging.setup()
"""
Explanation: Energy Meter Examples
ARM Energy Probe
NOTE: caiman is required to collect data from the probe. Instructions on how to install it can be found here https://github.com/ARM-software/lisa/wiki/Energy-Meters-Requirements#arm-energy-probe-aep.
End of explanation
"""
# Generate plots inline
%matplotlib inline
import os
# Support to access the remote target
import devlib
from env import TestEnv
# RTApp configurator for generation of PERIODIC tasks
from wlgen import RTA, Ramp
"""
Explanation: Import required modules
End of explanation
"""
# Setup target configuration
my_conf = {
# Target platform and board
"platform" : 'linux',
"board" : 'juno',
"host" : '192.168.0.1',
# Folder where all the results will be collected
"results_dir" : "EnergyMeter_AEP",
# Define devlib modules to load
"exclude_modules" : [ 'hwmon' ],
# Energy Meters Configuration for ARM Energy Probe
"emeter" : {
"instrument" : "aep",
"conf" : {
# Value of the shunt resistor in Ohm
'resistor_values' : [0.099],
# Device entry assigned to the probe on the host
'device_entry' : '/dev/ttyACM0',
},
'channel_map' : {
'BAT' : 'BAT'
}
},
# Tools required by the experiments
"tools" : [ 'trace-cmd', 'rt-app' ],
# Comment this line to calibrate RTApp in your own platform
"rtapp-calib" : {"0": 360, "1": 142, "2": 138, "3": 352, "4": 352, "5": 353},
}
# Initialize a test environment using:
te = TestEnv(my_conf, wipe=False, force_new=True)
target = te.target
"""
Explanation: Target Configuration
The target configuration is used to describe and configure your test environment.
You can find more details in examples/utils/testenv_example.ipynb.
End of explanation
"""
# Create an RTApp RAMP task
rtapp = RTA(te.target, 'ramp', calibration=te.calibration())
rtapp.conf(kind='profile',
params={
'ramp' : Ramp(
start_pct = 60,
end_pct = 20,
delta_pct = 5,
time_s = 0.5).get()
})
# EnergyMeter Start
te.emeter.reset()
rtapp.run(out_dir=te.res_dir)
# EnergyMeter Stop and samples collection
nrg_report = te.emeter.report(te.res_dir)
logging.info("Collected data:")
!tree $te.res_dir
"""
Explanation: Workload Execution and Power Consumption Sampling
Detailed information on RTApp can be found in examples/wlgen/rtapp_example.ipynb.
Each EnergyMeter derived class has two main methods: reset and report.
- The reset method will reset the energy meter and start sampling from channels specified in the target configuration. <br>
- The report method will stop capture and will retrieve the energy consumption data. This returns an EnergyReport composed of the measured channels energy and the report file. Each of the samples can also be obtained, as you can see below.
End of explanation
"""
logging.info("Measured channels energy:")
logging.info("%s", nrg_report.channels)
logging.info("Generated energy file:")
logging.info(" %s", nrg_report.report_file)
!cat $nrg_report.report_file
logging.info("Samples collected for the BAT channel (only first 10)")
samples_file = os.path.join(te.res_dir, 'samples.csv')
!head $samples_file
"""
Explanation: Power Measurements Data
End of explanation
"""
|
idekerlab/sdcsb-advanced-tutorial | tutorials/Lesson_1_Introduction_to_cyREST.ipynb | mit | # HTTP Client for Python
import requests
# Standard JSON library
import json
# Basic Setup
PORT_NUMBER = 1234 # This is the default port number of CyREST
"""
Explanation: SDCSB Tutorial
Advanced Cytoscape: Cytoscape, IPython, Docker, and reproducible network data visualization workflows
Friday, 4/17/2015 at Sanford
Lesson 1: Introduction to cyREST
by Keiichiro Ono
Welcome!
This is an introduction to cyREST and its basic API. In this section, you will learn how to access Cytoscape through RESTful API.
Prerequisites
Basic knowledge of RESTful API
This is a good introduction to REST
Basic Python skill - only basics, such as conditional statements, loops, basic data types.
Basic knowledge of Cytoscape
Cytoscape data types - Networks, Tables, and Styles.
System Requirements
This tutorial is tested on the following platform:
Client machine running Cytoscape
Java SE 8
Cytoscape 3.2.1
Latest version of cyREST app
Server Running IPython Notebook
Docker running this image
1. Import Python Libraries and Basic Setup
Libraries
In this tutorial, we will use several popular Python libraries to make this workflow more realistic.
NumPy
SciPy
Pandas
igraph
NetworkX
etc.
Do I need to install all of them?
NO. Because we are running this notebook server in Docker container with all dependencies.
HTTP Client
Since you need to access Cytoscape via RESTful API, HTTP client library is the most important tool you need to understand. In this example, we use Requests library to simplify API call code.
JSON Encoding and Decoding
Data will be exchanged as JSON between Cytoscape and Python code. Python has built-in support for JSON and we will use it in this workflow.
Basic Setup for the API
At this point, there is only one option for the cy-rest module: port number.
Change Port Number
By default, the port number used by the cy-rest module is 1234. To change this, you need to set a global Cytoscape property from Edit → Preferences → Properties... and add a new property rest.port.
What is happening on your machine?
Mac / Windows
Linux
Actual Docker runtime is only available to Linux operating system and if you use Mac or Windows version of Docker, it is running on a Linux virtual machine (called boot2docker).
URL to Access Cytoscape REST API
We assume you are running Cytoscape desktop application and IPython Notebook server in a Docker container we provide. To access Cytoscape REST API, use the following URL:
url
http://IP_of_your_machine:PORT_NUMBER/v1/
where v1 is the current version number of API. Once the final release is ready, we guarantee compatibility of your scripts as long as major version number is the same.
Check your machine's IP
For Linux and Mac:
bash
ifconfig
For Windows:
ipconfig
Viewing JSON
All data exchanged between Cytoscape and other applications is in JSON. You can make the JSON data more human-readable by using browser extensions:
JSONView for Firefox
JSONView for Chrome
If you prefer command-line tools, jq is the best choice.
End of explanation
"""
# IP address of your PHYSICAL MACHINE (NOT VM)
IP = '137.110.137.158'
"""
Explanation: Don't forget to update this line! This should be your host machine's IP address.
End of explanation
"""
BASE = 'http://' + IP + ':' + str(PORT_NUMBER) + '/v1/'
# Header for posting data to the server as JSON
HEADERS = {'Content-Type': 'application/json'}
# Clean-up
requests.delete(BASE + 'session')
# Utility function to display JSON (Pretty-printer)
def pp(json_data):
print(json.dumps(json_data, indent=4))
"""
Explanation:
End of explanation
"""
# Get server status
res = requests.get(BASE)
status_object = res.json()
print(json.dumps(status_object, indent=4))
"""
Explanation: 2. Test Cytoscape REST API
Check the status of server
First, send a simple request and check the server status.
Roundtrip between JSON and Python Object
Object returned from the requests contains return value of API as JSON. Let's convert it into Python object. JSON library in Python converts JSON string into simple Python object.
End of explanation
"""
print(BASE)
"""
Explanation: And of course, you can access this API from other tools, including web browsers.
Click the following URL:
End of explanation
"""
print(status_object['apiVersion'])
print(status_object['memoryStatus']['usedMemory'])
"""
Explanation: How cyREST works?
Basic mechanism of cyREST is very simple. It accesses resources in Cytoscape with standard HTTP verbs: POST, GET, PUT, and DELETE. The URL above means "give me status of cyREST server."
And once you store the return values in Python object, you can access them through standard Python code:
End of explanation
"""
# Small utility function to create networks from list of URLs
def create_from_list(network_list, collection_name='Yeast Collection'):
payload = {'source': 'url', 'collection': collection_name}
server_res = requests.post(BASE + 'networks', data=json.dumps(network_list), headers=HEADERS, params=payload)
return server_res.json()
# Array of data source.
network_files = [
#This should be path in the LOCAL file system!
'file:////Users/kono/prog/git/sdcsb-advanced-tutorial/tutorials/data/yeast.json',
# SIF file on a web server
'http://chianti.ucsd.edu/cytoscape-data/galFiltered.sif'
# And of course, you can add as many files as you need...
]
# Create!
res_json = create_from_list(network_files)
print(json.dumps(res_json, indent=4))
"""
Explanation: If you are comfortable with these, you are ready to go!
3. Import Networks from various data sources
There are many ways to load networks into Cytoscape from REST API:
Load from files
Load from web services
Send Cytoscape.js style JSON directly to Cytoscape
Send edgelist
3.1 Create networks from local files and URLs
Let's start with a simple file-loading example. The POST method is used to create new Cytoscape objects. For example,
bash
POST http://localhost:1234/v1/networks
means create new network(s) by specified method. If you want to create networks from files on your machine or remote servers, all you need to do is create a list of file locations and post it to Cytoscape.
End of explanation
"""
sample_network_suids = []
for new_network in res_json:
sample_network_suids.append(new_network['networkSUID'][0])
sample_network_suids
"""
Explanation: What Happened?
Send a list of resource (file) locations as URLs from this notebook
cyREST interprets the request for Cytoscape
Cytoscape loads each resource in the list with its own data file readers
Cytoscape returns the list of new networks created in the session.
What is SUID?
SUID is the unique identifier for all graph objects in Cytoscape. You can access any object in the current session as long as you have its SUID. For the example above, you can access the new network SUIDs with:
End of explanation
"""
# Resource location as URL
tca_cycle_human = 'http://rest.kegg.jp/get/hsa00020/kgml'
# Pass it to Cytoscape
pp(create_from_list([tca_cycle_human], 'KEGG Metabolic Pathways'))
"""
Explanation: Note that Cytoscape may creates multiple networks from a single network resource. This is why you need index number after networkSUID. In this tutorial, all network resource (file) contains only one network, so you can just use 0 to access the result.
Where is my local data file?
This is a bit of a tricky part. When you specify a local file, you need to use an absolute path.
In the Docker container, your data file is mounted at:
/notebooks/data
However, actual file is in:
PATH_TO_YOUR_WORKSPACE/vizbi-2015-cytoscape-tutorial/notebooks/data
Although you can see the data directory at /notebooks/data, you need to use the absolute path to access the actual data from Cytoscape. You may find this a bit annoying, but it is also the power of container technology: you run your workflow in a completely isolated environment.
3.2 Create networks from public web services
There are many public data sources and web services for biology. If the service supports Cytoscape-readable file formats, you can directly specify the query URL as the network resource location. For example, the following URL calls KEGG REST API and load the TCA Cycle pathway diagram for human.
KEGG PATHWAY: hsa00020 Citrate cycle (TCA cycle) - Homo sapiens (human)
REST API to download the pathway in KGML format: http://rest.kegg.jp/get/hsa00020/kgml
Hand-drawn pathway diagram in KEGG:
Warning: You need to install KEGGScape App to Cytoscape before running the following cells!
You can just click the link above and press Install to directly install the app from the web.
Loading external data directly from API
If the data format is supported in Cytoscape, you can import it programmatically by passing the resource location (URL) to Cytoscape:
End of explanation
"""
# Find pathways involving cancer
res = requests.get('http://togows.org/search/kegg-pathway/cancer/1,50.json')
pp(res.json())
"""
Explanation: and now your Cytoscape window should look like this:
Connect multiple web services
OK, this is not so interesting because it could be done manually from the GUI if we wanted. But what happens if you need to check hundreds of resources and filter the results? You can easily handle such problems if you know how to write your workflow as a notebook (code).
In this example, we will do the following:
Send a simple query to find a list of pathways involving cancer (using TogoWS)
Convert the result to make it readable by another web service (KEGG API)
Import some of the results into Cytoscape
End of explanation
"""
# Convert to URLs. This can be done with for loop, but for simplicity, we use map function.
# Extract ID portion of entries
path_ids = list(map(lambda x: x.split('\t')[0], res.json()))
# Make it consumable by KEGG API (Convert to list of URLs)
path_url_human = list(map(lambda x: 'http://rest.kegg.jp/get/' + x.replace('path:map', 'hsa') + '/kgml', path_ids))
pp(path_url_human)
"""
Explanation: This raw result needs some work to make it usable by other services. In the following cell, Python creates URLs from the list of pathway ID / pathway name pairs:
End of explanation
"""
# This may take a while...
pp(create_from_list(path_url_human[0:3], 'KEGG Metabolic Pathways'))
"""
Explanation: Let's pick the first 3 results and import the actual pathways.
End of explanation
"""
# Get a list of network IDs
get_all_networks_url = BASE + 'networks'
print(get_all_networks_url)
res = requests.get(get_all_networks_url)
pp(res.json())
# Pick the first network from the list above:
network_suid = res.json()[0]
get_network_url = BASE + 'networks/' + str(network_suid)
print(get_network_url)
# Get number of nodes in the network
get_nodes_count_url = BASE + 'networks/' + str(network_suid) + '/nodes/count'
print(get_nodes_count_url)
# Get all nodes
get_nodes_url = BASE + 'networks/' + str(network_suid) + '/nodes'
print(get_nodes_url)
# Get Node data table as CSV
get_node_table_url = BASE + 'networks/' + str(network_suid) + '/tables/defaultnode.csv'
print(get_node_table_url)
"""
Explanation: Discussion
The pipeline above is just a toy example, but you can automate the data preparation and import steps if you use Python. You can try other web services to make it more realistic.
Understand REST Principles
We used modern best practices to design cyREST API. All HTTP verbs are mapped to Cytoscape resources:
| HTTP Verb | Description |
|:----------:|:------------|
| GET | Retrieving resources (in most cases, it is Cytoscape data objects, such as networks or tables) |
| POST | Creating resources |
| PUT | Changing/replacing resources or collections |
| DELETE | Deleting resources |
This design style is called Resource Oriented Architecture (ROA).
Actually, the basic idea is very simple: map all operations to existing HTTP verbs. It is easy to understand once you try actual examples.
GET (Get a resource)
End of explanation
"""
# Write your answers here...
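# One possible set of answers, guessed by extending the URL patterns already used in this
# notebook (networks, .../nodes/count, .../tables/defaultnode/columns/name, ...). Treat these
# as guesses to verify by clicking -- that is the point of the exercise. For the single-node
# URL, substitute any node SUID returned by the .../nodes endpoint.
print(BASE + 'networks/count')                                                      # number of networks
print(BASE + 'networks/' + str(network_suid) + '/edges')                            # all edges in a network
print(BASE + 'networks/' + str(network_suid) + '/nodes/' + 'NODE_SUID_HERE')        # full info for one node
print(BASE + 'networks/' + str(network_suid) + '/tables/defaultnode/columns')       # info for all columns
print(BASE + 'networks/' + str(network_suid) + '/tables/defaultnode/columns/name')  # values in the "name" column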
"""
Explanation: Exercise 1: Guess URLs
If a system's RESTful API is well-designed based on ROA best practices, it should be easy to guess similar functions as URLs.
Display clickable URLs for the following functions:
Show number of networks in current session
Show all edges in a network
Show full information for a node (can be any node)
Show information for all columns in the default node table
Show all values in default node table under "name" column
End of explanation
"""
# Add new nodes to an existing network (with time stamps)
import datetime
new_nodes =[
'Node created at ' + str(datetime.datetime.now()),
'Node created at ' + str(datetime.datetime.now())
]
res = requests.post(get_nodes_url, data=json.dumps(new_nodes), headers=HEADERS)
new_node_ids = res.json()
pp(new_node_ids)
"""
Explanation: POST (Create a new resource)
To create new resources (objects), you should use POST methods. URLs follow ROA conventions, but you need to read the API documentation to understand the data format for each object.
End of explanation
"""
# Delete all nodes
requests.delete(BASE + 'networks/' + str(sample_network_suids[0]) + '/nodes')
# Delete a network
requests.delete(BASE + 'networks/' + str(sample_network_suids[0]))
"""
Explanation: DELETE (Delete a resource)
End of explanation
"""
# Update a node name
new_values = [
{
'SUID': new_node_ids[0]['SUID'],
'value' : 'updated 1'
},
{
'SUID': new_node_ids[1]['SUID'],
'value' : 'updated 2'
}
]
requests.put(BASE + 'networks/' + str(network_suid) + '/tables/defaultnode/columns/name', data=json.dumps(new_values), headers=HEADERS)
"""
Explanation: PUT (Update a resource)
The PUT method is used to update information for existing resources. Just like with POST, you need to know the data format to be sent.
End of explanation
"""
# Manually generates JSON as a Python dictionary
def create_network():
network = {
'data': {
'name': 'I\'m empty!'
},
'elements': {
'nodes':[],
'edges':[]
}
}
return network
# Define a simple utility function
def postNetwork(data):
url_params = {
'collection': 'My Network Colleciton'
}
res = requests.post(BASE + 'networks', params=url_params, data=json.dumps(data), headers=HEADERS)
return res.json()['networkSUID']
# POST data to Cytoscape
empty_net_1 = create_network()
empty_net_1_suid = postNetwork(empty_net_1)
print('Empty network has SUID ' + str(empty_net_1_suid))
"""
Explanation: 3.3 Create networks from Python objects
And this is the most powerful feature of the Cytoscape REST API: you can easily convert Python objects into Cytoscape networks, tables, or Visual Styles.
How does this work?
The Cytoscape REST API sends and receives data as JSON. For networks, it uses Cytoscape.js style JSON (support for more file formats is coming!). You can programmatically generate networks by converting a Python dictionary into JSON.
3.3.1 Prepare Network as Cytoscape.js JSON
Let's start with the simplest network JSON:
End of explanation
"""
# Create sequence of letters (A-Z)
seq_letters = list(map(chr, range(ord('A'), ord('Z')+1)))
print(seq_letters)
# Option 1: Add nods and edges with for loops
def add_nodes_edges(network):
nodes = []
edges = []
for lt in seq_letters:
node = {
'data': {
'id': lt
}
}
nodes.append(node)
for lt in seq_letters:
edge = {
'data': {
'source': lt,
'target': 'A'
}
}
edges.append(edge)
network['elements']['nodes'] = nodes
network['elements']['edges'] = edges
network['data']['name'] = 'A is the hub.'
# Option 2: Add nodes and edges in functional way
def add_nodes_edges_functional(network):
network['elements']['nodes'] = list(map(lambda x: {'data': { 'id': x }}, seq_letters))
network['elements']['edges'] = list(map(lambda x: {'data': { 'source': x, 'target': 'A' }}, seq_letters))
network['data']['name'] = 'A is the hub (Functional Way)'
# Uncomment this if you want to see the actual JSON object
# print(json.dumps(empty_network, indent=4))
net1 = create_network()
net2 = create_network()
add_nodes_edges_functional(net1)
add_nodes_edges(net2)
networks = [net1, net2]
def visualize(net):
suid = postNetwork(net)
net['data']['SUID'] = suid
# Apply layout and Visual Style
requests.get(BASE + 'apply/layouts/force-directed/' + str(suid))
requests.get(BASE + 'apply/styles/Directed/' + str(suid))
for net in networks:
visualize(net)
"""
Explanation: Modify network data programmatically
Since it's a simple Python dictionary, it is easy to add data to the network. Let's add some nodes and edges:
End of explanation
"""
from IPython.display import Image
Image(url=BASE+'networks/' + str(net1['data']['SUID'])+ '/views/first.png', embed=True)
"""
Explanation: Now, your Cytoscape window should look like this:
Embed images in IPython Notebook
cyREST has a function to generate a PNG image directly from the current network view. Let's see the result in this notebook.
End of explanation
"""
%%writefile data/model.sif
Model parent_of ViewModel_1
Model parent_of ViewModel_2
Model parent_of ViewModel_3
ViewModel_1 parent_of Presentation_A
ViewModel_1 parent_of Presentation_B
ViewModel_2 parent_of Presentation_C
ViewModel_3 parent_of Presentation_D
ViewModel_3 parent_of Presentation_E
ViewModel_3 parent_of Presentation_F
model = [
'file:////Users/kono/prog/git/sdcsb-advanced-tutorial/tutorials/data/model.sif'
]
# Create!
res = create_from_list(model)
model_suid = res[0]['networkSUID'][0]
requests.get(BASE + 'apply/layouts/force-directed/' + str(model_suid))
Image(url=BASE+'networks/' + str(model_suid)+ '/views/first.png', embed=True)
"""
Explanation: Introduction to Cytoscape Data Model
Essentially, writing your workflow as a notebook is programming. To control Cytoscape efficiently from notebooks, you need to understand the basic data model of Cytoscape. Let me explain it as a notebook...
First, let's create a data file to visualize the Cytoscape data model
End of explanation
"""
view_url = BASE + 'networks/' + str(model_suid) + '/views/first'
print('You can access (default) network view from this URL: ' + view_url)
"""
Explanation: Model, View Model, and Presentation
Model
Essentially, Model in Cytoscape means networks and tables. Internally, Model can have multiple View Models.
View Model
State of the view.
This is why you need to use views instead of view in the API:
/v1/networks/SUID/views
However, Cytoscape 3.2.x has only one rendering engine for now, and end users do not have access to this feature. Until Cytoscape Desktop supports multiple renderers, the best practice is to just use one view per model. To access the default view, there is a utility endpoint, first:
End of explanation
"""
data_str = ''
n = 0
while n <100:
data_str = data_str + str(n) + '\t' + str(n+1) + '\n'
n = n + 1
# Join the first and last nodes
data_str = data_str + '100\t0\n'
# print(data_str)
# You can create multiple networks by running simple for loop:
for i in range(3):
res = requests.post(BASE + 'networks?format=edgelist&collection=Rings!', data=data_str, headers=HEADERS)
circle_suid = res.json()['networkSUID']
requests.get(BASE + 'apply/layouts/circular/' + str(circle_suid))
Image(url=BASE+'networks/' + str(circle_suid) + '/views/first.png', embed=True)
"""
Explanation: Presentation
A Presentation is the stateless, actual graphics you see in the window. A View Model can have multiple Presentations. For now, you can assume there is always one Presentation per View Model.
What do you need to know as a cyREST user?
The cyREST API is fairly low level, and you can access all levels of the Cytoscape data structures. But if you want to use Cytoscape as a simple network visualization engine for IPython Notebook, here are some tips:
Tip 1: Always keep SUID when you create any new object
ALL Cytoscape objects (networks, nodes, edges, and tables) have a session-unique ID, called the SUID. When you create any new data object in Cytoscape, it returns SUIDs. You need to keep them in Python data structures (list, dict, map, etc.) to access the objects later.
Tip 2: Create one view per model
Until Cytoscape Desktop fully supports the multiple view/presentation feature, keep it simple: one view per model.
Tip 3: Minimize number of API calls
Of course, there are API calls to add / remove / update one data object per request, but that is extremely inefficient!
3.3.2 Prepare Network as edgelist
An edgelist is a minimalistic data format for networks, and it is widely used in popular libraries including NetworkX and igraph. Preparing an edgelist in Python is straightforward. You just need to prepare a list of edges as a string like:
a b
b c
a c
c d
d f
b f
f g
f h
In Python, there are many ways to generate a string like this. Here is a naive approach:
End of explanation
"""
# Write your code here...
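# One possible solution: reuse the edgelist importer from section 3.3.2 with the sample edges
# shown above (tab-separated, like the ring example). The "Magic" hint suggests an alternative:
# write the edges to a file with %%writefile and load it the same way as model.sif.
edgelist = 'a\tb\nb\tc\na\tc\nc\td\nd\tf\nb\tf\nf\tg\nf\th\n'
res = requests.post(BASE + 'networks?format=edgelist&collection=Exercise2',
                    data=edgelist, headers=HEADERS)
exercise_suid = res.json()['networkSUID']
requests.get(BASE + 'apply/layouts/force-directed/' + str(exercise_suid))
Image(url=BASE + 'networks/' + str(exercise_suid) + '/views/first.png', embed=True)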
"""
Explanation: Exercise 2: Create a network from a simple edge list file
An edge list is a human-editable text format for representing a graph structure. Using the sample data above (the edge list example in 3.3.2), create a new network in Cytoscape from the edge list and visualize it just like the ring network above.
Hint: Use Magic!
End of explanation
"""
|
dkirkby/astroml-study | Chapter4/Chapter 4.5 - 4.9.ipynb | mit | %pylab inline
import scipy.stats
"""
Explanation: 4.5 Confidence Estimates: the Bootstrap and the Jackknife
End of explanation
"""
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
import numpy as np
from scipy.stats import norm
from matplotlib import pyplot as plt
from astroML.resample import bootstrap
from astroML.stats import sigmaG
#----------------------------------------------------------------------
# This function adjusts matplotlib settings for a uniform feel in the textbook.
# Note that with usetex=True, fonts are rendered with LaTeX. This may
# result in an error if LaTeX is not installed on your system. In that case,
# you can set usetex to False.
from astroML.plotting import setup_text_plots
setup_text_plots(fontsize=15, usetex=True)
m = 1000 # number of points
n = 10000 # number of bootstraps
#------------------------------------------------------------
# sample values from a normal distribution
np.random.seed(123)
data = norm(0, 1).rvs(m)
#------------------------------------------------------------
# Compute bootstrap resamplings of data
mu1_bootstrap = bootstrap(data, n, np.std, kwargs=dict(axis=1, ddof=1))
mu2_bootstrap = bootstrap(data, n, sigmaG, kwargs=dict(axis=1))
#------------------------------------------------------------
# Compute the theoretical expectations for the two distributions
x = np.linspace(0.8, 1.2, 1000)
sigma1 = 1. / np.sqrt(2 * (m - 1))
pdf1 = norm(1, sigma1).pdf(x)
sigma2 = 1.06 / np.sqrt(m)
pdf2 = norm(1, sigma2).pdf(x)
#------------------------------------------------------------
# Plot the results
fig, ax = plt.subplots(figsize=(5*2, 3.75*2))
ax.hist(mu1_bootstrap, bins=50, normed=True, histtype='step',
color='blue', ls='dashed', label=r'$\sigma\ {\rm (std. dev.)}$')
ax.plot(x, pdf1, color='gray')
ax.hist(mu2_bootstrap, bins=50, normed=True, histtype='step',
color='red', label=r'$\sigma_G\ {\rm (quartile)}$')
ax.plot(x, pdf2, color='gray')
ax.set_xlim(0.82, 1.18)
ax.set_xlabel(r'$\sigma$',)
ax.set_ylabel(r'$p(\sigma|x,I)$')
ax.legend()
plt.show()
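# A minimal "by hand" version of the bootstrap used in the figure above, to make the recipe
# explicit: resample the data with replacement, recompute the statistic on each resampling,
# and use the spread of those values as the uncertainty estimate. (The figure relies on the
# astroML bootstrap helper, which implements the same idea.)
idx = np.random.randint(0, m, size=(100, m))      # 100 bootstrap resamplings of the data
boot_sigma = data[idx].std(axis=1, ddof=1)        # statistic recomputed on each resampling
print("bootstrap sigma = %.3f +/- %.3f" % (boot_sigma.mean(), boot_sigma.std()))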
"""
Explanation: Bootstrap: Redraw new data from the old data set with replacement. The new data set will have the same size as the old one.
For data set of size N, N! distinct redrawn samples.
With each redrawn set compute the statistic of interest
standard deviation $\sigma = \sqrt{\frac{1}{N-1} \sum_{i=1}^{N} (x_i - \overline{x})^2}$
width estimator $\sigma_G = 0.7413 (q_{75} - q_{25})$
The figure has N=1000 points and 10,000 bootstrap resamplings; for each resampling we compute $\sigma$ and $\sigma_G$.
Figure 4.3
End of explanation
"""
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
import numpy as np
from scipy.stats import norm
from matplotlib import pyplot as plt
#----------------------------------------------------------------------
# This function adjusts matplotlib settings for a uniform feel in the textbook.
# Note that with usetex=True, fonts are rendered with LaTeX. This may
# result in an error if LaTeX is not installed on your system. In that case,
# you can set usetex to False.
from astroML.plotting import setup_text_plots
setup_text_plots(fontsize=15, usetex=True)
#------------------------------------------------------------
# sample values from a normal distribution
np.random.seed(123)
m = 1000 # number of points
data = norm(0, 1).rvs(m)
#------------------------------------------------------------
# Compute jackknife resamplings of data
from astroML.resample import jackknife
from astroML.stats import sigmaG
# mu1 is the mean of the standard-deviation-based width
mu1, sigma_mu1, mu1_raw = jackknife(data, np.std,
kwargs=dict(axis=1, ddof=1),
return_raw_distribution=True)
pdf1_theory = norm(1, 1. / np.sqrt(2 * (m - 1)))
pdf1_jackknife = norm(mu1, sigma_mu1)
# mu2 is the mean of the interquartile-based width
# WARNING: do not use the following in practice. This example
# shows that jackknife fails for rank-based statistics.
mu2, sigma_mu2, mu2_raw = jackknife(data, sigmaG,
kwargs=dict(axis=1),
return_raw_distribution=True)
pdf2_theory = norm(data.std(), 1.06 / np.sqrt(m))
pdf2_jackknife = norm(mu2, sigma_mu2)
print mu2, sigma_mu2
#------------------------------------------------------------
# plot the results
print "mu_1 mean: %.2f +- %.2f" % (mu1, sigma_mu1)
print "mu_2 mean: %.2f +- %.2f" % (mu2, sigma_mu2)
fig = plt.figure(figsize=(5*2, 2*2))
fig.subplots_adjust(left=0.11, right=0.95, bottom=0.2, top=0.9,
wspace=0.25)
ax = fig.add_subplot(121)
ax.hist(mu1_raw, np.linspace(0.996, 1.008, 100),
label=r'$\sigma^*\ {\rm (std.\ dev.)}$',
histtype='stepfilled', fc='white', normed=False)
ax.hist(mu2_raw, np.linspace(0.996, 1.008, 100),
label=r'$\sigma_G^*\ {\rm (quartile)}$',
histtype='stepfilled', fc='gray', normed=False)
ax.legend(loc='upper left', handlelength=2)
ax.xaxis.set_major_locator(plt.MultipleLocator(0.004))
ax.set_xlabel(r'$\sigma^*$')
ax.set_ylabel(r'$N(\sigma^*)$')
ax.set_xlim(0.998, 1.008)
ax.set_ylim(0, 550)
ax = fig.add_subplot(122)
x = np.linspace(0.45, 1.15, 1000)
ax.plot(x, pdf1_jackknife.pdf(x),
color='blue', ls='dashed', label=r'$\sigma\ {\rm (std.\ dev.)}$',
zorder=2)
ax.plot(x, pdf1_theory.pdf(x), color='gray', zorder=1)
ax.plot(x, pdf2_jackknife.pdf(x),
color='red', label=r'$\sigma_G\ {\rm (quartile)}$', zorder=2)
ax.plot(x, pdf2_theory.pdf(x), color='gray', zorder=1, label='Theory')
plt.legend(loc='upper left', handlelength=2)
ax.set_xlabel(r'$\sigma$')
ax.set_ylabel(r'$p(\sigma|x,I)$')
ax.set_xlim(0.45, 1.15)
ax.set_ylim(0, 24)
plt.show()
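# A minimal "by hand" jackknife for the same data, to make the procedure concrete: leave one
# point out at a time, recompute the statistic, and combine the N leave-one-out values. The
# jackknife error estimate is sqrt((N-1)/N * sum((theta_i - mean(theta_i))**2)).
# (The figure relies on the astroML jackknife helper.)
theta_i = np.array([np.delete(data, i).std(ddof=1) for i in range(m)])
sigma_jk = np.sqrt((m - 1.) / m * np.sum((theta_i - theta_i.mean()) ** 2))
print("leave-one-out mean sigma = %.4f +/- %.4f" % (theta_i.mean(), sigma_jk))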
"""
Explanation: Jackknife: Compute statistics on subsamples of the data.
For example, when removing 1 data point at a time from a sample of size N, there are N subsamples.
The jackknife performs poorly for rank-based statistics.
Figure 4.4
End of explanation
"""
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
import numpy as np
from scipy.stats import norm
from matplotlib import pyplot as plt
#----------------------------------------------------------------------
# This function adjusts matplotlib settings for a uniform feel in the textbook.
# Note that with usetex=True, fonts are rendered with LaTeX. This may
# result in an error if LaTeX is not installed on your system. In that case,
# you can set usetex to False.
from astroML.plotting import setup_text_plots
setup_text_plots(fontsize=15, usetex=True)
#------------------------------------------------------------
# Generate and draw the curves
x = np.linspace(50, 200, 1000)
p1 = 0.9 * norm(100, 10).pdf(x)
p2 = 0.1 * norm(150, 12).pdf(x)
fig, ax = plt.subplots(figsize=(5*2, 3.75*2))
ax.fill(x, p1, ec='k', fc='#AAAAAA', alpha=0.5)
ax.fill(x, p2, '-k', fc='#AAAAAA', alpha=0.5)
ax.plot([120, 120], [0.0, 0.04], '--k')
ax.text(100, 0.036, r'$h_B(x)$', ha='center', va='bottom')
ax.text(150, 0.0035, r'$h_S(x)$', ha='center', va='bottom')
ax.text(122, 0.039, r'$x_c=120$', ha='left', va='top')
ax.text(125, 0.01, r'$(x > x_c\ {\rm classified\ as\ sources})$')
ax.set_xlim(50, 200)
ax.set_ylim(0, 0.04)
ax.set_xlabel('$x$')
ax.set_ylabel('$p(x)$')
plt.show()
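# A quick numerical check of the completeness/contamination formulas discussed in this section,
# using the two Gaussians of this figure (background N(100, 10) with weight 0.9, source
# N(150, 12) with weight 0.1) and the threshold x_c = 120. N is arbitrary; it cancels in eta
# and epsilon.
N_tot, a, x_c = 1e6, 0.1, 120.
n_spurious = N_tot * (1 - a) * norm(100, 10).sf(x_c)   # background objects above the cut
n_missed = N_tot * a * norm(150, 12).cdf(x_c)          # sources below the cut
n_source = N_tot * a - n_missed + n_spurious
eta = (N_tot * a - n_missed) / (N_tot * a)             # completeness
eps = n_spurious / n_source                            # contamination
print("completeness = %.3f, contamination = %.3f" % (eta, eps))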
"""
Explanation: Hypothesis Testing
CDF of null hypothesis: $0 \leq H_0(x) \leq 1$
p value: the probability that we would get a value at least as large as $x_i$:
$p(x>x_i) = 1 - H_0(x_i)$
A threshold p value is adopted, called the significance level $\alpha$; the null hypothesis is rejected when $p\leq \alpha$
Type I Error : False Positive
Type II Error: Missed sources or false negatives
Simple Classification and Completeness vs. Contamination Trade-Off
underlying distribution
$h(x) = (1-a)\, h_b(x) + a \,h_s(x)$
Type I errors
$n_{spurious} = N\,(1-a) \int_{x_c}^{\infty} h_b(x) {\rm d}x$
Type II errors
$n_{missed} = N\, a \int_0^{x_c} h_s(x) {\rm d}x$
Total number of instances classified as source
$n_{source} = N \, a - n_{missed} + n_{spurious}$
sample completeness
$\eta = \frac{N \,a - n_{missed}}{N \, a}$
sample contamination
$\epsilon = \frac{n_{spurious}}{n_{source}}$
End of explanation
"""
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
import numpy as np
from scipy.stats import norm
from matplotlib import pyplot as plt
#----------------------------------------------------------------------
# This function adjusts matplotlib settings for a uniform feel in the textbook.
# Note that with usetex=True, fonts are rendered with LaTeX. This may
# result in an error if LaTeX is not installed on your system. In that case,
# you can set usetex to False.
from astroML.plotting import setup_text_plots
setup_text_plots(fontsize=15, usetex=True)
#------------------------------------------------------------
# Set up the background and foreground distributions
background = norm(100, 10)
foreground = norm(150, 12)
f = 0.1
# Draw from the distribution
N = 1E6
X = np.random.random(N)
mask = (X < 0.1)
X[mask] = foreground.rvs(np.sum(mask))
X[~mask] = background.rvs(np.sum(~mask))
#------------------------------------------------------------
# Perform Benjamini-Hochberg method
p = 1 - background.cdf(X)
p_sorted = np.sort(p)
#------------------------------------------------------------
# plot the results
fig = plt.figure(figsize=(5*2, 3.75*2))
fig.subplots_adjust(bottom=0.15)
ax = plt.axes(xscale='log', yscale='log')
# only plot every 1000th; plotting all 1E6 takes too long
ax.plot(p_sorted[::1000], np.linspace(0, 1, 1000), '-k')
ax.plot(p_sorted[::1000], p_sorted[::1000], ':k', lw=1)
# plot the cutoffs for various values of expsilon
p_reg_over_eps = 10 ** np.linspace(-3, 0, 100)
for (i, epsilon) in enumerate([0.1, 0.01, 0.001, 0.0001]):
x = p_reg_over_eps * epsilon
y = p_reg_over_eps
ax.plot(x, y, '--k')
ax.text(x[1], y[1],
r'$\epsilon = %.1g$' % epsilon,
ha='center', va='bottom', rotation=70)
ax.xaxis.set_major_locator(plt.LogLocator(base=100))
ax.set_xlim(1E-12, 1)
ax.set_ylim(1E-3, 1)
ax.set_xlabel('$p = 1 - H_B(i)$')
ax.set_ylabel('normalized $C(p)$')
plt.show()
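# Making the Benjamini-Hochberg cutoff explicit for one choice of epsilon_0: find the largest
# index i at which the sorted p values still satisfy p_(i) <= i * epsilon_0 / N (the last
# point where the cumulative curve lies above the corresponding dashed line), and treat
# everything below that p value as a detection. Uses p_sorted from this cell.
epsilon_0 = 0.1
i_vals = np.arange(1, len(p_sorted) + 1)
passing = np.where(p_sorted <= i_vals * epsilon_0 / len(p_sorted))[0]
i_cut = passing[-1] if len(passing) else 0
print("p cutoff ~ %.2e, points selected as sources: %d of %d"
      % (p_sorted[i_cut], i_cut + 1, len(p_sorted)))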
"""
Explanation: Benjamini and Hochberg Method
Assign a p value to each data point based on the background model. There will be an excess of low p values corresponding to the sources.
If there were only background, the cumulative distribution of p values would be uniform.
Threshold:
$C(p_c) = N p_c /\epsilon_o$
Figure 4.6
End of explanation
"""
#1-sample KS test
N1 = 100
vals1 = np.random.normal(loc = 0,scale = 1,size = N1)
x1 = np.sort(vals1);
y1 = np.arange(0.,N1)/N1
plt.figure(figsize = (10,10))
plt.plot(x1,y1,'b-',lw = 3)
D,p = scipy.stats.kstest(vals1,"norm")
plt.text(-3,0.9,'D= '+str(D)[:5],fontsize = 24)
plt.text(-3,0.8,'p= '+str(p)[:5],fontsize = 24)
plt.xlim(-3.5,3.5);
#2 sample KS test:
#drawing from a normal distribution
N1 = 1000
vals1 = np.random.normal(loc = 0,scale = 1,size = N1)
x1 = np.sort(vals1)
y1 = np.arange(0.,N1)/N1
#drawing from a uniform distribution
N2 = 1000
vals2 = np.random.rand(N2)*4-2
x2 = np.sort(vals2)
y2 = np.arange(0.,N2)/N2
#plotting and KS test
plt.figure(figsize = (10,10))
plt.plot(x1,y1,'b-',lw = 3)
plt.plot(x2,y2,'g--',lw = 3)
D,p = scipy.stats.ks_2samp(vals1,vals2)
plt.text(-3,0.9,'D= '+str(D)[:5],fontsize = 24)
if str(p)[-4]=='e':
plt.text(-3,0.8,'p= '+str(p)[:4]+str(p)[-4:],fontsize = 24)
else:
plt.text(-3,0.8,'p= '+str(p)[:6],fontsize = 24)
plt.xlim(-3.5,3.5);
#Drawing from a GMM
from sklearn.mixture import GMM
N1=1000
np.random.seed(1)
gmm = GMM(3, n_iter=1)
gmm.means_ = np.array([[-1], [0], [1.5]])
gmm.covars_ = np.array([[1.5], [1], [0.5]]) ** 2
gmm.weights_ = np.array([0.1, 0.8, 0.1])
vals1 = gmm.sample(N1).T[0]
x1 = np.sort(vals1)
y1 = np.arange(0.,N1)/N1
#Drawing from a normal distribution
N2 = 100000
vals2 = np.random.normal(loc = 0,scale = 1,size = N2)
x2 = np.sort(vals2)
y2 = np.arange(0.,N2)/N2
#plotting and KS test
plt.figure(figsize = (10,10))
plt.plot(x1,y1,'b-',lw = 3)
plt.plot(x2,y2,'g--',lw = 3)
D,p = scipy.stats.ks_2samp(vals1,vals2)
plt.text(-3,0.9,'D= '+str(D)[:5],fontsize = 24)
if str(p)[-4]=='e':
plt.text(-3,0.8,'p= '+str(p)[:4]+str(p)[-4:],fontsize = 24)
else:
plt.text(-3,0.8,'p= '+str(p)[:6],fontsize = 24)
plt.xlim(-3.5,3.5);
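# Computing the two-sample KS statistic "by hand" for the samples above, as a sanity check on
# ks_2samp: evaluate both empirical CDFs on the pooled sample and take the maximum absolute
# difference. D_manual should agree with the D returned by scipy (up to floating point).
pooled = np.concatenate([vals1, vals2])
ecdf1 = np.searchsorted(np.sort(vals1), pooled, side='right') / float(len(vals1))
ecdf2 = np.searchsorted(np.sort(vals2), pooled, side='right') / float(len(vals2))
D_manual = np.abs(ecdf1 - ecdf2).max()
D_scipy, p_scipy = scipy.stats.ks_2samp(vals1, vals2)
print("D by hand = %.5f, D from scipy = %.5f" % (D_manual, D_scipy))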
"""
Explanation: 4.7 - Comparing Distributions
Nonparametric tests
Kolmogorov-Smirnov Test -
The KS test measures the maximum distance between the cumulative distributions of two samples, or of one sample and one distribution (equivalent to drawing an infinitely large sample). The relevant test statistic is
$$D = \max_x(|F_1(x) - F_2(x)|)$$
where $F_n(x)$ is the cumulative distribution of sample n. If the underlying distributions being drawn from are the same, the probability of finding D larger than a given value is given by
$$Q_{KS}(\lambda) = 2 \sum\limits_{k=1}^{\infty} (-1)^{k-1}e^{-2k^2\lambda^2}$$
where
$$\lambda = (0.12+\sqrt{n_e} +\frac{0.11}{\sqrt{n_e}})\times D$$
and
$$n_e = \frac{N_1N_2}{N_1+N_2}$$
if $n_e \gtrsim 10$, we can use
$$D_{KS,crit} = \frac{C(\alpha)}{\sqrt{n_e}}$$
which establishes a critical D above which we reject the null hypothesis that the samples are drawn from the same distribution. This critical D depends on alpha, the probability threshold below which we reject, and inversely on sample size, such that with larger samples it becomes "easier" to reject the null hypothesis. If we are comparing a single sample to a distribution, $n_e$ becomes
$$\lim_{N_2\to\infty}n_e = N_1$$
Note that, as with many statistical tests, there is a built-in scipy package that performs the test
End of explanation
"""
#Drawing from a GMM
from sklearn.mixture import GMM
N1=100
np.random.seed(1)
gmm = GMM(3, n_iter=1)
gmm.means_ = np.array([[-1], [0.5], [1.5]])
gmm.covars_ = np.array([[1.5], [1], [0.5]]) ** 2
gmm.weights_ = np.array([0.1, 0.8, 0.1])
vals1 = gmm.sample(N1).T[0]
x1 = np.sort(vals1)
y1 = np.arange(0.,N1)/N1
#Drawing from a normal distribution
N2 = 100
vals2 = np.random.normal(loc = 0,scale = 1,size = N2)
x2 = np.sort(vals2)
y2 = np.arange(0.,N2)/N2
#plotting and U test
plt.figure(figsize = (10,10))
plt.plot(x1,y1,'b-',lw = 3)
plt.plot(x2,y2,'g--',lw = 3)
U,p = scipy.stats.mannwhitneyu(vals2,vals1)
s = str(U)
s1 = s.index('.')
plt.text(-3,0.9,'U= '+str(U)[:s1+2],fontsize = 24)
plt.text(-3,0.8,r'$\mu_U$ = '+str(N1*N2/2),fontsize = 24)
if str(p)[-4]=='e':
plt.text(-3,0.7,'p= '+str(p)[:4]+str(p)[-4:],fontsize = 24)
else:
plt.text(-3,0.7,'p= '+str(p)[:6],fontsize = 24)
plt.xlim(-3.5,3.5);
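# Verifying the small worked example from the text of this section (combined order
# x1,x2,y1,x3,y2,y3,y4,x4): for each x, count the y values that come before it, and vice
# versa. This should give U_x = 5 and U_y = 11, with U_x + U_y = N1*N2 = 16.
order = ['x', 'x', 'y', 'x', 'y', 'y', 'y', 'x']
U_x = sum(order[:i].count('y') for i, lab in enumerate(order) if lab == 'x')
U_y = sum(order[:i].count('x') for i, lab in enumerate(order) if lab == 'y')
print("U_x = %d, U_y = %d, U_x + U_y = %d" % (U_x, U_y, U_x + U_y))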
"""
Explanation: U test -
The KS test is sensitive to all differences in the data (e.g. location, shape, etc.). What if we want to probe differences in, say, only location? We could use the Mann-Whitney-Wilcoxon test, or U test. If we have two samples, {$x_i$} and {$y_i$}, we concatenate and sort them. Then for each $x_i$ we count the number of lower-rank $y_i$ and sum these counts. As an example, if our combined list is
$x_1,x_2,y_1,x_3,y_2,y_3,y_4,x_4$
then $U_x = 0+0+1+4 = 5$ and $U_y = 2+3+3+3 = 11$ (Notice that $U_x+U_y = N_1N_2$)
In the large sample limit, U is a Gaussian variable with $$\mu_U = \frac{N_1N_2}{2}$$ and $$\sigma_U = \sqrt{\frac{N_1N_2(N_1+N_2+1)}{12}}$$,
If sample size is large, a quicker estimate of U may be obtained by
$$ U_x = \sum\limits_{i=1}^{N_1} {\rm rank}(x_i) - \frac{N_1(N_1+1)}{2} $$
where $\rm rank$$(x_i)$ refers to the integer rank order of a given datapoint. For example, $\rm rank$$(x_3$) = 4 above.
Below we compare the means of two samples to each other using the U test
End of explanation
"""
x_true = np.random.normal(5,3,10000)
y_true = np.random.normal(5,4,10000)
plt.figure(figsize=(10,10))
plt.plot(x_true,y_true,'k,')
plt.xlim(-10,20)
plt.ylim(-15,25)
plt.title('True Distribution')
selection_fn = y_true<12-x_true
x=x_true[selection_fn]
y=y_true[selection_fn]
plt.figure(figsize=(10,10))
plt.plot(x,y,'k,')
plt.xlim(-10,20)
plt.ylim(-15,25)
plt.title('Observed Distribution')
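# A quick illustration of the two-sample t test and F test described in this section, on two
# fresh Gaussian samples (kept separate from the truncated x, y data used for the C- method).
# scipy's ttest_ind compares the means; the F statistic is the ratio of sample variances,
# with a two-sided p value from the Fisher F distribution.
a_par = np.random.normal(0, 1, 100)
b_par = np.random.normal(0.3, 1.2, 100)
t_stat, t_p = scipy.stats.ttest_ind(a_par, b_par)
F_stat = np.var(a_par, ddof=1) / np.var(b_par, ddof=1)
F_p = 2 * min(scipy.stats.f.cdf(F_stat, len(a_par) - 1, len(b_par) - 1),
              scipy.stats.f.sf(F_stat, len(a_par) - 1, len(b_par) - 1))
print("t = %.2f (p = %.3g), F = %.2f (p = %.3g)" % (t_stat, t_p, F_stat, F_p))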
"""
Explanation: Parametric Methods
Parametric tests have the benefit of being more efficient than non-parametric tests, however the gain in efficiency can be small and not worth the trade-off of needing to know the form of the underlying distribution. Nevertheless, here are two methods of comparing sample statistics if the form of the samples is known to be Gaussian.
t and f tests-
If the two samples have the same known $\sigma$, then the difference in the sample means, $\Delta$, is a Gaussian variable with $\mu_\Delta = 0$ under the null hypothesis and $\sigma_\Delta = \sigma \sqrt{1/N_{1} + 1/N_{2}}$.
If $\sigma$ is unknown but known to be equal between the samples, then $\sigma_\Delta = \sqrt{s_{12}^{2}(1/N_{1} + 1/N_{2})}$, where $s_{12}$ is an estimate of the common standard deviation of the samples:
$$s_{12} = \sqrt{\frac{(N_1-1)s_{1}^{2} + (N_2-1)s_{2}^{2}}{N_1+N_2-2}}$$
where $s_{1}$ and $s_{2}$ are the sample standard deviations.
The F test compares the variance between two samples and is simply the ratio of the sample variances. Under the null hypothesis, this ratio follows a Fisher F distribution.
Selection Effects
Lynden-Bell's $C^-$ method
With truncated or censored data, it is often useful to recover the true underlying distribution. This is easily visualized in one dimension: if $f(x)$ is the observed distribution and $s(x)$ is the selection function, then the implied true distribution is $h(x) = f(x)/s(x)$. An example of such a correction in higher dimensions is the $1/V_{\rm max}$ correction frequently applied to galaxy counts (i.e. galaxies only visible nearby are weighted more heavily). A better solution is Lynden-Bell's $C^-$ method, illustrated here. We will consider the toy case of a two-dimensional Gaussian with a selection function defined such that S(x,y) = 1 below the line y = 12-x and S(x,y) = 0 above it.
End of explanation
"""
R,N = [],[]
for i in range(len(x)):
y_max = 12-x[i]
sel_J = np.array([(x[ind] <= x[i])&(y[ind] < y_max) for ind in range(len(x))])
x_j,y_j = x[sel_J],y[sel_J]
if i ==0:
plt.figure(figsize=(10,10))
plt.plot(x_j,y_j,'k,')
plt.scatter([x[i]],[y[i]],s=49)
plt.xlim(-10,20)
plt.ylim(-15,25)
y_js = np.sort(y_j)
R_i = list(y_js).index(y[i])+1
N_i = len(y_js)
R.append(R_i)
N.append(N_i)
tau = sum(np.array(R)*1. -np.array(N)/2.)/np.sqrt(sum((np.array(N)**2)/12))
print tau
"""
Explanation: In order to use the method, we must determine whether f(x,y) is separable, that is $f(x,y) = \psi(x)\rho(y)$. To do this, we follow this procedure:
For each $x_i$, we define a set of data points $J_i$ such that every point in the set has $x_j \lt x_i$ and $y_j \lt y_{max,i}$. The number of points in this set is $N_j$.
Sort $J_i$ by $y_j$, giving each element in the set a rank $R_j$.
Define the rank of $y_i$ as $R_i$
If x and y are independent, $R_i$ must be distributed uniformly over $N_i$. We can define a test statistic
$$\tau = \frac{\sum_{i}^{} (R_i - N_i/2)}{\sqrt{\sum_{i}^{} N_i^2/12}}$$
If $\tau \lt 1$, x and y are independent at the $1\sigma$ level
We do this explicitly below:
End of explanation
"""
argy=argsort(x)
x_s =x[argy]
N_s =np.array(N)[argy]
Nk=1.+1./N_s
Nk[0] = 1
phi = np.array([prod(Nk[:i]) for i in range(len(Nk))])
phi = phi/phi[-1]
#for i in np.arange(len(x)-1)+1:
plt.figure(figsize = (10,10))
plt.plot(x_s,phi)
plt.xlabel('x');
plt.ylabel(r'$\Phi$');
"""
Explanation: Since $\tau \lt 1$, we see that x and y are consistent with being independent. We now define the cumulative functions $\Phi(x) = \int\limits_{-\infty}^{x} \psi(x') dx'$ and $\Sigma(y) = \int\limits_{-\infty}^{y} \rho(y') dy'$. The Lynden-Bell paper showed that $\Phi(x_i) = \Phi(x_1) \prod\limits_{k=2}^{i} (1+1/N_k)$, defined on a grid of unequal spacing given by {$x_i$}. Here we require {$x_i$} to be sorted.
End of explanation
"""
yp = np.arange(0,1,.0001)
xp = np.interp(yp,phi,x_s)
plt.figure(figsize = (10,10))
plt.hist(xp,normed = 1,histtype = 'step',label = 'Lyndel-Bell $C^-$',bins = 20,lw=3);
plt.hist(x,normed = 1,histtype = 'step',label = 'Observed',bins =20,lw=3);
plt.hist(x_true,normed = 1,histtype = 'step',label = 'True',bins =20,lw=3);
plt.xlabel('x');
plt.legend(loc = 2);
"""
Explanation: To get the differential distribution function, we interpolate and bin along the x axis:
End of explanation
"""
Rk,M = [],[]
for i in range(len(y)):
x_max = 12-y[i]
sel_J = np.array([(x[ind] <= x_max)&(y[ind] < y[i]) for ind in range(len(y))])
x_j,y_j = x[sel_J],y[sel_J]
if i ==0:
plt.figure(figsize=(10,10))
plt.plot(x_j,y_j,'k,')
plt.scatter([x[i]],[y[i]],s=49)
plt.xlim(-10,20)
plt.ylim(-15,25)
M_k = len(y_j)
M.append(M_k)
argy=argsort(y)
y_s =y[argy]
M_s =np.array(M)[argy]
Mk=1.+1./M_s
Mk[0] = 1
sigma= np.array([prod(Mk[:i]) for i in range(len(Nk))])
sigma = sigma/sigma[-1]
#for i in np.arange(len(x)-1)+1:
plt.figure(figsize = (10,10))
plt.plot(y_s,sigma)
plt.xlabel('y');
plt.ylabel(r'$\Sigma$');
yp = np.arange(0,1,.0001)
xp = np.interp(yp,sigma,y_s)
plt.figure(figsize = (10,10))
plt.hist(xp,normed = 1,histtype = 'step',label = 'Lyndel-Bell $C^-$',bins = 20,lw=3);
plt.hist(y,normed = 1,histtype = 'step',label = 'Observed',bins =20,lw=3);
plt.hist(y_true,normed = 1,histtype = 'step',label = 'True',bins =20,lw=3);
plt.xlabel('y');
plt.legend(loc = 2);
"""
Explanation: To find the distribution in y, we define sets $J_k$ such that every point in the set has $x_j \lt x_{max,k}$ and $y_j \lt y_{k}$, and count the points $M_k$ in each $J_k$
End of explanation
"""
|
samirma/deep-learning | gradient-descent/GradientDescent.ipynb | mit | import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
#Some helper functions for plotting and drawing lines
def plot_points(X, y):
admitted = X[np.argwhere(y==1)]
rejected = X[np.argwhere(y==0)]
plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'blue', edgecolor = 'k')
plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'red', edgecolor = 'k')
def display(m, b, color='g--'):
plt.xlim(-0.05,1.05)
plt.ylim(-0.05,1.05)
x = np.arange(-10, 10, 0.1)
plt.plot(x, m*x+b, color)
"""
Explanation: Implementing the Gradient Descent Algorithm
In this lab, we'll implement the basic functions of the Gradient Descent algorithm to find the boundary in a small dataset. First, we'll start with some functions that will help us plot and visualize the data.
End of explanation
"""
data = pd.read_csv('data.csv', header=None)
X = np.array(data[[0,1]])
y = np.array(data[2])
plot_points(X,y)
plt.show()
"""
Explanation: Reading and plotting the data
End of explanation
"""
# Implement the following functions
# Activation (sigmoid) function
def sigmoid(x):
pass
# Output (prediction) formula
def output_formula(features, weights, bias):
pass
# Error (log-loss) formula
def error_formula(y, output):
pass
# Gradient descent step
def update_weights(x, y, weights, bias, learnrate):
pass
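# One possible reference implementation of the four functions above (a spoiler for the TODO,
# so try your own version first). It follows the formulas given in the text directly; np.dot
# lets output_formula work both on a single point and on the whole feature matrix.
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def output_formula(features, weights, bias):
    return sigmoid(np.dot(features, weights) + bias)

def error_formula(y, output):
    return -y * np.log(output) - (1 - y) * np.log(1 - output)

def update_weights(x, y, weights, bias, learnrate):
    output = output_formula(x, weights, bias)
    d_error = y - output
    weights = weights + learnrate * d_error * x
    bias = bias + learnrate * d_error
    return weights, bias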
"""
Explanation: TODO: Implementing the basic functions
Here is your turn to shine. Implement the following formulas, as explained in the text.
- Sigmoid activation function
$$\sigma(x) = \frac{1}{1+e^{-x}}$$
Output (prediction) formula
$$\hat{y} = \sigma(w_1 x_1 + w_2 x_2 + b)$$
Error function
$$Error(y, \hat{y}) = - y \log(\hat{y}) - (1-y) \log(1-\hat{y})$$
The function that updates the weights
$$ w_i \longrightarrow w_i + \alpha (y - \hat{y}) x_i$$
$$ b \longrightarrow b + \alpha (y - \hat{y})$$
End of explanation
"""
np.random.seed(44)
epochs = 100
learnrate = 0.01
def train(features, targets, epochs, learnrate, graph_lines=False):
errors = []
n_records, n_features = features.shape
last_loss = None
weights = np.random.normal(scale=1 / n_features**.5, size=n_features)
bias = 0
for e in range(epochs):
del_w = np.zeros(weights.shape)
for x, y in zip(features, targets):
output = output_formula(x, weights, bias)
error = error_formula(y, output)
weights, bias = update_weights(x, y, weights, bias, learnrate)
# Printing out the log-loss error on the training set
out = output_formula(features, weights, bias)
loss = np.mean(error_formula(targets, out))
errors.append(loss)
if e % (epochs / 10) == 0:
print("\n========== Epoch", e,"==========")
if last_loss and last_loss < loss:
print("Train loss: ", loss, " WARNING - Loss Increasing")
else:
print("Train loss: ", loss)
last_loss = loss
predictions = out > 0.5
accuracy = np.mean(predictions == targets)
print("Accuracy: ", accuracy)
if graph_lines and e % (epochs / 100) == 0:
display(-weights[0]/weights[1], -bias/weights[1])
# Plotting the solution boundary
plt.title("Solution boundary")
display(-weights[0]/weights[1], -bias/weights[1], 'black')
# Plotting the data
plot_points(features, targets)
plt.show()
# Plotting the error
plt.title("Error Plot")
plt.xlabel('Number of epochs')
plt.ylabel('Error')
plt.plot(errors)
plt.show()
"""
Explanation: Training function
This function will help us iterate the gradient descent algorithm through all the data, for a number of epochs. It will also plot the data, and some of the boundary lines obtained as we run the algorithm.
End of explanation
"""
train(X, y, epochs, learnrate, True)
"""
Explanation: Time to train the algorithm!
When we run the function, we'll obtain the following:
- 10 updates with the current training loss and accuracy
- A plot of the data and some of the boundary lines obtained. The final one is in black. Notice how the lines get closer and closer to the best fit, as we go through more epochs.
- A plot of the error function. Notice how it decreases as we go through more epochs.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.24/_downloads/cf9b035ec9fdf9fb55b24e8c3a75ad55/psf_ctf_vertices.ipynb | bsd-3-clause | # Authors: Olaf Hauk <[email protected]>
# Alexandre Gramfort <[email protected]>
#
# License: BSD-3-Clause
import mne
from mne.datasets import sample
from mne.minimum_norm import (make_inverse_resolution_matrix, get_cross_talk,
get_point_spread)
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects/'
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
fname_cov = data_path + '/MEG/sample/sample_audvis-cov.fif'
fname_evo = data_path + '/MEG/sample/sample_audvis-ave.fif'
# read forward solution
forward = mne.read_forward_solution(fname_fwd)
# forward operator with fixed source orientations
mne.convert_forward_solution(forward, surf_ori=True,
force_fixed=True, copy=False)
# noise covariance matrix
noise_cov = mne.read_cov(fname_cov)
# evoked data for info
evoked = mne.read_evokeds(fname_evo, 0)
# make inverse operator from forward solution
# fixed source orientation (loose=0.; the forward solution was converted to fixed above)
inverse_operator = mne.minimum_norm.make_inverse_operator(
info=evoked.info, forward=forward, noise_cov=noise_cov, loose=0.,
depth=None)
# regularisation parameter
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = 'MNE' # can be 'MNE' or 'sLORETA'
# compute resolution matrix for sLORETA
rm_lor = make_inverse_resolution_matrix(forward, inverse_operator,
method='sLORETA', lambda2=lambda2)
# get PSF and CTF for sLORETA at one vertex
sources = [1000]
stc_psf = get_point_spread(rm_lor, forward['src'], sources, norm=True)
stc_ctf = get_cross_talk(rm_lor, forward['src'], sources, norm=True)
del rm_lor
"""
Explanation: Plot point-spread functions (PSFs) and cross-talk functions (CTFs)
Visualise PSF and CTF at one vertex for sLORETA.
End of explanation
"""
# Which vertex corresponds to selected source
vertno_lh = forward['src'][0]['vertno']
verttrue = [vertno_lh[sources[0]]] # just one vertex
# find vertices with maxima in PSF and CTF
vert_max_psf = vertno_lh[stc_psf.data.argmax()]
vert_max_ctf = vertno_lh[stc_ctf.data.argmax()]
brain_psf = stc_psf.plot('sample', 'inflated', 'lh', subjects_dir=subjects_dir)
brain_psf.show_view('ventral')
brain_psf.add_text(0.1, 0.9, 'sLORETA PSF', 'title', font_size=16)
# True source location for PSF
brain_psf.add_foci(verttrue, coords_as_verts=True, scale_factor=1., hemi='lh',
color='green')
# Maximum of PSF
brain_psf.add_foci(vert_max_psf, coords_as_verts=True, scale_factor=1.,
hemi='lh', color='black')
"""
Explanation: Visualize
PSF:
End of explanation
"""
brain_ctf = stc_ctf.plot('sample', 'inflated', 'lh', subjects_dir=subjects_dir)
brain_ctf.add_text(0.1, 0.9, 'sLORETA CTF', 'title', font_size=16)
brain_ctf.show_view('ventral')
brain_ctf.add_foci(verttrue, coords_as_verts=True, scale_factor=1., hemi='lh',
color='green')
# Maximum of CTF
brain_ctf.add_foci(vert_max_ctf, coords_as_verts=True, scale_factor=1.,
hemi='lh', color='black')
"""
Explanation: CTF:
End of explanation
"""
|
akseshina/dl_course | seminar_3/classwork_1.ipynb | gpl-3.0 | import tensorflow as tf  # imported here so the docstring lookup works when run top-to-bottom
print(tf.nn.softmax_cross_entropy_with_logits.__doc__)
"""
Explanation: Activation functions
Why do we need tf.nn.softmax_cross_entropy_with_logits ?
End of explanation
"""
import tensorflow as tf
import numpy as np  # used below in spp_layer
from keras.layers.advanced_activations import LeakyReLU, PReLU
def LeakyRelu(x, alpha):
return tf.maximum(alpha*x, x)
with tf.Session() as sess:
inp = tf.Variable(initial_value=tf.random_uniform(shape=[5], minval=-5, maxval=5, dtype=tf.float32))
alpha = 0.5
res = LeakyRelu(inp, alpha)
sess.run(tf.global_variables_initializer())
before, after = sess.run([inp, res])
print('before', before)
print('after', after)
def PRelu(x):
alpha = tf.Variable(initial_value=tf.random_normal(shape=x.shape))
return tf.where(x < 0, alpha * x, tf.nn.relu(x))
with tf.Session() as sess:
inp = tf.Variable(initial_value=tf.random_uniform(shape=[5], minval=-5, maxval=5, dtype=tf.float32))
alpha = 0.5
res = PRelu(inp)
sess.run(tf.global_variables_initializer())
before, after = sess.run([inp, res])
print('before', before)
print('after', after)
def spp_layer(input_, levels=[2, 1], name = 'SPP_layer'):
'''Multiple Level SPP layer.
Works for levels=[1, 2, 3, 6].'''
shape = input_.get_shape().as_list()
with tf.variable_scope(name):
pool_outputs = []
for l in levels:
            pool = tf.nn.max_pool(input_, ksize=[1, np.ceil(shape[1] * 1. / l).astype(np.int32),
                                                 np.ceil(shape[2] * 1. / l).astype(np.int32), 1],
                                  strides=[1, np.floor(shape[1] * 1. / l + 1).astype(np.int32),
                                           np.floor(shape[2] * 1. / l + 1).astype(np.int32), 1],
                                  padding='SAME')
            pool_outputs.append(tf.reshape(pool, [shape[0], -1]))
        spp_pool = tf.concat(pool_outputs, 1)  # TF >= 1.0 argument order: values, then axis
return spp_pool
"""
Explanation: Definition:
$softmax(x) = \frac{\exp(x)}{\sum_j \exp(x_j)}$
What do we want:
$layer(x) = \frac{\exp(Wx + b)}{\sum_j \exp((Wx + b)_j)}$
How we did it in practice:
tf.nn.softmax_cross_entropy_with_logits
Why not FullyConnected + SoftMax?
Numeric error!
$\sum_{i=1}^N \log softmax_{y_i}(x_i) = \sum_{i=1}^N \sum_{j=1}^C [y_i = j] \log softmax_j(x_i)$
$= \sum_{i=1}^N \sum_{j=1}^C [y_i = j]\left(x_{ij} - \log \sum_k \exp(x_{ik})\right)$
$= \sum_{i=1}^N \sum_{j=1}^C [y_i = j]\left(x_{ij} - x_{i,\max} - \log \sum_k \exp(x_{ik} - x_{i,\max})\right)$
End of explanation
"""
|
spacedrabbit/PythonBootcamp | Statements Assessment Test.ipynb | mit | st = 'Print only the words that start with s in this sentence'
#Code here
# note: iterating over a string yields characters, so split the sentence into words first
for word in st.split():
letter = word[0].lower()
if letter == 's':
print word
"""
Explanation: Statements Assessment Test
Lets test your knowledge!
Use for, split(), and if to create a Statement that will print out words that start with 's':
End of explanation
"""
#Code Here
range(0, 11, 2)
"""
Explanation: Use range() to print all the even numbers from 0 to 10.
End of explanation
"""
#Code in this cell
[num for num in range(1, 50) if num % 3 == 0]
"""
Explanation: Use List comprehension to create a list of all numbers between 1 and 50 that are divisible by 3.
End of explanation
"""
st = 'Print every word in this sentence that has an even number of letters'
#Code in this cell
for word in st.split():
if len(word) % 2 == 0:
print word
"""
Explanation: Go through the string below and if the length of a word is even print "even!"
End of explanation
"""
#Code in this cell
def fizzbuzz(start, end):
for i in range(start, end):
is_fizzy = i % 3 == 0
is_buzzy = i % 5 == 0
if is_fizzy and not is_buzzy:
print "Fizz"
elif is_buzzy and not is_fizzy:
print "Buzz"
elif is_fizzy and is_buzzy:
print "FizzBuzz"
else:
print i
fizzbuzz(1, 101)  # 1 to 100 inclusive, as the exercise asks
"""
Explanation: Write a program that prints the integers from 1 to 100. But for multiples of three print "Fizz" instead of the number, and for the multiples of five print "Buzz". For numbers which are multiples of both three and five print "FizzBuzz".
End of explanation
"""
st = 'Create a list of the first letters of every word in this string'
#Code in this cell
first_letters = [word[0] for word in st.split()]
print first_letters
"""
Explanation: Use List Comprehension to create a list of the first letters of every word in the string below:
End of explanation
"""
|
deehzee/cs231n | assignment2/BatchNormalization.ipynb | mit | # As usual, a bit of setup
from __future__ import absolute_import, division, print_function
from __future__ import unicode_literals
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import \
eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
# set default size of plots
plt.rcParams['figure.figsize'] = (10.0, 8.0)
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) \
/ (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.iteritems():
print('{}:'.format(k), v.shape)
"""
Explanation: Batch Normalization
One way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization which was recently proposed by [3].
The idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.
The authors of [3] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [3] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.
It is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.
[3] Sergey Ioffe and Christian Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing
Internal Covariate Shift", ICML 2015.
End of explanation
"""
# Check the training-time forward pass by checking means and variances
# of features both before and after batch normalization
# Simulate the forward pass for a two-layer network
N, D1, D2, D3 = 200, 50, 60, 3
X = np.random.randn(N, D1)
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
a = np.maximum(0, X.dot(W1)).dot(W2)
print('Before batch normalization:')
print(' means: ', a.mean(axis=0))
print(' stds: ', a.std(axis=0))
# Means should be close to zero and stds close to one
print('After batch normalization (gamma=1, beta=0)')
a_norm, _ = batchnorm_forward(a, np.ones(D3), np.zeros(D3),
{'mode': 'train'})
print(' mean: ', a_norm.mean(axis=0))
print(' std: ', a_norm.std(axis=0))
# Now means should be close to beta and stds close to gamma
gamma = np.asarray([1.0, 2.0, 3.0])
beta = np.asarray([11.0, 12.0, 13.0])
a_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})
print('After batch normalization (nontrivial gamma, beta)')
print(' means: ', a_norm.mean(axis=0))
print(' stds: ', a_norm.std(axis=0))
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
N, D1, D2, D3 = 200, 50, 60, 3
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
bn_param = {'mode': 'train'}
gamma = np.ones(D3)
beta = np.zeros(D3)
for t in xrange(50):
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
batchnorm_forward(a, gamma, beta, bn_param)
bn_param['mode'] = 'test'
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
a_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print('After batch normalization (test-time):')
print(' means: ', a_norm.mean(axis=0))
print(' stds: ', a_norm.std(axis=0))
"""
Explanation: Batch normalization: Forward
In the file cs231n/layers.py, implement the batch normalization forward pass in the function batchnorm_forward. Once you have done so, run the following to test your implementation.
End of explanation
"""
# Gradient check batchnorm backward pass
N, D = 4, 5
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
fx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: batchnorm_forward(x, gamma, beta, bn_param)[0]
fb = lambda b: batchnorm_forward(x, gamma, beta, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma, dout)
db_num = eval_numerical_gradient_array(fb, beta, dout)
_, cache = batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = batchnorm_backward(dout, cache)
print('dx error: ', rel_error(dx_num, dx))
print('dgamma error: ', rel_error(da_num, dgamma))
print('dbeta error: ', rel_error(db_num, dbeta))
"""
Explanation: Batch Normalization: backward
Now implement the backward pass for batch normalization in the function batchnorm_backward.
To derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.
Once you have finished, run the following to numerically check your backward pass.
End of explanation
"""
N, D = 100, 500
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
out, cache = batchnorm_forward(x, gamma, beta, bn_param)
t1 = time.time()
dx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)
t2 = time.time()
dx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)
t3 = time.time()
print('dx difference:', rel_error(dx1, dx2))
print('dgamma difference:', rel_error(dgamma1, dgamma2))
print('dbeta difference:', rel_error(dbeta1, dbeta2))
print('speedup: {:.2f}x'.format((t2 - t1) / (t3 - t2)))
"""
Explanation: Batch Normalization: alternative backward
In class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For the sigmoid function, it turns out that you can derive a very simple formula for the backward pass by simplifying gradients on paper.
Surprisingly, it turns out that you can also derive a simple expression for the batch normalization backward pass if you work out derivatives on paper and simplify. After doing so, implement the simplified batch normalization backward pass in the function batchnorm_backward_alt and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.
NOTE: You can still complete the rest of the assignment if you don't figure this part out, so don't worry too much if you can't get it.
End of explanation
"""
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print('Running check with reg =', reg)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2,
dtype=np.float64, use_batchnorm=True)
loss, grads = model.loss(X, y)
print('Initial loss:', loss)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name],
verbose=False, h=1e-5)
print('{} relative error: {:.2e}'.format(
name, rel_error(grad_num, grads[name])))
if reg == 0:
print('')
"""
Explanation: Fully Connected Nets with Batch Normalization
Now that you have a working implementation for batch normalization, go back to your FullyConnectedNet in the file cs231n/classifiers/fc_net.py. Modify your implementation to add batch normalization.
Concretely, when the flag use_batchnorm is True in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation.
HINT: You might find it useful to define an additional helper layer similar to those in the file cs231n/layer_utils.py. If you decide to do so, do it in the file cs231n/classifiers/fc_net.py.
End of explanation
"""
# Try training a very deep net with batchnorm
hidden_dims = [100, 100, 100, 100, 100]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 2e-2
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale,
use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale,
use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
print('With batch normalization...')
bn_solver.train()
print()
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
print('Without batch normalization...')
solver.train()
"""
Explanation: Batchnorm for deep networks
Run the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.
End of explanation
"""
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label='baseline')
plt.plot(bn_solver.loss_history, 'o', label='batchnorm')
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label='baseline')
plt.plot(bn_solver.train_acc_history, '-o', label='batchnorm')
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label='baseline')
plt.plot(bn_solver.val_acc_history, '-o', label='batchnorm')
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
"""
Explanation: Run the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.
End of explanation
"""
# Try training a very deep net with batchnorm
hidden_dims = [50, 50, 50, 50, 50, 50, 50]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
bn_solvers = {}
solvers = {}
weight_scales = np.logspace(-4, 0, num=20)
for i, weight_scale in enumerate(weight_scales):
print('Running weight scale {} / {}'.format(
i + 1, len(weight_scales)))
bn_model = FullyConnectedNet(hidden_dims,
weight_scale=weight_scale,
use_batchnorm=True)
model = FullyConnectedNet(hidden_dims,
weight_scale=weight_scale,
use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
bn_solver.train()
bn_solvers[weight_scale] = bn_solver
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
solver.train()
solvers[weight_scale] = solver
# Plot results of weight scale experiment
best_train_accs, bn_best_train_accs = [], []
best_val_accs, bn_best_val_accs = [], []
final_train_loss, bn_final_train_loss = [], []
for ws in weight_scales:
best_train_accs.append(max(solvers[ws].train_acc_history))
bn_best_train_accs.append(max(bn_solvers[ws].train_acc_history))
best_val_accs.append(max(solvers[ws].val_acc_history))
bn_best_val_accs.append(max(bn_solvers[ws].val_acc_history))
final_train_loss.append(np.mean(solvers[ws].loss_history[-100:]))
bn_final_train_loss.append(np.mean(
bn_solvers[ws].loss_history[-100:]))
plt.subplot(3, 1, 1)
plt.title('Best val accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best val accuracy')
plt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
plt.title('Best train accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best training accuracy')
plt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')
plt.legend()
plt.subplot(3, 1, 3)
plt.title('Final training loss vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Final training loss')
plt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')
plt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')
plt.legend()
plt.gcf().set_size_inches(10, 15)
plt.show()
"""
Explanation: Batch normalization and initialization
We will now run a small experiment to study the interaction of batch normalization and weight initialization.
The first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second cell will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.
End of explanation
"""
|
rashikaranpuria/Machine-Learning-Specialization | Regression/Assignmet_five/week-5-lasso-assignment-1-blank.ipynb | mit | import graphlab
"""
Explanation: Regression Week 5: Feature Selection and LASSO (Interpretation)
In this notebook, you will use LASSO to select features, building on a pre-implemented solver for LASSO (using GraphLab Create, though you can use other solvers). You will:
* Run LASSO with different L1 penalties.
* Choose best L1 penalty using a validation set.
* Choose best L1 penalty using a validation set, with additional constraint on the size of subset.
In the second notebook, you will implement your own LASSO solver, using coordinate descent.
Fire up graphlab create
End of explanation
"""
sales = graphlab.SFrame('kc_house_data.gl/kc_house_data.gl')
"""
Explanation: Load in house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
End of explanation
"""
from math import log, sqrt
sales['sqft_living_sqrt'] = sales['sqft_living'].apply(sqrt)
sales['sqft_lot_sqrt'] = sales['sqft_lot'].apply(sqrt)
sales['bedrooms_square'] = sales['bedrooms']*sales['bedrooms']
# In the dataset, 'floors' was defined with type string,
# so we'll convert them to float, before creating a new feature.
sales['floors'] = sales['floors'].astype(float)
sales['floors_square'] = sales['floors']*sales['floors']
"""
Explanation: Create new features
As in Week 2, we consider features that are some transformations of inputs.
End of explanation
"""
all_features = ['bedrooms', 'bedrooms_square',
'bathrooms',
'sqft_living', 'sqft_living_sqrt',
'sqft_lot', 'sqft_lot_sqrt',
'floors', 'floors_square',
'waterfront', 'view', 'condition', 'grade',
'sqft_above',
'sqft_basement',
'yr_built', 'yr_renovated']
"""
Explanation: Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this variable will mostly affect houses with many bedrooms.
On the other hand, taking square root of sqft_living will decrease the separation between big house and small house. The owner may not be exactly twice as happy for getting a house that is twice as big.
Learn regression weights with L1 penalty
Let us fit a model with all the features available, plus the features we just created above.
End of explanation
"""
model_all = graphlab.linear_regression.create(sales, target='price', features=all_features,
validation_set=None,
l2_penalty=0., l1_penalty=1e10)
"""
Explanation: Applying L1 penalty requires adding an extra parameter (l1_penalty) to the linear regression call in GraphLab Create. (Other tools may have separate implementations of LASSO.) Note that it's important to set l2_penalty=0 to ensure we don't introduce an additional L2 penalty.
End of explanation
"""
model_all.get("coefficients").print_rows(num_rows=18, num_columns=3)
model_all['coefficients']['value'].nnz()
"""
Explanation: Find what features had non-zero weight.
End of explanation
"""
(training_and_validation, testing) = sales.random_split(.9,seed=1) # initial train/test split
(training, validation) = training_and_validation.random_split(0.5, seed=1) # split training into train and validate
"""
Explanation: Note that a majority of the weights have been set to zero. So by setting an L1 penalty that's large enough, we are performing a subset selection.
QUIZ QUESTION:
According to this list of weights, which of the features have been chosen?
Selecting an L1 penalty
To find a good L1 penalty, we will explore multiple values using a validation set. Let us do three way split into train, validation, and test sets:
* Split our sales data into 2 sets: training and test
* Further split our training data into two sets: train, validation
Be very careful that you use seed = 1 to ensure you get the same answer!
End of explanation
"""
import numpy as np
def get_RSS(prediction, output):
residual = output - prediction
# square the residuals and add them up
RS = residual*residual
RSS = RS.sum()
return(RSS)
for l1_penalty in np.logspace(1, 7, num=13):
A = 0
print l1_penalty
model_all = graphlab.linear_regression.create(training, target='price', features=all_features,
validation_set=None, verbose = False,
l2_penalty=0, l1_penalty=l1_penalty)
predictions=model_all.predict(validation)
A = get_RSS(predictions,validation['price'])
print A
model_all = graphlab.linear_regression.create(testing, target='price', features=all_features,
validation_set=None, verbose = False,
l2_penalty=0, l1_penalty=10)
model_all['coefficients']['value'].nnz()
"""
Explanation: Next, we write a loop that does the following:
* For l1_penalty in [10^1, 10^1.5, 10^2, 10^2.5, ..., 10^7] (to get this in Python, type np.logspace(1, 7, num=13).)
* Fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list.
* Compute the RSS on VALIDATION data (here you will want to use .predict()) for that l1_penalty
* Report which l1_penalty produced the lowest RSS on validation data.
When you call linear_regression.create() make sure you set validation_set = None.
Note: you can turn off the print out of linear_regression.create() with verbose = False
End of explanation
"""
max_nonzeros = 7
"""
Explanation: QUIZ QUESTIONS
1. What was the best value for the l1_penalty?
2. What is the RSS on TEST data of the model with the best l1_penalty?
QUIZ QUESTION
Also, using this value of L1 penalty, how many nonzero weights do you have?
Limit the number of nonzero weights
What if we absolutely wanted to limit ourselves to, say, 7 features? This may be important if we want to derive "a rule of thumb" --- an interpretable model that has only a few features in them.
In this section, you are going to implement a simple, two phase procedure to achive this goal:
1. Explore a large range of l1_penalty values to find a narrow region of l1_penalty values where models are likely to have the desired number of non-zero weights.
2. Further explore the narrow region you found to find a good value for l1_penalty that achieves the desired sparsity. Here, we will again use a validation set to choose the best value for l1_penalty.
End of explanation
"""
l1_penalty_values = np.logspace(8, 10, num=20)
"""
Explanation: Exploring the larger range of values to find a narrow range with the desired sparsity
Let's define a wide range of possible l1_penalty_values:
End of explanation
"""
for l1_penalty in np.logspace(8, 10, num=20):
A = 0
predictions = 0
print l1_penalty
model_all = graphlab.linear_regression.create(training, target='price', features=all_features,
validation_set=None, verbose = False,
l2_penalty=0, l1_penalty=l1_penalty)
predictions=model_all.predict(validation)
A = get_RSS(predictions,validation['price'])
print A
print model_all['coefficients']['value'].nnz()
"""
Explanation: Now, implement a loop that search through this space of possible l1_penalty values:
For l1_penalty in np.logspace(8, 10, num=20):
Fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list. When you call linear_regression.create() make sure you set validation_set = None
Extract the weights of the model and count the number of nonzeros. Save the number of nonzeros to a list.
Hint: model['coefficients']['value'] gives you an SArray with the parameters you learned. If you call the method .nnz() on it, you will find the number of non-zero parameters!
End of explanation
"""
l1_penalty_min = 2976351441.63
l1_penalty_max = 3792690190.73
"""
Explanation: Out of this large range, we want to find the two ends of our desired narrow range of l1_penalty. At one end, we will have l1_penalty values that have too few non-zeros, and at the other end, we will have an l1_penalty that has too many non-zeros.
More formally, find:
* The largest l1_penalty that has more non-zeros than max_nonzero (if we pick a penalty smaller than this value, we will definitely have too many non-zero weights)
* Store this value in the variable l1_penalty_min (we will use it later)
* The smallest l1_penalty that has fewer non-zeros than max_nonzero (if we pick a penalty larger than this value, we will definitely have too few non-zero weights)
* Store this value in the variable l1_penalty_max (we will use it later)
Hint: there are many ways to do this, e.g.:
* Programmatically within the loop above
* Creating a list with the number of non-zeros for each value of l1_penalty and inspecting it to find the appropriate boundaries.
End of explanation
"""
l1_penalty_values = np.linspace(l1_penalty_min,l1_penalty_max,20)
"""
Explanation: QUIZ QUESTIONS
What values did you find for l1_penalty_min andl1_penalty_max?
Exploring the narrow range of values to find the solution with the right number of non-zeros that has lowest RSS on the validation set
We will now explore the narrow region of l1_penalty values we found:
End of explanation
"""
for l1_penalty in np.linspace(l1_penalty_min,l1_penalty_max,20):
A = 0
predictions = 0
model_all = graphlab.linear_regression.create(training, target='price', features=all_features,
validation_set=None, verbose = False,
l2_penalty=0, l1_penalty=l1_penalty)
predictions=model_all.predict(validation)
A = get_RSS(predictions,validation['price'])
if model_all['coefficients']['value'].nnz() <= 7:
print l1_penalty
print A
print model_all['coefficients']['value'].nnz()
"""
Explanation: For l1_penalty in np.linspace(l1_penalty_min,l1_penalty_max,20):
Fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list. When you call linear_regression.create() make sure you set validation_set = None
Measure the RSS of the learned model on the VALIDATION set
Find the model that has the lowest RSS on the VALIDATION set and sparsity equal to max_nonzeros.
End of explanation
"""
model_all = graphlab.linear_regression.create(training, target='price', features=all_features,
validation_set=None, verbose = False,
l2_penalty=0, l1_penalty=3448968612.16)
# model_all['coefficients']['value'].nnz()
# print np.linspace(l1_penalty_min,l1_penalty_max,20)[0]
print model_all['coefficients'].print_rows(num_rows=18, num_columns=3)
"""
Explanation: QUIZ QUESTIONS
1. What value of l1_penalty in our narrow range has the lowest RSS on the VALIDATION set and has sparsity equal to max_nonzeros?
2. What features in this model have non-zero coefficients?
End of explanation
"""
|
AtmaMani/pyChakras | udemy_ml_bootcamp/Machine Learning Sections/Principal-Component-Analysis/Principal Component Analysis.ipynb | mit | import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import seaborn as sns
%matplotlib inline
"""
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Principal Component Analysis
Let's discuss PCA! Since this isn't exactly a full machine learning algorithm, but instead an unsupervised learning algorithm, we will just have a lecture on this topic, but no full machine learning project (although we will walk through the cancer set with PCA).
PCA Review
Make sure to watch the video lecture and theory presentation for a full overview of PCA!
Remember that PCA is just a transformation of your data and attempts to find out what features explain the most variance in your data. For example:
<img src='PCA.png' />
Libraries
End of explanation
"""
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
cancer.keys()
print(cancer['DESCR'])
df = pd.DataFrame(cancer['data'],columns=cancer['feature_names'])
#(['DESCR', 'data', 'feature_names', 'target_names', 'target'])
df.head()
"""
Explanation: The Data
Let's work with the cancer data set again since it had so many features.
End of explanation
"""
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(df)
scaled_data = scaler.transform(df)
"""
Explanation: PCA Visualization
As we've noticed before it is difficult to visualize high dimensional data, we can use PCA to find the first two principal components, and visualize the data in this new, two-dimensional space, with a single scatter-plot. Before we do this though, we'll need to scale our data so that each feature has a single unit variance.
End of explanation
"""
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(scaled_data)
"""
Explanation: PCA with Scikit Learn uses a very similar process to other preprocessing functions that come with SciKit Learn. We instantiate a PCA object, find the principal components using the fit method, then apply the rotation and dimensionality reduction by calling transform().
We can also specify how many components we want to keep when creating the PCA object.
End of explanation
"""
x_pca = pca.transform(scaled_data)
scaled_data.shape
x_pca.shape
"""
Explanation: Now we can transform this data to its first 2 principal components.
End of explanation
"""
plt.figure(figsize=(8,6))
plt.scatter(x_pca[:,0],x_pca[:,1],c=cancer['target'],cmap='plasma')
plt.xlabel('First principal component')
plt.ylabel('Second Principal Component')
"""
Explanation: Great! We've reduced 30 dimensions to just 2! Let's plot these two dimensions out!
End of explanation
"""
pca.components_
"""
Explanation: Clearly by using these two components we can easily separate these two classes.
Interpreting the components
Unfortunately, with this great power of dimensionality reduction comes the cost of losing the ability to easily understand what these components represent.
The components correspond to combinations of the original features, the components themselves are stored as an attribute of the fitted PCA object:
End of explanation
"""
df_comp = pd.DataFrame(pca.components_,columns=cancer['feature_names'])
plt.figure(figsize=(12,6))
sns.heatmap(df_comp,cmap='plasma',)
"""
Explanation: In this numpy matrix array, each row represents a principal component, and each column relates back to the original features. We can visualize this relationship with a heatmap:
End of explanation
"""
|
mattilyra/gensim | docs/notebooks/Corpora_and_Vector_Spaces.ipynb | lgpl-2.1 | import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
import os
import tempfile
TEMP_FOLDER = tempfile.gettempdir()
print('Folder "{}" will be used to save temporary dictionary and corpus.'.format(TEMP_FOLDER))
"""
Explanation: Tutorial 1: Corpora and Vector Spaces
See this gensim tutorial on the web here.
Don’t forget to set:
End of explanation
"""
from gensim import corpora
documents = ["Human machine interface for lab abc computer applications",
"A survey of user opinion of computer system response time",
"The EPS user interface management system",
"System and human system engineering testing of EPS",
"Relation of user perceived response time to error measurement",
"The generation of random binary unordered trees",
"The intersection graph of paths in trees",
"Graph minors IV Widths of trees and well quasi ordering",
"Graph minors A survey"]
"""
Explanation: if you want to see logging events.
From Strings to Vectors
This time, let’s start from documents represented as strings:
End of explanation
"""
# remove common words and tokenize
stoplist = set('for a of the and to in'.split())
texts = [[word for word in document.lower().split() if word not in stoplist]
for document in documents]
# remove words that appear only once
from collections import defaultdict
frequency = defaultdict(int)
for text in texts:
for token in text:
frequency[token] += 1
texts = [[token for token in text if frequency[token] > 1] for text in texts]
from pprint import pprint # pretty-printer
pprint(texts)
"""
Explanation: This is a tiny corpus of nine documents, each consisting of only a single sentence.
First, let’s tokenize the documents, remove common words (using a toy stoplist) as well as words that only appear once in the corpus:
End of explanation
"""
dictionary = corpora.Dictionary(texts)
dictionary.save(os.path.join(TEMP_FOLDER, 'deerwester.dict')) # store the dictionary, for future reference
print(dictionary)
"""
Explanation: Your way of processing the documents will likely vary; here, I only split on whitespace to tokenize, followed by lowercasing each word. In fact, I use this particular (simplistic and inefficient) setup to mimic the experiment done in Deerwester et al.’s original LSA article (Table 2).
The ways to process documents are so varied and application- and language-dependent that I decided to not constrain them by any interface. Instead, a document is represented by the features extracted from it, not by its “surface” string form: how you get to the features is up to you. Below I describe one common, general-purpose approach (called bag-of-words), but keep in mind that different application domains call for different features, and, as always, it’s garbage in, garbage out...
To convert documents to vectors, we’ll use a document representation called bag-of-words. In this representation, each document is represented by one vector where a vector element i represents the number of times the ith word appears in the document.
It is advantageous to represent the words only by their (integer) ids. The mapping between words and ids is called a dictionary:
End of explanation
"""
print(dictionary.token2id)
"""
Explanation: Here we assigned a unique integer ID to all words appearing in the processed corpus with the gensim.corpora.dictionary.Dictionary class. This sweeps across the texts, collecting word counts and relevant statistics. In the end, we see there are twelve distinct words in the processed corpus, which means each document will be represented by twelve numbers (i.e., by a 12-D vector). To see the mapping between words and their ids:
End of explanation
"""
new_doc = "Human computer interaction"
new_vec = dictionary.doc2bow(new_doc.lower().split())
print(new_vec) # the word "interaction" does not appear in the dictionary and is ignored
"""
Explanation: To actually convert tokenized documents to vectors:
End of explanation
"""
corpus = [dictionary.doc2bow(text) for text in texts]
corpora.MmCorpus.serialize(os.path.join(TEMP_FOLDER, 'deerwester.mm'), corpus) # store to disk, for later use
for c in corpus:
print(c)
"""
Explanation: The function doc2bow() simply counts the number of occurrences of each distinct word, converts the word to its integer word id and returns the result as a bag-of-words--a sparse vector, in the form of [(word_id, word_count), ...].
As the token_id is 0 for "human" and 2 for "computer", the new document “Human computer interaction” will be transformed to [(0, 1), (2, 1)]. The words "human" and "computer" exist in the dictionary and each appear once, so they become (0, 1) and (2, 1) respectively in the sparse vector. The word "interaction" doesn't exist in the dictionary and thus will not show up in the sparse vector. The other ten dictionary words, which appear (implicitly) zero times, will not show up either; there will never be an element in the sparse vector like (3, 0).
For people familiar with scikit-learn, doc2bow() has similar behavior to calling transform() on CountVectorizer. doc2bow() can behave like fit_transform() as well. For more details, please look at the gensim API documentation.
End of explanation
"""
from smart_open import smart_open
class MyCorpus(object):
def __iter__(self):
for line in smart_open('datasets/mycorpus.txt', 'rb'):
# assume there's one document per line, tokens separated by whitespace
yield dictionary.doc2bow(line.lower().split())
"""
Explanation: By now it should be clear that the vector feature with id=10 represents the number of times the word "graph" occurs in the document. The answer is “zero” for the first six documents and “one” for the remaining three. As a matter of fact, we have arrived at exactly the same corpus of vectors as in the Quick Example. If you're running this notebook yourself the word IDs may differ, but you should be able to check the consistency between documents by comparing their vectors.
Corpus Streaming – One Document at a Time
Note that the corpus above resides fully in memory, as a plain Python list. In this simple example, it doesn’t matter much, but just to make things clear, let’s assume there are millions of documents in the corpus. Storing all of them in RAM won’t do. Instead, let’s assume the documents are stored in a file on disk, one document per line. Gensim only requires that a corpus be able to return one document vector at a time:
End of explanation
"""
corpus_memory_friendly = MyCorpus() # doesn't load the corpus into memory!
print(corpus_memory_friendly)
"""
Explanation: The assumption that each document occupies one line in a single file is not important; you can design the __iter__ function to fit your input format, whatever that may be - walking directories, parsing XML, accessing network nodes... Just parse your input to retrieve a clean list of tokens in each document, then convert the tokens via a dictionary to their IDs and yield the resulting sparse vector inside __iter__.
End of explanation
"""
for vector in corpus_memory_friendly: # load one vector into memory at a time
print(vector)
"""
Explanation: corpus_memory_friendly is now an object. We didn’t define any way to print it, so print just outputs address of the object in memory. Not very useful. To see the constituent vectors, let’s iterate over the corpus and print each document vector (one at a time):
End of explanation
"""
from six import iteritems
from smart_open import smart_open
# collect statistics about all tokens
dictionary = corpora.Dictionary(line.lower().split() for line in smart_open('datasets/mycorpus.txt', 'rb'))
# remove stop words and words that appear only once
stop_ids = [dictionary.token2id[stopword] for stopword in stoplist
if stopword in dictionary.token2id]
once_ids = [tokenid for tokenid, docfreq in iteritems(dictionary.dfs) if docfreq == 1]
# remove stop words and words that appear only once
dictionary.filter_tokens(stop_ids + once_ids)
print(dictionary)
"""
Explanation: Although the output is the same as for the plain Python list, the corpus is now much more memory friendly, because at most one vector resides in RAM at a time. Your corpus can now be as large as you want.
We are going to create the dictionary from the mycorpus.txt file without loading the entire file into memory. Then, we will generate the list of token ids to remove from this dictionary by querying the dictionary for the token ids of the stop words, and by querying the document frequencies dictionary (dictionary.dfs) for token ids that only appear once. Finally, we will filter these token ids out of our dictionary. Keep in mind that dictionary.filter_tokens (and some other functions such as dictionary.add_document) will call dictionary.compactify() to remove the gaps in the token id series thus enumeration of remaining tokens can be changed.
End of explanation
"""
# create a toy corpus of 2 documents, as a plain Python list
corpus = [[(1, 0.5)], []] # make one document empty, for the heck of it
corpora.MmCorpus.serialize(os.path.join(TEMP_FOLDER, 'corpus.mm'), corpus)
"""
Explanation: And that is all there is to it! At least as far as bag-of-words representation is concerned. Of course, what we do with such a corpus is another question; it is not at all clear how counting the frequency of distinct words could be useful. As it turns out, it isn’t, and we will need to apply a transformation on this simple representation first, before we can use it to compute any meaningful document vs. document similarities. Transformations are covered in the next tutorial, but before that, let’s briefly turn our attention to corpus persistency.
Corpus Formats
There exist several file formats for serializing a Vector Space corpus (~sequence of vectors) to disk. Gensim implements them via the streaming corpus interface mentioned earlier: documents are read from (or stored to) disk in a lazy fashion, one document at a time, without the whole corpus being read into main memory at once.
One of the more notable file formats is the Matrix Market format. To save a corpus in the Matrix Market format:
End of explanation
"""
corpora.SvmLightCorpus.serialize(os.path.join(TEMP_FOLDER, 'corpus.svmlight'), corpus)
corpora.BleiCorpus.serialize(os.path.join(TEMP_FOLDER, 'corpus.lda-c'), corpus)
corpora.LowCorpus.serialize(os.path.join(TEMP_FOLDER, 'corpus.low'), corpus)
"""
Explanation: Other formats include Joachim’s SVMlight format, Blei’s LDA-C format and GibbsLDA++ format.
End of explanation
"""
corpus = corpora.MmCorpus(os.path.join(TEMP_FOLDER, 'corpus.mm'))
"""
Explanation: Conversely, to load a corpus iterator from a Matrix Market file:
End of explanation
"""
print(corpus)
"""
Explanation: Corpus objects are streams, so typically you won’t be able to print them directly:
End of explanation
"""
# one way of printing a corpus: load it entirely into memory
print(list(corpus)) # calling list() will convert any sequence to a plain Python list
"""
Explanation: Instead, to view the contents of a corpus:
End of explanation
"""
# another way of doing it: print one document at a time, making use of the streaming interface
for doc in corpus:
print(doc)
"""
Explanation: or
End of explanation
"""
corpora.BleiCorpus.serialize(os.path.join(TEMP_FOLDER, 'corpus.lda-c'), corpus)
"""
Explanation: The second way is obviously more memory-friendly, but for testing and development purposes, nothing beats the simplicity of calling list(corpus).
To save the same Matrix Market document stream in Blei’s LDA-C format,
End of explanation
"""
import gensim
import numpy as np
numpy_matrix = np.random.randint(10, size=[5,2])
corpus = gensim.matutils.Dense2Corpus(numpy_matrix)
numpy_matrix_dense = gensim.matutils.corpus2dense(corpus, num_terms=10)
"""
Explanation: In this way, gensim can also be used as a memory-efficient I/O format conversion tool: just load a document stream using one format and immediately save it in another format. Adding new formats is dead easy, check out the code for the SVMlight corpus for an example.
Compatibility with NumPy and SciPy
Gensim also contains efficient utility functions to help converting from/to numpy matrices:
End of explanation
"""
import scipy.sparse
scipy_sparse_matrix = scipy.sparse.random(5,2)
corpus = gensim.matutils.Sparse2Corpus(scipy_sparse_matrix)
scipy_csc_matrix = gensim.matutils.corpus2csc(corpus)
"""
Explanation: and from/to scipy.sparse matrices:
End of explanation
"""
|
RaoUmer/lightning-example-notebooks | plots/map.ipynb | mit | from lightning import Lightning
from numpy import random
"""
Explanation: <img style='float: left' src="http://lightning-viz.github.io/images/logo.png"> <br> <br> Map plots in <a href='http://lightning-viz.github.io/'><font color='#9175f0'>Lightning</font></a>
<hr> Setup
End of explanation
"""
lgn = Lightning(ipython=True, host='http://public.lightning-viz.org')
"""
Explanation: Connect to server
End of explanation
"""
states = ["NA", "AK", "AL", "AR", "AZ", "CA", "CO","CT",
"DC","DE","FL","GA","HI","IA","ID","IL","IN",
"KS","KY","LA","MA","MD","ME","MI","MN","MO",
"MS","MT","NC","ND","NE","NH","NJ","NM","NV",
"NY","OH","OK","OR","PA","RI","SC","SD","TN",
"TX","UT","VA","VI","VT","WA","WI","WV","WY"]
values = random.randn(len(states))
lgn.map(states, values, colormap='Purples')
"""
Explanation: <hr> US Map
To make a US map with states colored by value, just pass a list of states, a list of values, and a colormap.
End of explanation
"""
values = (random.rand(len(states)) * 5).astype('int')
lgn.map(states, values, colormap='Pastel1')
"""
Explanation: Discrete values are automatically handled for appropriate colormaps
End of explanation
"""
values = (random.rand(len(states)) * 5).astype('int')
lgn.map(states, values, colormap='Lightning')
"""
Explanation: Including our custom Lightning colormap
End of explanation
"""
countries = ['ISO', 'SLE', 'COD', 'CAF', 'TCD', 'AGO', 'GNB', 'GNQ', 'MLI', 'MWI',
'BDI', 'NGA', 'SOM', 'SSD', 'MOZ', 'CIV', 'CMR', 'GIN', 'BFA', 'AFG',
'ZMB', 'MRT', 'SWZ', 'LSO', 'TGO', 'BEN', 'COG', 'COM', 'LBR', 'PAK',
'UGA', 'NER', 'DJI', 'YEM', 'TZA', 'GMB', 'RWA', 'ETH', 'KEN', 'TJK',
'GHA', 'SEN', 'ERI', 'MMR', 'ZWE', 'ZAF', 'GAB', 'KHM', 'TLS', 'IND',
'TKM', 'PNG', 'HTI', 'LAO', 'UZB', 'STP', 'BOL', 'MDG', 'NPL', 'ESH',
'BGD', 'NAM', 'SLB', 'AZE', 'BTN', 'KIR', 'BWA', 'KGZ', 'FSM', 'IRQ',
'MAR', 'PRY', 'GUY', 'MNG', 'GTM', 'DZA', 'DOM', 'IDN', 'VUT', 'HND',
'PRK', 'KAZ', 'TTO', 'JAM', 'BRA', 'EGY', 'PHL', 'WSM', 'PSE', 'SUR',
'TON', 'GEO', 'CPV', 'NIC', 'ECU', 'ARM', 'PER', 'IRN', 'SLV', 'JOR',
'COL', 'TUN', 'VCT', 'CHN', 'FJI', 'PAN', 'VEN', 'LBY', 'MEX', 'TUR',
'ALB', 'ABW', 'VNM', 'BLZ', 'MDA', 'MDV', 'NCL', 'SYR', 'GUF', 'SAU',
'ARG', 'MUS', 'URY', 'UKR', 'ROU', 'MKD', 'LCA', 'THA', 'BRB', 'GUM',
'MNE', 'VIR', 'LKA', 'GRD', 'SYC', 'BHS', 'ATG', 'LBN', 'CRI', 'BGR',
'OMN', 'KWT', 'BIH', 'PYF', 'BHR', 'LVA', 'MTQ', 'QAT', 'CHL', 'PRI',
'GLP', 'ARE', 'USA', 'BLR', 'SVK', 'POL', 'LTU', 'MLT', 'HRV', 'MYT',
'REU', 'HUN', 'CAN', 'TWN', 'BRN', 'CUB', 'MAC', 'NZL', 'GBR', 'MYS',
'EST', 'KOR', 'AUS', 'CYP', 'GRC', 'CHE', 'NLD', 'ISR', 'DNK', 'BEL',
'AUT', 'IRL', 'DEU', 'FRA', 'ESP', 'ITA', 'PRT', 'CZE', 'NOR', 'SVN',
'FIN', 'JPN', 'SWE', 'LUX', 'SGP', 'ISL', 'HKG', 'FLK', 'SMR', 'TCA',
'VAT', 'RUS', 'GRL']
"""
Explanation: <hr> World Map
World maps are generated similarly; just use three-letter country codes instead of states
End of explanation
"""
values = (random.rand(len(countries)) * 5).astype('int')
lgn.map(countries, values, colormap='Pastel1', width=900)
"""
Explanation: Now plot random values. We'll also make it bigger so it's easier to see.
End of explanation
"""
|
benbovy/cosmogenic_dating | GS_Wintrich_4params.ipynb | mit | import math
import csv
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy import stats
import seaborn as sns
import yaml
%matplotlib inline
"""
Explanation: Grid Search - Wintrich - 4 free parameters
Wintrich site, MLE with 4 free parameters (grid search method).
For more info about the method used, see the notebook Inference_Notes.
This notebook has the following external dependencies:
End of explanation
"""
import models
"""
Explanation: The mathematical (Predictive) Model
The mathematical model is available in the models Python module (see the notebook Models)
End of explanation
"""
profile_data = pd.read_csv('profiles_data/wintrich_10Be_profile_data.csv',
index_col='sample',
delim_whitespace=True,
comment='#',
quoting=csv.QUOTE_NONNUMERIC, quotechar='\"',
na_values=[-9999],
dtype={'depth': 'f', 'depth_g-cm-2': 'f',
'C': 'f', 'std': 'f'})
profile_data
with open('profiles_data/wintrich_10Be_settings.yaml') as f:
wintrich_settings = yaml.load(f)
wintrich_settings
"""
Explanation: The Data
End of explanation
"""
import gridsearch
"""
Explanation: The dataset is stored as a :class:pandas.DataFrame object.
Fitting the model
The grid search method is implemented in the gridsearch module (see the notebook Grid-Search for more info).
End of explanation
"""
gstest = gridsearch.CosmogenicInferenceGC(description='wintrich 4 parameters')
"""
Explanation: Create a new object for setup and results
End of explanation
"""
gstest.set_profile_measured(
profile_data['depth'].values,
profile_data['C'].values,
profile_data['std'].values,
None,
)
"""
Explanation: Set the data
End of explanation
"""
def C_10Be_wintrich(depth, erosion, exposure,
density, inheritance):
"""
10Be wintrich
"""
return models.C_10Be(depth, erosion, exposure,
density, inheritance,
P_0=wintrich_settings['P_0'])
gstest.set_profile_model(C_10Be_wintrich)
"""
Explanation: Set the model
End of explanation
"""
gstest.set_parameter(
'erosion_rate',
[0., 5e-4, 70j],
stats.uniform(loc=0, scale=5e-4).pdf
)
gstest.set_parameter(
'exposure_time',
[1e5, 1e6, 130j],
stats.uniform(loc=1e5, scale=1e6).pdf
)
gstest.set_parameter(
'soil_density',
[1.7, 2.5, 60j],
stats.uniform(loc=1.7, scale=2.5).pdf
)
gstest.set_parameter(
'inheritance',
[0., 1e5, 60j],
stats.uniform(loc=0., scale=1e5).pdf
)
"""
Explanation: Define the parameters to fit and their search ranges / steps. The order must be the same as the order of the arguments of the function used for the model!
End of explanation
"""
print gstest.setup_summary()
"""
Explanation: Grid search setup summary
End of explanation
"""
gstest.compute_mle()
"""
Explanation: Perform Maximum likelihood estimation on the search grid
End of explanation
"""
gstest.mle
"""
Explanation: Get the MLE (i.e., the parameter values at the maximum likelihood), in the same order as the parameter definitions
End of explanation
"""
%matplotlib inline
def plot_proflike1d(cobj, pname, clevels=[0.68, 0.95, 0.997],
true_val=None, ax=None):
p = cobj.parameters[pname]
pindex = cobj.parameters.keys().index(pname)
x = cobj.grid[pindex].flatten()
proflike = cobj.proflike1d[pindex]
if ax is None:
ax = plt.subplot(111)
difflike = proflike - cobj.maxlike
ax.plot(x, difflike, label='profile loglike')
ccrit = gridsearch.profile_likelihood_crit(
cobj.proflike1d[pindex],
cobj.maxlike,
clevels=clevels
)
ccrit -= cobj.maxlike
for lev, cc in zip(clevels, ccrit):
l = ax.axhline(cc, color='k')
hpos = x.min() + (x.max() + x.min()) * 0.05
ax.text(hpos, cc, str(lev * 100),
size=9, color = l.get_color(),
ha="center", va="center",
bbox=dict(ec='1',fc='1'))
if true_val is not None:
ax.axvline(true_val, color='r')
plt.setp(ax, xlabel=pname,
ylabel='profile log-like - max log-like',
xlim=p['range'][0:2],
ylim=[ccrit[-1], 0.])
def plot_proflike1d_all(cobj, n_subplot_cols=2, **kwargs):
n_subplots = len(cobj.parameters)
n_subplot_rows = int(math.ceil(1. *
n_subplots /
n_subplot_cols))
fig, aax = plt.subplots(nrows=n_subplot_rows,
ncols=n_subplot_cols,
**kwargs)
axes = aax.flatten()
fig.text(0.5, 0.975,
"Profile log-like: " + cobj.description,
horizontalalignment='center',
verticalalignment='top')
for i, pname in enumerate(cobj.parameters.keys()):
ax = axes[i]
plot_proflike1d(cobj, pname,
ax=ax)
plt.tight_layout()
plt.subplots_adjust(top=0.93)
plot_proflike1d_all(gstest, figsize=(12, 6))
"""
Explanation: Plot the profile log-likelihood for each parameter. The blue lines represent the difference between the profile log-likelihood and the maximum log-likelihood. The intersections between the blue line and the black lines define the confidence intervals at the given confidence levels (based on the likelihood ratio test). Red lines, shown only when a true value is supplied, indicate the true parameter values.
End of explanation
"""
def plot_proflike2d(cobj, p1p2, ax=None,
cmap='Blues', show_colorbar=True):
pname1, pname2 = p1p2
idim = cobj.parameters.keys().index(pname2)
jdim = cobj.parameters.keys().index(pname1)
if ax is None:
ax = plt.subplot(111)
X, Y = np.meshgrid(cobj.grid[idim].flatten(),
cobj.grid[jdim].flatten())
difflike = cobj.proflike2d[idim][jdim] - cobj.maxlike
ccrit = gridsearch.profile_likelihood_crit(
cobj.proflike2d[idim][jdim],
cobj.maxlike,
clevels=[0.68, 0.95]
)
ccrit -= cobj.maxlike
contours = np.linspace(np.median(difflike),
0,
10)
P2D = ax.contourf(Y, X, difflike,
contours,
cmap=plt.get_cmap(cmap))
ci68 = ax.contour(Y, X, difflike,
[ccrit[0]], colors='w',
linestyles='solid')
plt.clabel(ci68, fontsize=8, inline=True,
fmt='68')
ci95 = ax.contour(Y, X, difflike,
[ccrit[1]], colors=['k'],
linestyles='solid')
plt.clabel(ci95, fontsize=8, inline=True,
fmt='95')
plt.setp(ax, xlabel=pname1, ylabel=pname2)
if show_colorbar:
plt.colorbar(P2D, ax=ax)
#ax.axhline(true_exposure, color='r')
#ax.axvline(true_erosion, color='r')
fig = plt.figure(figsize=(11, 9))
ax = plt.subplot(321)
plot_proflike2d(gstest, ('erosion_rate', 'exposure_time'), ax=ax)
ax2 = plt.subplot(322)
plot_proflike2d(gstest, ('exposure_time', 'inheritance'), ax=ax2)
ax3 = plt.subplot(323)
plot_proflike2d(gstest, ('erosion_rate', 'inheritance'), ax=ax3)
ax4 = plt.subplot(324)
plot_proflike2d(gstest, ('exposure_time', 'soil_density'), ax=ax4)
ax5 = plt.subplot(325)
plot_proflike2d(gstest, ('erosion_rate', 'soil_density'), ax=ax5)
ax6 = plt.subplot(326)
plot_proflike2d(gstest, ('soil_density', 'inheritance'), ax=ax6)
plt.tight_layout()
"""
Explanation: Show the profile log-likelihood for pairs of parameters. Confidence regions are also shown (also based on the likelihood ratio test).
End of explanation
"""
sns.set_context('notebook')
depths = np.linspace(profile_data['depth'].min(),
profile_data['depth'].max(),
100)
Cm_fitted = C_10Be_wintrich(depths, *gstest.mle)
plt.figure()
plt.plot(Cm_fitted, -depths, label='best-fitted model')
plt.errorbar(profile_data['C'],
-profile_data['depth'],
xerr=profile_data['std'],
fmt='o', markersize=4,
label='data')
plt.setp(plt.gca(),
xlabel='10Be concentration [atoms g-1]',
ylabel='-1 * depth [cm]',
xlim=[0, None], ylim=[None, 0])
plt.legend(loc='lower right')
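# Possible follow-up (sketch): a reduced chi-square for the best fit, reusing
# the measurement uncertainties plotted above.
resid = profile_data['C'] - C_10Be_wintrich(profile_data['depth'], *gstest.mle)
chi2 = np.sum((resid / profile_data['std'])**2)
print('reduced chi-square: %.2f' % (chi2 / (len(profile_data) - len(gstest.mle))))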
"""
Explanation: Plot the measured concentrations and the predicted profile corresponding to the best-fit model
End of explanation
"""
|
tensorflow/docs-l10n | site/zh-cn/tutorials/keras/text_classification.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
"""
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
"""
import matplotlib.pyplot as plt
import os
import re
import shutil
import string
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import losses
from tensorflow.keras import preprocessing
from tensorflow.keras.layers.experimental.preprocessing import TextVectorization
print(tf.__version__)
"""
Explanation: Movie review text classification
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://tensorflow.google.cn/tutorials/keras/text_classification"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png"> 在 TensorFlow.org 上查看</a> </td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/keras/text_classification.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">在 Google Colab 中运行</a> </td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/keras/text_classification.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">在 GitHub 上查看源代码</a> </td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/keras/text_classification.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">下载笔记本</a> </td>
</table>
This tutorial demonstrates text classification starting from plain text files stored on disk. You will train a binary classifier to perform sentiment analysis on the IMDB dataset. At the end of the notebook there is an exercise for you to try, in which you will train a multi-class classifier to predict the tag of a programming question on Stack Overflow.
End of explanation
"""
url = "https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz"
dataset = tf.keras.utils.get_file("aclImdb_v1", url,
untar=True, cache_dir='.',
cache_subdir='')
dataset_dir = os.path.join(os.path.dirname(dataset), 'aclImdb')
os.listdir(dataset_dir)
train_dir = os.path.join(dataset_dir, 'train')
os.listdir(train_dir)
"""
Explanation: Sentiment analysis
This notebook trains a sentiment analysis model to classify movie reviews as positive or negative, based on the text of the review. This is an example of binary (two-class) classification, an important and widely applicable kind of machine learning problem.
You will use the Large Movie Review Dataset, which contains the text of 50,000 movie reviews from the Internet Movie Database. These are split into 25,000 reviews for training and 25,000 reviews for testing. The training and testing sets are balanced, meaning they contain an equal number of positive and negative reviews.
Download and explore the IMDB dataset
Let's download and extract the dataset, then explore the directory structure.
End of explanation
"""
sample_file = os.path.join(train_dir, 'pos/1181_9.txt')
with open(sample_file) as f:
print(f.read())
"""
Explanation: The aclImdb/train/pos and aclImdb/train/neg directories contain many text files, each of which is a single movie review. Let's take a look at one of them.
End of explanation
"""
remove_dir = os.path.join(train_dir, 'unsup')
shutil.rmtree(remove_dir)
"""
Explanation: Load the dataset
Next, you will load the data off disk and prepare it into a format suitable for training. To do so, you will use the helpful text_dataset_from_directory utility, which expects a directory structure as follows.
main_directory/
...class_a/
......a_text_1.txt
......a_text_2.txt
...class_b/
......b_text_1.txt
......b_text_2.txt
To prepare a dataset for binary classification, you will need two folders on disk, corresponding to class_a and class_b. These will be the positive and negative movie reviews, which can be found in aclImdb/train/pos and aclImdb/train/neg. As the IMDB dataset contains additional folders, you will remove them before using this utility.
End of explanation
"""
batch_size = 32
seed = 42
raw_train_ds = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/train',
batch_size=batch_size,
validation_split=0.2,
subset='training',
seed=seed)
"""
Explanation: Next, you will use the text_dataset_from_directory utility to create a labeled tf.data.Dataset. tf.data is a powerful collection of tools for working with data.
When running a machine learning experiment, it is a best practice to divide your dataset into three splits: train, validation, and test.
The IMDB dataset has already been divided into train and test, but it lacks a validation set. Let's create a validation set using an 80:20 split of the training data by passing the validation_split argument below.
End of explanation
"""
for text_batch, label_batch in raw_train_ds.take(1):
for i in range(3):
print("Review", text_batch.numpy()[i])
print("Label", label_batch.numpy()[i])
"""
Explanation: As you can see above, there are 25,000 examples in the training folder, of which you will use 80% (or 20,000) for training. As you will see in a moment, you can train a model by passing a dataset directly to model.fit. If you're new to tf.data, you can also iterate over the dataset and print out a few examples as follows.
End of explanation
"""
print("Label 0 corresponds to", raw_train_ds.class_names[0])
print("Label 1 corresponds to", raw_train_ds.class_names[1])
"""
Explanation: Notice that the reviews contain raw text (with punctuation and occasional HTML tags such as <br/>). You will see how to handle these in the following section.
The labels are 0 or 1. To see which of these correspond to positive and negative movie reviews, you can check the class_names property on the dataset.
End of explanation
"""
raw_val_ds = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/train',
batch_size=batch_size,
validation_split=0.2,
subset='validation',
seed=seed)
raw_test_ds = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/test',
batch_size=batch_size)
"""
Explanation: Next, you will create a validation and a test dataset. You will use the remaining 5,000 reviews from the training set for validation.
Note: when using the validation_split and subset arguments, make sure to either specify a random seed or pass shuffle=False, so that the validation and training splits do not overlap.
End of explanation
"""
def custom_standardization(input_data):
lowercase = tf.strings.lower(input_data)
stripped_html = tf.strings.regex_replace(lowercase, '<br />', ' ')
return tf.strings.regex_replace(stripped_html,
'[%s]' % re.escape(string.punctuation),
'')
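# Optional sanity check (not part of the original tutorial): apply the custom
# standardizer to a made-up snippet; the example string below is illustrative only.
print(custom_standardization(tf.constant("This movie was GREAT!<br />Loved it...")).numpy())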
"""
Explanation: Note: the Preprocessing API used in the following section is experimental in TensorFlow 2.3 and subject to change.
Prepare the dataset for training
Next, you will standardize, tokenize, and vectorize the data using the helpful preprocessing.TextVectorization layer.
Standardization refers to preprocessing the text, typically to remove punctuation or HTML elements in order to simplify the dataset. Tokenization refers to splitting strings into tokens (for example, splitting a sentence into individual words by splitting on whitespace). Vectorization refers to converting tokens into numbers so they can be fed into a neural network. All of these tasks can be accomplished with this layer.
As you saw above, the reviews contain various HTML tags such as <br />. The default standardizer in the TextVectorization layer (which converts text to lowercase and strips punctuation by default, but does not strip HTML) will not remove them, so you will write a custom standardization function to remove the HTML.
Note: to prevent train/test skew (also known as train/serving skew), it is important to preprocess the data identically at train and test time. To facilitate this, the TextVectorization layer can be included directly inside your model, as shown later in this tutorial.
End of explanation
"""
max_features = 10000
sequence_length = 250
vectorize_layer = TextVectorization(
standardize=custom_standardization,
max_tokens=max_features,
output_mode='int',
output_sequence_length=sequence_length)
"""
Explanation: Next, you will create a TextVectorization layer. You will use this layer to standardize, tokenize, and vectorize our data. You set the output_mode to int to create unique integer indices for each token.
Note that you are using the default split function, together with the custom standardization function defined above. You will also define some constants for the model, such as an explicit maximum sequence_length, which will cause the layer to pad or truncate sequences to exactly sequence_length values.
End of explanation
"""
# Make a text-only dataset (without labels), then call adapt
train_text = raw_train_ds.map(lambda x, y: x)
vectorize_layer.adapt(train_text)
"""
Explanation: Next, you will call adapt to fit the state of the preprocessing layer to the dataset. This will cause the model to build an index of strings to integers.
Note: it is important to only use your training data when calling adapt (using the test set would leak information).
End of explanation
"""
def vectorize_text(text, label):
text = tf.expand_dims(text, -1)
return vectorize_layer(text), label
# retrieve a batch (of 32 reviews and labels) from the dataset
text_batch, label_batch = next(iter(raw_train_ds))
first_review, first_label = text_batch[0], label_batch[0]
print("Review", first_review)
print("Label", raw_train_ds.class_names[first_label])
print("Vectorized review", vectorize_text(first_review, first_label))
"""
Explanation: Let's create a function to see the result of using this layer to preprocess some data.
End of explanation
"""
print("1287 ---> ",vectorize_layer.get_vocabulary()[1287])
print(" 313 ---> ",vectorize_layer.get_vocabulary()[313])
print('Vocabulary size: {}'.format(len(vectorize_layer.get_vocabulary())))
"""
Explanation: As you can see above, each token has been replaced by an integer. You can look up the token (string) that each integer corresponds to by calling .get_vocabulary() on the layer.
End of explanation
"""
train_ds = raw_train_ds.map(vectorize_text)
val_ds = raw_val_ds.map(vectorize_text)
test_ds = raw_test_ds.map(vectorize_text)
"""
Explanation: You are nearly ready to train your model. As a final preprocessing step, apply the TextVectorization layer you created earlier to the train, validation, and test datasets.
End of explanation
"""
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
test_ds = test_ds.cache().prefetch(buffer_size=AUTOTUNE)
"""
Explanation: Configure the dataset for performance
These are two important methods you should use when loading data to make sure that I/O does not become blocking.
.cache() keeps data in memory after it is loaded off disk. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache, which is more efficient to read than many small files.
prefetch() overlaps data preprocessing and model execution while training.
You can learn more about both methods, as well as how to cache data to disk, in the data performance guide.
End of explanation
"""
embedding_dim = 16
model = tf.keras.Sequential([
layers.Embedding(max_features + 1, embedding_dim),
layers.Dropout(0.2),
layers.GlobalAveragePooling1D(),
layers.Dropout(0.2),
layers.Dense(1)])
model.summary()
"""
Explanation: Create the model
It's time to create your neural network:
End of explanation
"""
# The model above outputs logits (no sigmoid on the last layer), so use
# from_logits=True; BinaryAccuracy matches the 'binary_accuracy' history keys
# used in the plotting cells further down.
model.compile(loss=losses.BinaryCrossentropy(from_logits=True),
              optimizer='adam',
              metrics=tf.metrics.BinaryAccuracy(threshold=0.0))
"""
Explanation: The layers are stacked sequentially to build the classifier:
The first layer is an Embedding layer. This layer takes the integer-encoded vocabulary and looks up an embedding vector for each word index. These vectors are learned as the model trains. The vectors add a dimension to the output array; the resulting dimensions are (batch, sequence, embedding).
Next, a GlobalAveragePooling1D layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle input of variable length in the simplest way possible.
Dropout is applied after the embedding and pooling steps to reduce overfitting.
The last layer is densely connected to a single output node. No activation is applied here, so the model outputs raw logits.
Loss function and optimizer
A model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs logits, you will use the losses.BinaryCrossentropy loss function with from_logits=True.
This is not the only possible choice of loss function; you could, for instance, choose mean_squared_error. But, generally, binary cross-entropy is better suited to probabilities: it measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and the predictions.
End of explanation
"""
epochs = 10
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs)
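# Hypothetical variant (sketch, not run here): an EarlyStopping callback could
# halt training once the validation metric stops improving; it would be passed
# to model.fit instead of the plain call above.
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss',
                                              patience=2,
                                              restore_best_weights=True)
# history = model.fit(train_ds, validation_data=val_ds,
#                     epochs=epochs, callbacks=[early_stop])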
"""
Explanation: Train the model
Train the model by passing the tf.data.Dataset to model.fit for 10 epochs. While training, monitor the model's loss and accuracy on the validation set:
End of explanation
"""
loss, accuracy = model.evaluate(test_ds)
print("Loss: ", loss)
print("Accuracy: ", accuracy)
"""
Explanation: Evaluate the model
Let's see how the model performs. Two values will be returned: loss (a number which represents our error; lower values are better) and accuracy.
End of explanation
"""
history_dict = history.history
history_dict.keys()
"""
Explanation: This fairly naive approach achieves an accuracy of about 87%. With more advanced approaches, the model should get closer to 95%.
Create a plot of accuracy and loss over time
model.fit() returns a History object that contains a dictionary with everything that happened during training:
End of explanation
"""
acc = history_dict['binary_accuracy']
val_acc = history_dict['val_binary_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.show()
"""
Explanation: There are four entries: one for each monitored metric during training and validation. You can use these to plot the training and validation loss for comparison, as well as the training and validation accuracy:
End of explanation
"""
export_model = tf.keras.Sequential([
vectorize_layer,
model,
layers.Activation('sigmoid')
])
export_model.compile(
loss=losses.BinaryCrossentropy(from_logits=False), optimizer="adam", metrics=['accuracy']
)
# Test it with `raw_test_ds`, which yields raw strings
loss, accuracy = export_model.evaluate(raw_test_ds)
print(accuracy)
"""
Explanation: In this plot, the dots represent the training loss and accuracy, and the solid lines are the validation loss and accuracy.
Notice that the training loss decreases with each epoch and the training accuracy increases with each epoch. This is expected when using a gradient descent optimization: it should minimize the desired quantity on every iteration.
This is not the case for the validation loss and accuracy: they seem to peak before the training accuracy does. This is an example of overfitting: the model performs better on the training data than it does on data it has never seen before. After this point, the model over-optimizes and learns representations specific to the training data that do not generalize to test data.
For this particular case, you could prevent overfitting by simply stopping the training when the validation accuracy is no longer increasing. One way to do so is to use the tf.keras.callbacks.EarlyStopping callback.
Export the model
In the code above, you applied the TextVectorization layer to the dataset before feeding text to the model. If you want to make your model capable of processing raw strings (for example, to simplify deploying it), you can include the TextVectorization layer inside your model. To do so, you can create a new model using the weights you just trained.
End of explanation
"""
examples = [
"The movie was great!",
"The movie was okay.",
"The movie was terrible..."
]
export_model.predict(examples)
"""
Explanation: Inference on new data
To get predictions for new examples, simply call model.predict().
End of explanation
"""
|
phanrahan/magmathon | notebooks/tutorial/icestick/Add.ipynb | mit | import magma as m
m.set_mantle_target("ice40")
"""
Explanation: Add
In this tutorial, we will construct a n-bit adder from n full adders.
Magma has built in support for addition using the + operator,
so please don't think Magma is so low-level that you need to create
logical and arithmetic functions in order to use it!
We use this example to show how circuits are composed to form new circuits.
Since we are using the ICE40, we need to set the target of Mantle to "ice40".
End of explanation
"""
from mantle import FullAdder
"""
Explanation: Mantle FullAdder
In the last example, we defined a Python function that created a full adder.
In this example, we are going to use the built-in FullAdder from Mantle.
Mantle is our standard library of useful circuits.
End of explanation
"""
print(FullAdder)
"""
Explanation: We can print out the interface of the FullAdder.
End of explanation
"""
fulladder = FullAdder()
print(fulladder.I0, type(fulladder.I0))
print(fulladder.I1, type(fulladder.I1))
print(fulladder.CIN, type(fulladder.CIN))
print(fulladder.O, type(fulladder.O))
print(fulladder.COUT, type(fulladder.O))
"""
Explanation: This tells us that the full adder has three inputs I0, I1, and CIN.
Note that the type of these arguments are In(Bit).
There are also two outputs O and COUT, both with type Out(Bit).
In Magma arguments in the circuit interface are normally qualified to be inputs or outputs.
End of explanation
"""
from magma.simulator import PythonSimulator
fulladder = PythonSimulator(FullAdder)
assert fulladder(1, 0, 0) == (1, 0), "Failed"
assert fulladder(0, 1, 0) == (1, 0), "Failed"
assert fulladder(1, 1, 0) == (0, 1), "Failed"
assert fulladder(1, 0, 1) == (0, 1), "Failed"
assert fulladder(1, 1, 1) == (1, 1), "Failed"
print("Success!")
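# Optional exhaustive check (sketch): run the same simulated full adder over
# all eight input combinations and compare against integer addition.
from itertools import product
for i0, i1, cin in product([0, 1], repeat=3):
    s, cout = fulladder(i0, i1, cin)
    assert s == (i0 + i1 + cin) % 2 and cout == (i0 + i1 + cin) // 2
print("Exhaustive full-adder check passed")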
"""
Explanation: Before testing the full adder on the IceStick board,
let's test it using the Python simulator.
End of explanation
"""
class Add2(m.Circuit):
IO = ['I0', m.In(m.UInt[2]), 'I1', m.In(m.UInt[2]), 'CIN', m.In(m.Bit),
'O', m.Out(m.UInt[2]), 'COUT', m.Out(m.Bit) ]
@classmethod
def definition(io):
n = len(io.I0)
O = []
COUT = io.CIN
for i in range(n):
fulladder = FullAdder()
Oi, COUT = fulladder(io.I0[i], io.I1[i], COUT)
O.append(Oi)
io.O <= m.uint(O)
io.COUT <= COUT
"""
Explanation: class Add2 - Defining a Circuit
Now let's build a 2-bit adder using FullAdder.
We'll use a simple ripple carry adder design by connecting the carry out of one full adder
to the carry in of the next full adder.
The resulting adder will accept as input a carry in,
and generate a final carry out. Here's a logisim diagram of the circuit we will construct:
Here is a Python class that implements a 2-bit adder.
End of explanation
"""
def DefineAdd(n):
class _Add(m.Circuit):
name = f'Add{n}'
IO = ['I0', m.In(m.UInt[n]), 'I1', m.In(m.UInt[n]), 'CIN', m.In(m.Bit),
'O', m.Out(m.UInt[n]), 'COUT', m.Out(m.Bit) ]
@classmethod
def definition(io):
O = []
COUT = io.CIN
for i in range(n):
fulladder = FullAdder()
Oi, COUT = fulladder(io.I0[i], io.I1[i], COUT)
O.append(Oi)
io.O <= m.uint(O)
io.COUT <= COUT
return _Add
def Add(n):
return DefineAdd(n)()
def add(i0, i1, cin):
assert len(i0) == len(i1)
return Add(len(i0))(i0, i1, cin)
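# The generator can be exercised directly; for example, a 4-bit adder
# definition (illustrative only -- the rest of this notebook keeps N = 2).
Add4 = DefineAdd(4)
print(Add4)  # prints the generated interface, named "Add4"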
"""
Explanation: Although we are making a 2-bit adder,
we do this using a for loop that can be generalized to construct an n-bit adder.
Each time through the for loop we create an instance of a full adder
by calling FullAdder().
Recall that circuits are python classes,
so that calling a class returns an instance of that class.
Note how we wire up the full adders.
Calling an circuit instance has the effect of wiring
up the arguments to the inputs of the circuit.
That is,
O, COUT = fulladder(I0, I1, CIN)
is equivalent to
m.wire(I0, fulladder.I0)
m.wire(I1, fulladder.I1)
m.wire(CIN, fulladder.CIN)
O = fulladder.O
COUT = fulladder.COUT
The outputs of the circuit are returned.
Inside this loop we append single bit outputs from the full adders
to the Python list O.
We also set the CIN of the next full adder to the COUT of the previous instance.
Finally, we then convert the list O to a Uint(n).
In addition to Bits(n),
Magma also has built in types UInt(n) and SInt(n)
to represent unsigned and signed ints.
Magma also has type conversion functions bits, uint, and sint to convert
between different types.
In this example, m.uint(C) converts the list of bits to a UInt(len(C)).
DefineAdd Generator
One question you may be asking yourself, is how can this code be generalized to produce an n-bit adder. We do this by creating an add generator.
A generator is a Python function that takes parameters and returns a circuit class.
Calling the generator with different parameter values will create different circuits.
The power of Magma results from being to use all the features of Python
to create powerful hardware generators.
Here is the code:
End of explanation
"""
N = 2
from loam.boards.icestick import IceStick
icestick = IceStick()
for i in range(N):
icestick.J1[i].input().on()
icestick.J1[i+N].input().on()
for i in range(N+1):
icestick.J3[i].output().on()
"""
Explanation: First, notice that a circuit generator by convention begins with the prefix Define.
In this example,
DefineAdd has a parameter n which is the width of the adder.
A circuit generator returns a subclass of Circuit.
A standard way to write this is to construct a new Circuit class
within the body of the generator.
The code within the body of the generator can refer to the arguments
to the generator.
Like Verilog modules, Magma circuits must have unique names.
Because Python does not provide the facilities
to dynamically generate the class name,
dynamically constructed Magma circuits are named using the name class variable.
Python generators need to create unique names for each generated circuit
because Magma will cache circuit definitions based on the name.
Note how the name of the circuit is set using the format string f'Add{n}'.
For example, if n is 2, the name of the circuit will be Add2.
Magma allows you to use Python string manipulation functions to create mnemonic names.
As we will see, the resulting verilog module will have the same name.
This is very useful for debugging.
We also can create the parameterized types within the generator.
In this example, we use the type UInt(n) which depends on n.
The loop within definition can also refer to the parameter n.
Finally, notice we defined three interrelated functions:
DefineAdd(n), Add(n), and add(i0, i1, cin).
Why are there three functions?
Because there are three stages in using Magma to create hardware.
The first stage is to generate or define circuits.
The second stage is to create instances of these circuits.
And the third stage is to wire up the circuits.
Functions named DefineX are generators. Generators are functions that return Circuits.
Functions named X return circuit instances. This is done by calling DefineX and then instancing the circuit. This may seem very inefficient. Fortunately, circuits classes are cached and only defined once.
Finally, functions named lowercase x do one more thing. They wire the arguments of to x to the circuit. They can also construct the appropriate circuit class depending on the types of the arguments.
In this example, add constructs an n-bit adder, where n is the width of the inputs.
We strongly recommend that you follow this naming convention.
Running on the IceStick
In order to test the adder,
we setup the IceStick board
to have two 2-bit inputs and one 3-bit output.
As before, J1 will be used for inputs and J3 for outputs.
End of explanation
"""
main = icestick.DefineMain()
O, COUT = add( main.J1[0:N], main.J1[N:2*N], 0 )
main.J3[0:N] <= O
main.J3[N] <= COUT
m.EndDefine()
"""
Explanation: We define a main function that instances our 2-bit adder and wires it up to J1 and J3. Notice the use of Python's slicing syntax using our width variable N.
End of explanation
"""
m.compile('build/add', main)
"""
Explanation: As before, we compile.
End of explanation
"""
%%bash
cd build
yosys -q -p 'synth_ice40 -top main -blif add.blif' add.v
arachne-pnr -q -d 1k -o add.txt -p add.pcf add.blif
icepack add.txt add.bin
#iceprog add.bin
"""
Explanation: And use our yosys, arachne-pnr, and icestorm tool flow.
End of explanation
"""
%cat build/add.pcf
"""
Explanation: You can test the program by connecting up some switches and LEDs to the headers. You should see the sum of the inputs displayed on the LEDs. First, we need to find out what pins J1 and J3 are wired up to. (Note: you can use % to execute shell commands inline in Jupyter notebooks)
End of explanation
"""
%cat build/add.v
"""
Explanation: In this example, we have J1 wired up to the four switch/LED circuits on the left, and J3 wired up to the three LED (no switch) circuits on the right.
Again, it can be useful to examine the compiled Verilog.
Notice that it includes a Verilog definition of the mantle FullAdder implemented using the SB_LUT4 and SB_CARRY primtives. The Add2 module instances two FullAdders and wires them up.
End of explanation
"""
#DefineAdd(4)
"""
Explanation: You can also display the circuit using graphviz.
End of explanation
"""
|
Smith42/neuralnet-mcg | CNNs/ECG-CNN-2D-VCG.ipynb | gpl-3.0 | import tensorflow as tf
#import tensorflow.contrib.learn.python.learn as learn
import tflearn
import scipy as sp
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from random import shuffle, randint
from sklearn.utils import shuffle as mutualShuf
import os
import pandas as pd
import sklearn
import datetime
%matplotlib inline
"""
Explanation: ipynb for a 2-D CNN for classifying ECGs
Best results found so far used:
* 3 VCG leads concatenated
* 200 buffer, 150 shift (looking at QRS -> T lump)
* Input data chunked into 10000 healthy and 10000 unhealthy samples
* Peak finder threshold of 0.02 on differentiated and absoluted input data
(then it is returned to undiff, unabs data before it is fed in)
* Trained over 1 epoch.
* The CNN:
* Conv with 32 features, map 5x3.
* 2x2 max pool.
* Conv 64 features, map 5x3.
* 2x2 max pool.
* 1024 neuron dense layer, L2 regularisation with weight_decay=0.001.
* 50% dropout layer.
* 2 wide softmax layer.
* ADAM optimiser with learning_rate=0.00001.
* Loss function is categorical x-entropy.
This gives a result of Sensitivity: 1.0 Specifity: 0.9965 Accuracy: 0.9982 for data taken from the training set (but not trained with).
And Sensitivity: 0.9988 Specifity: 0.9959 Accuracy: 0.9974 on patients it hasn't seen before.
End of explanation
"""
def importData(filepath):
ppt = np.genfromtxt(filepath)
dppt = np.diff(np.transpose(ppt))
print(filepath, "Shape:", dppt[1:16,:].shape)
return dppt[1:16,:]
pathIll = "./inData/clean_ecg/ill/"
pathHealth = "./inData/clean_ecg/health/"
illLst = []
healthLst = []
for file in os.listdir(pathIll):
illLst.append(importData(pathIll+file))
for file in os.listdir(pathHealth):
healthLst.append(importData(pathHealth+file))
print("Outputing Frank leads")
healthPat = np.concatenate((healthLst[:]), axis=1)[12:15]
illPat = np.concatenate((illLst[:]), axis=1)[12:15]
print(healthPat.shape, illPat.shape)
def findAbove(arr, threshold, skip):
"""
Return indices for values above threshhold in array, arr. Keep only first items in sequence.
"""
inlst = []
for index, item in enumerate(arr):
if item >= threshold:
inlst.append(index)
return inlst[::skip]
def processClassData(classData):
"""
Process classData.
Returns a one-hot array of shape [len(classData), 2].
"""
# Convert label data to one-hot array
classDataOH = np.zeros((len(classData),2))
classDataOH[np.arange(len(classData)), classData] = 1
return classDataOH
def getSamples(Arr, indexArr, buffer):
"""
Get samples for inputting into CNN.
"""
sampleArr = []
for index, item in enumerate(indexArr):
if Arr[0:, item-buffer:item+buffer].shape != (Arr.shape[0], buffer*2):
pass
else:
sampleArr.append(Arr[0:, item-buffer:item+buffer])
return np.array(sampleArr)
def visualiseData(ecgData, classData, gridSize, axis):
"""
Plot labelled example data in a gridSize*gridSize grid.
"""
fig, ax = plt.subplots(gridSize, gridSize, subplot_kw=dict(projection='3d'))
plt.suptitle("Labelled example data")
r = randint(0,len(classData)-ecgData.shape[1])
k = 0
if gridSize == 1:
ax.plot(ecgData[r+k,0], ecgData[r+k,1], ecgData[r+k,2])
else:
for i in np.arange(0,gridSize,1):
for j in np.arange(0,gridSize,1):
k = k + 1
ax[i,j].plot(ecgData[r+k,0], ecgData[r+k,1], ecgData[r+k,2])
if axis == False:
ax[i,j].axis("off")
ax[i,j].annotate(classData[r+k], xy=(0, 0), xycoords='axes points',\
size=10, ha='left', va='top')
def undiff(ecgData, buffer):
"""
Reverse the differentiation done earlier through np.cumsum.
"""
ecgData = np.array(ecgData)
ecgData = np.reshape(ecgData, (ecgData.shape[0], ecgData.shape[1], buffer*2))
for i in np.arange(0,ecgData.shape[0],1):
for j in np.arange(0,ecgData.shape[1],1):
ecgData[i,j] = np.cumsum(ecgData[i,j])
ecgData = np.reshape(ecgData, (ecgData.shape[0], ecgData.shape[1], buffer*2, 1))
return ecgData
def splitData(coilData, classData):
"""
Split data into healthy and ill types.
"""
illData = []
healthData = []
for index, item in enumerate(classData):
if item == 1:
illData.append(coilData[index])
if item == 0:
healthData.append(coilData[index])
return illData, healthData
def chunkify(lst,n):
""" Chunk a list into n chunks of approximately equal size """
return [ lst[i::n] for i in range(n) ]
def functionTownCat(illArr, healthArr, illThreshold, healthThreshold, skip, shift, buffer, shuffle):
"""
Return the processed ecgData with the leads concatenated into a 2d array per heartbeat
and the classData (one-hot). Also return arrays of ill and healthy ppts.
If shuffle is true, shuffle data.
"""
illPeakArr = findAbove(np.abs(illArr[0]), illThreshold, skip)
sampleArrI = getSamples(illArr, np.array(illPeakArr), buffer)
healthPeakArr = findAbove(np.abs(healthArr[0]), healthThreshold, skip)
sampleArrH = getSamples(healthArr, np.array(healthPeakArr), buffer)
chunkyI = chunkify(sampleArrI, 10000)
chunkyH = chunkify(sampleArrH , 10000)
avgI = []
avgH = []
for i in np.arange(0,len(chunkyI),1):
avgI.append(np.mean(chunkyI[i], axis=0))
for i in np.arange(0,len(chunkyH),1):
avgH.append(np.mean(chunkyH[i], axis=0))
sampleArrI = np.array(avgI)
sampleArrH = np.array(avgH)
print("Total ill samples", len(illPeakArr), ". Compressed to", sampleArrI.shape)
print("Total healthy samples", len(healthPeakArr), ". Compressed to", sampleArrH.shape)
classData = []
for i in np.arange(0, sampleArrI.shape[0], 1):
classData.append(1)
for i in np.arange(0, sampleArrH.shape[0], 1):
classData.append(0)
ecgData = np.concatenate((sampleArrI, sampleArrH), axis=0)
if shuffle == True:
classData, ecgData = mutualShuf(np.array(classData), ecgData, random_state=0)
classDataOH = processClassData(classData)
ecgData = np.reshape(ecgData, [-1, sampleArrI.shape[1], buffer*2, 1])
return ecgData, classDataOH, classData
buffer = 300
healthThreshold = 0.02
illThreshold = 0.02
skip = 1
shift = 0
shuf = True
ecgData, classDataOH, classData = functionTownCat(illPat, healthPat, illThreshold, healthThreshold, skip,\
shift, buffer, shuf)
# Reintegrate the found values...
ecgData = undiff(ecgData, buffer)
# Take 20% for testing later:
testData = ecgData[:round(ecgData.shape[0]*0.2)]
trainData = ecgData[round(ecgData.shape[0]*0.2):]
testLabels = classDataOH[:round(ecgData.shape[0]*0.2)]
trainLabels = classDataOH[round(ecgData.shape[0]*0.2):]
print(ecgData.shape)
visualiseData(np.reshape(ecgData,(-1,ecgData.shape[1],buffer*2))[:,:], classData, 2, True)
#plt.plot(ecgData[0,0,:]*ecgData[0,1,:])
#plt.savefig("./outData/figures/exampleDataECGundiff.pdf")
print(trainData.shape)
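# Optional sanity checks on the prepared arrays before training (sketch):
# one-hot column 1 corresponds to the "ill" class, as built in processClassData.
print("train/test shapes:", trainData.shape, testData.shape)
print("fraction of ill beats in the training split:", trainLabels[:, 1].mean())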
"""
Explanation: Import and process data
End of explanation
"""
sess = tf.InteractiveSession()
tf.reset_default_graph()
tflearn.initializations.normal()
# ecgData = np.zeros((50,12,400,1)) # If ecgData is not defined
# Input layer:
net = tflearn.layers.core.input_data(shape=[None, ecgData.shape[1], buffer*2, 1])
# First layer: 32 feature maps, 5x3 filters (2D conv over leads x time),
# matching the 2-D architecture described at the top of this notebook
net = tflearn.layers.conv.conv_2d(net, 32, [5, 3], activation="leaky_relu")
net = tflearn.layers.conv.max_pool_2d(net, 2)
# Second layer: 64 feature maps, 5x3 filters
net = tflearn.layers.conv.conv_2d(net, 64, [5, 3], activation="leaky_relu")
net = tflearn.layers.conv.max_pool_2d(net, 2)
net = tflearn.layers.core.flatten(net)
# Fully connected layer 1:
net = tflearn.layers.core.fully_connected(net, 1024, regularizer="L2", weight_decay=0.001, activation="leaky_relu")
# Dropout layer:
net = tflearn.layers.core.dropout(net, keep_prob=0.5)
# Output layer:
net = tflearn.layers.core.fully_connected(net, 2, activation="softmax")
net = tflearn.layers.estimator.regression(net, optimizer='adam', loss='categorical_crossentropy',\
learning_rate=0.00001)
model = tflearn.DNN(net, tensorboard_verbose=3)
model.fit(trainData, trainLabels, n_epoch=1, show_metric=True)
# Save model?
#now = datetime.datetime.now()
#model.save("./outData/models/cleanECG_2dconv_12lead_"+now.isoformat()+"_.tflearn")
"""
Explanation: Neural Network
End of explanation
"""
#model.load("./outData/models/cleanECG_undiff_20e_300buff_0shift_2017-02-21T19:20:35.702943_.tflearn")
#model.load("./outData/models/cleanECG_undiff_20e_150buff_2017-02-21T16:15:02.602923_.tflearn")
#model.load("./outData/models/cleanECG_2dconv_12lead_2017-03-08T10:15:17.200943_.tflearn")
#model.load("./outData/models/cleanECG_2dconv_12lead_2017-03-09T18:05:18.655939_.tflearn")
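# A hedged extra check (sketch): a confusion matrix over the held-back 20%
# test split; tflearn's model.predict returns per-class probabilities.
from sklearn.metrics import confusion_matrix
predProbs = np.array(model.predict(testData))
print(confusion_matrix(np.argmax(testLabels, axis=1), np.argmax(predProbs, axis=1)))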
labellst = classData[:round(ecgData.shape[0]*0.2)]
healthTest = []
illTest = []
for index, item in enumerate(labellst):
if item == 1:
illTest.append(testData[index])
if item == 0:
healthTest.append(testData[index])
healthLabel = np.tile([1,0], (len(healthTest), 1))
illLabel = np.tile([0,1], (len(illTest), 1))
print("Sensitivity:", model.evaluate(np.array(healthTest), healthLabel), "Specifity:",\
model.evaluate(np.array(illTest), illLabel),\
"Accuracy:", model.evaluate(testData, testLabels))
"""
Explanation: Test accuracy of model(s)
20% of training data held back for testing (4000 "heartbeats")
End of explanation
"""
tpathIll = "./inData/clean_ecg/testIll/"
tpathHealth = "./inData/clean_ecg/testHealth/"
tillLst = []
thealthLst = []
for file in os.listdir(tpathIll):
tillLst.append(importData(tpathIll+file))
for file in os.listdir(tpathHealth):
thealthLst.append(importData(tpathHealth+file))
frank = True  # use the Frank (VCG) lead set, matching the training data above
if frank == False:
print("Outputing standard ECG leads...")
thealth = np.concatenate((thealthLst[:]), axis=1)[0:12]
till = np.concatenate((tillLst[:]), axis=1)[0:12]
elif frank == True:
print("Outputing Frank leads...")
thealth = np.concatenate((thealthLst[:]), axis=1)[12:15]
till = np.concatenate((tillLst[:]), axis=1)[12:15]
print(thealth.shape, till.shape)
unseenData, unseenClassOH, unseenClass = functionTownCat(till, thealth, illThreshold, healthThreshold, \
skip, shift, buffer, True)
# Undifferentiate values
unseenData = undiff(unseenData, buffer)
tillarr, thealtharr = splitData(unseenData, unseenClass)
sens = model.evaluate(np.array(thealtharr), np.tile([1,0], (len(thealtharr), 1)))[0]
spec = model.evaluate(np.array(tillarr), np.tile([0,1], (len(tillarr), 1)))[0]
acc = model.evaluate(unseenData, unseenClassOH)[0]
lenh = len(thealtharr)
leni = len(tillarr)
print("Sensitivity:", sens,\
"Specifity:", spec,\
"Accuracy:", acc)
visualiseData(np.reshape(unseenData,(-1,unseenData.shape[1],buffer*2))[:,:,::20], unseenClass, 3, False)
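# Possible follow-up (sketch): ROC AUC over the unseen-patient data, using the
# class probabilities from the trained network.
from sklearn.metrics import roc_auc_score
unseenProbs = np.array(model.predict(unseenData))[:, 1]
print("AUC on unseen patients:", roc_auc_score(unseenClass, unseenProbs))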
"""
Explanation: What if the model hasn't seen data from the patient? What then?!
End of explanation
"""
|
amirziai/learning | deep-learning/Convolutional-model-application.ipynb | mit | import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
import tensorflow as tf
from tensorflow.python.framework import ops
from cnn_utils import *
%matplotlib inline
np.random.seed(1)
"""
Explanation: Convolutional Neural Networks: Application
Welcome to Course 4's second assignment! In this notebook, you will:
Implement helper functions that you will use when implementing a TensorFlow model
Implement a fully functioning ConvNet using TensorFlow
After this assignment you will be able to:
Build and train a ConvNet in TensorFlow for a classification problem
We assume here that you are already familiar with TensorFlow. If you are not, please refer the TensorFlow Tutorial of the third week of Course 2 ("Improving deep neural networks").
1.0 - TensorFlow model
In the previous assignment, you built helper functions using numpy to understand the mechanics behind convolutional neural networks. Most practical applications of deep learning today are built using programming frameworks, which have many built-in functions you can simply call.
As usual, we will start by loading in the packages.
End of explanation
"""
# Loading the data (signs)
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
"""
Explanation: Run the next cell to load the "SIGNS" dataset you are going to use.
End of explanation
"""
# Example of a picture
index = 6
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index])))
"""
Explanation: As a reminder, the SIGNS dataset is a collection of 6 signs representing numbers from 0 to 5.
<img src="images/SIGNS.png" style="width:800px;height:300px;">
The next cell will show you an example of a labelled image in the dataset. Feel free to change the value of index below and re-run to see different examples.
End of explanation
"""
X_train = X_train_orig/255.
X_test = X_test_orig/255.
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
conv_layers = {}
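# Quick sanity check (illustrative only): each one-hot label row should sum to 1.
print("first label:", Y_train[0], "| unique row sums:", np.unique(Y_train.sum(axis=1)))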
"""
Explanation: In Course 2, you had built a fully-connected network for this dataset. But since this is an image dataset, it is more natural to apply a ConvNet to it.
To get started, let's examine the shapes of your data.
End of explanation
"""
# GRADED FUNCTION: create_placeholders
def create_placeholders(n_H0, n_W0, n_C0, n_y):
"""
Creates the placeholders for the tensorflow session.
Arguments:
n_H0 -- scalar, height of an input image
n_W0 -- scalar, width of an input image
n_C0 -- scalar, number of channels of the input
n_y -- scalar, number of classes
Returns:
X -- placeholder for the data input, of shape [None, n_H0, n_W0, n_C0] and dtype "float"
Y -- placeholder for the input labels, of shape [None, n_y] and dtype "float"
"""
### START CODE HERE ### (≈2 lines)
X = tf.placeholder(shape=[None, n_H0, n_W0, n_C0], dtype=np.float32)
Y = tf.placeholder(shape=[None, n_y], dtype=np.float32)
### END CODE HERE ###
return X, Y
X, Y = create_placeholders(64, 64, 3, 6)
print ("X = " + str(X))
print ("Y = " + str(Y))
"""
Explanation: 1.1 - Create placeholders
TensorFlow requires that you create placeholders for the input data that will be fed into the model when running the session.
Exercise: Implement the function below to create placeholders for the input image X and the output Y. You should not define the number of training examples for the moment. To do so, you could use "None" as the batch size, it will give you the flexibility to choose it later. Hence X should be of dimension [None, n_H0, n_W0, n_C0] and Y should be of dimension [None, n_y]. Hint.
End of explanation
"""
# GRADED FUNCTION: initialize_parameters
def initialize_parameters():
"""
Initializes weight parameters to build a neural network with tensorflow. The shapes are:
W1 : [4, 4, 3, 8]
W2 : [2, 2, 8, 16]
Returns:
parameters -- a dictionary of tensors containing W1, W2
"""
tf.set_random_seed(1) # so that your "random" numbers match ours
### START CODE HERE ### (approx. 2 lines of code)
W1 = tf.get_variable('W1',
[4, 4, 3, 8],
initializer=tf.contrib.layers.xavier_initializer(seed = 0))
W2 = tf.get_variable('W2',
[2, 2, 8, 16],
initializer=tf.contrib.layers.xavier_initializer(seed = 0))
### END CODE HERE ###
parameters = {"W1": W1,
"W2": W2}
return parameters
tf.reset_default_graph()
with tf.Session() as sess_test:
parameters = initialize_parameters()
init = tf.global_variables_initializer()
sess_test.run(init)
print("W1 = " + str(parameters["W1"].eval()[1,1,1]))
print("W2 = " + str(parameters["W2"].eval()[1,1,1]))
"""
Explanation: Expected Output
<table>
<tr>
<td>
X = Tensor("Placeholder:0", shape=(?, 64, 64, 3), dtype=float32)
</td>
</tr>
<tr>
<td>
Y = Tensor("Placeholder_1:0", shape=(?, 6), dtype=float32)
</td>
</tr>
</table>
1.2 - Initialize parameters
You will initialize weights/filters $W1$ and $W2$ using tf.contrib.layers.xavier_initializer(seed = 0). You don't need to worry about bias variables as you will soon see that TensorFlow functions take care of the bias. Note also that you will only initialize the weights/filters for the conv2d functions. TensorFlow initializes the layers for the fully connected part automatically. We will talk more about that later in this assignment.
Exercise: Implement initialize_parameters(). The dimensions for each group of filters are provided below. Reminder - to initialize a parameter $W$ of shape [1,2,3,4] in Tensorflow, use:
python
W = tf.get_variable("W", [1,2,3,4], initializer = ...)
More Info.
End of explanation
"""
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
"""
Implements the forward propagation for the model:
CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED
Arguments:
X -- input dataset placeholder, of shape (input size, number of examples)
parameters -- python dictionary containing your parameters "W1", "W2"
the shapes are given in initialize_parameters
Returns:
Z3 -- the output of the last LINEAR unit
"""
# Retrieve the parameters from the dictionary "parameters"
W1 = parameters['W1']
W2 = parameters['W2']
### START CODE HERE ###
# CONV2D: stride of 1, padding 'SAME'
Z1 = tf.nn.conv2d(X, W1, strides=[1, 1, 1, 1], padding='SAME')
# RELU
A1 = tf.nn.relu(Z1)
    # MAXPOOL: window 8x8, stride 8, padding 'SAME'
P1 = tf.nn.max_pool(A1,
ksize=[1, 8, 8, 1],
strides=[1, 8, 8, 1],
padding='SAME')
# CONV2D: filters W2, stride 1, padding 'SAME'
Z2 = tf.nn.conv2d(P1, W2, strides=[1, 1, 1, 1], padding='SAME')
# RELU
A2 = tf.nn.relu(Z2)
# MAXPOOL: window 4x4, stride 4, padding 'SAME'
P2 = tf.nn.max_pool(A2,
ksize=[1, 4, 4, 1],
strides=[1, 4, 4, 1],
padding='SAME')
# FLATTEN
P2 = tf.contrib.layers.flatten(P2)
    # FULLY-CONNECTED without non-linear activation function (do not call softmax).
# 6 neurons in output layer. Hint: one of the arguments should be "activation_fn=None"
Z3 = tf.contrib.layers.fully_connected(P2, 6, activation_fn=None)
### END CODE HERE ###
return Z3
tf.reset_default_graph()
with tf.Session() as sess:
np.random.seed(1)
X, Y = create_placeholders(64, 64, 3, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
init = tf.global_variables_initializer()
sess.run(init)
a = sess.run(Z3, {X: np.random.randn(2,64,64,3), Y: np.random.randn(2,6)})
print("Z3 = " + str(a))
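# Rough shape bookkeeping for the stack above (illustrative, assuming 64x64x3
# inputs and 'SAME' padding): the two pools shrink the spatial size 64 -> 8 -> 2,
# so the flatten step feeds 2*2*16 = 64 features into the 6-way FC layer.
n = 64
n = n // 8            # 8x8 max pool, stride 8  -> 8
n = n // 4            # 4x4 max pool, stride 4  -> 2
print("flattened features into the FC layer:", n * n * 16)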
"""
Explanation: Expected Output:
<table>
<tr>
<td>
W1 =
</td>
<td>
[ 0.00131723 0.14176141 -0.04434952 0.09197326 0.14984085 -0.03514394 <br>
-0.06847463 0.05245192]
</td>
</tr>
<tr>
<td>
W2 =
</td>
<td>
[-0.08566415 0.17750949 0.11974221 0.16773748 -0.0830943 -0.08058 <br>
-0.00577033 -0.14643836 0.24162132 -0.05857408 -0.19055021 0.1345228 <br>
-0.22779644 -0.1601823 -0.16117483 -0.10286498]
</td>
</tr>
</table>
1.3 - Forward propagation
In TensorFlow, there are built-in functions that carry out the convolution steps for you.
tf.nn.conv2d(X,W1, strides = [1,s,s,1], padding = 'SAME'): given an input $X$ and a group of filters $W1$, this function convolves $W1$'s filters on X. The third input ([1,f,f,1]) represents the strides for each dimension of the input (m, n_H_prev, n_W_prev, n_C_prev). You can read the full documentation here
tf.nn.max_pool(A, ksize = [1,f,f,1], strides = [1,s,s,1], padding = 'SAME'): given an input A, this function uses a window of size (f, f) and strides of size (s, s) to carry out max pooling over each window. You can read the full documentation here
tf.nn.relu(Z1): computes the elementwise ReLU of Z1 (which can be any shape). You can read the full documentation here.
tf.contrib.layers.flatten(P): given an input P, this function flattens each example into a 1D vector it while maintaining the batch-size. It returns a flattened tensor with shape [batch_size, k]. You can read the full documentation here.
tf.contrib.layers.fully_connected(F, num_outputs): given a the flattened input F, it returns the output computed using a fully connected layer. You can read the full documentation here.
In the last function above (tf.contrib.layers.fully_connected), the fully connected layer automatically initializes weights in the graph and keeps on training them as you train the model. Hence, you did not need to initialize those weights when initializing the parameters.
Exercise:
Implement the forward_propagation function below to build the following model: CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED. You should use the functions above.
In detail, we will use the following parameters for all the steps:
- Conv2D: stride 1, padding is "SAME"
- ReLU
- Max pool: Use an 8 by 8 filter size and an 8 by 8 stride, padding is "SAME"
- Conv2D: stride 1, padding is "SAME"
- ReLU
- Max pool: Use a 4 by 4 filter size and a 4 by 4 stride, padding is "SAME"
- Flatten the previous output.
- FULLYCONNECTED (FC) layer: Apply a fully connected layer without an non-linear activation function. Do not call the softmax here. This will result in 6 neurons in the output layer, which then get passed later to a softmax. In TensorFlow, the softmax and cost function are lumped together into a single function, which you'll call in a different function when computing the cost.
End of explanation
"""
# GRADED FUNCTION: compute_cost
def compute_cost(Z3, Y):
"""
Computes the cost
Arguments:
Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
Y -- "true" labels vector placeholder, same shape as Z3
Returns:
cost - Tensor of the cost function
"""
### START CODE HERE ### (1 line of code)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=Z3,
labels=Y))
### END CODE HERE ###
return cost
tf.reset_default_graph()
with tf.Session() as sess:
np.random.seed(1)
X, Y = create_placeholders(64, 64, 3, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
cost = compute_cost(Z3, Y)
init = tf.global_variables_initializer()
sess.run(init)
a = sess.run(cost, {X: np.random.randn(4,64,64,3), Y: np.random.randn(4,6)})
print("cost = " + str(a))
"""
Explanation: Expected Output:
<table>
<td>
Z3 =
</td>
<td>
[[-0.44670227 -1.57208765 -1.53049231 -2.31013036 -1.29104376 0.46852064] <br>
[-0.17601591 -1.57972014 -1.4737016 -2.61672091 -1.00810647 0.5747785 ]]
</td>
</table>
1.4 - Compute cost
Implement the compute cost function below. You might find these two functions helpful:
tf.nn.softmax_cross_entropy_with_logits(logits = Z3, labels = Y): computes the softmax entropy loss. This function both computes the softmax activation function as well as the resulting loss. You can check the full documentation here.
tf.reduce_mean: computes the mean of elements across dimensions of a tensor. Use this to sum the losses over all the examples to get the overall cost. You can check the full documentation here.
Exercise: Compute the cost below using the function above.
End of explanation
"""
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.009,
num_epochs = 100, minibatch_size = 64, print_cost = True):
"""
Implements a three-layer ConvNet in Tensorflow:
CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED
Arguments:
X_train -- training set, of shape (None, 64, 64, 3)
Y_train -- test set, of shape (None, n_y = 6)
X_test -- training set, of shape (None, 64, 64, 3)
Y_test -- test set, of shape (None, n_y = 6)
learning_rate -- learning rate of the optimization
num_epochs -- number of epochs of the optimization loop
minibatch_size -- size of a minibatch
print_cost -- True to print the cost every 100 epochs
Returns:
train_accuracy -- real number, accuracy on the train set (X_train)
test_accuracy -- real number, testing accuracy on the test set (X_test)
parameters -- parameters learnt by the model. They can then be used to predict.
"""
ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables
tf.set_random_seed(1) # to keep results consistent (tensorflow seed)
seed = 3 # to keep results consistent (numpy seed)
(m, n_H0, n_W0, n_C0) = X_train.shape
n_y = Y_train.shape[1]
costs = [] # To keep track of the cost
# Create Placeholders of the correct shape
### START CODE HERE ### (1 line)
X, Y = create_placeholders(n_H0, n_W0, n_C0, n_y)
### END CODE HERE ###
# Initialize parameters
### START CODE HERE ### (1 line)
parameters = initialize_parameters()
### END CODE HERE ###
# Forward propagation: Build the forward propagation in the tensorflow graph
### START CODE HERE ### (1 line)
Z3 = forward_propagation(X, parameters)
### END CODE HERE ###
# Cost function: Add cost function to tensorflow graph
### START CODE HERE ### (1 line)
cost = compute_cost(Z3, Y)
### END CODE HERE ###
# Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer that minimizes the cost.
### START CODE HERE ### (1 line)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
### END CODE HERE ###
# Initialize all the variables globally
init = tf.global_variables_initializer()
# Start the session to compute the tensorflow graph
with tf.Session() as sess:
# Run the initialization
sess.run(init)
# Do the training loop
for epoch in range(num_epochs):
minibatch_cost = 0.
num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
seed = seed + 1
minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# IMPORTANT: The line that runs the graph on a minibatch.
# Run the session to execute the optimizer and the cost, the feedict should contain a minibatch for (X,Y).
### START CODE HERE ### (1 line)
_ , temp_cost = sess.run([optimizer, cost],
feed_dict={X: minibatch_X,
Y: minibatch_Y})
### END CODE HERE ###
minibatch_cost += temp_cost / num_minibatches
# Print the cost every epoch
if print_cost == True and epoch % 5 == 0:
print ("Cost after epoch %i: %f" % (epoch, minibatch_cost))
if print_cost == True and epoch % 1 == 0:
costs.append(minibatch_cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
# Calculate the correct predictions
predict_op = tf.argmax(Z3, 1)
correct_prediction = tf.equal(predict_op, tf.argmax(Y, 1))
# Calculate accuracy on the test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print(accuracy)
train_accuracy = accuracy.eval({X: X_train, Y: Y_train})
test_accuracy = accuracy.eval({X: X_test, Y: Y_test})
print("Train Accuracy:", train_accuracy)
print("Test Accuracy:", test_accuracy)
return train_accuracy, test_accuracy, parameters
"""
Explanation: Expected Output:
<table>
<td>
cost =
</td>
<td>
2.91034
</td>
</table>
1.5 - Model
Finally you will merge the helper functions you implemented above to build a model. You will train it on the SIGNS dataset.
You have implemented random_mini_batches() in the Optimization programming assignment of course 2. Remember that this function returns a list of mini-batches.
Exercise: Complete the function below.
The model below should:
create placeholders
initialize parameters
forward propagate
compute the cost
create an optimizer
Finally you will create a session and run a for loop for num_epochs, get the mini-batches, and then for each mini-batch you will optimize the function. Hint for initializing the variables
End of explanation
"""
_, _, parameters = model(X_train, Y_train, X_test, Y_test)
"""
Explanation: Run the following cell to train your model for 100 epochs. Check if your cost after epoch 0 and 5 matches our output. If not, stop the cell and go back to your code!
End of explanation
"""
fname = "images/thumbs_up.jpg"
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(64,64))
plt.imshow(my_image)
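# Sketch only (not part of the graded notebook): the network was trained on
# images scaled to [0, 1], so this image would need the same preprocessing
# before any prediction, e.g.:
my_image_scaled = (my_image / 255.).reshape((1, 64, 64, 3))
print(my_image_scaled.shape)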
"""
Explanation: Expected output: although it may not match perfectly, your expected output should be close to ours and your cost value should decrease.
<table>
<tr>
<td>
**Cost after epoch 0 =**
</td>
<td>
1.917929
</td>
</tr>
<tr>
<td>
**Cost after epoch 5 =**
</td>
<td>
1.506757
</td>
</tr>
<tr>
<td>
**Train Accuracy =**
</td>
<td>
0.940741
</td>
</tr>
<tr>
<td>
**Test Accuracy =**
</td>
<td>
0.783333
</td>
</tr>
</table>
Congratulations! You have finished the assignment and built a model that recognizes SIGN language with almost 80% accuracy on the test set. If you wish, feel free to play around with this dataset further. You can actually improve its accuracy by spending more time tuning the hyperparameters, or using regularization (as this model clearly has a high variance).
Once again, here's a thumbs up for your work!
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.12/_downloads/plot_info.ipynb | bsd-3-clause | from __future__ import print_function
import mne
import os.path as op
"""
Explanation: .. _tut_info_objects:
The :class:Info <mne.Info> data structure
End of explanation
"""
# Read the info object from an example recording
info = mne.io.read_info(
op.join(mne.datasets.sample.data_path(), 'MEG', 'sample',
'sample_audvis_raw.fif'), verbose=False)
"""
Explanation: The :class:Info <mne.Info> data object is typically created
when data is imported into MNE-Python and contains details such as:
date, subject information, and other recording details
the sampling rate
information about the data channels (name, type, position, etc.)
digitized points
sensor–head coordinate transformation matrices
and so forth. See the :class:the API reference <mne.Info>
for a complete list of all data fields. Once created, this object is passed
around throughout the data analysis pipeline.
It behaves as a nested Python dictionary:
End of explanation
"""
print('Keys in info dictionary:\n', info.keys())
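# Because info behaves like a dictionary, individual fields can be read
# directly (sketch; 'nchan' and 'ch_names' are standard Info keys):
print(info['nchan'], 'channels, e.g.', info['ch_names'][:3])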
"""
Explanation: List all the fields in the info object
End of explanation
"""
print(info['sfreq'], 'Hz')
"""
Explanation: Obtain the sampling rate of the data
End of explanation
"""
print(info['chs'][0])
"""
Explanation: List all information about the first data channel
End of explanation
"""
channel_indices = mne.pick_channels(info['ch_names'], ['MEG 0312', 'EEG 005'])
"""
Explanation: .. _picking_channels:
Obtaining subsets of channels
There are a number of convenience functions to obtain channel indices, given
an :class:mne.Info object.
Get channel indices by name
End of explanation
"""
channel_indices = mne.pick_channels_regexp(info['ch_names'], 'MEG *')
"""
Explanation: Get channel indices by regular expression
End of explanation
"""
channel_indices = mne.pick_types(info, meg=True) # MEG only
channel_indices = mne.pick_types(info, eeg=True) # EEG only
"""
Explanation: Get channel indices by type
End of explanation
"""
channel_indices = mne.pick_types(info, meg='grad', eeg=True)
"""
Explanation: MEG gradiometers and EEG channels
End of explanation
"""
channel_indices_by_type = mne.io.pick.channel_indices_by_type(info)
print('The first three magnetometers:', channel_indices_by_type['mag'][:3])
"""
Explanation: Get a dictionary of channel indices, grouped by channel type
End of explanation
"""
# Channel type of a specific channel
channel_type = mne.io.pick.channel_type(info, 75)
print('Channel #75 is of type:', channel_type)
"""
Explanation: Obtaining information about channels
End of explanation
"""
meg_channels = mne.pick_types(info, meg=True)[:10]
channel_types = [mne.io.pick.channel_type(info, ch) for ch in meg_channels]
print('First 10 MEG channels are of type:\n', channel_types)
"""
Explanation: Channel types of a collection of channels
End of explanation
"""
# Only keep EEG channels
eeg_indices = mne.pick_types(info, meg=False, eeg=True)
reduced_info = mne.pick_info(info, eeg_indices)
print(reduced_info)
"""
Explanation: Dropping channels from an info structure
It is possible to limit the info structure to only include a subset of
channels with the :func:mne.pick_info function:
End of explanation
"""
|
robertoalotufo/ia898 | master/DemoPhaseCorrelation.ipynb | mit | import numpy as np
import sys,os
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
%matplotlib inline
import matplotlib.image as mpimg
#f = ia.normalize(ia.gaussian((151,151), [[75],[75]], [[800,0],[0,800]]), [0,200]).astype(uint8)
f = mpimg.imread("../data/astablet.tif")
H,W = f.shape
f = f[:,H//2:H//2+H]
#ia.adshow(ia.isolines(f,10,3), "Image in cartesian coordinates")
g = ia.polar(f,(150,200),2*np.pi)
ia.adshow(f)
ia.adshow(g)
#ia.adshow(ia.isolines(g.astype(int),10,3), "Image in polar coordinates")
#adshow(g, "Image in polar coordinates")
f1 = f
f2 = f.T[:,::-1]
g2 = ia.polar(f2,(150,200),2*np.pi)
ia.adshow(f2)
ia.adshow(g2)
nb = ia.nbshow(2)
nb.nbshow(g)
nb.nbshow(g2)
nb.nbshow()
h = ia.phasecorr(g,g2)
print(h.shape)
ia.adshow(ia.normalize(h))
i = np.argmax(h)
row,col = np.unravel_index(i,h.shape)
v = h[row,col]
print(np.array(g.shape) - np.array((row,col)))
print(v)
"""
Explanation: Demo Phase Correlation
Illustrate using Phase Correlation to estimate rotation and translation between images.
Description
In this lesson we explain how to use Phase Correlation to
estimate the angle of rotation and the translation between 2D
images.
Converting an image from Cartesian to Polar coordinates
The conversion maps the plane from coordinates $(x,y)$ to
$(\theta,r)$, with $x = r \cos \theta$ and $y = r \sin \theta$.
Notice that the domain in polar coordinates must be
given explicitly and influences the angular resolution.
End of explanation
"""
def rotphasecorr2d(f,h):
F = np.fft.fftn(f)
H = np.fft.fftn(h)
pF = ia.polar(ia.dftview(F),(F.shape[0]/2,360),np.pi)
pH = ia.polar(ia.dftview(H),(H.shape[0]/2,360),np.pi)
return ia.phasecorr(pF, pH)
"""
Explanation: Estimating the angle of rotation
The following function will be used to estimate the angle of rotation between 2D images.
End of explanation
"""
f = mpimg.imread("../data/cameraman.tif")
print(f.dtype)
t = np.zeros(np.array(f.shape)+200,dtype=np.uint8)
t[100:f.shape[0]+100,100:f.shape[1]+100] = f
f = t
t1 = np.array([
[1,0,-f.shape[0]/2.],
[0,1,-f.shape[1]/2.],
[0,0,1]]);
t2 = np.array([
[1,0,f.shape[0]/2.],
[0,1,f.shape[1]/2.],
[0,0,1]]);
theta = np.radians(30)
r1 = np.array([
[np.cos(theta),-np.sin(theta),0],
[np.sin(theta),np.cos(theta),0],
[0,0,1]]);
T = t2.dot(r1).dot(t1)
print(f.dtype)
f1 = ia.affine(f,T,0)
#f1.shape = f.shape
nb.nbshow(f, "f:Original image")
nb.nbshow(f1, "f1:Image rotated by 30°")
nb.nbshow()
nb = ia.nbshow(2)
F = np.fft.fftn(f)
F1 = np.fft.fftn(f1)
FS = ia.dftview(F)
F1S = ia.dftview(F1)
nb.nbshow(FS,'FS')
nb.nbshow(F1S,'F1S')
nb.nbshow()
pFS = ia.polar(FS,(FS.shape[0]//2,360),np.pi)
pF1S = ia.polar(F1S,(F1S.shape[0]//2,360),np.pi)
nb.nbshow(ia.normalize(pFS),'polar FS')
nb.nbshow(ia.normalize(pF1S),'polar F1S')
nb.nbshow()
pg = ia.phasecorr(pFS,pF1S)
ia.adshow(ia.normalize(pg))
peak = np.unravel_index(np.argmax(pg), pg.shape)
# Calculate the angle
ang = (float(peak[1])/pg.shape[1])*180
print(ang)
"""
Explanation: The function can be applied as follows.
End of explanation
"""
import scipy
def trphasecorr2d(f,h):
rg = ia.rotphasecorr2d(f,h)
    peak = np.unravel_index(np.argmax(rg), rg.shape)
ang = (float(peak[1])/rg.shape[1])*180
h_rot = scipy.ndimage.interpolation.rotate(h, -ang, reshape=False)
g = ia.phasecorr(f,h_rot)
return g, rg
"""
Explanation: Estimating the angle of rotation and the translation
Now we will compute both the angle of rotation and the translation. The function below first finds the
angle of rotation; after that, it rotates the image and finds the translation. Two phase correlation
maps are returned: one for the translation and the other for the rotation.
End of explanation
"""
t3 = np.array([
[1,0,50],
[0,1,32],
[0,0,1]]);
T = np.dot(t3,T)
h = ia.affine(f,T,0)
h.shape = f.shape
ia.adshow(f, "Original image")
ia.adshow(h, "Image rotated by 30° and translated by (50,32)")
g, rg = trphasecorr2d(f,h)
g = ia.normalize(g)
rg = ia.normalize(rg)
trans_peak = np.unravel_index(np.argmax(g), g.shape)
rot_peak = np.unravel_index(np.argmax(rg), rg.shape)
ang = (float(rot_peak[1])/rg.shape[1])*180
trans = (np.array(h.shape)-np.array(trans_peak))
ia.adshow(g, "Translation correlation map - Peak %s, \n corresponds to translation %s"%(str(trans_peak), str(tuple(trans))))
ia.adshow(ia.normalize(rg), "Rotation correlation map - Peak %s, corresponds to angle %f°"%(str(rot_peak),ang))
t4 = np.array([
[1,0,-trans[0]],
[0,1,-trans[1]],
[0,0,1]]);
theta1 = np.radians(-ang)
r2 = np.array([
[np.cos(theta1),-np.sin(theta1),0],
[np.sin(theta1),np.cos(theta1),0],
[0,0,1]]);
T1 = np.dot(t4, np.dot(t2, np.dot(r2, t1)))
f1 = ia.affine(h,T1,0)
f1.shape = h.shape
ia.adshow(f1, "Sample image rotated and translated by %f° and %s, respectively"%(-ang,tuple(-trans)))
"""
Explanation: The following code finds the angle of rotation and the translation. Then, the original image is obtained
from the rotated and translated sample image.
End of explanation
"""
|
openstreams/wflow | notebooks/wflow-reservoir.ipynb | gpl-3.0 | # First import the model. Here we use the HBV version
from wflow.wflow_sbm import *
import IPython
from IPython.display import display, clear_output
%pylab inline
#clear_output = IPython.core.display.clear_output
# Here we define a simple fictitious reservoir
reservoirstorage = 15000
def simplereservoir(inputq,storage):
K = 0.087
storage = storage + inputq
outflow = storage * K
storage = storage - outflow
return outflow, storage
"""
Explanation: Use of the wflow OpenStreams framework API to connect a reservoir model
http://ops-wflow.sourceforge.net/1.0RC7/
This IPython notebook demonstrates how to load an OpenStreams Python model, execute it step-by-step and investigate the (intermediate) results. It also shows how to re-route surface water through a reservoir model. The first step is to load the model and framework:
End of explanation
"""
# define start and stop time of the run
startTime = 1
stopTime = 200
currentTime = 1
# set runid, cl;onemap and casename. Also define the ini file
runId = "reservoirtest_1"
#configfile="wflow_hbv_mem.ini"
configfile="wflow_sbm.ini"
wflow_cloneMap = 'wflow_subcatch.map'
# the casename points to the complete model setup with both static and dynamic input
caseName="../examples/wflow_rhine_sbm/"
#make a usermodel object
myModel = WflowModel(wflow_cloneMap, caseName,runId,configfile)
# initialise the framework
dynModelFw = wf_DynamicFramework(myModel, stopTime,startTime)
dynModelFw.createRunId(NoOverWrite=False,level=logging.ERROR)
dynModelFw.setQuiet(1)
# Run the initial part of the model (reads parameters and sets initial values)
dynModelFw._runInitial() # Runs initial part
dynModelFw._runResume() # gets the state variables from disk
# Get list of variables supplied by the model
#print dynModelFw.wf_supplyVariableNamesAndRoles()
"""
Explanation: Set model run-time parameters
Set the:
start and stop time
set the runid (this is where the results are stored, relative to the casename)
set the name of the configfile (stored in the case directory)
set the clone map (usually the wflow_subcatch.map)
set the casename. This is where the complete model setup resides
End of explanation
"""
# A pit can be set in the ldd by specifying the direction 5
# (see pcraster.eu for the ldd direction conventions)
ret = dynModelFw.wf_setValueLdd("TopoLdd",5.0,8.40943,49.6682)
report(myModel.TopoLdd,"n_ldd.map")
"""
Explanation: Here we make a pit in the middle of the main river. This will be the inflow to the reservoir
End of explanation
"""
f, ax = plt.subplots(1,3,figsize=(14, 4))
plotar = []
plotarstorage = []
plotaroutflow = []
for ts in range(1,45):
# Add inflow to outflow downstream of the pit
# See the API setion of the INI file
# Get Q value at pit, the reservoir inflow
inflowQ = dynModelFw.wf_supplyScalar("SurfaceRunoff",8.40943,49.6682)
# save for plotting
plotar.append(inflowQ)
# Feed to the reservoir model
outflow, reservoirstorage = simplereservoir(inflowQ, reservoirstorage)
# save for plotting
plotarstorage.append(reservoirstorage)
plotaroutflow.append(outflow)
#dynModelFw._userModel().IF = cover(0.0)
dynModelFw.wf_setValue("IF", outflow ,8.40943,49.7085)
# update runoff ONLY NEEDED IF YOU FIDDLE WITH THE KIN_WAVE RESERVOIR
myModel.updateRunOff()
dynModelFw._runDynamic(ts,ts) # runs for this timesteps
# Now get some results for display
run = dynModelFw.wf_supplyMapAsNumpy("SurfaceRunoff")
uz = dynModelFw.wf_supplyMapAsNumpy("FirstZoneCapacity")
sm = dynModelFw.wf_supplyMapAsNumpy("UStoreDepth")
sm[sm == -999] = np.nan
uz[uz == -999] = np.nan
run[run == -999] = np.nan
ax[0].imshow(log(run))
ax[1].plot(plotarstorage,'k')
ax[1].set_title("Reservoir storage")
ax[2].plot(plotar,'b')
ax[2].plot(plotaroutflow,'r')
ax[2].set_title("Blue inflow, red outflow:" + str(ts))
clear_output()
display(f)
plt.close()
dynModelFw._runSuspend() # saves the state variables
dynModelFw._wf_shutdown()
imshow(dynModelFw.wf_supplyMapAsNumpy("SurfaceRunoff"))
"""
Explanation: Run for a number of timesteps
End of explanation
"""
|
jjonte/udacity-deeplearning-nd | py3/project-1/dlnd-your-first-neural-network.ipynb | unlicense | %matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
"""
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
"""
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
"""
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
"""
rides[:24*10].plot(x='dteday', y='cnt')
"""
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
"""
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
"""
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
"""
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
"""
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
"""
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
"""
Explanation: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
"""
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
"""
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
"""
class NeuralNetwork(object):
@staticmethod
def sigmoid(x):
return 1 / (1 + np.exp(-x))
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
self.activation_function = NeuralNetwork.sigmoid
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin=2).T
### Forward pass ###
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)
hidden_outputs = self.activation_function(hidden_inputs)
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
final_outputs = final_inputs
### Backward pass ###
output_errors = targets - final_outputs
hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors)
hidden_grad = hidden_outputs * (1 - hidden_outputs)
self.weights_hidden_to_output += self.lr * (output_errors * hidden_outputs).T
self.weights_input_to_hidden += self.lr * np.dot((hidden_errors * hidden_grad), inputs.T)
def run(self, inputs_list):
inputs = np.array(inputs_list, ndmin=2).T
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)
hidden_outputs = self.activation_function(hidden_inputs)
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
final_outputs = final_inputs
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
"""
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
"""
import sys
### Set the hyperparameters here ###
epochs = 1000
learning_rate = 0.1
hidden_nodes = 10
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
for record, target in zip(train_features.ix[batch].values,
train_targets.ix[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(ymax=0.5)
"""
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
"""
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
"""
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation
"""
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
"""
Explanation: Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter
Your answer below
It does pretty well up until Dec 22, then the accuracy drops dramatically: it predicts the demand to be higher than it really is for the last 10 days of the year. I would guess the impact of the Christmas holiday and the time people take off work around it causes this.
Unit tests
Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.
End of explanation
"""
|
eaton-lab/toytree | sandbox/SVG-animation-ideas.ipynb | bsd-3-clause | import numpy as np
import toyplot
#import toytree
import toyplot.svg
from IPython.display import SVG
"""
Explanation: Curved edges
It doesn't appear that toyplot has the functionality to do radial curvature of edges. I need to dive into the actual SVG code that it writes to check...
https://developer.mozilla.org/en-US/docs/Web/SVG/Tutorial/Paths
Can it be done using toyplot ellipses?
End of explanation
"""
%%HTML
<svg viewBox="0 0 100 20" xmlns="http://www.w3.org/2000/svg" overflow="auto" stroke="red">
<text x="15" y="23"> This text is wider than the SVG, so there should be a scrollbar shown.</text>
</svg>
%%SVG
<svg width='300' height='300' viewBox="0 0 10 10">
<rect width="10" height="10">
<animate attributeName="rx" values="0;10;0" dur="1s" repeatCount="indefinite" />
</rect>
</svg>
%%SVG
<svg width="300" height="300" >
<g class="general" stroke="green" stroke-width="3" fill="none">
<path d="M 100 100 L 150 100 L 150 200" stroke-opacity="0.3"/>
<path d="M 100 100 L 50 100 L 50 200" stroke-opacity="0.3"/>
<path d="M 100 100 C 150 100, 150 100, 150 150" stroke='blue' stroke-opacity='0.3'/>
<path d="M 100 100 C 50 100, 50 100, 50 200" stroke='blue' stroke-opacity='0.3'/>
</g>
</svg>
%%SVG
<svg width="300" height="300" >
<g class="general" stroke="green" stroke-width="3" fill="none">
<path d="M 100 100 L 100 50 L 200 50 "/>
<path d="M 100 100 L 100 150 L 200 150"/>
<path d="M 100 50 A 50 50, 0, 0, 0, 100 200" fill='grey' fill-opacity="0.2"/>
</g>
</svg>
"""
Explanation: Primer
M: move to. Moves cursor to this position.
L: line to. Draws line from cursor to this position.
C: Bezier curves (x1 y1, x2 y2, x y):
Q: Quadratic curve (x1 y1, x y):
A: Arc (rx ry x-axis-rotation large-arc-flag sweep-flag x y)
SEQVIEW ALIGN
to get overflow it needs to be set on a div or maybe g element not on the svg.
End of explanation
"""
%%SVG
<svg width="300" height="300" >
<g class="general" stroke="green" stroke-width="3" fill="none">
<path d="M 150 150 L 200 150" stroke="black"/>
<path d="M 200 150 A 50 50, 0, 0, 0, 150 100" stroke="blue"/>
<path d="M 150 100 A 50 50, 0, 0, 0, 100 150" stroke="green"/>
<path d="M 100 150 A 50 50, 0, 0, 0, 150 200" stroke="orange"/>
<path d="M 150 200 L 150 250" stroke="black"/>
<path d="M 150 250 A 100 100, 0, 0, 1, 50 150" stroke="red" />
<path d="M 50 150 A 100 100, 0, 0, 1, 150 50" stroke="indigo" />
<path d="M 150 50 A 100 100, 0, 0, 1, 150 250" stroke="indigo" />
</g>
</svg>
125 + 100, 175 + 100
100 + 100, 150 + 50
100 - 150, 100 - 50
125 - 175, 100 - 100
50 - 225, 100 - 100
175 / 2.
x = """
<svg width="320" height="320" xmlns="http://www.w3.org/2000/svg">
<path d=" M100 100 A 50 50 0 0 1 150 50" fill="yellow" fill-opacity='0.25' stroke='black'/>
<path d=" M125 100 A 25 25 0 0 1 175 100" fill='blue' fill-opacity='0.5' stroke='black' />
<path d=" M50 100 A 87 87 0 0 1 250 100" fill='blue' fill-opacity='0.5' stroke='black' />
<path d=" M 200 200 A 50 50 0 0 1 100 100 " fill="none" stroke="black" />
<path d=" M 100 100 A 50 50 0 0 0 200 200 " fill="none" stroke="black" />
<path d=" M 100 200 A 50 50 0 0 0 200 250 " fill="none" stroke="orange" />
<path d=" M 200 200 A 50 50 0 0 0 100 200 " fill="none" stroke="green" />
</svg>
"""
from IPython.display import SVG
SVG(x)
%%SVG
<svg width="320" height="320" xmlns="http://www.w3.org/2000/svg">
<path d="M 10 315
L 110 215
A 30 50 0 0 1 162.55 162.45
L 172.55 152.45
A 30 50 -45 0 1 215.1 109.9
L 315 10" stroke="black" fill="green" stroke-width="2" fill-opacity="0.5"/>
</svg>
"""
Explanation: GOAL FOR CIRCLE LAYOUT
End of explanation
"""
%%SVG
<svg width="300" height="300" >
<g class="general" stroke="green" stroke-width="3" fill="none">
<path d="M 150 150 L 200 150" stroke="black"/>
<path d="M 200 150 A 50 50, 0, 0, 0, 150 100" stroke="blue" stroke-width='10'/>
<path d="M 150 100 A 50 50, 0, 0, 0, 100 150" stroke="green" stroke-width='10'/>
<path d="M 100 150 A 50 50, 0, 0, 0, 150 200" stroke="orange" stroke-width='10'/>
<path d="M 150 200 A 50 50, 0, 0, 0, 200 150" stroke="violet" stroke-width='10'/>
</g>
</svg>
#SVG(x)
"""
Explanation: GOAL FOR PIE CHARTS
End of explanation
"""
verts = np.array([(0, 0), (1, 0), (-1, 0)])
edges = np.array([(0, 1), (1, 2)])
# set up the canvas
c = toyplot.Canvas(width=400, height=300);
a = c.cartesian()
# add straight edges
a.graph(
np.array([(0, 1)]),
vcoordinates=[(0, 0), (1, 0)],
vlshow=False,
);
# add curved edges
a.graph(
np.array([(0, 1)]),
vcoordinates=[(1, 0), (-1, 0)],
layout=toyplot.layout.IgnoreVertices(
edges=ArcsEdges((0, 0))),
vlshow=False,
);
"""
Explanation: Plan
Edges Class can be used to return ecoordinates and eshapes. Here I will set MA as opposed to MQ to indicate the use of SVG arc elements.
toyplot.mark seems to be where the MA info is expanded to make a path d="..." element. For curved edges this must incorporate the 'curvature' argument somehow, but maybe not, since it does not seem to work currently.
Like 'curvature', my Edge class should be able to build A elements from a single argument, origin, or alternatively, an x, y coordinate of the origin. From this it only needs to calculate the rx and ry (radius) and the sweep.
Curved edges
Currently the only option is curved edges, which uses bezier curves. This won't work, we need to use arcs.
End of explanation
"""
??toyplot.mark
import numpy
class ArcsEdges(toyplot.layout.EdgeLayout):
"""Creates curved edges as arcs on a circle.
Parameters
----------
origin: tuple
The origin is the x,y coordinates of the circle center.
"""
def __init__(self, origin):
self._origin = origin
def edges(self, vcoordinates, edges):
# check for loops
loops = edges.T[0] == edges.T[1]
if numpy.any(loops):
toyplot.log.warning(
"Graph contains %s loop edges that will not be visible.",
numpy.count_nonzero(loops))
# M will map start coords, A will map arc shape
eshapes = numpy.tile("MA", len(edges))
ecoordinates = numpy.empty((len(edges) * 3, 2))
# store start and end points
sources = vcoordinates[edges.T[0]]
targets = vcoordinates[edges.T[1]]
# calculate midpoints of arcs (TODO)
offsets = numpy.dot(targets - sources, [[0, 1], [-1, 0]]) * self._origin[0]
midpoints = ((sources + targets) * 0.5) + offsets
ecoordinates[0::3] = sources
ecoordinates[1::3] = midpoints
ecoordinates[2::3] = targets
return eshapes, ecoordinates
    def get_path(self, x0, y0, x1, y1, rx, ry):
        """
        Build the SVG path string for a single arc edge from (x0, y0) to
        (x1, y1). The sweep-flag determines if the arc should begin moving
        at positive angles or negative angles.
        """
        # orientation depends on x-axis (no rotation)
        sweep = 0
        if y1 - y0:
            sweep = 1
        # the svg path string expanded
        path = "M {x0} {y0} A {rx} {ry}, 0, 0, {sweep}, {x1} {y1}"
        return path.format(x0=x0, y0=y0, rx=rx, ry=ry, sweep=sweep, x1=x1, y1=y1)
"""
Explanation: Create an ArcEdges class similar to CurvedEdges
It takes an origin argument from which the radius can always be calculated, and it will also determine the sweep-flag automatically.
End of explanation
"""
toyplot.svg.render(c, "test.svg")
svg = toyplot.svg.render(c)
html = toyplot.html.render(c, "test.html")
import xml.etree.ElementTree as ET
svg
svg.tag, svg.attrib
for child in svg:
print(child.tag, child.attrib)
for item in svg.iter('g'):
print(item.attrib)
for country in svg.findall('g'):
print(country)
"""
Explanation: Parse SVG with xml.etree.ElementTree
End of explanation
"""
|
kit-cel/wt | ccgbc/ch4_LDPC_Analysis/LDPC_Optimization_BEC.ipynb | gpl-2.0 | import cvxpy as cp
import numpy as np
import matplotlib.pyplot as plot
from ipywidgets import interactive
import ipywidgets as widgets
import math
%matplotlib inline
"""
Explanation: Optimization of Degree Distributions on the BEC
This code is provided as supplementary material of the lecture Channel Coding 2 - Advanced Methods.
This code illustrates
* Using linear programming to optimize degree distributions on the BEC
End of explanation
"""
# returns rho polynomial (highest exponents first) corresponding to average check node degree c_avg
def c_avg_to_rho(c_avg):
ct = math.floor(c_avg)
r1 = ct*(ct+1-c_avg)/c_avg
r2 = (c_avg - ct*(ct+1-c_avg))/c_avg
rho_poly = np.concatenate(([r2,r1], np.zeros(ct-1)))
return rho_poly
"""
Explanation: We specify the check node degree distribution polynomial $\rho(Z)$ by fixing the average check node degree $d_{\mathtt{c},\text{avg}}$ and assuming that the code contains only check nodes with degrees $\tilde{d}_{\mathtt{c}} := \lfloor d_{\mathtt{c},\text{avg}}\rfloor$ and $\tilde{d}_{\mathtt{c}}+1$. This is the so-called check-concentrated degree distribution. As shown in the lecture, we have:
$$
\rho(Z) = \frac{\tilde{d}_{\mathtt{c}}(\tilde{d}_{\mathtt{c}}+1-d_{\mathtt{c},\text{avg}})}{d_{\mathtt{c},\text{avg}}}Z^{\tilde{d}_{\mathtt{c}}-1} + \frac{d_{\mathtt{c},\text{avg}}-\tilde{d}_{\mathtt{c}}(\tilde{d}_{\mathtt{c}}+1-d_{\mathtt{c},\text{avg}})}{d_{\mathtt{c},\text{avg}}}Z^{\tilde{d}_{\mathtt{c}}}
$$
The following function converts $d_{\mathtt{c},\text{avg}}$ into a polynomial $\rho(Z)$ which is given as an array where the first entry corresponds to the largest exponents and the last entry corresponds to the constant part.
End of explanation
"""
def find_best_lambda(epsilon, v_max, c_avg):
rho = c_avg_to_rho(c_avg)
# quantization of fixed-point condition
D = 500
xi_range = np.arange(1.0, D+1, 1)/D
# Variable to optimize is lambda with v_max entries
v_lambda = cp.Variable(shape=v_max)
# objective function
cv = 1/np.arange(v_max,0,-1)
objective = cp.Maximize(v_lambda @ cv)
# constraints
# constraint 1, v_lambda are fractions between 0 and 1 and sum up to 1
constraints = [cp.sum(v_lambda) == 1, v_lambda >= 0]
# constraint 2, no variable nodes of degree 1
constraints += [v_lambda[v_max-1] == 0]
# constraints 3, fixed point condition for all the descrete xi values (a total number of D, for each \xi)
for xi in xi_range:
constraints += [v_lambda @ [epsilon * (1-np.polyval(rho,1.0-xi))**(v_max-1-j) for j in range(v_max)] - xi <= 0]
# constraint 4, stability condition
constraints += [v_lambda[v_max-2] <= 1/epsilon/np.polyval(np.polyder(rho),1.0)]
# set up the problem and solve
problem = cp.Problem(objective, constraints)
problem.solve()
if problem.status == "optimal":
r_lambda = v_lambda.value
# remove entries close to zero and renormalize
r_lambda[r_lambda <= 1e-7] = 0
r_lambda = r_lambda / sum(r_lambda)
else:
r_lambda = np.array([])
return r_lambda
"""
Explanation: The following function solves the optimization problem that returns the best $\lambda(Z)$ for a given BEC erasure probability $\epsilon$, for an average check node degree $d_{\mathtt{c},\text{avg}}$, and for a maximum variable node degree $d_{\mathtt{v},\max}$. This optimization problem is derived in the lecture as
$$
\begin{aligned}
& \underset{\lambda_1,\ldots,\lambda_{d_{\mathtt{v},\max}}}{\text{maximize}} & & \sum_{i=1}^{d_{\mathtt{v},\max}}\frac{\lambda_i}{i} \\
& \text{subject to} & & \lambda_1 = 0 \\
& & & \lambda_i \geq 0, \quad \forall i \in \{2,3,\ldots,d_{\mathtt{v},\max}\} \\
& & & \sum_{i=2}^{d_{\mathtt{v},\max}}\lambda_i = 1 \\
& & & \sum_{i=2}^{d_{\mathtt{v},\max}}\lambda_i\cdot \epsilon(1-\rho(1-\tilde{\xi}_j))^{i-1}-\tilde{\xi}_j \leq 0,\quad \forall j \in \{1,\ldots, D\} \\
& & & \lambda_2 \leq \frac{1}{\epsilon\rho^\prime(1)} = \frac{1}{\epsilon\sum_{i=2}^{d_{\mathtt{c},\max}}(i-1)\rho_i}
\end{aligned}
$$
If this optimization problem is feasible, then the function returns the polynomial $\lambda(Z)$ as a coefficient array where the first entry corresponds to the largest exponent ($\lambda_{d_{\mathtt{v},\max}}$) and the last entry to the lowest exponent ($\lambda_1$). If the optimization problem has no solution (e.g., it is unfeasible), then the empty vector is returned.
End of explanation
"""
best_lambda = find_best_lambda(0.2949219, 16, 12.98)
print(np.poly1d(best_lambda, variable='Z'))
"""
Explanation: As an example, we consider the case of optimization carried out in the lecture after 9 iterations, where we have $\epsilon = 0.2949219$ and $d_{\mathtt{c},\text{avg}} = 12.98$ with $d_{\mathtt{v},\max}=16$
End of explanation
"""
def best_lambda_interactive(epsilon, c_avg, v_max):
# get lambda and rho polynomial from optimization and from c_avg, respectively
p_lambda = find_best_lambda(epsilon, v_max, c_avg)
p_rho = c_avg_to_rho(c_avg)
# if optimization successful, compute rate and show plot
if p_lambda.size == 0:
print('Optimization infeasible, no solution found')
else:
design_rate = 1 - np.polyval(np.polyint(p_rho),1)/np.polyval(np.polyint(p_lambda),1)
if design_rate <= 0:
print('Optimization feasible, but no code with positive rate found')
else:
print("Lambda polynomial:")
print(np.poly1d(p_lambda, variable='Z'))
print("Design rate r_d = %1.3f" % design_rate)
# Plot EXIT-Chart
print("EXIT Chart:")
plot.figure(3)
x = np.linspace(0, 1, num=100)
y_v = [1 - epsilon*np.polyval(p_lambda, 1-xv) for xv in x]
y_c = [np.polyval(p_rho,xv) for xv in x]
plot.plot(x, y_v, '#7030A0')
plot.plot(y_c, x, '#008000')
plot.axis('equal')
plot.gca().set_aspect('equal', adjustable='box')
plot.xlim(0,1)
plot.ylim(0,1)
plot.xlabel('$I^{[A,V]}$, $I^{[E,C]}$')
plot.ylabel('$I^{[E,V]}$, $I^{[A,C]}$')
plot.grid()
plot.show()
interactive_plot = interactive(best_lambda_interactive, \
epsilon=widgets.FloatSlider(min=0.01,max=1,step=0.001,value=0.5, continuous_update=False, description=r'\(\epsilon\)',layout=widgets.Layout(width='50%')), \
c_avg = widgets.FloatSlider(min=3,max=20,step=0.1,value=4, continuous_update=False, description=r'\(d_{\mathtt{c},\text{avg}}\)'), \
v_max = widgets.IntSlider(min=3, max=20, step=1, value=16, continuous_update=False, description=r'\(d_{\mathtt{v},\max}\)'))
output = interactive_plot.children[-1]
output.layout.height = '400px'
interactive_plot
"""
Explanation: In the following, we provide an interactive widget that allows you to choose the parameters of the optimization yourself and get the best possible $\lambda(Z)$. Additionally, the EXIT chart is plotted to visualize the good fit of the obtained degree distribution.
End of explanation
"""
def find_best_rate(epsilon, v_max, c_max):
c_range = np.linspace(3, c_max, num=100)
rates = np.zeros_like(c_range)
# loop over all c_avg, add progress bar
f = widgets.FloatProgress(min=0, max=np.size(c_range))
display(f)
for index,c_avg in enumerate(c_range):
f.value += 1
p_lambda = find_best_lambda(epsilon, v_max, c_avg)
p_rho = c_avg_to_rho(c_avg)
if np.array(p_lambda).size > 0:
design_rate = 1 - np.polyval(np.polyint(p_rho),1)/np.polyval(np.polyint(p_lambda),1)
if design_rate >= 0:
rates[index] = design_rate
# find largest rate
largest_rate_index = np.argmax(rates)
best_lambda = find_best_lambda(epsilon, v_max, c_range[largest_rate_index])
print("Found best code of rate %1.3f for average check node degree of %1.2f" % (rates[largest_rate_index], c_range[largest_rate_index]))
print("Corresponding lambda polynomial")
print(np.poly1d(best_lambda, variable='Z'))
# Plot curve with all obtained results
plot.figure(4, figsize=(10,3))
plot.plot(c_range, rates, 'b')
plot.plot(c_range[largest_rate_index], rates[largest_rate_index], 'bs')
plot.xlim(3, c_max)
plot.ylim(0, (1.1*(1-epsilon)))
plot.xlabel('$d_{c,avg}$')
plot.ylabel('design rate $r_d$')
plot.grid()
plot.show()
return rates[largest_rate_index]
interactive_optim = interactive(find_best_rate, \
epsilon=widgets.FloatSlider(min=0.01,max=1,step=0.001,value=0.5, continuous_update=False, description=r'\(\epsilon\)',layout=widgets.Layout(width='50%')), \
v_max = widgets.IntSlider(min=3, max=20, step=1, value=16, continuous_update=False, description=r'\(d_{\mathtt{v},\max}\)'), \
c_max = widgets.IntSlider(min=3, max=40, step=1, value=22, continuous_update=False, description=r'\(d_{\mathtt{c},\max}\)'))
output = interactive_optim.children[-1]
output.layout.height = '400px'
interactive_optim
"""
Explanation: Now, we carry out the optimization over a wide range of $d_{\mathtt{c},\text{avg}}$ values for a given $\epsilon$ and find the largest possible rate.
End of explanation
"""
target_rate = 0.7
dv_max = 16
dc_max = 22
T_Delta = 0.001
epsilon = 0.5
Delta_epsilon = 0.5
while Delta_epsilon >= T_Delta:
print('Running optimization for epsilon = %1.5f' % epsilon)
rate = find_best_rate(epsilon, dv_max, dc_max)
if rate > target_rate:
epsilon = epsilon + Delta_epsilon / 2
else:
epsilon = epsilon - Delta_epsilon / 2
Delta_epsilon = Delta_epsilon / 2
"""
Explanation: Run binary search to find best irregular code for a given target rate on the BEC.
End of explanation
"""
|
patrick-kidger/diffrax | examples/symbolic_regression.ipynb | apache-2.0 | import tempfile
from typing import List
import equinox as eqx # https://github.com/patrick-kidger/equinox
import jax
import jax.numpy as jnp
import optax # https://github.com/deepmind/optax
import pysr # https://github.com/MilesCranmer/PySR
import sympy
# Note that PySR, which we use for symbolic regression, uses Julia as a backend.
# You'll need to install a recent version of Julia if you don't have one.
# (And can get funny errors if you have a too-old version of Julia already.)
# You may also need to restart Python after running `pysr.install()` the first time.
pysr.silence_julia_warning()
pysr.install(quiet=True)
"""
Explanation: Symbolic Regression
This example combines neural differential equations with regularised evolution to discover the equations
$\frac{\mathrm{d} x}{\mathrm{d} t}(t) = \frac{y(t)}{1 + y(t)}$
$\frac{\mathrm{d} y}{\mathrm{d} t}(t) = \frac{-x(t)}{1 + x(t)}$
directly from data.
References:
This example appears as an example in:
bibtex
@phdthesis{kidger2021on,
title={{O}n {N}eural {D}ifferential {E}quations},
author={Patrick Kidger},
year={2021},
school={University of Oxford},
}
Whilst drawing heavy inspiration from:
```bibtex
@inproceedings{cranmer2020discovering,
title={{D}iscovering {S}ymbolic {M}odels from {D}eep {L}earning with {I}nductive
{B}iases},
author={Cranmer, Miles and Sanchez Gonzalez, Alvaro and Battaglia, Peter and
Xu, Rui and Cranmer, Kyle and Spergel, David and Ho, Shirley},
booktitle={Advances in Neural Information Processing Systems},
publisher={Curran Associates, Inc.},
year={2020},
}
@software{cranmer2020pysr,
title={PySR: Fast \& Parallelized Symbolic Regression in Python/Julia},
author={Miles Cranmer},
publisher={Zenodo},
url={http://doi.org/10.5281/zenodo.4041459},
year={2020},
}
```
This example is available as a Jupyter notebook here.
End of explanation
"""
def quantise(expr, quantise_to):
if isinstance(expr, sympy.Float):
return expr.func(round(float(expr) / quantise_to) * quantise_to)
elif isinstance(expr, sympy.Symbol):
return expr
else:
return expr.func(*[quantise(arg, quantise_to) for arg in expr.args])
class SymbolicFn(eqx.Module):
fn: callable
parameters: jnp.ndarray
def __call__(self, x):
# Dummy batch/unbatching. PySR assumes its JAX'd symbolic functions act on
# tensors with a single batch dimension.
return jnp.squeeze(self.fn(x[None], self.parameters))
class Stack(eqx.Module):
modules: List[eqx.Module]
def __call__(self, x):
return jnp.stack([module(x) for module in self.modules], axis=-1)
def expr_size(expr):
return sum(expr_size(v) for v in expr.args) + 1
def _replace_parameters(expr, parameters, i_ref):
if isinstance(expr, sympy.Float):
i_ref[0] += 1
return expr.func(parameters[i_ref[0]])
elif isinstance(expr, sympy.Symbol):
return expr
else:
return expr.func(
*[_replace_parameters(arg, parameters, i_ref) for arg in expr.args]
)
def replace_parameters(expr, parameters):
i_ref = [-1] # Distinctly sketchy approach to making this conversion.
return _replace_parameters(expr, parameters, i_ref)
"""
Explanation: Now for a bunch of helpers. We'll use these in a moment; skip over them for now.
End of explanation
"""
def main(
symbolic_dataset_size=2000,
symbolic_num_populations=100,
symbolic_population_size=20,
symbolic_migration_steps=4,
symbolic_mutation_steps=30,
symbolic_descent_steps=50,
pareto_coefficient=2,
fine_tuning_steps=500,
fine_tuning_lr=3e-3,
quantise_to=0.01,
):
#
# First obtain a neural approximation to the dynamics.
# We begin by running the previous example.
#
# Runs the Neural ODE example.
# This defines the variables `ts`, `ys`, `model`.
print("Training neural differential equation.")
%run neural_ode.ipynb
#
# Now symbolically regress across the learnt vector field, to obtain a Pareto
# frontier of symbolic equations, that trades loss against complexity of the
# equation. Select the "best" from this frontier.
#
print("Symbolically regressing across the vector field.")
vector_field = model.func.mlp # noqa: F821
dataset_size, length_size, data_size = ys.shape # noqa: F821
in_ = ys.reshape(dataset_size * length_size, data_size) # noqa: F821
in_ = in_[:symbolic_dataset_size]
out = jax.vmap(vector_field)(in_)
with tempfile.TemporaryDirectory() as tempdir:
symbolic_regressor = pysr.PySRRegressor(
niterations=symbolic_migration_steps,
ncyclesperiteration=symbolic_mutation_steps,
populations=symbolic_num_populations,
npop=symbolic_population_size,
optimizer_iterations=symbolic_descent_steps,
optimizer_nrestarts=1,
procs=1,
verbosity=0,
tempdir=tempdir,
temp_equation_file=True,
output_jax_format=True,
)
symbolic_regressor.fit(in_, out)
best_equations = symbolic_regressor.get_best()
expressions = [b.sympy_format for b in best_equations]
symbolic_fns = [
SymbolicFn(b.jax_format["callable"], b.jax_format["parameters"])
for b in best_equations
]
#
# Now the constants in this expression have been optimised for regressing across
# the neural vector field. This was good enough to obtain the symbolic expression,
# but won't quite be perfect -- some of the constants will be slightly off.
#
# To fix this we now plug our symbolic function back into the original dataset
# and apply gradient descent.
#
print("Optimising symbolic expression.")
symbolic_fn = Stack(symbolic_fns)
flat, treedef = jax.tree_flatten(
model, is_leaf=lambda x: x is model.func.mlp # noqa: F821
)
flat = [symbolic_fn if f is model.func.mlp else f for f in flat] # noqa: F821
symbolic_model = jax.tree_unflatten(treedef, flat)
@eqx.filter_grad
def grad_loss(symbolic_model):
vmap_model = jax.vmap(symbolic_model, in_axes=(None, 0))
pred_ys = vmap_model(ts, ys[:, 0]) # noqa: F821
return jnp.mean((ys - pred_ys) ** 2) # noqa: F821
optim = optax.adam(fine_tuning_lr)
opt_state = optim.init(eqx.filter(symbolic_model, eqx.is_inexact_array))
@eqx.filter_jit
def make_step(symbolic_model, opt_state):
grads = grad_loss(symbolic_model)
updates, opt_state = optim.update(grads, opt_state)
symbolic_model = eqx.apply_updates(symbolic_model, updates)
return symbolic_model, opt_state
for _ in range(fine_tuning_steps):
symbolic_model, opt_state = make_step(symbolic_model, opt_state)
#
# Finally we round each constant to the nearest multiple of `quantise_to`.
#
trained_expressions = []
for module, expression in zip(symbolic_model.func.mlp.modules, expressions):
expression = replace_parameters(expression, module.parameters.tolist())
expression = quantise(expression, quantise_to)
trained_expressions.append(expression)
print(f"Expressions found: {trained_expressions}")
main()
"""
Explanation: Okay, let's get started.
We start by running the Neural ODE example.
Then we extract the learnt neural vector field, and symbolically regress across this.
Finally we fine-tune the resulting symbolic expression.
End of explanation
"""
|
harmsm/pythonic-science | chapters/01_simulation/01_scipy-stats_key.ipynb | unlicense | x = np.arange(-10,10,0.2)
y = np.cos(x)
noisy_y = y + np.random.normal(0,0.3,len(y))
plt.plot(x,y)
plt.plot(x,noisy_y)
"""
Explanation: <font style="margin:auto">
<img src="https://s-media-cache-ak0.pinimg.com/originals/33/07/24/330724abbfde900c94af94ed0fbc5f9f.jpg" height="85%" width="85%" />
</font>
<ul>
<li><code class="python">np.random.seed</code><div class="fragment" style="color:blue">Sets seed to allow reproducible randomness</div></li>
<li> List and array operations:
<ul>
<li><code class="python">np.random.choice,np.random.shuffle</code></li>
</ul>
<div class="fragment" style="color:blue">Shuffle or choose random entries from lists</div>
</li>
<li> Distributions:
<ul>
<li><code class="python">np.random.normal,np.random.binomial,</code></li>
<li><code class="python">np.random.uniform,np.random.poisson</code></li>
</ul>
<div class="fragment" style="color:blue">Sample random numbers from distributions</div>
</li>
<li><code class="python">plt.hist</code> <div class="fragment" style="color:blue">Plots histograms</div></li>
</ul>
So what is all of this useful for, anyway?
Adding Noise
If you are developing an analysis pipeline, you probably want to simulate the noise you expect from your experimental data.
Quick noise example
End of explanation
"""
wildtype = np.random.normal(5,1.4,5)
mutant = np.random.normal(5.5,1.4,5)
print(wildtype)
print(mutant)
"""
Explanation: Simulating sampling
You might want to do an experiment computationally before you actually do the experiment to make sure you'll be able to detect what you want to detect
You are measuring the length of microtubule bundles in S. pombe yeast.
The average length of these bundles is $5.00 \pm 1.4 \mu m$.
You introduce a mutation and expect the microtubules will now be longer: $5.5 \pm 1.4 \mu m$.
You only have time to measure bundle length for 5 wildtype and 5 mutant cells.
Assuming your expectation is right, will you be able to tell that the mutant had any effect?
Simulate the sampling
5 samples from $5 \pm 1.4$
5 samples from $5.5 \pm 1.4$
End of explanation
"""
import scipy.stats
"""
Explanation: How do we test to see if these are different?
End of explanation
"""
d = scipy.stats.ttest_ind(mutant,wildtype)
d.pvalue
n_list = [5,10,50,100,500,1000]
p_list = []
for n in n_list:
wildtype = np.random.normal(5,1.4,n)
mutant = np.random.normal(5.5,1.4,n)
d = scipy.stats.ttest_ind(mutant,wildtype)
p_list.append(d.pvalue)
plt.plot(n_list,p_list,"-")
"""
Explanation: Figure out how to use scipy.stats.ttest_ind.
+ Determine the p-value for a t-test between your 5 wildtype and 5 mutant measurements.
+ Can you figure out how many samples you need to measure to reliably get a p-value < 0.05 for this expected difference in means?
End of explanation
"""
d = scipy.stats.norm()
x = np.arange(-5,5,0.01)
prob_density = d.pdf(x)
cum_density = d.cdf(x)
def plot_distrib(d,x,name):
"""
Function that plots the probability density and cumulative density
functions of a distribution over the range defined in x.
"""
fig, ax = plt.subplots(1,2,figsize=(12,5))
ax[0].plot(x,d.pdf(x),"k-")
ax[0].set_title("Probability Density Function ({})".format(name))
ax[0].set_xlabel("x")
ax[0].set_ylabel("P(X == x)")
ax[1].plot(x,d.cdf(x),"k-")
ax[1].set_title("Cumulative Density Function ({})".format(name))
ax[1].set_xlabel("x")
    ax[1].set_ylabel("P(X $\leq$ x)")
bottom = np.arange(np.min(x),d.interval(0.95)[0],0.01)
top = np.arange(d.interval(0.95)[1],np.max(x),0.01)
ax[0].fill_between(bottom,d.pdf(bottom),color="gray")
ax[0].fill_between(top,d.pdf(top),color="gray")
plot_distrib(d,x,"Normal $\mu = 0$, $\sigma = 1$")
pareto = scipy.stats.pareto(1)
x = range(1,50)
plot_distrib(pareto,x,"Pareto c = 1")
"""
Explanation: Stats provides a wide variety of statistical tests
t-test: scipy.stats.ttest_ind
One-way ANOVA: scipy.stats.f_oneway
Wilcoxon Rank: scipy.stats.ranksums
$\chi^{2}$: scipy.stats.chisquare
Pearson's Correlation: scipy.stats.pearsonr
Stats also provides access to probability distributions
End of explanation
"""
|
scikit-rf/examples | metrology/Measuring a Mutiport Device with a 2-Port Network Analyzer.ipynb | bsd-3-clause | import skrf as rf
from itertools import combinations
"""
Explanation: Measuring a Multiport Device with a 2-Port Network Analyzer
Introduction
This notebook demonstrates a numerical test of the technique described in
"A Rigorous Technique for Measuring the Scattering Matrix of a Multiport Device with a 2-Port Network Analyzer" [1].
In microwave measurements, one commonly needs to measure an n-port device with an m-port network analyzer ($ m<n $ of course). Generally, this is done by terminating each non-measured port with a matched load, and assuming the reflected power is negligible. However, in some cases this may not provide the most accurate results, or even be possible in all measurement environments. The paper above presents an elegant solution to this problem, using impedance renormalization. We'll call it Tippet's technique, because it has a good ring to it.
In Tippet's technique, several sub-networks are measured in a similar way as before, but the port terminations are not assumed to be matched. Instead, the terminations just have to be known and no more than one can be completely reflective. So, in general $|\Gamma| \ne 1$. During measurements, each port is terminated with a consistent termination. So port 1 is always terminated with $Z_1$ when not being measured. Once measured, each sub-network is re-normalized to these port impedances. Think about that. Finally the composite network is constructed, and may then be re-normalized to the desired system impedance, say $50\,\Omega$.
[1] J. C. Tippet and R. A. Speciale, “A Rigorous Technique for Measuring the Scattering Matrix of a Multiport Device with a 2-Port Network Analyzer,” IEEE Transactions on Microwave Theory and Techniques, vol. 30, no. 5, pp. 661–666, May 1982.
Outline of Tippet's Technique
Following the example given in [1], measuring a 4-port network with a 2-port network analyzer.
An outline of the technique:
Calibrate 2-port network analyzer
Get four known terminations ($Z_1, Z_2, Z_3,Z_4$). No more than one can have $|\Gamma| = 1$
Measure all combinations of 2-port subnetworks (there are 6). Each port not currently being measured must be terminated with its corresponding load.
Renormalize each subnetwork to the impedances of the loads used to terminate it when not being measured.
Build composite 4-port, renormalize to VNA impedance.
Implementation
End of explanation
"""
wg = rf.wr10
wg.frequency.npoints = 101
"""
Explanation: First, we create a Media object, which is used to generate networks for testing. We will use WR-10 Rectangular waveguide.
End of explanation
"""
dut = wg.random(n_ports = 4,name= 'dut')
dut
"""
Explanation: Next, lets generate a random 4-port network which will be the DUT, that we are trying to measure with out 2-port network analyzer.
End of explanation
"""
loads = [wg.load(.1+.1j),
wg.load(.2-.2j),
wg.load(.3+.3j),
wg.load(.5),
]
# construct the impedance array, of shape FXN
z_loads = array([k.z.flatten() for k in loads]).T
"""
Explanation: Now, we need to define the loads used to terminate each port when it is not being measured. Note that, as described in [1], no more than one can have full reflection, $|\Gamma| = 1$.
End of explanation
"""
ports = arange(dut.nports)
port_combos = list(combinations(ports, 2))
port_combos
"""
Explanation: Create required measurement port combinations. There are 6 different measurements required to measure a 4-port with a 2-port VNA. In general, #measurements = $n\choose 2$, for n-port DUT on a 2-port VNA.
End of explanation
"""
composite = wg.match(nports = 4) # composite network, to be filled.
measured,measured_renorm = {},{} # measured subnetworks and renormalized sub-networks
# ports `a` and `b` are the ports we will connect the VNA too
for a,b in port_combos:
# port `c` and `d` are the ports which we will connect the loads too
c,d =ports[(ports!=a)& (ports!=b)]
# determine where `d` will be on four_port, after its reduced to a three_port
e = where(ports[ports!=c]==d)[0][0]
# connect loads
three_port = rf.connect(dut,c, loads[c],0)
two_port = rf.connect(three_port,e, loads[d],0)
# save raw and renormalized 2-port subnetworks
measured['%i%i'%(a,b)] = two_port.copy()
two_port.renormalize(c_[z_loads[:,a],z_loads[:,b]])
measured_renorm['%i%i'%(a,b)] = two_port.copy()
# stuff this 2-port into the composite 4-port
for i,m in enumerate([a,b]):
for j,n in enumerate([a,b]):
composite.s[:,m,n] = two_port.s[:,i,j]
# properly copy the port impedances
composite.z0[:,a] = two_port.z0[:,0]
composite.z0[:,b] = two_port.z0[:,1]
# finally renormalize from
composite.renormalize(50)
"""
Explanation: Now to do it. Ok we loop over the port combo's and connect the loads to the right places, simulating actual measurements. Each raw subnetwork measurement is saved, along with the renormalized subnetwork. Finally, we stuff the result into the 4-port composit network.
End of explanation
"""
measured_renorm
"""
Explanation: Results
Self-Consistency
Note that 6 measurements of 2-port subnetworks works out to 24 s-parameters, and we only need 16. This is because each reflect s-parameter is measured three times. As in [1], we will use this redundant measurement as a check of our accuracy.
The renormalized networks are stored in a dictionary with names based on their port indices. From this you can see that each has been renormalized to the appropriate z0.
End of explanation
"""
s11_set = rf.NS([measured[k] for k in measured if k[0]=='0'])
figure(figsize = (8,4))
subplot(121)
s11_set .plot_s_db(0,0)
subplot(122)
s11_set .plot_s_deg(0,0)
tight_layout()
"""
Explanation: Plotting all three raw measurements of $S_{11}$, we can see that they are not in agreement. These plots correspond to plots 5 and 7 of [1]
End of explanation
"""
s11_set = rf.NS([measured_renorm[k] for k in measured_renorm if k[0]=='0'])
figure(figsize = (8,4))
subplot(121)
s11_set .plot_s_db(0,0)
subplot(122)
s11_set .plot_s_deg(0,0)
tight_layout()
"""
Explanation: However, the renormalized measurements agree perfectly. These plots correspond to plots 6 and 8 of [1]
End of explanation
"""
composite == dut
"""
Explanation: Test For Accuracy
Making sure our composite network is the same as our DUT
End of explanation
"""
sum((composite - dut).s_mag)
"""
Explanation: Nice! How close?
End of explanation
"""
def tippits(dut, gamma, noise=None):
'''
simulate tippits technique on a 4-port dut.
'''
ports = arange(dut.nports)
port_combos = list(combinations(ports, 2))
loads = [wg.load(gamma) for k in ports]
# construct the impedance array, of shape FXN
z_loads = array([k.z.flatten() for k in loads]).T
composite = wg.match(nports = dut.nports) # composite network, to be filled.
#measured,measured_renorm = {},{} # measured subnetworks and renormalized sub-networks
# ports `a` and `b` are the ports we will connect the VNA too
for a,b in port_combos:
# port `c` and `d` are the ports which we will connect the loads too
c,d =ports[(ports!=a)& (ports!=b)]
# determine where `d` will be on four_port, after its reduced to a three_port
e = where(ports[ports!=c]==d)[0][0]
# connect loads
three_port = rf.connect(dut,c, loads[c],0)
two_port = rf.connect(three_port,e, loads[d],0)
if noise is not None:
two_port.add_noise_polar(*noise)
# save raw and renormalized 2-port subnetworks
measured['%i%i'%(a,b)] = two_port.copy()
two_port.renormalize(c_[z_loads[:,a],z_loads[:,b]])
measured_renorm['%i%i'%(a,b)] = two_port.copy()
# stuff this 2-port into the composite 4-port
for i,m in enumerate([a,b]):
for j,n in enumerate([a,b]):
composite.s[:,m,n] = two_port.s[:,i,j]
# properly copy the port impedances
composite.z0[:,a] = two_port.z0[:,0]
composite.z0[:,b] = two_port.z0[:,1]
# finally renormalize from
composite.renormalize(50)
return composite
wg.frequency.npoints = 11
dut = wg.random(4)
#er = lambda gamma: mean((tippits(dut,gamma)-dut).s_mag)/mean(dut.s_mag)
def er(gamma, *args):
return max(abs(tippits(dut, rf.db_2_mag(gamma),*args).s_db-dut.s_db).flatten())
gammas = linspace(-80,0,11)
title('Error vs $|\Gamma|$')
plot(gammas, [er(k) for k in gammas])
plot(gammas, [er(k) for k in gammas])
semilogy()
xlabel('$|\Gamma|$ of Loads (dB)')
ylabel('Max Error in DUT\'s dB(S)')
figure()
#er = lambda gamma: max(abs(tippits(dut,gamma,(1e-5,.1)).s_db-dut.s_db).flatten())
noise = (1e-5,.1)
title('Error vs $|\Gamma|$ with reasonable noise')
plot(gammas, [er(k, noise) for k in gammas])
plot(gammas, [er(k,noise) for k in gammas])
semilogy()
xlabel('$|\Gamma|$ of Loads (dB)')
ylabel('Max Error in DUT\'s dB(S)')
"""
Explanation: Dang!
Practical Application
This could be used in many ways. In waveguide, one could just make a measurement of a radiating open after a standard two-port calibration (like TRL). Then, using Tippets technique, you can leave each port wide open while it is not being measured. This way you don't have to buy a bunch of loads. How sweet would that be?
More Complex Simulations
End of explanation
"""
|
mtasende/Machine-Learning-Nanodegree-Capstone | notebooks/.ipynb_checkpoints/n1_preparation-checkpoint.ipynb | mit | import yahoo_finance
import requests
import datetime
def print_unix_timestamp_date(timestamp):
print(
datetime.datetime.fromtimestamp(
int(timestamp)
).strftime('%Y-%m-%d %H:%M:%S')
)
print_unix_timestamp_date("1420077600")
print_unix_timestamp_date("1496113200")
EXAMPLE_QUERY = "http://query1.finance.yahoo.com/v7/finance/download/AMZN?period1=1483585200&period2=1496113200&interval=1d&events=history&crumb=mFcCyf2I8jh"
import urllib2
response = urllib2.urlopen(EXAMPLE_QUERY)
html = response.read()
csv_values = requests.get(EXAMPLE_QUERY)
csv_values
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
%matplotlib inline
%load_ext autoreload
%autoreload 2
pd.__version__
"""
Explanation: In this notebook the initial steps towards solving the capstone project are taken: some data gathering and initial exploration.
End of explanation
"""
import pandas_datareader as pdr
pdr.__version__
from pandas_datareader import data, wb
SPY_CREATION_DATE = dt.datetime(1993,1,22)
start = SPY_CREATION_DATE
end = dt.datetime(1995,12,31)
#Let's try to get SPY
SPY_df = data.DataReader(name='SPY',data_source='google',start=start,
end=end)
print(SPY_df.shape)
SPY_df.head()
from yahoo_finance import Share
yahoo = Share('YHOO')
print(yahoo.get_price())
yahoo.get_historical('2005-01-01','2016-12-31')
import pandas_datareader.data as web
SPY_CREATION_DATE = dt.datetime(1993,1,22)
start = SPY_CREATION_DATE
end = dt.datetime(2016,12,31)
tickers = ['SPY','GOOG','AAPL','NVDA']
#Create the (empty) dataframe
dates = pd.date_range(start,end)
data_df = pd.DataFrame(index=dates)
#Let's try to get SPY
SPY_df = web.DataReader(name='SPY',data_source='google',start=start,
end=end)
print(SPY_df.shape)
SPY_df.head()
SPY_df['Close'].plot()
(SPY_df.index[-1]-SPY_df.index[0]).days / 365
"""
Explanation: Getting the data
End of explanation
"""
data_df
# This will add the data of one ticker
def add_ticker(data,ticker_df,ticker_name):
for key in data.keys():
column_df = pd.DataFrame(ticker_df[key]).rename(columns={key:ticker_name})
data[key] = data[key].join(column_df, how='left')
return data
def add_tickers(data, tickers, source):
for name in tickers:
if(not (name in data['Open'].columns)):
ticker_df = web.DataReader(name=name,data_source=source,start=start,end=end)
            data = add_ticker(data, ticker_df, name)
print('Added: '+name)
else:
print(name+' was already added')
return data
"""
Explanation: So, Google has a limit of 15 years of data on each query
End of explanation
"""
iterables = [SPY_df.index, SPY_df.columns]
indexes = pd.MultiIndex.from_product(iterables, names=['date', 'feature'])
data_multi = pd.DataFrame(index=indexes)
print(data_multi.shape)
data_multi.head(20)
data_multi.xs('2001-02-08', level='date')
SPY_df.iloc[0]
SPY_df.head()
data_multi['sd'] = np.nan
data_multi.loc['2001-02-05','Open']['sd'] = SPY_df.loc['2001-02-05','Open']
data_multi
SPY_df.reset_index(inplace=True)
SPY_df.head()
SPY_df.set_index(['Date','Open'])
"""
Explanation: Keep dictionary or use multiindex?
End of explanation
"""
|
the-deep-learners/TensorFlow-LiveLessons | notebooks/first_tensorflow_graphs.ipynb | mit | import numpy as np
import tensorflow as tf
"""
Explanation: First TensorFlow Graphs
In this notebook, we execute elementary TensorFlow computational graphs.
Load dependencies
End of explanation
"""
x1 = tf.placeholder(tf.float32)
x2 = tf.placeholder(tf.float32)
sum_op = tf.add(x1, x2)
product_op = tf.multiply(x1, x2)
with tf.Session() as session:
sum_result = session.run(sum_op, feed_dict={x1: 2.0, x2: 0.5}) # run again with {x1: [2.0, 2.0, 2.0], x2: [0.5, 1.0, 2.0]}
product_result = session.run(product_op, feed_dict={x1: 2.0, x2: 0.5}) # ...and with {x1: [2.0, 4.0], x2: 0.5}
sum_result
product_result
"""
Explanation: Simple arithmetic
End of explanation
"""
with tf.Session() as session:
sum_result = session.run(sum_op, feed_dict={x1: [2.0, 2.0, 2.0], x2: [0.5, 1.0, 2.0]})
product_result = session.run(product_op, feed_dict={x1: [2.0, 4.0], x2: 0.5})
sum_result
product_result
"""
Explanation: Simple array arithmetic
End of explanation
"""
|
khaziev/sheath-models | docs/stangeby-sheath.ipynb | mit | plasma_params = {'T_e': 1., 'T_i': 1., 'm_i': 2e-3/const.N_A, 'gamma': 1, 'c': 1., 'alpha': np.pi/180*2}
def calc_stangeby_params(plasma_params):
'''
Calculate parameters of the plasma sheath for stangeby's model
----------------------------------------------
plasma_params - dictionary like
'''
plasma_params['u0'] = plasma_params['c'] *np.sin(plasma_params['alpha'])
#calculate argument
argument = 2 *np.pi *const.m_e /plasma_params['m_i'] *(1. + plasma_params['T_i']/plasma_params['T_e'])
plasma_params['alpha_critical'] = np.arcsin(np.sqrt(argument))
plasma_params['mach_cs_critical'] = np.sin(plasma_params['alpha'])/np.sqrt(argument)
#calculate all of the parameters needed for Stangeby's model
calc_stangeby_params(plasma_params)
plasma_params
def func_f(u, plasma_params):
c = plasma_params['c']
u0 = plasma_params['u0']
alpha = plasma_params['alpha']
    #find and calculate the result of the function
w = (c + 1./c) /np.cos(alpha) - np.tan(alpha) * (u + 1./u)
return c**2 + 2. *np.log(u/u0) - u**2 - w**2
func_f(1e-1, plasma_params)
n_plots_f = 100
u_space_f = np.linspace(0,1, n_plots_f)
fig, ax = plt.subplots(figsize=(6,6))
f_values = [func_f(u, plasma_params) for u in u_space_f]
plt.plot(u_space_f, f_values, label=r'$f(u)$')
plt.plot([0,1], [0, 0], linestyle='--', dashes=(5, 2.5), color='black')
plt.xlim(0, 0.1)
plt.xlabel(r'$u$')
plt.ylabel(r'$f(u)$')
plt.title(r'$f(u)<0$ before $u$ reaches 0', fontsize=22)
plt.show()
"""
Explanation: Introduction
Stangeby developed a collisional presheath model based on the Riemann plasma sheath set of equations. The model is collisionless, therefore it does not include ionization in the plasma sheath. The paper is centered around the idea that the Debye Sheath (DS) disappears at grazing magnetic angles ($\alpha < \alpha^*$); the Collisional Sheath (CS) can then be completely characterized by the Riemann plasma sheath equations.
Governing equations
All of the plasma sheath properties are connected to the normalized velocity of the wall approach $u$, which is determined as
$$f\left(u\right) = c^2 + 2 \log\left(u/u_o\right) - u^2 - \left[\frac{c^2+ 1}{c \cos \alpha} - \tan \alpha \left(\frac{u^2 + 1}{u}\right)\right]^2$$
Where $c$ is a Mach number at the collisonal sheath entrance, and according to Chodura's definition it should be set to 1. For $z \rightarrow \infty$
$$ u_0 = c \sin \alpha$$
$$ v_0 = 0$$
$$ v_0 = c \cos \alpha$$
The resulting equation is valid for angle less than
$$ \alpha^* = \sin^{-1} \left[ \sqrt{\frac{2 \pi m_e}{m_i} \left( 1+ \frac{T_i}{T_e}\right)} \right]$$
End of explanation
"""
def integrand_u(u, plasma_params):
'''
Defines integrand for zeta function
-------------------------------------------
parameters:
u - float like
plasma_params - dictionary like
'''
return (1. - u**2) /u /np.sqrt(func_f(u, plasma_params))
def get_sheath_zeta(u_space, plasma_params, max_u = 1):
'''
Find values of zeta for a given list of the u values
-----------------------------------------------------
u_space - sequence (ex. list, np.array), contains values between 0 and 1
plasma_params - dictionary like
max_u - float, if set to 1, provide classic profile
'''
#result = [integrate.romberg(integrand_u, u, max_u, args=(plasma_params,), show=True,divmax=100) for u in u_space]
result = [integrate.fixed_quad(integrand_u, u, max_u, args=(plasma_params,), n=500)[0] for u in u_space]
return result
def get_sheath_w(u_space, plasma_params):
'''
Finds the values of the drift velocity in ExB planes in the direction parallel to the wall
-----------------------------------------------------
u_space - sequence (ex. list, np.array), contains values between 0 and 1
plasma_params - dictionary like
'''
#conversion function of w
f_w = lambda u: 2. - (u + 1./u) *np.sin(plasma_params['alpha'])/np.cos(plasma_params['alpha'])
w = [f_w(u) for u in u_space]
return w
def get_sheath_v(u_space, w_space, plasma_params):
'''
Finds the values of the drift velocity in the direction of ExB drift
-----------------------------------------------------
u_space - sequence (ex. list, np.array), contains values between 0 and 1
plasma_params - dictionary like
'''
alpha = plasma_params['alpha']
#conversion function for v
f_v = lambda u, w: np.sqrt(2 *np.log(u/np.sin(alpha)) + 1 - u**2 -w**2)
v = [f_v(u, w) for u, w in zip(u_space, w_space)]
return v
def get_sheath_potential(u_space, plasma_params):
'''
Calculates plasma potential in physical units
'''
#scale for plasma potential
scale = -plasma_params['T_e']
#evaulate potential
potential = [scale *np.log(u) for u in u_space]
return potential
np.logspace(np.log10(1e-3), np.log10(0.6), 5)
def get_sheath_density(u_space, potential, plasma_params):
    '''
    Calculates plasma density in relative units
    '''
    #electron temperature sets the scale of the Boltzmann factor
    scale = plasma_params['T_e']
    #evaluate density via the Boltzmann factor exp(potential/T_e)
    density = [np.exp(u/scale) for u in potential]
    return density
n_points = 100
u_space = np.linspace(0, 1, n_points)
#find sheath location
zeta_space_classic = get_sheath_zeta(u_space, plasma_params)
zeta_space_stangeby = get_sheath_zeta(u_space, plasma_params, max_u = plasma_params['mach_cs_critical'])
w_space_stangeby = get_sheath_w(u_space, plasma_params)
v_space_stangeby = get_sheath_v(u_space, w_space_stangeby, plasma_params)
fig, ax = plt.subplots(figsize=(6,6))
plt.plot(zeta_space_classic, u_space, label = 'Classical')
plt.plot(zeta_space_stangeby, u_space, label = 'Stangeby')
plt.xlim(0, 7.5)
plt.xlabel(r'$\zeta$')
plt.ylabel(r'$u$')
plt.legend()
plt.title('Stangeby vs Classical models', fontsize=20)
plt.show()
fig, ax = plt.subplots(figsize=(6,6))
plt.plot(zeta_space_stangeby, u_space, label = 'u')
plt.plot(zeta_space_stangeby, w_space_stangeby, label = 'w')
plt.plot(zeta_space_stangeby, v_space_stangeby, label = 'v')
plt.xlim(0, 6)
plt.xlabel(r'$\zeta$')
plt.ylabel(r'$u$')
plt.title('Drift velocity')
plt.legend(fontsize=20)
plt.show()
"""
Explanation: The classical dependence between $u$ and $\zeta$ is defined by the integral equation
$$ \zeta(u) = \int_u^1 \frac{1-u^2}{u \sqrt{f(u)}} \, \mathrm{d}u$$
End of explanation
"""
normal_coeff_file = 'table_c1.csv'
grazing_coeff_file = 'table_c2.csv'
df_normal = pd.read_csv(normal_coeff_file)
df_normal.head()
df_grazing = pd.read_csv(grazing_coeff_file)
df_grazing.head()
"""
Explanation: Import precalculated Stangeby coefficients
Stangeby has evaluated the coefficients of his model for grazing magnetic angles.
End of explanation
"""
#plasma_params = {'T_e': 1, 'T_i': 1, 'm_i': 1e-3/const.N_A, 'gamma': 1}
def bohm_speed(params):
    pass  # stub -- see the hedged sketch after the next explanation

def zeta(z, rho_i):
    pass  # stub -- see the hedged sketch after the next explanation
"""
Explanation: $\zeta$ is the distance from the wall normalized by the Larmor radius
Preliminary plots of the plasma profiles
Setting up plasma parameters dictionary
End of explanation
"""
|
liuhanfei0615/liupengyuan.github.io | chapter2/homework/computer/5-10/201611680275.ipynb | mit | fh=open(r'd:\temp\秘密花园.txt')
text = fh.read()
words = text.split(' ')
fh.close()
"""
Explanation: The file initially contains:
the whispers in the morning of lovers sleeping tight are rolling by like thunder now as i look in your eyes i hold on to your body and feel each move you make your voice is warm and tender a love that i could not forsake cause i am your lady and you are my man whenever you reach for me i will do all that i can
End of explanation
"""
import random
fh=open(r'd:\temp\秘密花园.txt','w')
for word in words:
length=len(word)
idiom=word
number=random.randint(100000000,999999999)
key=(length-1)*10**9+number
key1=str(key)
i=0
wordkey=[]
if key/1000000000>=1:
wordkey.append(chr(97+int(key1[0])))
for i in range(1,length+1):
if ord(idiom[i-1])+int(key1[i])<123:
wordkey.append(chr(ord(idiom[i-1])+int(key1[i])))
else:
wordkey.append(chr(ord(idiom[i-1])+int(key1[i])-57))
for i in range(length,9):
wordkey.append(chr(97+int(key1[i])))
else:
wordkey.append('a')
for i in range(0,length+1):
if ord(idiom[i-1])+int(key1[i-1])<123:
wordkey.append(chr(ord(idiom[i-1])+int(key1[i-1])))
else:
wordkey.append(chr(ord(idiom[i-1])+int(key1[i-1])-57))
for i in range(length,8):
wordkey.append(chr(97+int(key1[i])))
wordkey1=''.join(wordkey)
wordkey2=wordkey1+' '
fh.write(wordkey2)
fh.close()
"""
Explanation: Encode the text into a cipher:
End of explanation
"""
fh=open(r'd:\temp\秘密花园.txt','r+')
text = fh.read()
wordkey2 = text.split(' ')
true_word=[]
for wordkey1 in wordkey2:
if wordkey1[0]=='a':
if ord(wordkey1[1])<97:
true_word=chr(ord(wordkey1[1])+57-int(key1[1]))
else:
true_word=chr(ord(wordkey1[1])-int(key1[1]))
else:
headword=ord(wordkey1[0])-97
for i in range(1,headword):
if ord(wordkey1[i-1])<97:
                true_word.append(chr(ord(wordkey1[i])+57-int(key1[i])))
            elif ord(wordkey1[i-1])>97:
                true_word.append(chr(ord(wordkey1[i])-int(key1[i])))
true_word1=''.join(true_word)
true_word2=true_word1+' '
fh.write(true_word2)
fh.close()
"""
Explanation: The file finally displays as:
cDmfbdajbj hEmjCxjrai bogeejbbeh cDmfbbbggj gtqBrpqdfh bpbhjgghcb fosxitcfgi hylmnwlsff evkmibgbab civeabibij gvoloivicd bkjbeigfjf dnnoeaidfi gwmurigcga cttffbhaaj bffgfibjfd afihibjije dspvhbfcai bpheffiagh dFxDidiiij diymifgebc aidhjaihai dqsncjceaf bpbigafaii bBhhgiceab dzquahhajh deogdhgcbj cguhdfdggc dkghdeiied dhfdbhegdg dpuvafifjc cGxjafhhbe dverhfdhjd dGxwchjbae eDujgehaab bkcdeeghfa dzhwfcecff cbreggeghe fxmsdnjjai agjcbgfagg droEicdajj dvjbbfhfif aeeccfbedi ekxCmbejhf cqrdhjbejb gjurvimcjb eefExfafje adfeaecjee biigjgjaie dCtzffhcfg dncdahiafb chtgjbhfhe cIvhcjbdgb cdxgcefgih buijffhagd coddaacijb hxpepkFlhg cCpbafbbjb esjhljagjd cordedjaja bvjffeiddd afbgcjcdbj dFpshabefc bmjdjadfge ciujeiaffg dymjjagbci abafghefce cfggecgigd
Decryption:
End of explanation
"""
|
hvillanua/deep-learning | batch-norm/Batch_Normalization_Exercises.ipynb | mit | import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True, reshape=False)
"""
Explanation: Batch Normalization – Practice
Batch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now.
This is not a good network for classifying MNIST digits. You could create a much simpler network and get better results. However, to give you hands-on experience with batch normalization, we had to make an example that was:
1. Complicated enough that training would benefit from batch normalization.
2. Simple enough that it would train quickly, since this is meant to be a short exercise just to give you some practice adding batch normalization.
3. Simple enough that the architecture would be easy to understand without additional resources.
This notebook includes two versions of the network that you can edit. The first uses higher level functions from the tf.layers package. The second is the same network, but uses only lower level functions in the tf.nn package.
Batch Normalization with tf.layers.batch_normalization
Batch Normalization with tf.nn.batch_normalization
The following cell loads TensorFlow, downloads the MNIST dataset if necessary, and loads it into an object named mnist. You'll need to run this cell before running anything else in the notebook.
End of explanation
"""
"""
DO NOT MODIFY THIS CELL
"""
def fully_connected(prev_layer, num_units):
"""
    Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
"""
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
"""
Explanation: Batch Normalization using tf.layers.batch_normalization<a id="example_1"></a>
This version of the network uses tf.layers for almost everything, and expects you to implement batch normalization using tf.layers.batch_normalization
We'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function.
This version of the function does not include batch normalization.
End of explanation
"""
"""
DO NOT MODIFY THIS CELL
"""
def conv_layer(prev_layer, layer_depth):
"""
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
"""
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu)
return conv_layer
"""
Explanation: We'll use the following function to create convolutional layers in our network. They are very basic: we're always using a 3x3 kernel, ReLU activation functions, strides of 1x1 on layers with odd depths, and strides of 2x2 on layers with even depths. We aren't bothering with pooling layers at all in this network.
This version of the function does not include batch normalization.
End of explanation
"""
"""
DO NOT MODIFY THIS CELL
"""
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
    # Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
"""
Explanation: Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions).
This cell builds the network without batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.
End of explanation
"""
def fully_connected(prev_layer, num_units, training):
"""
    Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
"""
layer = tf.layers.dense(prev_layer, num_units, activation=None)
layer = tf.layers.batch_normalization(layer, training=training)
layer = tf.nn.relu(layer)
return layer
"""
Explanation: With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches.
Add batch normalization
We've copied the previous three cells to get you started. Edit these cells to add batch normalization to the network. For this exercise, you should use tf.layers.batch_normalization to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference.
If you get stuck, you can check out the Batch_Normalization_Solutions notebook to see how we did things.
TODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
End of explanation
"""
def conv_layer(prev_layer, layer_depth, training):
"""
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
"""
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=None)
conv_layer = tf.layers.batch_normalization(conv_layer, training=training)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
"""
Explanation: TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.
End of explanation
"""
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
training = tf.placeholder(tf.bool)
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i, training)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100, training)
    # Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, training: True})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels,
training: False})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, training: False})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels,
training: False})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels,
training: False})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
training: False})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
"""
Explanation: TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training, and you'll need to make sure it updates and uses its population statistics correctly.
End of explanation
"""
def fully_connected(prev_layer, num_units):
"""
    Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
"""
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
"""
Explanation: With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output: Accuracy on 100 samples. If this value is low while everything else looks good, that means you did not implement batch normalization correctly. Specifically, it means you either did not calculate the population mean and variance while training, or you are not using those values during inference.
Batch Normalization using tf.nn.batch_normalization<a id="example_2"></a>
Most of the time you will be able to use higher level functions exclusively, but sometimes you may want to work at a lower level. For example, if you ever want to implement a new feature – something new enough that TensorFlow does not already include a high-level implementation of it, like batch normalization in an LSTM – then you may need to know these sorts of things.
This version of the network uses tf.nn for almost everything, and expects you to implement batch normalization using tf.nn.batch_normalization.
Optional TODO: You can run the next three cells before you edit them just to see how the network performs without batch normalization. However, the results should be pretty much the same as you saw with the previous example before you added batch normalization.
TODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
Note: For convenience, we continue to use tf.layers.dense for the fully_connected layer. By this point in the class, you should have no problem replacing that with matrix operations between the prev_layer and explicit weights and biases variables.
End of explanation
"""
def conv_layer(prev_layer, layer_depth):
"""
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
"""
strides = 2 if layer_depth % 3 == 0 else 1
in_channels = prev_layer.get_shape().as_list()[3]
out_channels = layer_depth*4
weights = tf.Variable(
tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05))
bias = tf.Variable(tf.zeros(out_channels))
conv_layer = tf.nn.conv2d(prev_layer, weights, strides=[1,strides, strides, 1], padding='SAME')
conv_layer = tf.nn.bias_add(conv_layer, bias)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
"""
Explanation: TODO: Modify conv_layer to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
Note: Unlike in the previous example that used tf.layers, adding batch normalization to these convolutional layers does require some slight differences to what you did in fully_connected.
End of explanation
"""
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
    # Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
"""
Explanation: TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training.
End of explanation
"""
|
letsgoexploring/teaching | winter2017/econ129/python/Econ129_Winter2017_Homework1.ipynb | mit | # Question 1.1
# Question 1.2
"""
Explanation: Homework 1 (DUE: Tuesday January 24)
Instructions: Complete the instructions in this notebook. You may work together with other students in the class and you may take full advantage of any internet resources available. You must provide thorough comments in your code so that it's clear that you understand what your code is doing and so that your code is readable.
Submit the assignment by saving your notebook as an html file (File -> Download as -> HTML) and uploading it to the appropriate Dropbox folder on EEE.
Question 1
The Cobb-Douglas production function can be written in per worker terms:
\begin{align}
y & = A k^{\alpha},
\end{align}
where $y$ denotes output per worker, $k$ denotes capital per worker, and $A$ denotes total factor productivity or technology
Do the following:
Suppose that $A$ = 1 and $\alpha = 0.35$. Construct a well-labeled plot of the Cobb-Douglas production function with $k$ on the horizontal axis and $y$ on the vertical axis for $k$ between 0 and 10. Your plot must have a title and axis labels.
Plot the Cobb-Douglas production function for $A = 0.75, 1,$ and $1.25$ with $\alpha = 0.35$ and $k$ ranging from 0 to 10. Each line should have a different style (e.g., solid, dashed, dot-dashed). Your plot must have a title and axis labels. The plot should also contain a legend that clearly indicates which line is associated with which value of $A$ and does not cover the plotted lines.
End of explanation
"""
# Question 2
"""
Explanation: Question 2
The cardioid is a shape described by the parametric equations:
\begin{align}
x & = a(2\cos \theta - \cos 2\theta), \
y & = a(2\sin \theta - \sin 2\theta).
\end{align}
Construct a well-labeled graph of the cardioid for $a=1$ and $\theta$ in $[0,2\pi]$. Each line should have a different style (e.g., solid, dashed, dot-dashed). Your plot must have a title and axis labels.
End of explanation
"""
# Question 3.1
# Question 3.2
# Question 3.3
# Question 3.4
# Question 3 bonus
"""
Explanation: Question 3
Recall the two good utility maximization problem from microeconomics. Let $x$ and $y$ denotes the amount of two goods that a person consumes. The person receives utility from consumption given by:
\begin{align}
u(x,y) & = x^{\alpha}y^{\beta}
\end{align}
The person has income $M$ to spend on the two goods and the price of the goods are $p_x$ and $p_y$. The consumer's budget constraint is:
\begin{align}
M & = p_x x + p_y y
\end{align}
Suppose that $M = 100$, $\alpha=0.25$, $\beta=0.75$, $p_x = 1$, and $p_y = 0.5$. The consumer's problem is to maximize their utility subject to the budget constraint. While this problem can easily be solved by hand, we're going to use a computational approach.
Do the following:
Use the budget constraint to solve for $y$ in terms of $x$, $p_x$, $p_y$, and $M$. Use the result to write the consumer's utility as a function of $x$ only. Create a variable called x equal to an array of values from 0 to 80 with step size equal to 0.001 and a variable called utility equal to the consumer's utility. Plot the consumer's utility against $x$.
The NumPy function np.max() returns the highest value in an array and np.argmax() returns the index of the highest value. Print the highest value and index of the highest value of utility.
Use the index of the highest value of utility to find the value in x with the same index and store value in a new variable called xstar. Print the value of xstar.
Use the budget constraint to find the implied utility-maximizing value of $y$ and store this in a variable called ystar. Print ystar.
Bonus question: Create a well-labeled plot of the consumer's budget constraint and the indifference curve that corresponds with the optimal choice of $x$ and $y$.
End of explanation
"""
|
tuwien-musicir/rp_extract | RP_extract_Tutorial.v3.ipynb | gpl-3.0 | # to install iPython notebook on your computer, use this in Terminal
sudo pip install "ipython[notebook]"
"""
Explanation: <center><h1>Rhythm and Timbre Analysis from Music</h1></center>
<center><h2>Rhythm Pattern Music Features</h2></center>
<center><h2>Extraction and Application Tutorial</h2></center>
<br>
<center><h3>Thomas Lidy and Alexander Schindler</h3>
<h3>[email protected]</h3>
<br>
<b>Institute of Software Technology and Interactive Systems</b><br>TU Wien
<br>
<center><h3>http://www.ifs.tuwien.ac.at/mir</h3></center>
<br>
<br>
Table of Contents
<a href="#requirements">Requirements</a>
<a href="#processing">Audio Processing</a>
<a href="#extraction">Audio Feature Extraction</a>
<a href="#application">Application Scenarios</a><br>
4.1 <a href="#getsoundcloud">Getting Songs from Soundcloud</a><br>
4.2. <a href="#similar">Finding Similar Sounding Songs</a>
<a name="requirements"><font color="#0404B4">1. Requirements</font></a>
This Tutorial uses iPython Notebook for interactive coding. If you use iPython Notebook, you can interactively execute your code (and the code here in the tutorial) directly in the Web browser. Otherwise you can copy & paste code from here to your preferred Python editor.
End of explanation
"""
# in Terminal
git clone https://github.com/tuwien-musicir/rp_extract.git
"""
Explanation: RP Extract Library
This is our main library for rhythmic and timbral audio feature analysis:
<ul>
<li><a href="https://github.com/tuwien-musicir/rp_extract">RP_extract</a> Rhythm Patterns Audio Feature Extraction Library (includes <a href="https://github.com/WarrenWeckesser/wavio">Wavio</a> for reading wav files (incl. 24 bit)) </li>
</ul>
download <a href="https://github.com/tuwien-musicir/rp_extract/archive/master.zip">ZIP</a> or check out from GitHub:
End of explanation
"""
# in Terminal
sudo pip install numpy scipy matplotlib
"""
Explanation: Python Libraries
RP_extract depends on the following libraries. If not already included in your Python installation,
please install these Python libraries using pip or easy_install:
<ul>
<li><a href="http://www.numpy.org/">Numpy</a>: the fundamental package for scientific computing with Python. It implements a wide range of fast and powerful algebraic functions.</li>
<li><a href="http://www.scipy.org/install.html">Scipy</a>: Scientific Python library</li>
<li><a href="http://matplotlib.org">matplotlib</a>: only needed for plotting (if you skipt the plots below, you are fine without) </li>
</ul>
They can usually be installed via Python PIP installer on command line:
End of explanation
"""
# in Terminal
sudo pip install soundcloud urllib unicsv scikit-learn
git clone https://github.com/tuwien-musicir/mir_utils.git
"""
Explanation: Additional Libraries
These libraries are used in the later tutorial steps, but not necessarily needed if you want to use the RP_extract library alone:
<ul>
<li><a href="https://github.com/tuwien-musicir/mir_utils">mir_utils</a>: these are additional functions used for the Soundcloud Demo data set in the tutorial below</li>
<li><a href="https://developers.soundcloud.com">Soundcloud API</a>: used to retrieve and analyze music from Soundcloud.com</li>
<li>urllib: for downloading content from the web (may be pre-installed already, then you can skip it)</li>
<li><a href="https://pypi.python.org/pypi/unicsv/1.0.0">unicsv</a>: used in rp_extract_files.py for batch iteration over many wav or mp3 files, and storing features in CSV (only needed when you want to do batch feature extraction to CSV)</li>
<li><a href="http://scikit-learn.org/stable/">sklearn</a>: Scikit-Learn machine learning package - used in later tutorial steps for finding similar songs and/or using machine learning / classification
</ul>
End of explanation
"""
import os
path = '/path/to/ffmpeg/'
os.environ['PATH'] += os.pathsep + path
"""
Explanation: MP3 Decoder
If you want to use MP3 files as input, you need to have one of the following MP3 decoders installed in your system:
<ul>
<li>Windows: FFMpeg (ffmpeg.exe is included in RP_extract library on Github above, nothing to install)</li>
<li>Mac: <a href="http://www.thalictrum.com/en/products/lame.html">Lame for Mac</a> or <a href="http://ffmpegmac.net">FFMPeg for Mac</a></li>
<li>Linux: please install mpg123, lame or ffmpeg from your Software Install Center or Package Repository</li>
</ul>
Note: If you don't install it to a path which can be found by the operating system, use this to add the path where you installed the MP3 decoder binary to your system PATH so Python can call it:
End of explanation
"""
%pylab inline
import warnings
warnings.filterwarnings('ignore')
%load_ext autoreload
%autoreload 2
# numerical processing and scientific libraries
import numpy as np
# plotting
import matplotlib.pyplot as plt
# reading wav and mp3 files
from audiofile_read import * # included in the rp_extract git package
# Rhythm Pattern Audio Extraction Library
from rp_extract_python import rp_extract
from rp_plot import * # can be skipped if you don't want to do any plots
# misc
from urllib import urlopen
import urllib2
import gzip
import StringIO
"""
Explanation: Import + Test your Environment
If you have installed all required libraries, the follwing imports should run without errors.
End of explanation
"""
# provide/adjust the path to your wav or mp3 file
audiofile = "music/1972-048 Elvis Presley - Burning Love 22khz.mp3"
samplerate, samplewidth, wavedata = audiofile_read(audiofile)
samplerate, samplewidth, wavedata = audiofile_read(audiofile, normalize=False)
wavedata.shape
"""
Explanation: <a name="processing"><font color="#0404B4">2. Audio Processing</font></a>
Feature Extraction is the core of content-based description of audio files. With feature extraction from audio, a computer is able to recognize the content of a piece of music without the need of annotated labels such as artist, song title or genre. This is the essential basis for information retrieval tasks, such as similarity based searches (query-by-example, query-by-humming, etc.), automatic classification into categories, or automatic organization and clustering of music archives.
Content-based description requires the development of feature extraction techniques that analyze the acoustic characteristics of the signal. Features extracted from the audio signal are intended to describe the stylistic content of the music, e.g. beat, presence of voice, timbre, etc.
We use methods from digital signal processing and consider psycho-acoustic models in order to extract suitable semantic information from music. We developed various feature sets, which are appropriate for different tasks.
Load Audio Files
Load audio data from wav or mp3 file
We provide a library (audiofile_read.py) that is capable of reading WAV and MP3 files (MP3 through an external decoder, see Installation Requirements above).
Take any MP3 or WAV file on your disk - or download one from e.g. <a href="http://freemusicarchive.org">freemusicarchive.org</a>.
End of explanation
"""
nsamples = wavedata.shape[0]
nchannels = wavedata.shape[1]
print "Successfully read audio file:", audiofile
print samplerate, "Hz,", samplewidth*8, "bit,", nchannels, "channel(s),", nsamples, "samples"
"""
Explanation: <b>Note about Normalization:</b> Normalization is automatically done by audiofile_read() above.
Usually, an audio file stores integer values for the samples. However, for audio processing we need float values that's why the audiofile_read library already converts the input data to float values in the range of (-1,1).
This is taken care of by audiofile_read. In the rare case you don't want to normalize, use this line instead of the one above:
samplerate, samplewidth, wavedata = audiofile_read(audiofile, normalize=False)
In case you use another library to read in WAV files (such as scipy.io.wavfile.read) please have a look into audiofile_read code to do the normalization in the same way. Note that scipy.io.wavfile.read does not correctly read 24bit WAV files.
Audio Information
Let's print some information about the audio file just read:
End of explanation
"""
max_samples_plot = 4 * samplerate # limit number of samples to plot (to 4 sec), to avoid graphical overflow
if nsamples < max_samples_plot:
max_samples_plot = nsamples
plot_waveform(wavedata[0:max_samples_plot], 16, 5);
"""
Explanation: Plot Wave form
we use this to check if the WAV or MP3 file has been correctly loaded
End of explanation
"""
# use combine the channels by calculating their geometric mean
wavedata_mono = np.mean(wavedata, axis=1)
"""
Explanation: Audio Pre-processing
For audio processing and feature extraction, we use a single channel only.
Therefore in case we have a stereo signal, we combine the separate channels:
End of explanation
"""
plot_waveform(wavedata_mono[0:max_samples_plot], 16, 3)
plotstft(wavedata_mono, samplerate, binsize=512, ignore=True);
"""
Explanation: Below an example waveform of a mono channel after combining the stereo channels by arithmetic mean:
End of explanation
"""
features = rp_extract(wavedata, # the two-channel wave-data of the audio-file
samplerate, # the samplerate of the audio-file
extract_rp = True, # <== extract this feature!
transform_db = True, # apply psycho-accoustic transformation
transform_phon = True, # apply psycho-accoustic transformation
transform_sone = True, # apply psycho-accoustic transformation
fluctuation_strength_weighting=True, # apply psycho-accoustic transformation
skip_leadin_fadeout = 1, # skip lead-in/fade-out. value = number of segments skipped
step_width = 1) #
plotrp(features['rp'])
"""
Explanation: <a name="extraction"><font color="#0404B4">3. Audio Feature Extraction</font></a>
Rhythm Patterns
<img width="350" src="http://www.ifs.tuwien.ac.at/mir/audiofeatureextraction/feature_extraction_RP_SSD_RH_web.png" style="float:right;margin-left:20px;margin-bottom:20px">
Rhythm Patterns (also called Fluctuation Patterns) describe modulation amplitudes for a range of modulation frequencies on "critical bands" of the human auditory range, i.e. fluctuations (or rhythm) on a number of frequency bands. The feature extraction process for the Rhythm Patterns is composed of two stages:
First, the specific loudness sensation in different frequency bands is computed, by using a Short Time FFT, grouping the resulting frequency bands to psycho-acoustically motivated critical-bands, applying spreading functions to account for masking effects and successive transformation into the decibel, Phon and Sone scales. This results in a power spectrum that reflects human loudness sensation (Sonogram).
In the second step, the spectrum is transformed into a time-invariant representation based on the modulation frequency, which is achieved by applying another discrete Fourier transform, resulting in amplitude modulations of the loudness in individual critical bands. These amplitude modulations have different effects on human hearing sensation depending on their frequency, the most significant of which, referred to as fluctuation strength, is most intense at 4 Hz and decreasing towards 15 Hz. From that data, reoccurring patterns in the individual critical bands, resembling rhythm, are extracted, which – after applying Gaussian smoothing to diminish small variations – result in a time-invariant, comparable representation of the rhythmic patterns in the individual critical bands.
End of explanation
"""
features = rp_extract(wavedata, # the two-channel wave-data of the audio-file
samplerate, # the samplerate of the audio-file
extract_ssd = True, # <== extract this feature!
transform_db = True, # apply psycho-accoustic transformation
transform_phon = True, # apply psycho-accoustic transformation
transform_sone = True, # apply psycho-accoustic transformation
fluctuation_strength_weighting=True, # apply psycho-accoustic transformation
skip_leadin_fadeout = 1, # skip lead-in/fade-out. value = number of segments skipped
step_width = 1) #
plotssd(features['ssd'])
"""
Explanation: Statistical Spectrum Descriptor
The Sonogram is calculated as in the first part of the Rhythm Patterns calculation. Depending on the occurrence of beats or other rhythmic variation of energy in a specific critical band, statistical measures are able to describe the audio content. Our goal is to describe the rhythmic content of a piece of audio by computing the following statistical moments on the Sonogram values of each of the critical bands:
mean, median, variance, skewness, kurtosis, min- and max-value
End of explanation
"""
features = rp_extract(wavedata, # the two-channel wave-data of the audio-file
samplerate, # the samplerate of the audio-file
extract_rh = True, # <== extract this feature!
transform_db = True, # apply psycho-accoustic transformation
transform_phon = True, # apply psycho-accoustic transformation
transform_sone = True, # apply psycho-accoustic transformation
fluctuation_strength_weighting=True, # apply psycho-accoustic transformation
skip_leadin_fadeout = 1, # skip lead-in/fade-out. value = number of segments skipped
step_width = 1) #
plotrh(features['rh'])
"""
Explanation: Rhythm Histogram
The Rhythm Histogram features we use are a descriptor for general rhythmics in an audio document. Contrary to the Rhythm Patterns and the Statistical Spectrum Descriptor, information is not stored per critical band. Rather, the magnitudes of each modulation frequency bin of all critical bands are summed up, to form a histogram of "rhythmic energy" per modulation frequency. The histogram contains 60 bins which reflect modulation frequency between 0 and 10 Hz. For a given piece of audio, the Rhythm Histogram feature set is calculated by taking the median of the histograms of every 6 second segment processed.
End of explanation
"""
maxbin = features['rh'].argmax(axis=0) + 1 # +1 because it starts from 0
mod_freq_res = 1.0 / (2**18/44100.0) # resolution of modulation frequency axis (0.168 Hz) (= 1/(segment_size/samplerate))
#print mod_freq_res * 60 # resolution
bpm = maxbin * mod_freq_res * 60
print bpm
"""
Explanation: Get rough BPM from Rhythm Histogram
By looking at the maximum peak of a Rhythm Histogram, we can determine the beats per minute (BPM) very roughly by multiplying the index of the peak bin by the modulation frequency resolution (0.168 Hz) and by 60. The resolution of this estimate is, however, only about +/- 10 bpm.
End of explanation
"""
# adapt the fext array to your needs:
fext = ['rp','ssd','rh','mvd'] # sh, tssd, trh
features = rp_extract(wavedata,
samplerate,
extract_rp = ('rp' in fext), # extract Rhythm Patterns features
extract_ssd = ('ssd' in fext), # extract Statistical Spectrum Descriptor
extract_sh = ('sh' in fext), # extract Statistical Histograms
extract_tssd = ('tssd' in fext), # extract temporal Statistical Spectrum Descriptor
extract_rh = ('rh' in fext), # extract Rhythm Histogram features
extract_trh = ('trh' in fext), # extract temporal Rhythm Histogram features
extract_mvd = ('mvd' in fext), # extract Modulation Frequency Variance Descriptor
spectral_masking=True,
transform_db=True,
transform_phon=True,
transform_sone=True,
fluctuation_strength_weighting=True,
skip_leadin_fadeout=1,
step_width=1)
# let's see what we got in our dict
print features.keys()
# list the feature type dimensions
for k in features.keys():
print k, features[k].shape
"""
Explanation: Modulation Frequency Variance Descriptor
This descriptor measures variations over the critical frequency bands for a specific modulation frequency (derived from a rhythm pattern).
Considering a rhythm pattern, i.e. a matrix representing the amplitudes of 60 modulation frequencies on 24 critical bands, an MVD vector is derived by computing statistical measures (mean, median, variance, skewness, kurtosis, min and max) for each modulation frequency over the 24 bands. A vector is computed for each of the 60 modulation frequencies. Then, an MVD descriptor for an audio file is computed by the mean of multiple MVDs from the audio file's segments, leading to a 420-dimensional vector.
Temporal Statistical Spectrum Descriptor
Feature sets are frequently computed on a per segment basis and do not incorporate time series aspects. As a consequence, TSSD features describe variations over time by including a temporal dimension. Statistical measures (mean, median, variance, skewness, kurtosis, min and max) are computed over the individual statistical spectrum descriptors extracted from segments at different time positions within a piece of audio. This captures timbral variations and changes over time in the audio spectrum, for all the critical Bark-bands. Thus, a change of rhythm, instruments, voices, etc. over time is reflected by this feature set. The dimension is 7 times the dimension of an SSD (i.e. 1176).
Temporal Rhythm Histograms
Statistical measures (mean, median, variance, skewness, kurtosis, min and max) are computed over the individual Rhythm Histograms extracted from various segments in a piece of audio. Thus, change and variation of rhythmic aspects in time are captured by this descriptor.
Extract All Features
To extract ALL or selected ones of the before described features, you can use this command:
End of explanation
"""
# START SOUNDCLOUD API
import soundcloud
import urllib # for mp3 download
# To use soundcloud-python, you must first create a Client instance, passing at a minimum the client id you
# obtained when you registered your app:
# If you only need read-only access to public resources, simply provide a client id when creating a Client instance:
my_client_id= 'insert your soundcloud client id here'
client = soundcloud.Client(client_id=my_client_id)
# if there is no error after this, it should have worked
"""
Explanation: <a name="application"><font color="#0404B4">4. Application Scenarios</font></a>
Analyze Songs from Soundcloud
<a name="getsoundcloud"><font color="#0404B4">4.1. Getting Songs from Soundcloud</font></a>
In this step we are going to analyze songs from Soundcloud, using the Soundcloud API.
Please get your own API key first by clicking "Register New App" on <a href="https://developers.soundcloud.com">https://developers.soundcloud.com</a>.
Then we can start using the Soundcloud API:
End of explanation
"""
# GET TRACK INFO
#soundcloud_url = 'http://soundcloud.com/forss/flickermood'
soundcloud_url = 'https://soundcloud.com/majorlazer/be-together-feat-wild-belle'
track = client.get('/resolve', url=soundcloud_url)
print "TRACK ID:", track.id
print "Title:", track.title
print "Artist: ", track.user['username']
print "Genre: ", track.genre
print track.bpm, "bpm"
print track.playback_count, "times played"
print track.download_count, "times downloaded"
print "Downloadable?", track.downloadable
# if you want to see all information contained in 'track':
print vars(track)
"""
Explanation: Get Track Info
End of explanation
"""
if hasattr(track, 'download_url'):
print track.download_url
print track.stream_url
stream = client.get('/tracks/%d/streams' % track.id)
#print vars(stream)
print stream.http_mp3_128_url
"""
Explanation: Get Track URLs
End of explanation
"""
# set the MP3 download directory
mp3_dir = './music'
mp3_file = mp3_dir + os.sep + "%s.mp3" % track.title
# Download the 128 kbit stream MP3
urllib.urlretrieve (stream.http_mp3_128_url, mp3_file)
print "Downloaded " + mp3_file
"""
Explanation: Download Preview MP3
End of explanation
"""
# use your own soundcloud urls here
soundcloud_urls = [
'https://soundcloud.com/absencemusik/lana-del-rey-born-to-die-absence-remix',
'https://soundcloud.com/princefoxmusic/raindrops-feat-kerli-prince-fox-remix',
'https://soundcloud.com/octobersveryown/remyboyz-my-way-rmx-ft-drake'
]
mp3_dir = './music'
mp3_files = []
own_track_ids = []
for url in soundcloud_urls:
print url
track = client.get('/resolve', url=url)
mp3_file = mp3_dir + os.sep + "%s.mp3" % track.title
mp3_files.append(mp3_file)
own_track_ids.append(track.id)
stream = client.get('/tracks/%d/streams' % track.id)
if hasattr(stream, 'http_mp3_128_url'):
mp3_url = stream.http_mp3_128_url
elif hasattr(stream, 'preview_mp3_128_url'): # if we cant get the full mp3 we take the 1:30 preview
mp3_url = stream.preview_mp3_128_url
else:
print "No MP3 can be downloaded for this song."
mp3_url = None # in this case we can't get an mp3
if not mp3_url == None:
urllib.urlretrieve (mp3_url, mp3_file) # Download the 128 kbit stream MP3
print "Downloaded " + mp3_file
# show list of mp3 files we got:
# print mp3_files
"""
Explanation: Iterate over a List of Soundcloud Tracks
This will take a number of Soundcloud URLs, get the track info for each, and download the MP3 stream if available.
End of explanation
"""
# mp3_files is the list of downloaded Soundcloud files as stored above (mp3_files.append())
# all_features will be a list of dict entries for all files
all_features = []
for mp3 in mp3_files:
# Read the Audio file
samplerate, samplewidth, wavedata = audiofile_read(mp3)
print "Successfully read audio file:", mp3
nsamples = wavedata.shape[0]
nchannels = wavedata.shape[1]
print samplerate, "Hz,", samplewidth*8, "bit,", nchannels, "channel(s),", nsamples, "samples"
# Extract the Audio Features
# (adapt the fext array to your needs)
fext = ['rp','ssd','rh','mvd'] # sh, tssd, trh
features = rp_extract(wavedata,
samplerate,
extract_rp = ('rp' in fext), # extract Rhythm Patterns features
extract_ssd = ('ssd' in fext), # extract Statistical Spectrum Descriptor
extract_sh = ('sh' in fext), # extract Statistical Histograms
extract_tssd = ('tssd' in fext), # extract temporal Statistical Spectrum Descriptor
extract_rh = ('rh' in fext), # extract Rhythm Histogram features
extract_trh = ('trh' in fext), # extract temporal Rhythm Histogram features
extract_mvd = ('mvd' in fext), # extract Modulation Frequency Variance Descriptor
)
all_features.append(features)
print "Finished analyzing", len(mp3_files), "files."
"""
Explanation: <a name="soundcloudanalysis"><font color="#0404B4">4.2. Analyzing Songs from Soundcloud</font></a>
Analyze the previously loaded Songs
Now this combines reading all the MP3s we've got and analyzing the features
End of explanation
"""
# iterates over all features (files) we extracted
for feat in all_features:
plotrp(feat['rp'])
plotrh(feat['rh'])
maxbin = feat['rh'].argmax(axis=0) + 1 # +1 because it starts from 0
bpm = maxbin * mod_freq_res * 60
print "roughly", round(bpm), "bpm"
"""
Explanation: <b>Note:</b> also see source file <b>rp_extract_files.py</b> on how to iterate over ALL mp3 or wav files in a directory.
Look at the results
End of explanation
"""
# currently this does not work
genre = 'Dancehall'
curr_offset = 0 # Note: the API has a limit of 50 items per response, so to get more you have to query multiple times with an offset.
tracks = client.get('/tracks', genres=genre, offset=curr_offset)
print "Retrieved", len(tracks), "track objects data"
# original Soundcloud example, searching for genre and bpm
# currently this does not work
tracks = client.get('/tracks', genres='punk', bpm={'from': 120})
"""
Explanation: Further Example: Get a list of tracks by Genre
This is an example on how to retrieve Songs from Soundcloud by genre and/or bpm.
currently this does not work ... (issue on Soundcloud side?)
End of explanation
"""
# IMPORTING mir_utils (installed from git above in parallel to rp_extract; otherwise adjust the path)
import sys
sys.path.append("../mir_utils")
from demo.NotebookUtils import *
from demo.PlottingUtils import *
from demo.Soundcloud_Demo_Dataset import SoundcloudDemodatasetHandler
# IMPORTS for NearestNeighbor Search
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import NearestNeighbors
"""
Explanation: <a name="similar"><font color="#0404B4">4.3. Finding Similar Sounding Songs</font></a>
In these application scenarios we try to find similar songs or classify music into different categories.
For these Use Cases we need to import a few additional functions from the sklearn package and from mir_utils (installed from git above in parallel to rp_extract):
End of explanation
"""
# show the data set as Soundcloud playlist
iframe = '<iframe width="100%" height="450" scrolling="no" frameborder="no" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/playlists/106852365&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false&visual=false"></iframe>'
HTML(iframe)
"""
Explanation: The Soundcloud Demo Dataset
The Soundcloud Demo Dataset is a collection of commonly known mainstream radio songs hosted on the online streaming platform Soundcloud. The dataset is available as a playlist and is intended to be used to demonstrate the performance of MIR algorithms with the help of well-known songs.
<!-- not working on Mac
<iframe width="100%" height="450" scrolling="no" frameborder="no" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/playlists/106852365&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false&visual=true"></iframe>
-->
End of explanation
"""
# first argument is local file path for downloaded MP3s and local metadata (if present, otherwise None)
scds = SoundcloudDemodatasetHandler(None, lazy=False)
"""
Explanation: The SoundcloudDemodatasetHandler abstracts the access to the TU-Wien server, on which the extracted features are stored as CSV files. It loads the features remotely and returns them on request. The features have been extracted using the method explained in the previous sections.
End of explanation
"""
# Initialize the similarity search object
sim_song_search = NearestNeighbors(n_neighbors = 6, metric='euclidean')
"""
Explanation: Finding rhythmically similar songs
End of explanation
"""
# set feature type
feature_set = 'rh'
# get features from Soundcloud demo set
demoset_features = scds.features[feature_set]["data"]
# Normalize the extracted features
scaled_feature_space = StandardScaler().fit_transform(demoset_features)
# Fit the Nearest-Neighbor search object to the extracted features
sim_song_search.fit(scaled_feature_space)
"""
Explanation: Finding rhythmically similar songs using Rhythm Histograms
End of explanation
"""
query_track_soundcloud_id = 68687842 # Mr. Saxobeat
HTML(scds.getPlayerHTMLForID(query_track_soundcloud_id))
"""
Explanation: Our query-song:
This is a query song from the pre-analyzed data set:
End of explanation
"""
query_track_feature_vector = scaled_feature_space[scds.features[feature_set]["ids"] == query_track_soundcloud_id]
"""
Explanation: Retrieve the feature vector for the query song
End of explanation
"""
(distances, similar_songs) = sim_song_search.kneighbors(query_track_feature_vector, return_distance=True)
print distances
print similar_songs
# For now we use only the song indices without distances
similar_songs = sim_song_search.kneighbors(query_track_feature_vector, return_distance=False)[0]
# because we are searching in the entire collection, the top-most result is the query song itself. Thus, we can skip it.
similar_songs = similar_songs[1:]
"""
Explanation: Search the nearest neighbors of the query-feature-vector
This retrieves the most similar song indices and their distance:
End of explanation
"""
similar_soundcloud_ids = scds.features[feature_set]["ids"][similar_songs]
print similar_soundcloud_ids
"""
Explanation: Lookup the corresponding Soundcloud-IDs
End of explanation
"""
SoundcloudTracklist(similar_soundcloud_ids, width=90, height=120, visual=False)
"""
Explanation: Listen to the results
End of explanation
"""
def search_similar_songs_by_id(query_song_id, feature_set, skip_query=True):
scaled_feature_space = StandardScaler().fit_transform(scds.features[feature_set]["data"])
sim_song_search.fit(scaled_feature_space);
query_track_feature_vector = scaled_feature_space[scds.features[feature_set]["ids"] == query_song_id]
similar_songs = sim_song_search.kneighbors(query_track_feature_vector, return_distance=False)[0]
if skip_query:
similar_songs = similar_songs[1:]
similar_soundcloud_ids = scds.features[feature_set]["ids"][similar_songs]
return similar_soundcloud_ids
similar_soundcloud_ids = search_similar_songs_by_id(query_track_soundcloud_id,
feature_set='rp')
SoundcloudTracklist(similar_soundcloud_ids, width=90, height=120, visual=False)
"""
Explanation: Finding rhythmically similar songs using Rhythm Patterns
This time we define a function that performs steps analogously to the RH retrieval above:
End of explanation
"""
similar_soundcloud_ids = search_similar_songs_by_id(query_track_soundcloud_id,
feature_set='ssd')
SoundcloudTracklist(similar_soundcloud_ids, width=90, height=120, visual=False)
"""
Explanation: Finding songs based on Timbral Similarity
Finding songs based on timbral similarity using Statistical Spectral Descriptors
End of explanation
"""
track_id = 68687842 # 40439758
results_track_1 = search_similar_songs_by_id(track_id, feature_set='ssd', skip_query=False)
results_track_2 = search_similar_songs_by_id(track_id, feature_set='rh', skip_query=False)
compareSimilarityResults([results_track_1, results_track_2],
width=100, height=120, visual=False,
columns=['Statistical Spectrum Descriptors', 'Rhythm Histograms'])
"""
Explanation: Compare the Results of Timbral and Rhythmic Similarity
The first entry is the query track
End of explanation
"""
# check which files we got
mp3_files
# select from the list above the number of the song you want to use as a query (counting from 1)
song_id = 3 # count from 1
# select the feature vector type
feat_type = 'rp' # 'rh' or 'ssd' or 'rp'
# from the all_features data structure, we get the desired feature vector belonging to that song
query_feature_vector = all_features[song_id - 1][feat_type]
# get all the feature vectors of desired feature type from the Soundcloud demo set
demo_features = scds.features[feat_type]["data"]
# Initialize Neighbour Search space with demo set features
sim_song_search.fit(demo_features)
# use our own query_feature_vector for search in the demo set
(distances, similar_songs) = sim_song_search.kneighbors(query_feature_vector, return_distance=True)
print distances
print similar_songs
# now we got the song indices for similar songs in the demo set
similar_songs = similar_songs[0]
similar_songs
# and we get the according Soundcloud Track IDs
similar_soundcloud_ids = scds.features[feat_type]["ids"][similar_songs]
similar_soundcloud_ids
# we add our own Track ID at the beginning to show the seed song below:
my_track_id = own_track_ids[song_id - 1]
print my_track_id
result = np.insert(similar_soundcloud_ids,0,my_track_id)
"""
Explanation: Using your Own Query Song from the self-extracted Soundcloud tracks above
End of explanation
"""
print "Feature Type:", feat_type
SoundcloudTracklist(result, width=90, height=120, visual=False)
"""
Explanation: Visual Player with the Songs most similar to our Own Song
first song is the query song
End of explanation
"""
def search_similar_songs_with_combined_sets(scds, query_song_id, feature_sets, skip_query=True, n_neighbors=6):
features = scds.getCombinedFeaturesets(feature_sets)
sim_song_search = NearestNeighbors(n_neighbors = n_neighbors, metric='l2')
#
scaled_feature_space = StandardScaler().fit_transform(features)
#
sim_song_search.fit(scaled_feature_space);
#
query_track_feature_vector = scaled_feature_space[scds.getFeatureIndexByID(query_song_id, feature_sets[0])]
#
similar_songs = sim_song_search.kneighbors(query_track_feature_vector, return_distance=False)[0]
if skip_query:
similar_songs = similar_songs[1:]
#
similar_soundcloud_ids = scds.getIdsByIndex(similar_songs, feature_sets[0])
return similar_soundcloud_ids
feature_sets = ['ssd','rh']
compareSimilarityResults([search_similar_songs_with_combined_sets(scds, 68687842, feature_sets=feature_sets, n_neighbors=5),
search_similar_songs_with_combined_sets(scds, 40439758, feature_sets=feature_sets, n_neighbors=5)],
width=100, height=120, visual=False,
columns=[scds.getNameByID(68687842),
scds.getNameByID(40439758)])
"""
Explanation: Add On: Combining different Music Descriptors
Here we merge SSD and RH features together to account for <b>both</b> timbral and rhythmic similarity:
End of explanation
"""
|
eds-uga/csci1360e-su17 | lectures/L17.ipynb | mit | book = None
try: # Good coding practices!
f = open("Lecture17/alice.txt", "r")
book = f.read()
except FileNotFoundError:
print("Could not find alice.txt.")
else:
f.close()
print(book[:71]) # Print the first 71 characters.
"""
Explanation: Lecture 17: Natural Language Processing I
CSCI 1360E: Foundations for Informatics and Analytics
Overview and Objectives
We've covered about all the core basics of Python and are now solidly into how we wield these tools in the realm of data science. One extremely common, almost unavoidable application is text processing. It's a messy, complex, but very rewarding subarea that has reams of literature devoted to it, whereas we have this single lecture. By the end of this lecture, you should be able to:
Differentiate structured from unstructured data
Understand the different string parsing tools available through Python
Grasp some of the basic preprocessing steps required when text is involved
Define the "bag of words" text representation
Part 1: Text Preprocessing
"Preprocessing" is something of a recursively ambiguous term: it's the processing before the processing (what?).
More colloquially, it's the processing that you do in order to put your data in a useful format for the actual analysis you intend to perform. As we saw in the previous lecture, this is what data scientists spend the majority of their time doing, so it's important to know and understand the basic steps.
The vast majority of interesting data is in unstructured format. You can think of this kind of like data in its natural habitat. Like wild animals, though, data in unstructured form requires significantly more effort to study effectively.
Our goal in preprocessing is, in a sense, to turn unstructured data into structured data, or data that has a logical flow and format.
To start, let's go back to the Alice in Wonderland example from the previous lecture (you can download the text version of the book here).
End of explanation
"""
print(type(book))
lines = book.split("\n") # Split the string. Where should the splits happen? On newline characters, of course.
print(type(lines))
"""
Explanation: Recalling the mechanics of file I/O, you'll see we opened up a file descriptor to alice.txt and read the whole file in a single go, storing all the text as a single string book. We then closed the file descriptor and printed out the first line (or first 71 characters), while wrapping the entire operation in a try / except block.
But as we saw before, it's also pretty convenient to split up a large text file by lines. You could use the readlines() method instead, but you can take a string and split it up into a list of strings as well.
End of explanation
"""
print(len(lines))
"""
Explanation: voilà! lines is now a list of strings.
End of explanation
"""
sentences = book.split(".")
print(sentences[0])
"""
Explanation: ...a list of over 3,700 lines of text, no less o_O
Newline characters
Let's go over this point in a little more detail.
A "newline" character is an actual character--like "a" or "b" or "1" or ":"--that represents pressing the "enter" key. However, like tabs and spaces, this character falls under the category of a "whitespace" character, meaning that in print you can't actually see it; the computer hides it.
But programming languages like Python (and Java, and C, and Matlab, and R, and and and...) need a way to explicitly represent these whitespace characters, specifically when processing text like we're doing right now.
So, even though you can't see tabs or newlines in the actual text--go ahead and open up Alice in Wonderland and tell me if you can see the actual characters representing newlines and tabs--you can see these characters in Python.
Tabs are represented by a backslash followed by the letter "t", the whole thing in quotes: "\t"
Newlines are represented by a backslash followed by the letter "n", the whole thing in quotes: "\n"
"But wait!" you say, "Slash-t and slash-n are two characters each, not one! What kind of shenanigans are you trying to pull?"
Yes, it's weird. If you build a career in text processing, you'll find the backslash has a long and storied history as a kind of "meta"-character, in that it tells whatever programming language that the character after it is a super-special snowflake. So in some sense, the backslash-t and backslash-n constructs are actually one character, because the backslash is the text equivalent of a formal introduction.
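As an aside, you can make these "invisible" characters visible by printing a string's repr(), which shows the escape sequences explicitly:
print(repr("first line\nsecond line\tend"))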
Back to text parsing
When we called split() on the string holding the entire Alice in Wonderland book, we passed in the argument "\n", which is the newline character. In doing so, we instructed Python to
Split up the original string (hence, the name of the function) into a list of strings
The end of one string in the list and the beginning of the next would be delimited by the occurrence of a newline character "\n" in the original string. In a sense, we're treating the book as a "newline-delimited" format
Return a list of strings, where each string is one line of the book
An important distinction for text processing neophytes: this splits the book up on a line by line basis, NOT a sentence by sentence basis. There are a lot of implicit semantic assumptions we hold from a lifetime of taking our native language for granted, but which Python has absolutely no understanding of beyond what we tell it to do.
You certainly could, in theory, split the book on punctuation, rather than newlines. This is a bit trickier to do without regular expressions (see Part 3), but to give an example of splitting by period:
End of explanation
"""
print("Even though there's no newline in the string I wrote, Python's print function still adds one.")
print() # Blank line!
print("There's a blank line above.")
"""
Explanation: You can already see some problems with this approach: not all sentences end with periods. Sure, you could split things again on question marks and exclamation points, but this still wouldn't tease out the case of the title--which has NO punctuation to speak of!--and doesn't account for important literary devices like semicolons and parentheses. These are valid punctuation characters in English! But how would you handle them?
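One (still imperfect) option is Python's built-in re module, which can split on several punctuation marks at once; a quick sketch of the idea, not a full sentence tokenizer:
import re
rough_sentences = re.split(r"[.!?]", book)
print("Rough sentence count: {}".format(len(rough_sentences)))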
Cleaning up trailing whitespace
You may have noticed that, whenever you invoke the print() statement, you automatically get a new line even though I doubt you've ever added a "\n" to the end of the string you're printing.
End of explanation
"""
print("Here's a string with an explicit newline --> \n")
print()
print("Now there are TWO blank lines above!")
"""
Explanation: This is fine for 99% of cases, except when the string already happens to have a newline at the end.
End of explanation
"""
readlines = None
try:
with open("Lecture17/alice.txt", "r") as f:
readlines = f.readlines()
except:
print("Something went wrong.")
print(readlines[0])
print(readlines[2])
print("There are blank lines because of the trailing newline characters.")
"""
Explanation: "But wait!" you say again, "You read in the text file and split it on newlines a few slides ago, but when you printed out the first line, there was no extra blank line underneath! Why did that work today but not in previous lectures?"
An excellent question. It has to do with the approach we took. Previously, we used the readline() method, which hands you back one line of text at a time with the trailing newline intact:
End of explanation
"""
print(readlines[0]) # This used readlines(), so it STILL HAS trailing newlines.
print(lines[0]) # This used split(), so the newlines were REMOVED.
print("No trailing newline when using split()!")
"""
Explanation: On the other hand, when you call split() on a string, it not only identifies all the instances of the character you specify as the endpoints of each successive list, but it also removes those characters from the ensuing lists.
End of explanation
"""
trailing_whitespace = " \t this is the important part \n \n \t "
no_whitespace = trailing_whitespace.strip()
print("Border --> |{}| <-- Border".format(no_whitespace))
"""
Explanation: Is this getting confusing? If so, just remember the following:
In general, make liberal use of the strip() function for strings you read in from files.
This function strips (hence, the name) any whitespace off the front AND end of a string. So in the following example:
End of explanation
"""
print(lines[410])
print(lines[411])
"""
Explanation: All the pesky spaces, tabs, and newlines have been stripped off the string. This is extremely useful and pretty much a must when you're preprocessing text.
Capitalization
This is one of those insidious things that seems like such a tiny detail but can radically alter your analysis if left unnoticed: developing a strategy for how you're going to handle uppercase versus lowercase.
Take the following example from Alice in Wonderland, lines 410 and 411:
End of explanation
"""
print(lines[0])
title = lines[0].lower()
print(title)
"""
Explanation: You'll notice the word "and" appears twice: once at the beginning of the sentence in line 410, and again in the middle of the sentence in line 411. It's the same word, but given their difference in capitalization, it's entirely likely that your analysis framework would treat those as two separate words. After all, "and" != "And". Go ahead and try!
A common strategy is to simply lowercase everything. Yes, you likely lose a little bit of information, as it becomes more difficult to identify proper nouns, but a significant source of confusion--is it a proper noun, or just the start of a sentence? has the meaning of the word changed if it's in lowercase versus ALL CAPS? what if you're comparing multiple styles of writing and the authors use different literary forms of capitalization?--is removed entirely.
You can do this with the Python string's lower() method:
End of explanation
"""
from collections import defaultdict
word_counts = defaultdict(int) # All values are integers.
"""
Explanation: Now everything is, in some sense, "equivalent."
Part 2: The "Bag of Words"
The "bag of words" model is one of the most popular ways of representing a large collection of text, and one of the easiest ways to structure text.
The "bag of words" on display on the 8th floor of the Computer Science building at Carnegie Mellon University:
When using this model, the implicit assumptions behind it are saying
Relative word order and grammar DON'T MATTER to the overall meaning of the text.
Relative word frequencies ABSOLUTELY MATTER to the overall meaning of the text.
Formally, the bag of words is a "multiset", but you can think of it like a Python dictionary. In fact, at its simplest, that's all the bag of words is: a count of how many times each word occurs in your text. But like dictionaries, ordering no longer matters.
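As a tiny illustration before we scale up to the whole book, here is the bag of words for a single toy sentence (word order is lost; only the counts remain):
toy_counts = defaultdict(int)
for w in "the cat sat on the mat".split():
    toy_counts[w] += 1
print(dict(toy_counts))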
To illustrate, let's go ahead and design a word counter for Alice in Wonderland! First, we'll initialize our dictionary of counts. To make our lives easier, we'll use a defaultdict, a special kind of dictionary you can use when you want automatic default values enforced for keys that don't exist.
End of explanation
"""
for line in lines: # Iterate through the lines of the book
words = line.split() # If you don't give split() any arguments, the *default* split character is ANY whitespace.
for word in words:
w = word.lower() # Convert to lowercase.
word_counts[w] += 1 # Add 1 to the count for that word in our word dictionary.
"""
Explanation: It otherwise behaves exactly like a regular Python dictionary, except we won't get a KeyError if we reference a key that doesn't exist; instead, a new key will be automatically created and a default value set. For the int type, this default value is 0.
Next, we'll iterate through the lines of the book. There are a couple things we need to do here:
For each line, split the line into single words. We'll go back yet again to our good friend split().
Now we'll have a list of words, so we'll need to iterate over these words, lowercasing them all and then adding them up.
So the code should look something like this:
End of explanation
"""
print("Unique words: {}".format(len(word_counts.keys())))
"""
Explanation: Let's take a look at what we have! First, we'll count how many unique words there are.
End of explanation
"""
print("Total words: {}".format(sum(word_counts.values())))
"""
Explanation: Next, we'll count the total number of words in the book.
End of explanation
"""
maxcount = -1
maxitem = None
for k, v in word_counts.items():
if v > maxcount:
maxcount = v
maxitem = k
print("'{}' occurred most often ({} times).".format(maxitem, maxcount))
"""
Explanation: Now we'll find the word that occurred most often:
End of explanation
"""
from collections import Counter
counts = Counter(word_counts)
print(counts.most_common(20)) # Find the 20 words with the highest counts!
"""
Explanation: Well, there's a shocker. /sarcasm
Python has another incredibly useful utility class for whenever we're counting things: a Counter! This will let us easily find the n words with the highest counts.
End of explanation
"""
print("Here's the notation --> {}".format("another string"))
"""
Explanation: Pretty boring, right? Most of these words are referred to as stop words, or words that are used in pretty much every context and therefore don't tell you anything particularly interesting. They're usually filtered out, but because of some interesting corner cases, there's no universal "stop word list"; it's generally up to you to decide what words to remove (though pretty much all of the above top 20, with the exception of "alice", can be removed).
So, in addition to stripping out and splitting on whitespace, and lowercasing all the words, we also check if the word is part of some pre-built stop-word list. If it is, just throw it out; if not, then we'll count it.
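As an aside, a minimal sketch of that filtering step looks like this (the stop-word list here is a tiny hand-picked example for illustration, not a standard list):
stop_words = set(["the", "and", "to", "a", "of", "she", "it", "said", "in", "was", "i"])
filtered_counts = defaultdict(int)
for line in lines:
    for word in line.split():
        w = word.lower()
        if w not in stop_words:
            filtered_counts[w] += 1
print("Unique non-stop words: {}".format(len(filtered_counts)))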
Part 3: String Formatting
We've seen previously how to convert strings and numbers (integers and floating-point values) back and forth; just using the str(), int(), and float() functions. Pretty easy.
Here's a harder question: how do you represent a floating-point number as a string, but to only 2 decimal places?
Another hard question: how do you represent an integer as string, but with 3 leading zeros?
You've probably noticed the bizarre notation I've used when printing out strings.
End of explanation
"""
print("{}, {}, and {}".format("a", "b", "c"))
"""
Explanation: By using the curly braces {} inside the string, I've effectively created a placeholder for another string to be inserted. That other string is the argument(s) to the format() function.
But there's a lot more to the curly braces than just {}.
The simplest is just using the curly braces and nothing else. If you specify multiple pairs of curly braces, you'll need to specify an equal number of arguments to format(), and they'll be inserted into the string in the order you gave them to format().
End of explanation
"""
print("{0}, {2}, and {1}".format("a", "b", "c"))
"""
Explanation: Alternatively, you can specify the indices of the format() arguments inside the curly braces:
End of explanation
"""
print("{first_arg}, {second_arg}, and {third_arg}".format(second_arg = "b", first_arg = "a", third_arg = "c"))
"""
Explanation: Notice the 2nd and 3rd arguments were flipped in their final ordering!
You can even provide arbitrary named arguments inside the curly braces, which format() will then expect.
End of explanation
"""
print("One leading zero: {:02}".format(1))
print("Two leading zeros: {:03}".format(1))
print("One leading zero: {:04}".format(100))
print("Two leading zeros: {:05}".format(100))
"""
Explanation: Leading zeros and decimal precision
You can also use this same syntax to specify leading zeros and decimal precision, but the notation gets a little more complicated.
You'll need to first enter a colon ":", followed by the number 0, followed by the number of places that should be counted:
End of explanation
"""
import numpy as np
print("Unformatted: {}".format(np.pi))
print("Two decimal places: {:.2f}".format(np.pi))
"""
Explanation: Decimal precision is very similar, but instead of a 0, you'll specify a decimal point "." followed by the level of precision you want (a number), followed by the letter "f" to signify that it's a floating-point:
End of explanation
"""
big_number = 98483745834
print("Big number: {}".format(big_number))
print("Big number with commas: {:,}".format(big_number))
"""
Explanation: Finally, you can also include the comma in large numbers so you can actually read them more easily:
End of explanation
"""
print("'Wonderland' occurs {} times.".format(book.count("Wonderland")))
"""
Explanation: Additional string functions
There is an entire ecosystem of Python string functions that I highly encourage you to investigate, but I'll go over a few of the most common here.
upper() and lower(): we've seen the latter already, but the former can be just as useful.
count() will give you the number of times a substring occurs in the actual string. If you're interested in one word in particular, this can be a very efficient way of finding it:
End of explanation
"""
print("'Wonderland' is first found {} characters in.".format(book.find("Wonderland")))
"""
Explanation: What if you need to find the actual location in a string of that substring? As in, where is "Wonderland" first mentioned in the book? find() to the rescue!
End of explanation
"""
print("'Wonderland' is first found {} characters in.".format(book.find("Wonderland", 43 + 1)))
"""
Explanation: ...well, that's embarrassing; that's probably the "Wonderland" that's in the book title. How about the second occurrence, then? We can use the index of the first one to tell find() that we want to start looking from there.
End of explanation
"""
my_book = book.replace("Wonderland", "Las Vegas") # Replace the 1st thing with the 2nd thing
print(my_book[:71])
"""
Explanation: Now, I've decided I don't want this book to be Alice in Wonderland, but rather Alice in Las Vegas! How can I make this happen? replace()!
End of explanation
"""
print(lines[8])
print(lines[8].startswith("Title"))
print(lines[8].endswith("Wonderland"))
"""
Explanation: Two more very useful string functions are startswith() and endswith(). These are great if you're testing for leading or trailing characters or words.
End of explanation
"""
words = lines[8].split(" ")
print(words)
"""
Explanation: Finally, the join() method. This is a little tricky to use, but insanely useful. It's cropped up on a couple previous assignments.
You'll want to use this method whenever you have a list of strings that you want to "glue" together into a single string. Perhaps you have a list of words and want to put them back together into a sentence!
End of explanation
"""
between_char = " "
sentence = between_char.join(words)
print(sentence)
"""
Explanation: We can do this by specifying first the character we want to put in between all the words we're joining--in this case, just a space character--then calling join() on that character, and passing in the list of words we want to glue together as the argument to the function.
End of explanation
"""
|
sdpython/ensae_teaching_cs | _doc/notebooks/td2a/td2a_cenonce_session_5.ipynb | mit | from jyquickhelper import add_notebook_menu
add_notebook_menu()
"""
Explanation: 2A.i - Relational model, analysis of incidents in air transport
Relational databases, SQL logic.
End of explanation
"""
import pyensae.datasource
pyensae.datasource.download_data("tp_2a_5_compagnies.zip")
import os
import pandas
df_Incident = pandas.read_csv('Incident.csv', sep=';')
df_Flights = pandas.read_csv('Flights.csv', sep=';')
df_Crews = pandas.read_csv('Crews.csv', sep=';')
df_Crews_planes_habilitation = pandas.read_csv('Crews_planes_habilitation.csv', sep=';')
df_Planes = pandas.read_csv('Planes.csv', sep=';')
df_Plane_models = pandas.read_csv('Plane_models.csv', sep=';')
df_Motors = pandas.read_csv('Motors.csv', sep=';')
df_Motor_models = pandas.read_csv('Motor_models.csv', sep=';')
df_Compagnies = pandas.read_csv('Compagnies.csv', sep=';')
df_Cities = pandas.read_csv('Cities.csv', sep=';')
df_Incident.head(5)
df_Flights.head(5)
df_Crews.head(5)
df_Crews_planes_habilitation.head(5)
df_Planes.head(5)
df_Plane_models.head(5)
df_Motors.head(5)
df_Motor_models.head(5)
df_Compagnies.head(5)
df_Cities.head(5)
"""
Explanation: Data
The following code downloads the required data, tp_2a_5_compagnies.zip.
End of explanation
"""
import numpy as np
try:
df_Flights.reset_index( inplace = True )
df_Incident.reset_index( inplace = True )
except Exception:
pass
## We assume this comes from the fact that the indexes have already been reset to zero
df_Flight_Incident = pandas.merge( df_Flights, df_Incident, left_on = "Id", right_on = "Flight_id", how="outer" )
df_Flight_Incident["Is_incident"] = np.isnan( df_Flight_Incident["Flight_id"] ) == False
df_Flight_Incident.head(5)
"""
Explanation: Of course, not all the information is available in the tables as-is; you will mainly have to use joins and groupby operations to obtain the information you want. To obtain a table containing the flights with an "est_incident" column, you have to do:
End of explanation
"""
df_Flight_Incident.groupby( "Departure_id" )["Is_incident"].mean()
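# Aside (illustrative): the same groupby can report both the number of flights and the incident
# rate per departure airport, sorted by rate; only columns created above are used here.
stats = df_Flight_Incident.groupby("Departure_id")["Is_incident"].agg(["count", "mean"])
stats.sort_values("mean", ascending=False).head(10)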
"""
Explanation: We can also compute statistics by departure city ...
End of explanation
"""
|
balarsen/pymc_learning | Foil Open Area/Open Area.ipynb | bsd-3-clause | import itertools
from pprint import pprint
from operator import getitem
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
import numpy as np
import spacepy.plot as spp
import pymc as mc
import tqdm
from MCA_file_viewer_v001 import GetMCAfile
def plot_box(x, y, c='r', lw=0.6, ax=None):
if ax is None:
plt.plot((xind[0], xind[0]), (yind[0], yind[1]), lw=lw, c=c)
plt.plot((xind[1], xind[1]), (yind[0], yind[1]), lw=lw, c=c)
plt.plot((xind[0], xind[1]), (yind[0], yind[0]), lw=lw, c=c)
plt.plot((xind[0], xind[1]), (yind[1], yind[1]), lw=lw, c=c)
else:
ax.plot((xind[0], xind[0]), (yind[0], yind[1]), lw=lw, c=c)
ax.plot((xind[1], xind[1]), (yind[0], yind[1]), lw=lw, c=c)
ax.plot((xind[0], xind[1]), (yind[0], yind[0]), lw=lw, c=c)
ax.plot((xind[0], xind[1]), (yind[1], yind[1]), lw=lw, c=c)
ZZ, XX, YY = GetMCAfile('16090203.mca')
# It is believed as of 2016-09-19 that the MCA records 2 counts for each count.
# This means all data are even and all the data can be divided by 2 to give the
# right number of counts. Per emails Larsen-Fernandes 2016-09-17
# These data are integers and care must be taken to assure that /2 does not
# lead to numbers that are not representable in float
ZZ = ZZ.astype(float)
ZZ /= 2
XX = XX.astype(np.uint16) # as they all should be integers anyway
xind = (986, 1003)
yind = (492, 506)
fig = plt.figure(figsize=(20,8))
ax1 = fig.add_subplot(131)
ax2 = fig.add_subplot(132)
ax3 = fig.add_subplot(133)
pc = ax1.pcolormesh(XX, YY, ZZ, norm=LogNorm())
plt.colorbar(pc, ax=ax1)
plot_box(xind, yind, ax=ax1)
ax2.hist(ZZ.flatten(), 20)
ax2.set_yscale('log')
ax3.hist(ZZ.flatten(), 20, normed=True)
ax3.set_yscale('log')
"""
Explanation: Experimental data analysis on foil open area
Brian Larsen, ISR-1
Data provided by Phil Fernandes, ISR-1 2016-9-14
The setup is a foil in its holder mounted to a foil holder meant to block incident ions. The foil has a ~0.6mm hole in it to provide a baseline. The goal is to use the relative intensity of the witness hole to determine the intensity of holes in the foil.
A quick summary:
* Foil is placed 0.66” from front of MCP surface
* Beam is rastered to cover full foil and “witness” aperture
* Beam is 1.0 keV Ar+, slightly underfocused
* Accumulate data for set period of time (either 60s or 180s, identified in spreadsheet)
* Total_cts is the # of counts through the foil and the witness aperture
* Witness_cts is the # of counts in the witness aperture only
* Foil_cts = total_cts – witness_cts
* Open area OA = (foil_cts/witness_cts) * (witness_area/foil_area)
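As a quick sanity check, plugging Phil's counts and the witness/foil areas used later in this notebook (Aw = pi*(0.2/2)**2 mm**2, Af = 182.75 mm**2) into this formula reproduces his value:
witness_area_chk = np.pi * (0.2 / 2) ** 2  # mm**2
foil_area_chk = 182.75                     # mm**2
foil_cts_chk, witness_cts_chk = 3912.0, 658.0
print('OA check: {0:.5f}'.format((foil_cts_chk / witness_cts_chk) * (witness_area_chk / foil_area_chk)))
# ~0.00102, matching the table value quoted below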
End of explanation
"""
total_cnts = ZZ.sum()
print('Total counts:{0} -- Phil got {1} -- remember /2'.format(total_cnts, 4570/2)) # remember we did a /2
# Is the witness hole at x=1000, y=500?
XX.shape, YY.shape, ZZ.shape
print(ZZ[yind[0]:yind[1], xind[0]:xind[1]])
plt.figure()
plt.pcolormesh(XX[xind[0]:xind[1]], YY[yind[0]:yind[1]], ZZ[yind[0]:yind[1], xind[0]:xind[1]] , norm=LogNorm())
plt.colorbar()
witness_counts = ZZ[yind[0]:yind[1], xind[0]:xind[1]].sum()
print('Witness counts: {0}, Phil got {1}/2={2}'.format(witness_counts, 658, 658/2))
wit_pixels = 46
print('There are {0} pixels in the witness peak'.format(wit_pixels))
total_counts = ZZ.sum()
print("There are a total of {0} counts".format(total_counts))
"""
Explanation: Do some calculations to try and match Phil's analysis
Phil's data:
File name | Witness cts | Total cts | Foil cts | Open area
16090203 | 658 | 4570 | 3912 | 0.00102
End of explanation
"""
def neighbor_inds(x, y, xlim=(0,1023), ylim=(0,1023), center=False, mask=False):
"""
given an x and y index return the 8 neighbor indices
if center also return the center index
if mask return a boolean mask over the whole 2d array
"""
xi = np.clip([x + v for v in [-1, 0, 1]], xlim[0], xlim[1])
yi = np.clip([y + v for v in [-1, 0, 1]], ylim[0], ylim[1])
ans = [(i, j) for i, j in itertools.product(xi, yi)]
if not center:
ans.remove((x,y))
if mask:
out = np.zeros((np.diff(xlim)+1, np.diff(ylim)+1), dtype=np.bool)
for c in ans:
out[c] = True
else:
out = ans
return np.asarray(out)
print(neighbor_inds(2,2))
print(neighbor_inds(2,2, mask=True))
print(ZZ[neighbor_inds(500, 992, mask=True)])
def get_alone_pixels(dat):
"""
loop over all the data and store the value of all lone pixels
"""
ans = []
for index, x in tqdm.tqdm_notebook(np.ndenumerate(dat)):
if (np.sum([ZZ[i, j] for i, j in neighbor_inds(index[0], index[1])]) == 0) and x != 0:
ans.append((index, x))
return ans
# print((neighbor_inds(5, 4)))
alone = get_alone_pixels(ZZ)
pprint(alone)
# ZZ[neighbor_inds(5, 4)[0]].shape
# print((neighbor_inds(5, 4))[0])
# print(ZZ[(neighbor_inds(5, 4))[0]].shape)
# ZZ[4,3]
ZZ[(965, 485)]
print(neighbor_inds(4,3)[0])
print(ZZ[neighbor_inds(4,3)[0]])
print(ZZ[3,2])
ni = neighbor_inds(4,3)[0]
print(ZZ[ni[0], ni[1]])
(ZZ % 2).any() # not all even any longer
"""
Explanation: Can we get a noise estimate?
1) Try all pixels that have a value but whose neighbors do not. This assumes that real holes are large enough to have a point spread function and therefore cannot show up in a single pixel.
End of explanation
"""
n_noise = np.sum([v[1] for v in alone])
n_pixels = 1024*1024
noise_pixel = n_noise/n_pixels
print("There were a total of {0} random counts over {1} pixels, {2} cts/pixel".format(n_noise, n_pixels, noise_pixel))
"""
Explanation: Noise estimates
Now we assume that all lone counts are noise that can be considered random and uniform over the MCP.
This then provides a number of counts per MCA pixel that we can use.
End of explanation
"""
minx_tmp = ZZ.sum(axis=0)
minx_tmp.shape
print(minx_tmp)
miny_tmp = ZZ.sum(axis=1)
miny_tmp.shape
print(miny_tmp)
"""
Explanation: Maybe we should consider just part of the MCP; let's get the min/max X and min/max Y where there are counts and just use that area. This will increase the cts/pixel.
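An illustrative way to do that (not used in the analysis below, which keeps the full 1024x1024 normalization):
xs_nz = np.nonzero(ZZ.sum(axis=0))[0]
ys_nz = np.nonzero(ZZ.sum(axis=1))[0]
active_pixels = (xs_nz.max() - xs_nz.min() + 1) * (ys_nz.max() - ys_nz.min() + 1)
print('Active region: x {0}..{1}, y {2}..{3}, {4} pixels'.format(
    xs_nz.min(), xs_nz.max(), ys_nz.min(), ys_nz.max(), active_pixels))
print('Noise per pixel in active region: {0}'.format(n_noise / active_pixels))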
End of explanation
"""
Aw = np.pi*(0.2/2)**2 # mm**2
Af = 182.75 # mm**2 this is the area of the foil
W_F_ratio = Aw/Af
print(Aw, Af, W_F_ratio)
C = wit_pixels/n_pixels
D = (n_pixels-wit_pixels)/n_pixels
print('C', C, 'D', D)
nbkg = mc.Uniform('nbkg', 1, n_noise*5) # just 1 to some large number
obsnbkg = mc.Poisson('obsnbkg', nbkg, observed=True, value=n_noise)
witc = mc.Uniform('witc', 0, witness_counts*5) # just 0 to some large number
obswit = mc.Poisson('obswit', nbkg*C + witc, observed=True, value=witness_counts)
realc = mc.Uniform('realc', 0, total_counts*5) # just 0 to some large number
obsopen = mc.Poisson('obsopen', nbkg*D + realc, observed=True, value=total_counts-witness_counts)
@mc.deterministic(plot=True)
def open_area(realc=realc, witc=witc):
return realc*Aw/witc/Af
model = mc.MCMC([nbkg, obsnbkg, witc, obswit, realc, obsopen, open_area])
model.sample(200000, burn=100, thin=30, burn_till_tuned=True)
mc.Matplot.plot(model)
# 1000, burn=100, thin=30 0.000985 +/- 0.000058
# 10000, burn=100, thin=30 0.000982 +/- 0.000061
# 100000, burn=100, thin=30 0.000984 +/- 0.000059
# 200000, burn=100, thin=30 0.000986 +/- 0.000059
# 1000000, burn=100, thin=30 0.000985 +/- 0.000059
print("Foil 1 \n")
witc_mean = np.mean(witc.trace()[...])
witc_std = np.std(witc.trace()[...])
print("Found witness counts of {0} turn into {1} +/- {2} ({3:.2f}%)\n".format(witness_counts, witc_mean, witc_std, witc_std/witc_mean*100))
realc_mean = np.mean(realc.trace()[...])
realc_std = np.std(realc.trace()[...])
print("Found non-witness counts of {0} turn into {1} +/- {2} ({3:.2f}%)\n".format(total_counts-witness_counts, realc_mean, realc_std, realc_std/realc_mean*100))
nbkg_mean = np.mean(nbkg.trace()[...])
nbkg_std = np.std(nbkg.trace()[...])
print("Found noise counts of {0} turn into {1} +/- {2} ({3:.2f}%)\n".format(0, nbkg_mean, nbkg_std, nbkg_std/nbkg_mean*100))
OA_median = np.median(open_area.trace()[...])
OA_mean = np.mean(open_area.trace()[...])
OA_std = np.std(open_area.trace()[...])
print("The open area fraction is {0:.6f} +/- {1:.6f} ({2:.2f}%) at the 1 stddev level from 1 measurement\n".format(OA_mean, OA_std,OA_std/OA_mean*100 ))
print("Phil got {0} for 1 measurement\n".format(0.00139))
print("The ratio Brian/Phil is: {0:.6f} or {1:.6f}".format(OA_mean/0.00139, 0.00139/OA_mean))
"""
Explanation: Looks to go all the way to all sides in X-Y.
Work to total open area calculations
Now we can model the total open area of the foil given the noise estimate per pixel and the pixels that are a part of the witness sample and the total area.
We model the observed background as Poisson with center at the real background:
$obsnbkg \sim Pois(nbkg)$
We model the observed witness sample, $obswit$, as Poisson with center of background per pixel times number of pixels in peak plus the number of real counts:
$obswit \sim Pois(nbkg/C + witc)$, $C = \frac{A_w}{A_t}$
This then leaves the number of counts in open areas of the system (excluding witness) as a Poisson with center of background per pixel times number of pixels in the system (less witness) plus the real number of counts.
$obsopen \sim Pois(nbkg/D + realc)$, $D=\frac{A_t - A_w}{A_t}$
Then then the open area is given by the ratio number of counts, $realc$, over an unknown area, $A_o$, as related to witness counts, $witc$, to the witness area, $A_w$, which is assumed perfect as as 0.6mm hole.
$\frac{A_o}{realc}=\frac{A_w}{witc} => A_o = \frac{A_w}{witc}realc $
End of explanation
"""
_Aw = np.pi*(0.2/2)**2 # mm**2
_Af = 182.75 # mm**2 this is the area of the foil
Aw = mc.Normal('Aw', _Aw, (_Aw*0.2)**-2) # 20%
Af = mc.Normal('Af', _Af, (_Af*0.1)**-2) # 10%
print(_Aw, _Af)
C = wit_pixels/n_pixels
D = (n_pixels-wit_pixels)/n_pixels
print('C', C, 'D', D)
nbkg = mc.Uniform('nbkg', 1, n_noise*5) # just 1 to some large number
obsnbkg = mc.Poisson('obsnbkg', nbkg, observed=True, value=n_noise)
witc = mc.Uniform('witc', 0, witness_counts*5) # just 0 to some large number
obswit = mc.Poisson('obswit', nbkg*C + witc, observed=True, value=witness_counts)
realc = mc.Uniform('realc', 0, total_counts*5) # just 0 to some large number
obsopen = mc.Poisson('obsopen', nbkg*D + realc, observed=True, value=total_counts-witness_counts)
@mc.deterministic(plot=True)
def open_area(realc=realc, witc=witc, Aw=Aw, Af=Af):
return realc*Aw/witc/Af
model = mc.MCMC([nbkg, obsnbkg, witc, obswit, realc, obsopen, open_area, Af, Aw])
model.sample(200000, burn=100, thin=30, burn_till_tuned=True)
mc.Matplot.plot(nbkg)
mc.Matplot.plot(witc)
mc.Matplot.plot(realc)
# mc.Matplot.plot(open_area)
mc.Matplot.plot(Aw)
_ = spp.plt.hist(open_area.trace(), 20)
print("Foil 1 \n")
witc_mean = np.mean(witc.trace()[...])
witc_std = np.std(witc.trace()[...])
print("Found witness counts of {0} turn into {1} +/- {2} ({3:.2f}%)\n".format(witness_counts, witc_mean, witc_std, witc_std/witc_mean*100))
realc_mean = np.mean(realc.trace()[...])
realc_std = np.std(realc.trace()[...])
print("Found non-witness counts of {0} turn into {1} +/- {2} ({3:.2f}%)\n".format(total_counts-witness_counts, realc_mean, realc_std, realc_std/realc_mean*100))
nbkg_mean = np.mean(nbkg.trace()[...])
nbkg_std = np.std(nbkg.trace()[...])
print("Found noise counts of {0} turn into {1} +/- {2} ({3:.2f}%)\n".format(0, nbkg_mean, nbkg_std, nbkg_std/nbkg_mean*100))
OA_median = np.median(open_area.trace()[...])
OA_mean = np.mean(open_area.trace()[...])
OA_std = np.std(open_area.trace()[...])
print("The open area fraction is {0:.6f} +/- {1:.6f} ({2:.2f}%) at the 1 stddev level from 1 measurement\n".format(OA_mean, OA_std,OA_std/OA_mean*100 ))
print("Phil got {0} for 1 measurement\n".format(0.00139))
print("The ratio Brian/Phil is: {0:.6f} or {1:.6f}".format(OA_mean/0.00139, 0.00139/OA_mean))
mc.Matplot.plot(Aw)
"""
Explanation: Run again, allowing some uncertainty on the witness and foil areas
End of explanation
"""
|
intel-analytics/BigDL | python/chronos/use-case/network_traffic/network_traffic_autots_forecasting.ipynb | apache-2.0 | import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
raw_df = pd.read_csv("data/data.csv")
"""
Explanation: Network Traffic Forecasting with AutoTSEstimator
In telco, accurate forecasts of KPIs (e.g. network traffic, utilization, user experience, etc.) for communication networks (2G/3G/4G/5G/wired) can help predict network failures, allocate resources, or save energy.
In this notebook, we demonstrate a reference use case where we use the network traffic KPI(s) in the past to predict traffic KPI(s) in the future. We demonstrate how to use AutoTS in project Chronos to do time series forecasting in an automated and distributed way.
For demonstration, we use the publicly available network traffic data repository maintained by the WIDE project and, in particular, the network traffic traces aggregated every 2 hours (i.e. AverageRate in Mbps/Gbps and Total Bytes) in years 2018 and 2019 at the transit link of WIDE to the upstream ISP (dataset link).
Download raw dataset and load into dataframe
Now we download the dataset and load it into a pandas dataframe. The steps are as below.
First, run the script get_data.sh to download the raw data. It will download the monthly aggregated traffic data for 2018 and 2019 into the data folder. The raw data contains aggregated network traffic (average Mbps and total bytes) as well as other metrics.
Second, run extract_data.sh to extract the relevant traffic KPI's from the raw data, i.e. AvgRate for the average use rate and total for total bytes. The script will extract the KPI's with timestamps into data/data.csv.
Finally, use pandas to load data/data.csv into a dataframe as shown below.
End of explanation
"""
raw_df.head()
"""
Explanation: Below are some example records of the data
End of explanation
"""
df = pd.DataFrame(pd.to_datetime(raw_df.StartTime))
# we can find 'AvgRate' is of two scales: 'Mbps' and 'Gbps'
raw_df.AvgRate.str[-4:].unique()
# Unify AvgRate value
df['AvgRate'] = raw_df.AvgRate.apply(lambda x:float(x[:-4]) if x.endswith("Mbps") else float(x[:-4])*1000)
df["total"] = raw_df["total"]
df.head()
df.describe()
"""
Explanation: Data pre-processing
Now we need to do data cleaning and preprocessing on the raw data. Note that this part could vary for different datasets.
For the network traffic data we're using, the processing contains 2 parts:
1. Convert string datetime to TimeStamp
2. Unify the measurement scale for the AvgRate value - some records use Mbps, some use Gbps
End of explanation
"""
ax = df.plot(y='AvgRate',figsize=(12,5), title="AvgRate of network traffic data")
"""
Explanation: Plot the data to see what the KPIs look like
End of explanation
"""
from bigdl.orca import init_orca_context
init_orca_context(cores=10, init_ray_on_spark=True)
"""
Explanation: Time series forecasting with AutoTS
AutoTS provides AutoML support for building end-to-end time series analysis pipelines (including automatic feature generation, model selection and hyperparameter tuning).
The general workflow using automated training contains the two steps below.
1. create an AutoTSEstimator to train a TSPipeline, and save it to file to use later or elsewhere if you wish.
2. use TSPipeline to do prediction, evaluation, and incremental fitting as well.
Chronos uses Orca to enable distributed training and AutoML capabilities. Init orca as below. View Orca Context for more details. Note that argument init_ray_on_spark must be True for Chronos.
End of explanation
"""
from bigdl.chronos.autots import AutoTSEstimator, TSPipeline
import torch
import bigdl.orca.automl.hp as hp
auto_estimator = AutoTSEstimator(model='lstm',
search_space="normal",
past_seq_len=hp.randint(50, 100),
future_seq_len=1,
metric="mse",
cpus_per_trial=2)
"""
Explanation: Then we initialize an AutoTSEstimator.
End of explanation
"""
from bigdl.chronos.data import TSDataset
from sklearn.preprocessing import StandardScaler
tsdata_train, tsdata_val, tsdata_test = TSDataset.from_pandas(df,
dt_col="StartTime",
target_col="AvgRate",
with_split=True,
val_ratio=0.1,
test_ratio=0.1)
standard_scaler = StandardScaler()
for tsdata in [tsdata_train, tsdata_val, tsdata_test]:
tsdata.gen_dt_feature(one_hot_features=["HOUR", "WEEKDAY"])\
.impute(mode="last")\
.scale(standard_scaler, fit=(tsdata is tsdata_train))
"""
Explanation: We need to split the data frame into train, validation and test data frame before training.
Then we impute the data to handle missing data and scale the data.
You can use TSDataset as an easy way to finish it.
End of explanation
"""
%%time
ts_pipeline = auto_estimator.fit(data=tsdata_train,
epochs=20,
batch_size=128,
validation_data=tsdata_val,
n_sampling=24)
"""
Explanation: Then we fit on train data and evaluate on validation data.
End of explanation
"""
best_config = auto_estimator.get_best_config()
best_config
"""
Explanation: We get a TSPipeline after training. Let's print the hyperparameters selected.
Note that past_seq_len is the lookback value that is automatically chosen
End of explanation
"""
y_pred = ts_pipeline.predict(tsdata_test)
"""
Explanation: We use tspipeline to predict and evaluate.
End of explanation
"""
# plot the predicted values and actual values
lookback = best_config['past_seq_len']
plt.figure(figsize=(16,6))
test_df = tsdata_test.unscale().to_pandas()
tsdata_test.scale(standard_scaler, fit=False)
plt.plot(test_df.StartTime[lookback - 1:], y_pred[:,0,0], color='red', label='predicted values')
plt.plot(test_df.StartTime[lookback - 1:], test_df.AvgRate[lookback - 1:], color='blue', label='actual values')
plt.title('the predicted values and actual values (for the test data)')
plt.xlabel('datetime')
plt.legend(loc='upper left')
plt.show()
"""
Explanation: plot the actual and predicted values for the AvgRate KPI
End of explanation
"""
mse, smape = ts_pipeline.evaluate(tsdata_test, metrics=["mse", "smape"])
print("Evaluate: the mean square error is", mse)
print("Evaluate: the smape value is", smape)
"""
Explanation: Calculate the mean square error and the symmetric mean absolute percentage error.
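For reference, one common way to write sMAPE is sketched below in numpy; the exact formula used by Chronos' built-in "smape" metric may differ (e.g. in scaling), so treat this as illustrative.
def smape_ref(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    return np.mean(2.0 * np.abs(y_pred - y_true) / (np.abs(y_true) + np.abs(y_pred))) * 100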
End of explanation
"""
# save pipeline file
my_ppl_file_path = "/tmp/saved_pipeline"
ts_pipeline.save(my_ppl_file_path)
"""
Explanation: You can save the pipeline to a file and reload it later to do incremental fitting or other operations.
End of explanation
"""
from bigdl.orca import stop_orca_context
stop_orca_context()
"""
Explanation: You can stop the orca context after auto training.
End of explanation
"""
new_ts_pipeline = TSPipeline.load(my_ppl_file_path)
"""
Explanation: Next, we demonstrate how to do incremental fitting with your saved pipeline file.
First, load the saved pipeline file.
End of explanation
"""
new_ts_pipeline.fit(tsdata_val)
"""
Explanation: Then do incremental fitting with TSPipeline.fit(). We use the validation data frame as additional data for demonstration; you can use your own new data frame.
End of explanation
"""
# predict results of test_df
y_pred = new_ts_pipeline.predict(tsdata_test)
lookback = best_config['past_seq_len']
plt.figure(figsize=(16,6))
test_df = tsdata_test.unscale().to_pandas()
tsdata_test.scale(standard_scaler, fit=False)
plt.plot(test_df.StartTime[lookback - 1:], y_pred[:,0,0], color='red', label='predicted values')
plt.plot(test_df.StartTime[lookback - 1:], test_df.AvgRate[lookback - 1:], color='blue', label='actual values')
plt.title('the predicted values and actual values (for the test data)')
plt.xlabel('datetime')
plt.legend(loc='upper left')
plt.show()
"""
Explanation: predict and plot the result after incremental fitting.
End of explanation
"""
mse, smape = new_ts_pipeline.evaluate(tsdata_test, metrics=["mse", "smape"])
print("Evaluate: the mean square error is", mse)
print("Evaluate: the smape value is", smape)
"""
Explanation: Calculate the mean square error and the symmetric mean absolute percentage error.
End of explanation
"""
|
INGEOTEC/CursoCategorizacionTexto | 06_conclusiones.ipynb | apache-2.0 | %matplotlib inline
import matplotlib.pyplot as plt
import gzip
import json
import numpy as np
def read_data(fname):
with gzip.open(fname) as fpt:
d = json.loads(str(fpt.read(), encoding='utf-8'))
return d
%matplotlib inline
plt.figure(figsize=(20, 10))
mx_pos = read_data('spanish/polarity_by_country/MX.json.gz')
ticks = [str(x[0])[2:] for x in mx_pos]
mu = [x[1] for x in mx_pos]
plt.plot(mu)
index = np.argsort(mu)
index = np.concatenate((index[:1], index[-10:]))
_ = plt.xticks(index, [ticks[x] for x in index], rotation=90)
plt.grid()
"""
Explanation: Machine learning on large volumes of text
Mario Graff ([email protected], [email protected])
Sabino Miranda ([email protected])
Daniela Moctezuma ([email protected])
Eric S. Tellez ([email protected])
CONACYT, INFOTEC and CentroGEO
https://github.com/ingeotec
Objective
The student will be able to create multilingual text models applicable to large volumes of information. On top of these models, the student will be able to apply supervised learning algorithms to different application domains, for example polarity classifiers, determining authorship based on the text, determining the topic of a text, among others.
Topics
Introduction
Motivation (sentiment analysis, predator detection, spam, gender, age, authorship in general, marketing, prestige, etc.)
State of the art (shared-task competitions)
Use of tools: $\mu$TC, Python, numpy, nltk, sklearn
Vector representation of text
Normalization
Tokenization (n-words, q-grams, skip-grams)
Text weighting (TFIDF)
Similarity measures
Supervised learning
General learning model; training, test, score (accuracy, recall, precision, f1)
Support vector machines (SVM)
Genetic programming (EvoDAG)
Distant supervision
$\mu$TC
Pipeline of transformations
Parameter optimization
Classifiers
Using $\mu$TC
Applications
Sentiment analysis
Authorship attribution
News classification
Spam
Gender and age
Conclusions
Polarity Analysis of Geo-referenced Tweets
Analysis of the polarity of geo-referenced tweets
Collected from December 16, 2015 through November 25, 2016
All tweets are written in Spanish and are geo-located.
Polarity Analysis Web Service (SWAP)
Methodology
Tweets whose declared country of origin is Mexico were selected (MX label)
Approximately 37,198,787 tweets
Generated by 695,345 users
Analyzed with SWAP
The value is the positivity of the tweet.
To remove the bias that the most active users can introduce,
the average positivity is measured per user, per day.
The positivity is then the mean, over users, of each user's average positivity per day.
Positivity Analysis for Mexico
The x-axis shows the different days
The y-axis shows the positivity value
The maximum value is 1 and the minimum value is 0.
The x-axis labels mark the 10 most significant peaks as well as the most pronounced valley.
December 24 and 25, 2015
December 31 and January 1, 2016
February 14
March 8 (International Women's Day)
April 30
May 10
May 15
June 19 (Father's Day)
November 9, 2016 (United States presidential election)
End of explanation
"""
def remove_median(pos):
    # per-weekday median positivity, estimated from the whole weeks contained in `pos`
    # (assumes len(pos) is not an exact multiple of 7, as is the case for the data used here)
    median = np.array(pos[: -int((len(pos) % 7))])[:, 1]
    median.shape = (int(median.shape[0] / 7), 7)
    median = np.median(median, axis=0)
    # tile the weekly pattern to the full length of the series and subtract it
    median = np.concatenate((np.concatenate([median for x in range(int(len(pos) / median.shape[0]))], axis=0),
                             median[:int(len(pos) % 7)]), axis=0)
    return [(x[0], x[1] - y) for x, y in zip(pos, median)]
plt.figure(figsize=(20, 10))
nmx_pos = remove_median(mx_pos)
mu = np.array([x[1] for x in nmx_pos])
plt.plot(mu)
index = np.argsort(mu)
index = np.concatenate((index[:1], index[-10:]))
_ = plt.xticks(index, [ticks[x] for x in index], rotation=90)
plt.grid()
plt.figure(figsize=(20, 20))
for k, D in enumerate([mx_pos, remove_median(mx_pos)]):
ticks = [str(x[0])[2:] for x in D]
mu = [x[1] for x in D]
plt.subplot(4, 1, k+1)
plt.plot(mu)
index = np.argsort(mu)
index = np.concatenate((index[:1], index[-10:]))
_ = plt.xticks(index, [ticks[x] for x in index], rotation=45)
plt.grid()
"""
Explanation: There is an oscillatory effect whose period corresponds to the days of the week.
TGIF (Thank God, it's Friday).
To remove this phenomenon,
subtract the per-weekday median.
End of explanation
"""
pos = [read_data('spanish/polarity_by_country/%s.json.gz' % x) for x in ['US', 'AR', 'ES']]
us_pos, ar_pos, es_pos = pos
plt.figure(figsize=(20, 10))
for code, D, k in zip(['US', 'MX', 'AR', 'ES'], [us_pos, mx_pos, ar_pos, es_pos],
range(4)):
D = remove_median(D)
ticks = [str(x[0])[2:] for x in D]
mu = [x[1] for x in D]
plt.subplot(4, 1, k+1)
plt.plot(mu)
plt.title(code)
index = np.argsort(mu)
index = np.concatenate((index[:1], index[-10:]))
_ = plt.xticks(index, [ticks[x] for x in index], rotation=45)
plt.grid()
plt.ylim(-0.20, 0.20)
"""
Explanation: Polarity analysis in the United States, Argentina, Mexico and Spain
United States
Mexico
Argentina
Spain.
March 19 (Father's Day in Spain)
June 19 is important for all of these countries except Spain
July 20, when Friend's Day is celebrated in Argentina.
End of explanation
"""
%matplotlib inline
from glob import glob
from multiprocessing import Pool
from tqdm import tqdm
from collections import Counter
def number_users(fname):
return fname, len(read_data(fname))
fnames = [i for i in glob('spanish/users_by_country/*.json.gz') if len(i.split('.')[0].split('/')[1]) == 2]
p = Pool(8)
res = [x for x in p.imap_unordered(number_users, fnames)]
p.close()
country_code = Counter()
for name, value in res:
code = name.split('.')[0].split('/')[1]
country_code[code] = value
mc = country_code.most_common()
size = 19
first = mc[:size]
extra = ('REST', sum([x[1] for x in mc[size:]]))
first.append(extra)
plt.figure(figsize=(10, 10))
_ = plt.pie([x[1] for x in first], labels=[x[0] for x in first])
"""
Explanation: Descriptive Analysis of the Spanish-language Tweets
From the stored tweets we present some basic statistics that describe interesting characteristics of the data, such as the number of users per country and user mobility.
Number of users per country
The following figure shows the users per country. The United States has the largest number of users, followed by Argentina and, in third place, Mexico. Somewhat surprisingly, Brazil is fourth and Spain is fifth.
End of explanation
"""
def migration(country_code='MX'):
fname = 'spanish/users_by_country/%s.json.gz' % country_code
d = read_data(fname)
other = Counter()
for x in d.values():
if len(x) == 1:
continue
c = Counter(x)
for xx in c.most_common()[1:]:
if xx[0] == country_code:
continue
other[xx[0]] += 1
return other
plt.figure(figsize=(10, 10))
for k, c in enumerate(['US', 'AR', 'MX', 'ES']):
other = migration(c)
mc = other.most_common()
first = mc[:size]
extra = ('REST', sum([x[1] for x in mc[size:]]))
first.append(extra)
plt.subplot(2, 2, k+1)
_ = plt.pie([x[1] for x in first], labels=[x[0] for x in first])
plt.title(c)
"""
Explanation: Mobility of (Spanish-speaking) Twitter Users
The following figure shows which countries are visited most frequently by the users of a given country. For example, most United States users who travel to another country go to Mexico, then to Puerto Rico, and so on; Argentinian users travel to Brazil first;
Mexican users travel to the United States; and Spanish users also travel to the United States.
End of explanation
"""
|
exowanderer/SpitzerDeepLearningNetwork | Notebooks/tensorflow_DNNRegressor_Spitzer - RandomForests - relu.ipynb | mit | import pandas as pd
import numpy as np
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
import warnings
warnings.filterwarnings("ignore")
%matplotlib inline
from matplotlib import pyplot as plt
from sklearn.cross_validation import train_test_split
from sklearn.preprocessing import StandardScaler, MinMaxScaler, minmax_scale
from sklearn.ensemble import RandomForestRegressor, ExtraTreesRegressor, AdaBoostRegressor
plt.rcParams['figure.dpi'] = 300
from corner import corner
from sklearn.metrics import r2_score
from time import time
start0 = time()
from sklearn import feature_selection, feature_extraction, decomposition
"""
Explanation: TF-DNNRegressor - RandomForests - Spitzer Calibration Data
This script shows a simple example of using the tf.contrib.learn library to create our model.
The code is divided into the following steps:
Load CSVs data
Filtering Categorical and Continuous features
Converting Data into Tensors
Selecting and Engineering Features for the Model
Defining The Regression Model
Training and Evaluating Our Model
Predicting output for test data
v0.1: Added code for data loading, modeling and prediction model.
v0.2: Removed unnecessary output logs.
PS: I was able to get a score of 1295.07972 using this script with 70% (of train.csv) data used for training and rest for evaluation. Script took 2hrs for training and 3000 steps were used.
End of explanation
"""
spitzerDataRaw = pd.read_csv('pmap_ch2_0p1s_x4_rmulti_s3_7.csv')
PLDpixels = pd.DataFrame({key:spitzerDataRaw[key] for key in spitzerDataRaw.columns.values if 'pix' in key})
PLDpixels
PLDnorm = np.sum(np.array(PLDpixels),axis=1)
PLDpixels = (PLDpixels.T / PLDnorm).T
PLDpixels
spitzerData = spitzerDataRaw.copy()
for key in spitzerDataRaw.columns:
if key in PLDpixels.columns:
spitzerData[key] = PLDpixels[key]
testPLD = np.array(pd.DataFrame({key:spitzerData[key] for key in spitzerData.columns.values if 'pix' in key}))
assert(not sum(abs(testPLD - np.array(PLDpixels))).all())
print('Confirmed that PLD Pixels have been Normalized to Spec')
notFeatures = ['flux', 'fluxerr', 'dn_peak']#, 'yerr', 'xerr', 'xycov']
feature_columns = spitzerData.drop(notFeatures,axis=1).columns.values
features = spitzerData.drop(notFeatures,axis=1).values
labels = spitzerData['flux'].values
features[::100].T.shape
features.T.shape
stdScaler = StandardScaler()
minMaxScaler = MinMaxScaler()
features_MMscaled = minMaxScaler.fit_transform(features)
labels_MMscaled = minMaxScaler.fit_transform(labels[:,None]).ravel()
features_SSscaled = stdScaler.fit_transform(features)
labels_SSscaled = stdScaler.fit_transform(labels[:,None]).ravel()
features_MMscaled.shape, labels_MMscaled.shape
"""
Explanation: Load CSVs data
End of explanation
"""
# for nComps in range(1,spitzerData.shape[1]):
randForest = RandomForestRegressor(n_estimators=1000, criterion='mse', max_depth=None, min_samples_split=2, \
min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto', \
max_leaf_nodes=None, bootstrap=True, oob_score=True, \
n_jobs=-1, random_state=42, verbose=0, warm_start=True)#min_impurity_split=1e-07,
randForest.fit(features_MMscaled, labels_MMscaled)
randForest.oob_score_
"""
Explanation: Standard Random Forest Approach
End of explanation
"""
from sklearn.decomposition import PCA
pca = PCA()
pca_feature_set = pca.fit_transform(features_SSscaled)
# for nComps in range(1,spitzerData.shape[1]):
randForest_PCA = RandomForestRegressor(n_estimators=1000, criterion='mse', max_depth=None, min_samples_split=2, \
min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto', \
max_leaf_nodes=None, min_impurity_split=1e-07, bootstrap=True, oob_score=True, \
n_jobs=-1, random_state=42, verbose=0, warm_start=True)
randForest_PCA.fit(pca_feature_set, labels_SSscaled)
randForest_PCA.oob_score_
"""
Explanation: PCA Pretrained Random Forest Approach
End of explanation
"""
from sklearn.decomposition import FastICA
ica = FastICA()
ica_feature_set = ica.fit_transform(features_SSscaled)
# for nComps in range(1,spitzerData.shape[1]):
randForest_ICA = RandomForestRegressor(n_estimators=1000, criterion='mse', max_depth=None, min_samples_split=2, \
min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto', \
max_leaf_nodes=None, min_impurity_split=1e-07, bootstrap=True, oob_score=True, \
n_jobs=-1, random_state=42, verbose=0, warm_start=True)
randForest_ICA.fit(ica_feature_set, labels_SSscaled)
randForest_ICA.oob_score_
"""
Explanation: ICA Pretrained Random Forest Approach
End of explanation
"""
importances = randForest.feature_importances_
indices = np.argsort(importances)[::-1]
std = np.std([tree.feature_importances_ for tree in randForest.estimators_],axis=0)
# Plot the feature importances of the forest
plt.figure()
plt.title("Feature importances")
plt.bar(range(features.shape[1]), importances[indices],
color="r", yerr=std[indices], align="center")
plt.xticks(range(features.shape[1]), feature_columns[indices], rotation=60)
plt.xlim([-1, features.shape[1]])
plt.show()
cumsum = np.cumsum(importances[indices])
nImportantSamples = np.argmax(cumsum >= 0.95) + 1
nImportantSamples
"""
Explanation: Importance Sampling
End of explanation
"""
# keep only the columns of the most important features (those covering 95% of the cumulative importance)
rfi_feature_set = features_SSscaled[:, indices[:nImportantSamples]]
# for nComps in range(1,spitzerData.shape[1]):
randForest_RF = RandomForestRegressor(n_estimators=1000, criterion='mse', max_depth=None, min_samples_split=2, \
min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto', \
max_leaf_nodes=None, min_impurity_split=1e-07, bootstrap=True, oob_score=True, \
n_jobs=-1, random_state=42, verbose=0, warm_start=True)
randForest_RF.fit(rfi_feature_set, labels_SSscaled)
randForest_RF.oob_score_
"""
Explanation: Random Forest Pretrained Random Forest Approach
End of explanation
"""
# for nComps in range(1,spitzerData.shape[1]):
randForestSqrt = RandomForestRegressor(n_estimators=1000, criterion='mse', max_depth=None, min_samples_split=2, \
min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='sqrt', \
max_leaf_nodes=None, min_impurity_split=1e-07, bootstrap=True, oob_score=True, \
n_jobs=-1, random_state=42, verbose=0, warm_start=True)
randForestSqrt.fit(features, labels)
importancesSqrt = randForestSqrt.feature_importances_
indicesSqrt = np.argsort(importancesSqrt)[::-1]
stdSqrt = np.std([tree.feature_importances_ for tree in randForestSqrt.estimators_],axis=0)
# Plot the feature importances of the forest
plt.figure()
plt.title("Feature importances")
plt.bar(range(features.shape[1]), importancesSqrt[indicesSqrt],
color="r", yerr=stdSqrt[indicesSqrt], align="center")
plt.xticks(range(features.shape[1]), feature_columns[indicesSqrt], rotation=60)
plt.xlim([-1, features.shape[1]])
plt.show()
cumsumSqrt = np.cumsum(importancesSqrt[indicesSqrt])
dSqrt = np.argmax(cumsumSqrt >= 0.95) + 1
dSqrt
# for nComps in range(1,spitzerData.shape[1]):
randForestLog2 = RandomForestRegressor(n_estimators=1000, criterion='mse', max_depth=None, min_samples_split=2, \
min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='log2', \
max_leaf_nodes=None, min_impurity_split=1e-07, bootstrap=True, oob_score=True, \
n_jobs=-1, random_state=42, verbose=0, warm_start=True)
randForestLog2.fit(features, labels)
importancesLog2 = randForestLog2.feature_importances_
indicesLog2 = np.argsort(importancesLog2)[::-1]
stdLog2 = np.std([tree.feature_importances_ for tree in randForestLog2.estimators_],axis=0)
# Plot the feature importances of the forest
plt.figure()
plt.title("Feature importances")
plt.bar(range(features.shape[1]), importancesLog2[indicesLog2],
color="r", yerr=stdLog2[indicesLog2], align="center")
plt.xticks(range(features.shape[1]), feature_columns[indicesLog2], rotation=60)
plt.xlim([-1, features.shape[1]])
plt.show()
cumsumLog2 = np.cumsum(importancesLog2[indicesLog2])
dLog2 = np.argmax(cumsumLog2 >= 0.95) + 1
dLog2
# for nComps in range(1,spitzerData.shape[1]):
randForestNone = RandomForestRegressor(n_estimators=1000, criterion='mse', max_depth=None, min_samples_split=2, \
        min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features=None, \
max_leaf_nodes=None, min_impurity_split=1e-07, bootstrap=True, oob_score=True, \
n_jobs=-1, random_state=42, verbose=0, warm_start=True)
randForestNone.fit(features, labels)
importancesNone = randForestNone.feature_importances_
indicesNone = np.argsort(importancesNone)[::-1]
stdNone = np.std([tree.feature_importances_ for tree in randForestNone.estimators_],axis=0)
# Plot the feature importances of the forest
plt.figure()
plt.title("Feature importances")
plt.bar(range(features.shape[1]), importancesNone[indicesNone],
color="r", yerr=stdNone[indicesNone], align="center")
plt.xticks(range(features.shape[1]), feature_columns[indicesNone], rotation=60)
plt.xlim([-1, features.shape[1]])
plt.show()
cumsumNone = np.cumsum(importancesNone[indicesNone])
dNone = np.argmax(cumsumNone >= 0.95) + 1
dNone
# for nComps in range(1,spitzerData.shape[1]):
randForestAdaBoost = AdaBoostRegressor(n_estimators=1000)
randForestAdaBoost.fit(features, labels)
importancesAda = randForestAdaBoost.feature_importances_
indicesAda = np.argsort(importancesAda)[::-1]
stdAda = np.std([tree.feature_importances_ for tree in randForestAdaBoost.estimators_],axis=0)
# Plot the feature importances of the forest
plt.figure()
plt.title("Feature importances")
plt.bar(range(features.shape[1]), importancesAda[indicesAda],
color="r", yerr=stdAda[indicesAda], align="center")
plt.xticks(range(features.shape[1]), feature_columns[indicesAda], rotation=60)
plt.xlim([-1, features.shape[1]])
plt.show()
cumsumAda = np.cumsum(importancesAda[indicesAda])
dAda = np.argmax(cumsumAda >= 0.95) + 1
dAda
plt.plot(cumsum,'o');
plt.plot(cumsumSqrt,'o');
plt.plot(cumsumLog2,'o');
plt.plot(cumsumNone,'o');
plt.plot(cumsumAda,'o');
plt.axhline(.95);
plt.xticks(np.arange(0,21,2));
# for nComps in range(1,spitzerData.shape[1]):
randForestExtra = ExtraTreesRegressor(n_estimators=1000)
randForestExtra.fit(features, labels)
importancesExtra = randForestExtra.feature_importances_
indicesExtra = np.argsort(importancesExtra)[::-1]
stdExtra = np.std([tree.feature_importances_ for tree in randForestExtra.estimators_],axis=0)
# Plot the feature importances of the forest
plt.figure()
plt.title("Feature importances")
plt.bar(range(features.shape[1]), importancesExtra[indicesExtra],
color="r", yerr=stdExtra[indicesExtra], align="center")
plt.xticks(range(features.shape[1]), feature_columns[indicesExtra], rotation=90)
plt.xlim([-1, features.shape[1]])
plt.show()
cumsumExtra = np.cumsum(importancesExtra[indicesExtra])
dExtra = np.argmax(cumsumExtra >= 0.95) + 1
dExtra
plt.plot(cumsum,'o', label='default');
plt.plot(cumsumSqrt,'o', label='Sqrt');
plt.plot(cumsumLog2,'o', label='Log2');
plt.plot(cumsumNone,'o', label='None');
plt.plot(cumsumAda,'o', label='AdaBoost');
plt.plot(cumsumExtra,'o', label='Extra');
plt.axhline(.95);
plt.xticks(np.arange(0,21,2), np.arange(0,21,2).astype(int));
plt.legend();
"""
Explanation: Random Forest Regularization Attempts
End of explanation
"""
# plt.hist(randForest9.components_.T, bins=1000);
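# NOTE (added comment): AdaBoostRegressor has no `components_` attribute and `randForest9`
# is not defined in this notebook, so the exploratory cells below will not run as written --
# see the "That Shall Not Pass (yet)" note that follows.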
for feature in randForestAdaBoost.components_:
plt.hist(feature, bins=1000, alpha=0.25, normed=True);
# plt.xlim(-0.005, 0.005);
for k in range(randForestAdaBoost.n_components_):
plt.figure()
plt.scatter(randForestAdaBoost.components_[k][::10], labels_SSscaled[::10],alpha=0.25, lw=0);
"""
Explanation: That Shall Not Pass (yet)
End of explanation
"""
x_val, x_traintest, y_val, y_traintest = train_test_split(randForest9.components_.T, labels_SSscaled, test_size=0.8, random_state=42)
x_train, x_test, y_train, y_test = train_test_split(x_traintest, y_traintest, test_size=0.5, random_state=42)
# ReNormalization = False # leave them mean/std scaled
ReNormalization = True # min-max scale them (0-1)
if ReNormalization:
# min-max scale it (0-1)
x_val = minmax_scale(x_val.astype('float32'))
x_train = minmax_scale(x_train.astype('float32'))
x_test = minmax_scale(x_test.astype('float32'))
y_val = minmax_scale(y_val.astype('float32'))
y_train = minmax_scale(y_train.astype('float32'))
y_test = minmax_scale(y_test.astype('float32'))
print(x_val.shape[0] , 'validation samples')
print(x_train.shape[0], 'train samples')
print(x_test.shape[0] , 'test samples')
randForest_feature_columns = ['randForest' + str(k) for k in range(randForest9.explained_variance_ratio_.size)]
randForest_feature_columns
train_df = pd.DataFrame(np.c_[x_train, y_train], columns=list(randForest_feature_columns) + ['flux'])
test_df = pd.DataFrame(np.c_[x_test , y_test ], columns=list(randForest_feature_columns) + ['flux'])
evaluate_df = pd.DataFrame(np.c_[x_val , y_val ], columns=list(randForest_feature_columns) + ['flux'])
train_df.columns
[key for key in train_df.columns if 'flux' not in key]
"""
Explanation: NEED TO TEST NORMALIZING BEFORE OR AFTER THE TRAIN TEST SPLIT
I am pretty sure that we need to renormalize before the train-test split. I wrote this part of the code quickly earlier, without thinking it through.
End of explanation
"""
MODEL_DIR = "tf_model_spitzer/withrandForest_MinMax01/relu"
print("train_df.shape = " , train_df.shape)
print("test_df.shape = " , test_df.shape)
print("evaluate_df.shape = ", evaluate_df.shape)
"""
Explanation: We only take the first 1000 rows for training/testing and the last 500 rows for evaluation.
This is done so that this script does not consume a lot of Kaggle system resources.
End of explanation
"""
# categorical_features = [feature for feature in features if 'cat' in feature]
categorical_features = []
continuous_features = [feature for feature in train_df.columns]# if 'cat' in feature]
LABEL_COLUMN = 'flux'
"""
Explanation: Filtering Categorical and Continuous features
We store the Categorical, Continuous and Target feature names in different variables. This will be helpful in later steps.
End of explanation
"""
# Converting Data into Tensors
def input_fn(df, training = True):
# Creates a dictionary mapping from each continuous feature column name (k) to
# the values of that column stored in a constant Tensor.
continuous_cols = {k: tf.constant(df[k].values)
for k in continuous_features}
# Creates a dictionary mapping from each categorical feature column name (k)
# to the values of that column stored in a tf.SparseTensor.
# categorical_cols = {k: tf.SparseTensor(
# indices=[[i, 0] for i in range(df[k].size)],
# values=df[k].values,
# shape=[df[k].size, 1])
# for k in categorical_features}
# Merges the two dictionaries into one.
feature_cols = continuous_cols
# feature_cols = dict(list(continuous_cols.items()) + list(categorical_cols.items()))
if training:
# Converts the label column into a constant Tensor.
label = tf.constant(df[LABEL_COLUMN].values)
# Returns the feature columns and the label.
return feature_cols, label
# Returns the feature columns
return feature_cols
def train_input_fn():
return input_fn(train_df, training=True)
def eval_input_fn():
return input_fn(evaluate_df, training=True)
# def test_input_fn():
# return input_fn(test_df.drop(LABEL_COLUMN,axis=1), training=False)
def test_input_fn():
return input_fn(test_df, training=False)
"""
Explanation: Converting Data into Tensors
When building a TF.Learn model, the input data is specified by means of an Input Builder function. This builder function will not be called until it is later passed to TF.Learn methods such as fit and evaluate. The purpose of this function is to construct the input data, which is represented in the form of Tensors or SparseTensors.
Note that input_fn will be called while constructing the TensorFlow graph, not while running the graph. What it is returning is a representation of the input data as the fundamental unit of TensorFlow computations, a Tensor (or SparseTensor).
More detail on input_fn.
End of explanation
"""
engineered_features = []
for continuous_feature in continuous_features:
engineered_features.append(
tf.contrib.layers.real_valued_column(continuous_feature))
# for categorical_feature in categorical_features:
# sparse_column = tf.contrib.layers.sparse_column_with_hash_bucket(
# categorical_feature, hash_bucket_size=1000)
# engineered_features.append(tf.contrib.layers.embedding_column(sparse_id_column=sparse_column, dimension=16,
# combiner="sum"))
"""
Explanation: Selecting and Engineering Features for the Model
We use tf.learn's concept of FeatureColumn, which helps in transforming raw data into suitable input features.
These engineered features will be used when we construct our model.
End of explanation
"""
nHidden1 = 5
nHidden2 = 5
# nHidden3 = 5
regressor = tf.contrib.learn.DNNRegressor(activation_fn=tf.nn.relu, dropout=0.5, optimizer=tf.train.AdamOptimizer,
feature_columns=engineered_features, hidden_units=[nHidden1, nHidden2], model_dir=MODEL_DIR)
"""
Explanation: Defining The Regression Model
Following is the simple DNNRegressor model. More detail about hidden_units, etc can be found here.
model_dir is used to save and restore our model. This is because once we have trained the model we don't want to train it again if we only want to predict on a new data set.
End of explanation
"""
# Training Our Model
nFitSteps = 50000
start = time()
wrap = regressor.fit(input_fn=train_input_fn, steps=nFitSteps)
print('TF Regressor took {} seconds'.format(time()-start))
# Evaluating Our Model
print('Evaluating ...')
results = regressor.evaluate(input_fn=eval_input_fn, steps=1)
for key in sorted(results):
print("%s: %s" % (key, results[key]))
print("Val Acc: %s" % (1-results[key]))
"""
Explanation: Training and Evaluating Our Model
End of explanation
"""
def de_median(x):
return x - np.median(x)
predicted_output = regressor.predict(input_fn=test_input_fn)
x = list(predicted_output)
# print([predicted_output() for _ in range(10)])
plt.plot((x - np.median(x)) / np.std(x),'.',alpha=0.1);
plt.plot((test_df['flux'].values - np.median(test_df['flux'].values)) / np.std(test_df['flux'].values),'.',alpha=0.1);
plt.plot(de_median(x - test_df['flux'].values)/x,'.',alpha=0.1);
plt.ylim(-1.0,1.0);
test_df['flux'].values.size/0.4
r2_score(test_df['flux'].values,x)*100
print('Full notebook took {} seconds'.format(time()-start0))
"""
Explanation: Predicting output for test data
Most of the time the prediction script would be separate from the training script (we need not train on the same data again), but I am providing both in the same script here, as I am not sure if we can create multiple notebooks and somehow share data between them in Kaggle.
End of explanation
"""
|
maartenbreddels/vaex | docs/source/example_io.ipynb | mit | import vaex
# Reading a HDF5 file
df_names = vaex.open('./data/io/sample_names_1.hdf5')
df_names
# Reading an arrow file
df_fruits = vaex.open('./data/io/sample_fruits.arrow')
df_fruits
"""
Explanation: <style>
pre {
white-space: pre-wrap !important;
}
.table-striped > tbody > tr:nth-of-type(odd) {
background-color: #f9f9f9;
}
.table-striped > tbody > tr:nth-of-type(even) {
background-color: white;
}
.table-striped td, .table-striped th, .table-striped tr {
border: 1px solid black;
border-collapse: collapse;
margin: 1em 2em;
}
.rendered_html td, .rendered_html th {
text-align: left;
vertical-align: middle;
padding: 4px;
}
</style>
I/O Kung-Fu: get your data in and out of Vaex
If you want to try out this notebook with a live Python kernel, use mybinder:
<a class="reference external image-reference" href="https://mybinder.org/v2/gh/vaexio/vaex/latest?filepath=docs%2Fsource%2Fexample_io.ipynb"><img alt="https://mybinder.org/badge_logo.svg" src="https://mybinder.org/badge_logo.svg" width="150px"></a>
Data input
Every project starts with reading in some data. Vaex supports several data sources:
Binary file formats:
HDF5
Apache Arrow
Apache Parquet
FITS
Text based file formats:
CSV
ASCII
JSON
In-memory data representations:
pandas DataFrames and everything that pandas can read
Apache Arrow Tables
numpy arrays
Python dictionaries
Single row DataFrames
The following examples show the best practices of getting your data in Vaex.
Binary file formats
If your data is already in one of the supported binary file formats (HDF5, Apache Arrow, Apache Parquet, FITS), opening it with Vaex is rather simple:
End of explanation
"""
df_names_all = vaex.open('./data/io/sample_names_*.hdf5')
df_names_all
"""
Explanation: Opening such data is instantaneous regardless of the file size on disk: Vaex will just memory-map the data instead of reading it in memory. This is the optimal way of working with large datasets that are larger than available RAM.
If your data is contained within multiple files, one can open them all simultaneously like this:
End of explanation
"""
df_names_all = vaex.open_many(['./data/io/sample_names_1.hdf5',
'./data/io/sample_names_2.hdf5'])
df_names_all
"""
Explanation: Alternatively, one can use the open_many method to pass a list of files to open:
End of explanation
"""
df_from_s3 = vaex.open('s3://vaex/testing/xys.hdf5?anon=true')
df_from_s3
"""
Explanation: The result will be a single DataFrame object containing all of the data coming from all files.
The data does not necessarily have to be local. With Vaex you can open a HDF5 file straight from Amazon's S3:
End of explanation
"""
# Reading a parquet file
df_cars = vaex.open('./data/io/sample_cars.parquet')
df_cars
"""
Explanation: In this case the data will be lazily downloaded and cached to the local machine. "Lazily downloaded" means that Vaex will only download the portions of the data you really need. For example: imagine that we have a file hosted on S3 that has 100 columns and 1 billion rows. Getting a preview of the DataFrame via print(df) for instance will download only the first and last 5 rows. If we then proceed to make calculations or plots with only 5 columns, only the data from those columns will be downloaded and cached to the local machine.
By default, data that is streamed from S3 is cached at $HOME/.vaex/file-cache/s3, and thus successive access is as fast as native disk access. One can also use the profile_name argument to use a specific S3 profile, which will then be passed to s3fs.core.S3FileSystem.
With Vaex one can also read-in parquet files:
End of explanation
"""
df_nba = vaex.from_csv('./data/io/sample_nba_1.csv', copy_index=False)
df_nba
"""
Explanation: Text based file formats
Datasets are still commonly stored in text-based file formats such as CSV. Since text-based file formats are not memory-mappable, they have to be read in memory. If the contents of a CSV file fit into the available RAM, one can simply do:
End of explanation
"""
df_nba = vaex.read_csv('./data/io/sample_nba_1.csv', copy_index=False)
df_nba
"""
Explanation: or alternatively:
End of explanation
"""
list_of_files = ['./data/io/sample_nba_1.csv',
'./data/io/sample_nba_2.csv',
'./data/io/sample_nba_3.csv',]
# Convert each CSV file to HDF5
for file in list_of_files:
df_tmp = vaex.from_csv(file, convert=True, copy_index=False)
"""
Explanation: Vaex is using pandas for reading CSV files in the background, so one can pass any arguments to the vaex.from_csv or vaex.read_csv as one would pass to pandas.read_csv and specify for example separators, column names and column types. The copy_index parameter specifies if the index column of the pandas DataFrame should be read as a regular column, or left out to save memory. In addition to this, if you specify the convert=True argument, the data will be automatically converted to an HDF5 file behind the scenes, thus freeing RAM and allowing you to work with your data in a memory-efficient, out-of-core manner.
If the CSV file is so large that it can not fit into RAM all at one time, one can convert the data to HDF5 simply by:
df = vaex.from_csv('./my_data/my_big_file.csv', convert=True, chunk_size=5_000_000)
When the above line is executed, Vaex will read the CSV in chunks, and convert each chunk to a temporary HDF5 file on disk. All temporary files are then concatenated into a single HDF5 file, and the temporary files deleted. The size of the individual chunks to be read can be specified via the chunk_size argument. Note that this automatic conversion requires free disk space of twice the final HDF5 file size.
It often happens that the data we need to analyse is spread over multiple CSV files. One can convert them to the HDF5 file format like this:
End of explanation
"""
df = vaex.open('./data/io/sample_nba_*.csv.hdf5')
df
"""
Explanation: The above code block converts in turn each CSV file to the HDF5 format. Note that the conversion will work regardless of the file size of each individual CSV file, provided there is sufficient storage space.
Working with all of the data is now easy: just open all of the relevant HDF5 files as described above:
End of explanation
"""
df.export('./data/io/sample_nba_combined.hdf5')
"""
Explanation: One can then additionally export this combined DataFrame to a single HDF5 file. This should lead to minor performance improvements.
End of explanation
"""
df_isles = vaex.from_json('./data/io/sample_isles.json', orient='table', copy_index=False)
df_isles
"""
Explanation: It is also common for data to be stored in JSON files. To read such data in Vaex one can do:
End of explanation
"""
import pandas as pd
pandas_df = pd.read_csv('./data/io/sample_nba_1.csv')
pandas_df
df = vaex.from_pandas(df=pandas_df, copy_index=True)
df
"""
Explanation: This is a convenience method which simply wraps pandas.read_json, so the same arguments and file reading strategy applies. If the data is distributed among multiple JSON files, one can apply a similar strategy as in the case of multiple CSV files: read each JSON file with the vaex.from_json method and convert it to the HDF5 or Arrow file format. Then use the vaex.open or vaex.open_many methods to open all the converted files as a single DataFrame (see the sketch after this explanation).
To learn more about different options of exporting data with Vaex, please read the next section below.
In-memory data representations
One can construct a Vaex DataFrame from a variety of in-memory data representations. Such a common operation is converting a pandas DataFrame into a Vaex DataFrame. Let us read in a CSV file with pandas and then convert it to a Vaex DataFrame:
End of explanation
"""
pandas_df = pd.read_sas('./data/io/sample_airline.sas7bdat')
df = vaex.from_pandas(pandas_df, copy_index=False)
df
"""
Explanation: The copy_index argument specifies whether the index column of a pandas DataFrame should be imported into the Vaex DataFrame. Converting a pandas DataFrame into a Vaex DataFrame is particularly useful since pandas can read data from a large variety of file formats. For instance, we can use pandas to read data from a database, and then pass it to Vaex like so:
```
import vaex
import pandas as pd
import sqlalchemy
connection_string = 'postgresql://readonly:' + 'my_password' + '@server.company.com:1234/database_name'
engine = sqlalchemy.create_engine(connection_string)
pandas_df = pd.read_sql_query('SELECT * FROM MYTABLE', con=engine)
df = vaex.from_pandas(pandas_df, copy_index=False)
```
Another example is using pandas to read in SAS files:
End of explanation
"""
import pyarrow.csv
arrow_table = pyarrow.csv.read_csv('./data/io/sample_nba_1.csv')
arrow_table
"""
Explanation: One can read in an arrow table as a Vaex DataFrame in a similar manner. Let us first use pyarrow to read in a CSV file as an arrow table.
End of explanation
"""
df = vaex.from_arrow_table(arrow_table)
df
"""
Explanation: Once we have the arrow table, converting it to a DataFrame is simple:
End of explanation
"""
import numpy as np
x = np.arange(2)
y = np.array([10, 20])
z = np.array(['dog', 'cat'])
df_numpy = vaex.from_arrays(x=x, y=y, z=z)
df_numpy
"""
Explanation: It is also common to construct a Vaex DataFrame from numpy arrays. That can be done like this:
End of explanation
"""
# Construct a DataFrame from Python dictionary
data_dict = dict(x=[2, 3], y=[30, 40], z=['cow', 'horse'])
df_dict = vaex.from_dict(data_dict)
df_dict
"""
Explanation: Constructing a DataFrame from a Python dict is also straight-forward:
End of explanation
"""
df_single_row = vaex.from_scalars(x=4, y=50, z='mouse')
df_single_row
"""
Explanation: At times, one may need to create a single row DataFrame. Vaex has a convenience method which takes individual elements (scalars) and creates the DataFrame:
End of explanation
"""
df = vaex.concat([df_numpy, df_dict, df_single_row])
df
"""
Explanation: Finally, we can choose to concatenate different DataFrames, without any memory penalties like so:
End of explanation
"""
df.export_hdf5('./data/io/output_data.hdf5')
df.export_arrow('./data/io/output_data.arrow')
df.export_parquet('./data/io/output_data.parquet')
"""
Explanation: Data export
One can export Vaex DataFrames to multiple file or in-memory data representations:
Binary file formats:
HDF5
Apache Arrow
Apache Parquet
FITS
Text based file formats:
CSV
ASCII
In-memory data representations:
DataFrames:
pandas DataFrame
Apache Arrow Table
numpy arrays
Dask arrays
Python dictionaries
Python items list ( a list of ('column_name', data) tuples)
Expressions:
pandas Series
numpy array
Dask array
Python list
Binary file formats
The most efficient way to store data on disk when you work with Vaex is to use binary file formats. Vaex can export a DataFrame to HDF5, Apache Arrow, Apache Parquet and FITS:
End of explanation
"""
df.export('./data/io/output_data.hdf5')
df.export('./data/io/output_data.arrow')
df.export('./data/io/output_data.parquet')
"""
Explanation: Alternatively, one can simply use:
End of explanation
"""
df.export_csv('./data/io/output_data.csv') # `chunk_size` has a default value of 1_000_000
"""
Explanation: where Vaex will determine the file format based on the specified extension of the file name. If the extension is not recognized, an exception will be raised.
If your data is large, i.e. larger than the available RAM, we recommend exporting to HDF5.
Text based file format
At times, it may be useful to export the data to disk in a text based file format such as CSV. In that case one can simply do:
End of explanation
"""
pandas_df = df.to_pandas_df()
pandas_df # looks the same doesn't it?
"""
Explanation: The df.export_csv method is using pandas_df.to_csv behind the scenes, and thus one can pass any argument to df.export_csv as one would to pandas_df.to_csv. The data is exported in chunks and the size of those chunks can be specified by the chunk_size argument in df.export_csv. In this way, data that is too large to fit in RAM can be saved to disk.
In memory data representation
Python has a rich ecosystem comprised of various libraries for data manipulation, that offer different functionality. Thus, it is often useful to be able to pass data from one library to another. Vaex is able to pass on its data to other libraries via a number of in-memory representations.
DataFrame representations
A Vaex DataFrame can be converted to a pandas DataFrame like so:
End of explanation
"""
gen = df.to_pandas_df(chunk_size=3)
for i1, i2, chunk in gen:
print(i1, i2)
print(chunk)
print()
"""
Explanation: For DataFrames that are too large to fit in memory, one can specify the chunk_size argument, in which case the to_pandas_df method returns a generator yielding a pandas DataFrame with as many rows as indicated by the chunk_size argument:
End of explanation
"""
arrow_table = df.to_arrow_table()
arrow_table
"""
Explanation: The generator also yields the row number of the first and the last element of that chunk, so we know exactly where in the parent DataFrame we are. The following DataFrame methods also support the chunk_size argument with the same behaviour.
Converting a Vaex DataFrame into an arrow table is similar:
End of explanation
"""
arrays = df.to_arrays()
arrays
"""
Explanation: One can simply convert the DataFrame to a list of arrays. By default, the data is exposed as a list of numpy arrays:
End of explanation
"""
arrays = df.to_arrays(array_type='xarray')
arrays # list of xarrays
arrays = df.to_arrays(array_type='list')
arrays # list of lists
"""
Explanation: By specifying the array_type argument, one can choose whether the data will be represented by numpy arrays, xarrays, or Python lists.
End of explanation
"""
d_dict = df.to_dict(array_type='numpy')
d_dict
"""
Explanation: Keeping it close to pure Python, one can export a Vaex DataFrame as a dictionary. The same array_type keyword argument applies here as well:
End of explanation
"""
# Get a single item list
items = df.to_items(array_type='list')
items
"""
Explanation: Alternatively, one can also convert a DataFrame to a list of tuples, where the first element of the tuple is the column name, while the second element is the array representation of the data.
End of explanation
"""
gen = df.to_dict(array_type='list', chunk_size=2)
for i1, i2, chunk in gen:
print(i1, i2, chunk)
"""
Explanation: As mentioned earlier, with all of the above examples, one can use the chunk_size argument, which creates a generator yielding a portion of the DataFrame in the specified format. In the case of the .to_dict method:
End of explanation
"""
dask_arrays = df[['x', 'y']].to_dask_array() # String support coming soon
dask_arrays
"""
Explanation: Last but not least, a Vaex DataFrame can be lazily exposed as a Dask array:
End of explanation
"""
# pandas Series
x_series = df.x.to_pandas_series()
x_series
# numpy array
x_numpy = df.x.to_numpy()
x_numpy
# Python list
x_list = df.x.tolist()
x_list
# Dask array
x_dask_array = df.x.to_dask_array()
x_dask_array
"""
Explanation: Expression representations
A single Vaex Expression can also be converted to a variety of in-memory representations:
End of explanation
"""
|
GoogleCloudPlatform/vertex-ai-samples | notebooks/community/ml_ops/stage3/get_started_with_machine_management.ipynb | apache-2.0 | import os
# The Vertex AI Workbench Notebook product has specific requirements
IS_WORKBENCH_NOTEBOOK = os.getenv("DL_ANACONDA_HOME")
IS_USER_MANAGED_WORKBENCH_NOTEBOOK = os.path.exists(
"/opt/deeplearning/metadata/env_version"
)
# Vertex AI Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_WORKBENCH_NOTEBOOK:
USER_FLAG = "--user"
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG -q
! pip3 install --upgrade kfp $USER_FLAG -q
! pip3 install --upgrade google-cloud-pipeline-components $USER_FLAG -q
"""
Explanation: This notebook is a revised version of an unpublished notebook by Boliang (Bo) Dai.
E2E ML on GCP: MLOps stage 3 : formalization: get started with machine management
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage3/get_started_with_machine_management.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samplestree/main/notebooks/community/ml_ops/stage3/get_started_with_machine_management.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/main/notebooks/community/ml_ops/stage3/get_started_with_machine_management.ipynb">
<img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo">
Open in Vertex AI Workbench
</a>
</td>
</table>
Overview
This tutorial demonstrates how to manage machine resources when training as a component in Vertex AI Pipelines.
Dataset
The dataset is the MNIST dataset. The dataset consists of 28x28 grayscale images of the digits 0 .. 9.
Objective
In this tutorial, you convert a self-contained custom training component into a Vertex AI CustomJob, whereby:
- The training job and artifacts are trackable.
- Machine resources, such as machine type, CPU/GPU, memory, and disk, can be set.
This tutorial uses the following Google Cloud ML services:
Vertex AI Pipelines
The steps performed in this tutorial include:
Create a custom component with a self-contained training job.
Execute pipeline using component-level settings for machine resources
Convert the self-contained training component into a Vertex AI CustomJob.
Execute pipeline using customjob-level settings for machine resources
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage.
Installations
Install the packages required for executing this notebook.
End of explanation
"""
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
"""
Explanation: Restart the Kernel
Once you've installed the Vertex AI SDK and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
End of explanation
"""
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
"""
Explanation: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.
Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
End of explanation
"""
REGION = "[your-region]" # @param {type: "string"}
if REGION == "[your-region]":
REGION = "us-central1"
"""
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions.
End of explanation
"""
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
"""
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
"""
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Vertex AI Workbench, then don't execute this code
IS_COLAB = False
if not os.path.exists("/opt/deeplearning/metadata/env_version") and not os.getenv(
"DL_ANACONDA_HOME"
):
if "google.colab" in sys.modules:
IS_COLAB = True
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
"""
Explanation: Authenticate your Google Cloud account
If you are using Vertex AI Workbench Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
"""
BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"}
BUCKET_URI = f"gs://{BUCKET_NAME}"
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "[your-bucket-name]":
BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP
BUCKET_URI = "gs://" + BUCKET_NAME
"""
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a custom training job using the Vertex client library for Python, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex AI runs
the code from this package. In this tutorial, Vertex AI also saves the
trained model that results from your job in the same bucket. You can then
create an Endpoint resource based on this output in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
"""
! gsutil mb -l $REGION $BUCKET_URI
"""
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
"""
! gsutil ls -al $BUCKET_URI
"""
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
"""
SERVICE_ACCOUNT = "[your-service-account]" # @param {type:"string"}
if (
SERVICE_ACCOUNT == ""
or SERVICE_ACCOUNT is None
or SERVICE_ACCOUNT == "[your-service-account]"
):
# Get your service account from gcloud
if not IS_COLAB:
shell_output = !gcloud auth list 2>/dev/null
SERVICE_ACCOUNT = shell_output[2].replace("*", "").strip()
if IS_COLAB:
shell_output = ! gcloud projects describe $PROJECT_ID
# print("shell_output=", shell_output)
project_number = shell_output[-1].split(":")[1].strip().replace("'", "")
SERVICE_ACCOUNT = f"{project_number}[email protected]"
print("Service Account:", SERVICE_ACCOUNT)
"""
Explanation: Service Account
If you don't know your service account, try to get your service account using gcloud command by executing the second cell below.
End of explanation
"""
! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectCreator $BUCKET_URI
! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectViewer $BUCKET_URI
"""
Explanation: Set service account access for Vertex AI Pipelines
Run the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step -- you only need to run these once per service account.
End of explanation
"""
import json
import numpy as np
from google.cloud import aiplatform
from google_cloud_pipeline_components.v1.custom_job import \
create_custom_training_job_from_component
from kfp.v2 import compiler, dsl
from kfp.v2.dsl import component
"""
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries
End of explanation
"""
aiplatform.init(project=PROJECT_ID, staging_bucket=BUCKET_URI)
"""
Explanation: Initialize Vertex AI SDK for Python
Initialize the Vertex AI SDK for Python for your project and corresponding bucket.
End of explanation
"""
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aiplatform.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (aiplatform.gapic.AcceleratorType.NVIDIA_TESLA_K80, 1)
if os.getenv("IS_TESTING_DEPLOY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aiplatform.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPLOY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
"""
Explanation: Set hardware accelerators
You can set hardware accelerators for training and prediction.
Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:
(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
Otherwise specify (None, None) to use a container image to run on a CPU.
Learn more about hardware accelerator support for your region.
Note: TF releases before 2.3 for GPU support will fail to load the custom model in this tutorial. It is a known issue and fixed in TF 2.3. This is caused by static graph ops that are generated in the serving function. If you encounter this issue on your own custom models, use a container image for TF 2.3 with GPU support.
End of explanation
"""
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2.5".replace(".", "-")
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "{}-docker.pkg.dev/vertex-ai/training/{}:latest".format(
REGION.split("-")[0], TRAIN_VERSION
)
DEPLOY_IMAGE = "{}-docker.pkg.dev/vertex-ai/prediction/{}:latest".format(
REGION.split("-")[0], DEPLOY_VERSION
)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
"""
Explanation: Set pre-built containers
Set the pre-built Docker container image for training and prediction.
For the latest list, see Pre-built containers for training.
For the latest list, see Pre-built containers for prediction.
End of explanation
"""
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
"""
Explanation: Set machine type
Next, set the machine type to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for training and prediction.
machine type
n1-standard: 3.75GB of memory per vCPU.
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]
Note: The following is not supported for training:
standard: 2 vCPUs
highcpu: 2, 4 and 8 vCPUs
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.
End of explanation
"""
@component(
output_component_file="demo_componet.yaml",
base_image="python:3.9",
packages_to_install=["tensorflow"],
)
def self_contained_training_component(
model_dir: str,
epochs: int,
) -> str:
import numpy as np
import tensorflow as tf
def get_data():
from tensorflow.keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = (x_train / 255.0).astype(np.float32)
x_test = (x_test / 255.0).astype(np.float32)
return (x_train, y_train, x_test, y_test)
def get_model():
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Flatten
model = Sequential(
[
Flatten(input_shape=(28, 28, 1)),
Dense(128, activation="relu"),
Dense(256, activation="relu"),
Dense(128, activation="relu"),
Dense(10, activation="softmax"),
]
)
model.compile(
optimizer="Adam", loss="sparse_categorical_crossentropy", metrics=["acc"]
)
return model
def train_model(x_train, y_train, model, epochs):
history = model.fit(x_train, y_train, epochs=epochs)
return history
(x_train, y_train, _, _) = get_data()
model = get_model()
train_model(x_train, y_train, model, epochs)
model.save(model_dir)
return model_dir
"""
Explanation: Create a self-contained custom training component
First, you create a component that contains the entire training step. This component trains a simple MNIST model using the TensorFlow framework. The training is wholly self-contained in the component:
- Get and preprocess the data.
- Get/build the model.
- Train the model.
- Save the model.
The component takes the following parameters:
model_dir: The Cloud Storage location to save the trained model artifacts.
epochs: The number of epochs to train the model.
End of explanation
"""
PIPELINE_ROOT = "{}/pipeline_root/machine_settings".format(BUCKET_URI)
CPU_LIMIT = "8" # vCPUs
MEMORY_LIMIT = "8G"
@dsl.pipeline(
name="component-level-set-resources",
description="A simple pipeline that requests component-level machine resource",
pipeline_root=PIPELINE_ROOT,
)
def pipeline(epochs: int, model_dir: str, project: str = PROJECT_ID):
from google_cloud_pipeline_components.types import artifact_types
from google_cloud_pipeline_components.v1.model import ModelUploadOp
from kfp.v2.components import importer_node
training_job_task = (
self_contained_training_component(epochs=epochs, model_dir=model_dir)
.set_display_name("self-contained-training")
.set_cpu_limit(CPU_LIMIT)
.set_memory_limit(MEMORY_LIMIT)
.add_node_selector_constraint(
value=TRAIN_GPU.name, label_name="cloud.google.com/gke-accelerator"
)
.set_gpu_limit(TRAIN_NGPU)
)
import_unmanaged_model_task = importer_node.importer(
artifact_uri=training_job_task.output,
artifact_class=artifact_types.UnmanagedContainerModel,
metadata={
"containerSpec": {
"imageUri": DEPLOY_IMAGE,
},
},
).after(training_job_task)
model_upload = ModelUploadOp(
project=project,
display_name="mnist_model",
unmanaged_container_model=import_unmanaged_model_task.outputs["artifact"],
).after(import_unmanaged_model_task)
"""
Explanation: Create the self-contained-training pipeline
Next, you create the pipeline for training this component, consisting of the following steps:
Train the model. For this component, you set the following component level resources:
cpu_limit: The number of CPUs for the container's VM instance.
memory_limit: The amount of memory for the container's VM instance.
node_selector_constraint: The type of GPU for the container's VM instance.
gpu_limit: The number of GPUs for the container's VM instance.
Import model artifacts into a Model Container artifact.
Upload the Container artifact into a Vertex AI Model resource.
End of explanation
"""
compiler.Compiler().compile(
pipeline_func=pipeline,
package_path="component_level_settings.json",
)
pipeline = aiplatform.PipelineJob(
display_name="component-level-settings",
template_path="component_level_settings.json",
pipeline_root=PIPELINE_ROOT,
parameter_values={"model_dir": BUCKET_URI, "epochs": 20, "project": PROJECT_ID},
enable_caching=False,
)
pipeline.run()
! rm -rf component_level_settings.json
"""
Explanation: Compile and execute the pipeline
Next, you compile the pipeline and then execute it. The pipeline takes the following parameters, which are passed as the dictionary parameter_values:
model_dir: The Cloud Storage location to save the model artifacts.
epochs: The number of epochs to train the model.
project: Your project ID.
End of explanation
"""
PROJECT_NUMBER = pipeline.gca_resource.name.split("/")[1]
print(PROJECT_NUMBER)
def print_pipeline_output(job, output_task_name):
JOB_ID = job.name
print(JOB_ID)
for _ in range(len(job.gca_resource.job_detail.task_details)):
TASK_ID = job.gca_resource.job_detail.task_details[_].task_id
EXECUTE_OUTPUT = (
PIPELINE_ROOT
+ "/"
+ PROJECT_NUMBER
+ "/"
+ JOB_ID
+ "/"
+ output_task_name
+ "_"
+ str(TASK_ID)
+ "/executor_output.json"
)
GCP_RESOURCES = (
PIPELINE_ROOT
+ "/"
+ PROJECT_NUMBER
+ "/"
+ JOB_ID
+ "/"
+ output_task_name
+ "_"
+ str(TASK_ID)
+ "/gcp_resources"
)
EVAL_METRICS = (
PIPELINE_ROOT
+ "/"
+ PROJECT_NUMBER
+ "/"
+ JOB_ID
+ "/"
+ output_task_name
+ "_"
+ str(TASK_ID)
+ "/evaluation_metrics"
)
if tf.io.gfile.exists(EXECUTE_OUTPUT):
! gsutil cat $EXECUTE_OUTPUT
return EXECUTE_OUTPUT
elif tf.io.gfile.exists(GCP_RESOURCES):
! gsutil cat $GCP_RESOURCES
return GCP_RESOURCES
elif tf.io.gfile.exists(EVAL_METRICS):
! gsutil cat $EVAL_METRICS
return EVAL_METRICS
return None
print("self-contained-training")
artifacts = print_pipeline_output(pipeline, "self-contained-training")
print("\n\n")
print("importer")
artifacts = print_pipeline_output(pipeline, "importer")
print("\n\n")
print("model-upload")
artifacts = print_pipeline_output(pipeline, "model-upload")
output = !gsutil cat $artifacts
output = json.loads(output[0])
model_id = output["artifacts"]["model"]["artifacts"][0]["metadata"]["resourceName"]
print("\n")
print("MODEL ID", model_id)
print("\n\n")
"""
Explanation: View the pipeline results
Once the pipeline has completed, you can view the artifact outputs for each component step.
End of explanation
"""
pipeline.delete()
"""
Explanation: Delete a pipeline job
After a pipeline job is completed, you can delete the pipeline job with the method delete(). Prior to completion, a pipeline job can be canceled with the method cancel().
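For example (a sketch; pipeline here is the PipelineJob object created above), a run that has not yet finished could be stopped before deletion:
```python
# Only needed while the run is still executing; a completed run can be deleted directly.
pipeline.cancel()
```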
End of explanation
"""
model = aiplatform.Model(model_id)
model.delete()
"""
Explanation: Delete the model
You can delete the Model resource generated by your pipeline with the delete() method.
End of explanation
"""
custom_job_op = create_custom_training_job_from_component(
self_contained_training_component,
display_name="test-component",
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
)
"""
Explanation: Convert self-contained training component to a Vertex AI CustomJob.
Next, you use the utility create_custom_training_job_from_component() to convert the self-contained training component into a Vertex AI CustomJob. This provides the following benefits:
Adds additional ML Metadata tracking as a custom job.
Lets you set resource controls specific to the custom job:
machine_type: The machine (VM) instance for the CustomJob.
accelerator_type: The type (if any) of GPU or TPU.
accelerator_count: The number of HW accelerators (GPU/TPU) or zero.
replica_count: The number of VM instances for the job (Default is 1).
boot_disk_type: Type of the boot disk (default is "pd-ssd").
boot_disk_size_gb: Size in GB of the boot disk (default is 100GB).
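As a sketch (not executed in this notebook), the optional resource arguments listed above could also be passed explicitly alongside the ones already used; the values shown are simply the documented defaults:
```python
custom_job_op = create_custom_training_job_from_component(
    self_contained_training_component,
    display_name="test-component",
    machine_type=TRAIN_COMPUTE,
    accelerator_type=TRAIN_GPU.name,
    accelerator_count=TRAIN_NGPU,
    replica_count=1,           # single VM instance for the job
    boot_disk_type="pd-ssd",   # documented default boot disk type
    boot_disk_size_gb=100,     # documented default boot disk size in GB
)
```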
End of explanation
"""
@dsl.pipeline(
name="customjob-set-resources",
description="A simple pipeline that requests customjob-level machine resource",
pipeline_root=PIPELINE_ROOT,
)
def pipeline(
epochs: int, model_dir: str, project: str = PROJECT_ID, region: str = REGION
):
from google_cloud_pipeline_components.types import artifact_types
from google_cloud_pipeline_components.v1.model import ModelUploadOp
from kfp.v2.components import importer_node
training_job_task = custom_job_op(
epochs=epochs, model_dir=model_dir, project=project, location=region
)
import_unmanaged_model_task = importer_node.importer(
artifact_uri=training_job_task.outputs["output"],
artifact_class=artifact_types.UnmanagedContainerModel,
metadata={
"containerSpec": {
"imageUri": DEPLOY_IMAGE,
},
},
).after(training_job_task)
model_upload = ModelUploadOp(
project=project,
display_name="mnist_model",
unmanaged_container_model=import_unmanaged_model_task.outputs["artifact"],
).after(import_unmanaged_model_task)
"""
Explanation: Create the CustomJob pipeline
Next, you create the pipeline for training this component, consisting of the following steps:
Train the model. For this component, you set the following custom-job level resources:
machine_type: The machine (VM) instance.
accelerator_type: The type of GPU for the container's VM instance.
accelerator_count: The number of GPUs for the container's VM instance.
replica_count: The number of machine (VM) instances.
Import model artifacts into a Model Container artifact.
Upload the Container artifact into a Vertex AI Model resource.
End of explanation
"""
compiler.Compiler().compile(
pipeline_func=pipeline,
package_path="customjob_level_settings.json",
)
pipeline = aiplatform.PipelineJob(
display_name="customjob-level-settings",
template_path="customjob_level_settings.json",
pipeline_root=PIPELINE_ROOT,
parameter_values={"model_dir": BUCKET_URI, "epochs": 20, "project": PROJECT_ID},
enable_caching=False,
)
pipeline.run()
! rm -rf customjob_level_settings.json
"""
Explanation: Compile and execute the pipeline
Next, you compile the pipeline and then execute it. The pipeline takes the following parameters, which are passed as the dictionary parameter_values:
model_dir: The Cloud Storage location to save the model artifacts.
epochs: The number of epochs to train the model.
project: Your project ID.
End of explanation
"""
print("self-contained-training-component")
artifacts = print_pipeline_output(pipeline, "self-contained-training-component")
print("\n\n")
print("importer")
artifacts = print_pipeline_output(pipeline, "importer")
print("\n\n")
print("model-upload")
artifacts = print_pipeline_output(pipeline, "model-upload")
output = !gsutil cat $artifacts
output = json.loads(output[0])
model_id = output["artifacts"]["model"]["artifacts"][0]["metadata"]["resourceName"]
print("\n")
print("MODEL ID", model_id)
print("\n\n")
"""
Explanation: View the pipeline results
Once the pipeline has completed, you can view the artifact outputs for each component step.
End of explanation
"""
pipeline.delete()
"""
Explanation: Delete a pipeline job
After a pipeline job is completed, you can delete the pipeline job with the method delete(). Prior to completion, a pipeline job can be canceled with the method cancel().
End of explanation
"""
model = aiplatform.Model(model_id)
model.delete()
"""
Explanation: Delete the model
You can delete the Model resource generated by your pipeline with the delete() method.
End of explanation
"""
# Set this to true only if you'd like to delete your bucket
delete_bucket = False
if delete_bucket or os.getenv("IS_TESTING"):
! gsutil rm -r $BUCKET_URI
"""
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
End of explanation
"""
|
riddhishb/ipython-notebooks | Poisson Editing/SeamlessCloning_Sample/SeamlessImageCloningGeometric.ipynb | gpl-3.0 | import PIL
import PIL.Image
import scipy
import scipy.misc
ref = PIL.Image.open("sky.jpg")
ref = numpy.array(ref)
ref = scipy.misc.imresize(ref, 0.25, interp="bicubic")
target = PIL.Image.open("bird.jpg")
target = numpy.array(target)
target = scipy.misc.imresize(target, 0.25, interp="bicubic")
target_array_rgb = numpy.array(target)
target_array_rgb_resized = scipy.misc.imresize(target, 0.5, interp="bicubic")
rny = ref.shape[0]
rnx = ref.shape[1]
tny = target_array_rgb_resized.shape[0]
tnx = target_array_rgb_resized.shape[1]
oy = 10
ox = 50
#oy = 50
#ox = 250
target_array_rgb_resized_view = target_array_rgb_resized.view(dtype=[("r", numpy.uint8), ("g", numpy.uint8), ("b", numpy.uint8)]).squeeze()
zeros = numpy.zeros_like(ref)
zeros_view = zeros.view(dtype=[("r", numpy.uint8), ("g", numpy.uint8), ("b", numpy.uint8)]).squeeze()
target = zeros_view.copy()
target[oy:oy+tny, ox:ox+tnx] = target_array_rgb_resized_view
target = numpy.reshape(target.view(dtype=numpy.uint8), ref.shape)
mask = numpy.zeros((rny,rnx), dtype=numpy.uint8)
mask[oy:oy+tny, ox:ox+tnx] = 1
naive_clone = ref.copy()
naive_clone[mask == 1] = target[mask == 1]
figsize(19,4)
matplotlib.pyplot.subplot(141)
matplotlib.pyplot.imshow(ref);
matplotlib.pyplot.title("ref");
matplotlib.pyplot.subplot(142)
matplotlib.pyplot.imshow(target);
matplotlib.pyplot.title("target");
matplotlib.pyplot.subplot(143)
matplotlib.pyplot.imshow(mask);
matplotlib.pyplot.title("mask");
matplotlib.pyplot.subplot(144)
matplotlib.pyplot.imshow(naive_clone);
matplotlib.pyplot.title("naive_clone");
"""
Explanation: Seamless Image Cloning (Geometric)
The purpose of this code is to demonstrate the seamless image cloning algorithm. See [1] for details. To solve the sparse least-squares problem resulting from the algorithm in [1], we use a geometric Jacobi method inspired by [2].
[1] http://www.cs.jhu.edu/~misha/Fall07/Papers/Perez03.pdf
[2] http://http.developer.nvidia.com/GPUGems/gpugems_ch38.html
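For reference, the per-pixel Jacobi update implemented below can be written as (a sketch using the notation of [1]: $f^*$ is the reference/background image, $g$ is the image being cloned, $\Omega$ is the strict interior of the mask, and $N_p$ is the 4-neighbourhood of pixel $p$):
$$x_p^{(k+1)} = \frac{\sum_{q \in N_p \cap \Omega} x_q^{(k)} + \sum_{q \in N_p \setminus \Omega} f^*_q + \sum_{q \in N_p} (g_p - g_q)}{|N_p|}$$
Each sweep averages the current interior estimates and the fixed border values while adding the guidance gradient of the cloned image, which is exactly what the iteration loop below computes for the greyscale channel.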
compute naive clone
End of explanation
"""
import skimage
import skimage.morphology
strict_interior = skimage.morphology.erosion(mask, numpy.ones((3,3), dtype=numpy.uint8))
strict_interior_indices = strict_interior.nonzero()
num_strict_interior_pixels = strict_interior_indices[0].shape[0]
border = mask - strict_interior
figsize(9,4)
matplotlib.pyplot.subplot(121);
matplotlib.pyplot.imshow(strict_interior, interpolation="nearest");
matplotlib.pyplot.title("strict_interior");
matplotlib.pyplot.subplot(122);
matplotlib.pyplot.imshow(border, interpolation="nearest");
matplotlib.pyplot.title("border");
"""
Explanation: compute strict interior and border regions
End of explanation
"""
import scipy
import scipy.sparse
import scipy.sparse.linalg
ref_greyscale = ref[:,:,0].copy()
target_greyscale = target[:,:,0].copy()
X_current = numpy.zeros_like(mask, dtype=numpy.float32)
X_next = numpy.zeros_like(mask, dtype=numpy.float32)
X_current[border == 1] = ref_greyscale[border == 1]
X_next[border == 1] = ref_greyscale[border == 1]
num_iterations = 1500
print_frequency = 25
for n in range(num_iterations):
if n % print_frequency == 0:
print n
seamless_clone_greyscale = ref_greyscale.copy()
seamless_clone_greyscale[strict_interior_indices] = X_current[strict_interior_indices]
scipy.misc.imsave("%d.png" % n, seamless_clone_greyscale / 255.0)
for i in range(num_strict_interior_pixels):
y = strict_interior_indices[0][i]
x = strict_interior_indices[1][i]
x_right = x+1
x_left = x-1
y_up = y-1
y_down = y+1
x_neighbors = []
y_neighbors = []
if x_right < rnx:
y_neighbors.append(y)
x_neighbors.append(x_right)
if y_up >= 0:
y_neighbors.append(y_up)
x_neighbors.append(x)
if x_left >= 0:
y_neighbors.append(y)
x_neighbors.append(x_left)
if y_down < rny:
y_neighbors.append(y_down)
x_neighbors.append(x)
y_neighbors = numpy.array(y_neighbors)
x_neighbors = numpy.array(x_neighbors)
strict_interior_neighbors = (strict_interior[(y_neighbors,x_neighbors)] == 1).nonzero()
border_neighbors = (strict_interior[(y_neighbors,x_neighbors)] == 0).nonzero()
num_neighbors = y_neighbors.shape[0]
sum_X_current_strict_interior_neighbors = numpy.sum(X_current[(y_neighbors[strict_interior_neighbors],x_neighbors[strict_interior_neighbors])])
sum_vq = (num_neighbors * target_greyscale[y,x]) - numpy.sum(target_greyscale[(y_neighbors, x_neighbors)])
sum_border_f = numpy.sum(ref_greyscale[(y_neighbors[border_neighbors],x_neighbors[border_neighbors])])
X_xy_next = (sum_X_current_strict_interior_neighbors + sum_border_f + sum_vq) / num_neighbors
X_next[y,x] = numpy.clip(X_xy_next, 0.0, 255.0)
#if i == 0:
# print "-"
# print ref_greyscale[(y_neighbors[border_neighbors],x_neighbors[border_neighbors])]
# print X_current[(y_neighbors,x_neighbors)]
# print "-"
# print sum_X_current_strict_interior_neighbors
# print sum_vq
# print sum_border_f
# print X_current[y,x]
# print X_xy_next
# print
#print "----"
X_current, X_next = X_next, X_current
seamless_clone_greyscale = ref_greyscale.copy()
seamless_clone_greyscale[strict_interior_indices] = X_current[strict_interior_indices]
figsize(15,15)
matplotlib.pyplot.imshow(seamless_clone_greyscale, interpolation="nearest", cmap="gray");
matplotlib.pyplot.title("seamless_clone_greyscale");
"""
Explanation: compute seamless clone (red)
End of explanation
"""
|
scikit-optimize/scikit-optimize.github.io | 0.7/notebooks/auto_examples/hyperparameter-optimization.ipynb | bsd-3-clause | print(__doc__)
import numpy as np
"""
Explanation: ============================================
Tuning a scikit-learn estimator with skopt
============================================
Gilles Louppe, July 2016
Katie Malone, August 2016
Reformatted by Holger Nahrstaedt 2020
.. currentmodule:: skopt
If you are looking for a :obj:sklearn.model_selection.GridSearchCV replacement checkout
sphx_glr_auto_examples_sklearn-gridsearchcv-replacement.py instead.
Problem statement
Tuning the hyper-parameters of a machine learning model is often carried out
using an exhaustive exploration of (a subset of) the space of all hyper-parameter
configurations (e.g., using :obj:sklearn.model_selection.GridSearchCV), which
often results in a very time consuming operation.
In this notebook, we illustrate how to couple :class:gp_minimize with sklearn's
estimators to tune hyper-parameters using sequential model-based optimisation,
hopefully resulting in equivalent or better solutions, but within less
evaluations.
Note: scikit-optimize provides a dedicated interface for estimator tuning via
:class:BayesSearchCV, which has an interface similar to that of
:obj:sklearn.model_selection.GridSearchCV. This class uses functions of skopt to perform hyperparameter
search efficiently. For example usage of this class, see
sphx_glr_auto_examples_sklearn-gridsearchcv-replacement.py
example notebook.
End of explanation
"""
from sklearn.datasets import load_boston
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score
boston = load_boston()
X, y = boston.data, boston.target
n_features = X.shape[1]
# gradient boosted trees tend to do well on problems like this
reg = GradientBoostingRegressor(n_estimators=50, random_state=0)
"""
Explanation: Objective
To tune the hyper-parameters of our model we need to define a model,
decide which parameters to optimize, and define the objective function
we want to minimize.
End of explanation
"""
from skopt.space import Real, Integer
from skopt.utils import use_named_args
# The list of hyper-parameters we want to optimize. For each one we define the
# bounds, the corresponding scikit-learn parameter name, as well as how to
# sample values from that dimension (`'log-uniform'` for the learning rate)
space = [Integer(1, 5, name='max_depth'),
Real(10**-5, 10**0, "log-uniform", name='learning_rate'),
Integer(1, n_features, name='max_features'),
Integer(2, 100, name='min_samples_split'),
Integer(1, 100, name='min_samples_leaf')]
# this decorator allows your objective function to receive the parameters as
# keyword arguments. This is particularly convenient when you want to set
# scikit-learn estimator parameters
@use_named_args(space)
def objective(**params):
reg.set_params(**params)
return -np.mean(cross_val_score(reg, X, y, cv=5, n_jobs=-1,
scoring="neg_mean_absolute_error"))
"""
Explanation: Next, we need to define the bounds of the dimensions of the search space
we want to explore and pick the objective. In this case the cross-validation
mean absolute error of a gradient boosting regressor over the Boston
dataset, as a function of its hyper-parameters.
End of explanation
"""
from skopt import gp_minimize
res_gp = gp_minimize(objective, space, n_calls=50, random_state=0)
"Best score=%.4f" % res_gp.fun
print("""Best parameters:
- max_depth=%d
- learning_rate=%.6f
- max_features=%d
- min_samples_split=%d
- min_samples_leaf=%d""" % (res_gp.x[0], res_gp.x[1],
res_gp.x[2], res_gp.x[3],
res_gp.x[4]))
"""
Explanation: Optimize all the things!
With these two pieces, we are now ready for sequential model-based
optimisation. Here we use gaussian process-based optimisation.
End of explanation
"""
from skopt.plots import plot_convergence
plot_convergence(res_gp)
"""
Explanation: Convergence plot
End of explanation
"""
|
maxentile/equilibrium-sampling-tinker | Annealed importance sampling.ipynb | mit | import numpy as np
import numpy.random as npr
npr.seed(0)
import matplotlib.pyplot as plt
plt.rc('font', family='serif')
%matplotlib inline
def annealed_importance_sampling(draw_exact_initial_sample,
transition_kernels,
annealing_distributions,
n_samples=1000):
'''
draw_exact_initial_sample:
Signature:
Arguments: none
Returns: R^d
transition_kernels:
length-T list of functions, each function signature:
Arguments: R^d
Returns: R^d
can be any transition operator that preserves its corresponding annealing distribution
annealing_distributions:
length-T list of functions, each function signature:
Arguments: R^d
Returns: R^+
annealing_distributions[0] is the initial density
annealing_distributions[-1] is the target density
n_samples:
positive integer
'''
dim=len(draw_exact_initial_sample())
T = len(annealing_distributions)
weights = np.ones(n_samples,dtype=np.double)
ratios = []
xs = []
for k in range(n_samples):
x = np.zeros((T,dim))
ratios_ = np.zeros(T-1,dtype=np.double)
x[0] = draw_exact_initial_sample()
for t in range(1,T):
f_tminus1 = annealing_distributions[t-1](x[t-1])
f_t = annealing_distributions[t](x[t-1])
ratios_[t-1] = f_t/f_tminus1
weights[k] *= ratios_[t-1]
x[t] = transition_kernels[t](x[t-1],target_f=annealing_distributions[t])
xs.append(x)
ratios.append(ratios_)
return np.array(xs), weights, np.array(ratios)
"""
Explanation: Annealed importance sampling
[This largely follows the review in section 3 of: Sandwiching the marginal likelihood using bidirectional Monte Carlo (Grosse, Ghahramani, and Adams, 2015)]
$\newcommand{\x}{\mathbf{x}}
\newcommand{\Z}{\mathcal{Z}}$
Goal:
We want to estimate the normalizing constant $\Z = \int f_T(\x) d\x$ of a complicated target distribution $p_T \propto f_T$ that we only know up to this normalizing constant.
A basic strategy:
Importance sampling, i.e. draw each sample from an easy distribution $\x^{(k)} \sim p_1$, then reweight by $w^{(k)}\equiv f_T(\x^{(k)})/p_1(\x^{(k)})$. After drawing $K$ such samples, we can estimate the normalizing constant as $$\hat{\Z} = \frac{1}{K} \sum_{k=1}^K w^{(k)} \equiv \frac{1}{K} \sum_{k=1}^K \frac{f_T(\x^{(k)})}{p_1(\x^{(k)})}$$.
Problem:
Although importance sampling will eventually work as $K \to \infty$ as long as the support of $p_1$ contains the support of $p_T$, this will be extremely inefficient if $p_1$ and $p_T$ are very different.
Actual strategy:
Instead of doing the importance reweighting computation in one step, gradually convert a sample from the simpler distribution $p_1$ to the target distribution $p_T$ by introducing a series of intermediate distributions $p_1,p_2,\dots,p_{T}$, chosen so that no $p_t$ and $p_{t+1}$ are dramatically different. We can then estimate the overall importance weight as a product of more reasonable ratios.
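Concretely (this is the algorithm below written in one line), the ratio of normalizing constants telescopes over the intermediate distributions, and each factor is estimated by a single importance ratio evaluated at the current sample:
$$\frac{\Z_T}{\Z_1} = \prod_{t=2}^{T} \frac{\Z_t}{\Z_{t-1}}, \qquad w^{(k)} = \Z_1 \prod_{t=2}^{T} \frac{f_t(\x_{t-1})}{f_{t-1}(\x_{t-1})},$$
so that $\hat{\Z} = \frac{1}{K}\sum_{k=1}^K w^{(k)}$ remains an unbiased estimate of $\Z_T$ even though each individual ratio is noisy.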
Inputs:
- Desired number of samples $K$
- An initial distribution $p_1(\x)$ for which we can:
- Draw samples: $\x_s \sim p_1(\x)$
- Evaluate the normalizing constant: $\Z_1$
- A target (unnormalized) distribution function: $f_T(\x)$
- A sequence of annealing distribution functions $f_1,\dots,f_T$. These can be almost arbitrary, but here are some options:
- We can construct these generically by taking geometric averages of the initial and target distributions: $f_t(\x) = f_1(\x)^{1-\beta_t}f_T(\x)^{\beta_t}$
- In the case of a target distribution $f_T(\x) \propto \exp(-U(\x) \beta)$ (where $\beta$ is the inverse temperature), we could also construct the annealing distributions as Boltzmann distributions at decreasing temperatures.
- In the case of a target distribution defined in terms of a force field, we could also construct the annealing distributions by starting from an alchemically softened form of the potential and gradually turning on various parts of the potential.
- Could use "boost potentials" from accelerated MD (http://www.ks.uiuc.edu/Research/namd/2.9/ug/node63.html)
- If we have some way to make dimension-matching proposals, we might use coarse-grained potentials as intermediates.
- A sequence of Markov transition kernels $\mathcal{T}_1,\dots,\mathcal{T}_T$, where each $\mathcal{T}_t$ leaves its corresponding distribution $p_t$ invariant. These can be almost arbitrary, but here are some options:
- Random-walk Metropolis
- Symplectic integrators of Hamiltonian dynamics
- NCMC
Outputs:
- A collection of weights $w^{(k)}$, from which we can compute an unbiased estimate of the normalizing constant of $f_T$ by $\hat{\Z}=\sum_{k=1}^K w^{(k)} / K$
Algorithm:
for $k=1$ to $K$:
1. $\x_1 \leftarrow$ sample from $p_1(\x)$
2. $w^{(k)} \leftarrow \Z_1$
3. for $t=2$ to $T$:
- $w^{(k)} \leftarrow w^{(k)} \frac{f_t(\x_{t-1})}{f_{t-1}(\x_{t-1})}$
- $\x_t \leftarrow$ sample from $\mathcal{T}_t(\x \mid \x_{t-1})$
1. Implement AIS
End of explanation
"""
num_intermediates = 25
betas = np.linspace(0,1,num_intermediates+2)
dim=1
def initial_density(x):
return np.exp(-((x)**2).sum()/2)
def draw_from_initial():
return npr.randn(dim)
def target_density(x):
return np.exp(-((x-4)**2).sum()/2)
class GeometricMean():
def __init__(self,initial,target,beta):
self.initial = initial
self.target = target
self.beta = beta
def __call__(self,x):
f1_x = self.initial(x)
fT_x = self.target(x)
return f1_x**(1-self.beta) * fT_x**self.beta
annealing_distributions = [GeometricMean(initial_density,target_density,beta) for beta in betas]
"""
Explanation: 2. Define annealing distributions
Here we'll be annealing between two unnormalized Gaussian distributions with geometric mean intermediates.
They have the same variances, so they should have the same normalizing constants.
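In fact both unnormalized densities integrate to the same value, $\int e^{-(x-\mu)^2/2}\,dx = \sqrt{2\pi}$ for any mean $\mu$, so the true ratio $\mathcal{Z}_T/\mathcal{Z}_1$ is exactly 1, which is the value the estimator should recover.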
End of explanation
"""
x = np.linspace(-5,10,100)
for i,f in enumerate(annealing_distributions):
if i == 0 or i == len(annealing_distributions)-1:
if i == 0:
label='Initial'
else:
label='Target'
else:
label=None
y = np.array([f(x_) for x_ in x])
plt.plot(x,y/y.max(),label=label)
plt.title('Annealing distributions')
plt.xlabel(r'$x$')
plt.ylabel(r'$f_t(x)$')
plt.legend(loc='best')
"""
Explanation: 2.1. Plot annealing distributions
End of explanation
"""
def gaussian_random_walk(x,
target_f,
n_steps=10,
scale=0.5):
x_old = x
f_old = target_f(x_old)
dim=len(x)
for i in range(n_steps):
proposal = x_old + npr.randn(dim)*scale
f_prop = target_f(proposal)
if (f_prop / f_old) > npr.rand():
x_old = proposal
f_old = f_prop
return x_old
transition_kernels = [gaussian_random_walk]*len(annealing_distributions)
"""
Explanation: 3. Define transition kernel
Here we'll just do a metropolized random walk with spherical gaussian proposals.
End of explanation
"""
xs, weights, ratios = annealed_importance_sampling(draw_from_initial,
transition_kernels,
annealing_distributions,
n_samples=10000)
"""
Explanation: 4. Run AIS on this toy example
End of explanation
"""
plt.plot((np.cumsum(weights)/np.arange(1,len(weights)+1)))
plt.hlines(1.0,0,len(weights))
plt.xlabel('# samples')
plt.ylabel(r'Estimated $\mathcal{Z}_T / \mathcal{Z}_1$')
plt.title(r'Estimated $\mathcal{Z}_T / \mathcal{Z}_1$')
ratios_ = ratios
mean=ratios_.mean(0)[1:]
err = ratios_.std(0)[1:]
plt.plot(mean);
plt.fill_between(range(len(mean)),mean-err,mean+err,alpha=0.4);
plt.xlabel(r'Annealing distribution index ($t$)')
plt.ylabel(r'$f_{t+1}(\mathbf{x}_{t})/f_{t}(\mathbf{x}_{t})$')
plt.title(r'Weight updates $f_{t+1}(\mathbf{x}_{t})/f_{t}(\mathbf{x}_{t})$')
end_samples = np.array([x_[-1] for x_ in xs])
plt.hist(end_samples,bins=50,normed=True);
plt.plot(x,[initial_density(x_)/2 for x_ in x])
plt.plot(x,[target_density(x_)/2 for x_ in x])
plt.title(r"$x_T$ samples")
plt.xlabel(r'$x$')
plt.ylabel(r'$p_T(x)$')
"""
Explanation: 4.1. Plot results
It should converge to one, since the initial and target distributions have the same normalizing constant.
End of explanation
"""
from simtk.openmm.app import *
from simtk.openmm import *
from simtk.unit import *
from openmmtools.integrators import MetropolisMonteCarloIntegrator,HMCIntegrator
# all I want is the alanine dipeptide topology
from msmbuilder.example_datasets import AlanineDipeptide
ala = AlanineDipeptide().get().trajectories
top_md = ala[0][0].topology
topology = top_md.to_openmm()
n_atoms = top_md.n_atoms
dim = n_atoms*3
# create an openmm system
forcefield = ForceField('amber99sb.xml','amber10_obc.xml')
system = forcefield.createSystem(topology,
nonbondedCutoff=1*nanometer, constraints=HBonds)
integrator = HMCIntegrator(300*kelvin)
simulation = Simulation(topology, system, integrator)
# create a thin wrapper class
class PeptideSystem():
def __init__(self):
self.simulation = simulation
self.positions = self.simulation.context.getState(getPositions=True).getPositions().value_in_unit(nanometer)
self.n_atoms = len(self.positions)
def evaluate_potential_flat(self,position_vec):
positions = position_vec.reshape(self.n_atoms,3)
self.simulation.context.setPositions(positions)
return self.simulation.context.getState(getEnergy=True).getPotentialEnergy()
def propagate(self,position_vec,n_steps=1000,temp=300):
integrator = HMCIntegrator(temp*kelvin)
self.simulation = Simulation(topology,system,integrator)
positions = position_vec.reshape(self.n_atoms,3)
self.simulation.context.setPositions(positions)
simulation.step(n_steps)
return np.array(self.simulation.context.getState(getPositions=True).getPositions().value_in_unit(nanometer)).flatten()
def probability_at_temp(self,position_vec,temp=300.0):
return np.exp(-self.evaluate_potential_flat(position_vec).value_in_unit(kilojoule/mole)/temp)
peptide = PeptideSystem()
temperatures = np.logspace(3,0,1000)*300
# Note: a list comprehension of lambdas, e.g.
# [lambda x: peptide.probability_at_temp(x, t) for t in temperatures],
# gives a list of functions that all evaluate probability at t = temperatures[-1],
# because Python closures capture the loop variable by reference (late binding).
# Creating a separate callable object for each temperature avoids any surprises here.
class TempDist():
def __init__(self,temperature):
self.temp = temperature
def __call__(self,x):
return peptide.probability_at_temp(x,self.temp)
annealing_distributions = [initial_density] + [TempDist(t) for t in temperatures]
#num_intermediates = 1000
#betas = np.linspace(0,1,num_intermediates+2)
#annealing_distributions = [GeometricMean(initial_density,TempDist(300),beta) for beta in betas]
# same deal for transition kernels at different temperatures
class TempProp():
def __init__(self,temperature):
self.temp = temperature
def __call__(self,x,target_f=None):
return peptide.propagate(x,n_steps=100,temp=self.temp)
#transition_kernels = [None] + [TempProp(t) for t in temperatures] # transition_kernels[0] is never referenced...
scales = np.logspace(1,0,len(annealing_distributions)+1)*0.005
class RwProp():
def __init__(self,n_steps=100,scale=0.05):
self.n_steps=n_steps
self.scale=scale
def __call__(self,x,target_f):
return gaussian_random_walk(x,target_f,n_steps=self.n_steps,scale=self.scale)
transition_kernels = [RwProp(n_steps=30,scale=s) for s in scales]
%%timeit
peptide.evaluate_potential_flat(npr.randn(dim))
%%timeit
transition_kernels[1](npr.randn(dim),annealing_distributions[1])
len(annealing_distributions),len(transition_kernels)
# annealing schedule
plt.plot(temperatures)
plt.xlabel('Annealing distribution #')
plt.ylabel('Temperature')
plt.title('Annealing schedule')
"""
Explanation: 5. A more interesting example
Let's sample a biomolecule's configuration space in this way, and maybe estimate its partition function.
For now, let's do alanine dipeptide.
5.1-5.2. Define annealing distributions and transition kernels
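Apart from the standard-normal initial distribution, the annealing family used for this example is a simple tempering of the molecular target (a sketch in the notebook's own loose units, with $k_B$ folded into the temperature):
$$f_t(\mathbf{x}) = \exp\left(-U(\mathbf{x})/T_t\right), \qquad T_t \;\text{log-spaced from}\; 3\times 10^{5} \;\text{down to}\; 300,$$
so the earliest distributions are nearly flat and the final one is the room-temperature target.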
End of explanation
"""
%%time
xs, weights, ratios = annealed_importance_sampling(draw_from_initial,
transition_kernels,
annealing_distributions,
n_samples=1)
weights,np.log(weights)
# expected number of hours to collect 1000 samples:
(1000*36/60)/60
%%time
xs, weights, ratios = annealed_importance_sampling(draw_from_initial,
transition_kernels,
annealing_distributions,
n_samples=1000)
best_traj = weights.argmax()
coords = [xs[best_traj][i].reshape(n_atoms,3) for i in range(len(xs[0]))]
import mdtraj as md
annealing_traj = md.Trajectory(coords,top_md)
annealing_traj.save_pdb('annealing_traj.pdb')
plt.plot(np.log((np.cumsum(weights)/np.arange(1,len(weights)+1))))
plt.xlabel('# samples')
plt.ylabel(r'Estimated $\log ( \mathcal{Z}_T / \mathcal{Z}_1 )$')
plt.title(r'Estimated $\log ( \mathcal{Z}_T / \mathcal{Z}_1 )$')
plt.plot(np.log(weights))
ratios.mean(0)[:10]
ratios_ = ratios
mean=ratios_.mean(0)[1:]
err = ratios_.std(0)[1:]
plt.plot(mean);
plt.fill_between(range(len(mean)),mean-err,mean+err,alpha=0.4);
plt.xlabel(r'Annealing distribution index ($t$)')
plt.ylabel(r'$f_t/f_{t-1}$')
plt.title('Weight updates')
plt.savefig('weight_updates.jpg',dpi=300)
plt.close()
from IPython.display import Image
Image('weight_updates.jpg',retina=True)
# numerical underflow isn't as big a concern as I thought
np.exp(sum(np.log(ratios[0]))),weights[0]
np.savez('AIS_results_alanine_dipeptide.npz',ratios)
# what if, instead of running 30 steps of rw metropolis between each of a 1000 annealing distributions, we instead
# run 1 step of rw metropolis between each of 30,000 annealing distributions?
"""
Explanation: 5.3. Run AIS
From $\mathcal{N}(\mathbf{0},\mathbf{I})$ to $\exp[-U(\x)/k_BT]$ in only a gazillion annealing distributions!
End of explanation
"""
|
metpy/MetPy | v0.8/_downloads/Station_Plot_with_Layout.ipynb | bsd-3-clause | import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
import pandas as pd
from metpy.calc import get_wind_components
from metpy.cbook import get_test_data
from metpy.plots import (add_metpy_logo, simple_layout, StationPlot,
StationPlotLayout, wx_code_map)
from metpy.units import units
"""
Explanation: Station Plot with Layout
Make a station plot, complete with sky cover and weather symbols, using a
station plot layout built into MetPy.
The station plot itself is straightforward, but there is a bit of code to perform the
data-wrangling (hopefully that situation will improve in the future). Certainly, if you have
existing point data in a format you can work with trivially, the station plot will be simple.
The StationPlotLayout class is used to standardize the plotting of various parameters
(e.g. temperature), keeping track of the location, formatting, and even the units for use in
the station plot. This makes it easy (if using standardized names) to re-use a given layout
of a station plot.
End of explanation
"""
with get_test_data('station_data.txt') as f:
data_arr = pd.read_csv(f, header=0, usecols=(1, 2, 3, 4, 5, 6, 7, 17, 18, 19),
names=['stid', 'lat', 'lon', 'slp', 'air_temperature',
'cloud_fraction', 'dew_point_temperature', 'weather',
'wind_dir', 'wind_speed'],
na_values=-99999)
data_arr.set_index('stid', inplace=True)
"""
Explanation: The setup
First read in the data. We use pandas.read_csv to read in the data, assigning explicit
column names and treating -99999 as a missing value. This allows us to handle
the columns with string data alongside the numeric columns.
End of explanation
"""
# Pull out these specific stations
selected = ['OKC', 'ICT', 'GLD', 'MEM', 'BOS', 'MIA', 'MOB', 'ABQ', 'PHX', 'TTF',
'ORD', 'BIL', 'BIS', 'CPR', 'LAX', 'ATL', 'MSP', 'SLC', 'DFW', 'NYC', 'PHL',
'PIT', 'IND', 'OLY', 'SYR', 'LEX', 'CHS', 'TLH', 'HOU', 'GJT', 'LBB', 'LSV',
'GRB', 'CLT', 'LNK', 'DSM', 'BOI', 'FSD', 'RAP', 'RIC', 'JAN', 'HSV', 'CRW',
'SAT', 'BUY', '0CO', 'ZPC', 'VIH']
# Loop over all the whitelisted sites, grab the first data, and concatenate them
data_arr = data_arr.loc[selected]
# Drop rows with missing winds
data_arr = data_arr.dropna(how='any', subset=['wind_dir', 'wind_speed'])
# First, look at the names of variables that the layout is expecting:
simple_layout.names()
"""
Explanation: This sample data has way too many stations to plot all of them. Instead, we just select
a few from around the U.S. and pull those out of the data file.
End of explanation
"""
# This is our container for the data
data = {}
# Copy out to stage everything together. In an ideal world, this would happen on
# the data reading side of things, but we're not there yet.
data['longitude'] = data_arr['lon'].values
data['latitude'] = data_arr['lat'].values
data['air_temperature'] = data_arr['air_temperature'].values * units.degC
data['dew_point_temperature'] = data_arr['dew_point_temperature'].values * units.degC
data['air_pressure_at_sea_level'] = data_arr['slp'].values * units('mbar')
"""
Explanation: Next grab the simple variables out of the data we have (attaching correct units), and
put them into a dictionary that we will hand the plotting function later:
End of explanation
"""
# Get the wind components, converting from m/s to knots as will be appropriate
# for the station plot
u, v = get_wind_components(data_arr['wind_speed'].values * units('m/s'),
data_arr['wind_dir'].values * units.degree)
data['eastward_wind'], data['northward_wind'] = u, v
# Convert the fraction value into a code of 0-8, which can be used to pull out
# the appropriate symbol
data['cloud_coverage'] = (8 * data_arr['cloud_fraction']).fillna(10).values.astype(int)
# Map weather strings to WMO codes, which we can use to convert to symbols
# Only use the first symbol if there are multiple
wx_text = data_arr['weather'].fillna('')
data['present_weather'] = [wx_code_map[s.split()[0] if ' ' in s else s] for s in wx_text]
"""
Explanation: Notice that the names (the keys) in the dictionary are the same as those that the
layout is expecting.
Now perform a few conversions:
Get wind components from speed and direction
Convert cloud fraction values to integer codes [0 - 8]
Map METAR weather codes to WMO codes for weather symbols
End of explanation
"""
proj = ccrs.LambertConformal(central_longitude=-95, central_latitude=35,
standard_parallels=[35])
"""
Explanation: All the data wrangling is finished, just need to set up plotting and go:
Set up the map projection to be used for the station plot
End of explanation
"""
# Change the DPI of the resulting figure. Higher DPI drastically improves the
# look of the text rendering
plt.rcParams['savefig.dpi'] = 255
# Create the figure and an axes set to the projection
fig = plt.figure(figsize=(20, 10))
add_metpy_logo(fig, 1080, 290, size='large')
ax = fig.add_subplot(1, 1, 1, projection=proj)
# Add some various map elements to the plot to make it recognizable
ax.add_feature(cfeature.LAND)
ax.add_feature(cfeature.OCEAN)
ax.add_feature(cfeature.LAKES)
ax.add_feature(cfeature.COASTLINE)
ax.add_feature(cfeature.STATES)
ax.add_feature(cfeature.BORDERS, linewidth=2)
# Set plot bounds
ax.set_extent((-118, -73, 23, 50))
#
# Here's the actual station plot
#
# Start the station plot by specifying the axes to draw on, as well as the
# lon/lat of the stations (with transform). We also set the fontsize to 12 pt.
stationplot = StationPlot(ax, data['longitude'], data['latitude'],
transform=ccrs.PlateCarree(), fontsize=12)
# The layout knows where everything should go, and things are standardized using
# the names of variables. So the layout pulls arrays out of `data` and plots them
# using `stationplot`.
simple_layout.plot(stationplot, data)
plt.show()
"""
Explanation: The payoff
End of explanation
"""
# Just winds, temps, and dewpoint, with colors. Dewpoint and temp will be plotted
# out to Fahrenheit tenths. Extra data will be ignored
custom_layout = StationPlotLayout()
custom_layout.add_barb('eastward_wind', 'northward_wind', units='knots')
custom_layout.add_value('NW', 'air_temperature', fmt='.1f', units='degF', color='darkred')
custom_layout.add_value('SW', 'dew_point_temperature', fmt='.1f', units='degF',
color='darkgreen')
# Also, we'll add a field that we don't have in our dataset. This will be ignored
custom_layout.add_value('E', 'precipitation', fmt='0.2f', units='inch', color='blue')
# Create the figure and an axes set to the projection
fig = plt.figure(figsize=(20, 10))
add_metpy_logo(fig, 1080, 290, size='large')
ax = fig.add_subplot(1, 1, 1, projection=proj)
# Add some various map elements to the plot to make it recognizable
ax.add_feature(cfeature.LAND)
ax.add_feature(cfeature.OCEAN)
ax.add_feature(cfeature.LAKES)
ax.add_feature(cfeature.COASTLINE)
ax.add_feature(cfeature.STATES)
ax.add_feature(cfeature.BORDERS, linewidth=2)
# Set plot bounds
ax.set_extent((-118, -73, 23, 50))
#
# Here's the actual station plot
#
# Start the station plot by specifying the axes to draw on, as well as the
# lon/lat of the stations (with transform). We also set the fontsize to 12 pt.
stationplot = StationPlot(ax, data['longitude'], data['latitude'],
transform=ccrs.PlateCarree(), fontsize=12)
# The layout knows where everything should go, and things are standardized using
# the names of variables. So the layout pulls arrays out of `data` and plots them
# using `stationplot`.
custom_layout.plot(stationplot, data)
plt.show()
"""
Explanation: or instead, a custom layout can be used:
End of explanation
"""
|
google-research/google-research | aptamers_mlpd/figures/Figure_3_Machine_learning_guided_aptamer_discovery_(submission).ipynb | apache-2.0 | import numpy as np
import pandas as pd
import plotnine as p9
"""
Explanation: Copyright 2021 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Overview
This notebook generates the estimated affinity values (and corresponding summary plots) for experimental seeds (Figure 3A) and different walking strategies (Figure 3B).
End of explanation
"""
# Fraction of expected bead coverage from sequencing to consider non-contamination
# For example, a tolerated bead fraction of 0.2 means that if, based on read
# depth and number of beads, there are 100 reads expected per bead, then
# sequences with fewer than 20 reads would be excluded from analysis.
TOLERATED_BEAD_FRAC = 0.2
# Ratio cutoff between positive and negative pools to count as being real.
# The ratio is normalized by read depth, so a ratio of 0.5 means a sequence has
# at least the same normalized read depth in the positive pool as in the
# negative pool. So, as a toy example, if the
# positive pool had 100 reads total and the negative pool had 200 reads total,
# then a sequence with 5 reads in the positive pool and 10 reads in the
# negative pool would have a ratio of 0.5.
POS_NEG_RATIO_CUTOFF = 0.5
# Minimum required reads (when 0 it uses only the above filters)
MIN_READ_THRESH = 0
#@title MLPD Data Parameters
apt_screened_list = [ 3283890.016, 6628573.952, 5801469.696, 3508412.512]
apt_collected_list = [12204, 50353, 153845, 201255]
seq_input = [200000] * 4
conditions = ['round1_very_positive',
'round1_high_positive',
'round1_medium_positive',
'round1_low_positive']
flags = ['round1_very_flag', 'round1_high_flag', 'round1_medium_flag',
'round1_low_flag']
stringency = ['Very High', 'High', 'Medium', 'Low']
mlpd_param_df = pd.DataFrame.from_dict({'apt_screened': apt_screened_list,
'apt_collected': apt_collected_list,
'seq_input': seq_input,
'condition': conditions,
'condition_flag': flags,
'stringency': stringency})
mlpd_param_df
"""
Explanation: Parameters used in manuscript
End of explanation
"""
# MLPD sequences with stringency / Kd
# Upload mlpd_input_data_manuscript.csv
from google.colab import files
uploaded = files.upload()
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
with open('mlpd_input_data_manuscript.csv') as f:
mlpd_df = pd.read_csv(f)
"""
Explanation: Load in Data
End of explanation
"""
def generate_cutoffs_via_PD_stats(df, col, apt_screened, apt_collected, seq_input,
tolerated_bead_frac, min_read_thresh):
"""Use the experimental parameters to determine sequences passing thresholds.
Args:
df: Pandas dataframe with experiment results. Must have columns named
after the col function parameter, containing the read count, and a
column 'sequence'.
col: The string name of the column in the experiment dataframe with the
read count.
apt_screened: The integer number of aptamers screened, from the experiment
parameters.
apt_collected: The integer number of aptamers collected, from the experiment
parameters.
seq_input: The integer number of unique sequences in the sequence library
used to construct the aptamer particles.
tolerated_bead_frac: The float tolerated bead fraction threshold. In other
words, the sequencing depth required to keep a sequence, in units of
fractions of a bead based on the average expected read depth per bead.
min_read_thresh: The integer minimum number of reads that a sequence
must have in order not to be filtered.
Returns:
Pandas series of the sequences from the dataframe that pass filter.
"""
expected_bead_coverage = apt_screened / seq_input
tolerated_bead_coverage = expected_bead_coverage * tolerated_bead_frac
bead_full_min_sequence_coverage = (1. / apt_collected) * tolerated_bead_coverage
col_sum = df[col].sum()
# Look at sequenced counts calculated observed fraction of pool and raw count.
seqs = df[((df[col]/col_sum) > bead_full_min_sequence_coverage) & # Pool frac.
(df[col] > min_read_thresh) # Raw count
].sequence
return seqs
def generate_pos_neg_normalized_ratio(df, col_prefix):
"""Adds fraction columns to the dataframe with the calculated pos/neg ratio.
Args:
df: Pandas dataframe, expected to have columns [col_prefix]_positive and
[col_prefix]_negative contain read counts for the positive and negative
selection conditions, respectively.
col_prefix: String prefix of the columns to use to calculate the ratio.
For example 'round1_very_positive'.
Returns:
The original dataframe with three new columns:
[col_prefix]_positive_frac contains the fraction of the total positive
pool that is this sequence.
[col_prefix]_negative_frac contains the fraction of the total negative
pool that is this sequence.
[col_prefix]_pos_neg_ratio: The read-depth normalized fraction of the
sequence that ended in the positive pool.
"""
col_pos = col_prefix + '_' + 'positive'
col_neg = col_prefix + '_' + 'negative'
df[col_pos + '_frac'] = df[col_pos] / df[col_pos].sum()
df[col_neg + '_frac'] = df[col_neg] / df[col_neg].sum()
df[col_prefix + '_pos_neg_ratio'] = df[col_pos + '_frac'] / (
df[col_pos + '_frac'] + df[col_neg + '_frac'])
return df
def build_seq_sets_from_df (input_param_df, input_df, tolerated_bead_frac,
pos_neg_ratio, min_read_thresh):
"""Sets flags for sequences based on whether they clear stringencies.
This function adds a column 'seq_set' to the input_param_df (one row per
stringency level of a particle display experiment) containing all the
sequences in the experiment that passed that stringency level in the
experiment.
Args:
input_param_df: Pandas dataframe with experimental parameters. Expected
to have one row per stringency level in the experiment and
columns 'apt_screened', 'apt_collected', 'seq_input', 'condition', and
'condition_flag'.
input_df: Pandas dataframe with the experimental results (counts per
sequence) for the experiment covered in the input_param_df. Expected
to have a [col_prefix]_pos_neg_ratio column for each row of the
input_param_df (i.e. each stringency level).
tolerated_bead_frac: Float representing the minimum sequence depth, in
units of expected beads, for a sequence to be used in analysis.
pos_neg_ratio: The threshold for the pos_neg_ratio column for a sequence
to be used in the analysis.
min_read_thresh: The integer minimum number of reads for a sequence to
be used in the analysis (not normalized, a straight count.)
Returns:
Nothing.
"""
for _, row in input_param_df.iterrows():
# Get parameters to calculate bead fraction.
apt_screened = row['apt_screened']
apt_collected = row['apt_collected']
seq_input = row['seq_input']
condition = row['condition']
flag = row['condition_flag']
# Get sequences above tolerated_bead_frac in positive pool.
tolerated_bead_frac_seqs = generate_cutoffs_via_PD_stats(
input_df, condition, apt_screened, apt_collected, seq_input,
tolerated_bead_frac, min_read_thresh)
# Intersect with seqs > normalized positive sequencing count ratio.
condition_pre = condition.split('_positive')[0]
ratio_col = '%s_pos_neg_ratio' % (condition_pre)
pos_frac_seqs = input_df[input_df[ratio_col] > pos_neg_ratio].sequence
seqs = set(tolerated_bead_frac_seqs) & set(pos_frac_seqs)
input_df[flag] = input_df.sequence.isin(set(seqs))
"""
Explanation: Helper functions
End of explanation
"""
def set_stringency_level_mlpd (row):
"""Returns the highest bin for MLPD.
Args:
row: A row from the MLPD experiment results. Expected
to have columns: 'round1_very_flag',
'round1_high_flag', 'round1_medium_flag', 'round1_low_flag' indicating
whether the sequence has passed the stringency threshold for each of
those conditions.
Returns:
Integer from 0-4 indicating the highest stringency level passed by this
sequence, or -1 to indicate conflicting information (for example passing
the high stringency threshold but missing the medium stringency
threshold).
"""
v_flag = row.round1_very_flag
h_flag = row.round1_high_flag
m_flag = row.round1_medium_flag
l_flag = row.round1_low_flag
if v_flag and h_flag and m_flag and l_flag:
return 4
elif h_flag and m_flag and l_flag and not v_flag:
return 3
elif m_flag and l_flag and not v_flag and not h_flag:
return 2
elif not m_flag and l_flag and not v_flag and not h_flag:
return 1
elif not m_flag and not l_flag and not v_flag and not h_flag:
return 0
else:
return -1
"""
Explanation: Affinity Helper Function
End of explanation
"""
def make_stacked_boxplot(categories, models, stringency_levels, counts, fractions, y='fraction'):
"""Creates the stacked barplot showing relative enrichment.
The values of categories, models, stringency_levels, counts, and fractions
are the parallel lists created using construct_category_fractions_for_stacked_hists.
Each position in the list represents one segment of the stacked bar chart,
for example, the sequences that started from random seeds, were walked
with the Count model and have stringency_level 3 (=high, Kd < 32nM).
Args:
categories: String list of seed categories, corresponding to the three grey
shaded areas. These are one of 'Random Seeds', 'Expt. Seeds' and
'ML Seeds'.
models: String list of models used to walk the seeds. These represent the
bars within each seed category, i.e. 'Random', 'Counts', 'SuperBin',
'Binned'.
stringency_levels: Integer list of stringency levels, 0-4, representing
Kd levels.
counts: Integer list of the number of sequences in this group of sequences.
fractions: Float list of the fraction of sequences in this group out of
the total sequences at all stringency levels. (For example, the fraction
of sequences from Random Seeds walked by the Counts model that are at
stringency level 2 as a fraction of all the sequences generated by taking
Random Seeds and walking them with the Counts model.)
y: The string name of the column to plot on the y axis.
Returns:
p: the ggplot figure
fishplot_df: The Pandas dataframe with the calculated values.
"""
fishplot_df = pd.DataFrame.from_dict({'category': categories,
'walking_model': models,
'stringency_level': stringency_levels,
'count': counts,
'fraction': fractions})
fishplot_df['log_count'] = np.log2(fishplot_df['count'])
fishplot_df['log_fraction'] = np.log10(fishplot_df['fraction'])
fishplot_df['category_cat'] = pd.Categorical(fishplot_df['category'],
categories=['Random Seeds', 'ML Seeds', 'Expt. Seeds'][::-1],
ordered=True)
fishplot_df['model_cat'] = pd.Categorical(fishplot_df['walking_model'],
categories=['SuperBin',
'Binned',
'Counts',
'Random'][::-1],
ordered=True)
def stringency_to_kd(val):
super_bin_to_kd_map = dict(zip(range(5),
['> 512nM', '< 512 nM', '< 128 nM',
'< 32 nM', '< 8 nM']))
return super_bin_to_kd_map[val]
# For plotting skip the aptamers which did not pass any stringency values
fishplot_df = fishplot_df[fishplot_df.stringency_level > 0]
fishplot_df['$K_d$'] = fishplot_df['stringency_level'].apply(stringency_to_kd)
fishplot_df['$K_d$'] = pd.Categorical(
fishplot_df['$K_d$'],
categories=['< 512 nM', '< 128 nM', '< 32 nM', '< 8 nM'], ordered=True)
p = (p9.ggplot(fishplot_df,
p9.aes(x='model_cat', y=y, alpha='$K_d$', fill='model_cat')) +
p9.geom_col(position='stack', fill='#32CD32') + p9.coord_flip() +
p9.facet_grid(['category_cat', '.']) +
p9.theme_minimal() +
p9.theme(axis_text_x=p9.element_text(rotation=90, hjust=1),
figure_size=[5, 5], line=p9.element_line(color='white')))
p = (p9.ggplot(fishplot_df,
p9.aes(x='model_cat', y=y, alpha='$K_d$', fill='model_cat')) +
p9.geom_col(position='stack') + p9.coord_flip() +
p9.facet_grid(['category_cat', '.']) +
p9.theme(axis_text_x=p9.element_text(rotation=90, hjust=1),
figure_size=[5, 5], line=p9.element_line(color='white')) +
p9.scale_fill_manual(['#202124', '#174ea6', '#e37400', '#681da8', '#018774'][::-1]) +
p9.ylim([0.0, 0.2])
)
return p, fishplot_df
def construct_category_fractions_for_stacked_hists (
input_df, col, col_cat, cat_label, groupby_col, base_model,
categories, models, stringency_levels, counts, fractions, base_models
):
'''For each subcategory calculate the fraction of data at each Kd.
This function adds to the parallel lists of categories, models,
stringency_levels, counts, fractions, and base_models that are later used
in creating the stacked bar plots.
For example, col_cat=['RANDOM_SEEDS_WALKED'], col='sequence_set',
cat_label='Random Seeds', groupby_col='walking_model', and base_model=None
would limit the input dataframe of sequences to those sequences where the
sequence set is 'RANDOM_SEEDS_WALKED' (in other words, the set of sequences
that started from random sequences and were walked by an ML model). These
sequences would then be grouped by the walking model column plus the
stringency level (i.e. 0-4), i.e. the model used to walk these sequences
and the affinity of the resulting sequence. For each group, the results
would be added to each of these parallel lists. So in this case, the
category would always be 'Random Seeds' (since that's the cat_label),
and the model would be the walking model, and the stringency level would
be the stringency level of the group. So far this is all basic categories.
The next lists are the actual values used in plotting. The number of sequences
in the group gets added to the counts list, and the number of sequences in
this group relative to all those in this groupby_col is added to the fractions
list (in other words, what is the fraction of random seeds walked by the
Counts model that ended up at stringency level 2 compared to all the sequences
generated by walking all random seeds with the Counts model.)
Args:
input_df: (pd.DataFrame) Dataframe for grouping.
col: (str) Column id to select when subsetting dataframe.
col_cat: (list) Category within col to match when subsetting.
cat_label: (str) New label to call category.
groupby_col: (str) Column id to group remaining subgroup on.
base_model: (str or None) Optional base model. Not currently being used.
categories: (list) List of categories to append to.
models: (list) List of models to append to.
stringency_levels: (list) List of stringency levels to append to.
counts: (list) List of counts to append to.
fractions: (list) List of fractions to append to.
base_models: (list) List of base_models to append to.
'''
# Select out all unambiguous sequences in this set
sub_df = input_df[(input_df[col].isin(col_cat)) &
(input_df.stringency_level >= 0)]
# Create summary stats by stringency level
for (groupby_cat, stringency), grp in sub_df.groupby([groupby_col,
'stringency_level']):
categories.append(cat_label)
models.append(groupby_cat)
stringency_levels.append(stringency)
counts.append(len(grp))
denominator = len(sub_df[(sub_df[col].isin(col_cat)) &
(sub_df[groupby_col] == groupby_cat)])
fractions.append(float(len(grp)) / denominator)
base_models.append(base_model)
"""
Explanation: Stacked Bar Plot Helpers
End of explanation
"""
#@title Add positive_frac / (positive_frac + negative_frac) col to df
for col_prefix in ['round1_very', 'round1_high', 'round1_medium', 'round1_low']:
mlpd_df = generate_pos_neg_normalized_ratio(mlpd_df, col_prefix)
#@title Measure consistency of particle display data when increasing stringency thresholds and set stringency levels
build_seq_sets_from_df (mlpd_param_df, mlpd_df,
TOLERATED_BEAD_FRAC,
POS_NEG_RATIO_CUTOFF, MIN_READ_THRESH)
mlpd_df['stringency_level'] = mlpd_df.apply(
lambda x: set_stringency_level_mlpd(x), axis=1)
"""
Explanation: Convert stringencies to Kd
End of explanation
"""
# First create 2 dataframes for the 25 and everything else
mlpd_high_count_df = mlpd_df[mlpd_df['multiple_copy_oligo_in_library']].copy()
mlpd_nonhigh_count_df = mlpd_df[~mlpd_df['multiple_copy_oligo_in_library']]
# Confirm that observed high count library is 400 fold greater than rest of library
print "Mean fold increase over library: ", mlpd_high_count_df.library.mean() / mlpd_nonhigh_count_df.library.mean()
print "Median fold increase over library: ", mlpd_high_count_df.library.median() / mlpd_nonhigh_count_df.library.median()
# Multiply tolerated bead frac by the 400 copies ordered in the library
build_seq_sets_from_df (mlpd_param_df, mlpd_high_count_df,
TOLERATED_BEAD_FRAC*400,
POS_NEG_RATIO_CUTOFF, MIN_READ_THRESH)
mlpd_high_count_df['stringency_level'] = mlpd_high_count_df.apply(
lambda x: set_stringency_level_mlpd(x), axis=1)
# Join the two sublibraries back together
mlpd_df = pd.concat([mlpd_high_count_df, mlpd_nonhigh_count_df])
"""
Explanation: Special Handling of 25 sequences ordered at higher multiplicity
The particle display experiment requires a minimum amount of positive material for sequencing to succeed. (Otherwise it's easy to lose the pellet during PCR steps, for example.)
We did not know a priori if we would meet this minimum level with our generated aptamers, so we spiked in a set of 25 sequences at higher copy number (400-fold) that were known or expected to be very good aptamers. Here, we refine the affinity estimates for these 25 sequences to account for this increase in input copy number.
End of explanation
"""
mlpd_stringency_to_kd_map = dict(zip(
range(-1, 5),
['ambiguous', '> 512nM', '< 512 nM', '< 128 nM', '< 32 nM', '< 8 nM']))
mlpd_df['Kd'] = mlpd_df.stringency_level.apply(
lambda x: mlpd_stringency_to_kd_map[x])
"""
Explanation: Set stringency levels to rough Kd thresholds
End of explanation
"""
categories = []
models = []
stringency_levels = []
counts = []
fractions = []
base_models = []
construct_category_fractions_for_stacked_hists(
mlpd_df, 'sequence_set', ['RANDOM_SEEDS_WALKED'], 'Random Seeds', 'walking_model', None,
categories, models, stringency_levels, counts, fractions, base_models)
construct_category_fractions_for_stacked_hists(
mlpd_df, 'sequence_set', ['EXPT_SEEDS_WALKED'], 'Expt. Seeds', 'walking_model', None,
categories, models, stringency_levels, counts, fractions, base_models)
construct_category_fractions_for_stacked_hists(
mlpd_df, 'sequence_set', ['MODEL_INFERENCE_WALKED'], 'ML Seeds', 'walking_model', None,
categories, models, stringency_levels, counts, fractions, base_models)
"""
Explanation: Build plot components by adding in walks from random, experimental, and ML seeds
End of explanation
"""
# Only 398 of the 400 expt seed sequences were observed in the sequencing data
print 'Total sequences observed in MLPD sequencing : ', mlpd_df[mlpd_df['sequence_set'] == 'EXPT_SEEDS'].Kd.value_counts().sum()
# Extracting remaining 2 seeds from walks and set values to ambiguous (same as conflicting seeds in stringency map)
expt_seeds = mlpd_df[mlpd_df.sequence_set == 'EXPT_SEEDS_WALKED'].seed_seq.unique()
expt_seed_df = pd.DataFrame.from_dict({'sequence': expt_seeds})
mlpd_seed_df = expt_seed_df.merge(mlpd_df, on='sequence', how='left')
mlpd_seed_df['Kd'].fillna('ambiguous', inplace=True)
mlpd_seed_df.Kd.value_counts()
#3B
p, fishplot_df = make_stacked_boxplot(
categories, models, stringency_levels, counts, fractions, y='fraction')
p
"""
Explanation: Generate Figure Data/Plots
End of explanation
"""
|
kubeflow/code-intelligence | Issue_Embeddings/notebooks/05_EvaluateEmbeddings.ipynb | mit | import pandas as pd
import numpy as np
from random import randint
from matplotlib import pyplot as plt
import re
pd.set_option('max_colwidth', 1000)
df = pd.read_csv('https://storage.googleapis.com/issue_label_bot/k8s_issues/000000000000.csv')
df.labels = df.labels.apply(lambda x: eval(x))
df.head()
#remove target leakage from kubernetes which are the bot commands
df['body'] = df.body.apply(lambda x: re.sub('(/sig|/kind|/status/triage/|priority) \S+', '', str(x)))
"""
Explanation: Background
Unlike Issue-Label Bot which predicts generic bug, feature-request and question labels, we are attempting to build the capability to predict repo-specific labels. One of the primary challenges of doing this is a dearth of labeled examples for a particular repo. Therefore, we attempt to generate features via transfer learning from a language model trained over a large corpus of GitHub issues. These features are then fed downstream to a classifier with the goal of enabling the classifier to predict personalized issue labels based upon existing hand-labeled issues present in a repository.
As an initial test, we will evaluate the ability to predict sig/ labels on the Kubernetes/Kubernetes repo.
In order to measure the efficacy of these embeddings, we will use DataRobot as a benchmark to see if adding embeddings from transfer learning improves model performance relative to TF-IDF n-gram featurization of the text.
SQL Query In BigQuery
```sql
#standardSQL
SELECT *
FROM (
SELECT
updated_at
, MAX(updated_at) OVER (PARTITION BY url) as last_time
, FORMAT("%T", ARRAY_CONCAT_AGG(labels)) as labels
, repo, url, title, body, len_labels
FROM(
SELECT
TIMESTAMP(REGEXP_REPLACE(JSON_EXTRACT(payload, '$.issue.updated_at'), "\"", "")) as updated_at
, REGEXP_EXTRACT(JSON_EXTRACT(payload, '$.issue.url'), r'https://api.github.com/repos/(.*)/issues') as repo
, JSON_EXTRACT(payload, '$.issue.url') as url
-- extract the title and body removing parentheses, brackets, and quotes
, LOWER(TRIM(REGEXP_REPLACE(JSON_EXTRACT(payload, '$.issue.title'), r"\\n|\(|\)|\[|\]|#|\*||\"", ' '))) as title
, LOWER(TRIM(REGEXP_REPLACE(JSON_EXTRACT(payload, '$.issue.body'), r"\\n|\(|\)|\[|\]|#|\*||\"", ' '))) as body
, REGEXP_EXTRACT_ALL(JSON_EXTRACT(payload, "$.issue.labels"), ',"name\":"(.+?)","color') as labels
, ARRAY_LENGTH(REGEXP_EXTRACT_ALL(JSON_EXTRACT(payload, "$.issue.labels"), ',"name\":"(.+?)","color')) as len_labels
FROM githubarchive.month.20*
WHERE
_TABLE_SUFFIX BETWEEN '1601' and '1912'
and type="IssuesEvent"
)
WHERE
repo = 'kubernetes/kubernetes'
GROUP BY updated_at, repo, url, title, body, len_labels
)
WHERE last_time = updated_at and len_labels >= 1
```
The results of the above query can be downloaded as a csv file from this link:
https://storage.googleapis.com/issue_label_bot/k8s_issues/000000000000.csv
End of explanation
"""
def count_sig(l):
return(sum(['sig/' in x for x in l]))
from matplotlib.ticker import PercentFormatter
sig_counts = df.labels.apply(lambda x: count_sig(x))
plt.hist(sig_counts, weights=np.ones(len(sig_counts)) / len(sig_counts))
plt.gca().yaxis.set_major_formatter(PercentFormatter(1))
plt.title(f'Distribution of # of sig/ labels for kubernetes/kubernetes\n {len(sig_counts):,} issues pulled from GHArchive.')
plt.show()
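# Fraction of issues that carry two or more sig/ labels (a direct numeric
# complement to the histogram above; sig_counts was computed in that cell).
print(f'Fraction of issues with >= 2 sig/ labels: {(sig_counts >= 2).mean():.1%}')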
"""
Explanation: Explore The Data
Question from @cblecker
@Hamel Husain that's how often a PR/issue has two different sig labels on it?
End of explanation
"""
from collections import Counter
c = Counter()
for row in df.labels:
c.update(row)
print(f'There are {len(c.keys())} unique labels in kubernetes/kubernetes')
nsig = sum(['sig/' in x for x in list(c.keys())])
print(f"number of sig labels: {nsig}")
"""
Explanation: Count Labels
End of explanation
"""
c.most_common(50)
len([(k, c[k]) for k in c if c[k] >= 100])
"""
Explanation: Top 50 Labels
End of explanation
"""
sig_labels = [x for x in list(c.keys()) if 'sig/' in x]
for l in sig_labels:
print(f'{l}: {c[l]}')
"""
Explanation: Sig/ Labels
End of explanation
"""
min_freq = 30
def contains_sig(l):
if not l:
return False
else:
        # make sure there are at least min_freq (30) issues labeled with that value
return max(['sig/' in x and c[x] >=min_freq for x in l])
sig_df = df[df.labels.apply(lambda x: contains_sig(x))]
print(f'{sig_df.shape[0]:,} issues have sig/ labels')
sig_labels = [k for k in c.keys() if c[k] >= min_freq and 'sig/' in k]
print(f'{len(sig_labels)} sig labels that have at least {min_freq} issues')
# build an indicator matrix
indicator = []
for l in sig_df.labels.values:
zer = np.zeros(len(sig_labels))
mask = [sig_labels.index(x) for x in l if x in sig_labels]
zer[mask] = 1
indicator.append(zer[None, :])
indicator_matrix = pd.DataFrame(np.concatenate(indicator, axis=0), columns=sig_labels).astype(int)
corr_grid = indicator_matrix.T.dot(indicator_matrix)
for i, x in enumerate(corr_grid):
corr_grid.iloc[i][i:] = 0
import seaborn as sns
import matplotlib.pyplot as plt
#cmap = sns.diverging_palette(220, 10, as_cmap=True)
#normalize correlation grid
for label in corr_grid:
corr_grid.loc[label] = corr_grid.loc[label] / c[label]
plt.figure(figsize=(16, 14))
plt.title('Co-Occurence Matrix')
sns.heatmap(corr_grid, square=True, vmin=0, vmax=.4, mask=corr_grid<=0.05)
"""
Explanation: See correlation among labels
End of explanation
"""
def part_assign():
i = randint(1, 10)
if i <=5:
return i
else:
return 6
combined_sig_df = pd.concat([sig_df.reset_index(), indicator_matrix.reset_index()], axis=1)
combined_sig_df['part'] = combined_sig_df.repo.apply(lambda x: part_assign())
combined_sig_df.to_hdf('combined_sig_df.hdf', 'df')
combined_sig_df = pd.read_hdf('combined_sig_df.hdf')
#! pip install datarobot
import datarobot as dr
from datarobot import UserCV
from fastai.core import parallel
from datarobot import Blueprint
ucv = UserCV(user_partition_col='part', cv_holdout_level=6, seed=123)
dr.Client(token='something-something', endpoint='https://app.datarobot.com/api/v2')
def create_dr_proj(label):
temp_df = combined_sig_df[['title', 'body', 'part', label]]
proj = dr.Project.create(sourcedata=temp_df,
project_name=label,
)
proj.set_target(label,
positive_class=1,
partitioning_method=ucv,
target_type='Binary',
mode=dr.AUTOPILOT_MODE.MANUAL,
worker_count=9,
max_wait=600000)
bps = proj.get_blueprints()
bp = [b for b in bps if 'Nystroem' in str(b)][0]
proj.train(bp, sample_pct=49.8)
proj.unlock_holdout()
return proj
proj_list = []
for i, label in enumerate(sig_labels):
try:
print(f'creating project {i}: {label}')
proj = create_dr_proj(label)
proj_list.append(proj)
except:
pass
predictions = []
for proj in proj_list:
print(f'getting predictions for holdout set for {str(proj)}')
label = proj.target.replace('_', '-')
temp_df = combined_sig_df[['title', 'body', 'part', label]]
temp_df = temp_df[temp_df.part == 6]
ds = proj.upload_dataset(temp_df)
m = proj.get_models()[0]
predict_job = m.request_predictions(ds.id)
yhat = predict_job.get_result_when_complete()
predictions.append({label: yhat['positive_probability']})
result = {}
for d in predictions:
result.update(d)
baseline_holdout_predictions_df = pd.DataFrame(result)
baseline_holdout_predictions_df.columns = ['p_'+x for x in baseline_holdout_predictions_df.columns]
assert len(baseline_holdout_predictions_df) == len(combined_sig_df[combined_sig_df.part == 6])
predictions_df = pd.concat([combined_sig_df[combined_sig_df.part == 6].reset_index(drop=True),
baseline_holdout_predictions_df.reset_index(drop=True)], axis=1)
predictions_df['version'] = 'baseline'
predictions_df.to_hdf('prediction_baseline_df.hdf', 'df')
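# For reference, a rough *local* text-only baseline can be sketched without
# DataRobot. This is only an illustrative sketch under stated assumptions
# (scikit-learn is available; it reuses the 'part' column with part 6 as the
# holdout and the binary label columns built above); it is not the pipeline
# actually used in this notebook.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def local_tfidf_auc(label):
    train = combined_sig_df[combined_sig_df.part != 6]
    test = combined_sig_df[combined_sig_df.part == 6]
    vec = TfidfVectorizer(ngram_range=(1, 2), min_df=5)
    text_train = train.title.fillna('') + ' ' + train.body.fillna('')
    text_test = test.title.fillna('') + ' ' + test.body.fillna('')
    clf = LogisticRegression()
    clf.fit(vec.fit_transform(text_train), train[label])
    return roc_auc_score(test[label], clf.predict_proba(vec.transform(text_test))[:, 1])

# Example: local_tfidf_auc(sig_labels[0])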
"""
Explanation: Obtain Baseline With Automated Machine Learning
End of explanation
"""
import pandas as pd
from inference import InferenceWrapper, pass_through
import os
import torch
from torch.cuda import empty_cache
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"]="0"
wrapper = InferenceWrapper(model_path='/ds/lang_model/models_uxgcl1e1/',
model_file_name='trained_model_uxgcl1e1.hdf')
empty_cache()
combined_sig_df = pd.read_hdf('combined_sig_df.hdf')
# text = wrapper.process_df(combined_sig_df)
# text.to_hdf('textlm_df.hdf')
text = pd.read_hdf('textlm_df.hdf')
assert text['text'].isna().sum() == 0
features = []
from tqdm.auto import tqdm
with torch.no_grad():
for t in tqdm(text['text'].values):
feat = wrapper.get_pooled_features(t).cpu()
features.append(feat)
empty_cache()
feat_matrix = torch.cat(features, dim=0).numpy()
feat_matrix = feat_matrix[:, :1600]
feat_df = pd.DataFrame(feat_matrix)
feat_df.columns = ['f_' + str(x) for x in feat_df.columns]
feat_df.to_csv('feat_df.csv', index=False)
feat_df = pd.read_csv('feat_df.csv')
lm_combined_df = pd.concat([combined_sig_df.reset_index(drop=True),
feat_df.reset_index(drop=True)], axis=1)
import datarobot as dr
from datarobot import UserCV
ucv = UserCV(user_partition_col='part', cv_holdout_level=6, seed=123)
dr.Client(token='something', endpoint='https://app.datarobot.com/api/v2')
def create_dr_proj(label):
temp_df = lm_combined_df[['title', 'body', 'part', label] + list(feat_df.columns)]
proj = dr.Project.create(sourcedata=temp_df,
project_name='lm_'+label,
)
proj.set_target(label,
positive_class=1,
partitioning_method=ucv,
target_type='Binary',
mode=dr.AUTOPILOT_MODE.QUICK,
worker_count=9,
max_wait=600000)
proj.unlock_holdout()
return proj
proj_list_lm = []
for i, label in enumerate(sig_labels):
try:
print(f'creating project {i}: lm_{label}')
proj = create_dr_proj(label)
proj_list_lm.append(proj)
except:
pass
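# An analogous *local* check with the language-model features (again only a
# sketch, not the DataRobot workflow used in this notebook): fit the same
# simple classifier on the pooled embedding columns and compare holdout AUC
# against the text-only sketch earlier.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def local_embedding_auc(label):
    train = lm_combined_df[lm_combined_df.part != 6]
    test = lm_combined_df[lm_combined_df.part == 6]
    feat_cols = list(feat_df.columns)
    clf = LogisticRegression()
    clf.fit(train[feat_cols], train[label])
    return roc_auc_score(test[label], clf.predict_proba(test[feat_cols])[:, 1])

# Example: local_embedding_auc(sig_labels[0])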
"""
Explanation: Get Embeddings and Repeat
End of explanation
"""
import datarobot as dr
from datarobot import UserCV
dr.Client(token='something-something', endpoint='https://app.datarobot.com/api/v2')
def get_metrics(modelobj):
return modelobj.metrics['AUC']['holdout']
projects = [p for p in dr.Project.list() if p.project_name.startswith('lm_')]
'hamel'.replace('am', 'gg')
label = []
category = []
auc = []
for proj in projects:
print(f'getting metrics for {proj.project_name}')
models = [m for m in proj.get_models() if m.sample_pct > 45]
baseline_model = sorted([m for m in models if m.featurelist_name == 'text only'], key=get_metrics, reverse=True)[0]
deep_model = sorted([m for m in models if m.featurelist_name != 'text only'], key=get_metrics, reverse=True)[0]
baseline_auc = get_metrics(baseline_model)
deep_auc = get_metrics(deep_model)
label.extend([proj.project_name.replace('lm_', '')] * 2)
category.extend(['baseline', 'deep'])
auc.extend([baseline_auc, deep_auc])
import pandas as pd
compare_df = pd.DataFrame({'label': label,
'category': category,
'auc': auc})
pivot = compare_df.pivot(index='label', columns='category', values='auc')
pivot['winner'] = pivot.apply(lambda x: 'deep' if x.deep > x.baseline else 'baseline', axis=1)
pivot['abs diff'] = pivot.apply(lambda x: abs(x.deep - x.baseline), axis=1)
pivot['label count'] = [c[x] for x in pivot.index.values]
pivot.sort_values(by=['label count'], ascending=False)
wrapper
len(wrapper.learn.data.vocab.itos)
pivot.to_hdf('pivot_df.hdf', 'df')
import pandas as pd
score_df = pd.read_hdf('score_df.hdf')
score_df.set_index('label', inplace=True)
score_df.columns = ['deep2']
new_pivot = pivot.join(score_df, how='left')[['baseline', 'deep', 'deep2', 'label count']]
def winner(x):
if x.baseline > x.deep and x.baseline > x.deep2:
return 'baseline'
elif x.deep > x.deep2:
return 'deep'
elif x.deep2 > x.deep:
return 'deep2'
new_pivot.dropna(inplace=True)
new_pivot['winner'] = new_pivot.apply(lambda x: winner(x), axis=1)
new_pivot['baseline minus best deep'] = new_pivot.apply(lambda x: x.baseline - max(x.deep, x.deep2), axis=1)
new_pivot['abs diff'] = new_pivot.apply(lambda x: abs(x['baseline minus best deep']), axis=1)
new_pivot.sort_values('label count', ascending=False)
new_pivot.mean()
"""
Explanation: Compare Transfer Learning vs. Regular Methods
End of explanation
"""
|
jsnajder/StrojnoUcenje | notebooks/SU-2015-0-SciPy.ipynb | cc0-1.0 | 10
_
?
%quickref
"""
Explanation: University of Zagreb<br>
Faculty of Electrical Engineering and Computing
Machine Learning
<a href="http://www.fer.unizg.hr/predmet/su">http://www.fer.unizg.hr/predmet/su</a>
Academic year 2015/2016
Notebook 0: Introduction to SciPy
(c) 2015 Jan Šnajder
<i>Version: 0.5 (2015-10-15)</i>
<p style="color:red">INCOMPLETE</p>
1. SciPy stack
Main (core) packages:
* Python
* NumPy
* the SciPy library
* IPython
* Matplotlib
* SymPy
* pandas
* nose
Additional packages:
* Cython
* SciKits packages: scikit-learn, scikit-multilearn, scikit-image, ...
2. IPython notebook
Cells are evaluated with SHIFT+ENTER
Markdown text with special formatting and code in $\LaTeX$: $f(\mathbf{x}) = \sum_{i=1}^n \ln \frac{P(x)P(y)}{P(x, y)}$
End of explanation
"""
x = 5
x
print(x)
print x
type(x)
(x + 1) ** 2
x += 1; x
?x
del x
x
X=7; varijabla_s_vrlo_dugackim_imenom = 747
x=1; y=-2
x==y
(x==y)==False
x!=y
x==y or (x>0 and not y>0)
z = 42 if x==y else 66
z
moj_string = 'ringe ringe'
'hopa' + ' ' + "cupa"
moj_string += ' raja'; moj_string
len(moj_string)
print "X=%0.2f y=%d, s='%s'" % (x, y, moj_string)
1/2
1/2.0
1/float(2)
round(0.5)
"""
Explanation: More: https://ipython.org/ipython-doc/3/interactive/tutorial.html
3. Python
3.1. Variables and values
End of explanation
"""
import math
math.sqrt(68)
math.exp(1)
math.log(_)
math.log(100, 2)
"""
Explanation: 3.2. Mathematical functions
End of explanation
"""
xs = [5, 6, 2, 3] # Stvara listu
xs
xs[0] # Zero-based indeksiranje
xs[-1] # Negativni indeksi broje od kraja liste
xs[1] = 100 # Ažuriranje liste
xs
xs[1] = 'foo' # Liste mogu biti heterogene
xs
xs[3] = [1,2]
xs
xs.append(x) # Dodaje na kraj
xs
xs + [77, 88]
xs.extend([77, 88]); xs
xs.pop() # Skida zadnji element liste
xs
xs[0:2]
xs[1:]
xs[:3]
xs[:]
xs[:-2] # Sve osim zadnja dva
xs[0:2] = [1,2]
xs
range(10)
range(1, 10)
range(0, 51, 5)
for x in range(5):
print x
for x in xs: print x
for ix, x in enumerate(range(0, 51, 5)):
print ix, x
xs = []
for x in range(10):
xs.append(x ** 2)
xs
[x ** 2 for x in range(10)]
[x ** 2 for x in range(10) if x % 2 == 0]
[(x, x ** 2) for x in range(10)]
zip([1, 2, 3], [4, 5, 6])
zip(*[(1, 4), (2, 5), (3, 6)])
xs, ys = zip(*[(1, 4), (2, 5), (3, 6)])
xs
map(lambda x : x + 1, xs)
[ x + 1 for x in xs ]
ys = []
for x in xs :
ys.append(x + 1)
ys
sum(ys)
"""
Explanation: More: https://docs.python.org/2/library/math.html
3.3. Lists
End of explanation
"""
d = {'zagreb' : 790017, 'split' : 178102, 'rijeka' : 128624}
d['split']
d['osijek']
d.get('osijek', 0)
d['osijek'] = 108048; d
'rijeka' in d
d['zagreb'] = 790200; d
del d['rijeka']; d
"""
Explanation: 3.4. Dictionary (map)
End of explanation
"""
for grad in d:
print 'Grad %s ima %d stanovnika' % (grad, d[grad])
"""
Explanation: Iterating over a dictionary:
End of explanation
"""
for grad, stanovnici in d.iteritems():
print 'Grad %s ima %d stanovnika' % (grad, stanovnici)
"""
Explanation: Iterating over keys and values:
End of explanation
"""
d2 = {'zagreb' : {'trešnjevka' : 120240, 'centar' : 145302}}
d2 ['zagreb']['trešnjevka']
"""
Explanation: Nested dictionaries:
End of explanation
"""
def inc(x): return x + 1
def sign(x):
if x > 0:
return 'pozitivno'
elif x < 0:
return 'negativno'
else:
return 'nula'
for x in [-1, 0, 1]:
print sign(x)
"""
Explanation: 3.5. Functions
End of explanation
"""
def broj_stanovnika(grad, godina=2015):
if grad in d:
return d[grad] + round((godina - 2015) * 10000 * (-1.2))
else:
raise ValueError('Nepoznat neki grad')
broj_stanovnika('zagreb')
broj_stanovnika('zagreb', godina=2020)
broj_stanovnika('zadar')
"""
Explanation: Default arguments:
End of explanation
"""
class RegistarStanovnika:
# Konstruktor
def __init__(self, drzava, d):
self.drzava = drzava # Varijabla instance (drugačija za svaku instancu)
self.d = d
prirast = -1.2 # Varijabla klase (dijele ju sve instance)
# Metoda
def broj_stanovnika(self, grad, godina=2015):
if grad in self.d:
return self.d[grad] + round((godina - 2015) * 10000 * self.prirast)
else:
raise ValueError('Nepoznat neki grad')
def ukupno_stanovnika(self):
return sum(self.d.values())
reg = RegistarStanovnika('Hrvatska', {'zagreb' : 790017, 'split' : 178102, 'rijeka' : 128624})
reg.broj_stanovnika('split')
reg.ukupno_stanovnika()
"""
Explanation: 3.6. Classes
End of explanation
"""
import numpy as np
?np
np.__version__
"""
Explanation: 4. Numpy
End of explanation
"""
a = np.array([1, 2, 3])
a
print a
type(a)
a = np.array([1, 2, 3], dtype=np.float64)
a
a[0]
a[0] = 100; a
a.shape
len(a)
np.array([1,'a',2])
"""
Explanation: 4.1. Arrays
One-dimensional array (a rank-1 array):
End of explanation
"""
m = np.array([[1,2,3],[4,5,6]])
print m
m[1]
m[1,1]
m[1][1]
m.shape
m2 = np.array([[1,2,3],[4,5]])
print m2
"""
Explanation: Matrix (a two-dimensional array, i.e., a rank-2 array):
End of explanation
"""
print m
m[:,1]
m[0,1:3]
m[1,:2] = [77, 78]
m
"""
Explanation: Slicing:
End of explanation
"""
m[:,0] # daje polje ranga 1
m[:,0:1] # daje polje ranga 2
"""
Explanation: Note the difference:
End of explanation
"""
t = np.array([[[1,2],[3,4]],[[4,5],[6,7]]])
t.shape
t[0,1,1]
t[0]
t[0,:,1]
"""
Explanation: Three-dimensional array (a rank-3 tensor):
End of explanation
"""
np.zeros((5,5))
np.ones((3,1))
np.full((5,5), 55)
np.eye(6)
np.random.random((4,4))
np.arange(1, 10)
np.arange(1, 10, 2)
np.linspace(1, 10, 5)
np.linspace(1, 10)
"""
Explanation: 4.2. Creating arrays
End of explanation
"""
a = np.array([[1,2], [3, 4], [5, 6]]); a
a[0,1]
a[[0,2]] # Nije isto kao a[0,2] !
a[[0,1,2], [0,1,0]] # Isto kao: np.array([a[0,0], a[1,1], a[2,0]])
"""
Explanation: More: http://docs.scipy.org/doc/numpy/reference/routines.array-creation.html
4.3. Advanced indexing
Indexing with an array of indices:
End of explanation
"""
a
bool_ix = a > 2
bool_ix
a[bool_ix]
a[a > 2]
"""
Explanation: Indexing with a Boolean array:
End of explanation
"""
x = np.array([[1, 2], [3, 4]])
v = np.array([1, 2])
print x
x + v
np.ones((2,2,3)) * 5
"""
Explanation: More: http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html
4.4. Broadcasting and stacking
Broadcasting:
End of explanation
"""
v
np.vstack([v, v])
np.vstack([x, x])
np.vstack((v, x))
np.hstack((v, v))
np.hstack((x, x))
np.hstack((v, x))
np.column_stack((v, x))
x
np.dstack((x, x))
np.shape(_)
"""
Explanation: Stacking:
End of explanation
"""
m = np.array([[ 1, 2, 3], [77, 78, 6]])
m.reshape(3, 2)
"""
Explanation: Reshaping an array:
End of explanation
"""
x = np.array([[1,2],[3,4]])
y = np.array([[5,6],[7,8]])
print x; print y
"""
Explanation: 4.5. Array operations (vector and matrix operations)
End of explanation
"""
x + y
x - y
x / 2.0
x.dtype
(x/2.0).dtype
x * y
x.dtype='float64'
y.dtype='float64'
x / y
np.sqrt(x)
"""
Explanation: Element-wise operations:
End of explanation
"""
x = np.array([[1,2],[3,4]])
y = np.array([[5,6],[7,8]])
v = np.array([1,2])
w = np.array([5,3])
"""
Explanation: Vector/matrix operations:
End of explanation
"""
print v.dot(w)
print w.dot(v)
print np.dot(v, w)
"""
Explanation: Scalar (inner, dot) product of two vectors:
$
\begin{pmatrix}
1 & 2 \
\end{pmatrix}
\cdot
\begin{pmatrix}
5\
3\
\end{pmatrix}
= 11
$
End of explanation
"""
x.dot(v)
np.dot(x, v)
"""
Explanation: Matrix-vector product:
$
\begin{pmatrix}
1 & 2 \
3 & 4 \
\end{pmatrix}
\cdot
\begin{pmatrix}
1\
2\
\end{pmatrix}
=
\begin{pmatrix}
5\
11\
\end{pmatrix}
$
End of explanation
"""
v.dot(x)
np.dot(v,x)
"""
Explanation: Vector-matrix product:
$
\begin{pmatrix}
1 & 2\
\end{pmatrix}
\cdot
\begin{pmatrix}
1 & 2 \
3 & 4 \
\end{pmatrix}
=
\begin{pmatrix}
7 & 10\
\end{pmatrix}
$
End of explanation
"""
x.dot(y)
np.dot(x, y)
"""
Explanation: Note that NumPy makes no distinction here between column vectors and row vectors.
Matrix-matrix product:
$
\begin{pmatrix}
1 & 2\
3 & 4\
\end{pmatrix}
\cdot
\begin{pmatrix}
5 & 6 \
7 & 8 \
\end{pmatrix}
=
\begin{pmatrix}
19 & 22\
43 & 50\
\end{pmatrix}
$
End of explanation
"""
np.outer(v, w)
"""
Explanation: Outer product of two vectors:
$
\begin{pmatrix}
1\
2\
\end{pmatrix}
\times
\begin{pmatrix}
5 \
3 \
\end{pmatrix}
=
\begin{pmatrix}
1\
2\
\end{pmatrix}
\cdot
\begin{pmatrix}
5 & 3\
\end{pmatrix}
=
\begin{pmatrix}
5 & 3 \
10 & 6 \
\end{pmatrix}
$
End of explanation
"""
x = np.array([0, 2, 4, 1])
np.max(x)
np.argmax(x)
"""
Explanation: Other operations:
End of explanation
"""
x = np.random.random(10); x
np.mean(x)
np.median(x)
np.var(x)
np.std(x)
x = np.array([1, 2, np.nan])
np.mean(x)
np.nanmean(x)
np.ptp(x)
X = np.array([[1,2],[3,4]])
print X
np.mean(X)
np.mean(X, axis=0)
np.cov(X)
x = np.random.random(10000); x
np.histogram(x)
"""
Explanation: 4.6. Statistical functions
End of explanation
"""
x = np.array([[1,2],[3,4]]); x
np.sum(x)
np.sum(x, axis=0)
np.sum(x, axis=1)
x.T
v
v.T
x.diagonal()
x.trace() # == x.sum(x.diagonal())
"""
Explanation: More: http://docs.scipy.org/doc/numpy/reference/routines.statistics.html
4.7. Other commonly used functions
End of explanation
"""
x
np.apply_along_axis(sum, 1, x)
np.apply_along_axis(len, 1, x)
"""
Explanation: Applying a function to an array:
End of explanation
"""
np.sign(x)
np.log(x)
"""
Explanation: Most built-in functions are vectorized, i.e., they can be applied to an entire array, performing the operation on each individual element. E.g.:
End of explanation
"""
def inc(x) : return x + 1
inc(x)
"""
Explanation: The same holds for user-defined functions that are built from vectorized built-in functions:
End of explanation
"""
x = np.arange(0,10); x
np.random.permutation(x)
x
np.random.shuffle(x); x
x
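# A minimal sketch of numpy.vectorize (see the note in this section): a
# function with Python-level branching cannot be applied to an array directly,
# but its vectorized wrapper can.
def my_sign(t):
    if t > 0:
        return 1
    elif t < 0:
        return -1
    else:
        return 0
vsign = np.vectorize(my_sign)
vsign(np.array([-3, 0, 2]))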
"""
Explanation: More complex functions have to be vectorized explicitly with numpy.vectorize (or simply applied in a for loop, which is essentially what vectorize does internally).
Permutations:
End of explanation
"""
l = [1, 2, 3]
a = np.array(l); a
list(a)
a.tolist()
l = [[1, 2, 3], [4,5,6]]
a = np.array(l); a
list(a)
a.tolist()
"""
Explanation: More: http://docs.scipy.org/doc/numpy/reference/routines.sort.html
4.8. Converting between lists and arrays
End of explanation
"""
import scipy as sp
sp.__version__
"""
Explanation: 5. SciPy
End of explanation
"""
x = sp.array([1,2,3])
"""
Explanation: SciPy imports NumPy. E.g.:
End of explanation
"""
from scipy import linalg
"""
Explanation: From the SciPy library, the modules of interest to us are scipy.linalg and scipy.stats.
5.1. SciPy.linalg
End of explanation
"""
y
y_inv = linalg.inv(y); y_inv
sp.dot(y, y_inv)
"""
Explanation: Matrix inverse:
End of explanation
"""
linalg.det(y)
"""
Explanation: Determinant:
End of explanation
"""
w
linalg.norm(w)
"""
Explanation: Euclidean norm ($l_2$ norm) of a vector: $\|\mathbf{x}\|_2 = \sqrt{\sum_i x_i^2}$
End of explanation
"""
linalg.norm(w, ord=1)
linalg.norm(w, ord=sp.inf)
"""
Explanation: General $p$-norm: $\|\mathbf{x}\|_p = \big(\sum_i |x_i|^p\big)^{1/p}$
End of explanation
"""
from scipy import stats
stats.norm
stats.norm.pdf(0)
xs = sp.linspace(-2, 2, 10);
stats.norm.pdf(xs)
stats.norm.pdf(xs, loc=1, scale=2)
"""
Explanation: More: http://docs.scipy.org/doc/scipy/reference/tutorial/linalg.html
5.2. SciPy.stats
End of explanation
"""
stats.norm.rvs(loc=1, scale=2, size=10)
"""
Explanation: Sampling from a normal distribution:
End of explanation
"""
normal = stats.norm(1, 2)
normal.pdf(xs)
normal.rvs(size=5)
"""
Explanation: "Freezing" a distribution:
End of explanation
"""
?stats.multivariate_normal
mean = sp.array([1.0, 3.0])
cov = sp.array([[2.0, 0.3], [0.5, 0.7]])
mnormal = stats.multivariate_normal(mean, cov)
mnormal.pdf([1, 0])
np.random.seed(42) # Radi reproducibilnosti rezultata
mnormal.rvs(size=5)
"""
Explanation: Multivariate Gaussian distribution:
End of explanation
"""
x, y = np.random.random((2, 10))
y
stats.pearsonr(x, y)
"""
Explanation: Correlation coefficient:
End of explanation
"""
import matplotlib.pyplot as plt
import matplotlib
matplotlib.__version__
%pylab inline
"""
Explanation: More: http://docs.scipy.org/doc/scipy/reference/tutorial/stats.html
6. Matplotlib
matplotlib contains several modules: pyplot, image, mplot3d, ...
End of explanation
"""
plt.plot([1,2,3,4,5], [4,5,5,7,3])
plt.show()
plt.plot([4,5,5,7,3]);
plt.plot([4,5,5,7,3], 'ro');
def f(x) : return x**2
xs = linspace(0,100); xs
f(xs)
plt.plot(xs, f(xs));
plt.plot(xs, f(xs), 'bo');
plt.plot(xs, f(xs), 'r+');
plt.plot(xs, 1 - f(xs), 'b', xs, f(xs)/2 - 1000, 'r--');
plt.plot(xs, f(xs), label='f(x)')
plt.plot(xs, 1 - f(xs), label='1-f(x)')
plt.legend()
plt.show()
xs = linspace(-5,5)
plt.plot(xs, stats.norm.pdf(xs), 'g--');
plt.plot(xs, stats.norm.pdf(xs, loc=1, scale=2), 'r', linewidth=3);
"""
Explanation: pylab combines pyplot and numpy. The command above (an IPython magic) makes plots render directly in the notebook instead of opening a separate window.
6.1. The plot function
End of explanation
"""
plt.scatter([0, 1, 2, 0], [4, 5, 2, 1])
plt.show()
plt.scatter([0,1,2,0], [4, 5, 2, 1], s=200, marker='s');
np.random.random(10)
for c in 'rgb':
plt.scatter(sp.random.random(100), sp.random.random(100), s=200, alpha=0.5, marker='o', c=c)
"""
Explanation: 6.2. The scatter function
End of explanation
"""
x = np.linspace(1,5,5); x
X, Y = np.meshgrid(x, x)
X
Y
Z = 10 * X + Y
Z
plt.pcolormesh(X, Y, Z, cmap='gray')
plt.show()
"""
Explanation: 6.3. Contour and density plots
End of explanation
"""
mnormal = stats.multivariate_normal([0, 1], [[1, 1], [0.2, 3]])
mnormal.pdf([1,1])
x = np.linspace(-1, 1)
y = np.linspace(-2, 2)
X, Y = np.meshgrid(x, y)
XY = np.dstack((X, Y))
shape(X)
shape(XY)
mnormal.pdf(XY)
plt.pcolormesh(X, Y, mnormal.pdf(XY))
plt.show()
plt.contourf(X, Y, mnormal.pdf(XY));
plt.contourf(X, Y, mnormal.pdf(XY), levels=[0,0.06, 0.07]);
plt.contour(X, Y, mnormal.pdf(XY));
x = linspace(-10,10)
X, Y = np.meshgrid(x, x)
Z = X*3 + Y
plt.contour(X, Y, Z);
plt.contour(X, Y, Z, levels=[0]);
"""
Explanation: More: http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.pcolormesh, http://matplotlib.org/users/colormaps.html
End of explanation
"""
plt.contour(X, Y, Z, levels=[0])
plt.scatter([-5,-3,2,5], [4, 5, 2, 1])
plt.show()
"""
Explanation: Combining several plots:
End of explanation
"""
np.random.seed(42)
x = stats.norm.rvs(size=1000)
plt.hist(x);
"""
Explanation: 6.4. Histogram
End of explanation
"""
hist, bins = np.histogram(x)
centers = (bins[:-1] + bins[1:]) / 2
plt.bar(centers, hist);
"""
Explanation: More or less equivalent to:
End of explanation
"""
import pandas as pd
pd.__version__
"""
Explanation: 6.5. Subplots
TODO
7. Pandas
End of explanation
"""
import sklearn
sklearn.__version__
"""
Explanation: TODO
8. Sklearn
End of explanation
"""
|
alvason/probability-insighter | code/mutation-drift-selection.ipynb | gpl-2.0 | import numpy as np
import itertools
"""
Explanation: Wright-Fisher model of mutation, selection and random genetic drift
A Wright-Fisher model has a fixed population size N and discrete non-overlapping generations. Each generation, each individual has a random number of offspring whose mean is proportional to the individual's fitness. Each generation, mutation may occur. Mutations may increase or decrease individual's fitness, which affects the chances of that individual's offspring in subsequent generations.
Here, I'm using a fitness model where some proportion of the time a mutation will have a fixed fitness effect, increasing or decreasing fitness by a fixed amount.
Setup
End of explanation
"""
pop_size = 100
seq_length = 10
alphabet = ['A', 'T']
base_haplotype = "AAAAAAAAAA"
fitness_effect = 1.1 # fitness effect if a functional mutation occurs
fitness_chance = 0.1 # chance that a mutation has a fitness effect
"""
Explanation: Make population dynamic model
Basic parameters
End of explanation
"""
pop = {}
pop["AAAAAAAAAA"] = 40
pop["AAATAAAAAA"] = 30
pop["AATTTAAAAA"] = 30
"""
Explanation: Population of haplotypes maps to counts and fitnesses
Store this as a lightweight Dictionary that maps a string to a count. All the sequences together will have count N.
End of explanation
"""
fitness = {}
fitness["AAAAAAAAAA"] = 1.0
fitness["AAATAAAAAA"] = 1.05
fitness["AATTTAAAAA"] = 1.10
pop["AAATAAAAAA"]
fitness["AAATAAAAAA"]
"""
Explanation: Map haplotype string to fitness float.
End of explanation
"""
mutation_rate = 0.005 # per gen per individual per site
def get_mutation_count():
mean = mutation_rate * pop_size * seq_length
return np.random.poisson(mean)
def get_random_haplotype():
haplotypes = pop.keys()
frequencies = [x/float(pop_size) for x in pop.values()]
total = sum(frequencies)
frequencies = [x / total for x in frequencies]
return np.random.choice(haplotypes, p=frequencies)
def get_mutant(haplotype):
site = np.random.randint(seq_length)
possible_mutations = list(alphabet)
possible_mutations.remove(haplotype[site])
mutation = np.random.choice(possible_mutations)
new_haplotype = haplotype[:site] + mutation + haplotype[site+1:]
return new_haplotype
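# Quick demo of the helpers defined above: draw a (frequency-weighted) random
# haplotype from the current population and mutate one site of the base haplotype.
print get_random_haplotype()
print get_mutant("AAAAAAAAAA")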
"""
Explanation: Add mutation
End of explanation
"""
def get_fitness(haplotype):
old_fitness = fitness[haplotype]
if (np.random.random() < fitness_chance):
return old_fitness * fitness_effect
else:
return old_fitness
get_fitness("AAAAAAAAAA")
"""
Explanation: Mutations have fitness effects
End of explanation
"""
def mutation_event():
haplotype = get_random_haplotype()
if pop[haplotype] > 1:
pop[haplotype] -= 1
new_haplotype = get_mutant(haplotype)
if new_haplotype in pop:
pop[new_haplotype] += 1
else:
pop[new_haplotype] = 1
if new_haplotype not in fitness:
fitness[new_haplotype] = get_fitness(haplotype)
mutation_event()
pop
fitness
def mutation_step():
mutation_count = get_mutation_count()
for i in range(mutation_count):
mutation_event()
"""
Explanation: If a mutation event creates a new haplotype, assign it a random fitness.
End of explanation
"""
def get_offspring_counts():
haplotypes = pop.keys()
frequencies = [pop[haplotype]/float(pop_size) for haplotype in haplotypes]
fitnesses = [fitness[haplotype] for haplotype in haplotypes]
weights = [x * y for x,y in zip(frequencies, fitnesses)]
total = sum(weights)
weights = [x / total for x in weights]
return list(np.random.multinomial(pop_size, weights))
get_offspring_counts()
def offspring_step():
counts = get_offspring_counts()
for (haplotype, count) in zip(pop.keys(), counts):
if (count > 0):
pop[haplotype] = count
else:
del pop[haplotype]
"""
Explanation: Genetic drift and fitness affect which haplotypes make it to the next generation
Fitness weights the multinomial draw.
End of explanation
"""
def time_step():
mutation_step()
offspring_step()
generations = 5
def simulate():
for i in range(generations):
time_step()
"""
Explanation: Combine and iterate
End of explanation
"""
history = []
def simulate():
clone_pop = dict(pop)
history.append(clone_pop)
for i in range(generations):
time_step()
clone_pop = dict(pop)
history.append(clone_pop)
simulate()
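# Quick look at what was recorded: history holds one population dict per
# snapshot (the state at the start of the run plus one entry per generation).
print len(history)
print history[0]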
"""
Explanation: Record
We want to keep a record of past population frequencies to understand dynamics through time. At each step in the simulation, we append to a history object.
End of explanation
"""
def get_distance(seq_a, seq_b):
diffs = 0
length = len(seq_a)
assert len(seq_a) == len(seq_b)
for chr_a, chr_b in zip(seq_a, seq_b):
if chr_a != chr_b:
diffs += 1
return diffs / float(length)
def get_diversity(population):
haplotypes = population.keys()
haplotype_count = len(haplotypes)
diversity = 0
for i in range(haplotype_count):
for j in range(haplotype_count):
haplotype_a = haplotypes[i]
haplotype_b = haplotypes[j]
frequency_a = population[haplotype_a] / float(pop_size)
frequency_b = population[haplotype_b] / float(pop_size)
frequency_pair = frequency_a * frequency_b
diversity += frequency_pair * get_distance(haplotype_a, haplotype_b)
return diversity
def get_diversity_trajectory():
trajectory = [get_diversity(generation) for generation in history]
return trajectory
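# Sanity check of get_diversity on a tiny hand-made population (hypothetical,
# not part of the simulation): two haplotypes at 50/50 frequency differing at
# 1 of 10 sites should give diversity = 2 * 0.5 * 0.5 * 0.1 = 0.05.
# (get_diversity divides counts by the global pop_size, so the toy counts are
# chosen to sum to pop_size = 100.)
toy_pop = {"AAAAAAAAAA": 50, "AAATAAAAAA": 50}
print get_diversity(toy_pop)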
"""
Explanation: Analyze trajectories
Calculate diversity
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib as mpl
def diversity_plot():
mpl.rcParams['font.size']=14
trajectory = get_diversity_trajectory()
plt.plot(trajectory, "#447CCD")
plt.ylabel("diversity")
plt.xlabel("generation")
"""
Explanation: Plot diversity
End of explanation
"""
def get_divergence(population):
haplotypes = population.keys()
divergence = 0
for haplotype in haplotypes:
frequency = population[haplotype] / float(pop_size)
divergence += frequency * get_distance(base_haplotype, haplotype)
return divergence
def get_divergence_trajectory():
trajectory = [get_divergence(generation) for generation in history]
return trajectory
def divergence_plot():
mpl.rcParams['font.size']=14
trajectory = get_divergence_trajectory()
plt.plot(trajectory, "#447CCD")
plt.ylabel("divergence")
plt.xlabel("generation")
"""
Explanation: Analyze and plot divergence
End of explanation
"""
def get_frequency(haplotype, generation):
pop_at_generation = history[generation]
if haplotype in pop_at_generation:
return pop_at_generation[haplotype]/float(pop_size)
else:
return 0
def get_trajectory(haplotype):
trajectory = [get_frequency(haplotype, gen) for gen in range(generations)]
return trajectory
def get_all_haplotypes():
haplotypes = set()
for generation in history:
for haplotype in generation:
haplotypes.add(haplotype)
return haplotypes
colors = ["#781C86", "#571EA2", "#462EB9", "#3F47C9", "#3F63CF", "#447CCD", "#4C90C0", "#56A0AE", "#63AC9A", "#72B485", "#83BA70", "#96BD60", "#AABD52", "#BDBB48", "#CEB541", "#DCAB3C", "#E49938", "#E68133", "#E4632E", "#DF4327", "#DB2122"]
colors_lighter = ["#A567AF", "#8F69C1", "#8474D1", "#7F85DB", "#7F97DF", "#82A8DD", "#88B5D5", "#8FC0C9", "#97C8BC", "#A1CDAD", "#ACD1A0", "#B9D395", "#C6D38C", "#D3D285", "#DECE81", "#E8C77D", "#EDBB7A", "#EEAB77", "#ED9773", "#EA816F", "#E76B6B"]
def stacked_trajectory_plot(xlabel="generation"):
mpl.rcParams['font.size']=18
haplotypes = get_all_haplotypes()
trajectories = [get_trajectory(haplotype) for haplotype in haplotypes]
plt.stackplot(range(generations), trajectories, colors=colors_lighter)
plt.ylim(0, 1)
plt.ylabel("frequency")
plt.xlabel(xlabel)
"""
Explanation: Plot haplotype trajectories
End of explanation
"""
def get_snp_frequency(site, generation):
minor_allele_frequency = 0.0
pop_at_generation = history[generation]
for haplotype in pop_at_generation.keys():
allele = haplotype[site]
frequency = pop_at_generation[haplotype] / float(pop_size)
if allele != "A":
minor_allele_frequency += frequency
return minor_allele_frequency
def get_snp_trajectory(site):
trajectory = [get_snp_frequency(site, gen) for gen in range(generations)]
return trajectory
"""
Explanation: Plot SNP trajectories
End of explanation
"""
def get_all_snps():
snps = set()
for generation in history:
for haplotype in generation:
for site in range(seq_length):
if haplotype[site] != "A":
snps.add(site)
return snps
def snp_trajectory_plot(xlabel="generation"):
mpl.rcParams['font.size']=18
snps = get_all_snps()
trajectories = [get_snp_trajectory(snp) for snp in snps]
data = []
for trajectory, color in itertools.izip(trajectories, itertools.cycle(colors)):
data.append(range(generations))
data.append(trajectory)
data.append(color)
fig = plt.plot(*data)
plt.ylim(0, 1)
plt.ylabel("frequency")
plt.xlabel(xlabel)
"""
Explanation: Find all variable sites.
End of explanation
"""
pop_size = 50
seq_length = 100
generations = 500
mutation_rate = 0.0001 # per gen per individual per site
fitness_effect = 1.1 # fitness effect if a functional mutation occurs
fitness_chance = 0.1 # chance that a mutation has a fitness effect
"""
Explanation: Scale up
Here, we scale up to more interesting parameter values.
End of explanation
"""
seq_length * mutation_rate
"""
Explanation: In this case, each individual genome acquires on average $\mu$ = 0.01 new mutations per generation (seq_length × mutation_rate).
End of explanation
"""
2 * pop_size * seq_length * mutation_rate
base_haplotype = ''.join(["A" for i in range(seq_length)])
pop.clear()
fitness.clear()
del history[:]
pop[base_haplotype] = pop_size
fitness[base_haplotype] = 1.0
simulate()
plt.figure(num=None, figsize=(14, 14), dpi=80, facecolor='w', edgecolor='k')
plt.subplot2grid((3,2), (0,0), colspan=2)
stacked_trajectory_plot()
plt.subplot2grid((3,2), (1,0), colspan=2)
snp_trajectory_plot()
plt.subplot2grid((3,2), (2,0))
diversity_plot()
plt.subplot2grid((3,2), (2,1))
divergence_plot()
"""
Explanation: And the population genetic parameter $\theta$, which equals $2N\mu$, is 1.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/messy-consortium/cmip6/models/sandbox-3/landice.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'sandbox-3', 'landice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: MESSY-CONSORTIUM
Source ID: SANDBOX-3
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:10
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
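# Hypothetical illustration only (the real value must come from the model
# group): choosing one of the valid options listed above would look like
#     DOC.set_value("prescribed")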
"""
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so, details of this model, such as its resolution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description if ice sheet and ice shelf dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation
"""
|
abhipr1/DATA_SCIENCE_INTENSIVE | Week_2/statistics project 2/sliderule_dsi_inferential_statistics_exercise_2.ipynb | apache-2.0 | import pandas as pd
import numpy as np
from scipy import stats
data = pd.io.stata.read_stata('data/us_job_market_discrimination.dta')
# number of callbacks for black-sounding names
sum(data[data.race=='b'].call)
"""
Explanation: Examining racial discrimination in the US job market
Background
Racial discrimination continues to be pervasive in cultures throughout the world. Researchers examined the level of racial discrimination in the United States labor market by randomly assigning identical résumés black-sounding or white-sounding names and observing the impact on requests for interviews from employers.
Data
In the dataset provided, each row represents a resume. The 'race' column has two values, 'b' and 'w', indicating black-sounding and white-sounding. The column 'call' has two values, 1 and 0, indicating whether the resume received a call from employers or not.
Note that the 'b' and 'w' values in race are assigned randomly to the resumes.
Exercise
You will perform a statistical analysis to establish whether race has a significant impact on the rate of callbacks for resumes.
Answer the following questions in this notebook below and submit to your Github account.
What test is appropriate for this problem? Does CLT apply?
What are the null and alternate hypotheses?
Compute margin of error, confidence interval, and p-value.
Discuss statistical significance.
You can include written notes in notebook cells using Markdown:
- In the control panel at the top, choose Cell > Cell Type > Markdown
- Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet
Resources
Experiment information and data source: http://www.povertyactionlab.org/evaluation/discrimination-job-market-united-states
Scipy statistical methods: http://docs.scipy.org/doc/scipy/reference/stats.html
Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet
End of explanation
"""
data.head()
# Retrieve race and call data.
race_call = data[['race','call']]
race_call_black = race_call[race_call.race=='b']
race_call_white = race_call[race_call.race=='w']
len(race_call_black),len(race_call_white)
"""
Explanation: What test is appropriate for this problem? Does CLT apply?
End of explanation
"""
p_b = len(race_call_black[race_call_black.call==1])/len(race_call_black)
p_b
p_w= len(race_call_white[race_call_white.call==1])/len(race_call_white)
p_w
"""
Explanation: Answer: the callbacks follow a binomial distribution (each résumé either receives a callback or not), so we compare two sample proportions.
p_w = probability of success of white person.
p_b = probability of success of black person.
End of explanation
"""
print(len(race_call_white)*p_w)
print(len(race_call_white)*(1-p_w))
print(len(race_call_black)*p_b)
print(len(race_call_black)*(1-p_b))
"""
Explanation: Condition Check:
np >= 10
n(1-p) > 10
End of explanation
"""
import math
z = 1.96
margin_of_error = z*math.sqrt(p_w*(1-p_w)/len(race_call_white)+p_b*(1-p_b)/len(race_call_black))
margin_of_error
"""
Explanation: Above conditions are satisfied so CLT is applicable.
=======================================================================================================================
What are the null and alternate hypotheses?
Null hypothesis: there is no racial discrimination (p_b = p_w).
Alternative hypothesis: there is racial discrimination (p_b != p_w).
=======================================================================================================================
Compute margin of error, confidence interval, and p-value.
Assume a 95% confidence level, so the critical value is z = 1.96.
Margin of error = z * sqrt(p_w(1-p_w)/n_w + p_b(1-p_b)/n_b)
End of explanation
"""
[p_w - p_b - margin_of_error,
 p_w - p_b + margin_of_error]
from statsmodels.stats.proportion import proportions_ztest as pz
white_call = len(race_call_white[race_call_white.call==1])
black_call = len(race_call_black[race_call_black.call==1])
zstat,p_value = pz(np.array([white_call,black_call]),np.array([len(race_call_white),len(race_call_black)]),value=0)
if p_value < 0.05:
    print ("Null Hypothesis Rejected.\nThere is racial discrimination")
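# Cross-check with a hand-rolled pooled two-proportion z test (a sketch; it
# should agree with proportions_ztest above up to floating-point error).
n_w, n_b = len(race_call_white), len(race_call_black)
p_pool = (white_call + black_call) / float(n_w + n_b)
se_pool = math.sqrt(p_pool * (1 - p_pool) * (1.0 / n_w + 1.0 / n_b))
z_manual = (p_w - p_b) / se_pool
p_value_manual = 2 * stats.norm.sf(abs(z_manual))
print(z_manual, p_value_manual)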
"""
Explanation: The confidence interval is:
p1 − p2 ± (margin of error)
End of explanation
"""
|
HubLot/PBxplore | doc/source/notebooks/Deformability.ipynb | mit | from pprint import pprint
from IPython.display import Image, display
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
import urllib.request
import os
import numpy as np
# print date & versions
import datetime
print("Date & time:",datetime.datetime.now())
import sys
print("Python version:", sys.version)
print("Matplotlib version:", matplotlib.__version__)
import pbxplore as pbx
print("PBxplore version:", pbx.__version__)
"""
Explanation: Visualize protein deformability
Protein Blocks are great tools to study protein deformability. Indeed, if the block assigned to a residue changes between two frames of a trajectory, it represents a local deformation of the protein rather than the displacement of the residue.
The API makes it possible to visualize Protein Block variability throughout a molecular dynamics simulation trajectory.
End of explanation
"""
# Assign PB sequences for all frames of a trajectory
topology, _ = urllib.request.urlretrieve('https://raw.githubusercontent.com/pierrepo/PBxplore/master/demo_doc/psi_md_traj.gro',
'psi_md_traj.gro')
trajectory, _ = urllib.request.urlretrieve('https://raw.githubusercontent.com/pierrepo/PBxplore/master/demo_doc/psi_md_traj.xtc',
'psi_md_traj.xtc')
sequences = []
for chain_name, chain in pbx.chains_from_trajectory(trajectory, topology):
dihedrals = chain.get_phi_psi_angles()
pb_seq = pbx.assign(dihedrals)
sequences.append(pb_seq)
"""
Explanation: Here we will look at a molecular dynamics simulation of the barstar. As we will analyse Protein Block sequences, we first need to assign these sequences for each frame of the trajectory.
End of explanation
"""
count_matrix = pbx.analysis.count_matrix(sequences)
"""
Explanation: Block occurrences per position
The basic information we need to analyse protein deformability is the count of occurrences of each PB at each position throughout the trajectory. This occurrence matrix can be calculated with the pbxplore.analysis.count_matrix() function.
End of explanation
"""
im = plt.imshow(count_matrix, interpolation='none', aspect='auto')
plt.colorbar(im)
plt.xlabel('Position')
plt.ylabel('Block')
"""
Explanation: count_matrix is a numpy array with one row per PB and one column per position. Each cell holds the number of times a position was assigned to a given PB.
We can visualize count_matrix using Matplotlib as any 2D numpy array.
End of explanation
"""
pbx.analysis.plot_map('map.png', count_matrix)
!rm map.png
"""
Explanation: PBxplore provides the pbxplore.analysis.plot_map() function to ease the visualization of the occurrence matrix.
End of explanation
"""
pbx.analysis.plot_map('map.png', count_matrix,
residue_min=20, residue_max=30)
!rm map.png
"""
Explanation: The pbxplore.analysis.plot_map() helper has residue_min and residue_max optional arguments to display only part of the matrix. These two arguments can be passed to all PBxplore functions that produce a figure.
End of explanation
"""
freq_matrix = pbx.analysis.compute_freq_matrix(count_matrix)
im = plt.imshow(freq_matrix, interpolation='none', aspect='auto')
plt.colorbar(im)
plt.xlabel('Position')
plt.ylabel('Block')
"""
Explanation: Note that the matrix in the figure produced by pbxplore.analysis.plot_map() is normalized so that each column sums to 1. The matrix can be normalized with the pbxplore.analysis.compute_freq_matrix() function.
End of explanation
"""
neq_by_position = pbx.analysis.compute_neq(count_matrix)
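# N_eq is the exponential of the Shannon entropy of the PB frequencies at each
# position. A manual sketch (natural log; zero frequencies contribute nothing)
# that should match pbx.analysis.compute_neq up to numerical precision:
freqs = pbx.analysis.compute_freq_matrix(count_matrix)
safe = np.where(freqs > 0, freqs, 1.0)
neq_manual = np.exp(-np.sum(freqs * np.log(safe), axis=0))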
"""
Explanation: Protein Block entropy
The $N_{eq}$ is a measure of variability based on the count matrix calculated above. It can be computed with the pbxplore.analysis.compute_neq() function.
End of explanation
"""
# Residue numbering starts at 1 by default.
resids = np.arange(1,len(neq_by_position)+1)
plt.plot(resids, neq_by_position)
plt.xlabel('Position')
plt.ylabel('$N_{eq}$')
"""
Explanation: neq_by_position is a 1D numpy array with the $N_{eq}$ for each residue.
End of explanation
"""
pbx.analysis.plot_neq('neq.png', neq_by_position)
!rm neq.png
"""
Explanation: The pbxplore.analysis.plot_neq() helper eases the plotting of the $N_{eq}$.
End of explanation
"""
pbx.analysis.plot_neq('neq.png', neq_by_position,
residue_min=20, residue_max=30)
!rm neq.png
"""
Explanation: The residue_min and residue_max arguments are available.
End of explanation
"""
# Let's assume you computed the RMSF (file rmsf.xvg)
# For this example, the RMSF was computed on the C-alpha atoms and grouped by residue:
# g_rmsf -s psi_md_traj.gro -f psi_md_traj.xtc -res -o rmsf.xvg (Gromacs 4.6.7)
# Read the RMSF file (ignore lines starting with '#' or '@' and assume the data are in 2 columns:
# the first one is the residue number, the second one is the RMSF)
rmsf = np.array([line.split() for line in open("../../../demo_doc/rmsf.xvg") if not line[0] in '#@'], dtype=float)
# Generate 2 y-axes that share the same x-axis
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
#Left Axis
ax1.plot(rmsf[:,0], neq_by_position, color='#1f77b4', lw=2)
ax1.set_xlabel('Position')
ax1.set_ylabel('$N_{eq}$', color='#1f77b4')
ax1.set_ylim([0.0, 5])
#Right Axis
ax2.plot(rmsf[:,0], rmsf[:,1],color='#ff7f0e', lw=2)
ax2.set_ylim([0.0, 0.4])
ax2.set_ylabel('RMSF (nm)', color='#ff7f0e')
"""
Explanation: Neq with RMSF
The $N_{eq}$ and the RMSF (Root Mean Square Fluctuation) can be plotted together to highlight differences between flexible and rigid residues: the $N_{eq}$ is a metric of deformability and flexibility, whereas the RMSF quantifies mobility.
Here is an example of a plot with both metrics (you can adapt this code to your own needs):
End of explanation
"""
pbx.analysis.generate_weblogo('logo.png', count_matrix)
display(Image('logo.png'))
!rm logo.png
pbx.analysis.generate_weblogo('logo.png', count_matrix,
residue_min=20, residue_max=30)
display(Image('logo.png'))
!rm logo.png
"""
Explanation: We observe that the region 33-35 is rigid.
The high values of RMSF we observed were due to flexible residues in the vicinity of the region 33-35, probably acting as hinges (residues 32 and 36-37).
Those hinges, due to their flexibility, induced the mobility of the whole loop: the region 33-35 fluctuated but did not deform.
Display PB variability as a logo
End of explanation
"""
|
maxis42/ML-DA-Coursera-Yandex-MIPT | 1 Mathematics and Python/Lectures notebooks/1 introduction to ipython/introduction_to_ipython.ipynb | mit | ! echo 'hello, world!'
!echo $t
%%bash
mkdir test_directory
cd test_directory/
ls -a
# remove the directory if it is not needed
! rm -r test_directory
"""
Explanation: text
Header
the formula below is written using TeX syntax
$$ c = \sqrt{a^2 + b^2}$$
End of explanation
"""
%%cmd
mkdir test_directory
cd test_directory
dir
"""
Explanation: Below are the equivalent commands for Windows users:
End of explanation
"""
%%cmd
rmdir test_directory
%lsmagic
%pylab inline
y = range(11)
y
plot(y)
"""
Explanation: Remove the directory if it is not needed (Windows)
End of explanation
"""
|
GoogleCloudPlatform/cloudml-samples | notebooks/scikit-learn/OnlinePredictionWithScikitLearnInCMLE.ipynb | apache-2.0 | # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 Google LLC
End of explanation
"""
%env PROJECT_ID PROJECT_ID
%env BUCKET_NAME BUCKET_NAME
%env MODEL_NAME census
%env VERSION_NAME v1
%env REGION us-central1
"""
Explanation: Online Prediction with scikit-learn on AI Platform
This notebook uses the Census Income Data Set to create a simple model, train the model, upload the model to AI Platform, and lastly use the model to make predictions.
How to bring your model to AI Platform
Getting your model ready for predictions can be done in 5 steps:
1. Save your model to a file
1. Upload the saved model to Google Cloud Storage
1. Create a model resource on AI Platform
1. Create a model version (linking your scikit-learn model)
1. Make an online prediction
Prerequisites
Before you jump in, let’s cover some of the different tools you’ll be using to get online prediction up and running on AI Platform.
Google Cloud Platform lets you build and host applications and websites, store data, and analyze data on Google's scalable infrastructure.
AI Platform is a managed service that enables you to easily build machine learning models that work on any type of data, of any size.
Google Cloud Storage (GCS) is a unified object storage for developers and enterprises, from live data serving to data analytics/ML to data archiving.
Cloud SDK is a command line tool which allows you to interact with Google Cloud products. In order to run this notebook, make sure that Cloud SDK is installed in the same environment as your Jupyter kernel.
Part 0: Setup
Create a project on GCP
Create a Google Cloud Storage Bucket
Enable AI Platform Training and Prediction and Compute Engine APIs
Install Cloud SDK
Install scikit-learn
Install NumPy
Install pandas
Install Google API Python Client
These variables will be needed for the following steps.
Replace:
* PROJECT_ID <YOUR_PROJECT_ID> - with your project's id. Use the PROJECT_ID that matches your Google Cloud Platform project.
* BUCKET_NAME <YOUR_BUCKET_NAME> - with the bucket id you created above.
* MODEL_NAME <YOUR_MODEL_NAME> - with your model name, such as 'census'
* VERSION <YOUR_VERSION> - with your version name, such as 'v1'
* REGION <REGION> - select a region or use the default 'us-central1'. The region is where the model will be deployed.
End of explanation
"""
# Create a directory to hold the data
! mkdir census_data
# Download the data
! curl https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data --output census_data/adult.data
! curl https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test --output census_data/adult.test
"""
Explanation: Download the data
The Census Income Data Set that this sample
uses for training is hosted by the UC Irvine Machine Learning
Repository.
Training file is adult.data
Evaluation file is adult.test
Disclaimer
This dataset is provided by a third party. Google provides no representation,
warranty, or other guarantees about the validity or any other aspects of this dataset.
End of explanation
"""
import googleapiclient.discovery
import json
import numpy as np
import os
import pandas as pd
import pickle
from sklearn.ensemble import RandomForestClassifier
from sklearn.externals import joblib
from sklearn.feature_selection import SelectKBest
from sklearn.pipeline import FeatureUnion
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import LabelBinarizer
# Define the format of your input data including unused columns (These are the columns from the census data files)
COLUMNS = (
'age',
'workclass',
'fnlwgt',
'education',
'education-num',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'capital-gain',
'capital-loss',
'hours-per-week',
'native-country',
'income-level'
)
# Categorical columns are columns that need to be turned into a numerical value to be used by scikit-learn
CATEGORICAL_COLUMNS = (
'workclass',
'education',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'native-country'
)
# Load the training census dataset
with open('./census_data/adult.data', 'r') as train_data:
raw_training_data = pd.read_csv(train_data, header=None, names=COLUMNS)
# Remove the column we are trying to predict ('income-level') from our features list
# Convert the Dataframe to a list of lists
train_features = raw_training_data.drop('income-level', axis=1).values.tolist()
# Create our training labels list, convert the Dataframe to a list of lists
train_labels = (raw_training_data['income-level'] == ' >50K').values.tolist()
# Load the test census dataset
with open('./census_data/adult.test', 'r') as test_data:
raw_testing_data = pd.read_csv(test_data, names=COLUMNS, skiprows=1)
# Remove the column we are trying to predict ('income-level') from our features list
# Convert the Dataframe to a list of lists
test_features = raw_testing_data.drop('income-level', axis=1).values.tolist()
# Create our test labels list, convert the Dataframe to a list of lists
test_labels = (raw_testing_data['income-level'] == ' >50K.').values.tolist()
# Since the census data set has categorical features, we need to convert
# them to numerical values. We'll use a list of pipelines to convert each
# categorical column and then use FeatureUnion to combine them before calling
# the RandomForestClassifier.
categorical_pipelines = []
# Each categorical column needs to be extracted individually and converted to a numerical value.
# To do this, each categorical column will use a pipeline that extracts one feature column via
# SelectKBest(k=1) and a LabelBinarizer() to convert the categorical value to a numerical one.
# A scores array (created below) will select and extract the feature column. The scores array is
# created by iterating over the COLUMNS and checking if it is a CATEGORICAL_COLUMN.
for i, col in enumerate(COLUMNS[:-1]):
if col in CATEGORICAL_COLUMNS:
# Create a scores array to get the individual categorical column.
# Example:
# data = [39, 'State-gov', 77516, 'Bachelors', 13, 'Never-married', 'Adm-clerical',
# 'Not-in-family', 'White', 'Male', 2174, 0, 40, 'United-States']
# scores = [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
#
# Returns: [['State-gov']]
# Build the scores array.
scores = [0] * len(COLUMNS[:-1])
# This column is the categorical column we want to extract.
scores[i] = 1
skb = SelectKBest(k=1)
skb.scores_ = scores
# Convert the categorical column to a numerical value
lbn = LabelBinarizer()
r = skb.transform(train_features)
lbn.fit(r)
# Create the pipeline to extract the categorical feature
categorical_pipelines.append(
('categorical-{}'.format(i), Pipeline([
('SKB-{}'.format(i), skb),
('LBN-{}'.format(i), lbn)])))
# Create pipeline to extract the numerical features
skb = SelectKBest(k=6)
# From COLUMNS use the features that are numerical
skb.scores_ = [1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0]
categorical_pipelines.append(('numerical', skb))
# Combine all the features using FeatureUnion
preprocess = FeatureUnion(categorical_pipelines)
# Create the classifier
classifier = RandomForestClassifier()
# Transform the features and fit them to the classifier
classifier.fit(preprocess.transform(train_features), train_labels)
# Create the overall model as a single pipeline
pipeline = Pipeline([
('union', preprocess),
('classifier', classifier)
])
# Export the model to a file
joblib.dump(pipeline, 'model.joblib')
print('Model trained and saved')
"""
Explanation: Part 1: Train/Save the model
First, the data is loaded into a pandas DataFrame that can be used by scikit-learn. Then a simple model is created and fit against the training data. Lastly, sklearn's built in version of joblib is used to save the model to a file that can be uploaded to AI Platform.
End of explanation
"""
! gcloud config set project $PROJECT_ID
"""
Explanation: Part 2: Upload the model
Next, you'll need to upload the model to your project's storage bucket in GCS. To use your model with AI Platform, it needs to be uploaded to Google Cloud Storage (GCS). This step takes your local ‘model.joblib’ file and uploads it GCS via the Cloud SDK using gsutil.
Before continuing, make sure you're properly authenticated and have access to the bucket. This next command sets your project to the one specified above.
Note: If you get an error below, make sure the Cloud SDK is installed in the kernel's environment.
End of explanation
"""
! gsutil cp ./model.joblib gs://$BUCKET_NAME/model.joblib
"""
Explanation: Note: The exact file name of the exported model you upload to GCS is important! Your model must be named "model.joblib", "model.pkl", or "model.bst" with respect to the library you used to export it. This restriction ensures that the model will be safely reconstructed later by using the same technique for import as was used during export.
End of explanation
"""
! gcloud ml-engine models create $MODEL_NAME --regions $REGION
"""
Explanation: Part 3: Create a model resource
AI Platform organizes your trained models using model and version resources. An AI Platform model is a container for the versions of your machine learning model. For more information on model resources and model versions look here.
At this step, you create a container that you can use to hold several different versions of your actual model.
End of explanation
"""
%%writefile ./config.yaml
deploymentUri: "gs://BUCKET_NAME/"
runtimeVersion: '1.4'
framework: "SCIKIT_LEARN"
pythonVersion: "3.5"
"""
Explanation: Part 4: Create a model version
Now it’s time to get your model online and ready for predictions. The model version requires a few components as specified here.
name - The name specified for the version when it was created. This will be the VERSION_NAME variable you declared at the beginning.
model - The name of the model container we created in Part 3. This is the MODEL_NAME variable you declared at the beginning.
deployment Uri - The Google Cloud Storage location of the trained model used to create the version. This is the bucket that you uploaded the model to with your BUCKET_NAME
runtime version - Select Google Cloud ML runtime version to use for this deployment. This is set to 1.4
framework - The framework specifies if you are using: TENSORFLOW, SCIKIT_LEARN, XGBOOST. This is set to SCIKIT_LEARN
pythonVersion - This specifies whether you’re using Python 2.7 or Python 3.5. The default value is set to “2.7”, if you are using Python 3.5, set the value to “3.5”
Note: If you require a feature of scikit-learn that isn’t available in the publicly released version yet, you can specify “runtimeVersion”: “HEAD” instead, and that would get the latest version of scikit-learn available from the github repo. Otherwise the following versions will be used:
* scikit-learn: 0.19.0
First, we need to create a YAML file to configure our model version.
REPLACE: BUCKET_NAME in the deploymentUri with the bucket name you specified earlier.
End of explanation
"""
! gcloud ml-engine versions create $VERSION_NAME \
--model $MODEL_NAME \
--config config.yaml
"""
Explanation: Use the created YAML file to create a model version.
Note: It can take several minutes for your model version to become available.
End of explanation
"""
# Get one person that makes <=50K and one that makes >50K to test our model.
print('Show a person that makes <=50K:')
print('\tFeatures: {0} --> Label: {1}\n'.format(test_features[0], test_labels[0]))
with open('less_than_50K.json', 'w') as outfile:
json.dump(test_features[0], outfile)
print('Show a person that makes >50K:')
print('\tFeatures: {0} --> Label: {1}'.format(test_features[3], test_labels[3]))
with open('more_than_50K.json', 'w') as outfile:
json.dump(test_features[3], outfile)
"""
Explanation: Part 5: Make an online prediction
It’s time to make an online prediction with your newly deployed model. Before you begin, you'll need to take some of the test data and prepare it, so that the test data can be used by the deployed model.
End of explanation
"""
! gcloud ml-engine predict --model $MODEL_NAME --version $VERSION_NAME --json-instances less_than_50K.json
"""
Explanation: Use gcloud to make online predictions
Use the two people (as seen in the table) gathered in the previous step for the gcloud predictions.
| Person | age | workclass | fnlwgt | education | education-num | marital-status | occupation |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:
| 1 | 25| Private | 226802 | 11th | 7 | Never-married | Machine-op-inspect |
| 2 | 44| Private | 160323 | Some-college | 10 | Married-civ-spouse | Machine-op-inspct |
| Person | relationship | race | sex | capital-gain | capital-loss | hours-per-week | native-country || (Label) income-level|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:||:-:
| 1 | Own-child | Black | Male | 0 | 0 | 40 | United-States || False (<=50K) |
| 2 | Husband | Black | Male | 7688 | 0 | 40 | United-States || True (>50K) |
Test the model with an online prediction using the data of a person who makes <=50K.
Note: If you see an error, the model from Part 4 may not be created yet as it takes several minutes for a new model version to be created.
End of explanation
"""
! gcloud ml-engine predict --model $MODEL_NAME --version $VERSION_NAME --json-instances more_than_50K.json
"""
Explanation: Test the model with an online prediction using the data of a person who makes >50K.
End of explanation
"""
import googleapiclient.discovery
import os
import pandas as pd
PROJECT_ID = os.environ['PROJECT_ID']
VERSION_NAME = os.environ['VERSION_NAME']
MODEL_NAME = os.environ['MODEL_NAME']
service = googleapiclient.discovery.build('ml', 'v1')
name = 'projects/{}/models/{}'.format(PROJECT_ID, MODEL_NAME)
name += '/versions/{}'.format(VERSION_NAME)
# Due to the size of the data, it needs to be split in 2
first_half = test_features[:int(len(test_features)/2)]
second_half = test_features[int(len(test_features)/2):]
complete_results = []
for data in [first_half, second_half]:
responses = service.projects().predict(
name=name,
body={'instances': data}
).execute()
if 'error' in responses:
print(responses['error'])
else:
complete_results.extend(responses['predictions'])
# Print the first 10 responses
for i, response in enumerate(complete_results[:10]):
print('Prediction: {}\tLabel: {}'.format(response, test_labels[i]))
"""
Explanation: Use Python to make online predictions
Test the model with the entire test set and print out some of the results.
Note: If running notebook server on Compute Engine, make sure to "allow full access to all Cloud APIs".
End of explanation
"""
actual = pd.Series(test_labels, name='actual')
online = pd.Series(complete_results, name='online')
pd.crosstab(actual,online)
"""
Explanation: [Optional] Part 6: Verify Results
Use a confusion matrix to create a visualization of the online predicted results from AI Platform.
End of explanation
"""
local_results = pipeline.predict(test_features)
local = pd.Series(local_results, name='local')
pd.crosstab(actual,local)
"""
Explanation: Use a confusion matrix create a visualization of the predicted results from the local model. These results should be identical to the results above.
End of explanation
"""
identical = 0
different = 0
for i in range(len(complete_results)):
if complete_results[i] == local_results[i]:
identical += 1
else:
different += 1
print('identical: {}, different: {}'.format(identical,different))
"""
Explanation: Directly compare the two results
End of explanation
"""
|
davidgutierrez/HeartRatePatterns | Jupyter/LoadDataMimic-III.ipynb | gpl-3.0 | import sys
sys.version_info
"""
Explanation: Loading data into SciDB
1) Verify prerequisites
Python
SciDB-Py requires Python 2.6-2.7 or 3.3
End of explanation
"""
import numpy as np
np.__version__
"""
Explanation: NumPy
tested with version 1.9 (1.13.1)
End of explanation
"""
import requests
requests.__version__
"""
Explanation: Requests
tested with version 2.7 (2.18.1) Required for using the Shim interface to SciDB.
End of explanation
"""
import pandas as pd
pd.__version__
"""
Explanation: Pandas (optional)
tested with version 0.15. (0.20.3) Required only for importing/exporting SciDB arrays as Pandas Dataframe objects.
End of explanation
"""
import scipy
scipy.__version__
"""
Explanation: SciPy (optional)
tested with versions 0.10-0.12. (0.19.0) Required only for importing/exporting SciDB arrays as SciPy sparse matrices.
End of explanation
"""
import scidbpy
scidbpy.__version__
from scidbpy import connect
"""
Explanation: 2) Import scidb-py
pip install git+http://github.com/paradigm4/scidb-py.git
End of explanation
"""
sdb = connect('http://localhost:8080')
"""
Explanation: Connect to the database server
End of explanation
"""
import urllib.request # urllib2 in python2 the lib that handles the url stuff
target_url = "https://physionet.org/physiobank/database/mimic3wdb/matched/RECORDS-waveforms"
data = urllib.request.urlopen(target_url) # it's a file like object and works just like a file
lines = data.readlines();
line = str(lines[2])
line
"""
Explanation: 3) Read the file listing each waveform record
End of explanation
"""
line = line.replace('b\'','').replace('\'','').replace('\\n','')
splited = line.split("/")
splited
carpeta,subCarpeta,onda = line.split("/")
carpeta = carpeta+"/"+subCarpeta
onda
"""
Explanation: Remove special characters
End of explanation
"""
import wfdb
carpeta = "p05/p050140"
onda = "p050140-2188-07-26-05-51"
sig, fields = wfdb.srdsamp(onda,pbdir='mimic3wdb/matched/'+carpeta, sampfrom=10000)
print(sig)
print("signame: " + str(fields['signame']))
print("units: " + str(fields['units']))
print("fs: " + str(fields['fs']))
print("comments: " + str(fields['comments']))
print("fields: " + str(fields))
"""
Explanation: 4) Import WFDB to connect to PhysioNet
End of explanation
"""
signalII = None
try:
signalII = fields['signame'].index("II")
except ValueError:
print("List does not contain value")
if signalII is not None:
print("List contains value")
"""
Explanation: Find the position of the lead II signal
End of explanation
"""
#array = wfdb.processing.normalize(x=sig[:, signalII], lb=-2, ub=2)
array = sig[:, signalII]
array = array[~np.isnan(sig[:, signalII])]
arrayNun = np.trim_zeros(array)
array
"""
Explanation: Normalize the signal and remove its null values
End of explanation
"""
ondaName = onda.replace("-", "_")
if arrayNun.size>0 :
sdb.input(upload_data=array).store(ondaName,gc=False)
# sdb.iquery("store(input(<x:int64>[i], '{fn}', 0, '{fmt}'), "+ondaName+")", upload_data=array)
"""
Explanation: Replace the hyphens "-" with underscores "_" because, for some reason, SciDB has trouble with those characters
If the array is not empty after removing the null values, upload it to SciDB
End of explanation
"""
dir(sdb.arrays)
"""
Explanation: Check the list of arrays in SciDB
End of explanation
"""
|
LSSTC-DSFP/LSSTC-DSFP-Sessions | Sessions/Session06/Day1/BuildingBetterModels.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
import mosfit
import time
# Disable "retina" line below if your monitor doesn't support it.
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
"""
Explanation: Building Better Models for Inference:
How to construct practical models for existing tools
In this notebook, we will walk through fitting an observed optical light curve from a tidal disruption event (TDE), the destruction and accretion of a star by a supermassive black hole, using two different approaches.
As mentioned in the lecture, there are different kinds of models one can apply to a set of data. A code I have written, MOSFiT, is an attempt to provide a framework for building models that can be used within other optimizers/samplers. While MOSFiT can run independently with its own built-in samplers, in the notebook below we will simple be using it as a "black box" function for use in external optimization routines.
Our first approach will be using the tde model in MOSFiT. This model uses both interpolation tables and integrations, making an analytical derivative not available. Our second approach will be to construct a simple analytical function to fit the same data. We will then be comparing performance, both in terms of the quality of the resulting solution, but also the speed by which the solution was computed, and in how we relate our solution to what transpired in this event.
By J Guillochon (Harvard)
We will be mostly using the mosfit package and scipy routines. Both are available via conda.
End of explanation
"""
# Load the data from the Open Supernova Catalog.
# Note: if loading the data doesn't work, I have a local copy.
my_printer = mosfit.printer.Printer(quiet=True) # just to avoid spamming from MOSFiT routines.
my_fetcher = mosfit.fetcher.Fetcher()
fetched = my_fetcher.fetch('PS1-10jh.json')[0]
my_model = mosfit.model.Model(model='tde', printer=my_printer)
fetched_data = my_fetcher.load_data(fetched)
my_model.load_data(
fetched_data, event_name=fetched['name'],
exclude_bands=['u', 'r', 'i', 'z', 'F225W', 'NUV'], # ignore all bands but g when computing ln_likelihood.
smooth_times=100, # for plotting smooth fits later
user_fixed_parameters=['covariance']) # don't use GP objective function.
# Generate 100 random parameter realizations.
x = np.random.rand(100, my_model.get_num_free_parameters())
# Compute time per function call.
start_time = time.time()
ln_likes = [my_model.ln_likelihood(xx) for xx in x]
stop_time = time.time()
print('{}s per function call.'.format((stop_time - start_time)/100.0))
"""
Explanation: Problem 1) Fitting data with a blackbox model
In this first cell, we load the data of a particularly well-sampled tidal disruption event from the Pan-STARRS survey, PS1-10jh. This event is notable because it was caught on the rise, peak, and decay, with solid cadence.
The datafile can be aquired from https://northwestern.app.box.com/s/ekwpbf8ufe1ivogpxq9yyex302zx0t96.
End of explanation
"""
times = []
mags = []
errs = []
for x in fetched_data[fetched['name']]['photometry']:
# complete
plt.errorbar(times, mags, yerr=errs, fmt='o')
plt.gca().invert_yaxis()
plt.show()
"""
Explanation: Problem 1a
First, let's visualize the data we have downloaded. MOSFiT loads data in a format conforming to the OAC schema specification, which is a JSON dictionary where the top level of the structure is each event's name. The code snippet below will load a JSON dictionary for the event in question, plot the full time series of photometric data (with error bars) within the photometry key below.
Hint: The photometry is a mixture of different data types, and not every entry has the same set of keys. Optical/UV/IR photometry will always have a band key. Ignore upper limits (indicated with the upperlimit attribute). Use the .get() function liberally, and make sure everything is a float!
End of explanation
"""
import scipy
def my_func(x):
try:
fx = -float(my_model.ln_likelihood(x))
except:
fx = np.inf
return fx
eps = 0.00001
bounds = # complete
results = scipy.optimize.differential_evolution( # complete
best_x = results.x
print('All done! Best score: `{}`.'.format(-results.fun))
"""
Explanation: Problem 1b
We know what the data looks like, and we've loaded a model that can be used to fit the data which computes a likelihood. Let's minimize the parameters of this model using various scipy.optimize routines. Note that since we are trying to maximize the likelihood, we have constructed a wrapper function around ln_likelihood, my_func, to reverse its sign, and to handle bad function evaluations.
Most optimize routines in scipy require a derivative. Since we don't have this available, scipy must construct an approximate one, unless the method doesn't require a gradient to be computed (like differential_evolution). For this first sub-problem, optimize my_func using differential_evolution.
Hints: Each variable is bounded to the range (0, 1), but problems can arise if an optimizer attempts to compute values outside or right at the boundaries. Therefore, it is recommended to use a bounded optimizer in scipy, where the bounds do not include 0 or 1.
End of explanation
"""
output = my_model.run_stack(best_x, root='output')
for ti, t in enumerate(output['times']):
# complete
plt.errorbar( # complete
plt.plot( # complete
plt.gca().invert_yaxis()
plt.show()
"""
Explanation: This might take a while; try to limit the execution time of the above to ~5 minutes by playing with the maxiter and similar options of the scipy optimizers.
Once the above has finished evaluating, compare the score you got to your neighbors. Is there a significant difference between your scores? Let's plot your result against the data.
Model output is provided in the output object below, the format is a dictionary of arrays of the same length. The times of observation are in the times array, and magnitudes are in the model_observations array.
Hint: times is given relative to the time of the first detection, so add min(times) to your time to overplot onto the data.
End of explanation
"""
results = scipy.optimize.basinhopping( #complete
best_x = results.x
print('All done! Best score: `{}`.'.format(-results.fun))
"""
Explanation: Problem 1c
Try optimizing the same function using another minimization routine in scipy that can take a derivative as an input (examples: L-BFGS-B, SLSQP, basinhopping, etc.).
End of explanation
"""
# complete
for ti, t in enumerate(output['times']):
# complete
# complete
"""
Explanation: Now, plot the results of the above minimization alongside your original differential_evolution solution.
End of explanation
"""
def analytic_f(x, tt, mm, vv):
m0, alpha, beta, tau, t0, sigma = tuple(x)
t = np.array(tt)
m = np.array(mm)
v = np.array(vv)
# complete
"""
Explanation: After this process, some of you might have gotten a good solution with a runtime of a few minutes. In practice, guaranteed convergence to the best solution can take a very long time. Whats more, we only attempted to find the best solution available, usually we are interested in posterior distributions that (usually) include the best solution. These take even longer to compute (tens of thousands of function evaluations for a problem of this size).
Problem 2
Now, we'll construct our own simpler model that is analytically differentiable. We'll partly motivate the shape of this function based upon our knowledge of how tidal disruption events are expected to behave theoretically, but there will be limitations.
First, let's define a function that loosely mimics a tidal disruption event's temporal evolution. Tidal disruption events rise exponentially, then decay as a power-law. Canonically, the decay rate is -5/3, and the rise is very unconstrained, being mediated by complicated dynamics and accretion physics that have yet to be determined. So, we use the following agnostic form,
$$L(t) = L_0 \left(1-e^{-\frac{t}{t_0}}\right)^{\alpha } \left(\frac{t}{t_0}\right)^{-\beta }.$$
Tidal disruption observations are usually reported in magnitudes, thus the expression we'll actually compare against observations is
$$m(t) = m_0 - 2.5 \log_{10}\left[\left(1-e^{-\frac{t}{t_0}}\right)^{\alpha } \left(\frac{t}{t_0}\right)^{-\beta }\right].$$
To calculate the likelihood, we want to subtract the above from the observations. We'll make the gross assumption that the color of a tidal disruption is constant in time (which turns out to not be a terrible assumption) and thus $L_{\rm g}(t) \propto L(t)$.
Our likelihood function will be a product of Gaussian terms in the differences between our model and the observations,
$$p = \prod_i \frac{1}{\sqrt{2\pi (\sigma_i^2 + \sigma^2)}} \exp\left[-\frac{\left(m_{{\rm g}, i} - \bar{m}_{{\rm g}, i}\right)^2}{2\left(\sigma_i^2 + \sigma^2\right)}\right],$$
and thus our log likelihood is the sum of these squared differences, plus a separate sum for the variances,
$$\log p = -\frac{1}{2} \left\{\sum_i \left[\frac{\left(m_{{\rm g}, i} - \bar{m}_{{\rm g}, i}\right)^2}{\sigma_i^2 + \sigma^2}\right] + \sum_i \log \left[2\pi \left(\sigma_i^2 + \sigma^2\right)\right]\right\}.$$
Problem 2a
Write the above expression as a python function:
End of explanation
"""
def dlogp_dalpha(x, tt, mm, vv):
m0, alpha, beta, tau, t0, sigma = tuple(x)
t = np.array(tt)
m = np.array(mm)
v = np.array(vv)
derivs = np.sum(-5.0 * np.log(1.0 - np.exp(-(t + t0) / tau)) * (np.log(100.0) * (m - m0) + 5.0 * lf(
alpha, beta, tau, t0, t)) / (4.0 * np.log(10.0) ** 2 * (v ** 2 + sigma ** 2)))
return derivs
def dlogp_dbeta(x, tt, mm, vv):
m0, alpha, beta, tau, t0, sigma = tuple(x)
t = np.array(tt)
m = np.array(mm)
v = np.array(vv)
derivs = np.sum(5.0 * np.log((t + t0) / tau) * (np.log(100.0) * (m - m0) + 5.0 * lf(
alpha, beta, tau, t0, t)) / (4.0 * np.log(10.0) ** 2 * (v ** 2 + sigma ** 2)))
return derivs
def dlogp_dtau(x, tt, mm, vv):
m0, alpha, beta, tau, t0, sigma = tuple(x)
t = np.array(tt)
m = np.array(mm)
v = np.array(vv)
derivs = np.sum(5.0 * (alpha * (t + t0) - beta * tau * (np.exp((t + t0)/tau) - 1.0)) * (
np.log(100.0) * (m - m0) + 5.0 * lf(
alpha, beta, tau, t0, t)) / (4.0 * tau ** 2 * np.log(10.0) ** 2 * (v ** 2 + sigma ** 2) * (
np.exp((t + t0)/tau) - 1.0)))
return derivs
def dlogp_dt0(x, tt, mm, vv):
m0, alpha, beta, tau, t0, sigma = tuple(x)
t = np.array(tt)
m = np.array(mm)
v = np.array(vv)
derivs = np.sum(-5.0 * (alpha * (t + t0) - beta * tau * (np.exp((t + t0)/tau) - 1.0)) * (
np.log(100.0) * (m - m0) + 5.0 * lf(
alpha, beta, tau, t0, t)) / (4.0 * tau * (t + t0) * np.log(10.0) ** 2 * (v ** 2 + sigma ** 2) * (
np.exp((t + t0)/tau) - 1.0)))
return derivs
def dlogp_dsigma(x, tt, mm, vv):
m0, alpha, beta, tau, t0, sigma = tuple(x)
t = np.array(tt)
m = np.array(mm)
v = np.array(vv)
derivs = np.sum(sigma/(4.0 * np.log(10.0) ** 2 * (v**2 + sigma**2)**2) * (5.0 * lf(
alpha, beta, tau, t0, t) * (4.0 * np.log(10.0) * (m - m0) + 5.0 * lf(
alpha, beta, tau, t0, t)) + 4.0 * np.log(10.0) ** 2 * ((m0 - m) ** 2 - v ** 2 - sigma ** 2)))
return derivs
"""
Explanation: Problem 2a
Compute the derivative for $\log p$ (above expression) with respect to $m_0$ (Mathematica might be helpful here). Below are the derivatives for the other five free parameters $\alpha$, $\beta$, $\tau$, $t_0$, and $\sigma$:
$$
\begin{align}
\frac{\partial\log p}{\partial \alpha} &= \sum_i -\frac{5 \log \left(1-e^{-\frac{t+t_0}{\tau }}\right) \left\{\log (100) (\bar{m}-m_0)+5 \log \left[\left(1-e^{-\frac{t+t_0}{\tau }}\right)^{\alpha } \left(\frac{t+t_0}{\tau }\right)^{-\beta }\right]\right\}}{4 \log ^2(10) \left(\sigma_i^2+\sigma^2\right)}\
\frac{\partial\log p}{\partial \beta} &= \sum_i \frac{5 \log \left(\frac{t+t_0}{\tau }\right) \left\{\log (100) (\bar{m}-m_0)+5 \log \left[\left(1-e^{-\frac{t+t_0}{\tau }}\right)^{\alpha } \left(\frac{t+t_0}{\tau }\right)^{-\beta }\right]\right\}}{4 \log ^2(10) \left(\sigma_i^2+\sigma^2\right)}\
\frac{\partial\log p}{\partial \tau} &= \sum_i \frac{5 \left(\alpha (t+t_0)-\beta \tau \left(e^{\frac{t+t_0}{\tau }}-1\right)\right) \left(\log (100) (\bar{m}-m_0)+5 \log \left(\left(1-e^{-\frac{t+t_0}{\tau }}\right)^{\alpha } \left(\frac{t+t_0}{\tau }\right)^{-\beta }\right)\right)}{4 \tau ^2 \log ^2(10) \left(\sigma_i^2 + \sigma^2\right) \left(e^{\frac{t+t_0}{\tau }}-1\right)}\
\frac{\partial\log p}{\partial t_0} &= \sum_i \frac{5 \left(\alpha (t+t_0)-\beta \tau \left(e^{\frac{t+t_0}{\tau }}-1\right)\right) \left(\log (100) (m_0-\bar{m})-5 \log \left(\left(1-e^{-\frac{t+t_0}{\tau }}\right)^{\alpha } \left(\frac{t+t_0}{\tau }\right)^{-\beta }\right)\right)}{4 \tau \log ^2(10) \left(\sigma_i^2+\sigma^2\right) (t+t_0) \left(e^{\frac{t+t_0}{\tau }}-1\right)}\
\frac{\partial\log p}{\partial \sigma} &= \sum_i \frac{\sigma_i}{4 \log ^2(10) \left(\sigma_i^2+\sigma^2\right)^2} \left{5 \log \left[\left(1-e^{-\frac{t+t_0}{\tau }}\right)^{\alpha } \left(\frac{t+t_0}{\tau }\right)^{-\beta }\right]\right.\&\times\left.\left(4 \log (10) (\bar{m}-m_0)+5 \log \left[\left(1-e^{-\frac{t+t_0}{\tau }}\right)^{\alpha } \left(\frac{t+t_0}{\tau }\right)^{-\beta }\right]\right)+4 \log ^2(10) \left((m_0-\bar{m})^2-\sigma_i^2-\sigma^2\right)\right}
\end{align}
$$
Problem 2b
We now need to write each of these derivatives as python functions. These functions should accept a single vector argument x with length equal to the number of free parameters, plus a vector $t$ (the times of the observation) vector $m$ (the magnitudes of each observation), and finally errors $v$ (the measurement error of each observation). Again, 5 of the 6 parameters have already been written for you (you must provide the 6th).
End of explanation
"""
# Set up bounds/test parameters.
abounds = [
[0.0, 30.0],
[0.1, 50.0],
[0.1, 10.0],
[0.1, 200.0],
[0.1, 200.0],
[0.001, 1.0]
]
test_times = [1.0, 20.0]
test_mags = [23.0, 19.0]
test_errs = [0.1, 0.2]
# Draw a random parameter combo to test with.
n = 100
dm0_diff = np.zeros(n)
for p in range(n):
test_x = [abounds[i][0] + x * (abounds[i][1] - abounds[i][0]) for i, x in enumerate(np.random.rand(6))]
# Check that every derivative expression is close to finite difference.
teps = 1e-10
xp = list(test_x)
xp[0] += teps
exactd = dlogp_dm0(test_x, test_times, test_mags, test_errs)
dm0_diff[p] = (exactd - (
analytic_f(test_x, test_times, test_mags, test_errs) - analytic_f(
xp, test_times, test_mags, test_errs)) / teps) / exactd
# complete for rest of parameters
plt.subplot(321)
plt.hist(dm0_diff[~np.isnan(dm0_diff)]);
# complete for rest of parameters
"""
Explanation: Problem 2c
Make sure the derivatives for all the above function are consistent with the finite differences of the objective function. How large is the error for an eps = 1e-8 (the default distance used when no Jacobian is provided)? Make a histogram for each derivative of these errors for 100 random parameter combinations drawn from the bounds (in other words, six plots with 100 samples each).
Hint: you will likely have to remove some nans.
End of explanation
"""
import scipy
times0 = np.array(times) - min(times)
results = scipy.optimize.differential_evolution(# complete
best_x = results.x
print('All done! Best score: `{}`, took `{}` function evaluations.'.format(results.fun, results.nfev))
"""
Explanation: Which derivatives seem to have the least accurate finite differences? Why?
Problem 3
Now we have an analytical function with analytical derivatives that should be accurate to near-machine precision.
Problem 3a
First, let's optimize our function using differential_evolution, as we did above with the MOSFiT output, without using the derivatives we have constructed (as differential_evolution does not use them).
End of explanation
"""
# complete
"""
Explanation: Now plot the result:
End of explanation
"""
times0 = np.array(times) - min(times)
results = scipy.optimize.basinhopping( # complete
best_x = results.x
print('All done! Best score: `{}`, took `{}` function evaluations.'.format(results.fun, results.nfev))
"""
Explanation: How good is the approximation?
Problem 3b
Let's minimize using the basinhopping algorithm now, again not using our derivatives.
End of explanation
"""
def jac(x, tt, mm, vv):
m0, alpha, beta, tau, t0, sigma = tuple(x)
t = np.array(tt)
m = np.array(mm)
v = np.array(vv)
# complete
return jac
results = scipy.optimize.basinhopping( # complete
best_x = results.x
print('All done! Best score: `{}`, took `{}` function evaluations.'.format(results.fun, results.nfev))
# plot the resulting fit
"""
Explanation: This algorithm, which depends on finite differencing, seems to have taken more function evaluations than differential_evolution. Let's give it some help: construct a jacobian using the derivative functions defined above.
Hint: mind the sign of the Jacobian since we are minimizing the function.
End of explanation
"""
global jcount
jcount = 0
# complete
"""
Explanation: If all went well, the jacobian version of the optimization should have taken ~8x fewer function evaluations. But is it faster?
Problem 3c
Compute how many times the Jacobian was called, and estimate how expensive the Jacobian is to compute relative to the objective function. How does this compare to the run that only used finite differencing?
End of explanation
"""
from pyhmc import hmc
# complete
"""
Explanation: Can you think of a reason why using the Jacobian version may be preferable, even if it is slower?
Challenge Problem(s)
Select one (or more) of the following:
Fit a different event using either MOSFiT or the analytical formula. Any supernova can be loaded by name from the internet via the fetch method of the Fetcher class. Examples: SN1987A, SN2011fe, PTF09ge. If you are using the analytical model, exclude all but one of the bands in the dataset.
Optimize the Jacobian function to reuse common functions that are shared between each derivative component (example: $1 - e^{((t + t_0)/\tau)}$ appears frequently in the expressions, it only needs to be computed once).
Sample the posterior in a Monte Carlo framework (using priors of your choice). Samplers like emcee are versatile and work even when derivatives aren't available, but we do have derivatives, so more powerful methods like Hamiltonian MCMC are available to us. A simple HMC for our purposes is available via pip install pyhmc, see the README for instructions on how to construct the input function: https://github.com/rmcgibbo/pyhmc. Plot the resulting samples using the corner package.
End of explanation
"""
|
ocefpaf/secoora | notebooks/timeSeries/sst/00-fetch_data.ipynb | mit | import time
start_time = time.time()
"""
Explanation: <img style='float: left' width="150px" src="http://secoora.org/sites/default/files/secoora_logo.png">
<br><br>
SECOORA Notebook 1
Fetch Sea Surface Temperature time-series data
This notebook fetches weekly time-series of all the SECOORA observations and
models available in the NGDC and SECOORA THREDDS catalogs.
End of explanation
"""
import os
try:
import cPickle as pickle
except ImportError:
import pickle
import iris
import cf_units
from datetime import datetime
from utilities import CF_names, fetch_range, start_log
# 1-week start of data.
kw = dict(start=datetime(2014, 7, 1, 12), days=6)
start, stop = fetch_range(**kw)
# SECOORA region (NC, SC GA, FL).
bbox = [-87.40, 24.25, -74.70, 36.70]
# CF-names.
sos_name = 'sea_water_temperature'
name_list = CF_names[sos_name]
# Units.
units = cf_units.Unit('celsius')
# Logging.
run_name = '{:%Y-%m-%d}'.format(stop)
log = start_log(start, stop, bbox)
# SECOORA models.
secoora_models = ['SABGOM', 'USEAST',
'USF_ROMS', 'USF_SWAN', 'USF_FVCOM']
# Config.
fname = os.path.join(run_name, 'config.pkl')
config = dict(start=start,
stop=stop,
bbox=bbox,
name_list=name_list,
units=units,
run_name=run_name,
secoora_models=secoora_models)
with open(fname,'wb') as f:
pickle.dump(config, f)
from owslib import fes
from utilities import fes_date_filter
kw = dict(wildCard='*',
escapeChar='\\',
singleChar='?',
propertyname='apiso:AnyText')
or_filt = fes.Or([fes.PropertyIsLike(literal=('*%s*' % val), **kw)
for val in name_list])
# Exclude ROMS Averages and History files.
not_filt = fes.Not([fes.PropertyIsLike(literal='*Averages*', **kw)])
begin, end = fes_date_filter(start, stop)
filter_list = [fes.And([fes.BBox(bbox), begin, end, or_filt, not_filt])]
from owslib.csw import CatalogueServiceWeb
endpoint = 'http://www.ngdc.noaa.gov/geoportal/csw'
csw = CatalogueServiceWeb(endpoint, timeout=60)
csw.getrecords2(constraints=filter_list, maxrecords=1000, esn='full')
fmt = '{:*^64}'.format
log.info(fmt(' Catalog information '))
log.info("URL: {}".format(endpoint))
log.info("CSW version: {}".format(csw.version))
log.info("Number of datasets available: {}".format(len(csw.records.keys())))
from utilities import service_urls
dap_urls = service_urls(csw.records, service='odp:url')
sos_urls = service_urls(csw.records, service='sos:url')
# Work around https://github.com/ioos/secoora/issues/184:
dap_urls = [url for url in dap_urls if 'G1_SST_GLOBAL' not in url]
log.info(fmt(' CSW '))
for rec, item in csw.records.items():
log.info('{}'.format(item.title))
log.info(fmt(' SOS '))
for url in sos_urls:
log.info('{}'.format(url))
log.info(fmt(' DAP '))
for url in dap_urls:
log.info('{}.html'.format(url))
from utilities import is_station
# Filter out some station endpoints.
non_stations = []
for url in dap_urls:
try:
if not is_station(url):
non_stations.append(url)
except RuntimeError as e:
log.warn("Could not access URL {}. {!r}".format(url, e))
dap_urls = non_stations
log.info(fmt(' Filtered DAP '))
for url in dap_urls:
log.info('{}.html'.format(url))
"""
Explanation: Save configuration
End of explanation
"""
from utilities import titles, fix_url
for secoora_model in secoora_models:
if titles[secoora_model] not in dap_urls:
log.warning('{} not in the NGDC csw'.format(secoora_model))
dap_urls.append(titles[secoora_model])
# NOTE: USEAST is not archived at the moment!
# https://github.com/ioos/secoora/issues/173
dap_urls = [fix_url(start, url) if
'SABGOM' in url else url for url in dap_urls]
import warnings
from iris.exceptions import CoordinateNotFoundError, ConstraintMismatchError
from utilities import (TimeoutException, secoora_buoys,
quick_load_cubes, proc_cube)
urls = list(secoora_buoys())
if not urls:
raise ValueError("Did not find any SECOORA buoys!")
buoys = dict()
for url in urls:
try:
with warnings.catch_warnings():
warnings.simplefilter("ignore") # Suppress iris warnings.
kw = dict(bbox=bbox, time=(start, stop), units=units)
cubes = quick_load_cubes(url, name_list)
cubes = [proc_cube(cube, **kw) for cube in cubes]
buoy = url.split('/')[-1].split('.nc')[0]
if len(cubes) == 1:
buoys.update({buoy: cubes[0]})
else:
#[buoys.update({'{}_{}'.format(buoy, k): cube}) for
# k, cube in list(enumerate(cubes))]
# FIXME: For now I am choosing the first sensor.
buoys.update({buoy: cubes[0]})
except (RuntimeError, ValueError, TimeoutException,
ConstraintMismatchError, CoordinateNotFoundError) as e:
log.warning('Cannot get cube for: {}\n{}'.format(url, e))
from pyoos.collectors.coops.coops_sos import CoopsSos
collector = CoopsSos()
collector.end_time = stop
collector.start_time = start
collector.variables = [sos_name]
ofrs = collector.server.offerings
title = collector.server.identification.title
log.info(fmt(' Collector offerings '))
log.info('{}: {} offerings'.format(title, len(ofrs)))
from pandas import read_csv
from utilities import sos_request
params = dict(observedProperty=sos_name,
eventTime=start.strftime('%Y-%m-%dT%H:%M:%SZ'),
featureOfInterest='BBOX:{0},{1},{2},{3}'.format(*bbox),
offering='urn:ioos:network:NOAA.NOS.CO-OPS:MetActive')
uri = 'http://opendap.co-ops.nos.noaa.gov/ioos-dif-sos/SOS'
url = sos_request(uri, **params)
observations = read_csv(url)
log.info('SOS URL request: {}'.format(url))
"""
Explanation: Add SECOORA models and observations
End of explanation
"""
from utilities import get_coops_metadata, to_html
columns = {'sensor_id': 'sensor',
'station_id': 'station',
'latitude (degree)': 'lat',
'longitude (degree)': 'lon',
'sea_water_temperature (C)': sos_name}
observations.rename(columns=columns, inplace=True)
observations['sensor'] = [s.split(':')[-1] for s in observations['sensor']]
observations['station'] = [s.split(':')[-1] for s in observations['station']]
observations['name'] = [get_coops_metadata(s)[0] for s in observations['station']]
observations.set_index('name', inplace=True)
to_html(observations.head())
from pandas import DataFrame
from utilities import secoora2df
if buoys:
secoora_observations = secoora2df(buoys, sos_name)
to_html(secoora_observations.head())
else:
secoora_observations = DataFrame()
from pandas import concat
all_obs = concat([observations, secoora_observations], axis=0)
to_html(concat([all_obs.head(2), all_obs.tail(2)]))
"""
Explanation: Clean the DataFrame
End of explanation
"""
from owslib.ows import ExceptionReport
from utilities import pyoos2df, save_timeseries
iris.FUTURE.netcdf_promote = True
log.info(fmt(' Observations '))
outfile = '{:%Y-%m-%d}-OBS_DATA.nc'.format(stop)
outfile = os.path.join(run_name, outfile)
log.info(fmt(' Downloading to file {} '.format(outfile)))
data, bad_station = dict(), []
col = 'sea_water_temperature (C)'
for station in observations.index:
station_code = observations['station'][station]
try:
df = pyoos2df(collector, station_code, df_name=station)
data.update({station_code: df[col]})
except ExceptionReport as e:
bad_station.append(station_code)
log.warning("[{}] {}:\n{}".format(station_code, station, e))
obs_data = DataFrame.from_dict(data)
"""
Explanation: Uniform 6-min time base for model/data comparison
End of explanation
"""
pattern = '|'.join(bad_station)
if pattern:
all_obs['bad_station'] = all_obs.station.str.contains(pattern)
observations = observations[~observations.station.str.contains(pattern)]
else:
all_obs['bad_station'] = ~all_obs.station.str.contains(pattern)
# Save updated `all_obs.csv`.
fname = '{}-all_obs.csv'.format(run_name)
fname = os.path.join(run_name, fname)
all_obs.to_csv(fname)
comment = "Several stations from http://opendap.co-ops.nos.noaa.gov"
kw = dict(longitude=observations.lon,
latitude=observations.lat,
station_attr=dict(cf_role="timeseries_id"),
cube_attr=dict(featureType='timeSeries',
Conventions='CF-1.6',
standard_name_vocabulary='CF-1.6',
cdm_data_type="Station",
comment=comment,
url=url))
save_timeseries(obs_data, outfile=outfile,
standard_name=sos_name, **kw)
to_html(obs_data.head())
"""
Explanation: Split good and bad stations
End of explanation
"""
import numpy as np
from pandas import DataFrame
def extract_series(cube, station):
time = cube.coord(axis='T')
date_time = time.units.num2date(cube.coord(axis='T').points)
data = cube.data
return DataFrame(data, columns=[station], index=date_time)
if buoys:
secoora_obs_data = []
for station, cube in list(buoys.items()):
df = extract_series(cube, station)
secoora_obs_data.append(df)
# Some series have duplicated times!
kw = dict(subset='index', take_last=True)
secoora_obs_data = [obs.reset_index().drop_duplicates(**kw).set_index('index') for
obs in secoora_obs_data]
secoora_obs_data = concat(secoora_obs_data, axis=1)
else:
secoora_obs_data = DataFrame()
"""
Explanation: SECOORA Observations
End of explanation
"""
from utilities.qaqc import filter_spikes, threshold_series
if buoys:
secoora_obs_data.apply(threshold_series, args=(-5, 40))
secoora_obs_data.apply(filter_spikes)
# Interpolate to the same index as SOS.
index = obs_data.index
kw = dict(method='time', limit=30)
secoora_obs_data = secoora_obs_data.reindex(index).interpolate(**kw).ix[index]
log.info(fmt(' SECOORA Observations '))
fname = '{:%Y-%m-%d}-SECOORA_OBS_DATA.nc'.format(stop)
fname = os.path.join(run_name, fname)
log.info(fmt(' Downloading to file {} '.format(fname)))
url = "http://129.252.139.124/thredds/catalog_platforms.html"
comment = "Several stations {}".format(url)
kw = dict(longitude=secoora_observations.lon,
latitude=secoora_observations.lat,
station_attr=dict(cf_role="timeseries_id"),
cube_attr=dict(featureType='timeSeries',
Conventions='CF-1.6',
standard_name_vocabulary='CF-1.6',
cdm_data_type="Station",
comment=comment,
url=url))
save_timeseries(secoora_obs_data, outfile=fname,
standard_name=sos_name, **kw)
to_html(secoora_obs_data.head())
"""
Explanation: These buoys need some QA/QC before saving
End of explanation
"""
from iris.exceptions import (CoordinateNotFoundError, ConstraintMismatchError,
MergeError)
from utilities import time_limit, is_model, get_model_name, get_surface
log.info(fmt(' Models '))
cubes = dict()
with warnings.catch_warnings():
warnings.simplefilter("ignore") # Suppress iris warnings.
for k, url in enumerate(dap_urls):
log.info('\n[Reading url {}/{}]: {}'.format(k+1, len(dap_urls), url))
try:
with time_limit(60*5):
cube = quick_load_cubes(url, name_list, callback=None, strict=True)
if is_model(cube):
cube = proc_cube(cube, bbox=bbox, time=(start, stop), units=units)
else:
log.warning("[Not model data]: {}".format(url))
continue
cube = get_surface(cube)
mod_name, model_full_name = get_model_name(cube, url)
cubes.update({mod_name: cube})
except (TimeoutException, RuntimeError, ValueError,
ConstraintMismatchError, CoordinateNotFoundError,
IndexError) as e:
log.warning('Cannot get cube for: {}\n{}'.format(url, e))
from iris.pandas import as_series
from utilities import (make_tree, get_nearest_water,
add_station, ensure_timeseries, remove_ssh)
for mod_name, cube in cubes.items():
fname = '{:%Y-%m-%d}-{}.nc'.format(stop, mod_name)
fname = os.path.join(run_name, fname)
log.info(fmt(' Downloading to file {} '.format(fname)))
try:
tree, lon, lat = make_tree(cube)
except CoordinateNotFoundError as e:
log.warning('Cannot make KDTree for: {}'.format(mod_name))
continue
# Get model series at observed locations.
raw_series = dict()
for station, obs in all_obs.iterrows():
try:
kw = dict(k=10, max_dist=0.04, min_var=0.01)
args = cube, tree, obs.lon, obs.lat
series, dist, idx = get_nearest_water(*args, **kw)
except ValueError as e:
status = "No Data"
log.info('[{}] {}'.format(status, obs.name))
continue
except RuntimeError as e:
status = "Failed"
log.info('[{}] {}. ({})'.format(status, obs.name, e.message))
continue
if not series:
status = "Land "
else:
raw_series.update({obs['station']: series})
series = as_series(series)
status = "Water "
log.info('[{}] {}'.format(status, obs.name))
if raw_series: # Save cube.
for station, cube in raw_series.items():
cube = add_station(cube, station)
cube = remove_ssh(cube)
try:
cube = iris.cube.CubeList(raw_series.values()).merge_cube()
except MergeError as e:
log.warning(e)
ensure_timeseries(cube)
iris.save(cube, fname)
del cube
log.info(fmt('Finished processing {}\n'.format(mod_name)))
from utilities import nbviewer_link, make_qr
make_qr(nbviewer_link('00-fetch_data.ipynb'))
elapsed = time.time() - start_time
log.info('{:.2f} minutes'.format(elapsed/60.))
log.info('EOF')
with open('{}/log.txt'.format(run_name)) as f:
print(f.read())
"""
Explanation: Loop over the discovered models and save the nearest time-series
End of explanation
"""
|
AEW2015/PYNQ_PR_Overlay | Pynq-Z1/notebooks/Video_PR/Motion_Blur_Filter.ipynb | bsd-3-clause | from pynq.drivers.video import HDMI
from pynq import Bitstream_Part
from pynq.board import Register
from pynq import Overlay
Overlay("demo.bit").download()
"""
Explanation: Don't forget to delete the hdmi_out and hdmi_in when finished
Motion Blur Filter Example
In this notebook, we will demonstrate how to use the motion blur filter. This filter shows that partially reconfigurable modules can use Xilinx IP cores. This filter blurs the video feed horizontally. The length of the blur is determined by a register in the module. This register is controlled by a python slide widget.
<img src="data/motion.jpg"/>
This filter works by adding up the RGB values of the pixels to the left of the pixel being displayed and then dividing by the number of pixels. The number of pixels to blur is determined by a register. A Xilinx divider core is needed to perform the division, because the numerator and denominator of the division are variables.
1. Download base overlay to the board
Ensure that the camera is not connected to the board. Run the following script to provide the PYNQ with its base overlay.
End of explanation
"""
hdmi_in = HDMI('in')
hdmi_out = HDMI('out', frame_list=hdmi_in.frame_list)
hdmi_out.mode(2)
hdmi_out.start()
hdmi_in.start()
"""
Explanation: 2. Connect camera
Physically connect the camera to the HDMI-in port of the PYNQ. Run the following code to instruct the PYNQ to capture the video from the camera and to begin streaming video to your monitor (connected to the HDMI-out port).
End of explanation
"""
Bitstream_Part("motion_p.bit").download()
"""
Explanation: 3. Program board
Run the following script to download the Motion Blur Filter to the PYNQ.
End of explanation
"""
import ipywidgets as widgets
R0 =Register(0)
R0.write(255)
R0_s = widgets.IntSlider(
value=255,
min=0,
max=511,
step=1,
description='Blur:',
disabled=False,
continuous_update=True,
orientation='vertical',
readout=True,
readout_format='i',
slider_color='red'
)
def update_r0(*args):
R0.write(R0_s.value)
R0_s.observe(update_r0, 'value')
widgets.HBox([R0_s])
"""
Explanation: 4. Create a user interface
We will communicate with the filter using a nice user interface. Run the following code to activate that interface.
End of explanation
"""
hdmi_out.stop()
hdmi_in.stop()
del hdmi_out
del hdmi_in
"""
Explanation: 5. Exploration
Move the slider above to change the length of the blur. When the slider is set to zero there is no blur. Notice how quickly the filter responds to the movement of the slider.
6. Clean up
When you are done with the filter, run the following code to stop the video stream
End of explanation
"""
|
ClaudioVZ/Metodos_numericos_I | 01_Raices_de_ecuaciones_de_una_variable/01_Biseccion.ipynb | gpl-2.0 | def raiz(x_l, x_u):
x_r = (x_l + x_u)/2
return x_r
def intervalo_de_raiz(f, x_l, x_u):
x_r = raiz(x_l, x_u)
if f(x_l)*f(x_r) < 0:
x_u = x_r
if f(x_l)*f(x_r) > 0:
x_l = x_r
return x_l, x_u
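# Quick check of the helpers on the example used below: f(x) = x^3 + 4x^2 - 10 on [1, 2].
# Iteration 0 should give x_r = 1.5 and the next interval [1, 1.5].
def f_ejemplo(x):
    return x**3 + 4*x**2 - 10
print(raiz(1, 2))
print(intervalo_de_raiz(f_ejemplo, 1, 2))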
"""
Explanation: Método de la bisección
El método de bisección, conocido también como de corte binario, de partición de intervalos o de Bolzano, es un método de búsqueda incremental en el que el intervalo se divide siempre a la mitad.
grafico
\begin{equation}
x_{raiz} = \frac{x_{inferior} + x_{superior}}{2}
\end{equation}
Algoritmo
la raiz verdadera está en el intervalo [x_inferior, x_superior]
calcular x_raiz
si f(x_inferior)*f(x_raiz) < 0
la raiz verdadera está en el intervalo [x_inferior, x_raiz]
calcular x_raiz
si f(x_inferior)*f(x_raiz) > 0
la raiz verdadera está en el intervalo [x_raiz, x_superior]
calcular x_raiz
si f(x_inferior)*f(x_raiz) = 0
se encontró la raiz verdadera
Ejemplo 1
Encontrar la raiz de
\begin{equation}
y = x^{3} + 4 x^{2} - 10
\end{equation}
la raíz posiblemente se encuentre en $[x_{l}, x_{u}] = [1,2]$
Iteration 0
Current interval
\begin{equation}
[x_{l_{0}}, x_{u_{0}}] = [1, 2]
\end{equation}
Approximate root
\begin{equation}
x_{r_{0}} = \frac{x_{l_{0}} + x_{u_{0}}}{2} = \frac{1 + 2}{2} = 1.5
\end{equation}
Next interval
\begin{equation}
[x_{l_{1}}, x_{u_{1}}] = \left\{
\begin{array}{llcll}
\text{if} & f(x_{l_{0}}) \cdot f(x_{r_{0}}) = f(1) \cdot f(1.5) < 0 & \longrightarrow & [x_{l_{0}}, x_{r_{0}}] = [1, 1.5] & \checkmark \\
\text{if} & f(x_{l_{0}}) \cdot f(x_{r_{0}}) = f(1) \cdot f(1.5) > 0 & \longrightarrow & [x_{r_{0}}, x_{u_{0}}] = [1.5, 2] &
\end{array}
\right.
\end{equation}
Relative error
\begin{equation}
e_{r} = ?
\end{equation}
The relative error cannot be computed yet, because there is no previous approximation to compare against; it becomes available from iteration 1 onward.
Iteration 1
Current interval
\begin{equation}
[x_{l_{1}}, x_{u_{1}}] = [1, 1.5]
\end{equation}
Approximate root
\begin{equation}
x_{r_{1}} = \frac{x_{l_{1}} + x_{u_{1}}}{2} = \frac{1 + 1.5}{2} = 1.25
\end{equation}
Next interval
\begin{equation}
[x_{l_{2}}, x_{u_{2}}] = \left\{
\begin{array}{llcll}
\text{if} & f(x_{l_{1}}) \cdot f(x_{r_{1}}) = f(1) \cdot f(1.25) < 0 & \longrightarrow & [x_{l_{1}}, x_{r_{1}}] = [1, 1.25] & \\
\text{if} & f(x_{l_{1}}) \cdot f(x_{r_{1}}) = f(1) \cdot f(1.25) > 0 & \longrightarrow & [x_{r_{1}}, x_{u_{1}}] = [1.25, 1.5] & \checkmark
\end{array}
\right.
\end{equation}
Relative error
\begin{equation}
e_{r} = \bigg|\frac{x_{r_{1}} - x_{r_{0}}}{x_{r_{1}}}\bigg| \times 100\% = \bigg|\frac{1.25 - 1.5}{1.25}\bigg| \times 100\% = 20\%
\end{equation}
Iteration 2
Current interval
\begin{equation}
[x_{l_{2}}, x_{u_{2}}] = [1.25, 1.5]
\end{equation}
Approximate root
\begin{equation}
x_{r_{2}} = \frac{x_{l_{2}} + x_{u_{2}}}{2} = \frac{1.25 + 1.5}{2} = 1.375
\end{equation}
Next interval
\begin{equation}
[x_{l_{3}}, x_{u_{3}}] = \left\{
\begin{array}{llcll}
\text{if} & f(x_{l_{2}}) \cdot f(x_{r_{2}}) = f(1.25) \cdot f(1.375) < 0 & \longrightarrow & [x_{l_{2}}, x_{r_{2}}] = [1.25, 1.375] & \checkmark \\
\text{if} & f(x_{l_{2}}) \cdot f(x_{r_{2}}) = f(1.25) \cdot f(1.375) > 0 & \longrightarrow & [x_{r_{2}}, x_{u_{2}}] = [1.375, 1.5] &
\end{array}
\right.
\end{equation}
Relative error
\begin{equation}
e_{r} = \bigg|\frac{x_{r_{2}} - x_{r_{1}}}{x_{r_{2}}}\bigg| \times 100\% = \bigg|\frac{1.375 - 1.25}{1.375}\bigg| \times 100\% = 9.09\%
\end{equation}
Iteration 3
Current interval
\begin{equation}
[x_{l_{3}}, x_{u_{3}}] = [1.25, 1.375]
\end{equation}
Approximate root
\begin{equation}
x_{r_{3}} = \frac{x_{l_{3}} + x_{u_{3}}}{2} = \frac{1.25 + 1.375}{2} = 1.3125
\end{equation}
Next interval
\begin{equation}
[x_{l_{4}}, x_{u_{4}}] = \left\{
\begin{array}{llcll}
\text{if} & f(x_{l_{3}}) \cdot f(x_{r_{3}}) = f(1.25) \cdot f(1.3125) < 0 & \longrightarrow & [x_{l_{3}}, x_{r_{3}}] = [1.25, 1.3125] & \\
\text{if} & f(x_{l_{3}}) \cdot f(x_{r_{3}}) = f(1.25) \cdot f(1.3125) > 0 & \longrightarrow & [x_{r_{3}}, x_{u_{3}}] = [1.3125, 1.375] & \checkmark
\end{array}
\right.
\end{equation}
Relative error
\begin{equation}
e_{r} = \bigg| \frac{x_{r_{3}} - x_{r_{2}}}{x_{r_{3}}} \bigg| \times 100\% = \bigg|\frac{1.3125 - 1.375}{1.3125}\bigg| \times 100\% = 4.76\%
\end{equation}
Implementation of auxiliary functions
Pseudocode for computing the root
pascal
function raiz(x_l, x_u)
x_r = (x_l + x_u)/2
return x_r
end function
Pseudocode for determining the next interval
pascal
function intervalo_de_raiz(f(x), x_l, x_u)
x_r = raiz(x_l, x_u)
if f(x_l)*f(x_r) < 0
x_u = x_r
end if
if f(x_l)*f(x_r) > 0
x_l = x_r
end if
return x_l, x_u
end function
End of explanation
"""
def biseccion(f, x_inferior, x_superior):
print("{0:2s}\t{1:12s}\t{2:12s}\t{3:12s}\t{4:16s}".format(' i', 'x inferior', 'x superior', 'raiz', 'error relativo %'))
x_raiz_actual = raiz(x_inferior, x_superior)
i = 0
print("{0:2d}\t{1:12.10f}\t{2:12.10f}\t{3:12.10f}\t{4:16s}".format(i, x_inferior, x_superior, x_raiz_actual, '????????????????'))
error_permitido = 0.000001
while True:
x_raiz_anterior = x_raiz_actual
x_inferior, x_superior = intervalo_de_raiz(f, x_inferior, x_superior)
x_raiz_actual = raiz(x_inferior, x_superior)
if x_raiz_actual != 0:
error_relativo = abs((x_raiz_actual - x_raiz_anterior)/x_raiz_actual)*100
i = i + 1
print("{0:2d}\t{1:12.10f}\t{2:12.10f}\t{3:12.10f}\t{4:16.13f}".format(i, x_inferior, x_superior, x_raiz_actual, error_relativo))
if (error_relativo < error_permitido) or (i>=20):
break
print('\nx =', x_raiz_actual)
"""
Explanation: Non-vectorized implementation
Pseudocode
pascal
function biseccion(f(x), x_inferior, x_superior)
x_raiz_actual = raiz(x_inferior, x_superior)
error_permitido = 0.000001
while(True)
x_raiz_anterior = x_raiz_actual
x_inferior, x_superior = intervalo_de_raiz(f(x), x_inferior, x_superior)
x_raiz_actual = raiz(x_inferior, x_superior)
if x_raiz_actual != 0
error_relativo = abs((x_raiz_actual - x_raiz_anterior)/x_raiz_actual)*100
end if
if error_relativo < error_permitido
exit
end if
end while
mostrar x_raiz_actual
end function
or alternatively
pascal
function biseccion(f(x), x_inferior, x_superior)
x_raiz_actual = raiz(x_inferior, x_superior)
for 1 to maxima_iteracion do
x_raiz_anterior = x_raiz_actual
x_inferior, x_superior = intervalo_de_raiz(f(x), x_inferior, x_superior)
x_raiz_actual = raiz(x_inferior, x_superior)
end for
mostrar x_raiz_actual
end function
End of explanation
"""
def f(x):
y = x**3 + 4*x**2 - 10
return y
intervalo_de_raiz(f, 1, 2)
intervalo_de_raiz(f, 1, 1.5)
intervalo_de_raiz(f, 1.25, 1.5)
biseccion(f, 1, 2)
"""
Explanation: Example 2
Find the root of
\begin{equation}
y = x^{3} + 4 x^{2} - 10
\end{equation}
the root probably lies in the interval $[1,2]$
End of explanation
"""
from math import sin, cos
def g(x):
y = sin(10*x) + cos(3*x)
return y
intervalo_de_raiz(g, 14, 15)
intervalo_de_raiz(g, 14.5, 15)
intervalo_de_raiz(g, 14.75, 15)
biseccion(g, 14, 15)
"""
Explanation: Example 3
Find the root of
\begin{equation}
y = \sin{10 x} + \cos{3 x}
\end{equation}
the root probably lies in the interval $[14,15]$
End of explanation
"""
biseccion(g, 12, 16)
"""
Explanation: Example 4
Find the root of
\begin{equation}
y = \sin{10 x} + \cos{3 x}
\end{equation}
the root probably lies in the interval $[12,16]$
End of explanation
"""
|
NuGrid/NuPyCEE | NSM_test_suite.ipynb | bsd-3-clause | # Do a SYGMA run for each NuGrid metallicity
# imports assumed by this notebook (a sketch: the SYGMA import path may differ between NuPyCEE versions)
import numpy as np
import matplotlib.pyplot as plt
import sygma as s

s_02 = s.sygma(iniZ=0.02, imf_type='salpeter')
s_01 = s.sygma(iniZ=0.01, imf_type='salpeter')
s_006 = s.sygma(iniZ=0.006, imf_type='salpeter')
s_001 = s.sygma(iniZ=0.001, imf_type='salpeter')
s_0001 = s.sygma(iniZ=0.0001, imf_type='salpeter')
# Show the number of neutron star mergers at each timestep; should not be negative, and should be roughly equal!
print(np.sum(s_02.history.nsm_numbers))
print(np.sum(s_01.history.nsm_numbers))
print(np.sum(s_006.history.nsm_numbers))
print(np.sum(s_001.history.nsm_numbers))
print(np.sum(s_0001.history.nsm_numbers))
"""
Explanation: Initialisation
End of explanation
"""
# Name the relevant quantities
mtot = s_001.mgal
nsm_l = s_001.transitionmass
imf0 = s_001.imf_bdys[0]
imf1 = s_001.imf_bdys[1]
# Compute the normalization constant as defined above
k_N = (mtot*0.35) / (imf0**-0.35 - imf1**-0.35) #(I)
"""
Explanation: IMF check
The Salpeter IMF allows one to calculate the number of stars $N_{12}$ in the mass interval [m1, m2] with
(I) $N_{12} = k_N \int _{m1}^{m2} m^{-2.35} dm$
Where $k_{N}$ is the normalization constant, which can be derived from the total amount of mass in the system $M_{tot}$.
Since the total mass $M_{12}$ in the mass interval above can be estimated with
(II) $M_{12} = k_N \int _{m1}^{m2} m^{-1.35} dm$
the mass interval of [8, 100] for neutron star progenitors, [0.1, 100] for the total mass, and $M_{tot}=1e4$ will yield for $k_N$:
$1e4 = \frac{k_N}{0.35}(0.1^{-0.35} - 100^{-0.35})$
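As a numerical cross-check (a minimal sketch; it assumes scipy is available), integrating equation (II) with this normalization recovers the assumed total mass $M_{tot} = 1e4$:
python
from scipy.integrate import quad

k_N_check = (1e4 * 0.35) / (0.1**-0.35 - 100**-0.35)
mass, _ = quad(lambda m: k_N_check * m**-1.35, 0.1, 100)
print(mass)  # ~1e4, i.e. the assumed total mass is recovered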
End of explanation
"""
# Compute the total number of neutron star merger progenitors as defined above
N_nsm = (k_N/1.35) * (nsm_l**-1.35 - imf1**-1.35)
"""
Explanation: The total number of NS merger progenitors $N_{12}$ is then:
$N_{12} = \frac{k_N}{1.35}(8^{-1.35} - 100^{-1.35})$
End of explanation
"""
# Compute the number of NS merger progenitors in one of the SYGMA runs (normalize to mgal)
A_imf = mtot / s_001._imf(imf0, imf1, 2)
N_sim = A_imf * s_001._imf(nsm_l, imf1, 1)
print('Theoretical number of neutron star progenitors: ', N_nsm)
print('Number of neutron star progenitors in SYGMA run: ', N_sim)
print('Ratio (should be ~1): ', N_sim / N_nsm)
"""
Explanation: Compared to a SYGMA run:
End of explanation
"""
# Obtain the fractional isotope yields from a SYGMA run
l = len(s_01.history.ism_iso_yield_nsm)
y = s_01.history.ism_iso_yield_nsm[l-1]
n = np.sum(y)
yields = y / n
# Exclude isotopes for which there are no r-process yields in the yield table
nonzero = np.nonzero(yields)
yields = yields[nonzero]
# Obtain the mass numbers for the isotopes (x-axis ticks)
massnums = []
for i in s_01.history.isotopes:
massnum = i.split('-')[1]
massnums.append(float(massnum))
# Again exclude zero values
massnums = np.asarray(massnums)
massnums = massnums[nonzero]
# Hacky text parser to get all the fractional isotope values from the r-process yield table
r_yields_text = open('yield_tables/r_process_rosswog_2014.txt')
r_yields = r_yields_text.readlines()
lines = []
for line in r_yields:
lines.append(line)
newlines = []
for line in lines:
if '&' in line:
new = line.strip()
new = new.split('&')
newlines.append(new)
massfracs = []
rmassnums = []
for ind, el in enumerate(newlines):
    if ind != 0:
massfracs.append(float(el[2]))
rmassnums.append(float(el[1].split('-')[1]))
# Array of r-process yields to compare with simulation yields
massfracs = np.asarray(massfracs)
# Plot r-process yields against neutron star merger simulation yields (should be nearly identical)
plt.figure(figsize=(12,8))
plt.scatter(massnums, yields, marker='x', s=32, color='red', label='Final neutron star merger ejecta')
plt.scatter(rmassnums, massfracs, s=8, label='r-process yields')
plt.xlim(80, 250)
plt.ylim(0.000000000001, 1)
plt.yscale('log')
plt.xlabel('Mass number')
plt.ylabel('Mass fraction')
plt.legend(loc=4)
#plt.savefig('yield_comparison.png', dpi=200)
"""
Explanation: Ensure r-process yields are being read in properly
End of explanation
"""
# Define the three functions which are fit to the DTD (Power law, 5th and 6th degree polynomials)
def quintic(t, a, b, c, d, e, f):
y = (a*(t**5))+(b*(t**4))+(c*(t**3))+(d*(t**2))+(e*t)+f
return y
def sextic(t, a, b, c, d, e, f, g):
y = a*(t**6) + b*(t**5) + c*(t**4) + d*(t**3) + e*(t**2) + f*t + g
return y
def powlaw(a, t):
y = a / t
return y
# Solar metallicity fit, parameters from chem_evol (see fitting notebook to derive new parameters)
a = -0.0138858377011
b = 1.0712569392
c = -32.1555682584
d = 468.236521089
e = -3300.97955814
f = 9019.62468302
t = np.linspace(10, 22.2987, 100) # Polynomial portion of solar metallicity DTD x-axis
# Define the DTD fit and plot it
y = quintic(t, a, b, c, d, e, f)
plt.plot(t, y)
a = -2.88192413434e-5
b = 0.00387383125623
c = -0.20721471544
d = 5.64382310405
e = -82.6061154979
f = 617.464778362
g = -1840.49386605
y = sextic(t, a, b, c, d, e, f, g)
plt.plot(t, y)
"""
Explanation: Check DTD fits
End of explanation
"""
|
huongttlan/statsmodels | examples/notebooks/statespace_sarimax_stata.ipynb | bsd-3-clause | %matplotlib inline
import numpy as np
import pandas as pd
from scipy.stats import norm
import statsmodels.api as sm
import matplotlib.pyplot as plt
from datetime import datetime
import requests
from io import BytesIO
"""
Explanation: SARIMAX: Introduction
This notebook replicates examples from the Stata ARIMA time series estimation and postestimation documentation.
First, we replicate the four estimation examples http://www.stata.com/manuals13/tsarima.pdf:
ARIMA(1,1,1) model on the U.S. Wholesale Price Index (WPI) dataset.
Variation of example 1 which adds an MA(4) term to the ARIMA(1,1,1) specification to allow for an additive seasonal effect.
ARIMA(2,1,0) x (1,1,0,12) model of monthly airline data. This example allows a multiplicative seasonal effect.
ARMA(1,1) model with exogenous regressors; describes consumption as an autoregressive process on which also the money supply is assumed to be an explanatory variable.
Second, we demonstrate postestimation capabilities to replicate http://www.stata.com/manuals13/tsarimapostestimation.pdf. The model from example 4 is used to demonstrate:
One-step-ahead in-sample prediction
n-step-ahead out-of-sample forecasting
n-step-ahead in-sample dynamic prediction
End of explanation
"""
# Dataset
wpi1 = requests.get('http://www.stata-press.com/data/r12/wpi1.dta').content
data = pd.read_stata(BytesIO(wpi1))
data.index = data.t
# Fit the model
mod = sm.tsa.statespace.SARIMAX(data['wpi'], trend='c', order=(1,1,1))
res = mod.fit()
print(res.summary())
"""
Explanation: ARIMA Example 1: Arima
As can be seen in the graphs from Example 2, the Wholesale price index (WPI) is growing over time (i.e. is not stationary). Therefore an ARMA model is not a good specification. In this first example, we consider a model where the original time series is assumed to be integrated of order 1, so that the difference is assumed to be stationary, and fit a model with one autoregressive lag and one moving average lag, as well as an intercept term.
The postulated data process is then:
$$
\Delta y_t = c + \phi_1 \Delta y_{t-1} + \theta_1 \epsilon_{t-1} + \epsilon_{t}
$$
where $c$ is the intercept of the ARMA model, $\Delta$ is the first-difference operator, and we assume $\epsilon_{t} \sim N(0, \sigma^2)$. This can be rewritten to emphasize lag polynomials as (this will be useful in example 2, below):
$$
(1 - \phi_1 L ) \Delta y_t = c + (1 + \theta_1 L) \epsilon_{t}
$$
where $L$ is the lag operator.
Notice that one difference between the Stata output and the output below is that Stata estimates the following model:
$$
(\Delta y_t - \beta_0) = \phi_1 ( \Delta y_{t-1} - \beta_0) + \theta_1 \epsilon_{t-1} + \epsilon_{t}
$$
where $\beta_0$ is the mean of the differenced process $\Delta y_t$. This model is equivalent to the one estimated in the Statsmodels SARIMAX class, but the interpretation is different. To see the equivalence, note that:
$$
(\Delta y_t - \beta_0) = \phi_1 ( \Delta y_{t-1} - \beta_0) + \theta_1 \epsilon_{t-1} + \epsilon_{t} \\
\Delta y_t = (1 - \phi_1) \beta_0 + \phi_1 \Delta y_{t-1} + \theta_1 \epsilon_{t-1} + \epsilon_{t}
$$
so that $c = (1 - \phi_1) \beta_0$.
End of explanation
"""
# Dataset
data = pd.read_stata(BytesIO(wpi1))
data.index = data.t
data['ln_wpi'] = np.log(data['wpi'])
data['D.ln_wpi'] = data['ln_wpi'].diff()
# Graph data
fig, axes = plt.subplots(1, 2, figsize=(15,4))
# Levels
axes[0].plot(data.index._mpl_repr(), data['wpi'], '-')
axes[0].set(title='US Wholesale Price Index')
# Log difference
axes[1].plot(data.index._mpl_repr(), data['D.ln_wpi'], '-')
axes[1].hlines(0, data.index[0], data.index[-1], 'r')
axes[1].set(title='US Wholesale Price Index - difference of logs');
# Graph data
fig, axes = plt.subplots(1, 2, figsize=(15,4))
fig = sm.graphics.tsa.plot_acf(data.ix[1:, 'D.ln_wpi'], lags=40, ax=axes[0])
fig = sm.graphics.tsa.plot_pacf(data.ix[1:, 'D.ln_wpi'], lags=40, ax=axes[1])
"""
Explanation: Thus the maximum likelihood estimates imply that for the process above, we have:
$$
\Delta y_t = 0.1050 + 0.8740 \Delta y_{t-1} - 0.4206 \epsilon_{t-1} + \epsilon_{t}
$$
where $\epsilon_{t} \sim N(0, 0.5226)$. Finally, recall that $c = (1 - \phi_1) \beta_0$, and here $c = 0.1050$ and $\phi_1 = 0.8740$. To compare with the output from Stata, we could calculate the mean:
$$\beta_0 = \frac{c}{1 - \phi_1} = \frac{0.1050}{1 - 0.8740} = 0.83$$
Note: these values are slightly different from the values in the Stata documentation because the optimizer in Statsmodels has found parameters here that yield a higher likelihood. Nonetheless, they are very close.
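As a quick check (a sketch; it assumes the fitted parameters carry the default statsmodels labels 'intercept' and 'ar.L1'), the implied mean can be recovered directly from the estimates:
python
params = res.params
beta0 = params['intercept'] / (1 - params['ar.L1'])
print(beta0)  # roughly 0.83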
ARIMA Example 2: Arima with additive seasonal effects
This model is an extension of that from example 1. Here the data is assumed to follow the process:
$$
\Delta y_t = c + \phi_1 \Delta y_{t-1} + \theta_1 \epsilon_{t-1} + \theta_4 \epsilon_{t-4} + \epsilon_{t}
$$
The new part of this model is that there is allowed to be an annual seasonal effect (it is annual even though the periodicity is 4 because the dataset is quarterly). The second difference is that this model uses the log of the data rather than the level.
Before estimating the model, we look at graphs showing:
The time series (in logs)
The first difference of the time series (in logs)
The autocorrelation function
The partial autocorrelation function.
From the first two graphs, we note that the original time series does not appear to be stationary, whereas the first-difference does. This supports either estimating an ARMA model on the first-difference of the data, or estimating an ARIMA model with 1 order of integration (recall that we are taking the latter approach). The last two graphs support the use of an ARIMA(1,1,1) model.
End of explanation
"""
# Fit the model
mod = sm.tsa.statespace.SARIMAX(data['ln_wpi'], trend='c', order=(1,1,1))
res = mod.fit()
print(res.summary())
"""
Explanation: To understand how to specify this model in Statsmodels, first recall that from example 1 we used the following code to specify the ARIMA(1,1,1) model:
python
mod = sm.tsa.statespace.SARIMAX(data['wpi'], trend='c', order=(1,1,1))
The order argument is a tuple of the form (AR specification, Integration order, MA specification). The integration order must be an integer (for example, here we assumed one order of integration, so it was specified as 1. In a pure ARMA model where the underlying data is already stationary, it would be 0).
For the AR specification and MA specification components, there are two possibilities. The first is to specify the maximum degree of the corresponding lag polynomial, in which case the component is an integer. For example, if we wanted to specify an ARIMA(1,1,4) process, we would use:
python
mod = sm.tsa.statespace.SARIMAX(data['wpi'], trend='c', order=(1,1,4))
and the corresponding data process would be:
$$
\Delta y_t = c + \phi_1 \Delta y_{t-1} + \theta_1 \epsilon_{t-1} + \theta_2 \epsilon_{t-2} + \theta_3 \epsilon_{t-3} + \theta_4 \epsilon_{t-4} + \epsilon_{t}
$$
or
$$
(1 - \phi_1 L)\Delta y_t = c + (1 + \theta_1 L + \theta_2 L^2 + \theta_3 L^3 + \theta_4 L^4) \epsilon_{t}
$$
When the specification parameter is given as a maximum degree of the lag polynomial, it implies that all polynomial terms up to that degree are included. Notice that this is not the model we want to use, because it would include terms for $\epsilon_{t-2}$ and $\epsilon_{t-3}$, which we don't want here.
What we want is a polynomial that has terms for the 1st and 4th degrees, but leaves out the 2nd and 3rd terms. To do that, we need to provide a tuple for the specification parameter, where the tuple describes the lag polynomial itself. In particular, here we would want to use:
python
ar = 1 # this is the maximum degree specification
ma = (1,0,0,1) # this is the lag polynomial specification
mod = sm.tsa.statespace.SARIMAX(data['wpi'], trend='c', order=(ar,1,ma))
This gives the following form for the process of the data:
$$
\Delta y_t = c + \phi_1 \Delta y_{t-1} + \theta_1 \epsilon_{t-1} + \theta_4 \epsilon_{t-4} + \epsilon_{t} \\
(1 - \phi_1 L)\Delta y_t = c + (1 + \theta_1 L + \theta_4 L^4) \epsilon_{t}
$$
which is what we want.
End of explanation
"""
# Dataset
air2 = requests.get('http://www.stata-press.com/data/r12/air2.dta').content
data = pd.read_stata(BytesIO(air2))
data.index = pd.date_range(start=datetime(data.time[0], 1, 1), periods=len(data), freq='MS')
data['lnair'] = np.log(data['air'])
# Fit the model
mod = sm.tsa.statespace.SARIMAX(data['lnair'], order=(2,1,0), seasonal_order=(1,1,0,12), simple_differencing=True)
res = mod.fit()
print(res.summary())
"""
Explanation: ARIMA Example 3: Airline Model
In the previous example, we included a seasonal effect in an additive way, meaning that we added a term allowing the process to depend on the 4th MA lag. It may be instead that we want to model a seasonal effect in a multiplicative way. We often write the model then as an ARIMA $(p,d,q) \times (P,D,Q)_s$, where the lowercase letters indicate the specification for the non-seasonal component, and the uppercase letters indicate the specification for the seasonal component; $s$ is the periodicity of the seasons (e.g. it is often 4 for quarterly data or 12 for monthly data). The data process can be written generically as:
$$
\phi_p (L) \tilde \phi_P (L^s) \Delta^d \Delta_s^D y_t = A(t) + \theta_q (L) \tilde \theta_Q (L^s) \epsilon_t
$$
where:
$\phi_p (L)$ is the non-seasonal autoregressive lag polynomial
$\tilde \phi_P (L^s)$ is the seasonal autoregressive lag polynomial
$\Delta^d \Delta_s^D y_t$ is the time series, differenced $d$ times, and seasonally differenced $D$ times.
$A(t)$ is the trend polynomial (including the intercept)
$\theta_q (L)$ is the non-seasonal moving average lag polynomial
$\tilde \theta_Q (L^s)$ is the seasonal moving average lag polynomial
sometimes we rewrite this as:
$$
\phi_p (L) \tilde \phi_P (L^s) y_t^* = A(t) + \theta_q (L) \tilde \theta_Q (L^s) \epsilon_t
$$
where $y_t^* = \Delta^d \Delta_s^D y_t$. This emphasizes that just as in the simple case, after we take differences (here both non-seasonal and seasonal) to make the data stationary, the resulting model is just an ARMA model.
As an example, consider the airline model ARIMA $(2,1,0) \times (1,1,0)_{12}$, with an intercept. The data process can be written in the form above as:
$$
(1 - \phi_1 L - \phi_2 L^2) (1 - \tilde \phi_1 L^{12}) \Delta \Delta_{12} y_t = c + \epsilon_t
$$
Here, we have:
$\phi_p (L) = (1 - \phi_1 L - \phi_2 L^2)$
$\tilde \phi_P (L^s) = (1 - \tilde \phi_1 L^{12})$
$d = 1, D = 1, s=12$ indicating that $y_t^*$ is derived from $y_t$ by taking first-differences and then taking 12-th differences.
$A(t) = c$ is the constant trend polynomial (i.e. just an intercept)
$\theta_q (L) = \tilde \theta_Q (L^s) = 1$ (i.e. there is no moving average effect)
It may still be confusing to see the two lag polynomials in front of the time-series variable, but notice that we can multiply the lag polynomials together to get the following model:
$$
(1 - \phi_1 L - \phi_2 L^2 - \tilde \phi_1 L^{12} + \phi_1 \tilde \phi_1 L^{13} + \phi_2 \tilde \phi_1 L^{14} ) y_t^* = c + \epsilon_t
$$
which can be rewritten as:
$$
y_t^* = c + \phi_1 y_{t-1}^* + \phi_2 y_{t-2}^* + \tilde \phi_1 y_{t-12}^* - \phi_1 \tilde \phi_1 y_{t-13}^* - \phi_2 \tilde \phi_1 y_{t-14}^* + \epsilon_t
$$
This is similar to the additively seasonal model from example 2, but the coefficients in front of the autoregressive lags are actually combinations of the underlying seasonal and non-seasonal parameters.
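To make the multiplication explicit, a quick symbolic expansion (a sketch; it assumes sympy is installed) reproduces the combined lag polynomial above:
python
import sympy

L, phi1, phi2, Phi1 = sympy.symbols('L phi_1 phi_2 Phi_1')
print(sympy.expand((1 - phi1*L - phi2*L**2) * (1 - Phi1*L**12)))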
Specifying the model in Statsmodels is done simply by adding the seasonal_order argument, which accepts a tuple of the form (Seasonal AR specification, Seasonal Integration order, Seasonal MA, Seasonal periodicity). The seasonal AR and MA specifications, as before, can be expressed as a maximum polynomial degree or as the lag polynomial itself. Seasonal periodicity is an integer.
For the airline model ARIMA $(2,1,0) \times (1,1,0)_{12}$ with an intercept, the command is:
python
mod = sm.tsa.statespace.SARIMAX(data['lnair'], order=(2,1,0), seasonal_order=(1,1,0,12))
End of explanation
"""
# Dataset
friedman2 = requests.get('http://www.stata-press.com/data/r12/friedman2.dta').content
data = pd.read_stata(BytesIO(friedman2))
data.index = data.time
# Variables
endog = data.ix['1959':'1981', 'consump']
exog = sm.add_constant(data.ix['1959':'1981', 'm2'])
# Fit the model
mod = sm.tsa.statespace.SARIMAX(endog, exog, order=(1,0,1))
res = mod.fit()
print(res.summary())
"""
Explanation: Notice that here we used an additional argument simple_differencing=True. This controls how the order of integration is handled in ARIMA models. If simple_differencing=True, then the time series provided as endog is literally differenced and an ARMA model is fit to the resulting new time series. This implies that a number of initial periods are lost to the differencing process; however, it may be necessary either to compare results to other packages (e.g. Stata's arima always uses simple differencing) or if the seasonal periodicity is large.
The default is simple_differencing=False, in which case the integration component is implemented as part of the state space formulation, and all of the original data can be used in estimation.
ARIMA Example 4: ARMAX (Friedman)
This model demonstrates the use of explanatory variables (the X part of ARMAX). When exogenous regressors are included, the SARIMAX module uses the concept of "regression with SARIMA errors" (see http://robjhyndman.com/hyndsight/arimax/ for details of regression with ARIMA errors versus alternative specifications), so that the model is specified as:
$$
y_t = \beta_t x_t + u_t \\
\phi_p (L) \tilde \phi_P (L^s) \Delta^d \Delta_s^D u_t = A(t) +
\theta_q (L) \tilde \theta_Q (L^s) \epsilon_t
$$
Notice that the first equation is just a linear regression, and the second equation just describes the process followed by the error component as SARIMA (as was described in example 3). One reason for this specification is that the estimated parameters have their natural interpretations.
This specification nests many simpler specifications. For example, regression with AR(2) errors is:
$$
y_t = \beta_t x_t + u_t \\
(1 - \phi_1 L - \phi_2 L^2) u_t = A(t) + \epsilon_t
$$
The model considered in this example is regression with ARMA(1,1) errors. The process is then written:
$$
\text{consump}_t = \beta_0 + \beta_1 \text{m2}_t + u_t \\
(1 - \phi_1 L) u_t = (1 - \theta_1 L) \epsilon_t
$$
Notice that $\beta_0$ is, as described in example 1 above, not the same thing as an intercept specified by trend='c'. Whereas in the examples above we estimated the intercept of the model via the trend polynomial, here, we demonstrate how to estimate $\beta_0$ itself by adding a constant to the exogenous dataset. In the output, the $\beta_0$ is called const, whereas above the intercept $c$ was called intercept in the output.
End of explanation
"""
# Dataset
raw = pd.read_stata(BytesIO(friedman2))
raw.index = raw.time
data = raw.ix[:'1981']
# Variables
endog = data.ix['1959':, 'consump']
exog = sm.add_constant(data.ix['1959':, 'm2'])
nobs = endog.shape[0]
# Fit the model
mod = sm.tsa.statespace.SARIMAX(endog.ix[:'1978-01-01'], exog=exog.ix[:'1978-01-01'], order=(1,0,1))
fit_res = mod.fit()
print(fit_res.summary())
"""
Explanation: ARIMA Postestimation: Example 1 - Dynamic Forecasting
Here we describe some of the post-estimation capabilities of Statsmodels' SARIMAX.
First, using the model from example 4, we estimate the parameters using data that excludes the last few observations (this is a little artificial as an example, but it allows considering performance of out-of-sample forecasting and facilitates comparison to Stata's documentation).
End of explanation
"""
mod = sm.tsa.statespace.SARIMAX(endog, exog=exog, order=(1,0,1))
res = mod.filter(np.array(fit_res.params))
"""
Explanation: Next, we want to get results for the full dataset but using the estimated parameters (on a subset of the data).
End of explanation
"""
# In-sample one-step-ahead predictions
predict_res = res.predict(full_results=True)
predict = predict_res.forecasts
cov = predict_res.forecasts_error_cov
idx = res.data.predict_dates._mpl_repr()
# 95% confidence intervals
critical_value = norm.ppf(1 - 0.05 / 2.)
std_errors = np.sqrt(cov.diagonal().T)
ci = np.c_[
(predict - critical_value*std_errors)[:, :, None],
(predict + critical_value*std_errors)[:, :, None],
]
"""
Explanation: The predict command is first applied here to get in-sample predictions. We use the full_results=True argument to allow us to calculate confidence intervals (the default output of predict is just the predicted values).
With no other arguments, predict returns the one-step-ahead in-sample predictions for the entire sample.
End of explanation
"""
# Dynamic predictions
npredict = data.ix['1978-01-01':].shape[0]
predict_dy_res = res.predict(dynamic=nobs-npredict-1, full_results=True)
predict_dy = predict_dy_res.forecasts
cov_dy = predict_dy_res.forecasts_error_cov
# 95% confidence intervals
critical_value = norm.ppf(1 - 0.05 / 2.)
std_errors_dy = np.sqrt(cov_dy.diagonal().T)
ci_dy = np.c_[
(predict_dy - critical_value*std_errors_dy)[:, :, None],
(predict_dy + critical_value*std_errors_dy)[:, :, None],
]
"""
Explanation: We can also get dynamic predictions. One-step-ahead prediction uses the true values of the endogenous values at each step to predict the next in-sample value. Dynamic predictions use one-step-ahead prediction up to some point in the dataset (specified by the dynamic argument); after that, the previous predicted endogenous values are used in place of the true endogenous values for each new predicted element.
The dynamic argument is specified to be an offset relative to the start argument. If start is not specified, it is assumed to be 0.
Here we perform dynamic prediction starting in the first quarter of 1978.
End of explanation
"""
# Graph
fig, ax = plt.subplots(figsize=(9,4))
npre = 4
ax.set(title='Personal consumption', xlabel='Date', ylabel='Billions of dollars')
dates = data.index[-npredict-npre+1:]._mpl_repr()
ax.plot(dates, data.ix[-npredict-npre+1:, 'consump'], 'o', label='Observed')
# Plot predictions
ax.plot(idx[-npredict-npre:], predict[0, -npredict-npre:], 'r--', label='One-step-ahead forecast');
ax.plot(idx[-npredict-npre:], ci[0, -npredict-npre:], 'r--', alpha=0.3);
ax.plot(idx[-npredict-npre:], predict_dy[0, -npredict-npre:], 'g', label='Dynamic forecast (1978)');
ax.plot(idx[-npredict-npre:], ci_dy[0, -npredict-npre:], 'g:', alpha=0.3);
legend = ax.legend(loc='lower right')
legend.get_frame().set_facecolor('w')
"""
Explanation: We can graph the one-step-ahead and dynamic predictions (and the corresponding confidence intervals) to see their relative performance. Notice that up to the point where dynamic prediction begins (1978:Q1), the two are the same.
End of explanation
"""
# Prediction error
# Graph
fig, ax = plt.subplots(figsize=(9,4))
npre = 4
ax.set(title='Forecast error', xlabel='Date', ylabel='Forecast - Actual')
# In-sample one-step-ahead predictions and 95% confidence intervals
predict_error = predict[0, -npredict-1:] - endog.iloc[-npredict-1:]
predict_ci = ci[0, -npredict-1:] - endog.iloc[-npredict-1:][:, None]
ax.plot(idx[-npredict-1:], predict_error, label='One-step-ahead forecast');
ax.plot(idx[-npredict-1:], predict_ci, 'b--', alpha=0.4)
# Dynamic predictions and 95% confidence intervals
predict_dy_error = predict_dy[0, -npredict-1:] - endog.iloc[-npredict-1:]
predict_dy_ci = ci_dy[0, -npredict-1:] - endog.iloc[-npredict-1:][:, None]
ax.plot(idx[-npredict-1:], predict_dy_error, 'r', label='Dynamic forecast (1978)');
ax.plot(idx[-npredict-1:], predict_dy_ci, 'r--', alpha=0.4)
legend = ax.legend(loc='lower left');
legend.get_frame().set_facecolor('w')
"""
Explanation: Finally, graph the prediction error. It is obvious that, as one would suspect, one-step-ahead prediction is considerably better.
End of explanation
"""
|
PLN-FaMAF/DeepLearningEAIA | deep_learning_tutorial_2.ipynb | bsd-3-clause | import numpy
import keras
from keras import backend as K
from keras import losses, optimizers, regularizers
from keras.datasets import mnist
from keras.layers import Activation, ActivityRegularization, Conv2D, Dense, Dropout, Flatten, MaxPooling2D
from keras.models import Sequential
from keras.utils.np_utils import to_categorical
"""
Explanation: Express Deep Learning in Python: Advanced Layers
The Dense layer is only one of the possible core layers of Keras. Dense is a forward layer; these are the layers that take an input and apply some transformation to it (in this case a matrix multiplication).
Other important layers to consider are: activation layers, regularization layers, dropout layers, convolutional layers, pooling layers, recurrent layers, normalization layers, embedding layers, noise layers, etc.
For this tutorial we will focus on some layers to aid in the tuning of the network: activations, regularizers and dropout; as well as the layers needed to design convolutional neural networks: convolutional and pooling layers.
We will point out other tutorials and examples to learn about the other kind of layers at the end of this tutorial.
End of explanation
"""
model = Sequential()
model.add(Dense(64, input_shape=(784,), activation='relu'))
model.add(Dense(10, activation='softmax'))
"""
Explanation: Activation Functions
A neural network classifier with linear activations has no more representation power than a logistic regression classifier. In order to express non-linearity with a neural network model a non-linear function is needed as activation function for each neuron.
One simple activation function to use is the sigmoid (or logistic) function, the same one used in the logistic regression algorithm, which restricts the output value to be between zero and one. This was one of the most common nonlinearities used as activation function in some of the first versions of neural networks. There are however other possibilities (all the following available in Keras, but there are more which can be adapted):
rectified linear unit (ReLU)
tanh
hard sigmoid
softsign
softplus
exponential linear unit (elu)
scaled exponential linear unit (selu)
leaky rectifier linear unit (Leaky ReLU)
parametric rectified linear unit (PReLU)
Activation Functions Examples
<div style="text-align: right;">Source: https://ujjwalkarn.me/2016/08/09/quick-intro-neural-networks/</div>
Of these, the one most used in present state-of-the-art neural network classifiers is the ReLU, because it typically learns much faster in networks with many layers [1].
There is one more activation worth highlighting: the SoftMax activation. This is generally used as the last activation layer, i.e. as the output of the network. This function, also known as the normalized exponential function, is a generalization of the logistic function that "squashes" a K-dimensional vector ${\displaystyle \mathbf {z}}$ of arbitrary real values to a K-dimensional vector ${\displaystyle \sigma (\mathbf {z} )}$ of real values in the range [0, 1] that add up to 1.
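For intuition, here is a minimal NumPy sketch of the softmax function (illustrative only, not the Keras implementation):
python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))  # subtract the max for numerical stability
    return e / e.sum()

print(softmax(np.array([2.0, 1.0, 0.1])))  # entries lie in [0, 1] and sum to 1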
Activation Functions in Keras
Keras provides two ways to define an activation function; either approach is equally valid.
Activation as a parameter of a forward layer
End of explanation
"""
model = Sequential()
model.add(Dense(64, input_shape=(784,)))
model.add(Activation('tanh'))
model.add(Dense(10))
model.add(Activation('softmax'))
"""
Explanation: Activation as a layer
End of explanation
"""
model = Sequential()
model.add(Dense(64, input_shape=(784,),
activation=K.sigmoid))
model.add(Dense(10, activation='softmax'))
"""
Explanation: Activation from a TensorFlow function
In the previous examples we used some of the available functions in the Keras library.
We can also use an element-wise TensorFlow function as activation.
End of explanation
"""
model = Sequential()
model.add(Dense(64, input_shape=(784,),
activation='relu',
kernel_regularizer=regularizers.l2(0.01),
activity_regularizer=regularizers.l1(0.01)))
model.add(Dense(10, activation='softmax'))
"""
Explanation: Regularizers
Regularizers allow you to apply penalties on layer parameters or layer activity during optimization. These penalties are incorporated in the loss function that the network optimizes. The penalties are applied on a per-layer basis.
The regularizers can be applied to three parameters:
Weight/kernel matrix regularization: Applies the regularizer function to the weight matrix (called kernel matrix in Keras documentation).
Bias regularization: Applies the regularizer to the bias vector.
Activity regularizer: Applies the regularizer to the output (i.e. the activation function).
There are three possible penalties to apply as regularizers already present in Keras (but the API permits the definition of a custom regularizer) [2]: l1, l2 and elasticnet.
Regularizers in Keras
As with activation functions, there are two ways to use a regularizer in Keras, although not both ways work for all the parameters.
Regularization as parameter of a layer
This is the most practical way and the only one which allows the individual regularization of each available parameter.
The regularizer is given as a parameter of the layers (e.g. Dense):
kernel_regularizer: Regularization of the weight matrix.
bias_regularizer: Regularization of the bias vector.
activity_regularizer: Regularization of the total output.
The available penalties for this case are:
keras.regularizers.l1: L1 norm or "sum of weights".
keras.regularizers.l2: L2 norm or "sum of weights squared".
keras.regularizers.l1_l2: Linear combination of L1 and L2 penalties or "elastic net regularization".
For more information on the difference between L1 and L2 see [5].
End of explanation
"""
model = Sequential()
model.add(Dense(64, input_shape=(784,), activation='relu'))
model.add(ActivityRegularization(l1=0.01, l2=0.1))
model.add(Dense(10, activation='softmax'))
"""
Explanation: Regularization as a layer
The core layer ActivityRegularization is another way to apply regularization, in this case (as the name indicates), only for the activation function (not for the weight matrix or the bias vector).
End of explanation
"""
model = Sequential()
model.add(Dense(64, input_shape=(784,), activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
"""
Explanation: Dropout
These are special layers, useful for regularization, which randomly drop (i.e. set to zero) units of the neural network during training. This prevents units from co-adapting too much to the input [3].
Keras has a special Dropout layer which can be added to a sequential model; it takes a rate value between 0 and 1 and, during training, sets that fraction of the input units to 0.
End of explanation
"""
model = Sequential()
# input: 100x100 images with 3 channels -> (100, 100, 3) tensors.
# this applies 32 convolution filters of size 3x3 each.
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(100, 100, 3)))
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
"""
Explanation: Convolutional Neural Networks
CNNs were responsible for major breakthroughs in Image Classification and are the core of most Computer Vision systems today, from Facebook's automated photo tagging to self-driving cars [6].
What is convolution?
A simple way to think about it is as a sliding window function applied to a matrix:
<div style="text-align: right;">Source: http://deeplearning.stanford.edu/wiki/index.php/Feature_extraction_using_convolution</div>
Imagine that the matrix on the left represents a black and white image. Each entry corresponds to one pixel, 0 for black and 1 for white (typically it's between 0 and 255 for grayscale images). The sliding window is called a kernel, filter, or feature detector. Here we use a 3×3 filter, multiply its values element-wise with the original matrix, then sum them up. To get the full convolution we do this for each element by sliding the filter over the whole matrix.
There are different uses for a convolution, particularly in images: averaging each pixel with its neighboring values blurs an image; taking the difference between a pixel and its neighbors detects edges; etc. For a better understanding of how a convolution works we recommend Chris Olah's post.
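For intuition, here is a minimal NumPy sketch of this sliding-window operation (strictly a cross-correlation, which is what most deep learning libraries compute; illustrative only):
python
import numpy as np

def conv2d(image, kernel):
    H, W = image.shape
    h, w = kernel.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # element-wise multiply the window with the kernel and sum
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return out

image = np.eye(5)
kernel = np.ones((3, 3)) / 9.0  # simple averaging filter
print(conv2d(image, kernel))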
What are convolutional neural networks?
CNNs are basically just several layers of convolutions with nonlinear activation functions (e.g. ReLU or tanh) applied to the results.
In a traditional feedforward neural network we connect each input neuron to each output neuron in the next layer. These are fully connected layers (or Dense layers). In CNNs we don't do that. Instead, we use convolutions over the input layer to compute the output. This results in local connections, where each region of the input is connected to a neuron in the output. Each layer applies different filters, typically hundreds or thousands like the ones shown above, and combines their results.
During the training phase, a CNN automatically learns the values of its filters based on the task you want to perform. For example, in Image Classification a CNN may learn to detect edges from raw pixels in the first layer, then use the edges to detect simple shapes in the second layer, and then use these shapes to detect higher-level features, such as facial shapes, in higher layers. The last layer is then a classifier that uses these high-level features.
<div style="text-align: right;">Source: http://www.wildml.com/2015/11/understanding-convolutional-neural-networks-for-nlp/</div>
CNN Hyperparameters
Narrow vs. wide convolution
Applying a 3x3 filter at the center of the matrix works fine, but what about the edges? How would you apply the filter to the first element of a matrix that doesn't have any neighboring elements to the top and left? You can use zero-padding. All elements that would fall outside of the matrix are taken to be zero. By doing this you can apply the filter to every element of your input matrix, and get a larger or equally sized output. Adding zero-padding is also called wide convolution, and not using zero-padding would be a narrow convolution.
<div style="text-align: right;">Source: http://www.wildml.com/2015/11/understanding-convolutional-neural-networks-for-nlp/</div>
The previous example shows the difference between narrow and wide convolution for 1 dimension for an input size of 7 and a filter size of 5.
Stride size
Another hyperparameter for your convolutions is the stride size, defining by how much you want to shift your filter at each step. A larger stride size leads to fewer applications of the filter and a smaller output size. The typical stride size is 1. The following example shows the different outputs of a convolution for different stride sizes (stride size of 1 vs stride size of 2).
<div style="text-align: right;">Source: http://cs231n.github.io/convolutional-networks/</div>
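The usual output-size rule ties these choices together (a sketch): with input width W, filter size F, zero padding P and stride S, the output width is (W - F + 2P)/S + 1:
python
def conv_output_size(W, F, P, S):
    return (W - F + 2 * P) // S + 1

print(conv_output_size(7, 5, 0, 1))  # narrow convolution from the example above: 3
print(conv_output_size(7, 5, 4, 1))  # wide convolution (zero padding of F-1 on each side): 11
print(conv_output_size(7, 3, 0, 2))  # stride of 2 with a 3-wide filter: 3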
Channels
Channels are different "views" of your input data. For example, in image recognition you typically have RGB (red, green, blue) channels. You can apply convolutions across channels, either with different or equal weights.
Pooling layers
A key aspect of Convolutional Neural Networks is the use of pooling layers, typically applied after the convolutional layers. Pooling layers subsample their input. The most common way to do pooling is to apply a max operation to the result of each filter. You don't necessarily need to pool over the complete matrix, you could also pool over a window. For example, the following shows max pooling for a 2x2 window:
<div style="text-align: right;">Source: http://cs231n.github.io/convolutional-networks/#pool</div>
One property of pooling is that it provides a fixed size output matrix, which typically is required for classification. For example, if you have 1,000 filters and you apply max pooling to each, you will get a 1000-dimensional output, regardless of the size of your filters, or the size of your input. This allows you to use variable size sentences, and variable size filters, but always get the same output dimensions to feed into a classifier. Pooling also reduces the output dimensionality but (hopefully) keeps the most salient information. You can think of each filter as detecting a specific feature.
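A minimal NumPy sketch of 2x2 max pooling with stride 2 (illustrative only):
python
import numpy as np

def max_pool_2x2(x):
    H, W = x.shape
    x = x[:H - H % 2, :W - W % 2]  # drop odd edge rows/columns if any
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).max(axis=(1, 3))

x = np.array([[1, 1, 2, 4],
              [5, 6, 7, 8],
              [3, 2, 1, 0],
              [1, 2, 3, 4]])
print(max_pool_2x2(x))  # [[6 8]
                        #  [3 4]]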
CNNs in Keras
Keras has many different kinds of convolutional layers. The most commonly used for doing spatial convolution over images is keras.layers.convolutional.Conv2D. The layer takes as arguments the number of output filters in the convolution, the size of the 2D convolution window, the strides of the convolution and the padding.
Keras also ships with many different pooling layers. For spatial data the layer is keras.layers.pooling.MaxPooling2D. This layer takes the pool size and the data format. The data format corresponds to whether the channels are the first (i.e. the input has shape (batch, channels, height, width)) or the last (i.e. the input has shape (batch, height, width, channels)) dimension (the latter is the default for Keras with a TensorFlow backend).
Finally, there is a layer which doesn't take any parameters and serves as the connection between the convolutional layers and the dense layers: keras.layers.core.Flatten(), which flattens the input to one dimension (without affecting the batch size, i.e. the number of examples to use for training/classifying).
End of explanation
"""
model = Sequential()
model.add(Dense(64, input_shape=(784,), activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
"""
Explanation: Compiling the model: loss functions and optimizers
When compiling a model there are two important parameters: the loss function and the optimizer algorithm. Both of them depend on the problem and can change the performance of the model.
Loss function
Also known as the objective function, this is the function we want to optimize (that is, minimize) when training the algorithm. Depending on the task (whether it is classification or regression), and some other parameters, the objective function can change. Two of the most popular objective functions are the mean squared error for regression and categorical crossentropy for classification. Keras brings a number of different loss functions already available [4], but for this course we will be using only the categorical crossentropy (since we have a classification task to work with).
Optimizer
The optimizer algorithm is the procedure used to find parameter values that minimize the loss function. As with loss functions, there are many optimizers already packaged with Keras. One of the most popular algorithms is the stochastic gradient descent (or SGD) optimizer, which is also one of the simplest to understand. However, in this tutorial we will be exploring other optimizers (e.g. RMSProp, Adam, Adadelta, etc.) which give better results.
Loss function and optimizer in Keras
In Keras, it is the .compile() method of a model that takes the loss function and the optimizer as parameters. The parameters can either be instances of a loss function (e.g. keras.losses.hinge_loss) or an optimizer (e.g. keras.optimizers.RMSprop), or a string calling the loss function/optimizer by the name.
In the case of loss functions, the advantage of using an instance of a function is to have a custom defined loss function besides the ones given by Keras. E.g. you can pass a TensorFlow symbolic function that returns a scalar for each data-point and takes two arguments: the true labels and the predicted labels.
For optimizers, the main difference between an instance and a string is that in the latter case the optimizer will have default parameter values. Besides, there is a wrapper class (keras.optimizers.TFOptimizer) for native TensorFlow optimizers.
Loss function/optimizer as a string
End of explanation
"""
# Simple 1 layer denoising autoencoder
model = Sequential()
model.add(Dense(200, input_shape=(784,), activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(784))
sgd = optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss=losses.mean_squared_error, optimizer=sgd)
"""
Explanation: Loss function/optimizer as an instance
End of explanation
"""
batch_size = 128
num_classes = 10
epochs = 10
TRAIN_EXAMPLES = 20000
TEST_EXAMPLES = 5000
# image dimensions
img_rows, img_cols = 28, 28
# load the data (already shuffled and splitted)
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# reshape the data to add the "channels" dimension
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
# normalize the input in the range [0, 1]
# to make quick runs, select a smaller set of images.
train_mask = numpy.random.choice(x_train.shape[0], TRAIN_EXAMPLES, replace=False)
x_train = x_train[train_mask, :].astype('float32')
y_train = y_train[train_mask]
test_mask = numpy.random.choice(x_test.shape[0], TEST_EXAMPLES, replace=False)
x_test = x_test[test_mask, :].astype('float32')
y_test = y_test[test_mask]
x_train /= 255
x_test /= 255
print('Train samples: %d' % x_train.shape[0])
print('Test samples: %d' % x_test.shape[0])
# convert class vectors to binary class matrices
y_train = to_categorical(y_train, num_classes)
y_test = to_categorical(y_test, num_classes)
# define the network architecture
model = Sequential()
model.add(Conv2D(filters=16,
kernel_size=(3, 3),
strides=(1,1),
padding='valid',
activation='relu',
input_shape=input_shape,
activity_regularizer='l2'))
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
# compile the model
model.compile(loss=losses.categorical_crossentropy,
optimizer=optimizers.RMSprop(),
metrics=['accuracy'])
# train the model
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
# evaluate the model
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss: %.2f' % score[0])
print('Test accuracy: %.2f' % (100. * score[1]))
"""
Explanation: Categorical format
When using a loss function for classification with more than 2 classes (e.g. the categorical crossentropy), Keras requires the targets to be in categorical format (e.g. if you have 10 classes, the target for each sample should be a 10-dimensional vector that is all zeros except for a 1 at the index corresponding to the class of the sample). In order to convert integer targets into categorical targets, you can use the Keras utility keras.utils.np_utils.to_categorical to transform an input vector of integers into a matrix of one-hot encoding representations.
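For example, using the to_categorical utility imported at the top of this notebook:
python
print(to_categorical([0, 1, 2], 4))  # a 3x4 one-hot matrix, one row per integer target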
Wrapping up
Finally, to end the tutorial, we apply what we have learned so far to create a new classifier for the MNIST dataset.
End of explanation
"""
|
totalgood/twip | docs/notebooks/08 Features -- TFIDF with Gensim.ipynb | mit | dates = pd.read_csv(os.path.join(DATA_PATH, 'datetimes.csv.gz'), engine='python')
nums = pd.read_csv(os.path.join(DATA_PATH, 'numbers.csv.gz'), engine='python')
df = pd.read_csv(os.path.join(DATA_PATH, 'text.csv.gz'))
df.tokens
d = Dictionary.from_documents(([str(s) for s in row]for row in df.tokens))
df.tokens.iloc[0]
# one way to fix this
df.tokens = df.tokens.apply(eval)
"""
Explanation: Load previously cleaned data
End of explanation
"""
df['tokens'] = df.txt.str.split()
df.tokens
"""
Explanation: When we said "QUOTE_NONNUMERIC" we didn't mean ALL nonnumeric fields ;)
So we can recreate the token lists using split() again
End of explanation
"""
df.tokens.values[0:3]
d = Dictionary.from_documents(df.tokens)
d
tfidf = TfidfModel(d)
"""
Explanation: That's more like it, our tokens are now lists of strings not stringified lists of strings ;)
End of explanation
"""
TfidfModel?
TfidfModel(df.txt)
TfidfModel(df.tokens)
TfidfModel((d.doc2bow(tokens) for tokens in df.tokens))
"""
Explanation: Hint-Hint: gensim is sprinting this week at PyCon!
End of explanation
"""
pd.Series(d.dfs)
pd.Series(d.iteritems())
"""
Explanation: But there's a simpler way.
We already have a vocabulary
with term and document frequencies in a matrix...
End of explanation
"""
pd.Series(d.doc2bow(toks) for toks in df.tokens[:6])
"""
Explanation: OK, now I get it
document is a list of strings (ordered sequence of tokens)
bow or [bag of words] is a list of Counter-like mappings between word IDs and their count in each document
TfidfModel is a transformation from a BOW into a BORF, a "bag of relative frequencies"
TFIDF = BORF = term frequencies normalized by document occurrence counts
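For instance, a minimal sketch of that transformation on the first tweet, building the model from the dictionary (as done a little further below):
python
tfidf_model = TfidfModel(dictionary=d)
bow = d.doc2bow(df.tokens.iloc[0])
tfidf_model[bow][:5]  # (token id, tf-idf weight) pairs for the first document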
End of explanation
"""
d.token2id['python']
d.token2id['Python']
d.token2id['you']
d[1] # guesses anyone?
tfidf = TfidfModel(dictionary=d)
tfidf
dfs = pd.Series(OrderedDict(sorted([(d.id2token[i], numdocs) for (i, numdocs) in tfidf.dfs.items()])))
dfs
dfs.iloc[4000:4030]
tfidf.num_docs
tfidf.num_nnz
tfidf.save(os.path.join(DATA_PATH, 'tfidf'))
tfidf2 = TfidfModel.load(os.path.join(DATA_PATH, 'tfidf'))
tfidf2.num_nnz
"""
Explanation: Did it assign 0 to the first word it found?
Sort-of...
End of explanation
"""
|
AllenDowney/ThinkBayes2 | examples/elephants_soln.ipynb | mit | # Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
import numpy as np
import pandas as pd
# import classes from thinkbayes2
from thinkbayes2 import Pmf, Cdf, Suite, Joint
from thinkbayes2 import MakePoissonPmf, EvalBinomialPmf, MakeMixture
import thinkplot
"""
Explanation: Think Bayes
Copyright 2018 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
"""
from itertools import combinations
def power_set(s):
n = len(s)
for r in range(1, n+1):
for combo in combinations(s, r):
yield ''.join(combo)
"""
Explanation: Cats and rats and elephants
Suppose there are six species that might be in a zoo: lions and tigers and bears, and cats and rats and elephants. Every zoo has a subset of these species, and every subset is equally likely.
One day we visit a zoo and see 3 lions, 2 tigers, and one bear. Assuming that every animal in the zoo has an equal chance to be seen, what is the probability that the next animal we see is an elephant?
Solution
I'll start by enumerating all possible zoos with itertools.
End of explanation
"""
def enumerate_zoos(all_species, present):
"""Enumerate all zoos that contain `present`.
all_species: sequence of all species
present: sequence of species present
yields: possible zoos
"""
present = set(present)
    for combo in power_set(all_species):
intersect = set(combo) & present
if len(intersect) == len(present):
yield len(combo), combo
"""
Explanation: Now we can enumerate only the zoos that are possible, given a set of animals known to be present.
End of explanation
"""
species = 'LTBCRE'
present = 'LTB'
for n, zoo in enumerate_zoos(species, present):
print(n, zoo)
"""
Explanation: Here are the possible zoos.
End of explanation
"""
class Dirichlet(object):
"""Represents a Dirichlet distribution.
See http://en.wikipedia.org/wiki/Dirichlet_distribution
"""
def __init__(self, n, conc=1, label=None):
"""Initializes a Dirichlet distribution.
n: number of dimensions
conc: concentration parameter (smaller yields more concentration)
label: string label
"""
if n < 2:
raise ValueError('A Dirichlet distribution with '
'n<2 makes no sense')
self.n = n
self.params = np.ones(n, dtype=np.float) * conc
self.label = label if label is not None else '_nolegend_'
def update(self, data):
"""Updates a Dirichlet distribution.
data: sequence of observations, in order corresponding to params
"""
m = len(data)
self.params[:m] += data
def random(self):
"""Generates a random variate from this distribution.
Returns: normalized vector of fractions
"""
p = np.random.gamma(self.params)
return p / p.sum()
def mean(self):
"""Array of means."""
return self.params / self.params.sum()
"""
Explanation: To represent the prior and posterior distributions I'll use a hierarchical model with one Dirichlet object for each possible zoo.
At the bottom of the hierarchy, it is easy to update each Dirichlet object just by adding the observed frequencies to the parameters.
In order to update the top of the hierarchy, we need the total probability of the data for each hypothetical zoo. When we do an update using grid algorithms, we get the probability of the data for free, since it is the normalizing constant.
But when we do an update using a conjugate distribution, we don't get the total probability of the data, and for a Dirichlet distribution it is not easy to compute.
However, we can estimate it by drawing samples from the Dirichlet distribution, and then computing the probability of the data for each sample.
End of explanation
"""
d4 = Dirichlet(4)
"""
Explanation: Here's an example that represents a zoo with 4 animals.
End of explanation
"""
p = d4.random()
"""
Explanation: Here's a sample from it.
End of explanation
"""
from scipy.stats import multinomial
data = [3, 2, 1, 0]
m = sum(data)
multinomial(m, p).pmf(data)
"""
Explanation: Now we can compute the probability of the data, given these prevalences, using the multinomial distribution.
End of explanation
"""
def zero_pad(a, n):
"""Why does np.pad have to be so complicated?
"""
res = np.zeros(n)
res[:len(a)] = a
return res
"""
Explanation: Since I only observed 3 species, and my hypothetical zoo has 4, I had to zero-pad the data. Here's a function that makes that easier:
End of explanation
"""
data = [3, 2, 1]
zero_pad(data, 4)
"""
Explanation: Here's an example:
End of explanation
"""
def sample_likelihood(dirichlet, data, iters=1000):
"""Estimate the total probability of the data.
dirichlet: Dirichlet object
data: array of observed frequencies
iters: number of samples to draw
"""
data = zero_pad(data, dirichlet.n)
m = np.sum(data)
likes = [multinomial(m, dirichlet.random()).pmf(data)
for i in range(iters)]
return np.mean(likes)
"""
Explanation: Let's pull all that together. Here's a function that estimates the total probability of the data by sampling from the Dirichlet distribution:
End of explanation
"""
sample_likelihood(d4, data)
"""
Explanation: And here's an example:
End of explanation
"""
class Zoo(Suite):
def Likelihood(self, data, hypo):
"""
data: sequence of counts
hypo: Dirichlet object
"""
return sample_likelihood(hypo, data)
"""
Explanation: Now we're ready to solve the problem.
Here's a Suite that represents the set of possible zoos. The likelihood of any zoo is just the total probability of the data.
End of explanation
"""
suite = Zoo([Dirichlet(n, label=''.join(zoo))
for n, zoo in enumerate_zoos(species, present)]);
def print_zoos(suite):
for d, p in suite.Items():
print(p, d.label)
print_zoos(suite)
"""
Explanation: We can construct the prior by enumerating the possible zoos.
End of explanation
"""
suite.Update(data)
"""
Explanation: We can update the top level of the hierarchy by calling Update
End of explanation
"""
for hypo in suite:
hypo.update(data)
"""
Explanation: We have to update the bottom level explicitly.
End of explanation
"""
print_zoos(suite)
"""
Explanation: Here's the posterior for the top level.
End of explanation
"""
pmf_n = Pmf()
for d, p in suite.Items():
pmf_n[d.n] += p
"""
Explanation: Here's how we can get the posterior distribution of n, the number of species.
End of explanation
"""
thinkplot.Hist(pmf_n)
print(pmf_n.Mean())
thinkplot.decorate(xlabel='n',
ylabel='PMF',
title='Posterior distribution of n')
"""
Explanation: And here's what it looks like.
End of explanation
"""
def enumerate_posterior(suite):
for d, p in suite.Items():
mean = d.mean()
index = d.label.find('E')
p_elephant = 0 if index == -1 else mean[index]
yield d, p, p_elephant
"""
Explanation: Now, to answer the question, we have to compute the posterior distribution of the prevalence of elephants. Here's a function that computes it.
End of explanation
"""
for d, p, p_elephant in enumerate_posterior(suite):
print(d.label, p, p_elephant)
"""
Explanation: Here are the possible zoos, the posterior probability of each, and the conditional prevalence of elephants for each.
End of explanation
"""
# np.sum over a generator is deprecated; the builtin sum does the same job here
total = sum(p * p_elephant
            for d, p, p_elephant in enumerate_posterior(suite))
"""
Explanation: Finally, we can use the law of total probability to compute the probability of seeing an elephant.
End of explanation
"""
|
daniel-severo/dask-ml | docs/source/examples/predict.ipynb | bsd-3-clause | import numpy as np
import dask.array as da
from sklearn.datasets import make_classification
X_train, y_train = make_classification(
n_features=2, n_redundant=0, n_informative=2,
random_state=1, n_clusters_per_class=1, n_samples=1000)
N = 100
X = da.concatenate([da.from_array(X_train, chunks=X_train.shape)
for _ in range(N)])
y = da.concatenate([da.from_array(y_train, chunks=y_train.shape)
for _ in range(N)])
"""
Explanation: Out-of-core Prediction
For some estimators, additional data don't improve performance past a certain point.
The learning curve levels off.
You may have additional data, but using it in the fit step won't make any difference.
In these cases, you'll commonly fit a model on a dataset that fits in memory, and use it to predict for datasets that may not.
Dask can make the prediction step easier and faster.
End of explanation
"""
from sklearn.linear_model import LogisticRegressionCV
clf = LogisticRegressionCV()
clf.fit(X_train, y_train)
"""
Explanation: So X_train and y_train are regular numpy arrays that we'll use to fit the model.
X and y are large dask arrays that may not fit in memory.
End of explanation
"""
yhat = X.map_blocks(clf.predict_proba, dtype=np.float64)
yhat
yhat[:5].compute()
"""
Explanation: With the model, we can make predictions for each observation by mapping the clf.predict_proba method over each block. This can then be scheduled to run on your single machine or your cluster.
End of explanation
"""
|
Tsiems/machine-learning-projects | Lab1/.ipynb_checkpoints/Lab1-Travis-checkpoint.ipynb | mit | import pandas as pd
import numpy as np
df = pd.read_csv('data/data.csv') # read in the csv file
"""
Explanation: Lab 1: Exploring NFL Play-By-Play Data
Data Loading and Preprocessing
To begin, we load the data into a Pandas data frame from a csv file.
End of explanation
"""
df.head()
"""
Explanation: Let's take a cursory glance at the data to see what we're working with.
End of explanation
"""
columns_to_delete = ['Unnamed: 0', 'Date', 'time', 'TimeUnder',
'PosTeamScore', 'PassAttempt', 'RushAttempt',
'DefTeamScore', 'Season', 'PlayAttempted']
#Iterate through and delete the columns we don't want
for col in columns_to_delete:
if col in df:
del df[col]
"""
Explanation: There's a lot of data that we don't care about. For example, 'PassAttempt' is a binary attribute, but there's also an attribute called 'PlayType' which is set to 'Pass' for a passing play.
We define a list of the columns which we're not interested in, and then we delete them
End of explanation
"""
df.columns
"""
Explanation: We can then grab a list of the remaining column names
End of explanation
"""
df.info()
df = df.replace(to_replace=np.nan,value=-1)
"""
Explanation: As a temporary, simple fix, we replace missing values (NaN) with -1 so that the columns can be cast to integers (instead of objects)
End of explanation
"""
df.info()
"""
Explanation: At this point, lots of things are encoded as objects, or with excessively large data types
End of explanation
"""
continuous_features = ['TimeSecs', 'PlayTimeDiff', 'yrdln', 'yrdline100',
'ydstogo', 'ydsnet', 'Yards.Gained', 'Penalty.Yards',
'ScoreDiff', 'AbsScoreDiff']
ordinal_features = ['Drive', 'qtr', 'down']
binary_features = ['GoalToGo', 'FirstDown','sp', 'Touchdown', 'Safety', 'Fumble']
categorical_features = df.columns.difference(continuous_features).difference(ordinal_features)
"""
Explanation: We define four lists based on the types of features we're using.
Binary features are separated from the other categorical features so that they can be stored in less space
End of explanation
"""
df[continuous_features] = df[continuous_features].astype(np.float64)
df[ordinal_features] = df[ordinal_features].astype(np.int64)
df[binary_features] = df[binary_features].astype(np.int8)
"""
Explanation: We then cast all of the columns to the appropriate underlying data types
End of explanation
"""
df['PassOutcome'].replace(['Complete', 'Incomplete Pass'], [1, 0], inplace=True)
df = df[df["PlayType"] != 'Quarter End']
df = df[df["PlayType"] != 'Two Minute Warning']
df = df[df["PlayType"] != 'End of Game']
"""
Explanation: Some additional reformatting: encode the pass outcome as a binary value and drop rows that are not actual plays (quarter ends, two-minute warnings, and end-of-game markers).
End of explanation
"""
df.info()
"""
Explanation: Now all of the objects are encoded the way we'd like them to be
End of explanation
"""
df.describe()
import matplotlib.pyplot as plt
import warnings
warnings.simplefilter('ignore', DeprecationWarning)
#Embed figures in the Jupyter Notebook
%matplotlib inline
#Use GGPlot style for matplotlib
plt.style.use('ggplot')
pass_plays = df[df['PlayType'] == "Pass"]
pass_plays_grouped = pass_plays.groupby(by=['Passer'])
"""
Explanation: Now we can start to take a look at what's in each of our columns
End of explanation
"""
first_downs_grouped = df.groupby(by=['FirstDown'])
print(first_downs_grouped['Yards.Gained'].count())
print("-----------------------------")
print(first_downs_grouped['Yards.Gained'].sum())
print("-----------------------------")
print(first_downs_grouped['Yards.Gained'].sum()/first_downs_grouped['Yards.Gained'].count())
"""
Explanation: Look at the play counts, total yards, and average yards gained, grouped by whether the play resulted in a first down
End of explanation
"""
plays_grouped = df.groupby(by=['PlayType'])
print(plays_grouped['Yards.Gained'].count())
print("-----------------------------")
print(plays_grouped['Yards.Gained'].sum())
print("-----------------------------")
print(plays_grouped['Yards.Gained'].sum()/plays_grouped['Yards.Gained'].count())
"""
Explanation: Group by play type
End of explanation
"""
size = 10
corr = df.corr()
fig, ax = plt.subplots(figsize=(size, size))
ax.matshow(corr)
plt.xticks(range(len(corr.columns)), corr.columns);
for tick in ax.get_xticklabels():
tick.set_rotation(90)
plt.yticks(range(len(corr.columns)), corr.columns);
"""
Explanation: Next, let's look at how the numeric attributes relate to each other: we compute the correlation matrix and visualize it with matshow, rotating the x-axis tick labels so the column names stay readable.
End of explanation
"""
import seaborn as sns
# df_dropped = df.dropna()
# df_dropped.info()
selected_types = df.select_dtypes(exclude=["object"])
useful_attributes = df[['FieldGoalDistance','ydstogo']]
print(useful_attributes)
sns.heatmap(corr)
cluster_corr = sns.clustermap(corr)
plt.setp(cluster_corr.ax_heatmap.yaxis.get_majorticklabels(), rotation=0)
# plt.xticks(rotation=90)
"""
Explanation: We can also visualize the same correlation matrix with seaborn: first as a plain heatmap, and then as a clustermap, which reorders the rows and columns so that strongly correlated attributes end up grouped together.
The short sketch after this cell pulls out the most strongly correlated attribute pairs from that matrix.
End of explanation
"""
|
BinRoot/TensorFlow-Book | ch04_classification/Concept04_softmax.ipynb | mit | %matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
"""
Explanation: Ch 04: Concept 04
Softmax classification
Import the usual libraries:
End of explanation
"""
learning_rate = 0.01
training_epochs = 1000
num_labels = 3
batch_size = 100
x1_label0 = np.random.normal(1, 1, (100, 1))
x2_label0 = np.random.normal(1, 1, (100, 1))
x1_label1 = np.random.normal(5, 1, (100, 1))
x2_label1 = np.random.normal(4, 1, (100, 1))
x1_label2 = np.random.normal(8, 1, (100, 1))
x2_label2 = np.random.normal(0, 1, (100, 1))
plt.scatter(x1_label0, x2_label0, c='r', marker='o', s=60)
plt.scatter(x1_label1, x2_label1, c='g', marker='x', s=60)
plt.scatter(x1_label2, x2_label2, c='b', marker='_', s=60)
plt.show()
"""
Explanation: Generate some initial 2D data:
End of explanation
"""
xs_label0 = np.hstack((x1_label0, x2_label0))
xs_label1 = np.hstack((x1_label1, x2_label1))
xs_label2 = np.hstack((x1_label2, x2_label2))
xs = np.vstack((xs_label0, xs_label1, xs_label2))
labels = np.matrix([[1., 0., 0.]] * len(x1_label0) + [[0., 1., 0.]] * len(x1_label1) + [[0., 0., 1.]] * len(x1_label2))
arr = np.arange(xs.shape[0])
np.random.shuffle(arr)
xs = xs[arr, :]
labels = labels[arr, :]
"""
Explanation: Define the labels and shuffle the data:
End of explanation
"""
test_x1_label0 = np.random.normal(1, 1, (10, 1))
test_x2_label0 = np.random.normal(1, 1, (10, 1))
test_x1_label1 = np.random.normal(5, 1, (10, 1))
test_x2_label1 = np.random.normal(4, 1, (10, 1))
test_x1_label2 = np.random.normal(8, 1, (10, 1))
test_x2_label2 = np.random.normal(0, 1, (10, 1))
test_xs_label0 = np.hstack((test_x1_label0, test_x2_label0))
test_xs_label1 = np.hstack((test_x1_label1, test_x2_label1))
test_xs_label2 = np.hstack((test_x1_label2, test_x2_label2))
test_xs = np.vstack((test_xs_label0, test_xs_label1, test_xs_label2))
test_labels = np.matrix([[1., 0., 0.]] * 10 + [[0., 1., 0.]] * 10 + [[0., 0., 1.]] * 10)
"""
Explanation: We'll get back to this later, but the following are test inputs that we'll use to evaluate the model:
End of explanation
"""
train_size, num_features = xs.shape
X = tf.placeholder("float", shape=[None, num_features])
Y = tf.placeholder("float", shape=[None, num_labels])
W = tf.Variable(tf.zeros([num_features, num_labels]))
b = tf.Variable(tf.zeros([num_labels]))
y_model = tf.nn.softmax(tf.matmul(X, W) + b)
cost = -tf.reduce_sum(Y * tf.log(y_model))
train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
correct_prediction = tf.equal(tf.argmax(y_model, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
"""
Explanation: Again, define the placeholders, variables, model, and cost function:
End of explanation
"""
with tf.Session() as sess:
tf.global_variables_initializer().run()
for step in range(training_epochs * train_size // batch_size):
offset = (step * batch_size) % train_size
batch_xs = xs[offset:(offset + batch_size), :]
batch_labels = labels[offset:(offset + batch_size)]
err, _ = sess.run([cost, train_op], feed_dict={X: batch_xs, Y: batch_labels})
if step % 100 == 0:
print (step, err)
W_val = sess.run(W)
print('w', W_val)
b_val = sess.run(b)
print('b', b_val)
print("accuracy", accuracy.eval(feed_dict={X: test_xs, Y: test_labels}))
"""
Explanation: Train the softmax classification model:
End of explanation
"""
|
jdhp-docs/python-notebooks | python_geopandas_cities_near_paris_saclay_en.ipynb | mit | !wget http://osm13.openstreetmap.fr/~cquest/openfla/export/communes-20180101-shp.zip
!unzip -u communes-20180101-shp.zip
import geopandas
"""
Explanation: Cities near Paris Saclay
http://geopandas.org/gallery/plotting_basemap_background.html#adding-a-background-map-to-plots
https://www.data.gouv.fr/fr/datasets/contours-des-departements-francais-issus-d-openstreetmap/
End of explanation
"""
df = geopandas.read_file("communes-20181110.shp")
#df = df.loc[df.insee == "78646"]
dept_78 = [str(insee) for insee in df.insee.values if insee.startswith("78")]
dept_91 = [str(insee) for insee in df.insee.values if insee.startswith("91")]
dept_92 = [str(insee) for insee in df.insee.values if insee.startswith("92")]
dept_95 = [str(insee) for insee in df.insee.values if insee.startswith("95")]
dept_78, dept_91, dept_92, dept_95
## Yvelines ##########################################
# https://fr.wikipedia.org/wiki/Yvelines#D%C3%A9coupage_administratif
# https://commons.wikimedia.org/wiki/File:Yvelines_intercommunalit%C3%A9.svg?uselang=fr
# https://fr.wikipedia.org/wiki/Saint-Quentin-en-Yvelines
communes_sqy = [
"78621", # Trappes
"78165", # Les Clayes-sous-Bois
"78168", # Coignières
"78208", # Élancourt
"78297", # Guyancourt
"78356", # Magny-les-Hameaux
"78383", # Maurepas
"78423", # Montigny-le-Bretonneux
"78490", # Plaisir
"78644", # La Verrière
"78674", # Villepreux
"78688", # Voisins-le-Bretonneux
]
# https://fr.wikipedia.org/wiki/Canton_de_Montigny-le-Bretonneux
canton_montigny = [
"78423", # Montigny-le-Bretonneux
"78297", # Guyancourt
]
# https://fr.wikipedia.org/wiki/Canton_de_Versailles-1
canton_versailles1 = [
"78646", # Versailles
]
# https://fr.wikipedia.org/wiki/Canton_de_Versailles-2
canton_versailles2 = [
"78646", # Versailles
"78117", # Buc
"78322", # Jouy-en-Josas
"78343", # Les Loges-en-Josas
"78640", # Vélizy-Villacoublay
"78686", # Viroflay
]
# https://fr.wikipedia.org/wiki/Canton_de_Maurepas
canton_maurepas = [
"78383", # Maurepas
"78143", # Châteaufort
"78160", # Chevreuse
"78162", # Choisel
"78168", # Coignières
"78193", # Dampierre-en-Yvelines
"78334", # Lévis-Saint-Nom
"78356", # Magny-les-Hameaux
"78397", # Le Mesnil-Saint-Denis
"78406", # Milon-la-Chapelle
"78548", # Saint-Forget
"78561", # Saint-Lambert
"78575", # Saint-Rémy-lès-Chevreuse
"78590", # Senlisse
"78620", # Toussus-le-Noble
"78688", # Voisins-le-Bretonneux
]
# https://fr.wikipedia.org/wiki/Canton_de_Rambouillet
canton_rambouillet = [
"78517", # Rambouillet
"78003", # Ablis
"78009", # Allainville
"78030", # Auffargis
"78071", # Boinville-le-Gaillard
"78077", # La Boissière-École
"78087", # Bonnelles
"78108", # Les Bréviaires
"78120", # Bullion
"78125", # La Celle-les-Bordes
"78128", # Cernay-la-Ville
"78164", # Clairefontaine-en-Yvelines
"78209", # Émancé
"78220", # Les Essarts-le-Roi
"78264", # Gambaiseuil
"78269", # Gazeran
"78307", # Hermeray
"78349", # Longvilliers
"78407", # Mittainville
"78464", # Orcemont
"78470", # Orphin
"78472", # Orsonville
"78478", # Paray-Douaville
"78486", # Le Perray-en-Yvelines
"78497", # Poigny-la-Forêt
"78499", # Ponthévrard
"78506", # Prunay-en-Yvelines
"78516", # Raizeux
"78522", # Rochefort-en-Yvelines
"78537", # Saint-Arnoult-en-Yvelines
"78557", # Saint-Hilarion
"78562", # Saint-Léger-en-Yvelines
"78564", # Saint-Martin-de-Bréthencourt
"78569", # Sainte-Mesme
"78601", # Sonchamp
"78655", # Vieille-Église-en-Yvelines
]
## Essonne ###########################################
# https://fr.wikipedia.org/wiki/Canton_de_Massy
canton_massy = [
"91377", # Massy
"91161", # Chilly-Mazarin
]
# https://fr.wikipedia.org/wiki/Canton_de_Palaiseau
canton_palaiseau = [
"91477", # Palaiseau
"91312", # Igny
"91471", # Orsay
]
# https://fr.wikipedia.org/wiki/Canton_de_Gif-sur-Yvette
canton_gif = [
"91272", # Gif-sur-Yvette
"91064", # Bièvres
"91093", # Boullay-les-Troux
"91122", # Bures-sur-Yvette
"91274", # Gometz-la-Ville
"91411", # Les Molières
"91482", # Pecqueuse
"91534", # Saclay
"91538", # Saint-Aubin
"91635", # Vauhallan
"91645", # Verrières-le-Buisson
"91679", # Villiers-le-Bâcle
]
# https://fr.wikipedia.org/wiki/Canton_des_Ulis
canton_les_ulis = [
"91692", # Les Ulis
"91275", # Gometz-le-Châtel
"91363", # Marcoussis
"91458", # Nozay
"91560", # Saint-Jean-de-Beauregard
"91661", # Villebon-sur-Yvette
"91666", # Villejust
]
# https://fr.wikipedia.org/wiki/Canton_de_Longjumeau
canton_longjumeau = [
"91345", # Longjumeau
"91044", # Ballainvilliers
"91136", # Champlan
"91216", # Épinay-sur-Orge
"91339", # Linas
"91425", # Montlhéry
"91587", # Saulx-les-Chartreux
"91665", # La Ville-du-Bois
]
# https://fr.wikipedia.org/wiki/Canton_de_Savigny-sur-Orge
canton_savigny = [
"91589", # Savigny-sur-Orge
"91432", # Morangis
"91689", # Wissous
]
# https://fr.wikipedia.org/wiki/Communaut%C3%A9_de_communes_du_pays_de_Limours
communes_pays_limours = [
"91111", # Briis-sous-Forges
"91017", # Angervilliers
"91093", # Boullay-les-Troux
"91186", # Courson-Monteloup
"91243", # Fontenay-lès-Briis
"91249", # Forges-les-Bains
"91274", # Gometz-la-Ville
"91319", # Janvry
"91411", # Les Molières
"91338", # Limours
"91482", # Pecqueuse
"91560", # Saint-Jean-de-Beauregard
"91568", # Saint-Maurice-Montcouronne
"91634", # Vaugrigneuse
]
## Hauts-de-Seine ####################################
# https://fr.wikipedia.org/wiki/Hauts-de-Seine
dept_92 = [
"92002", # Antony
"92004", # Asnières-sur-Seine
"92007", # Bagneux
"92009", # Bois-Colombes
"92012", # Boulogne-Billancourt
"92014", # Bourg-la-Reine
"92019", # Châtenay-Malabry
"92020", # Châtillon
"92022", # Chaville
"92023", # Clamart
"92024", # Clichy
"92025", # Colombes
"92026", # Courbevoie
"92032", # Fontenay-aux-Roses
"92033", # Garches
"92035", # Gennevilliers
"92036", # Issy-les-Moulineaux
"92040", # La Garenne-Colombes
"92044", # Le Plessis-Robinson
"92046", # Levallois-Perret
"92047", # Malakoff
"92048", # Marnes-la-Coquette
"92049", # Meudon
"92050", # Montrouge
"92051", # Nanterre
"92060", # Neuilly-sur-Seine
"92062", # Puteaux
"92063", # Rueil-Malmaison
"92064", # Saint-Cloud
"92071", # Sceaux
"92072", # Sèvres
"92073", # Suresnes
"92075", # Vanves
"92076", # Vaucresson
"92077", # Ville-d'Avray
"92078", # Villeneuve-la-Garenne
]
######################################################
communes_list = []
# Hauts-de-Seine
communes_list += dept_78
communes_list += dept_91
communes_list += dept_92
communes_list += dept_95
# Yvelines
communes_list += communes_sqy
communes_list += canton_montigny
communes_list += canton_versailles2
communes_list += canton_maurepas
communes_list += canton_rambouillet
# Essonne
communes_list += canton_massy
communes_list += canton_palaiseau
communes_list += canton_gif
communes_list += canton_les_ulis
communes_list += canton_longjumeau
communes_list += canton_savigny
communes_list += communes_pays_limours
df = df.loc[df.insee.isin(communes_list)]
df
ax = df.plot(figsize=(10, 10), alpha=0.5, edgecolor='k')
"""
Explanation: TODO: put the following lists in a JSON dict and make it available in a public Git repository (it can be useful for other uses)
TODO: put the generated GeoJSON files in a public Git repository
End of explanation
"""
df2 = df.to_crs(epsg=3857)
df2
ax = df2.plot(figsize=(10, 10), alpha=0.5, edgecolor='k')
"""
Explanation: Convert the data to Web Mercator
End of explanation
"""
import contextily as ctx
def add_basemap(ax, zoom, url='http://tile.stamen.com/terrain/tileZ/tileX/tileY.png'):
xmin, xmax, ymin, ymax = ax.axis()
basemap, extent = ctx.bounds2img(xmin, ymin, xmax, ymax, zoom=zoom, url=url)
ax.imshow(basemap, extent=extent, interpolation='bilinear')
# restore original x/y limits
ax.axis((xmin, xmax, ymin, ymax))
"""
Explanation: Contextily helper function
End of explanation
"""
ax = df2.plot(figsize=(16, 16), alpha=0.5, edgecolor='k')
#add_basemap(ax, zoom=13, url=ctx.sources.ST_TONER_LITE)
add_basemap(ax, zoom=12)
ax.set_axis_off()
"""
Explanation: Add background tiles to plot
End of explanation
"""
import fiona
fiona.supported_drivers
!rm communes.geojson
df.to_file("communes.geojson", driver="GeoJSON")
!ls -lh communes.geojson
df = geopandas.read_file("communes.geojson")
df
ax = df.plot(figsize=(10, 10), alpha=0.5, edgecolor='k')
"""
Explanation: Save selected departments into a GeoJSON file
End of explanation
"""
|
satishgoda/learning | web/jquery_ipywidgets.ipynb | mit | from IPython.display import HTML, Javascript
from ipywidgets import interact
"""
Explanation: Back to jQuery
Mixing ipywidgets and jQuery
End of explanation
"""
HTML("""<h1 class='juh' id='juhh1'>Hello World</h1>""")
"""
Explanation: Create an HTML element (an h1 tag) with a class and an id
End of explanation
"""
Javascript("""
$("#juhh1").css('color', 'red')
""")
Javascript("""
$("#juhh1").css('color', 'green')
""")
"""
Explanation: Change the style/color of the previously created element
End of explanation
"""
HTML("""<h1 class='juh' id='juhh2'>jQuery and ipywidgets</h1>""")
def changeDOMColor(color):
js = Javascript("""
$("#juhh2").css("color", '{0}');
""".format(color))
return js
interact(changeDOMColor, color=['red', 'green', 'blue'])
"""
Explanation: def changeDOMColor(color):
Javascript("""
$("#juhh1").css("color", '{0}');
""".format(color))
interact(changeDOMColor, color=['red', 'green', 'blue'])
Why is Javascript code not being executed? :(
SOLVED :)
In order for the Javascript() function to take effect, I had to save it in a variable and return it.
End of explanation
"""
HTML("""<h1 class='juh' id='juhh3'>Using Python 3.6 f-strings</h1>""")
def changeDOMColor(color):
js = Javascript(f"""
$("#juhh3").css("color", '{color}');
""")
return js
interact(changeDOMColor, color=['red', 'green', 'blue'])
"""
Explanation: Using the f-string syntax in Python 3.6 makes it even easier to write templated strings.
As you can see in the code below, the string literal is prefixed with f, and any variable in scope can be interpolated by writing {variable} inside the string.
End of explanation
"""
|
dariox2/CADL | session-1/.ipynb_checkpoints/session-1-checkpoint.ipynb | apache-2.0 | # First check the Python version
import sys
if sys.version_info < (3,4):
print('You are running an older version of Python!\n\n' \
'You should consider updating to Python 3.4.0 or ' \
'higher as the libraries built for this course ' \
'have only been tested in Python 3.4 and higher.\n')
print('Try installing the Python 3.5 version of anaconda '
'and then restart `jupyter notebook`:\n' \
'https://www.continuum.io/downloads\n\n')
# Now get necessary libraries
try:
import os
import numpy as np
import matplotlib.pyplot as plt
from skimage.transform import resize
except ImportError:
print('You are missing some packages! ' \
'We will try installing them before continuing!')
!pip install "numpy>=1.11.0" "matplotlib>=1.5.1" "scikit-image>=0.11.3" "scikit-learn>=0.17"
import os
import numpy as np
import matplotlib.pyplot as plt
from skimage.transform import resize
print('Done!')
# Import Tensorflow
try:
import tensorflow as tf
except ImportError:
print("You do not have tensorflow installed!")
print("Follow the instructions on the following link")
print("to install tensorflow before continuing:")
print("")
print("https://github.com/pkmital/CADL#installation-preliminaries")
# This cell includes the provided libraries from the zip file
try:
from libs import utils
except ImportError:
print("Make sure you have started notebook in the same directory" +
" as the provided zip file which includes the 'libs' folder" +
" and the file 'utils.py' inside of it. You will NOT be able"
" to complete this assignment unless you restart jupyter"
" notebook inside the directory created by extracting"
" the zip file or cloning the github repo.")
# We'll tell matplotlib to inline any drawn figures like so:
%matplotlib inline
plt.style.use('ggplot')
# Bit of formatting because inline code is not styled very good by default:
from IPython.core.display import HTML
HTML("""<style> .rendered_html code {
padding: 2px 4px;
color: #c7254e;
background-color: #f9f2f4;
border-radius: 4px;
} </style>""")
"""
Explanation: Session 1 - Introduction to Tensorflow
<p class="lead">
Assignment: Creating a Dataset/Computing with Tensorflow
</p>
<p class="lead">
Parag K. Mital<br />
<a href="https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info">Creative Applications of Deep Learning w/ Tensorflow</a><br />
<a href="https://www.kadenze.com/partners/kadenze-academy">Kadenze Academy</a><br />
<a href="https://twitter.com/hashtag/CADL">#CADL</a>
</p>
This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.
Learning Goals
Learn how to normalize a dataset by calculating the mean/std. deviation
Learn how to use convolution
Explore what representations exist in your dataset
Outline
<!-- MarkdownTOC autolink=true autoanchor=true bracket=round -->
Assignment Synopsis
Part One - Create a Small Dataset
Instructions
Code
Part Two - Compute the Mean
Instructions
Code
Part Three - Compute the Standard Deviation
Instructions
Code
Part Four - Normalize the Dataset
Instructions
Code
Part Five - Convolve the Dataset
Instructions
Code
Part Six - Sort the Dataset
Instructions
Code
Assignment Submission
<!-- /MarkdownTOC -->
<h1>Notebook</h1>
Everything you will need to do will be inside of this notebook, and I've marked which cells you will need to edit by saying <b><font color='red'>"TODO! COMPLETE THIS SECTION!"</font></b>. For you to work with this notebook, you'll either download the zip file from the resources section on Kadenze or clone the github repo (whichever you are more comfortable with), and then run notebook inside the same directory as wherever this file is located using the command line "jupyter notebook" or "ipython notebook" (using Terminal on Unix/Linux/OSX, or Command Line/Shell/Powershell on Windows). If you are unfamiliar with jupyter notebook, please look at Installation Preliminaries and Session 0 before starting!
Once you have launched notebook, this will launch a web browser with the contents of the zip files listed. Click the file "session-1.ipynb" and this document will open in an interactive notebook, allowing you to "run" the cells, computing them using python, and edit the text inside the cells.
<a name="assignment-synopsis"></a>
Assignment Synopsis
This first homework assignment will guide you through working with a small dataset of images. For Part 1, you'll need to find 100 images and use the function I've provided to create a montage of your images, saving it to the file "dataset.png" (template code provided below). You can load an existing dataset of images, find your own images, or perhaps create your own images using a creative process such as painting, photography, or something along those lines. Each image will be reshaped to 100 x 100 pixels. There needs to be at least 100 images. For Parts 2 and 3, you'll then calculate the mean and deviation of it using a tensorflow session. In Part 4, you'll normalize your dataset using the mean and deviation. Then in Part 5, you will convolve your normalized dataset. For Part 6, you'll need to sort the entire convolved dataset. Finally, the last part will package everything for you in a zip file which you can upload to Kadenze to get assessed (only if you are a Kadenze Premium member, $10 p/m, free for the first month). Remember to complete the additional excercises online, including the Gallery participation and the Forum post. If you have any questions, be sure to enroll in the course and ask your peers in the #CADL community or me on the forums!
https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info
The following assignment breakdown gives more detailed instructions and includes template code for you to fill out. Good luck!
<a name="part-one---create-a-small-dataset"></a>
Part One - Create a Small Dataset
<a name="instructions"></a>
Instructions
Use Python, Numpy, and Matplotlib to load a dataset of 100 images and create a montage of the dataset as a 10 x 10 image using the function below. You'll need to make sure you call the function using a 4-d array of N x H x W x C dimensions, meaning every image will need to be the same size! You can load an existing dataset of images, find your own images, or perhaps create your own images using a creative process such as painting, photography, or something along those lines.
When you are creating your dataset, I want you to think about what representations might exist in the limited amount of data that you are organizing. It is only 100 images after all, not a whole lot for a computer to reason about and learn something meaningful. So <b>think about creating a dataset of images that could possibly reveal something fundamental about what is contained in the images</b>. Try to think about creating a set of images that represents something. For instance, this might be images of yourself over time. Or it might be every picture you've ever taken of your cat. Or perhaps the view from your room at different times of the day. Consider making the changes within each image as significant as possible. As "representative" of the thing you want to capture as possible. Hopefully by the end of this lesson, you'll understand a little better the difference between what a computer thinks is significant and what you yourself thought was significant.
The code below will show you how to resize and/or crop your images so that they are 100 pixels x 100 pixels in height and width. Once you have 100 images loaded, we'll use a montage function to draw and save your dataset to the file <b>dataset.png</b>.
<a name="code"></a>
Code
This next section will just make sure you have the right version of python and the libraries that we'll be using. Don't change the code here but make sure you "run" it (use "shift+enter")!
End of explanation
"""
# You need to find 100 images from the web/create them yourself
# or find a dataset that interests you (e.g. I used celeb faces
# in the course lecture...)
# then store them all in a single directory.
# With all the images in a single directory, you can then
# perform the following steps to create a 4-d array of:
# N x H x W x C dimensions as 100 x 100 x 100 x 3.
dirname = ...
# Load every image file in the provided directory
filenames = [os.path.join(dirname, fname)
for fname in os.listdir(dirname)]
# Make sure we have exactly 100 image files!
filenames = filenames[:100]
assert(len(filenames) == 100)
# Read every filename as an RGB image
imgs = [plt.imread(fname)[..., :3] for fname in filenames]
# Crop every image to a square
imgs = [utils.imcrop_tosquare(img_i) for img_i in imgs]
# Then resize the square image to 100 x 100 pixels
imgs = [resize(img_i, (100, 100)) for img_i in imgs]
# Finally make our list of 3-D images a 4-D array with the first dimension the number of images:
imgs = np.array(imgs).astype(np.float32)
# Plot the resulting dataset:
# Make sure you "run" this cell after you create your `imgs` variable as a 4-D array!
# Make sure we have a 100 x 100 x 100 x 3 dimension array
assert(imgs.shape == (100, 100, 100, 3))
plt.figure(figsize=(10, 10))
plt.imshow(utils.montage(imgs, saveto='dataset.png'))
"""
Explanation: Place your images in a folder such as dirname = '/Users/Someone/Desktop/ImagesFromTheInternet'. We'll then use the os package to load them and crop/resize them to a standard size of 100 x 100 pixels.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
"""
# First create a tensorflow session
sess = ...
# Now create an operation that will calculate the mean of your images
mean_img_op = ...
# And then run that operation using your session
mean_img = sess.run(mean_img_op)
# Then plot the resulting mean image:
# Make sure the mean image is the right size!
assert(mean_img.shape == (100, 100, 3))
plt.figure(figsize=(10, 10))
plt.imshow(mean_img)
plt.imsave(arr=mean_img, fname='mean.png')
"""
Explanation: <a name="part-two---compute-the-mean"></a>
Part Two - Compute the Mean
<a name="instructions-1"></a>
Instructions
First use Tensorflow to define a session. Then use Tensorflow to create an operation which takes your 4-d array and calculates the mean color image (100 x 100 x 3) using the function tf.reduce_mean. Have a look at the documentation for this function to see how it works in order to get the mean of every pixel and get an image of (100 x 100 x 3) as a result. You'll then calculate the mean image by running the operation you create with your session (e.g. <code>sess.run(...)</code>). Finally, plot the mean image, save it, and then include this image in your zip file as <b>mean.png</b>.
<a name="code-1"></a>
Code
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
"""
# Create a tensorflow operation to give you the standard deviation
# First compute the difference of every image with a
# 4 dimensional mean image shaped 1 x H x W x C
mean_img_4d = ...
subtraction = imgs - mean_img_4d
# Now compute the standard deviation by calculating the
# square root of the sum of squared differences
std_img_op = tf.sqrt(tf.reduce_sum(subtraction * subtraction, reduction_indices=0))
# Now calculate the standard deviation using your session
std_img = sess.run(std_img_op)
# Then plot the resulting standard deviation image:
# Make sure the std image is the right size!
assert(std_img.shape == (100, 100) or std_img.shape == (100, 100, 3))
plt.figure(figsize=(10, 10))
std_img_show = std_img / np.max(std_img)
plt.imshow(std_img_show)
plt.imsave(arr=std_img_show, fname='std.png')
"""
Explanation: Once you have seen the mean image of your dataset, how does it relate to your own expectations of the dataset? Did you expect something different? Was there something more "regular" or "predictable" about your dataset that the mean image did or did not reveal? If your mean image looks a lot like something recognizable, it's a good sign that there is a lot of predictability in your dataset. If your mean image looks like nothing at all, a gray blob where not much seems to stand out, then it's pretty likely that there isn't very much in common between your images. Neither is a bad scenario. Though, it is more likely that having some predictability in your mean image, e.g. something recognizable, that there are representations worth exploring with deeper networks capable of representing them. However, we're only using 100 images so it's a very small dataset to begin with.
<a name="part-three---compute-the-standard-deviation"></a>
Part Three - Compute the Standard Deviation
<a name="instructions-2"></a>
Instructions
Now use tensorflow to calculate the standard deviation and upload the standard deviation image averaged across color channels as a "jet" heatmap of the 100 images. This will be a little more involved as there is no operation in tensorflow to do this for you. However, you can do this by calculating the mean image of your dataset as a 4-D array. To do this, you could write e.g. mean_img_4d = tf.reduce_mean(imgs, reduction_indices=0, keep_dims=True) to give you a 1 x H x W x C dimension array calculated on the N x H x W x C images variable. The reduction_indices parameter is saying to calculate the mean over the 0th dimension, meaning for every possible H, W, C, or for every pixel, you will have a mean composed over the N possible values it could have had, or what that pixel was for every possible image. This way, you can write images - mean_img_4d to give you a N x H x W x C dimension variable, with every image in your images array having been subtracted by the mean_img_4d. If you calculate the square root of the sum of the squared differences of this resulting operation, you have your standard deviation!
In summary, you'll need to write something like: subtraction = imgs - tf.reduce_mean(imgs, reduction_indices=0, keep_dims=True), then reduce this operation using tf.sqrt(tf.reduce_sum(subtraction * subtraction, reduction_indices=0)) to get your standard deviation then include this image in your zip file as <b>std.png</b>
<a name="code-2"></a>
Code
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
"""
norm_imgs_op = ...
norm_imgs = sess.run(norm_imgs_op)
print(np.min(norm_imgs), np.max(norm_imgs))
print(imgs.dtype)
# Then plot the resulting normalized dataset montage:
# Make sure we have a 100 x 100 x 100 x 3 dimension array
assert(norm_imgs.shape == (100, 100, 100, 3))
plt.figure(figsize=(10, 10))
plt.imshow(utils.montage(norm_imgs, 'normalized.png'))
"""
Explanation: Once you have plotted your dataset's standard deviation per pixel, what does it reveal about your dataset? Like with the mean image, you should consider what is predictable and not predictable about this image.
<a name="part-four---normalize-the-dataset"></a>
Part Four - Normalize the Dataset
<a name="instructions-3"></a>
Instructions
Using tensorflow, we'll attempt to normalize your dataset using the mean and standard deviation.
<a name="code-3"></a>
Code
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
"""
norm_imgs_show = (norm_imgs - np.min(norm_imgs)) / (np.max(norm_imgs) - np.min(norm_imgs))
plt.figure(figsize=(10, 10))
plt.imshow(utils.montage(norm_imgs_show, 'normalized.png'))
"""
Explanation: We apply another type of normalization to 0-1 just for the purposes of plotting the image. If we didn't do this, the range of our values would be somewhere between -1 and 1, and matplotlib would not be able to interpret the entire range of values. By rescaling our -1 to 1 valued images to 0-1, we can visualize it better.
End of explanation
"""
# First build 3 kernels for each input color channel
ksize = ...
kernel = np.concatenate([utils.gabor(ksize)[:, :, np.newaxis] for i in range(3)], axis=2)
# Now make the kernels into the shape: [ksize, ksize, 3, 1]:
kernel_4d = ...
assert(kernel_4d.shape == (ksize, ksize, 3, 1))
"""
Explanation: <a name="part-five---convolve-the-dataset"></a>
Part Five - Convolve the Dataset
<a name="instructions-4"></a>
Instructions
Using tensorflow, we'll attempt to convolve your dataset with one of the kernels we created during the lesson, and then in the next part, we'll take the sum of the convolved output to use for sorting. You should use the function utils.gabor to create an edge detector. You can also explore with the utils.gauss2d kernel. What you must figure out is how to reshape your kernel to be 4-dimensional: K_H, K_W, C_I, and C_O, corresponding to the kernel's height and width (e.g. 16), the number of input channels (RGB = 3 input channels), and the number of output channels, (1).
<a name="code-4"></a>
Code
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
"""
plt.figure(figsize=(5, 5))
plt.imshow(kernel_4d[:, :, 0, 0], cmap='gray')
plt.imsave(arr=kernel_4d[:, :, 0, 0], fname='kernel.png', cmap='gray')
"""
Explanation: We'll perform the convolution with the 4d tensor in kernel_4d. This is a ksize x ksize x 3 x 1 tensor, where each input color channel corresponds to one filter with 1 output. Each filter looks like:
End of explanation
"""
convolved = utils.convolve(...
convolved_show = (convolved - np.min(convolved)) / (np.max(convolved) - np.min(convolved))
print(convolved_show.shape)
plt.figure(figsize=(10, 10))
plt.imshow(utils.montage(convolved_show[..., 0], 'convolved.png'), cmap='gray')
"""
Explanation: Perform the convolution with the 4d tensors:
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
"""
# Create a set of operations using tensorflow which could
# provide you for instance the sum or mean value of every
# image in your dataset:
# First flatten our convolved images so instead of many 3d images,
# we have many 1d vectors.
# This should convert our 4d representation of N x H x W x C to a
# 2d representation of N x (H*W*C)
flattened = tf.reshape(convolved...
assert(flattened.get_shape().as_list() == [100, 10000])
# Now calculate some statistics about each of our images
values = tf.reduce_sum(flattened, reduction_indices=1)
# Then create another operation which sorts those values
# and then calculate the result:
idxs_op = tf.nn.top_k(values, k=100)[1]
idxs = sess.run(idxs_op)
# Then finally use the sorted indices to sort your images:
sorted_imgs = np.array([imgs[idx_i] for idx_i in idxs])
# Then plot the resulting sorted dataset montage:
# Make sure we have a 100 x 100 x 100 x 3 dimension array
assert(sorted_imgs.shape == (100, 100, 100, 3))
plt.figure(figsize=(10, 10))
plt.imshow(utils.montage(sorted_imgs, 'sorted.png'))
"""
Explanation: What we've just done is build a "hand-crafted" feature detector: the Gabor Kernel. This kernel is built to respond to particular orientation: horizontal edges, and a particular scale. It also responds equally to R, G, and B color channels, as that is how we have told the convolve operation to work: use the same kernel for every input color channel. When we work with deep networks, we'll see how we can learn the convolution kernels for every color channel, and learn many more of them, in the order of 100s per color channel. That is really where the power of deep networks will start to become obvious. For now, we've seen just how difficult it is to get at any higher order features of the dataset. We've really only picked out some edges!
<a name="part-six---sort-the-dataset"></a>
Part Six - Sort the Dataset
<a name="instructions-5"></a>
Instructions
Using tensorflow, we'll attempt to organize your dataset. We'll try sorting based on the mean value of each convolved image's output to use for sorting. To do this, we could calculate either the sum value (tf.reduce_sum) or the mean value (tf.reduce_mean) of each image in your dataset and then use those values, e.g. stored inside a variable values to sort your images using something like tf.nn.top_k and sorted_imgs = np.array([imgs[idx_i] for idx_i in idxs]) prior to creating the montage image, m = montage(sorted_imgs, "sorted.png") and then include this image in your zip file as <b>sorted.png</b>
<a name="code-5"></a>
Code
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
"""
utils.build_submission('session-1.zip',
('dataset.png',
'mean.png',
'std.png',
'normalized.png',
'kernel.png',
'convolved.png',
'sorted.png',
'session-1.ipynb'))
"""
Explanation: What does your sorting reveal? Could you imagine the same sorting over many more images revealing the thing your dataset sought to represent? It is likely that the representations you wanted to find are hidden within "higher layers", i.e., "deeper features" of the image, and that these "low level" features, essentially edges, are not very good at describing the really interesting aspects of your dataset. In later sessions, we'll see how we can combine the outputs of many more convolution kernels that have been assembled in a way that accentuates something very particular about each image, and build a sorting that is much more intelligent than this one!
<a name="assignment-submission"></a>
Assignment Submission
Now that you've completed all 6 parts, we'll create a zip file of the current directory using the code below. This code will make sure you have included this completed ipython notebook and the following files named exactly as:
<pre>
session-1/
session-1.ipynb
dataset.png
mean.png
std.png
normalized.png
kernel.png
convolved.png
sorted.png
libs/
utils.py
</pre>
You'll then submit this zip file for your first assignment on Kadenze for "Assignment 1: Datasets/Computing with Tensorflow"! If you have any questions, remember to reach out on the forums and connect with your peers or with me.
<b>To get assessed, you'll need to be a premium student which is free for a month!</b> If you aren't already enrolled as a student, register now at http://www.kadenze.com/ and join the #CADL community to see what your peers are doing! https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info
Then remember to complete the remaining parts of Assignment 1 on Kadenze!:
* Comment on 1 student's open-ended arrangement (Part 6) in the course gallery titled "Creating a Dataset/ Computing with Tensorflow". Think about what images they've used in their dataset and how the arrangement reflects what could be represented by that data.
* Finally make a forum post in the forum for this assignment "Creating a Dataset/ Computing with Tensorflow".
- Including a link to an artist making use of machine learning to organize data or finding representations within large datasets
- Tell a little about their work (min 20 words).
- Comment on at least 2 other student's forum posts (min 20 words)
Make sure your notebook is named "session-1" or else replace it with the correct name in the list of files below:
End of explanation
"""
|
y2ee201/Deep-Learning-Nanodegree | first-neural-network/.ipynb_checkpoints/DLND Your first neural network-checkpoint.ipynb | mit | %matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
"""
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
"""
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
"""
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
"""
rides[:24*10].plot(x='dteday', y='cnt')
"""
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
End of explanation
"""
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
"""
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
"""
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
"""
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
"""
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
# Check the input size (rows and feature columns) before the train/validation split
print(features.shape)
"""
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
"""
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
"""
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
"""
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
# self.activation_function = lambda x : 0 # Replace 0 with your sigmoid calculation.
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
def sigmoid(x):
return 1 / (1 + np.exp(-x)) # Replace 0 with your sigmoid calculation here
self.activation_function = sigmoid
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
# X = np.reshape(X, (X.shape[0],1))
# DEBUG 1
# print('X')
# print(X.shape)
# print('self.weights_input_to_hidden.shape')
# print(self.weights_input_to_hidden.shape)
hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
error = y - final_outputs # Output layer error is the difference between desired target and actual output.
# TODO: Calculate the hidden layer's contribution to the error
hidden_error = np.dot(self.weights_hidden_to_output, error)
# TODO: Backpropagated error terms - Replace these values with your calculations.
output_error_term = error
hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)
# print('hidden_error.shape')
# print(hidden_error)
# print('hidden_outputs.shape')
# print(hidden_outputs)
# print('1 - hidden_outputs.shape')
# print(1 - hidden_outputs)
# DEBUG 2
# print('output_error_term.shape')
# print(output_error_term)
# print('hidden_outputs.shape')
# print(hidden_outputs)
# print('delta_weights_h_o.shape')
# print(delta_weights_h_o)
# print((output_error_term * hidden_outputs))
# print('delta_weights_i_h.shape')
# print(delta_weights_i_h.shape)
# print('hidden_error_term.shape')
# print(hidden_error_term.shape)
# print('X.shape')
# print(X.shape)
# Weight step (input to hidden)
delta_weights_i_h += hidden_error_term * X[:, None]
# Weight step (hidden to output)
delta_weights_h_o += np.reshape(output_error_term * hidden_outputs, delta_weights_h_o.shape)
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
# my debugging
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
testNN = NeuralNetwork(input_nodes=3, hidden_nodes=2, output_nodes=1, learning_rate=0.5)
testNN.train(features=inputs, targets=targets)
def MSE(y, Y):
return np.mean((y-Y)**2)
"""
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
"""
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
"""
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
End of explanation
"""
import sys
### Set the hyperparameters here ###
iterations = 1000
learning_rate = 0.1
hidden_nodes = 10
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
"""
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
"""
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
"""
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation
"""
|
analog-rl/Easy21 | Joe #2 Monte-Carlo Control in Easy21/easy21 tests.ipynb | mit | import matplotlib.pyplot as plt
%matplotlib notebook
plt.figure(1)
values = []
for i in xrange(0,100000):
values.append(Card().absolute_value)
# values.append(random.randint(1,10))
plt.title('Test; Each draw from the deck results in a value between 1 and 10 (uniformly distributed)')
plt.hist(values)
# , c='g', s=20, alpha=0.25, label='true positive')
plt.show()
plt.savefig("#1-test1.png")
"""
Explanation: from 1st module
Test; Each draw from the deck results in a value between 1 and 10 (uniformly distributed)
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib notebook
plt.figure(2)
values = []
for i in xrange(0,100000):
if (Card().is_black):
values.append(0.6666666)
else:
values.append(0.3333333)
plt.title('Test; red (probability 1/3) or black (probability 2/3)')
plt.hist(values)
# , c='g', s=20, alpha=0.25, label='true positive')
plt.show()
plt.savefig("#1-test2.png")
"""
Explanation: Each draw from the deck results in a colour of red (probability 1/3) or black (probability 2/3).
End of explanation
"""
def play_test_player_bust():
s = State(Card(True),Card(True))
a = Actions.hit
e = Environment()
while not s.term:
s, r = e.step(s, a)
# print ("state = %s, %s, %s" % (s.player, s.dealer, s.term))
return s, r
import matplotlib.pyplot as plt
%matplotlib notebook
plt.figure(3)
values = []
for i in xrange(0,100000):
s, r = play_test_player_bust()
if s.player > 21:
values.append(1)
elif s.player < 1:
values.append(1)
else:
values.append(-1)
print "error!!!!"
plt.title('Test; player busts > 21 or <1')
plt.hist(values)
# , c='g', s=20, alpha=0.25, label='true positive')
plt.show()
plt.savefig("#1-test3.png")
"""
Explanation: Test: If the player’s sum exceeds 21, or becomes less than 1, then she “goes bust” and loses the game (reward -1)
End of explanation
"""
def play_test_player_stick():
s = State(Card(True),Card(True))
a = Actions.hit
e = Environment()
a = Actions.stick
while not s.term:
s, r = e.step(s, a)
# print ("state = %s, %s, %s" % (s.player, s.dealer, s.term))
return s, r
import matplotlib.pyplot as plt
%matplotlib notebook
plt.figure(4)
values = []
for i in xrange(0,100000):
s, r = play_test_player_stick()
if s.dealer > 21 or s.dealer < 1:
if r == 1:
values.append(1)
else:
print "error, player should have won"
print ("state = %s, %s, %s. result = %s" % (s.player, s.dealer, s.term, r))
values.append(-1)
elif s.player == s.dealer:
if r == 0:
values.append(1)
else:
print "error, player should have drawn"
print ("state = %s, %s, %s. result = %s" % (s.player, s.dealer, s.term, r))
values.append(-2)
elif s.player > s.dealer:
if r == 1:
values.append(1)
else:
print "error, player should have won"
print ("state = %s, %s, %s. result = %s" % (s.player, s.dealer, s.term, r))
values.append(-3)
elif s.player < s.dealer:
if r == -1:
values.append(1)
else:
print "error, player should have lost"
print ("state = %s, %s, %s. result = %s" % (s.player, s.dealer, s.term, r))
values.append(-4)
else:
print "all cases should have been dealt with"
print ("state = %s, %s, %s. result = %s" % (s.player, s.dealer, s.term, r))
values.append(-5)
plt.title('Test; player sticks')
plt.hist(values)
plt.show()
plt.savefig("#1-test4.png")
"""
Explanation: Test: If the player sticks then the dealer starts taking turns. The dealer always sticks on any sum of 17 or greater, and hits otherwise. If the dealer goes bust, then the player wins; otherwise, the outcome – win (reward +1), lose (reward -1), or draw (reward 0) – is the player with the largest sum.
End of explanation
"""
|
lisitsyn/shogun | doc/ipython-notebooks/distributions/KernelDensity.ipynb | bsd-3-clause | import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
%matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
# generates samples from the distribution
def generate_samples(n_samples,mu1,sigma1,mu2,sigma2):
samples1 = np.random.normal(mu1,sigma1,(1,int(n_samples/2)))
samples2 = np.random.normal(mu2,sigma2,(1,int(n_samples/2)))
samples = np.concatenate((samples1,samples2),1)
return samples
# parameters of the distribution
mu1=4
sigma1=1
mu2=8
sigma2=2
# number of samples
n_samples = 200
samples=generate_samples(n_samples,mu1,sigma1,mu2,sigma2)
# pdf function for plotting
x = np.linspace(0,15,500)
y = 0.5*(stats.norm(mu1,sigma1).pdf(x)+stats.norm(mu2,sigma2).pdf(x))
# plot samples
plt.plot(samples[0,:],np.zeros(n_samples),'rx',label="Samples")
# plot actual pdf
plt.plot(x,y,'b--',label="Actual pdf")
plt.legend(numpoints=1)
plt.show()
"""
Explanation: Kernel Density Estimation
by Parijat Mazumdar (GitHub ID: <a href='https://github.com/mazumdarparijat'>mazumdarparijat</a>)
This notebook is on using the Shogun Machine Learning Toolbox for kernel density estimation (KDE). We start with a brief overview of KDE. Then we demonstrate the use of Shogun's $KernelDensity$ class on a toy example. Finally, we apply KDE to a real-world example, thus demonstrating its prowess as a non-parametric statistical method.
Brief overview of Kernel Density Estimation
Kernel Density Estimation (KDE) is a non-parametric way of estimating the probability density function (pdf) of ANY distribution given a finite number of its samples. The pdf of a random variable X given finite samples ($x_i$s), as per KDE formula, is given by:
$$pdf(x)=\frac{1}{nh} \Sigma_{i=1}^n K(\frac{||x-x_i||}{h})$$
In the above equation, K() is called the kernel - a symmetric function that integrates to 1. h is called the kernel bandwidth
which controls how smooth (or spread-out) the kernel is. The most commonly used kernel is the normal distribution function.
KDE is a computationally expensive method. Given $N_1$ query points (i.e. the points where we want to compute the pdf) and $N_2$ samples, the computational complexity of KDE is $\mathcal{O}(N_1 \cdot N_2 \cdot D)$ where D is the dimension of the data. This computational load can be reduced by spatially segregating data points using data structures like KD-Tree and Ball-Tree. In single tree methods, only the sample points are structured in a tree, whereas in dual tree methods both sample points and query points are structured in respective trees. Using these tree structures enables us to compute the density estimate for a bunch of points together at once, thus reducing the number of required computations. This speed-up, however, results in reduced accuracy: the greater the speed-up, the lower the accuracy. Therefore, in practice, the maximum amount of speed-up that can be afforded is usually controlled by error tolerance values.
KDE on toy data
Let us learn about KDE in Shogun by estimating a mixture of 2 one-dimensional gaussian distributions.
$$pdf(x) = \frac{1}{2} [\mathcal{N}(\mu_1,\sigma_1) + \mathcal{N}(\mu_2,\sigma_2)]$$
We start by plotting the actual distribution and generating the required samples (i.e. $x_i$s).
End of explanation
"""
from shogun import KernelDensity, features, K_GAUSSIAN, D_EUCLIDEAN, EM_KDTREE_SINGLE
def get_kde_result(bandwidth,samples):
# set model parameters
kernel_type = K_GAUSSIAN
dist_metric = D_EUCLIDEAN # other choice is D_MANHATTAN
eval_mode = EM_KDTREE_SINGLE # other choices are EM_BALLTREE_SINGLE, EM_KDTREE_DUAL and EM_BALLTREE_DUAL
leaf_size = 1 # min number of samples to be present in leaves of the spatial tree
abs_tol = 0 # absolute tolerance
rel_tol = 0 # relative tolerance i.e. accepted error as fraction of true density
k=KernelDensity(bandwidth,kernel_type,dist_metric,eval_mode,leaf_size,abs_tol,rel_tol)
# form Shogun features and train
train_feats=features(samples)
k.train(train_feats)
# get log density
query_points = np.array([np.linspace(0,15,500)])
query_feats = features(query_points)
log_pdf = k.get_log_density(query_feats)
return query_points,log_pdf
query_points,log_pdf=get_kde_result(0.5,samples)
"""
Explanation: Now, we will apply KDE to estimate the actual pdf using the samples. Using KDE in Shogun is a 3 stage process : setting the model parameters, supplying sample data points for training and supplying query points for getting log of pdf estimates.
End of explanation
"""
import matplotlib.pyplot as plt
% matplotlib inline
def plot_pdf(samples,query_points,log_pdf,title):
plt.plot(samples,np.zeros((1,samples.size)),'rx')
plt.plot(query_points[0,:],np.exp(log_pdf),'r',label="Estimated pdf")
plt.plot(x,y,'b--',label="Actual pdf")
plt.title(title)
plt.legend()
plt.show()
plot_pdf(samples,query_points,log_pdf,'num_samples=200, bandwidth=0.5')
"""
Explanation: We have calculated log of pdf. Let us see how accurate it is by comparing it with the actual pdf.
End of explanation
"""
query_points,log_pdf=get_kde_result(0.1,samples)
plot_pdf(samples,query_points,log_pdf,'num_samples=200, bandwidth=0.1')
query_points,log_pdf=get_kde_result(0.2,samples)
plot_pdf(samples,query_points,log_pdf,'num_samples=200, bandwidth=0.2')
query_points,log_pdf=get_kde_result(0.5,samples)
plot_pdf(samples,query_points,log_pdf,'num_samples=200, bandwidth=0.5')
query_points,log_pdf=get_kde_result(1.1,samples)
plot_pdf(samples,query_points,log_pdf,'num_samples=200, bandwidth=1.1')
query_points,log_pdf=get_kde_result(1.5,samples)
plot_pdf(samples,query_points,log_pdf,'num_samples=200, bandwidth=1.5')
"""
Explanation: We see that the estimated pdf resembles the actual pdf with reasonable accuracy. This is a small demonstration of the fact that KDE can be used to estimate any arbitrary distribution given a finite number of its samples.
Effect of bandwidth
Kernel bandwidth is a very important controlling parameter of the kernel density estimate. We have already seen that for bandwidth of 0.5, the estimated pdf almost coincides with the actual pdf. Let us see what happens when we decrease or increase the value of the kernel bandwidth keeping number of samples constant at 200.
End of explanation
"""
samples=generate_samples(20,mu1,sigma1,mu2,sigma2)
query_points,log_pdf=get_kde_result(0.7,samples)
plot_pdf(samples,query_points,log_pdf,'num_samples=20, bandwidth=0.7')
samples=generate_samples(200,mu1,sigma1,mu2,sigma2)
query_points,log_pdf=get_kde_result(0.5,samples)
plot_pdf(samples,query_points,log_pdf,'num_samples=200, bandwidth=0.5')
samples=generate_samples(2000,mu1,sigma1,mu2,sigma2)
query_points,log_pdf=get_kde_result(0.4,samples)
plot_pdf(samples,query_points,log_pdf,'num_samples=2000, bandwidth=0.4')
"""
Explanation: From the above plots, it can be inferred that the kernel bandwidth controls the extent of smoothness of the pdf function. A low value of the bandwidth parameter causes under-smoothing (which is the case with the first 2 plots from the top) and a high value causes over-smoothing (as is the case with the bottom 2 plots). The optimal value of the kernel bandwidth should be estimated using
model-selection techniques, which are presently not supported by Shogun (to be updated soon).
Effect of number of samples
Here, we see the effect of the number of samples on the estimated pdf, fine-tuning bandwidth in each case such that we get the most accurate pdf.
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
% matplotlib inline
f = open(os.path.join(SHOGUN_DATA_DIR, 'uci/iris/iris.data'))
feats = []
# read data from file
for line in f:
words = line.rstrip().split(',')
feats.append([float(i) for i in words[0:4]])
f.close()
# create observation matrix
obsmatrix = np.array(feats).T
# Just keep 2 most important features
obsmatrix = obsmatrix[2:4,:]
# plot the data
def plot_samples(marker='o',plot_show=True):
# First 50 data belong to Iris Sentosa, plotted in green
plt.plot(obsmatrix[0,0:50], obsmatrix[1,0:50], marker, color='green', markersize=5,label='Iris Sentosa')
# Next 50 data belong to Iris Versicolour, plotted in red
plt.plot(obsmatrix[0,50:100], obsmatrix[1,50:100], marker, color='red', markersize=5,label='Iris Versicolour')
# Last 50 data belong to Iris Virginica, plotted in blue
plt.plot(obsmatrix[0,100:150], obsmatrix[1,100:150], marker, color='blue', markersize=5,label='Iris Virginica')
if plot_show:
plt.xlim(0,8)
plt.ylim(-1,3)
plt.title('3 varieties of Iris plants')
plt.xlabel('petal length')
plt.ylabel('petal width')
plt.legend(numpoints=1,bbox_to_anchor=(0.97,0.35))
plt.show()
plot_samples()
"""
Explanation: Firstly, we see that the estimated pdf becomes more accurate with an increasing number of samples. By running the above snippet multiple times, we also notice that the variation in the shape of estimated pdf, between 2 different runs of the above code snippet, is highest when the number of samples is 20 and lowest when the number of samples is 2000. Therefore, we can say that with an increase in the number of samples, the stability of the estimated pdf increases. Both the results can be explained using the intuitive fact that a larger number of samples gives a better picture of the entire distribution. A formal proof of the same has been presented by L. Devroye in his book "Nonparametric Density Estimation: The $L_1$ View" [3]. It is theoretically proven that as the number of samples tends to $\infty$, the estimated pdf converges to the real pdf.
Classification using KDE
In this section we see how KDE can be used for classification using a generative approach. Here, we try to classify the different varieties of Iris plant making use of Fisher's Iris dataset borrowed from the <a href='http://archive.ics.uci.edu/ml/datasets/Iris'>UCI Machine Learning Repository</a>. There are 3 varieties of Iris plants:
<ul><li>Iris Sentosa</li><li>Iris Versicolour</li><li>Iris Virginica</li></ul>
<br>
The Iris dataset enlists 4 features that can be used to segregate these varieties, but for ease of analysis and visualization, we only use two of the most important features (ie. features with very high class correlations)[refer to <a href='http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.names'>summary statistics</a>] namely
<ul><li>petal length</li><li>petal width</li></ul>
<br>
As a first step, we plot the data.
End of explanation
"""
from shogun import KernelDensity, features, K_GAUSSIAN, D_EUCLIDEAN, EM_BALLTREE_DUAL
import scipy.interpolate as interpolate
def get_kde(samples):
# set model parameters
bandwidth = 0.4
kernel_type = K_GAUSSIAN
dist_metric = D_EUCLIDEAN
eval_mode = EM_BALLTREE_DUAL
leaf_size = 1
abs_tol = 0
rel_tol = 0
k=KernelDensity(bandwidth,kernel_type,dist_metric,eval_mode,leaf_size,abs_tol,rel_tol)
# form Shogun features and train
train_feats=features(samples)
k.train(train_feats)
return k
def density_estimate_grid(kdestimator):
xmin,xmax,ymin,ymax=[0,8,-1,3]
# Set up a regular grid of interpolation points
x, y = np.linspace(xmin, xmax, 100), np.linspace(ymin, ymax, 100)
x, y = np.meshgrid(x, y)
# compute density estimate at each of the grid points
query_feats=features(np.array([x[0,:],y[0,:]]))
z=np.array([kdestimator.get_log_density(query_feats)])
z=np.exp(z)
for i in range(1,x.shape[0]):
query_feats=features(np.array([x[i,:],y[i,:]]))
zi=np.exp(kdestimator.get_log_density(query_feats))
z=np.vstack((z,zi))
return (x,y,z)
def plot_pdf(kdestimator,title):
# compute interpolation points and corresponding kde values
x,y,z=density_estimate_grid(kdestimator)
# plot pdf
plt.imshow(z, vmin=z.min(), vmax=z.max(), origin='lower',extent=[x.min(), x.max(), y.min(), y.max()])
plt.title(title)
plt.colorbar(shrink=0.5)
plt.xlabel('petal length')
plt.ylabel('petal width')
plt.show()
kde1=get_kde(obsmatrix[:,0:50])
plot_pdf(kde1,'pdf for Iris Sentosa')
kde2=get_kde(obsmatrix[:,50:100])
plot_pdf(kde2,'pdf for Iris Versicolour')
kde3=get_kde(obsmatrix[:,100:150])
plot_pdf(kde3,'pdf for Iris Virginica')
kde=get_kde(obsmatrix[:,0:150])
plot_pdf(kde,'Combined pdf')
"""
Explanation: Next, let us use the samples to estimate the probability density functions of each category of plant.
End of explanation
"""
# get 3 likelihoods for each test point in grid
x,y,z1=density_estimate_grid(kde1)
x,y,z2=density_estimate_grid(kde2)
x,y,z3=density_estimate_grid(kde3)
# classify using our decision rule
z=[]
for i in range(0,x.shape[0]):
zj=[]
for j in range(0,x.shape[1]):
if ((z1[i,j]>z2[i,j]) and (z1[i,j]>z3[i,j])):
zj.append(1)
elif (z2[i,j]>z3[i,j]):
zj.append(2)
else:
zj.append(0)
z.append(zj)
z=np.array(z)
# plot results
plt.imshow(z, vmin=z.min(), vmax=z.max(), origin='lower',extent=[x.min(), x.max(), y.min(), y.max()])
plt.title("Classified regions")
plt.xlabel('petal length')
plt.ylabel('petal width')
plot_samples(marker='x',plot_show=False)
plt.show()
"""
Explanation: The above contour plots depict the pdfs of the respective categories of Iris plant. These probability density functions can be used
as generative models to estimate the likelihood of any test sample belonging to a particular category. We use these likelihoods for classification by forming a simple decision rule: a test sample is assigned the class for which its likelihood is maximum. With this in mind, let us try to segregate the
entire 2-D space into 3 regions:
<ul><li>Iris Sentosa (green)</li><li>Iris Versicolour (red)</li><li>Iris Virginica (blue)</li></ul>
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.13/_downloads/plot_object_raw.ipynb | bsd-3-clause | from __future__ import print_function
import mne
import os.path as op
from matplotlib import pyplot as plt
"""
Explanation: The :class:Raw <mne.io.Raw> data structure: continuous data
End of explanation
"""
# Load an example dataset, the preload flag loads the data into memory now
data_path = op.join(mne.datasets.sample.data_path(), 'MEG',
'sample', 'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(data_path, preload=True, add_eeg_ref=False)
raw.set_eeg_reference() # set EEG average reference
# Give the sample rate
print('sample rate:', raw.info['sfreq'], 'Hz')
# Give the size of the data matrix
print('channels x samples:', raw._data.shape)
"""
Explanation: Continuous data is stored in objects of type :class:Raw <mne.io.Raw>.
The core data structure is simply a 2D numpy array (channels × samples,
stored in a private attribute called ._data) combined with an
:class:Info <mne.Info> object (.info attribute)
(see tut_info_objects).
The most common way to load continuous data is from a .fif file. For more
information on loading data from other formats <ch_convert>, or
creating it from scratch <tut_creating_data_structures>.
Loading continuous data
End of explanation
"""
# Extract data from the first 5 channels, from 1 s to 3 s.
sfreq = raw.info['sfreq']
data, times = raw[:5, int(sfreq * 1):int(sfreq * 3)]
_ = plt.plot(times, data.T)
_ = plt.title('Sample channels')
"""
Explanation: <div class="alert alert-info"><h4>Note</h4><p>Accessing the `._data` attribute is done here for educational
purposes. However this is a private attribute as its name starts
with an `_`. This suggests that you should **not** access this
variable directly but rely on indexing syntax detailed just below.</p></div>
Information about the channels contained in the :class:Raw <mne.io.Raw>
object is contained in the :class:Info <mne.Info> attribute.
This is essentially a dictionary with a number of relevant fields (see
tut_info_objects).
Indexing data
To access the data stored within :class:Raw <mne.io.Raw> objects,
it is possible to index the :class:Raw <mne.io.Raw> object.
Indexing a :class:Raw <mne.io.Raw> object will return two arrays: the data for
the selected channels and samples, along with an array of the corresponding times. This works
even if the data is not preloaded, in which case the data will be read from
disk when indexing. The syntax is as follows:
End of explanation
"""
# Pull all MEG gradiometer channels:
# Make sure to use .copy() or it will overwrite the data
meg_only = raw.copy().pick_types(meg=True)
eeg_only = raw.copy().pick_types(meg=False, eeg=True)
# The MEG flag in particular lets you specify a string for more specificity
grad_only = raw.copy().pick_types(meg='grad')
# Or you can use custom channel names
pick_chans = ['MEG 0112', 'MEG 0111', 'MEG 0122', 'MEG 0123']
specific_chans = raw.copy().pick_channels(pick_chans)
print(meg_only, eeg_only, grad_only, specific_chans, sep='\n')
"""
Explanation: Selecting subsets of channels and samples
It is possible to use more intelligent indexing to extract data, using
channel names, types or time ranges.
End of explanation
"""
f, (a1, a2) = plt.subplots(2, 1)
eeg, times = eeg_only[0, :int(sfreq * 2)]
meg, times = meg_only[0, :int(sfreq * 2)]
a1.plot(times, meg[0])
a2.plot(times, eeg[0])
del eeg, meg, meg_only, grad_only, eeg_only, data, specific_chans
"""
Explanation: Notice the different scalings of these types
End of explanation
"""
raw = raw.crop(0, 50) # in seconds
print('New time range from', raw.times.min(), 's to', raw.times.max(), 's')
"""
Explanation: You can restrict the data to a specific time range
End of explanation
"""
nchan = raw.info['nchan']
raw = raw.drop_channels(['MEG 0241', 'EEG 001'])
print('Number of channels reduced from', nchan, 'to', raw.info['nchan'])
"""
Explanation: And drop channels by name
End of explanation
"""
# Create multiple :class:`Raw <mne.io.RawFIF>` objects
raw1 = raw.copy().crop(0, 10)
raw2 = raw.copy().crop(10, 20)
raw3 = raw.copy().crop(20, 40)
# Concatenate in time (also works without preloading)
raw1.append([raw2, raw3])
print('Time extends from', raw1.times.min(), 's to', raw1.times.max(), 's')
"""
Explanation: Concatenating :class:Raw <mne.io.Raw> objects
:class:Raw <mne.io.Raw> objects can be concatenated in time by using the
:func:append <mne.io.Raw.append> function. For this to work, they must
have the same number of channels and their :class:Info
<mne.Info> structures should be compatible.
End of explanation
"""
|
syednasar/datascience | deeplearning/language-translation/translation with rnn.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
"""
Explanation: Language Translation with RNN using Tensorflow
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
"""
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
"""
# TODO: Implement Function
source_sentences = source_text.split('\n')
target_sentences = [sentence + ' <EOS>' for sentence in target_text.split('\n')]
source_ids = [[source_vocab_to_int[word] for word in sentence.split()] for sentence in source_sentences]
target_ids = [[target_vocab_to_int[word] for word in sentence.split()] for sentence in target_sentences]
return (source_ids, target_ids)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)
"""
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
"""
def model_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
"""
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, learning_rate, keep_prob
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoding_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Return the placeholders in the following tuple: (Input, Targets, Learning Rate, Keep Probability)
End of explanation
"""
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
"""
    Preprocess target data for decoding
    :param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
"""
# TODO: Implement Function
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
processed_target = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return processed_target
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_decoding_input(process_decoding_input)
"""
Explanation: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
End of explanation
"""
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
"""
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
"""
# TODO: Implement Function
lstm_cell = tf.contrib.rnn.LSTMCell(rnn_size, state_is_tuple=True)
# Dropout
drop_cell = tf.contrib.rnn.DropoutWrapper(lstm_cell, output_keep_prob=keep_prob)
# Encoder
enc_cell = tf.contrib.rnn.MultiRNNCell([drop_cell] * num_layers, state_is_tuple=True)
_, rnn_state = tf.nn.dynamic_rnn(cell = enc_cell, inputs = rnn_inputs, dtype=tf.float32)
return rnn_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer)
"""
Explanation: Encoding
Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
End of explanation
"""
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
"""
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
"""
# TODO: Implement Function
with tf.variable_scope("decoding") as decoding_scope:
# Training Decoder
train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)
# Apply output function
train_logits = output_fn(train_pred)
return train_logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)
"""
Explanation: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
End of explanation
"""
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
"""
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
"""
# TODO: Implement Function
#tf.variable_scope("decoder") as varscope
with tf.variable_scope("decoding") as decoding_scope:
# Inference Decoder
infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(
output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size)
inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope)
return inference_logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer)
"""
Explanation: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
End of explanation
"""
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
"""
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
"""
# TODO: Implement Function
dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)
with tf.variable_scope('decoding') as decoding_scope:
#Output Function
output_fn= lambda x: tf.contrib.layers.fully_connected(x,vocab_size,None,scope=decoding_scope)
#Train Logits
train_logits=decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,output_fn, keep_prob)
decoding_scope.reuse_variables()
#Infer Logits
infer_logits=decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'],sequence_length-1, vocab_size, decoding_scope, output_fn, keep_prob)
return train_logits, infer_logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)
"""
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using a lambda to transform its input, logits, into class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
"""
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
"""
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
    :param enc_embedding_size: Encoder embedding size
    :param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
"""
#Apply embedding to the input data for the encoder.
enc_input = tf.contrib.layers.embed_sequence(
input_data,
source_vocab_size,
enc_embedding_size
)
#embed_target = tf.nn.embedding_lookup(dec_embed, dec_input)
#Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
enc_layer = encoding_layer(
enc_input,
rnn_size,
num_layers,
keep_prob
)
#Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
dec_input = process_decoding_input(
target_data,
target_vocab_to_int,
batch_size
)
#Apply embedding to the target data for the decoder.
#embed_target = tf.contrib.layers.embed_sequence(dec_input,target_vocab_size,dec_embedding_size)
dec_embed = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
embed_target = tf.nn.embedding_lookup(dec_embed, dec_input)
#Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
train_logits, inf_logits = decoding_layer(
embed_target,
dec_embed,
enc_layer,
target_vocab_size,
sequence_length,
rnn_size,
num_layers,
target_vocab_to_int,
keep_prob
)
return train_logits, inf_logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
Apply embedding to the target data for the decoder.
Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
End of explanation
"""
# Number of Epochs
epochs = None
# Batch Size
batch_size = None
# RNN Size
rnn_size = None
# Number of Layers
num_layers = None
# Embedding Size
encoding_embedding_size = None
decoding_embedding_size = None
# Learning Rate
learning_rate = None
# Dropout Keep Probability
keep_probability = None
#Number of Epochs
epochs = 5
#Batch Size
batch_size = 256
#RNN Size
rnn_size = 512 #25
#Number of Layers
num_layers = 2
#Embedding Size
encoding_embedding_size = 256 #13
decoding_embedding_size = 256 #13
#Learning Rate
learning_rate = 0.01
#Dropout Keep Probability
keep_probability = 0.5
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import time
def get_accuracy(target, logits):
"""
Calculate accuracy
"""
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params(save_path)
"""
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def sentence_to_seq(sentence, vocab_to_int):
"""
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
"""
# TODO: Implement Function
input_sentence = [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()]
return input_sentence
#return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)
"""
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
End of explanation
"""
translate_sentence = 'he saw a old yellow truck .'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
"""
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation
"""
|
atulsingh0/MachineLearning | Sklearn_MLPython/cross_validation.ipynb | gpl-3.0 | # import
from sklearn.datasets import load_iris
from sklearn.cross_validation import cross_val_score, KFold, train_test_split, cross_val_predict, LeaveOneOut, LeavePOut
from sklearn.cross_validation import ShuffleSplit, StratifiedKFold, StratifiedShuffleSplit
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC
from scipy.stats import sem
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
iris = load_iris()
X, y = iris.data, iris.target
# splotting the data into train and test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=27)
print(X_train.shape, X_test.shape, X_train.shape[0])
"""
Explanation: Cross Validation
End of explanation
"""
# define cross_val func
def xVal_score(clf, X, y, K):
# creating K using KFold
cv = KFold(n=X.shape[0], n_folds=K, shuffle=True, random_state=True)
# Can use suffle as well
# cv = ShuffleSplit(n_splits=3, test_size=0.3, random_state=0)
# doing cross validation
scores = cross_val_score(clf, X, y, cv=cv)
print(scores)
print("Accuracy Mean : %0.3f" %np.mean(scores))
print("Std : ", np.std(scores)*2)
print("Standard Err : +/- {0:0.3f} ".format(sem(scores)))
svc1 = SVC()
xVal_score(svc1, X_train, y_train, 10)
# define cross_val predict
# The function cross_val_predict has a similar interface to cross_val_score, but returns,
# for each element in the input, the prediction that was obtained for that element when it
# was in the test set. Only cross-validation strategies that assign all elements to a test
# set exactly once can be used (otherwise, an exception is raised).
def xVal_predict(clf, X, y, K):
# creating K using KFold
cv = KFold(n=X.shape[0], n_folds=K, shuffle=True, random_state=True)
# Can use suffle as well
# cv = ShuffleSplit(n_splits=3, test_size=0.3, random_state=0)
# doing cross validation prediction
predicted = cross_val_predict(clf, X, y, cv=cv)
print(predicted)
print("Accuracy Score : %0.3f" % accuracy_score(y, predicted))
xVal_predict(svc1, X_train, y_train, 10)
"""
Explanation: cross_val_score uses the KFold or StratifiedKFold strategies by default
End of explanation
"""
X = [1,2,3,4,5]
kf = KFold(n=len(X), n_folds=2)
print(kf)
for i in kf:
print(i)
"""
Explanation: Cross Validation Iterator
K-Fold - KFold divides all the samples in k groups of samples, called folds (if k = n, this is equivalent to the Leave One Out strategy), of equal sizes (if possible). The prediction function is learned using k - 1 folds, and the fold left out is used for test.
End of explanation
"""
X = [1,2,3,4,5]
loo = LeaveOneOut(len(X))
print(loo)
for i in loo:
print(i)
"""
Explanation: Leave One Out (LOO) - LeaveOneOut (or LOO) is a simple cross-validation. Each learning set is created by taking all the samples except one, the test set being the sample left out. Thus, for n samples, we have n different training sets and n different tests set. This cross-validation procedure does not waste much data as only one sample is removed from the training set:
End of explanation
"""
X = [1,2,3,4,5]
loo = LeavePOut(len(X), p=3)
print(loo)
for i in loo:
print(i)
"""
Explanation: Leave P Out (LPO) - LeavePOut is very similar to LeaveOneOut as it creates all the possible training/test sets by removing p samples from the complete set. For n samples, this produces {n \choose p} train-test pairs. Unlike LeaveOneOut and KFold, the test sets will overlap for p > 1
End of explanation
"""
X = [1,2,3,4,5]
loo = ShuffleSplit(len(X))
print(loo)
for i in loo:
print(i)
"""
Explanation: Random permutations cross-validation a.k.a. Shuffle & Split - The ShuffleSplit iterator will generate a user defined number of independent train / test dataset splits. Samples are first shuffled and then split into a pair of train and test sets.
It is possible to control the randomness for reproducibility of the results by explicitly seeding the random_state pseudo random number generator.
End of explanation
"""
X = np.ones(10)
y = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
skf = StratifiedKFold(n_folds=4, y=y)
for i in skf:
print(i)
"""
Explanation: Some classification problems can exhibit a large imbalance in the distribution of the target classes: for instance there could be several times more negative samples than positive samples. In such cases it is recommended to use stratified sampling as implemented in StratifiedKFold and StratifiedShuffleSplit to ensure that relative class frequencies is approximately preserved in each train and validation fold.
Stratified k-fold
StratifiedKFold is a variation of k-fold which returns stratified folds: each set contains approximately the same percentage of samples of each target class as the complete set.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.21/_downloads/063df3a44a4ac9d23978d7b307e69a4e/plot_read_evoked.ipynb | bsd-3-clause | # Author: Alexandre Gramfort <[email protected]>
#
# License: BSD (3-clause)
from mne import read_evokeds
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
# Reading
condition = 'Left Auditory'
evoked = read_evokeds(fname, condition=condition, baseline=(None, 0),
proj=True)
"""
Explanation: Reading and writing an evoked file
This script shows how to read and write evoked datasets.
End of explanation
"""
evoked.plot(exclude=[], time_unit='s')
# Show result as a 2D image (x: time, y: channels, color: amplitude)
evoked.plot_image(exclude=[], time_unit='s')
"""
Explanation: Show result as a butterfly plot:
By using exclude=[] bad channels are not excluded and are shown in red
End of explanation
"""
|
mqvist/CarND-Behavioral-Cloning | Experiment_1.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
df = pd.read_csv('data/driving_log.csv')
print(df.describe())
df['steering'].hist(bins=100)
plt.title('Histogram of steering angle (100 bins)')
"""
Explanation: Introduction
In this notebook, I want to experiment with the problem using the provided sample driving data. The aim is to create a working solution that is able to predict the correct steering angle just by using three training examples. Using so
small a training set means that the model should quickly overfit if it is working properly, which should indicate that it should be ok to use the model in further experiments. The idea for this approach came from Paul Heraty's cheatsheet (https://carnd-forums.udacity.com/questions/26214464/behavioral-cloning-cheatsheet).
Specifically, I want to
1. Access the driving data provided by Udacity
1. Pick three representative training examples from the data
1. Setup a LeNet-like model using Keras
1. Train the model and see if it can perfectly learn the test data (i.e. overfit it)
Step 1: Access the driving data
As a preparation step, I read the data in the driving_log.csv with Pandas and print some statistics.
End of explanation
"""
df[df['steering'] < -0.5].index
"""
Explanation: Seems that we are mostly steering straight here.
Step 2: Picking the training images
Now I need to pick three images from the sample driving data that correspond to steering left, right and straight. This set of images should be enough to see that the model is able to learn the differences between the images if it can predict the different steering angles correctly.
Let's first get the indices
of records where steering angle is hard left (< -0.5).
End of explanation
"""
import os
from PIL import Image
def get_record_and_image(index):
record = df.iloc[index]
path = os.path.join('data', record.center)
return record, Image.open(path)
left_record, left_image = get_record_and_image(4341)
print('Steering angle {}'.format(left_record.steering))
plt.imshow(left_image)
"""
Explanation: By trial and error, I ended up picking index 4341 where the image matches the left turn nicely.
End of explanation
"""
df[df['steering'] > 0.5].index
"""
Explanation: Let's do the same with the hard right turn (steering angle > 0.5).
End of explanation
"""
right_record, right_image = get_record_and_image(3357)
print('Steering angle {}'.format(right_record.steering))
plt.imshow(right_image)
"""
Explanation: Again, after some peeking at the images, index 3357 looks fine.
End of explanation
"""
# I used this code to pick random images until I found one I liked
#index = df[(df['steering'] > -0.1) & (df['steering'] < 0.1)].sample(n=1).iloc[0].name
#print('Index', index)
straight_record, straight_image = get_record_and_image(796)
plt.imshow(straight_image)
"""
Explanation: Now I need to pick a record for driving straight. There should be plenty of choices to pick from, so some random exploration of the choices is probably the best way to find one that looks ok.
End of explanation
"""
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
model = Sequential()
model.add(Convolution2D(6, 5, 5, border_mode='valid', subsample=(5, 5), input_shape=(160, 320, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(16, 5, 5, border_mode='valid', subsample=(2, 2)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(120))
model.add(Activation('relu'))
model.add(Dense(84))
model.add(Activation('relu'))
model.add(Dense(1))
model.add(Activation('tanh'))
"""
Explanation: Step 3: Setup the model
Having selected the training examples, I next need to create a DNN model to train. I first thought about starting with the Nvidia pipeline (http://images.nvidia.com/content/tegra/automotive/images/2016/solutions/pdf/end-to-end-dl-using-px.pdf) but I started to wonder how a somewhat simpler network like LeNet would work. The input is likely less complex here than in the Nvidia self-driving car case, with the constant lighting and road color etc.
So, let's set up a modified version of LeNet that takes the 320x160 images as input and outputs a single number between -1 and 1. Because the image resolution is so much higher than in the original LeNet, it probably makes sense to use striding in the convolution layers to reduce the dimensionality: a (5, 5) stride for the first convolution layer and a (2, 2) stride for the second, each followed by max pooling. To get the [-1, 1] range for the output, a tanh activation function seems to be a good choice.
End of explanation
"""
for n, layer in enumerate(model.layers, 1):
print('Layer {:2} {:16} input shape {} output shape {}'.format(n, layer.name, layer.input_shape, layer.output_shape))
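# Keras also provides a built-in overview of the same information; shown here
# only as a convenience check.
model.summary()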
"""
Explanation: Let's check the dimensions of the network layers.
End of explanation
"""
X_train = np.array([np.array(image) for image in [left_image, right_image, straight_image]],
                   dtype=np.float32)
X_min = np.min(X_train)
X_max = np.max(X_train)
X_normalized = (X_train - X_min) / (X_max - X_min) - 0.5
y_train = np.array([record['steering'] for record in [left_record, right_record, straight_record]])
from random import randrange
def generator():
while 1:
i = randrange(3)
# Create a one item batch by taking a slice
yield X_normalized[i:i+1], y_train[i:i+1]
model.compile('adam', 'mse')
history = model.fit_generator(generator(), samples_per_epoch=1000, validation_data=(X_normalized, y_train), nb_epoch=10, verbose=2)
"""
Explanation: Now I need to massage the images and corresponding steering angles into a form that is usable for model training.
End of explanation
"""
for X, y in zip(X_normalized, y_train):
    # model.predict expects a batch dimension, so add one before predicting
    print('Actual steering angle {} model prediction {}'.format(y, model.predict(X[np.newaxis])[0][0]))
"""
Explanation: Training hits zero validation loss after epoch 5, i.e., it should have learned the data perfectly. Let's see how well the model predicts.
End of explanation
"""
|
phoebe-project/phoebe2-docs | 2.1/tutorials/beaming_boosting.ipynb | gpl-3.0 | !pip install -I "phoebe>=2.1,<2.2"
"""
Explanation: Beaming and Boosting
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
"""
b['requiv@primary'] = 1.8
b['requiv@secondary'] = 0.96
b['teff@primary'] = 10000
b['gravb_bol@primary'] = 1.0
b['teff@secondary'] = 5200
b['gravb_bol@secondary'] = 0.32
b['q@binary'] = 0.96/1.8
b['incl@binary'] = 88
b['period@binary'] = 1.0
b['sma@binary'] = 6.0
"""
Explanation: Let's make our system so that the boosting effects will be quite noticeable.
End of explanation
"""
times = np.linspace(0,1,101)
b.add_dataset('lc', times=times, dataset='lc01')
b.add_dataset('rv', times=times, dataset='rv01')
b.add_dataset('mesh', times=times[::10], dataset='mesh01', columns=['boost_factors@lc01'])
"""
Explanation: We'll add lc, rv, and mesh datasets so that we can see how they're each affected by beaming and boosting.
End of explanation
"""
b.set_value('irrad_method', 'none')
print(b['boosting_method@compute'])
print(b['boosting_method@compute'].choices)
"""
Explanation: Relevant Parameters
End of explanation
"""
b.run_compute(boosting_method='none', model='boosting_none')
b.run_compute(boosting_method='linear', model='boosting_linear')
afig, mplfig = b['lc01'].plot(show=True, legend=True)
afig, mplfig = b['lc01'].plot(ylim=(1.01,1.03), show=True, legend=True)
"""
Explanation: Influence on Light Curves (fluxes)
End of explanation
"""
afig, mplfig = b['rv01@model'].plot(show=True, legend=True)
"""
Explanation: Influence on Radial Velocities
End of explanation
"""
afig, mplfig = b['mesh@boosting_none'].plot(time=0.6, fc='boost_factors', ec='none', show=True)
afig, mplfig = b['mesh@boosting_linear'].plot(time=0.6, fc='boost_factors', ec='none', show=True)
"""
Explanation: Influence on Meshes
End of explanation
"""
|
Hvass-Labs/TensorFlow-Tutorials | 08_Transfer_Learning.ipynb | mit | from IPython.display import Image, display
Image('images/08_transfer_learning_flowchart.png')
"""
Explanation: TensorFlow Tutorial #08
Transfer Learning
by Magnus Erik Hvass Pedersen
/ GitHub / Videos on YouTube
WARNING!
This tutorial does not work with TensorFlow v. 1.9 due to the PrettyTensor builder API apparently no longer being updated and supported by the Google Developers. It would take too much effort to update this tutorial to use e.g. the Keras API, especially because Tutorial #10 is a similar but more advanced version of Transfer Learning using the Keras builder API. However, you may still want to watch the video for this Tutorial #08 as it explains more details about Transfer Learning than Tutorial #10 does.
Introduction
We saw in the previous Tutorial #07 how to use the pre-trained Inception model for classifying images. Unfortunately the Inception model seemed unable to classify images of people. The reason was the data-set used for training the Inception model, which had some confusing text-labels for classes.
The Inception model is actually quite capable of extracting useful information from an image. So we can instead train the Inception model using another data-set. But it takes several weeks using a very powerful and expensive computer to fully train the Inception model on a new data-set.
We can instead re-use the pre-trained Inception model and merely replace the layer that does the final classification. This is called Transfer Learning.
This tutorial builds on the previous tutorials so you should be familiar with Tutorial #07 on the Inception model, as well as earlier tutorials on how to build and train Neural Networks in TensorFlow. A part of the source-code for this tutorial is located in the inception.py file.
Flowchart
The following chart shows how the data flows when using the Inception model for Transfer Learning. First we input and process an image with the Inception model. Just prior to the final classification layer of the Inception model, we save the so-called Transfer Values to a cache-file.
The reason for using a cache-file is that it takes a long time to process an image with the Inception model. My laptop computer with a Quad-Core 2 GHz CPU can process about 3 images per second using the Inception model. If each image is processed more than once then we can save a lot of time by caching the transfer-values.
The transfer-values are also sometimes called bottleneck-values, but that is a confusing term so it is not used here.
When all the images in the new data-set have been processed through the Inception model and the resulting transfer-values saved to a cache file, then we can use those transfer-values as the input to another neural network. We will then train the second neural network using the classes from the new data-set, so the network learns how to classify images based on the transfer-values from the Inception model.
In this way, the Inception model is used to extract useful information from the images and another neural network is then used for the actual classification.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
import time
from datetime import timedelta
import os
# Functions and classes for loading and using the Inception model.
import inception
# We use Pretty Tensor to define the new classifier.
import prettytensor as pt
"""
Explanation: Imports
End of explanation
"""
tf.__version__
"""
Explanation: This was developed using Python 3.5.2 (Anaconda) and TensorFlow version:
End of explanation
"""
pt.__version__
"""
Explanation: PrettyTensor version:
End of explanation
"""
import cifar10
"""
Explanation: Load Data for CIFAR-10
End of explanation
"""
from cifar10 import num_classes
"""
Explanation: The data dimensions have already been defined in the cifar10 module, so we just need to import the ones we need.
End of explanation
"""
# cifar10.data_path = "data/CIFAR-10/"
"""
Explanation: Set the path for storing the data-set on your computer.
End of explanation
"""
cifar10.maybe_download_and_extract()
"""
Explanation: The CIFAR-10 data-set is about 163 MB and will be downloaded automatically if it is not located in the given path.
End of explanation
"""
class_names = cifar10.load_class_names()
class_names
"""
Explanation: Load the class-names.
End of explanation
"""
images_train, cls_train, labels_train = cifar10.load_training_data()
"""
Explanation: Load the training-set. This returns the images, the class-numbers as integers, and the class-numbers as One-Hot encoded arrays called labels.
End of explanation
"""
images_test, cls_test, labels_test = cifar10.load_test_data()
"""
Explanation: Load the test-set.
End of explanation
"""
print("Size of:")
print("- Training-set:\t\t{}".format(len(images_train)))
print("- Test-set:\t\t{}".format(len(images_test)))
"""
Explanation: The CIFAR-10 data-set has now been loaded and consists of 60,000 images and associated labels (i.e. classifications of the images). The data-set is split into 2 mutually exclusive sub-sets, the training-set and the test-set.
End of explanation
"""
def plot_images(images, cls_true, cls_pred=None, smooth=True):
assert len(images) == len(cls_true)
# Create figure with sub-plots.
fig, axes = plt.subplots(3, 3)
# Adjust vertical spacing.
if cls_pred is None:
hspace = 0.3
else:
hspace = 0.6
fig.subplots_adjust(hspace=hspace, wspace=0.3)
# Interpolation type.
if smooth:
interpolation = 'spline16'
else:
interpolation = 'nearest'
for i, ax in enumerate(axes.flat):
# There may be less than 9 images, ensure it doesn't crash.
if i < len(images):
# Plot image.
ax.imshow(images[i],
interpolation=interpolation)
# Name of the true class.
cls_true_name = class_names[cls_true[i]]
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true_name)
else:
# Name of the predicted class.
cls_pred_name = class_names[cls_pred[i]]
xlabel = "True: {0}\nPred: {1}".format(cls_true_name, cls_pred_name)
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
"""
Explanation: Helper-function for plotting images
Function used to plot at most 9 images in a 3x3 grid, writing the true and predicted classes below each image.
End of explanation
"""
# Get the first images from the test-set.
images = images_test[0:9]
# Get the true classes for those images.
cls_true = cls_test[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true, smooth=False)
"""
Explanation: Plot a few images to see if data is correct
End of explanation
"""
# inception.data_dir = 'inception/'
"""
Explanation: Download the Inception Model
The Inception model is downloaded from the internet. This is the default directory where you want to save the data-files. The directory will be created if it does not exist.
End of explanation
"""
inception.maybe_download()
"""
Explanation: Download the data for the Inception model if it doesn't already exist in the directory. It is 85 MB.
See Tutorial #07 for more details.
End of explanation
"""
model = inception.Inception()
"""
Explanation: Load the Inception Model
Load the Inception model so it is ready for classifying images.
Note the deprecation warning, which might cause the program to fail in the future.
End of explanation
"""
from inception import transfer_values_cache
"""
Explanation: Calculate Transfer-Values
Import a helper-function for caching the transfer-values of the Inception model.
End of explanation
"""
file_path_cache_train = os.path.join(cifar10.data_path, 'inception_cifar10_train.pkl')
file_path_cache_test = os.path.join(cifar10.data_path, 'inception_cifar10_test.pkl')
print("Processing Inception transfer-values for training-images ...")
# Scale images because Inception needs pixels to be between 0 and 255,
# while the CIFAR-10 functions return pixels between 0.0 and 1.0
images_scaled = images_train * 255.0
# If transfer-values have already been calculated then reload them,
# otherwise calculate them and save them to a cache-file.
transfer_values_train = transfer_values_cache(cache_path=file_path_cache_train,
images=images_scaled,
model=model)
print("Processing Inception transfer-values for test-images ...")
# Scale images because Inception needs pixels to be between 0 and 255,
# while the CIFAR-10 functions return pixels between 0.0 and 1.0
images_scaled = images_test * 255.0
# If transfer-values have already been calculated then reload them,
# otherwise calculate them and save them to a cache-file.
transfer_values_test = transfer_values_cache(cache_path=file_path_cache_test,
images=images_scaled,
model=model)
"""
Explanation: Set the file-paths for the caches of the training-set and test-set.
End of explanation
"""
transfer_values_train.shape
"""
Explanation: Check the shape of the array with the transfer-values. There are 50,000 images in the training-set and for each image there are 2048 transfer-values.
End of explanation
"""
transfer_values_test.shape
"""
Explanation: Similarly, there are 10,000 images in the test-set with 2048 transfer-values for each image.
End of explanation
"""
def plot_transfer_values(i):
print("Input image:")
# Plot the i'th image from the test-set.
plt.imshow(images_test[i], interpolation='nearest')
plt.show()
print("Transfer-values for the image using Inception model:")
# Transform the transfer-values into an image.
img = transfer_values_test[i]
img = img.reshape((32, 64))
# Plot the image for the transfer-values.
plt.imshow(img, interpolation='nearest', cmap='Reds')
plt.show()
plot_transfer_values(i=16)
plot_transfer_values(i=17)
"""
Explanation: Helper-function for plotting transfer-values
End of explanation
"""
from sklearn.decomposition import PCA
"""
Explanation: Analysis of Transfer-Values using PCA
Use Principal Component Analysis (PCA) from scikit-learn to reduce the array-lengths of the transfer-values from 2048 to 2 so they can be plotted.
End of explanation
"""
pca = PCA(n_components=2)
"""
Explanation: Create a new PCA-object and set the target array-length to 2.
End of explanation
"""
transfer_values = transfer_values_train[0:3000]
"""
Explanation: It takes a while to compute the PCA so the number of samples has been limited to 3000. You can try and use the full training-set if you like.
End of explanation
"""
cls = cls_train[0:3000]
"""
Explanation: Get the class-numbers for the samples you selected.
End of explanation
"""
transfer_values.shape
"""
Explanation: Check that the array has 3000 samples and 2048 transfer-values for each sample.
End of explanation
"""
transfer_values_reduced = pca.fit_transform(transfer_values)
"""
Explanation: Use PCA to reduce the transfer-value arrays from 2048 to 2 elements.
End of explanation
"""
transfer_values_reduced.shape
"""
Explanation: Check that it is now an array with 3000 samples and 2 values per sample.
End of explanation
"""
def plot_scatter(values, cls):
# Create a color-map with a different color for each class.
import matplotlib.cm as cm
cmap = cm.rainbow(np.linspace(0.0, 1.0, num_classes))
# Get the color for each sample.
colors = cmap[cls]
# Extract the x- and y-values.
x = values[:, 0]
y = values[:, 1]
# Plot it.
plt.scatter(x, y, color=colors)
plt.show()
"""
Explanation: Helper-function for plotting the reduced transfer-values.
End of explanation
"""
plot_scatter(transfer_values_reduced, cls)
"""
Explanation: Plot the transfer-values that have been reduced using PCA. There are 10 different colors for the different classes in the CIFAR-10 data-set. The colors are grouped together but with very large overlap. This may be because PCA cannot properly separate the transfer-values.
End of explanation
"""
from sklearn.manifold import TSNE
"""
Explanation: Analysis of Transfer-Values using t-SNE
End of explanation
"""
pca = PCA(n_components=50)
transfer_values_50d = pca.fit_transform(transfer_values)
"""
Explanation: Another method for doing dimensionality reduction is t-SNE. Unfortunately, t-SNE is very slow so we first use PCA to reduce the transfer-values from 2048 to 50 elements.
End of explanation
"""
tsne = TSNE(n_components=2)
"""
Explanation: Create a new t-SNE object for the final dimensionality reduction and set the target to 2-dim.
End of explanation
"""
transfer_values_reduced = tsne.fit_transform(transfer_values_50d)
"""
Explanation: Perform the final reduction using t-SNE. The current implementation of t-SNE in scikit-learn cannot handle data with many samples so this might crash if you use the full training-set.
End of explanation
"""
transfer_values_reduced.shape
"""
Explanation: Check that it is now an array with 3000 samples and 2 transfer-values per sample.
End of explanation
"""
plot_scatter(transfer_values_reduced, cls)
"""
Explanation: Plot the transfer-values that have been reduced to 2-dim using t-SNE, which shows better separation than the PCA-plot above.
This means the transfer-values from the Inception model appear to contain enough information to separate the CIFAR-10 images into classes, although there is still some overlap so the separation is not perfect.
End of explanation
"""
transfer_len = model.transfer_len
"""
Explanation: New Classifier in TensorFlow
Now we will create another neural network in TensorFlow. This network will take as input the transfer-values from the Inception model and output the predicted classes for CIFAR-10 images.
It is assumed that you are already familiar with how to build neural networks in TensorFlow, otherwise see e.g. Tutorial #03.
Placeholder Variables
First we need the array-length for transfer-values which is stored as a variable in the object for the Inception model.
End of explanation
"""
x = tf.placeholder(tf.float32, shape=[None, transfer_len], name='x')
"""
Explanation: Now create a placeholder variable for inputting the transfer-values from the Inception model into the new network that we are building. The shape of this variable is [None, transfer_len] which means it takes an input array with an arbitrary number of samples as indicated by the keyword None and each sample has 2048 elements, equal to transfer_len.
End of explanation
"""
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
"""
Explanation: Create another placeholder variable for inputting the true class-label of each image. These are so-called One-Hot encoded arrays with 10 elements, one for each possible class in the data-set.
End of explanation
"""
y_true_cls = tf.argmax(y_true, dimension=1)
"""
Explanation: Calculate the true class as an integer. This could also be a placeholder variable.
End of explanation
"""
# Wrap the transfer-values as a Pretty Tensor object.
x_pretty = pt.wrap(x)
with pt.defaults_scope(activation_fn=tf.nn.relu):
y_pred, loss = x_pretty.\
fully_connected(size=1024, name='layer_fc1').\
softmax_classifier(num_classes=num_classes, labels=y_true)
"""
Explanation: Neural Network
Create the neural network for doing the classification on the CIFAR-10 data-set. This takes as input the transfer-values from the Inception model which will be fed into the placeholder variable x. The network outputs the predicted class in y_pred.
See Tutorial #03 for more details on how to use Pretty Tensor to construct neural networks.
End of explanation
"""
global_step = tf.Variable(initial_value=0,
name='global_step', trainable=False)
"""
Explanation: Optimization Method
Create a variable for keeping track of the number of optimization iterations performed.
End of explanation
"""
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss, global_step)
"""
Explanation: Method for optimizing the new neural network.
End of explanation
"""
y_pred_cls = tf.argmax(y_pred, dimension=1)
"""
Explanation: Classification Accuracy
The output of the network y_pred is an array with 10 elements. The class number is the index of the largest element in the array.
End of explanation
"""
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
"""
Explanation: Create an array of booleans whether the predicted class equals the true class of each image.
End of explanation
"""
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
"""
Explanation: The classification accuracy is calculated by first type-casting the array of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
End of explanation
"""
session = tf.Session()
"""
Explanation: TensorFlow Run
Create TensorFlow Session
Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
End of explanation
"""
session.run(tf.global_variables_initializer())
"""
Explanation: Initialize Variables
The variables for the new network must be initialized before we start optimizing them.
End of explanation
"""
train_batch_size = 64
"""
Explanation: Helper-function to get a random training-batch
There are 50,000 images (and arrays with transfer-values for the images) in the training-set. It takes a long time to calculate the gradient of the model using all these images (transfer-values). We therefore only use a small batch of images (transfer-values) in each iteration of the optimizer.
If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.
End of explanation
"""
def random_batch():
# Number of images (transfer-values) in the training-set.
num_images = len(transfer_values_train)
# Create a random index.
idx = np.random.choice(num_images,
size=train_batch_size,
replace=False)
# Use the random index to select random x and y-values.
# We use the transfer-values instead of images as x-values.
x_batch = transfer_values_train[idx]
y_batch = labels_train[idx]
return x_batch, y_batch
"""
Explanation: Function for selecting a random batch of transfer-values from the training-set.
End of explanation
"""
def optimize(num_iterations):
# Start-time used for printing time-usage below.
start_time = time.time()
for i in range(num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images (transfer-values) and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = random_batch()
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
# We also want to retrieve the global_step counter.
i_global, _ = session.run([global_step, optimizer],
feed_dict=feed_dict_train)
# Print status to screen every 100 iterations (and last).
if (i_global % 100 == 0) or (i == num_iterations - 1):
# Calculate the accuracy on the training-batch.
batch_acc = session.run(accuracy,
feed_dict=feed_dict_train)
# Print status.
msg = "Global Step: {0:>6}, Training Batch Accuracy: {1:>6.1%}"
print(msg.format(i_global, batch_acc))
# Ending time.
end_time = time.time()
# Difference between start and end-times.
time_dif = end_time - start_time
# Print the time-usage.
print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
"""
Explanation: Helper-function to perform optimization
This function performs a number of optimization iterations so as to gradually improve the variables of the neural network. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations.
End of explanation
"""
def plot_example_errors(cls_pred, correct):
# This function is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# correct is a boolean array whether the predicted class
# is equal to the true class for each image in the test-set.
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = images_test[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = cls_test[incorrect]
n = min(9, len(images))
# Plot the first n images.
plot_images(images=images[0:n],
cls_true=cls_true[0:n],
cls_pred=cls_pred[0:n])
"""
Explanation: Helper-Functions for Showing Results
Helper-function to plot example errors
Function for plotting examples of images from the test-set that have been mis-classified.
End of explanation
"""
# Import a function from sklearn to calculate the confusion-matrix.
from sklearn.metrics import confusion_matrix
def plot_confusion_matrix(cls_pred):
# This is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_test, # True class for test-set.
y_pred=cls_pred) # Predicted class.
# Print the confusion matrix as text.
for i in range(num_classes):
# Append the class-name to each line.
class_name = "({}) {}".format(i, class_names[i])
print(cm[i, :], class_name)
# Print the class-numbers for easy reference.
class_numbers = [" ({0})".format(i) for i in range(num_classes)]
print("".join(class_numbers))
"""
Explanation: Helper-function to plot confusion matrix
End of explanation
"""
# Split the data-set in batches of this size to limit RAM usage.
batch_size = 256
def predict_cls(transfer_values, labels, cls_true):
# Number of images.
num_images = len(transfer_values)
# Allocate an array for the predicted classes which
# will be calculated in batches and filled into this array.
cls_pred = np.zeros(shape=num_images, dtype=np.int)
# Now calculate the predicted classes for the batches.
# We will just iterate through all the batches.
# There might be a more clever and Pythonic way of doing this.
# The starting index for the next batch is denoted i.
i = 0
while i < num_images:
# The ending index for the next batch is denoted j.
j = min(i + batch_size, num_images)
# Create a feed-dict with the images and labels
# between index i and j.
feed_dict = {x: transfer_values[i:j],
y_true: labels[i:j]}
# Calculate the predicted class using TensorFlow.
cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
return correct, cls_pred
"""
Explanation: Helper-functions for calculating classifications
This function calculates the predicted classes of images and also returns a boolean array whether the classification of each image is correct.
The calculation is done in batches because it might use too much RAM otherwise. If your computer crashes then you can try and lower the batch-size.
End of explanation
"""
def predict_cls_test():
return predict_cls(transfer_values = transfer_values_test,
labels = labels_test,
cls_true = cls_test)
"""
Explanation: Calculate the predicted class for the test-set.
End of explanation
"""
def classification_accuracy(correct):
# When averaging a boolean array, False means 0 and True means 1.
# So we are calculating: number of True / len(correct) which is
# the same as the classification accuracy.
# Return the classification accuracy
# and the number of correct classifications.
return correct.mean(), correct.sum()
"""
Explanation: Helper-functions for calculating the classification accuracy
This function calculates the classification accuracy given a boolean array whether each image was correctly classified. E.g. classification_accuracy([True, True, False, False, False]) = 2/5 = 0.4. The function also returns the number of correct classifications.
End of explanation
"""
def print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False):
# For all the images in the test-set,
# calculate the predicted classes and whether they are correct.
correct, cls_pred = predict_cls_test()
# Classification accuracy and the number of correct classifications.
acc, num_correct = classification_accuracy(correct)
# Number of images being classified.
num_images = len(correct)
# Print the accuracy.
msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
print(msg.format(acc, num_correct, num_images))
# Plot some examples of mis-classifications, if desired.
if show_example_errors:
print("Example errors:")
plot_example_errors(cls_pred=cls_pred, correct=correct)
# Plot the confusion matrix, if desired.
if show_confusion_matrix:
print("Confusion Matrix:")
plot_confusion_matrix(cls_pred=cls_pred)
"""
Explanation: Helper-function for showing the classification accuracy
Function for printing the classification accuracy on the test-set.
It takes a while to compute the classification for all the images in the test-set, that's why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function.
End of explanation
"""
print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False)
"""
Explanation: Results
Performance before any optimization
The classification accuracy on the test-set is very low because the model variables have only been initialized and not optimized at all, so it just classifies the images randomly.
End of explanation
"""
optimize(num_iterations=10000)
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
"""
Explanation: Performance after 10,000 optimization iterations
After 10,000 optimization iterations, the classification accuracy is about 90% on the test-set. Compare this to the basic Convolutional Neural Network from Tutorial #06 which had less than 80% accuracy on the test-set.
End of explanation
"""
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# model.close()
# session.close()
"""
Explanation: Close TensorFlow Session
We are now done using TensorFlow, so we close the session to release its resources. Note that there are two TensorFlow-sessions so we close both, one session is inside the model-object.
End of explanation
"""
|
statsmodels/statsmodels.github.io | v0.13.1/examples/notebooks/generated/metaanalysis1.ipynb | bsd-3-clause | %matplotlib inline
import numpy as np
import pandas as pd
from scipy import stats, optimize
from statsmodels.regression.linear_model import WLS
from statsmodels.genmod.generalized_linear_model import GLM
from statsmodels.stats.meta_analysis import (
effectsize_smd,
effectsize_2proportions,
combine_effects,
_fit_tau_iterative,
_fit_tau_mm,
_fit_tau_iter_mm,
)
# increase line length for pandas
pd.set_option("display.width", 100)
"""
Explanation: Meta-Analysis in statsmodels
Statsmodels includes basic methods for meta-analysis. This notebook illustrates the current usage.
Status: The results have been verified against R meta and metafor packages. However, the API is still experimental and will still change. Some options for additional methods that are available in R meta and metafor are missing.
The support for meta-analysis has 3 parts:
effect size functions: this currently includes
effectsize_smd computes effect size and their standard errors for standardized mean difference,
effectsize_2proportions computes effect sizes for comparing two independent proportions using risk difference, (log) risk ratio, (log) odds-ratio or arcsine square root transformation
The combine_effects function computes the fixed and random effects estimates for the overall mean or effect. The returned results instance includes a forest plot function.
helper functions to estimate the random effect variance, tau-squared
The estimate of the overall effect size in combine_effects can also be performed using WLS or GLM with var_weights.
Finally, the meta-analysis functions currently do not include the Mantel-Haenszel method. However, the fixed effects results can be computed directly using StratifiedTable as illustrated below.
End of explanation
"""
data = [
["Carroll", 94, 22, 60, 92, 20, 60],
["Grant", 98, 21, 65, 92, 22, 65],
["Peck", 98, 28, 40, 88, 26, 40],
["Donat", 94, 19, 200, 82, 17, 200],
["Stewart", 98, 21, 50, 88, 22, 45],
["Young", 96, 21, 85, 92, 22, 85],
]
colnames = ["study", "mean_t", "sd_t", "n_t", "mean_c", "sd_c", "n_c"]
rownames = [i[0] for i in data]
dframe1 = pd.DataFrame(data, columns=colnames)
rownames
mean2, sd2, nobs2, mean1, sd1, nobs1 = np.asarray(
dframe1[["mean_t", "sd_t", "n_t", "mean_c", "sd_c", "n_c"]]
).T
rownames = dframe1["study"]
rownames.tolist()
np.array(nobs1 + nobs2)
"""
Explanation: Example
End of explanation
"""
eff, var_eff = effectsize_smd(mean2, sd2, nobs2, mean1, sd1, nobs1)
"""
Explanation: estimate effect size standardized mean difference
End of explanation
"""
res3 = combine_effects(eff, var_eff, method_re="chi2", use_t=True, row_names=rownames)
# TODO: we still need better information about conf_int of individual samples
# We don't have enough information in the model for individual confidence intervals
# if those are not based on normal distribution.
res3.conf_int_samples(nobs=np.array(nobs1 + nobs2))
print(res3.summary_frame())
res3.cache_ci
res3.method_re
fig = res3.plot_forest()
fig.set_figheight(6)
fig.set_figwidth(6)
res3 = combine_effects(eff, var_eff, method_re="chi2", use_t=False, row_names=rownames)
# TODO: we still need better information about conf_int of individual samples
# We don't have enough information in the model for individual confidence intervals
# if those are not based on normal distribution.
res3.conf_int_samples(nobs=np.array(nobs1 + nobs2))
print(res3.summary_frame())
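# Added check (assumption: the results object exposes the estimate as `tau2`, as
# used later in this notebook): reproduce the DerSimonian-Laird tau^2 with the
# usual moment formula tau2 = max(0, (Q - (k - 1)) / (sum(w) - sum(w**2) / sum(w)))
# where w = 1 / var_eff and Q is Cochran's Q at the fixed-effects mean.
w = 1 / var_eff
mean_fixed = np.sum(w * eff) / np.sum(w)
q = np.sum(w * (eff - mean_fixed) ** 2)
k = len(eff)
tau2_dl = max(0, (q - (k - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))
print("manual DL tau2:", tau2_dl, " combine_effects tau2:", res3.tau2)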
"""
Explanation: Using one-step chi2, DerSimonian-Laird estimate for random effects variance tau
The random effects method option is method_re="chi2" or method_re="dl"; both names are accepted.
This is commonly referred to as the DerSimonian-Laird method; it is a moment estimator based on the Pearson chi2 statistic of the fixed effects estimate.
End of explanation
"""
res4 = combine_effects(
eff, var_eff, method_re="iterated", use_t=False, row_names=rownames
)
res4_df = res4.summary_frame()
print("method RE:", res4.method_re)
print(res4.summary_frame())
fig = res4.plot_forest()
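# Added check (assumption: `tau2` on the results object holds the Paule-Mandel
# estimate): at the PM solution the generalized Q statistic computed with
# weights 1 / (var_eff + tau2) equals its degrees of freedom, k - 1.
w_pm = 1 / (var_eff + res4.tau2)
mu_pm = np.sum(w_pm * eff) / np.sum(w_pm)
q_gen = np.sum(w_pm * (eff - mu_pm) ** 2)
print(q_gen, len(eff) - 1)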
"""
Explanation: Using iterated, Paule-Mandel estimate for random effects variance tau
The method commonly referred to as the Paule-Mandel estimate is a method-of-moments estimator for the random effects variance that iterates between the mean and variance estimates until convergence.
End of explanation
"""
eff = np.array([61.00, 61.40, 62.21, 62.30, 62.34, 62.60, 62.70, 62.84, 65.90])
var_eff = np.array(
[0.2025, 1.2100, 0.0900, 0.2025, 0.3844, 0.5625, 0.0676, 0.0225, 1.8225]
)
rownames = ["PTB", "NMi", "NIMC", "KRISS", "LGC", "NRC", "IRMM", "NIST", "LNE"]
res2_DL = combine_effects(eff, var_eff, method_re="dl", use_t=True, row_names=rownames)
print("method RE:", res2_DL.method_re)
print(res2_DL.summary_frame())
fig = res2_DL.plot_forest()
fig.set_figheight(6)
fig.set_figwidth(6)
res2_PM = combine_effects(eff, var_eff, method_re="pm", use_t=True, row_names=rownames)
print("method RE:", res2_PM.method_re)
print(res2_PM.summary_frame())
fig = res2_PM.plot_forest()
fig.set_figheight(6)
fig.set_figwidth(6)
"""
Explanation: Example Kacker interlaboratory mean
In this example the effect size is the mean of measurements in a lab. We combine the estimates from several labs to estimate an overall average.
End of explanation
"""
import io
ss = """\
study,nei,nci,e1i,c1i,e2i,c2i,e3i,c3i,e4i,c4i
1,19,22,16.0,20.0,11,12,4.0,8.0,4,3
2,34,35,22.0,22.0,18,12,15.0,8.0,15,6
3,72,68,44.0,40.0,21,15,10.0,3.0,3,0
4,22,20,19.0,12.0,14,5,5.0,4.0,2,3
5,70,32,62.0,27.0,42,13,26.0,6.0,15,5
6,183,94,130.0,65.0,80,33,47.0,14.0,30,11
7,26,50,24.0,30.0,13,18,5.0,10.0,3,9
8,61,55,51.0,44.0,37,30,19.0,19.0,11,15
9,36,25,30.0,17.0,23,12,13.0,4.0,10,4
10,45,35,43.0,35.0,19,14,8.0,4.0,6,0
11,246,208,169.0,139.0,106,76,67.0,42.0,51,35
12,386,141,279.0,97.0,170,46,97.0,21.0,73,8
13,59,32,56.0,30.0,34,17,21.0,9.0,20,7
14,45,15,42.0,10.0,18,3,9.0,1.0,9,1
15,14,18,14.0,18.0,13,14,12.0,13.0,9,12
16,26,19,21.0,15.0,12,10,6.0,4.0,5,1
17,74,75,,,42,40,,,23,30"""
df3 = pd.read_csv(io.StringIO(ss))
df_12y = df3[["e2i", "nei", "c2i", "nci"]]
# TODO: currently 1 is reference, switch labels
count1, nobs1, count2, nobs2 = df_12y.values.T
dta = df_12y.values.T
eff, var_eff = effectsize_2proportions(*dta, statistic="rd")
eff, var_eff
res5 = combine_effects(
eff, var_eff, method_re="iterated", use_t=False
) # , row_names=rownames)
res5_df = res5.summary_frame()
print("method RE:", res5.method_re)
print("RE variance tau2:", res5.tau2)
print(res5.summary_frame())
fig = res5.plot_forest()
fig.set_figheight(8)
fig.set_figwidth(6)
"""
Explanation: Meta-analysis of proportions
In the following example the random effect variance tau is estimated to be zero.
I then change two counts in the data, so the second example has random effects variance greater than zero.
End of explanation
"""
dta_c = dta.copy()
dta_c.T[0, 0] = 18
dta_c.T[1, 0] = 22
dta_c.T
eff, var_eff = effectsize_2proportions(*dta_c, statistic="rd")
res5 = combine_effects(
eff, var_eff, method_re="iterated", use_t=False
) # , row_names=rownames)
res5_df = res5.summary_frame()
print("method RE:", res5.method_re)
print(res5.summary_frame())
fig = res5.plot_forest()
fig.set_figheight(8)
fig.set_figwidth(6)
res5 = combine_effects(eff, var_eff, method_re="chi2", use_t=False)
res5_df = res5.summary_frame()
print("method RE:", res5.method_re)
print(res5.summary_frame())
fig = res5.plot_forest()
fig.set_figheight(8)
fig.set_figwidth(6)
"""
Explanation: changing data to have positive random effects variance
End of explanation
"""
from statsmodels.genmod.generalized_linear_model import GLM
eff, var_eff = effectsize_2proportions(*dta_c, statistic="or")
res = combine_effects(eff, var_eff, method_re="chi2", use_t=False)
res_frame = res.summary_frame()
print(res_frame.iloc[-4:])
"""
Explanation: Replicate fixed effect analysis using GLM with var_weights
combine_effects computes weighted average estimates which can be replicated using GLM with var_weights or with WLS.
The scale option in GLM.fit can be used to replicate the fixed-effects meta-analysis either with a fixed scale or with the HKSJ/WLS scale adjustment.
End of explanation
"""
weights = 1 / var_eff
mod_glm = GLM(eff, np.ones(len(eff)), var_weights=weights)
res_glm = mod_glm.fit(scale=1.0)
print(res_glm.summary().tables[1])
# check results
res_glm.scale, res_glm.conf_int() - res_frame.loc[
"fixed effect", ["ci_low", "ci_upp"]
].values
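# Added sketch: the same fixed-effect estimate via WLS; cov_type="fixed scale"
# (assumed to be available in this statsmodels version) keeps the scale fixed
# at 1, matching the GLM fit with scale=1.0 above.
res_wls = WLS(eff, np.ones(len(eff)), weights=weights).fit(cov_type="fixed scale")
print(res_wls.summary().tables[1])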
"""
Explanation: We need to fix scale=1 in order to replicate standard errors for the usual meta-analysis.
End of explanation
"""
res_glm = mod_glm.fit(scale="x2")
print(res_glm.summary().tables[1])
# check results
res_glm.scale, res_glm.conf_int() - res_frame.loc[
"fixed effect", ["ci_low", "ci_upp"]
].values
"""
Explanation: Using HKSJ variance adjustment in meta-analysis is equivalent to estimating the scale using Pearson chi2, which is also the default for the Gaussian family.
End of explanation
"""
t, nt, c, nc = dta_c
counts = np.column_stack([t, nt - t, c, nc - c])
ctables = counts.T.reshape(2, 2, -1)
ctables[:, :, 0]
counts[0]
dta_c.T[0]
import statsmodels.stats.api as smstats
st = smstats.StratifiedTable(ctables.astype(np.float64))
"""
Explanation: Mantel-Haenszel odds-ratio using contingency tables
The fixed effect for the log-odds-ratio using the Mantel-Haenszel method can be computed directly using StratifiedTable.
We need to create a 2 x 2 x k contingency table to be used with StratifiedTable.
End of explanation
"""
st.logodds_pooled, st.logodds_pooled - 0.4428186730553189 # R meta
st.logodds_pooled_se, st.logodds_pooled_se - 0.08928560091027186 # R meta
st.logodds_pooled_confint()
print(st.test_equal_odds())
print(st.test_null_odds())
"""
Explanation: compare pooled log-odds-ratio and standard error to R meta package
End of explanation
"""
ctables.sum(1)
nt, nc
"""
Explanation: check conversion to stratified contingency table
Row sums of each table are the sample sizes for treatment and control experiments
End of explanation
"""
print(st.summary())
"""
Explanation: Results from R meta package
```
res_mb_hk = metabin(e2i, nei, c2i, nci, data=dat2, sm="OR", Q.Cochrane=FALSE, method="MH", method.tau="DL", hakn=FALSE, backtransf=FALSE)
res_mb_hk
logOR 95%-CI %W(fixed) %W(random)
1 2.7081 [ 0.5265; 4.8896] 0.3 0.7
2 1.2567 [ 0.2658; 2.2476] 2.1 3.2
3 0.3749 [-0.3911; 1.1410] 5.4 5.4
4 1.6582 [ 0.3245; 2.9920] 0.9 1.8
5 0.7850 [-0.0673; 1.6372] 3.5 4.4
6 0.3617 [-0.1528; 0.8762] 12.1 11.8
7 0.5754 [-0.3861; 1.5368] 3.0 3.4
8 0.2505 [-0.4881; 0.9892] 6.1 5.8
9 0.6506 [-0.3877; 1.6889] 2.5 3.0
10 0.0918 [-0.8067; 0.9903] 4.5 3.9
11 0.2739 [-0.1047; 0.6525] 23.1 21.4
12 0.4858 [ 0.0804; 0.8911] 18.6 18.8
13 0.1823 [-0.6830; 1.0476] 4.6 4.2
14 0.9808 [-0.4178; 2.3795] 1.3 1.6
15 1.3122 [-1.0055; 3.6299] 0.4 0.6
16 -0.2595 [-1.4450; 0.9260] 3.1 2.3
17 0.1384 [-0.5076; 0.7844] 8.5 7.6
Number of studies combined: k = 17
logOR 95%-CI z p-value
Fixed effect model 0.4428 [0.2678; 0.6178] 4.96 < 0.0001
Random effects model 0.4295 [0.2504; 0.6086] 4.70 < 0.0001
Quantifying heterogeneity:
tau^2 = 0.0017 [0.0000; 0.4589]; tau = 0.0410 [0.0000; 0.6774];
I^2 = 1.1% [0.0%; 51.6%]; H = 1.01 [1.00; 1.44]
Test of heterogeneity:
Q d.f. p-value
16.18 16 0.4404
Details on meta-analytical method:
- Mantel-Haenszel method
- DerSimonian-Laird estimator for tau^2
- Jackson method for confidence interval of tau^2 and tau
res_mb_hk$TE.fixed
[1] 0.4428186730553189
res_mb_hk$seTE.fixed
[1] 0.08928560091027186
c(res_mb_hk$lower.fixed, res_mb_hk$upper.fixed)
[1] 0.2678221109331694 0.6178152351774684
```
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.17/_downloads/2aba6a5c9f79fe16cdce1a232bc5e327/plot_brainstorm_phantom_elekta.ipynb | bsd-3-clause | # sphinx_gallery_thumbnail_number = 9
# Authors: Eric Larson <[email protected]>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import find_events, fit_dipole
from mne.datasets.brainstorm import bst_phantom_elekta
from mne.io import read_raw_fif
from mayavi import mlab
print(__doc__)
"""
Explanation: Brainstorm Elekta phantom dataset tutorial
Here we compute the evoked from raw for the Brainstorm Elekta phantom
tutorial dataset. For comparison, see [1]_ and:
http://neuroimage.usc.edu/brainstorm/Tutorials/PhantomElekta
References
.. [1] Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM.
Brainstorm: A User-Friendly Application for MEG/EEG Analysis.
Computational Intelligence and Neuroscience, vol. 2011, Article ID
879716, 13 pages, 2011. doi:10.1155/2011/879716
End of explanation
"""
data_path = bst_phantom_elekta.data_path(verbose=True)
raw_fname = op.join(data_path, 'kojak_all_200nAm_pp_no_chpi_no_ms_raw.fif')
raw = read_raw_fif(raw_fname)
"""
Explanation: The data were collected with an Elekta Neuromag VectorView system at 1000 Hz
and low-pass filtered at 330 Hz. Here the medium-amplitude (200 nAm) data
are read to construct instances of :class:mne.io.Raw.
End of explanation
"""
events = find_events(raw, 'STI201')
raw.plot(events=events)
raw.info['bads'] = ['MEG2421']
"""
Explanation: The data channel array consisted of 204 MEG planar gradiometers,
102 axial magnetometers, and 3 stimulus channels. Let's get the events
for the phantom, where each dipole (1-32) gets its own event:
End of explanation
"""
raw.plot_psd(tmax=60., average=False)
"""
Explanation: The data have strong line frequency (60 Hz and harmonics) and cHPI coil
noise (five peaks around 300 Hz). Here we plot only out to 60 seconds
to save memory:
End of explanation
"""
raw.fix_mag_coil_types()
raw = mne.preprocessing.maxwell_filter(raw, origin=(0., 0., 0.))
"""
Explanation: Let's use Maxwell filtering to clean the data a bit.
Ideally we would have the fine calibration and cross-talk information
for the site of interest, but we don't, so we just do:
End of explanation
"""
raw.filter(None, 40., fir_design='firwin')
raw.plot(events=events)
"""
Explanation: We know our phantom produces sinusoidal bursts below 25 Hz, so let's filter.
End of explanation
"""
tmin, tmax = -0.1, 0.1
event_id = list(range(1, 33))
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, baseline=(None, -0.01),
decim=3, preload=True)
epochs['1'].average().plot(time_unit='s')
"""
Explanation: Now we epoch our data, average it, and look at the first dipole response.
The first peak appears around 3 ms. Because we low-passed at 40 Hz,
we can also decimate our data to save memory.
End of explanation
"""
sphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=0.08)
mne.viz.plot_alignment(raw.info, subject='sample', show_axes=True,
bem=sphere, dig=True, surfaces='inner_skull')
"""
Explanation: Let's use a sphere head geometry model and let's see the coordinate
alignment and the sphere location. The phantom is properly modeled by
a single-shell sphere with origin (0., 0., 0.).
End of explanation
"""
# here we can get away with using method='oas' for speed (faster than "shrunk")
# but in general "shrunk" is usually better
cov = mne.compute_covariance(
epochs, tmax=0, method='oas', rank=None)
mne.viz.plot_evoked_white(epochs['1'].average(), cov)
data = []
t_peak = 0.036 # true for Elekta phantom
for ii in event_id:
evoked = epochs[str(ii)].average().crop(t_peak, t_peak)
data.append(evoked.data[:, 0])
evoked = mne.EvokedArray(np.array(data).T, evoked.info, tmin=0.)
del epochs, raw
dip, residual = fit_dipole(evoked, cov, sphere, n_jobs=1)
"""
Explanation: Let's do some dipole fits. We first compute the noise covariance,
then do the fits for each event_id taking the time instant that maximizes
the global field power.
End of explanation
"""
fig, axes = plt.subplots(2, 1)
evoked.plot(axes=axes)
for ax in axes:
ax.texts = []
for line in ax.lines:
line.set_color('#98df81')
residual.plot(axes=axes)
"""
Explanation: Do a quick visualization of how much variance we explained, putting the
data and residuals on the same scale (here the "time points" are the
32 dipole peak values that we fit):
End of explanation
"""
actual_pos, actual_ori = mne.dipole.get_phantom_dipoles()
actual_amp = 100. # nAm
fig, (ax1, ax2, ax3) = plt.subplots(nrows=3, ncols=1, figsize=(6, 7))
diffs = 1000 * np.sqrt(np.sum((dip.pos - actual_pos) ** 2, axis=-1))
print('mean(position error) = %0.1f mm' % (np.mean(diffs),))
ax1.bar(event_id, diffs)
ax1.set_xlabel('Dipole index')
ax1.set_ylabel('Loc. error (mm)')
angles = np.rad2deg(np.arccos(np.abs(np.sum(dip.ori * actual_ori, axis=1))))
print(u'mean(angle error) = %0.1f°' % (np.mean(angles),))
ax2.bar(event_id, angles)
ax2.set_xlabel('Dipole index')
ax2.set_ylabel(u'Angle error (°)')
amps = actual_amp - dip.amplitude / 1e-9
print('mean(abs amplitude error) = %0.1f nAm' % (np.mean(np.abs(amps)),))
ax3.bar(event_id, amps)
ax3.set_xlabel('Dipole index')
ax3.set_ylabel('Amplitude error (nAm)')
fig.tight_layout()
plt.show()
"""
Explanation: Now we can compare to the actual locations, taking the difference in mm:
End of explanation
"""
def plot_pos_ori(pos, ori, color=(0., 0., 0.), opacity=1.):
x, y, z = pos.T
u, v, w = ori.T
mlab.points3d(x, y, z, scale_factor=0.005, opacity=opacity, color=color)
q = mlab.quiver3d(x, y, z, u, v, w,
scale_factor=0.03, opacity=opacity,
color=color, mode='arrow')
q.glyph.glyph_source.glyph_source.shaft_radius = 0.02
q.glyph.glyph_source.glyph_source.tip_length = 0.1
q.glyph.glyph_source.glyph_source.tip_radius = 0.05
mne.viz.plot_alignment(evoked.info, bem=sphere, surfaces='inner_skull',
coord_frame='head', meg='helmet', show_axes=True)
# Plot the position and the orientation of the actual dipole
plot_pos_ori(actual_pos, actual_ori, color=(0., 0., 0.), opacity=0.5)
# Plot the position and the orientation of the estimated dipole
plot_pos_ori(dip.pos, dip.ori, color=(0.2, 1., 0.5))
mlab.view(70, 80, distance=0.5)
"""
Explanation: Let's plot the positions and the orientations of the actual and the estimated
dipoles
End of explanation
"""
|
sprax/python | ds/umich-ds-wk1.ipynb | lgpl-3.0 | def add_numbers(x, y):
return x + y
add_numbers(1, 2)
"""
Explanation: You are currently looking at version 1.1 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.
The Python Programming Language: Functions
<br>
add_numbers is a function that takes two numbers and adds them together.
End of explanation
"""
def add_numbers(x,y,z=None):
if (z==None):
return x+y
else:
return x+y+z
print(add_numbers(1, 2))
print(add_numbers(1, 2, 3))
"""
Explanation: <br>
add_numbers updated to take an optional 3rd parameter. Using print allows printing of multiple expressions within a single cell.
End of explanation
"""
def add_numbers(x, y, z=None, flag=False):
if (flag):
print('Flag is true!')
if (z==None):
return x + y
else:
return x + y + z
print(add_numbers(1, 2, flag=True))
"""
Explanation: <br>
add_numbers updated to take an optional flag parameter.
End of explanation
"""
def add_numbers(x,y):
return x+y
a = add_numbers
a(1,2)
"""
Explanation: <br>
Assign function add_numbers to variable a.
End of explanation
"""
type('This is a string')
type(None)
type(1)
type(1.0)
type(add_numbers)
"""
Explanation: <br>
The Python Programming Language: Types and Sequences
<br>
Use type to return the object's type.
End of explanation
"""
x = (1, 'a', 2, 'b')
type(x)
"""
Explanation: <br>
Tuples are an immutable data structure (cannot be altered).
End of explanation
"""
x = [1, 'a', 2, 'b']
type(x)
"""
Explanation: <br>
Lists are a mutable data structure.
End of explanation
"""
x.append(3.3)
print(x)
"""
Explanation: <br>
Use append to append an object to a list.
End of explanation
"""
for item in x:
print(item)
"""
Explanation: <br>
This is an example of how to loop through each item in the list.
End of explanation
"""
i=0
while( i != len(x) ):
print(x[i])
i = i + 1
"""
Explanation: <br>
Or using the indexing operator:
End of explanation
"""
[1,2] + [3,4]
"""
Explanation: <br>
Use + to concatenate lists.
End of explanation
"""
[1]*3
"""
Explanation: <br>
Use * to repeat lists.
End of explanation
"""
1 in [1, 2, 3]
"""
Explanation: <br>
Use the in operator to check if something is inside a list.
End of explanation
"""
x = 'This is a string'
print(x[0]) #first character
print(x[0:1]) #first character, but we have explicitly set the end character
print(x[0:2]) #first two characters
"""
Explanation: <br>
Now let's look at strings. Use bracket notation to slice a string.
End of explanation
"""
x[-1]
"""
Explanation: <br>
This will return the last element of the string.
End of explanation
"""
x[-4:-2]
"""
Explanation: <br>
This will return the slice starting from the 4th element from the end and stopping before the 2nd element from the end.
End of explanation
"""
x[:3]
"""
Explanation: <br>
This is a slice from the beginning of the string and stopping before the 3rd element.
End of explanation
"""
x[3:]
firstname = 'Christopher'
lastname = 'Brooks'
print(firstname + ' ' + lastname)
print(firstname*3)
print('Chris' in firstname)
"""
Explanation: <br>
And this is a slice starting from the 4th element of the string and going all the way to the end.
End of explanation
"""
firstname = 'Christopher Arthur Hansen Brooks'.split(' ')[0] # [0] selects the first element of the list
lastname = 'Christopher Arthur Hansen Brooks'.split(' ')[-1] # [-1] selects the last element of the list
print(firstname)
print(lastname)
"""
Explanation: <br>
split returns a list of all the words in a string, or a list split on a specific character.
End of explanation
"""
'Chris' + 2
'Chris' + str(2)
"""
Explanation: <br>
Make sure you convert objects to strings before concatenating.
End of explanation
"""
x = {'Christopher Brooks': '[email protected]', 'Bill Gates': '[email protected]'}
x['Christopher Brooks'] # Retrieve a value by using the indexing operator
x['Kevyn Collins-Thompson'] = None
x['Kevyn Collins-Thompson']
"""
Explanation: <br>
Dictionaries associate keys with values.
End of explanation
"""
for name in x:
print(x[name])
"""
Explanation: <br>
Iterate over all of the keys:
End of explanation
"""
for email in x.values():
print(email)
"""
Explanation: <br>
Iterate over all of the values:
End of explanation
"""
for name, email in x.items():
print(name)
print(email)
"""
Explanation: <br>
Iterate over all of the items in the list:
End of explanation
"""
x = ('Christopher', 'Brooks', '[email protected]')
fname, lname, email = x
fname
lname
"""
Explanation: <br>
You can unpack a sequence into different variables:
End of explanation
"""
x = ('Christopher', 'Brooks', '[email protected]', 'Ann Arbor')
fname, lname, email = x
"""
Explanation: <br>
Make sure the number of values you are unpacking matches the number of variables being assigned.
End of explanation
"""
print('Chris' + 2)
print('Chris' + str(2))
"""
Explanation: <br>
The Python Programming Language: More on Strings
End of explanation
"""
sales_record = {
'price': 3.24,
'num_items': 4,
'person': 'Chris'}
sales_statement = '{} bought {} item(s) at a price of {} each for a total of {}'
print(sales_statement.format(sales_record['person'],
sales_record['num_items'],
sales_record['price'],
sales_record['num_items']*sales_record['price']))
"""
Explanation: <br>
Python has a built-in method for convenient string formatting.
End of explanation
"""
import csv
%precision 2
with open('mpg.csv') as csvfile:
mpg = list(csv.DictReader(csvfile))
mpg[:3] # The first three dictionaries in our list.
"""
Explanation: <br>
Reading and Writing CSV files
<br>
Let's import our datafile mpg.csv, which contains fuel economy data for 234 cars.
mpg : miles per gallon
class : car classification
cty : city mpg
cyl : # of cylinders
displ : engine displacement in liters
drv : f = front-wheel drive, r = rear wheel drive, 4 = 4wd
fl : fuel (e = ethanol E85, d = diesel, r = regular, p = premium, c = CNG)
hwy : highway mpg
manufacturer : automobile manufacturer
model : model of car
trans : type of transmission
year : model year
End of explanation
"""
len(mpg)
"""
Explanation: <br>
csv.DictReader has read in each row of our csv file as a dictionary. len shows that our list contains 234 dictionaries.
End of explanation
"""
mpg[0].keys()
"""
Explanation: <br>
keys gives us the column names of our csv.
End of explanation
"""
sum(float(d['cty']) for d in mpg) / len(mpg)
"""
Explanation: <br>
This is how to find the average cty fuel economy across all cars. All values in the dictionaries are strings, so we need to convert to float.
End of explanation
"""
sum(float(d['hwy']) for d in mpg) / len(mpg)
"""
Explanation: <br>
Similarly this is how to find the average hwy fuel economy across all cars.
End of explanation
"""
cylinders = set(d['cyl'] for d in mpg)
cylinders
"""
Explanation: <br>
Use set to return the unique values for the number of cylinders the cars in our dataset have.
End of explanation
"""
CtyMpgByCyl = []
for c in cylinders: # iterate over all the cylinder levels
summpg = 0
cyltypecount = 0
for d in mpg: # iterate over all dictionaries
if d['cyl'] == c: # if the cylinder level type matches,
summpg += float(d['cty']) # add the cty mpg
cyltypecount += 1 # increment the count
CtyMpgByCyl.append((c, summpg / cyltypecount)) # append the tuple ('cylinder', 'avg mpg')
CtyMpgByCyl.sort(key=lambda x: x[0])
CtyMpgByCyl
"""
Explanation: <br>
Here's a more complex example where we are grouping the cars by number of cylinders and finding the average cty mpg for each group.
End of explanation
"""
vehicleclass = set(d['class'] for d in mpg) # what are the class types
vehicleclass
"""
Explanation: <br>
Use set to return the unique values for the class types in our dataset.
End of explanation
"""
HwyMpgByClass = []
for t in vehicleclass: # iterate over all the vehicle classes
summpg = 0
vclasscount = 0
for d in mpg: # iterate over all dictionaries
        if d['class'] == t: # if the vehicle class matches,
summpg += float(d['hwy']) # add the hwy mpg
vclasscount += 1 # increment the count
HwyMpgByClass.append((t, summpg / vclasscount)) # append the tuple ('class', 'avg mpg')
HwyMpgByClass.sort(key=lambda x: x[1])
HwyMpgByClass
"""
Explanation: <br>
And here's an example of how to find the average hwy mpg for each class of vehicle in our dataset.
End of explanation
"""
import datetime as dt
import time as tm
"""
Explanation: <br>
The Python Programming Language: Dates and Times
End of explanation
"""
tm.time()
"""
Explanation: <br>
time returns the current time in seconds since the Epoch. (January 1st, 1970)
End of explanation
"""
dtnow = dt.datetime.fromtimestamp(tm.time())
dtnow
"""
Explanation: <br>
Convert the timestamp to datetime.
End of explanation
"""
dtnow.year, dtnow.month, dtnow.day, dtnow.hour, dtnow.minute, dtnow.second # get year, month, day, etc.from a datetime
"""
Explanation: <br>
Handy datetime attributes:
End of explanation
"""
delta = dt.timedelta(days = 100) # create a timedelta of 100 days
delta
"""
Explanation: <br>
timedelta is a duration expressing the difference between two dates.
End of explanation
"""
today = dt.date.today()
today - delta # the date 100 days ago
today > today-delta # compare dates
"""
Explanation: <br>
date.today returns the current local date.
End of explanation
"""
class Person:
department = 'School of Information' #a class variable
def set_name(self, new_name): #a method
self.name = new_name
def set_location(self, new_location):
self.location = new_location
person = Person()
person.set_name('Christopher Brooks')
person.set_location('Ann Arbor, MI, USA')
print('{} lives in {} and works in the department {}'.format(person.name, person.location, person.department))
"""
Explanation: <br>
The Python Programming Language: Objects and map()
<br>
An example of a class in python:
End of explanation
"""
store1 = [10.00, 11.00, 12.34, 2.34]
store2 = [9.00, 11.10, 12.34, 2.01]
cheapest = map(min, store1, store2)
cheapest
"""
Explanation: <br>
Here's an example of mapping the min function between two lists.
End of explanation
"""
for item in cheapest:
print(item)
"""
Explanation: <br>
Now let's iterate through the map object to see the values.
End of explanation
"""
my_function = lambda a, b, c : a + b
my_function(1, 2, 3)
"""
Explanation: <br>
The Python Programming Language: Lambda and List Comprehensions
<br>
Here's an example of lambda that takes in three parameters and adds the first two.
End of explanation
"""
my_list = []
for number in range(0, 1000):
if number % 2 == 0:
my_list.append(number)
my_list
"""
Explanation: <br>
Let's iterate from 0 to 999 and return the even numbers.
End of explanation
"""
my_list = [number for number in range(0,1000) if number % 2 == 0]
my_list
"""
Explanation: <br>
Now the same thing but with list comprehension.
End of explanation
"""
import numpy as np
"""
Explanation: <br>
The Python Programming Language: Numerical Python (NumPy)
End of explanation
"""
mylist = [1, 2, 3]
x = np.array(mylist)
x
"""
Explanation: <br>
Creating Arrays
Create a list and convert it to a numpy array
End of explanation
"""
y = np.array([4, 5, 6])
y
"""
Explanation: <br>
Or just pass in a list directly
End of explanation
"""
m = np.array([[7, 8, 9], [10, 11, 12]])
m
"""
Explanation: <br>
Pass in a list of lists to create a multidimensional array.
End of explanation
"""
m.shape
"""
Explanation: <br>
Use the shape attribute to find the dimensions of the array (rows, columns).
End of explanation
"""
n = np.arange(0, 30, 2) # start at 0 count up by 2, stop before 30
n
"""
Explanation: <br>
arange returns evenly spaced values within a given interval.
End of explanation
"""
n = n.reshape(3, 5) # reshape array to be 3x5
n
"""
Explanation: <br>
reshape returns an array with the same data with a new shape.
End of explanation
"""
o = np.linspace(0, 4, 9) # return 9 evenly spaced values from 0 to 4
o
"""
Explanation: <br>
linspace returns evenly spaced numbers over a specified interval.
End of explanation
"""
o.resize(3, 3)
o
"""
Explanation: <br>
resize changes the shape and size of array in-place.
End of explanation
"""
np.ones((3, 2))
"""
Explanation: <br>
ones returns a new array of given shape and type, filled with ones.
End of explanation
"""
np.zeros((2, 3))
"""
Explanation: <br>
zeros returns a new array of given shape and type, filled with zeros.
End of explanation
"""
np.eye(3)
"""
Explanation: <br>
eye returns a 2-D array with ones on the diagonal and zeros elsewhere.
End of explanation
"""
np.diag(y)
"""
Explanation: <br>
diag extracts a diagonal or constructs a diagonal array.
End of explanation
"""
np.array([1, 2, 3] * 3)
"""
Explanation: <br>
Create an array using a repeating list (or see np.tile).
End of explanation
"""
np.repeat([1, 2, 3], 3)
"""
Explanation: <br>
Repeat elements of an array using repeat.
End of explanation
"""
p = np.ones([2, 3], int)
p
"""
Explanation: <br>
Combining Arrays
End of explanation
"""
np.vstack([p, 2*p])
"""
Explanation: <br>
Use vstack to stack arrays in sequence vertically (row wise).
End of explanation
"""
np.hstack([p, 2*p])
"""
Explanation: <br>
Use hstack to stack arrays in sequence horizontally (column wise).
End of explanation
"""
print(x + y) # elementwise addition [1 2 3] + [4 5 6] = [5 7 9]
print(x - y) # elementwise subtraction [1 2 3] - [4 5 6] = [-3 -3 -3]
print(x * y) # elementwise multiplication [1 2 3] * [4 5 6] = [4 10 18]
print(x / y) # elementwise division [1 2 3] / [4 5 6] = [0.25 0.4 0.5]
print(x**2) # elementwise power [1 2 3] ^2 = [1 4 9]
"""
Explanation: <br>
Operations
Use +, -, *, / and ** to perform element wise addition, subtraction, multiplication, division and power.
End of explanation
"""
x.dot(y) # dot product 1*4 + 2*5 + 3*6
z = np.array([y, y**2])
print(len(z)) # number of rows of array
"""
Explanation: <br>
Dot Product:
$\begin{bmatrix}x_1 \\ x_2 \\ x_3\end{bmatrix} \cdot \begin{bmatrix}y_1 \\ y_2 \\ y_3\end{bmatrix} = x_1 y_1 + x_2 y_2 + x_3 y_3$
End of explanation
"""
z = np.array([y, y**2])
z
"""
Explanation: <br>
Let's look at transposing arrays. Transposing permutes the dimensions of the array.
End of explanation
"""
z.shape
"""
Explanation: <br>
The shape of array z is (2,3) before transposing.
End of explanation
"""
z.T
"""
Explanation: <br>
Use .T to get the transpose.
End of explanation
"""
z.T.shape
"""
Explanation: <br>
The number of rows has swapped with the number of columns.
End of explanation
"""
z.dtype
"""
Explanation: <br>
Use .dtype to see the data type of the elements in the array.
End of explanation
"""
z = z.astype('f')
z.dtype
"""
Explanation: <br>
Use .astype to cast to a specific type.
End of explanation
"""
a = np.array([-4, -2, 1, 3, 5])
a.sum()
a.max()
a.min()
a.mean()
a.std()
"""
Explanation: <br>
Math Functions
NumPy has many built-in math functions that can be performed on arrays.
End of explanation
"""
a.argmax()
a.argmin()
"""
Explanation: <br>
argmax and argmin return the index of the maximum and minimum values in the array.
End of explanation
"""
s = np.arange(13)**2
s
"""
Explanation: <br>
Indexing / Slicing
End of explanation
"""
s[0], s[4], s[-1]
"""
Explanation: <br>
Use bracket notation to get the value at a specific index. Remember that indexing starts at 0.
End of explanation
"""
s[1:5]
"""
Explanation: <br>
Use : to indicate a range. array[start:stop]
Leaving start or stop empty will default to the beginning/end of the array.
End of explanation
"""
s[-4:]
"""
Explanation: <br>
Use negatives to count from the back.
End of explanation
"""
s[-5::-2]
"""
Explanation: <br>
A second : can be used to indicate step-size. array[start:stop:stepsize]
Here we are starting at the 5th element from the end and counting backwards by 2 until the beginning of the array is reached.
End of explanation
"""
r = np.arange(36)
r.resize((6, 6))
r
"""
Explanation: <br>
Let's look at a multidimensional array.
End of explanation
"""
r[2, 2]
"""
Explanation: <br>
Use bracket notation to slice: array[row, column]
End of explanation
"""
r[3, 3:6]
"""
Explanation: <br>
And use : to select a range of rows or columns
End of explanation
"""
r[:2, :-1]
"""
Explanation: <br>
Here we are selecting all the rows up to (and not including) row 2, and all the columns up to (and not including) the last column.
End of explanation
"""
r[-1, ::2]
"""
Explanation: <br>
This is a slice of the last row, and only every other element.
End of explanation
"""
r[r > 30]
"""
Explanation: <br>
We can also perform conditional indexing. Here we are selecting values from the array that are greater than 30. (Also see np.where)
End of explanation
"""
r[r > 30] = 30
r
"""
Explanation: <br>
Here we are assigning all values in the array that are greater than 30 to the value of 30.
End of explanation
"""
r2 = r[:3,:3]
r2
"""
Explanation: <br>
Copying Data
Be careful with copying and modifying arrays in NumPy!
r2 is a slice of r
End of explanation
"""
r2[:] = 0
r2
"""
Explanation: <br>
Set this slice's values to zero ([:] selects the entire array)
End of explanation
"""
r
"""
Explanation: <br>
r has also been changed!
End of explanation
"""
r_copy = r.copy()
r_copy
"""
Explanation: <br>
To avoid this, use r.copy() to create a copy that will not affect the original array.
End of explanation
"""
r_copy[:] = 10
print(r_copy, '\n')
print(r)
"""
Explanation: <br>
Now when r_copy is modified, r will not be changed.
End of explanation
"""
test = np.random.randint(0, 10, (4,3))
test
"""
Explanation: <br>
Iterating Over Arrays
Let's create a new 4 by 3 array of random numbers 0-9.
End of explanation
"""
for row in test:
print(row)
"""
Explanation: <br>
Iterate by row:
End of explanation
"""
for i in range(len(test)):
print(test[i])
"""
Explanation: <br>
Iterate by index:
End of explanation
"""
for i, row in enumerate(test):
print('row', i, 'is', row)
"""
Explanation: <br>
Iterate by row and index:
End of explanation
"""
test2 = test**2
test2
for i, j in zip(test, test2):
print(i,'+',j,'=',i+j)
"""
Explanation: <br>
Use zip to iterate over multiple iterables.
End of explanation
"""
|
opengeostat/pygslib | pygslib/Ipython_templates/deprecated/probplt_raw.ipynb | mit | #general imports
import matplotlib.pyplot as plt
import pygslib
import numpy as np
#make the plots inline
%matplotlib inline
"""
Explanation: PyGSLIB
Probplot
End of explanation
"""
#get the data in gslib format into a pandas Dataframe
mydata= pygslib.gslib.read_gslib_file('../datasets/cluster.dat')
true= pygslib.gslib.read_gslib_file('../datasets/true.dat')
# This is a 2D file, in this GSLIB version we require 3D data and drillhole name or domain code
# so, we are adding constant elevation = 0 and a dummy BHID = 1
mydata['Zlocation']=0
mydata['bhid']=1
true['Declustering Weight']=1
# printing to verify results
print(' \n **** 5 first rows in my datafile \n\n ', mydata.head(n=5))
print(' \n **** 5 first rows in the true datafile \n\n ', true.head(n=5))
#view data in a 2D projection
plt.scatter(mydata['Xlocation'],mydata['Ylocation'], c=mydata['Primary'])
plt.colorbar()
plt.grid(True)
plt.show()
"""
Explanation: Getting the data ready for work
If the data is in GSLIB format you can use the function pygslib.gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame.
End of explanation
"""
print(pygslib.gslib.__plot.probplt.__doc__)
mydata['Declustering Weight'].sum()
parameters_probplt = {
'iwt' : 0, #int, 1 use declustering weight
'va' : mydata['Primary'], # array('d') with bounds (nd)
        'wt' : mydata['Declustering Weight']}  # array('d') with bounds (nd), weight variable (obtained with declustering?)
parameters_probpltl = {
'iwt' : 1, #int, 1 use declustering weight
'va' : mydata['Primary'], # array('d') with bounds (nd)
        'wt' : mydata['Declustering Weight']}  # array('d') with bounds (nd), weight variable (obtained with declustering?)
parameters_probpltt = {
'iwt' : 0, #int, 1 use declustering weight
'va' : true['Primary'], # array('d') with bounds (nd)
        'wt' : true['Declustering Weight']}  # array('d') with bounds (nd), weight variable (obtained with declustering?)
binval,cl,xpt025,xlqt,xmed,xuqt,xpt975,xmin,xmax, \
xcvr,xmen,xvar,error = pygslib.gslib.__plot.probplt(**parameters_probplt)
binvall,cll,xpt025l,xlqtl,xmedl,xuqtl,xpt975l,xminl, \
xmaxl,xcvrl,xmenl,xvarl,errorl = pygslib.gslib.__plot.probplt(**parameters_probpltl)
binvalt,clt,xpt025t,xlqtt,xmedt,xuqtt,xpt975t,xmint, \
xmaxt,xcvrt,xment,xvart,errort = pygslib.gslib.__plot.probplt(**parameters_probpltt)
print(cl)
print(binvall)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
plt.plot (clt, binvalt, label = 'true')
plt.plot (cl, binval, label = 'raw')
plt.plot (cll, binvall, label = 'declustered')
plt.grid(True)
plt.legend()
plt.show()
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
plt.plot (clt, binvalt, label = 'true')
plt.plot (cl, binval, label = 'raw')
plt.plot (cll, binvall, label = 'declustered')
ax.set_xscale('log')
plt.grid(True)
plt.legend()
plt.show()
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
plt.plot (clt, binvalt, label = 'true')
plt.plot (cl, binval, label = 'raw')
plt.plot (cll, binvall, label = 'declustered')
ax.set_yscale('log')
plt.grid(True)
plt.legend()
plt.show()
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
plt.plot (clt, binvalt, label = 'true')
plt.plot (cl, binval, label = 'raw')
plt.plot (cll, binvall, label = 'declustered')
ax.set_xscale('log')
ax.set_yscale('log')
plt.grid(True)
plt.legend()
plt.show()
print('data min, max: ', xmin, xmax)
print('data quantiles 2.5%, 25%, 50%, 75%, 97.5%: ', xpt025, xlqt, xmed, xuqt, xpt975)
print('data cv, mean, variance: ', xcvr, xmen, xvar)
print('error == 0 (all ok)?', error == 0)
"""
Explanation: Testing probplot
This does not plot the results itself, but it is handy for getting declustered bins for plotting.
End of explanation
"""
|
tschinz/iPython_Workspace | 01_Mine/MachineLearning/NeuroEvolution-Flappy-Bird-master/Jupyter Notebook/Flappy.ipynb | gpl-2.0 | import pygame
from pygame.locals import * # noqa
import sys
import random
class FlappyBird_Human:
def __init__(self):
self.screen = pygame.display.set_mode((400, 700))
self.bird = pygame.Rect(65, 50, 50, 50)
self.background = pygame.image.load("assets/background.png").convert()
self.birdSprites = [pygame.image.load("assets/1.png").convert_alpha(),
pygame.image.load("assets/2.png").convert_alpha(),
pygame.image.load("assets/dead.png")]
self.wallUp = pygame.image.load("assets/bottom.png").convert_alpha()
self.wallDown = pygame.image.load("assets/top.png").convert_alpha()
self.gap = 145
self.wallx = 400
self.birdY = 350
self.jump = 0
self.jumpSpeed = 15
self.gravity = 10
self.dead = False
self.sprite = 0
self.counter = 0
self.offset = random.randint(-200, 200)
def updateWalls(self):
self.wallx -= 4
if self.wallx < -80:
self.wallx = 400
self.counter += 1
self.offset = random.randint(-200, 200)
def birdUpdate(self):
if self.jump:
self.jumpSpeed -= 1
self.birdY -= self.jumpSpeed
self.jump -= 1
else:
self.birdY += self.gravity
self.gravity += 0.2
self.bird[1] = self.birdY
upRect = pygame.Rect(self.wallx,
360 + self.gap - self.offset + 10,
self.wallUp.get_width() - 10,
self.wallUp.get_height())
downRect = pygame.Rect(self.wallx,
0 - self.gap - self.offset - 10,
self.wallDown.get_width() - 10,
self.wallDown.get_height())
if upRect.colliderect(self.bird):
self.dead = True
if downRect.colliderect(self.bird):
self.dead = True
if not 0 < self.bird[1] < 720:
self.bird[1] = 50
self.birdY = 50
self.dead = False
self.counter = 0
self.wallx = 400
self.offset = random.randint(-110, 110)
self.gravity = 10
def run(self):
clock = pygame.time.Clock()
pygame.font.init()
font = pygame.font.SysFont("Arial", 50)
while True:
clock.tick(60)
for event in pygame.event.get():
if event.type == pygame.QUIT:
sys.exit()
if (event.type == pygame.KEYDOWN or event.type == pygame.MOUSEBUTTONDOWN) and not self.dead:
self.jump = 17
self.gravity = 10
self.jumpSpeed = 15
self.screen.fill((255, 255, 255))
self.screen.blit(self.background, (0, 0))
self.screen.blit(self.wallUp,
(self.wallx, 360 + self.gap - self.offset))
self.screen.blit(self.wallDown,
(self.wallx, 0 - self.gap - self.offset))
self.screen.blit(font.render(str(self.counter),
-1,
(255, 255, 255)),
(200, 50))
if self.dead:
self.sprite = 2
elif self.jump:
self.sprite = 1
self.screen.blit(self.birdSprites[self.sprite], (70, self.birdY))
if not self.dead:
self.sprite = 0
self.updateWalls()
self.birdUpdate()
pygame.display.update()
if __name__ == "__main__":
FlappyBird_Human().run()
"""
Explanation: Neuroevolution
Gonzalo Piérola
Iker García
Human Player
Run the following if you want to play the game and test it yourself.<br /> The objective is not to crash into the pipes.<br /> The bird will jump when you press any key. <br />
<br />
Credits: https://youtu.be/h2Uhla6nLDU
End of explanation
"""
import pygame
from pygame.locals import * # noqa
import sys
import random
class FlappyBird:
def __init__(self):
self.bird = pygame.Rect(65, 50, 50, 50)
self.distance = 0
self.gap = 145
self.wallx = 400
self.birdY = 350
self.jump = 0
self.jumpSpeed = 15
self.gravity = 10
self.dead = False
self.counter = 0
self.offset = random.randint(-200, 200)
def calculateInput(self):
dist_X_to_The_Wall = self.wallx+80
dist_Y_to_The_Wall_UP = self.birdY-(0 - self.gap - self.offset+500)
dist_Y_to_The_Wall_DOWN = self.birdY-(360 + self.gap - self.offset)
dist_Y_TOP = self.birdY
dist_Y_BOTTOM = 720-self.birdY
res = [dist_X_to_The_Wall,dist_Y_to_The_Wall_UP,dist_Y_to_The_Wall_DOWN,dist_Y_TOP,dist_Y_BOTTOM]
return res
def centerWalls(self):
return 0 - self.gap - self.offset+572.5
def downWall(self):
return 360 + self.gap - self.offset
def posBird(self):
return self.birdY
def isDead(self):
return self.dead
def TotalDistance(self):
return self.distance
def updateWalls(self):
self.wallx -= 4
if self.wallx < -80:
self.wallx = 400
self.counter += 1
self.offset = random.randint(-200, 200)
def birdUpdate(self):
self.distance = self.distance + 1
if self.jump:
self.jumpSpeed -= 1
self.birdY -= self.jumpSpeed
self.jump -= 1
else:
self.birdY += self.gravity
self.gravity += 0.2
self.bird[1] = self.birdY
upRect = pygame.Rect(self.wallx,
360 + self.gap - self.offset + 10,
88,
500)
downRect = pygame.Rect(self.wallx,
0 - self.gap - self.offset - 10,
88,
500)
if upRect.colliderect(self.bird):
self.dead = True
if downRect.colliderect(self.bird):
self.dead = True
if not 0 < self.bird[1] < 720:
self.dead=True
def tick(self,jump):
if (jump==True) and not self.dead:
self.jump = 17
self.gravity = 10
self.jumpSpeed = 15
self.updateWalls()
self.birdUpdate()
"""
Explanation: Game for training
The cell below contains a implementation of the game adapted to our needs. <br />
The cell does not output any graphics <br />
Every time the function "tick" is called, It executes a step of the game. This functions receives as a parameter if the bird will jump or no in that step. <br />
We added functions needed to train our models <br />
End of explanation
"""
import pygame
from pygame.locals import * # noqa
import sys
import random
class FlappyBird_GAME:
def __init__(self):
self.screen = pygame.display.set_mode((400, 700))
self.bird = pygame.Rect(65, 50, 50, 50)
self.background = pygame.image.load("assets/background.png").convert()
self.birdSprites = [pygame.image.load("assets/1.png").convert_alpha(),
pygame.image.load("assets/2.png").convert_alpha(),
pygame.image.load("assets/dead.png")]
self.wallUp = pygame.image.load("assets/bottom.png").convert_alpha()
self.wallDown = pygame.image.load("assets/top.png").convert_alpha()
self.distance = 0
self.gap = 145
self.wallx = 400
self.birdY = 350
self.jump = 0
self.jumpSpeed = 15
self.gravity = 10
self.dead = False
self.counter = 0
self.offset = random.randint(-200, 200)
self.sprite = 0
def calculateInput(self):
dist_X_to_The_Wall = self.wallx+80
dist_Y_to_The_Wall_UP = self.birdY-(0 - self.gap - self.offset+500)
dist_Y_to_The_Wall_DOWN = self.birdY-(360 + self.gap - self.offset)
dist_Y_TOP = self.birdY
dist_Y_BOTTOM = 720-self.birdY
res = [dist_X_to_The_Wall,dist_Y_to_The_Wall_UP,dist_Y_to_The_Wall_DOWN,dist_Y_TOP,dist_Y_BOTTOM]
return res
def isDead(self):
return self.dead
def TotalDistance(self):
return self.distance
def centerWalls(self):
return 0 - self.gap - self.offset+572.5
def downWall(self):
return 360 + self.gap - self.offset
def posBird(self):
return self.birdY
def updateWalls(self):
self.wallx -= 4
if self.wallx < -80:
self.wallx = 400
self.counter += 1
self.offset = random.randint(-200, 200)
def birdUpdate(self):
self.distance = self.distance + 1
if self.jump:
self.jumpSpeed -= 1
self.birdY -= self.jumpSpeed
self.jump -= 1
else:
self.birdY += self.gravity
self.gravity += 0.2
self.bird[1] = self.birdY
upRect = pygame.Rect(self.wallx,
360 + self.gap - self.offset + 10,
88,
500)
downRect = pygame.Rect(self.wallx,
0 - self.gap - self.offset - 10,
88,
500)
if upRect.colliderect(self.bird):
self.dead = True
if downRect.colliderect(self.bird):
self.dead = True
if not 0 < self.bird[1] < 720:
self.dead=True
def tick(self,jump):
if (jump==True) and not self.dead:
self.jump = 17
self.gravity = 10
self.jumpSpeed = 15
self.screen.fill((255, 255, 255))
self.screen.blit(self.background, (0, 0))
self.screen.blit(self.wallUp,
(self.wallx, 360 + self.gap - self.offset))
self.screen.blit(self.wallDown,
(self.wallx, 0 - self.gap - self.offset))
self.screen.blit(font.render(str(self.counter),
-1,
(255, 255, 255)),
(200, 50))
if self.dead:
self.sprite = 2
elif self.jump:
self.sprite = 1
self.screen.blit(self.birdSprites[self.sprite], (70, self.birdY))
if not self.dead:
self.sprite = 0
self.updateWalls()
self.birdUpdate()
pygame.display.update()
"""
Explanation: Game with graphics
Similar to the previous cell, but this cell will show the bird playing the game.
End of explanation
"""
import neat
number_generations = 1000
def eval_genomes(genomes,config):
for genome_id, genome in genomes:
genome.fitness = 99999
net = neat.nn.FeedForwardNetwork.create(genome,config)
bird = FlappyBird()
while (not bird.isDead() and not bird.TotalDistance()>110000):
nnInput = bird.calculateInput()
#print(nnInput)
#print(bird.fitness())
output = net.activate(nnInput)
if output[0] > output[1]:
bird.tick(True)
else:
bird.tick(False)
genome.fitness = bird.TotalDistance()
config = neat.Config(neat.DefaultGenome,neat.DefaultReproduction,neat.DefaultSpeciesSet,neat.DefaultStagnation,'FlapyBirdNEAT')
p = neat.Population(config)
p.add_reporter(neat.StdOutReporter(False))
winner = p.run(eval_genomes,number_generations)
"""
Explanation: Neuroevolution with NEAT
The cell below will evolve a network and its weights to learn how to play the game.
The fitness will be the distance traveled by the bird.
End of explanation
"""
import copy
import warnings
try:
    import graphviz
except ImportError:  # graphviz is optional; draw_net checks for this below
    graphviz = None
def draw_net(config, genome, view=False, filename=None, node_names=None, show_disabled=True, prune_unused=False,
node_colors=None, fmt='svg'):
""" Receives a genome and draws a neural network with arbitrary topology. """
# Attributes for network nodes.
if graphviz is None:
warnings.warn("This display is not available due to a missing optional dependency (graphviz)")
return
if node_names is None:
node_names = {}
assert type(node_names) is dict
if node_colors is None:
node_colors = {}
assert type(node_colors) is dict
node_attrs = {
'shape': 'circle',
'fontsize': '9',
'height': '0.2',
'width': '0.2'}
dot = graphviz.Digraph(format=fmt, node_attr=node_attrs)
inputs = set()
for k in config.genome_config.input_keys:
inputs.add(k)
name = node_names.get(k, str(k))
input_attrs = {'style': 'filled',
'shape': 'box'}
input_attrs['fillcolor'] = node_colors.get(k, 'lightgray')
dot.node(name, _attributes=input_attrs)
outputs = set()
for k in config.genome_config.output_keys:
outputs.add(k)
name = node_names.get(k, str(k))
node_attrs = {'style': 'filled'}
node_attrs['fillcolor'] = node_colors.get(k, 'lightblue')
dot.node(name, _attributes=node_attrs)
if prune_unused:
connections = set()
for cg in genome.connections.values():
if cg.enabled or show_disabled:
connections.add((cg.in_node_id, cg.out_node_id))
used_nodes = copy.copy(outputs)
pending = copy.copy(outputs)
while pending:
new_pending = set()
for a, b in connections:
if b in pending and a not in used_nodes:
new_pending.add(a)
used_nodes.add(a)
pending = new_pending
else:
used_nodes = set(genome.nodes.keys())
for n in used_nodes:
if n in inputs or n in outputs:
continue
attrs = {'style': 'filled',
'fillcolor': node_colors.get(n, 'white')}
dot.node(str(n), _attributes=attrs)
for cg in genome.connections.values():
if cg.enabled or show_disabled:
# if cg.input not in used_nodes or cg.output not in used_nodes:
# continue
input, output = cg.key
a = node_names.get(input, str(input))
b = node_names.get(output, str(output))
style = 'solid' if cg.enabled else 'dotted'
color = 'green' if cg.weight > 0 else 'red'
width = str(0.1 + abs(cg.weight / 5.0))
dot.edge(a, b, _attributes={'style': style, 'color': color, 'penwidth': width})
dot.render(filename, view=view)
return dot
"""
Explanation: Draw the network
We are using graphviz to draw the network generated by NEAT.
End of explanation
"""
draw_net(config, winner, view=True)
"""
Explanation: Visualization of the network
End of explanation
"""
clock = pygame.time.Clock()
pygame.font.init()
font = pygame.font.SysFont("Arial", 50)
bird = FlappyBird_GAME()
#import csv
#import numpy
# build the network for the winning genome once, outside the game loop
net = neat.nn.FeedForwardNetwork.create(winner, config)
while (not bird.isDead()):
    clock.tick(60)
nnInput = bird.calculateInput()
output = net.activate(nnInput)
if output[0] > output[1]:
bird.tick(True)
# out = [1.0,0.0]
else:
bird.tick(False)
# out = [0.0,1.0]
#w = nnInput + out
#a = numpy.asarray(w)
# with open(r'flapyData.csv', 'a') as f:
# writer = csv.writer(f)
# writer.writerow(a)
print(bird.TotalDistance())
"""
Explanation: Visualization of the best genome
This cell will show the best genome playing the game.
By uncommenting the commented lines, the inputs and outputs generated during the run will be saved to a CSV file.
End of explanation
"""
import pandas as pd
import numpy as np
from sklearn import linear_model, datasets, metrics
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
dat=pd.read_csv('flapyData.csv', sep=',',header=None)
inputs,outputs = np.column_stack((dat[0],dat[1],dat[2],dat[3],dat[4])),np.column_stack((dat[5],dat[6]))
X_train, X_test, Y_train, Y_test = train_test_split(inputs, outputs, test_size = 0.2, random_state=0)
mlp_classifier = MLPClassifier(hidden_layer_sizes=(10,4), max_iter=1000, tol=0.001, random_state=1, verbose=True)
mlp_classifier.fit(X_train,Y_train)
"""
Explanation: DEEP LEARNING
This cell will train an MLP on the data generated by the NEAT-evolved network while playing the game.
End of explanation
"""
print("MLP predictions:\n%s\n" % (metrics.classification_report(Y_test, mlp_classifier.predict(X_test))))
"""
Explanation: Results
The results obtained by classifying the test data with the trained MLP.
End of explanation
"""
# This cell shows the trained MLP playing the game
clock = pygame.time.Clock()
pygame.font.init()
font = pygame.font.SysFont("Arial", 50)
bird = FlappyBird_GAME()
while (not bird.isDead()):
clock.tick(60)
nnInput = bird.calculateInput()
output = mlp_classifier.predict(np.column_stack((nnInput[0],nnInput[1],nnInput[2],nnInput[3],nnInput[4])))
if output[0][0] > output[0][1]:
bird.tick(True)
else:
bird.tick(False)
print(bird.TotalDistance())
"""
Explanation: MLP playing the game
This cell will show how the trained MLP plays the game.
End of explanation
"""
|