markdown | code | output | license | path | repo_name |
---|---|---|---|---|---|
Model registration
|
import os
from azureml.core.model import Model
# Register model
model = Model.register(workspace = ws,
model_path = modelfilespath + '/model.pkl',
model_name = 'bankmarketing',
tags = {'automl': 'use generated file'},
description = 'AutoML generated model for Bank Marketing')
|
Registering model bankmarketing
|
MIT
|
4.AML-Functions-notebook/AML-AzureFunctionsPackager.ipynb
|
dahatake/Azure-Machine-Learning-sample
|
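Once registered, the model can be retrieved later by name from the same workspace. A minimal sketch of that lookup, assuming the usual `Model` constructor from azureml-core is available in the SDK version used in this notebook:

```python
from azureml.core.model import Model

# Fetch the latest registered version of the model by name
# (illustrative only; adjust to your SDK version and workspace).
registered = Model(workspace=ws, name='bankmarketing')
print(registered.name, registered.version)
```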
Define the inference environment
|
from azureml.core.environment import Environment
myenv = Environment.from_conda_specification(name = 'myenv',
file_path = modelfilespath + '/conda_env_v_1_0_0.yml')
myenv.register(workspace=ws)
|
_____no_output_____
|
MIT
|
4.AML-Functions-notebook/AML-AzureFunctionsPackager.ipynb
|
dahatake/Azure-Machine-Learning-sample
|
Configure the inference environment
|
from azureml.core.model import InferenceConfig
myenv = Environment.get(workspace=ws, name='myenv', version='1')
inference_config = InferenceConfig(entry_script= modelfilespath + '/scoring_file_v_1_0_0.py',
environment=myenv)
|
_____no_output_____
|
MIT
|
4.AML-Functions-notebook/AML-AzureFunctionsPackager.ipynb
|
dahatake/Azure-Machine-Learning-sample
|
Create an image for Azure Functions. For the HTTP trigger, see: https://docs.microsoft.com/ja-jp/python/api/azureml-contrib-functions/azureml.contrib.functions?view=azure-ml-pypackage-http-workspace--models--inference-config--generate-dockerfile-false--auth-level-none-
|
from azureml.contrib.functions import package_http
httptrigger = package_http(ws, [model], inference_config, generate_dockerfile=True, auth_level=None)
httptrigger.wait_for_creation(show_output=True)
# Display the package location/ACR path
print(httptrigger.location)
|
Package creation Succeeded
https://dahatakeml5466187599.blob.core.windows.net/azureml/LocalUpload/d81db5dd-82ae-41fd-a56c-89010d382c36/build_context_manifest.json?sv=2019-02-02&sr=b&sig=ktxPIr5t%2F00E4lxDUQ4OjfiTxn00Yo0VfABY3BbQ4gQ%3D&st=2020-09-10T07%3A39%3A46Z&se=2020-09-10T15%3A49%3A46Z&sp=r
|
MIT
|
4.AML-Functions-notebook/AML-AzureFunctionsPackager.ipynb
|
dahatake/Azure-Machine-Learning-sample
|
Scenario Analysis: Pop-Up Shop. Image credit: Kürschner (talk) 17:51, 1 December 2020 (UTC), CC0, via Wikimedia Commons.
|
# install Pyomo and solvers for Google Colab
import sys
if "google.colab" in sys.modules:
!wget -N -q https://raw.githubusercontent.com/jckantor/MO-book/main/tools/install_on_colab.py
%run install_on_colab.py
|
_____no_output_____
|
MIT
|
_build/jupyter_execute/notebooks/01/Pop-Up-Shop.ipynb
|
jckantor/MO-book
|
The problem

There is an opportunity to operate a pop-up shop to sell a unique commemorative item for events held at a famous location. The items cost 12 € each and will sell for 40 €. Unsold items can be returned to the supplier at a value of only 2 € due to their commemorative nature.

| Parameter | Symbol | Value |
| :---: | :---: | :---: |
| sales price | $r$ | 40 € |
| unit cost | $c$ | 12 € |
| salvage value | $w$ | 2 € |

Profit will increase with sales. Demand for these items, however, will be high only if the weather is good. Historical data suggests the following scenarios.

| Scenario ($s$) | Demand ($d_s$) | Probability ($p_s$) |
| :---: | :-----: | :----------: |
| Sunny Skies | 650 | 0.10 |
| Good Weather | 400 | 0.60 |
| Poor Weather | 200 | 0.30 |

The problem is to determine how many items to order for the pop-up shop. The dilemma is that the weather won't be known until after the order is placed. Ordering enough items to meet demand for a good weather day results in a financial penalty on returned goods if the weather is poor. But ordering just enough items to satisfy demand on a poor weather day leaves "money on the table" if the weather is good. How many items should be ordered for sale?

Expected value for the mean scenario (EVM)

A naive solution to this problem is to place an order equal to the expected demand. The expected demand is given by

$$\begin{align*}\mathbb E[D] & = \sum_{s\in S} p_s d_s \end{align*}$$

Choosing an order size $x = \mathbb E[D]$ results in an expected profit we call the **expected value of the mean scenario (EVM)**. Variable $y_s$ is the actual number of items sold if scenario $s$ should occur. The number sold is the lesser of the demand $d_s$ and the order size $x$.

$$\begin{align*}y_s & = \min(d_s, x) & \forall s \in S\end{align*}$$

Any unsold inventory $x - y_s$ remaining after the event will be sold at the salvage price $w$. Taking into account the revenue from sales $r y_s$, the salvage value of the unsold inventory $w(x - y_s)$, and the cost of the order $c x$, the profit $f_s$ for scenario $s$ is given by

$$\begin{align*}f_s & = r y_s + w (x - y_s) - c x & \forall s \in S\end{align*}$$

The average or expected profit is given by

$$\begin{align*}\text{EVM} = \mathbb E[f] & = \sum_{s\in S} p_s f_s\end{align*}$$

These calculations can be executed using operations on the pandas dataframe. Let's begin by calculating the expected demand. Below we create a pandas DataFrame object to store the scenario data.
|
import numpy as np
import pandas as pd
# price information
r = 40
c = 12
w = 2
# scenario information
scenarios = {
"sunny skies" : {"probability": 0.10, "demand": 650},
"good weather": {"probability": 0.60, "demand": 400},
"poor weather": {"probability": 0.30, "demand": 200},
}
df = pd.DataFrame.from_dict(scenarios).T
display(df)
expected_demand = sum(df["probability"] * df["demand"])
print(f"Expected demand = {expected_demand}")
|
Expected demand = 365.0
|
MIT
|
_build/jupyter_execute/notebooks/01/Pop-Up-Shop.ipynb
|
jckantor/MO-book
|
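As a quick check on the cell above, the expected demand is just the probability-weighted sum of the three scenario demands; this worked arithmetic matches the printed value of 365.

$$\mathbb E[D] = 0.10 \cdot 650 + 0.60 \cdot 400 + 0.30 \cdot 200 = 65 + 240 + 60 = 365$$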
Subsequent calculations can be done directly with the pandas dataframe holding the scenario data.
|
df["order"] = expected_demand
df["sold"] = df[["demand", "order"]].min(axis=1)
df["salvage"] = df["order"] - df["sold"]
df["profit"] = r * df["sold"] + w * df["salvage"] - c * df["order"]
EVM = sum(df["probability"] * df["profit"])
print(f"Mean demand = {expected_demand}")
print(f"Expected value of the mean demand (EVM) = {EVM}")
display(df)
|
Mean demand = 365.0
Expected value of the mean demand (EVM) = 8339.0
|
MIT
|
_build/jupyter_execute/notebooks/01/Pop-Up-Shop.ipynb
|
jckantor/MO-book
|
Expected value of the stochastic solution (EVSS)

The optimization problem is to find the order size $x$ that maximizes expected profit subject to operational constraints on the decision variables. The variables $x$ and $y_s$ are non-negative integers, while $f_s$ is a real number that can take either positive or negative values. The number of goods sold in scenario $s$ can be no larger than either the order size $x$ or the customer demand $d_s$. The problem to be solved is

$$\begin{align*}\text{EV} = & \max_{x, y_s} \mathbb E[f] = \sum_{s\in S} p_s f_s \\ \text{subject to:} \\ f_s & = r y_s + w(x - y_s) - c x & \forall s \in S\\ y_s & \leq x & \forall s \in S \\ y_s & \leq d_s & \forall s \in S\end{align*}$$

where $S$ is the set of all scenarios under consideration.
|
import pyomo.environ as pyo
import pandas as pd
# price information
r = 40
c = 12
w = 2
# scenario information
scenarios = {
"sunny skies" : {"demand": 650, "probability": 0.1},
"good weather": {"demand": 400, "probability": 0.6},
"poor weather": {"demand": 200, "probability": 0.3},
}
# create model instance
m = pyo.ConcreteModel('Pop-up Shop')
# set of scenarios
m.S = pyo.Set(initialize=scenarios.keys())
# decision variables
m.x = pyo.Var(domain=pyo.NonNegativeIntegers)
m.y = pyo.Var(m.S, domain=pyo.NonNegativeIntegers)
m.f = pyo.Var(m.S, domain=pyo.Reals)
# objective
@m.Objective(sense=pyo.maximize)
def EV(m):
return sum([scenarios[s]["probability"]*m.f[s] for s in m.S])
# constraints
@m.Constraint(m.S)
def profit(m, s):
return m.f[s] == r*m.y[s] + w*(m.x - m.y[s]) - c*m.x
@m.Constraint(m.S)
def sales_less_than_order(m, s):
return m.y[s] <= m.x
@m.Constraint(m.S)
def sales_less_than_demand(m, s):
return m.y[s] <= scenarios[s]["demand"]
# solve
solver = pyo.SolverFactory('glpk')
results = solver.solve(m)
# display solution using Pandas
print("Solver Termination Condition:", results.solver.termination_condition)
print("Expected Profit:", m.EV())
print()
for s in m.S:
scenarios[s]["order"] = m.x()
scenarios[s]["sold"] = m.y[s]()
scenarios[s]["salvage"] = m.x() - m.y[s]()
scenarios[s]["profit"] = m.f[s]()
df = pd.DataFrame.from_dict(scenarios).T
display(df)
|
Solver Termination Condition: optimal
Expected Profit: 8920.0
|
MIT
|
_build/jupyter_execute/notebooks/01/Pop-Up-Shop.ipynb
|
jckantor/MO-book
|
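A minimal follow-up sketch, assuming the `EVM` value computed in the earlier pandas cell is still in scope: the optimal order size and the value of the stochastic solution can be read directly from the solved model, and should match the VSS discussed in the next cell.

```python
# Value of the stochastic solution: improvement of the optimized order
# over simply ordering the expected demand (computed earlier as EVM).
VSS = m.EV() - EVM
print(f"Optimal order size = {m.x()}")
print(f"VSS = {VSS}")
```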
Optimizing over all scenarios provides an expected profit of 8,920 €, an increase of 581 € over the base case of simply ordering the expected number of items sold. The new solution places a larger order. In poor weather conditions there will be more returns and lower profit, but that loss is more than compensated by the increased profits in good weather conditions. The additional value that results from solving this planning problem is called the **Value of the Stochastic Solution (VSS)**. The value of the stochastic solution is the additional profit compared to ordering to meet expected demand. In this case,

$$\text{VSS} = \text{EV} - \text{EVM} = 8,920 - 8,339 = 581$$

Expected value with perfect information (EVPI)

Maximizing expected profit requires the size of the order be decided before knowing what scenario will unfold. The decision for $x$ has to be made "here and now" with probabilistic information about the future, but without specific information on which future will actually transpire. Nevertheless, we can perform the hypothetical calculation of what profit would be realized if we could know the future. We are still subject to the variability of weather; what is different is that we know what the weather will be at the time the order is placed. The resulting value for the expected profit is called the **Expected Value of Perfect Information (EVPI)**. The difference EVPI - EV is the extra profit due to having perfect knowledge of the future.

To compute the expected profit with perfect information, we let the order variable $x$ be indexed by the subsequent scenario that will unfold. Given decision variable $x_s$, the model for EVPI becomes

$$\begin{align*}\text{EVPI} = & \max_{x_s, y_s} \mathbb E[f] = \sum_{s\in S} p_s f_s \\ \text{subject to:} \\ f_s & = r y_s + w(x_s - y_s) - c x_s & \forall s \in S\\ y_s & \leq x_s & \forall s \in S \\ y_s & \leq d_s & \forall s \in S\end{align*}$$

The following implementation is a variation of the prior cell.
|
import pyomo.environ as pyo
import pandas as pd
# price information
r = 40
c = 12
w = 2
# scenario information
scenarios = {
"sunny skies" : {"demand": 650, "probability": 0.1},
"good weather": {"demand": 400, "probability": 0.6},
"poor weather": {"demand": 200, "probability": 0.3},
}
# create model instance
m = pyo.ConcreteModel('Pop-up Shop')
# set of scenarios
m.S = pyo.Set(initialize=scenarios.keys())
# decision variables
m.x = pyo.Var(m.S, domain=pyo.NonNegativeIntegers)
m.y = pyo.Var(m.S, domain=pyo.NonNegativeIntegers)
m.f = pyo.Var(m.S, domain=pyo.Reals)
# objective
@m.Objective(sense=pyo.maximize)
def EV(m):
return sum([scenarios[s]["probability"]*m.f[s] for s in m.S])
# constraints
@m.Constraint(m.S)
def profit(m, s):
return m.f[s] == r*m.y[s] + w*(m.x[s] - m.y[s]) - c*m.x[s]
@m.Constraint(m.S)
def sales_less_than_order(m, s):
return m.y[s] <= m.x[s]
@m.Constraint(m.S)
def sales_less_than_demand(m, s):
return m.y[s] <= scenarios[s]["demand"]
# solve
solver = pyo.SolverFactory('glpk')
results = solver.solve(m)
# display solution using Pandas
print("Solver Termination Condition:", results.solver.termination_condition)
print("Expected Profit:", m.EV())
print()
for s in m.S:
scenarios[s]["order"] = m.x[s]()
scenarios[s]["sold"] = m.y[s]()
scenarios[s]["salvage"] = m.x[s]() - m.y[s]()
scenarios[s]["profit"] = m.f[s]()
df = pd.DataFrame.from_dict(scenarios).T
display(df)
|
Solver Termination Condition: optimal
Expected Profit: 10220.0
|
MIT
|
_build/jupyter_execute/notebooks/01/Pop-Up-Shop.ipynb
|
jckantor/MO-book
|
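Using the two solved models above, the value of perfect information follows directly from the reported objective values (10,220 € with scenario-indexed orders versus 8,920 € for the single here-and-now order):

$$\text{EVPI} - \text{EV} = 10,220 - 8,920 = 1,300 \text{ €}$$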
Copyright (c) 2014-2021 National Technology and Engineering Solutions of Sandia, LLC. Under the terms of Contract DE-NA0003525 with National Technology and Engineering Solutions of Sandia, LLC, the U.S. Government retains certain rights in this software. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

**Tutorial 1:** How to Create Trajectory Points from a Delimited File

Purpose: This notebook demonstrates how to create Tracktable Trajectory Point objects from a delimited (e.g. csv, tsv, etc.) data file. A data file must contain the following columns in order to be compatible with Tracktable:

* **an identifier** that is unique to each object
* **a timestamp**
* **longitude**
* **latitude**

Both ordering and headers for these columns can vary, but they must exist in the file. Each row of the data file should represent the information for a single trajectory point. **IMPORTANT:** Delimited files must be **sorted by timestamp** to be compatible with Tracktable.

*Note:* This notebook does not cover how to create a Trajectory object (as opposed to a list of Trajectory point objects). Please see [Tutorial 2](Tutorial_02.ipynb) for an example of how to create Trajectory objects from a csv file containing trajectory point information.

Step 1: Identify your CSV/TSV File

We will use the provided example data $^1$ for this tutorial. If you are using another filename, `data_filename` should be set to the string containing the path to your csv file.
|
from tracktable.core import data_directory
import os.path
data_filename = os.path.join(data_directory(), 'NYHarbor_2020_06_30_first_hour.csv')
|
_____no_output_____
|
Unlicense
|
tutorial_notebooks/Tutorial_01.ipynb
|
sandialabs/tracktable-docs
|
Step 2: Create a TrajectoryPointReader object. We will create a Terrestrial point reader, which will expect **(longitude, latitude)** coordinates. Alternatively, if our data points were in a Cartesian coordinate system, we would import the `TrajectoryPointReader` object from `tracktable.domain.cartesian2d` or `tracktable.domain.cartesian3d`.
|
from tracktable.domain.terrestrial import TrajectoryPointReader
reader = TrajectoryPointReader()
|
_____no_output_____
|
Unlicense
|
tutorial_notebooks/Tutorial_01.ipynb
|
sandialabs/tracktable-docs
|
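As noted above, Cartesian data would use a reader from one of the Cartesian domains instead. A minimal sketch of that alternative, assuming 2D point data rather than the terrestrial data used in this tutorial:

```python
# For (x, y) points in a Cartesian coordinate system instead of (longitude, latitude)
from tracktable.domain.cartesian2d import TrajectoryPointReader

cartesian_reader = TrajectoryPointReader()
```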
Step 3: Give the TrajectoryPointReader object info about the file. Have the reader open an input stream to the data file.
|
reader.input = open(data_filename, 'r')
|
_____no_output_____
|
Unlicense
|
tutorial_notebooks/Tutorial_01.ipynb
|
sandialabs/tracktable-docs
|
*Additional Settings* Identify the comment character for the data file. Any lines with this as the first non-whitespace character will be ignored. This setting is optional; if it is not set, the reader's default comment character is used.
|
reader.comment_character = '#'
|
_____no_output_____
|
Unlicense
|
tutorial_notebooks/Tutorial_01.ipynb
|
sandialabs/tracktable-docs
|
Identify the file's delimiter. For comma-separated (CSV) files, the delimiter should be set to `,`. For tab-separated files, this should be `\t`. This is optional, and the default value is `,`.
|
reader.field_delimiter = ','
|
_____no_output_____
|
Unlicense
|
tutorial_notebooks/Tutorial_01.ipynb
|
sandialabs/tracktable-docs
|
Identify the string associated with a null value in a cell. This is optional and defaults to an empty string.
|
reader.null_value = 'NaN'
|
_____no_output_____
|
Unlicense
|
tutorial_notebooks/Tutorial_01.ipynb
|
sandialabs/tracktable-docs
|
*Required Columns* We must tell the reader where to find the **unique object ID**, **timestamp**, **longitude** and **latitude** columns. Column numbering starts at zero. If no column numbers are given, the reader will assume they are in the order listed above. Note that terrestrial points are stored as (longitude, latitude) in Tracktable.
|
reader.object_id_column = 3
reader.timestamp_column = 0
reader.coordinates[0] = 1 # longitude
reader.coordinates[1] = 2 # latitude
|
_____no_output_____
|
Unlicense
|
tutorial_notebooks/Tutorial_01.ipynb
|
sandialabs/tracktable-docs
|
*Optional Columns* Your data file may contain additional information (e.g. speed, heading, altitude, etc.) that you wish to store with your trajectory points. These can be stored as either floats, strings or datetime objects. An example of each is shown below, respectively.
|
reader.set_real_field_column('heading', 6)
reader.set_string_field_column('vessel-name', 7)
reader.set_time_field_column('eta', 17)
|
_____no_output_____
|
Unlicense
|
tutorial_notebooks/Tutorial_01.ipynb
|
sandialabs/tracktable-docs
|
Step 4: Convert the Reader to a List of Trajectory Points
|
trajectory_points = list(reader)
|
_____no_output_____
|
Unlicense
|
tutorial_notebooks/Tutorial_01.ipynb
|
sandialabs/tracktable-docs
|
How many trajectory points do we have?
|
len(trajectory_points)
|
_____no_output_____
|
Unlicense
|
tutorial_notebooks/Tutorial_01.ipynb
|
sandialabs/tracktable-docs
|
Step 5: Accessing Trajectory Point Info

The information from the required columns of the csv can be accessed for a single `trajectory_point` object as

* **unique object identifier:** `trajectory_point.object_id`
* **timestamp:** `trajectory_point.timestamp`
* **longitude:** `trajectory_point[0]`
* **latitude:** `trajectory_point[1]`

The optional column information is available through the member variable `properties` as follows: `trajectory_point.properties['what-you-named-it']`. This is demonstrated below for our first ten trajectory points.
|
for traj_point in trajectory_points[:10]:
object_id = traj_point.object_id
timestamp = traj_point.timestamp
longitude = traj_point[0]
latitude = traj_point[1]
heading = traj_point.properties["heading"]
vessel_name = traj_point.properties["vessel-name"]
eta = traj_point.properties["eta"]
print(f'Unique ID: {object_id}')
print(f'Timestamp: {timestamp}')
print(f'Longitude: {longitude}')
print(f'Latitude: {latitude}')
print(f'Heading: {heading}')
print(f'Vessel Name: {vessel_name}')
print(f'ETA: {eta}\n')
|
Unique ID: 367000140
Timestamp: 2020-06-30 00:00:00+00:00
Longitude: -74.07157
Latitude: 40.64409
Heading: 246.0
Vessel Name: SAMUEL I NEWHOUSE
ETA: 2020-06-30 12:01:00+00:00
Unique ID: 366999618
Timestamp: 2020-06-30 00:00:00+00:00
Longitude: -74.02433
Latitude: 40.54291
Heading: 349.0
Vessel Name: CG SHRIKE
ETA: 2020-06-30 19:40:00+00:00
Unique ID: 367776270
Timestamp: 2020-06-30 00:00:00+00:00
Longitude: -73.97656
Latitude: 40.70324
Heading: 290.0
Vessel Name: H200
ETA: 2020-06-30 20:04:00+00:00
Unique ID: 367022550
Timestamp: 2020-06-30 00:00:00+00:00
Longitude: -74.07281
Latitude: 40.63668
Heading: 511.0
Vessel Name: SAMANTHA MILLER
ETA: 2020-06-30 08:10:00+00:00
Unique ID: 367515850
Timestamp: 2020-06-30 00:00:00+00:00
Longitude: -74.11926
Latitude: 40.64217
Heading: 163.0
Vessel Name: DISCOVERY COAST
ETA: 2020-06-30 09:53:00+00:00
Unique ID: 367531640
Timestamp: 2020-06-30 00:00:00+00:00
Longitude: -74.07176
Latitude: 40.62947
Heading: 511.0
Vessel Name: FDNY M9B
ETA: 2020-06-30 13:45:00+00:00
Unique ID: 338531000
Timestamp: 2020-06-30 00:00:00+00:00
Longitude: -74.05089
Latitude: 40.64413
Heading: 96.0
Vessel Name: GENESIS VIGILANT
ETA: 2020-06-30 09:15:00+00:00
Unique ID: 366516370
Timestamp: 2020-06-30 00:00:00+00:00
Longitude: -74.14805
Latitude: 40.64346
Heading: 302.0
Vessel Name: STEPHEN REINAUER
ETA: 2020-06-30 04:51:00+00:00
Unique ID: 367779550
Timestamp: 2020-06-30 00:00:00+00:00
Longitude: -74.00551
Latitude: 40.70308
Heading: 234.0
Vessel Name: SUNSET CROSSING
ETA: 2020-06-30 06:36:00+00:00
Unique ID: 367797260
Timestamp: 2020-06-30 00:00:00+00:00
Longitude: -73.9741
Latitude: 40.70235
Heading: 51.0
Vessel Name: H208
ETA: 2020-06-30 05:39:00+00:00
|
Unlicense
|
tutorial_notebooks/Tutorial_01.ipynb
|
sandialabs/tracktable-docs
|
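Since each point exposes its required fields as attributes and its optional fields through `properties`, it can be convenient to collect them into a tabular structure for further analysis. A hedged sketch, assuming pandas is available and the optional columns were configured as shown earlier:

```python
import pandas as pd

# Build a DataFrame from the trajectory points using the accessors shown above.
records = [
    {
        "object_id": p.object_id,
        "timestamp": p.timestamp,
        "longitude": p[0],
        "latitude": p[1],
        "heading": p.properties["heading"],
        "vessel_name": p.properties["vessel-name"],
    }
    for p in trajectory_points
]
points_df = pd.DataFrame(records)
points_df.head()
```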
Optimization of a Voigt profile
|
from exojax.spec.rlpf import rvoigt
import jax.numpy as jnp
import matplotlib.pyplot as plt
|
_____no_output_____
|
MIT
|
examples/tutorial/optimize_voigt.ipynb
|
ykawashima/exojax
|
Let's optimize the Voigt function $V(\nu, \beta, \gamma_L)$ using exojax! $V(\nu, \beta, \gamma_L)$ is a convolution of a Gaussian with a standard deviation of $\beta$ and a Lorentzian with a gamma parameter of $\gamma_L$. Note that we use spec.rlpf.rvoigt instead of spec.voigt: rvoigt is a Voigt profile defined with a VJP (reverse-mode) rule, while voigt is defined with a JVP (forward-mode) rule. For now, rvoigt is not the default Voigt profile function, but we plan to make the VJP version the default in the future.
|
nu=jnp.linspace(-10,10,100)
plt.plot(nu, rvoigt(nu,1.0,2.0)) #beta=1.0, gamma_L=2.0
|
_____no_output_____
|
MIT
|
examples/tutorial/optimize_voigt.ipynb
|
ykawashima/exojax
|
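Because rvoigt carries a VJP rule, reverse-mode differentiation with jax.grad works on quantities built from it. A small sketch, reducing over the same nu grid so that grad sees a scalar output (the sum is only for illustration):

```python
from jax import grad

# Gradient of a scalar summary of the profile with respect to gamma_L
dsum_dgamma = grad(lambda gamma_L: jnp.sum(rvoigt(nu, 1.0, gamma_L)))(2.0)
print(dsum_dgamma)
```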
Optimization of a simple absorption model. Next, we try to fit a simple absorption model to mock data. The absorption model is $f = 1 - e^{-a V(\nu,\beta,\gamma_L)}$
|
def absmodel(nu,a,beta,gamma_L):
return 1.0 - jnp.exp(-a*rvoigt(nu,beta,gamma_L))  # f = 1 - exp(-a V(nu, beta, gamma_L))
|
_____no_output_____
|
MIT
|
examples/tutorial/optimize_voigt.ipynb
|
ykawashima/exojax
|
Adding noise...
|
from numpy.random import normal
data=absmodel(nu,2.0,1.0,2.0)+normal(0.0,0.01,len(nu))
plt.plot(nu,data,".")
|
_____no_output_____
|
MIT
|
examples/tutorial/optimize_voigt.ipynb
|
ykawashima/exojax
|
Let's optimize the multiple parameters.
|
from jax import grad, vmap
|
_____no_output_____
|
MIT
|
examples/tutorial/optimize_voigt.ipynb
|
ykawashima/exojax
|
We define the objective function as $obj = |d - f|^2$
|
# loss or objective function
def obj(a,beta,gamma_L):
f=data-absmodel(nu,a,beta,gamma_L)
g=jnp.dot(f,f)
return g
#These are the derivative of the objective function
h_a=grad(obj,argnums=0)
h_beta=grad(obj,argnums=1)
h_gamma_L=grad(obj,argnums=2)
print(h_a(2.0,1.0,2.0),h_beta(2.0,1.0,2.0),h_gamma_L(2.0,1.0,2.0))
from jax import jit
@jit
def step(t,opt_state):
a,beta,gamma_L=get_params(opt_state)
value=obj(a,beta,gamma_L)
grads_a = h_a(a,beta,gamma_L)
grads_beta = h_beta(a,beta,gamma_L)
grads_gamma_L = h_gamma_L(a,beta,gamma_L)
grads=jnp.array([grads_a,grads_beta,grads_gamma_L])
opt_state = opt_update(t, grads, opt_state)
return value, opt_state
def doopt(r0,opt_init,get_params,Nstep):
opt_state = opt_init(r0)
traj=[r0]
for t in range(Nstep):
value, opt_state = step(t, opt_state)
p=get_params(opt_state)
traj.append(p)
return traj, p
|
_____no_output_____
|
MIT
|
examples/tutorial/optimize_voigt.ipynb
|
ykawashima/exojax
|
Here, we use the ADAM optimizer
|
#adam
from jax.experimental import optimizers
opt_init, opt_update, get_params = optimizers.adam(1.e-1)
r0 = jnp.array([1.5,1.5,1.5])
trajadam, padam=doopt(r0,opt_init,get_params,1000)
|
_____no_output_____
|
MIT
|
examples/tutorial/optimize_voigt.ipynb
|
ykawashima/exojax
|
The optimized values are given in padam.
|
padam
traj=jnp.array(trajadam)
plt.plot(traj[:,0],label="$\\alpha$")
plt.plot(traj[:,1],ls="dashed",label="$\\beta$")
plt.plot(traj[:,2],ls="dotted",label="$\\gamma_L$")
plt.xscale("log")
plt.legend()
plt.show()
plt.plot(nu,data,".",label="data")
plt.plot(nu,absmodel(nu,padam[0],padam[1],padam[2]),label="optimized")
plt.show()
|
_____no_output_____
|
MIT
|
examples/tutorial/optimize_voigt.ipynb
|
ykawashima/exojax
|
Using SGD instead, you need to increase the number of iterations for convergence.
|
#sgd
from jax.experimental import optimizers
opt_init, opt_update, get_params = optimizers.sgd(1.e-1)
r0 = jnp.array([1.5,1.5,1.5])
trajsgd, psgd=doopt(r0,opt_init,get_params,10000)
traj=jnp.array(trajsgd)
plt.plot(traj[:,0],label="$\\alpha$")
plt.plot(traj[:,1],ls="dashed",label="$\\beta$")
plt.plot(traj[:,2],ls="dotted",label="$\\gamma_L$")
plt.xscale("log")
plt.legend()
plt.show()
|
_____no_output_____
|
MIT
|
examples/tutorial/optimize_voigt.ipynb
|
ykawashima/exojax
|
Machine Learning Trading Bot

In this Challenge, you'll assume the role of a financial advisor at one of the top five financial advisory firms in the world. Your firm constantly competes with the other major firms to manage and automatically trade assets in a highly dynamic environment. In recent years, your firm has heavily profited by using computer algorithms that can buy and sell faster than human traders. The speed of these transactions gave your firm a competitive advantage early on. But, people still need to specifically program these systems, which limits their ability to adapt to new data. You're thus planning to improve the existing algorithmic trading systems and maintain the firm's competitive advantage in the market. To do so, you'll enhance the existing trading signals with machine learning algorithms that can adapt to new data.

Instructions: Use the starter code file to complete the steps that the instructions outline. The steps for this Challenge are divided into the following sections:

* Establish a Baseline Performance
* Tune the Baseline Trading Algorithm
* Evaluate a New Machine Learning Classifier
* Create an Evaluation Report

Establish a Baseline Performance

In this section, you'll run the provided starter code to establish a baseline performance for the trading algorithm. To do so, complete the following steps. Open the Jupyter notebook. Restart the kernel, run the provided cells that correspond with the first three steps, and then proceed to step four.

1. Import the OHLCV dataset into a Pandas DataFrame.
2. Generate trading signals using short- and long-window SMA values.
3. Split the data into training and testing datasets.
4. Use the `SVC` classifier model from SKLearn's support vector machine (SVM) learning method to fit the training data and make predictions based on the testing data. Review the predictions.
5. Review the classification report associated with the `SVC` model predictions.
6. Create a predictions DataFrame that contains columns for “Predicted” values, “Actual Returns”, and “Strategy Returns”.
7. Create a cumulative return plot that shows the actual returns vs. the strategy returns. Save a PNG image of this plot. This will serve as a baseline against which to compare the effects of tuning the trading algorithm.
8. Write your conclusions about the performance of the baseline trading algorithm in the `README.md` file that's associated with your GitHub repository. Support your findings by using the PNG image that you saved in the previous step.

Tune the Baseline Trading Algorithm

In this section, you'll tune, or adjust, the model's input features to find the parameters that result in the best trading outcomes. (You'll choose the best by comparing the cumulative products of the strategy returns.) To do so, complete the following steps:

1. Tune the training algorithm by adjusting the size of the training dataset. To do so, slice your data into different periods. Rerun the notebook with the updated parameters, and record the results in your `README.md` file. Answer the following question: What impact resulted from increasing or decreasing the training window?
   > **Hint** To adjust the size of the training dataset, you can use a different `DateOffset` value, for example, six months. Be aware that changing the size of the training dataset also affects the size of the testing dataset.
2. Tune the trading algorithm by adjusting the SMA input features. Adjust one or both of the windows for the algorithm. Rerun the notebook with the updated parameters, and record the results in your `README.md` file. Answer the following question: What impact resulted from increasing or decreasing either or both of the SMA windows?
3. Choose the set of parameters that best improved the trading algorithm returns. Save a PNG image of the cumulative product of the actual returns vs. the strategy returns, and document your conclusion in your `README.md` file.

Evaluate a New Machine Learning Classifier

In this section, you'll use the original parameters that the starter code provided. But, you'll apply them to the performance of a second machine learning model. To do so, complete the following steps:

1. Import a new classifier, such as `AdaBoost`, `DecisionTreeClassifier`, or `LogisticRegression`. (For the full list of classifiers, refer to the [Supervised learning page](https://scikit-learn.org/stable/supervised_learning.html) in the scikit-learn documentation.)
2. Using the original training data as the baseline model, fit another model with the new classifier.
3. Backtest the new model to evaluate its performance. Save a PNG image of the cumulative product of the actual returns vs. the strategy returns for this updated trading algorithm, and write your conclusions in your `README.md` file. Answer the following questions: Did this new model perform better or worse than the provided baseline model? Did this new model perform better or worse than your tuned trading algorithm?

Create an Evaluation Report

In the previous sections, you updated your `README.md` file with your conclusions. To accomplish this section, you need to add a summary evaluation report at the end of the `README.md` file. For this report, express your final conclusions and analysis. Support your findings by using the PNG images that you created.
|
# Imports
import pandas as pd
import numpy as np
from pathlib import Path
import hvplot.pandas
import matplotlib.pyplot as plt
from sklearn import svm
from sklearn import metrics
from sklearn.ensemble import AdaBoostClassifier
from sklearn.preprocessing import StandardScaler
from pandas.tseries.offsets import DateOffset
from sklearn.metrics import classification_report
|
_____no_output_____
|
MIT
|
machine_learning_trading_bot.ipynb
|
djonathan/Algorithmic-Trading-ML
|
--- Establish a Baseline Performance. In this section, you'll run the provided starter code to establish a baseline performance for the trading algorithm. To do so, complete the following steps. Open the Jupyter notebook. Restart the kernel, run the provided cells that correspond with the first three steps, and then proceed to step four. Step 1: Import the OHLCV dataset into a Pandas DataFrame.
|
# Import the OHLCV dataset into a Pandas Dataframe
ohlcv_df = pd.read_csv(
Path("./Resources/emerging_markets_ohlcv.csv"),
index_col='date',
infer_datetime_format=True,
parse_dates=True
)
# Review the DataFrame
ohlcv_df.head()
# Filter the date index and close columns
signals_df = ohlcv_df.loc[:, ["close"]]
# Use the pct_change function to generate returns from close prices
signals_df["Actual Returns"] = signals_df["close"].pct_change()
# Drop all NaN values from the DataFrame
signals_df = signals_df.dropna()
# Review the DataFrame
display(signals_df.head())
display(signals_df.tail())
|
_____no_output_____
|
MIT
|
machine_learning_trading_bot.ipynb
|
djonathan/Algorithmic-Trading-ML
|
Step 2: Generate trading signals using short- and long-window SMA values.
|
# Set the short window and long window
short_window = 4
long_window = 100
# Generate the fast and slow simple moving averages (4 and 100 days, respectively)
signals_df['SMA_Fast'] = signals_df['close'].rolling(window=short_window).mean()
signals_df['SMA_Slow'] = signals_df['close'].rolling(window=long_window).mean()
signals_df = signals_df.dropna()
# Review the DataFrame
display(signals_df.head())
display(signals_df.tail())
# Initialize the new Signal column
signals_df['Signal'] = 0.0
# When Actual Returns are greater than or equal to 0, generate signal to buy stock long
signals_df.loc[(signals_df['Actual Returns'] >= 0), 'Signal'] = 1
# When Actual Returns are less than 0, generate signal to sell stock short
signals_df.loc[(signals_df['Actual Returns'] < 0), 'Signal'] = -1
# Review the DataFrame
display(signals_df.head())
display(signals_df.tail())
signals_df['Signal'].value_counts()
# Calculate the strategy returns and add them to the signals_df DataFrame
signals_df['Strategy Returns'] = signals_df['Actual Returns'] * signals_df['Signal'].shift()
# Review the DataFrame
display(signals_df.head())
display(signals_df.tail())
# Plot Strategy Returns to examine performance
(1 + signals_df['Strategy Returns']).cumprod().plot()
|
_____no_output_____
|
MIT
|
machine_learning_trading_bot.ipynb
|
djonathan/Algorithmic-Trading-ML
|
Step 3: Split the data into training and testing datasets.
|
# Assign a copy of the sma_fast and sma_slow columns to a features DataFrame called X
X = signals_df[['SMA_Fast', 'SMA_Slow']].shift().dropna()
# Review the DataFrame
X.head()
# Create the target set selecting the Signal column and assiging it to y
y = signals_df['Signal']
# Review the value counts
y.value_counts()
# Select the start of the training period
training_begin = X.index.min()
# Display the training begin date
print(training_begin)
# Select the ending period for the training data with an offset of 3 months
training_end = X.index.min() + DateOffset(months=3)
# Display the training end date
print(training_end)
# Generate the X_train and y_train DataFrames
X_train = X.loc[training_begin:training_end]
y_train = y.loc[training_begin:training_end]
# Review the X_train DataFrame
X_train.head()
# Generate the X_test and y_test DataFrames
X_test = X.loc[training_end+DateOffset(hours=1):]
y_test = y.loc[training_end+DateOffset(hours=1):]
# Review the X_test DataFrame
X_train.head()
# Scale the features DataFrames
# Create a StandardScaler instance
scaler = StandardScaler()
# Apply the scaler model to fit the X-train data
X_scaler = scaler.fit(X_train)
# Transform the X_train and X_test DataFrames using the X_scaler
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
|
_____no_output_____
|
MIT
|
machine_learning_trading_bot.ipynb
|
djonathan/Algorithmic-Trading-ML
|
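The "Tune the Baseline Trading Algorithm" section later asks what happens when the training window changes. That tuning amounts to a one-line change to the cell above; a hedged sketch, with six months as an example value only:

```python
# Widen the training window from 3 months to 6 months (example value only);
# the testing window shrinks correspondingly.
training_end = X.index.min() + DateOffset(months=6)
```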
Step 4: Use the `SVC` classifier model from SKLearn's support vector machine (SVM) learning method to fit the training data and make predictions based on the testing data. Review the predictions.
|
# From SVM, instantiate SVC classifier model instance
svm_model = svm.SVC()
# Fit the model to the data using the training data
svm_model = svm_model.fit(X_train_scaled, y_train)
# Use the testing data to make the model predictions
svm_pred = svm_model.predict(X_test_scaled)
# Review the model's predicted values
svm_pred[:10]
|
_____no_output_____
|
MIT
|
machine_learning_trading_bot.ipynb
|
djonathan/Algorithmic-Trading-ML
|
Step 5: Review the classification report associated with the `SVC` model predictions.
|
# Use a classification report to evaluate the model using the predictions and testing data
svm_testing_report = classification_report(y_test, svm_pred)
# Print the classification report
print(svm_testing_report)
|
precision recall f1-score support
-1.0 0.43 0.04 0.07 1804
1.0 0.56 0.96 0.71 2288
accuracy 0.55 4092
macro avg 0.49 0.50 0.39 4092
weighted avg 0.50 0.55 0.43 4092
|
MIT
|
machine_learning_trading_bot.ipynb
|
djonathan/Algorithmic-Trading-ML
|
Step 6: Create a predictions DataFrame that contains columns for “Predicted” values, “Actual Returns”, and “Strategy Returns”.
|
# Create a new empty predictions DataFrame.
# Create a predictions DataFrame
predictions_df = pd.DataFrame(index=X_test.index)
# Add the SVM model predictions to the DataFrame
predictions_df['Predicted'] = svm_pred
# Add the actual returns to the DataFrame
predictions_df['Actual Returns'] = signals_df['Actual Returns']
# Add the strategy returns to the DataFrame
predictions_df['Strategy Returns'] = predictions_df['Predicted'] * predictions_df['Actual Returns']
# Review the DataFrame
display(predictions_df.head())
display(predictions_df.tail())
|
_____no_output_____
|
MIT
|
machine_learning_trading_bot.ipynb
|
djonathan/Algorithmic-Trading-ML
|
Step 7: Create a cumulative return plot that shows the actual returns vs. the strategy returns. Save a PNG image of this plot. This will serve as a baseline against which to compare the effects of tuning the trading algorithm.
|
# Plot the actual returns versus the strategy returns
baseline_actual_vs_stragegy_plot = (1 + predictions_df[['Actual Returns', 'Strategy Returns']]).cumprod().plot(title="Baseline")
baseline_actual_vs_stragegy_plot.get_figure().savefig('Baseline_actual_vs_strategy.png',bbox_inches='tight')
(1 + predictions_df[['Actual Returns', 'Strategy Returns']]).cumprod().tail(1)
|
_____no_output_____
|
MIT
|
machine_learning_trading_bot.ipynb
|
djonathan/Algorithmic-Trading-ML
|
--- Tune the Baseline Trading Algorithm. In this section, you'll tune, or adjust, the model's input features to find the parameters that result in the best trading outcomes. You'll choose the best by comparing the cumulative products of the strategy returns. Step 1: Tune the training algorithm by adjusting the size of the training dataset. To do so, slice your data into different periods. Rerun the notebook with the updated parameters, and record the results in your `README.md` file. Answer the following question: What impact resulted from increasing or decreasing the training window? Step 2: Tune the trading algorithm by adjusting the SMA input features. Adjust one or both of the windows for the algorithm. Rerun the notebook with the updated parameters, and record the results in your `README.md` file. Answer the following question: What impact resulted from increasing or decreasing either or both of the SMA windows? Step 3: Choose the set of parameters that best improved the trading algorithm returns. Save a PNG image of the cumulative product of the actual returns vs. the strategy returns, and document your conclusion in your `README.md` file. --- Evaluate a New Machine Learning Classifier. In this section, you'll use the original parameters that the starter code provided. But, you'll apply them to the performance of a second machine learning model. Step 1: Import a new classifier, such as `AdaBoost`, `DecisionTreeClassifier`, or `LogisticRegression`. (For the full list of classifiers, refer to the [Supervised learning page](https://scikit-learn.org/stable/supervised_learning.html) in the scikit-learn documentation.)
|
# Initiate the model instance
abc = AdaBoostClassifier(n_estimators=50)
|
_____no_output_____
|
MIT
|
machine_learning_trading_bot.ipynb
|
djonathan/Algorithmic-Trading-ML
|
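The instructions also list `DecisionTreeClassifier` and `LogisticRegression` as candidate second models. A hedged sketch of swapping in `LogisticRegression` on the same scaled training data (default hyperparameters; not the classifier actually used in this notebook):

```python
from sklearn.linear_model import LogisticRegression

# Alternative second model: fit on the same scaled features and predict on the test set.
lr_model = LogisticRegression().fit(X_train_scaled, y_train)
lr_pred = lr_model.predict(X_test_scaled)
print(classification_report(y_test, lr_pred))
```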
Step 2: Using the original training data as the baseline model, fit another model with the new classifier.
|
# Fit the model using the training data
model = abc.fit(X_train_scaled, y_train)
# Use the testing dataset to generate the predictions for the new model
abc_pred = model.predict(X_test_scaled)
# Review the model's predicted values
abc_pred[:10]
|
_____no_output_____
|
MIT
|
machine_learning_trading_bot.ipynb
|
djonathan/Algorithmic-Trading-ML
|
Step 3: Backtest the new model to evaluate its performance. Save a PNG image of the cumulative product of the actual returns vs. the strategy returns for this updated trading algorithm, and write your conclusions in your `README.md` file. Answer the following questions: Did this new model perform better or worse than the provided baseline model? Did this new model perform better or worse than your tuned trading algorithm?
|
print("Accuracy:",metrics.accuracy_score(y_test, abc_pred))
# Use a classification report to evaluate the model using the predictions and testing data
abc_testing_report = classification_report(y_test, abc_pred)
# Print the classification report
print(abc_testing_report)
# Create a new empty predictions DataFrame.
abc_pred_df = pd.DataFrame(index=X_test.index)
# Add the ABC model predictions to the DataFrame
abc_pred_df['Predicted'] = abc_pred
# Add the actual returns to the DataFrame
abc_pred_df['Actual Returns'] = signals_df['Actual Returns']
# Add the strategy returns to the DataFrame
abc_pred_df['Strategy Returns'] = abc_pred_df['Predicted'] * abc_pred_df['Actual Returns']
# Review the DataFrame
display(abc_pred_df.head(3))
display(abc_pred_df.tail(3))
# Plot the actual returns versus the strategy returns
abc_strategy_plot = (1 + abc_pred_df[['Actual Returns', 'Strategy Returns']]).cumprod().plot(title="AdaBoost: 3-month Train, SMA 4/100")
abc_strategy_plot.get_figure().savefig('AdaBoost_actual_vs_strategy.png',bbox_inches='tight')
(1 + abc_pred_df[['Actual Returns', 'Strategy Returns']]).cumprod().tail(1)
|
_____no_output_____
|
MIT
|
machine_learning_trading_bot.ipynb
|
djonathan/Algorithmic-Trading-ML
|
You can build rlambda objects using any Python arithmetic, comparison, and bitwise operators. Here are some examples...
|
from rlambda.abc import x, y, z
print((x + 1) + (y - 1) / z)
print((x % 2) // y + z ** 2)
print((x + 1) ** 2 > (y * 2))
print(x != y)
print(x ** 2 == y)
print((x > y) & (y > z))
print((x < 0) | (y < 0))
print(~(x > 0) ^ ~(y > 0))
print((x << 1) + (y >> 1))
|
x, y, z : (x > y) & (y > z)
x, y : (x < 0) | (y < 0)
x, y : ~(x > 0) ^ ~(y > 0)
x, y : (x << 1) + (y >> 1)
|
MIT
|
docs/operations.ipynb
|
Vykstorm/rlambda
|
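A lambda built this way can also be called like a regular function; a small sketch, assuming the call semantics shown in the complex-number example below (positional arguments bind to the formal variables):

```python
from rlambda.abc import x

f = (x + 1) * 2
print(f)      # prints the expression the rlambda represents
print(f(3))   # evaluates (3 + 1) * 2, i.e. 8
```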
You can use subscripting and indexing operations...
|
print(x[2:] + y[:2])
print(x[::2] + y[1::2])
print(x[1, 0:2])
f = x.imag ** 2 + x.real * 2
print(f)
f(complex(1, 2))
|
_____no_output_____
|
MIT
|
docs/operations.ipynb
|
Vykstorm/rlambda
|
Getting started with the Google Genomics API

In this notebook we'll cover how to make authenticated requests to the [Google Genomics API](https://cloud.google.com/genomics/reference/rest/).

----

NOTE:

* If you're new to notebooks, or want to check out additional samples, check out the full [list](../) of general notebooks.
* For additional Genomics samples, check out the full [list](./) of Genomics notebooks.

Setup: Install Python libraries

We'll be using the [Google Python API client](https://github.com/google/google-api-python-client) for interacting with the Genomics API. We can install this library, or any other 3rd-party Python libraries from the [Python Package Index (PyPI)](https://pypi.python.org/pypi) using the `pip` package manager. There are [50+ Google APIs](http://api-python-client-doc.appspot.com/) that you can work against with the Google Python API Client, but we'll focus on the Genomics API in this notebook.
|
!pip install --upgrade google-api-python-client
|
Requirement already up-to-date: google-api-python-client in /usr/local/lib/python2.7/dist-packages
Cleaning up...
|
Apache-2.0
|
datalab/genomics/Getting started with the Genomics API.ipynb
|
googlegenomics/datalab-examples
|
Create an Authenticated Client. Next we construct a Python object that we can use to make requests. The following snippet shows how we can authenticate using the service account on the Datalab host. For more detail about authentication from Python, see [Using OAuth 2.0 for Server to Server Applications](https://developers.google.com/api-client-library/python/auth/service-accounts).
|
from httplib2 import Http
from oauth2client.client import GoogleCredentials
credentials = GoogleCredentials.get_application_default()
http = Http()
credentials.authorize(http)
|
_____no_output_____
|
Apache-2.0
|
datalab/genomics/Getting started with the Genomics API.ipynb
|
googlegenomics/datalab-examples
|
And then we create a client for the Genomics API.
|
from apiclient.discovery import build
genomics = build('genomics', 'v1', http=http)
|
_____no_output_____
|
Apache-2.0
|
datalab/genomics/Getting started with the Genomics API.ipynb
|
googlegenomics/datalab-examples
|
Send a request to the Genomics API. Now that we have a Python client for the Genomics API, we can access a variety of different resources. For details about each available resource, see the python client [API docs here](https://google-api-client-libraries.appspot.com/documentation/genomics/v1/python/latest/index.html). Using our `genomics` client, we'll demonstrate fetching a Dataset resource by ID (the [1000 Genomes dataset](http://googlegenomics.readthedocs.org/en/latest/use_cases/discover_public_data/1000_genomes.html) in this case). First, we need to construct a request object.
|
request = genomics.datasets().get(datasetId='10473108253681171589')
|
_____no_output_____
|
Apache-2.0
|
datalab/genomics/Getting started with the Genomics API.ipynb
|
googlegenomics/datalab-examples
|
Next, we'll send this request to the Genomics API by calling the `request.execute()` method.
|
response = request.execute()
|
_____no_output_____
|
Apache-2.0
|
datalab/genomics/Getting started with the Genomics API.ipynb
|
googlegenomics/datalab-examples
|
You will need to enable the Genomics API for your project if you have not done so previously. Click on [this link](https://console.developers.google.com/flows/enableapi?apiid=genomics) to enable the API in your project. The response object returned is simply a Python dictionary. Let's take a look at the properties returned in the response.
|
for entry in response.items():
print "%s => %s" % entry
|
projectId => genomics-public-data
id => 10473108253681171589
createTime => 1970-01-01T00:00:00.000Z
name => 1000 Genomes
|
Apache-2.0
|
datalab/genomics/Getting started with the Genomics API.ipynb
|
googlegenomics/datalab-examples
|
Success! We can see the name of the specified Dataset and a few other pieces of metadata. Accessing other Genomics API resources will follow this same set of steps. The full [list of available resources within the API is here](https://google-api-client-libraries.appspot.com/documentation/genomics/v1/python/latest/index.html). Each resource has details about the different verbs that can be applied (e.g., [Dataset methods](https://google-api-client-libraries.appspot.com/documentation/genomics/v1/python/latest/genomics_v1.datasets.html)). Access Data: In this portion of the notebook, we work through [this same example](https://github.com/googlegenomics/getting-started-with-the-api/tree/master/python), which is also implemented as a python script. First let's define a few constants to use within the examples that follow.
|
dataset_id = '10473108253681171589' # This is the 1000 Genomes dataset ID
sample = 'NA12872'
reference_name = '22'
reference_position = 51003835
|
_____no_output_____
|
Apache-2.0
|
datalab/genomics/Getting started with the Genomics API.ipynb
|
googlegenomics/datalab-examples
|
Get read bases for a sample at a specific position. First find the read group set ID for the sample.
|
request = genomics.readgroupsets().search(
body={'datasetIds': [dataset_id], 'name': sample},
fields='readGroupSets(id)')
read_group_sets = request.execute().get('readGroupSets', [])
if len(read_group_sets) != 1:
raise Exception('Searching for %s didn\'t return '
'the right number of read group sets' % sample)
read_group_set_id = read_group_sets[0]['id']
|
_____no_output_____
|
Apache-2.0
|
datalab/genomics/Getting started with the Genomics API.ipynb
|
googlegenomics/datalab-examples
|
Once we have the read group set ID, lookup the reads at the position in which we are interested.
|
request = genomics.reads().search(
body={'readGroupSetIds': [read_group_set_id],
'referenceName': reference_name,
'start': reference_position,
'end': reference_position + 1,
'pageSize': 1024},
fields='alignments(alignment,alignedSequence)')
reads = request.execute().get('alignments', [])
|
_____no_output_____
|
Apache-2.0
|
datalab/genomics/Getting started with the Genomics API.ipynb
|
googlegenomics/datalab-examples
|
And we print out the results.
|
# Note: This is simplistic - the cigar should be considered for real code
bases = [read['alignedSequence'][
reference_position - int(read['alignment']['position']['position'])]
for read in reads]
print '%s bases on %s at %d are' % (sample, reference_name, reference_position)
from collections import Counter
for base, count in Counter(bases).items():
print '%s: %s' % (base, count)
|
NA12872 bases on 22 at 51003835 are
C: 1
G: 13
|
Apache-2.0
|
datalab/genomics/Getting started with the Genomics API.ipynb
|
googlegenomics/datalab-examples
|
Get variants for a sample at a specific position. First find the call set ID for the sample.
|
request = genomics.callsets().search(
body={'variantSetIds': [dataset_id], 'name': sample},
fields='callSets(id)')
resp = request.execute()
call_sets = resp.get('callSets', [])
if len(call_sets) != 1:
raise Exception('Searching for %s didn\'t return '
'the right number of call sets' % sample)
call_set_id = call_sets[0]['id']
|
_____no_output_____
|
Apache-2.0
|
datalab/genomics/Getting started with the Genomics API.ipynb
|
googlegenomics/datalab-examples
|
Once we have the call set ID, lookup the variants that overlap the position in which we are interested.
|
request = genomics.variants().search(
body={'callSetIds': [call_set_id],
'referenceName': reference_name,
'start': reference_position,
'end': reference_position + 1},
fields='variants(names,referenceBases,alternateBases,calls(genotype))')
variant = request.execute().get('variants', [])[0]
|
_____no_output_____
|
Apache-2.0
|
datalab/genomics/Getting started with the Genomics API.ipynb
|
googlegenomics/datalab-examples
|
And we print out the results.
|
variant_name = variant['names'][0]
genotype = [variant['referenceBases'] if g == 0
else variant['alternateBases'][g - 1]
for g in variant['calls'][0]['genotype']]
print 'the called genotype is %s for %s' % (','.join(genotype), variant_name)
|
the called genotype is G,G for rs131767
|
Apache-2.0
|
datalab/genomics/Getting started with the Genomics API.ipynb
|
googlegenomics/datalab-examples
|
Question 4
|
df = pd.read_csv('data-hw2.csv')
df
plt.figure(figsize=(8,8))
plt.scatter(df['LUNG'], df['CIG'])
plt.xlabel("LUNG DEATHS")
plt.ylabel("CIG SALES")
plt.title("Scatter plot of Lung Cancer Deaths vs. Cigarette Sales")
for i in range(len(df)):
plt.annotate(df.iloc[i]['STATE'], xy=(df.iloc[i]['LUNG'], df.iloc[i]['CIG']))
df.corr()
df_clean = df
df_clean = df_clean.drop([6, 24], axis=0)
df_clean
plt.figure(figsize=(8,8))
plt.scatter(df_clean['LUNG'], df_clean['CIG'])
plt.xlabel("LUNG DEATHS")
plt.ylabel("CIG SALES")
plt.title("Scatter plot of Lung Cancer Deaths vs. Cigarette Sales")
for i in range(len(df_clean)):
plt.annotate(df_clean.iloc[i]['STATE'], xy=(df_clean.iloc[i]['LUNG'], df_clean.iloc[i]['CIG']))
df_clean.corr()
|
_____no_output_____
|
MIT
|
HW2/HW2.ipynb
|
kaahanmotwani/CS361
|
Question 5
|
df_ko = pd.read_csv('KO.csv')
df_pep = pd.read_csv('PEP.csv')
del df_ko['Open'], df_ko['High'], df_ko['Low'], df_ko['Close'], df_ko['Volume']
del df_pep['Open'], df_pep['High'], df_pep['Low'], df_pep['Close'], df_pep['Volume']
df_comb = pd.DataFrame(columns=["Date", "KO Adj Close", "PEP Adj Close"])
df_comb["Date"] = df_ko["Date"]
df_comb["KO Adj Close"] = df_ko["Adj Close"]
df_comb["PEP Adj Close"] = df_pep["Adj Close"]
df_comb.corr()
x_vals = np.array([np.min(df_comb["KO Adj Close"]), np.max(df_comb["KO Adj Close"])])  # span the x-axis (KO) range
x_vals_standardized = (x_vals-df_comb["KO Adj Close"].mean())/df_comb["KO Adj Close"].std(ddof=0)
y_predictions_standardized = df_comb.corr()["KO Adj Close"]["PEP Adj Close"]*x_vals_standardized
y_predictions = y_predictions_standardized*df_comb["PEP Adj Close"].std(ddof=0)+df_comb["PEP Adj Close"].mean()
plt.figure(figsize=(8,8))
plt.scatter(df_comb['KO Adj Close'], df_comb['PEP Adj Close'])
plt.xlabel("KO Daily Adj Close Price")
plt.ylabel("PEP Daily Adj Close Price")
plt.title("Scatter plot of KO Daily Adj Close Price vs. PEP Daily Adj Close Price with prediction line")
plt.plot(x_vals, y_predictions, 'r', linewidth=2)
plt.xlim(35, 60)
plt.ylim(100, 145)
|
_____no_output_____
|
MIT
|
HW2/HW2.ipynb
|
kaahanmotwani/CS361
|
Setup
|
from google.colab import drive
drive.mount('/content/drive')
!ls /content/drive/MyDrive/ColabNotebooks/Transformer
!nvcc --version
!pip3 install timm faiss tqdm numpy
!pip3 install torch==1.10.2+cu113 torchvision==0.11.3+cu113 torchaudio==0.10.2+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
!sudo apt-get install libomp-dev
import torch
print(f'torch.__version__ = {torch.__version__}')
print(f'torch.cuda.is_available() = {torch.cuda.is_available()}')
print(f'torch.cuda.current_device() = {torch.cuda.current_device()}')
print(f'torch.cuda.device(0) = {torch.cuda.device(0)}')
print(f'torch.cuda.device_count() = {torch.cuda.device_count()}')
print(f'torch.cuda.get_device_name(0) = {torch.cuda.get_device_name(0)}')
%cd /content/drive/MyDrive/ColabNotebooks/Transformer/LA-Transformer
|
/content/drive/.shortcut-targets-by-id/19RweVltTTlScqIDv6lHIQzlQezjmyFBN/ColabNotebooks/Transformer/LA-Transformer
|
MIT
|
Transformer/LA_Transformer_Oneshot_clean.ipynb
|
McStevenss/reid-keras-padel
|
Testing
|
from __future__ import print_function
import os
import time
import glob
import random
import zipfile
from itertools import chain
import timm
import numpy as np
import pandas as pd
from PIL import Image
from tqdm.notebook import tqdm
import matplotlib.pyplot as plt
from collections import OrderedDict
from sklearn.model_selection import train_test_split
import torch
import torch.nn as nn
from torch.nn import init
import torch.optim as optim
from torchvision import models
import torch.nn.functional as F
from torch.autograd import Variable
from torch.optim.lr_scheduler import StepLR
from torchvision import datasets, transforms
from torch.utils.data import DataLoader, Dataset
import faiss
# from LATransformer.model import ClassBlock, LATransformer, LATransformerTest
# from LATransformer.utils import save_network, update_summary, get_id
# from LATransformer.metrics import rank1, rank5, rank10, calc_map
from osprey import LATransformerTest
def initilize_device(hardware):
# os.environ['CUDA_VISIBLE_DEVICES']='1'
if hardware == "gpu":
device = "cuda"
# if not device.type == "cpu":
print(f'torch.__version__ = {torch.__version__}')
print(f'torch.cuda.is_available() = {torch.cuda.is_available()}')
print(f'torch.cuda.current_device() = {torch.cuda.current_device()}')
print(f'torch.cuda.device(0) = {torch.cuda.device(0)}')
print(f'torch.cuda.device_count() = {torch.cuda.device_count()}')
print(f'torch.cuda.get_device_name(0) = {torch.cuda.get_device_name(0)}')
elif hardware == "cpu":
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") ## Use if CPU
print("Using cpu")
else:
print("Choose either gpu or cpu")
return None
return device
device = initilize_device("gpu")
|
torch.__version__ = 1.10.2+cu113
torch.cuda.is_available() = True
torch.cuda.current_device() = 0
torch.cuda.device(0) = <torch.cuda.device object at 0x00000256EC04E2C8>
torch.cuda.device_count() = 1
torch.cuda.get_device_name(0) = NVIDIA GeForce GTX 1080
|
MIT
|
Transformer/LA_Transformer_Oneshot_clean.ipynb
|
McStevenss/reid-keras-padel
|
Load Model
|
batch_size = 8
gamma = 0.7
seed = 42
# Load ViT
vit_base = timm.create_model('vit_base_patch16_224', pretrained=True, num_classes=50)
vit_base= vit_base.to(device)
# Create La-Transformer
osprey_model = LATransformerTest(vit_base, lmbd=8).to(device)
# Load LA-Transformer
# name = "la_with_lmbd_8"
# name = "la_with_lmbd_8_12-03"
# save_path = os.path.join('./model',name,'net_best.pth')
name = "oprey_{}".format(8)
output_dir = "model/" + name
save_path = os.path.join(output_dir, "saves", "model_32.pt")
checkpoint = torch.load(save_path)
osprey_model.load_state_dict(checkpoint['model_state_dict'], strict=False)
# # Load LA-Transformer
# name = "old_weights"
# save_path = os.path.join('./model',name,'small_ds_68_map_net_best.pth')
#Transformer\model\old_weights\small_ds_68_map_net_best.pth
# osprey_model.load_state_dict(torch.load(save_path), strict=False)
# model.eval()
transform_query_list = [
transforms.Resize((224,224), interpolation=3),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]
transform_gallery_list = [
transforms.Resize(size=(224,224),interpolation=3), #Image.BICUBIC
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]
data_transforms = {
'query': transforms.Compose( transform_query_list ),
'gallery': transforms.Compose(transform_gallery_list),
}
|
E:\Anaconda\envs\py37\lib\site-packages\torchvision\transforms\transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.
"Argument interpolation should be of type InterpolationMode instead of int. "
|
MIT
|
Transformer/LA_Transformer_Oneshot_clean.ipynb
|
McStevenss/reid-keras-padel
|
Required functions
|
# device = initilize_device("cpu")
# We had to recreate the get_id() func since they assume the pictures are named in a specific manner.
def get_id_padel(img_path):
labels = []
for path, v in img_path:
filename = os.path.basename(path)
label = filename.split('_')[0]
labels.append(int(label))
return labels
def extract_feature(model,dataloaders):
features = torch.FloatTensor()
count = 0
idx = 0
for data in tqdm(dataloaders):
img, label = data
img, label = img.to(device), label.to(device)
output = model(img)
n, c, h, w = img.size()
count += n
features = torch.cat((features, output.detach().cpu()), 0)
idx += 1
return features
def image_loader(data_dir_path):
image_datasets = {}
# data_dir = "data/The_OspreyChallengerSet"
data_dir = data_dir_path
image_datasets['query'] = datasets.ImageFolder(os.path.join(data_dir, 'query'),
data_transforms['query'])
image_datasets['gallery'] = datasets.ImageFolder(os.path.join(data_dir, 'gallery'),
data_transforms['gallery'])
query_loader = DataLoader(dataset = image_datasets['query'], batch_size=batch_size, shuffle=False)
gallery_loader = DataLoader(dataset = image_datasets['gallery'], batch_size=batch_size, shuffle=False)
return query_loader, gallery_loader, image_datasets
def feature_extraction(model, query_loader, gallery_loader):
# Extract Query Features
query_feature = extract_feature(model, query_loader)
# Extract Gallery Features
gallery_feature = extract_feature(model, gallery_loader)
return query_feature, gallery_feature
def get_labels(image_datasets):
#Retrieve labels
gallery_path = image_datasets['gallery'].imgs
query_path = image_datasets['query'].imgs
gallery_label = get_id_padel(gallery_path)
query_label = get_id_padel(query_path)
return gallery_label, query_label
def calc_gelt_feature(query_feature):
concatenated_query_vectors = []
for query in query_feature:
fnorm = torch.norm(query, p=2, dim=1, keepdim=True)*np.sqrt(14)
query_norm = query.div(fnorm.expand_as(query))
concatenated_query_vectors.append(query_norm.view((-1))) # 14*768 -> 10752
return concatenated_query_vectors
def calc_gelt_gallery(gallery_feature):
concatenated_gallery_vectors = []
for gallery in gallery_feature:
fnorm = torch.norm(gallery, p=2, dim=1, keepdim=True) *np.sqrt(14)
gallery_norm = gallery.div(fnorm.expand_as(gallery))
concatenated_gallery_vectors.append(gallery_norm.view((-1))) # 14*768 -> 10752
return concatenated_gallery_vectors
def calc_faiss(concatenated_gallery_vectors, gallery_label):
index = faiss.IndexIDMap(faiss.IndexFlatIP(10752))
index.add_with_ids(np.array([t.numpy() for t in concatenated_gallery_vectors]), np.array(gallery_label).astype('int64')) # original
return index
def search(query: str, k=1):
encoded_query = query.unsqueeze(dim=0).numpy()
top_k = index.search(encoded_query, k)
return top_k
def osprey_detect(data_dir_path, osprey_model):
query_loader, gallery_loader, image_datasets = image_loader(data_dir_path=data_dir_path)
query_feature, gallery_feature = feature_extraction(model=osprey_model, query_loader=query_loader, gallery_loader=gallery_loader)
gallery_label, query_label = get_labels(image_datasets)
concatenated_query_vectors = calc_gelt_feature(query_feature)
concatenated_gallery_vectors = calc_gelt_gallery(gallery_feature)
index = calc_faiss(concatenated_gallery_vectors, gallery_label)
return concatenated_query_vectors, index
concatenated_query_vectors, index = osprey_detect("data/Osprey_eval", osprey_model)
#For each vector in the query vector list
for query in concatenated_query_vectors:
output = search(query)
print(f"Predicted class: {output[1][0][0]} with {output[0][0][0] * 100} % confidence")
## Helper: predict the identity class of a single query vector via the faiss index
def predictClass(queryVector):
    output = search(queryVector)
    print(f"Predicted class: {output[1][0][0]} with {output[0][0][0] * 100} % confidence")
    return output[1][0][0]
|
_____no_output_____
|
MIT
|
Transformer/LA_Transformer_Oneshot_clean.ipynb
|
McStevenss/reid-keras-padel
|
Evaluation
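The helpers `rank1`, `rank5`, `rank10` and `calc_map` used below come from `LATransformer.metrics`. As a rough idea of what a rank-k check could look like, here is a hypothetical sketch against the `(distances, ids)` tuple that `index.search` returns — an assumption for illustration, not the library's actual implementation:
# Hypothetical sketch only - not the LATransformer.metrics implementation.
# `output` is the (distances, ids) tuple returned by faiss via search(query, k).
def rank_k_hit(label, output, k=1):
    top_k_ids = output[1][0][:k]   # ids of the k nearest gallery vectors
    return 1 if label in top_k_ids else 0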
|
#query_loader, gallery_loader, image_datasets = image_loader(data_dir_path="data/The_OspreyChallengerSet")
#load images from folder
query_loader, gallery_loader, image_datasets = image_loader(data_dir_path="data/bim_bam")
#extract features
query_feature, gallery_feature = feature_extraction(model=osprey_model, query_loader=query_loader, gallery_loader=gallery_loader)
#get labels from pictures
gallery_label, query_label = get_labels(image_datasets)
concatenated_query_vectors = calc_gelt_feature(query_feature)
concatenated_gallery_vectors = calc_gelt_gallery(gallery_feature)
index = calc_faiss(concatenated_gallery_vectors, gallery_label)
rank1_score = 0
rank5_score = 0
rank10_score = 0
ap = 0
count = 0
for query, label in zip(concatenated_query_vectors, query_label):
count += 1
label = label
output = search(query, k=10)
# print(output)
rank1_score += rank1(label, output)
rank5_score += rank5(label, output)
rank10_score += rank10(label, output)
print("Correct: {}, Total: {}, Incorrect: {}".format(rank1_score, count, count-rank1_score), end="\r")
ap += calc_map(label, output)
print("Rank1: {}, Rank5: {}, Rank10: {}, mAP: {}".format(rank1_score/len(query_feature),
rank5_score/len(query_feature),
rank10_score/len(query_feature), ap/len(query_feature)))
|
Correct: 1, Total: 1, Incorrect: 0
Correct: 2, Total: 2, Incorrect: 0
Rank1: 1.0, Rank5: 1.0, Rank10: 1.0, mAP: 0.6766666666666666
|
MIT
|
Transformer/LA_Transformer_Oneshot_clean.ipynb
|
McStevenss/reid-keras-padel
|
The Chain at a Fixed Time
Let $X_0, X_1, X_2, \ldots $ be a Markov Chain with state space $S$. We will start by setting up notation that will help us express our calculations compactly.
For $n \ge 0$, let $P_n$ be the distribution of $X_n$. That is,
$$P_n(i) = P(X_n = i), ~~~~ i \in S$$
Then the distribution of $X_0$ is $P_0$. This is called the *initial distribution* of the chain.
For $n \ge 0$ and $j \in S$,
\begin{align*}
P_{n+1}(j) &= P(X_{n+1} = j) \\
&= \sum_{i \in S} P(X_n = i, X_{n+1} = j) \\
&= \sum_{i \in S} P(X_n = i)P(X_{n+1} = j \mid X_n = i) \\
&= \sum_{i \in S} P_n(i)P(X_{n+1} = j \mid X_n = i)
\end{align*}
The conditional probability $P(X_{n+1} = j \mid X_n = i)$ is called a *one-step transition probability at time $n$*.
For many chains such as the random walk, these one-step transition probabilities depend only on the states $i$ and $j$, not on the time $n$. For example, for the random walk,
\begin{equation}
P(X_{n+1} = j \mid X_n = i) =
\begin{cases}
\frac{1}{2} & \text{if } j = i-1 \text{ or } j = i+1 \\
0 & \text{ otherwise}
\end{cases}
\end{equation}
for every $n$. When one-step transition probabilities don't depend on $n$, they are called *stationary* or *time-homogeneous*. All the Markov Chains that we will study in this course have time-homogeneous transition probabilities.
For such a chain, define the *one-step transition probability*
$$P(i, j) = P(X_{n+1} = j \mid X_n = i)$$
The Probability of a Path
Given that the chain starts at $i$, what is the chance that the next three values of the chain are $j, k$, and $l$, in that order? We are looking for
$$P(X_1 = j, X_2 = k, X_3 = l \mid X_0 = i)$$
By repeated use of the multiplication rule and the Markov property, this is
$$P(X_1 = j, X_2 = k, X_3 = l \mid X_0 = i) = P(i, j)P(j, k)P(k, l)$$
In the same way, given that you know the starting point, you can find the probability of any path of finite length by multiplying one-step transition probabilities.
The Distribution of $X_{n+1}$
By our calculation at the start of this section,
\begin{align*}
P_{n+1}(j) &= P(X_{n+1} = j) \\
&= \sum_{i \in S} P_n(i)P(X_{n+1} = j \mid X_n = i) \\
&= \sum_{i \in S} P_n(i)P(i, j)
\end{align*}
The calculation is based on the straightforward observation that for the chain to be at state $j$ at time $n+1$, it had to be at some state $i$ at time $n$ and then get from $i$ to $j$ in one step.
Let's use all this in examples. You will quickly see that the distribution $P_n$ has interesting properties.
Lazy Random Walk on a Circle
Let the state space be five points arranged on a circle. Suppose the process starts at Point 1, and at each step either stays in place with probability 0.5 (and thus is lazy), or moves to one of the two neighboring points with chance 0.25 each, regardless of the other moves. This transition behavior can be summed up in a *transition diagram*:
At every step, the next move is determined by a random choice from among three options and by the chain's current location, not by how it got to that location. So the process is a Markov chain. Let's call it $X_0, X_1, X_2, \ldots $.
By our assumption, the initial distribution $P_0$ puts all the probability on Point 1. It is defined in the cell below. We will be using `prob140` Markov Chain methods based on [Pykov](https://github.com/riccardoscalco/Pykov) written by [Riccardo Scalco](http://riccardoscalco.github.io). Note the use of `states` instead of `values`. Please enter the states in ascending order, for technical reasons that we hope to overcome later in the term.
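Before switching to the `prob140` machinery, here is a minimal plain-numpy sketch of the update $P_{n+1}(j) = \sum_i P_n(i)P(i, j)$ for this lazy walk. It is only an illustration of the formula, not part of the notebook's workflow; the transition matrix is the one described above written out explicitly.
import numpy as np

# One-step transition matrix of the lazy walk on 5 points of a circle.
T = np.array([
    [0.50, 0.25, 0.00, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00, 0.00],
    [0.00, 0.25, 0.50, 0.25, 0.00],
    [0.00, 0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.00, 0.25, 0.50],
])
P0 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])      # chain starts at Point 1

P1 = P0 @ T                                   # distribution at time 1: [0.5, 0.25, 0, 0, 0.25]
P50 = P0 @ np.linalg.matrix_power(T, 50)      # distribution at time 50: close to uniform

# Probability of the path 1, 1, 2, 1, 2 (states are 1-indexed in the text, 0-indexed here).
path_prob = P0[0] * T[0, 0] * T[0, 1] * T[1, 0] * T[0, 1]   # = 0.0078125

print(P1)
print(P50.round(3))
print(path_prob)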
|
s = np.arange(1, 6)
p = [1, 0, 0, 0, 0]
initial = Table().states(s).probability(p)
initial
|
_____no_output_____
|
MIT
|
miscellaneous_notebooks/Markov_Chains/Chain_at_a_Fixed_Time.ipynb
|
dcroce/jupyter-book
|
The transition probabilities are:
- For $2 \le i \le 4$, $P(i, i) = 0.5$ and $P(i, i-1) = 0.25 = P(i, i+1)$.
- $P(1, 1) = 0.5$ and $P(1, 5) = 0.25 = P(1, 2)$.
- $P(5, 5) = 0.5$ and $P(5, 4) = 0.25 = P(5, 1)$.
These probabilities are returned by the function `circle_walk_probs` that takes states $i$ and $j$ as its arguments.
|
def circle_walk_probs(i, j):
if i-j == 0:
return 0.5
elif abs(i-j) == 1:
return 0.25
elif abs(i-j) == 4:
return 0.25
else:
return 0
|
_____no_output_____
|
MIT
|
miscellaneous_notebooks/Markov_Chains/Chain_at_a_Fixed_Time.ipynb
|
dcroce/jupyter-book
|
All the transition probabilities can be captured in a table, in a process analogous to creating a joint distribution table.
|
trans_tbl = Table().states(s).transition_function(circle_walk_probs)
trans_tbl
|
_____no_output_____
|
MIT
|
miscellaneous_notebooks/Markov_Chains/Chain_at_a_Fixed_Time.ipynb
|
dcroce/jupyter-book
|
Just as when we were constructing joint distribution tables, we can better visualize this as a $5 \times 5$ table:
|
circle_walk = trans_tbl.toMarkovChain()
circle_walk
|
_____no_output_____
|
MIT
|
miscellaneous_notebooks/Markov_Chains/Chain_at_a_Fixed_Time.ipynb
|
dcroce/jupyter-book
|
This is called the *transition matrix* of the chain. - For each $i$ and $j$, the $(i, j)$ element of the transition matrix is the one-step transition probability $P(i, j)$.- For each $i$, the $i$th row of the transition matrix consists of the conditional distribution of $X_{n+1}$ given $X_n = i$. Probability of a Path What's the probability of the path 1, 1, 2, 1, 2? That's the path $X_0 = 1, X_1 = 1, X_2 = 2, X_3 = 1, X_4 = 2$. We know that the chain is starting at 1, so the chance of the path is$$1 \cdot P(1, 1)P(1, 2)P(2, 1)P(1, 2) = 0.5 \times 0.25 \times 0.25 \times 0.25 = 0.0078125$$The method `prob_of_path` takes the initial distribution and path as its arguments, and returns the probability of the path:
|
circle_walk.prob_of_path(initial, [1, 1, 2, 1, 2])
|
_____no_output_____
|
MIT
|
miscellaneous_notebooks/Markov_Chains/Chain_at_a_Fixed_Time.ipynb
|
dcroce/jupyter-book
|
Distribution of $X_n$ Remember that the chain starts at 1. So $P_0$, the distribution of $X_0$ is:
|
initial
|
_____no_output_____
|
MIT
|
miscellaneous_notebooks/Markov_Chains/Chain_at_a_Fixed_Time.ipynb
|
dcroce/jupyter-book
|
We know that $P_1$ must place probability 0.5 at Point 1 and 0.25 each the points 2 and 5. This is confirmed by the `distribution` method that applies to a MarkovChain object. Its first argument is the initial distribution, and its second is the number of steps $n$. It returns a distribution object that is the distribution of $X_n$.
|
P_1 = circle_walk.distribution(initial, 1)
P_1
|
_____no_output_____
|
MIT
|
miscellaneous_notebooks/Markov_Chains/Chain_at_a_Fixed_Time.ipynb
|
dcroce/jupyter-book
|
What's the probability that the chain has the value 3 at time 2? That's $P_2(3)$, which we can calculate by conditioning on $X_1$:
$$P_2(3) = \sum_{i=1}^5 P_1(i)P(i, 3)$$
The distribution of $X_1$ is $P_1$, given above. Here are those probabilities in an array:
|
P_1.column('Probability')
|
_____no_output_____
|
MIT
|
miscellaneous_notebooks/Markov_Chains/Chain_at_a_Fixed_Time.ipynb
|
dcroce/jupyter-book
|
The `3` column of the transition matrix gives us, for each $i$, the chance of getting from $i$ to 3 in one step.
|
circle_walk.column('3')
|
_____no_output_____
|
MIT
|
miscellaneous_notebooks/Markov_Chains/Chain_at_a_Fixed_Time.ipynb
|
dcroce/jupyter-book
|
So the probability that the chain has the value 3 at time 2 is $P_2(3)$ which is equal to:
|
sum(P_1.column('Probability')*circle_walk.column('3'))
|
_____no_output_____
|
MIT
|
miscellaneous_notebooks/Markov_Chains/Chain_at_a_Fixed_Time.ipynb
|
dcroce/jupyter-book
|
Similarly, $P_2(2)$ is equal to:
|
sum(P_1.column('Probability')*circle_walk.column('2'))
|
_____no_output_____
|
MIT
|
miscellaneous_notebooks/Markov_Chains/Chain_at_a_Fixed_Time.ipynb
|
dcroce/jupyter-book
|
And so on. The `distribution` method finds all these probabilities for us.
|
P_2 = circle_walk.distribution(initial, 2)
P_2
|
_____no_output_____
|
MIT
|
miscellaneous_notebooks/Markov_Chains/Chain_at_a_Fixed_Time.ipynb
|
dcroce/jupyter-book
|
At time 3, the chain continues to be much more likely to be at 1, 2, or 5 compared to the other two states. That's because it started at Point 1 and is lazy.
|
P_3 = circle_walk.distribution(initial, 3)
P_3
|
_____no_output_____
|
MIT
|
miscellaneous_notebooks/Markov_Chains/Chain_at_a_Fixed_Time.ipynb
|
dcroce/jupyter-book
|
But by time 10, something interesting starts to emerge.
|
P_10 = circle_walk.distribution(initial, 10)
P_10
|
_____no_output_____
|
MIT
|
miscellaneous_notebooks/Markov_Chains/Chain_at_a_Fixed_Time.ipynb
|
dcroce/jupyter-book
|
The chain is almost equally likely to be at any of the five states. By time 50, it seems to have completely forgotten where it started, and is distributed uniformly on the state space.
|
P_50 = circle_walk.distribution(initial, 50)
P_50
|
_____no_output_____
|
MIT
|
miscellaneous_notebooks/Markov_Chains/Chain_at_a_Fixed_Time.ipynb
|
dcroce/jupyter-book
|
As time passes, this chain gets "all mixed up", regardless of where it started. That is perhaps not surprising as the transition probabilities are symmetric over the five states. Let's see what happens when we cut the circle between Points 1 and 5 and lay it out in a line.
Reflecting Random Walk
The state space and transition probabilities remain the same, except when the chain is at the two "edge" states.
- If the chain is at Point 1, then at the next step it either stays there or moves to Point 2 with equal probability: $P(1, 1) = 0.5 = P(1, 2)$.
- If the chain is at Point 5, then at the next step it either stays there or moves to Point 4 with equal probability: $P(5, 5) = 0.5 = P(5, 4)$.
We say that there is *reflection* at the boundaries 1 and 5.
|
def ref_walk_probs(i, j):
if i-j == 0:
return 0.5
elif 2 <= i <= 4:
if abs(i-j) == 1:
return 0.25
else:
return 0
elif i == 1:
if j == 2:
return 0.5
else:
return 0
elif i == 5:
if j == 4:
return 0.5
else:
return 0
trans_tbl = Table().states(s).transition_function(ref_walk_probs)
refl_walk = trans_tbl.toMarkovChain()
print('Transition Matrix')
refl_walk
|
Transition Matrix
|
MIT
|
miscellaneous_notebooks/Markov_Chains/Chain_at_a_Fixed_Time.ipynb
|
dcroce/jupyter-book
|
Let the chain start at Point 1 as it did in the last example. That initial distribution was defined as `initial`. At time 1, therefore, the chain is either at 1 or 2, and at times 2 and 3 it is likely to still be around 1.
|
refl_walk.distribution(initial, 1)
refl_walk.distribution(initial, 3)
|
_____no_output_____
|
MIT
|
miscellaneous_notebooks/Markov_Chains/Chain_at_a_Fixed_Time.ipynb
|
dcroce/jupyter-book
|
But by time 20, the distribution is settling down:
|
refl_walk.distribution(initial, 20)
|
_____no_output_____
|
MIT
|
miscellaneous_notebooks/Markov_Chains/Chain_at_a_Fixed_Time.ipynb
|
dcroce/jupyter-book
|
And by time 100 it has settled into what is called its *steady state*.
|
refl_walk.distribution(initial, 100)
|
_____no_output_____
|
MIT
|
miscellaneous_notebooks/Markov_Chains/Chain_at_a_Fixed_Time.ipynb
|
dcroce/jupyter-book
|
Twitter Sentiment Analysis
|
import twitter
import pandas as pd
import numpy as np
|
_____no_output_____
|
MIT
|
.ipynb_checkpoints/sentiment_analysis_twitter_comments-checkpoint.ipynb
|
adrientalbot/twitter-sentiment-training
|
Source https://towardsdatascience.com/creating-the-twitter-sentiment-analysis-program-in-python-with-naive-bayes-classification-672e5589a7ed Authenticating Twitter API
|
# Authenticating our twitter API credentials
twitter_api = twitter.Api(consumer_key='f2ujCRaUnQJy4PoiZvhRQL4n4',
consumer_secret='EjBSQirf7i83T7CX90D5Qxgs9pTdpIGIsVAhHVs5uvd0iAcw5V',
access_token_key='1272989631404015616-5XMQkx65rKfQU87UWAh40cMf4aCzSq',
access_token_secret='emfWcF8fyfqoyywfPCJnz4jXt6DFXfndro59UK9IMAMgy')
# Test authentication to make sure it was successful
print(twitter_api.VerifyCredentials())
|
{"created_at": "Tue Jun 16 20:29:26 +0000 2020", "default_profile": true, "default_profile_image": true, "id": 1272989631404015616, "id_str": "1272989631404015616", "name": "Nicola Osrin", "profile_background_color": "F5F8FA", "profile_image_url": "http://abs.twimg.com/sticky/default_profile_images/default_profile_normal.png", "profile_image_url_https": "https://abs.twimg.com/sticky/default_profile_images/default_profile_normal.png", "profile_link_color": "1DA1F2", "profile_sidebar_border_color": "C0DEED", "profile_sidebar_fill_color": "DDEEF6", "profile_text_color": "333333", "profile_use_background_image": true, "screen_name": "NicolaOsrin"}
|
MIT
|
.ipynb_checkpoints/sentiment_analysis_twitter_comments-checkpoint.ipynb
|
adrientalbot/twitter-sentiment-training
|
Building the Test Set
|
#We first build the test set, consisting of only 100 tweets for simplicity.
#Note that we can only download 180 tweets every 15min.
def buildTestSet(search_keyword):
try:
tweets_fetched = twitter_api.GetSearch(search_keyword, count = 100)
print("Fetched " + str(len(tweets_fetched)) + " tweets for the term " + search_keyword)
return [{"text":status.text, "label":None} for status in tweets_fetched]
except:
print("Unfortunately, something went wrong..")
return None
#Testing out fetching the test set. The function below prints out the first 5 tweets in our test set.
search_term = input("Enter a search keyword:")
testDataSet = buildTestSet(search_term)
print(testDataSet[0:4])
testDataSet[0]
#df = pd.DataFrame(list())
#df.to_csv('tweetDataFile.csv')
|
_____no_output_____
|
MIT
|
.ipynb_checkpoints/sentiment_analysis_twitter_comments-checkpoint.ipynb
|
adrientalbot/twitter-sentiment-training
|
Building the Training Set
We will be using a downloadable training set consisting of 5,000 tweets. Each tweet has already been labelled with a sentiment (positive, negative, neutral or irrelevant). We use this training set to estimate, for each word, the probability of it appearing in tweets of each sentiment class.
|
#As Twitter doesn't allow the storage of the tweets on personal drives, we have to create a function to download
#the relevant tweets that will be matched to the Tweet IDs and their labels, which we have.
def buildTrainingSet(corpusFile, tweetDataFile, size):
import csv
import time
count = 0
corpus = []
with open(corpusFile,'r') as csvfile:
lineReader = csv.reader(csvfile,delimiter=',', quotechar="\"")
for row in lineReader:
if count <= size:
corpus.append({"tweet_id":row[2], "label":row[1], "topic":row[0]})
count += 1
else:
break
rate_limit = 180
sleep_time = 900/180
trainingDataSet = []
for tweet in corpus:
try:
status = twitter_api.GetStatus(tweet["tweet_id"])
print("Tweet fetched" + status.text)
tweet["text"] = status.text
trainingDataSet.append(tweet)
time.sleep(sleep_time)
except:
continue
# now we write them to the empty CSV file
with open(tweetDataFile,'w') as csvfile:
linewriter = csv.writer(csvfile,delimiter=',',quotechar="\"")
for tweet in trainingDataSet:
try:
linewriter.writerow([tweet["tweet_id"], tweet["text"], tweet["label"], tweet["topic"]])
except Exception as e:
print(e)
return trainingDataSet
#This function is used to download the actual tweets. It takes hours to run and we only need to run it once
#in order to get all 5,000 training tweets. The 'size' parameter below is the number of tweets that we want to
#download. If 5,000 => set size=5,000
'''
corpusFile = "corpus.csv"
tweetDataFile = "tweetDataFile.csv"
trainingData = buildTrainingSet(corpusFile, tweetDataFile, 5000)
'''
#When this code stops running, we will have a tweetDataFile.csv full of the tweets that we downloaded.
#This line counts the number of tweets and their labels in the Corpus.csv file that we originally downloaded
corp = pd.read_csv("corpus.csv", header = 0 , names = ['topic','label', 'tweet_id'] )
corp['label'].value_counts()
#As a check, we look at the first 5 lines in our new tweetDataFile.csv
trainingData_copied = pd.read_csv("tweetDataFile.csv", header = None, names = ['tweet_id', 'text', 'label', 'topic'])
trainingData_copied.head()
len(trainingData_copied)
#We check the number of tweets by each label in our training set
trainingData_copied['label'].value_counts()
df = trainingData_copied.copy()
lst_labels = df['label'].unique()
count_rows_keep = df['label'].value_counts().min()
neutral_df = df[df['label'] == 'neutral'].sample(n= count_rows_keep , random_state = 3)
irrelevant_df = df[df['label'] == 'irrelevant'].sample(n= count_rows_keep , random_state = 2)
negative_df = df[df['label'] == 'negative'].sample(n= count_rows_keep , random_state = 3)
positive_df = df[df['label'] == 'positive'].sample(n= count_rows_keep , random_state = 3)
lst_df = [neutral_df, irrelevant_df, negative_df, positive_df]
trainingData_copied = pd.concat(lst_df)
trainingData_copied['label'].value_counts()
'''
def oversample(df):
    lst_labels = df['label'].unique()
    max_count = df['label'].value_counts().max()
    for x in lst_labels:
        n_extra = max_count - len(df[df['label'] == x])
        if n_extra > 0:
            df = df.append(df[df['label'] == x].sample(n=n_extra, replace=True, random_state=1))
    return df
'''
'''
def undersample(df):
    lst_labels = df['label'].unique()
    count_rows_keep = df['label'].value_counts().min()
    for x in lst_labels:
        if len(df[df['label'] == x]) > count_rows_keep:
            sample = df[df['label'] == x].sample(n= count_rows_keep , random_state = 1)
            index_drop = pd.concat([df[df['label'] == x], sample]).drop_duplicates(keep=False).index
            df = df.drop(index_drop)
    return df
'''
trainingData_copied = trainingData_copied.to_dict('records')
|
_____no_output_____
|
MIT
|
.ipynb_checkpoints/sentiment_analysis_twitter_comments-checkpoint.ipynb
|
adrientalbot/twitter-sentiment-training
|
Pre-processing Here we use the NLTK library to filter for keywords and remove irrelevant words in tweets. We also remove punctuation and things like images (emojis) as they cannot be classified using this model.
|
import re #a library that makes parsing strings and modifying them more efficient
from nltk.tokenize import word_tokenize
from string import punctuation
from nltk.corpus import stopwords
import nltk #Natural Processing Toolkit that takes care of any processing that we need to perform on text
#to change its form or extract certain components from it.
#nltk.download('popular') #We need this if certain nltk libraries are not installed.
class PreProcessTweets:
def __init__(self):
self._stopwords = set(stopwords.words('english') + list(punctuation) + ['AT_USER','URL'])
def processTweets(self, list_of_tweets):
processedTweets=[]
for tweet in list_of_tweets:
processedTweets.append((self._processTweet(tweet["text"]),tweet["label"]))
return processedTweets
def _processTweet(self, tweet):
tweet = tweet.lower() # convert text to lower-case
tweet = re.sub('((www\.[^\s]+)|(https?://[^\s]+))', 'URL', tweet) # remove URLs
tweet = re.sub('@[^\s]+', 'AT_USER', tweet) # remove usernames
tweet = re.sub(r'#([^\s]+)', r'\1', tweet) # remove the # in #hashtag
        tweet = word_tokenize(tweet) # split the tweet into individual word tokens
return [word for word in tweet if word not in self._stopwords]
#Here we call the function to pre-process both our training and our test set.
tweetProcessor = PreProcessTweets()
preprocessedTrainingSet = tweetProcessor.processTweets(trainingData_copied)
preprocessedTestSet = tweetProcessor.processTweets(testDataSet)
|
_____no_output_____
|
MIT
|
.ipynb_checkpoints/sentiment_analysis_twitter_comments-checkpoint.ipynb
|
adrientalbot/twitter-sentiment-training
|
Building the Naive Bayes Classifier
We apply a classifier based on Bayes' Theorem, hence the name. It lets us compute the posterior probability of an event (here, the tweet's sentiment being positive, neutral or negative) from probabilities we already know. The posterior probability is calculated as follows:
$$P(A|B) = \frac{P(B|A)\times P(A)}{P(B)}$$
The final sentiment is assigned to the class with the highest posterior probability for the tweet. To read more about the Bayes Classifier in the context of text classification: https://nlp.stanford.edu/IR-book/html/htmledition/naive-bayes-text-classification-1.html
Build the vocabulary
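As a toy illustration of that decision rule (made-up numbers, independent of the nltk classifier trained below): multiply each class prior by the likelihood of every observed feature and pick the class with the largest product.
# Toy sketch with made-up numbers - not the nltk implementation.
priors = {'positive': 0.5, 'negative': 0.5}
# Assumed P(word present | class) for two vocabulary words
likelihoods = {
    'positive': {'contains(great)': 0.30, 'contains(bad)': 0.05},
    'negative': {'contains(great)': 0.04, 'contains(bad)': 0.40},
}
tweet_features = {'contains(great)': True, 'contains(bad)': False}

scores = {}
for label in priors:
    score = priors[label]
    for feat, present in tweet_features.items():
        p = likelihoods[label][feat]
        score *= p if present else (1 - p)
    scores[label] = score
print(max(scores, key=scores.get), scores)   # 'positive' wins for this toy tweet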
|
#Here we attempt to build a vocabulary (a list of words) of all words present in the training set.
import nltk
def buildVocabulary(preprocessedTrainingData):
all_words = []
for (words, sentiment) in preprocessedTrainingData:
all_words.extend(words)
wordlist = nltk.FreqDist(all_words)
word_features = wordlist.keys()
return word_features
#This function generates a list of all words (all_words) and then turns it into a frequency distribution (wordlist)
#word_features is the list of distinct words, i.e. the keys of that frequency distribution.
|
_____no_output_____
|
MIT
|
.ipynb_checkpoints/sentiment_analysis_twitter_comments-checkpoint.ipynb
|
adrientalbot/twitter-sentiment-training
|
Matching tweets against our vocabulary
Here we go through all the words in the training set (i.e. our word_features list), compare each one against the tweet at hand, and associate a boolean with the word as follows:
- label 1 (true): the vocabulary word occurs in the tweet
- label 0 (false): the vocabulary word does not occur in the tweet
|
def extract_features(tweet):
tweet_words = set(tweet)
features = {}
for word in word_features:
features['contains(%s)' % word] = (word in tweet_words)
return features
|
_____no_output_____
|
MIT
|
.ipynb_checkpoints/sentiment_analysis_twitter_comments-checkpoint.ipynb
|
adrientalbot/twitter-sentiment-training
|
Building our feature vector
|
word_features = buildVocabulary(preprocessedTrainingSet)
trainingFeatures = nltk.classify.apply_features(extract_features, preprocessedTrainingSet)
|
_____no_output_____
|
MIT
|
.ipynb_checkpoints/sentiment_analysis_twitter_comments-checkpoint.ipynb
|
adrientalbot/twitter-sentiment-training
|
For each tweet, the feature vector records whether it contains each word in the training-data vocabulary, together with the tweet's label.
We feed these feature vectors to the Naive Bayes Classifier, which combines the prior probability that a randomly chosen tweet carries a certain label with the likelihood of the observed words given that label to compute the posterior probability of each label.
Train the Naive Bayes Classifier
|
#This line trains our Bayes Classifier
NBayesClassifier = nltk.NaiveBayesClassifier.train(trainingFeatures)
|
_____no_output_____
|
MIT
|
.ipynb_checkpoints/sentiment_analysis_twitter_comments-checkpoint.ipynb
|
adrientalbot/twitter-sentiment-training
|
Test Classifier
|
#We now run the classifier and test it on 100 tweets previously downloaded in the test set, on our specified keyword.
NBResultLabels = [NBayesClassifier.classify(extract_features(tweet[0])) for tweet in preprocessedTestSet]
# get the majority vote
if NBResultLabels.count('positive') > NBResultLabels.count('negative'):
print("Overall Positive Sentiment")
print("Positive Sentiment Percentage = " + str(100*NBResultLabels.count('positive')/len(NBResultLabels)) + "%")
else:
print("Overall Negative Sentiment")
print("Negative Sentiment Percentage = " + str(100*NBResultLabels.count('negative')/len(NBResultLabels)) + "%")
print("Positive Sentiment Percentage = " + str(100*NBResultLabels.count('positive')/len(NBResultLabels)) + "%")
print("Number of negative comments = " + str(NBResultLabels.count('negative')))
print("Number of positive comments = " + str(NBResultLabels.count('positive')))
print("Number of neutral comments = " + str(NBResultLabels.count('neutral')))
print("Number of irrelevant comments = " + str(NBResultLabels.count('irrelevant')))
len(preprocessedTestSet)
import plotly.graph_objects as go
sentiment = ["Negative", "Positive", "Neutral", "Irrelevant"]
counts = [NBResultLabels.count('negative'), NBResultLabels.count('positive'),
          NBResultLabels.count('neutral'), NBResultLabels.count('irrelevant')]
fig = go.Figure([go.Bar(x=sentiment, y=counts)])
fig.update_layout(template='simple_white',
                  title_text='Twitter Sentiment Results for the Chosen Keyword',
                  yaxis=dict(
                      title='Number of tweets',  # bar heights are raw counts, not percentages
                      titlefont_size=16,
                      tickfont_size=14,),
                 )
fig.show()
|
_____no_output_____
|
MIT
|
.ipynb_checkpoints/sentiment_analysis_twitter_comments-checkpoint.ipynb
|
adrientalbot/twitter-sentiment-training
|
This script maps the 2012 Galway traffic data (Bridge 1) onto the target CSV schema.
|
#python list to store csv data as mapping suggest
#Site No Dataset Survey Company Client Project Reference Method of Survey Address Latitude Longtitude Easting Northing Date From Date To Time From Time To Observations Weather Junction Type Vehicle Type Direction Count
#Site No,Dataset,Survey Company,Client,Project Reference,Method of Survey,Address,Latitude,Longtitude,Easting,Northing,Date From,Date To,Time From,Time To,Observations,Weather,Junction Type,Vehicle Type,Direction,Count
header=["Site No","Dataset","Survey Company","Client","Project Reference","Method of Survey","Address","Latitude","Longtitude","Easting","Northing",
"Date From","Date To","Time From","Time To","Observations","Weather","Junction Type","Vehicle Type","Direction","Count"]
full_data_template = ["","Galway 2016 Br. 1","Idaso Ltd.","Galway City Council","2016 Annual Survey","JTC","Quincentenary Bridge",53.282696,-9.06065,495956.4,5903720.6,"","","","","Nothing to report","Sunny and generally dry but there were some light showers","Link","","",""]
data_template = ["","Galway 2012 Br. 1","","Galway City Council","","","Quincentenary Bridge",53.282696,-9.06065,495956.4,5903720.6,"","","","","","","Link","","",""]
directions_alphabet = ["", "", "", "", "", "", "A TO F", "A TO E", "A TO D", "A TO C", "A TO B", "A TO A", "B TO A", "B TO F", "B TO E", "B TO D", "B TO C", "B TO B", "C TO B", "C TO A", "C TO F", "C TO E", "C TO D", "C TO C", "D TO C", "D TO B", "D TO A", "D TO F", "D TO E", "D TO D", "E TO D", "E TO C", "E TO B", "E TO A", "E TO F", "E TO E", "F TO E", "F TO D", "F TO C", "F TO B", "F TO A", "F TO F"]
outputfile_name="data/2012/mapped-final/bridge1_2012_eastbound_verified.csv"
vich_type = ["Motorcycles","Cars","LGV","HGV","Buses"]
directions = ["Westbound","Eastbound"]
counts_in_rows = [3,5,7,9,11]
#times_hourly = ["00:00","01:00","02:00","03:00","04:00","05:00","06:00","07:00","08:00","08:00","09:00","10:00","11:00"]
#Read csv file data row by row
#this file wil only fill sections (0,11,12,13,14,19,20,21)
import csv
with open('data/2012/refined/Br1_Eastbound_2012.csv', 'rb') as source:
#write data again acoording to the schema
#import csv
with open(outputfile_name, 'w+') as output:
csv_sourcereader = csv.reader(source, delimiter=',', quotechar='\"')
outputwriter = csv.writer(output, delimiter=',', quotechar='\"')
#putting the header
outputwriter.writerow(header)
#counter to scape file headers
c = 0
#list to get all possible readings
quinque_data = []
#csv reader object to list
sourcereader = list(csv_sourcereader)
for r in xrange (0,len(sourcereader)):
#print ', '.join(row)
print sourcereader[r]
import copy
#lget both possible directions (A-B, B-A)
#data_A_B = copy.deepcopy(data_template)
#data_B_A = copy.deepcopy(data_template)
data = copy.deepcopy(data_template)
#print data
if c > 1 :
for x in xrange(0,5):
#a-b
#data_A_B[0]=row[0] # Site NO
#data_A_B[11]=row[2] # date from
#data_A_B[12]=row[2] # date to
#data_A_B[13]=row[3] # time from
#data_A_B[14]=row[4] # time to
#data_A_B[18]=row[5] # Vehicle Type
#b-a
#data_B_A[0]=row[0] # Site NO
#data_B_A[11]=row[2] # date from
#data_B_A[12]=row[2] # date to
#data_B_A[13]=row[3] # time from
#data_B_A[14]=row[4] # time to
#data_B_A[18]=row[5] # Vehicle Type
data[0]="" # Site NO
data[11]=sourcereader[r][0] # date from
data[12]=sourcereader[r][0] # date to
data[13]="\'"+str(sourcereader[r][1]) # time from
#last one to avoid index out range
if sourcereader[r][1] != "23:00":
data[14]="\'"+str(sourcereader[r+1][1]) # time to
elif sourcereader[r][1] == "23:00":
data[14]="\'24:00" # time to
data[18]=vich_type[x] # Vehicle Type
data[19]=sourcereader[r][13] # direction
data[20]=sourcereader[r][counts_in_rows[x]] # count
#appending data row to the 5 rows batch
quinque_data.append(copy.deepcopy(data))
for data_row in quinque_data:
outputwriter.writerow(data_row)
c = c + 1
#print data
#del data_B_A [:]
#del data_A_B[:]
del data[:]
del quinque_data [:]
|
['Date From', 'Time', 'Total', 'Bin 1', 'Bin 1', 'Bin 2', 'Bin 2', 'Bin 3', 'Bin 3', 'Bin 4', 'Bin 4', 'Bin 5', 'Bin 5', 'dir']
['12/11/12', 'Begin', 'Vol.', 'Motorcycles', '%', 'Cars', '%', 'LGV', '%', 'HGV', '%', 'Buses', '%', 'Eastbound']
['12/11/12', '00:00', '98', '1', '1.02', '92', '93.88', '3', '3.06', '2', '2.04', '0', '0', 'Eastbound']
['12/11/12', '01:00', '41', '0', '0', '34', '82.93', '3', '7.32', '4', '9.76', '0', '0', 'Eastbound']
['12/11/12', '02:00', '22', '0', '0', '15', '68.18', '3', '13.64', '4', '18.18', '0', '0', 'Eastbound']
['12/11/12', '03:00', '35', '0', '0', '33', '94.29', '1', '2.86', '1', '2.86', '0', '0', 'Eastbound']
['12/11/12', '04:00', '61', '1', '1.64', '44', '72.13', '12', '19.67', '4', '6.56', '0', '0', 'Eastbound']
['12/11/12', '05:00', '172', '4', '2.33', '137', '79.65', '17', '9.88', '13', '7.56', '1', '0.58', 'Eastbound']
['12/11/12', '06:00', '492', '4', '0.81', '437', '88.82', '31', '6.3', '20', '4.07', '0', '0', 'Eastbound']
['12/11/12', '07:00', '1107', '12', '1.08', '979', '88.44', '52', '4.7', '64', '5.78', '0', '0', 'Eastbound']
['12/11/12', '08:00', '1593', '25', '1.57', '1423', '89.33', '48', '3.01', '97', '6.09', '0', '0', 'Eastbound']
['12/11/12', '09:00', '1286', '26', '2.02', '1147', '89.19', '37', '2.88', '74', '5.75', '2', '0.16', 'Eastbound']
['12/11/12', '10:00', '1054', '18', '1.71', '892', '84.63', '72', '6.83', '72', '6.83', '0', '0', 'Eastbound']
['12/11/12', '11:00', '1041', '10', '0.96', '893', '85.78', '69', '6.63', '66', '6.34', '3', '0.29', 'Eastbound']
['12/11/12', '12:00', '1100', '14', '1.27', '946', '86', '70', '6.36', '69', '6.27', '1', '0.09', 'Eastbound']
['12/11/12', '13:00', '1084', '5', '0.46', '961', '88.65', '51', '4.7', '64', '5.9', '3', '0.28', 'Eastbound']
['12/11/12', '14:00', '887', '8', '0.9', '764', '86.13', '51', '5.75', '63', '7.1', '1', '0.11', 'Eastbound']
['12/11/12', '15:00', '1217', '17', '1.4', '1052', '86.44', '76', '6.24', '72', '5.92', '0', '0', 'Eastbound']
['12/11/12', '16:00', '1318', '15', '1.14', '1182', '89.68', '59', '4.48', '61', '4.63', '1', '0.08', 'Eastbound']
['12/11/12', '17:00', '1213', '13', '1.07', '1113', '91.76', '38', '3.13', '48', '3.96', '1', '0.08', 'Eastbound']
['12/11/12', '18:00', '1055', '10', '0.95', '965', '91.47', '33', '3.13', '46', '4.36', '1', '0.09', 'Eastbound']
['12/11/12', '19:00', '764', '6', '0.79', '692', '90.58', '38', '4.97', '28', '3.66', '0', '0', 'Eastbound']
['12/11/12', '20:00', '665', '0', '0', '612', '92.03', '25', '3.76', '28', '4.21', '0', '0', 'Eastbound']
['12/11/12', '21:00', '536', '2', '0.37', '490', '91.42', '25', '4.66', '18', '3.36', '1', '0.19', 'Eastbound']
['12/11/12', '22:00', '321', '1', '0.31', '295', '91.9', '16', '4.98', '9', '2.8', '0', '0', 'Eastbound']
['12/11/12', '23:00', '209', '0', '0', '194', '92.82', '8', '3.83', '4', '1.91', '3', '1.44', 'Eastbound']
['13/11/12', '00:00', '82', '0', '0', '79', '96.34', '3', '3.66', '0', '0', '0', '0', 'Eastbound']
['13/11/12', '01:00', '34', '0', '0', '31', '91.18', '2', '5.88', '0', '0', '1', '2.94', 'Eastbound']
['13/11/12', '02:00', '21', '0', '0', '17', '80.95', '2', '9.52', '2', '9.52', '0', '0', 'Eastbound']
['13/11/12', '03:00', '37', '0', '0', '29', '78.38', '4', '10.81', '3', '8.11', '1', '2.7', 'Eastbound']
['13/11/12', '04:00', '62', '1', '1.61', '40', '64.52', '19', '30.65', '1', '1.61', '1', '1.61', 'Eastbound']
['13/11/12', '05:00', '144', '1', '0.69', '119', '82.64', '20', '13.89', '4', '2.78', '0', '0', 'Eastbound']
['13/11/12', '06:00', '431', '2', '0.46', '389', '90.26', '31', '7.19', '8', '1.86', '1', '0.23', 'Eastbound']
['13/11/12', '07:00', '1189', '6', '0.5', '1092', '91.84', '51', '4.29', '38', '3.2', '2', '0.17', 'Eastbound']
['13/11/12', '08:00', '1659', '24', '1.45', '1547', '93.25', '25', '1.51', '60', '3.62', '3', '0.18', 'Eastbound']
['13/11/12', '09:00', '1407', '15', '1.07', '1250', '88.84', '64', '4.55', '76', '5.4', '2', '0.14', 'Eastbound']
['13/11/12', '10:00', '1095', '15', '1.37', '930', '84.93', '88', '8.04', '61', '5.57', '1', '0.09', 'Eastbound']
['13/11/12', '11:00', '1037', '21', '2.03', '875', '84.38', '74', '7.14', '60', '5.79', '7', '0.68', 'Eastbound']
['13/11/12', '12:00', '1075', '4', '0.37', '937', '87.16', '69', '6.42', '63', '5.86', '2', '0.19', 'Eastbound']
['13/11/12', '13:00', '1074', '11', '1.02', '951', '88.55', '50', '4.66', '59', '5.49', '3', '0.28', 'Eastbound']
['13/11/12', '14:00', '1159', '16', '1.38', '1008', '86.97', '71', '6.13', '62', '5.35', '2', '0.17', 'Eastbound']
['13/11/12', '15:00', '1309', '16', '1.22', '1146', '87.55', '75', '5.73', '72', '5.5', '0', '0', 'Eastbound']
['13/11/12', '16:00', '1411', '28', '1.98', '1241', '87.95', '75', '5.32', '66', '4.68', '1', '0.07', 'Eastbound']
['13/11/12', '17:00', '1287', '10', '0.78', '1203', '93.47', '21', '1.63', '53', '4.12', '0', '0', 'Eastbound']
['13/11/12', '18:00', '1233', '11', '0.89', '1164', '94.4', '16', '1.3', '42', '3.41', '0', '0', 'Eastbound']
['13/11/12', '19:00', '792', '4', '0.51', '719', '90.78', '39', '4.92', '29', '3.66', '1', '0.13', 'Eastbound']
['13/11/12', '20:00', '744', '3', '0.4', '678', '91.13', '33', '4.44', '30', '4.03', '0', '0', 'Eastbound']
['13/11/12', '21:00', '607', '1', '0.16', '574', '94.56', '15', '2.47', '16', '2.64', '1', '0.16', 'Eastbound']
['13/11/12', '22:00', '362', '2', '0.55', '331', '91.44', '17', '4.7', '11', '3.04', '1', '0.28', 'Eastbound']
['13/11/12', '23:00', '202', '0', '0', '188', '93.07', '8', '3.96', '6', '2.97', '0', '0', 'Eastbound']
['14/11/12', '00:00', '95', '0', '0', '90', '94.74', '4', '4.21', '1', '1.05', '0', '0', 'Eastbound']
['14/11/12', '01:00', '39', '0', '0', '36', '92.31', '2', '5.13', '1', '2.56', '0', '0', 'Eastbound']
['14/11/12', '02:00', '17', '0', '0', '14', '82.35', '2', '11.76', '1', '5.88', '0', '0', 'Eastbound']
['14/11/12', '03:00', '25', '0', '0', '23', '92', '2', '8', '0', '0', '0', '0', 'Eastbound']
['14/11/12', '04:00', '45', '0', '0', '27', '60', '14', '31.11', '3', '6.67', '1', '2.22', 'Eastbound']
['14/11/12', '05:00', '147', '1', '0.68', '126', '85.71', '15', '10.2', '5', '3.4', '0', '0', 'Eastbound']
['14/11/12', '06:00', '420', '2', '0.48', '370', '88.1', '28', '6.67', '19', '4.52', '1', '0.24', 'Eastbound']
['14/11/12', '07:00', '1108', '12', '1.08', '990', '89.35', '52', '4.69', '51', '4.6', '3', '0.27', 'Eastbound']
['14/11/12', '08:00', '1598', '24', '1.5', '1468', '91.86', '33', '2.07', '73', '4.57', '0', '0', 'Eastbound']
['14/11/12', '09:00', '1465', '25', '1.98', '1344', '90.43', '26', '2.06', '69', '5.45', '1', '0.08', 'Eastbound']
['14/11/12', '10:00', '995', '13', '1.31', '839', '84.32', '74', '7.44', '66', '6.63', '3', '0.3', 'Eastbound']
['14/11/12', '11:00', '982', '20', '2.04', '844', '85.95', '70', '7.13', '43', '4.38', '5', '0.51', 'Eastbound']
['14/11/12', '12:00', '1148', '11', '0.96', '981', '85.45', '86', '7.49', '67', '5.84', '3', '0.26', 'Eastbound']
['14/11/12', '13:00', '1185', '13', '1.1', '1026', '86.58', '78', '6.58', '64', '5.4', '4', '0.34', 'Eastbound']
['14/11/12', '14:00', '1202', '11', '0.92', '1058', '88.02', '71', '5.91', '60', '4.99', '2', '0.17', 'Eastbound']
['14/11/12', '15:00', '1389', '16', '1.24', '1224', '87.2', '62', '4.81', '84', '6.52', '3', '0.23', 'Eastbound']
['14/11/12', '16:00', '1549', '11', '0.76', '1404', '89.99', '52', '3.59', '77', '5.31', '5', '0.35', 'Eastbound']
['14/11/12', '17:00', '1517', '20', '1.41', '1414', '92.73', '15', '1.06', '67', '4.73', '1', '0.07', 'Eastbound']
['14/11/12', '18:00', '1062', '12', '1.39', '964', '88.63', '32', '3.71', '53', '6.15', '1', '0.12', 'Eastbound']
['14/11/12', '19:00', '914', '4', '0.44', '822', '89.93', '46', '5.03', '42', '4.6', '0', '0', 'Eastbound']
['14/11/12', '20:00', '775', '4', '0.52', '706', '91.1', '30', '3.87', '35', '4.52', '0', '0', 'Eastbound']
['14/11/12', '21:00', '619', '2', '0.32', '569', '91.92', '26', '4.2', '21', '3.39', '1', '0.16', 'Eastbound']
['14/11/12', '22:00', '373', '0', '0', '351', '94.1', '14', '3.75', '8', '2.14', '0', '0', 'Eastbound']
['14/11/12', '23:00', '211', '0', '0', '199', '94.31', '8', '3.79', '3', '1.42', '1', '0.47', 'Eastbound']
['15/11/12', '00:00', '122', '0', '0', '112', '91.8', '4', '3.28', '4', '3.28', '2', '1.64', 'Eastbound']
['15/11/12', '01:00', '57', '0', '0', '53', '92.98', '3', '5.26', '1', '1.75', '0', '0', 'Eastbound']
['15/11/12', '02:00', '48', '1', '2.08', '43', '89.58', '3', '6.25', '1', '2.08', '0', '0', 'Eastbound']
['15/11/12', '03:00', '42', '0', '0', '33', '78.57', '7', '16.67', '1', '2.38', '1', '2.38', 'Eastbound']
['15/11/12', '04:00', '64', '1', '1.56', '47', '73.44', '13', '20.31', '2', '3.13', '1', '1.56', 'Eastbound']
['15/11/12', '05:00', '153', '0', '0', '125', '81.7', '21', '13.73', '6', '3.92', '1', '0.65', 'Eastbound']
['15/11/12', '06:00', '423', '1', '0.24', '373', '88.18', '36', '8.51', '12', '2.84', '1', '0.24', 'Eastbound']
['15/11/12', '07:00', '1104', '13', '1.18', '972', '88.04', '64', '5.8', '54', '4.89', '1', '0.09', 'Eastbound']
['15/11/12', '08:00', '1629', '19', '1.17', '1493', '91.65', '42', '2.58', '75', '4.6', '0', '0', 'Eastbound']
['15/11/12', '09:00', '1227', '15', '1.22', '1102', '89.81', '49', '3.99', '60', '4.89', '1', '0.08', 'Eastbound']
['15/11/12', '10:00', '997', '3', '0.3', '863', '86.56', '90', '9.03', '39', '3.91', '2', '0.2', 'Eastbound']
['15/11/12', '11:00', '1040', '10', '0.96', '879', '84.52', '96', '9.23', '52', '5', '3', '0.29', 'Eastbound']
['15/11/12', '12:00', '1093', '17', '1.56', '938', '85.82', '71', '6.5', '66', '6.04', '1', '0.09', 'Eastbound']
['15/11/12', '13:00', '1143', '13', '1.14', '1002', '87.66', '77', '6.74', '50', '4.37', '1', '0.09', 'Eastbound']
['15/11/12', '14:00', '1147', '11', '0.96', '1000', '87.18', '76', '6.63', '59', '5.14', '1', '0.09', 'Eastbound']
['15/11/12', '15:00', '1208', '6', '0.5', '1071', '88.66', '71', '5.88', '57', '4.72', '3', '0.25', 'Eastbound']
['15/11/12', '16:00', '1425', '17', '1.19', '1262', '88.56', '76', '5.33', '63', '4.42', '7', '0.49', 'Eastbound']
['15/11/12', '17:00', '1338', '11', '0.82', '1197', '89.46', '73', '5.46', '56', '4.19', '1', '0.07', 'Eastbound']
['15/11/12', '18:00', '1079', '11', '1.02', '968', '89.71', '58', '5.38', '38', '3.52', '4', '0.37', 'Eastbound']
['15/11/12', '19:00', '893', '3', '0.34', '819', '91.71', '46', '5.15', '25', '2.8', '0', '0', 'Eastbound']
['15/11/12', '20:00', '800', '2', '0.25', '739', '92.38', '28', '3.5', '31', '3.88', '0', '0', 'Eastbound']
['15/11/12', '21:00', '581', '0', '0', '533', '91.74', '28', '4.82', '20', '3.44', '0', '0', 'Eastbound']
['15/11/12', '22:00', '427', '1', '0.23', '392', '91.8', '25', '5.85', '9', '2.11', '0', '0', 'Eastbound']
['15/11/12', '23:00', '214', '0', '0', '201', '93.93', '8', '3.74', '5', '2.34', '0', '0', 'Eastbound']
['16/11/12', '00:00', '116', '0', '0', '105', '90.52', '10', '8.62', '1', '0.86', '0', '0', 'Eastbound']
['16/11/12', '01:00', '73', '0', '0', '70', '95.89', '3', '4.11', '0', '0', '0', '0', 'Eastbound']
['16/11/12', '02:00', '60', '0', '0', '46', '76.67', '9', '15', '5', '8.33', '0', '0', 'Eastbound']
['16/11/12', '03:00', '62', '1', '1.61', '51', '82.26', '8', '12.9', '2', '3.23', '0', '0', 'Eastbound']
['16/11/12', '04:00', '66', '0', '0', '44', '66.67', '19', '28.79', '3', '4.55', '0', '0', 'Eastbound']
['16/11/12', '05:00', '150', '2', '1.33', '124', '82.67', '19', '12.67', '4', '2.67', '1', '0.67', 'Eastbound']
['16/11/12', '06:00', '381', '1', '0.26', '343', '90.03', '27', '7.09', '9', '2.36', '1', '0.26', 'Eastbound']
['16/11/12', '07:00', '1036', '8', '0.77', '921', '88.9', '60', '5.79', '44', '4.25', '3', '0.29', 'Eastbound']
['16/11/12', '08:00', '1590', '27', '1.7', '1417', '89.12', '78', '4.91', '65', '4.09', '3', '0.19', 'Eastbound']
['16/11/12', '09:00', '1350', '13', '0.96', '1184', '87.7', '82', '6.07', '69', '5.11', '2', '0.15', 'Eastbound']
['16/11/12', '10:00', '1067', '8', '0.75', '932', '87.35', '74', '6.94', '47', '4.4', '6', '0.56', 'Eastbound']
['16/11/12', '11:00', '1179', '13', '1.1', '1024', '86.85', '85', '7.21', '55', '4.66', '2', '0.17', 'Eastbound']
['16/11/12', '12:00', '1225', '15', '1.22', '1058', '86.37', '84', '6.86', '63', '5.14', '5', '0.41', 'Eastbound']
['16/11/12', '13:00', '1328', '15', '1.13', '1167', '87.88', '74', '5.57', '61', '4.59', '11', '0.83', 'Eastbound']
['16/11/12', '14:00', '1152', '16', '1.39', '1025', '88.98', '34', '2.95', '77', '6.68', '0', '0', 'Eastbound']
['16/11/12', '15:00', '1212', '23', '1.9', '1083', '89.36', '19', '1.57', '87', '7.18', '0', '0', 'Eastbound']
['16/11/12', '16:00', '1485', '16', '1.08', '1314', '88.48', '83', '5.59', '63', '4.24', '9', '0.61', 'Eastbound']
['16/11/12', '17:00', '1429', '19', '1.33', '1280', '89.57', '58', '4.06', '70', '4.9', '2', '0.14', 'Eastbound']
['16/11/12', '18:00', '1039', '8', '0.77', '946', '91.05', '43', '4.14', '40', '3.85', '2', '0.19', 'Eastbound']
['16/11/12', '19:00', '854', '5', '0.59', '774', '90.63', '36', '4.22', '38', '4.45', '1', '0.12', 'Eastbound']
['16/11/12', '20:00', '727', '3', '0.41', '675', '92.85', '28', '3.85', '21', '2.89', '0', '0', 'Eastbound']
['16/11/12', '21:00', '498', '0', '0', '472', '94.78', '14', '2.81', '12', '2.41', '0', '0', 'Eastbound']
['16/11/12', '22:00', '297', '2', '0.67', '278', '93.6', '10', '3.37', '6', '2.02', '1', '0.34', 'Eastbound']
['16/11/12', '23:00', '190', '0', '0', '174', '91.58', '9', '4.74', '7', '3.68', '0', '0', 'Eastbound']
['17/11/12', '00:00', '145', '0', '0', '131', '90.34', '7', '4.83', '7', '4.83', '0', '0', 'Eastbound']
['17/11/12', '01:00', '89', '0', '0', '83', '93.26', '4', '4.49', '2', '2.25', '0', '0', 'Eastbound']
['17/11/12', '02:00', '53', '0', '0', '45', '84.91', '4', '7.55', '3', '5.66', '1', '1.89', 'Eastbound']
['17/11/12', '03:00', '55', '0', '0', '47', '85.45', '5', '9.09', '1', '1.82', '2', '3.64', 'Eastbound']
['17/11/12', '04:00', '85', '2', '2.35', '70', '82.35', '10', '11.76', '2', '2.35', '1', '1.18', 'Eastbound']
['17/11/12', '05:00', '95', '1', '1.05', '77', '81.05', '11', '11.58', '6', '6.32', '0', '0', 'Eastbound']
['17/11/12', '06:00', '159', '0', '0', '131', '82.39', '19', '11.95', '9', '5.66', '0', '0', 'Eastbound']
['17/11/12', '07:00', '289', '1', '0.35', '239', '82.7', '36', '12.46', '11', '3.81', '2', '0.69', 'Eastbound']
['17/11/12', '08:00', '572', '6', '1.05', '491', '85.84', '54', '9.44', '21', '3.67', '0', '0', 'Eastbound']
['17/11/12', '09:00', '1007', '10', '0.99', '891', '88.48', '65', '6.45', '38', '3.77', '3', '0.3', 'Eastbound']
['17/11/12', '10:00', '1053', '10', '0.95', '929', '88.22', '56', '5.32', '57', '5.41', '1', '0.09', 'Eastbound']
['17/11/12', '11:00', '1213', '8', '0.66', '1062', '87.55', '73', '6.02', '69', '5.69', '1', '0.08', 'Eastbound']
['17/11/12', '12:00', '1281', '16', '1.25', '1115', '87.04', '81', '6.32', '69', '5.39', '0', '0', 'Eastbound']
['17/11/12', '13:00', '1178', '12', '1.02', '1044', '88.62', '63', '5.35', '59', '5.01', '0', '0', 'Eastbound']
['17/11/12', '14:00', '1177', '11', '0.93', '1076', '91.42', '43', '3.65', '47', '3.99', '0', '0', 'Eastbound']
['17/11/12', '15:00', '1115', '7', '0.63', '1000', '89.69', '54', '4.84', '54', '4.84', '0', '0', 'Eastbound']
['17/11/12', '16:00', '1058', '7', '0.66', '936', '88.47', '60', '5.67', '53', '5.01', '2', '0.19', 'Eastbound']
['17/11/12', '17:00', '1013', '11', '1.09', '924', '91.21', '29', '2.86', '45', '4.44', '4', '0.39', 'Eastbound']
['17/11/12', '18:00', '772', '3', '0.39', '713', '92.36', '31', '4.02', '25', '3.24', '0', '0', 'Eastbound']
['17/11/12', '19:00', '688', '1', '0.15', '635', '92.3', '30', '4.36', '21', '3.05', '1', '0.15', 'Eastbound']
['17/11/12', '20:00', '569', '4', '0.7', '510', '89.63', '23', '4.04', '32', '5.62', '0', '0', 'Eastbound']
['17/11/12', '21:00', '372', '1', '0.27', '342', '91.94', '13', '3.49', '16', '4.3', '0', '0', 'Eastbound']
['17/11/12', '22:00', '270', '1', '0.37', '241', '89.26', '8', '2.96', '20', '7.41', '0', '0', 'Eastbound']
['17/11/12', '23:00', '208', '1', '0.48', '182', '87.5', '8', '3.85', '17', '8.17', '0', '0', 'Eastbound']
['18/11/12', '00:00', '126', '0', '0', '118', '93.65', '2', '1.59', '6', '4.76', '0', '0', 'Eastbound']
['18/11/12', '01:00', '115', '0', '0', '101', '87.83', '4', '3.48', '10', '8.7', '0', '0', 'Eastbound']
['18/11/12', '02:00', '77', '0', '0', '67', '87.01', '7', '9.09', '3', '3.9', '0', '0', 'Eastbound']
['18/11/12', '03:00', '51', '0', '0', '43', '84.31', '6', '11.76', '2', '3.92', '0', '0', 'Eastbound']
['18/11/12', '04:00', '63', '0', '0', '49', '77.78', '9', '14.29', '5', '7.94', '0', '0', 'Eastbound']
['18/11/12', '05:00', '56', '0', '0', '46', '82.14', '7', '12.5', '3', '5.36', '0', '0', 'Eastbound']
['18/11/12', '06:00', '89', '0', '0', '79', '88.76', '5', '5.62', '5', '5.62', '0', '0', 'Eastbound']
['18/11/12', '07:00', '151', '1', '0.66', '136', '90.07', '6', '3.97', '8', '5.3', '0', '0', 'Eastbound']
['18/11/12', '08:00', '224', '0', '0', '196', '87.5', '14', '6.25', '13', '5.8', '1', '0.45', 'Eastbound']
['18/11/12', '09:00', '387', '5', '1.29', '353', '91.21', '19', '4.91', '10', '2.58', '0', '0', 'Eastbound']
['18/11/12', '10:00', '715', '4', '0.56', '634', '88.67', '29', '4.06', '48', '6.71', '0', '0', 'Eastbound']
['18/11/12', '11:00', '875', '6', '0.69', '807', '92.23', '25', '2.86', '36', '4.11', '1', '0.11', 'Eastbound']
['18/11/12', '12:00', '1097', '14', '1.28', '985', '89.79', '29', '2.64', '69', '6.29', '0', '0', 'Eastbound']
['18/11/12', '13:00', '1111', '9', '0.81', '1006', '90.55', '34', '3.06', '61', '5.49', '1', '0.09', 'Eastbound']
['18/11/12', '14:00', '1121', '9', '0.8', '1031', '91.97', '24', '2.14', '56', '5', '1', '0.09', 'Eastbound']
['18/11/12', '15:00', '1446', '7', '0.48', '1313', '90.8', '36', '2.49', '89', '6.15', '1', '0.07', 'Eastbound']
['18/11/12', '16:00', '1467', '10', '0.68', '1361', '92.77', '16', '1.09', '79', '5.39', '1', '0.07', 'Eastbound']
['18/11/12', '17:00', '966', '12', '1.24', '875', '90.58', '31', '3.21', '47', '4.87', '1', '0.1', 'Eastbound']
['18/11/12', '18:00', '739', '12', '1.62', '659', '89.17', '24', '3.25', '43', '5.82', '1', '0.14', 'Eastbound']
['18/11/12', '19:00', '632', '0', '0', '570', '90.19', '16', '2.53', '44', '6.96', '2', '0.32', 'Eastbound']
['18/11/12', '20:00', '556', '5', '0.9', '511', '91.91', '9', '1.62', '28', '5.04', '3', '0.54', 'Eastbound']
['18/11/12', '21:00', '400', '2', '0.5', '355', '88.75', '10', '2.5', '32', '8', '1', '0.25', 'Eastbound']
['18/11/12', '22:00', '282', '3', '1.06', '254', '90.07', '6', '2.13', '18', '6.38', '1', '0.35', 'Eastbound']
['18/11/12', '23:00', '145', '1', '0.69', '129', '88.97', '3', '2.07', '8', '5.52', '4', '2.76', 'Eastbound']
|
MIT
|
yds_mapping_2012_ds.ipynb
|
mohadelrezk/open-gov-DataHandling-traffic
|
Data Preparation for the first Model
Welcome to the first notebook. Here we'll process the data from downloading to what we will be using to train our first model - **'Wh’re Art Thee Min’ral?'**.
The steps we'll be following here are:
- Downloading the SARIG Geochem Data Package. **(~350 Mb)**
- Understanding the data columns in our csv of interest.
- Cleaning and applying some processing.
- Saving our processed file into a csv.
- _And seeing some unnecessary memes in between_.
You can upload this notebook and run it on Colab, or run it locally with Jupyter Notebook.
|
# import the required package - Pandas
import pandas as pd
|
_____no_output_____
|
MIT
|
models/Model1/Mod1_Data_Prep.ipynb
|
Xavian-Brooker/Gawler-Unearthed
|
You can download the data by clicking the link [here](https://unearthed-exploresa.s3-ap-southeast-2.amazonaws.com/Unearthed_5_SARIG_Data_Package.zip), or by simply running the cell below. We recommend using **Google Colab** and downloading it there if your own internet connection is slow. Colab has a decent download speed of around **~15-20 MB/s**, which is more than enough.
|
# You can simply download the data by running this cell
!wget https://unearthed-exploresa.s3-ap-southeast-2.amazonaws.com/Unearthed_5_SARIG_Data_Package.zip
|
--2020-07-26 10:57:12-- https://unearthed-exploresa.s3-ap-southeast-2.amazonaws.com/Unearthed_5_SARIG_Data_Package.zip
Resolving unearthed-exploresa.s3-ap-southeast-2.amazonaws.com (unearthed-exploresa.s3-ap-southeast-2.amazonaws.com)... 52.95.128.54
Connecting to unearthed-exploresa.s3-ap-southeast-2.amazonaws.com (unearthed-exploresa.s3-ap-southeast-2.amazonaws.com)|52.95.128.54|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 458997620 (438M) [application/zip]
Saving to: ‘Unearthed_5_SARIG_Data_Package.zip’
Unearthed_5_SARIG_D 100%[===================>] 437.73M 20.7MB/s in 22s
2020-07-26 10:57:35 (19.5 MB/s) - ‘Unearthed_5_SARIG_Data_Package.zip’ saved [458997620/458997620]
|
MIT
|
models/Model1/Mod1_Data_Prep.ipynb
|
Xavian-Brooker/Gawler-Unearthed
|
If you wish to keep the downloaded file for later use, you can first mount your Google Drive and extract the files there instead. You can read more about mounting Google Drive in Colab [here](https://towardsdatascience.com/downloading-datasets-into-google-drive-via-google-colab-bcb1b30b0166).
***Note** - One of the files is really big (~10 GB), so the extraction might take some time as well. *Don't think that it's stuck!*
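For reference, mounting Drive in Colab takes two lines; the commented line shows where you might extract the archive instead of the VM's ephemeral disk (the Drive path shown is just an example, not a path used elsewhere in this notebook).
# Only works inside Google Colab.
from google.colab import drive
drive.mount('/content/drive')
# e.g. extract into your Drive instead of the ephemeral VM disk:
# !unzip 'Unearthed_5_SARIG_Data_Package.zip' -d '/content/drive/MyDrive/GeoChemData/'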
|
# Let's first create a directory to extract the downloaded zip file.
!mkdir 'GeoChemData'
# Now let's unzip the files into the data directory that we created.
!unzip 'Unearthed_5_SARIG_Data_Package.zip' -d 'GeoChemData/'
# Read the df_details.csv
# We use unicode_escape as the encoding to avoid a utf-8 decoding error.
df_details = pd.read_csv('/content/GeoChemData/SARIG_Data_Package3_Exported06072020/sarig_dh_details_exp.csv', encoding= 'unicode_escape')
# Let's view the first few columns
df_details.head()
# Data Column Information
df_details.info()
|
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 321843 entries, 0 to 321842
Data columns (total 51 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 DRILLHOLE_NO 321843 non-null int64
1 DH_NAME 191457 non-null object
2 DH_OTHER_NAME 26298 non-null object
3 PACE_DH 321843 non-null object
4 PACE_ROUND_NO 6535 non-null float64
5 REPRESENTATIVE_DH 321843 non-null object
6 REPRESENTATIVE_DH_COMMENTS 97696 non-null object
7 DH_UNIT_NO 321843 non-null object
8 MAX_DRILLED_DEPTH 303597 non-null float64
9 MAX_DRILLED_DEPTH_DATE 296142 non-null object
10 CORED_LENGTH 51566 non-null float64
11 TENEMENT 321843 non-null object
12 OPERATOR_CODE 155645 non-null object
13 OPERATOR_NAME 155645 non-null object
14 TARGET_COMMODITIES 274769 non-null object
15 MINERAL_CLASS 321843 non-null object
16 PETROLEUM_CLASS 321843 non-null object
17 STRATIGRAPHIC_CLASS 321843 non-null object
18 ENGINEERING_CLASS 321843 non-null object
19 SEISMIC_POINT_CLASS 321843 non-null object
20 WATER_WELL_CLASS 321843 non-null object
21 WATER_POINT_CLASS 321843 non-null object
22 DRILLING_METHODS 235287 non-null object
23 STRAT_LOG 321843 non-null object
24 LITHO_LOG 321843 non-null object
25 PETROPHYSICAL_LOG 321843 non-null object
26 GEOCHEMISTRY 321843 non-null object
27 PETROLOGY 321843 non-null object
28 BIOSTRATIGRAPHY 321843 non-null object
29 SPECTRAL_SCANNED 321843 non-null object
30 CORE_LIBRARY 321843 non-null object
31 REFERENCES 321843 non-null object
32 HISTORICAL_DOCUMENTS 321843 non-null object
33 COMMENTS 156435 non-null object
34 MAP_250000 321843 non-null object
35 MAP_100000 321843 non-null object
36 MAP_50K_NO 321843 non-null int64
37 SITE_NO 321843 non-null int64
38 EASTING_GDA2020 321843 non-null float64
39 NORTHING_GDA2020 321843 non-null float64
40 ZONE_GDA2020 321843 non-null int64
41 LONGITUDE_GDA2020 321843 non-null float64
42 LATITUDE_GDA2020 321843 non-null float64
43 LONGITUDE_GDA94 321843 non-null float64
44 LATITUDE_GDA94 321843 non-null float64
45 HORIZ_ACCRCY_M 187292 non-null float64
46 ELEVATION_M 236945 non-null float64
47 INCLINATION 196822 non-null float64
48 AZIMUTH 166320 non-null float64
49 SURVEY_METHOD_CODE 195778 non-null object
50 SURVEY_METHOD 195778 non-null object
dtypes: float64(13), int64(4), object(34)
memory usage: 125.2+ MB
|
MIT
|
models/Model1/Mod1_Data_Prep.ipynb
|
Xavian-Brooker/Gawler-Unearthed
|
What columns do we need? We only need the following three columns from this dataframe:
- `LONGITUDE_GDA94`: The longitude of the mine/mineral location in the **EPSG:4283** Coordinate Reference System (CRS).
- `LATITUDE_GDA94`: The latitude of the mine/mineral location in the **EPSG:4283** Coordinate Reference System (CRS).
- `MINERAL_CLASS`: A column containing **two unique values (Y/N)** indicating whether or not there is any mineralization.

> *Note - We are using GDA94 over GDA2020 because GDA94 is the more widely adopted standard.* You can read more about it on our glossary page [here](). A short coordinate-conversion sketch follows below.
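If you ever need these GDA94 coordinates in a different CRS, a small conversion sketch could look like this (it assumes the `pyproj` package, which the original notebook does not use, and the sample coordinates are made up):

# Illustrative only: convert GDA94 geographic (EPSG:4283) to WGS84 (EPSG:4326)
from pyproj import Transformer
transformer = Transformer.from_crs("EPSG:4283", "EPSG:4326", always_xy=True)
lon, lat = transformer.transform(138.6, -34.9)  # hypothetical point near Adelaide
print(lon, lat)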
|
# Here the only relevant data we need is the location and the Mineral Class (Yes/No)
df_final = df_details[['LONGITUDE_GDA94','LATITUDE_GDA94', 'MINERAL_CLASS']]
# Drop the rows with null values
df_final = df_final.dropna()
# Let's print out a few rows of the new dataframe.
df_final.head()
# Let's check the data points in both classes
print("Number of rows with Mineral Class Yes is", len(df_final.query('MINERAL_CLASS=="Y"')))
print("Number of rows with Mineral Class No is", len(df_final.query('MINERAL_CLASS=="N"')))
|
Number of rows with Mineral Class Yes is 147407
Number of rows with Mineral Class No is 174436
|
MIT
|
models/Model1/Mod1_Data_Prep.ipynb
|
Xavian-Brooker/Gawler-Unearthed
|
The total number of rows in the new dataset is **147407 (Y) + 174436 (N) = 321843**, which is quite sufficient for training our models. The ratio of Class `'Y'` to Class `'N'` is roughly 0.85 : 1, which is quite _**balanced**_ (a quick check is sketched below). Now that we have our dataframe, let's go ahead and save our progress into a new csv before the session expires!
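As a quick sanity check on that balance claim, a small illustrative snippet (not part of the original notebook) prints the class counts and the ratio relative to the larger class:

# Count each class and show the ratio relative to the larger class
counts = df_final['MINERAL_CLASS'].value_counts()
print(counts)
print((counts / counts.max()).round(2))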
|
# Create a new directory to save the csv.
!mkdir 'GeoChemData/exported'
# Convert the dataframe into a new csv file.
df_final.to_csv('GeoChemData/exported/mod1_unsampled.csv')
# Finally if you are on google colab, you can simply download using ->
from google.colab import files
files.download('GeoChemData/exported/mod1_unsampled.csv')
|
_____no_output_____
|
MIT
|
models/Model1/Mod1_Data_Prep.ipynb
|
Xavian-Brooker/Gawler-Unearthed
|
Hyperopt on the Iris Dataset
In this section we will walk through four complete examples of tuning hyperparameters with hyperopt on the classic Iris dataset. We will cover K-Nearest Neighbors (KNN), Support Vector Machines (SVM), decision trees, and random forests. For this task we will use the classic Iris dataset and do some supervised machine learning. The dataset has 4 input features and 3 output classes. Each sample is labeled as class 0, 1, or 2, which map to different species of iris. The input has 4 columns: sepal length, sepal width, petal length, and petal width, all measured in centimeters. We will use these 4 features to learn a model that predicts one of the three output classes. Because the data is provided by sklearn, it has a nice DESCR attribute with detailed information about the dataset. Try the following code for more details.
|
from sklearn import datasets
iris = datasets.load_iris()
print(iris.feature_names) # input names
print(iris.target_names) # output names
print(iris.DESCR) # everything else
|
['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']
['setosa' 'versicolor' 'virginica']
.. _iris_dataset:
Iris plants dataset
--------------------
**Data Set Characteristics:**
:Number of Instances: 150 (50 in each of three classes)
:Number of Attributes: 4 numeric, predictive attributes and the class
:Attribute Information:
- sepal length in cm
- sepal width in cm
- petal length in cm
- petal width in cm
- class:
- Iris-Setosa
- Iris-Versicolour
- Iris-Virginica
:Summary Statistics:
============== ==== ==== ======= ===== ====================
Min Max Mean SD Class Correlation
============== ==== ==== ======= ===== ====================
sepal length: 4.3 7.9 5.84 0.83 0.7826
sepal width: 2.0 4.4 3.05 0.43 -0.4194
petal length: 1.0 6.9 3.76 1.76 0.9490 (high!)
petal width: 0.1 2.5 1.20 0.76 0.9565 (high!)
============== ==== ==== ======= ===== ====================
:Missing Attribute Values: None
:Class Distribution: 33.3% for each of 3 classes.
:Creator: R.A. Fisher
:Donor: Michael Marshall (MARSHALL%[email protected])
:Date: July, 1988
The famous Iris database, first used by Sir R.A. Fisher. The dataset is taken
from Fisher's paper. Note that it's the same as in R, but not as in the UCI
Machine Learning Repository, which has two wrong data points.
This is perhaps the best known database to be found in the
pattern recognition literature. Fisher's paper is a classic in the field and
is referenced frequently to this day. (See Duda & Hart, for example.) The
data set contains 3 classes of 50 instances each, where each class refers to a
type of iris plant. One class is linearly separable from the other 2; the
latter are NOT linearly separable from each other.
.. topic:: References
- Fisher, R.A. "The use of multiple measurements in taxonomic problems"
Annual Eugenics, 7, Part II, 179-188 (1936); also in "Contributions to
Mathematical Statistics" (John Wiley, NY, 1950).
- Duda, R.O., & Hart, P.E. (1973) Pattern Classification and Scene Analysis.
(Q327.D83) John Wiley & Sons. ISBN 0-471-22361-1. See page 218.
- Dasarathy, B.V. (1980) "Nosing Around the Neighborhood: A New System
Structure and Classification Rule for Recognition in Partially Exposed
Environments". IEEE Transactions on Pattern Analysis and Machine
Intelligence, Vol. PAMI-2, No. 1, 67-71.
- Gates, G.W. (1972) "The Reduced Nearest Neighbor Rule". IEEE Transactions
on Information Theory, May 1972, 431-433.
- See also: 1988 MLC Proceedings, 54-64. Cheeseman et al"s AUTOCLASS II
conceptual clustering system finds 3 classes in the data.
- Many, many more ...
|
Apache-2.0
|
notebooks/hyperopt_on_iris_data.ipynb
|
jianzhnie/AutoML-Tools
|
K-Nearest Neighbors (KNN)
We will now use hyperopt to find the best parameters for a K-Nearest Neighbors (KNN) model. A KNN model classifies a point from the test set according to the majority class among the k nearest data points in the training set. A bare-bones single-split example is sketched below, before we bring in hyperopt.
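As a point of reference before any tuning, a bare-bones KNN fit on a single train/test split might look like this (a sketch with an arbitrarily chosen k, not part of the original notebook):

# Minimal KNN example: fit on a training split, score on a held-out split
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5)  # k chosen arbitrarily for illustration
knn.fit(X_train, y_train)
print(knn.score(X_test, y_test))  # accuracy on the held-out split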
|
from hyperopt import fmin, tpe, hp, STATUS_OK, Trials
import matplotlib.pyplot as plt
import numpy as np, pandas as pd
from math import *
from sklearn import datasets
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
# Load the dataset
iris = datasets.load_iris()
X = iris.data
y = iris.target
# Helper that returns the mean cross-validation accuracy for a given parameter set
def hyperopt_train_test(params):
    clf = KNeighborsClassifier(**params)
    return cross_val_score(clf, X, y).mean()
# hp.choice(label, options), where options should be a python list or tuple
# space4knn is the search space that gets passed into the objective function
space4knn = {
    'n_neighbors': hp.choice('n_neighbors', range(1,100))
}
# Define the objective function (hyperopt minimizes, so we negate the accuracy)
def f(params):
    acc = hyperopt_train_test(params)
    return {'loss': -acc, 'status': STATUS_OK}
# The Trials object lets us store information from every evaluation
trials = Trials()
# fmin takes the function to minimize; algo specifies the search algorithm;
# max_evals is the maximum number of evaluations
best = fmin(f, space4knn, algo=tpe.suggest, max_evals=100, trials=trials)
print('best:',best)
print('trials:')
for trial in trials.trials[:2]:
    print(trial)
|
100%|█| 100/100 [00:02<00:00, 34.95it/s, best loss: -0.98000000
best: {'n_neighbors': 11}
trials:
{'state': 2, 'tid': 0, 'spec': None, 'result': {'loss': -0.9666666666666668, 'status': 'ok'}, 'misc': {'tid': 0, 'cmd': ('domain_attachment', 'FMinIter_Domain'), 'workdir': None, 'idxs': {'n_neighbors': [0]}, 'vals': {'n_neighbors': [7]}}, 'exp_key': None, 'owner': None, 'version': 0, 'book_time': datetime.datetime(2020, 11, 10, 8, 0, 48, 790000), 'refresh_time': datetime.datetime(2020, 11, 10, 8, 0, 48, 814000)}
{'state': 2, 'tid': 1, 'spec': None, 'result': {'loss': -0.6599999999999999, 'status': 'ok'}, 'misc': {'tid': 1, 'cmd': ('domain_attachment', 'FMinIter_Domain'), 'workdir': None, 'idxs': {'n_neighbors': [1]}, 'vals': {'n_neighbors': [86]}}, 'exp_key': None, 'owner': None, 'version': 0, 'book_time': datetime.datetime(2020, 11, 10, 8, 0, 48, 883000), 'refresh_time': datetime.datetime(2020, 11, 10, 8, 0, 48, 912000)}
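One caveat worth noting: with `hp.choice`, `fmin` returns the index of the chosen option rather than the value itself, so the `{'n_neighbors': 11}` above corresponds to `range(1, 100)[11] = 12` neighbors. A short sketch of mapping the result back with hyperopt's `space_eval`:

# Convert the index returned by fmin back into the actual parameter value
from hyperopt import space_eval
print(space_eval(space4knn, best))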
|
Apache-2.0
|
notebooks/hyperopt_on_iris_data.ipynb
|
jianzhnie/AutoML-Tools
|
Now let's look at a plot of the results. The y-axis is the cross-validation accuracy and the x-axis is the number of neighbors k. Here is the code and the resulting plot:
|
f, ax = plt.subplots(1) #, figsize=(10,10))
xs = [t['misc']['vals']['n_neighbors'] for t in trials.trials]
ys = [-t['result']['loss'] for t in trials.trials]
ax.scatter(xs, ys, s=20, linewidth=0.01, alpha=0.5)
ax.set_title('Iris Dataset - KNN', fontsize=18)
ax.set_xlabel('n_neighbors', fontsize=12)
ax.set_ylabel('cross validation accuracy', fontsize=12)
|
_____no_output_____
|
Apache-2.0
|
notebooks/hyperopt_on_iris_data.ipynb
|
jianzhnie/AutoML-Tools
|