markdown | code | output | license | path | repo_name
---|---|---|---|---|---
Embedded Python in ObjectScript. From ObjectScript, run some Python library methods.
|
set datetime = ##class(%SYS.Python).Import("datetime")
zw datetime
zw datetime.date.today().isoformat()
|
datetime=3@%SYS.Python ; <module 'datetime' from '/usr/lib/python3.8/datetime.py'> ; <OREF>
"2021-12-12"
|
MIT
|
src/Notebooks/ObjectScript.ipynb
|
gjsjohnmurray/iris-python-template
|
Examples of usage of Gate Angle Placeholder

The word "Placeholder" is used in Qubiter (we are in good company; Tensorflow uses this word in the same way) to mean a variable for which we delay/postpone assigning a numerical value (evaluating it) until a later time. In the case of Qubiter, it is useful to define gates with placeholders standing for angles. One can postpone evaluating those placeholders until one is ready to call the circuit simulator, and then pass the values of the placeholders as an argument to the simulator's constructor. Placeholders of this type can be useful, for example, with quantum neural nets (QNNs). In some QNN algorithms, the circuit gate structure is fixed but the angles of the gates are varied many times, gradually, trying to lower a cost function each time.

> In Qubiter, legal variable names must be of the form `#3` or `-#3` or `#3*.5` or `-#3*.5`, where 3 can be replaced by any non-negative int, and .5 can be replaced by anything that can be an argument of float() without throwing an exception. In this example, the 3 that follows the hash character is called the variable number.
>
> NEW! (functional placeholder variables) Now legal variable names can ALSO be of the form `my_fun#1#2` or `-my_fun#1#2`, where
>
> * the 1 and 2 can be replaced by any non-negative integers and there might be any number > 0 of hash variables. Thus, there need not always be precisely 2 hash variables as in the example.
> * `my_fun` can be replaced by the name of any function with one or more input floats (2 inputs in the example), as long as the first character of the function's name is a lower case letter.
>
> The strings `my_fun#1#2` or `-my_fun#1#2` indicate that one wants to use, for the angle being replaced, the values of `my_fun(#1, #2)` or `-my_fun(#1, #2)`, respectively, where the inputs #1 and #2 are floats standing for radians and the output is also a float standing for radians.
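As a minimal illustration of the grammar (the numbers here are hypothetical, not from the notebook): if hash variable #1 is bound to 0.3 radians and #2 to 0.5 radians, the functional placeholder `my_fun#1#2` stands for `my_fun(0.3, 0.5)`:

def my_fun(x, y):  # any function whose name starts with a lower case letter
    return x + y

var_num_to_rads = {1: 0.3, 2: 0.5}  # hypothetical bindings for #1 and #2
angle = my_fun(var_num_to_rads[1], var_num_to_rads[2])  # what 'my_fun#1#2' stands for
print(angle)  # 0.8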
|
import os
import sys
print(os.getcwd())
os.chdir('../../')
print(os.getcwd())
sys.path.insert(0,os.getcwd())
|
C:\Users\rrtuc\Desktop\backedup\python-projects\qubiter\qubiter\jupyter-notebooks
C:\Users\rrtuc\Desktop\backedup\python-projects\qubiter
|
Apache-2.0
|
qubiter/jupyter_notebooks/examples_of_placeholder_usage.ipynb
|
yourball/qubiter
|
We begin by writing a simple circuit with 4 qubits. As usual, the following code will write an English and a Picture file in the `io_folder` directory. Note that some angles have been entered into the write() Python functions as legal variable names instead of floats. In the English file, you will see those legal names where the numerical values of those angles would have been.
|
from qubiter.SEO_writer import *
from qubiter.SEO_reader import *
from qubiter.EchoingSEO_reader import *
from qubiter.SEO_simulator import *
num_bits = 4
file_prefix = 'placeholder_test'
emb = CktEmbedder(num_bits, num_bits)
wr = SEO_writer(file_prefix, emb)
wr.write_Rx(2, rads=np.pi/7)
wr.write_Rx(1, rads='#2*.5')
wr.write_Rx(1, rads='my_fun1#2')
wr.write_Rn(3, rads_list=['#1', '-#1*3', '#3'])
wr.write_Rx(1, rads='-my_fun2#2#1')
wr.write_cnot(2, 3)
wr.close_files()
|
_____no_output_____
|
Apache-2.0
|
qubiter/jupyter_notebooks/examples_of_placeholder_usage.ipynb
|
yourball/qubiter
|
The following 2 files were just written:

1. ../io_folder/placeholder_test_4_eng.txt
2. ../io_folder/placeholder_test_4_ZLpic.txt

Simply by creating an object of the class SEO_reader with the flag `write_log` set equal to True, you can create a log file which contains

* a list of distinct variable numbers
* a list of distinct function names

encountered in the English file.
|
rdr = SEO_reader(file_prefix, num_bits, write_log=True)
|
_____no_output_____
|
Apache-2.0
|
qubiter/jupyter_notebooks/examples_of_placeholder_usage.ipynb
|
yourball/qubiter
|
The following log file was just written: ../io_folder/placeholder_test_4_log.txt Next, let us create two functions that will be used for the functional placeholders
|
def my_fun1(x):
    return x*.5

def my_fun2(x, y):
    return x + y
|
_____no_output_____
|
Apache-2.0
|
qubiter/jupyter_notebooks/examples_of_placeholder_usage.ipynb
|
yourball/qubiter
|
**Partial Substitution**

This creates new files with `#1=30`, `#2=60`, `'my_fun1'->my_fun1`, but `#3` and `'my_fun2'` still undecided.
|
vman = PlaceholderManager(eval_all_vars=False,
var_num_to_rads={1: np.pi/6, 2: np.pi/3},
fun_name_to_fun={'my_fun1': my_fun1})
wr = SEO_writer(file_prefix + '_eval01', emb)
EchoingSEO_reader(file_prefix, num_bits, wr,
vars_manager=vman)
|
_____no_output_____
|
Apache-2.0
|
qubiter/jupyter_notebooks/examples_of_placeholder_usage.ipynb
|
yourball/qubiter
|
The following 2 files were just written:

1. ../io_folder/placeholder_test_eval01_4_eng.txt
2. ../io_folder/placeholder_test_eval01_4_ZLpic.txt

The following code runs the simulator after substituting `#1=30`, `#2=60`, `#3=90`, `'my_fun1'->my_fun1`, `'my_fun2'->my_fun2`.
|
vman = PlaceholderManager(
var_num_to_rads={1: np.pi/6, 2: np.pi/3, 3: np.pi/2},
fun_name_to_fun={'my_fun1': my_fun1, 'my_fun2': my_fun2}
)
sim = SEO_simulator(file_prefix, num_bits, verbose=False,
vars_manager=vman)
StateVec.describe_st_vec_dict(sim.cur_st_vec_dict)
|
*********branch= pure
total probability of state vector (=one if no measurements)= 1.0000000000000004
dictionary with key=qubit, value=(Prob(0), Prob(1))
{0: (1.0000000000000004, -4.440892098500626e-16),
1: (0.7500000000000002, 0.24999999999999978),
2: (0.811744900929367, 0.18825509907063298),
3: (0.6235127414399703, 0.37648725856002974)}
|
Apache-2.0
|
qubiter/jupyter_notebooks/examples_of_placeholder_usage.ipynb
|
yourball/qubiter
|
The art of using pipelines

Pipelines are a natural way to think about a machine learning system. Indeed with some practice a data scientist can visualise data "flowing" through a series of steps. The input is typically some raw data which has to be processed in some manner. The goal is to represent the data in such a way that it can be ingested by a machine learning algorithm. Along the way some steps will extract features, while others will normalize the data and remove undesirable elements. Pipelines are simple, and yet they are a powerful way of designing sophisticated machine learning systems.

Both [scikit-learn](https://stackoverflow.com/questions/33091376/python-what-is-exactly-sklearn-pipeline-pipeline) and [pandas](https://tomaugspurger.github.io/method-chaining) make it possible to use pipelines. However it's quite rare to see pipelines being used in practice (at least on Kaggle). Sometimes you get to see people using scikit-learn's `pipeline` module, however the `pipe` method from `pandas` is sadly underappreciated. A big reason why pipelines are not given much love is that it's easier to think of batch learning in terms of a script or a notebook. Indeed many people doing data science seem to prefer a procedural style to a declarative style. Moreover, in practice, pipelines can be a bit rigid if one wishes to do non-orthodox operations.

Although pipelines may be a bit of an odd fit for batch learning, they make complete sense when they are used for online learning. Indeed the UNIX philosophy has advocated the use of pipelines for data processing for many decades. If you can visualise data as a stream of observations then using pipelines should make a lot of sense to you. We'll attempt to convince you by writing a machine learning algorithm in a procedural way and then converting it to a declarative pipeline in small steps. Hopefully by the end you'll be convinced, or not!

In this notebook we'll manipulate data from the [Kaggle Recruit Restaurants Visitor Forecasting competition](https://www.kaggle.com/c/recruit-restaurant-visitor-forecasting). The data is directly available through `river`'s `datasets` module.
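As a quick taste of the pandas `pipe` style mentioned above, here is a minimal sketch; the helper functions and the toy frame are hypothetical and not part of the Restaurants data:

import pandas as pd

def add_weekday(df):
    # derive a weekday feature from the date column
    return df.assign(weekday=df['date'].dt.weekday)

def drop_ids(df):
    # remove identifier columns that aren't features
    return df.drop(columns=['store_id'])

toy = pd.DataFrame({'store_id': ['a', 'b'],
                    'date': pd.to_datetime(['2016-01-01', '2016-01-02'])})
toy = toy.pipe(add_weekday).pipe(drop_ids)
print(toy)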
|
from pprint import pprint
from river import datasets
for x, y in datasets.Restaurants():
    pprint(x)
    pprint(y)
    break
|
{'area_name': 'Tōkyō-to Nerima-ku Toyotamakita',
'date': datetime.datetime(2016, 1, 1, 0, 0),
'genre_name': 'Izakaya',
'is_holiday': True,
'latitude': 35.7356234,
'longitude': 139.6516577,
'store_id': 'air_04341b588bde96cd'}
10
|
BSD-3-Clause
|
docs/examples/the-art-of-using-pipelines.ipynb
|
dataJSA/river
|
We'll start by building and running a model using a procedural coding style. The performance of the model doesn't matter, we're simply interested in the design of the model.
|
from river import feature_extraction
from river import linear_model
from river import metrics
from river import preprocessing
from river import stats
means = (
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(7)),
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(14)),
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(21))
)
scaler = preprocessing.StandardScaler()
lin_reg = linear_model.LinearRegression()
metric = metrics.MAE()
for x, y in datasets.Restaurants():

    # Derive date features
    x['weekday'] = x['date'].weekday()
    x['is_weekend'] = x['date'].weekday() in (5, 6)

    # Process the rolling means of the target
    for mean in means:
        x = {**x, **mean.transform_one(x)}
        mean.learn_one(x, y)

    # Remove the key/value pairs that aren't features
    for key in ['store_id', 'date', 'genre_name', 'area_name', 'latitude', 'longitude']:
        x.pop(key)

    # Rescale the data
    x = scaler.learn_one(x).transform_one(x)

    # Fit the linear regression
    y_pred = lin_reg.predict_one(x)
    lin_reg.learn_one(x, y)

    # Update the metric using the out-of-fold prediction
    metric.update(y, y_pred)
print(metric)
|
MAE: 8.465114
|
BSD-3-Clause
|
docs/examples/the-art-of-using-pipelines.ipynb
|
dataJSA/river
|
We're not using many features. We can print the last `x` to get an idea of the features (don't forget they've been scaled!)
|
pprint(x)
|
{'is_holiday': -0.23103573677646685,
'is_weekend': 1.6249280076334165,
'weekday': 1.0292832579142892,
'y_rollingmean_14_by_store_id': -1.4125913815779154,
'y_rollingmean_21_by_store_id': -1.3980979075298519,
'y_rollingmean_7_by_store_id': -1.3502314499809096}
|
BSD-3-Clause
|
docs/examples/the-art-of-using-pipelines.ipynb
|
dataJSA/river
|
The above chunk of code is quite explicit but it's a bit verbose. The whole point of libraries such as `river` is to make life easier for users. Moreover, there's too much room for users to mess up the order in which things are done, which increases the chance of there being target leakage. We'll now rewrite our model in a declarative fashion using a pipeline *à la sklearn*.
|
from river import compose
def get_date_features(x):
    weekday = x['date'].weekday()
    return {'weekday': weekday, 'is_weekend': weekday in (5, 6)}
model = compose.Pipeline(
('features', compose.TransformerUnion(
('date_features', compose.FuncTransformer(get_date_features)),
('last_7_mean', feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(7))),
('last_14_mean', feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(14))),
('last_21_mean', feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(21)))
)),
('drop_non_features', compose.Discard('store_id', 'date', 'genre_name', 'area_name', 'latitude', 'longitude')),
('scale', preprocessing.StandardScaler()),
('lin_reg', linear_model.LinearRegression())
)
metric = metrics.MAE()
for x, y in datasets.Restaurants():

    # Make a prediction without using the target
    y_pred = model.predict_one(x)

    # Update the model using the target
    model.learn_one(x, y)

    # Update the metric using the out-of-fold prediction
    metric.update(y, y_pred)
print(metric)
|
MAE: 8.38533
|
BSD-3-Clause
|
docs/examples/the-art-of-using-pipelines.ipynb
|
dataJSA/river
|
We use a `Pipeline` to arrange each step in a sequential order. A `TransformerUnion` is used to merge multiple feature extractors into a single transformer. The `for` loop is now much shorter and is thus easier to grok: we get the out-of-fold prediction, we fit the model, and finally we update the metric. This way of evaluating a model is typical of online learning, so we've wrapped it inside a function called `progressive_val_score`, which is part of the `evaluate` module. We can use it to replace the `for` loop.
|
from river import evaluate
model = compose.Pipeline(
('features', compose.TransformerUnion(
('date_features', compose.FuncTransformer(get_date_features)),
('last_7_mean', feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(7))),
('last_14_mean', feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(14))),
('last_21_mean', feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(21)))
)),
('drop_non_features', compose.Discard('store_id', 'date', 'genre_name', 'area_name', 'latitude', 'longitude')),
('scale', preprocessing.StandardScaler()),
('lin_reg', linear_model.LinearRegression())
)
evaluate.progressive_val_score(dataset=datasets.Restaurants(), model=model, metric=metrics.MAE())
|
_____no_output_____
|
BSD-3-Clause
|
docs/examples/the-art-of-using-pipelines.ipynb
|
dataJSA/river
|
Notice that you couldn't have used the `progressive_val_score` method if you had written the model in a procedural manner.

Our code is getting shorter, but it's still a bit difficult on the eyes. Indeed there is a lot of boilerplate code associated with pipelines that can get tedious to write. However `river` has some special tricks up its sleeve to save you from a lot of pain.

The first trick is that the name of each step in the pipeline can be omitted. If no name is given for a step then `river` automatically infers one.
|
model = compose.Pipeline(
compose.TransformerUnion(
compose.FuncTransformer(get_date_features),
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(7)),
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(14)),
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(21))
),
compose.Discard('store_id', 'date', 'genre_name', 'area_name', 'latitude', 'longitude'),
preprocessing.StandardScaler(),
linear_model.LinearRegression()
)
evaluate.progressive_val_score(datasets.Restaurants(), model, metrics.MAE())
|
_____no_output_____
|
BSD-3-Clause
|
docs/examples/the-art-of-using-pipelines.ipynb
|
dataJSA/river
|
Under the hood a `Pipeline` inherits from `collections.OrderedDict`. Indeed this makes sense, because if you think about it a `Pipeline` is simply a sequence of steps where each step has a name. The reason we mention this is that it means you can manipulate a `Pipeline` the same way you would manipulate an ordinary `dict`. For instance, we can print the name of each step by iterating over the keys of its `steps` mapping.
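As a minimal sketch of that dict-like behaviour (assuming the step names river inferred above), an individual step can also be looked up by name:

lin_reg = model.steps['LinearRegression']  # fetch one step from the steps mapping
print(lin_reg)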
|
for name in model.steps:
    print(name)
|
TransformerUnion
Discard
StandardScaler
LinearRegression
|
BSD-3-Clause
|
docs/examples/the-art-of-using-pipelines.ipynb
|
dataJSA/river
|
The first step is a `TransformerUnion` and its string representation contains the string representation of each of its elements. Not having to write names saves some time and space and is certainly less tedious.

The next trick is that we can use mathematical operators to compose our pipeline. For example, we can use the `+` operator to merge `Transformer`s into a `TransformerUnion`.
|
model = compose.Pipeline(
compose.FuncTransformer(get_date_features) + \
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(7)) + \
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(14)) + \
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(21)),
compose.Discard('store_id', 'date', 'genre_name', 'area_name', 'latitude', 'longitude'),
preprocessing.StandardScaler(),
linear_model.LinearRegression()
)
evaluate.progressive_val_score(datasets.Restaurants(), model, metrics.MAE())
|
_____no_output_____
|
BSD-3-Clause
|
docs/examples/the-art-of-using-pipelines.ipynb
|
dataJSA/river
|
Likewise, we can use the `|` operator to assemble steps into a `Pipeline`.
|
model = (
compose.FuncTransformer(get_date_features) +
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(7)) +
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(14)) +
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(21))
)
to_discard = ['store_id', 'date', 'genre_name', 'area_name', 'latitude', 'longitude']
model = model | compose.Discard(*to_discard) | preprocessing.StandardScaler()
model |= linear_model.LinearRegression()
evaluate.progressive_val_score(datasets.Restaurants(), model, metrics.MAE())
|
_____no_output_____
|
BSD-3-Clause
|
docs/examples/the-art-of-using-pipelines.ipynb
|
dataJSA/river
|
Hopefully you'll agree that this is a powerful way to express machine learning pipelines. For some people this should be quite reminiscent of the UNIX pipe operator. One final trick we want to mention is that functions are automatically wrapped with a `FuncTransformer`, which can be quite handy.
|
model = get_date_features
for n in [7, 14, 21]:
    model += feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(n))
model |= compose.Discard(*to_discard)
model |= preprocessing.StandardScaler()
model |= linear_model.LinearRegression()
evaluate.progressive_val_score(datasets.Restaurants(), model, metrics.MAE())
|
_____no_output_____
|
BSD-3-Clause
|
docs/examples/the-art-of-using-pipelines.ipynb
|
dataJSA/river
|
Naturally some may prefer the procedural style we first used because they find it easier to work with. It all depends on your style and you should use what you feel comfortable with. However we encourage you to use operators because we believe that this will increase the readability of your code, which is very important. To each their own!

Before finishing, we can take an interactive look at our pipeline.
|
model
|
_____no_output_____
|
BSD-3-Clause
|
docs/examples/the-art-of-using-pipelines.ipynb
|
dataJSA/river
|
Reflect Tables into SQLAlchemy ORM
|
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# We can view all of the classes that automap found
Base.classes.keys()
# Save references to each table
measurements = Base.classes.measurement
stations = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
|
_____no_output_____
|
MIT
|
climate_starter.ipynb
|
ahchambers/sqlalchemy-challenge
|
Exploratory Climate Analysis
|
# datetime, pandas and matplotlib are used below; assumed imported in an earlier cell of the original notebook
import datetime as dt
import pandas as pd
import matplotlib.pyplot as plt

# Design a query to retrieve the last 12 months of precipitation data and plot the results
# Calculate the date 1 year ago from the last data point in the database
last_year = dt.date(2017, 8, 23) - dt.timedelta(days=365)
# Perform a query to retrieve the data and precipitation scores
results = session.query(measurements.date, measurements.prcp).filter(measurements.date >= last_year).all()
# Save the query results as a Pandas DataFrame and set the index to the date column
data = pd.DataFrame(results, columns=['date', 'precipitation'])
# Sort the dataframe by date
data = data.sort_values("date")
# Use Pandas Plotting with Matplotlib to plot the data
x_axis=data["date"]
y_axis=data["precipitation"]
plt.scatter(x_axis, y_axis, marker="o", facecolors="red", edgecolors="black")
plt.xlabel("Date")
plt.ylabel("Measurement")
# Use Pandas to calculate the summary statistics for the precipitation data
data.describe()
# Design a query to show how many stations are available in this dataset?
session.query(func.count(stations.station)).all()
# What are the most active stations? (i.e. what stations have the most rows)?
# List the stations and the counts in descending order.
session.query(measurements.station, func.count(1)).\
group_by(measurements.station).\
order_by(func.count(1).desc()).all()
# Using the station id from the previous query, calculate the lowest temperature recorded,
# highest temperature recorded, and average temperature of the most active station?
sel = [measurements.station,
func.min(measurements.tobs),
func.max(measurements.tobs),
func.avg(measurements.tobs)]
session.query(*sel).\
filter(measurements.station == "USC00519281").all()
# Choose the station with the highest number of temperature observations.
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
precipitation_df = pd.DataFrame(session.query(measurements.date, measurements.tobs).\
filter(measurements.date > last_year).\
filter(measurements.station == "USC00519281").\
order_by(measurements.date).all(), columns = ["Date", "temperature"])
# plot the results as a histogram
precipitation_df.plot(kind = "hist", bins = 12)
plt.xlabel("Temperature")
plt.ylabel("Frequency")
plt.savefig("output/fig1.png");
|
_____no_output_____
|
MIT
|
climate_starter.ipynb
|
ahchambers/sqlalchemy-challenge
|
Photometric Plugin

For optical photometry, we provide the **PhotometryLike** plugin that handles forward folding of a spectral model through filter curves. Let's have a look at the available procedures.
|
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from threeML import *
# we will need XPSEC models for extinction
from astromodels.xspec import *
# The filter library takes a while to load so you must import it explicitly..
from threeML.plugins.photometry.filter_library import threeML_filter_library
|
_____no_output_____
|
BSD-3-Clause
|
examples/Photometry_demo.ipynb
|
ke-fang/3ML
|
Setup

We use [speclite](http://speclite.readthedocs.io/en/latest/) to handle optical filters. Therefore, you can easily build your own custom filters, use the built-in speclite filters, or use the 3ML filter library that we have built thanks to the [Spanish Virtual Observatory](http://svo.cab.inta-csic.es/main/index.php). **If you use these filters, please be sure to cite the proper sources!**

Simple example of building a filter

Let's say we have our own 1-m telescope with a Johnson filter and we happen to record the data. We also have simultaneous data at other wavelengths and we want to compare. Let's set up the optical plugin (we'll ignore the other data for now).
|
import speclite.filters as spec_filters
my_backyard_telescope_filter = spec_filters.load_filter('bessell-r')
# NOTE:
my_backyard_telescope_filter.name
|
_____no_output_____
|
BSD-3-Clause
|
examples/Photometry_demo.ipynb
|
ke-fang/3ML
|
NOTE: the filter name is 'bessell-R'. The plugin will look for the name *after* the **'-'**, i.e. 'R'.

Now let's build a 3ML plugin via **PhotometryLike**. Our data are entered as keywords, with the name of the filter as the keyword and the data as a (magnitude, error) tuple, i.e. R=(mag, mag_err):
|
my_backyard_telescope = PhotometryLike('backyard_astronomy',
filters=my_backyard_telescope_filter, # the filter
R=(20,.1) ) # the magnitude and error
my_backyard_telescope.display_filters()
|
Using Gaussian statistic (equivalent to chi^2) with the provided errors.
|
BSD-3-Clause
|
examples/Photometry_demo.ipynb
|
ke-fang/3ML
|
3ML filter library

Explore the filter library. If you cannot find what you need, it is simple to add your own.
|
threeML_filter_library.SLOAN
spec_filters.plot_filters(threeML_filter_library.SLOAN.SDSS)
spec_filters.plot_filters(threeML_filter_library.Herschel.SPIRE)
spec_filters.plot_filters(threeML_filter_library.Keck.NIRC2)
|
_____no_output_____
|
BSD-3-Clause
|
examples/Photometry_demo.ipynb
|
ke-fang/3ML
|
Build your own filters

Following the example from speclite, we can build our own filters and add them:
|
import astropy.units as u  # needed for the Angstrom unit below; assumed imported earlier in the original notebook

fangs_g = spec_filters.FilterResponse(
wavelength = [3800, 4500, 5200] * u.Angstrom,
response = [0, 0.5, 0], meta=dict(group_name='fangs', band_name='g'))
fangs_r = spec_filters.FilterResponse(
wavelength = [4800, 5500, 6200] * u.Angstrom,
response = [0, 0.5, 0], meta=dict(group_name='fangs', band_name='r'))
fangs = spec_filters.load_filters('fangs-g', 'fangs-r')
fangslike = PhotometryLike('fangs',filters=fangs,g=(20,.1),r=(18,.1))
fangslike.display_filters()
|
Using Gaussian statistic (equivalent to chi^2) with the provided errors.
|
BSD-3-Clause
|
examples/Photometry_demo.ipynb
|
ke-fang/3ML
|
GROND Example

Now we will look at GROND. We get the filter from the 3ML filter library. (Just play with tab completion to see what is available!)
|
grond = PhotometryLike('GROND',
filters=threeML_filter_library.ESO.GROND,
#g=(21.5.93,.23), # we exclude these filters
#r=(22.,0.12),
i=(21.8,.01),
z=(21.2,.01),
J=(19.6,.01),
H=(18.6,.01),
K=(18.,.01))
grond.display_filters()
|
_____no_output_____
|
BSD-3-Clause
|
examples/Photometry_demo.ipynb
|
ke-fang/3ML
|
Model specification

Here we use XSPEC's dust extinction models for the Milky Way and the host.
|
spec = Powerlaw() * XS_zdust() * XS_zdust()
data_list = DataList(grond)
model = Model(PointSource('grb',0,0,spectral_shape=spec))
spec.piv_1 = 1E-2
spec.index_1.fix=False
spec.redshift_2 = 0.347
spec.redshift_2.fix = True
spec.e_bmv_2 = 5./2.93
spec.e_bmv_2.fix = True
spec.rv_2 = 2.93
spec.rv_2.fix = True
spec.method_2 = 3
spec.method_2.fix=True
spec.e_bmv_3 = .002/3.08
spec.e_bmv_3.fix = True
spec.rv_3= 3.08
spec.rv_3.fix=True
spec.redshift_3 = 0
spec.redshift_3.fix=True
spec.method_3 = 1
spec.method_3.fix=True
jl = JointLikelihood(model,data_list)
|
_____no_output_____
|
BSD-3-Clause
|
examples/Photometry_demo.ipynb
|
ke-fang/3ML
|
We compute $m_{\rm AB}$ from astromodels photon fluxes. This is done by convolving the differential flux over the filter response:

$F[R,f_\lambda] \equiv \int_0^\infty \frac{dg}{d\lambda}(\lambda)\,R(\lambda)\,\omega(\lambda)\,d\lambda$

where we have converted the astromodels functions to wavelength properly.
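As a standalone sketch of this kind of convolution (not the plugin's internal code; the power-law flux below is made up for illustration), speclite can compute an AB magnitude directly from a tabulated differential flux:

import numpy as np
import astropy.units as u
import speclite.filters as spec_filters

# A made-up power-law differential flux sampled on a wavelength grid
wlen = np.linspace(3000, 11000, 500) * u.Angstrom
flux = 1e-17 * (wlen.value / 5000.0) ** -2 * u.erg / (u.cm ** 2 * u.s * u.Angstrom)

rband = spec_filters.load_filter('bessell-R')
print(rband.get_ab_magnitude(flux, wlen))  # m_AB through the R band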
|
_ = jl.fit()
|
Best fit values:
|
BSD-3-Clause
|
examples/Photometry_demo.ipynb
|
ke-fang/3ML
|
We can now look at the fit in magnitude space or model space as with any plugin.
|
_=display_photometry_model_magnitudes(jl)
_ = plot_point_source_spectra(jl.results,flux_unit='erg/(cm2 s keV)',
xscale='linear',
energy_unit='nm',ene_min=1E3, ene_max=1E5, num_ene=200 )
|
_____no_output_____
|
BSD-3-Clause
|
examples/Photometry_demo.ipynb
|
ke-fang/3ML
|
Copyright 2018 The TensorFlow Authors.
|
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
|
_____no_output_____
|
Apache-2.0
|
lite/examples/gesture_classification/ml/tensorflowjs_to_tflite_colab_notebook.ipynb
|
hawk-praxs/examples
|
Tensorflow Lite Gesture Classification Example Conversion Script

This guide shows how you can go about converting the model trained with TensorFlow.js to TensorFlow Lite FlatBuffers. Run all steps in order. At the end, the `model.tflite` file will be downloaded.

**Install Dependencies**
|
!pip3 install tensorflow==1.14.0 keras==2.2.4 tensorflowjs==0.6.4 --force-reinstall
import traceback
import logging
import tensorflow.compat.v1 as tf
import keras.backend as K
import os
from google.colab import files
from keras import Model, Input
from keras.applications import MobileNet
from keras.engine.saving import load_model
from tensorflowjs.converters import load_keras_model
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
|
_____no_output_____
|
Apache-2.0
|
lite/examples/gesture_classification/ml/tensorflowjs_to_tflite_colab_notebook.ipynb
|
hawk-praxs/examples
|
***Cleanup any existing models if necessary***
|
!rm -rf *.h5 *.tflite *.json *.bin
|
_____no_output_____
|
Apache-2.0
|
lite/examples/gesture_classification/ml/tensorflowjs_to_tflite_colab_notebook.ipynb
|
hawk-praxs/examples
|
**Upload your TensorFlow.js Artifacts Here**, i.e., the weights manifest **model.json** and the binary weights file **model-weights.bin**.
|
files.upload()
|
_____no_output_____
|
Apache-2.0
|
lite/examples/gesture_classification/ml/tensorflowjs_to_tflite_colab_notebook.ipynb
|
hawk-praxs/examples
|
**Export Configuration**
|
#@title Export Configuration
# TensorFlow.js arguments
config_json = "model.json" #@param {type:"string"}
weights_path_prefix = None #@param {type:"raw"}
model_tflite = "model.tflite" #@param {type:"string"}
|
_____no_output_____
|
Apache-2.0
|
lite/examples/gesture_classification/ml/tensorflowjs_to_tflite_colab_notebook.ipynb
|
hawk-praxs/examples
|
**Model Converter**The following class converts a TensorFlow.js model to a TFLite FlatBuffer
|
class ModelConverter:
    """
    Creates a ModelConverter class from a TensorFlow.js model file.

    Args:
    :param config_json_path: Full filepath of weights manifest file containing the model architecture.
    :param weights_path_prefix: Full filepath to the directory in which the weights binaries exist.
    :param tflite_model_file: Name of the TFLite FlatBuffer file to be exported.

    :return:
    ModelConverter class.
    """

    def __init__(self,
                 config_json_path,
                 weights_path_prefix,
                 tflite_model_file
                 ):
        self.config_json_path = config_json_path
        self.weights_path_prefix = weights_path_prefix
        self.tflite_model_file = tflite_model_file
        self.keras_model_file = 'merged.h5'

        # MobileNet Options
        self.input_node_name = 'the_input'
        self.image_size = 224
        self.alpha = 0.25
        self.depth_multiplier = 1
        self._input_shape = (1, self.image_size, self.image_size, 3)
        self.depthwise_conv_layer = 'conv_pw_13_relu'

    def convert(self):
        self.save_keras_model()
        self._deserialize_tflite_from_keras()
        logger.info('The TFLite model has been generated')
        self._purge()

    def save_keras_model(self):
        top_model = load_keras_model(self.config_json_path, self.weights_path_prefix,
                                     weights_data_buffers=None,
                                     load_weights=True,
                                     use_unique_name_scope=True)
        base_model = self.get_base_model()
        merged_model = self.merge(base_model, top_model)
        merged_model.save(self.keras_model_file)
        logger.info("The merged Keras HDF5 model has been saved as {}".format(self.keras_model_file))

    def merge(self, base_model, top_model):
        """
        Merges base model with the classification block
        :return: Returns the merged Keras model
        """
        logger.info("Initializing model...")
        layer = base_model.get_layer(self.depthwise_conv_layer)
        model = Model(inputs=base_model.input, outputs=top_model(layer.output))
        logger.info("Model created.")
        return model

    def get_base_model(self):
        """
        Builds MobileNet with the default parameters
        :return: Returns the base MobileNet model
        """
        input_tensor = Input(shape=self._input_shape[1:], name=self.input_node_name)
        base_model = MobileNet(input_shape=self._input_shape[1:],
                               alpha=self.alpha,
                               depth_multiplier=self.depth_multiplier,
                               input_tensor=input_tensor,
                               include_top=False)
        return base_model

    def _deserialize_tflite_from_keras(self):
        converter = tf.lite.TFLiteConverter.from_keras_model_file(self.keras_model_file)
        tflite_model = converter.convert()
        with open(self.tflite_model_file, "wb") as file:
            file.write(tflite_model)

    def _purge(self):
        logger.info('Cleaning up Keras model')
        os.remove(self.keras_model_file)


try:
    K.clear_session()
    converter = ModelConverter(config_json,
                               weights_path_prefix,
                               model_tflite)
    converter.convert()
except ValueError as e:
    print(traceback.format_exc())
    print("Error occurred while converting")

files.download(model_tflite)
|
_____no_output_____
|
Apache-2.0
|
lite/examples/gesture_classification/ml/tensorflowjs_to_tflite_colab_notebook.ipynb
|
hawk-praxs/examples
|
Generate dataset
|
# Imports assumed from earlier cells of the original notebook
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader

np.random.seed(12)
y = np.random.randint(0,10,5000)
idx= []
for i in range(10):
    print(i, sum(y==i))
    idx.append(y==i)
x = np.zeros((5000,2))
np.random.seed(12)
x[idx[0],:] = np.random.multivariate_normal(mean = [5,5],cov=[[0.1,0],[0,0.1]],size=sum(idx[0]))
x[idx[1],:] = np.random.multivariate_normal(mean = [-6,7],cov=[[0.1,0],[0,0.1]],size=sum(idx[1]))
x[idx[2],:] = np.random.multivariate_normal(mean = [-5,-4],cov=[[0.1,0],[0,0.1]],size=sum(idx[2]))
x[idx[3],:] = np.random.multivariate_normal(mean = [-1,0],cov=[[0.1,0],[0,0.1]],size=sum(idx[3]))
x[idx[4],:] = np.random.multivariate_normal(mean = [0,2],cov=[[0.1,0],[0,0.1]],size=sum(idx[4]))
x[idx[5],:] = np.random.multivariate_normal(mean = [1,0],cov=[[0.1,0],[0,0.1]],size=sum(idx[5]))
x[idx[6],:] = np.random.multivariate_normal(mean = [0,-1],cov=[[0.1,0],[0,0.1]],size=sum(idx[6]))
x[idx[7],:] = np.random.multivariate_normal(mean = [0,0],cov=[[0.1,0],[0,0.1]],size=sum(idx[7]))
x[idx[8],:] = np.random.multivariate_normal(mean = [-0.5,-0.5],cov=[[0.1,0],[0,0.1]],size=sum(idx[8]))
x[idx[9],:] = np.random.multivariate_normal(mean = [0.4,0.2],cov=[[0.1,0],[0,0.1]],size=sum(idx[9]))
x[idx[0]][0], x[idx[5]][5]
for i in range(10):
    plt.scatter(x[idx[i],0], x[idx[i],1], label="class_"+str(i))
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
bg_idx = [ np.where(idx[3] == True)[0],
np.where(idx[4] == True)[0],
np.where(idx[5] == True)[0],
np.where(idx[6] == True)[0],
np.where(idx[7] == True)[0],
np.where(idx[8] == True)[0],
np.where(idx[9] == True)[0]]
bg_idx = np.concatenate(bg_idx, axis = 0)
bg_idx.shape
np.unique(bg_idx).shape
x = x - np.mean(x[bg_idx], axis = 0, keepdims = True)
np.mean(x[bg_idx], axis = 0, keepdims = True), np.mean(x, axis = 0, keepdims = True)
x = x/np.std(x[bg_idx], axis = 0, keepdims = True)
np.std(x[bg_idx], axis = 0, keepdims = True), np.std(x, axis = 0, keepdims = True)
for i in range(10):
    plt.scatter(x[idx[i],0], x[idx[i],1], label="class_"+str(i))
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
foreground_classes = {'class_0','class_1', 'class_2'}
background_classes = {'class_3','class_4', 'class_5', 'class_6','class_7', 'class_8', 'class_9'}
m = 50  # mosaic size, per the notebook filename (synthetic_type2_Linear_m_50); defined in an earlier cell
fg_class = np.random.randint(0,3)
fg_idx = np.random.randint(0,m)
a = []
for i in range(m):
    if i == fg_idx:
        b = np.random.choice(np.where(idx[fg_class]==True)[0], size=1)
        a.append(x[b])
        print("foreground "+str(fg_class)+" present at " + str(fg_idx))
    else:
        bg_class = np.random.randint(3,10)
        b = np.random.choice(np.where(idx[bg_class]==True)[0], size=1)
        a.append(x[b])
        print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
print(a.shape)
print(fg_class , fg_idx)
np.reshape(a,(2*m,1))
mosaic_list_of_images = []
mosaic_label = []
fore_idx = []

# `desired_num` (number of mosaic points) and the split bounds `tr_j`, `tr_k` are set in
# earlier cells of the original notebook; illustrative values are assumed here.
desired_num = 400
tr_j = 100
tr_k = 400

for j in range(desired_num):
    np.random.seed(j)
    fg_class = np.random.randint(0,3)
    fg_idx = np.random.randint(0,m)
    a = []
    for i in range(m):
        if i == fg_idx:
            b = np.random.choice(np.where(idx[fg_class]==True)[0], size=1)
            a.append(x[b])
            # print("foreground "+str(fg_class)+" present at " + str(fg_idx))
        else:
            bg_class = np.random.randint(3,10)
            b = np.random.choice(np.where(idx[bg_class]==True)[0], size=1)
            a.append(x[b])
            # print("background "+str(bg_class)+" present at " + str(i))
    a = np.concatenate(a, axis=0)
    mosaic_list_of_images.append(np.reshape(a,(2*m,1)))
    mosaic_label.append(fg_class)
    fore_idx.append(fg_idx)
mosaic_list_of_images = np.concatenate(mosaic_list_of_images,axis=1).T
mosaic_list_of_images.shape
mosaic_list_of_images.shape, mosaic_list_of_images[0]
for j in range(m):
    print(mosaic_list_of_images[0][2*j:2*j+2])
def create_avg_image_from_mosaic_dataset(mosaic_dataset, labels, foreground_index, dataset_number, m):
    """
    mosaic_dataset : mosaic_dataset contains m points as 1 data point
    labels : mosaic_dataset labels
    foreground_index : contains list of indexes where foreground image is present so that using this we can take weighted average
    dataset_number : will help us to tell what ratio of foreground image to be taken. for eg: if it is "j" then fg_image_ratio = j/9, bg_image_ratio = (9-j)/(8*9)
    """
    avg_image_dataset = []
    cnt = 0
    counter = np.zeros(m)  # np.array([0,0,0,0,0,0,0,0,0])
    for i in range(len(mosaic_dataset)):
        img = torch.zeros([2], dtype=torch.float64)
        np.random.seed(int(dataset_number*10000 + i))
        give_pref = foreground_index[i]  # np.random.randint(0,9)
        # print("outside", give_pref, foreground_index[i])
        for j in range(m):
            if j == give_pref:
                img = img + mosaic_dataset[i][2*j:2*j+2]*dataset_number/m  # 2 is data dim
            else:
                img = img + mosaic_dataset[i][2*j:2*j+2]*(m-dataset_number)/((m-1)*m)
        if give_pref == foreground_index[i]:
            # print("equal are", give_pref, foreground_index[i])
            cnt += 1
            counter[give_pref] += 1
        else:
            counter[give_pref] += 1
        avg_image_dataset.append(img)
    print("number of correct averaging happened for dataset "+str(dataset_number)+" is "+str(cnt))
    print("the averaging are done as ", counter)
    return avg_image_dataset, labels, foreground_index
avg_image_dataset_1 , labels_1, fg_index_1 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[0:tr_j], mosaic_label[0:tr_j], fore_idx[0:tr_j] , 1, m)
test_dataset , labels , fg_index = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[tr_j : tr_k], mosaic_label[tr_j : tr_k], fore_idx[tr_j : tr_k] , m, m)
avg_image_dataset_1 = torch.stack(avg_image_dataset_1, axis = 0)
# avg_image_dataset_1 = (avg - torch.mean(avg, keepdims= True, axis = 0)) / torch.std(avg, keepdims= True, axis = 0)
# print(torch.mean(avg_image_dataset_1, keepdims= True, axis = 0))
# print(torch.std(avg_image_dataset_1, keepdims= True, axis = 0))
print("=="*40)
test_dataset = torch.stack(test_dataset, axis = 0)
# test_dataset = (avg - torch.mean(avg, keepdims= True, axis = 0)) / torch.std(avg, keepdims= True, axis = 0)
# print(torch.mean(test_dataset, keepdims= True, axis = 0))
# print(torch.std(test_dataset, keepdims= True, axis = 0))
print("=="*40)
x1 = (avg_image_dataset_1).numpy()
y1 = np.array(labels_1)
plt.scatter(x1[y1==0,0], x1[y1==0,1], label='class 0')
plt.scatter(x1[y1==1,0], x1[y1==1,1], label='class 1')
plt.scatter(x1[y1==2,0], x1[y1==2,1], label='class 2')
plt.legend()
plt.title("dataset4 CIN with alpha = 1/"+str(m))
x1 = (test_dataset).numpy() / m
y1 = np.array(labels)
plt.scatter(x1[y1==0,0], x1[y1==0,1], label='class 0')
plt.scatter(x1[y1==1,0], x1[y1==1,1], label='class 1')
plt.scatter(x1[y1==2,0], x1[y1==2,1], label='class 2')
plt.legend()
plt.title("test dataset4")
test_dataset[0:10]/m
test_dataset = test_dataset/m
test_dataset[0:10]
class MosaicDataset(Dataset):
    """MosaicDataset dataset."""

    def __init__(self, mosaic_list_of_images, mosaic_label):
        """
        Args:
            mosaic_list_of_images: the mosaic data points.
            mosaic_label: the corresponding foreground-class labels.
        """
        self.mosaic = mosaic_list_of_images
        self.label = mosaic_label
        # self.fore_idx = fore_idx

    def __len__(self):
        return len(self.label)

    def __getitem__(self, idx):
        return self.mosaic[idx], self.label[idx]  # , self.fore_idx[idx]
avg_image_dataset_1[0].shape
avg_image_dataset_1[0]
batch = 200
traindata_1 = MosaicDataset(avg_image_dataset_1, labels_1 )
trainloader_1 = DataLoader( traindata_1 , batch_size= batch ,shuffle=True)
testdata_1 = MosaicDataset(avg_image_dataset_1, labels_1 )
testloader_1 = DataLoader( testdata_1 , batch_size= batch ,shuffle=False)
testdata_11 = MosaicDataset(test_dataset, labels )
testloader_11 = DataLoader( testdata_11 , batch_size= batch ,shuffle=False)
class Whatnet(nn.Module):
    def __init__(self):
        super(Whatnet, self).__init__()
        self.linear1 = nn.Linear(2,3)
        # self.linear2 = nn.Linear(50,10)
        # self.linear3 = nn.Linear(10,3)
        torch.nn.init.xavier_normal_(self.linear1.weight)
        torch.nn.init.zeros_(self.linear1.bias)

    def forward(self, x):
        # x = F.relu(self.linear1(x))
        # x = F.relu(self.linear2(x))
        x = self.linear1(x)
        return x
def calculate_loss(dataloader, model, criter):
    model.eval()
    r_loss = 0
    with torch.no_grad():
        for i, data in enumerate(dataloader, 0):
            inputs, labels = data
            inputs, labels = inputs.to("cuda"), labels.to("cuda")
            outputs = model(inputs)
            loss = criter(outputs, labels)
            r_loss += loss.item()
    return r_loss/(i+1)

def test_all(number, testloader, net):
    correct = 0
    total = 0
    out = []
    pred = []
    with torch.no_grad():
        for data in testloader:
            images, labels = data
            images, labels = images.to("cuda"), labels.to("cuda")
            out.append(labels.cpu().numpy())
            outputs = net(images)
            _, predicted = torch.max(outputs.data, 1)
            pred.append(predicted.cpu().numpy())
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    pred = np.concatenate(pred, axis=0)
    out = np.concatenate(out, axis=0)
    print("unique out: ", np.unique(out), "unique pred: ", np.unique(pred))
    print("correct: ", correct, "total ", total)
    print('Accuracy of the network on the %d test dataset %d: %.2f %%' % (total, number, 100 * correct / total))

def train_all(trainloader, ds_number, testloader_list):
    print("--"*40)
    print("training on data set ", ds_number)

    torch.manual_seed(12)
    net = Whatnet().double()
    net = net.to("cuda")

    criterion_net = nn.CrossEntropyLoss()
    optimizer_net = optim.Adam(net.parameters(), lr=0.001)  # , momentum=0.9)

    acti = []
    loss_curi = []
    epochs = 1000

    running_loss = calculate_loss(trainloader, net, criterion_net)
    loss_curi.append(running_loss)
    print('epoch: [%d ] loss: %.3f' % (0, running_loss))

    for epoch in range(epochs):  # loop over the dataset multiple times
        ep_lossi = []
        running_loss = 0.0
        net.train()
        for i, data in enumerate(trainloader, 0):
            # get the inputs
            inputs, labels = data
            inputs, labels = inputs.to("cuda"), labels.to("cuda")

            # zero the parameter gradients
            optimizer_net.zero_grad()

            # forward + backward + optimize
            outputs = net(inputs)
            loss = criterion_net(outputs, labels)

            # print statistics
            running_loss += loss.item()
            loss.backward()
            optimizer_net.step()

        running_loss = calculate_loss(trainloader, net, criterion_net)
        if(epoch % 200 == 0):
            print('epoch: [%d] loss: %.3f' % (epoch + 1, running_loss))
        loss_curi.append(running_loss)  # loss per epoch
        if running_loss <= 0.05:
            print('epoch: [%d] loss: %.3f' % (epoch + 1, running_loss))
            break
    print('Finished Training')

    correct = 0
    total = 0
    with torch.no_grad():
        for data in trainloader:
            images, labels = data
            images, labels = images.to("cuda"), labels.to("cuda")
            outputs = net(images)
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()

    print('Accuracy of the network on the %d train images: %.2f %%' % (total, 100 * correct / total))

    for i, j in enumerate(testloader_list):
        test_all(i+1, j, net)

    print("--"*40)
    return loss_curi
train_loss_all=[]
testloader_list= [ testloader_1, testloader_11]
train_loss_all.append(train_all(trainloader_1, 1, testloader_list))
%matplotlib inline
for i, j in enumerate(train_loss_all):
    plt.plot(j, label="dataset "+str(i+1))
plt.xlabel("Epochs")
plt.ylabel("Training_loss")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
|
_____no_output_____
|
MIT
|
AAAI/Learnability/CIN/Linear/ds2/size_100/synthetic_type2_Linear_m_50.ipynb
|
lnpandey/DL_explore_synth_data
|
Introduction to Convolutional Neural Networks (CNNs) in PyTorch

Representing images digitally

While convolutional neural networks (CNNs) see a wide variety of uses, they were originally designed for images, and CNNs are still most commonly used for vision-related tasks. For today, we'll primarily be focusing on CNNs for images. Before we dive into convolutions and neural networks, it's worth prefacing with how images are represented by a computer, as this understanding will inform some of our design choices.

Previously, we saw an example of a digitized MNIST handwritten digit. Specifically, we represent it as an $H \times W$ table, with the value of each element storing the intensity of the corresponding pixel.

With a 2D representation as above, we can for the most part only efficiently represent grayscale images. What if we want color? There are many schemes for storing color, but one of the most common ones is the [RGB color model](https://en.wikipedia.org/wiki/RGB_color_model). In such a system, we store 3 tables of pixel intensities (each called a *channel*), one each for the colors red, green, and blue (hence RGB), resulting in an $H \times W \times 3$ tensor. Pixel values for a particular channel indicate how much of the corresponding color the image has at a particular location.

Let's load an image and look at different channels:
|
%matplotlib inline
import imageio
import matplotlib.pyplot as plt
# Read the image "./Figures/chapel.jpg" from the disk.
# Hint: use `im = imageio.imread(<Path to the image>)`.
im = imageio.imread("./Figures/chapel.jpg")

# Print the shape of the tensor
print(im.shape)

# Display the image
plt.imshow(im)
|
_____no_output_____
|
MIT
|
day2_student_notebook.ipynb
|
dukeplusds/mlwscv2002
|
We can see that the image we loaded has height and width of $620 \times 1175$, with 3 channels corresponding to RGB. We can easily slice out and view individual color channels:
|
# Extract the red channel of the above image.
im_red = im[:,:,0]

# Display the image
# Hint: To display the pixel values for a single channel, we can display the image using the gray-scale colormap
plt.imshow(im_red, cmap='gray')

# Repeat the above for the blue channel to visualize features represented in the blue color channel.
im_blue = im[:,:,2]
plt.imshow(im_blue, cmap='gray')
|
_____no_output_____
|
MIT
|
day2_student_notebook.ipynb
|
dukeplusds/mlwscv2002
|
While we have so far considered only 3-channel RGB images, there are many settings in which we may consider a different number of channels. For example, [hyperspectral imaging](https://en.wikipedia.org/wiki/Hyperspectral_imaging) uses a wide range of the electromagnetic spectrum to characterize a scene. Such modalities may have hundreds of channels or more. Additionally, we'll soon see that certain intermediate representations in a CNN can be considered images with many channels.

Convolutions

Convolutional neural networks (CNNs) are a class of neural networks that have convolutional layers. CNNs are particularly effective for data that have spatial structures and correlations (e.g. images). We'll focus on CNNs applied to images in this tutorial.

Recall that a multilayer perceptron (MLP) is entirely composed of fully connected layers, which are each a matrix multiply operation (and addition of a bias) followed by a non-linearity (e.g. sigmoid, ReLU). A convolutional layer is similar, except the matrix multiply operation is replaced with a convolution operation (in practice a cross-correlation). Note that a CNN need not be entirely composed of convolutional layers; in fact, many popular CNN architectures end in fully connected layers.

As before, since we're building neural networks, let's start by loading PyTorch. We'll find NumPy useful as well, so we'll also import that here.
|
import numpy as np

# PyTorch Imports
import torch
import torch.nn.functional as F
|
_____no_output_____
|
MIT
|
day2_student_notebook.ipynb
|
dukeplusds/mlwscv2002
|
Review: Fully connected layer

In a fully connected layer, the input $x \in \mathbb{R}^{M \times C_{in}}$ is a vector (or, rather, a batch of vectors), where $M$ is the minibatch size and $C_{in}$ is the dimensionality of the input. We first matrix multiply the input $x$ by a weight matrix $W$. This weight matrix has dimensions $W \in \mathbb{R}^{C_{in} \times C_{out}}$, where $C_{out}$ is the number of output units. We then add a bias for each output, which we do by adding $b \in \mathbb{R}^{C_{out}}$. The output $y \in \mathbb{R}^{M \times C_{out}}$ of the fully connected layer is then:

\begin{align*}
y = \text{ReLU}(x W + b)
\end{align*}

Remember, the values of $W$ and $b$ are variables that we are trying to learn for our model. Below we have a visualization of what the matrix operation looks like (bias term and activation function omitted).
|
# Create a random flat input vector
x_fc = torch.randn(100, 1024)

# Create weight matrix variable
W = torch.randn(1024, 10)/np.sqrt(1024)

# Create bias variable
b = torch.zeros(10, requires_grad=True)

# Use `W` and `b` to apply a fully connected layer.
# Store the output in variable `y`.
# Don't forget to apply the activation function.
y = F.relu(torch.matmul(x_fc, W) + b)

# Print input/output shape
print("Input shape: {}".format(x_fc.shape))
print("Output shape: {}".format(y.shape))
|
_____no_output_____
|
MIT
|
day2_student_notebook.ipynb
|
dukeplusds/mlwscv2002
|
Convolutional layer

In a convolutional layer, we convolve the input $x$ with a convolutional kernel (aka filter), which we also call $W$, producing output $y$:

\begin{align*}
y = \text{ReLU}(W*x + b)
\end{align*}

In the context of CNNs, the output $y$ is often referred to as feature maps. As with a fully connected layer, the goal is to learn $W$ and $b$ for our model.

Unlike the input of a fully connected layer, which is $x \in \mathbb{R}^{M\times C_{in}}$, the dimensionality of an image input is 4D: $x \in \mathbb{R}^{M \times C_{in} \times H_{in} \times W_{in}}$, where $M$ is still the batch size, $C_{in}$ is the number of channels of the input (e.g. 3 for RGB), and $H_{in}$ and $W_{in}$ are the height and width of the image.

The weight parameter $W$ is also different in a convolutional layer. Unlike the 2-D weight matrix for fully connected layers, the kernel is 4-D with dimensions $W \in \mathbb{R}^{C_{out} \times C_{in} \times H_K \times W_K}$, where $H_K$ and $W_K$ are the kernel height and width, respectively. A common choice for $H_K$ and $W_K$ is $H_K = W_K = 3$ or $5$, but this tends to vary depending on the architecture.

Convolving the input with the kernel and adding a bias then gives an output $y \in \mathbb{R}^{M \times C_{out} \times H_{out} \times W_{out}}$. If we use "same" padding and a stride of $1$ in our convolution (more on this later), our output will have the same spatial dimensions as the input: $H_{out}=H_{in}$ and $W_{out}=W_{in}$.

If you're having trouble visualizing this operation in 4D, it's easier to think about for a single member of the minibatch, one convolutional kernel at a time. Consider a stack of $C_{out}$ kernels, each of which is 3D ($C_{in} \times H_K \times W_K$). This 3D volume is then slid across the input (which is also 3D: $C_{in} \times H_{in} \times W_{in}$) in the two spatial dimensions (along $H_{in}$ and $W_{in}$). The outputs of the multiplication of the kernel and the input at every location create a single feature map that is $H_{out} \times W_{out}$. Stacking the feature maps generated by each kernel gives the 3D output $C_{out} \times H_{out} \times W_{out}$. Repeat the process for all $M$ inputs in the minibatch, and we get a 4D output $M \times C_{out} \times H_{out} \times W_{out}$.

A few more things to note:

- Notice the ordering of the dimensions of the input (batch, channels in, height, width). This is commonly referred to as $NCHW$ ordering. Many other languages and libraries (e.g. MATLAB, TensorFlow, the image example at the beginning of this notebook) instead default to the slightly different $NHWC$ ordering. PyTorch defaults to $NCHW$, as it is more efficient computationally, especially with CUDA.
- An additional argument for the convolution is the *stride*, which controls how far we slide the convolutional filter as we move it along the input image. The convolutional operator, from its signal processing roots, by default considers a stride length of 1 in all dimensions, but in some situations we would like to consider strides of more than 1 (or even less than 1). More on this later.
- In the context of signal processing, convolutions usually result in outputs that are larger than the input size, which results from when the kernel "hangs off the edge" of the input on both sides. This might not always be desirable. We can control this by controlling the padding of the input. Typically, we pad the input to ensure the output has the same spatial dimensions as the input (assuming a stride of 1); this makes it easier for us to keep track of what the size of our model is.

Let's implement this convolution operator in code. There is a convolution implementation in `torch.nn.functional`, which we use here.
|
# Create a random 4D tensor. Use the NCHW format, where N = 100, C = 3, H = W = 32
x_cnn = torch.randn(100, 3, 32, 32)

# Create convolutional kernel variable (C_out, C_in, H_k, W_k)
# Note: C_out = 16 and a 3x3 kernel are arbitrary choices here
W1 = torch.randn(16, 3, 3, 3)/np.sqrt(3*3*3)

# Create a bias variable of size C_out
b1 = torch.zeros(16, requires_grad=True)

# Apply the convolutional layer with relu activation
# padding=1 gives "same" spatial dimensions for a 3x3 kernel with stride 1
conv1 = F.relu(F.conv2d(x_cnn, W1, bias=b1, padding=1))

# Print input/output shape
print("Input shape: {}".format(x_cnn.shape))
print("Convolution output shape: {}".format(conv1.shape))
|
_____no_output_____
|
MIT
|
day2_student_notebook.ipynb
|
dukeplusds/mlwscv2002
|
Just like in an MLP, we can stack multiple of these convolutional layers. In the *Representing images digitally* section, we briefly mentioned considering images with more than 3 channels. Observe that the input to the second layer (i.e. the output of the first layer) can be viewed as an "image" with $C_{out}$ channels. Instead of each channel representing a color content though, each channel effectively represents how much the original input image activated a particular convolutional kernel. Given $C_{out}$ kernels that are each $C_{in} \times H_K \times W_K$, this results in $C_{out}$ channels for the output of the convolution. Note that we need to change the dimensions of the convolutional kernel such that its input channels match the number of output channels of the previous layer:
|
# Create the second convolutional layer by defining a random `W2` and `b2`
# C_in must match the 16 output channels of conv1; C_out = 32 so conv2 can be flattened to 32*32*32 later
W2 = torch.randn(32, 16, 3, 3)/np.sqrt(16*3*3)
b2 = torch.zeros(32, requires_grad=True)

# Apply 2nd convolutional layer to the output of the first convolutional layer
conv2 = F.relu(F.conv2d(conv1, W2, bias=b2, padding=1))

# Print output shape
print("Second convolution output shape: {}".format(conv2.shape))
|
_____no_output_____
|
MIT
|
day2_student_notebook.ipynb
|
dukeplusds/mlwscv2002
|
In fact, we typically perform these convolution operations many times. Popular CNN architectures for image analysis today can be 100+ layers.

Reshaping

You'll commonly find yourself needing to reshape tensors while building CNNs. The PyTorch function for doing so is `view()`. Anyone familiar with NumPy will find it very similar to `np.reshape()`. Importantly, the new dimensions must be chosen so that it is possible to rearrange the input into the shape of the output (i.e. the total number of elements must be the same). As with NumPy, you can optionally replace one of the dimensions with a `-1`, which tells `torch` to infer the missing dimension.
|
M = torch.zeros(4, 3)

M2 = M.view(1, 1, 12)    # 4*3 = 12 elements reshaped to 1x1x12
M3 = M.view(2, 1, 2, 3)  # the same 12 elements as 2x1x2x3
M4 = M.view(-1, 2, 3)    # -1 tells torch to infer the first dimension (here 2)
M5 = M.view(-1)          # flatten to a single dimension of 12
|
_____no_output_____
|
MIT
|
day2_student_notebook.ipynb
|
dukeplusds/mlwscv2002
|
To get an idea of why reshaping is needed in a CNN, let's look at a diagram of a simple CNN. First of all, the CNN expects a 4D input, with the dimensions corresponding to `[batch, channel, height, width]`. Your data may not come in this format, so you may have to reshape it yourself.
|
x_flat = torch.randn(100, 1024)

# Reshape flat input image into a 4D batched image input
# Hint: Use batch=100, height=width=32.
# 1024 = 1*32*32, so the channel dimension is 1
x_reshaped = x_flat.view(100, 1, 32, 32)

# Print input shape
print(x_reshaped.shape)
|
_____no_output_____
|
MIT
|
day2_student_notebook.ipynb
|
dukeplusds/mlwscv2002
|
CNN architectures also commonly contain fully connected layers or a softmax, as we're often interested in classification. Both of these expect 2D inputs with dimensions `[batch, dim]`, so you have to "flatten" a CNN's 4D output to 2D. For example, to flatten the convolutional feature maps we created earlier:
|
# Flatten convolutional feature maps into a vector
h_flat = conv2.view(-1, 32*32*32)
# Print output shape
print(h_flat.shape)
|
_____no_output_____
|
MIT
|
day2_student_notebook.ipynb
|
dukeplusds/mlwscv2002
|
Pooling and striding

Almost all CNN architectures incorporate either pooling or striding. This is done for a number of reasons, including:

- Dimensionality reduction: pooling and striding operations reduce computational complexity by shrinking the number of values passed to the next layer. For example, a 2x2 maxpool reduces the size of the feature maps by a factor of 4.
- Translational invariance: oftentimes in computer vision, we'd prefer that shifting the input by a few pixels doesn't change the output. Pooling and striding reduce sensitivity to exact pixel locations.
- Increasing receptive field: by summarizing a window with a single value, subsequent convolutional kernels are seeing a wider swath of the original input image. For example, a max pool on some input followed by a 3x3 convolution results in a kernel "seeing" a 6x6 region instead of 3x3.

Pooling

The two most common forms of pooling are max pooling and average pooling. Both reduce values within a window to a single value, on a per-feature-map basis. Max pooling takes the maximum value of the window as the output value; average pooling takes the mean.
|
# Recreate the values in the pooling figure with shape [4,4].
# The figure's exact values are not reproduced in this dump, so the
# matrix below is an illustrative stand-in.
feature_map_fig = torch.tensor([[1., 3., 2., 4.],
                                [5., 6., 7., 8.],
                                [3., 2., 1., 0.],
                                [1., 2., 3., 4.]])
# Convert 2D matrix to a 4D tensor of shape [1,1,4,4].
fmap_fig = feature_map_fig.view(1, 1, 4, 4)
print("Feature map shape pre-pooling: {}".format(fmap_fig.shape))
# Apply max pool to fmap_fig
max_pool_fig = F.max_pool2d(fmap_fig, kernel_size=2)
print("\nMax pool")
print("Shape: {}".format(max_pool_fig.shape))
print(torch.squeeze(max_pool_fig))
# Apply average pool to fmap_fig
avg_pool_fig = F.avg_pool2d(fmap_fig, kernel_size=2)
print("\nAvg pool")
print("Shape: {}".format(avg_pool_fig.shape))
print(torch.squeeze(avg_pool_fig))
|
_____no_output_____
|
MIT
|
day2_student_notebook.ipynb
|
dukeplusds/mlwscv2002
|
Now we will apply max pool and average pool to the output of the convolutional layer `conv2`.
|
# Taking the output we've been working with so far, first print its current size
print("Shape of conv2 feature maps before pooling: {0}".format(conv2.shape))
# Apply max pool with size = 2 and then print new shape.
max_pool2 = F.max_pool2d(conv2, kernel_size=2)
print("Shape of conv2 feature maps after max pooling: {0}".format(max_pool2.shape))
# Apply average pool with size = 2 and then print new shape
avg_pool2 = F.avg_pool2d(conv2, kernel_size=2)
print("Shape of conv2 feature maps after avg pooling: {0}".format(avg_pool2.shape))
|
_____no_output_____
|
MIT
|
day2_student_notebook.ipynb
|
dukeplusds/mlwscv2002
|
Striding: One might expect that pixels in an image have high correlation with their neighboring pixels, so we can save computation by skipping positions while sliding the convolutional kernel. By default, a CNN slides across the input one pixel at a time, which we call a stride of 1. By instead striding by 2, we skip calculating 75% of the values of the output feature map, which yields a feature map that's half the size in each spatial direction. Note that while pooling is an operation done after the convolution, striding is part of the convolution operation itself. A worked check of the output-size arithmetic follows the code below.
|
# Since striding is part of the convolution operation, we'll start with the feature maps before the 2nd convolution
print("Shape of conv1 feature maps: {0}".format(conv1.shape))
# Apply 2nd convolutional layer, with striding of 2 (reusing the W2 and b2 defined above)
conv2_strided = F.conv2d(conv1, W2, bias=b2, stride=2, padding=1)
# Print output shape
print("Shape of conv2 feature maps with stride of 2: {0}".format(conv2_strided.shape))
|
_____no_output_____
|
MIT
|
day2_student_notebook.ipynb
|
dukeplusds/mlwscv2002
|
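As a quick check on the striding arithmetic described above, the spatial output size of a convolution follows a standard rule; the helper below (`conv_out_size` is our name, not a PyTorch function) computes it.

def conv_out_size(h, kernel, stride=1, padding=0):
    # Standard convolution output-size rule: floor((H + 2*P - K) / S) + 1
    return (h + 2 * padding - kernel) // stride + 1

print(conv_out_size(32, kernel=3, stride=1, padding=1))  # 32: size preserved
print(conv_out_size(32, kernel=3, stride=2, padding=1))  # 16: halved by the stride of 2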
Building a custom CNN: Let's revisit image classification, this time on the CIFAR-10 dataset used in the code below, with the following CNN as our classifier: $5 \times 5$ convolution -> $2 \times 2$ max pool -> $5 \times 5$ convolution -> $2 \times 2$ max pool -> fully connected to $\mathbb R^{256}$ -> fully connected to $\mathbb R^{10}$ (prediction). ReLU activation functions will be used to impose non-linearities. Remember, convolutions produce 4-D outputs, and fully connected layers expect 2-D inputs, so tensors must be reshaped when transitioning from one to the other. We can build this CNN with the components introduced before, but as with the logistic regression example, it may prove helpful to instead organize our model with an `nn.Module`.
|
import torch.nn as nn
import torch.nn.functional as F

# Important: Inherit the `nn.Module` class to define a PyTorch model
class CIFAR_CNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Step 1: Define the first convolutional layer (C_in=3, C_out=32, H_k=W_k=5, padding=2)
        self.conv1 = nn.Conv2d(3, 32, kernel_size=5, padding=2)
        # Step 2: Define the second convolutional layer (C_out=64, H_k=W_k=5, padding=2)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=5, padding=2)
        # Step 3: Define the first fully-connected layer with an output dimension of 256.
        # Input dimension: two rounds of 2x2 max-pooling shrink 32x32 inputs to 8x8,
        # and conv2 has 64 channels, so the flattened size is 64*8*8.
        self.fc1 = nn.Linear(64 * 8 * 8, 256)
        # Step 4: Define the second fully-connected layer with an output dimension of 10 (# of classes).
        self.fc2 = nn.Linear(256, 10)

    def forward(self, x):
        # Step 5: Define the forward pass using the layers from __init__:
        # Apply conv layer 1, activation, and max-pool
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        # Apply conv layer 2, activation, and max-pool
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        # Reshape to a 2D tensor for the fully-connected layer
        x = x.view(-1, 64 * 8 * 8)
        # Apply fc layer 1 and activation
        x = F.relu(self.fc1(x))
        # Apply fc layer 2
        output = self.fc2(x)
        return output
|
_____no_output_____
|
MIT
|
day2_student_notebook.ipynb
|
dukeplusds/mlwscv2002
|
Notice how our `nn.Module` contains several operations chained together. The code for submodule initialization, which creates all the stateful parameters associated with each operation, is placed in the `__init__()` function, where it is run once during object instantiation. Meanwhile, the code describing the forward pass, which is used every time the model is run, is placed in the `forward()` method. Printing an instantiated model shows the model summary:
|
model = CIFAR_CNN()
print(model)
|
_____no_output_____
|
MIT
|
day2_student_notebook.ipynb
|
dukeplusds/mlwscv2002
|
We can drop this model into our logistic regression training code, with few modifications beyond changing the model itself.A few other changes:- CNNs expect a 4-D input, so we no longer have to reshape the images before feeding them to our neural network.- Since CNNs are a little more complex than models we've worked with before, we're going to increase the number of epochs (complete passes through the training data) during training.- We switch from a vanilla stochastic gradient descent optimizer to the [Adam](https://arxiv.org/abs/1412.6980) optimizer, which tends to do well for neural networks. Training the CNN
|
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import matplotlib.pyplot as plt
from torchvision import datasets, transforms
from tqdm.notebook import tqdm, trange

cifar_train = datasets.CIFAR10(root="./datasets/cifar-10/", train=True, transform=transforms.ToTensor(), download=True)
cifar_test = datasets.CIFAR10(root="./datasets/cifar-10/", train=False, transform=transforms.ToTensor(), download=True)
# Create the train and test data loaders.
train_loader = torch.utils.data.DataLoader(cifar_train, batch_size=100, shuffle=True)
test_loader = torch.utils.data.DataLoader(cifar_test, batch_size=100, shuffle=False)
# Create a loader identical to the training loader with a sample size of 8. This is to demonstrate
# how we display images. If we had used the train_loader, we would be looking at 100 images!
sample_loader = torch.utils.data.DataLoader(cifar_train, batch_size=8, shuffle=True)

# Define an image viewing function
def imshow(img):
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()

# List out the classes for the dataset in order from 0 to 9 to correspond to the integer labels
classes = ('plane', 'car', 'bird', 'cat',
           'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
# Take a sample of 1 batch from the sample loader
dataiter = iter(sample_loader)
images, labels = next(dataiter)
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(8)))

# Instantiate model
model = CIFAR_CNN()
# Loss and Optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters())

track_loss = []
# Iterate through train set minibatches
num_training_steps = 0
for epoch in trange(3):
    for images, labels in tqdm(train_loader):
        # Step 1: Zero out the gradients.
        optimizer.zero_grad()
        # Step 2: Forward pass.
        y = model(images)
        # Step 3: Compute the loss using `criterion`.
        loss = criterion(y, labels)
        # Step 4: Backward pass.
        loss.backward()
        # Step 5: Update the parameters.
        optimizer.step()
        # Step 6: Track the loss value at every 100th step.
        if num_training_steps % 100 == 0:
            # Append loss to the list.
            track_loss.append(loss.item())
        num_training_steps += 1
|
_____no_output_____
|
MIT
|
day2_student_notebook.ipynb
|
dukeplusds/mlwscv2002
|
Let's plot the training loss we tracked
|
# Plot the tracked training loss (recorded every 100 steps above)
plt.plot(track_loss)
plt.xlabel('Training step (x100)')
plt.ylabel('Loss')
plt.title('Training loss')
plt.show()
|
_____no_output_____
|
MIT
|
day2_student_notebook.ipynb
|
dukeplusds/mlwscv2002
|
Testing the trained model
|
## Testing
correct = 0
total = len(cifar_test)
with torch.no_grad():
    # Iterate through test set minibatches
    for images, labels in tqdm(test_loader):
        # Step 1: Forward pass to get the predicted logits
        y = model(images)
        # Step 2: Compute the predicted labels from `y`.
        predictions = torch.argmax(y, dim=1)
        # Step 3: Count correctly predicted samples, maintaining the count in `correct`.
        correct += torch.sum((predictions == labels).float()).item()

print('Test accuracy: {}'.format(correct/total))
|
_____no_output_____
|
MIT
|
day2_student_notebook.ipynb
|
dukeplusds/mlwscv2002
|
If you are running this notebook on a CPU, training this CNN might take a while. On the other hand, if you use a GPU, this model should train in seconds. This is why we usually prefer to use GPUs when we have them. Torchvision Datasets and transforms: As any experienced ML practitioner will say, data wrangling is often half (sometimes even 90%) of the battle when building a model. Often, we have to write significant code to handle downloading, organizing, formatting, shuffling, pre-processing, augmenting, and batching examples. For popular datasets, we'd like to standardize data handling so that the comparisons we make are specific to the models themselves. Enter [Torchvision](https://pytorch.org/vision/stable/index.html). Torchvision includes easy-to-use APIs for downloading and loading many popular vision datasets. We've previously seen this in action for downloading the MNIST dataset:
|
from torchvision import datasets, transforms
# The original cell constructed CIFAR10 here despite the `mnist_train` name;
# corrected to MNIST to match the surrounding text.
mnist_train = datasets.MNIST(root="./datasets", train=True, transform=transforms.ToTensor(), download=True)
|
_____no_output_____
|
MIT
|
day2_student_notebook.ipynb
|
dukeplusds/mlwscv2002
|
Of course, there are [many more](https://pytorch.org/vision/stable/datasets.html). Currently, datasets for image classification (e.g. MNIST, CIFAR, ImageNet), object detection (VOC, COCO, Cityscapes), and video action recognition (UCF101, Kinetics) are included. For formatting, pre-processing, and augmenting, [transforms](https://pytorch.org/vision/stable/transforms.html) can come in handy. Again, we've seen this before (see above), when we used a transform to convert the MNIST data from PIL images to PyTorch tensors. However, transforms can be used for much more. Preprocessing steps like data whitening are common before feeding the data into the model. Also, in many cases, we use data augmentations to artificially inflate our dataset and learn invariances. Transforms are a versatile tool for all of these; a short sketch follows the model summary below. Leveraging popular convolutional neural networks: While you certainly can build your own custom CNNs like we did above, more often than not, it's better to use one of the popular existing architectures. The Torchvision documentation has a [list of supported CNNs](https://pytorch.org/vision/stable/models.html), as well as some performance characteristics. There are a number of reasons for using one of these CNNs instead of designing your own. First, for image datasets larger and more complex than CIFAR and MNIST (which is basically all of them), a fair amount of network depth and width is often necessary. For example, some of the popular CNNs can be over 100 layers deep, with several tricks and details beyond what we've covered in this notebook. Coding all of this yourself has a high potential for error, especially when you're first getting started. Instead, you can create the CNN architecture using Torchvision in a couple of lines:
|
import torchvision.models as models
resnet18 = models.resnet18()
print(resnet18)
|
_____no_output_____
|
MIT
|
day2_student_notebook.ipynb
|
dukeplusds/mlwscv2002
|
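As a small illustration of the transforms discussed above (the specific augmentation choices here are ours, for demonstration only), a typical training pipeline chains augmentation, tensor conversion, and normalization:

from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),      # simple augmentation
    transforms.ToTensor(),                  # PIL image -> float tensor in [0, 1]
    transforms.Normalize((0.5, 0.5, 0.5),   # per-channel means (illustrative values)
                         (0.5, 0.5, 0.5)),  # per-channel standard deviations
])
# Pass this as `transform=` when constructing a torchvision dataset.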
GPU info
|
# DeviceInfo (presumably imported earlier from this repo's model code) holds per-SM hardware limits
gtx950 = DeviceInfo()
gtx950.sm_num = 6
gtx950.sharedmem_per_sm = 49152
gtx950.reg_per_sm = 65536
gtx950.maxthreads_per_sm = 2048
|
_____no_output_____
|
MIT
|
mem_mem/t2-cke.ipynb
|
3upperm2n/trans_kernel_model
|
single stream info
|
data_size = 23000
trace_file = './1cke/trace_' + str(data_size) + '.csv'
df_trace = trace2dataframe(trace_file) # read the trace to the dataframe
df_trace
df_single_stream = model_param_from_trace_v1(df_trace)
df_single_stream.head(20)
df_s1 = reset_starting(df_single_stream)
df_s1
|
_____no_output_____
|
MIT
|
mem_mem/t2-cke.ipynb
|
3upperm2n/trans_kernel_model
|
running 2cke case
|
stream_num = 2
df_cke_list = []
for x in range(stream_num):
    df_cke_list.append(df_s1.copy(deep=True))
df_cke_list[0]
df_cke_list[1]
H2D_H2D_OVLP_TH = 3.158431
for i in range(1, stream_num):
    # compute the time for the init data transfer
    stream_startTime = find_whentostart_comingStream(df_cke_list[i-1], H2D_H2D_OVLP_TH)
    print('stream_startTime : {}'.format(stream_startTime))
    df_cke_list[i].start += stream_startTime
    df_cke_list[i].end += stream_startTime
df_cke_list[0]
df_cke_list[1]
|
_____no_output_____
|
MIT
|
mem_mem/t2-cke.ipynb
|
3upperm2n/trans_kernel_model
|
check whether there is h2d overlapping
|
prev_stm_h2ds_start, prev_stm_h2ds_end = find_h2ds_timing(df_cke_list[0])
print("prev stream h2ds : {} - {}".format(prev_stm_h2ds_start, prev_stm_h2ds_end))
curr_stm_h2ds_start, curr_stm_h2ds_end = find_h2ds_timing(df_cke_list[1])
print("curr stream h2ds : {} - {}".format(curr_stm_h2ds_start, curr_stm_h2ds_end))
if prev_stm_h2ds_start <= curr_stm_h2ds_start < prev_stm_h2ds_end:
    h2ds_ovlp_between_stream = True
else:
    h2ds_ovlp_between_stream = False
print("h2ds_ovlp_between_stream : {}".format(h2ds_ovlp_between_stream))
|
h2ds_ovlp_between_stream : False
|
MIT
|
mem_mem/t2-cke.ipynb
|
3upperm2n/trans_kernel_model
|
check kernel overlapping
|
prev_stm_kern_start, prev_stm_kern_end = find_kern_timing(df_cke_list[0])
print("prev stream kern : {} - {}".format(prev_stm_kern_start, prev_stm_kern_end))
curr_stm_kern_start, curr_stm_kern_end = find_kern_timing(df_cke_list[1])
print("curr stream kern : {} - {}".format(curr_stm_kern_start, curr_stm_kern_end))
if prev_stm_kern_start <= curr_stm_kern_start < prev_stm_kern_end:
    kern_ovlp_between_stream = True
else:
    kern_ovlp_between_stream = False
print("kern_ovlp_between_stream : {}".format(kern_ovlp_between_stream))
|
kern_ovlp_between_stream : True
|
MIT
|
mem_mem/t2-cke.ipynb
|
3upperm2n/trans_kernel_model
|
use cke model if kern_ovlp_between_stream is true
|
# get the overlapping kernel info from both streams
kernel_ = model_cke_from_same_kernel(gtx950, df_trace, )
|
_____no_output_____
|
MIT
|
mem_mem/t2-cke.ipynb
|
3upperm2n/trans_kernel_model
|
Reflect Tables into SQLAlchemy ORM
|
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# We can view all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
|
_____no_output_____
|
ADSL
|
sql_alchemy.ipynb
|
Yuva38/sqlalchemy-challenge
|
Exploratory Climate Analysis using pandas
|
# Imports used in the analysis below
import datetime as dt
import pandas as pd
import matplotlib.pyplot as plt

# Design a query to retrieve the last 12 months of precipitation data and plot the results
# Calculate the date 1 year ago from the last data point in the database
# Perform a query to retrieve the data and precipitation scores
# Save the query results as a Pandas DataFrame and set the index to the date column
# Sort the dataframe by date
# Use Pandas Plotting with Matplotlib to plot the data
weather_data = pd.read_sql("SELECT * FROM measurement", engine)
weather_data.head()
# Latest Date
latest_date = session.query(Measurement.date).order_by(Measurement.date.desc()).first().date
latest_date
end_date = latest_date
end_date
start_date = dt.datetime.strptime(end_date, '%Y-%m-%d') - dt.timedelta(days=365)
start_date
start_date = start_date.strftime('%Y-%m-%d')  # %Y gives a 4-digit year (%y would give only 2 digits)
start_date
start_date = "2016-08-23"
end_date = "2017-08-23"
weather_data_one_year = weather_data[weather_data["date"].between(start_date, end_date)]
weather_data_one_year.head()
len(weather_data_one_year)
precipitation_data = weather_data_one_year[["prcp", "date"]]
precipitation_data.set_index('date', inplace=True)
# Sort the dataframe by date
precipitation_data_sorted = precipitation_data.sort_values('date', ascending=True )
precipitation_data_sorted.head()
# Use Pandas Plotting with Matplotlib to plot the data
# Rotate the xticks for the dates
precipitation_chart = precipitation_data_sorted.plot(kind = "line",grid=True, figsize=(10,6), rot=30, x_compat=True, fontsize=12, title = "Precipitation data for one year")
precipitation_chart.set_xlabel("Date")
precipitation_chart.set_ylabel("Precipitation")
plt.show()
# Use Pandas to calculate the summary statistics for the precipitation data
precipitation_data_sorted.describe()
# Design a query to show how many stations are available in this dataset?
station_data = pd.read_sql("SELECT * FROM station", engine)
station_data
station_data["station"].count()
# What are the most active stations? (i.e. what stations have the most rows)?
# List the stations and the counts in descending order.
weather_data["station"].value_counts()
weather_data_station_counts = weather_data["station"].value_counts()
# The station with maximum number of temperature observations
active_station = weather_data_station_counts.index[0]
active_station
# Using the station id from the previous query, calculate the lowest temperature recorded,
# highest temperature recorded, and average temperature of the most active station?
weather_data_active_station = weather_data.loc[(weather_data["station"] == active_station), :]
Lowest_temperature = weather_data_active_station["tobs"].min()
Highest_temperature = weather_data_active_station["tobs"].max()
Average_temperature = weather_data_active_station["tobs"].mean()
print(f"For the most active station The lowest temperature, The Highest temperature, The Average temperature is {Lowest_temperature} , {Highest_temperature}, {Average_temperature}")
# Choose the station with the highest number of temperature observations.
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
start_date = "2016-08-23"
end_date = "2017-08-23"
weather_data_one_year = weather_data[weather_data["date"].between(start_date, end_date)]
weather_data_active_station_one_year = weather_data_one_year.loc[(weather_data_one_year["station"] == active_station), :]
temperature_data = weather_data_active_station_one_year[["tobs", "date"]]
x_data = temperature_data["tobs"]
plt.hist(x_data, 12, label = "tobs")
plt.xlabel('Temperature')
plt.ylabel('Frequency')
plt.legend(loc=1, prop={'size': 14})
plt.show()
# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d'
# and return the minimum, average, and maximum temperatures for that range of dates
def calc_temps(start_date, end_date):
    """TMIN, TAVG, and TMAX for a list of dates.

    Args:
        start_date (string): A date string in the format %Y-%m-%d
        end_date (string): A date string in the format %Y-%m-%d

    Returns:
        TMIN, TAVG, and TMAX
    """
    return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
        filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()

# function usage example
print(calc_temps('2012-02-28', '2012-03-05'))
# Use your previous function `calc_temps` to calculate the tmin, tavg, and tmax
# for your trip using the previous year's data for those same dates
# (the function defined above is reused directly; no need to redefine it)
print(calc_temps('2017-02-28', '2017-03-05'))
trip_results = calc_temps('2017-02-28', '2017-03-05')
trip_results
# Plot the results from your previous query as a bar chart.
# Use "Trip Avg Temp" as your Title
# Use the average temperature for the y value
# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)
trip_df = pd.DataFrame(trip_results, columns=['Min Temp', 'Avg Temp', 'Max Temp'])
avg_temp = trip_df['Avg Temp']
min_max_temp = trip_df.iloc[0]['Max Temp'] - trip_df.iloc[0]['Min Temp']
temp_chart = avg_temp.plot(kind='bar', yerr=min_max_temp, grid = True, figsize=(6,8), alpha=0.5, color='coral')
temp_chart.set_title("Trip Avg Temp", fontsize=20)
temp_chart.set_ylabel("Temp (F)")
plt.xticks([])
plt.show()
# Calculate the total amount of rainfall per weather station for your trip dates using the previous year's matching dates.
# Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation
trip_start_date = "2017-02-28"
trip_end_date = "2017-03-05"  # zero-padded to match the date format in the data
weather_data_one_year_trip = weather_data_one_year[weather_data_one_year["date"].between(trip_start_date, trip_end_date)]
weather_data_one_year_trip_per_station = weather_data_one_year_trip.groupby("station")
weather_data_one_year_trip_per_station["prcp"].sum()
|
_____no_output_____
|
ADSL
|
sql_alchemy.ipynb
|
Yuva38/sqlalchemy-challenge
|
Tutorial - Time Series Forecasting - Autoregression (AR). The goal is to forecast time series with the Autoregression (AR) approach: 1) JetRail Commuter, 2) Air Passengers, 3) Function Autoregression with Air Passengers, and 4) Function Autoregression with Wine Sales. References: Jason Brownlee - https://machinelearningmastery.com/time-series-forecasting-methods-in-python-cheat-sheet/
|
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import datetime
import warnings
warnings.filterwarnings("ignore")
# Load File
url = 'https://raw.githubusercontent.com/tristanga/Machine-Learning/master/Data/JetRail%20Avg%20Hourly%20Traffic%20Data%20-%202012-2013.csv'
df = pd.read_csv(url)
df.info()
df.Datetime = pd.to_datetime(df.Datetime,format='%Y-%m-%d %H:%M')
df.index = df.Datetime
|
_____no_output_____
|
MIT
|
Time Series Analysis/Time Series Forecasting - Autoregression (AR)/Autoregression (AR).ipynb
|
shreejitverma/Data-Scientist
|
Autoregression (AR) Approach with JetRail: The autoregression (AR) method models the next step in the sequence as a linear function of the observations at prior time steps. The notation for the model involves specifying the order of the model p as a parameter to the AR function, e.g. AR(p). For example, AR(1) is a first-order autoregression model. The method is suitable for univariate time series without trend and seasonal components. A small simulated AR(1) example follows this section's code.
|
#Split Train Test
import math
total_size=len(df)
split = 10392 / 11856
train_size=math.floor(split*total_size)
train=df.head(train_size)
test=df.tail(len(df) -train_size)
from statsmodels.tsa.ar_model import AR  # note: AR is deprecated in newer statsmodels in favor of AutoReg
model = AR(train.Count)
fit1 = model.fit()
y_hat = test.copy()
y_hat['AR'] = fit1.predict(start=len(train), end=len(train)+len(test)-1, dynamic=False)
#Plotting data
plt.figure(figsize=(12,8))
plt.plot(train.index, train['Count'], label='Train')
plt.plot(test.index,test['Count'], label='Test')
plt.plot(y_hat.index,y_hat['AR'], label='AR')
plt.legend(loc='best')
plt.title("Autoregression (AR) Forecast")
plt.show()
|
_____no_output_____
|
MIT
|
Time Series Analysis/Time Series Forecasting - Autoregression (AR)/Autoregression (AR).ipynb
|
shreejitverma/Data-Scientist
|
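To make the AR(p) idea concrete, here is a minimal sketch (ours, not part of the original notebook) that simulates an AR(1) process, where each value is a linear function of the previous value plus noise:

import numpy as np

np.random.seed(0)
phi, c, n = 0.8, 1.0, 200              # AR(1) coefficient, constant, series length
y = np.zeros(n)
for t in range(1, n):
    # y_t = c + phi * y_{t-1} + eps_t
    y[t] = c + phi * y[t - 1] + np.random.normal(scale=0.5)
print(y[:5])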
RMSE Calculation
|
from sklearn.metrics import mean_squared_error
from math import sqrt
rms = sqrt(mean_squared_error(test.Count, y_hat.AR))
print('RMSE = '+str(rms))
|
RMSE = 28.635096626807453
|
MIT
|
Time Series Analysis/Time Series Forecasting - Autoregression (AR)/Autoregression (AR).ipynb
|
shreejitverma/Data-Scientist
|
Autoregression (AR) Approach with Air Passengers
|
# Subsetting
url = 'https://raw.githubusercontent.com/tristanga/Machine-Learning/master/Data/International%20Airline%20Passengers.csv'
df = pd.read_csv(url, sep =";")
df.info()
df.Month = pd.to_datetime(df.Month,format='%Y-%m')
df.index = df.Month
#df.head()
#Creating train and test set
import math
total_size=len(df)
train_size=math.floor(0.7*total_size) #(70% Dataset)
train=df.head(train_size)
test=df.tail(len(df) -train_size)
#train.info()
#test.info()
from statsmodels.tsa.ar_model import AR
# Create prediction table
y_hat = test.copy()
model = AR(train['Passengers'])
fit1 = model.fit()
y_hat['AR'] = fit1.predict(start=len(train), end=len(train)+len(test)-1, dynamic=False)
y_hat.describe()
plt.figure(figsize=(12,8))
plt.plot(train.index, train['Passengers'], label='Train')
plt.plot(test.index,test['Passengers'], label='Test')
plt.plot(y_hat.index,y_hat['AR'], label='AR')
plt.legend(loc='best')
plt.title("Autoregression (AR)")
plt.show()
from sklearn.metrics import mean_squared_error
from math import sqrt
rms = sqrt(mean_squared_error(test.Passengers, y_hat.AR))
print('RMSE = '+str(rms))
|
RMSE = 60.13838110500644
|
MIT
|
Time Series Analysis/Time Series Forecasting - Autoregression (AR)/Autoregression (AR).ipynb
|
shreejitverma/Data-Scientist
|
Function Autoregression (AR) Approach with variables
|
def AR_forecasting(mydf, colval, split):
    import math
    from statsmodels.tsa.ar_model import AR  # replaces the original, unused Holt import
    from sklearn.metrics import mean_squared_error
    from math import sqrt
    global y_hat, train, test
    total_size = len(mydf)
    train_size = math.floor(split * total_size)  # e.g. split=0.7 -> 70% train
    train = mydf.head(train_size)
    test = mydf.tail(len(mydf) - train_size)
    y_hat = test.copy()
    model = AR(train[colval])
    fit1 = model.fit()
    y_hat['AR'] = fit1.predict(start=len(train), end=len(train)+len(test)-1, dynamic=False)
    plt.figure(figsize=(12,8))
    plt.plot(train.index, train[colval], label='Train')
    plt.plot(test.index, test[colval], label='Test')
    plt.plot(y_hat.index, y_hat['AR'], label='AR')
    plt.legend(loc='best')
    plt.title("Autoregression (AR) Forecast")
    plt.show()
    rms = sqrt(mean_squared_error(test[colval], y_hat.AR))
    print('RMSE = ' + str(rms))

AR_forecasting(df, 'Passengers', 0.7)
|
_____no_output_____
|
MIT
|
Time Series Analysis/Time Series Forecasting - Autoregression (AR)/Autoregression (AR).ipynb
|
shreejitverma/Data-Scientist
|
Testing Function Autoregression (AR) Approach with Wine Dataset
|
url = 'https://raw.githubusercontent.com/tristanga/Data-Cleaning/master/Converting%20Time%20Series/Wine_Sales_R_Dataset.csv'
df = pd.read_csv(url)
df.info()
df.Date = pd.to_datetime(df.Date,format='%Y-%m-%d')
df.index = df.Date
AR_forecasting(df,'Sales',0.7)
|
_____no_output_____
|
MIT
|
Time Series Analysis/Time Series Forecasting - Autoregression (AR)/Autoregression (AR).ipynb
|
shreejitverma/Data-Scientist
|
import numpy as np
# Vector 1-D array
a = [1,2,3]
a = a + [1]
print(a)
# Numpy array 1-D
b = np.array([4,5,6])
b = np.append(b,[7])
A = np.array([[1,22,3],[4,5,6],[111,-11,33]])
B = np.array([[10,11,12],[13,14,15],[14,7,2.5]])
A.shape
total = np.sum(np.dot(A, B))  # renamed from `sum` to avoid shadowing the built-in
print(total)
total.dtype
C = np.array([[10,11,12],[13,14,15],[16,17,18]])
C
C.shape
K = np.array([[1,2,3],[1,2,3],[2,3,5]])
K.ndim
F = np.random.random(size=5)
F
L = np.random.rand(4,4,4)
L
L.shape
U = np.random.uniform(4.3,5.3,3)
print(U)
U.shape
Z_3D = np.array([
    [[1,2,3],[4,5,6],[7,8,9]],
    [[1,2,3],[4,5,6],[7,8,9]],
    [[1,2,3],[4,5,6],[7,8,9]],
    [[1,2,3],[4,5,6],[7,8,9]],
])
print(Z_3D)
print("Number of Dimensions",Z_3D.ndim)
print("Size of Array",Z_3D.size)
B = np.array([[
    [[1,2,3],[1,2,3]],
    [[1,2,3],[1,2,3]],
    [[1,2,3],[1,2,3]],
]])
print(B)
print(B.ndim)
B.shape
Z_3D = np.zeros_like([
    [[1,2,3],[4,5,6],[7,8,9]],
    [[1,2,3],[4,5,6],[7,8,9]],
    [[1,2,3],[4,5,6],[7,8,9]],
    [[1,2,3],[4,5,6],[7,8,9]],
])
print(Z_3D)
print("Number of Dimensions ",Z_3D.ndim)
print("Size of Array",Z_3D.size)
|
[[[0 0 0]
[0 0 0]
[0 0 0]]
[[0 0 0]
[0 0 0]
[0 0 0]]
[[0 0 0]
[0 0 0]
[0 0 0]]
[[0 0 0]
[0 0 0]
[0 0 0]]]
Number of Dimensions 3
Size of Array 36
|
MIT
|
numpy.ipynb
|
OmidMustafa/XOR_python
|
|
Copyright 2019 The TensorFlow Authors.
|
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
|
_____no_output_____
|
Apache-2.0
|
site/it/tutorials/quickstart/advanced.ipynb
|
justaverygoodboy/docs-l10n
|
Introduction to TensorFlow 2 for experts. View on TensorFlow.org; Run in Google Colab; View source on GitHub; Download notebook. Note: Our TensorFlow community has translated these documents. Since these translations are *best-effort*, there is no guarantee that they accurately and currently reflect the [official English documentation](https://www.tensorflow.org/?hl=en). If you have suggestions to improve this translation, please send a pull request to the GitHub repository [tensorflow/docs](https://github.com/tensorflow/docs). To volunteer to write or review community translations, contact the [mailing list [email protected]](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs). This is a [Google Colaboratory](https://colab.research.google.com/notebooks/welcome.ipynb) notebook file. Python programs are run directly in the browser, a great way to learn and use TensorFlow. To follow this tutorial, run the notebook file in Google Colab by clicking the button at the top of this page. 1. Inside Colab, connect to a Python runtime: at the top right of the menu bar, select *CONNECT*. 2. Run all the notebook code cells: select *Runtime* > *Run all*. Download and install the TensorFlow 2 package. Import TensorFlow into your code:
|
import tensorflow as tf
from tensorflow.keras.layers import Dense, Flatten, Conv2D
from tensorflow.keras import Model
|
_____no_output_____
|
Apache-2.0
|
site/it/tutorials/quickstart/advanced.ipynb
|
justaverygoodboy/docs-l10n
|
Load and prepare the [MNIST dataset](http://yann.lecun.com/exdb/mnist/).
|
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# Add a channels dimension
x_train = x_train[..., tf.newaxis]
x_test = x_test[..., tf.newaxis]
|
_____no_output_____
|
Apache-2.0
|
site/it/tutorials/quickstart/advanced.ipynb
|
justaverygoodboy/docs-l10n
|
Use `tf.data` to batch and shuffle the dataset:
|
train_ds = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(10000).batch(32)
test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32)
|
_____no_output_____
|
Apache-2.0
|
site/it/tutorials/quickstart/advanced.ipynb
|
justaverygoodboy/docs-l10n
|
Build the `tf.keras` model using the [Keras model subclassing API](https://www.tensorflow.org/guide/keras#model_subclassing):
|
class MyModel(Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv1 = Conv2D(32, 3, activation='relu')
        self.flatten = Flatten()
        self.d1 = Dense(128, activation='relu')
        self.d2 = Dense(10, activation='softmax')

    def call(self, x):
        x = self.conv1(x)
        x = self.flatten(x)
        x = self.d1(x)
        return self.d2(x)

# Create an instance of the model
model = MyModel()
|
_____no_output_____
|
Apache-2.0
|
site/it/tutorials/quickstart/advanced.ipynb
|
justaverygoodboy/docs-l10n
|
Choose an optimizer and a loss function for training:
|
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()
|
_____no_output_____
|
Apache-2.0
|
site/it/tutorials/quickstart/advanced.ipynb
|
justaverygoodboy/docs-l10n
|
Select metrics to measure the loss and accuracy of the model. These metrics accumulate values across epochs and then print the overall result.
|
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')
test_loss = tf.keras.metrics.Mean(name='test_loss')
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')
|
_____no_output_____
|
Apache-2.0
|
site/it/tutorials/quickstart/advanced.ipynb
|
justaverygoodboy/docs-l10n
|
Use `tf.GradientTape` to train the model:
|
@tf.function
def train_step(images, labels):
    with tf.GradientTape() as tape:
        predictions = model(images)
        loss = loss_object(labels, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))

    train_loss(loss)
    train_accuracy(labels, predictions)
|
_____no_output_____
|
Apache-2.0
|
site/it/tutorials/quickstart/advanced.ipynb
|
justaverygoodboy/docs-l10n
|
Test the model:
|
@tf.function
def test_step(images, labels):
    predictions = model(images)
    t_loss = loss_object(labels, predictions)

    test_loss(t_loss)
    test_accuracy(labels, predictions)

EPOCHS = 5

for epoch in range(EPOCHS):
    for images, labels in train_ds:
        train_step(images, labels)

    for test_images, test_labels in test_ds:
        test_step(test_images, test_labels)

    template = 'Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}'
    print(template.format(epoch+1,
                          train_loss.result(),
                          train_accuracy.result()*100,
                          test_loss.result(),
                          test_accuracy.result()*100))

    # Reset the metrics for the next epoch
    train_loss.reset_states()
    train_accuracy.reset_states()
    test_loss.reset_states()
    test_accuracy.reset_states()
|
_____no_output_____
|
Apache-2.0
|
site/it/tutorials/quickstart/advanced.ipynb
|
justaverygoodboy/docs-l10n
|
Test RoleMagnet's noise robustness using a pyramid-scheme model network with three types of noise (degree, edge weight, and node weight).
|
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
|
_____no_output_____
|
MIT
|
experiment_3.ipynb
|
Tirami-su/rolemagnet
|
Creating a graph: simulate a small 23-person pyramid-scheme organization, with a small amount of noise.
|
%matplotlib inline
plt.rcParams['figure.dpi'] = 150
plt.rcParams['figure.figsize'] = (4, 3)
G = nx.DiGraph()
G.add_weighted_edges_from([('11','s1',0.07),('12','s1',0.1),('13','s1',0.06),('14','s1',0.09),('15','s1',0.08),
('21','s2',0.07),('22','s2',0.1),('23','s2',0.06),('24','s2',0.09),('25','s2',0.08),('26','s2',0.1),
('31','s3',0.1),('32','s3',0.1),('33','s3',0.1),('34','s3',0.1),('35','s3',0.1),('36','s3',0.1),
('s1','mid',0.4),('s2','mid',0.5),('s3','mid',0.55),
('mid','boss',0.7),('mid','w1',0.72),
('w1','41',0.065),('w1','42',0.05),('w1','43',0.06),('w1','44',0.055),('w1','51',0.24),('w1','52',0.25)])
# net profit (balance) for each node
balance=[-0.07,0,-0.1,-0.06,-0.09,-0.08,
-0.07,0,-0.1,-0.06,-0.09,-0.08,-0.1,
-0.1,0.05,-0.1,-0.1,-0.1,-0.1,-0.1,
0.03,0.7,0,
0.065,0.05,0.06,0.055,0.24,0.25]
color=['lightgray','violet','lightgray','lightgray','lightgray','lightgray',
'lightgray','violet','lightgray','lightgray','lightgray','lightgray','lightgray',
'lightgray','violet','lightgray','lightgray','lightgray','lightgray','lightgray',
'orange','r','limegreen',
'c','c','c','c','pink','pink']
nx.draw_planar(G, with_labels=True, node_color=color, node_size=300, font_size=7)
plt.show()
|
_____no_output_____
|
MIT
|
experiment_3.ipynb
|
Tirami-su/rolemagnet
|
RoleMagnet
|
import rolemagnet as rm
vec,role,label=rm.role_magnet(G, balance=balance)
|
Embedding: 100.00% -
SOM shape: [11, 7]
Training SOM: 145
|
MIT
|
experiment_3.ipynb
|
Tirami-su/rolemagnet
|
Visualization: visualize the vector representations of the nodes, then reduce them to two dimensions with PCA and visualize again.
|
print('3D embedding result')
for i in range(len(G.nodes)):
    print(list(G.nodes)[i], '\t', vec[i])

from mpl_toolkits.mplot3d import Axes3D
coord = np.transpose(vec)
fig = plt.figure(figsize=(4,3))
ax = Axes3D(fig)
ax.scatter(coord[0], coord[1], coord[2], c=color, s=150)
plt.show()

# reduce to two dimensions again
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
reduced = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(vec))
print('2D embedding result')
for i in range(len(G.nodes)):
    print(list(G.nodes)[i], '\t', reduced[i])
coord = np.transpose(reduced)
plt.scatter(coord[0], coord[1], c=color, s=150, linewidths=0.8, edgecolors='k')
plt.title("RoleMagnet")
plt.show()
|
3D embedding result
11 [-4.656918 2.50780243 -2.60384377]
s1 [13.6955635 -7.36617524 0. ]
12 [-3.4945784 1.04689359 -3.71977681]
13 [-4.72707781 3.05246777 -2.23186608]
14 [-3.95635282 1.50288243 -3.34779913]
15 [-4.37087035 1.9886804 -2.97582145]
21 [-4.5643264 3.33294681 -2.60384377]
s2 [ 17.36800426 -11.11766841 0. ]
22 [-4.23934001 2.05242074 -3.71977681]
23 [-4.37037584 3.68885727 -2.23186608]
24 [-4.48608341 2.46724424 -3.34779913]
25 [-4.60589919 2.90127084 -2.97582145]
26 [-4.23934001 2.05242074 -3.71977681]
31 [-4.41890655 2.45218286 -3.71977681]
s3 [ 18.0140488 -11.44197277 1.8598884 ]
32 [-4.41890655 2.45218286 -3.71977681]
33 [-4.41890655 2.45218286 -3.71977681]
34 [-4.41890655 2.45218286 -3.71977681]
35 [-4.41890655 2.45218286 -3.71977681]
36 [-4.41890655 2.45218286 -3.71977681]
mid [29.54709281 2.7854159 1.11593304]
boss [-0.24243254 -5.20282019 26.03843765]
w1 [16.38327335 28.67939275 0. ]
41 [-3.55963367 -6.53538914 2.41785492]
42 [-3.69861603 -6.33304626 1.8598884 ]
43 [-3.59066646 -6.48454517 2.23186608]
44 [-3.63486767 -6.41754297 2.04587724]
51 [-3.02674862 -5.96176481 8.92746434]
52 [-3.03041617 -5.91086811 9.29944202]
|
MIT
|
experiment_3.ipynb
|
Tirami-su/rolemagnet
|
Evaluation: evaluate the clustering results using two metrics, Adjusted Rand Index and V-Measure.
|
from sklearn.metrics.cluster import adjusted_rand_score, homogeneity_completeness_v_measure
true_label = [1,2,1,1,1,1,
              1,2,1,1,1,1,1,
              1,2,1,1,1,1,1,
              3,4,5,6,6,6,6,7,7]
print('Adjusted Rand Index:', adjusted_rand_score(true_label, label))
print('V-Measure:', homogeneity_completeness_v_measure(true_label, label))
print('\nClustering results')
for k, v in role.items():
    print(k, v[0])
    for i in v[1]:
        print(' ', list(G.nodes)[i])
|
Adjusted Rand Index: 0.9892723141150981
V-Measure: (1.0, 0.9536171907216509, 0.9762579846765088)
Clustering results
21 [-0.6 -0.4]
11
12
13
14
15
21
22
24
23
25
26
31
32
33
34
35
36
45 [1.2 1.2]
s1
59 [1.6 1.2]
s2
s3
41 [0.6 3.2]
mid
71 [ 3.6 -0.2]
boss
6 [-2.2 3.2]
w1
35 [ 0.8 -0.6]
41
43
44
42
49 [ 1.6 -0.6]
51
52
|
MIT
|
experiment_3.ipynb
|
Tirami-su/rolemagnet
|
Euler Problem 14================The following iterative sequence is defined for the set of positive integers: n → n/2 (n is even) n → 3n + 1 (n is odd)Using the rule above and starting with 13, we generate the following sequence: 13 → 40 → 20 → 10 → 5 → 16 → 8 → 4 → 2 → 1It can be seen that this sequence (starting at 13 and finishing at 1) contains 10 terms. Although it has not been proved yet (Collatz Problem), it is thought that all starting numbers finish at 1.Which starting number, under one million, produces the longest chain?NOTE: Once the chain starts the terms are allowed to go above one million.
|
D = {1: 0}  # memoized chain lengths (number of steps to reach 1)
maxlen = 0
start = 1

def collatz(n):
    if n in D:
        return D[n]
    elif n % 2:
        c = 1 + collatz(3*n + 1)
    else:
        c = 1 + collatz(n // 2)  # integer division keeps dictionary keys as ints
    D[n] = c
    return c

for n in range(1, 1000000):
    c = collatz(n)
    if c > maxlen:
        maxlen = c
        start = n
print(start)
|
837799
|
MIT
|
Euler 014 - Longest Collatz Sequence.ipynb
|
Radcliffe/project-euler
|
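As a quick sanity check against the problem statement, the chain starting at 13 contains 10 terms, i.e. 9 steps down to 1:

assert collatz(13) == 9    # 9 steps from 13 down to 1
print(collatz(13) + 1)     # 10 terms in the chain, matching the example above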
Analysis Report IV: Selections and Frequencies
|
import pandas as pd
dados = pd.read_csv('dados/aluguel_residencial.csv', sep=';')
dados.head(10)
# Select only the properties classified as type 'Apartamento'
selecao = dados['Tipo'] == 'Apartamento'
n1 = dados[selecao].shape[0]
n1
# Select the properties classified as types 'Casa', 'Casa de Condomínio', and 'Casa de Vila'
selecao = (dados['Tipo'] == 'Casa') | (dados['Tipo'] == 'Casa de Condomínio') | (dados['Tipo'] == 'Casa de Vila')
n2 = dados[selecao].shape[0]
n2
# Select the properties with area between 60 and 100 square meters, inclusive
# 60 <= Area <= 100
selecao = (dados['Area'] >= 60) & (dados['Area'] <= 100)
n3 = dados[selecao].shape[0]
n3
# Select the properties with at least 4 bedrooms and rent below R$ 2,000.00
selecao = (dados['Quartos'] >= 4) & (dados['Valor'] < 2000)
n4 = dados[selecao].shape[0]
n4
print("Number of properties classified as type 'Apartamento' -> {}".format(n1))
print("Number of properties classified as types 'Casa', 'Casa de Condomínio', and 'Casa de Vila' -> {}".format(n2))
print("Number of properties with area between 60 and 100 square meters, inclusive -> {}".format(n3))
print("Number of properties with at least 4 bedrooms and rent below R$ 2,000.00 -> {}".format(n4))
|
Number of properties classified as type 'Apartamento' -> 19532
Number of properties classified as types 'Casa', 'Casa de Condomínio', and 'Casa de Vila' -> 2212
Number of properties with area between 60 and 100 square meters, inclusive -> 8719
Number of properties with at least 4 bedrooms and rent below R$ 2,000.00 -> 41
|
MIT
|
FormacaoPythonParaDataScience/PythonPandas-TratandoAnalisandoDados/CursoPandas/SelecoesFrequencias.ipynb
|
anablima/TreinamentosAlura
|
Scrumblet (Courtesy of K Polansky). Two-step doublet score processing, mirroring the approach from Popescu et al. https://www.nature.com/articles/s41586-019-1652-y, which was closely based on Pijuan-Sala et al. https://www.nature.com/articles/s41586-019-0933-9. The first step starts with some sort of doublet score, e.g. Scrublet, and ends up with a per-cell p-value (with significant values marking doublets). For each sample individually: run Scrublet to obtain each cell's score; overcluster the manifold (run a basic Scanpy pipeline up to clustering, then additionally cluster each cluster separately); compute per-cluster Scrublet scores as the median of the observed values, and use those going forward; identify p-values by computing normal distribution parameters centered at the median of the scores, with a MAD-derived standard deviation. The score distribution is zero-truncated, so as per the paper only above-median values are used to compute the MAD. K deviates from the paper a bit, at least from the exact wording captured within it, and multiplies the MAD by 1.4826 to obtain a literature-derived normal distribution standard deviation estimate. The p-values are then FDR-corrected via Benjamini-Hochberg, and all this doublet info is written out into CSVs for later use. NOTE: The second step is performed later, in a multi-sample space.
|
# Core imports for this notebook (assumed loaded in the original's setup cell);
# the `bh` and `bonf` FDR helpers are defined elsewhere in the notebook (a sketch follows this cell)
import os
import numpy as np
import pandas as pd
import scanpy as sc
import scrublet as scr
import scipy.stats

path_to_data = '/nfs/users/nfs_l/lg18/team292/lg18/gonads/data/scRNAseq/FCA/rawdata/'
metadata = pd.read_csv(path_to_data + 'immune_meta.csv', index_col=0)
metadata['process'].value_counts()
# Select process = CD45+
metadata_enriched = metadata[metadata['process'] == 'CD45+']
metadata_enriched
metadata_enriched['stage'] = metadata_enriched['stage'].astype('str')
plotmeta = list(metadata_enriched.columns)
plotmeta.append('sample')
print('Number of samples: ', metadata_enriched.index.size)
#there's loads of clustering going on, so set verbosity low unless you enjoy walls of text
sc.settings.verbosity = 0 # verbosity: errors (0), warnings (1), info (2), hints (3)
scorenames = ['scrublet_score','scrublet_cluster_score','zscore','bh_pval','bonf_pval']
if not os.path.exists('scrublet-scores'):
    os.makedirs('scrublet-scores')
#loop over the subfolders of the rawdata folder
samples = metadata_enriched.index.to_list()
for sample in list(reversed(samples)):
    print(sample)
    #import data
    adata_sample = sc.read_10x_mtx(path_to_data + sample + '/filtered_feature_bc_matrix/', cache=True)
    adata_sample.var_names_make_unique()
    #rename cells to SAMPLE_BARCODE
    adata_sample.obs_names = [sample+'_'+i for i in adata_sample.obs_names]
    #do some early filtering to retain meaningful cells for doublet inspection
    sc.pp.filter_cells(adata_sample, min_genes=200)
    sc.pp.filter_genes(adata_sample, min_cells=3)
    #convert to lower to be species agnostic: human mito start with MT-, mouse with mt-
    mito_genes = [name for name in adata_sample.var_names if name.lower().startswith('mt-')]
    # for each cell compute fraction of counts in mito genes vs. all genes
    # the `.A1` is only necessary as X is sparse (to transform to a dense array after summing)
    adata_sample.obs['percent_mito'] = np.sum(
        adata_sample[:, mito_genes].X, axis=1).A1 / np.sum(adata_sample.X, axis=1).A1
    adata_sample = adata_sample[adata_sample.obs['percent_mito'] < 0.2, :]
    #set up and run Scrublet, seeding for replicability
    np.random.seed(0)
    scrub = scr.Scrublet(adata_sample.X)
    doublet_scores, predicted_doublets = scrub.scrub_doublets(verbose=False)
    adata_sample.obs['scrublet_score'] = doublet_scores
    #overcluster prep. run turbo basic scanpy pipeline
    sc.pp.normalize_per_cell(adata_sample, counts_per_cell_after=1e4)
    sc.pp.log1p(adata_sample)
    sc.pp.highly_variable_genes(adata_sample, min_mean=0.0125, max_mean=3, min_disp=0.5)
    adata_sample = adata_sample[:, adata_sample.var['highly_variable']]
    sc.pp.scale(adata_sample, max_value=10)
    sc.tl.pca(adata_sample, svd_solver='arpack')
    sc.pp.neighbors(adata_sample)
    #overclustering proper - do basic clustering first, then cluster each cluster
    sc.tl.leiden(adata_sample)
    adata_sample.obs['leiden'] = [str(i) for i in adata_sample.obs['leiden']]
    for clus in np.unique(adata_sample.obs['leiden']):
        adata_sub = adata_sample[adata_sample.obs['leiden']==clus].copy()
        sc.tl.leiden(adata_sub)
        adata_sub.obs['leiden'] = [clus+','+i for i in adata_sub.obs['leiden']]
        adata_sample.obs.loc[adata_sub.obs_names,'leiden'] = adata_sub.obs['leiden']
    #compute the cluster scores - the median of Scrublet scores per overclustered cluster
    for clus in np.unique(adata_sample.obs['leiden']):
        adata_sample.obs.loc[adata_sample.obs['leiden']==clus, 'scrublet_cluster_score'] = \
            np.median(adata_sample.obs.loc[adata_sample.obs['leiden']==clus, 'scrublet_score'])
    #now compute doublet p-values. figure out the median and mad (from above-median values) for the distribution
    med = np.median(adata_sample.obs['scrublet_cluster_score'])
    mask = adata_sample.obs['scrublet_cluster_score'] > med
    mad = np.median(adata_sample.obs['scrublet_cluster_score'][mask] - med)
    #let's do a one-sided test. the Bertie write-up does not address this but it makes sense
    zscores = (adata_sample.obs['scrublet_cluster_score'].values - med) / (1.4826 * mad)
    adata_sample.obs['zscore'] = zscores
    pvals = 1 - scipy.stats.norm.cdf(zscores)
    adata_sample.obs['bh_pval'] = bh(pvals)
    adata_sample.obs['bonf_pval'] = bonf(pvals)
    #create results data frame for single sample and copy stuff over from the adata object
    scrublet_sample = pd.DataFrame(0, index=adata_sample.obs_names, columns=scorenames)
    for score in scorenames:
        scrublet_sample[score] = adata_sample.obs[score]
    #write out complete sample scores
    scrublet_sample.to_csv('scrublet-scores/'+sample+'.csv')
|
FCA_GND8784459
|
MIT
|
immune_CD45enriched_load_detect_doublets.ipynb
|
ventolab/HGDA
|
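The `bh` and `bonf` helpers called above are defined earlier in the original notebook and are not shown here; a minimal sketch of what they compute, assuming the standard Benjamini-Hochberg and Bonferroni definitions, might look like this:

import numpy as np

def bh(pvalues):
    # Benjamini-Hochberg FDR correction: scale each sorted p-value by m/rank,
    # then enforce monotonicity from the largest p-value downward
    pvalues = np.asarray(pvalues)
    m = len(pvalues)
    order = np.argsort(pvalues)[::-1]       # indices from largest to smallest p-value
    ranks = np.arange(m, 0, -1)             # corresponding ascending ranks m, m-1, ..., 1
    adjusted = np.minimum.accumulate(pvalues[order] * m / ranks)
    out = np.empty(m)
    out[order] = np.minimum(adjusted, 1.0)
    return out

def bonf(pvalues):
    # Bonferroni correction: multiply by the number of tests, capped at 1
    return np.minimum(np.asarray(pvalues) * len(pvalues), 1.0)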
The BasicsAt the core of Python (and any programming language) there are some key characteristics of how a program is structured that enable the proper execution of that program. These characteristics include the structure of the code itself, the core data types from which others are built, and core operators that modify objects or create new ones. From these raw materials more complex commands, functions, and modules are built.For guidance on recommended Python structure refer to the [Python Style Guide](https://www.python.org/dev/peps/pep-0008). Examples: Variables and Data Types The Interpreter
|
# The interpreter can be used as a calculator, and can also echo or concatenate strings.
3 + 3
3 * 3
3 ** 3
3 / 2 # classic division - output is a floating point number
# Use quotes around strings, single or double, but be consistent to the extent possible
'dogs'
"dogs"
"They're going to the beach"
'He said "I like mac and cheese"'
# sometimes you can't escape the escape
'He said "I\'d like mac and cheese"'
# + operator can be used to concatenate strings
'dogs' + "cats"
print('Hello World!')
|
Hello World!
|
Apache-2.0
|
1.2-The Basics.ipynb
|
unmrds/cc-python
|
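The next exercise asks about Python's division-related operators; here is a quick demonstration of all three:

print(7 / 2)    # true division: 3.5 (always returns a float)
print(7 // 2)   # floor division: 3 (rounds down to an integer result)
print(7 % 2)    # modulo: 1 (the remainder left over after floor division)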
Try It Yourself: Go to the section _4.4. Numeric Types_ in the Python 3 documentation at https://docs.python.org/3/library/stdtypes.html. The table in that section describes different operators - try some! What is the difference between the different division operators (`/`, `//`, and `%`)? (The short demonstration above is a starting point.) Variables: Variables allow us to store values for later use.
|
a = 5
b = 10
a + b
|
_____no_output_____
|
Apache-2.0
|
1.2-The Basics.ipynb
|
unmrds/cc-python
|
Variables can be reassigned:
|
b = 38764289.1097
a + b
|
_____no_output_____
|
Apache-2.0
|
1.2-The Basics.ipynb
|
unmrds/cc-python
|
The ability to reassign variable values becomes important when iterating through groups of objects for batch processing or other purposes. In the example below, the value of `b` is dynamically updated every time the `while` loop is executed:
|
a = 5
b = 10
while b > a:
    print("b=" + str(b))
    b = b - 1
|
b=10
b=9
b=8
b=7
b=6
|
Apache-2.0
|
1.2-The Basics.ipynb
|
unmrds/cc-python
|
Variable data types can be inferred, so Python does not require us to declare the data type of a variable on assignment.
|
a = 5
type(a)
|
_____no_output_____
|
Apache-2.0
|
1.2-The Basics.ipynb
|
unmrds/cc-python
|
is equivalent to
|
a = int(5)
type(a)
c = 'dogs'
print(type(c))
c = str('dogs')
print(type(c))
|
<class 'str'>
<class 'str'>
|
Apache-2.0
|
1.2-The Basics.ipynb
|
unmrds/cc-python
|
There are cases when we may want to declare the data type, for example to assign a different data type from the default that will be inferred. Concatenating strings provides a good example.
|
customer = 'Carol'
pizzas = 2
print(customer + ' ordered ' + pizzas + ' pizzas.')
|
_____no_output_____
|
Apache-2.0
|
1.2-The Basics.ipynb
|
unmrds/cc-python
|
Above, Python has inferred the type of the variable `pizza` to be an integer. Since strings can only be concatenated with other strings, our print statement generates an error. There are two ways we can resolve the error:1. Declare the `pizzas` variable as type string (`str`) on assignment or2. Re-cast the `pizzas` variable as a string within the `print` statement.
|
customer = 'Carol'
pizzas = str(2)
print(customer + ' ordered ' + pizzas + ' pizzas.')
customer = 'Carol'
pizzas = 2
print(customer + ' ordered ' + str(pizzas) + ' pizzas.')
|
Carol ordered 2 pizzas.
|
Apache-2.0
|
1.2-The Basics.ipynb
|
unmrds/cc-python
|
Given the following variable assignments:```x = 12y = str(14)z = 'donuts'```Predict the output of the following:1. `y + z`2. `x + y`3. `x + int(y)`4. `str(x) + y`Check your answers in the interpreter. Variable Naming Rules: Variable names are case sensitive and:1. Can only consist of one "word" (no spaces).2. Must begin with a letter or underscore character ('\_').3. Can only use letters, numbers, and the underscore character.We further recommend using variable names that are meaningful within the context of the script and the research. Reading Files: We can accomplish a lot by assigning variables within our code as demonstrated above, but often we are interested in working with objects and data that exist in other files and directories on our system. When we want to read data files into a script, we do so by assigning the content of the file to a variable. This stores the data in memory and lets us perform processes and analyses on the data without changing the content of the source file. There are several ways to read files in Python - many libraries have methods for reading text, Excel and Word documents, PDFs, etc. This morning we're going to demonstrate using the ```read()``` and ```readlines()``` methods in the standard library, and the Pandas ```read_csv()``` function.
|
# Read unstructured text
# One way is to open the whole file as a block
file_path = "./beowulf" # We can save the path to the file as a variable
file_in = open(file_path, "r") # Options are 'r', 'w', and 'a' (read, write, append)
beowulf_a = file_in.read()
file_in.close()
print(beowulf_a)
# Another way is to read the file as a list of individual lines
with open(file_path, "r") as b:
    beowulf_b = b.readlines()
print(beowulf_b)
# In order to get a similar printout to the first method, we use a for loop
# to print line by line - more on for loops below!
for l in beowulf_b:
    print(l)
# We now have two variables with the content of our 'beowulf' file represented using two different data structures.
# Why do you think we get the different outputs from the next two statements?
# Beowulf text stored as one large string
print("As string:", beowulf_a[0])
# Beowulf text stored as a list of lines
print("As list of lines:", beowulf_b[0])
# We can confirm our expectations by checking on the types of our two beowulf variables
print(type(beowulf_a))
print(type(beowulf_b))
# Read CSV files using the Pandas read_csv method.
# Note: Pandas also includes methods for reading Excel.
# First we need to import the pandas library
import pandas as pd
# Create a variable to hold the path to the file
fpath = "aaj1945_DataS1_Egg_shape_by_species_v2.csv"
egg_data = pd.read_csv(fpath)
# We can get all kinds of info about the dataset
# info() provides an overview of the structure
print(egg_data.info())
# Look at the first five rows
egg_data.head()
# Names of columns
print(egg_data.columns.values)
# Dimensions (number of rows and columns)
print(egg_data.shape)
# And much more! But as a final example we can perform operations on the data.
# Descriptive statistics on the "Number of eggs" column
print(egg_data["Number of eggs"].describe())
# Or all of the columns in whole table with numeric data types:
print(egg_data.describe())
|
Asymmetry Ellipticity AvgLength (cm) Number of images \
count 1400.000000 1400.000000 1400.000000 1400.000000
mean 0.148230 0.384384 3.426853 9.320714
std 0.071228 0.089594 2.161549 20.747693
min 0.001400 0.096700 1.196000 1.000000
25% 0.104800 0.325775 1.958925 1.000000
50% 0.141750 0.377400 2.581150 2.000000
75% 0.184825 0.435075 4.323650 8.000000
max 0.484700 0.723700 23.870000 300.000000
Number of eggs
count 1400.000000
mean 35.125000
std 85.790347
min 1.000000
25% 3.000000
50% 8.000000
75% 26.250000
max 1139.000000
|
Apache-2.0
|
1.2-The Basics.ipynb
|
unmrds/cc-python
|
Structure: Now that we have practiced assigning variables and reading information from files, we will have a look at concepts that are key to developing processes to use and analyze this information. Blocks: The structure of a Python program is pretty simple: blocks of code are defined using indentation. Code that is at a lower level of indentation is not considered part of a block. Indentation can be defined using spaces or tabs (spaces are recommended by the style guide), but be consistent (and prepared to defend your choice). As we will see, code blocks define the boundaries of sets of commands that fit within a given section of code. This indentation model for defining blocks of code significantly increases the readability of Python code. For example: >>>a = 5 >>>b = 10 >>>while b > a: ... print("b="+str(b)) ... b = b-1 >>>print("I'm outside the block") Comments & Documentation: You can (and should) also include documentation and comments in the code you write - both for yourself, and potential future users (including yourself). Comments are pretty much any content on a line that follows a `#` symbol (unless it is between quotation marks). For example: >>># we're going to do some math now >>>yae = 5 # the number of votes in favor >>>nay = 10 # the number of votes against >>>proportion = yae / nay # the proportion of votes in favor >>>print(proportion) When you are creating functions or classes (a bit more on what these are in a bit) you can also create what are called *doc strings* that provide a defined location for content that is used to generate the `help()` information highlighted above and is also used by other systems for the automatic generation of documentation for packages that contain these *doc strings*. Creating a *doc string* is simple - just create a single or multi-line text string (more on this soon) that starts on the first indented line following the start of the definition of the function or class. For example: >>># we're going to create a documented function and then access the information about the function >>>def doc_demo(some_text="I'll skewer yer gizzard, ye salty sea bass"): ... """This function takes the provided text and prints it out in Pirate ... ... If a string is not provided for `some_text` a default message will be displayed ... """ ... out_string = "Ahoy Matey. " + some_text ... print(out_string) >>>help(doc_demo) >>>doc_demo() >>>doc_demo("Sail ho!") Standard Objects: Any programming language has at its foundation a collection of *types* or, in Python's terminology, *objects*. The standard objects of Python consist of the following:* **Numbers** - integer, floating point, complex, and multiple-base defined numeric values* **Strings** - **immutable** strings of characters, numbers, and symbols that are bounded by single- or double-quotes* **Lists** - an ordered collection of objects that is bounded by square brackets `[]`. Elements in lists are extracted or referenced by their position in the list. For example, `my_list[0]` refers to the first item in the list, `my_list[5]` the sixth, and `my_list[-1]` to the last item in the list. * **Dictionaries** - an unordered collection of objects that are referenced by *keys* that allow for referring to those objects by reference to those keys. Dictionaries are bounded by curly brackets `{}`, with each element of the dictionary consisting of a *key* (string) and a *value* (object) separated by a colon `:`. Elements of a dictionary are extracted or referenced using their keys.
for example: my_dict = {"key1":"value1", "key2":36, "key3":[1,2,3]} my_dict['key1'] returns "value1" my_dict['key3'] returns [1,2,3]* **Tuples** - **immutable** lists that are bounded by parentheses `()`. Referencing elements in a tuple is the same as referencing elements in a list above. * **Files** - objects that represent external files on the file system. Programs can interact with (e.g. read, write, append) external files through their representative file objects in the program.* **Sets** - unordered collections of **immutable** objects (i.e. ints, floats, strings, and tuples) where membership in the set and uniqueness within the set are defining characteristics of the member objects. Sets are created using the `set` function on a sequence of objects. A specialized set of operators allows for identifying *union*, *intersection*, and *difference* (among others) between sets (see the short set example after the next code cell). * **Other core types** - Booleans, types, `None`* **Program unit types** - *functions*, *modules*, and *classes* for example* **Implementation-related types** (not covered in this workshop)These objects have their own sets of related methods (as we saw in the `help()` examples above) that enable their creation and operations upon them.
|
# Fun with types
this = 12
that = 15
the_other = "27"
my_stuff = [this,that,the_other,["a","b","c",4]]
more_stuff = {
    "item1": this,
    "item2": that,
    "item3": the_other,
    "item4": my_stuff
}
this + that
# this won't work ...
# this + that + the_other
# ... but this will ...
this + that + int(the_other)
# ...and this too
str(this) + str(that) + the_other
|
_____no_output_____
|
Apache-2.0
|
1.2-The Basics.ipynb
|
unmrds/cc-python
|
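Since sets were described above without a code example, here is a short demonstration of the set operators mentioned (union, intersection, and difference):

evens = set([0, 2, 4, 6, 8])
primes = set([2, 3, 5, 7])
print(evens | primes)   # union: elements in either set
print(evens & primes)   # intersection: elements in both sets -> {2}
print(evens - primes)   # difference: in evens but not in primes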
ListsLists are a type of collection in Python. Lists allow us to store sequences of items that are typically but not always similar. All of the following lists are legal in Python:
|
# Separate list items with commas!
number_list = [1, 2, 3, 4, 5]
string_list = ['apples', 'oranges', 'pears', 'grapes', 'pineapples']
combined_list = [1, 2, 'oranges', 3.14, 'peaches', 'grapes', 99.19876]
# Nested lists - lists of lists - are allowed.
list_of_lists = [[1, 2, 3],
['oranges', 'grapes', 8],
[['small list'],
['bigger', 'list', 55],
['url_1', 'url_2']
]
]
|
_____no_output_____
|
Apache-2.0
|
1.2-The Basics.ipynb
|
unmrds/cc-python
|
There are multiple ways to create a list:
|
# Create an empty list
empty_list = []
# As we did above, by using square brackets around a comma-separated sequence of items
new_list = [1, 2, 3]
# Using the type constructor
constructed_list = list('purple')
# Using a list comprehension
result_list = [i for i in range(1, 20)]
|
_____no_output_____
|
Apache-2.0
|
1.2-The Basics.ipynb
|
unmrds/cc-python
|
We can inspect our lists:
|
empty_list
new_list
result_list
constructed_list
|
_____no_output_____
|
Apache-2.0
|
1.2-The Basics.ipynb
|
unmrds/cc-python
|