4.3 Label Distribution
print(train_df["label"].value_counts())

fig = plt.figure(figsize=(10, 6))
label_stats_plot = train_df["label"].value_counts().plot.bar()
plt.tight_layout(pad=1)
plt.savefig("img/label_stats_plot.png", dpi=100)
half-true      2114
false          1995
mostly-true    1962
true           1676
barely-true    1654
pants-fire      839
Name: label, dtype: int64
MIT
notebooks/eda-notebook.ipynb
archity/fake-news
4.4 Speaker Distribution
print(train_df.speaker.value_counts())

fig = plt.figure(figsize=(10, 6))
speaker_stats_plot = train_df["speaker"].value_counts()[:10].plot.bar()
plt.tight_layout(pad=1)
plt.title("Speakers")
plt.savefig("img/speaker_stats_plot.png", dpi=100)

print(train_df.speaker_title.value_counts())

fig = plt.figure(figsize=(10, 6))
speaker_title_stats_plot = train_df["speaker_title"].value_counts()[:10].plot.bar()
plt.tight_layout(pad=1)
plt.title("Speaker Title")
plt.savefig("img/speaker_title_stats_plot.png", dpi=100)
President                                        492
U.S. Senator                                     479
Governor                                         391
President-Elect                                  273
U.S. senator                                     263
                                                ...
Pundit and communications consultant               1
Harrisonburg city councilman                        1
Theme park company                                  1
Executive director, NARAL Pro-Choice Virginia       1
President, The Whitman Strategy Group               1
Name: speaker_title, Length: 1184, dtype: int64
MIT
notebooks/eda-notebook.ipynb
archity/fake-news
4.5 Democrats vs Republicans

* Let's see how the two main parties compare with each other in terms of truthfulness in the labels
fig = plt.figure(figsize=(8,4))
plt.suptitle("Party-wise Label")

ax1 = fig.add_subplot(121)
party_wise = train_df[train_df["party_affiliation"]=="democrat"]["label"].value_counts().to_frame()
ax1.pie(party_wise["label"], labels=party_wise.index, autopct='%1.1f%%', startangle=90)
ax1.set_title("Democrat")

plt.suptitle("Party-wise Label")
ax2 = fig.add_subplot(122)
party_wise = train_df[train_df["party_affiliation"]=="republican"]["label"].value_counts().to_frame()
ax2.pie(party_wise["label"], labels=party_wise.index, autopct='%1.1f%%', startangle=90)
ax2.set_title("Republican")

plt.tight_layout()
plt.savefig("img/dems_gop_label_plot.png", dpi=200)
_____no_output_____
MIT
notebooks/eda-notebook.ipynb
archity/fake-news
* We can combine some labels to get a simpler, binary plot
def get_binary_label(label):
    if label in ["pants-fire", "barely-true", "false"]:
        return False
    elif label in ["true", "half-true", "mostly-true"]:
        return True

train_df["binary_label"] = train_df.label.apply(get_binary_label)

fig = plt.figure(figsize=(8,4))
plt.suptitle("Party-wise Label")

ax1 = fig.add_subplot(121)
party_wise = train_df[train_df["party_affiliation"]=="democrat"]["binary_label"].value_counts().to_frame()
ax1.pie(party_wise["binary_label"], labels=party_wise.index, autopct='%1.1f%%', startangle=90)
ax1.set_title("Democrat")

plt.suptitle("Party-wise Label")
ax2 = fig.add_subplot(122)
party_wise = train_df[train_df["party_affiliation"]=="republican"]["binary_label"].value_counts().to_frame()
ax2.pie(party_wise["binary_label"], labels=party_wise.index, autopct='%1.1f%%', startangle=90)
ax2.set_title("Republican")

plt.tight_layout()
plt.savefig("img/dems_gop_binary_label_plot.png", dpi=200)
_____no_output_____
MIT
notebooks/eda-notebook.ipynb
archity/fake-news
5. Sentiment Analysis
from textblob import TextBlob

pol = lambda x: TextBlob(x).sentiment.polarity
sub = lambda x: TextBlob(x).sentiment.subjectivity

train_df['polarity_true'] = train_df[train_df["binary_label"]==True]['statement'].apply(pol)
train_df['subjectivity_true'] = train_df[train_df["binary_label"]==True]['statement'].apply(sub)

plt.rcParams['figure.figsize'] = [10, 8]
x = train_df["polarity_true"]
y = train_df["subjectivity_true"]
plt.scatter(x, y, color='blue')
plt.title('Sentiment Analysis', fontsize=20)
plt.xlabel('<-- Negative ---------------- Positive -->', fontsize=10)
plt.ylabel('<-- Facts ---------------- Opinions -->', fontsize=10)
plt.savefig("img/sa_true.png", format="png", dpi=200)
plt.show()

train_df['polarity_false'] = train_df[train_df["binary_label"]==False]['statement'].apply(pol)
train_df['subjectivity_false'] = train_df[train_df["binary_label"]==False]['statement'].apply(sub)

plt.rcParams['figure.figsize'] = [10, 8]
x = train_df["polarity_false"]
y = train_df["subjectivity_false"]
plt.scatter(x, y, color='blue')
plt.title('Sentiment Analysis', fontsize=20)
plt.xlabel('<-- Negative ---------------- Positive -->', fontsize=10)
plt.ylabel('<-- Facts ---------------- Opinions -->', fontsize=10)
plt.savefig("img/sa_false.png", format="png", dpi=200)
plt.show()
_____no_output_____
MIT
notebooks/eda-notebook.ipynb
archity/fake-news
Collaborative filtering
> Tools to quickly get the data and train models suitable for collaborative filtering

This module contains all the high-level functions you need in a collaborative filtering application to assemble your data, get a model and train it with a `Learner`. We will go over those in order, but you can also check the [collaborative filtering tutorial](http://docs.fast.ai/tutorial.collab).

Gather the data
#export
class TabularCollab(TabularPandas):
    "Instance of `TabularPandas` suitable for collaborative filtering (with no continuous variable)"
    with_cont=False
_____no_output_____
Apache-2.0
nbs/45_collab.ipynb
ldanilov/fastai
This just reuses the internals of the tabular application; don't worry about it.
#export
class CollabDataLoaders(DataLoaders):
    "Base `DataLoaders` for collaborative filtering."
    @delegates(DataLoaders.from_dblock)
    @classmethod
    def from_df(cls, ratings, valid_pct=0.2, user_name=None, item_name=None, rating_name=None, seed=None, path='.', **kwargs):
        "Create a `DataLoaders` suitable for collaborative filtering from `ratings`."
        user_name   = ifnone(user_name,   ratings.columns[0])
        item_name   = ifnone(item_name,   ratings.columns[1])
        rating_name = ifnone(rating_name, ratings.columns[2])
        cat_names = [user_name,item_name]
        splits = RandomSplitter(valid_pct=valid_pct, seed=seed)(range_of(ratings))
        to = TabularCollab(ratings, [Categorify], cat_names, y_names=[rating_name], y_block=TransformBlock(), splits=splits)
        return to.dataloaders(path=path, **kwargs)

    @classmethod
    def from_csv(cls, csv, **kwargs):
        "Create a `DataLoaders` suitable for collaborative filtering from `csv`."
        return cls.from_df(pd.read_csv(csv), **kwargs)

CollabDataLoaders.from_csv = delegates(to=CollabDataLoaders.from_df)(CollabDataLoaders.from_csv)
_____no_output_____
Apache-2.0
nbs/45_collab.ipynb
ldanilov/fastai
This class should not be used directly; one of the factory methods should be preferred instead. All those factory methods accept as arguments:

- `valid_pct`: the random percentage of the dataset to set aside for validation (with an optional `seed`)
- `user_name`: the name of the column containing the user (defaults to the first column)
- `item_name`: the name of the column containing the item (defaults to the second column)
- `rating_name`: the name of the column containing the rating (defaults to the third column)
- `path`: the folder where to work
- `bs`: the batch size
- `val_bs`: the batch size for the validation `DataLoader` (defaults to `bs`)
- `shuffle_train`: whether to shuffle the training `DataLoader`
- `device`: the PyTorch device to use (defaults to `default_device()`)
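For example, a call spelling out several of those arguments might look like the sketch below; the column names `userId`, `movieId` and `rating` are assumptions for illustration (the defaults would pick the first three columns anyway).

```python
dls = CollabDataLoaders.from_df(ratings, valid_pct=0.2, seed=42,
                                user_name='userId', item_name='movieId',
                                rating_name='rating', path='.', bs=64)
```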
show_doc(CollabDataLoaders.from_df)
_____no_output_____
Apache-2.0
nbs/45_collab.ipynb
ldanilov/fastai
Let's see how this works on an example:
path = untar_data(URLs.ML_SAMPLE)
ratings = pd.read_csv(path/'ratings.csv')
ratings.head()

dls = CollabDataLoaders.from_df(ratings, bs=64)
dls.show_batch()

show_doc(CollabDataLoaders.from_csv)

dls = CollabDataLoaders.from_csv(path/'ratings.csv', bs=64)
_____no_output_____
Apache-2.0
nbs/45_collab.ipynb
ldanilov/fastai
Models

fastai provides two kinds of models for collaborative filtering: a dot-product model and a neural net.
#export
class EmbeddingDotBias(Module):
    "Base dot model for collaborative filtering."
    def __init__(self, n_factors, n_users, n_items, y_range=None):
        self.y_range = y_range
        (self.u_weight, self.i_weight, self.u_bias, self.i_bias) = [Embedding(*o) for o in [
            (n_users, n_factors), (n_items, n_factors), (n_users,1), (n_items,1)
        ]]

    def forward(self, x):
        users,items = x[:,0],x[:,1]
        dot = self.u_weight(users)* self.i_weight(items)
        res = dot.sum(1) + self.u_bias(users).squeeze() + self.i_bias(items).squeeze()
        if self.y_range is None: return res
        return torch.sigmoid(res) * (self.y_range[1]-self.y_range[0]) + self.y_range[0]

    @classmethod
    def from_classes(cls, n_factors, classes, user=None, item=None, y_range=None):
        "Build a model with `n_factors` by inferring `n_users` and `n_items` from `classes`"
        if user is None: user = list(classes.keys())[0]
        if item is None: item = list(classes.keys())[1]
        res = cls(n_factors, len(classes[user]), len(classes[item]), y_range=y_range)
        res.classes,res.user,res.item = classes,user,item
        return res

    def _get_idx(self, arr, is_item=True):
        "Fetch item or user (based on `is_item`) for all in `arr`"
        assert hasattr(self, 'classes'), "Build your model with `EmbeddingDotBias.from_classes` to use this functionality."
        classes = self.classes[self.item] if is_item else self.classes[self.user]
        c2i = {v:k for k,v in enumerate(classes)}
        try: return tensor([c2i[o] for o in arr])
        except Exception as e:
            print(f"""You're trying to access {'an item' if is_item else 'a user'} that isn't in the training data.
                  If it was in your original data, it may have been split such that it's only in the validation set now.""")

    def bias(self, arr, is_item=True):
        "Bias for item or user (based on `is_item`) for all in `arr`"
        idx = self._get_idx(arr, is_item)
        layer = (self.i_bias if is_item else self.u_bias).eval().cpu()
        return to_detach(layer(idx).squeeze(),gather=False)

    def weight(self, arr, is_item=True):
        "Weight for item or user (based on `is_item`) for all in `arr`"
        idx = self._get_idx(arr, is_item)
        layer = (self.i_weight if is_item else self.u_weight).eval().cpu()
        return to_detach(layer(idx),gather=False)
_____no_output_____
Apache-2.0
nbs/45_collab.ipynb
ldanilov/fastai
The model is built with `n_factors` (the length of the internal vectors), `n_users` and `n_items`. For a given user and item, it grabs the corresponding weights and bias and returns

``` python
torch.dot(user_w, item_w) + user_b + item_b
```

Optionally, if `y_range` is passed, it applies a `SigmoidRange` to that result.
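Written out, with $\sigma$ the sigmoid and `y_range` $=(y_{lo}, y_{hi})$, the prediction matching the `forward` method above is

$$\hat{r}_{u,i} = \sigma\big(u_w \cdot i_w + b_u + b_i\big)\,(y_{hi} - y_{lo}) + y_{lo},$$

and simply $u_w \cdot i_w + b_u + b_i$ when `y_range` is `None`.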
x,y = dls.one_batch()
model = EmbeddingDotBias(50, len(dls.classes['userId']), len(dls.classes['movieId']), y_range=(0,5)
                        ).to(x.device)
out = model(x)
assert (0 <= out).all() and (out <= 5).all()

show_doc(EmbeddingDotBias.from_classes)
_____no_output_____
Apache-2.0
nbs/45_collab.ipynb
ldanilov/fastai
`y_range` is passed to the main init. `user` and `item` are the names of the keys for users and items in `classes` (defaulting to the first and second key respectively). `classes` is expected to be a dictionary mapping keys to lists of categories, like the result of `dls.classes` in a `CollabDataLoaders`:
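Passing the key names explicitly (they match the `dls.classes` keys used elsewhere in this notebook) might look like this sketch:

```python
model = EmbeddingDotBias.from_classes(50, dls.classes,
                                      user='userId', item='movieId',
                                      y_range=(0, 5))
```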
dls.classes
_____no_output_____
Apache-2.0
nbs/45_collab.ipynb
ldanilov/fastai
Let's see how it can be used in practice:
model = EmbeddingDotBias.from_classes(50, dls.classes, y_range=(0,5)
                                     ).to(x.device)
out = model(x)
assert (0 <= out).all() and (out <= 5).all()
_____no_output_____
Apache-2.0
nbs/45_collab.ipynb
ldanilov/fastai
Two convenience methods are added to easily access the weights and bias when a model is created with `EmbeddingDotBias.from_classes`:
show_doc(EmbeddingDotBias.weight)
_____no_output_____
Apache-2.0
nbs/45_collab.ipynb
ldanilov/fastai
The elements of `arr` are expected to be class names (which is why the model needs to be created with `EmbeddingDotBias.from_classes`).
mov = dls.classes['movieId'][42]
w = model.weight([mov])
test_eq(w, model.i_weight(tensor([42])))

show_doc(EmbeddingDotBias.bias)
_____no_output_____
Apache-2.0
nbs/45_collab.ipynb
ldanilov/fastai
The elements of `arr` are expected to be class names (which is why the model needs to be created with `EmbeddingDotBias.from_classes`).
mov = dls.classes['movieId'][42]
b = model.bias([mov])
test_eq(b, model.i_bias(tensor([42])))

#export
class EmbeddingNN(TabularModel):
    "Subclass `TabularModel` to create a NN suitable for collaborative filtering."
    @delegates(TabularModel.__init__)
    def __init__(self, emb_szs, layers, **kwargs):
        super().__init__(emb_szs=emb_szs, n_cont=0, out_sz=1, layers=layers, **kwargs)

show_doc(EmbeddingNN)
_____no_output_____
Apache-2.0
nbs/45_collab.ipynb
ldanilov/fastai
`emb_szs` should be a list of two tuples, one for the users, one for the items, each tuple containing the number of users/items and the corresponding embedding size (the function `get_emb_sz` can give a good default). All the other arguments are passed to `TabularModel`.
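As an illustration only, a hand-specified `emb_szs` would look like the sketch below; the numbers are hypothetical, not taken from the data, and in practice `get_emb_sz` does this for you.

```python
# (n_users, user_emb_dim), (n_items, item_emb_dim) -- hypothetical sizes for illustration
emb_szs = [(944, 74), (1635, 101)]
model = EmbeddingNN(emb_szs=emb_szs, layers=[50], y_range=(0, 5))
```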
emb_szs = get_emb_sz(dls.train_ds, {})
model = EmbeddingNN(emb_szs, [50], y_range=(0,5)
                   ).to(x.device)
out = model(x)
assert (0 <= out).all() and (out <= 5).all()
_____no_output_____
Apache-2.0
nbs/45_collab.ipynb
ldanilov/fastai
Create a `Learner`

The following function lets us quickly create a `Learner` for collaborative filtering from the data.
# export
@log_args(to_return=True, but_as=Learner.__init__)
@delegates(Learner.__init__)
def collab_learner(dls, n_factors=50, use_nn=False, emb_szs=None, layers=None, config=None, y_range=None, loss_func=None, **kwargs):
    "Create a Learner for collaborative filtering on `dls`."
    emb_szs = get_emb_sz(dls, ifnone(emb_szs, {}))
    if loss_func is None: loss_func = MSELossFlat()
    if config is None: config = tabular_config()
    if y_range is not None: config['y_range'] = y_range
    if layers is None: layers = [n_factors]
    if use_nn: model = EmbeddingNN(emb_szs=emb_szs, layers=layers, **config)
    else:      model = EmbeddingDotBias.from_classes(n_factors, dls.classes, y_range=y_range)
    return Learner(dls, model, loss_func=loss_func, **kwargs)
_____no_output_____
Apache-2.0
nbs/45_collab.ipynb
ldanilov/fastai
If `use_nn=False`, the model used is an `EmbeddingDotBias` with `n_factors` and `y_range`. Otherwise, it's an `EmbeddingNN`, for which you can pass `emb_szs` (inferred from the `dls` with `get_emb_sz` if you don't provide any), `layers` (defaults to `[n_factors]`), `y_range`, and a `config` that you can create with `tabular_config` to customize your model. `loss_func` defaults to `MSELossFlat` and all the other arguments are passed to `Learner`.
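For instance, a neural-net variant could be sketched as follows; the layer sizes here are arbitrary choices for illustration, not recommendations.

```python
# Use the EmbeddingNN path instead of the dot-product model
learn_nn = collab_learner(dls, use_nn=True, layers=[64, 32], y_range=(0, 5))
learn_nn.fit_one_cycle(1)
```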
learn = collab_learner(dls, y_range=(0,5))
learn.fit_one_cycle(1)
_____no_output_____
Apache-2.0
nbs/45_collab.ipynb
ldanilov/fastai
Export -
#hide
from nbdev.export import *
notebook2script()
Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Converted 09b_vision.utils.ipynb. Converted 09c_vision.widgets.ipynb. Converted 10_tutorial.pets.ipynb. Converted 11_vision.models.xresnet.ipynb. Converted 12_optimizer.ipynb. Converted 13_callback.core.ipynb. Converted 13a_learner.ipynb. Converted 13b_metrics.ipynb. Converted 14_callback.schedule.ipynb. Converted 14a_callback.data.ipynb. Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 18a_callback.training.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. Converted 23_tutorial.vision.ipynb. Converted 24_tutorial.siamese.ipynb. Converted 24_vision.gan.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. Converted 32_text.models.awdlstm.ipynb. Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. Converted 37_text.learner.ipynb. Converted 38_tutorial.text.ipynb. Converted 39_tutorial.transformers.ipynb. Converted 40_tabular.core.ipynb. Converted 41_tabular.data.ipynb. Converted 42_tabular.model.ipynb. Converted 43_tabular.learner.ipynb. Converted 44_tutorial.tabular.ipynb. Converted 45_collab.ipynb. Converted 46_tutorial.collab.ipynb. Converted 50_tutorial.datablock.ipynb. Converted 60_medical.imaging.ipynb. Converted 61_tutorial.medical_imaging.ipynb. Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 72_callback.neptune.ipynb. Converted 73_callback.captum.ipynb. Converted 74_callback.cutmix.ipynb. Converted 97_test_utils.ipynb. Converted 99_pytorch_doc.ipynb. Converted index.ipynb. Converted tutorial.ipynb.
Apache-2.0
nbs/45_collab.ipynb
ldanilov/fastai
Model
def get_3d_head(p=0.0):
    pool, feat = (nn.AdaptiveAvgPool3d(1), 64)
    m = nn.Sequential(Batchify(),
                      ConvLayer(512,512,stride=2,ndim=3),    # 8
                      ConvLayer(512,1024,stride=2,ndim=3),   # 4
                      ConvLayer(1024,1024,stride=2,ndim=3),  # 2
                      nn.AdaptiveAvgPool3d((1, 1, 1)), Batchify(), Flat3d(),
                      nn.Dropout(p), nn.Linear(1024, 6))
    init_cnn(m)
    return m

m = get_3d_head()
config = dict(custom_head=m)

learn = get_learner(dls, xresnet18, get_loss(), config=config)
hook = ReshapeBodyHook(learn.model[0])
learn.add_cb(RowLoss())
# learn.load(f'runs/baseline_stg1_xresnet18-3', strict=False)

name = 'trainfull3d_labels_partial3d_new'
_____no_output_____
Apache-2.0
04_trainfull3d/04_trainfull3d_labels_01_partial3d.ipynb
bearpelican/rsna_retro
Training
learn.lr_find()

do_fit(learn, 8, 1e-3)
learn.save(f'runs/{name}-1')

learn.load(f'runs/{name}-1')
learn.dls = get_3d_dls_aug(Meta.df_comb, sz=256, bs=12, grps=Meta.grps_stg1)
do_fit(learn, 4, 1e-4)
learn.save(f'runs/{name}-2')

learn.load(f'runs/{name}-2')
learn.dls = get_3d_dls_aug(Meta.df_comb, sz=384, bs=4, path=path_jpg, grps=Meta.grps_stg1)
do_fit(learn, 2, 1e-5)
learn.save(f'runs/{name}-3')
_____no_output_____
Apache-2.0
04_trainfull3d/04_trainfull3d_labels_01_partial3d.ipynb
bearpelican/rsna_retro
Import Modules
import cv2
import numpy as np
from google.colab.patches import cv2_imshow
_____no_output_____
MIT
ImageResize/Image_Scaling/Image_scaling.ipynb
noviicee/Image-Processing-OpenCV
Load Image
# The image is loaded using the cv2.imread() method; here the flag is 0, which loads the image in GRAYSCALE mode.
'''
Syntax: cv2.imread(path, flag)

Parameters:
path: string representing the path of the image to be read.
flag: specifies the way in which the image should be read.
'''
img = cv2.imread("input.png", 0)
cv2_imshow(img)
_____no_output_____
MIT
ImageResize/Image_Scaling/Image_scaling.ipynb
noviicee/Image-Processing-OpenCV
Apply scaling Operation
# To perform the scaling operation, the cv2.resize() method is used.
'''
Syntax: cv2.resize(image, dsize, fx=1, fy=1, interpolation)

Parameters:
image: input image.
dsize: (width, height) of the output image; pass None to derive it from fx and fy.
fx: scaling factor along the x-axis, default=1.
fy: scaling factor along the y-axis, default=1.
interpolation: interpolation method to be used.
'''
scaled_up_x   = cv2.resize(img, None, fx=2,   fy=1,   interpolation=cv2.INTER_CUBIC)
scaled_down_x = cv2.resize(img, None, fx=0.5, fy=1,   interpolation=cv2.INTER_LINEAR)
scaled_up_y   = cv2.resize(img, None, fx=1,   fy=2,   interpolation=cv2.INTER_CUBIC)
scaled_down_y = cv2.resize(img, None, fx=1,   fy=0.5, interpolation=cv2.INTER_LINEAR)
_____no_output_____
MIT
ImageResize/Image_Scaling/Image_scaling.ipynb
noviicee/Image-Processing-OpenCV
Display the scaled image
cv2_imshow(scaled_up_x)
cv2_imshow(scaled_down_x)
cv2_imshow(scaled_up_y)
cv2_imshow(scaled_down_y)
_____no_output_____
MIT
ImageResize/Image_Scaling/Image_scaling.ipynb
noviicee/Image-Processing-OpenCV
***Introduction to Radar Using Python and MATLAB***

Andy Harrison - Copyright (C) 2019 Artech House

Pulse Train Ambiguity Function
***
Referring to Section 8.6.1, the ambiguity function for a coherent pulse train is found by employing the generic waveform technique outlined in Section 8.6.3.
***
Begin by getting the library path
import lib_path
_____no_output_____
Apache-2.0
jupyter/Chapter08/pulse_train_ambiguity.ipynb
mberkanbicer/software
Set the pulsewidth (s), the pulse repetition interval (s) and the number of pulses
pulsewidth = 0.4
pri = 1.0
number_of_pulses = 6
_____no_output_____
Apache-2.0
jupyter/Chapter08/pulse_train_ambiguity.ipynb
mberkanbicer/software
Generate the time delay (s) using the `linspace` routine from `numpy`
from numpy import linspace

# Set the time delay
time_delay = linspace(-number_of_pulses * pri, number_of_pulses * pri, 5000)
_____no_output_____
Apache-2.0
jupyter/Chapter08/pulse_train_ambiguity.ipynb
mberkanbicer/software
Calculate the ambiguity function for the pulse train
from Libs.ambiguity.ambiguity_function import pulse_train
from numpy import finfo

ambiguity = pulse_train(time_delay, finfo(float).eps, pulsewidth, pri, number_of_pulses)
_____no_output_____
Apache-2.0
jupyter/Chapter08/pulse_train_ambiguity.ipynb
mberkanbicer/software
Plot the zero-Doppler cut using the `matplotlib` routines
from matplotlib import pyplot as plt

# Set the figure size
plt.rcParams["figure.figsize"] = (15, 10)

# Plot the ambiguity function
plt.plot(time_delay, ambiguity, '')

# Set the x and y axis labels
plt.xlabel("Time (s)", size=12)
plt.ylabel("Relative Amplitude", size=12)

# Turn on the grid
plt.grid(linestyle=':', linewidth=0.5)

# Set the plot title and labels
plt.title('Pulse Train Ambiguity Function', size=14)

# Set the tick label size
plt.tick_params(labelsize=12)
_____no_output_____
Apache-2.0
jupyter/Chapter08/pulse_train_ambiguity.ipynb
mberkanbicer/software
Set the Doppler mismatch frequencies using the `linspace` routine
doppler_frequency = linspace(-2.0 / pulsewidth, 2.0 / pulsewidth, 1000)
_____no_output_____
Apache-2.0
jupyter/Chapter08/pulse_train_ambiguity.ipynb
mberkanbicer/software
Calculate the ambiguity function for the pulse train
ambiguity = pulse_train(finfo(float).eps, doppler_frequency, pulsewidth, pri, number_of_pulses)
_____no_output_____
Apache-2.0
jupyter/Chapter08/pulse_train_ambiguity.ipynb
mberkanbicer/software
Display the zero-range cut for the pulse train
plt.plot(doppler_frequency, ambiguity, '')

# Set the x and y axis labels
plt.xlabel("Doppler (Hz)", size=12)
plt.ylabel("Relative Amplitude", size=12)

# Turn on the grid
plt.grid(linestyle=':', linewidth=0.5)

# Set the plot title and labels
plt.title('Pulse Train Ambiguity Function', size=14)

# Set the tick label size
plt.tick_params(labelsize=12)
_____no_output_____
Apache-2.0
jupyter/Chapter08/pulse_train_ambiguity.ipynb
mberkanbicer/software
Set the time delay and Doppler mismatch frequency and create the two-dimensional grid using the `meshgrid` routine from `numpy`
from numpy import meshgrid

# Set the time delay
time_delay = linspace(-number_of_pulses * pri, number_of_pulses * pri, 1000)

# Set the Doppler mismatch
doppler_frequency = linspace(-2.0 / pulsewidth, 2.0 / pulsewidth, 1000)

# Create the grid
t, f = meshgrid(time_delay, doppler_frequency)
_____no_output_____
Apache-2.0
jupyter/Chapter08/pulse_train_ambiguity.ipynb
mberkanbicer/software
Calculate the ambiguity function for the pulse train
ambiguity = pulse_train(t, f, pulsewidth, pri, number_of_pulses)
_____no_output_____
Apache-2.0
jupyter/Chapter08/pulse_train_ambiguity.ipynb
mberkanbicer/software
Display the two-dimensional contour plot for the pulse train ambiguity function
# Plot the ambiguity function
from numpy import finfo
plt.contour(t, f, ambiguity + finfo('float').eps, 20, cmap='jet', vmin=-0.2, vmax=1.0)

# Set the x and y axis labels
plt.xlabel("Time (s)", size=12)
plt.ylabel("Doppler (Hz)", size=12)

# Turn on the grid
plt.grid(linestyle=':', linewidth=0.5)

# Set the plot title and labels
plt.title('Pulse Train Ambiguity Function', size=14)

# Set the tick label size
plt.tick_params(labelsize=12)
_____no_output_____
Apache-2.0
jupyter/Chapter08/pulse_train_ambiguity.ipynb
mberkanbicer/software
============================================
4D Neuroimaging/BTi phantom dataset tutorial
============================================

Here we read 4DBTi epochs data obtained with a spherical phantom using four different dipole locations. For each condition we compute evoked data and compute dipole fits. Data are provided by Jean-Michel Badier from MEG center in Marseille, France.
# Authors: Alex Gramfort <[email protected]>
#
# License: BSD (3-clause)

import os.path as op
import numpy as np

from mayavi import mlab

from mne.datasets import phantom_4dbti
import mne
_____no_output_____
BSD-3-Clause
stable/_downloads/a68c968ba9eafa2b1315cbf9e139eee3/plot_phantom_4DBTi.ipynb
drammock/mne-tools.github.io
Read data and compute a dipole fit at the peak of the evoked response
data_path = phantom_4dbti.data_path()
raw_fname = op.join(data_path, '%d/e,rfhp1.0Hz')

dipoles = list()
sphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=0.080)

t0 = 0.07  # peak of the response

pos = np.empty((4, 3))

for ii in range(4):
    raw = mne.io.read_raw_bti(raw_fname % (ii + 1,),
                              rename_channels=False, preload=True)
    raw.info['bads'] = ['A173', 'A213', 'A232']
    events = mne.find_events(raw, 'TRIGGER', mask=4350, mask_type='not_and')
    epochs = mne.Epochs(raw, events=events, event_id=8192, tmin=-0.2, tmax=0.4,
                        preload=True)
    evoked = epochs.average()
    evoked.plot(time_unit='s')
    cov = mne.compute_covariance(epochs, tmax=0.)
    dip = mne.fit_dipole(evoked.copy().crop(t0, t0), cov, sphere)[0]
    pos[ii] = dip.pos[0]
_____no_output_____
BSD-3-Clause
stable/_downloads/a68c968ba9eafa2b1315cbf9e139eee3/plot_phantom_4DBTi.ipynb
drammock/mne-tools.github.io
Compute localisation errors
actual_pos = 0.01 * np.array([[0.16, 1.61, 5.13],
                              [0.17, 1.35, 4.15],
                              [0.16, 1.05, 3.19],
                              [0.13, 0.80, 2.26]])
actual_pos = np.dot(actual_pos, [[0, 1, 0], [-1, 0, 0], [0, 0, 1]])

errors = 1e3 * np.linalg.norm(actual_pos - pos, axis=1)
print("errors (mm) : %s" % errors)
_____no_output_____
BSD-3-Clause
stable/_downloads/a68c968ba9eafa2b1315cbf9e139eee3/plot_phantom_4DBTi.ipynb
drammock/mne-tools.github.io
Plot the dipoles in 3D
def plot_pos(pos, color=(0., 0., 0.)):
    mlab.points3d(pos[:, 0], pos[:, 1], pos[:, 2], scale_factor=0.005,
                  color=color)


mne.viz.plot_alignment(evoked.info, bem=sphere, surfaces=[])

# Plot the position of the actual dipole
plot_pos(actual_pos, color=(1., 0., 0.))
# Plot the position of the estimated dipole
plot_pos(pos, color=(1., 1., 0.))
_____no_output_____
BSD-3-Clause
stable/_downloads/a68c968ba9eafa2b1315cbf9e139eee3/plot_phantom_4DBTi.ipynb
drammock/mne-tools.github.io
F1, Precision, Recall, and Confusion Matrix
from sklearn.metrics import precision_recall_fscore_support
from sklearn.metrics import recall_score
from sklearn.metrics import classification_report

y_prediction = model.predict_classes(X_test)
y_prediction.reshape(-1,1)

print("Recall score:" + str(recall_score(y_test, y_prediction)))
print(classification_report(y_test, y_prediction, target_names=["default", "non_default"]))

import itertools
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix

def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='Confusion matrix',
                          cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        print("Normalized confusion matrix")
    else:
        print('Confusion matrix, without normalization')

    print(cm)

    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)

    fmt = '.2f' if normalize else 'd'
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, format(cm[i, j], fmt),
                 horizontalalignment="center",
                 color="red" if cm[i, j] > thresh else "black")

    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')

# Compute confusion matrix
cnf_matrix = confusion_matrix(y_test, y_prediction)
np.set_printoptions(precision=2)

# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['Default', 'Non_default'],
                      title='Confusion matrix, without normalization')

# Plot normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['Default', 'Non_default'], normalize=True,
                      title='Normalized confusion matrix')

plt.show()
Confusion matrix, without normalization
[[4687    0]
 [1313    0]]
Normalized confusion matrix
[[1. 0.]
 [1. 0.]]
MIT
Model/3-NeuralNetwork4.ipynb
skawns0724/KOSA-Big-Data_Vision
Nonlinear recharge models

*R.A. Collenteur, University of Graz*

This notebook explains the use of the `RechargeModel` stress model to simulate the combined effect of precipitation and potential evaporation on the groundwater levels. For the computation of the groundwater recharge, three recharge models are currently available:

- `Linear` ([Berendrecht et al., 2003](References); [von Asmuth et al., 2008](References))
- `Berendrecht` ([Berendrecht et al., 2006](References))
- `FlexModel` ([Collenteur et al., 2021](References))

The first model is a simple linear function of precipitation and potential evaporation, while the latter two simulate a nonlinear response of recharge to precipitation using soil-water balance concepts. Detailed descriptions of these models can be found in the articles listed in the [References](References) at the end of this notebook.

Tip: To run this notebook and the related non-linear recharge models, it is strongly recommended to install Numba (http://numba.pydata.org). This Just-In-Time (JIT) compiler compiles the computationally intensive part of the recharge calculation, making the non-linear models as fast as the Linear recharge model.
import pandas as pd
import pastas as ps
import matplotlib.pyplot as plt

ps.show_versions(numba=True)
ps.set_log_level("INFO")
Python version: 3.8.2 (default, Mar 25 2020, 11:22:43) [Clang 4.0.1 (tags/RELEASE_401/final)]
Numpy version: 1.20.2
Scipy version: 1.6.2
Pandas version: 1.1.5
Pastas version: 0.18.0b
Matplotlib version: 3.3.4
numba version: 0.51.2
MIT
examples/notebooks/07_non_linear_recharge.ipynb
pastas/pastas
Read Input data

Input data handling is similar to other stressmodels. The only thing that is necessary to check is that the precipitation and evaporation are provided in mm/day. This is necessary because the parameters for the nonlinear recharge models are defined in mm for the length unit and days for the time unit. It is possible to use other units, but this would require manually setting the initial values and parameter boundaries for the recharge models.
head = pd.read_csv("../data/B32C0639001.csv", parse_dates=['date'],
                   index_col='date', squeeze=True)

# Make this millimeters per day
evap = ps.read_knmi("../data/etmgeg_260.txt", variables="EV24").series * 1e3
rain = ps.read_knmi("../data/etmgeg_260.txt", variables="RH").series * 1e3

fig, axes = plt.subplots(3, 1, figsize=(10, 6), sharex=True)
head.plot(ax=axes[0], x_compat=True, linestyle=" ", marker=".")
evap.plot(ax=axes[1], x_compat=True)
rain.plot(ax=axes[2], x_compat=True)
axes[0].set_ylabel("Head [m]")
axes[1].set_ylabel("Evap [mm/d]")
axes[2].set_ylabel("Rain [mm/d]")
plt.xlim("1985", "2005");
INFO: Inferred frequency for time series EV24 260: freq=D
INFO: Inferred frequency for time series RH 260: freq=D
MIT
examples/notebooks/07_non_linear_recharge.ipynb
pastas/pastas
Make a basic model

The normal workflow may be used to create and calibrate the model:

1. Create a Pastas `Model` instance
2. Choose a recharge model. All recharge models can be accessed through the recharge subpackage (`ps.rch`).
3. Create a `RechargeModel` object and add it to the model
4. Solve and visualize the model
ml = ps.Model(head)

# Select a recharge model
rch = ps.rch.FlexModel()
#rch = ps.rch.Berendrecht()
#rch = ps.rch.Linear()

rm = ps.RechargeModel(rain, evap, recharge=rch, rfunc=ps.Gamma, name="rch")
ml.add_stressmodel(rm)

ml.solve(noise=True, tmin="1990", report="basic")
ml.plots.results(figsize=(10, 6));
INFO: Cannot determine frequency of series head: freq=None. The time series is irregular.
INFO: Inferred frequency for time series RH 260: freq=D
INFO: Inferred frequency for time series EV24 260: freq=D
MIT
examples/notebooks/07_non_linear_recharge.ipynb
pastas/pastas
Analyze the estimated recharge flux

After the parameter estimation we can take a look at the recharge flux computed by the model. The flux is easy to obtain using the `get_stress` method of the model object, which automatically provides the optimal parameter values that were just estimated. After this, we can for example look at the yearly recharge flux estimated by the Pastas model.
recharge = ml.get_stress("rch").resample("A").sum()

ax = recharge.plot.bar(figsize=(10, 3))
ax.set_xticklabels(recharge.index.year)
plt.ylabel("Recharge [mm/year]");
_____no_output_____
MIT
examples/notebooks/07_non_linear_recharge.ipynb
pastas/pastas
Place Stock Trades into Senator Dataframe

1. Understand the Senator Trading Report (STR) Dataframe
import pandas as pd

# https://docs.google.com/spreadsheets/d/1lH_LpTgRlfzKvpRnWYgoxlkWvJj0v1r3zN3CeWMAgqI/edit?usp=sharing
try:
    sen_df = pd.read_csv("Senator Stock Trades/Senate Stock Watcher 04_16_2020 All Transactions.csv")
except:
    sen_df = pd.read_csv("https://github.com/pkm29/big_data_final_project/raw/master/Senate%20Stock%20Trades/Senate%20Stock%20Watcher%2004_16_2020%20All%20Transactions.csv")

sen_df.head()
sen_df.type.unique()
_____no_output_____
MIT
Stocks/Place Stock Trades into Senator Dataframe Ankur Edit.ipynb
paulmtree/Suspicious-Senator-Trading
There are 4 types of trades:

- Exchange: exchange one stock for another
- Sale (Full): selling all of their stock
- Purchase: buying a stock
- Sale (Partial): selling some of that particular stock
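For a quick tally of how many trades fall into each of these types, a one-liner on the `sen_df` loaded above would do (a small sketch, not part of the original notebook):

```python
# Count the number of trades per transaction type
print(sen_df["type"].value_counts())
```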
n_exchanges = len(sen_df.loc[sen_df['type'] == "Exchange"])
n_trades = len(sen_df)
print("There are " + str(n_exchanges) + " exchange trades out of a total of " + str(n_trades) + " trades.")

sen_df = sen_df.loc[sen_df['type'] != "Exchange"]
There are 84 exchange trades out of a total of 8600 trades.
MIT
Stocks/Place Stock Trades into Senator Dataframe Ankur Edit.ipynb
paulmtree/Suspicious-Senator-Trading
At this point in time, I will exclude exchange trades because they are so few and I wish to first build the basic structure of the project. Including them would require splitting each exchange into two rows, one per company, and so on; I may add this step later if time permits. There should now be 8516 trades remaining in the dataframe. Let's make sure this is so.
n_trades = len(sen_df)
print("There are " + str(n_trades) + " trades in the dataframe")

n_blank_ticker = len(sen_df.loc[sen_df['ticker'] == "--"])
print("There are " + str(n_blank_ticker) + " trades w/o a ticker out of a total of " + str(n_trades) + " trades")

sen_df = sen_df.loc[sen_df['ticker'] != "--"]
There are 1872 trades w/o a ticker out of a total of 8516 trades
MIT
Stocks/Place Stock Trades into Senator Dataframe Ankur Edit.ipynb
paulmtree/Suspicious-Senator-Trading
For the same reasons we excluded exchange trades, we will also exclude trades without a ticker (which all public stocks have - the ticker is their identifier on the stock exchange). Eliminating trades without a ticker takes out trades of other types of securities (corporate bonds, municipal securities, non-public stock). There should now be 6644 trades remaining in the dataframe. Let's make sure this is so.
n_trades = len(sen_df) print("There are " +str(n_trades)+ " trades in the dataframe")
There are 6644 trades in the dataframe
MIT
Stocks/Place Stock Trades into Senator Dataframe Ankur Edit.ipynb
paulmtree/Suspicious-Senator-Trading
2. Add Data to STR Dataframe

Import Data

In this step we will be using company information such as market cap and industry from online lists provided by the NYSE, NASDAQ, and ASXL exchange. Links can be found here: https://stackoverflow.com/questions/25338608/download-all-stock-symbol-list-of-a-market
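As an aside (not the approach taken in this notebook), the same enrichment could be sketched with a vectorized `pandas.merge` instead of the per-row loops used below; the column names follow the exchange files loaded here, and `merged` is a hypothetical name for the result:

```python
import pandas as pd

# Stack the NYSE and NASDAQ listings and keep only the columns we want to attach.
company_df = pd.concat([NYSE_df, NASDAQ_df], ignore_index=True)
company_df = company_df[["Symbol", "MarketCap", "Sector", "industry"]].drop_duplicates("Symbol")

# Left-join on ticker; trades whose ticker is not listed get NaN, which we map to
# "none" (as the loop-based version does), along with any "n/a" placeholder values.
merged = sen_df.merge(company_df, how="left", left_on="ticker", right_on="Symbol")
cols = ["MarketCap", "Sector", "industry"]
merged[cols] = merged[cols].replace("n/a", "none").fillna("none")
```

A merge like this avoids the quadratic ticker lookups, at the cost of having to handle the "n/a" and missing-ticker cases explicitly.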
ticker_list = list() try: NYSE_df = pd.read_csv("NYSEcompanylist.csv") except: NYSE_df = pd.read_csv("https://github.com/pkm29/big_data_final_project/raw/master/Stocks/NYSEcompanylist.csv") try: NASDAQ_df = pd.read_csv("NASDAQcompanylist.csv") except: NASDAQ_df = pd.read_csv("https://github.com/pkm29/big_data_final_project/raw/master/Stocks/NASDAQcompanylist.csv") ticker_list.append(NYSE_df) ticker_list.append(NASDAQ_df) NYSE_df.head() NASDAQ_df.head() """ Add data for Berkshire Hathaway, Lions Gate Entertainment, and Royal Dutch Shell to the NYSE company list. While #these companies are in the company list, their fields are empty. Also, change the tickers of these companies to #match Senate Stock Data (since dashes are used instead of periods in that dataset, we make sure the same is true in the NYSE company list). What matters is consistent convention here. """ row_count = 0 replacement_count = 0 for row_tuple in NYSE_df.itertuples(): if replacement_count == 4: break if row_tuple.Symbol == "BRK.B": #row_tuple.Symbol = "BRK-B" NYSE_df.at[row_count, 'Symbol'] = "BRK-B" #Shares outstanding reported in Q1 2020 financial reports, stock price from May 6, when this data is dated #row_tuple.MarketCap = "$420.02B" NYSE_df.at[row_count, 'MarketCap'] = "$420.02B" #row_tuple.Sector = "Miscellaneous" NYSE_df.at[row_count, 'Sector'] = "Miscellaneous" #row_tuple.industry = "Conglomerate" NYSE_df.at[row_count, 'industry'] = "Conglomerate" replacement_count = replacement_count + 1 if row_tuple.Symbol == "LGF.B": #row_tuple.Symbol = "LGF-B" #Shares outstanding reported in Q1 2020 financial reports, stock price from May 6, when this data is dated #row_tuple.MarketCap = "$14.62B" #row_tuple.Sector = "Consumer Services" #row_tuple.industry = "Movies/Entertainment" NYSE_df.at[row_count, 'Symbol'] = "LGF-B" NYSE_df.at[row_count, 'MarketCap'] = "$14.62B" NYSE_df.at[row_count, 'Sector'] = "Consumer Services" NYSE_df.at[row_count, 'industry'] = "Movies/Entertainment" replacement_count = replacement_count + 1 if row_tuple.Symbol == "RDS.A": #row_tuple.Symbol = "RDS-A" #Shares outstanding reported in Q1 2020 financial reports, stock price from May 6, when this data is dated #row_tuple.MarketCap = "$122.28B" #row_tuple.Sector = "Energy" #row_tuple.industry = "Oil & Gas Production" NYSE_df.at[row_count, 'Symbol'] = "RDS-A" NYSE_df.at[row_count, 'MarketCap'] = "$122.28B" NYSE_df.at[row_count, 'Sector'] = "Energy" NYSE_df.at[row_count, 'industry'] = "Oil & Gas Production" replacement_count = replacement_count + 1 if row_tuple.Symbol == "RDS.B": #row_tuple.Symbol = "RDS-B" #Shares outstanding reported in Q1 2020 financial reports, stock price from May 6, when this data is dated #row_tuple.MarketCap = "$122.09B" #row_tuple.Sector = "Energy" #row_tuple.industry = "Oil & Gas Production" NYSE_df.at[row_count, 'Symbol'] = "RDS-B" NYSE_df.at[row_count, 'MarketCap'] = "$122.09B" NYSE_df.at[row_count, 'Sector'] = "Energy" NYSE_df.at[row_count, 'industry'] = "Oil & Gas Production" replacement_count = replacement_count + 1 row_count = row_count + 1 #Confirm changes have been made successfully for row_tuple in NYSE_df.itertuples(): if row_tuple.Symbol == "BRK-B": print (row_tuple) if row_tuple.Symbol == "LGF-B": print (row_tuple) if row_tuple.Symbol == "RDS-A": print (row_tuple) if row_tuple.Symbol == "RDS-B": print (row_tuple) #There are also 2 instances where a wrong ticker for Berkshire Hathaway is found in the Senate Stock data #(BRKB is used as opposed to BRK-B). Thus, we correct for those instances here. 
#Find indices of these two trades for row_tuple in sen_df.itertuples(): if row_tuple.ticker == "BRKB": print (row_tuple) #We can see that the indices are 1207 and 4611, so we will manually modify the ticker field of these trades. sen_df.at[1207, 'ticker'] = "BRK-B" sen_df.at[4611, 'ticker'] = "BRK-B" len(sen_df) #Get sector data for each stock trade sector_data = list() for row_tuple in sen_df.itertuples(): tic = row_tuple.ticker count = 0 for row_tuple_tic in NYSE_df.itertuples(): sym = row_tuple_tic.Symbol if tic == sym: count = count+1 if row_tuple_tic.Sector == "n/a": sector_data.append("none") else: sector_data.append(row_tuple_tic.Sector) break if count == 0: for row_tuple_tic in NASDAQ_df.itertuples(): sym = row_tuple_tic.Symbol if tic == sym: count = count+1 if row_tuple_tic.Sector == "n/a": sector_data.append("none") else: sector_data.append(row_tuple_tic.Sector) break if count == 0: sector_data.append("none") print(sector_data[0:9]) #make sure length matches number of rows in df print(len(sector_data)) #counter for how many times the stock traded by senator not found in exchange data set no_ticker_cnt = 0 for i in sector_data: if i == "none": no_ticker_cnt = no_ticker_cnt + 1 print(no_ticker_cnt) #Get industry data for each stock trade industry_data = list() for row_tuple in sen_df.itertuples(): tic = row_tuple.ticker count = 0 for row_tuple_tic in NYSE_df.itertuples(): sym = row_tuple_tic.Symbol if tic == sym: count = count+1 if row_tuple_tic.industry == "n/a": industry_data.append("none") else: industry_data.append(row_tuple_tic.industry) break if count == 0: for row_tuple_tic in NASDAQ_df.itertuples(): sym = row_tuple_tic.Symbol if tic == sym: count = count+1 if row_tuple_tic.industry == "n/a": industry_data.append("none") else: industry_data.append(row_tuple_tic.industry) break if count == 0: industry_data.append("none") print(industry_data[0:9]) #make sure length matches number of rows in df print(len(industry_data)) #counter for how many times the stock traded by senator not found in exchange data set no_ticker_cnt = 0 for i in industry_data: if i == "none": no_ticker_cnt = no_ticker_cnt + 1 print(no_ticker_cnt) #Get market cap data for each stock trade mktcap_data = list() for row_tuple in sen_df.itertuples(): tic = row_tuple.ticker count = 0 for row_tuple_tic in NYSE_df.itertuples(): sym = row_tuple_tic.Symbol if tic == sym: count = count+1 if row_tuple_tic.MarketCap == "n/a": mktcap_data.append("none") else: mktcap_data.append(row_tuple_tic.MarketCap) break if count == 0: for row_tuple_tic in NASDAQ_df.itertuples(): sym = row_tuple_tic.Symbol if tic == sym: count = count+1 if row_tuple_tic.MarketCap == "n/a": mktcap_data.append("none") else: mktcap_data.append(row_tuple_tic.MarketCap) break if count == 0: mktcap_data.append("none") print(mktcap_data[0:9]) #make sure length matches number of rows in df print(len(mktcap_data)) #counter for how many times the stock traded by senator not found in exchange data set no_ticker_cnt = 0 for i in mktcap_data: if i == "none": no_ticker_cnt = no_ticker_cnt + 1 print(no_ticker_cnt) #add new columns to df sen_df['mkt_cap'] = mktcap_data sen_df['sector'] = sector_data sen_df['industry'] = industry_data sen_df = sen_df.fillna("none") sen_df.head() """ Print out names of companies with missing data to find out why we have so many misses (~17% of our data). There seem to be 3 reasons for this: 1. Companies merging with another or being acquired (or even acquiring and taking the acquired company's name - very rare) 2. 
Foreign companies (listed abroad) 3. American companies listed abroad - this applies to a very small number of trades """ from collections import Counter company_missing_data = list() for row_tuple in sen_df.itertuples(): if row_tuple.mkt_cap == "none": company_missing_data.append(row_tuple.asset_description) print(Counter(company_missing_data)) #Get a view of how many industries are found in our senate stock data. industry_dict = Counter(industry_data) industry_list = list() for x in industry_dict: industry_list.append(x) print(industry_list[0:9]) n_industries = len(industry_list) #since 'none' is included in our list n_industries = n_industries - 1 print("There are " + str(n_industries) + " industries covered by the trades of senators.") import string industry_size_data = list() for row_tuple in sen_df.itertuples(): industry_size = row_tuple.industry if industry_size == 'none': industry_size_data.append("none") continue size = row_tuple.mkt_cap factor = 0 x = size.find("M") if x != -1: factor = 1000000 else: factor = 1000000000 size = size.lstrip("$") size = size.rstrip("MB") size = float(size) size = size*factor if size < 500000000: industry_size = industry_size + "1" industry_size_data.append(industry_size) continue elif size < 1000000000: industry_size = industry_size + "2" industry_size_data.append(industry_size) continue elif size < 10000000000: industry_size = industry_size + "3" industry_size_data.append(industry_size) continue elif size < 50000000000: industry_size = industry_size + "4" industry_size_data.append(industry_size) continue elif size < 100000000000: industry_size = industry_size + "5" industry_size_data.append(industry_size) continue elif size < 500000000000: industry_size = industry_size + "6" industry_size_data.append(industry_size) continue else: industry_size = industry_size + "7" industry_size_data.append(industry_size) continue print(industry_size_data[0:9]) print(len(industry_size_data)) #add the new column to df sen_df['classification'] = industry_size_data sen_df.head() #create a list of all the classifications per industry across whole dataframe, to get a view of the breakdown in #classifications across each industry classification_industry_breakdown = list() for x in industry_list: y = list() for row_tuple in sen_df.itertuples(): if row_tuple.industry == x: y.append(row_tuple.classification) classification_industry_breakdown.append(y) print(classification_industry_breakdown[0:9])
[['Biotechnology: Laboratory Analytical Instruments4', 'Biotechnology: Laboratory Analytical Instruments4', 'Biotechnology: Laboratory Analytical Instruments4', 'Biotechnology: Laboratory Analytical Instruments4', 'Biotechnology: Laboratory Analytical Instruments3', 'Biotechnology: Laboratory Analytical Instruments4', 'Biotechnology: Laboratory Analytical Instruments4', 'Biotechnology: Laboratory Analytical Instruments4', 'Biotechnology: Laboratory Analytical Instruments4', 'Biotechnology: Laboratory Analytical Instruments4', 'Biotechnology: Laboratory Analytical Instruments4', 'Biotechnology: Laboratory Analytical Instruments3', 'Biotechnology: Laboratory Analytical Instruments4', 'Biotechnology: Laboratory Analytical Instruments4', 'Biotechnology: Laboratory Analytical Instruments4', 'Biotechnology: Laboratory Analytical Instruments4', 'Biotechnology: Laboratory Analytical Instruments3', 'Biotechnology: Laboratory Analytical Instruments4', 'Biotechnology: Laboratory Analytical Instruments4', 'Biotechnology: Laboratory Analytical Instruments4', 'Biotechnology: Laboratory Analytical Instruments4', 'Biotechnology: Laboratory Analytical Instruments4', 'Biotechnology: Laboratory Analytical Instruments4', 'Biotechnology: Laboratory Analytical Instruments4', 'Biotechnology: Laboratory Analytical Instruments4', 'Biotechnology: Laboratory Analytical Instruments4', 'Biotechnology: Laboratory Analytical Instruments4', 'Biotechnology: Laboratory Analytical Instruments4', 'Biotechnology: Laboratory Analytical Instruments4', 'Biotechnology: Laboratory Analytical Instruments4', 'Biotechnology: Laboratory Analytical Instruments4', 'Biotechnology: Laboratory Analytical Instruments4', 'Biotechnology: Laboratory Analytical Instruments4', 'Biotechnology: Laboratory Analytical Instruments4', 'Biotechnology: Laboratory Analytical Instruments4'], ['Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components4', 'Industrial Machinery/Components6', 'Industrial Machinery/Components4', 'Industrial Machinery/Components6', 'Industrial Machinery/Components5', 'Industrial Machinery/Components5', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components5', 'Industrial Machinery/Components5', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components4', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components5', 'Industrial Machinery/Components4', 'Industrial Machinery/Components4', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components4', 'Industrial Machinery/Components6', 'Industrial Machinery/Components6', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components6', 
'Industrial Machinery/Components6', 'Industrial Machinery/Components6', 'Industrial Machinery/Components6', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components4', 'Industrial Machinery/Components6', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components4', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components4', 'Industrial Machinery/Components5', 'Industrial Machinery/Components5', 'Industrial Machinery/Components5', 'Industrial Machinery/Components4', 'Industrial Machinery/Components5', 'Industrial Machinery/Components6', 'Industrial Machinery/Components5', 'Industrial Machinery/Components6', 'Industrial Machinery/Components4', 'Industrial Machinery/Components3', 'Industrial Machinery/Components6', 'Industrial Machinery/Components5', 'Industrial Machinery/Components4', 'Industrial Machinery/Components4', 'Industrial Machinery/Components4', 'Industrial Machinery/Components6', 'Industrial Machinery/Components4', 'Industrial Machinery/Components6', 'Industrial Machinery/Components5', 'Industrial Machinery/Components5', 'Industrial Machinery/Components4', 'Industrial Machinery/Components5', 'Industrial Machinery/Components4', 'Industrial Machinery/Components6', 'Industrial Machinery/Components5', 'Industrial Machinery/Components6', 'Industrial Machinery/Components6', 'Industrial Machinery/Components4', 'Industrial Machinery/Components3', 'Industrial Machinery/Components3', 'Industrial Machinery/Components4', 'Industrial Machinery/Components4', 'Industrial Machinery/Components6', 'Industrial Machinery/Components4', 'Industrial Machinery/Components6', 'Industrial Machinery/Components4', 'Industrial Machinery/Components4', 'Industrial Machinery/Components5', 'Industrial Machinery/Components6', 'Industrial Machinery/Components6', 'Industrial Machinery/Components5', 'Industrial Machinery/Components6', 'Industrial Machinery/Components6', 'Industrial Machinery/Components6', 'Industrial Machinery/Components6', 'Industrial Machinery/Components6', 'Industrial Machinery/Components6', 'Industrial Machinery/Components6', 'Industrial Machinery/Components6', 'Industrial Machinery/Components6', 'Industrial Machinery/Components6', 'Industrial Machinery/Components4', 'Industrial Machinery/Components4', 'Industrial Machinery/Components6', 'Industrial Machinery/Components6', 'Industrial Machinery/Components4', 'Industrial Machinery/Components4', 'Industrial Machinery/Components4', 'Industrial Machinery/Components4', 'Industrial Machinery/Components4', 'Industrial Machinery/Components6', 'Industrial Machinery/Components4', 'Industrial Machinery/Components4', 'Industrial Machinery/Components4', 'Industrial Machinery/Components4', 'Industrial Machinery/Components4', 'Industrial Machinery/Components4', 'Industrial Machinery/Components4', 'Industrial Machinery/Components4', 'Industrial Machinery/Components4', 'Industrial 
Machinery/Components4', ...], ['none', ...], ['Paints/Coatings3', ...], ['Building operators4', ...], ['Major Banks5', ...], ['Semiconductors5', ...], ['Major Pharmaceuticals6', ...], ['Newspapers/Magazines3', ...]]
MIT
Stocks/Place Stock Trades into Senator Dataframe Ankur Edit.ipynb
paulmtree/Suspicious-Senator-Trading
Collaborative filtering on Google Analytics data
This notebook demonstrates how to implement a WALS matrix factorization approach to do collaborative filtering.
import os PROJECT = "qwiklabs-gcp-00-34ffb0f0dc65" # REPLACE WITH YOUR PROJECT ID BUCKET = "cloud-training-demos-ml" # REPLACE WITH YOUR BUCKET NAME REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1 # Do not change these os.environ["PROJECT"] = PROJECT os.environ["BUCKET"] = BUCKET os.environ["REGION"] = REGION os.environ["TFVERSION"] = "1.13" %%bash gcloud config set project $PROJECT gcloud config set compute/region $REGION import tensorflow as tf print(tf.__version__)
1.13.1
Apache-2.0
courses/machine_learning/deepdive/10_recommend/wals.ipynb
gozer/training-data-analyst
Create raw dataset
For collaborative filtering, we don't need to know anything about either the users or the content. Essentially, all we need to know is the userId, the itemId, and the rating that the particular user gave the particular item. In this case, we are working with newspaper articles. The company doesn't ask its users to rate the articles. However, we can use the time spent on the page as a proxy for a rating. Normally, we would also add a time filter to this ("latest 7 days"), but our dataset is itself limited to a few days.
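As a rough illustration of that proxy, here is a minimal pandas sketch (on a made-up toy dataframe, not the BigQuery table) that turns session duration into a rating capped at 1:

```python
import pandas as pd

# Toy stand-in for the (visitorId, contentId, session_duration) table.
toy = pd.DataFrame({
    "visitorId": ["a", "a", "b", "c"],
    "contentId": ["x", "y", "x", "z"],
    "session_duration": [30.0, 300.0, 90.0, 45.0],
})

# Scale by the median so a "typical" visit maps to 0.3, then cap at 1.0.
median = toy["session_duration"].median()
toy["rating"] = (0.3 * toy["session_duration"] / median).clip(upper=1.0)
print(toy)
```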
from google.cloud import bigquery bq = bigquery.Client(project = PROJECT) sql = """ #standardSQL WITH CTE_visitor_page_content AS ( SELECT fullVisitorID, (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) AS latestContentId, (LEAD(hits.time, 1) OVER (PARTITION BY fullVisitorId ORDER BY hits.time ASC) - hits.time) AS session_duration FROM `cloud-training-demos.GA360_test.ga_sessions_sample`, UNNEST(hits) AS hits WHERE # only include hits on pages hits.type = "PAGE" GROUP BY fullVisitorId, latestContentId, hits.time ) -- Aggregate web stats SELECT fullVisitorID as visitorId, latestContentId as contentId, SUM(session_duration) AS session_duration FROM CTE_visitor_page_content WHERE latestContentId IS NOT NULL GROUP BY fullVisitorID, latestContentId HAVING session_duration > 0 ORDER BY latestContentId """ df = bq.query(sql).to_dataframe() df.head() stats = df.describe() stats df[["session_duration"]].plot(kind="hist", logy=True, bins=100, figsize=[8,5]) # The rating is the session_duration scaled to be in the range 0-1. This will help with training. median = stats.loc["50%", "session_duration"] df["rating"] = 0.3 * df["session_duration"] / median df.loc[df["rating"] > 1, "rating"] = 1 df[["rating"]].plot(kind="hist", logy=True, bins=100, figsize=[8,5]) del df["session_duration"] %%bash rm -rf data mkdir data df.to_csv(path_or_buf = "data/collab_raw.csv", index = False, header = False) !head data/collab_raw.csv
7337153711992174438,100074831,0.2321051400452234 5190801220865459604,100170790,1.0 2293633612703952721,100510126,0.2481776360816793 5874973374932455844,100510126,0.16690549004998828 1173698801255170595,100676857,0.05464232805149575 883397426232997550,10083328,0.9487035095774818 1808867070685560283,100906145,1.0 7615995624631762562,100906145,0.48418654214351925 5519169380728479914,100915139,0.20026163722525925 3427736932800080345,100950628,0.558924688331153
Apache-2.0
courses/machine_learning/deepdive/10_recommend/wals.ipynb
gozer/training-data-analyst
Create dataset for WALS
The raw dataset (above) won't work for WALS: the userId and itemId have to be 0, 1, 2, ... so we need to create a mapping from visitorId (in the raw data) to userId and from contentId (in the raw data) to itemId. We will need to save this mapping to a file because at prediction time, we'll need to know how to map the contentId in the table above to the itemId. We'll also need two files: a "rows" dataset where all the items for a particular user are listed, and a "columns" dataset where all the users for a particular item are listed. Mapping
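The mapping itself is just an enumeration of the distinct IDs; a tiny toy sketch of the idea (the cell below also writes the mapping out to CSV):

```python
import pandas as pd

# Toy contentId column standing in for the real data.
content_ids = pd.Series(["100074831", "100170790", "100510126", "100170790"])

# Enumerate the unique values: contentId -> itemId in 0, 1, 2, ...
item_mapping = {value: idx for idx, value in enumerate(content_ids.unique())}
# item_mapping == {'100074831': 0, '100170790': 1, '100510126': 2}
```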
import pandas as pd import numpy as np def create_mapping(values, filename): with open(filename, 'w') as ofp: value_to_id = {value:idx for idx, value in enumerate(values.unique())} for value, idx in value_to_id.items(): ofp.write("{},{}\n".format(value, idx)) return value_to_id df = pd.read_csv(filepath_or_buffer = "data/collab_raw.csv", header = None, names = ["visitorId", "contentId", "rating"], dtype = {"visitorId": str, "contentId": str, "rating": np.float}) df.to_csv(path_or_buf = "data/collab_raw.csv", index = False, header = False) user_mapping = create_mapping(df["visitorId"], "data/users.csv") item_mapping = create_mapping(df["contentId"], "data/items.csv") !head -3 data/*.csv df["userId"] = df["visitorId"].map(user_mapping.get) df["itemId"] = df["contentId"].map(item_mapping.get) mapped_df = df[["userId", "itemId", "rating"]] mapped_df.to_csv(path_or_buf = "data/collab_mapped.csv", index = False, header = False) mapped_df.head()
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive/10_recommend/wals.ipynb
gozer/training-data-analyst
Creating rows and columns datasets
import pandas as pd import numpy as np mapped_df = pd.read_csv(filepath_or_buffer = "data/collab_mapped.csv", header = None, names = ["userId", "itemId", "rating"]) mapped_df.head() NITEMS = np.max(mapped_df["itemId"]) + 1 NUSERS = np.max(mapped_df["userId"]) + 1 mapped_df["rating"] = np.round(mapped_df["rating"].values, 2) print("{} items, {} users, {} interactions".format( NITEMS, NUSERS, len(mapped_df) )) grouped_by_items = mapped_df.groupby("itemId") iter = 0 for item, grouped in grouped_by_items: print(item, grouped["userId"].values, grouped["rating"].values) iter = iter + 1 if iter > 5: break import tensorflow as tf grouped_by_items = mapped_df.groupby("itemId") with tf.python_io.TFRecordWriter("data/users_for_item") as ofp: for item, grouped in grouped_by_items: example = tf.train.Example(features = tf.train.Features(feature = { "key": tf.train.Feature(int64_list = tf.train.Int64List(value = [item])), "indices": tf.train.Feature(int64_list = tf.train.Int64List(value = grouped["userId"].values)), "values": tf.train.Feature(float_list = tf.train.FloatList(value = grouped["rating"].values)) })) ofp.write(example.SerializeToString()) grouped_by_users = mapped_df.groupby("userId") with tf.python_io.TFRecordWriter("data/items_for_user") as ofp: for user, grouped in grouped_by_users: example = tf.train.Example(features = tf.train.Features(feature = { "key": tf.train.Feature(int64_list = tf.train.Int64List(value = [user])), "indices": tf.train.Feature(int64_list = tf.train.Int64List(value = grouped["itemId"].values)), "values": tf.train.Feature(float_list = tf.train.FloatList(value = grouped["rating"].values)) })) ofp.write(example.SerializeToString()) !ls -lrt data
total 31908 -rw-r--r-- 1 jupyter jupyter 13152765 Jul 31 20:41 collab_raw.csv -rw-r--r-- 1 jupyter jupyter 2134511 Jul 31 20:41 users.csv -rw-r--r-- 1 jupyter jupyter 82947 Jul 31 20:41 items.csv -rw-r--r-- 1 jupyter jupyter 7812739 Jul 31 20:41 collab_mapped.csv -rw-r--r-- 1 jupyter jupyter 2252828 Jul 31 20:41 users_for_item -rw-r--r-- 1 jupyter jupyter 7217822 Jul 31 20:41 items_for_user
Apache-2.0
courses/machine_learning/deepdive/10_recommend/wals.ipynb
gozer/training-data-analyst
To summarize, we created the following data files from collab_raw.csv:
* ```collab_mapped.csv``` is essentially the same data as in ```collab_raw.csv``` except that ```visitorId``` and ```contentId```, which are business-specific, have been mapped to ```userId``` and ```itemId```, which are enumerated in 0,1,2,.... The mappings themselves are stored in ```items.csv``` and ```users.csv``` so that they can be used during inference.
* ```users_for_item``` contains all the users/ratings for each item in TFExample format.
* ```items_for_user``` contains all the items/ratings for each user in TFExample format.

Train with WALS
Once you have the dataset, do matrix factorization with WALS using the [WALSMatrixFactorization](https://www.tensorflow.org/versions/master/api_docs/python/tf/contrib/factorization/WALSMatrixFactorization) in the contrib directory. This is an estimator model, so it should be relatively familiar. As usual, we write an input_fn to provide the data to the model, and then create the Estimator to do train_and_evaluate. Because it is in contrib and hasn't moved over to tf.estimator yet, we use tf.contrib.learn.Experiment to handle the training loop.
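For intuition about what the estimator does, here is a toy, unweighted alternating-least-squares sketch in NumPy on a small dense matrix; the real WALS estimator additionally handles per-entry weights and sparse input, which this deliberately ignores:

```python
import numpy as np

rng = np.random.RandomState(0)
R = rng.rand(6, 5)                 # toy dense "ratings" matrix: 6 users x 5 items
k, lam = 3, 0.1                    # embedding dimension and L2 regularization

U = rng.normal(size=(6, k))        # user (row) factors
V = rng.normal(size=(5, k))        # item (column) factors

for _ in range(20):
    # Hold V fixed and solve the regularized least-squares problem for U, then swap.
    U = R @ V @ np.linalg.inv(V.T @ V + lam * np.eye(k))
    V = R.T @ U @ np.linalg.inv(U.T @ U + lam * np.eye(k))

print("reconstruction error:", np.linalg.norm(R - U @ V.T))
```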
import os import tensorflow as tf from tensorflow.python.lib.io import file_io from tensorflow.contrib.factorization import WALSMatrixFactorization def read_dataset(mode, args): def decode_example(protos, vocab_size): features = { "key": tf.FixedLenFeature(shape = [1], dtype = tf.int64), "indices": tf.VarLenFeature(dtype = tf.int64), "values": tf.VarLenFeature(dtype = tf.float32)} parsed_features = tf.parse_single_example(serialized = protos, features = features) values = tf.sparse_merge(sp_ids = parsed_features["indices"], sp_values = parsed_features["values"], vocab_size = vocab_size) # Save key to remap after batching # This is a temporary workaround to assign correct row numbers in each batch. # You can ignore details of this part and remap_keys(). key = parsed_features["key"] decoded_sparse_tensor = tf.SparseTensor(indices = tf.concat(values = [values.indices, [key]], axis = 0), values = tf.concat(values = [values.values, [0.0]], axis = 0), dense_shape = values.dense_shape) return decoded_sparse_tensor def remap_keys(sparse_tensor): # Current indices of our SparseTensor that we need to fix bad_indices = sparse_tensor.indices # shape = (current_batch_size * (number_of_items/users[i] + 1), 2) # Current values of our SparseTensor that we need to fix bad_values = sparse_tensor.values # shape = (current_batch_size * (number_of_items/users[i] + 1),) # Since batch is ordered, the last value for a batch index is the user # Find where the batch index chages to extract the user rows # 1 where user, else 0 user_mask = tf.concat(values = [bad_indices[1:,0] - bad_indices[:-1,0], tf.constant(value = [1], dtype = tf.int64)], axis = 0) # shape = (current_batch_size * (number_of_items/users[i] + 1), 2) # Mask out the user rows from the values good_values = tf.boolean_mask(tensor = bad_values, mask = tf.equal(x = user_mask, y = 0)) # shape = (current_batch_size * number_of_items/users[i],) item_indices = tf.boolean_mask(tensor = bad_indices, mask = tf.equal(x = user_mask, y = 0)) # shape = (current_batch_size * number_of_items/users[i],) user_indices = tf.boolean_mask(tensor = bad_indices, mask = tf.equal(x = user_mask, y = 1))[:, 1] # shape = (current_batch_size,) good_user_indices = tf.gather(params = user_indices, indices = item_indices[:,0]) # shape = (current_batch_size * number_of_items/users[i],) # User and item indices are rank 1, need to make rank 1 to concat good_user_indices_expanded = tf.expand_dims(input = good_user_indices, axis = -1) # shape = (current_batch_size * number_of_items/users[i], 1) good_item_indices_expanded = tf.expand_dims(input = item_indices[:, 1], axis = -1) # shape = (current_batch_size * number_of_items/users[i], 1) good_indices = tf.concat(values = [good_user_indices_expanded, good_item_indices_expanded], axis = 1) # shape = (current_batch_size * number_of_items/users[i], 2) remapped_sparse_tensor = tf.SparseTensor(indices = good_indices, values = good_values, dense_shape = sparse_tensor.dense_shape) return remapped_sparse_tensor def parse_tfrecords(filename, vocab_size): if mode == tf.estimator.ModeKeys.TRAIN: num_epochs = None # indefinitely else: num_epochs = 1 # end-of-input after this files = tf.gfile.Glob(filename = os.path.join(args["input_path"], filename)) # Create dataset from file list dataset = tf.data.TFRecordDataset(files) dataset = dataset.map(map_func = lambda x: decode_example(x, vocab_size)) dataset = dataset.repeat(count = num_epochs) dataset = dataset.batch(batch_size = args["batch_size"]) dataset = dataset.map(map_func = lambda x: remap_keys(x)) 
return dataset.make_one_shot_iterator().get_next() def _input_fn(): features = { WALSMatrixFactorization.INPUT_ROWS: parse_tfrecords("items_for_user", args["nitems"]), WALSMatrixFactorization.INPUT_COLS: parse_tfrecords("users_for_item", args["nusers"]), WALSMatrixFactorization.PROJECT_ROW: tf.constant(True) } return features, None return _input_fn
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive/10_recommend/wals.ipynb
gozer/training-data-analyst
This code is helpful in developing the input function. You don't need it in production.
def try_out(): with tf.Session() as sess: fn = read_dataset( mode = tf.estimator.ModeKeys.EVAL, args = {"input_path": "data", "batch_size": 4, "nitems": NITEMS, "nusers": NUSERS}) feats, _ = fn() print(feats["input_rows"].eval()) print(feats["input_rows"].eval()) try_out() def find_top_k(user, item_factors, k): all_items = tf.matmul(a = tf.expand_dims(input = user, axis = 0), b = tf.transpose(a = item_factors)) topk = tf.nn.top_k(input = all_items, k = k) return tf.cast(x = topk.indices, dtype = tf.int64) def batch_predict(args): import numpy as np with tf.Session() as sess: estimator = tf.contrib.factorization.WALSMatrixFactorization( num_rows = args["nusers"], num_cols = args["nitems"], embedding_dimension = args["n_embeds"], model_dir = args["output_dir"]) # This is how you would get the row factors for out-of-vocab user data # row_factors = list(estimator.get_projections(input_fn=read_dataset(tf.estimator.ModeKeys.EVAL, args))) # user_factors = tf.convert_to_tensor(np.array(row_factors)) # But for in-vocab data, the row factors are already in the checkpoint user_factors = tf.convert_to_tensor(value = estimator.get_row_factors()[0]) # (nusers, nembeds) # In either case, we have to assume catalog doesn"t change, so col_factors are read in item_factors = tf.convert_to_tensor(value = estimator.get_col_factors()[0])# (nitems, nembeds) # For each user, find the top K items topk = tf.squeeze(input = tf.map_fn(fn = lambda user: find_top_k(user, item_factors, args["topk"]), elems = user_factors, dtype = tf.int64)) with file_io.FileIO(os.path.join(args["output_dir"], "batch_pred.txt"), mode = 'w') as f: for best_items_for_user in topk.eval(): f.write(",".join(str(x) for x in best_items_for_user) + '\n') def train_and_evaluate(args): train_steps = int(0.5 + (1.0 * args["num_epochs"] * args["nusers"]) / args["batch_size"]) steps_in_epoch = int(0.5 + args["nusers"] / args["batch_size"]) print("Will train for {} steps, evaluating once every {} steps".format(train_steps, steps_in_epoch)) def experiment_fn(output_dir): return tf.contrib.learn.Experiment( tf.contrib.factorization.WALSMatrixFactorization( num_rows = args["nusers"], num_cols = args["nitems"], embedding_dimension = args["n_embeds"], model_dir = args["output_dir"]), train_input_fn = read_dataset(tf.estimator.ModeKeys.TRAIN, args), eval_input_fn = read_dataset(tf.estimator.ModeKeys.EVAL, args), train_steps = train_steps, eval_steps = 1, min_eval_frequency = steps_in_epoch ) from tensorflow.contrib.learn.python.learn import learn_runner learn_runner.run(experiment_fn = experiment_fn, output_dir = args["output_dir"]) batch_predict(args) import shutil shutil.rmtree(path = "wals_trained", ignore_errors=True) train_and_evaluate({ "output_dir": "wals_trained", "input_path": "data/", "num_epochs": 0.05, "nitems": NITEMS, "nusers": NUSERS, "batch_size": 512, "n_embeds": 10, "topk": 3 }) !ls wals_trained !head wals_trained/batch_pred.txt
284,5609,36 284,2754,42 284,3168,534 2621,5528,2694 4409,5295,343 5161,3267,3369 5479,1335,55 5479,1335,55 4414,284,5572 284,241,2359
Apache-2.0
courses/machine_learning/deepdive/10_recommend/wals.ipynb
gozer/training-data-analyst
Run as a Python module
Let's run it as a Python module for just a few steps.
os.environ["NITEMS"] = str(NITEMS) os.environ["NUSERS"] = str(NUSERS) %%bash rm -rf wals.tar.gz wals_trained gcloud ml-engine local train \ --module-name=walsmodel.task \ --package-path=${PWD}/walsmodel \ -- \ --output_dir=${PWD}/wals_trained \ --input_path=${PWD}/data \ --num_epochs=0.01 --nitems=${NITEMS} --nusers=${NUSERS} \ --job-dir=./tmp
Will train for 2 steps, evaluating once every 162 steps
Apache-2.0
courses/machine_learning/deepdive/10_recommend/wals.ipynb
gozer/training-data-analyst
Run on Cloud
%%bash gsutil -m cp data/* gs://${BUCKET}/wals/data %%bash OUTDIR=gs://${BUCKET}/wals/model_trained JOBNAME=wals_$(date -u +%y%m%d_%H%M%S) echo $OUTDIR $REGION $JOBNAME gsutil -m rm -rf $OUTDIR gcloud ml-engine jobs submit training $JOBNAME \ --region=$REGION \ --module-name=walsmodel.task \ --package-path=${PWD}/walsmodel \ --job-dir=$OUTDIR \ --staging-bucket=gs://$BUCKET \ --scale-tier=BASIC_GPU \ --runtime-version=$TFVERSION \ -- \ --output_dir=$OUTDIR \ --input_path=gs://${BUCKET}/wals/data \ --num_epochs=10 --nitems=${NITEMS} --nusers=${NUSERS}
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive/10_recommend/wals.ipynb
gozer/training-data-analyst
This took 10 minutes for me.
Get row and column factors
Once you have a trained WALS model, you can get the row and column factors (user and item embeddings) from the checkpoint file. We'll look at how to use these in the section on building a recommendation system using deep neural networks.
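As a preview of how the factors get used, predicted affinities are simply dot products between user and item embeddings; a minimal NumPy sketch with random stand-in factors:

```python
import numpy as np

rng = np.random.RandomState(42)
user_factors = rng.normal(size=(4, 10))    # stand-in for the checkpointed row factors
item_factors = rng.normal(size=(7, 10))    # stand-in for the checkpointed column factors

scores = user_factors @ item_factors.T     # predicted affinity of each user for each item
top3 = np.argsort(-scores, axis=1)[:, :3]  # top-3 itemIds per userId
print(top3)
```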
def get_factors(args): with tf.Session() as sess: estimator = tf.contrib.factorization.WALSMatrixFactorization( num_rows = args["nusers"], num_cols = args["nitems"], embedding_dimension = args["n_embeds"], model_dir = args["output_dir"]) row_factors = estimator.get_row_factors()[0] col_factors = estimator.get_col_factors()[0] return row_factors, col_factors args = { "output_dir": "gs://{}/wals/model_trained".format(BUCKET), "nitems": NITEMS, "nusers": NUSERS, "n_embeds": 10 } user_embeddings, item_embeddings = get_factors(args) print(user_embeddings[:3]) print(item_embeddings[:3])
INFO:tensorflow:Using default config. INFO:tensorflow:Using config: {'_environment': 'local', '_is_chief': True, '_keep_checkpoint_every_n_hours': 10000, '_num_worker_replicas': 0, '_session_config': None, '_task_type': None, '_eval_distribute': None, '_tf_config': gpu_options { per_process_gpu_memory_fraction: 1.0 } , '_master': '', '_log_step_count_steps': 100, '_model_dir': 'gs://qwiklabs-gcp-cbc8684b07fc2dbd-bucket/wals/model_trained', '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f4bd8302f28>, '_device_fn': None, '_keep_checkpoint_max': 5, '_task_id': 0, '_evaluation_master': '', '_save_checkpoints_steps': None, '_protocol': None, '_train_distribute': None, '_save_checkpoints_secs': 600, '_save_summary_steps': 100, '_tf_random_seed': None, '_num_ps_replicas': 0} [[ 3.3451824e-06 -1.1986867e-05 4.8447573e-06 -1.5209486e-05 -1.7004859e-07 1.1976428e-05 9.8887876e-06 7.2386983e-06 -7.0237149e-07 -7.9796819e-06] [-2.5300323e-03 1.4055537e-03 -9.8291773e-04 -4.2533795e-03 -1.4166030e-03 -1.9530674e-03 8.5932651e-04 -1.5276540e-03 2.1342330e-03 1.2041229e-03] [ 9.5228699e-21 5.5453966e-21 2.2947056e-21 -5.8859543e-21 7.7516509e-21 -2.7640896e-20 2.3587296e-20 -3.9876822e-21 1.7312470e-20 2.5409211e-20]] [[-1.2125404e-06 -8.6304914e-05 4.4657736e-05 -6.8423047e-05 5.8551927e-06 9.7241784e-05 6.6776753e-05 1.6673854e-05 -1.2708440e-05 -5.1148414e-05] [-1.1353870e-01 5.9097271e-02 -4.6105500e-02 -1.5460028e-01 -1.9166643e-02 -7.3236257e-02 3.5582058e-02 -5.6805085e-02 7.5831160e-02 7.5306065e-02] [ 7.1989548e-20 4.4574543e-20 6.5149121e-21 -4.6291777e-20 8.8196718e-20 -2.3245078e-19 1.9459292e-19 4.0191465e-20 1.6273659e-19 2.2836562e-19]]
Apache-2.0
courses/machine_learning/deepdive/10_recommend/wals.ipynb
gozer/training-data-analyst
You can visualize the embedding vectors using dimensionality reduction techniques such as PCA.
import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D from sklearn.decomposition import PCA pca = PCA(n_components = 3) pca.fit(user_embeddings) user_embeddings_pca = pca.transform(user_embeddings) fig = plt.figure(figsize = (8,8)) ax = fig.add_subplot(111, projection = "3d") xs, ys, zs = user_embeddings_pca[::150].T ax.scatter(xs, ys, zs)
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive/10_recommend/wals.ipynb
gozer/training-data-analyst
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); Computing the 4-Velocity Time-Component $u^0$, the Magnetic Field Measured by a Comoving Observer $b^{\mu}$, and the Poynting Vector $S^i$ Authors: Zach Etienne & Patrick Nelson[comment]: (Abstract: TODO)**Notebook Status:** Validated **Validation Notes:** This module has been validated against a trusted code (the hand-written smallbPoynET in WVUThorns_diagnostics, which itself is based on expressions in IllinoisGRMHD... which was validated against the original GRMHD code of the Illinois NR group) NRPy+ Source Code for this module: [u0_smallb_Poynting__Cartesian.py](../edit/u0_smallb_Poynting__Cartesian/u0_smallb_Poynting__Cartesian.py)[comment]: (Introduction: TODO) Table of Contents$$\label{toc}$$This notebook is organized as follows1. [Step 1](u0bu): Computing $u^0$ and $b^{\mu}$ 1. [Step 1.a](4metric): Compute the 4-metric $g_{\mu\nu}$ and its inverse $g^{\mu\nu}$ from the ADM 3+1 variables, using the [`BSSN.ADMBSSN_tofrom_4metric`](../edit/BSSN/ADMBSSN_tofrom_4metric.py) ([**tutorial**](Tutorial-ADMBSSN_tofrom_4metric.ipynb)) NRPy+ module 1. [Step 1.b](u0): Compute $u^0$ from the Valencia 3-velocity 1. [Step 1.c](uj): Compute $u_j$ from $u^0$, the Valencia 3-velocity, and $g_{\mu\nu}$ 1. [Step 1.d](gamma): Compute $\gamma=$ `gammaDET` from the ADM 3+1 variables 1. [Step 1.e](beta): Compute $b^\mu$1. [Step 2](poynting_flux): Defining the Poynting Flux Vector $S^{i}$ 1. [Step 2.a](g): Computing $g^{i\nu}$ 1. [Step 2.b](s): Computing $S^{i}$1. [Step 3](code_validation): Code Validation against `u0_smallb_Poynting__Cartesian` NRPy+ module1. [Step 4](appendix): Appendix: Proving Eqs. 53 and 56 in [Duez *et al* (2005)](https://arxiv.org/pdf/astro-ph/0503420.pdf)1. [Step 5](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Computing $u^0$ and $b^{\mu}$ \[Back to [top](toc)\]$$\label{u0bu}$$First some definitions. The spatial components of $b^{\mu}$ are simply the magnetic field as measured by an observer comoving with the plasma $B^{\mu}_{\rm (u)}$, divided by $\sqrt{4\pi}$. In addition, in the ideal MHD limit, $B^{\mu}_{\rm (u)}$ is orthogonal to the plasma 4-velocity $u^\mu$, which sets the $\mu=0$ component. Note also that $B^{\mu}_{\rm (u)}$ is related to the magnetic field as measured by a *normal* observer $B^i$ via a simple projection (Eq 21 in [Duez *et al* (2005)](https://arxiv.org/pdf/astro-ph/0503420.pdf)), which results in the expressions (Eqs 23 and 24 in [Duez *et al* (2005)](https://arxiv.org/pdf/astro-ph/0503420.pdf)):\begin{align}\sqrt{4\pi} b^0 = B^0_{\rm (u)} &= \frac{u_j B^j}{\alpha} \\\sqrt{4\pi} b^i = B^i_{\rm (u)} &= \frac{B^i + (u_j B^j) u^i}{\alpha u^0}\\\end{align}$B^i$ is related to the actual magnetic field evaluated in IllinoisGRMHD, $\tilde{B}^i$ via$$B^i = \frac{\tilde{B}^i}{\gamma},$$where $\gamma$ is the determinant of the spatial 3-metric.The above expressions will require that we compute1. the 4-metric $g_{\mu\nu}$ from the ADM 3+1 variables1. $u^0$ from the Valencia 3-velocity1. $u_j$ from $u^0$, the Valencia 3-velocity, and $g_{\mu\nu}$1. 
$\gamma$ from the ADM 3+1 variables Step 1.a: Compute the 4-metric $g_{\mu\nu}$ and its inverse $g^{\mu\nu}$ from the ADM 3+1 variables, using the [`BSSN.ADMBSSN_tofrom_4metric`](../edit/BSSN/ADMBSSN_tofrom_4metric.py) ([**tutorial**](Tutorial-ADMBSSN_tofrom_4metric.ipynb)) NRPy+ module \[Back to [top](toc)\]$$\label{4metric}$$We are given $\gamma_{ij}$, $\alpha$, and $\beta^i$ from ADMBase, so let's first compute $$g_{\mu\nu} = \begin{pmatrix} -\alpha^2 + \beta^k \beta_k & \beta_i \\\beta_j & \gamma_{ij}\end{pmatrix}.$$
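To make the block structure concrete, here is a minimal standalone SymPy sketch (symbol names are illustrative) of assembling $g_{\mu\nu}$ directly from $\alpha$, $\beta^i$, and $\gamma_{ij}$, independent of the `BSSN.ADMBSSN_tofrom_4metric` module used in the next cell:

```python
import sympy as sp

alpha = sp.symbols("alpha", positive=True)
betaU = sp.Matrix(sp.symbols("beta0 beta1 beta2"))
g00, g01, g02, g11, g12, g22 = sp.symbols("gamma00 gamma01 gamma02 gamma11 gamma12 gamma22")
gammaDD = sp.Matrix([[g00, g01, g02],
                     [g01, g11, g12],
                     [g02, g12, g22]])

betaD = gammaDD * betaU                        # beta_i = gamma_{ij} beta^j
g4DD = sp.zeros(4, 4)
g4DD[0, 0] = -alpha**2 + (betaU.T * betaD)[0]  # g_{00} = -alpha^2 + beta^k beta_k
for i in range(3):
    g4DD[0, i + 1] = g4DD[i + 1, 0] = betaD[i] # g_{0i} = beta_i
    for j in range(3):
        g4DD[i + 1, j + 1] = gammaDD[i, j]     # g_{ij} = gamma_{ij}
```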
# Step 1: Initialize needed Python/NRPy+ modules import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends import NRPy_param_funcs as par # NRPy+: Parameter interface import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support import reference_metric as rfm # NRPy+: Reference metric support from outputC import * # NRPy+: Basic C code output functionality import BSSN.ADMBSSN_tofrom_4metric as AB4m # NRPy+: ADM/BSSN <-> 4-metric conversions # Set spatial dimension = 3 DIM=3 thismodule = "smallbPoynET" # Step 1.a: Compute the 4-metric $g_{\mu\nu}$ and its inverse # $g^{\mu\nu}$ from the ADM 3+1 variables, using the # BSSN.ADMBSSN_tofrom_4metric NRPy+ module import BSSN.ADMBSSN_tofrom_4metric as AB4m gammaDD,betaU,alpha = AB4m.setup_ADM_quantities("ADM") AB4m.g4DD_ito_BSSN_or_ADM("ADM",gammaDD,betaU,alpha) g4DD = AB4m.g4DD AB4m.g4UU_ito_BSSN_or_ADM("ADM",gammaDD,betaU,alpha) g4UU = AB4m.g4UU
_____no_output_____
BSD-2-Clause
Tutorial-u0_smallb_Poynting-Cartesian.ipynb
KAClough/nrpytutorial
Step 1.b: Compute $u^0$ from the Valencia 3-velocity \[Back to [top](toc)\]$$\label{u0}$$According to Eqs. 9-11 of [the IllinoisGRMHD paper](https://arxiv.org/pdf/1501.07276.pdf), the Valencia 3-velocity $v^i_{(n)}$ is related to the 4-velocity $u^\mu$ via\begin{align}\alpha v^i_{(n)} &= \frac{u^i}{u^0} + \beta^i \\\implies u^i &= u^0 \left(\alpha v^i_{(n)} - \beta^i\right)\end{align}Defining $v^i = \frac{u^i}{u^0}$, we get$$v^i = \alpha v^i_{(n)} - \beta^i,$$and in terms of this variable we get\begin{align}g_{00} \left(u^0\right)^2 + 2 g_{0i} u^0 u^i + g_{ij} u^i u^j &= \left(u^0\right)^2 \left(g_{00} + 2 g_{0i} v^i + g_{ij} v^i v^j\right)\\\implies u^0 &= \pm \sqrt{\frac{-1}{g_{00} + 2 g_{0i} v^i + g_{ij} v^i v^j}} \\&= \pm \sqrt{\frac{-1}{(-\alpha^2 + \beta^2) + 2 \beta_i v^i + \gamma_{ij} v^i v^j}} \\&= \pm \sqrt{\frac{1}{\alpha^2 - \gamma_{ij}\left(\beta^i + v^i\right)\left(\beta^j + v^j\right)}}\\&= \pm \sqrt{\frac{1}{\alpha^2 - \alpha^2 \gamma_{ij}v^i_{(n)}v^j_{(n)}}}\\&= \pm \frac{1}{\alpha}\sqrt{\frac{1}{1 - \gamma_{ij}v^i_{(n)}v^j_{(n)}}}\end{align}Generally speaking, numerical errors will occasionally drive expressions under the radical to either negative values or potentially enormous values (corresponding to enormous Lorentz factors). Thus a reliable approach for computing $u^0$ requires that we first rewrite the above expression in terms of the Lorentz factor squared: $\Gamma^2=\left(\alpha u^0\right)^2$:\begin{align}u^0 &= \pm \frac{1}{\alpha}\sqrt{\frac{1}{1 - \gamma_{ij}v^i_{(n)}v^j_{(n)}}}\\\implies \left(\alpha u^0\right)^2 &= \frac{1}{1 - \gamma_{ij}v^i_{(n)}v^j_{(n)}} \\\implies \gamma_{ij}v^i_{(n)}v^j_{(n)} &= 1 - \frac{1}{\left(\alpha u^0\right)^2} \\&= 1 - \frac{1}{\Gamma^2}\end{align}In order for the bottom expression to hold true, the left-hand side must be between 0 and 1. Again, this is not guaranteed due to the appearance of numerical errors. In fact, a robust algorithm will not allow $\Gamma^2$ to become too large (which might contribute greatly to the stress-energy of a given gridpoint), so let's define $\Gamma_{\rm max}$, the largest allowed Lorentz factor. Then our algorithm for computing $u^0$ is as follows:If$$R=\gamma_{ij}v^i_{(n)}v^j_{(n)}>1 - \frac{1}{\Gamma_{\rm max}^2},$$ then adjust the 3-velocity $v^i$ as follows:$$v^i_{(n)} = \sqrt{\frac{1 - \frac{1}{\Gamma_{\rm max}^2}}{R}}v^i_{(n)}.$$After this rescaling, we are then guaranteed that if $R$ is recomputed, it will be set to its ceiling value $R=R_{\rm max} = 1 - \frac{1}{\Gamma_{\rm max}^2}$.Then, regardless of whether the ceiling on $R$ was applied, $u^0$ can be safely computed via$$u^0 = \frac{1}{\alpha \sqrt{1-R}}.$$
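The velocity limiter itself amounts to a simple rescaling; the following is a minimal numerical sketch in plain NumPy (toy flat spatial metric, illustrative values), separate from the generated C code below:

```python
import numpy as np

GAMMA_SPEED_LIMIT = 10.0
alpha = 1.0
gammaDD = np.eye(3)                      # toy (flat) spatial metric
v_n = np.array([0.3, 0.9, 0.4])          # toy Valencia 3-velocity v^i_{(n)}

R = v_n @ gammaDD @ v_n                  # R = gamma_{ij} v^i_{(n)} v^j_{(n)}
Rmax = 1.0 - 1.0 / GAMMA_SPEED_LIMIT**2
if R > Rmax:
    v_n *= np.sqrt(Rmax / R)             # rescale so R hits its ceiling
    R = Rmax
u0 = 1.0 / (alpha * np.sqrt(1.0 - R))    # here u0 == GAMMA_SPEED_LIMIT / alpha
```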
ValenciavU = ixp.register_gridfunctions_for_single_rank1("AUX","ValenciavU",DIM=3) # Step 1: Compute R = 1 - 1/max(Gamma) R = sp.sympify(0) for i in range(DIM): for j in range(DIM): R += gammaDD[i][j]*ValenciavU[i]*ValenciavU[j] GAMMA_SPEED_LIMIT = par.Cparameters("REAL",thismodule,"GAMMA_SPEED_LIMIT",10.0) # Default value based on # IllinoisGRMHD. # GiRaFFE default = 2000.0 Rmax = 1 - 1/(GAMMA_SPEED_LIMIT*GAMMA_SPEED_LIMIT) rescaledValenciavU = ixp.zerorank1() for i in range(DIM): rescaledValenciavU[i] = ValenciavU[i]*sp.sqrt(Rmax/R) rescaledu0 = 1/(alpha*sp.sqrt(1-Rmax)) regularu0 = 1/(alpha*sp.sqrt(1-R)) computeu0_Cfunction = """ /* Function for computing u^0 from Valencia 3-velocity. */ /* Inputs: ValenciavU[], alpha, gammaDD[][], GAMMA_SPEED_LIMIT (C parameter) */ /* Output: u0=u^0 and velocity-limited ValenciavU[] */\n\n""" computeu0_Cfunction += outputC([R,Rmax],["const double R","const double Rmax"],"returnstring", params="includebraces=False,CSE_varprefix=tmpR,outCverbose=False") computeu0_Cfunction += "if(R <= Rmax) " computeu0_Cfunction += outputC(regularu0,"u0","returnstring", params="includebraces=True,CSE_varprefix=tmpnorescale,outCverbose=False") computeu0_Cfunction += " else " computeu0_Cfunction += outputC([rescaledValenciavU[0],rescaledValenciavU[1],rescaledValenciavU[2],rescaledu0], ["ValenciavU0","ValenciavU1","ValenciavU2","u0"],"returnstring", params="includebraces=True,CSE_varprefix=tmprescale,outCverbose=False") print(computeu0_Cfunction)
/* Function for computing u^0 from Valencia 3-velocity. */ /* Inputs: ValenciavU[], alpha, gammaDD[][], GAMMA_SPEED_LIMIT (C parameter) */ /* Output: u0=u^0 and velocity-limited ValenciavU[] */ const double tmpR0 = 2*ValenciavU0; const double R = ((ValenciavU0)*(ValenciavU0))*gammaDD00 + ((ValenciavU1)*(ValenciavU1))*gammaDD11 + 2*ValenciavU1*ValenciavU2*gammaDD12 + ValenciavU1*gammaDD01*tmpR0 + ((ValenciavU2)*(ValenciavU2))*gammaDD22 + ValenciavU2*gammaDD02*tmpR0; const double Rmax = 1 - 1/((GAMMA_SPEED_LIMIT)*(GAMMA_SPEED_LIMIT)); if(R <= Rmax) { const double tmpnorescale0 = 2*ValenciavU0; u0 = 1/(alpha*sqrt(-((ValenciavU0)*(ValenciavU0))*gammaDD00 - ((ValenciavU1)*(ValenciavU1))*gammaDD11 - 2*ValenciavU1*ValenciavU2*gammaDD12 - ValenciavU1*gammaDD01*tmpnorescale0 - ((ValenciavU2)*(ValenciavU2))*gammaDD22 - ValenciavU2*gammaDD02*tmpnorescale0 + 1)); } else { const double tmprescale0 = 2*ValenciavU0; const double tmprescale1 = sqrt((1 - 1/((GAMMA_SPEED_LIMIT)*(GAMMA_SPEED_LIMIT)))/(((ValenciavU0)*(ValenciavU0))*gammaDD00 + ((ValenciavU1)*(ValenciavU1))*gammaDD11 + 2*ValenciavU1*ValenciavU2*gammaDD12 + ValenciavU1*gammaDD01*tmprescale0 + ((ValenciavU2)*(ValenciavU2))*gammaDD22 + ValenciavU2*gammaDD02*tmprescale0)); ValenciavU0 = ValenciavU0*tmprescale1; ValenciavU1 = ValenciavU1*tmprescale1; ValenciavU2 = ValenciavU2*tmprescale1; u0 = fabs(GAMMA_SPEED_LIMIT)/alpha; }
BSD-2-Clause
Tutorial-u0_smallb_Poynting-Cartesian.ipynb
KAClough/nrpytutorial
Step 1.c: Compute $u_j$ from $u^0$, the Valencia 3-velocity, and $g_{\mu\nu}$ \[Back to [top](toc)\]$$\label{uj}$$The basic equation is\begin{align}u_j &= g_{\mu j} u^{\mu} \\&= g_{0j} u^0 + g_{ij} u^i \\&= \beta_j u^0 + \gamma_{ij} u^i \\&= \beta_j u^0 + \gamma_{ij} u^0 \left(\alpha v^i_{(n)} - \beta^i\right) \\&= u^0 \left(\beta_j + \gamma_{ij} \left(\alpha v^i_{(n)} - \beta^i\right) \right)\\&= \alpha u^0 \gamma_{ij} v^i_{(n)} \\\end{align}
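The same contraction with toy numbers (purely illustrative values), as a minimal NumPy sketch:

```python
import numpy as np

alpha, u0 = 1.0, 1.25                    # toy lapse and u^0
gammaDD = np.diag([1.1, 1.0, 0.9])       # toy spatial metric
v_n = np.array([0.2, 0.1, -0.3])         # toy Valencia 3-velocity
uD = alpha * u0 * (gammaDD @ v_n)        # u_j = alpha u^0 gamma_{ij} v^i_{(n)}
```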
u0 = par.Cparameters("REAL",thismodule,"u0",1e300) # Will be overwritten in C code. Set to crazy value to ensure this. uD = ixp.zerorank1() for i in range(DIM): for j in range(DIM): uD[j] += alpha*u0*gammaDD[i][j]*ValenciavU[i]
_____no_output_____
BSD-2-Clause
Tutorial-u0_smallb_Poynting-Cartesian.ipynb
KAClough/nrpytutorial
Step 1.d: Compute $b^\mu$ \[Back to [top](toc)\]$$\label{beta}$$We compute $b^\mu$ from the above expressions:\begin{align}\sqrt{4\pi} b^0 = B^0_{\rm (u)} &= \frac{u_j B^j}{\alpha} \\\sqrt{4\pi} b^i = B^i_{\rm (u)} &= \frac{B^i + (u_j B^j) u^i}{\alpha u^0}\\\end{align}$B^i$ is exactly equal to the $B^i$ evaluated in IllinoisGRMHD/GiRaFFE.Pulling this together, we currently have available as input:+ $\tilde{B}^i$+ $u_j$+ $u^0$,with the goal of outputting now $b^\mu$ and $b^2$:
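As a quick illustration of the index bookkeeping, here is a minimal NumPy sketch of the two expressions above; the numbers are arbitrary placeholders and are not mutually consistent:

```python
import numpy as np

alpha, u0 = 1.0, 1.2                     # toy lapse and u^0
uD = np.array([0.10, 0.20, 0.30])        # toy u_j
uU = np.array([0.05, 0.15, 0.25])        # toy u^i
BU = np.array([1.0, 0.5, -0.2])          # toy B^i (normal-observer field)

sqrt4pi = np.sqrt(4.0 * np.pi)
uB = uD @ BU                             # u_j B^j
smallb0 = uB / (alpha * sqrt4pi)                    # b^0
smallbU = (BU + uB * uU) / (alpha * u0 * sqrt4pi)   # b^i
```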
M_PI = par.Cparameters("#define",thismodule,"M_PI","") BU = ixp.register_gridfunctions_for_single_rank1("AUX","BU",DIM=3) # uBcontraction = u_i B^i uBcontraction = sp.sympify(0) for i in range(DIM): uBcontraction += uD[i]*BU[i] # uU = 3-vector representing u^i = u^0 \left(\alpha v^i_{(n)} - \beta^i\right) uU = ixp.zerorank1() for i in range(DIM): uU[i] = u0*(alpha*ValenciavU[i] - betaU[i]) smallb4U = ixp.zerorank1(DIM=4) smallb4U[0] = uBcontraction/(alpha*sp.sqrt(4*M_PI)) for i in range(DIM): smallb4U[1+i] = (BU[i] + uBcontraction*uU[i])/(alpha*u0*sp.sqrt(4*M_PI))
_____no_output_____
BSD-2-Clause
Tutorial-u0_smallb_Poynting-Cartesian.ipynb
KAClough/nrpytutorial
Step 2: Defining the Poynting Flux Vector $S^{i}$ \[Back to [top](toc)\]$$\label{poynting_flux}$$The Poynting flux is defined in Eq. 11 of [Kelly *et al.*](https://arxiv.org/pdf/1710.02132.pdf) (note that we choose the minus sign convention so that the Poynting luminosity across a spherical shell is $L_{\rm EM} = \int (-\alpha T^i_{\rm EM\ 0}) \sqrt{\gamma} d\Omega = \int S^r \sqrt{\gamma} d\Omega$, as in [Farris *et al.*](https://arxiv.org/pdf/1207.3354.pdf):$$S^i = -\alpha T^i_{\rm EM\ 0} = -\alpha\left(b^2 u^i u_0 + \frac{1}{2} b^2 g^i{}_0 - b^i b_0\right)$$ Step 2.a: Computing $S^{i}$ \[Back to [top](toc)\]$$\label{s}$$Given $g^{\mu\nu}$ computed above, we focus first on the $g^i{}_{0}$ term by computing $$g^\mu{}_\delta = g^{\mu\nu} g_{\nu \delta},$$and then the rest of the Poynting flux vector can be immediately computed from quantities defined above:$$S^i = -\alpha T^i_{\rm EM\ 0} = -\alpha\left(b^2 u^i u_0 + \frac{1}{2} b^2 g^i{}_0 - b^i b_0\right)$$
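For concreteness, the same contractions in a minimal NumPy sketch: the 4-metric is assembled consistently from toy ADM data, while the fluid and field 4-vectors are arbitrary, un-normalized placeholders meant only to show index placement:

```python
import numpy as np

# Toy ADM data and the corresponding 4-metric blocks
alpha = 1.2
betaU = np.array([0.1, -0.05, 0.0])
gammaDD = np.diag([1.1, 1.0, 0.9])
betaD = gammaDD @ betaU
g4DD = np.zeros((4, 4))
g4DD[0, 0] = -alpha**2 + betaU @ betaD
g4DD[0, 1:] = g4DD[1:, 0] = betaD
g4DD[1:, 1:] = gammaDD
g4UU = np.linalg.inv(g4DD)
g4UD = g4UU @ g4DD                       # g^mu_delta = g^{mu nu} g_{nu delta}

# Arbitrary (un-normalized) toy fluid and field 4-vectors
u4U = np.array([1.0, 0.10, 0.00, 0.20])  # u^mu
u4D = g4DD @ u4U                         # u_mu
b4U = np.array([0.01, 0.02, 0.00, 0.01]) # b^mu
b4D = g4DD @ b4U                         # b_mu
b2 = b4U @ b4D                           # b^2

PoynSU = -alpha * (b2 * u4U[1:] * u4D[0]
                   + 0.5 * b2 * g4UD[1:, 0]
                   - b4U[1:] * b4D[0])   # S^i = -alpha T^i_{EM 0}
```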
# Step 2.a.i: compute g^\mu_\delta: g4UD = ixp.zerorank2(DIM=4) for mu in range(4): for delta in range(4): for nu in range(4): g4UD[mu][delta] += g4UU[mu][nu]*g4DD[nu][delta] # Step 2.a.ii: compute b_{\mu} smallb4D = ixp.zerorank1(DIM=4) for mu in range(4): for nu in range(4): smallb4D[mu] += g4DD[mu][nu]*smallb4U[nu] # Step 2.a.iii: compute u_0 = g_{mu 0} u^{mu} = g4DD[0][0]*u0 + g4DD[i][0]*uU[i] u_0 = g4DD[0][0]*u0 for i in range(DIM): u_0 += g4DD[i+1][0]*uU[i] # Step 2.a.iv: compute b^2, setting b^2 = smallb2etk, as gridfunctions with base names ending in a digit # are forbidden in NRPy+. smallb2etk = sp.sympify(0) for mu in range(4): smallb2etk += smallb4U[mu]*smallb4D[mu] # Step 2.a.v: compute S^i PoynSU = ixp.zerorank1() for i in range(DIM): PoynSU[i] = -alpha * (smallb2etk*uU[i]*u_0 + sp.Rational(1,2)*smallb2etk*g4UD[i+1][0] - smallb4U[i+1]*smallb4D[0])
_____no_output_____
BSD-2-Clause
Tutorial-u0_smallb_Poynting-Cartesian.ipynb
KAClough/nrpytutorial
Step 3: Code Validation against `u0_smallb_Poynting__Cartesian` NRPy+ module \[Back to [top](toc)\]$$\label{code_validation}$$Here, as a code validation check, we verify agreement in the SymPy expressions for u0, smallb4U, smallb2etk, and PoynSU between 1. this tutorial and 2. the NRPy+ [u0_smallb_Poynting__Cartesian module](../edit/u0_smallb_Poynting__Cartesian/u0_smallb_Poynting__Cartesian.py).
import sys import u0_smallb_Poynting__Cartesian.u0_smallb_Poynting__Cartesian as u0etc u0etc.compute_u0_smallb_Poynting__Cartesian(gammaDD,betaU,alpha,ValenciavU,BU) if u0etc.computeu0_Cfunction != computeu0_Cfunction: print("FAILURE: u0 C code has changed!") sys.exit(1) else: print("PASSED: u0 C code matches!") for i in range(4): print("u0etc.smallb4U["+str(i)+"] - smallb4U["+str(i)+"] = " + str(u0etc.smallb4U[i]-smallb4U[i])) print("u0etc.smallb2etk - smallb2etk = " + str(u0etc.smallb2etk-smallb2etk)) for i in range(DIM): print("u0etc.PoynSU["+str(i)+"] - PoynSU["+str(i)+"] = " + str(u0etc.PoynSU[i]-PoynSU[i]))
PASSED: u0 C code matches! u0etc.smallb4U[0] - smallb4U[0] = 0 u0etc.smallb4U[1] - smallb4U[1] = 0 u0etc.smallb4U[2] - smallb4U[2] = 0 u0etc.smallb4U[3] - smallb4U[3] = 0 u0etc.smallb2etk - smallb2etk = 0 u0etc.PoynSU[0] - PoynSU[0] = 0 u0etc.PoynSU[1] - PoynSU[1] = 0 u0etc.PoynSU[2] - PoynSU[2] = 0
BSD-2-Clause
Tutorial-u0_smallb_Poynting-Cartesian.ipynb
KAClough/nrpytutorial
Step 4: Appendix: Proving Eqs. 53 and 56 in [Duez *et al* (2005)](https://arxiv.org/pdf/astro-ph/0503420.pdf)$$\label{appendix}$$$u^\mu u_\mu = -1$ implies\begin{align}g^{\mu\nu} u_\mu u_\nu &= g^{00} \left(u_0\right)^2 + 2 g^{0i} u_0 u_i + g^{ij} u_i u_j = -1 \\\implies &g^{00} \left(u_0\right)^2 + 2 g^{0i} u_0 u_i + g^{ij} u_i u_j + 1 = 0\\& a x^2 + b x + c = 0\end{align}Thus we have a quadratic equation for $u_0$, with solution given by\begin{align}u_0 &= \frac{-b \pm \sqrt{b^2 - 4 a c}}{2 a} \\&= \frac{-2 g^{0i}u_i \pm \sqrt{\left(2 g^{0i} u_i\right)^2 - 4 g^{00} (g^{ij} u_i u_j + 1)}}{2 g^{00}}\\&= \frac{-g^{0i}u_i \pm \sqrt{\left(g^{0i} u_i\right)^2 - g^{00} (g^{ij} u_i u_j + 1)}}{g^{00}}\\\end{align}Notice that (Eq. 4.49 in [Gourgoulhon](https://arxiv.org/pdf/gr-qc/0703035.pdf))$$g^{\mu\nu} = \begin{pmatrix} -\frac{1}{\alpha^2} & \frac{\beta^i}{\alpha^2} \\\frac{\beta^i}{\alpha^2} & \gamma^{ij} - \frac{\beta^i\beta^j}{\alpha^2}\end{pmatrix},$$so we have\begin{align}u_0 &= \frac{-\beta^i u_i/\alpha^2 \pm \sqrt{\left(\beta^i u_i/\alpha^2\right)^2 + 1/\alpha^2 (g^{ij} u_i u_j + 1)}}{-1/\alpha^2}\\&= \beta^i u_i \mp \sqrt{\left(\beta^i u_i\right)^2 + \alpha^2 (g^{ij} u_i u_j + 1)}\\&= \beta^i u_i \mp \sqrt{\left(\beta^i u_i\right)^2 + \alpha^2 \left(\left[\gamma^{ij} - \frac{\beta^i\beta^j}{\alpha^2}\right] u_i u_j + 1\right)}\\&= \beta^i u_i \mp \sqrt{\left(\beta^i u_i\right)^2 + \alpha^2 \left(\gamma^{ij}u_i u_j + 1\right) - \beta^i\beta^j u_i u_j}\\&= \beta^i u_i \mp \sqrt{\alpha^2 \left(\gamma^{ij}u_i u_j + 1\right)}\\\end{align}Now, since $$u^0 = g^{\alpha 0} u_\alpha = -\frac{1}{\alpha^2} u_0 + \frac{\beta^i u_i}{\alpha^2},$$we get\begin{align}u^0 &= \frac{1}{\alpha^2} \left(\beta^i u_i - u_0\right) \\&= \pm \frac{1}{\alpha^2} \sqrt{\alpha^2 \left(\gamma^{ij}u_i u_j + 1\right)}\\&= \pm \frac{1}{\alpha} \sqrt{\gamma^{ij}u_i u_j + 1}\\\end{align}By convention, the relativistic Gamma factor is positive and given by $\alpha u^0$, so we choose the positive root. Thus we have derived Eq. 53 in [Duez *et al* (2005)](https://arxiv.org/pdf/astro-ph/0503420.pdf):$$u^0 = \frac{1}{\alpha} \sqrt{\gamma^{ij}u_i u_j + 1}.$$Next we evaluate \begin{align}u^i &= u_\mu g^{\mu i} \\&= u_0 g^{0 i} + u_j g^{i j}\\&= u_0 \frac{\beta^i}{\alpha^2} + u_j \left(\gamma^{ij} - \frac{\beta^i\beta^j}{\alpha^2}\right)\\&= \gamma^{ij} u_j + u_0 \frac{\beta^i}{\alpha^2} - u_j \frac{\beta^i\beta^j}{\alpha^2}\\&= \gamma^{ij} u_j + \frac{\beta^i}{\alpha^2} \left(u_0 - u_j \beta^j\right)\\&= \gamma^{ij} u_j - \beta^i u^0,\\\implies v^i &= \frac{\gamma^{ij} u_j}{u^0} - \beta^i\end{align}which is equivalent to Eq. 56 in [Duez *et al* (2005)](https://arxiv.org/pdf/astro-ph/0503420.pdf). Notice in the last step, we used the above definition of $u^0$. Step 5: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename [Tutorial-u0_smallb_Poynting-Cartesian.pdf](Tutorial-u0_smallb_Poynting-Cartesian.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-u0_smallb_Poynting-Cartesian.ipynb !pdflatex -interaction=batchmode Tutorial-u0_smallb_Poynting-Cartesian.tex !pdflatex -interaction=batchmode Tutorial-u0_smallb_Poynting-Cartesian.tex !pdflatex -interaction=batchmode Tutorial-u0_smallb_Poynting-Cartesian.tex !rm -f Tut*.out Tut*.aux Tut*.log
[pandoc warning] Duplicate link reference `[comment]' "source" (line 22, column 1) This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex) restricted \write18 enabled. entering extended mode This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex) restricted \write18 enabled. entering extended mode This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex) restricted \write18 enabled. entering extended mode
BSD-2-Clause
Tutorial-u0_smallb_Poynting-Cartesian.ipynb
KAClough/nrpytutorial
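As a quick numerical sanity check of the appendix results (an illustration, not part of the NRPy+ module itself), the NumPy sketch below builds a random lapse, shift, and spatial metric, applies Eq. 53 for $u^0$ and the rearranged Eq. 56 for $u^i$, and verifies that the resulting 4-velocity satisfies $g_{\mu\nu} u^\mu u^\nu = -1$. The variable names mirror the tutorial's conventions; the random values are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
alpha = 1.0 + rng.random()                # lapse
betaU = 0.1 * rng.random(3)               # shift beta^i
A = rng.random((3, 3))
gammaDD = A @ A.T + 3.0 * np.eye(3)       # symmetric positive-definite gamma_{ij}
gammaUU = np.linalg.inv(gammaDD)
uD = rng.random(3)                        # covariant spatial components u_i

# Eq. 53: u^0 = sqrt(gamma^{ij} u_i u_j + 1)/alpha; Eq. 56 rearranged: u^i = gamma^{ij} u_j - beta^i u^0
u0 = np.sqrt(uD @ gammaUU @ uD + 1.0) / alpha
uU = gammaUU @ uD - betaU * u0

# Assemble the ADM 4-metric g_{mu nu} and check the normalization
betaD = gammaDD @ betaU
g4DD = np.zeros((4, 4))
g4DD[0, 0] = -alpha**2 + betaU @ betaD
g4DD[0, 1:] = g4DD[1:, 0] = betaD
g4DD[1:, 1:] = gammaDD
u4U = np.concatenate(([u0], uU))
print(u4U @ g4DD @ u4U)                   # should print a value very close to -1.0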
T81-558: Applications of Deep Neural Networks**Module 4: Training for Tabular Data*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 4 Material* **Part 4.1: Encoding a Feature Vector for Keras Deep Learning** [[Video]](https://www.youtube.com/watch?v=Vxz-gfs9nMQ&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_04_1_feature_encode.ipynb)* Part 4.2: Keras Multiclass Classification for Deep Neural Networks with ROC and AUC [[Video]](https://www.youtube.com/watch?v=-f3bg9dLMks&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_04_2_multi_class.ipynb)* Part 4.3: Keras Regression for Deep Neural Networks with RMSE [[Video]](https://www.youtube.com/watch?v=wNhBUC6X5-E&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_04_3_regression.ipynb)* Part 4.4: Backpropagation, Nesterov Momentum, and ADAM Neural Network Training [[Video]](https://www.youtube.com/watch?v=VbDg8aBgpck&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_04_4_backprop.ipynb)* Part 4.5: Neural Network RMSE and Log Loss Error Calculation from Scratch [[Video]](https://www.youtube.com/watch?v=wmQX1t2PHJc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_04_5_rmse_logloss.ipynb) Google CoLab InstructionsThe following code ensures that Google CoLab is running the correct version of TensorFlow.
try: %tensorflow_version 2.x COLAB = True print("Note: using Google CoLab") except: print("Note: not using Google CoLab") COLAB = False
Note: not using Google CoLab
Apache-2.0
t81_558_class_04_1_feature_encode.ipynb
IlkerCa/t81_558_deep_learning
Part 4.1: Encoding a Feature Vector for Keras Deep LearningNeural networks can accept many types of data. We will begin with tabular data, where there are well defined rows and columns. This is the sort of data you would typically see in Microsoft Excel. An example of tabular data is shown below.Neural networks require numeric input. This numeric form is called a feature vector. Each row of training data typically becomes one vector. The individual input neurons each receive one feature (or column) from this vector. In this section, we will see how to encode the following tabular data into a feature vector.
import pandas as pd pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 5) df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv", na_values=['NA','?']) pd.set_option('display.max_columns', 9) pd.set_option('display.max_rows', 5) display(df)
_____no_output_____
Apache-2.0
t81_558_class_04_1_feature_encode.ipynb
IlkerCa/t81_558_deep_learning
The following observations can be made from the above data:* The target column is the column that you seek to predict. There are several candidates here. However, we will initially use product. This field specifies what product someone bought.* There is an ID column. This column should not be fed into the neural network as it contains no information useful for prediction.* Many of these fields are numeric and might not require any further processing.* The income column does have some missing values.* There are categorical values: job, area, and product.To begin with, we will convert the job code into dummy variables.
pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 5) dummies = pd.get_dummies(df['job'],prefix="job") print(dummies.shape) pd.set_option('display.max_columns', 9) pd.set_option('display.max_rows', 10) display(dummies)
(2000, 33)
Apache-2.0
t81_558_class_04_1_feature_encode.ipynb
IlkerCa/t81_558_deep_learning
Because there are 33 different job codes, there are 33 dummy variables. We also specified a prefix, because the job codes (such as "ax") are not that meaningful by themselves. Something such as "job_ax" also tells us the origin of this field.Next, we must merge these dummies back into the main data frame. We also drop the original "job" field, as it is now represented by the dummies.
pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 5) df = pd.concat([df,dummies],axis=1) df.drop('job', axis=1, inplace=True) pd.set_option('display.max_columns', 9) pd.set_option('display.max_rows', 10) display(df)
_____no_output_____
Apache-2.0
t81_558_class_04_1_feature_encode.ipynb
IlkerCa/t81_558_deep_learning
We also introduce dummy variables for the area column.
pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 5) df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1) df.drop('area', axis=1, inplace=True) pd.set_option('display.max_columns', 9) pd.set_option('display.max_rows', 10) display(df)
_____no_output_____
Apache-2.0
t81_558_class_04_1_feature_encode.ipynb
IlkerCa/t81_558_deep_learning
The last remaining transformation is to fill in missing income values.
med = df['income'].median() df['income'] = df['income'].fillna(med)
_____no_output_____
Apache-2.0
t81_558_class_04_1_feature_encode.ipynb
IlkerCa/t81_558_deep_learning
There are more advanced ways of filling in missing values, but they require more analysis. The idea would be to see if another field might give a hint as to what the income was. For example, it might be beneficial to calculate a median income for each of the areas or job categories. This is something to keep in mind for the class Kaggle competition; a short sketch of the per-area version appears after the column listing below. At this point, the Pandas dataframe is ready to be converted to NumPy for neural network training. We need a list of the columns that will make up *x* (the predictors or inputs) and *y* (the target). The complete list of columns is:
print(list(df.columns))
['id', 'income', 'aspect', 'subscriptions', 'dist_healthy', 'save_rate', 'dist_unhealthy', 'age', 'pop_dense', 'retail_dense', 'crime', 'product', 'job_11', 'job_al', 'job_am', 'job_ax', 'job_bf', 'job_by', 'job_cv', 'job_de', 'job_dz', 'job_e2', 'job_f8', 'job_gj', 'job_gv', 'job_kd', 'job_ke', 'job_kl', 'job_kp', 'job_ks', 'job_kw', 'job_mm', 'job_nb', 'job_nn', 'job_ob', 'job_pe', 'job_po', 'job_pq', 'job_pz', 'job_qp', 'job_qw', 'job_rn', 'job_sa', 'job_vv', 'job_zz', 'area_a', 'area_b', 'area_c', 'area_d']
Apache-2.0
t81_558_class_04_1_feature_encode.ipynb
IlkerCa/t81_558_deep_learning
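The per-area median idea mentioned above is not implemented in this notebook; below is a minimal sketch of what it could look like. Because `area` has already been converted to dummy variables and `income` has already been filled at this point, the sketch applies the group-wise fill to a fresh copy of the raw data, using the same URL as the earlier loading cell.

# Hypothetical alternative to the global median fill: per-area median imputation
raw = pd.read_csv(
    "https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
    na_values=['NA', '?'])
raw['income'] = raw.groupby('area')['income'].transform(lambda s: s.fillna(s.median()))
print(raw['income'].isnull().sum())  # 0, provided every area has at least one known income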
This includes both the target and predictors. We need a list with the target removed. We also remove **id** because it is not useful for prediction.
x_columns = df.columns.drop('product').drop('id') print(list(x_columns))
['income', 'aspect', 'subscriptions', 'dist_healthy', 'save_rate', 'dist_unhealthy', 'age', 'pop_dense', 'retail_dense', 'crime', 'job_11', 'job_al', 'job_am', 'job_ax', 'job_bf', 'job_by', 'job_cv', 'job_de', 'job_dz', 'job_e2', 'job_f8', 'job_gj', 'job_gv', 'job_kd', 'job_ke', 'job_kl', 'job_kp', 'job_ks', 'job_kw', 'job_mm', 'job_nb', 'job_nn', 'job_ob', 'job_pe', 'job_po', 'job_pq', 'job_pz', 'job_qp', 'job_qw', 'job_rn', 'job_sa', 'job_vv', 'job_zz', 'area_a', 'area_b', 'area_c', 'area_d']
Apache-2.0
t81_558_class_04_1_feature_encode.ipynb
IlkerCa/t81_558_deep_learning
Generate X and Y for a Classification Neural Network We can now generate *x* and *y*. Note, this is how we generate y for a classification problem. Regression would not use dummies and would simply encode the numeric value of the target.
# Convert to numpy - Classification x_columns = df.columns.drop('product').drop('id') x = df[x_columns].values dummies = pd.get_dummies(df['product']) # Classification products = dummies.columns y = dummies.values
_____no_output_____
Apache-2.0
t81_558_class_04_1_feature_encode.ipynb
IlkerCa/t81_558_deep_learning
We can display the *x* and *y* matrices.
print(x) print(y)
[[5.08760000e+04 1.31000000e+01 1.00000000e+00 ... 0.00000000e+00 1.00000000e+00 0.00000000e+00] [6.03690000e+04 1.86250000e+01 2.00000000e+00 ... 0.00000000e+00 1.00000000e+00 0.00000000e+00] [5.51260000e+04 3.47666667e+01 1.00000000e+00 ... 0.00000000e+00 1.00000000e+00 0.00000000e+00] ... [2.85950000e+04 3.94250000e+01 3.00000000e+00 ... 0.00000000e+00 0.00000000e+00 1.00000000e+00] [6.79490000e+04 5.73333333e+00 0.00000000e+00 ... 0.00000000e+00 1.00000000e+00 0.00000000e+00] [6.14670000e+04 1.68916667e+01 0.00000000e+00 ... 0.00000000e+00 1.00000000e+00 0.00000000e+00]] [[0 1 0 ... 0 0 0] [0 0 1 ... 0 0 0] [0 1 0 ... 0 0 0] ... [0 0 0 ... 0 1 0] [0 0 1 ... 0 0 0] [0 0 1 ... 0 0 0]]
Apache-2.0
t81_558_class_04_1_feature_encode.ipynb
IlkerCa/t81_558_deep_learning
The x and y values are now ready for a neural network. Make sure that you construct the neural network for a classification problem. Specifically,* Classification neural networks have an output neuron count equal to the number of classes.* Classification neural networks should use **categorical_crossentropy** and a **softmax** activation function on the output layer. Generate X and Y for a Regression Neural NetworkFor a regression neural network, the *x* values are generated the same. However, *y* does not use dummies. Make sure to replace **income** with your actual target.
y = df['income'].values
_____no_output_____
Apache-2.0
t81_558_class_04_1_feature_encode.ipynb
IlkerCa/t81_558_deep_learning
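As a sketch of the classification setup described above — one output neuron per product class, a softmax output layer, and the categorical_crossentropy loss — the cell below rebuilds the classification targets and fits a small network. The hidden-layer sizes and epoch count are illustrative choices, not values prescribed by this notebook.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Rebuild the classification targets (the regression cell above overwrote y)
dummies = pd.get_dummies(df['product'])
products = dummies.columns
y = dummies.values

model = Sequential()
model.add(Dense(50, input_dim=x.shape[1], activation='relu'))
model.add(Dense(25, activation='relu'))
model.add(Dense(y.shape[1], activation='softmax'))  # one output neuron per class
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.fit(x, y, verbose=2, epochs=10)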
**This notebook is an exercise in the [Intermediate Machine Learning](https://www.kaggle.com/learn/intermediate-machine-learning) course. You can reference the tutorial at [this link](https://www.kaggle.com/alexisbcook/categorical-variables).**--- By encoding **categorical variables**, you'll obtain your best results thus far! SetupThe questions below will give you feedback on your work. Run the following cell to set up the feedback system.
# Set up code checking import os if not os.path.exists("../input/train.csv"): os.symlink("../input/home-data-for-ml-course/train.csv", "../input/train.csv") os.symlink("../input/home-data-for-ml-course/test.csv", "../input/test.csv") from learntools.core import binder binder.bind(globals()) from learntools.ml_intermediate.ex3 import * print("Setup Complete")
Setup Complete
Apache-2.0
pre_exercises/Intermediate_ML/exercise-categorical-variables.ipynb
krishnaaxo/Spotify_Skip_Action_Prediction
In this exercise, you will work with data from the [Housing Prices Competition for Kaggle Learn Users](https://www.kaggle.com/c/home-data-for-ml-course). ![Ames Housing dataset image](https://i.imgur.com/lTJVG4e.png)Run the next code cell without changes to load the training and validation sets in `X_train`, `X_valid`, `y_train`, and `y_valid`. The test set is loaded in `X_test`.
import pandas as pd from sklearn.model_selection import train_test_split # Read the data X = pd.read_csv('../input/train.csv', index_col='Id') X_test = pd.read_csv('../input/test.csv', index_col='Id') # Remove rows with missing target, separate target from predictors X.dropna(axis=0, subset=['SalePrice'], inplace=True) y = X.SalePrice X.drop(['SalePrice'], axis=1, inplace=True) # To keep things simple, we'll drop columns with missing values cols_with_missing = [col for col in X.columns if X[col].isnull().any()] X.drop(cols_with_missing, axis=1, inplace=True) X_test.drop(cols_with_missing, axis=1, inplace=True) # Break off validation set from training data X_train, X_valid, y_train, y_valid = train_test_split(X, y, train_size=0.8, test_size=0.2, random_state=0)
_____no_output_____
Apache-2.0
pre_exercises/Intermediate_ML/exercise-categorical-variables.ipynb
krishnaaxo/Spotify_Skip_Action_Prediction
Use the next code cell to print the first five rows of the data.
X_train.head()
_____no_output_____
Apache-2.0
pre_exercises/Intermediate_ML/exercise-categorical-variables.ipynb
krishnaaxo/Spotify_Skip_Action_Prediction
Notice that the dataset contains both numerical and categorical variables. You'll need to encode the categorical data before training a model.To compare different models, you'll use the same `score_dataset()` function from the tutorial. This function reports the [mean absolute error](https://en.wikipedia.org/wiki/Mean_absolute_error) (MAE) from a random forest model.
from sklearn.ensemble import RandomForestRegressor from sklearn.metrics import mean_absolute_error # function for comparing different approaches def score_dataset(X_train, X_valid, y_train, y_valid): model = RandomForestRegressor(n_estimators=100, random_state=0) model.fit(X_train, y_train) preds = model.predict(X_valid) return mean_absolute_error(y_valid, preds)
_____no_output_____
Apache-2.0
pre_exercises/Intermediate_ML/exercise-categorical-variables.ipynb
krishnaaxo/Spotify_Skip_Action_Prediction
Step 1: Drop columns with categorical dataYou'll get started with the most straightforward approach. Use the code cell below to preprocess the data in `X_train` and `X_valid` to remove columns with categorical data. Set the preprocessed DataFrames to `drop_X_train` and `drop_X_valid`, respectively.
# Fill in the lines below: drop columns in training and validation data drop_X_train = X_train.select_dtypes(exclude=['object']) drop_X_valid = X_valid.select_dtypes(exclude=['object']) # Check your answers step_1.check() # Lines below will give you a hint or solution code #step_1.hint() #step_1.solution()
_____no_output_____
Apache-2.0
pre_exercises/Intermediate_ML/exercise-categorical-variables.ipynb
krishnaaxo/Spotify_Skip_Action_Prediction
Run the next code cell to get the MAE for this approach.
print("MAE from Approach 1 (Drop categorical variables):") print(score_dataset(drop_X_train, drop_X_valid, y_train, y_valid))
MAE from Approach 1 (Drop categorical variables): 17837.82570776256
Apache-2.0
pre_exercises/Intermediate_ML/exercise-categorical-variables.ipynb
krishnaaxo/Spotify_Skip_Action_Prediction
Before jumping into label encoding, we'll investigate the dataset. Specifically, we'll look at the `'Condition2'` column. The code cell below prints the unique entries in both the training and validation sets.
print("Unique values in 'Condition2' column in training data:", X_train['Condition2'].unique()) print("\nUnique values in 'Condition2' column in validation data:", X_valid['Condition2'].unique())
Unique values in 'Condition2' column in training data: ['Norm' 'PosA' 'Feedr' 'PosN' 'Artery' 'RRAe'] Unique values in 'Condition2' column in validation data: ['Norm' 'RRAn' 'RRNn' 'Artery' 'Feedr' 'PosN']
Apache-2.0
pre_exercises/Intermediate_ML/exercise-categorical-variables.ipynb
krishnaaxo/Spotify_Skip_Action_Prediction
Step 2: Label encoding Part AIf you now write code to: - fit a label encoder to the training data, and then - use it to transform both the training and validation data, you'll get an error. Can you see why this is the case? (_You'll need to use the above output to answer this question._)
# Check your answer (Run this code cell to receive credit!) step_2.a.check() #step_2.a.hint()
_____no_output_____
Apache-2.0
pre_exercises/Intermediate_ML/exercise-categorical-variables.ipynb
krishnaaxo/Spotify_Skip_Action_Prediction
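To make the failure mode concrete, here is a minimal sketch (assuming `X_train` and `X_valid` from the earlier cells). Fitting a label encoder on the training `'Condition2'` values and then transforming the validation values raises a `ValueError`, because the validation split contains labels (such as `'RRAn'` and `'RRNn'`) that the encoder never saw during fitting.

from sklearn.preprocessing import LabelEncoder

encoder = LabelEncoder()
encoder.fit(X_train['Condition2'])
try:
    encoder.transform(X_valid['Condition2'])
except ValueError as err:
    # Labels unseen during fit cannot be mapped to training codes
    print("Transform failed:", err)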
This is a common problem that you'll encounter with real-world data, and there are many approaches to fixing this issue. For instance, you can write a custom label encoder to deal with new categories. The simplest approach, however, is to drop the problematic categorical columns. Run the code cell below to save the problematic columns to a Python list `bad_label_cols`. Likewise, columns that can be safely label encoded are stored in `good_label_cols`.
# All categorical columns object_cols = [col for col in X_train.columns if X_train[col].dtype == "object"] # Columns that can be safely label encoded good_label_cols = [col for col in object_cols if set(X_train[col]) == set(X_valid[col])] # Problematic columns that will be dropped from the dataset bad_label_cols = list(set(object_cols)-set(good_label_cols)) print('Categorical columns that will be label encoded:', good_label_cols) print('\nCategorical columns that will be dropped from the dataset:', bad_label_cols)
Categorical columns that will be label encoded: ['MSZoning', 'Street', 'LotShape', 'LandContour', 'LotConfig', 'BldgType', 'HouseStyle', 'ExterQual', 'CentralAir', 'KitchenQual', 'PavedDrive', 'SaleCondition'] Categorical columns that will be dropped from the dataset: ['Neighborhood', 'LandSlope', 'Condition1', 'Heating', 'Foundation', 'RoofMatl', 'Condition2', 'RoofStyle', 'ExterCond', 'Exterior1st', 'Utilities', 'Functional', 'HeatingQC', 'SaleType', 'Exterior2nd']
Apache-2.0
pre_exercises/Intermediate_ML/exercise-categorical-variables.ipynb
krishnaaxo/Spotify_Skip_Action_Prediction
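One possible shape of the "custom label encoder" mentioned above — purely illustrative, and not required by the exercise — maps any category that was not seen during fitting to a sentinel value of -1:

class SafeLabelEncoder:
    """Minimal label-encoder sketch that maps unseen categories to -1."""
    def fit(self, values):
        self.mapping_ = {v: i for i, v in enumerate(sorted(set(values)))}
        return self
    def transform(self, values):
        return [self.mapping_.get(v, -1) for v in values]

# Hypothetical usage on a single column:
# codes = SafeLabelEncoder().fit(X_train['Condition2']).transform(X_valid['Condition2'])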
Part BUse the next code cell to label encode the data in `X_train` and `X_valid`. Set the preprocessed DataFrames to `label_X_train` and `label_X_valid`, respectively. - We have provided code below to drop the categorical columns in `bad_label_cols` from the dataset. - You should label encode the categorical columns in `good_label_cols`.
from sklearn.preprocessing import LabelEncoder # Drop categorical columns that will not be encoded label_X_train = X_train.drop(bad_label_cols, axis=1) label_X_valid = X_valid.drop(bad_label_cols, axis=1) # Apply label encoder label_encoder = LabelEncoder() for col in good_label_cols: label_X_train[col] = label_encoder.fit_transform(label_X_train[col]) label_X_valid[col] = label_encoder.transform(label_X_valid[col]) # Check your answer step_2.b.check() # Lines below will give you a hint or solution code #step_2.b.hint() #step_2.b.solution()
_____no_output_____
Apache-2.0
pre_exercises/Intermediate_ML/exercise-categorical-variables.ipynb
krishnaaxo/Spotify_Skip_Action_Prediction
Run the next code cell to get the MAE for this approach.
print("MAE from Approach 2 (Label Encoding):") print(score_dataset(label_X_train, label_X_valid, y_train, y_valid))
MAE from Approach 2 (Label Encoding): 17575.291883561644
Apache-2.0
pre_exercises/Intermediate_ML/exercise-categorical-variables.ipynb
krishnaaxo/Spotify_Skip_Action_Prediction
So far, you've tried two different approaches to dealing with categorical variables. And, you've seen that encoding categorical data yields better results than removing columns from the dataset.Soon, you'll try one-hot encoding. Before then, there's one additional topic we need to cover. Begin by running the next code cell without changes.
# Get number of unique entries in each column with categorical data object_nunique = list(map(lambda col: X_train[col].nunique(), object_cols)) d = dict(zip(object_cols, object_nunique)) # Print number of unique entries by column, in ascending order sorted(d.items(), key=lambda x: x[1])
_____no_output_____
Apache-2.0
pre_exercises/Intermediate_ML/exercise-categorical-variables.ipynb
krishnaaxo/Spotify_Skip_Action_Prediction
Step 3: Investigating cardinality Part AThe output above shows, for each column with categorical data, the number of unique values in the column. For instance, the `'Street'` column in the training data has two unique values: `'Grvl'` and `'Pave'`, corresponding to a gravel road and a paved road, respectively.We refer to the number of unique entries of a categorical variable as the **cardinality** of that categorical variable. For instance, the `'Street'` variable has cardinality 2.Use the output above to answer the questions below.
# Fill in the line below: How many categorical variables in the training data # have cardinality greater than 10? high_cardinality_numcols = 3 # Fill in the line below: How many columns are needed to one-hot encode the # 'Neighborhood' variable in the training data? num_cols_neighborhood = 25 # Check your answers step_3.a.check() # Lines below will give you a hint or solution code #step_3.a.hint() #step_3.a.solution()
_____no_output_____
Apache-2.0
pre_exercises/Intermediate_ML/exercise-categorical-variables.ipynb
krishnaaxo/Spotify_Skip_Action_Prediction
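The hard-coded answers above can also be computed directly from the data; a quick sketch, assuming `X_train` and `object_cols` from the earlier cells:

# Categorical columns with more than 10 unique values in the training data
high_cardinality = [col for col in object_cols if X_train[col].nunique() > 10]
print(len(high_cardinality))               # 3, matching high_cardinality_numcols

# One-hot encoding 'Neighborhood' needs one column per unique value
print(X_train['Neighborhood'].nunique())   # 25, matching num_cols_neighborhood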
Part BFor large datasets with many rows, one-hot encoding can greatly expand the size of the dataset. For this reason, we typically will only one-hot encode columns with relatively low cardinality. Then, high cardinality columns can either be dropped from the dataset, or we can use label encoding.As an example, consider a dataset with 10,000 rows, and containing one categorical column with 100 unique entries. - If this column is replaced with the corresponding one-hot encoding, how many entries are added to the dataset? - If we instead replace the column with the label encoding, how many entries are added? Use your answers to fill in the lines below.
# Fill in the line below: How many entries are added to the dataset by # replacing the column with a one-hot encoding? OH_entries_added = 1e4*100 - 1e4 # Fill in the line below: How many entries are added to the dataset by # replacing the column with a label encoding? label_entries_added = 0 # Check your answers step_3.b.check() # Lines below will give you a hint or solution code #step_3.b.hint() #step_3.b.solution()
_____no_output_____
Apache-2.0
pre_exercises/Intermediate_ML/exercise-categorical-variables.ipynb
krishnaaxo/Spotify_Skip_Action_Prediction
Next, you'll experiment with one-hot encoding. But, instead of encoding all of the categorical variables in the dataset, you'll only create a one-hot encoding for columns with cardinality less than 10.Run the code cell below without changes to set `low_cardinality_cols` to a Python list containing the columns that will be one-hot encoded. Likewise, `high_cardinality_cols` contains a list of categorical columns that will be dropped from the dataset.
# Columns that will be one-hot encoded low_cardinality_cols = [col for col in object_cols if X_train[col].nunique() < 10] # Columns that will be dropped from the dataset high_cardinality_cols = list(set(object_cols)-set(low_cardinality_cols)) print('Categorical columns that will be one-hot encoded:', low_cardinality_cols) print('\nCategorical columns that will be dropped from the dataset:', high_cardinality_cols)
Categorical columns that will be one-hot encoded: ['MSZoning', 'Street', 'LotShape', 'LandContour', 'Utilities', 'LotConfig', 'LandSlope', 'Condition1', 'Condition2', 'BldgType', 'HouseStyle', 'RoofStyle', 'RoofMatl', 'ExterQual', 'ExterCond', 'Foundation', 'Heating', 'HeatingQC', 'CentralAir', 'KitchenQual', 'Functional', 'PavedDrive', 'SaleType', 'SaleCondition'] Categorical columns that will be dropped from the dataset: ['Neighborhood', 'Exterior2nd', 'Exterior1st']
Apache-2.0
pre_exercises/Intermediate_ML/exercise-categorical-variables.ipynb
krishnaaxo/Spotify_Skip_Action_Prediction
Step 4: One-hot encodingUse the next code cell to one-hot encode the data in `X_train` and `X_valid`. Set the preprocessed DataFrames to `OH_X_train` and `OH_X_valid`, respectively. - The full list of categorical columns in the dataset can be found in the Python list `object_cols`.- You should only one-hot encode the categorical columns in `low_cardinality_cols`. All other categorical columns should be dropped from the dataset.
from sklearn.preprocessing import OneHotEncoder # Use as many lines of code as you need! OH_encoder = OneHotEncoder(handle_unknown='ignore', sparse=False) OH_cols_train = pd.DataFrame(OH_encoder.fit_transform(X_train[low_cardinality_cols])) OH_cols_valid = pd.DataFrame(OH_encoder.transform(X_valid[low_cardinality_cols])) # One-hot encoding removed index; put it back OH_cols_train.index = X_train.index OH_cols_valid.index = X_valid.index # Remove categorical columns (will replace with one-hot encoding) num_X_train = X_train.drop(object_cols, axis=1) num_X_valid = X_valid.drop(object_cols, axis=1) # Add one-hot encoded columns to numerical features OH_X_train = pd.concat([num_X_train, OH_cols_train], axis=1) OH_X_valid = pd.concat([num_X_valid, OH_cols_valid], axis=1) # Check your answer step_4.check() # Lines below will give you a hint or solution code #step_4.hint() #step_4.solution()
_____no_output_____
Apache-2.0
pre_exercises/Intermediate_ML/exercise-categorical-variables.ipynb
krishnaaxo/Spotify_Skip_Action_Prediction
Run the next code cell to get the MAE for this approach.
print("MAE from Approach 3 (One-Hot Encoding):") print(score_dataset(OH_X_train, OH_X_valid, y_train, y_valid))
_____no_output_____
Apache-2.0
pre_exercises/Intermediate_ML/exercise-categorical-variables.ipynb
krishnaaxo/Spotify_Skip_Action_Prediction
Generate test predictions and submit your resultsAfter you complete Step 4, if you'd like to use what you've learned to submit your results to the leaderboard, you'll need to preprocess the test data before generating predictions.**This step is completely optional, and you do not need to submit results to the leaderboard to successfully complete the exercise.**Check out the previous exercise if you need help with remembering how to [join the competition](https://www.kaggle.com/c/home-data-for-ml-course) or save your results to CSV. Once you have generated a file with your results, follow the instructions below:1. Begin by clicking on the blue **Save Version** button in the top right corner of the window. This will generate a pop-up window. 2. Ensure that the **Save and Run All** option is selected, and then click on the blue **Save** button.3. This generates a window in the bottom left corner of the notebook. After it has finished running, click on the number to the right of the **Save Version** button. This pulls up a list of versions on the right of the screen. Click on the ellipsis **(...)** to the right of the most recent version, and select **Open in Viewer**. This brings you into view mode of the same page. You will need to scroll down to get back to these instructions.4. Click on the **Output** tab on the right of the screen. Then, click on the file you would like to submit, and click on the blue **Submit** button to submit your results to the leaderboard.You have now successfully submitted to the competition!If you want to keep working to improve your performance, select the blue **Edit** button in the top right of the screen. Then you can change your code and repeat the process. There's a lot of room to improve, and you will climb up the leaderboard as you work.
# (Optional) Your code here
_____no_output_____
Apache-2.0
pre_exercises/Intermediate_ML/exercise-categorical-variables.ipynb
krishnaaxo/Spotify_Skip_Action_Prediction
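If you do generate predictions, the test set must go through the same preprocessing as the training data. The sketch below is one possible way to do that, assuming the objects from the earlier cells (`OH_encoder`, `low_cardinality_cols`, `object_cols`, `OH_X_train`, `y_train`, `X_test`). Note that `X_test` can still contain missing values in columns that were complete in the training data, so an illustrative median/placeholder fill is included; some scikit-learn versions also prefer all-string column names, which this sketch (like the exercise code above) does not enforce.

from sklearn.ensemble import RandomForestRegressor

# Apply the fitted one-hot encoder to the test data; unseen categories become all zeros
# because handle_unknown='ignore', and categorical NaNs are filled with a placeholder first
test_cat = X_test[low_cardinality_cols].fillna('Missing')
OH_cols_test = pd.DataFrame(OH_encoder.transform(test_cat))
OH_cols_test.index = X_test.index

# Keep the numerical columns and fill any remaining gaps with the training medians
num_X_test = X_test.drop(object_cols, axis=1)
num_X_test = num_X_test.fillna(OH_X_train[num_X_test.columns].median())

OH_X_test = pd.concat([num_X_test, OH_cols_test], axis=1)

# Fit on the full preprocessed training split and write a submission file
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(OH_X_train, y_train)
preds_test = model.predict(OH_X_test)
pd.DataFrame({'Id': X_test.index, 'SalePrice': preds_test}).to_csv('submission.csv', index=False)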
Copyright 2018 The TensorFlow Authors.Licensed under the Apache License, Version 2.0 (the "License");
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
_____no_output_____
Apache-2.0
site/en/r2/guide/_tpu.ipynb
christophmeyer/docs
Using TPUsTensor Processing Units (TPUs) are Google's specialized ASICs designed to dramatically accelerate machine learning workloads. They are available on Google Colab, the TensorFlow Research Cloud and Google Compute Engine. In this notebook, you can try training a convolutional neural network against the Fashion MNIST dataset on Cloud TPUs using tf.keras and Distribution Strategy. Learning ObjectivesIn this Colab, you will learn how to:* Write a standard 4-layer conv-net with drop-out and batch normalization in Keras.* Use TPUs and Distribution Strategy to train the model.* Run a prediction to see how well the model can predict fashion categories and output the result. InstructionsTo use TPUs in Colab:1. On the main menu, click Runtime and select **Change runtime type**. Set "TPU" as the hardware accelerator.1. Click Runtime again and select **Runtime > Run All**. You can also run the cells manually with Shift-ENTER. Data, Model, and Training Download the DataBegin by downloading the fashion MNIST dataset using `tf.keras.datasets`, as shown below. We will also need to convert the data to `float32` format, as the data types supported by TPUs are limited right now.TPUs currently do not support Eager Execution, so we disable that with `disable_eager_execution()`.
from __future__ import absolute_import, division, print_function, unicode_literals

import numpy as np

!pip install tensorflow-gpu==2.0.0-beta1
import tensorflow as tf
tf.compat.v1.disable_eager_execution()

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()

# add empty color dimension
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)

# convert types to float32
x_train = x_train.astype(np.float32)
x_test = x_test.astype(np.float32)
y_train = y_train.astype(np.float32)
y_test = y_test.astype(np.float32)
_____no_output_____
Apache-2.0
site/en/r2/guide/_tpu.ipynb
christophmeyer/docs
Initialize TPUStrategyWe first initialize the TPUStrategy object before creating the model, so that Keras knows that we are creating a model for TPUs. To do this, we are first creating a TPUClusterResolver using the IP address of the TPU, and then creating a TPUStrategy object from the Cluster Resolver.
import os resolver = tf.distribute.cluster_resolver.TPUClusterResolver() tf.tpu.experimental.initialize_tpu_system(resolver) strategy = tf.distribute.experimental.TPUStrategy(resolver)
_____no_output_____
Apache-2.0
site/en/r2/guide/_tpu.ipynb
christophmeyer/docs
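A typical next step with the `strategy` object above is to build and compile the Keras model inside `strategy.scope()`. Below is a minimal sketch of that pattern; the layer stack, batch size, and epoch count are illustrative assumptions, not the architecture this notebook actually defines.

# Sketch: construct and compile the model inside the TPUStrategy scope
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Dropout(0.25),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

# Illustrative training call on the arrays prepared earlier
model.fit(x_train, y_train, epochs=5, batch_size=1024, validation_data=(x_test, y_test))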