Data Cleaning

Most machine learning algorithms cannot handle missing features, so let's create a few functions to take care of them. Earlier, you should have noticed that the attribute `total_bedrooms` has some missing values. There are three options:

* drop the corresponding districts;
* drop the whole attribute;
* fill in the missing values (with zero, the mean, the median, etc.).

These are easy to do with the DataFrame's `dropna()`, `drop()`, and `fillna()` methods:

```python
housing.dropna(subset=["total_bedrooms"])   # option 1
housing.drop("total_bedrooms", axis=1)      # option 2
median = housing["total_bedrooms"].median()
housing["total_bedrooms"].fillna(median)    # option 3
```

Scikit-Learn provides a convenient class to take care of missing values: `SimpleImputer` (formerly `Imputer`). Here is how to use it. First, create a `SimpleImputer` instance, specifying that each attribute's missing values should be replaced with that attribute's median:

```python
from sklearn.impute import SimpleImputer

imputer = SimpleImputer(missing_values=np.nan, strategy='median')

# Since the median can only be computed on numerical attributes, we need a copy
# of the data without the text attribute ocean_proximity:
housing_num = housing.drop("ocean_proximity", axis=1)

# Fit the imputer instance to the training data with fit():
imputer.fit(housing_num)

# Use this "trained" imputer to transform the training set,
# replacing missing values with the learned medians:
X = imputer.transform(housing_num)
```
from sklearn.impute import SimpleImputer

imputer = SimpleImputer(missing_values=np.nan, strategy='median')
housing_num = housing.drop("ocean_proximity", axis=1)
imputer.fit(housing_num)
X = imputer.transform(housing_num)

# Put the transformed array back into a DataFrame:
housing_tr = pd.DataFrame(X, columns=housing_num.columns)
housing_tr.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 16512 entries, 0 to 16511
Data columns (total 8 columns):
longitude             16512 non-null float64
latitude              16512 non-null float64
housing_median_age    16512 non-null float64
total_rooms           16512 non-null float64
total_bedrooms        16512 non-null float64
population            16512 non-null float64
households            16512 non-null float64
median_income         16512 non-null float64
dtypes: float64(8)
memory usage: 1.0 MB
Handling Text and Categorical Attributes

Earlier we left out the categorical attribute `ocean_proximity` because it is a text attribute, so we cannot compute its median. __Most machine learning algorithms prefer to work with numbers anyway, so let's convert these text labels to numbers.__

LabelEncoder

Scikit-Learn provides a transformer for this task: `LabelEncoder`.
from sklearn.preprocessing import LabelEncoder

encoder = LabelEncoder()
housing_cat = housing["ocean_proximity"]
housing_cat_encoded = encoder.fit_transform(housing_cat)
housing_cat_encoded
encoder.classes_  # <1H OCEAN is mapped to 0, INLAND to 1, and so on
OneHotEncoder

Note that the output is a SciPy sparse matrix, rather than a NumPy array.

> This is very useful when a categorical attribute has thousands of categories. After one-hot encoding we get a matrix with thousands of columns, and each row holds a single 1 with everything else 0. Using tons of memory to store all those zeros would be wasteful, so a sparse matrix only stores the locations of the non-zero elements. You can use it mostly like a normal 2D array, but if you really want to convert it to a (dense) NumPy array, just call the `toarray()` method.
from sklearn.preprocessing import OneHotEncoder

encoder = OneHotEncoder()
housing_cat_1hot = encoder.fit_transform(housing_cat_encoded.reshape(-1, 1))
housing_cat_1hot
housing_cat_1hot.toarray()
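Note that in recent versions of scikit-learn (0.20 and later), `OneHotEncoder` can handle string categories directly, so the intermediate `LabelEncoder` step above is not strictly needed. A minimal sketch under that assumption:

```python
# Minimal sketch (scikit-learn >= 0.20): OneHotEncoder accepts string categories,
# so we can feed it the raw text column without integer-encoding it first.
from sklearn.preprocessing import OneHotEncoder

cat_encoder = OneHotEncoder()  # sparse output by default
housing_cat_1hot = cat_encoder.fit_transform(housing[["ocean_proximity"]])
print(cat_encoder.categories_)          # the learned categories, one array per column
print(housing_cat_1hot.toarray()[:3])   # densify a few rows for inspection
```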
LabelBinarizer

Using the `LabelBinarizer` class, we can apply both transformations in one shot (from text categories to integer categories, then to one-hot vectors).

> Passing `sparse_output=True` to the `LabelBinarizer` constructor gives you a sparse matrix instead.
from sklearn.preprocessing import LabelBinarizer

encoder = LabelBinarizer()
housing_cat_1hot = encoder.fit_transform(housing_cat)
housing_cat_1hot
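As noted above, setting `sparse_output=True` makes `LabelBinarizer` return a SciPy sparse matrix instead of a dense NumPy array; a quick sketch:

```python
from sklearn.preprocessing import LabelBinarizer

# With sparse_output=True the result is a SciPy sparse matrix,
# which saves memory when there are many categories.
encoder = LabelBinarizer(sparse_output=True)
housing_cat_1hot_sparse = encoder.fit_transform(housing["ocean_proximity"])
print(type(housing_cat_1hot_sparse))   # a scipy.sparse matrix
```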
Custom Transformers

Although Scikit-Learn provides many useful transformers, you will need to write your own for tasks such as custom cleanup operations or combining specific attributes. You will want your transformer to work seamlessly with Scikit-Learn components (such as pipelines), and since Scikit-Learn relies on duck typing (not inheritance), all you need to do is create a class and implement three methods: `fit()` (returning `self`), `transform()`, and `fit_transform()`. You get the last one for free by simply adding `TransformerMixin` as a base class. In addition, if you add `BaseEstimator` as a base class (and avoid `*args` and `**kwargs` in your constructor), you get two extra methods (`get_params()` and `set_params()`) that are useful for automatic hyperparameter tuning.
from sklearn.base import BaseEstimator, TransformerMixin

rooms_ix, bedrooms_ix, population_ix, household_ix = 3, 4, 5, 6

class CombinedAttributesAdder(BaseEstimator, TransformerMixin):
    def __init__(self, add_bedrooms_per_room=True):  # no *args or **kargs
        self.add_bedrooms_per_room = add_bedrooms_per_room

    def fit(self, X, y=None):
        return self  # nothing else to do

    def transform(self, X, y=None):
        rooms_per_household = X[:, rooms_ix] / X[:, household_ix]
        population_per_household = X[:, population_ix] / X[:, household_ix]
        if self.add_bedrooms_per_room:
            bedrooms_per_room = X[:, bedrooms_ix] / X[:, rooms_ix]
            return np.c_[X, rooms_per_household, population_per_household, bedrooms_per_room]
        else:
            return np.c_[X, rooms_per_household, population_per_household]

attr_adder = CombinedAttributesAdder(add_bedrooms_per_room=False)
housing_extra_attribs = attr_adder.transform(housing.values)

class DataFrameSelector(BaseEstimator, TransformerMixin):
    def __init__(self, attribute_names):
        self.attribute_names = attribute_names

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        return X[self.attribute_names].values
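Because `CombinedAttributesAdder` inherits from `BaseEstimator`, it picks up `get_params()` and `set_params()` for free, which is what later lets tools like grid search tune it; a small sketch to check this:

```python
# BaseEstimator supplies get_params()/set_params(), which hyperparameter
# search tools rely on to inspect and modify the transformer.
attr_adder = CombinedAttributesAdder()
print(attr_adder.get_params())                       # {'add_bedrooms_per_room': True}
attr_adder.set_params(add_bedrooms_per_room=False)   # returns the transformer itself
print(attr_adder.get_params())                       # {'add_bedrooms_per_room': False}
```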
Feature Scaling

There are two common ways to get all attributes onto the same scale: min-max scaling and standardization.

1. Min-max scaling (many people call this normalization) is quite simple: values are shifted and rescaled so that they end up ranging from 0 to 1. We do this by subtracting the minimum value and dividing by the difference between the maximum and the minimum.
> Scikit-Learn provides a transformer called `MinMaxScaler` for this. It has a `feature_range` hyperparameter that lets you change the range if, for some reason, you don't want 0 to 1.

2. Standardization first subtracts the mean value (so standardized values always have a zero mean), and then divides by the standard deviation so that the resulting distribution has unit variance. Standardization is much less affected by outliers. For example, suppose a district had a median income equal to 100 by mistake: min-max scaling would crush all the other values from the 0-15 range down to 0-0.15, whereas standardization would not be much affected.
> Scikit-Learn provides a transformer called `StandardScaler` for standardization.

Transformation Pipelines

There are many data transformation steps that need to be executed in the right order, so Scikit-Learn provides the `Pipeline` class to run such sequences of transformations. The Pipeline constructor takes a list of name/estimator pairs defining a sequence of steps. __All but the last estimator must be transformers__ (i.e., they must have a `fit_transform()` method). When you call the pipeline's `fit()` method, it calls `fit_transform()` sequentially on all transformers, passing the output of each call as the argument to the next call, until it reaches the final estimator, for which it just calls `fit()`.
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

num_pipeline = Pipeline([
    ('imputer', SimpleImputer(strategy="median")),
    ('attribs_adder', CombinedAttributesAdder()),
    ('std_scaler', StandardScaler()),
])

housing_num_tr = num_pipeline.fit_transform(housing_num)
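To make the two scaling options discussed above concrete, here is a small sketch comparing `MinMaxScaler` and `StandardScaler` on a single numeric column (assuming the `housing_num` DataFrame defined earlier):

```python
from sklearn.preprocessing import MinMaxScaler, StandardScaler

col = housing_num[["median_income"]]

# Min-max scaling: rescales values into the 0-1 range (configurable via feature_range).
minmax = MinMaxScaler().fit_transform(col)
print(minmax.min(), minmax.max())                          # -> 0.0 1.0

# Standardization: zero mean, unit variance; much less sensitive to extreme values.
standard = StandardScaler().fit_transform(col)
print(standard.mean().round(3), standard.std().round(3))   # -> ~0.0 ~1.0
```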
Now we have a pipeline for the numerical values; we still need to apply a categorical encoder such as `LabelBinarizer` on the categorical values. How can we join these transformations into a single pipeline? Scikit-Learn provides the `FeatureUnion` class for this. You give it a list of transformers (which can be entire transformer pipelines); when its `transform()` method is called, it runs each transformer's `transform()` method __in parallel__, waits for their output, then concatenates them and returns the result (and, of course, calling its `fit()` method calls each transformer's `fit()` method).
from sklearn.pipeline import FeatureUnion

num_attribs = list(housing_num)
cat_attribs = ["ocean_proximity"]

num_pipeline = Pipeline([
    ('selector', DataFrameSelector(num_attribs)),
    ('imputer', SimpleImputer(strategy="median")),
    ('attribs_adder', CombinedAttributesAdder()),
    ('std_scaler', StandardScaler()),
])

cat_pipeline = Pipeline([
    ('selector', DataFrameSelector(cat_attribs)),
    # ('label_binarizer', LabelBinarizer()),
    # OneHotEncoder can encode string categories directly (scikit-learn >= 0.20):
    ('cat_encoder', OneHotEncoder()),
])

full_pipeline = FeatureUnion(transformer_list=[
    ("num_pipeline", num_pipeline),
    ("cat_pipeline", cat_pipeline),
])

housing_prepared = full_pipeline.fit_transform(housing)

# Step-by-step debugging of the two pipelines (kept for reference):
# d = DataFrameSelector(num_attribs)
# housing_d = d.fit_transform(housing)
# imputer = SimpleImputer(strategy="median")
# housing_i = imputer.fit_transform(housing_d)
# c = CombinedAttributesAdder()
# housing_c = c.fit_transform(housing_i)
# s = StandardScaler()
# housing_s = s.fit_transform(housing_c)
# d = DataFrameSelector(cat_attribs)
# housing_d = d.fit_transform(housing)
# l = LabelBinarizer()
# housing_l = l.fit_transform(housing_d)

housing_prepared.toarray()
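In scikit-learn 0.20 and later, the same per-column routing can also be expressed with `ColumnTransformer`, which removes the need for the custom `DataFrameSelector`. A hedged sketch of that alternative (not part of the original notebook):

```python
# Alternative sketch using ColumnTransformer (scikit-learn >= 0.20).
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

num_only_pipeline = Pipeline([
    ('imputer', SimpleImputer(strategy="median")),
    ('attribs_adder', CombinedAttributesAdder()),
    ('std_scaler', StandardScaler()),
])

# ColumnTransformer routes each list of DataFrame columns to its own
# transformer, so no DataFrameSelector is needed.
full_pipeline_ct = ColumnTransformer([
    ("num", num_only_pipeline, num_attribs),
    ("cat", OneHotEncoder(), cat_attribs),
])

housing_prepared_ct = full_pipeline_ct.fit_transform(housing)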
Select and Train a Model

Linear Regression
from sklearn.linear_model import LinearRegression

lin_reg = LinearRegression()
lin_reg.fit(housing_prepared, housing_labels)
Done! You now have a working Linear Regression model. Let's try it out on a few instances from the training set:
some_data = housing.iloc[:5]
some_labels = housing_labels.iloc[:5]
some_data_prepared = full_pipeline.transform(some_data)

print("Predictions:\t", lin_reg.predict(some_data_prepared))
print("Labels:\t\t", list(some_labels))
Predictions:	 [181746.54358872 290558.74963381 244957.50041055 146498.51057872 163230.42389721]
Labels:		 [103000.0, 382100.0, 172600.0, 93400.0, 96500.0]
RMSE

Let's measure this regression model's RMSE on the whole training set using Scikit-Learn's `mean_squared_error` function:
from sklearn.metrics import mean_squared_error

housing_predictions = lin_reg.predict(housing_prepared)
lin_mse = mean_squared_error(housing_labels, housing_predictions)
lin_rmse = np.sqrt(lin_mse)
lin_rmse
Let's try a more complex model.

DecisionTreeRegressor
from sklearn.tree import DecisionTreeRegressor

tree_reg = DecisionTreeRegressor()
tree_reg.fit(housing_prepared, housing_labels)
Evaluate with RMSE
housing_predictions = tree_reg.predict(housing_prepared)
tree_mse = mean_squared_error(housing_labels, housing_predictions)
tree_rmse = np.sqrt(tree_mse)
tree_rmse
You can see that this model is severely overfitting the training data.

Cross-Validation

One way to evaluate the model would be to use the `train_test_split` function to split the training set into a smaller training set and a __validation set__, then train the model on the smaller training set and evaluate it on the validation set. A better alternative is to __use Scikit-Learn's cross-validation feature__. The following code performs K-fold cross-validation: it randomly splits the training set into 10 distinct subsets called "folds", then trains and evaluates the decision tree model 10 times, each time picking a different fold for evaluation and training on the other 9 folds. The result is an array containing 10 evaluation scores.

> Scikit-Learn's cross-validation features expect a utility function (greater is better) rather than a cost function (lower is better), so the scoring function is actually the opposite of the MSE (i.e., a negative value), which is why we compute `-scores` before taking the square root.
from sklearn.model_selection import cross_val_score

scores = cross_val_score(tree_reg, housing_prepared, housing_labels,
                         scoring="neg_mean_squared_error", cv=10)
rmse_scores = np.sqrt(-scores)

def display_scores(scores):
    print("Scores:", scores)
    print("Mean:", scores.mean())
    print("Standard deviation:", scores.std())

display_scores(rmse_scores)
Scores: [64669.81202575 70631.54431519 68182.27830444 70392.73509393 72864.28420412 67109.28516943 66338.75100355 69542.07611318 65752.27281003 70391.54164896] Mean: 68587.45806885832 Standard deviation: 2463.4659300283547
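For comparison, the same 10-fold evaluation can be run on the linear regression model trained earlier; a short sketch reusing `display_scores` from above:

```python
# Cross-validate the linear regression model the same way, for comparison
# with the decision tree's scores.
lin_scores = cross_val_score(lin_reg, housing_prepared, housing_labels,
                             scoring="neg_mean_squared_error", cv=10)
lin_rmse_scores = np.sqrt(-lin_scores)
display_scores(lin_rmse_scores)
```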
RandomForestRegressor

A Random Forest works by training many decision trees on random subsets of the features. Building a model on top of many other models is called Ensemble Learning, and it is often a great way to push ML algorithms even further.
from sklearn.ensemble import RandomForestRegressor

forest_reg = RandomForestRegressor()
forest_reg.fit(housing_prepared, housing_labels)

scores = cross_val_score(forest_reg, housing_prepared, housing_labels,
                         scoring="neg_mean_squared_error", cv=10, n_jobs=-1)
rmse_scores = np.sqrt(-scores)
display_scores(rmse_scores)
Scores: [49751.31861666 54615.84913363 52738.25864141 54820.43695375 55833.78571584 49535.30004953 49969.23161663 52868.72231176 51471.9865128 51848.05631902] Mean: 52345.29458710363 Standard deviation: 2125.0902130050936
Saving the Model

You can save a trained model with Python's built-in pickle module, or with the joblib library (note that `sklearn.externals.joblib` is deprecated; import `joblib` directly):

```python
import joblib

joblib.dump(my_model, "my_model.pkl")
# and later, to load it back:
my_model_loaded = joblib.load("my_model.pkl")
```

Fine-Tuning the Model

Let's assume you now have a shortlist of promising models. You now need to fine-tune them.

Grid Search

One way to fine-tune would be to fiddle with the hyperparameters manually until you find a good combination. This would be very tedious, and you may not have time to explore many combinations. Instead, you should use Scikit-Learn's `GridSearchCV` to do the searching for you. All you need to do is tell `GridSearchCV` which hyperparameters you want it to experiment with and what values to try, and it will evaluate all the possible combinations of hyperparameter values, using cross-validation. For example, the following code searches for the best combination of hyperparameter values for the `RandomForestRegressor`:
from sklearn.model_selection import GridSearchCV

param_grid = [
    {'n_estimators': [3, 10, 30], 'max_features': [2, 4, 6, 8]},
    {'bootstrap': [False], 'n_estimators': [3, 10], 'max_features': [2, 3, 4]},
]

forest_reg = RandomForestRegressor()
grid_search = GridSearchCV(forest_reg, param_grid, cv=5,
                           scoring='neg_mean_squared_error', n_jobs=-1)
grid_search.fit(housing_prepared, housing_labels)
This `param_grid` tells Scikit-Learn to first evaluate all `3 × 4 = 12` combinations of the `n_estimators` and `max_features` values listed in the first `dict`, then try all `2 × 3 = 6` combinations of hyperparameter values in the second `dict`, this time with the `bootstrap` hyperparameter set to `False`. All in all, the grid search will explore `12 + 6 = 18` combinations of `RandomForestRegressor` hyperparameter values, and it will train each model 5 times (since we are using five-fold cross-validation). In other words, there will be `18 × 5 = 90` rounds of training! It may take quite a long time, but when it is done you can get the best combination of parameters like this:
grid_search.best_params_     # the best combination of hyperparameter values
grid_search.best_estimator_  # the best estimator
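The cross-validated score of every combination that was tried is also available in the `cv_results_` attribute; a small sketch to inspect them:

```python
# Print the RMSE obtained for each hyperparameter combination during the grid search.
cvres = grid_search.cv_results_
for mean_score, params in zip(cvres["mean_test_score"], cvres["params"]):
    print(np.sqrt(-mean_score), params)
```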
You can treat data preparation steps as hyperparameters as well. For example, __the grid search can automatically find out whether or not to add a feature you were not sure about__ (e.g., the `add_bedrooms_per_room` hyperparameter of the `CombinedAttributesAdder` transformer). It may similarly be used to automatically find the best way to handle outliers, missing features, feature selection, and more.

Randomized Search

The grid search approach is fine when you are exploring relatively few combinations, but when the hyperparameter search space is large, it is often preferable to use `RandomizedSearchCV` instead. This class can be used in much the same way as `GridSearchCV`, but instead of trying out all possible combinations, it evaluates a given number of random combinations by sampling a random value for each hyperparameter at every iteration (a short sketch follows this section). This approach has two main benefits:

* If you let the randomized search run for, say, 1,000 iterations, it will explore 1,000 different values for each hyperparameter (instead of just the few values listed per hyperparameter in a grid search).
* You can control the computing budget simply by setting the number of iterations.

Ensemble Methods

Another way to fine-tune your system is to combine the models that perform best. The group (or "ensemble") will often perform better than the best individual model (just like Random Forests perform better than individual decision trees), especially if the individual models make different types of errors.

Analyze the Best Models and Their Errors

You will often gain useful insights on the problem by inspecting the best models.

Evaluate Your System on the Test Set

After tweaking your models for a while, you eventually have a system that performs sufficiently well. Now is the time to evaluate the final model on the test set. __Note: if the model does not perform well on the test set, do not tweak the hyperparameters to make the test numbers look good, because those improvements would be unlikely to generalize to new data.__
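Here is a minimal sketch of what the randomized search described above could look like; the parameter ranges and `n_iter` value below are illustrative assumptions, not values from the original notebook:

```python
# Hedged sketch of RandomizedSearchCV; n_iter controls the search budget,
# and the value ranges below are illustrative assumptions.
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import randint

param_distribs = {
    'n_estimators': randint(low=1, high=200),
    'max_features': randint(low=1, high=8),
}

rnd_search = RandomizedSearchCV(RandomForestRegressor(), param_distributions=param_distribs,
                                n_iter=10, cv=5, scoring='neg_mean_squared_error',
                                random_state=42)
rnd_search.fit(housing_prepared, housing_labels)
print(rnd_search.best_params_)
```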
final_model = grid_search.best_estimator_

X_test = test_set.drop("median_house_value", axis=1)
y_test = test_set["median_house_value"].copy()

# Prepare the data
X_test_prepared = full_pipeline.transform(X_test)

# Predict
final_predictions = final_model.predict(X_test_prepared)

# RMSE
final_mse = mean_squared_error(y_test, final_predictions)
final_rmse = np.sqrt(final_mse)
final_rmse
Recommending Movies: Retrieval

Real-world recommender systems are often composed of two stages:

1. The retrieval stage is responsible for selecting an initial set of hundreds of candidates from all possible candidates. The main objective of this model is to efficiently weed out all candidates that the user is not interested in. Because the retrieval model may be dealing with millions of candidates, it has to be computationally efficient.
2. The ranking stage takes the outputs of the retrieval model and fine-tunes them to select the best possible handful of recommendations. Its task is to narrow down the set of items the user may be interested in to a shortlist of likely candidates.

In this tutorial, we're going to focus on the first stage, retrieval. If you are interested in the ranking stage, have a look at our [ranking](basic_ranking) tutorial.

Retrieval models are often composed of two sub-models:

1. A query model computing the query representation (normally a fixed-dimensionality embedding vector) using query features.
2. A candidate model computing the candidate representation (an equally-sized vector) using the candidate features.

The outputs of the two models are then multiplied together to give a query-candidate affinity score, with higher scores expressing a better match between the candidate and the query.

In this tutorial, we're going to build and train such a two-tower model using the Movielens dataset.

We're going to:

1. Get our data and split it into a training and test set.
2. Implement a retrieval model.
3. Fit and evaluate it.
4. Export it for efficient serving by building an approximate nearest neighbours (ANN) index.

The dataset

The Movielens dataset is a classic dataset from the [GroupLens](https://grouplens.org/datasets/movielens/) research group at the University of Minnesota. It contains a set of ratings given to movies by a set of users, and is a workhorse of recommender system research.

The data can be treated in two ways:

1. It can be interpreted as expressing which movies the users watched (and rated), and which they did not. This is a form of implicit feedback, where users' watches tell us which things they prefer to see and which they'd rather not see.
2. It can also be seen as expressing how much the users liked the movies they did watch. This is a form of explicit feedback: given that a user watched a movie, we can tell roughly how much they liked it by looking at the rating they have given.

In this tutorial, we are focusing on a retrieval system: a model that predicts a set of movies from the catalogue that the user is likely to watch. Often, implicit data is more useful here, and so we are going to treat Movielens as an implicit system. This means that every movie a user watched is a positive example, and every movie they have not seen is an implicit negative example.

Imports

Let's first get our imports out of the way.
import os
import pprint
import tempfile

from typing import Dict, Text

import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
Preparing the dataset

Let's first have a look at the data.

We use the MovieLens dataset from [Tensorflow Datasets](https://www.tensorflow.org/datasets). Loading `movie_lens/100k-ratings` yields a `tf.data.Dataset` object containing the ratings data, and loading `movie_lens/100k-movies` yields a `tf.data.Dataset` object containing only the movies data.

Note that since the MovieLens dataset does not have predefined splits, all data are under the `train` split.
# Ratings data.
ratings = tfds.load("movie_lens/100k-ratings", split="train")
# Features of all the available movies.
movies = tfds.load("movie_lens/100k-movies", split="train")
The ratings dataset returns a dictionary of movie id, user id, the assigned rating, timestamp, movie information, and user information:
for x in ratings.take(1).as_numpy_iterator(): pprint.pprint(x)
{'bucketized_user_age': 45.0,
 'movie_genres': array([7]),
 'movie_id': b'357',
 'movie_title': b"One Flew Over the Cuckoo's Nest (1975)",
 'raw_user_age': 46.0,
 'timestamp': 879024327,
 'user_gender': True,
 'user_id': b'138',
 'user_occupation_label': 4,
 'user_occupation_text': b'doctor',
 'user_rating': 4.0,
 'user_zip_code': b'53211'}
The movies dataset contains the movie id, movie title, and data on what genres it belongs to. Note that the genres are encoded with integer labels.
for x in movies.take(1).as_numpy_iterator(): pprint.pprint(x)
{'movie_genres': array([4]), 'movie_id': b'1681', 'movie_title': b'You So Crazy (1994)'}
In this example, we're going to focus on the ratings data. Other tutorials explore how to use the movie information data as well to improve the model quality.

We keep only the `user_id` and `movie_title` fields in the dataset.
ratings = ratings.map(lambda x: {
    "movie_title": x["movie_title"],
    "user_id": x["user_id"],
})
movies = movies.map(lambda x: x["movie_title"])
To fit and evaluate the model, we need to split it into a training and evaluation set. In an industrial recommender system, this would most likely be done by time: the data up to time $T$ would be used to predict interactions after $T$.In this simple example, however, let's use a random split, putting 80% of the ratings in the train set, and 20% in the test set.
tf.random.set_seed(42)
shuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False)

train = shuffled.take(80_000)
test = shuffled.skip(80_000).take(20_000)
Let's also figure out unique user ids and movie titles present in the data. This is important because we need to be able to map the raw values of our categorical features to embedding vectors in our models. To do that, we need a vocabulary that maps a raw feature value to an integer in a contiguous range: this allows us to look up the corresponding embeddings in our embedding tables.
movie_titles = movies.batch(1_000)
user_ids = ratings.batch(1_000_000).map(lambda x: x["user_id"])

unique_movie_titles = np.unique(np.concatenate(list(movie_titles)))
unique_user_ids = np.unique(np.concatenate(list(user_ids)))

unique_movie_titles[:10]
Implementing a model

Choosing the architecture of our model is a key part of modelling.

Because we are building a two-tower retrieval model, we can build each tower separately and then combine them in the final model.

The query tower

Let's start with the query tower.

The first step is to decide on the dimensionality of the query and candidate representations:
embedding_dimension = 32
Higher values will correspond to models that may be more accurate, but will also be slower to fit and more prone to overfitting.

The second step is to define the model itself. Here, we're going to use Keras preprocessing layers to first convert user ids to integers, and then convert those to user embeddings via an `Embedding` layer. Note that we use the list of unique user ids we computed earlier as a vocabulary:

_Note: Requires TF 2.3.0_
user_model = tf.keras.Sequential([
    tf.keras.layers.experimental.preprocessing.StringLookup(
        vocabulary=unique_user_ids, mask_token=None),
    # We add an additional embedding to account for unknown tokens.
    tf.keras.layers.Embedding(len(unique_user_ids) + 1, embedding_dimension)
])
A simple model like this corresponds exactly to a classic [matrix factorization](https://ieeexplore.ieee.org/abstract/document/4781121) approach. While defining a subclass of `tf.keras.Model` for this simple model might be overkill, we can easily extend it to an arbitrarily complex model using standard Keras components, as long as we return an `embedding_dimension`-wide output at the end. The candidate towerWe can do the same with the candidate tower.
movie_model = tf.keras.Sequential([
    tf.keras.layers.experimental.preprocessing.StringLookup(
        vocabulary=unique_movie_titles, mask_token=None),
    tf.keras.layers.Embedding(len(unique_movie_titles) + 1, embedding_dimension)
])
MetricsIn our training data we have positive (user, movie) pairs. To figure out how good our model is, we need to compare the affinity score that the model calculates for this pair to the scores of all the other possible candidates: if the score for the positive pair is higher than for all other candidates, our model is highly accurate.To do this, we can use the `tfrs.metrics.FactorizedTopK` metric. The metric has one required argument: the dataset of candidates that are used as implicit negatives for evaluation.In our case, that's the `movies` dataset, converted into embeddings via our movie model:
metrics = tfrs.metrics.FactorizedTopK(
    candidates=movies.batch(128).map(movie_model)
)
LossThe next component is the loss used to train our model. TFRS has several loss layers and tasks to make this easy.In this instance, we'll make use of the `Retrieval` task object: a convenience wrapper that bundles together the loss function and metric computation:
task = tfrs.tasks.Retrieval(
    metrics=metrics
)
The task itself is a Keras layer that takes the query and candidate embeddings as arguments, and returns the computed loss: we'll use that to implement the model's training loop.

The full model

We can now put it all together into a model. TFRS exposes a base model class (`tfrs.models.Model`) which streamlines building models: all we need to do is to set up the components in the `__init__` method, and implement the `compute_loss` method, taking in the raw features and returning a loss value.

The base model will then take care of creating the appropriate training loop to fit our model.
class MovielensModel(tfrs.Model):

    def __init__(self, user_model, movie_model):
        super().__init__()
        self.movie_model: tf.keras.Model = movie_model
        self.user_model: tf.keras.Model = user_model
        self.task: tf.keras.layers.Layer = task

    def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor:
        # We pick out the user features and pass them into the user model.
        user_embeddings = self.user_model(features["user_id"])
        # And pick out the movie features and pass them into the movie model,
        # getting embeddings back.
        positive_movie_embeddings = self.movie_model(features["movie_title"])

        # The task computes the loss and the metrics.
        return self.task(user_embeddings, positive_movie_embeddings)
The `tfrs.Model` base class is simply a convenience class: it allows us to compute both training and test losses using the same method.

Under the hood, it's still a plain Keras model. You could achieve the same functionality by inheriting from `tf.keras.Model` and overriding the `train_step` and `test_step` functions (see [the guide](https://keras.io/guides/customizing_what_happens_in_fit/) for details):
class NoBaseClassMovielensModel(tf.keras.Model):

    def __init__(self, user_model, movie_model):
        super().__init__()
        self.movie_model: tf.keras.Model = movie_model
        self.user_model: tf.keras.Model = user_model
        self.task: tf.keras.layers.Layer = task

    def train_step(self, features: Dict[Text, tf.Tensor]) -> tf.Tensor:
        # Set up a gradient tape to record gradients.
        with tf.GradientTape() as tape:
            # Loss computation.
            user_embeddings = self.user_model(features["user_id"])
            positive_movie_embeddings = self.movie_model(features["movie_title"])
            loss = self.task(user_embeddings, positive_movie_embeddings)

            # Handle regularization losses as well.
            regularization_loss = sum(self.losses)
            total_loss = loss + regularization_loss

        gradients = tape.gradient(total_loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))

        metrics = {metric.name: metric.result() for metric in self.metrics}
        metrics["loss"] = loss
        metrics["regularization_loss"] = regularization_loss
        metrics["total_loss"] = total_loss

        return metrics

    def test_step(self, features: Dict[Text, tf.Tensor]) -> tf.Tensor:
        # Loss computation.
        user_embeddings = self.user_model(features["user_id"])
        positive_movie_embeddings = self.movie_model(features["movie_title"])
        loss = self.task(user_embeddings, positive_movie_embeddings)

        # Handle regularization losses as well.
        regularization_loss = sum(self.losses)
        total_loss = loss + regularization_loss

        metrics = {metric.name: metric.result() for metric in self.metrics}
        metrics["loss"] = loss
        metrics["regularization_loss"] = regularization_loss
        metrics["total_loss"] = total_loss

        return metrics
In these tutorials, however, we stick to using the `tfrs.Model` base class to keep our focus on modelling and abstract away some of the boilerplate. Fitting and evaluatingAfter defining the model, we can use standard Keras fitting and evaluation routines to fit and evaluate the model.Let's first instantiate the model.
model = MovielensModel(user_model, movie_model)
model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1))
Then shuffle, batch, and cache the training and evaluation data.
cached_train = train.shuffle(100_000).batch(8192).cache()
cached_test = test.batch(4096).cache()
Then train the model:
model.fit(cached_train, epochs=3)
Epoch 1/3
10/10 [==============================] - 5s 464ms/step - factorized_top_k: 0.0508 - factorized_top_k/top_1_categorical_accuracy: 3.2500e-04 - factorized_top_k/top_5_categorical_accuracy: 0.0046 - factorized_top_k/top_10_categorical_accuracy: 0.0117 - factorized_top_k/top_50_categorical_accuracy: 0.0808 - factorized_top_k/top_100_categorical_accuracy: 0.1566 - loss: 69885.1072 - regularization_loss: 0.0000e+00 - total_loss: 69885.1072
Epoch 2/3
10/10 [==============================] - 5s 453ms/step - factorized_top_k: 0.1006 - factorized_top_k/top_1_categorical_accuracy: 0.0021 - factorized_top_k/top_5_categorical_accuracy: 0.0168 - factorized_top_k/top_10_categorical_accuracy: 0.0346 - factorized_top_k/top_50_categorical_accuracy: 0.1626 - factorized_top_k/top_100_categorical_accuracy: 0.2866 - loss: 67523.3714 - regularization_loss: 0.0000e+00 - total_loss: 67523.3714
Epoch 3/3
10/10 [==============================] - 5s 454ms/step - factorized_top_k: 0.1136 - factorized_top_k/top_1_categorical_accuracy: 0.0029 - factorized_top_k/top_5_categorical_accuracy: 0.0215 - factorized_top_k/top_10_categorical_accuracy: 0.0443 - factorized_top_k/top_50_categorical_accuracy: 0.1854 - factorized_top_k/top_100_categorical_accuracy: 0.3139 - loss: 66302.9609 - regularization_loss: 0.0000e+00 - total_loss: 66302.9609
As the model trains, the loss is falling and a set of top-k retrieval metrics is updated. These tell us whether the true positive is in the top-k retrieved items from the entire candidate set. For example, a top-5 categorical accuracy metric of 0.2 would tell us that, on average, the true positive is in the top 5 retrieved items 20% of the time.

Note that, in this example, we evaluate the metrics during training as well as evaluation. Because this can be quite slow with large candidate sets, it may be prudent to turn metric calculation off in training, and only run it in evaluation (a sketch of one way to do this follows the evaluation output below).

Finally, we can evaluate our model on the test set:
model.evaluate(cached_test, return_dict=True)
5/5 [==============================] - 1s 169ms/step - factorized_top_k: 0.0782 - factorized_top_k/top_1_categorical_accuracy: 0.0010 - factorized_top_k/top_5_categorical_accuracy: 0.0097 - factorized_top_k/top_10_categorical_accuracy: 0.0226 - factorized_top_k/top_50_categorical_accuracy: 0.1248 - factorized_top_k/top_100_categorical_accuracy: 0.2328 - loss: 31079.0635 - regularization_loss: 0.0000e+00 - total_loss: 31079.0635
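As noted above, metric computation can be skipped during training to speed things up. A hedged sketch of one way to do this, using the `compute_metrics` argument of the `Retrieval` task and the `task` and tower models defined earlier:

```python
# Sketch: skip the (expensive) FactorizedTopK metrics on training batches and
# only compute them at evaluation time, by forwarding the `training` flag.
class MovielensModelFastTrain(tfrs.Model):

    def __init__(self, user_model, movie_model):
        super().__init__()
        self.movie_model = movie_model
        self.user_model = user_model
        self.task = task

    def compute_loss(self, features, training=False):
        user_embeddings = self.user_model(features["user_id"])
        positive_movie_embeddings = self.movie_model(features["movie_title"])
        # compute_metrics=False makes the Retrieval task return only the loss.
        return self.task(user_embeddings, positive_movie_embeddings,
                         compute_metrics=not training)
```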
Test set performance is much worse than training performance. This is due to two factors:

1. Our model is likely to perform better on the data that it has seen, simply because it can memorize it. This overfitting phenomenon is especially strong when models have many parameters. It can be mitigated by model regularization and the use of user and movie features that help the model generalize better to unseen data.
2. The model is re-recommending some of the users' already watched movies. These known-positive watches can crowd test movies out of the top-K recommendations.

The second phenomenon can be tackled by excluding previously seen movies from test recommendations. This approach is relatively common in the recommender systems literature, but we don't follow it in these tutorials. If not recommending past watches is important, we should expect appropriately specified models to learn this behaviour automatically from past user history and contextual information. Additionally, it is often appropriate to recommend the same item multiple times (say, an evergreen TV series or a regularly purchased item).

Making predictions

Now that we have a model, we would like to be able to make predictions. We can use the `tfrs.layers.ann.BruteForce` layer to do this.
# Create a model that takes in raw query features, and
index = tfrs.layers.ann.BruteForce(model.user_model)
# recommends movies out of the entire movies dataset.
index.index(movies.batch(100).map(model.movie_model), movies)

# Get recommendations.
_, titles = index(tf.constant(["42"]))
print(f"Recommendations for user 42: {titles[0, :3]}")
Recommendations for user 42: [b'Bridges of Madison County, The (1995)' b'Father of the Bride Part II (1995)' b'Rudy (1993)']
Of course, the `BruteForce` layer is going to be too slow to serve a model with many possible candidates. The following sections show how to speed this up by using an approximate retrieval index.

Model serving

After the model is trained, we need a way to deploy it.

In a two-tower retrieval model, serving has two components:

- a serving query model, taking in features of the query and transforming them into a query embedding, and
- a serving candidate model. This most often takes the form of an approximate nearest neighbours (ANN) index which allows fast approximate lookup of candidates in response to a query produced by the query model.

Exporting a query model to serving

Exporting the query model is easy: we can either serialize the Keras model directly, or export it to a `SavedModel` format to make it possible to serve using [TensorFlow Serving](https://www.tensorflow.org/tfx/guide/serving).

To export to a `SavedModel` format, we can do the following:
model_dir = './models'
!mkdir $model_dir

# Export the query model.
path = '{}/query_model'.format(model_dir)
model.user_model.save(path)

# Load the query model
loaded = tf.keras.models.load_model(path, compile=False)

query_embedding = loaded(tf.constant(["10"]))
print(f"Query embedding: {query_embedding[0, :3]}")
WARNING:tensorflow:11 out of the last 11 calls to <function recreate_function.<locals>.restored_function_body at 0x7f85d75cce18> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.
Building a candidate ANN index

Exporting candidate representations is more involved. Firstly, we want to pre-compute them to make sure serving is fast; this is especially important if the candidate model is computationally intensive (for example, if it has many or wide layers, or uses complex representations for text or images). Secondly, we would like to take the precomputed representations and use them to construct a fast approximate retrieval index.

We can use [Annoy](https://github.com/spotify/annoy) to build such an index.

Annoy isn't included in the base TFRS package. To install it, run `pip install annoy`.

We can now create the index object.
from annoy import AnnoyIndex

index = AnnoyIndex(embedding_dimension, "dot")
Then take the candidate dataset and transform its raw features into embeddings using the movie model:
print(movies)
movie_embeddings = movies.enumerate().map(
    lambda idx, title: (idx, title, model.movie_model(title)))
print(movie_embeddings.as_numpy_iterator().next())
(0, b'You So Crazy (1994)', array([ 0.02039416, 0.15982407, 0.0063992 , -0.02597233, 0.12776582, -0.07474077, -0.14477485, -0.03757067, 0.09737739, 0.05545571, 0.06205893, 0.00479794, -0.1288748 , -0.09362403, 0.03417863, -0.03058628, -0.02924258, -0.09905305, -0.08250699, -0.12956885, -0.00052435, -0.07832637, -0.00451247, 0.04807298, -0.07815737, -0.18195164, 0.10836799, -0.01164408, -0.10894814, -0.03122996, -0.10479282, -0.09899054], dtype=float32))
And then index the movie_id, movie embedding pairs into our Annoy index:
%%time

movie_id_to_title = dict((idx, title) for idx, title, _ in movie_embeddings.as_numpy_iterator())

# Annoy accepts one scalar (id, embedding) pair at a time, so we iterate over the dataset.
for movie_id, _, movie_embedding in movie_embeddings.as_numpy_iterator():
    index.add_item(movie_id, movie_embedding)

# Build a 10-tree ANN index.
index.build(10)
We can then retrieve nearest neighbours:
for row in test.batch(1).take(3):
    query_embedding = model.user_model(row["user_id"])[0]
    candidates = index.get_nns_by_vector(query_embedding, 3)
    print(f"User ID: {row['user_id']}, Candidates: {[movie_id_to_title[x] for x in candidates]}.")

print(type(candidates))
<class 'list'>
CLEAN CODE
def is_even(num):
    if num % 2 == 0:
        return True
    elif num % 2 != 0:  # We really don't need this condition
        return False

is_even(25)
is_even(26)

# We will clean up our code above a little bit:
def is_even(num):
    if num % 2 == 0:
        return True
    else:
        return False

is_even(12)
is_even(11)

# We can clean up a little more:
def is_even(num):
    if num % 2 == 0:
        return True
    return False

is_even(5)
is_even(6)

# We can make our code even nicer and simpler:
def is_even(num):
    return num % 2 == 0

is_even(22)
is_even(19)
Instructions

Please make a copy and rename it with your name (ex: Proj6_Ilmi_Yoon). All grading points should be explored in the notebook but some can be done in a separate pdf file. *Graded questions will be listed with "Q:" followed by the corresponding points.* You will be submitting **a pdf** file containing **the url of your own proj6.**

---

**Hypothesis testing**
===

**Outline**

At the end of this week, you will be a pro at:

- **hypothesis testing**
  * is there something interesting/meaningful going on in my data?
  - one-sample t-test
  - two-sample t-test
- **correcting for multiple testing**
  * doing thousands of hypothesis tests at a time will increase your likelihood of incorrect conclusions
  * you'll learn how to account for that
- **false discovery rates**
  * you could be a perfectionist ("even one wrong conclusion is the worst"), aka family-wise error rate (FWER)
  * or become a pragmatic ("of my significant discoveries, I expect x% of them to be false positives."), aka false discovery rate (FDR)
- **permutation tests**
  * if your assumptions about your data are wrong, you may over/underestimate your confidence in your conclusions
  * assume as little as possible about the data with a permutation test

**Examples**

In class, we will talk about 3 examples:

- confidence intervals
  - how much time do Americans spend on average per day on Netflix?
- one-sample t-test
  - do Americans spend more time on average per day on Netflix compared to before the pandemic?
- two-sample t-test
  - does exercise affect baseline blood pressure?

**Your project**

- RNA sequencing: which genes differentiate the different immune cells in your blood?
  - two-sample t-test
  - multiple testing correction

**How do you make the best of this week?**

- start seeing all statistics reported around you, and think of how they relate to what we have learned.
- do rigorous statistics in your work from now on

**LET'S BEGIN!**
===============================================================
# import python packages
import numpy as np
import scipy as sp
import scipy.stats as st
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

rng = np.random.RandomState(1234)  # this will ensure the reproducibility of the notebook
**EXAMPLE I:**
===
How much time do subscribers spend on average each day on Netflix?
--

Example discussed in class (Lecture 1). The data we are working with are simulated, but the mean time spent on Netflix is inspired by https://www.pcmag.com/news/us-netflix-subscribers-watch-32-hours-and-use-96-gb-of-data-per-day (an average of 3.2 hours per day for subscribers).
#Summarizing data
#================
population=np.array([1,1.8,2,3.2,3.3,4,4,4.2])
our_sample=np.array([2,3.2,4])

#means
population_mean=np.mean(population)
print('Population mean',population_mean.round(2))
sample_mean=np.mean(our_sample)
print('- Sample mean',sample_mean.round(2))

#standard deviations
population_sd=np.std(population)
print('Population standard deviation',population_sd.round(2))
#biased sample sd
biased_sample_sd=np.sqrt((np.power(our_sample-sample_mean,2).sum())/our_sample.shape[0])
print('- Biased sample standard deviation',biased_sample_sd.round(2))
#unbiased sample sd
unbiased_sample_sd=np.sqrt((np.power(our_sample-sample_mean,2).sum())/(our_sample.shape[0]-1))
print('- Unbiased sample standard deviation',unbiased_sample_sd.round(2))

plt.hist(population,range(0,6),color='black')
plt.yticks([0,1,2])
plt.xlabel('Number of hours spent\nper day on Netflix')
plt.ylabel('Number of observations')
plt.show()

#larger example
MEAN_NETFLIX=3.2
SD_NETFLIX=1
population=rng.normal(loc=MEAN_NETFLIX, scale=SD_NETFLIX, size=1000)
population[population<0]=0
our_sample=population[0:100]

#means
population_mean=np.mean(population)
print('Population mean',population_mean.round(2))
sample_mean=np.mean(our_sample)
print('- Sample mean',sample_mean.round(2))

#standard deviations
population_sd=np.std(population)
print('Population standard deviation',population_sd.round(2))
#biased sample sd
biased_sample_sd=np.sqrt((np.power(our_sample-sample_mean,2).sum())/our_sample.shape[0])
print('- Biased sample standard deviation',biased_sample_sd.round(2))
#unbiased sample sd
unbiased_sample_sd=np.sqrt((np.power(our_sample-sample_mean,2).sum())/(our_sample.shape[0]-1))
print('- Unbiased sample standard deviation',unbiased_sample_sd.round(2))

#representing sets of datapoints
#===============================
#histograms
plt.hist(population,[x*0.6 for x in range(10)],color='lightgray',edgecolor='black')
plt.xlabel('Number of hours spent on Netflix\nper day',fontsize=15)
plt.ylabel('Number of respondents',fontsize=15)
plt.xlim(0,6)
plt.show()

plt.hist(our_sample,[x*0.6 for x in range(10)],color='lightblue',edgecolor='black')
plt.xlabel('Number of hours spent on Netflix\nper day',fontsize=15)
plt.ylabel('Number of respondents',fontsize=15)
plt.xlim(0,6)
plt.show()

#densities
sns.distplot(population, hist=True, kde=True,
             bins=[x*0.6 for x in range(10)], color='black',
             hist_kws={'edgecolor':'black','color':'black'},
             kde_kws={'linewidth': 4})
plt.xlabel('Number of hours spent on Netflix\nper day',fontsize=15)
plt.ylabel('Density',fontsize=15)
plt.xlim(0,6)
plt.show()

sns.distplot(our_sample, hist=True, kde=True,
             bins=[x*0.6 for x in range(10)], color='blue',
             hist_kws={'edgecolor':'black','color':'lightblue'},
             kde_kws={'linewidth': 4})
plt.xlabel('Number of hours spent on Netflix\nper day',fontsize=15)
plt.ylabel('Density',fontsize=15)
plt.xlim(0,6)
plt.show()

#put both data in the same plot
fig,plots=plt.subplots(1)
sns.distplot(population, hist=False, kde=True,
             bins=[x*0.6 for x in range(10)], color='black',
             hist_kws={'edgecolor':'black','color':'black'},
             kde_kws={'linewidth': 4},ax=plots)
plots.set_xlim(0,6)
sns.distplot(our_sample, hist=False, kde=True,
             bins=[x*0.6 for x in range(10)], color='blue',
             hist_kws={'edgecolor':'black','color':'black'},
             kde_kws={'linewidth': 4},ax=plots)
plots.set_xlabel('Number of hours spent on Netflix\nper day',fontsize=15)
plots.set_ylabel('Density',fontsize=15)
x = plots.lines[-1].get_xdata()
y = plots.lines[-1].get_ydata()
plots.fill_between(x, 0, y, where=x < 2, color='lightblue', alpha=0.3)
plt.xlim(0,6)
plt.show()

#put both data in the same plot
fig,plots=plt.subplots(1)
sns.distplot(population, hist=False, kde=True,
             bins=[x*0.6 for x in range(10)], color='black',
             hist_kws={'edgecolor':'black','color':'black'},
             kde_kws={'linewidth': 4},ax=plots)
plots.set_xlim(0,6)
x = plots.lines[-1].get_xdata()
y = plots.lines[-1].get_ydata()
plots.fill_between(x, 0, y, where=(x < 4) & (x>2), color='gray', alpha=0.3)
plt.xlim(0,6)
plots.set_xlabel('Number of hours spent on Netflix\nper day',fontsize=15)
plots.set_ylabel('Density',fontsize=15)
plt.show()

np.multiply((population<=4),(population>=2)).sum()/population.shape[0]

#brute force confidence interval
N_POPULATION=10000
N_SAMPLE=1000
population=np.random.normal(loc=MEAN_NETFLIX, scale=SD_NETFLIX, size=N_POPULATION)
population[population<0]=0

sample_means=[]
for i in range(N_SAMPLE):
    sample_i=np.random.choice(population,10)
    mean_i=np.mean(sample_i)
    sample_means.append(mean_i)
sample_means=np.array(sample_means)

#sd of the mean
means_mean=np.mean(sample_means)
means_sd=np.std(sample_means)
print('Mean of the means',means_mean)
print('SEM (SD of the means)',means_sd)

plt.hist(sample_means,100,color='red')
plt.xlabel('Number of hours spent on Netflix\nper day\nMEANS OF SAMPLES')
plt.xlim(0,6)
plt.axvline(x=means_mean,color='black')
plt.axvline(x=means_mean-means_sd,color='black',linestyle='--')
plt.axvline(x=means_mean+means_sd,color='black',linestyle='--')
plt.show()

#compute what fraction of points are within 1 means_sd from means_mean
within_1sd=0
within_2sd=0
for i in range(sample_means.shape[0]):
    m=sample_means[i]
    if m>=(means_mean-means_sd) and m<=(means_mean+means_sd):
        within_1sd+=1
    if m>=(means_mean-2*means_sd) and m<=(means_mean+2*means_sd):
        within_2sd+=1
print('within 1 means SD:',within_1sd/sample_means.shape[0])
print('within 2 means SD:',within_2sd/sample_means.shape[0])

from scipy import stats
print('SEM (SD of the means), empirically calculated',means_sd.round(2))
print('SEM computed in python',stats.sem(sample_i).round(2))

#one sample t test in python
from scipy.stats import ttest_1samp

MEAN_NETFLIX=3.2
SD_NETFLIX=1
population=rng.normal(loc=MEAN_NETFLIX, scale=SD_NETFLIX, size=1000)
population[population<0]=0
our_sample=population[0:10]
print(our_sample.round(2))
print(our_sample.mean())
print(our_sample.std())

TEST_VALUE=1.5
t, pvalue = ttest_1samp(our_sample, popmean=TEST_VALUE)
print('t', t.round(2))
print('p-value', pvalue.round(6))

#confidence intervals
#=====================
#take 200 samples of size 10
#compute their confidence intervals
#plot them
import scipy.stats as st

N_SAMPLE=200
for CONFIDENCE in [0.9,0.98,0.999999]:
    population=rng.normal(loc=MEAN_NETFLIX, scale=SD_NETFLIX, size=N_POPULATION)
    population[population<0]=0

    sample_means=[]
    ci_lows=[]
    ci_highs=[]
    for i in range(N_SAMPLE):
        sample_i=np.random.choice(population,10)
        mean_i=np.mean(sample_i)
        ci=st.t.interval(alpha=CONFIDENCE, df=sample_i.shape[0]-1,
                         loc=mean_i, scale=st.sem(sample_i))
        ci_lows.append(ci[0])
        ci_highs.append(ci[1])
        sample_means.append(mean_i)

    data=pd.DataFrame({'mean':sample_means,'ci_low':ci_lows,'ci_high':ci_highs})
    data=data.sort_values(by='mean')
    data.index=range(N_SAMPLE)
    print(data)

    for i in range(N_SAMPLE):
        color='gray'
        if MEAN_NETFLIX>data['ci_high'][i] or MEAN_NETFLIX<data['ci_low'][i]:
            color='red'
        plt.plot((data['ci_low'][i],data['ci_high'][i]),(i,i),color=color)
    #plt.scatter(data['mean'],range(N_SAMPLE),color='black')
    plt.axvline(x=MEAN_NETFLIX,color='black',linestyle='--')
    plt.xlabel('Mean time spent on Netflix')
    plt.ylabel('Sampling iteration')
    plt.xlim(0,10)
    plt.show()

#confidence intervals
#=====================
#take 200 samples of size 100
#compute their confidence intervals
#plot them
import scipy.stats as st

N_SAMPLE=200
for CONFIDENCE in [0.9,0.98,0.999999]:
    population=rng.normal(loc=MEAN_NETFLIX, scale=SD_NETFLIX, size=N_POPULATION)
    population[population<0]=0

    sample_means=[]
    ci_lows=[]
    ci_highs=[]
    for i in range(N_SAMPLE):
        sample_i=np.random.choice(population,100)
        mean_i=np.mean(sample_i)
        ci=st.t.interval(alpha=CONFIDENCE, df=sample_i.shape[0]-1,
                         loc=mean_i, scale=st.sem(sample_i))
        ci_lows.append(ci[0])
        ci_highs.append(ci[1])
        sample_means.append(mean_i)

    data=pd.DataFrame({'mean':sample_means,'ci_low':ci_lows,'ci_high':ci_highs})
    data=data.sort_values(by='mean')
    data.index=range(N_SAMPLE)
    print(data)

    for i in range(N_SAMPLE):
        color='gray'
        if MEAN_NETFLIX>data['ci_high'][i] or MEAN_NETFLIX<data['ci_low'][i]:
            color='red'
        plt.plot((data['ci_low'][i],data['ci_high'][i]),(i,i),color=color)
    #plt.scatter(data['mean'],range(N_SAMPLE),color='black')
    plt.axvline(x=MEAN_NETFLIX,color='black',linestyle='--')
    plt.xlabel('Mean time spent on Netflix')
    plt.ylabel('Sampling iteration')
    plt.xlim(0,10)
    plt.show()
         mean    ci_low   ci_high
0    2.892338  2.733312  3.051364
1    2.895408  2.729131  3.061684
2    2.946665  2.799507  3.093822
3    2.957376  2.781855  3.132897
4    2.968571  2.784393  3.152748
..        ...       ...       ...
195  3.427391  3.250704  3.604077
196  3.431677  3.258160  3.605194
197  3.434345  3.274334  3.594356
198  3.450142  3.279478  3.620806
199  3.476758  3.310008  3.643508

[200 rows x 3 columns]
**EXAMPLE II:**
===
Is exercise associated with lower baseline blood pressure?
--

We will simulate data with control mean 120 mmHg, treatment mean 116 mmHg and population SD 5 for both conditions.
#simulate dataset
#=====================
def sample_condition_values(condition_mean, condition_var, condition_N, condition=''):
    condition_values=np.random.normal(loc=condition_mean,
                                      scale=condition_var,
                                      size=condition_N)
    data_condition_here=pd.DataFrame({'BP':condition_values,
                                      'condition':condition})
    return(data_condition_here)

#=========================================================================

N_per_condition=10
ctrl_mean=120
test_mean=116
v=5

np.random.seed(1)
data_ctrl=sample_condition_values(condition_mean=ctrl_mean,
                                  condition_N=N_per_condition,
                                  condition_var=v,
                                  condition='couch')
data_test=sample_condition_values(condition_mean=test_mean,
                                  condition_N=N_per_condition,
                                  condition_var=v,
                                  condition='exercise')
data=pd.concat([data_ctrl,data_test],axis=0)
print(data)

#visualize data
#=====================
sns.catplot(x='condition',y='BP',data=data,height=2,aspect=1.5)
plt.ylabel('BP')
plt.show()

sns.catplot(data=data,x='condition',y='BP',jitter=1)
plt.show()

sns.catplot(data=data,x='condition',y='BP',kind='box')
plt.show()

sns.catplot(data=data,x='condition',y='BP',kind='violin')
plt.show()

fig,plots=plt.subplots(1)
sns.boxplot(data=data,x='condition',y='BP',ax=plots)
sns.stripplot(data=data,x='condition',y='BP',jitter=1,ax=plots,alpha=0.25)
plt.show()
In our hypothesis test, we ask if these two groups differ significantly from each other. It's a bit hard to say just from looking at the plot. This is where statistics comes in.

It's time to:

*3. Think about how much the data surprise you, given your null model*

We'll convert this step to some math, as follows:

**Step 1. Summarize the difference between the groups with a number.**

This is called a **test statistic**.

"How to define the test statistic?" you say?

The world is your oyster. You are free to choose anything you wish. (Later, we'll see that some choices come with nice math, which is why they are typically used. But a test statistic could be anything.)

To demonstrate this intuition, let's come up with a very basic test statistic. For example, let's compute the difference between the mean BP in the 2 groups.
mean_ctrl=np.mean(data[data['condition']=='couch']['BP'])
mean_test=np.mean(data[data['condition']=='exercise']['BP'])
test_stat=mean_test-mean_ctrl
print('test statistic =',test_stat)
test statistic = -4.362237456546268
What is this number telling us? Is the BP significantly different between the 2 conditions? It's impossible to say looking at only this number.

We have to ask ourselves: well, what did you expect? This takes us to the next step.

**ii) Think about what the test statistic would be if in reality there were no difference between the 2 groups. It will be a distribution, not just a single number, because you would expect to see some variation in the test statistic whenever you do an experiment, due to sampling noise, and due to variation in the population.**

Here is where the wasteful part comes in. You go and repeat the measurement on 1000 different couch groups. Then, for each of these, you compute the same test statistic = the difference between the mean in that sample and your original couch group.
np.random.seed(1)
data_exp2=sample_condition_values(condition_mean=ctrl_mean,
                                  condition_N=N_per_condition,
                                  condition_var=v,
                                  condition='control_0')

for i in range(1,1001):
    data_exp2=pd.concat([data_exp2,
                         sample_condition_values(condition_mean=ctrl_mean,
                                                 condition_N=N_per_condition,
                                                 condition_var=v,
                                                 condition='control_'+str(i))])
print(data_exp2)

#now, let's plot the distribution of the test statistic under the null hypothesis
#get mean of each control
exp2_means=data_exp2.groupby('condition').mean()
print(exp2_means.head())

null_test_stats=exp2_means-ctrl_mean

plt.hist(np.array(null_test_stats).flatten(),20,color='black')
plt.xlabel('Test statistic')
plt.axvline(x=test_stat,color='red')

null_test_stats

for i in range(null_test_stats.shape[0]):
    if null_test_stats['BP'][i] > 4:
        print(null_test_stats.index[i], null_test_stats['BP'][i])

for i in range(null_test_stats.shape[0]):
    if null_test_stats['BP'][i] < -4:
        print(null_test_stats.index[i], null_test_stats['BP'][i])

sns.catplot(data=data_exp2,x='condition',y='BP',
            order=['control_0','control_1','control_2','control_3',
                   'control_4','control_5',#'control_6',
                   #'control_7','control_8','control_9','control_10',
                   'control_179','control_161'],
            color='black',#kind='box',
            aspect=2,height=2)

x=5

plt.hist(np.array(null_test_stats[1:2]).flatten(),range(-4,4),color='black')
plt.xlabel('Test statistic (t)')
plt.xlim(-x,x)
plt.ylim(0,5)
plt.show()
#plt.axvline(x=t_stat,color='red')

plt.hist(np.array(null_test_stats[1:3]).flatten(),range(-4,4),color='black')
plt.xlabel('Test statistic (t)')
plt.xlim(-x,x)
plt.ylim(0,5)
plt.show()
#plt.axvline(x=t_stat,color='red')

plt.hist(np.array(null_test_stats[1:4]).flatten(),range(-4,4),color='black')
plt.xlabel('Test statistic (t)')
plt.xlim(-x,x)
plt.ylim(0,5)
plt.show()
#plt.axvline(x=t_stat,color='red')

plt.hist(np.array(null_test_stats[1:5]).flatten(),range(-4,4),color='black')
plt.xlabel('Test statistic (t)')
plt.xlim(-x,x)
plt.ylim(0,5)
plt.show()
#plt.axvline(x=t_stat,color='red')

plt.hist(np.array(null_test_stats[1:6]).flatten(),range(-4,4),color='black')
plt.xlabel('Test statistic (t)')
plt.xlim(-x,x)
plt.ylim(0,5)
plt.show()
#plt.axvline(x=t_stat,color='red')

plt.hist(np.array(null_test_stats).flatten(),20,color='black')
plt.xlabel('Test statistic (t)')
plt.xlim(-x,x)
#plt.ylim(0,5)
plt.show()

plt.hist(np.array(null_test_stats).flatten(),20,color='black')
plt.xlabel('Test statistic (t)')
plt.xlim(-x,x)
#plt.ylim(0,5)
plt.axvline(x=test_stat,color='red')
plt.show()
In black we have the distribution of test statistics we obtained from the 1000 experiments measuring couch participants. In other words, this is the distribution of the test statistic under the null hypothesis.

The red line shows the test statistic from our comparison of the exercise group with the couch group.

**Is our difference significant?**

If the null is true, in other words, if in reality there is no difference between couch and exercise, what is the probability of seeing such an extreme difference between their means (in other words, such an extreme test statistic)?

We can compute this from the plot above. We go to our null distribution, and count how many times we got a more extreme test statistic in our null experiment than the one we got for the couch vs exercise comparison.
count_more_extreme=int(np.sum(np.abs(null_test_stats)>=np.abs(test_stat)))
print(count_more_extreme,'times we got a more extreme test statistic under the null')
print(count_more_extreme/1000,'fraction of the time we got a more extreme test statistic under the null')
3 times we got a more extreme test statistic under the null
0.003 fraction of the time we got a more extreme test statistic under the null
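The same empirical-null idea underlies the permutation test mentioned in the outline: instead of collecting 1000 new couch groups, one can shuffle the condition labels of the data already in hand to build the null distribution. A hedged sketch using the `data` DataFrame from above:

```python
# Hedged sketch of a permutation test: build the null distribution by shuffling
# the condition labels of the observed data instead of collecting new samples.
rng_perm = np.random.RandomState(0)

observed = data[data['condition'] == 'exercise']['BP'].mean() - \
           data[data['condition'] == 'couch']['BP'].mean()

all_bp = data['BP'].values
n_exercise = (data['condition'] == 'exercise').sum()

perm_stats = []
for _ in range(10000):
    shuffled = rng_perm.permutation(all_bp)
    perm_stats.append(shuffled[:n_exercise].mean() - shuffled[n_exercise:].mean())
perm_stats = np.array(perm_stats)

# Two-tailed permutation p-value: fraction of shuffles at least as extreme as observed.
perm_pvalue = np.mean(np.abs(perm_stats) >= np.abs(observed))
print('permutation p-value:', perm_pvalue)
```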
What we computed above is called a **p-value**. Now, this is a very often misunderstood term, so let's think about it deeply. Deeply.Deeply.About what it is, what it is not.**P-values**--To remember what a p-value is, you decide to make a promise to me and more importantly yourself, that from now on, any sentence in which you mention a p-value will start with "if the null were true, ...".**A p-value IS:**- if the null were true, the probability of observing something as extreme or more extreme than your test statistic.- it's the quantification of your "whoa!", given your null hypothesis. More "whoa!" = smaller p-value.**A p-value is NOT:**- the probability that the null hypothesis is wrong. We don't know the probability of that. That's sort of up to the universe. - the probability that the null hypothesis is wrong. This is so important, that it's worth putting it on the list twice.Why is this distinction so important? First, because we can be very good at estimating what happens under the null. It's much more challenging to think about other scenarios. For instance, if you needed to make a model for the BP being different between 2 conditions, how different do you expect them to be? Is the average couch group at 120 and the exercise at 110? Or the couch at 125 and exercise at 130? Do you make a model for each option and grow old estimating all possible models?Second, it's also a matter of being conservative. It's common courtesy to assume the 2 conditions are the same. I expect you to come to me and convince me that it would be REALLY unlikely to observe what we have just seen given the null, to make it worthwhile my time. It would be weird to just assume the BP is different between the 2 conditions and have to prove that they are the same. We'd be swimming in false positives.**Statistical significance**Now that we have a p-value, you need to ask yourself where you set a cutoff for something being unlikely enough to be "significant", or worth your attention. Usually, that's 0.05, or 0.01, or 0.1. Yes, essentially it's a somewhat arbitrary small number.I reiterate: this does not mean that the exercise group is different from the couch group for sure. If you were to do the experiment 1000 times with groups of participants assigned to "couch", in a small subset of your experiments, you'll get a test statistic as or more extreme than the one we found in our experiment comparing. But given that it's unlikely to get this result under the null hypohesis, you call it a significant difference, one that makes you think.In summary: - you look at your p-value - and you think about the probability of getting your result under the null, as you need to include these words in any sentence with p-values -- compare it with your significance threshold - if it is less than that threshold, you call that difference in expression significant between KO and control.**Technical note: one-tailed vs two-tailed tests***Depending on what you believe would be the possible alternative to your null hypothesis (conveniently called the alternative hypothesis), you may compute the p-value differently.**Specifically, in our example above, we computed the p-value by asking:*- *if the null were true, what is the probability of obtaining a test statistic as extreme or more extreme than the one we've seen. That means we asked whether there were test statistics larger than our test statistic, or lower than minus our test statistic. 
This is called a two-tailed test, because we looked at both sides (both tails) of the distribution under the null.**If your alternative hypothesis were that the treatment specifically decreases baseline blood pressure, you'd compute the p-value differently, as you'd look under the null at only what fraction of the time you've seen a test statistic lower than the one we've seen. This is a one-tailed test.**Of course, this is not an invitation to use one-tailed tests to try to get more significant p-values, since by definition the p-values from a one-tailed test will be smaller than those for a two-tailed test. You should define your alternative hypothesis based on deep thought. I personally like to be as conservative as possible, and as such strongly prefer two-tailed tests.* **Hypothesis testing in a nutshell**- come up with a **null hypothesis**. * In our case: the blood pressure does not differ between the couch and exercise groups.- collect some data * yay, we love data!- define a **test statistic** to measure your quantity of interest. * here we looked at the difference between means, but as we'll see below, there are more sophisticated ways to go about it.- figure out the **distribution of the test statistic under the null** hypothesis * here, we did this by repeating the measurement on control groups of participants 1000 times. Next, we'll learn that under certain conditions we can compute this distribution analytically, rather than having to do thousands of experiments.- compute a **p-value** * that tells you, if the null were true, the probability of getting your test statistic or something even more outrageous- decide if **significant** * is the p-value below a pre-defined threshold. If you deeply understand this, you're on a very good path to understand a LARGE fraction of all statistics you'll find in genomics. **PART II. EXAMPLE HYPOTHESIS TESTING USING THE T-TEST**---Now, let's do a t-test.
from scipy.stats import ttest_ind t_stat,pvalue=ttest_ind(data[data['condition']=='exercise']['BP'], data[data['condition']=='couch']['BP'], ) print(t_stat,pvalue) #as before, compare to the distribution null_test_stats=[] for i in range(1000): current_t,current_pvalue=ttest_ind(data_exp2[data_exp2['condition']=='control_'+str(i)]['BP'], data_exp2[data_exp2['condition']=='control_0']['BP'], ) null_test_stats.append(current_t) plt.hist(np.array(null_test_stats).flatten(),color='black') plt.xlabel('Test statistic (t)') plt.axvline(x=t_stat,color='red') count_more_extreme=int(np.sum(np.abs(null_test_stats)>=np.abs(t_stat))) print(count_more_extreme,'times we got a more extreme test statistic under the null') print(count_more_extreme/1000,'fraction of the time we got a more extreme test statistic under the null = p-value')
10 times we got a more extreme test statistic under the null 0.01 fraction of the time we got a more extreme test statistic under the null = p-value
MIT
Robert_Cacho_Proj2_stats_notebook.ipynb
freshskates/machine-learning
Now, the exciting thing is that we didn't have to perform the second experiment to get an empirical distribution of the test statistic under the null. Rather, we were able to estimate it analytically. And indeed, the p-value we obtained from the t-test is similar to the one we got from our big experiment! Ok, so by now, you should be pros at hypothesis tests.Remember: decide on the null, compute test statistic, get the distribution of the test statistic under the null, compute a p-value, decide if significant. There are of course many other types of hypothesis tests that don't look at the difference between groups as we did here. For instance, in GWAS, you want to see if a mutation is enriched in a disease cohort compared to healthy samples, and you do a chi-square test. Or maybe you have more than 2 conditions. Then you do ANOVA, rather than a t-test. (A quick sketch of both tests appears just below, before we set up the RNA-seq data.) **PROJECT: EXAMPLE III:** === RNA sequencing: which genes are characteristic for different types of immune cells in your body? -- Motivation--Although all cells in our body have the same DNA, they can have wildly different functions. That is because they activate different genes, for example your brain cells turn on genes that lead to production of neurotransmitters while liver cells activate genes encoding enzymes.Here, you will compare different types of immune cells (e.g. B-cells that make your antibodies, and T-cells which fight infections), and identify which genes are specifically active in each type of cell.
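Before diving into the RNA-seq project, here is a minimal sketch of the two other tests mentioned above, using `scipy.stats`. The counts and group values below are made up purely for illustration; they are not part of the analysis in this notebook.

```python
from scipy.stats import chi2_contingency, f_oneway
import numpy as np

# Chi-square test: is a mutation enriched in a disease cohort vs healthy controls?
# Rows: disease / healthy; columns: mutation present / absent (made-up counts).
table = np.array([[30, 70],
                  [10, 90]])
chi2, p_chi2, dof, expected = chi2_contingency(table)
print('chi-square p-value:', p_chi2)

# One-way ANOVA: comparing more than 2 conditions (three made-up groups of BP measurements).
rng = np.random.default_rng(0)
group_a = rng.normal(120, 10, 50)
group_b = rng.normal(118, 10, 50)
group_c = rng.normal(125, 10, 50)
f_stat, p_anova = f_oneway(group_a, group_b, group_c)
print('ANOVA p-value:', p_anova)
```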
#install scanpy !pip install scanpy
Requirement already satisfied: scanpy in c:\users\freshskates\.conda\envs\ml\lib\site-packages (1.8.1) Requirement already satisfied: numpy>=1.17.0 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (1.20.0) Requirement already satisfied: h5py>=2.10.0 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (3.4.0) Requirement already satisfied: scikit-learn>=0.22 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (1.0) Requirement already satisfied: numba>=0.41.0 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (0.54.1) Requirement already satisfied: natsort in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (7.1.1) Requirement already satisfied: umap-learn>=0.3.10 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (0.5.1) Requirement already satisfied: joblib in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (1.1.0) Requirement already satisfied: pandas>=0.21 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (1.3.3) Requirement already satisfied: anndata>=0.7.4 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (0.7.6) Requirement already satisfied: patsy in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (0.5.2) Requirement already satisfied: sinfo in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (0.3.4) Requirement already satisfied: tqdm in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (4.62.3) Requirement already satisfied: seaborn in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (0.11.2) Requirement already satisfied: packaging in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (21.0) Requirement already satisfied: matplotlib>=3.1.2 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (3.4.3) Requirement already satisfied: statsmodels>=0.10.0rc2 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (0.13.0) Requirement already satisfied: scipy>=1.4 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (1.7.1) Requirement already satisfied: tables in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (3.6.1) Requirement already satisfied: networkx>=2.3 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (2.6.3) Requirement already satisfied: xlrd<2.0 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from anndata>=0.7.4->scanpy) (1.2.0) Requirement already satisfied: python-dateutil>=2.7 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from matplotlib>=3.1.2->scanpy) (2.8.2) Requirement already satisfied: cycler>=0.10 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from matplotlib>=3.1.2->scanpy) (0.10.0) Requirement already satisfied: pyparsing>=2.2.1 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from matplotlib>=3.1.2->scanpy) (2.4.7) Requirement already satisfied: kiwisolver>=1.0.1 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from matplotlib>=3.1.2->scanpy) (1.3.2) Requirement already satisfied: pillow>=6.2.0 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from matplotlib>=3.1.2->scanpy) (8.3.2) Requirement already satisfied: six in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from cycler>=0.10->matplotlib>=3.1.2->scanpy) (1.16.0) Requirement already satisfied: setuptools in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from 
numba>=0.41.0->scanpy) (58.0.4) Requirement already satisfied: llvmlite<0.38,>=0.37.0rc1 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from numba>=0.41.0->scanpy) (0.37.0) Requirement already satisfied: pytz>=2017.3 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from pandas>=0.21->scanpy) (2021.3) Requirement already satisfied: threadpoolctl>=2.0.0 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scikit-learn>=0.22->scanpy) (3.0.0) Requirement already satisfied: pynndescent>=0.5 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from umap-learn>=0.3.10->scanpy) (0.5.5) Requirement already satisfied: stdlib-list in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from sinfo->scanpy) (0.8.0) Requirement already satisfied: numexpr>=2.6.2 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from tables->scanpy) (2.7.3) Requirement already satisfied: colorama in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from tqdm->scanpy) (0.4.4)
MIT
Robert_Cacho_Proj2_stats_notebook.ipynb
freshskates/machine-learning
RNA sequencing--RNA sequencing allows us to quantify the extent to which each gene is active in a sample. When a gene is active, its DNA is transcribed into mRNA and then translated into protein. With RNA sequencing, we are counting how frequently mRNAs from each gene occur in a sample. Genes that are more active will have higher counts, while genes that are not made into mRNA will have 0 counts.Data--The code below will download the data for you, and organize it into a data frame, where:- every row is a different gene- every column is a different sample. - We have 6 samples: 3 of T cells (called "CD4 T cells") and 3 of B cells ("B cells").- every value is the number of reads from each gene in each sample. - Note: the values have been normalized to be comparable between samples.
import scanpy as sc def prep_data(): adata=sc.datasets.pbmc3k_processed() counts=pd.DataFrame(np.expm1(adata.raw.X.toarray()), index=adata.raw.obs_names, columns=adata.raw.var_names) #make 3 reps T-cells and 3 reps B-cells cells_per_bulk=100 celltype='CD4 T cells' cells=adata.obs_names[adata.obs['louvain']==celltype] bulks=pd.DataFrame(columns=[celltype+'.rep1',celltype+'.rep2',celltype+'.rep3'], index=adata.raw.var_names) for i in range(3): cells_here=cells[(i*100):((i+1)*100)] bulks[celltype+'.rep'+str(i+1)]=list(counts.loc[cells_here,:].sum(axis=0)) bulk_t=bulks celltype='B cells' cells=adata.obs_names[adata.obs['louvain']==celltype] bulks=pd.DataFrame(columns=[celltype+'.rep1',celltype+'.rep2',celltype+'.rep3'], index=adata.raw.var_names) for i in range(3): cells_here=cells[(i*100):((i+1)*100)] bulks[celltype+'.rep'+str(i+1)]=list(counts.loc[cells_here,:].sum(axis=0)) bulks=pd.concat([bulk_t,bulks],axis=1) bulks=bulks.sort_values(by=bulks.columns[0],ascending=False) return(bulks) data=prep_data() print(data.head()) print("min: ", data.min()) print("max: ", data.max())
CD4 T cells.rep1 CD4 T cells.rep2 CD4 T cells.rep3 B cells.rep1 \ index MALAT1 8303.0 7334.0 7697.0 5246.0 B2M 4493.0 4675.0 4546.0 2861.0 TMSB4X 4198.0 4297.0 3932.0 2551.0 RPL10 3615.0 3565.0 3965.0 3163.0 RPL13 3501.0 3556.0 3679.0 2997.0 B cells.rep2 B cells.rep3 index MALAT1 5336.0 4950.0 B2M 2844.0 2796.0 TMSB4X 2066.0 2276.0 RPL10 2830.0 2753.0 RPL13 2636.0 2506.0 min: CD4 T cells.rep1 0.0 CD4 T cells.rep2 0.0 CD4 T cells.rep3 0.0 B cells.rep1 0.0 B cells.rep2 0.0 B cells.rep3 0.0 dtype: float64 max: CD4 T cells.rep1 8303.0 CD4 T cells.rep2 7334.0 CD4 T cells.rep3 7697.0 B cells.rep1 5246.0 B cells.rep2 5336.0 B cells.rep3 4950.0 dtype: float64
MIT
Robert_Cacho_Proj2_stats_notebook.ipynb
freshskates/machine-learning
**Let's explore the dataset****(1 pt)** What are the names of the samples?**(2 pts)** What is the highest recorded value? What is the lowest? write code to answer the questions here 1) Sample names are - CD4 T cells.rep1, CD4 T cells.rep2, CD4 T cells.rep3, - B cells.rep1, B cells.rep2, B cells.rep3 2) - The highest recorded value: **max: CD4 T cells.rep1 8303.0** - The lowest recorded value: **min: CD4 T cells.rep1 0.0** **Exploring the data** One gene that is different between our 2 cell types is IL7R. **(1 pt)** Plot the distribution of the IL7R gene in the 2 conditions. Which cell type (CD4 T cells or B cells) has the higher level of this gene? **(1 pt)** How many samples do we have for each condition? 4) Answers 3) - CD4 T has a higher level of this gene, it can be seen in the graph plotted 4) - Three samples for each condition For CD4 T Cells: - CD4 T cells rep1 - CD4 T cells rep2 - CD4 T cells rep3 For B Cells: - B cells rep1 - B cells rep2 - B cells rep3
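A minimal code sketch answering the two questions above (assuming the `data` frame created by `prep_data()` in the previous cell):

```python
# Sample names = column names of the data frame
print(list(data.columns))

# Highest and lowest recorded values across all genes and samples
print('max value:', data.values.max())
print('min value:', data.values.min())
```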
#inspect the data GENE='IL7R' long_data=pd.DataFrame({GENE:data.loc[GENE,:], 'condition':[x.split('.')[0] for x in data.columns]}) print(long_data) sns.catplot(data=long_data,x='condition', y=GENE)
IL7R condition CD4 T cells.rep1 175.0 CD4 T cells CD4 T cells.rep2 128.0 CD4 T cells CD4 T cells.rep3 146.0 CD4 T cells B cells.rep1 13.0 B cells B cells.rep2 10.0 B cells B cells.rep3 20.0 B cells
MIT
Robert_Cacho_Proj2_stats_notebook.ipynb
freshskates/machine-learning
**Two-sample t-test for one gene across 2 conditions** We are now going to check whether the gene IL7R is differentially active in CD4 T cells vs B cells. **(1 pt)** What is the null hypothesis? **(1 pt)** Based on your plot of the gene in the two conditions, and the fact that there looks like there might be a difference, what do you expect the sign of the t-statistic to be (CD4 T cells vs B cells)? We are going to use the function ttest_ind to perform our t-test. You can read about it here: https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ttest_ind.html. **(1 pt)** What is the t-statistic? **(1 pt)** What is the p-value? **(1 pt)** Describe in your own words what the p-value means. **(1 pt)** Is the p-value significant at alpha = 0.05? Answers 5) - The null hypothesis: IL7R is not differentially active, i.e. the mean IL7R level is the same in CD4 T cells and B cells 6) - Since the plot shows a higher IL7R level in CD4 T cells, we expect the t-statistic (CD4 T cells vs B cells) to be positive --- 7) - t statistic: 9.66; the test statistic tells us how far the difference between the sample means is from the 0 expected under the null, in units of its estimated standard error (the population standard deviation is replaced by the sample standard deviation) 8) - p-value: 0.00064 9) - if the null were true, the p-value is the probability of observing a test statistic as extreme or more extreme than the one we got; the smaller the p-value, the more surprising our result would be under the null 10) - the p-value (0.00064) is smaller than alpha (0.05), so we reject the null hypothesis and call the difference significant
#pick 1 gene, do 1 t-test GENE='IL7R' COND1=['CD4 T cells.rep' + str(x+1) for x in range(3)] COND2=['B cells.rep' + str(x+1) for x in range(3)] #plot gene across samples #t-test from scipy.stats import ttest_ind t_stat,pvalue=ttest_ind(data.loc[GENE,COND1],data.loc[GENE,COND2]) print('t statistic',t_stat.round(2)) print('p-value',pvalue.round(5))
t statistic 9.66 p-value 0.00064
MIT
Robert_Cacho_Proj2_stats_notebook.ipynb
freshskates/machine-learning
**Two-sample t-tests for each gene across 2 conditions**We are now going to repeat our analysis from before for all genes in our dataset.**(1 pt)** How many genes are present in our dataset? Answers 11) - 13714 genes present in the dataset, displayed with display(results)
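A one-line check for the number of genes (each row of `data` is one gene), as a sketch:

```python
print('number of genes:', data.shape[0])
```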
from IPython.display import display #all genes t-tests PSEUDOCOUNT=1 results=pd.DataFrame(index=data.index, columns=['t','p','lfc']) for gene in data.index: t_stat,pvalue=ttest_ind(data.loc[gene,COND1],data.loc[gene,COND2]) lfc=np.log2((data.loc[gene,COND1].mean()+PSEUDOCOUNT)/(data.loc[gene,COND2].mean()+PSEUDOCOUNT)) results.loc[gene,'t']=t_stat results.loc[gene,'p']=pvalue results.loc[gene,'lfc']=lfc
_____no_output_____
MIT
Robert_Cacho_Proj2_stats_notebook.ipynb
freshskates/machine-learning
**Ranking discoveries by either significance or fold change**For each gene, we have obtained:- a t-statistic- a p-value for the difference between the 2 conditions- a log2 fold change between CD4 T cells and B cellsWe can inspect how fold changes relate to the significance of the differences. **(1 pt)** What do you expect the relationship to be between significance/p-values and fold changes? Answers 12) Significance and fold change are related: the further a gene's fold change is from 0, the smaller its p-value tends to be (i.e. the larger its -log10(p-value)), which is what the volcano plot below shows
#volcano plot ###### results['p']=results['p'].fillna(1) PS2=1e-7 plt.scatter(results['lfc'],-np.log10(results['p']+PS2),s=5,alpha=0.5,color='black') plt.xlabel('Log2 fold change (CD4 T cells/B cells)') plt.ylabel('-log10(p-value)') plt.show() display(results)
_____no_output_____
MIT
Robert_Cacho_Proj2_stats_notebook.ipynb
freshskates/machine-learning
**Multiple testing correction**Now, we will explore how the number of differentially active genes differs depending on how we correct for multiple tests.**(1 pt)** How many genes pass the significance level of 0.05, without performing any correction for multiple testing? Answers 13) - there are 1607 genes that pass the significance level of 0.05
ALPHA=0.05 print((results['p']<=ALPHA).sum())
1607
MIT
Robert_Cacho_Proj2_stats_notebook.ipynb
freshskates/machine-learning
We will use a function that adjusts our p-values using different methods, called "multipletests". You can read about it here: https://www.statsmodels.org/dev/generated/statsmodels.stats.multitest.multipletests.htmlWe will use the following settings:- for Bonferroni correction, we set method='bonferroni'. This will multiply our p-values by the number of tests we did. If the resulting values are greater than 1 they will be clipped to 1.- for Benjamini-Hochberg correction, we set method='fdr_bh' **(2 pts)** How many genes pass the significance level of 0.05, after correcting for multiple testing using the Bonferroni method? What is the revised p-value threshold?**(1 pt)** Would the gene we tested before, IL7R, pass this threshold? Answers 14) - 63 genes pass the significance level of 0.05 after correcting for multiple testing with the Bonferroni method - the revised p-value threshold is alpha divided by the number of tests (k = 13714): 0.05/13714 = 3.6 * 10^-6 15) - No, it would not: IL7R's unadjusted p-value (0.00064) is much larger than the Bonferroni threshold of 3.6 * 10^-6 (equivalently, its Bonferroni-corrected p-value is clipped at 1, well above 0.05)
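A small sketch checking IL7R against the Bonferroni threshold explicitly (assuming the `results` frame built above; it uses only the unadjusted p-values, since the corrected column is added in the next cell):

```python
# Bonferroni-corrected significance threshold: alpha divided by the number of tests
n_tests = results.shape[0]
bonferroni_threshold = 0.05 / n_tests
print('Bonferroni threshold:', bonferroni_threshold)

# IL7R's unadjusted p-value vs the corrected threshold
p_il7r = results.loc['IL7R', 'p']
print('IL7R p-value:', p_il7r)
print('passes Bonferroni threshold:', p_il7r <= bonferroni_threshold)
```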
#multiple testing correction #bonferroni from statsmodels.stats.multitest import multipletests results['p.adj.bonferroni']=multipletests(results['p'], method='bonferroni')[1] FDR=ALPHA plt.hist(results['p'],100) plt.axvline(x=FDR,color='red',linestyle='--') plt.xlabel('Unadjusted p-values') plt.ylabel('Number of genes') plt.show() plt.hist(results['p.adj.bonferroni'],100) #plt.ylim(0,200) plt.axvline(x=FDR,color='red',linestyle='--') plt.xlabel('P-values (Bonferroni corrected)') plt.ylabel('Number of genes') plt.show() plt.show() print('DE Bonferroni',(results['p.adj.bonferroni']<=FDR).sum())
_____no_output_____
MIT
Robert_Cacho_Proj2_stats_notebook.ipynb
freshskates/machine-learning
**(1 pt)** How many genes pass the significance level of 0.05, after correcting for multiple testing using the Benjamini-Hochberg method? Answers 16) - 220
results['p.adj.bh']=multipletests(results['p'], method='fdr_bh')[1] FDR=0.05 plt.hist(results['p'],100) plt.axvline(x=FDR,color='red',linestyle='--') plt.xlabel('Unadjusted p-values') plt.ylabel('Number of genes') plt.show() plt.hist(results['p.adj.bh'],100) plt.ylim(0,2000) plt.axvline(x=FDR,color='red',linestyle='--') plt.xlabel('P-values (Benjamini-Hochberg corrected)') plt.ylabel('Number of genes') plt.show() print('DE BH',(results['p.adj.bh']<=FDR).sum())
_____no_output_____
MIT
Robert_Cacho_Proj2_stats_notebook.ipynb
freshskates/machine-learning
**(1 pt)** Which multiple testing correction is the most stringent? Finally, let's look at our results. Print the significant differential genes and look up a few on the internet. Answers 17) - Bonferroni is the most stringent: it multiplies every p-value by the number of tests (values above 1 are clipped to 1), so far fewer genes pass the 0.05 threshold than with Benjamini-Hochberg (63 vs 220)
results.loc[results['p.adj.bonferroni']<=FDR,:].sort_values(by='lfc')
_____no_output_____
MIT
Robert_Cacho_Proj2_stats_notebook.ipynb
freshskates/machine-learning
Copyright 2018 The TF-Agents Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
_____no_output_____
Apache-2.0
docs/tutorials/6_reinforce_tutorial.ipynb
Zuu97/agents
REINFORCE agent Introduction This example shows how to train a [REINFORCE](http://www-anw.cs.umass.edu/~barto/courses/cs687/williams92simple.pdf) agent on the Cartpole environment using the TF-Agents library, similar to the [DQN tutorial](1_dqn_tutorial.ipynb).![Cartpole environment](images/cartpole.png)We will walk you through all the components in a Reinforcement Learning (RL) pipeline for training, evaluation and data collection. Setup If you haven't installed the following dependencies, run:
!sudo apt-get install -y xvfb ffmpeg !pip install gym !pip install 'imageio==2.4.0' !pip install PILLOW !pip install 'pyglet==1.3.2' !pip install pyvirtualdisplay !pip install tf-agents from __future__ import absolute_import from __future__ import division from __future__ import print_function import base64 import imageio import IPython import matplotlib import matplotlib.pyplot as plt import numpy as np import PIL.Image import pyvirtualdisplay import tensorflow as tf from tf_agents.agents.reinforce import reinforce_agent from tf_agents.drivers import dynamic_step_driver from tf_agents.environments import suite_gym from tf_agents.environments import tf_py_environment from tf_agents.eval import metric_utils from tf_agents.metrics import tf_metrics from tf_agents.networks import actor_distribution_network from tf_agents.replay_buffers import tf_uniform_replay_buffer from tf_agents.trajectories import trajectory from tf_agents.utils import common tf.compat.v1.enable_v2_behavior() # Set up a virtual display for rendering OpenAI gym environments. display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()
_____no_output_____
Apache-2.0
docs/tutorials/6_reinforce_tutorial.ipynb
Zuu97/agents
Hyperparameters
env_name = "CartPole-v0" # @param {type:"string"} num_iterations = 250 # @param {type:"integer"} collect_episodes_per_iteration = 2 # @param {type:"integer"} replay_buffer_capacity = 2000 # @param {type:"integer"} fc_layer_params = (100,) learning_rate = 1e-3 # @param {type:"number"} log_interval = 25 # @param {type:"integer"} num_eval_episodes = 10 # @param {type:"integer"} eval_interval = 50 # @param {type:"integer"}
_____no_output_____
Apache-2.0
docs/tutorials/6_reinforce_tutorial.ipynb
Zuu97/agents
EnvironmentEnvironments in RL represent the task or problem that we are trying to solve. Standard environments can be easily created in TF-Agents using `suites`. We have different `suites` for loading environments from sources such as the OpenAI Gym, Atari, DM Control, etc., given a string environment name.Now let us load the CartPole environment from the OpenAI Gym suite.
env = suite_gym.load(env_name)
_____no_output_____
Apache-2.0
docs/tutorials/6_reinforce_tutorial.ipynb
Zuu97/agents
We can render this environment to see how it looks. A free-swinging pole is attached to a cart. The goal is to move the cart right or left in order to keep the pole pointing up.
#@test {"skip": true} env.reset() PIL.Image.fromarray(env.render())
_____no_output_____
Apache-2.0
docs/tutorials/6_reinforce_tutorial.ipynb
Zuu97/agents
The `time_step = environment.step(action)` statement takes `action` in the environment. The `TimeStep` tuple returned contains the environment's next observation and reward for that action. The `time_step_spec()` and `action_spec()` methods in the environment return the specifications (types, shapes, bounds) of the `time_step` and `action` respectively.
print('Observation Spec:') print(env.time_step_spec().observation) print('Action Spec:') print(env.action_spec())
_____no_output_____
Apache-2.0
docs/tutorials/6_reinforce_tutorial.ipynb
Zuu97/agents
So, we see that observation is an array of 4 floats: the position and velocity of the cart, and the angular position and velocity of the pole. Since only two actions are possible (move left or move right), the `action_spec` is a scalar where 0 means "move left" and 1 means "move right."
time_step = env.reset() print('Time step:') print(time_step) action = np.array(1, dtype=np.int32) next_time_step = env.step(action) print('Next time step:') print(next_time_step)
_____no_output_____
Apache-2.0
docs/tutorials/6_reinforce_tutorial.ipynb
Zuu97/agents
Usually we create two environments: one for training and one for evaluation. Most environments are written in pure python, but they can be easily converted to TensorFlow using the `TFPyEnvironment` wrapper. The original environment's API uses numpy arrays, the `TFPyEnvironment` converts these to/from `Tensors` for you to more easily interact with TensorFlow policies and agents.
train_py_env = suite_gym.load(env_name) eval_py_env = suite_gym.load(env_name) train_env = tf_py_environment.TFPyEnvironment(train_py_env) eval_env = tf_py_environment.TFPyEnvironment(eval_py_env)
_____no_output_____
Apache-2.0
docs/tutorials/6_reinforce_tutorial.ipynb
Zuu97/agents
AgentThe algorithm that we use to solve an RL problem is represented as an `Agent`. In addition to the REINFORCE agent, TF-Agents provides standard implementations of a variety of `Agents` such as [DQN](https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf), [DDPG](https://arxiv.org/pdf/1509.02971.pdf), [TD3](https://arxiv.org/pdf/1802.09477.pdf), [PPO](https://arxiv.org/abs/1707.06347) and [SAC](https://arxiv.org/abs/1801.01290).To create a REINFORCE Agent, we first need an `Actor Network` that can learn to predict the action given an observation from the environment.We can easily create an `Actor Network` using the specs of the observations and actions. We can specify the layers in the network which, in this example, is the `fc_layer_params` argument set to a tuple of `ints` representing the sizes of each hidden layer (see the Hyperparameters section above).
actor_net = actor_distribution_network.ActorDistributionNetwork( train_env.observation_spec(), train_env.action_spec(), fc_layer_params=fc_layer_params)
_____no_output_____
Apache-2.0
docs/tutorials/6_reinforce_tutorial.ipynb
Zuu97/agents
We also need an `optimizer` to train the network we just created, and a `train_step_counter` variable to keep track of how many times the network was updated.
optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=learning_rate) train_step_counter = tf.compat.v2.Variable(0) tf_agent = reinforce_agent.ReinforceAgent( train_env.time_step_spec(), train_env.action_spec(), actor_network=actor_net, optimizer=optimizer, normalize_returns=True, train_step_counter=train_step_counter) tf_agent.initialize()
_____no_output_____
Apache-2.0
docs/tutorials/6_reinforce_tutorial.ipynb
Zuu97/agents
PoliciesIn TF-Agents, policies represent the standard notion of policies in RL: given a `time_step` produce an action or a distribution over actions. The main method is `policy_step = policy.step(time_step)` where `policy_step` is a named tuple `PolicyStep(action, state, info)`. The `policy_step.action` is the `action` to be applied to the environment, `state` represents the state for stateful (RNN) policies and `info` may contain auxiliary information such as log probabilities of the actions.Agents contain two policies: the main policy that is used for evaluation/deployment (agent.policy) and another policy that is used for data collection (agent.collect_policy).
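As a small illustrative sketch (assuming the `tf_agent` and `train_env` created above), you can ask a policy for an action on a single time step and inspect the resulting `PolicyStep` tuple:

```python
# Get one action from the collect policy and inspect the PolicyStep named tuple
example_time_step = train_env.reset()
example_policy_step = tf_agent.collect_policy.action(example_time_step)
print('action:', example_policy_step.action)  # the action to apply to the environment
print('state:', example_policy_step.state)    # empty for this non-RNN policy
print('info:', example_policy_step.info)      # auxiliary info, e.g. log probabilities
```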
eval_policy = tf_agent.policy collect_policy = tf_agent.collect_policy
_____no_output_____
Apache-2.0
docs/tutorials/6_reinforce_tutorial.ipynb
Zuu97/agents
Metrics and EvaluationThe most common metric used to evaluate a policy is the average return. The return is the sum of rewards obtained while running a policy in an environment for an episode, and we usually average this over a few episodes. We can compute the average return metric as follows.
#@test {"skip": true} def compute_avg_return(environment, policy, num_episodes=10): total_return = 0.0 for _ in range(num_episodes): time_step = environment.reset() episode_return = 0.0 while not time_step.is_last(): action_step = policy.action(time_step) time_step = environment.step(action_step.action) episode_return += time_step.reward total_return += episode_return avg_return = total_return / num_episodes return avg_return.numpy()[0] # Please also see the metrics module for standard implementations of different # metrics.
_____no_output_____
Apache-2.0
docs/tutorials/6_reinforce_tutorial.ipynb
Zuu97/agents
Replay BufferIn order to keep track of the data collected from the environment, we will use the TFUniformReplayBuffer. This replay buffer is constructed using specs describing the tensors that are to be stored, which can be obtained from the agent using `tf_agent.collect_data_spec`.
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer( data_spec=tf_agent.collect_data_spec, batch_size=train_env.batch_size, max_length=replay_buffer_capacity)
_____no_output_____
Apache-2.0
docs/tutorials/6_reinforce_tutorial.ipynb
Zuu97/agents
For most agents, the `collect_data_spec` is a `Trajectory` named tuple containing the observation, action, reward etc. Data CollectionAs REINFORCE learns from whole episodes, we define a function to collect an episode using the given data collection policy and save the data (observations, actions, rewards etc.) as trajectories in the replay buffer.
#@test {"skip": true} def collect_episode(environment, policy, num_episodes): episode_counter = 0 environment.reset() while episode_counter < num_episodes: time_step = environment.current_time_step() action_step = policy.action(time_step) next_time_step = environment.step(action_step.action) traj = trajectory.from_transition(time_step, action_step, next_time_step) # Add trajectory to the replay buffer replay_buffer.add_batch(traj) if traj.is_boundary(): episode_counter += 1 # This loop is so common in RL, that we provide standard implementations of # these. For more details see the drivers module.
_____no_output_____
Apache-2.0
docs/tutorials/6_reinforce_tutorial.ipynb
Zuu97/agents
Training the agentThe training loop involves both collecting data from the environment and optimizing the agent's networks. Along the way, we will occasionally evaluate the agent's policy to see how we are doing.The following will take ~3 minutes to run.
#@test {"skip": true} try: %%time except: pass # (Optional) Optimize by wrapping some of the code in a graph using TF function. tf_agent.train = common.function(tf_agent.train) # Reset the train step tf_agent.train_step_counter.assign(0) # Evaluate the agent's policy once before training. avg_return = compute_avg_return(eval_env, tf_agent.policy, num_eval_episodes) returns = [avg_return] for _ in range(num_iterations): # Collect a few episodes using collect_policy and save to the replay buffer. collect_episode( train_env, tf_agent.collect_policy, collect_episodes_per_iteration) # Use data from the buffer and update the agent's network. experience = replay_buffer.gather_all() train_loss = tf_agent.train(experience) replay_buffer.clear() step = tf_agent.train_step_counter.numpy() if step % log_interval == 0: print('step = {0}: loss = {1}'.format(step, train_loss.loss)) if step % eval_interval == 0: avg_return = compute_avg_return(eval_env, tf_agent.policy, num_eval_episodes) print('step = {0}: Average Return = {1}'.format(step, avg_return)) returns.append(avg_return)
_____no_output_____
Apache-2.0
docs/tutorials/6_reinforce_tutorial.ipynb
Zuu97/agents
Visualization PlotsWe can plot return vs global steps to see the performance of our agent. In `Cartpole-v0`, the environment gives a reward of +1 for every time step the pole stays up, and since the maximum number of steps is 200, the maximum possible return is also 200.
#@test {"skip": true} steps = range(0, num_iterations + 1, eval_interval) plt.plot(steps, returns) plt.ylabel('Average Return') plt.xlabel('Step') plt.ylim(top=250)
_____no_output_____
Apache-2.0
docs/tutorials/6_reinforce_tutorial.ipynb
Zuu97/agents
Videos It is helpful to visualize the performance of an agent by rendering the environment at each step. Before we do that, let us first create a function to embed videos in this colab.
def embed_mp4(filename): """Embeds an mp4 file in the notebook.""" video = open(filename,'rb').read() b64 = base64.b64encode(video) tag = ''' <video width="640" height="480" controls> <source src="data:video/mp4;base64,{0}" type="video/mp4"> Your browser does not support the video tag. </video>'''.format(b64.decode()) return IPython.display.HTML(tag)
_____no_output_____
Apache-2.0
docs/tutorials/6_reinforce_tutorial.ipynb
Zuu97/agents
The following code visualizes the agent's policy for a few episodes:
num_episodes = 3 video_filename = 'imageio.mp4' with imageio.get_writer(video_filename, fps=60) as video: for _ in range(num_episodes): time_step = eval_env.reset() video.append_data(eval_py_env.render()) while not time_step.is_last(): action_step = tf_agent.policy.action(time_step) time_step = eval_env.step(action_step.action) video.append_data(eval_py_env.render()) embed_mp4(video_filename)
_____no_output_____
Apache-2.0
docs/tutorials/6_reinforce_tutorial.ipynb
Zuu97/agents
Querying Nexus knowledge graph using SPARQL The goal of this notebook is to learn the basics of SPARQL. Only the READ part of SPARQL will be covered. Prerequisites This notebook assumes you've created a project within the AWS deployment of Nexus. If not, follow the Blue Brain Nexus [Quick Start tutorial](https://bluebrain.github.io/nexus/docs/tutorial/getting-started/quick-start/index.html). Overview You'll work through the following steps: 1. Create a sparql wrapper around your project's SparqlView 2. Explore and navigate data using the SPARQL query language Step 1: Create a sparql wrapper around your project's SparqlView Every project in Blue Brain Nexus comes with a SparqlView enabling you to navigate the data as a graph and to query it using the W3C SPARQL Language. The address of such SparqlView is https://nexus-sandbox.io/v1/views/tutorialnexus/\$PROJECTLABEL/graph/sparql for a project with the label \$PROJECTLABEL. The address of a SparqlView is also called a **SPARQL endpoint**.
#Configuration for the Nexus deployment nexus_deployment = "https://nexus-sandbox.io/v1" token= "your token here" org ="tutorialnexus" project ="$PROJECTLABEL" headers = {} #Let install sparqlwrapper which a python wrapper around sparql client !pip install git+https://github.com/RDFLib/sparqlwrapper # Utility functions to create sparql wrapper around a sparql endpoint from SPARQLWrapper import SPARQLWrapper, JSON, POST, GET, POSTDIRECTLY, CSV import requests def create_sparql_client(sparql_endpoint, http_query_method=POST, result_format= JSON, token=None): sparql_client = SPARQLWrapper(sparql_endpoint) #sparql_client.addCustomHttpHeader("Content-Type", "application/sparql-query") if token: sparql_client.addCustomHttpHeader("Authorization","Bearer {}".format(token)) sparql_client.setMethod(http_query_method) sparql_client.setReturnFormat(result_format) if http_query_method == POST: sparql_client.setRequestMethod(POSTDIRECTLY) return sparql_client # Utility functions import pandas as pd pd.set_option('display.max_colwidth', -1) # Convert SPARQL results into a Pandas data frame def sparql2dataframe(json_sparql_results): cols = json_sparql_results['head']['vars'] out = [] for row in json_sparql_results['results']['bindings']: item = [] for c in cols: item.append(row.get(c, {}).get('value')) out.append(item) return pd.DataFrame(out, columns=cols) # Send a query using a sparql wrapper def query_sparql(query, sparql_client): sparql_client.setQuery(query) result_object = sparql_client.query() if sparql_client.returnFormat == JSON: return result_object._convertJSON() return result_object.convert() # Let create a sparql wrapper around the project sparql view sparqlview_endpoint = nexus_deployment+"/views/"+org+"/"+project+"/graph/sparql" sparqlview_wrapper = create_sparql_client(sparql_endpoint=sparqlview_endpoint, token=token,http_query_method= POST, result_format=JSON)
_____no_output_____
Apache-2.0
src/main/paradox/docs/tutorial/notebooks/Query_Sparql_View.ipynb
clifle/nexus
Step 2: Explore and navigate data using the SPARQL query language Let's write our first query.
select_all_query = """ SELECT ?s ?p ?o WHERE { ?s ?p ?o } OFFSET 0 LIMIT 5 """ nexus_results = query_sparql(select_all_query,sparqlview_wrapper) nexus_df =sparql2dataframe(nexus_results) nexus_df.head()
_____no_output_____
Apache-2.0
src/main/paradox/docs/tutorial/notebooks/Query_Sparql_View.ipynb
clifle/nexus
Most SPARQL queries you'll see will have the anatomy above with:* a **SELECT** clause that lets you select the variables you want to retrieve* a **WHERE** clause defining a set of constraints that the variables should satisfy to be retrieved* **LIMIT** and **OFFSET** clauses to enable pagination* the constraints are usually graph patterns in the form of a **triple** (?s for subject, ?p for property and ?o for object) Multiple triples can be provided as graph patterns to match, but each triple should end with a period. As an example, let's retrieve 5 movies (?movie) along with their titles (?title).
movie_with_title = """ PREFIX vocab: <https://nexus-sandbox.io/v1/vocabs/%s/%s/> PREFIX nxv: <https://bluebrain.github.io/nexus/vocabulary/> Select ?movie ?title WHERE { ?movie a vocab:Movie. ?movie vocab:title ?title. } LIMIT 5 """%(org,project) nexus_results = query_sparql(movie_with_title,sparqlview_wrapper) nexus_df =sparql2dataframe(nexus_results) nexus_df.head()
_____no_output_____
Apache-2.0
src/main/paradox/docs/tutorial/notebooks/Query_Sparql_View.ipynb
clifle/nexus
Note the PREFIX clauses. They are a way to shorten URIs within a SPARQL query. Without them we would have to use full URIs for all properties.The ?movie variable is bound to a URI (the internal Nexus id). Let's retrieve the movieId just like in the MovieLens csv files for simplicity.
movie_with_title = """ PREFIX vocab: <https://nexus-sandbox.io/v1/vocabs/%s/%s/> PREFIX nxv: <https://bluebrain.github.io/nexus/vocabulary/> Select ?movieId ?title WHERE { # Select movies ?movie a vocab:Movie. # Select their movieId value ?movie vocab:movieId ?movieId. # ?movie vocab:title ?title. } LIMIT 5 """%(org,project) nexus_results = query_sparql(movie_with_title,sparqlview_wrapper) nexus_df =sparql2dataframe(nexus_results) nexus_df.head()
_____no_output_____
Apache-2.0
src/main/paradox/docs/tutorial/notebooks/Query_Sparql_View.ipynb
clifle/nexus
In the above query movies are things (or entities) of type vocab:Movie. This is a typical instance query where entities are filtered by their type(s) and then some of their properties are retrieved (here ?title). Let's retrieve everything that is linked (outgoing) to the movies. The * character in the SELECT clause indicates to retrieve all variables: ?movie, ?p, ?o
movie_with_properties = """ PREFIX vocab: <https://nexus-sandbox.io/v1/vocabs/%s/%s/> PREFIX nxv: <https://bluebrain.github.io/nexus/vocabulary/> Select * WHERE { ?movie a vocab:Movie. ?movie ?p ?o. } LIMIT 20 """%(org,project) nexus_results = query_sparql(movie_with_properties,sparqlview_wrapper) nexus_df =sparql2dataframe(nexus_results) nexus_df.head(20)
_____no_output_____
Apache-2.0
src/main/paradox/docs/tutorial/notebooks/Query_Sparql_View.ipynb
clifle/nexus
As a little exercise, write a query retrieving incoming entities to movies. You can copy and paste the query above and modify it.Hint: ?s ?p ?o can be read as: ?o is linked to ?s with an outgoing link.Do you have results? One possible solution is sketched below.
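One possible solution, as a sketch (try your own first): we simply put the movie in the object position of the triple. Note that you may well get no results, since ratings and tags in this dataset reference movies by their movieId literal rather than by their URI.

```python
movies_incoming_links = """
PREFIX vocab: <https://nexus-sandbox.io/v1/vocabs/%s/%s/>
PREFIX nxv: <https://bluebrain.github.io/nexus/vocabulary/>

Select ?s ?p ?movie
 WHERE  {
    ?movie a vocab:Movie.
    ?s ?p ?movie.
} LIMIT 20
"""%(org,project)

nexus_results = query_sparql(movies_incoming_links,sparqlview_wrapper)
nexus_df = sparql2dataframe(nexus_results)
nexus_df.head(20)
```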
#Your query here
_____no_output_____
Apache-2.0
src/main/paradox/docs/tutorial/notebooks/Query_Sparql_View.ipynb
clifle/nexus
Let's retrieve the movie ratings
movie_with_properties = """ PREFIX vocab: <https://nexus-sandbox.io/v1/vocabs/%s/%s/> PREFIX nxv: <https://bluebrain.github.io/nexus/vocabulary/> Select ?userId ?movieId ?rating ?timestamp WHERE { ?movie a vocab:Movie. ?movie vocab:movieId ?movieId. ?ratingNode vocab:movieId ?ratingmovieId. ?ratingNode vocab:rating ?rating. ?ratingNode vocab:userId ?userId. ?ratingNode vocab:timestamp ?timestamp. # Somehow pandas is movieId as double for rating FILTER(xsd:integer(?ratingmovieId) = ?movieId) } LIMIT 20 """%(org,project) nexus_results = query_sparql(movie_with_properties,sparqlview_wrapper) nexus_df =sparql2dataframe(nexus_results) nexus_df.head(20)
_____no_output_____
Apache-2.0
src/main/paradox/docs/tutorial/notebooks/Query_Sparql_View.ipynb
clifle/nexus
As a little exercise, write a query retrieving the movie tags along with the user id and timestamp. You can copy and paste the query above and modify it. One possible solution is sketched below.
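One possible solution, as a sketch. It assumes Tag resources carry vocab:tag, vocab:userId and vocab:timestamp properties (as in the MovieLens tags file and the queries further below); adjust the property names if your data differs.

```python
movie_tags_with_users = """
PREFIX vocab: <https://nexus-sandbox.io/v1/vocabs/%s/%s/>
PREFIX nxv: <https://bluebrain.github.io/nexus/vocabulary/>

Select ?userId ?movieId ?tagvalue ?timestamp
 WHERE  {
    ?tag a vocab:Tag.
    ?tag vocab:movieId ?movieId.
    ?tag vocab:tag ?tagvalue.
    ?tag vocab:userId ?userId.
    ?tag vocab:timestamp ?timestamp.
} LIMIT 20
"""%(org,project)

nexus_results = query_sparql(movie_tags_with_users,sparqlview_wrapper)
nexus_df = sparql2dataframe(nexus_results)
nexus_df.head(20)
```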
#Your query here
_____no_output_____
Apache-2.0
src/main/paradox/docs/tutorial/notebooks/Query_Sparql_View.ipynb
clifle/nexus
Aggregate queries [Aggregates](https://www.w3.org/TR/sparql11-query/aggregates) apply some operations over a group of solutions.Available aggregates are: COUNT, SUM, MIN, MAX, AVG, GROUP_CONCAT, and SAMPLE.We will not see them all but we'll look at some examples. The next query will compute the average rating score for 'funny' movies.
tag_value = "funny" movie_avg_ratings = """ PREFIX vocab: <https://nexus-sandbox.io/v1/vocabs/%s/%s/> PREFIX nxv: <https://bluebrain.github.io/nexus/vocabulary/> Select ( AVG(?ratingvalue) AS ?score) WHERE { # Select movies ?movie a vocab:Movie. # Select their movieId value ?movie vocab:movieId ?movieId. ?tag vocab:movieId ?movieId. ?tag vocab:tag ?tagvalue. FILTER(?tagvalue = "%s"). # Keep movies with ratings ?rating vocab:movieId ?ratingmovidId. FILTER(xsd:integer(?ratingmovidId) = xsd:integer(?movieId)) ?rating vocab:rating ?ratingvalue. } """ %(org,project,tag_value) nexus_results = query_sparql(movie_avg_ratings,sparqlview_wrapper) nexus_df =sparql2dataframe(nexus_results) display(nexus_df.head(20)) nexus_df=nexus_df.astype(float)
_____no_output_____
Apache-2.0
src/main/paradox/docs/tutorial/notebooks/Query_Sparql_View.ipynb
clifle/nexus
Retrieve the number of tags per movie. Can be a little bit slow depending on the size of your data.
nbr_tags_per_movie = """ PREFIX vocab: <https://nexus-sandbox.io/v1/vocabs/%s/%s/> PREFIX nxv: <https://bluebrain.github.io/nexus/vocabulary/> Select ?title (COUNT(?tagvalue) as ?tagnumber) WHERE { # Select movies ?movie a vocab:Movie. # Select their movieId value ?movie vocab:movieId ?movieId. ?tag a vocab:Tag. ?tag vocab:movieId ?tagmovieId. FILTER(?tagmovieId = ?movieId) ?movie vocab:title ?title. ?tag vocab:tag ?tagvalue. } GROUP BY ?title ORDER BY DESC(?tagnumber) LIMIT 10 """ %(org,project) nexus_results = query_sparql(nbr_tags_per_movie,sparqlview_wrapper) nexus_df =sparql2dataframe(nexus_results) display(nexus_df.head(20)) #Let plot the result nexus_df.tagnumber = pd.to_numeric(nexus_df.tagnumber) nexus_df.plot(x="title",y="tagnumber",kind="bar")
_____no_output_____
Apache-2.0
src/main/paradox/docs/tutorial/notebooks/Query_Sparql_View.ipynb
clifle/nexus
The next query will retrieve movies along with the users that tagged them, separated by a comma
# Group Concat movie_tag_users = """ PREFIX vocab: <https://nexus-sandbox.io/v1/vocabs/%s/%s/> PREFIX nxv: <https://bluebrain.github.io/nexus/vocabulary/> Select ?movieId (group_concat(DISTINCT ?userId;separator=",") as ?users) WHERE { # Select movies ?movie a vocab:Movie. # Select their movieId value ?movie vocab:movieId ?movieId. ?tag vocab:movieId ?movieId. ?tag vocab:userId ?userId. } GROUP BY ?movieId LIMIT 10 """%(org,project) nexus_results = query_sparql(movie_tag_users,sparqlview_wrapper) nexus_df =sparql2dataframe(nexus_results) nexus_df.head(20)
_____no_output_____
Apache-2.0
src/main/paradox/docs/tutorial/notebooks/Query_Sparql_View.ipynb
clifle/nexus
Model Data Prep
df_log = val_data.copy() probas_cols = ["fla_" + str(i) for i in range(1,28)] + ["cam_" + str(i) for i in range(1,28)] +\ ["res_" + str(i) for i in range(1,28)] \ + ["vca_" + str(i) for i in range(1,28)] \ X = df_log[probas_cols] y = df_log['labels'].values X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, test_size=0.2, random_state=random_state) from scipy.stats import randint as sp_randint from scipy.stats import uniform as sp_uniform from sklearn.model_selection import RandomizedSearchCV, GridSearchCV n_HP_points_to_test = 100 param_test ={'num_leaves': sp_randint(6, 50), 'min_child_samples': sp_randint(100, 500), 'min_child_weight': [1e-5, 1e-3, 1e-2, 1e-1, 1, 1e1, 1e2, 1e3, 1e4], 'subsample': sp_uniform(loc=0.2, scale=0.8), 'colsample_bytree': sp_uniform(loc=0.4, scale=0.6), 'reg_alpha': [0, 1e-1, 1, 2, 5, 7, 10, 50, 100], 'reg_lambda': [0, 1e-1, 1, 5, 10, 20, 50, 100], # "bagging_fraction" : [0.5, 0.6, 0.7, 0.8, 0.9], # "feature_fraction":[0.5, 0.6, 0.7, 0.8, 0.9] } fit_params={ "early_stopping_rounds":100, "eval_metric" : 'multi_logloss', "eval_set" : [(X_test,y_test)], 'eval_names': ['valid'], #'callbacks': [lgb.reset_parameter(learning_rate=learning_rate_010_decay_power_099)], 'verbose': 100, 'categorical_feature': 'auto'} clf = lgb.LGBMClassifier(num_iteration=1000, max_depth=-1, random_state=314, silent=True, metric='multi_logloss', n_jobs=4, early_stopping_rounds=100, num_class=num_class, objective= "multiclass") gs = RandomizedSearchCV( estimator=clf, param_distributions=param_test, n_iter=n_HP_points_to_test, cv=3, refit=True, random_state=314, verbose=True) if do_gridsearch==True: gs.fit(X_train, y_train, **fit_params) print('Best score reached: {} with params: {} '.format(gs.best_score_, gs.best_params_)) # opt_parameters = gs.best_params_ opt_parameters = {'colsample_bytree': 0.5284213741879101, 'min_child_samples': 125, 'min_child_weight': 10.0, 'num_leaves': 22, 'reg_alpha': 0.1, 'reg_lambda': 20, 'subsample': 0.3080033455431848}
_____no_output_____
MIT
Boosted Late-Fusion.ipynb
Sakina8/Multimodal-Classification2020
Model Training
### Run lightgbm to get weights for different class logits t0 = time.time() model_met = 'fit' #'xgb'#'train' #fit params = { "objective" : "multiclass", "num_class" : num_class, "num_leaves" : 60, "max_depth": -1, "learning_rate" : 0.01, "bagging_fraction" : 0.9, # subsample "feature_fraction" : 0.9, # colsample_bytree "bagging_freq" : 5, # subsample_freq "bagging_seed" : 2018, "verbosity" : -1 } lgtrain, lgval = lgb.Dataset(X_train, y_train), lgb.Dataset(X_test, y_test) if model_met == 'train': params.update(opt_parameters) params.update(fit_params) lgbmodel = lgb.train(params, lgtrain, valid_sets=[lgtrain, lgval], num_iterations = 1000, metric= 'multi_logloss') train_logits = lgbmodel.predict(X_train) test_logits = lgbmodel.predict(X_test) train_pred = np.argmax(train_logits, axis=1) test_pred = np.argmax(test_logits, axis=1) elif model_met == 'xgb': dtrain = xgb.DMatrix(X_train, label=y_train) dtrain.save_binary('xgb_train.buffer') dtest = xgb.DMatrix(X_test, label=y_test) num_round = 200 xgb_param = {'max_depth': 5, 'eta': 0.1, 'seed':2020, 'verbosity':1, 'objective': 'multi:softmax', 'num_class':num_class} xgb_param['nthread'] = 4 xgb_param['eval_metric'] = 'mlogloss' evallist = [(dtest, 'eval'), (dtrain, 'train')] bst = xgb.train(xgb_param, dtrain, num_round, evallist , early_stopping_rounds=10 ) train_logits = bst.predict(xgb.DMatrix(X_train), ntree_limit=bst.best_ntree_limit) test_logits = bst.predict(xgb.DMatrix(X_test), ntree_limit=bst.best_ntree_limit) train_pred = train_logits test_pred = test_logits else: lgbmodel = lgb.LGBMClassifier(**clf.get_params()) #set optimal parameters lgbmodel.set_params(**opt_parameters) lgbmodel.fit(X_train, y_train, **fit_params) train_logits = lgbmodel.predict(X_train) test_logits = lgbmodel.predict(X_test) train_pred = train_logits test_pred = test_logits print("Validation F1: {} and Training F1: {} ".format( f1_score(y_test, test_pred, average='macro'), f1_score(y_train, train_pred, average='macro'))) if model_met == 'train': feat_imp = pd.DataFrame({'feature':probas_cols, 'logit_kind': [i.split('_')[0] for i in probas_cols], 'imp':lgbmodel.feature_importance()/sum(lgbmodel.feature_importance())}) lgbmodel.save_model('lgb_classifier_81feats.txt', num_iteration=lgbmodel.best_iteration) print("""Feature Importances by logits group: """, feat_imp.groupby(['logit_kind'])['imp'].sum()) else: feat_imp = pd.DataFrame({'feature':probas_cols, 'logit_kind': [i.split('_')[0] for i in probas_cols], 'imp':lgbmodel.feature_importances_/sum(lgbmodel.feature_importances_)}) print("""Feature Importances by logits group: """, feat_imp.groupby(['logit_kind'])['imp'].sum()) import shap explainer = shap.TreeExplainer(lgbmodel) shap_values = explainer.shap_values(X) print("Time Elapsed: {:}.".format(format_time(time.time() - t0))) for n, path in enumerate(['/kaggle/input/textphase1/', '/kaggle/input/testphase2/']): phase = n+1 if phase==1: test_logits_path = test_logits_path_phase1 else: test_logits_path = test_logits_path_phase2 Preprocess.prepare_test(text_col, path, phase) X_test_phase1= Preprocess.X_test test_phase1 = preparelogits_df(test_logits_path, df=X_test_phase1, val_labels=None, **kwargs) phase1_logits = lgbmodel.predict(test_phase1[probas_cols].values) if model_met == 'train': predictions = np.argmax(phase1_logits, axis=1) elif model_met == 'xgb': phase1_logits = bst.predict(xgb.DMatrix(test_phase1[probas_cols]), ntree_limit=bst.best_ntree_limit) predictions = phase1_logits else: predictions = phase1_logits X_test_phase1['prediction_model']= predictions 
X_test_phase1['Prdtypecode']=X_test_phase1['prediction_model'].map(Preprocess.dict_id_to_code) print(X_test_phase1['Prdtypecode'].value_counts()) X_test_phase1=X_test_phase1.drop(['prediction_model','Title','Description'],axis=1) X_test_phase1.to_csv(f'y_test_task1_phase{phase}_pred_.tsv',sep='\t',index=False)
_____no_output_____
MIT
Boosted Late-Fusion.ipynb
Sakina8/Multimodal-Classification2020
Example usage of the O-C tools This example shows how to construct the O-C diagram of the RR Lyrae star OGLE-BLG-RRLYR-02950 and fit it with MCMC. We start by importing some libraries
import numpy as np import oc_tools as octs
_____no_output_____
MIT
06498_oc.ipynb
gerhajdu/rrl_binaries_1
We read in the data, set the period used to construct the O-C diagram (and to fold the light curve to construct the template curves, etc.), and the orders of the Fourier series we will fit to the light curve in the first and second iterations in the process
who = "06498" period = 0.589490 order1 = 10 order2 = 15 jd3, mag3 = np.loadtxt('data/{:s}.o3'.format(who), usecols=[0,1], unpack=True) jd4, mag4 = np.loadtxt('data/{:s}.o4'.format(who), usecols=[0,1], unpack=True)
_____no_output_____
MIT
06498_oc.ipynb
gerhajdu/rrl_binaries_1
We correct for possible average magnitude and amplitude differences between the OGLE-III and IV photometries by moving the intensity average of the former to the intensity average measured for the latter. The variables "jd" and "mag" contain the merged timings and magnitudes of the OGLE-III + IV photometry, which are used from here on to calculate the O-C values
mag3_shift=octs.shift_int(jd3, mag3, jd4, mag4, order1, period, plot=True) jd = np.hstack((jd3,jd4)) mag = np.hstack((mag3_shift, mag4))
_____no_output_____
MIT
06498_oc.ipynb
gerhajdu/rrl_binaries_1
Calling the split_lc_seasons() function provides us with an array containing masks splitting the combined light curve into short sections, depending on the number of points. Optionally, the default splitting can be overridden by using the optional parameters "limits" and "into". For example, calling the function as: octs.split_lc_seasons(jd, plot=True, mag = mag, limits = np.array((0, 8, np.inf)), into = np.array((0, 2))) will always split seasons with at least nine points into two separate segments
splits = octs.split_lc_seasons(jd, plot=True, mag = mag)
_____no_output_____
MIT
06498_oc.ipynb
gerhajdu/rrl_binaries_1
The function calc_oc_points() fits the light curve of the variable to produce a template, and uses it to determine the O-C points of the individual segments
oc_jd, oc_oc = octs.calc_oc_points(jd, mag, period, order1, splits, figure=True)
_____no_output_____
MIT
06498_oc.ipynb
gerhajdu/rrl_binaries_1
We make a guess at the binary parameters
e = 0.37 P_orb = 2800. T_peri = 6040 a_sini = 0.011 omega = -0.7 a= -8e-03 b= 3e-06 c= -3.5e-10 params = np.asarray((e, P_orb, T_peri, a_sini, omega, a, b, c)) lower_bounds = np.array((0., 100., -np.inf, 0.0, -np.inf, -np.inf, -np.inf, -np.inf)) upper_bounds = np.array((0.99, 6000., np.inf, 1.0, np.inf, np.inf, np.inf, np.inf))
_____no_output_____
MIT
06498_oc.ipynb
gerhajdu/rrl_binaries_1
We use the above guesses as the starting point (dashed grey line on the plot below) to find the O-C LTTE solution of the first iteration of our procedure. The yellow line on the plot shows the fit. The vertical blue bar shows the timing of the periastron passage. Note that this function also provides the timings of the individual observations corrected for this initial O-C solution
params2, jd2 = octs.fit_oc1(oc_jd, oc_oc, jd, params, lower_bounds, upper_bounds)
_____no_output_____
MIT
06498_oc.ipynb
gerhajdu/rrl_binaries_1
We use the initial solution as the starting point for the MCMC fit, therefore we prepare it first by transforming $e$ and $\omega$ to $\sqrt{e}\sin{\omega}$ and $\sqrt{e}\cos{\omega}$. For each parameter, we also have a lower and higher limit in its prior, but the values given for $\sqrt{e}\sin{\omega}$ and $\sqrt{e}\cos{\omega}$ are ignored, as these are handled separately within the function checking the priors
start = np.zeros_like(params2) start[0:3] = params2[1:4] start[3] = np.sqrt(params2[0]) * np.sin(params2[4]) start[4] = np.sqrt(params2[0]) * np.cos(params2[4]) start[5:] = params2[5:] prior_ranges = np.asanyarray([[start[0]*0.9, start[0]*1.1], [start[1]-start[0]/4., start[1]+start[0]/4.], [0., 0.057754266], [0., 0.], [0., 0.], [-1., 1.], [-1e-4, 1e-4], [-1e-8, 1e-8]])
_____no_output_____
MIT
06498_oc.ipynb
gerhajdu/rrl_binaries_1
We set a random seed to get reproducible results, then prepare the initial positions of the 200 walkers we are using during the fitting. During this, we check explicitly that these correspond to a position with a finite prior (i.e., they are not outside of the prior ranges defined above)
np.random.seed(0) walkers = 200 random_scales = np.array((1e+1, 1e+1, 1e-4, 1e-2, 1e-2, 1e-3, 2e-7, 5e-11)) pos = np.zeros((walkers, start.size)) for i in range(walkers): pos[i,:] = start + random_scales * np.random.normal(size=8) while np.isinf(octs.log_prior(pos[i,:], prior_ranges)): pos[i,:] = start + random_scales * np.random.normal(size=8)
_____no_output_____
MIT
06498_oc.ipynb
gerhajdu/rrl_binaries_1
We recalculate the O-C points, but this time we use a higher-order Fourier series to fit the light curve with the modified timings, and we also calculate errors using bootstrapping
oc_jd, oc_oc, oc_sd = octs.calc_oc_points(jd, mag, period, order2, splits, bootstrap_times = 500, jd_mod = jd2, figure=True)
_____no_output_____
MIT
06498_oc.ipynb
gerhajdu/rrl_binaries_1