12. Use Bayes' rule to convert from P(W1|T) to P(T|W1). Write a function `Ptw` to reflect this. Hints:
* Call your other functions.
* You may need to write a function for P(W1), and you may need a new counter for `data['first_word']`.
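For reference, Bayes' rule in this notation is

$$P(T \mid W_1) = \frac{P(W_1 \mid T)\,P(T)}{P(W_1)}$$

so `Ptw` can be computed from `Pwt`, `P`, and a new `Pw` function.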
def Pw(W1=''):
    words = Counter(data['first_word'])
    return words[W1] / len(data['first_word'])

def Ptw(T='', W1=''):
    return (Pwt(W1=W1, T=T) * P(T=T)) / Pw(W1=W1)
_____no_output_____
AFL-3.0
A2/A2-probability-information-theory.ipynb
Colmine/NLP_Stuff
13. What is P(T='movie'|W1='the')? What about P(T='person'|W1='the')? What about P(T='drug'|W1='the')? What about P(T='place'|W1='the')? What about P(T='company'|W1='the')?
Ptw(T='movie', W1='the')
Ptw(T='person', W1='the')
Ptw(T='drug', W1='the')
Ptw(T='place', W1='the')
Ptw(T='company', W1='the')
_____no_output_____
AFL-3.0
A2/A2-probability-information-theory.ipynb
Colmine/NLP_Stuff
14. Given this, if the word 'the' is found in a name, what is the most likely type?
Pwt('the', 'movie')
_____no_output_____
AFL-3.0
A2/A2-probability-information-theory.ipynb
Colmine/NLP_Stuff
15. Is Ptw(T='movie'|W1='the') the same as Pwt(W1='the'|T='movie')? Why or why not?
Ptw(T='movie', W1='the')
Pwt(W1='the', T='movie')
_____no_output_____
AFL-3.0
A2/A2-probability-information-theory.ipynb
Colmine/NLP_Stuff
_Lambda School Data Science - Model Validation_

# Feature Selection

Objectives:
* Feature importance
* Feature selection

Yesterday we saw that...
* Less isn't always more (but sometimes it is)
* More isn't always better (but sometimes it is)

![Image of Terry Crews](https://media.giphy.com/media/b8kHKZq3YFfnq/giphy.gif)

Saabas, Ando. [Feature Selection (4 parts)](https://blog.datadive.net/selecting-good-features-part-i-univariate-selection/)

>There are in general two reasons why feature selection is used:
>1. Reducing the number of features, to reduce overfitting and improve the generalization of models.
>2. To gain a better understanding of the features and their relationship to the response variables.
>
>These two goals are often at odds with each other and thus require different approaches: depending on the data at hand a feature selection method that is good for goal (1) isn't necessarily good for goal (2) and vice versa. What seems to happen often though is that people use their favourite method (or whatever is most conveniently accessible from their tool of choice) indiscriminately, especially methods more suitable for (1) for achieving (2).

While they are not always mutually exclusive, here's a little bit about what's going on with these two goals.

### Goal 1: Reducing Features, Reducing Overfitting, Improving Generalization of Models

This is when you're actually trying to engineer a packaged machine learning pipeline that is streamlined and highly generalizable to novel data as more is collected, and you don't really care "how" it works as long as it does work.

Approaches that are good at this tend to fail at Goal 2 because they handle multicollinearity by (sometimes randomly) choosing/indicating just one of a group of strongly correlated features. This is good for reducing redundancy, but bad if you want to interpret the data.

### Goal 2: Gaining a Better Understanding of the Features and their Relationships

This is when you want a good, interpretable model, or you're doing data science more for analysis than engineering. A company asks you "How do we increase X?" and you can tell them all the factors that correlate with it and their predictive power.

Approaches that are good at this tend to fail at Goal 1 because, well, they *don't* handle the multicollinearity problem. If three features are all strongly correlated to each other as well as to the output, they will all have high scores. But including all three features in a model is redundant.

Each part in Saabas's blog series describes an increasingly complex (and computationally costly) set of methods for feature selection and interpretation.

The ultimate comparison is completed using an adaptation of a dataset called the Friedman #1 regression dataset, from Friedman, Jerome H.'s [Multivariate Adaptive Regression Splines](http://www.stat.ucla.edu/~cocteau/stat204/readings/mars.pdf).

>The data is generated according to the formula $y = 10\sin(\pi X_1 X_2) + 20(X_3 - 0.5)^2 + 10 X_4 + 5 X_5 + \epsilon$, where $X_1$ to $X_5$ are drawn from a uniform distribution and $\epsilon$ is the standard normal deviate $N(0,1)$. Additionally, the original dataset had five noise variables $X_6,\dots,X_{10}$, independent of the response variable. We will increase the number of variables further and add four variables $X_{11},\dots,X_{14}$, each of which is very strongly correlated with $X_1,\dots,X_4$, respectively, generated by $f(x) = x + N(0, 0.01)$. This yields a correlation coefficient of more than 0.999 between the variables.

This will illustrate how different feature ranking methods deal with correlations in the data.

**Okay, that's a lot--here's what you need to know:**
1. $X_1$ and $X_2$ have the same non-linear relationship to $Y$ -- though together they do have a not-quite-linear relationship to $Y$ (with sinusoidal noise--but the range of the values doesn't let it go negative)
2. $X_3$ has a quadratic relationship with $Y$
3. $X_4$ and $X_5$ have linear relationships to $Y$, with $X_4$ being weighted twice as heavily as $X_5$
4. $X_6$ through $X_{10}$ are random and have NO relationship to $Y$
5. $X_{11}$ through $X_{14}$ correlate strongly to $X_1$ through $X_4$ respectively (and thus have the same respective relationships with $Y$)

This will help us see the difference between the models in selecting and interpreting features:
* how well they deal with multicollinearity (5)
* how well they identify noise (4)
* how well they identify different kinds of relationships
* how well they identify/interpret the predictive power of individual variables
# Imports
import numpy as np

# Create the dataset
# from https://blog.datadive.net/selecting-good-features-part-iv-stability-selection-rfe-and-everything-side-by-side/
np.random.seed(42)

size = 1500  # I increased the size from what's given in the link
Xs = np.random.uniform(0, 1, (size, 14))  # Changed variable name to Xs to use X later

# "Friedman #1" regression problem
Y = (10 * np.sin(np.pi*Xs[:,0]*Xs[:,1]) + 20*(Xs[:,2] - .5)**2 +
     10*Xs[:,3] + 5*Xs[:,4] + np.random.normal(0, 1))

# Add 4 additional correlated variables (correlated with X1-X4)
Xs[:,10:] = Xs[:,:4] + np.random.normal(0, .025, (size, 4))

names = ["X%s" % i for i in range(1, 15)]

# Putting it into pandas--because... I like pandas. And usually you'll be
# working with dataframes not arrays (you'll care what the column titles are)
import pandas as pd

friedmanX = pd.DataFrame(data=Xs, columns=names)
friedmanY = pd.Series(data=Y, name='Y')
friedman = friedmanX.join(friedmanY)

friedman.head()
_____no_output_____
MIT
module3-model-interpretation/LS_DS_243_BONUS_Feature_Selection.ipynb
standroidbeta/DS-Unit-2-Sprint-4-Practicing-Understanding
We want to be able to look at classification problems too, so let's bin the Y values to create categorical features. They should have *roughly* similar relationships to the X features as Y does.
# First, let's take a look at what Y looks like
import matplotlib.pyplot as plt
import seaborn as sns

sns.distplot(friedmanY);
_____no_output_____
MIT
module3-model-interpretation/LS_DS_243_BONUS_Feature_Selection.ipynb
standroidbeta/DS-Unit-2-Sprint-4-Practicing-Understanding
That's pretty normal. Let's make two binary categories--one balanced, one unbalanced--to see the difference.
* the balanced binary variable will be split evenly in half
* the unbalanced binary variable will indicate whether $Y < 5$
friedman['Y_bal'] = friedman['Y'].apply(lambda y: 1 if (y < friedman.Y.median()) else 0)
friedman['Y_un'] = friedman['Y'].apply(lambda y: 1 if (y < 5) else 0)

print(friedman.Y_bal.value_counts(), '\n\n', friedman.Y_un.value_counts())
friedman.head()

# Finally, let's put it all into our usual X and y's
# (I already have the X dataframe as friedmanX, but I'm working backward to
# follow a usual flow)
X = friedman.drop(columns=['Y', 'Y_bal', 'Y_un'])
y = friedman.Y
y_bal = friedman.Y_bal
y_un = friedman.Y_un
_____no_output_____
MIT
module3-model-interpretation/LS_DS_243_BONUS_Feature_Selection.ipynb
standroidbeta/DS-Unit-2-Sprint-4-Practicing-Understanding
Alright! Let's get to it! Remember, with each part we are increasing the complexity of the analysis and thereby increasing the computational cost and runtime.

Even before univariate selection--which compares each feature to the output feature one by one--there is a [VarianceThreshold](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.VarianceThreshold.html#sklearn.feature_selection.VarianceThreshold) object in `sklearn.feature_selection`. It defaults to getting rid of any features that are the same across all samples--great for cleaning data in that respect. The `threshold` parameter defaults to `0` to give that behavior; if you change it, make sure you have a good reason. Use with caution. (A short sketch of this filter appears at the end of Part 1.)

## Part 1: univariate selection

* Best for goal 2 - getting "a better understanding of the data, its structure and characteristics"
* Unable to remove redundancy (for example, selecting only the best feature among a subset of strongly correlated features)
* Super fast - can be used for baseline models or just after baseline

[sci-kit's univariate feature selection objects and techniques](https://scikit-learn.org/stable/modules/feature_selection.html#univariate-feature-selection)

Y (continuous output)

Options (they do what they sound like they do):
* SelectKBest
* SelectPercentile

Both take the same parameter options for `score_func`:
* `f_regression`: scores by correlation coefficient, f value, p value--basically automates what you can do by looking at a correlation matrix, except without the ability to recognize collinearity
* `mutual_info_regression`: can capture non-linear correlations, but doesn't handle noise well

Let's take a look at mutual information (MI).
import sklearn.feature_selection as fe

MIR = fe.SelectKBest(fe.mutual_info_regression, k='all').fit(X, y)

MIR_scores = pd.Series(data=MIR.scores_, name='MI_Reg_Scores', index=names)
MIR_scores
_____no_output_____
MIT
module3-model-interpretation/LS_DS_243_BONUS_Feature_Selection.ipynb
standroidbeta/DS-Unit-2-Sprint-4-Practicing-Understanding
Y_bal (balanced binary output)

Options:
* SelectKBest
* SelectPercentile

These options will cut out features with error rates above a certain tolerance level, defined by the parameter `alpha`:
* SelectFpr (false positive rate--false positives predicted/total negatives in dataset)
* SelectFdr (false discovery rate--false positives predicted/total positives predicted)
* ~~SelectFwe (family-wise error--for multinomial classification tasks)~~

All have the same options for the parameter `score_func`:
* `chi2`
* `f_classif`
* `mutual_info_classif`
MIC_b = fe.SelectFpr(fe.mutual_info_classif).fit(X, y_bal)

MIC_b_scores = pd.Series(data=MIC_b.scores_, name='MIC_Bal_Scores', index=names)
MIC_b_scores
_____no_output_____
MIT
module3-model-interpretation/LS_DS_243_BONUS_Feature_Selection.ipynb
standroidbeta/DS-Unit-2-Sprint-4-Practicing-Understanding
Y_un (unbalanced binary output)
MIC_u = fe.SelectFpr(fe.mutual_info_classif).fit(X, y_un)

MIC_u_scores = pd.Series(data=MIC_u.scores_, name='MIC_Unbal_Scores', index=names)
MIC_u_scores
_____no_output_____
MIT
module3-model-interpretation/LS_DS_243_BONUS_Feature_Selection.ipynb
standroidbeta/DS-Unit-2-Sprint-4-Practicing-Understanding
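As a quick aside before moving on: the `VarianceThreshold` filter mentioned just before Part 1 can be applied in one line. This is a minimal sketch (not part of the original lesson), run on the same `X` as above -- with the default `threshold=0` nothing should be dropped here, since none of these synthetic features are constant.

```python
# Drop features whose variance is at or below the threshold (default 0 = constant features)
vt = fe.VarianceThreshold(threshold=0)
X_vt = vt.fit_transform(X)

# Which columns survived the filter?
pd.Series(data=vt.get_support(), name='VT_support', index=names)
```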
## Part 2: linear models and regularization

* L1 regularization (Lasso for regression) is best for goal 1: "produces sparse solutions and as such is very useful selecting a strong subset of features for improving model performance" (it forces coefficients to zero, telling you which you could remove--but doesn't handle multicollinearity)
* L2 regularization (Ridge for regression) is best for goal 2: "can be used for data interpretation due to its stability and the fact that useful features tend to have non-zero coefficients"
* Also fast

[sci-kit's L1 feature selection](https://scikit-learn.org/stable/modules/feature_selection.html#l1-based-feature-selection) (can easily be switched to L2 using the parameter `penalty='l2'` for categorical targets, or using `Ridge` instead of `Lasso` for continuous targets)

We won't do this here, because:
1. You know regression
2. The same principles apply as shown in Part 3 below with `SelectFromModel` (a short L1-based sketch appears after the `SelectFromModel` example)
3. There's way cooler stuff coming up

## Part 3: random forests

* Best for goal 1, not 2, because:
  * strong features can end up with low scores
  * biased towards variables with many categories
* "require very little feature engineering and parameter tuning"
* Takes a little more time depending on your dataset - but a popular technique

[sci-kit's implementation of tree-based feature selection](https://scikit-learn.org/stable/modules/feature_selection.html#tree-based-feature-selection)

Y
from sklearn.ensemble import RandomForestRegressor as RFR

# Fitting a random forest regression
rfr = RFR().fit(X, y)

# Creating scores from feature_importances_ ranking (some randomness here)
rfr_scores = pd.Series(data=rfr.feature_importances_, name='RFR', index=names)
rfr_scores
/home/seek/anaconda3/lib/python3.7/site-packages/sklearn/ensemble/forest.py:246: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22. "10 in version 0.20 to 100 in 0.22.", FutureWarning)
MIT
module3-model-interpretation/LS_DS_243_BONUS_Feature_Selection.ipynb
standroidbeta/DS-Unit-2-Sprint-4-Practicing-Understanding
Y_bal
from sklearn.ensemble import RandomForestClassifier as RFC

# Fitting a Random Forest Classifier
rfc_b = RFC().fit(X, y_bal)

# Creating scores from feature_importances_ ranking (some randomness here)
rfc_b_scores = pd.Series(data=rfc_b.feature_importances_, name='RFC_bal', index=names)
rfc_b_scores
/home/seek/anaconda3/lib/python3.7/site-packages/sklearn/ensemble/forest.py:246: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22. "10 in version 0.20 to 100 in 0.22.", FutureWarning)
MIT
module3-model-interpretation/LS_DS_243_BONUS_Feature_Selection.ipynb
standroidbeta/DS-Unit-2-Sprint-4-Practicing-Understanding
Y_un
# Fitting a Random Forest Classifier
rfc_u = RFC().fit(X, y_un)

# Creating scores from feature_importances_ ranking (some randomness here)
rfc_u_scores = pd.Series(data=rfc_u.feature_importances_, name='RFC_unbal', index=names)
rfc_u_scores
/home/seek/anaconda3/lib/python3.7/site-packages/sklearn/ensemble/forest.py:246: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22. "10 in version 0.20 to 100 in 0.22.", FutureWarning)
MIT
module3-model-interpretation/LS_DS_243_BONUS_Feature_Selection.ipynb
standroidbeta/DS-Unit-2-Sprint-4-Practicing-Understanding
SelectFromModel is a meta-transformer that can be used along with any estimator that has a `coef_` or `feature_importances_` attribute after fitting. Features are considered unimportant and removed if their corresponding `coef_` or `feature_importances_` values are below the provided `threshold` parameter. Apart from specifying the `threshold` numerically, there are built-in heuristics for finding a `threshold` using a string argument. Available heuristics are `'mean'`, `'median'`, and float multiples of these like `'0.1*mean'`.
# Random forest regression transformation of X (elimination of least important
# features)
rfr_transform = fe.SelectFromModel(rfr, prefit=True)
X_rfr = rfr_transform.transform(X)

# Random forest classifier transformation of X for y_bal (elimination of least
# important features)
rfc_b_transform = fe.SelectFromModel(rfc_b, prefit=True)
X_rfc_b = rfc_b_transform.transform(X)

# Random forest classifier transformation of X for y_un (elimination of least
# important features)
rfc_u_transform = fe.SelectFromModel(rfc_u, prefit=True)
X_rfc_u = rfc_u_transform.transform(X)

RF_comparisons = pd.DataFrame(data=np.array([rfr_transform.get_support(),
                                             rfc_b_transform.get_support(),
                                             rfc_u_transform.get_support()]).T,
                              columns=['RF_Regressor', 'RF_balanced_classifier',
                                       'RF_unbalanced_classifier'],
                              index=names)
RF_comparisons
_____no_output_____
MIT
module3-model-interpretation/LS_DS_243_BONUS_Feature_Selection.ipynb
standroidbeta/DS-Unit-2-Sprint-4-Practicing-Understanding
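As promised in Part 2, the same `SelectFromModel` pattern works with an L1-penalized linear model. This is a minimal sketch rather than part of the original lesson: it assumes the `X`, `y`, and `names` objects defined above, the `alpha` value is arbitrary, and it uses the `'median'` threshold heuristic mentioned earlier.

```python
from sklearn.linear_model import Lasso

# L1-regularized regression: many coefficients are driven exactly to zero
# (alpha chosen arbitrarily for illustration)
lasso = Lasso(alpha=0.01).fit(X, y)

# Keep only features whose |coefficient| exceeds the median of all |coefficients|
lasso_transform = fe.SelectFromModel(lasso, prefit=True, threshold='median')
X_lasso = lasso_transform.transform(X)

lasso_support = pd.Series(data=lasso_transform.get_support(), name='Lasso_support', index=names)
lasso_support
```

For the categorical targets, the analogous route is `LogisticRegression` with an L1 penalty and a compatible solver such as `'liblinear'`, as noted in Part 2.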
## Part 4: stability selection, RFE, and everything side by side

* These methods take longer since they are *wrapper methods* and build multiple ML models before giving results. "They both build on top of other (model based) selection methods such as regression or SVM, building models on different subsets of data and extracting the ranking from the aggregates."
* Stability selection is good for both goal 1 and 2: "among the top performing methods for many different datasets and settings"
  * For categorical targets
    * ~~[RandomizedLogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RandomizedLogisticRegression.html)~~ (deprecated) - use [RandomizedLogisticRegression](https://thuijskens.github.io/stability-selection/docs/randomized_lasso.html#stability_selection.randomized_lasso.RandomizedLogisticRegression)
    * [ExtraTreesClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.ExtraTreesClassifier.html#sklearn.ensemble.ExtraTreesClassifier)
  * For continuous targets
    * ~~[RandomizedLasso](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RandomizedLasso.html)~~ (deprecated) - use [RandomizedLasso](https://thuijskens.github.io/stability-selection/docs/randomized_lasso.html#stability_selection.randomized_lasso.RandomizedLogisticRegression)
    * [ExtraTreesRegressor](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.ExtraTreesRegressor.html#sklearn.ensemble.ExtraTreesRegressor)
* Recursive feature elimination (RFE) is best for goal 1
  * [sci-kit's RFE and RFECV (RFE with built-in cross-validation)](https://scikit-learn.org/stable/modules/feature_selection.html#recursive-feature-elimination)

Welcome to open-source, folks! [Here](https://github.com/scikit-learn/scikit-learn/issues/8995) is the original discussion to deprecate `RandomizedLogisticRegression` and `RandomizedLasso`, and [here](https://github.com/scikit-learn/scikit-learn/issues/9657) is a failed attempt to resurrect them. It looks like they'll be gone for good soon, so we shouldn't get dependent on them. The alternatives to the deprecated scikit objects come from an official scikit-learn-contrib module called [stability_selection](https://github.com/scikit-learn-contrib/stability-selection), which also has a `StabilitySelection` object that acts similarly to scikit's `SelectFromModel`.
!pip install git+https://github.com/scikit-learn-contrib/stability-selection.git
Collecting git+https://github.com/scikit-learn-contrib/stability-selection.git Cloning https://github.com/scikit-learn-contrib/stability-selection.git to /tmp/pip-req-build-axvzrzpv Requirement already satisfied: nose>=1.1.2 in /home/seek/anaconda3/lib/python3.7/site-packages (from stability-selection==0.0.1) (1.3.7) Requirement already satisfied: scikit-learn>=0.19 in /home/seek/anaconda3/lib/python3.7/site-packages (from stability-selection==0.0.1) (0.20.3) Requirement already satisfied: matplotlib>=2.0.0 in /home/seek/anaconda3/lib/python3.7/site-packages (from stability-selection==0.0.1) (3.0.3) Requirement already satisfied: numpy>=1.8.0 in /home/seek/anaconda3/lib/python3.7/site-packages (from stability-selection==0.0.1) (1.16.2) Requirement already satisfied: scipy>=0.13.3 in /home/seek/anaconda3/lib/python3.7/site-packages (from scikit-learn>=0.19->stability-selection==0.0.1) (1.2.1) Requirement already satisfied: cycler>=0.10 in /home/seek/anaconda3/lib/python3.7/site-packages (from matplotlib>=2.0.0->stability-selection==0.0.1) (0.10.0) Requirement already satisfied: kiwisolver>=1.0.1 in /home/seek/anaconda3/lib/python3.7/site-packages (from matplotlib>=2.0.0->stability-selection==0.0.1) (1.0.1) Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /home/seek/anaconda3/lib/python3.7/site-packages (from matplotlib>=2.0.0->stability-selection==0.0.1) (2.3.1) Requirement already satisfied: python-dateutil>=2.1 in /home/seek/anaconda3/lib/python3.7/site-packages (from matplotlib>=2.0.0->stability-selection==0.0.1) (2.8.0) Requirement already satisfied: six in /home/seek/anaconda3/lib/python3.7/site-packages (from cycler>=0.10->matplotlib>=2.0.0->stability-selection==0.0.1) (1.12.0) Requirement already satisfied: setuptools in /home/seek/anaconda3/lib/python3.7/site-packages (from kiwisolver>=1.0.1->matplotlib>=2.0.0->stability-selection==0.0.1) (40.8.0) Building wheels for collected packages: stability-selection Building wheel for stability-selection (setup.py) ... [?25ldone [?25h Stored in directory: /tmp/pip-ephem-wheel-cache-woii323n/wheels/58/be/39/79880712b91ffa56e341ff10586a1956527813437ddd759473 Successfully built stability-selection Installing collected packages: stability-selection Successfully installed stability-selection-0.0.1
MIT
module3-model-interpretation/LS_DS_243_BONUS_Feature_Selection.ipynb
standroidbeta/DS-Unit-2-Sprint-4-Practicing-Understanding
Okay, I tried this package... it seems to have some problems... hopefully a good implementation of stability selection for Lasso and logistic regression will be created soon! In the meantime, scikit's `RandomizedLasso` and `RandomizedLogisticRegression` have not been removed yet, so you can fiddle some! Just alter the commented-out code:
* import from scikit-learn instead of stability-selection
* use scikit's `SelectFromModel` as shown above!

Ta da!

Y
'''from stability_selection import (RandomizedLogisticRegression, RandomizedLasso,
                                    StabilitySelection, plot_stability_path)

# Stability selection using randomized lasso method
rl = RandomizedLasso(max_iter=2000)
rl_selector = StabilitySelection(base_estimator=rl, lambda_name='alpha', n_jobs=2)
rl_selector.fit(X, y);
'''

from sklearn.ensemble import ExtraTreesRegressor as ETR

# Stability selection using randomized decision trees
etr = ETR(n_estimators=50).fit(X, y)

# Creating scores from feature_importances_ ranking (some randomness here)
etr_scores = pd.Series(data=etr.feature_importances_, name='ETR', index=names)
etr_scores

from sklearn.linear_model import LinearRegression

# Recursive feature elimination with cross validation using linear regression
# as the model
lr = LinearRegression()

# rank all features, i.e. continue the elimination until the last one
rfe = fe.RFECV(lr)
rfe.fit(X, y)

rfe_score = pd.Series(data=(-1*rfe.ranking_), name='RFE', index=names)
rfe_score
/home/seek/anaconda3/lib/python3.7/site-packages/sklearn/model_selection/_split.py:2053: FutureWarning: You should specify a value for 'cv' instead of relying on the default value. The default value will change from 3 to 5 in version 0.22. warnings.warn(CV_WARNING, FutureWarning)
MIT
module3-model-interpretation/LS_DS_243_BONUS_Feature_Selection.ipynb
standroidbeta/DS-Unit-2-Sprint-4-Practicing-Understanding
Y_bal
# Stability selection using randomized logistic regression
'''rlr_b = RandomizedLogisticRegression()
rlr_b_selector = StabilitySelection(base_estimator=rlr_b, lambda_name='C', n_jobs=2)
rlr_b_selector.fit(X, y_bal);'''

from sklearn.ensemble import ExtraTreesClassifier as ETC

# Stability selection using randomized decision trees
etc_b = ETC(n_estimators=50).fit(X, y_bal)

# Creating scores from feature_importances_ ranking (some randomness here)
etc_b_scores = pd.Series(data=etc_b.feature_importances_, name='ETC_bal', index=names)
etc_b_scores

from sklearn.linear_model import LogisticRegression

# Recursive feature elimination with cross validation using logistic regression
# as the model
logr_b = LogisticRegression(solver='lbfgs')

# rank all features, i.e. continue the elimination until the last one
rfe_b = fe.RFECV(logr_b)
rfe_b.fit(X, y_bal)

rfe_b_score = pd.Series(data=(-1*rfe_b.ranking_), name='RFE_bal', index=names)
rfe_b_score
/home/seek/anaconda3/lib/python3.7/site-packages/sklearn/model_selection/_split.py:2053: FutureWarning: You should specify a value for 'cv' instead of relying on the default value. The default value will change from 3 to 5 in version 0.22. warnings.warn(CV_WARNING, FutureWarning)
MIT
module3-model-interpretation/LS_DS_243_BONUS_Feature_Selection.ipynb
standroidbeta/DS-Unit-2-Sprint-4-Practicing-Understanding
Y_un
# Stability selection using randomized logistic regression
'''rlr_u = RandomizedLogisticRegression(max_iter=2000)
rlr_u_selector = StabilitySelection(base_estimator=rlr_u, lambda_name='C')
rlr_u_selector.fit(X, y_un);'''

# Stability selection using randomized decision trees
etc_u = ETC(n_estimators=50).fit(X, y_un)

# Creating scores from feature_importances_ ranking (some randomness here)
etc_u_scores = pd.Series(data=etc_u.feature_importances_, name='ETC_unbal', index=names)
etc_u_scores

# Recursive feature elimination with cross validation using logistic regression
# as the model
logr_u = LogisticRegression(solver='lbfgs')

# rank all features, i.e. continue the elimination until the last one
rfe_u = fe.RFECV(logr_u)
rfe_u.fit(X, y_un)

rfe_u_score = pd.Series(data=(-1*rfe_u.ranking_), name='RFE_unbal', index=names)
rfe_u_score

'''RL_comparisons = pd.DataFrame(data=np.array([rl_selector.get_support(),
                                                rlr_b_selector.get_support(),
                                                rlr_u_selector.get_support()]).T,
                                 columns=['RandomLasso', 'RandomLog_bal', 'RandomLog_unbal'],
                                 index=names)
RL_comparisons'''

comparisons = pd.concat([MIR_scores, MIC_b_scores, MIC_u_scores,
                         rfr_scores, rfc_b_scores, rfc_u_scores,
                         etr_scores, etc_b_scores, etc_u_scores,
                         rfe_score, rfe_b_score, rfe_u_score], axis=1)
comparisons

from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()
scaled_df = scaler.fit_transform(comparisons)
scaled_comparisons = pd.DataFrame(scaled_df, columns=comparisons.columns, index=names)
scaled_comparisons
/home/seek/anaconda3/lib/python3.7/site-packages/sklearn/preprocessing/data.py:334: DataConversionWarning: Data with input dtype int64, float64 were all converted to float64 by MinMaxScaler. return self.partial_fit(X, y)
MIT
module3-model-interpretation/LS_DS_243_BONUS_Feature_Selection.ipynb
standroidbeta/DS-Unit-2-Sprint-4-Practicing-Understanding
What do you notice from the diagram below?
sns.heatmap(scaled_comparisons);
_____no_output_____
MIT
module3-model-interpretation/LS_DS_243_BONUS_Feature_Selection.ipynb
standroidbeta/DS-Unit-2-Sprint-4-Practicing-Understanding
# Scraping Amazon Reviews using Scrapy in Python, Part 2

> Are you looking for a way to scrape Amazon reviews and don't know where to begin? In that case, you may find this blog very useful.

- toc: true
- badges: true
- comments: true
- author: Zeyu Guan
- categories: [spaCy, Python, Machine Learning, Data Mining, NLP, RandomForest]
- annotations: true
- image: https://www.freecodecamp.org/news/content/images/2020/09/wall-5.jpeg
- hide: false

## Required Packages

[wordcloud](https://github.com/amueller/word_cloud), [geopandas](https://geopandas.org/en/stable/getting_started/install.html), [nbformat](https://pypi.org/project/nbformat/), [seaborn](https://seaborn.pydata.org/installing.html), [scikit-learn](https://scikit-learn.org/stable/install.html)

## Now let's get started!

First things first, you need to load all the necessary libraries:
import pandas as pd
from matplotlib import pyplot as plt
import numpy as np
from wordcloud import WordCloud
from wordcloud import STOPWORDS
import re
import plotly.graph_objects as go
import seaborn as sns
_____no_output_____
Apache-2.0
_notebooks/2022-04-07-scraping2.ipynb
christopherGuan/sample-ds-blog
## Data Cleaning

Following the previous blog, the raw data we scraped from Amazon looks like this:

![Raw Data](https://live.staticflickr.com/65535/51958371521_d139a6c0b1_h.jpg)

Even though that looks relatively clean, there are still some imperfections: star1 and star2 need to be combined, the date needs to be split, and so on. The whole process can be found in my [github notebooks](https://github.com/christopherGuan/sample-ds-blog). Below is the data after cleaning. It contains 6 columns and more than 500 rows.

![Clean Data](https://live.staticflickr.com/65535/51930037691_4a23b4c441_b.jpg)

## EDA

Below are the questions I was curious about, and the results generated by the data analysis.

- Which rating (1-5) was given the most and the least?
![Which point was rated the most](https://live.staticflickr.com/65535/51929072567_d34db66693_h.jpg)
- Which country are they targeting?
![Target Country](https://live.staticflickr.com/65535/51930675230_6314e2ccde_h.jpg)
- In which month do people tend to give higher ratings?
![higher rating](https://live.staticflickr.com/65535/51929085842_0cb0aa6b06_w.jpg)
- In which month do people leave the most comments?
![More comments](https://live.staticflickr.com/65535/51929085857_49f7c889d2_w.jpg)
- What are the useful words that people mention in the reviews?
![Useful words](https://live.staticflickr.com/65535/51930471329_82bf0c43b9.jpg)

## Sentiment Analysis (Method 1)

### What is sentiment analysis?

Essentially, sentiment analysis or sentiment classification falls under the broad category of text classification tasks, in which you are given a phrase or a list of phrases and your classifier is expected to determine whether the sentiment behind it is positive, negative, or neutral. To keep the problem a binary classification problem, the third attribute is sometimes ignored. Recent tasks have also taken into account sentiments such as "somewhat positive" and "somewhat negative." In this specific case, we categorize 4- and 5-star reviews as the positive group and 1- and 2-star reviews as the negative group.

![rating](https://live.staticflickr.com/65535/51930237493_b6afc18052_c.jpg)

Below are the most frequent words in reviews from the positive group and the negative group, respectively.

Positive reviews
![positive](https://live.staticflickr.com/65535/51930164126_33b911e6b3_c.jpg)

Negative reviews
![negative](https://live.staticflickr.com/65535/51930165221_cf61fce68e_c.jpg)

## Build up the first model

Now we can build a simple model that accepts reviews as input and predicts whether each review is positive or negative. Because this is a classification task, we will train a simple logistic regression model.

- **Clean Data**

First, we create a new function to remove all punctuation from the data for later use.
def remove_punctuation(text):
    final = "".join(u for u in text if u not in ("?", ".", ";", ":", "!", '"'))
    return final
_____no_output_____
Apache-2.0
_notebooks/2022-04-07-scraping2.ipynb
christopherGuan/sample-ds-blog
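Before splitting, the reviews also need a numeric sentiment label (1 for positive, -1 for negative, matching the output described later) and the punctuation removal applied. The post doesn't show the cleaned dataframe's exact column names, so the `rating` column below is an assumed name; a rough sketch might look like this:

```python
# Assumed column names: 'rating' (1-5 stars) and 'title' (the review title vectorized later)
df = df[df['rating'] != 3]                                 # drop neutral 3-star reviews to keep the problem binary (assumption)
df['sentiment'] = df['rating'].apply(lambda r: 1 if r > 3 else -1)  # 4-5 stars -> 1, 1-2 stars -> -1
df['title'] = df['title'].apply(remove_punctuation)        # strip punctuation before vectorizing
```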
- **Split the Dataframe**

Now, we split roughly 80% of the dataset for training and 20% for testing. Each subset should contain only two variables: one indicating positive or negative sentiment, and one with the review text.

![output](https://live.staticflickr.com/65535/51930294148_3a9db0297c_b.jpg)
# Assign each row a random value, then split into train and test sets
df['random_number'] = np.random.randn(len(df.index))

train = df[df['random_number'] <= 0.8]
test = df[df['random_number'] > 0.8]
_____no_output_____
Apache-2.0
_notebooks/2022-04-07-scraping2.ipynb
christopherGuan/sample-ds-blog
- **Create a bag of words**

Here I would like to introduce a new package. [Scikit-learn](https://scikit-learn.org/stable/install.html) is an open source machine learning library that supports supervised and unsupervised learning. It also provides various tools for model fitting, data preprocessing, model selection, model evaluation, and many other utilities. In this example, we are going to use [sklearn.feature_extraction.text.CountVectorizer](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer) to convert a collection of text documents to a matrix of token counts. We need to convert the text into a bag-of-words representation because the logistic regression algorithm cannot work with raw text.
from sklearn.feature_extraction.text import CountVectorizer

# Build the vocabulary on the training reviews, then reuse it for the test reviews
vectorizer = CountVectorizer()
train_matrix = vectorizer.fit_transform(train['title'])
test_matrix = vectorizer.transform(test['title'])
_____no_output_____
Apache-2.0
_notebooks/2022-04-07-scraping2.ipynb
christopherGuan/sample-ds-blog
- **Import Logistic Regression**
from sklearn.linear_model import LogisticRegression

lr = LogisticRegression()
_____no_output_____
Apache-2.0
_notebooks/2022-04-07-scraping2.ipynb
christopherGuan/sample-ds-blog
- **Split target and independent variables**
X_train = train_matrix
X_test = test_matrix
y_train = train['sentiment']
y_test = test['sentiment']
_____no_output_____
Apache-2.0
_notebooks/2022-04-07-scraping2.ipynb
christopherGuan/sample-ds-blog
- **Fit model on data**
lr.fit(X_train,y_train)
_____no_output_____
Apache-2.0
_notebooks/2022-04-07-scraping2.ipynb
christopherGuan/sample-ds-blog
- **Make predictions**
predictions = lr.predict(X_test)
_____no_output_____
Apache-2.0
_notebooks/2022-04-07-scraping2.ipynb
christopherGuan/sample-ds-blog
The output will be either 1 or -1. As defined earlier, 1 means the model predicts the review is positive, and -1 means it predicts the review is negative.

## Testing

Now we can test the accuracy of our model!
from sklearn.metrics import confusion_matrix, classification_report

new = np.asarray(y_test)
confusion_matrix(predictions, y_test)

print(classification_report(predictions, y_test))
_____no_output_____
Apache-2.0
_notebooks/2022-04-07-scraping2.ipynb
christopherGuan/sample-ds-blog
![accuracy](https://live.staticflickr.com/65535/51929260132_045027628c_z.jpg)

The accuracy is as high as 89%!

## Sentiment Analysis (Method 2)

In this section, you will learn how to build your own sentiment analysis classifier using Python and understand the basics of NLP (natural language processing). First, let's try a quick-and-dirty approach that uses the [Naive Bayes classifier](https://www.datacamp.com/community/tutorials/simplifying-sentiment-analysis-python) to predict the sentiment of the Amazon product reviews. Based on the application's requirements, we should first put each review in a txt file and categorize them as negative or positive reviews in different folders.
# Find all negative reviews
neg = df[df["sentiment"] == -1].review

# Reset the index
neg.index = range(len(neg.index))

# Write each review to a separate txt file
for i in range(len(neg)):
    data = neg[i]
    with open(str(i) + ".txt", "w") as file:
        file.write(data + "\n")
_____no_output_____
Apache-2.0
_notebooks/2022-04-07-scraping2.ipynb
christopherGuan/sample-ds-blog
Next, we sort the file names of the official movie_reviews corpus and empty each file's contents. In other words, we keep only the file names.
import os
import pandas as pd

# Get file names
file_names = os.listdir('/Users/zeyu/nltk_data/corpora/movie_reviews/neg')

# Convert to pandas
neg_df = pd.DataFrame(file_names, columns=['file_name'])

# Split to sort
neg_df[['number', 'id']] = neg_df.file_name.apply(
    lambda x: pd.Series(str(x).split("_")))

# Use the number as the index
neg_df_index = neg_df.set_index('number')
neg_org = neg_df_index.sort_index(ascending=True)

#del neg["id"]
neg_org.reset_index(inplace=True)
neg_org = neg_org.drop([0], axis=0).reset_index(drop=True)
neg_names = neg_org['file_name']

# Empty the contents of each corpus file
for file_name in neg_names:
    t = open(f'/Users/zeyu/nltk_data/corpora/movie_reviews/neg/{file_name}', 'w')
    t.write("")
    t.close()
_____no_output_____
Apache-2.0
_notebooks/2022-04-07-scraping2.ipynb
christopherGuan/sample-ds-blog
Next, we insert the content of the Amazon reviews into the official corpus files, keeping the corpus's original file names.
# Get file names
file_names = os.listdir('/Users/zeyu/Desktop/DS/neg')

# Convert to pandas
pos_df = pd.DataFrame(file_names, columns=['file_name'])
pos_names = pos_df['file_name']

for index, file_name in enumerate(pos_names):
    try:
        t = open(f'/Users/zeyu/Desktop/DS/neg/{file_name}', 'r')
        # t.write("")
        t_val = ascii(t.read())
        t.close()
        writefname = pos_names_org[index]
        t = open(f'/Users/zeyu/nltk_data/corpora/movie_reviews/neg/{writefname}', 'w')
        t.write(t_val)
        t.close()
    except:
        print(f'{index} Reading/writing Error')
_____no_output_____
Apache-2.0
_notebooks/2022-04-07-scraping2.ipynb
christopherGuan/sample-ds-blog
Finally, we can run these few lines to predict the sentiment of the Amazon product reviews.
import nltk
from nltk.corpus import movie_reviews
import random

# All words per document (not unique), paired with the document's category
documents = [(list(movie_reviews.words(fileid)), category)
             for category in movie_reviews.categories()
             for fileid in movie_reviews.fileids(category)]

random.shuffle(documents)

# Change to lower case and count how often each word appears
all_words = nltk.FreqDist(w.lower() for w in movie_reviews.words())

# Only keep the first 2000 words
word_features = list(all_words)[:2000]

def document_features(document):
    document_words = set(document)
    features = {}
    for word in word_features:
        features['contains({})'.format(word)] = (word in document_words)
    return features

# Train a Naive Bayes classifier and calculate its accuracy on the held-out set
featuresets = [(document_features(d), c) for (d, c) in documents]
train_set, test_set = featuresets[100:], featuresets[:100]
classifier = nltk.NaiveBayesClassifier.train(train_set)

print(nltk.classify.accuracy(classifier, test_set))
classifier.show_most_informative_features(5)
_____no_output_____
Apache-2.0
_notebooks/2022-04-07-scraping2.ipynb
christopherGuan/sample-ds-blog
Install and monitor the FLIR camera service

Install
! sudo cp flir-server.service /etc/systemd/system/flir-server.service
_____no_output_____
Apache-2.0
run/monitor-flir-service.ipynb
johnnewto/FLIR-pubsub
Start the service
! sudo systemctl start flir-server.service
_____no_output_____
Apache-2.0
run/monitor-flir-service.ipynb
johnnewto/FLIR-pubsub
Stop the service
! sudo systemctl stop flir-server.service
Warning: The unit file, source configuration file or drop-ins of flir-server.service changed on disk. Run 'systemctl daemon-reload' to reload units.
Apache-2.0
run/monitor-flir-service.ipynb
johnnewto/FLIR-pubsub
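If you see a warning like the one above about the unit file changing on disk, reload the systemd units as the message suggests (in a cell of its own, like the other commands here) before issuing further commands:

```
! sudo systemctl daemon-reload
```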
Enable it so that it starts on boot
! sudo systemctl enable flir-server.service # enable at boot
_____no_output_____
Apache-2.0
run/monitor-flir-service.ipynb
johnnewto/FLIR-pubsub
Disable it so that it does not start on boot
! sudo systemctl disable flir-server.service  # do not start at boot
_____no_output_____
Apache-2.0
run/monitor-flir-service.ipynb
johnnewto/FLIR-pubsub
To show status of the service
! sudo systemctl status flir-server.service
● flir-server.service - FLIR-camera server service Loaded: loaded (/etc/systemd/system/flir-server.service; enabled; vendor pres Active: active (running) since Tue 2020-03-24 07:53:04 NZDT; 20min ago Main PID: 765 (python) Tasks: 17 (limit: 4915) CGroup: /system.slice/flir-server.service └─765 /home/rov/.virtualenvs/flir/bin/python -u run/flir-server.py Mar 24 07:53:06 rov-UP python[765]: 19444715 - executing: "Gain.SetValue(6)" Mar 24 07:53:06 rov-UP python[765]: 19444715 - executing: "BlackLevelSelector.Se Mar 24 07:53:06 rov-UP python[765]: 19444715 - executing: "BlackLevel.SetValue(0 Mar 24 07:53:07 rov-UP python[765]: 19444715 - executing: "GammaEnable.SetValue( Mar 24 07:53:07 rov-UP python[765]: Starting : FrontLeft Mar 24 07:53:17 rov-UP python[765]: Stopping FrontLeft due to inactivity. Mar 24 07:54:12 rov-UP python[765]: Starting : FrontLeft Mar 24 07:54:26 rov-UP python[765]: Stopping FrontLeft due to inactivity. Mar 24 07:54:33 rov-UP python[765]: Starting : FrontLeft Mar 24 07:54:48 rov-UP python[765]: Stopping FrontLeft due to inactivity.
Apache-2.0
run/monitor-flir-service.ipynb
johnnewto/FLIR-pubsub
Analysis of motifs using Motif Miner (RINGS tool that employs alpha frequent subtree mining)
csv_files = ["ABA_14361_100ug_v5.0_DATA.csv", "ConA_13799-10ug_V5.0_DATA.csv", 'PNA_14030_10ug_v5.0_DATA.csv', "RCAI_10ug_14110_v5.0_DATA.csv", "PHA-E-10ug_13853_V5.0_DATA.csv", "PHA-L-10ug_13856_V5.0_DATA.csv", "LCA_10ug_13934_v5.0_DATA.csv", "SNA_10ug_13631_v5.0_DATA.csv", "MAL-I_10ug_13883_v5.0_DATA.csv", "MAL_II_10ug_13886_v5.0_DATA.csv", "GSL-I-B4_10ug_13920_v5.0_DATA.csv", "jacalin-1ug_14301_v5.0_DATA.csv", 'WGA_14057_1ug_v5.0_DATA.csv', "UEAI_100ug_13806_v5.0_DATA.csv", "SBA_14042_10ug_v5.0_DATA.csv", "DBA_100ug_13897_v5.0_DATA.csv", "PSA_14040_10ug_v5.0_DATA.csv", "HA_PuertoRico_8_34_13829_v5_DATA.csv", 'H3N8-HA_16686_v5.1_DATA.csv', "Human-DC-Sign-tetramer_15320_v5.0_DATA.csv"] csv_file_normal_names = [ r"\textit{Agaricus bisporus} agglutinin (ABA)", r"Concanavalin A (Con A)", r'Peanut agglutinin (PNA)', r"\textit{Ricinus communis} agglutinin I (RCA I/RCA\textsubscript{120})", r"\textit{Phaseolus vulgaris} erythroagglutinin (PHA-E)", r"\textit{Phaseolus vulgaris} leucoagglutinin (PHA-L)", r"\textit{Lens culinaris} agglutinin (LCA)", r"\textit{Sambucus nigra} agglutinin (SNA)", r"\textit{Maackia amurensis} lectin I (MAL-I)", r"\textit{Maackia amurensis} lectin II (MAL-II)", r"\textit{Griffonia simplicifolia} Lectin I isolectin B\textsubscript{4} (GSL I-B\textsubscript{4})", r"Jacalin", r'Wheat germ agglutinin (WGA)', r"\textit{Ulex europaeus} agglutinin I (UEA I)", r"Soybean agglutinin (SBA)", r"\textit{Dolichos biflorus} agglutinin (DBA)", r"\textit{Pisum sativum} agglutinin (PSA)", r"Influenza hemagglutinin (HA) (A/Puerto Rico/8/34) (H1N1)", r'Influenza HA (A/harbor seal/Massachusetts/1/2011) (H3N8)', r"Human DC-SIGN tetramer"] import sys import os import pandas as pd import numpy as np from scipy import interp sys.path.append('..') from ccarl.glycan_parsers.conversions import kcf_to_digraph, cfg_to_kcf from ccarl.glycan_plotting import draw_glycan_diagram from ccarl.glycan_graph_methods import generate_digraph_from_glycan_string from ccarl.glycan_features import generate_features_from_subtrees import ccarl.glycan_plotting from sklearn.linear_model import LogisticRegressionCV, LogisticRegression from sklearn.metrics import matthews_corrcoef, make_scorer, roc_curve, auc import matplotlib.pyplot as plt from collections import defaultdict aucs = defaultdict(list) ys = defaultdict(list) probs = defaultdict(list) motifs = defaultdict(list) for fold in [1,2,3,4,5]: print(f"Running fold {fold}...") for csv_file in csv_files: alpha = 0.8 minsup = 0.2 input_file = f'./temp_{csv_file}' training_data = pd.read_csv(f"../Data/CV_Folds/fold_{fold}/training_set_{csv_file}") test_data = pd.read_csv(f"../Data/CV_Folds/fold_{fold}/test_set_{csv_file}") pos_glycan_set = training_data['glycan'][training_data.binding == 1].values kcf_string = '\n'.join([cfg_to_kcf(x) for x in pos_glycan_set]) with open(input_file, 'w') as f: f.write(kcf_string) min_sup = int(len(pos_glycan_set) * minsup) subtrees = os.popen(f"ruby Miner_cmd.rb {min_sup} {alpha} {input_file}").read() subtree_graphs = [kcf_to_digraph(x) for x in subtrees.split("///")[0:-1]] motifs[csv_file].append(subtree_graphs) os.remove(input_file) binding_class = training_data.binding.values glycan_graphs = [generate_digraph_from_glycan_string(x, parse_linker=True, format='CFG') for x in training_data.glycan] glycan_graphs_test = [generate_digraph_from_glycan_string(x, parse_linker=True, format='CFG') for x in test_data.glycan] features = [generate_features_from_subtrees(subtree_graphs, glycan) for glycan in glycan_graphs] features_test = 
[generate_features_from_subtrees(subtree_graphs, glycan) for glycan in glycan_graphs_test] logistic_clf = LogisticRegression(penalty='l2', C=100, solver='lbfgs', class_weight='balanced', max_iter=1000) X = features y = binding_class logistic_clf.fit(X, y) y_test = test_data.binding.values X_test = features_test fpr, tpr, _ = roc_curve(y_test, logistic_clf.predict_proba(X_test)[:,1], drop_intermediate=False) aucs[csv_file].append(auc(fpr, tpr)) ys[csv_file].append(y_test) probs[csv_file].append(logistic_clf.predict_proba(X_test)[:,1]) # Assess the number of subtrees generated for each CV round. subtree_lengths = defaultdict(list) for fold in [1,2,3,4,5]: print(f"Running fold {fold}...") for csv_file in csv_files: alpha = 0.8 minsup = 0.2 input_file = f'./temp_{csv_file}' training_data = pd.read_csv(f"../Data/CV_Folds/fold_{fold}/training_set_{csv_file}") test_data = pd.read_csv(f"../Data/CV_Folds/fold_{fold}/test_set_{csv_file}") pos_glycan_set = training_data['glycan'][training_data.binding == 1].values kcf_string = '\n'.join([cfg_to_kcf(x) for x in pos_glycan_set]) with open(input_file, 'w') as f: f.write(kcf_string) min_sup = int(len(pos_glycan_set) * minsup) subtrees = os.popen(f"ruby Miner_cmd.rb {min_sup} {alpha} {input_file}").read() subtree_graphs = [kcf_to_digraph(x) for x in subtrees.split("///")[0:-1]] subtree_lengths[csv_file].append(len(subtree_graphs)) os.remove(input_file) subtree_lengths = [y for x in subtree_lengths.values() for y in x] print(np.mean(subtree_lengths)) print(np.max(subtree_lengths)) print(np.min(subtree_lengths)) def plot_multiple_roc(data): '''Plot multiple ROC curves. Prints out key AUC values (mean, median etc). Args: data (list): A list containing [y, probs] for each model, where: y: True class labels probs: Predicted probabilities Returns: Figure, Axes, Figure, Axes ''' mean_fpr = np.linspace(0, 1, 100) fig, axes = plt.subplots(figsize=(4, 4)) ax = axes ax.set_title('') #ax.legend(loc="lower right") ax.set_xlabel('False Positive Rate') ax.set_ylabel('True Positive Rate') ax.set_aspect('equal', adjustable='box') auc_values = [] tpr_list = [] for y, probs in data: #data_point = data[csv_file] #y = data_point[7] # test binding #X = data_point[8] # test features #logistic_clf = data_point[0] # model fpr, tpr, _ = roc_curve(y, probs, drop_intermediate=False) tpr_list.append(interp(mean_fpr, fpr, tpr)) auc_values.append(auc(fpr, tpr)) ax.plot(fpr, tpr, color='blue', alpha=0.1, label=f'ROC curve (area = {auc(fpr, tpr): 2.3f})') ax.plot([0,1], [0,1], linestyle='--', color='grey', linewidth=0.8, dashes=(5, 10)) mean_tpr = np.mean(tpr_list, axis=0) median_tpr = np.median(tpr_list, axis=0) upper_tpr = np.percentile(tpr_list, 75, axis=0) lower_tpr = np.percentile(tpr_list, 25, axis=0) ax.plot(mean_fpr, median_tpr, color='black') ax.fill_between(mean_fpr, lower_tpr, upper_tpr, color='grey', alpha=.5, label=r'$\pm$ 1 std. 
dev.') fig.savefig("Motif_Miner_CV_ROC_plot_all_curves.svg") fig2, ax2 = plt.subplots(figsize=(4, 4)) ax2.hist(auc_values, range=[0.5,1], bins=10, rwidth=0.9, color=(0, 114/255, 178/255)) ax2.set_xlabel("AUC value") ax2.set_ylabel("Counts") fig2.savefig("Motif_Miner_CV_AUC_histogram.svg") print(f"Mean AUC value: {np.mean(auc_values): 1.3f}") print(f"Median AUC value: {np.median(auc_values): 1.3f}") print(f"IQR of AUC values: {np.percentile(auc_values, 25): 1.3f} - {np.percentile(auc_values, 75): 1.3f}") return fig, axes, fig2, ax2, auc_values # Plot ROC curves for all test sets roc_data = [[y, prob] for y_fold, prob_fold in zip(ys.values(), probs.values()) for y, prob in zip(y_fold, prob_fold)] _, _, _, _, auc_values = plot_multiple_roc(roc_data) auc_values_ccarl = [0.950268817204301, 0.9586693548387097, 0.9559811827956988, 0.8686155913978494, 0.9351222826086956, 0.989010989010989, 0.9912587412587414, 0.9090909090909092, 0.9762626262626264, 0.9883597883597884, 0.9065533980582524, 0.9417475728155339, 0.8268608414239482, 0.964349376114082, 0.9322638146167558, 0.9178037686809616, 0.96361273554256, 0.9362139917695472, 0.9958847736625515, 0.9526748971193415, 0.952300785634119, 0.9315375982042648, 0.9705387205387206, 0.9865319865319865, 0.9849773242630385, 0.9862385321100917, 0.9862385321100918, 0.9606481481481481, 0.662037037037037, 0.7796296296296297, 0.9068627450980392, 0.915032679738562, 0.9820261437908496, 0.9893790849673203, 0.9882988298829882, 0.9814814814814815, 1.0, 0.8439153439153441, 0.9859813084112149, 0.9953271028037383, 0.8393308080808081, 0.8273358585858586, 0.7954545454545453, 0.807070707070707, 0.8966329966329966, 0.8380952380952381, 0.6201058201058202, 0.7179894179894181, 0.6778846153846154, 0.75, 0.9356060606060607, 0.8619528619528619, 0.8787878787878789, 0.9040816326530613, 0.7551020408163266, 0.9428694158075602, 0.9226804123711341, 0.8711340206185567, 0.7840909090909091, 0.8877840909090909, 0.903225806451613, 0.8705594120049, 0.9091465904450796, 0.8816455696202531, 0.8521097046413502, 0.8964521452145213, 0.9294554455445544, 0.8271452145214522, 0.8027272727272727, 0.8395454545454546, 0.8729967948717949, 0.9306891025641025, 0.9550970873786407, 0.7934686672550749, 0.8243601059135041, 0.8142100617828772, 0.9179611650485436, 0.8315533980582525, 0.7266990291262136, 0.9038834951456312, 0.9208916083916084, 0.7875, 0.9341346153846154, 0.9019230769230768, 0.9086538461538461, 0.9929245283018868, 0.9115566037735848, 0.9952830188679246, 0.9658018867924528, 0.7169811320754716, 0.935981308411215, 0.9405660377358491, 0.9905660377358491, 0.9937106918238994, 0.9302935010482181, 0.7564814814814815, 0.9375, 0.8449074074074074, 0.8668981481481483, 0.7978971962616823] auc_value_means = [np.mean(auc_values[x*5:x*5+5]) for x in range(int(len(auc_values) / 5))] auc_value_means_ccarl = [np.mean(auc_values_ccarl[x*5:x*5+5]) for x in range(int(len(auc_values_ccarl) / 5))] auc_value_mean_glymmr = np.array([0.6067939 , 0.76044574, 0.66786624, 0.69578298, 0.81659623, 0.80536403, 0.77231548, 0.96195032, 0.70013384, 0.60017685, 0.77336818, 0.78193305, 0.66269668, 0.70333122, 0.54247748, 0.63003707, 0.79619231, 0.85141509, 0.9245296 , 0.63366329]) auc_value_mean_glymmr_best = np.array([0.77559242, 0.87452658, 0.75091636, 0.7511371 , 0.87450697, 0.82895628, 0.81083123, 0.96317065, 0.75810185, 0.82680149, 0.84747054, 0.8039597 , 0.69651882, 0.73431593, 0.582194 , 0.67407767, 0.83049825, 0.88891509, 0.9345188 , 0.72702016]) auc_value_motiffinder = [0.9047619047619048, 0.9365601503759399, 0.6165413533834586, 
0.9089068825910931, 0.4962962962962963, 0.6358816964285713, 0.8321078431372548, 0.8196576151121606, 0.8725400457665904, 0.830220713073005, 0.875, 0.7256367663344407, 0.8169291338582677, 0.9506818181818182, 0.7751351351351351, 0.9362947658402204, 0.6938461538461539, 0.6428571428571428, 0.7168021680216802, 0.5381136950904392] #Note, only from a single test-train split. import seaborn as sns sns.set(style="ticks") plot_data = np.array([auc_value_mean_glymmr, auc_value_mean_glymmr_best, auc_value_motiffinder, auc_value_means, auc_value_means_ccarl]).T ax = sns.violinplot(data=plot_data, cut=2, inner='quartile') sns.swarmplot(data=plot_data, color='black') ax.set_ylim([0.5, 1.05]) ax.set_xticklabels(["GLYMMR\n(mean)", "GLYMMR\n(best)", "MotifFinder", "Glycan\nMiner Tool", "CCARL"]) #ax.grid('off') ax.set_ylabel("AUC") ax.figure.savefig('method_comparison_violin_plot.svg') auc_value_means_ccarl print("CCARL Performance") print(f"Median AUC value: {np.median(auc_value_means_ccarl): 1.3f}") print(f"IQR of AUC values: {np.percentile(auc_value_means_ccarl, 25): 1.3f} - {np.percentile(auc_value_means_ccarl, 75): 1.3f}") print("Glycan Miner Tool Performance") print(f"Median AUC value: {np.median(auc_value_means): 1.3f}") print(f"IQR of AUC values: {np.percentile(auc_value_means, 25): 1.3f} - {np.percentile(auc_value_means, 75): 1.3f}") print("Glycan Miner Tool Performance") print(f"Median AUC value: {np.median(auc_value_mean_glymmr_best): 1.3f}") print(f"IQR of AUC values: {np.percentile(auc_value_mean_glymmr_best, 25): 1.3f} - {np.percentile(auc_value_mean_glymmr_best, 75): 1.3f}") print("Glycan Miner Tool Performance") print(f"Median AUC value: {np.median(auc_value_mean_glymmr): 1.3f}") print(f"IQR of AUC values: {np.percentile(auc_value_mean_glymmr, 25): 1.3f} - {np.percentile(auc_value_mean_glymmr, 75): 1.3f}") from matplotlib.backends.backend_pdf import PdfPages sns.reset_orig() import networkx as nx for csv_file in csv_files: with PdfPages(f"./motif_miner_motifs/glycan_motif_miner_motifs_{csv_file}.pdf") as pdf: for motif in motifs[csv_file][0]: fig, ax = plt.subplots() ccarl.glycan_plotting.draw_glycan_diagram(motif, ax) pdf.savefig(fig) plt.close(fig) glymmr_mean_stdev = np.array([0.15108904, 0.08300011, 0.11558078, 0.05259819, 0.061275 , 0.09541182, 0.09239553, 0.05114523, 0.05406571, 0.16180131, 0.10345311, 0.06080207, 0.0479003 , 0.09898648, 0.06137992, 0.09813596, 0.07010635, 0.14010784, 0.05924527, 0.13165457]) glymmr_best_stdev = np.array([0.08808868, 0.04784959, 0.13252895, 0.03163248, 0.04401516, 0.08942411, 0.08344247, 0.05714308, 0.05716086, 0.05640053, 0.08649275, 0.05007289, 0.05452531, 0.05697662, 0.0490626 , 0.1264917 , 0.04994508, 0.1030053 , 0.03359648, 0.12479809]) auc_value_std_ccarl = [np.std(auc_values_ccarl[x*5:x*5+5]) for x in range(int(len(auc_values_ccarl) / 5))] print(r"Lectin & GLYMMR(mean) & GLYMMR(best) & Glycan Miner Tool & MotifFinder & CCARL \\ \hline") for i, csv_file, name in zip(list(range(len(csv_files))), csv_files, csv_file_normal_names): print(f"{name} & {auc_value_mean_glymmr[i]:0.3f} ({glymmr_mean_stdev[i]:0.3f}) & {auc_value_mean_glymmr_best[i]:0.3f} ({glymmr_best_stdev[i]:0.3f}) \ & {np.mean(aucs[csv_file]):0.3f} ({np.std(aucs[csv_file]):0.3f}) & {auc_value_motiffinder[i]:0.3f} & {auc_value_means_ccarl[i]:0.3f} ({auc_value_std_ccarl[i]:0.3f}) \\\\")
Lectin & GLYMMR(mean) & GLYMMR(best) & Glycan Miner Tool & MotifFinder & CCARL \\ \hline \textit{Agaricus bisporus} agglutinin (ABA) & 0.607 (0.151) & 0.776 (0.088) & 0.888 (0.067) & 0.905 & 0.934 (0.034) \\ Concanavalin A (Con A) & 0.760 (0.083) & 0.875 (0.048) & 0.951 (0.042) & 0.937 & 0.971 (0.031) \\ Peanut agglutinin (PNA) & 0.668 (0.116) & 0.751 (0.133) & 0.894 (0.041) & 0.617 & 0.914 (0.048) \\ \textit{Ricinus communis} agglutinin I (RCA I/RCA\textsubscript{120}) & 0.696 (0.053) & 0.751 (0.032) & 0.848 (0.034) & 0.909 & 0.953 (0.026) \\ \textit{Phaseolus vulgaris} erythroagglutinin (PHA-E) & 0.817 (0.061) & 0.875 (0.044) & 0.910 (0.016) & 0.496 & 0.965 (0.021) \\ \textit{Phaseolus vulgaris} leucoagglutinin (PHA-L) & 0.805 (0.095) & 0.829 (0.089) & 0.858 (0.110) & 0.636 & 0.875 (0.132) \\ \textit{Lens culinaris} agglutinin (LCA) & 0.772 (0.092) & 0.811 (0.083) & 0.908 (0.083) & 0.832 & 0.956 (0.037) \\ \textit{Sambucus nigra} agglutinin (SNA) & 0.962 (0.051) & 0.963 (0.057) & 0.962 (0.050) & 0.820 & 0.961 (0.059) \\ \textit{Maackia amurensis} lectin I (MAL-I) & 0.700 (0.054) & 0.758 (0.057) & 0.868 (0.050) & 0.873 & 0.833 (0.035) \\ \textit{Maackia amurensis} lectin II (MAL-II) & 0.600 (0.162) & 0.827 (0.056) & 0.850 (0.091) & 0.830 & 0.721 (0.073) \\ \textit{Griffonia simplicifolia} Lectin I isolectin B\textsubscript{4} (GSL I-B\textsubscript{4}) & 0.773 (0.103) & 0.847 (0.086) & 0.875 (0.066) & 0.875 & 0.867 (0.061) \\ Jacalin & 0.782 (0.061) & 0.804 (0.050) & 0.848 (0.026) & 0.726 & 0.882 (0.055) \\ Wheat germ agglutinin (WGA) & 0.663 (0.048) & 0.697 (0.055) & 0.831 (0.034) & 0.817 & 0.883 (0.021) \\ \textit{Ulex europaeus} agglutinin I (UEA I) & 0.703 (0.099) & 0.734 (0.057) & 0.866 (0.023) & 0.951 & 0.859 (0.047) \\ Soybean agglutinin (SBA) & 0.542 (0.061) & 0.582 (0.049) & 0.781 (0.046) & 0.775 & 0.875 (0.061) \\ \textit{Dolichos biflorus} agglutinin (DBA) & 0.630 (0.098) & 0.674 (0.126) & 0.722 (0.083) & 0.936 & 0.839 (0.069) \\ \textit{Pisum sativum} agglutinin (PSA) & 0.796 (0.070) & 0.830 (0.050) & 0.858 (0.064) & 0.694 & 0.891 (0.053) \\ Influenza hemagglutinin (HA) (A/Puerto Rico/8/34) (H1N1) & 0.851 (0.140) & 0.889 (0.103) & 0.838 (0.144) & 0.643 & 0.917 (0.104) \\ Influenza HA (A/harbor seal/Massachusetts/1/2011) (H3N8) & 0.925 (0.059) & 0.935 (0.034) & 0.947 (0.021) & 0.717 & 0.958 (0.028) \\ Human DC-SIGN tetramer & 0.634 (0.132) & 0.727 (0.125) & 0.823 (0.130) & 0.538 & 0.841 (0.062) \\
MIT
motif_miner_comparison/motif_miner_performance.ipynb
andrewguy/CCARL
# Assignment (TP) Description

## Introduction

In this practical assignment you are asked to develop a program that calculates quantities of interest for a series RLC circuit. Taking as input data:
* εmax
* the source frequency in Hz
* the values of R, L and C

The program must determine:
* Imax
* the potential difference across each component
* the average power
* the Q factor
* the power factor (cos φ)
* the complex impedance Z
* the resonance frequency of the system
* which of the three impedances dominates the behaviour of the circuit

It must also plot:
* The voltage phasor diagram of the RLC circuit (`quiver` function).
* The voltages VL, VC, VR and ε superimposed in the time domain.
* The equations of the EMF and the current with the values corresponding to the input data.

## Implementation requirements

1. The work must be done in the Google Colaboratory environment (BTW, where you are reading right now), which is based on the Python programming language.
2. Create a new notebook for this assignment. When the assignment is handed in, the notebook must have public visibility.
   * During development it must be shared with [email protected], [email protected] and [email protected], with permission to view and comment *(not edit)*.
   * To create a new notebook:
   > File → New notebook
3. Every significant change must be saved by pinning a revision of the file: `File → Save and pin revision`.
4. Every code block must be documented with comments that explain how it works. It is not necessary to comment every statement, but it is necessary to understand what they do.
5. Each piece of program functionality must be implemented in a separate code block.
6. Each code block must be preceded by a title introducing its functionality (by adding a text block before it).
7. Data entry for running the program must be done through an [interactive form](https://colab.research.google.com/notebooks/forms.ipynb).
8. Include a diagram of the circuit; it can be drawn at https://www.circuit-diagram.org/editor/

## Tutorial: Google Colaboratory (Colab) and Jupyter Notebook

The environment we are going to work in is called Google Colaboratory, a cloud computing platform. The way to interact with it is [Jupyter Notebook](https://jupyter.org/), which lets you interactively write text and run Python code.

### Using Markdown

Markdown is a set of instructions that tell Jupyter Notebook how to format text. ***Double-click anywhere in the text of this notebook to see how it is written using markdown***.

### Working as a team

To work collaboratively, click the `Share` button in the top panel.

## Programming in Python

### Importing libraries
# hello, I am a comment
import numpy  # import the mathematical functions library
from matplotlib import pyplot  # import the Cartesian plotting library
_____no_output_____
MIT
Consignas TP_RLC.ipynb
EmanuelDri/CalculadoraRLC
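Before the tutorial below, here is a rough sketch (not part of the original notebook) of the calculations the assignment asks for, written with the same numpy style used later on. The input values epsilon_max, f, R, L and C are hypothetical placeholders; in the finished program they should come from the interactive form required above.

import numpy

# Hypothetical inputs; in the assignment these should come from the form fields
epsilon_max = 10.0             # peak EMF [V]
f = 50.0                       # source frequency [Hz]
R, L, C = 100.0, 0.5, 10e-6    # ohm, henry, farad

w = 2*numpy.pi*f                        # angular frequency
X_L = w*L                               # inductive reactance
X_C = 1/(w*C)                           # capacitive reactance
Z = complex(R, X_L - X_C)               # complex impedance of the series RLC
I_max = epsilon_max/abs(Z)              # peak current
V_R, V_L, V_C = I_max*R, I_max*X_L, I_max*X_C   # peak voltage across each element
phi = numpy.angle(Z)                    # phase angle between EMF and current
P_avg = 0.5*epsilon_max*I_max*numpy.cos(phi)    # average power
Q = (1/R)*numpy.sqrt(L/C)               # quality factor of the series circuit
f0 = 1/(2*numpy.pi*numpy.sqrt(L*C))     # resonance frequency
print(I_max, V_R, V_L, V_C, P_avg, Q, numpy.cos(phi), Z, f0)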
Working with variables
To instantiate a variable or assign it a value we use the "=" sign.
Variables persist throughout the whole notebook; they can be used in any code block once they have been instantiated.
a=7 # we create the variable "a" and assign it the value 7
# a code block can be dedicated to
_____no_output_____
MIT
Consignas TP_RLC.ipynb
EmanuelDri/CalculadoraRLC
In the block below we use the variable "a" from above
a+5 # we perform an operation with the variable "a"; the result of the operation appears below
_____no_output_____
MIT
Consignas TP_RLC.ipynb
EmanuelDri/CalculadoraRLC
Entering data using forms
Forms let you capture user data to use later when the code runs.
* To insert a form, press the "more cell actions" button (**⋮**) to the right of the code cell. *Do not use `Insertar→Agregar campo de formulario` (Insert→Add form field) because that option currently does not work*.
* Examples of how to use forms can be found [here](https://colab.research.google.com/notebooks/forms.ipynbscrollTo=_7gRpQLXQSID).
* From (**⋮**) you can hide the code associated with a form.
#@title Form examples
valor_formulario_numerico = 5 #@param {type:"number"}
valor_formulario_slider = 16 #@param {type:"slider", min:0, max:100, step:1}
_____no_output_____
MIT
Consignas TP_RLC.ipynb
EmanuelDri/CalculadoraRLC
Mathematical functions
* Arithmetic functions are built into Python through [operators](https://www.aprendeprogramando.es/cursos-online/python/operadores-aritmeticos/operadores-aritmeticos)
* Additional functions such as the trigonometric ones are invoked from the numpy library. Once it is imported, we can use it with the syntax: >`numpy.<function name>`
* [Tutorial on numpy library functions](https://www.interactivechaos.com/manual/tutorial-de-numpy/funciones-universales-trigonometricas)
# Examples of using mathematical functions
# The print function displays on screen a message supplied by the programmer,
# the contents of a variable, or some more complex operation with variables
print(f'Numeric form value: {valor_formulario_numerico}')
print(f'Numeric form value squared: {valor_formulario_numerico**2}')
print(f'Square root of the numeric form value: {numpy.sqrt(valor_formulario_numerico)}')
print(f'Sine of the slider form value: {numpy.sin(valor_formulario_slider)}')
Numeric form value: 5 Numeric form value squared: 25 Square root of the numeric form value: 2.23606797749979 Sine of the slider form value: -0.2879033166650653
MIT
Consignas TP_RLC.ipynb
EmanuelDri/CalculadoraRLC
Generating plots
pi = numpy.pi  # get the value of π from numpy and store it in the variable pi
t = [i for i in numpy.arange(0, 2*pi, pi/1000)]  # generate a vector of 2000 elements,
                                                 # with values between 0 and 2π in steps of π/1000
corriente = numpy.sin(t)   # compute the sine of the values of "t"
tension = numpy.sin(t + numpy.ones(len(t))*pi/2)  # compute the sine of the values
                                                  # of "t" plus a phase angle of π/2
pyplot.plot(t, corriente, t, tension, '-')  # plot the voltage and current as a function of t
pyplot.xlabel("X axis")    # name the X axis
pyplot.ylabel("Y axis")    # name the Y axis
pyplot.title("Sine wave")  # add the title to the plot
pyplot.legend(["Current", "Voltage"])
pyplot.figure(dpi=100)     # adjust the resolution of the generated figure
pyplot.show()              # show the plot once it is ready
_____no_output_____
MIT
Consignas TP_RLC.ipynb
EmanuelDri/CalculadoraRLC
Displaying vectors
To display vectors we use the quiver function: `(origin, X coordinates of the arrow tips, Y coordinates of the arrows, color=colors of the arrows)`
V = numpy.array([[1,1],[-2,2],[4,-7]]) origin = [0,0,0], [0,0,0] # origin point pyplot.grid() pyplot.quiver([0,0,0], [0,0,0], V[:,0], V[:,1], color=['r','b','g'], scale=23) pyplot.show()
_____no_output_____
MIT
Consignas TP_RLC.ipynb
EmanuelDri/CalculadoraRLC
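Tying the quiver example back to the assignment: a rough sketch (with made-up amplitudes, not real form inputs) of how the voltage phasor diagram of the series RLC circuit could be drawn. V_R lies along the current axis, V_L leads it by 90 degrees and V_C lags it by 90 degrees; the fourth arrow is their sum, the EMF phasor.

import numpy
from matplotlib import pyplot

# Hypothetical peak voltages just to illustrate the diagram
V_R, V_L, V_C = 3.0, 5.0, 2.0
phasors = numpy.array([[V_R, 0], [0, V_L], [0, -V_C], [V_R, V_L - V_C]])  # last arrow is the EMF
origin = numpy.zeros(len(phasors)), numpy.zeros(len(phasors))
pyplot.quiver(*origin, phasors[:, 0], phasors[:, 1],
              color=['r', 'b', 'g', 'k'], angles='xy', scale_units='xy', scale=1)
pyplot.xlim(-1, 6); pyplot.ylim(-3, 6)
pyplot.grid()
pyplot.show()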
Displaying formulas
To display formulas we can turn to pyplot `(another library or another function could be used if desired)`. Formulas entered this way are written in LaTeX.
> A good LaTeX formula editor: https://www.latex4technics.com/
To generate the formula we use pyplot.text
* `pyplot.text(x pos, y pos, r'formula in latex', fontsize=font size)`
> * In the example, %i indicates that this symbol should be replaced by an integer value or the contents of a variable placed to the right of the text
> * Changing %i to %f would mean the number outside the text should be interpreted as a decimal value
* `pyplot.axis('off')` indicates that nothing other than the text should be shown
* Finally, `pyplot.show()` displays the equation
#add text pyplot.text(1, 1,r'$\alpha > \beta > %i$'% valor_formulario_slider,fontsize=40) pyplot.axis('off') pyplot.show() #or savefig
_____no_output_____
MIT
Consignas TP_RLC.ipynb
EmanuelDri/CalculadoraRLC
import math

def f(x):
    return math.exp(x)  # integrand

a = -1
b = 1
n = 10
h = (b-a)/n                 # width of each trapezoid
S = (f(a) + f(b))/2         # endpoints carry weight 1/2 in the composite trapezoidal rule
for i in range(1, n):
    S += f(a + i*h)         # interior points carry weight 1
Integral = h*S
print('Integral = %0.4f' %Integral)
Integral = 2.3582
Apache-2.0
Problem_3.ipynb
Nickamaes/Numerical-Methods-58011
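A quick cross-check, not part of the original exercise: the exact value of the integral of e^x from -1 to 1 is e - 1/e, roughly 2.3504, and numpy's built-in composite trapezoidal rule on the same 10 panels should agree with the corrected loop above to the digits shown.

import math
import numpy as np

exact = math.e - math.exp(-1)      # exact value of the integral
x = np.linspace(-1, 1, 11)         # 11 points -> the same 10 panels as above
approx = np.trapz(np.exp(x), x)    # composite trapezoidal rule (numpy.trapezoid in NumPy 2.x)
print('Exact = %0.4f, trapezoid = %0.4f' % (exact, approx))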
CODE HAS BEEN RUN UNTIL HERE. Analysis of mistakes below.
analysis_runs.loc[analysis_runs.success == 0].sort_values('terminated_at_step') #plt.hist(analysis_runs.loc[analysis_runs.success == 0].terminated_at_step, bins=8) len(analysis_runs.loc[analysis_runs.success == 0]) analysis_runs['instance_size'] = analysis_runs.apply(lambda row: str(row.original_length).replace('37', '14').replace('41', '15').replace('43', '16').replace('46','17'), axis=1) import seaborn as sns sns.set(style="darkgrid") bins = [0,5,10,15,20,25,30,35,40,45,50] g = sns.FacetGrid(analysis_runs.loc[analysis_runs.success == 0], col="instance_size", margin_titles=True) g.set(ylim=(0, 100), xlim=(0,50)) g.map(plt.hist, "terminated_at_step", color="steelblue", bins=bins, lw=0) sns.plt.savefig('185k_failures.eps')
_____no_output_____
MIT
train_algo_relocation/analysis/get_solved_new_instances-24h-relocation-14151617-expensive_relocation-185k.ipynb
hsinyic/tusp-archive
https://github.com/PMBio/scLVM/blob/master/tutorials/tcell_demo.ipynb Variational Autoencoder Model (VAE) with latent subspaces, based on: https://arxiv.org/pdf/1812.06190.pdf
#Step 1: import dependencies from tensorflow.keras import layers import numpy as np import matplotlib.pyplot as plt import seaborn as sns import tensorflow as tf from keras import regularizers import time from __future__ import division import tensorflow as tf import tensorflow_probability as tfp tfd = tfp.distributions %matplotlib inline plt.style.use('dark_background') import pandas as pd import os from matplotlib import cm import h5py import scipy as SP import pylab as PL data = os.path.join('data_Tcells_normCounts.h5f') f = h5py.File(data,'r') Y = f['LogNcountsMmus'][:] # gene expression matrix tech_noise = f['LogVar_techMmus'][:] # technical noise genes_het_bool=f['genes_heterogen'][:] # index of heterogeneous genes geneID = f['gene_names'][:] # gene names cellcyclegenes_filter = SP.unique(f['cellcyclegenes_filter'][:].ravel() -1) # idx of cell cycle genes from GO cellcyclegenes_filterCB = f['ccCBall_gene_indices'][:].ravel() -1 # idx of cell cycle genes from cycle base ... # filter cell cycle genes idx_cell_cycle = SP.union1d(cellcyclegenes_filter,cellcyclegenes_filterCB) # determine non-zero counts idx_nonzero = SP.nonzero((Y.mean(0)**2)>0)[0] idx_cell_cycle_noise_filtered = SP.intersect1d(idx_cell_cycle,idx_nonzero) # subset gene expression matrix Ycc = Y[:,idx_cell_cycle_noise_filtered] plt = PL.subplot(1,1,1); PL.imshow(Ycc,cmap=cm.RdBu,vmin=-3,vmax=+3,interpolation='None'); #PL.colorbar(); plt.set_xticks([]); plt.set_yticks([]); PL.xlabel('genes'); PL.ylabel('cells'); X = np.delete(Y, idx_cell_cycle_noise_filtered, axis=1) X = Y #base case U = Y[:,idx_cell_cycle_noise_filtered] mean = np.mean(X, axis=0) variance = np.var(X, axis=0) indx_small_mean = np.argwhere(mean < 0.00001) X = np.delete(X, indx_small_mean, axis=1) mean = np.mean(X, axis=0) variance = np.var(X, axis=0) fano = variance/mean print(fano.shape) indx_small_fano = np.argwhere(fano < 1.0) X = np.delete(X, indx_small_fano, axis=1) mean = np.mean(X, axis=0) variance = np.var(X, axis=0) fano = variance/mean print(fano.shape) #Reconstruction loss def x_given_z(z, output_size): with tf.variable_scope('M/x_given_w_z'): act = tf.nn.leaky_relu h = z h = tf.layers.dense(h, 8, act) h = tf.layers.dense(h, 16, act) h = tf.layers.dense(h, 32, act) h = tf.layers.dense(h, 64, act) h = tf.layers.dense(h, 128, act) h = tf.layers.dense(h, 256, act) loc = tf.layers.dense(h, output_size) #log_variance = tf.layers.dense(x, latent_size) #scale = tf.nn.softplus(log_variance) scale = 0.01*tf.ones(tf.shape(loc)) return tfd.MultivariateNormalDiag(loc, scale) #KL term for z def z_given_x(x, latent_size): #+ with tf.variable_scope('M/z_given_x'): act = tf.nn.leaky_relu h = x h = tf.layers.dense(h, 256, act) h = tf.layers.dense(h, 128, act) h = tf.layers.dense(h, 64, act) h = tf.layers.dense(h, 32, act) h = tf.layers.dense(h, 16, act) h = tf.layers.dense(h, 8, act) loc = tf.layers.dense(h,latent_size) log_variance = tf.layers.dense(h, latent_size) scale = tf.nn.softplus(log_variance) # scale = 0.01*tf.ones(tf.shape(loc)) return tfd.MultivariateNormalDiag(loc, scale) def z_given(latent_size): with tf.variable_scope('M/z_given'): loc = tf.zeros(latent_size) scale = 0.01*tf.ones(tf.shape(loc)) return tfd.MultivariateNormalDiag(loc, scale) #Connect encoder and decoder and define the loss function tf.reset_default_graph() x_in = tf.placeholder(tf.float32, shape=[None, X.shape[1]], name='x_in') x_out = tf.placeholder(tf.float32, shape=[None, X.shape[1]], name='x_out') z_latent_size = 2 beta = 0.000001 #KL_z zI = z_given(z_latent_size) zIx = 
z_given_x(x_in, z_latent_size) zIx_sample = zIx.sample() zIx_mean = zIx.mean() #kl_z = tf.reduce_mean(zIx.log_prob(zIx_sample)- zI.log_prob(zIx_sample)) kl_z = tf.reduce_mean(tfd.kl_divergence(zIx, zI)) #analytical #Reconstruction xIz = x_given_z(zIx_sample, X.shape[1]) rec_out = xIz.mean() rec_loss = tf.losses.mean_squared_error(x_out, rec_out) loss = rec_loss + beta*kl_z optimizer = tf.train.AdamOptimizer(0.001).minimize(loss) #Helper function def batch_generator(features, x, u, batch_size): """Function to create python generator to shuffle and split features into batches along the first dimension.""" idx = np.arange(features.shape[0]) np.random.shuffle(idx) for start_idx in range(0, features.shape[0], batch_size): end_idx = min(start_idx + batch_size, features.shape[0]) part = idx[start_idx:end_idx] yield features[part,:], x[part,:] , u[part, :] n_epochs = 5000 batch_size = X.shape[0] start = time.time() with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for i in range(n_epochs): gen = batch_generator(X, X, U, batch_size) #create batch generator rec_loss_ = 0 kl_z_ = 0 for j in range(np.int(X.shape[0]/batch_size)): x_in_batch, x_out_batch, u_batch = gen.__next__() _, rec_loss__, kl_z__= sess.run([optimizer, rec_loss, kl_z], feed_dict={x_in: x_in_batch, x_out: x_out_batch}) rec_loss_ += rec_loss__ kl_z_ += kl_z__ if (i+1)% 50 == 0 or i == 0: zIx_mean_, rec_out_= sess.run([zIx_mean, rec_out], feed_dict ={x_in:X, x_out:X}) end = time.time() print('epoch: {0}, rec_loss: {1:.3f}, kl_z: {2:.2f}'.format((i+1), rec_loss_/(1+np.int(X.shape[0]/batch_size)), kl_z_/(1+np.int(X.shape[0]/batch_size)))) start = time.time() from sklearn.decomposition import TruncatedSVD svd = TruncatedSVD(n_components=2, n_iter=7, random_state=42) svd.fit(U.T) print(svd.explained_variance_ratio_) print(svd.explained_variance_ratio_.sum()) print(svd.singular_values_) U_ = svd.components_ U_ = U_.T import matplotlib.pyplot as plt fig, axs = plt.subplots(1, 2, figsize=(14,5)) axs[0].scatter(zIx_mean_[:,0],zIx_mean_[:,1], c=U_[:,0], cmap='viridis', s=5.0); axs[0].set_xlabel('z1') axs[0].set_ylabel('z2') fig.suptitle('X1') plt.show() fig, axs = plt.subplots(1, 2, figsize=(14,5)) axs[0].scatter(wIxy_mean_[:,0],wIxy_mean_[:,1], c=U_[:,1], cmap='viridis', s=5.0); axs[0].set_xlabel('w1') axs[0].set_ylabel('w2') axs[1].scatter(zIx_mean_[:,0],zIx_mean_[:,1], c=U_[:,1], cmap='viridis', s=5.0); axs[1].set_xlabel('z1') axs[1].set_ylabel('z2') fig.suptitle('X1') plt.show() error = np.abs(X-rec_out_) plt.plot(np.reshape(error, -1), '*', markersize=0.1); plt.hist(np.reshape(error, -1), bins=50);
_____no_output_____
MIT
VAE_cell_cycle.ipynb
dauparas/tensorflow_examples
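For reference (an addition, not from the original notebook): the analytic value that tfd.kl_divergence returns for the diagonal Gaussians used here, $q = N(\mu, \mathrm{diag}(\sigma^2))$ against the prior $p = N(\mu_0, \mathrm{diag}(\sigma_0^2))$, is the standard closed form

$$D_{\mathrm{KL}}(q\,\|\,p)=\frac{1}{2}\sum_{i=1}^{d}\left[\log\frac{\sigma_{0,i}^{2}}{\sigma_{i}^{2}}+\frac{\sigma_{i}^{2}+(\mu_{i}-\mu_{0,i})^{2}}{\sigma_{0,i}^{2}}-1\right]$$

which is why the "analytical" KL term in the code is cheaper and has no Monte Carlo variance compared with the sampled log-probability difference left in the comments.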
- [Evcxr](https://github.com/google/evcxr): An evaluation context for Rust.
println!("Hello Rust!");
Hello Rust!
MIT
code-snippet/rust/notebook/example-evcxr.ipynb
zhoujiagen/ml_hacks
Predicting Boston Housing Prices Using XGBoost in SageMaker (Hyperparameter Tuning)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level API for hyperparameter tuning, we will look again at the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we are only interested in creating a tuned model and testing its performance.
# Make sure that we use SageMaker 1.x !pip install sagemaker==1.72.0
Collecting sagemaker==1.72.0 Downloading sagemaker-1.72.0.tar.gz (297 kB)  |████████████████████████████████| 297 kB 16.6 MB/s eta 0:00:01 [?25hRequirement already satisfied: boto3>=1.14.12 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.17.55) Requirement already satisfied: numpy>=1.9.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.19.5) Requirement already satisfied: protobuf>=3.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.15.2) Requirement already satisfied: scipy>=0.19.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.5.3) Requirement already satisfied: protobuf3-to-dict>=0.1.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (0.1.5) Collecting smdebug-rulesconfig==0.1.4 Downloading smdebug_rulesconfig-0.1.4-py2.py3-none-any.whl (10 kB) Requirement already satisfied: importlib-metadata>=1.4.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.7.0) Requirement already satisfied: packaging>=20.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (20.9) Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.10.0) Requirement already satisfied: botocore<1.21.0,>=1.20.55 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (1.20.55) Requirement already satisfied: s3transfer<0.5.0,>=0.4.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.4.1) Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.21.0,>=1.20.55->boto3>=1.14.12->sagemaker==1.72.0) (2.8.1) Requirement already satisfied: urllib3<1.27,>=1.25.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.21.0,>=1.20.55->boto3>=1.14.12->sagemaker==1.72.0) (1.26.4) Requirement already satisfied: zipp>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.4.0) Requirement already satisfied: typing-extensions>=3.6.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.7.4.3) Requirement already satisfied: pyparsing>=2.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from packaging>=20.0->sagemaker==1.72.0) (2.4.7) Requirement already satisfied: six>=1.9 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from protobuf>=3.1->sagemaker==1.72.0) (1.15.0) Building wheels for collected packages: sagemaker Building wheel for sagemaker (setup.py) ... 
[?25ldone [?25h Created wheel for sagemaker: filename=sagemaker-1.72.0-py2.py3-none-any.whl size=386358 sha256=918ca6ca3fd8db5bdf10a9a6e2b41c5779c041e02424f8e8ff5b4fb7d7129a62 Stored in directory: /home/ec2-user/.cache/pip/wheels/c3/58/70/85faf4437568bfaa4c419937569ba1fe54d44c5db42406bbd7 Successfully built sagemaker Installing collected packages: smdebug-rulesconfig, sagemaker Attempting uninstall: smdebug-rulesconfig Found existing installation: smdebug-rulesconfig 1.0.1 Uninstalling smdebug-rulesconfig-1.0.1: Successfully uninstalled smdebug-rulesconfig-1.0.1 Attempting uninstall: sagemaker Found existing installation: sagemaker 2.38.0 Uninstalling sagemaker-2.38.0: Successfully uninstalled sagemaker-2.38.0 Successfully installed sagemaker-1.72.0 smdebug-rulesconfig-0.1.4
MIT
6_sagemaker-deployment/Tutorials/Boston Housing - XGBoost (Hyperparameter Tuning) - Low Level.ipynb
JasmineMou/udacity_DL
Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
%matplotlib inline import os import time from time import gmtime, strftime import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.datasets import load_boston import sklearn.model_selection
_____no_output_____
MIT
6_sagemaker-deployment/Tutorials/Boston Housing - XGBoost (Hyperparameter Tuning) - Low Level.ipynb
JasmineMou/udacity_DL
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
import sagemaker from sagemaker import get_execution_role from sagemaker.amazon.amazon_estimator import get_image_uri # This is an object that represents the SageMaker session that we are currently operating in. This # object contains some useful information that we will need to access later such as our region. session = sagemaker.Session() # This is an object that represents the IAM role that we are currently assigned. When we construct # and launch the training job later we will need to tell it what IAM role it should have. Since our # use case is relatively simple we will simply assign the training job the role we currently have. role = get_execution_role()
_____no_output_____
MIT
6_sagemaker-deployment/Tutorials/Boston Housing - XGBoost (Hyperparameter Tuning) - Low Level.ipynb
JasmineMou/udacity_DL
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
boston = load_boston()
_____no_output_____
MIT
6_sagemaker-deployment/Tutorials/Boston Housing - XGBoost (Hyperparameter Tuning) - Low Level.ipynb
JasmineMou/udacity_DL
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
# First we package up the input data and the target variable (the median value) as pandas dataframes. This # will make saving the data to a file a little easier later on. X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names) Y_bos_pd = pd.DataFrame(boston.target) # We split the dataset into 2/3 training and 1/3 testing sets. X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33) # Then we split the training set further into 2/3 training and 1/3 validation sets. X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
_____no_output_____
MIT
6_sagemaker-deployment/Tutorials/Boston Housing - XGBoost (Hyperparameter Tuning) - Low Level.ipynb
JasmineMou/udacity_DL
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3. **My Comment**:To solve the "no space left" issue: remove `-sagemaker-deployment/cache/sentiment_analysis`.
# This is our local data directory. We need to make sure that it exists. data_dir = '../data/boston' if not os.path.exists(data_dir): os.makedirs(data_dir) # We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header # information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and # validation data, it is assumed that the first entry in each row is the target variable. X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
_____no_output_____
MIT
6_sagemaker-deployment/Tutorials/Boston Housing - XGBoost (Hyperparameter Tuning) - Low Level.ipynb
JasmineMou/udacity_DL
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
prefix = 'boston-xgboost-tuning-LL' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
_____no_output_____
MIT
6_sagemaker-deployment/Tutorials/Boston Housing - XGBoost (Hyperparameter Tuning) - Low Level.ipynb
JasmineMou/udacity_DL
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct our XGBoost model and train it. Unlike in the previous notebooks, instead of training a single model, we will use SageMakers hyperparameter tuning functionality to train multiple models and use the one that performs the best on the validation set. Set up the training jobFirst, we will set up a training job for our model. This is very similar to the way in which we constructed the training job in previous notebooks. Essentially this describes the *base* training job from which SageMaker will create refinements by changing some hyperparameters during the hyperparameter tuning job.
# We will need to know the name of the container that we want to use for training. SageMaker provides # a nice utility method to construct this for us. container = get_image_uri(session.boto_region_name, 'xgboost') # We now specify the parameters we wish to use for our training job training_params = {} # We need to specify the permissions that this training job will have. For our purposes we can use # the same permissions that our current SageMaker session has. training_params['RoleArn'] = role # Here we describe the algorithm we wish to use. The most important part is the container which # contains the training code. training_params['AlgorithmSpecification'] = { "TrainingImage": container, "TrainingInputMode": "File" } # We also need to say where we would like the resulting model artifacts stored. training_params['OutputDataConfig'] = { "S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output" } # We also need to set some parameters for the training job itself. Namely we need to describe what sort of # compute instance we wish to use along with a stopping condition to handle the case that there is # some sort of error and the training script doesn't terminate. training_params['ResourceConfig'] = { "InstanceCount": 1, "InstanceType": "ml.m4.xlarge", "VolumeSizeInGB": 5 } training_params['StoppingCondition'] = { "MaxRuntimeInSeconds": 86400 } # Next we set the algorithm specific hyperparameters. In this case, since we are setting up # a training job which will serve as the base training job for the eventual hyperparameter # tuning job, we only specify the _static_ hyperparameters. That is, the hyperparameters that # we do _not_ want SageMaker to change. training_params['StaticHyperParameters'] = { "gamma": "4", "subsample": "0.8", "objective": "reg:linear", "early_stopping_rounds": "10", "num_round": "200" } # Now we need to tell SageMaker where the data should be retrieved from. training_params['InputDataConfig'] = [ { "ChannelName": "train", "DataSource": { "S3DataSource": { "S3DataType": "S3Prefix", "S3Uri": train_location, "S3DataDistributionType": "FullyReplicated" } }, "ContentType": "csv", "CompressionType": "None" }, { "ChannelName": "validation", "DataSource": { "S3DataSource": { "S3DataType": "S3Prefix", "S3Uri": val_location, "S3DataDistributionType": "FullyReplicated" } }, "ContentType": "csv", "CompressionType": "None" } ]
'get_image_uri' method will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2. There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='1.0-1'. For example: get_image_uri(region, 'xgboost', '1.0-1').
MIT
6_sagemaker-deployment/Tutorials/Boston Housing - XGBoost (Hyperparameter Tuning) - Low Level.ipynb
JasmineMou/udacity_DL
Set up the tuning jobNow that the *base* training job has been set up, we can describe the tuning job that we would like SageMaker to perform. In particular, like in the high level notebook, we will specify which hyperparameters we wish SageMaker to change and what range of values they may take on.In addition, we specify the *number* of models to construct (`max_jobs`) and the number of those that can be trained in parallel (`max_parallel_jobs`). In the cell below we have chosen to train `20` models, of which we ask that SageMaker train `3` at a time in parallel. Note that this results in a total of `20` training jobs being executed which can take some time, in this case almost a half hour. With more complicated models this can take even longer so be aware!
# We need to construct a dictionary which specifies the tuning job we want SageMaker to perform tuning_job_config = { # First we specify which hyperparameters we want SageMaker to be able to vary, # and we specify the type and range of the hyperparameters. "ParameterRanges": { "CategoricalParameterRanges": [], "ContinuousParameterRanges": [ { "MaxValue": "0.5", "MinValue": "0.05", "Name": "eta" }, ], "IntegerParameterRanges": [ { "MaxValue": "12", "MinValue": "3", "Name": "max_depth" }, { "MaxValue": "8", "MinValue": "2", "Name": "min_child_weight" } ]}, # We also need to specify how many models should be fit and how many can be fit in parallel "ResourceLimits": { "MaxNumberOfTrainingJobs": 20, "MaxParallelTrainingJobs": 3 }, # Here we specify how SageMaker should update the hyperparameters as new models are fit "Strategy": "Bayesian", # And lastly we need to specify how we'd like to determine which models are better or worse "HyperParameterTuningJobObjective": { "MetricName": "validation:rmse", "Type": "Minimize" } }
_____no_output_____
MIT
6_sagemaker-deployment/Tutorials/Boston Housing - XGBoost (Hyperparameter Tuning) - Low Level.ipynb
JasmineMou/udacity_DL
Execute the tuning jobNow that we've built the data structures that describe the tuning job we want SageMaker to execute, it is time to actually start the job.
# First we need to choose a name for the job. This is useful if we want to recall information about our
# tuning job at a later date. Note that SageMaker requires a tuning job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
tuning_job_name = "tuning-job" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())

# And now we ask SageMaker to create (and execute) the tuning job
session.sagemaker_client.create_hyper_parameter_tuning_job(HyperParameterTuningJobName = tuning_job_name,
                                                           HyperParameterTuningJobConfig = tuning_job_config,
                                                           TrainingJobDefinition = training_params)
_____no_output_____
MIT
6_sagemaker-deployment/Tutorials/Boston Housing - XGBoost (Hyperparameter Tuning) - Low Level.ipynb
JasmineMou/udacity_DL
The tuning job has now been created by SageMaker and is currently running. Since we need the output of the tuning job, we may wish to wait until it has finished. We can do so by asking the SageMaker session to wait on the tuning job, which polls its status until the job terminates.
session.wait_for_tuning_job(tuning_job_name)
............................................................................................................................................................................................................................................................................................................................................................!
MIT
6_sagemaker-deployment/Tutorials/Boston Housing - XGBoost (Hyperparameter Tuning) - Low Level.ipynb
JasmineMou/udacity_DL
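If you would like to inspect the individual training jobs the tuner launched, while it runs or afterwards, a sketch along the following lines should work. It relies on the boto3 SageMaker client's list_training_jobs_for_hyper_parameter_tuning_job call, which exists in current SDK versions but is not used in the original notebook, so treat it as an optional extra.

# List the training jobs created by the tuning job, best objective (lowest RMSE) first
response = session.sagemaker_client.list_training_jobs_for_hyper_parameter_tuning_job(
    HyperParameterTuningJobName=tuning_job_name,
    SortBy='FinalObjectiveMetricValue',
    SortOrder='Ascending',
    MaxResults=10)

for summary in response['TrainingJobSummaries']:
    metric = summary.get('FinalHyperParameterTuningJobObjectiveMetric', {})
    print(summary['TrainingJobName'], summary['TrainingJobStatus'], metric.get('Value'))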
Build the modelNow that the tuning job has finished, SageMaker has fit a number of models, the results of which are stored in a data structure which we can access using the name of the tuning job.
tuning_job_info = session.sagemaker_client.describe_hyper_parameter_tuning_job(HyperParameterTuningJobName=tuning_job_name)
_____no_output_____
MIT
6_sagemaker-deployment/Tutorials/Boston Housing - XGBoost (Hyperparameter Tuning) - Low Level.ipynb
JasmineMou/udacity_DL
Among the pieces of information included in the `tuning_job_info` object is the name of the training job which performed best out of all of the models that SageMaker fit to our data. Using this training job name we can get access to the resulting model artifacts, from which we can construct a model.
# We begin by asking SageMaker to describe for us the results of the best training job. The data # structure returned contains a lot more information than we currently need, try checking it out # yourself in more detail. best_training_job_name = tuning_job_info['BestTrainingJob']['TrainingJobName'] training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=best_training_job_name) model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts'] # Just like when we created a training job, the model name must be unique model_name = best_training_job_name + "-model" # We also need to tell SageMaker which container should be used for inference and where it should # retrieve the model artifacts from. In our case, the xgboost container that we used for training # can also be used for inference. primary_container = { "Image": container, "ModelDataUrl": model_artifacts } # And lastly we construct the SageMaker model model_info = session.sagemaker_client.create_model( ModelName = model_name, ExecutionRoleArn = role, PrimaryContainer = primary_container)
_____no_output_____
MIT
6_sagemaker-deployment/Tutorials/Boston Housing - XGBoost (Hyperparameter Tuning) - Low Level.ipynb
JasmineMou/udacity_DL
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique. transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # Now we construct the data structure which will describe the batch transform job. transform_request = \ { "TransformJobName": transform_job_name, # This is the name of the model that we created earlier. "ModelName": model_name, # This describes how many compute instances should be used at once. If you happen to be doing a very large # batch transform job it may be worth running multiple compute instances at once. "MaxConcurrentTransforms": 1, # This says how big each individual request sent to the model should be, at most. One of the things that # SageMaker does in the background is to split our data up into chunks so that each chunks stays under # this size limit. "MaxPayloadInMB": 6, # Sometimes we may want to send only a single sample to our endpoint at a time, however in this case each of # the chunks that we send should contain multiple samples of our input data. "BatchStrategy": "MultiRecord", # This next object describes where the output data should be stored. Some of the more advanced options which # we don't cover here also describe how SageMaker should collect output from various batches. "TransformOutput": { "S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix) }, # Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in # addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to # split our data up into chunks, it needs to know how the individual samples in our data file appear. In our # case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what # type of data is being sent, in this case csv, so that it can properly serialize the data. "TransformInput": { "ContentType": "text/csv", "SplitType": "Line", "DataSource": { "S3DataSource": { "S3DataType": "S3Prefix", "S3Uri": test_location, } } }, # And lastly we tell SageMaker what sort of compute instance we would like it to use. "TransformResources": { "InstanceType": "ml.m4.xlarge", "InstanceCount": 1 } }
_____no_output_____
MIT
6_sagemaker-deployment/Tutorials/Boston Housing - XGBoost (Hyperparameter Tuning) - Low Level.ipynb
JasmineMou/udacity_DL
Execute the batch transform job
Now that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background so that if we want to wait for the transform job to terminate (and ensure the job is progressing) we can ask SageMaker to wait for the transform job to complete.
transform_response = session.sagemaker_client.create_transform_job(**transform_request) transform_desc = session.wait_for_transform_job(transform_job_name)
............................................................!
MIT
6_sagemaker-deployment/Tutorials/Boston Housing - XGBoost (Hyperparameter Tuning) - Low Level.ipynb
JasmineMou/udacity_DL
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix) !aws s3 cp --recursive $transform_output $data_dir
download: s3://sagemaker-us-east-1-888201120197/boston-xgboost-tuning-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out
MIT
6_sagemaker-deployment/Tutorials/Boston Housing - XGBoost (Hyperparameter Tuning) - Low Level.ipynb
JasmineMou/udacity_DL
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) plt.scatter(Y_test, Y_pred) plt.xlabel("Median Price") plt.ylabel("Predicted Price") plt.title("Median Price vs Predicted Price")
_____no_output_____
MIT
6_sagemaker-deployment/Tutorials/Boston Housing - XGBoost (Hyperparameter Tuning) - Low Level.ipynb
JasmineMou/udacity_DL
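To attach a number to the scatter plot, here is a small addition (not in the original notebook) that computes the test set RMSE, the same metric the tuning job minimized on the validation data.

from sklearn.metrics import mean_squared_error

test_rmse = np.sqrt(mean_squared_error(Y_test, Y_pred))
print("Test RMSE: {:.3f}".format(test_rmse))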
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
# First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir
_____no_output_____
MIT
6_sagemaker-deployment/Tutorials/Boston Housing - XGBoost (Hyperparameter Tuning) - Low Level.ipynb
JasmineMou/udacity_DL
Question 1:
Create a function that takes a list of non-negative integers and strings and return a new list without the strings.

Examples
filter_list([1, 2, "a", "b"]) ➞ [1, 2]
filter_list([1, "a", "b", 0, 15]) ➞ [1, 0, 15]
filter_list([1, 2, "aasf", "1", "123", 123]) ➞ [1, 2, 123]
_____no_output_____
MIT
Python Programming Basic Assignment/Assignment_18.ipynb
kpsanjeet/Python-Programming-Basic-Assignment
def filter_list(lst):
    return list(filter(lambda x: type(x) == int, lst))

print(filter_list([1, 2, "a", "b"]))
print(filter_list([1, "a", "b", 0, 15]))
print(filter_list([1, 2, "aasf", "1", "123", 123]))
Question 2:
_____no_output_____
MIT
Python Programming Basic Assignment/Assignment_18.ipynb
kpsanjeet/Python-Programming-Basic-Assignment
The "Reverser" takes a string as input and returns that string in reverse order, with theopposite case.Examplesreverse("Hello World") ➞ "DLROw OLLEh"reverse("ReVeRsE") ➞ "eSrEvEr"reverse("Radar") ➞ "RADAr"
Answer :
_____no_output_____
MIT
Python Programming Basic Assignment/Assignment_18.ipynb
kpsanjeet/Python-Programming-Basic-Assignment
def reverse(string):
    return string[::-1].swapcase()

print(reverse("Hello World"))
print(reverse("ReVeRsE"))
print(reverse("Radar"))
Question 3:
_____no_output_____
MIT
Python Programming Basic Assignment/Assignment_18.ipynb
kpsanjeet/Python-Programming-Basic-Assignment
You can assign variables from lists like this:

lst = [1, 2, 3, 4, 5, 6]
first = lst[0]
middle = lst[1:-1]
last = lst[-1]
print(first) ➞ outputs 1
print(middle) ➞ outputs [2, 3, 4, 5]
print(last) ➞ outputs 6

With Python 3, you can assign variables from lists in a much more succinct way. Create variables first, middle and last from the given list using destructuring assignment (check the Resources tab for some examples), where:

first ➞ 1
middle ➞ [2, 3, 4, 5]
last ➞ 6

Your task is to unpack the list writeyourcodehere into three variables, being first, middle, and last, with middle being everything in between the first and last element. Then print all three variables.
Answer :
_____no_output_____
MIT
Python Programming Basic Assignment/Assignment_18.ipynb
kpsanjeet/Python-Programming-Basic-Assignment
def unpack_list(lst):
    first = lst[0]
    middle = lst[1:-1]
    last = lst[-1]
    return first, middle, last

lst = [1, 2, 3, 4, 5, 6]
first, middle, last = unpack_list(lst)
print(first)
print(middle)
print(last)
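The answer above unpacks the list inside a helper function; for completeness, the starred-assignment (destructuring) form the question points to gives the same result:

lst = [1, 2, 3, 4, 5, 6]
first, *middle, last = lst   # extended iterable unpacking
print(first)    # 1
print(middle)   # [2, 3, 4, 5]
print(last)     # 6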
Question 4:
_____no_output_____
MIT
Python Programming Basic Assignment/Assignment_18.ipynb
kpsanjeet/Python-Programming-Basic-Assignment
Write a function that calculates the factorial of a number recursively.

Examples
factorial(5) ➞ 120
factorial(3) ➞ 6
factorial(1) ➞ 1
factorial(0) ➞ 1
def factorial(num):
    if num < 0:
        return f"{num} is a negative number."
    if num == 0 or num == 1:
        return 1
    else:
        return num*factorial(num-1)

print(factorial(5))
print(factorial(3))
print(factorial(1))
print(factorial(0))
120 6 1 1
MIT
Python Programming Basic Assignment/Assignment_18.ipynb
kpsanjeet/Python-Programming-Basic-Assignment
Question 5:
Write a function that moves all elements of one type to the end of the list.

Examples
move_to_end([1, 3, 2, 4, 4, 1], 1) ➞ [3, 2, 4, 4, 1, 1] # Move all the 1s to the end of the array.
move_to_end([7, 8, 9, 1, 2, 3, 4], 9) ➞ [7, 8, 1, 2, 3, 4, 9]
move_to_end(["a", "a", "a", "b"], "a") ➞ ["b", "a", "a", "a"]
_____no_output_____
MIT
Python Programming Basic Assignment/Assignment_18.ipynb
kpsanjeet/Python-Programming-Basic-Assignment
Answer :
def move_to_end(lst, x):
    # move every occurrence of x to the end of the list
    count = lst.count(x)
    lst = [item for item in lst if item != x]
    lst.extend([x] * count)
    return lst

print(move_to_end([1, 3, 2, 4, 4, 1], 1))
print(move_to_end([7, 8, 9, 1, 2, 3, 4], 9))
print(move_to_end(["a", "a", "a", "b"], "a"))
[3, 2, 4, 4, 1, 1]
[7, 8, 1, 2, 3, 4, 9]
['b', 'a', 'a', 'a']
MIT
Python Programming Basic Assignment/Assignment_18.ipynb
kpsanjeet/Python-Programming-Basic-Assignment
Get Training Data
# get training data train_df = pd.read_csv(os.path.join(ROOT_DIR,DATA_DIR,FEATURE_SET,'train.csv.gz')) X_train = train_df.drop(ID_VAR + [TARGET_VAR],axis=1) y_train = train_df.loc[:,TARGET_VAR] X_train.shape y_train.shape y_train[:10]
_____no_output_____
MIT
eda/hyper-parameter_tuning/random_forest-Level0.ipynb
jimthompson5802/model-stacking-framework
Setup pipeline for hyper-parameter tuning
# set up pipeline pipe = Pipeline([('this_model',ThisModel(n_jobs=-1))])
_____no_output_____
MIT
eda/hyper-parameter_tuning/random_forest-Level0.ipynb
jimthompson5802/model-stacking-framework
this_scorer = make_scorer(lambda y, y_hat: np.sqrt(mean_squared_error(y,y_hat)),greater_is_better=False)
def kag_rmsle(y,y_hat): return np.sqrt(mean_squared_error(y,y_hat)) this_scorer = make_scorer(kag_rmsle, greater_is_better=False) grid_search = RandomizedSearchCV(pipe, param_distributions=PARAM_GRID, scoring=this_scorer,cv=5, n_iter=N_ITER, verbose=2, n_jobs=1, refit=False) grid_search.fit(X_train,y_train) grid_search.best_params_ grid_search.best_score_ df = pd.DataFrame(grid_search.cv_results_).sort_values('rank_test_score') df hyper_parameters = dict(FeatureSet=FEATURE_SET,cv_run=df) with open(os.path.join(CONFIG['ROOT_DIR'],'eda','hyper-parameter_tuning',MODEL_ALGO),'wb') as f: pickle.dump(hyper_parameters,f)
_____no_output_____
MIT
eda/hyper-parameter_tuning/random_forest-Level0.ipynb
jimthompson5802/model-stacking-framework
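Since refit=False, the search above never trains a final model on the full training set; below is a sketch of how the winning settings could be refit. It assumes the PARAM_GRID keys are prefixed with the pipeline step name 'this_model__' (as they must be for the search to address the wrapped estimator), and ThisModel stands in for whatever estimator the notebook imported, so treat the names as placeholders.

# Strip the pipeline prefix from the best parameters and refit on all training data
best_params = {k.replace('this_model__', ''): v for k, v in grid_search.best_params_.items()}
final_model = ThisModel(n_jobs=-1, **best_params)
final_model.fit(X_train, y_train)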
We can see in the next two cells that the parallelized code cut 16 minutes off the computation time here. I haven't run it on anything larger, so I don't know how it scales yet. There is another example past this as well.
cProfile.runctx('get_phenotype_graph_optimized(database, paramslist)', globals(), {'database':database,'paramslist':paramslist}) cProfile.runctx('get_phenotype_graph_parallel(database, paramslist, num_processes)', globals(), {'database':database,'paramslist':paramslist, 'num_processes':8}) database = Database("/home/elizabeth/Desktop/ACDC/ACDC_FullconnE.db") network = Network("/home/elizabeth/Desktop/ACDC/ACDC_FullconnE") parameter_graph = ParameterGraph(network) print(parameter_graph.size()) AP35 = {"Hb":[0,2], "Gt":2, "Kr":0, "Kni":0} AP37 = {"Hb":2, "Gt":[0,1], "Kr":0, "Kni":0} AP40 = {"Hb":2, "Gt":1, "Kr":[0,1], "Kni":0} #edit AP45 = {"Hb":[0,1], "Gt":1, "Kr":2, "Kni":0} #edit AP47 = {"Hb":[0,1], "Gt":0, "Kr":2, "Kni":0} AP51 = {"Hb":1, "Gt":0, "Kr":2, "Kni":[0,1]} #edit AP57 = {"Hb":1, "Gt":0, "Kr":[0,1], "Kni":2} #edit AP61 = {"Hb":0, "Gt":0, "Kr":[0,1], "Kni":2} AP63 = {"Hb":0, "Gt":[0,1], "Kr":1, "Kni":2} #edit AP67 = {"Hb":0, "Gt":2, "Kr":1, "Kni":[0,1]} #edit E = [[AP37], [AP40], [AP45], [AP47], [AP51], [AP57], [AP61], [AP63], [AP67]] paramslist = get_paramslist(database, E, '<')
_____no_output_____
MIT
notebooks/Profiling_parallel_code.ipynb
Eandreas1857/dsgrn_acdc
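The flat cProfile dumps get hard to compare at this size; a small, generic alternative (my addition, using the same call and objects as above) profiles the run and prints only the ten most expensive functions by cumulative time.

import cProfile
import pstats

profiler = cProfile.Profile()
profiler.enable()
get_phenotype_graph_parallel(database, paramslist, 8)
profiler.disable()

stats = pstats.Stats(profiler)
stats.sort_stats('cumulative').print_stats(10)  # ten most expensive calls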
This one is smaller than the first example, but I feel it shows promising results in speeding up computation time.
cProfile.runctx('get_phenotype_graph_optimized(database, paramslist)', globals(), {'database':database,'paramslist':paramslist}) cProfile.runctx('get_phenotype_graph_parallel(database, paramslist, num_processes)', globals(), {'database':database,'paramslist':paramslist, 'num_processes':8})
1871 function calls in 0.325 seconds Ordered by: standard name ncalls tottime percall cumtime percall filename:lineno(function) 15 0.000 0.000 0.000 0.000 <frozen importlib._bootstrap>:1009(_handle_fromlist) 15 0.000 0.000 0.000 0.000 <frozen importlib._bootstrap>:416(parent) 1 0.004 0.004 0.325 0.325 <string>:1(<module>) 1 0.003 0.003 0.321 0.321 PhenotypeGraphFun.py:252(get_phenotype_graph_parallel) 8 0.000 0.000 0.000 0.000 __init__.py:212(_acquireLock) 8 0.000 0.000 0.000 0.000 __init__.py:221(_releaseLock) 11 0.000 0.000 0.000 0.000 _weakrefset.py:81(add) 4 0.000 0.000 0.000 0.000 connection.py:117(__init__) 2 0.000 0.000 0.000 0.000 connection.py:506(Pipe) 2 0.000 0.000 0.000 0.000 context.py:109(SimpleQueue) 1 0.000 0.000 0.028 0.028 context.py:114(Pool) 6 0.000 0.000 0.000 0.000 context.py:186(get_context) 4 0.000 0.000 0.000 0.000 context.py:196(get_start_method) 1 0.000 0.000 0.000 0.000 context.py:232(get_context) 8 0.000 0.000 0.024 0.003 context.py:274(_Popen) 4 0.000 0.000 0.000 0.000 context.py:64(Lock) 32 0.000 0.000 0.001 0.000 iostream.py:197(schedule) 16 0.000 0.000 0.005 0.000 iostream.py:337(flush) 32 0.000 0.000 0.000 0.000 iostream.py:93(_event_pipe) 8 0.000 0.000 0.001 0.000 pool.py:152(Process) 1 0.000 0.000 0.028 0.028 pool.py:155(__init__) 1 0.000 0.000 0.027 0.027 pool.py:227(_repopulate_pool) 1 0.000 0.000 0.000 0.000 pool.py:250(_setup_queues) 1 0.000 0.000 0.000 0.000 pool.py:367(map_async) 1 0.000 0.000 0.000 0.000 pool.py:375(_map_async) 1 0.000 0.000 0.000 0.000 pool.py:631(__init__) 1 0.000 0.000 0.000 0.000 pool.py:639(ready) 1 0.000 0.000 0.290 0.290 pool.py:647(wait) 1 0.000 0.000 0.290 0.290 pool.py:650(get) 1 0.000 0.000 0.000 0.000 pool.py:676(__init__) 8 0.000 0.000 0.024 0.003 popen_fork.py:16(__init__) 220 0.000 0.000 0.001 0.000 popen_fork.py:25(poll) 8 0.000 0.000 0.018 0.002 popen_fork.py:67(_launch) 8 0.000 0.000 0.026 0.003 process.py:101(start) 8 0.000 0.000 0.000 0.000 process.py:180(name) 8 0.000 0.000 0.000 0.000 process.py:184(name) 8 0.000 0.000 0.000 0.000 process.py:196(daemon) 4 0.000 0.000 0.000 0.000 process.py:36(current_process) 8 0.000 0.000 0.002 0.000 process.py:53(_cleanup) 8 0.000 0.000 0.000 0.000 process.py:72(__init__) 16 0.000 0.000 0.000 0.000 process.py:85(<genexpr>) 8 0.000 0.000 0.000 0.000 process.py:90(_check_closed) 2 0.000 0.000 0.000 0.000 queues.py:330(__init__) 32 0.000 0.000 0.000 0.000 random.py:224(_randbelow) 32 0.000 0.000 0.000 0.000 random.py:256(choice) 32 0.000 0.000 0.000 0.000 socket.py:342(send) 4 0.000 0.000 0.000 0.000 synchronize.py:114(_make_name) 4 0.000 0.000 0.000 0.000 synchronize.py:161(__init__) 4 0.000 0.000 0.000 0.000 synchronize.py:50(__init__) 4 0.000 0.000 0.000 0.000 synchronize.py:90(_make_methods) 4 0.000 0.000 0.000 0.000 tempfile.py:142(rng) 4 0.000 0.000 0.000 0.000 tempfile.py:153(__next__) 4 0.000 0.000 0.000 0.000 tempfile.py:156(<listcomp>) 48 0.000 0.000 0.000 0.000 threading.py:1050(_wait_for_tstate_lock) 48 0.000 0.000 0.000 0.000 threading.py:1092(is_alive) 3 0.000 0.000 0.000 0.000 threading.py:1116(daemon) 3 0.000 0.000 0.000 0.000 threading.py:1131(daemon) 3 0.000 0.000 0.000 0.000 threading.py:1225(current_thread) 20 0.000 0.000 0.000 0.000 threading.py:216(__init__) 20 0.000 0.000 0.000 0.000 threading.py:240(__enter__) 20 0.000 0.000 0.000 0.000 threading.py:243(__exit__) 19 0.000 0.000 0.000 0.000 threading.py:249(_release_save) 19 0.000 0.000 0.000 0.000 threading.py:252(_acquire_restore) 19 0.000 0.000 0.000 0.000 threading.py:255(_is_owned) 19 0.000 0.000 
0.294 0.015 threading.py:264(wait) 20 0.000 0.000 0.000 0.000 threading.py:499(__init__) 55 0.000 0.000 0.000 0.000 threading.py:507(is_set) 20 0.000 0.000 0.294 0.015 threading.py:534(wait) 3 0.000 0.000 0.000 0.000 threading.py:728(_newname) 3 0.000 0.000 0.000 0.000 threading.py:763(__init__) 3 0.000 0.000 0.000 0.000 threading.py:834(start) 4 0.000 0.000 0.000 0.000 util.py:148(register_after_fork) 9 0.000 0.000 0.000 0.000 util.py:163(__init__) 8 0.000 0.000 0.005 0.001 util.py:410(_flush_std_streams) 12 0.000 0.000 0.000 0.000 util.py:48(debug) 4 0.000 0.000 0.000 0.000 weakref.py:165(__setitem__) 4 0.000 0.000 0.000 0.000 weakref.py:336(__new__) 4 0.000 0.000 0.000 0.000 weakref.py:341(__init__) 4 0.000 0.000 0.000 0.000 {built-in method __new__ of type object at 0x565487df0240} 16 0.000 0.000 0.000 0.000 {built-in method _imp.lock_held} 39 0.000 0.000 0.000 0.000 {built-in method _thread.allocate_lock} 3 0.000 0.000 0.000 0.000 {built-in method _thread.get_ident} 3 0.000 0.000 0.000 0.000 {built-in method _thread.start_new_thread} 1 0.000 0.000 0.000 0.000 {built-in method builtins.divmod} 1 0.000 0.000 0.325 0.325 {built-in method builtins.exec} 4 0.000 0.000 0.000 0.000 {built-in method builtins.getattr} 16 0.000 0.000 0.000 0.000 {built-in method builtins.hasattr} 4 0.000 0.000 0.000 0.000 {built-in method builtins.id} 9 0.000 0.000 0.000 0.000 {built-in method builtins.isinstance} 37 0.000 0.000 0.000 0.000 {built-in method builtins.len} 26 0.000 0.000 0.000 0.000 {built-in method builtins.next} 1 0.000 0.000 0.000 0.000 {built-in method from_iterable} 8 0.000 0.000 0.000 0.000 {built-in method posix.close} 8 0.017 0.002 0.018 0.002 {built-in method posix.fork} 29 0.000 0.000 0.000 0.000 {built-in method posix.getpid} 10 0.000 0.000 0.000 0.000 {built-in method posix.pipe} 220 0.001 0.000 0.001 0.000 {built-in method posix.waitpid} 20 0.000 0.000 0.000 0.000 {method '__enter__' of '_thread.lock' objects} 20 0.000 0.000 0.000 0.000 {method '__exit__' of '_thread.lock' objects} 8 0.000 0.000 0.000 0.000 {method 'acquire' of '_thread.RLock' objects} 124 0.294 0.002 0.294 0.002 {method 'acquire' of '_thread.lock' objects} 19 0.000 0.000 0.000 0.000 {method 'add' of 'set' objects} 51 0.000 0.000 0.000 0.000 {method 'append' of 'collections.deque' objects} 8 0.000 0.000 0.000 0.000 {method 'append' of 'list' objects} 32 0.000 0.000 0.000 0.000 {method 'bit_length' of 'int' objects} 8 0.000 0.000 0.000 0.000 {method 'copy' of 'dict' objects} 1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects} 8 0.000 0.000 0.000 0.000 {method 'get' of 'dict' objects} 53 0.000 0.000 0.000 0.000 {method 'getrandbits' of '_random.Random' objects} 12 0.000 0.000 0.000 0.000 {method 'join' of 'str' objects} 1 0.000 0.000 0.000 0.000 {method 'put' of '_queue.SimpleQueue' objects} 8 0.000 0.000 0.000 0.000 {method 'release' of '_thread.RLock' objects} 19 0.000 0.000 0.000 0.000 {method 'release' of '_thread.lock' objects} 8 0.000 0.000 0.000 0.000 {method 'replace' of 'str' objects} 15 0.000 0.000 0.000 0.000 {method 'rpartition' of 'str' objects}
MIT
notebooks/Profiling_parallel_code.ipynb
Eandreas1857/dsgrn_acdc
Cell Painting morphological (CP) and L1000 gene expression (GE) profiles for the following datasets: - **CDRP**-BBBC047-Bray-CP-GE (Cell line: U2OS) : * $\bf{CP}$ There are 30,430 unique compounds for CP dataset, median number of replicates --> 4 * $\bf{GE}$ There are 21,782 unique compounds for GE dataset, median number of replicates --> 3 * 20,131 compounds are present in both datasets.- **CDRP-bio**-BBBC036-Bray-CP-GE (Cell line: U2OS) : * $\bf{CP}$ There are 2,242 unique compounds for CP dataset, median number of replicates --> 8 * $\bf{GE}$ There are 1,917 unique compounds for GE dataset, median number of replicates --> 2 * 1916 compounds are present in both datasets. - **LUAD**-BBBC041-Caicedo-CP-GE (Cell line: A549) : * $\bf{CP}$ There are 593 unique alleles for CP dataset, median number of replicates --> 8 * $\bf{GE}$ There are 529 unique alleles for GE dataset, median number of replicates --> 8 * 525 alleles are present in both datasets. - **TA-ORF**-BBBC037-Rohban-CP-GE (Cell line: U2OS) : * $\bf{CP}$ There are 323 unique alleles for CP dataset, median number of replicates --> 5 * $\bf{GE}$ There are 327 unique alleles for GE dataset, median number of replicates --> 2 * 150 alleles are present in both datasets. - **LINCS**-Pilot1-CP-GE (Cell line: U2OS) : * $\bf{CP}$ There are 1570 unique compounds across 7 doses for CP dataset, median number of replicates --> 5 * $\bf{GE}$ There are 1402 unique compounds for GE dataset, median number of replicates --> 3 * $N_{p/d}$: 6984 compounds are present in both datasets.-------------------------------------------- Link to the processed profiles: https://cellpainting-datasets.s3.us-east-1.amazonaws.com/Rosetta-GE-CP
%matplotlib notebook %load_ext autoreload %autoreload 2 import numpy as np import scipy.spatial import pandas as pd import sklearn.decomposition import matplotlib.pyplot as plt import seaborn as sns import os from cmapPy.pandasGEXpress.parse import parse from utils.replicateCorrs import replicateCorrs from utils.saveAsNewSheetToExistingFile import saveAsNewSheetToExistingFile,saveDF_to_CSV_GZ_no_timestamp from importlib import reload from utils.normalize_funcs import standardize_per_catX # sns.set_style("whitegrid") # np.__version__ pd.__version__
_____no_output_____
BSD-3-Clause
0-preprocess_datasets.ipynb
carpenterlab/2021_Haghighi_submitted
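A tiny illustration of how the unique-perturbation counts, median replicate numbers and CP/GE overlaps quoted above can be derived from the replicate-level tables. The column names are hypothetical stand-ins, since each dataset uses slightly different metadata fields.

import pandas as pd

def summarize(df, pert_col):
    """Number of unique perturbations and the median replicate count per perturbation."""
    counts = df.groupby(pert_col).size()
    return counts.shape[0], counts.median()

# hypothetical usage, assuming 'Metadata_broad_sample' / 'pert_id' style columns:
# n_cp, med_cp = summarize(cp_profiles, 'Metadata_broad_sample')
# n_ge, med_ge = summarize(ge_profiles, 'pert_id')
# overlap = len(set(cp_profiles['Metadata_broad_sample']) & set(ge_profiles['pert_id']))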
Input / output files:
- **CDRPBIO**-BBBC047-Bray-CP-GE (Cell line: U2OS) :
  * $\bf{CP}$
    * Input:
    * Output:
  * $\bf{GE}$
    * Input: .mat files that are generated using https://github.com/broadinstitute/2014_wawer_pnas
    * Output:
- **LUAD**-BBBC041-Caicedo-CP-GE (Cell line: A549) :
  * $\bf{CP}$
    * Input:
    * Output:
  * $\bf{GE}$
    * Input:
    * Output:
- **TA-ORF**-BBBC037-Rohban-CP-GE (Cell line: U2OS) :
  * $\bf{CP}$
    * Input:
    * Output:
  * $\bf{GE}$
    * Input: https://data.broadinstitute.org/icmap/custom/TA/brew/pc/TA.OE005_U2OS_72H/
    * Output:

Reformat Cell-Painting Data Sets
- CDRP and TA-ORF are in /storage/data/marziehhaghighi/Rosetta/raw-profiles/
- Luad is already processed by Juan; the source of the files is at /storage/luad/profiles_cp in case you want to reformat
fileName='RepCorrDF' ### dirs on gpu cluster # rawProf_dir='/storage/data/marziehhaghighi/Rosetta/raw-profiles/' # procProf_dir='/home/marziehhaghighi/workspace_rosetta/workspace/' ### dirs on ec2 rawProf_dir='/home/ubuntu/bucket/projects/2018_04_20_Rosetta/workspace/raw-profiles/' # procProf_dir='./' procProf_dir='/home/ubuntu/bucket/projects/2018_04_20_Rosetta/workspace/' # s3://imaging-platform/projects/2018_04_20_Rosetta/workspace/preprocessed_data # aws s3 sync preprocessed_data s3://cellpainting-datasets/Rosetta-GE-CP/preprocessed_data --profile jumpcpuser filename='../../results/RepCor/'+fileName+'.xlsx' # ls ../../ # https://cellpainting-datasets.s3.us-east-1.amazonaws.com/
_____no_output_____
BSD-3-Clause
0-preprocess_datasets.ipynb
carpenterlab/2021_Haghighi_submitted
CDRP-BBBC047-Bray GE - L1000 - CDRP
os.listdir(rawProf_dir+'/l1000_CDRP/') cdrp_dataDir=rawProf_dir+'/l1000_CDRP/' cpd_info = pd.read_csv(cdrp_dataDir+"/compounds.txt", sep="\t", dtype=str) cpd_info.columns from scipy.io import loadmat x = loadmat(cdrp_dataDir+'cdrp.all.prof.mat') k1=x['metaWell']['pert_id'][0][0] k2=x['metaGen']['AFFX_PROBE_ID'][0][0] k3=x['metaWell']['pert_dose'][0][0] k4=x['metaWell']['det_plate'][0][0] # pert_dose # x['metaWell']['pert_id'][0][0][0][0][0] pertID = [] probID=[] for r in range(len(k1)): v = k1[r][0][0] pertID.append(v) # probID.append(k2[r][0][0]) for r in range(len(k2)): probID.append(k2[r][0][0]) pert_dose=[] det_plate=[] for r in range(len(k3)): pert_dose.append(k3[r][0]) det_plate.append(k4[r][0][0]) dataArray=x['pclfc']; cdrp_l1k_rep = pd.DataFrame(data=dataArray,columns=probID) cdrp_l1k_rep['pert_id']=pertID cdrp_l1k_rep['pert_dose']=pert_dose cdrp_l1k_rep['det_plate']=det_plate cdrp_l1k_rep['BROAD_CPD_ID']=cdrp_l1k_rep['pert_id'].str[:13] cdrp_l1k_rep2=pd.merge(cdrp_l1k_rep, cpd_info, how='left',on=['BROAD_CPD_ID']) l1k_features_cdrp=cdrp_l1k_rep2.columns[cdrp_l1k_rep2.columns.str.contains("_at")] cdrp_l1k_rep2['pert_id_dose']=cdrp_l1k_rep2['BROAD_CPD_ID']+'_'+cdrp_l1k_rep2['pert_dose'].round(2).astype(str) cdrp_l1k_rep2['pert_sample_dose']=cdrp_l1k_rep2['pert_id']+'_'+cdrp_l1k_rep2['pert_dose'].round(2).astype(str) # cdrp_l1k_df.head() print(cpd_info.shape,cdrp_l1k_rep.shape,cdrp_l1k_rep2.shape) cdrp_l1k_rep2['pert_id_dose']=cdrp_l1k_rep2['pert_id_dose'].replace('DMSO_-666.0', 'DMSO') cdrp_l1k_rep2['pert_sample_dose']=cdrp_l1k_rep2['pert_sample_dose'].replace('DMSO_-666.0', 'DMSO') saveDF_to_CSV_GZ_no_timestamp(cdrp_l1k_rep2,procProf_dir+'preprocessed_data/CDRP-BBBC047-Bray/L1000/replicate_level_l1k.csv.gz'); # cdrp_l1k_rep2.head() # cpd_info
_____no_output_____
BSD-3-Clause
0-preprocess_datasets.ipynb
carpenterlab/2021_Haghighi_submitted
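`saveDF_to_CSV_GZ_no_timestamp` is imported from `utils` at the top of the notebook and its implementation is not shown here. A minimal stand-in consistent with how it is called (a DataFrame plus an output path, written as a gzip-compressed CSV without the index, and presumably without a timestamp in the gzip header so re-runs give byte-identical files):

```python
import gzip
import os

def save_df_to_csv_gz(df, out_path):
    """Stand-in sketch for utils' saveDF_to_CSV_GZ_no_timestamp; not the original implementation."""
    os.makedirs(os.path.dirname(out_path), exist_ok=True)
    # mtime=0 keeps the gzip header free of a timestamp
    with gzip.GzipFile(out_path, 'wb', mtime=0) as gz:
        gz.write(df.to_csv(index=False).encode('utf-8'))
```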
CP - CDRP
profileType=['_augmented','_normalized']
bioactiveFlag="";  # either "-bioactive" or ""
plates=os.listdir(rawProf_dir+'/CDRP'+bioactiveFlag+'/')

for pt in profileType[1:2]:
    repLevelCDRP0=[]
    for p in plates:
#         repLevelCDRP0.append(pd.read_csv(rawProf_dir+'/CDRP/'+p+'/'+p+pt+'.csv'))
        repLevelCDRP0.append(pd.read_csv(rawProf_dir+'/CDRP'+bioactiveFlag+'/'+p+'/'+p+pt+'.csv'))  # if bioactive
    repLevelCDRP = pd.concat(repLevelCDRP0)

    metaCDRP1=pd.read_csv(rawProf_dir+'/CP_CDRP/metadata/metadata_CDRP.csv')
#     metaCDRP1=metaCDRP1.rename(columns={"PlateName":"Metadata_Plate_Map_Name",'Well':'Metadata_Well'})
#     metaCDRP1['Metadata_Well']=metaCDRP1['Metadata_Well'].str.lower()

    repLevelCDRP2=pd.merge(repLevelCDRP, metaCDRP1, how='left',on=['Metadata_broad_sample'])
#     repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_broad_sample']+'_'+repLevelCDRP2['Metadata_mmoles_per_liter'].round(0).astype(int).astype(str)
#     repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_pert_id']+'_'+(repLevelCDRP2['Metadata_mmoles_per_liter']*2).round(0).astype(int).astype(str)
    repLevelCDRP2["Metadata_mmoles_per_liter2"]=(repLevelCDRP2["Metadata_mmoles_per_liter"]*2).round(2)
    repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_broad_sample']+'_'+repLevelCDRP2['Metadata_mmoles_per_liter2'].astype(str)
    repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_Sample_Dose'].replace('DMSO_0.0', 'DMSO')
    repLevelCDRP2['Metadata_pert_id']=repLevelCDRP2['Metadata_pert_id'].replace(np.nan, 'DMSO')

#     repLevelCDRP2.to_csv(procProf_dir+'preprocessed_data/CDRPBIO-BBBC036-Bray/CellPainting/replicate_level_cp'+pt+'.csv.gz',index=False,compression='gzip')
    if bioactiveFlag:
        dataFolderName='CDRPBIO-BBBC036-Bray'
        saveDF_to_CSV_GZ_no_timestamp(repLevelCDRP2,procProf_dir+'preprocessed_data/'+dataFolderName+\
                                      '/CellPainting/replicate_level_cp'+pt+'.csv.gz')
    else:
        dataFolderName='CDRP-BBBC047-Bray'
        saveDF_to_CSV_GZ_no_timestamp(repLevelCDRP2,procProf_dir+'preprocessed_data/'+dataFolderName+\
                                      '/CellPainting/replicate_level_cp'+pt+'.csv.gz')

    print(metaCDRP1.shape,repLevelCDRP.shape,repLevelCDRP2.shape)

dataFolderName='CDRP-BBBC047-Bray'
cp_feats=repLevelCDRP.columns[repLevelCDRP.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")].tolist()
features_to_remove = find_correlation(repLevelCDRP2[cp_feats], threshold=0.9, remove_negative=False)
repLevelCDRP2_var_sel=repLevelCDRP2.drop(columns=features_to_remove)
saveDF_to_CSV_GZ_no_timestamp(repLevelCDRP2_var_sel,procProf_dir+'preprocessed_data/'+dataFolderName+\
                              '/CellPainting/replicate_level_cp'+'_normalized_variable_selected'+'.csv.gz')

# features_to_remove
repLevelCDRP2['Nuclei_Texture_Variance_RNA_3_0']
# repLevelCDRP2.shape
# cp_scaled.columns[cp_scaled.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")].tolist()
_____no_output_____
BSD-3-Clause
0-preprocess_datasets.ipynb
carpenterlab/2021_Haghighi_submitted
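`find_correlation` is called in the cell above but is neither defined nor imported in this notebook. A minimal sketch consistent with its usage (return the feature names to drop so that no remaining pair of columns exceeds the correlation threshold); this is an assumption about the helper, not its actual implementation:

```python
import numpy as np
import pandas as pd

def find_correlation(df, threshold=0.9, remove_negative=False):
    """Return column names to drop so that no remaining pair of columns correlates above threshold."""
    corr = df.corr()
    if remove_negative:
        corr = corr.abs()
    # Keep only the upper triangle so each pair is considered once
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    return [col for col in upper.columns if (upper[col] > threshold).any()]
```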
CDRP-bio-BBBC036-Bray

GE - L1000 - CDRPBIO
bioactiveFlag="-bioactive";  # either "-bioactive" or ""
plates=os.listdir(rawProf_dir+'/CDRP'+bioactiveFlag+'/')
# plates

cdrp_l1k_rep2_bioactive=cdrp_l1k_rep2[cdrp_l1k_rep2["pert_sample_dose"].isin(repLevelCDRP2.Metadata_Sample_Dose.unique().tolist())]

cdrp_l1k_rep.det_plate
_____no_output_____
BSD-3-Clause
0-preprocess_datasets.ipynb
carpenterlab/2021_Haghighi_submitted
CP - CDRPBIO
profileType=['_augmented','_normalized','_normalized_variable_selected']
bioactiveFlag="-bioactive";  # either "-bioactive" or ""
plates=os.listdir(rawProf_dir+'/CDRP'+bioactiveFlag+'/')

for pt in profileType:
    repLevelCDRP0=[]
    for p in plates:
#         repLevelCDRP0.append(pd.read_csv(rawProf_dir+'/CDRP/'+p+'/'+p+pt+'.csv'))
        repLevelCDRP0.append(pd.read_csv(rawProf_dir+'/CDRP'+bioactiveFlag+'/'+p+'/'+p+pt+'.csv'))  # if bioactive
    repLevelCDRP = pd.concat(repLevelCDRP0)

    metaCDRP1=pd.read_csv(rawProf_dir+'/CP_CDRP/metadata/metadata_CDRP.csv')
#     metaCDRP1=metaCDRP1.rename(columns={"PlateName":"Metadata_Plate_Map_Name",'Well':'Metadata_Well'})
#     metaCDRP1['Metadata_Well']=metaCDRP1['Metadata_Well'].str.lower()

    repLevelCDRP2=pd.merge(repLevelCDRP, metaCDRP1, how='left',on=['Metadata_broad_sample'])
#     repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_broad_sample']+'_'+repLevelCDRP2['Metadata_mmoles_per_liter'].round(0).astype(int).astype(str)
#     repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_pert_id']+'_'+(repLevelCDRP2['Metadata_mmoles_per_liter']*2).round(0).astype(int).astype(str)
    repLevelCDRP2["Metadata_mmoles_per_liter2"]=(repLevelCDRP2["Metadata_mmoles_per_liter"]*2).round(2)
    repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_broad_sample']+'_'+repLevelCDRP2['Metadata_mmoles_per_liter2'].astype(str)
    repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_Sample_Dose'].replace('DMSO_0.0', 'DMSO')
    repLevelCDRP2['Metadata_pert_id']=repLevelCDRP2['Metadata_pert_id'].replace(np.nan, 'DMSO')

#     repLevelCDRP2.to_csv(procProf_dir+'preprocessed_data/CDRPBIO-BBBC036-Bray/CellPainting/replicate_level_cp'+pt+'.csv.gz',index=False,compression='gzip')
    if bioactiveFlag:
        dataFolderName='CDRPBIO-BBBC036-Bray'
        saveDF_to_CSV_GZ_no_timestamp(repLevelCDRP2,procProf_dir+'preprocessed_data/'+dataFolderName+\
                                      '/CellPainting/replicate_level_cp'+pt+'.csv.gz')
    else:
        dataFolderName='CDRP-BBBC047-Bray'
        saveDF_to_CSV_GZ_no_timestamp(repLevelCDRP2,procProf_dir+'preprocessed_data/'+dataFolderName+\
                                      '/CellPainting/replicate_level_cp'+pt+'.csv.gz')

    print(metaCDRP1.shape,repLevelCDRP.shape,repLevelCDRP2.shape)
_____no_output_____
BSD-3-Clause
0-preprocess_datasets.ipynb
carpenterlab/2021_Haghighi_submitted
LUAD-BBBC041-Caicedo

GE - L1000 - LUAD
os.listdir(rawProf_dir+'/l1000_LUAD/input/')
os.listdir(rawProf_dir+'/l1000_LUAD/output/')

luad_dataDir=rawProf_dir+'/l1000_LUAD/'
luad_info1 = pd.read_csv(luad_dataDir+"/input/TA.OE014_A549_96H.map", sep="\t", dtype=str)
luad_info2 = pd.read_csv(luad_dataDir+"/input/TA.OE015_A549_96H.map", sep="\t", dtype=str)
luad_info=pd.concat([luad_info1, luad_info2], ignore_index=True)
luad_info.head()

luad_l1k_df = parse(luad_dataDir+"/output/high_rep_A549_8reps_141230_ZSPCINF_n4232x978.gctx").data_df.T.reset_index()
luad_l1k_df=luad_l1k_df.rename(columns={"cid":"id"})
# cdrp_l1k_df['XX']=cdrp_l1k_df['cid'].str[0]
# cdrp_l1k_df['BROAD_CPD_ID']=cdrp_l1k_df['cid'].str[2:15]

luad_l1k_df2=pd.merge(luad_l1k_df, luad_info, how='inner',on=['id'])
luad_l1k_df2=luad_l1k_df2.rename(columns={"x_mutation_status":"allele"})
l1k_features=luad_l1k_df2.columns[luad_l1k_df2.columns.str.contains("_at")]
luad_l1k_df2['allele']=luad_l1k_df2['allele'].replace('UnTrt', 'DMSO')

print(luad_info.shape,luad_l1k_df.shape,luad_l1k_df2.shape)
saveDF_to_CSV_GZ_no_timestamp(luad_l1k_df2,procProf_dir+'/preprocessed_data/LUAD-BBBC041-Caicedo/L1000/replicate_level_l1k.csv.gz')

luad_l1k_df_scaled = standardize_per_catX(luad_l1k_df2,'det_plate',l1k_features.tolist());
x_l1k_luad=replicateCorrs(luad_l1k_df_scaled.reset_index(drop=True),'allele',l1k_features,1)
# x_l1k_luad=replicateCorrs(luad_l1k_df2[luad_l1k_df2['allele']!='DMSO'].reset_index(drop=True),'allele',l1k_features,1)
# saveAsNewSheetToExistingFile(filename,x_l1k_luad[2],'l1k-luad')
here3
BSD-3-Clause
0-preprocess_datasets.ipynb
carpenterlab/2021_Haghighi_submitted
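For reference, `cmapPy`'s `parse` returns a GCToo object whose `data_df` has probes as rows and profiles (samples) as columns, which is why the cells in this notebook transpose it before merging on the sample id. A quick orientation check, shown as a sketch rather than output from this notebook:

```python
from cmapPy.pandasGEXpress.parse import parse

gct = parse(luad_dataDir + "/output/high_rep_A549_8reps_141230_ZSPCINF_n4232x978.gctx")
print(gct.data_df.shape)          # (n_probes, n_profiles); the 978 landmark "_at" probes are the rows
df = gct.data_df.T.reset_index()  # one row per profile, matching the merges above
```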
CP - LUAD
profileType=['_augmented','_normalized','_normalized_variable_selected']
plates=os.listdir('/storage/luad/profiles_cp/LUAD-BBBC043-Caicedo/')

for pt in profileType[1:2]:
    repLevelLuad0=[]
    for p in plates:
        repLevelLuad0.append(pd.read_csv('/storage/luad/profiles_cp/LUAD-BBBC043-Caicedo/'+p+'/'+p+pt+'.csv'))
    repLevelLuad = pd.concat(repLevelLuad0)

    metaLuad1=pd.read_csv(rawProf_dir+'/CP_LUAD/metadata/combined_platemaps_AHB_20150506_ssedits.csv')
    metaLuad1=metaLuad1.rename(columns={"PlateName":"Metadata_Plate_Map_Name",'Well':'Metadata_Well'})
    metaLuad1['Metadata_Well']=metaLuad1['Metadata_Well'].str.lower()
#     metaLuad2=pd.read_csv('~/workspace_rosetta/workspace/raw_profiles/CP_LUAD/metadata/barcode_platemap.csv')
#     Y[Y['Metadata_Well']=='g05']['Nuclei_Texture_Variance_Mito_5_0']

    repLevelLuad2=pd.merge(repLevelLuad, metaLuad1, how='inner',on=['Metadata_Plate_Map_Name','Metadata_Well'])
    repLevelLuad2['x_mutation_status']=repLevelLuad2['x_mutation_status'].replace(np.nan, 'DMSO')
    cp_features=repLevelLuad2.columns[repLevelLuad2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
#     repLevelLuad2.to_csv(procProf_dir+'preprocessed_data/LUAD-BBBC041-Caicedo/CellPainting/replicate_level_cp'+pt+'.csv.gz',index=False,compression='gzip')
    saveDF_to_CSV_GZ_no_timestamp(repLevelLuad2,procProf_dir+'preprocessed_data/LUAD-BBBC041-Caicedo/CellPainting/replicate_level_cp'+pt+'.csv.gz')
    print(metaLuad1.shape,repLevelLuad.shape,repLevelLuad2.shape)

pt=['_normalized']
# Read saved data
repLevelLuad2=pd.read_csv('./preprocessed_data/LUAD-BBBC041-Caicedo/CellPainting/replicate_level_cp'+pt[0]+'.csv.gz')
# repLevelTA.head()
cp_features=repLevelLuad2.columns[repLevelLuad2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
cols2remove0=[i for i in cp_features if ((repLevelLuad2[i].isnull()).sum(axis=0)/repLevelLuad2.shape[0])>0.05]
print(cols2remove0)

repLevelLuad2=repLevelLuad2.drop(cols2remove0, axis=1);
cp_features=repLevelLuad2.columns[repLevelLuad2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
repLevelLuad2 = repLevelLuad2.interpolate()
repLevelLuad2 = standardize_per_catX(repLevelLuad2,'Metadata_Plate',cp_features.tolist());

df1=repLevelLuad2[~repLevelLuad2['x_mutation_status'].isnull()].reset_index(drop=True)
x_cp_luad=replicateCorrs(df1,'x_mutation_status',cp_features,1)
saveAsNewSheetToExistingFile(filename,x_cp_luad[2],'cp-luad')
_____no_output_____
BSD-3-Clause
0-preprocess_datasets.ipynb
carpenterlab/2021_Haghighi_submitted
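The same missing-data handling recurs for the LUAD, TA-ORF, and LINCS Cell Painting profiles: drop features with more than 5% missing values, then interpolate the remaining gaps. A small helper capturing that pattern (our own name and default threshold; the cells in this notebook inline the same logic):

```python
def drop_sparse_then_interpolate(df, feature_cols, max_null_frac=0.05):
    """Drop feature columns with too many NaNs, then interpolate the remaining gaps."""
    null_frac = df[feature_cols].isnull().mean()
    to_drop = null_frac[null_frac > max_null_frac].index.tolist()
    out = df.drop(columns=to_drop)
    kept = [c for c in feature_cols if c not in to_drop]
    out[kept] = out[kept].interpolate()
    return out, to_drop
```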
TA-ORF-BBBC037-Rohban

GE - L1000
taorf_datadir=rawProf_dir+'/l1000_TA_ORF/'
gene_info = pd.read_csv(taorf_datadir+"TA.OE005_U2OS_72H.map.txt", sep="\t", dtype=str)
# gene_info.columns

# TA.OE005_U2OS_72H_INF_n729x22268.gctx
# TA.OE005_U2OS_72H_QNORM_n729x978.gctx
# TA.OE005_U2OS_72H_ZSPCINF_n729x22268.gctx
# TA.OE005_U2OS_72H_ZSPCQNORM_n729x978.gctx

taorf_l1k0 = parse(taorf_datadir+"TA.OE005_U2OS_72H_ZSPCQNORM_n729x978.gctx")
# taorf_l1k0 = parse(taorf_datadir+"TA.OE005_U2OS_72H_QNORM_n729x978.gctx")
taorf_l1k_df0=taorf_l1k0.data_df
taorf_l1k_df=taorf_l1k_df0.T.reset_index()
l1k_features=taorf_l1k_df.columns[taorf_l1k_df.columns.str.contains("_at")]
taorf_l1k_df=taorf_l1k_df.rename(columns={"cid":"id"})
taorf_l1k_df2=pd.merge(taorf_l1k_df, gene_info, how='inner',on=['id'])
# print(taorf_l1k_df.shape,gene_info.shape,taorf_l1k_df2.shape)
taorf_l1k_df2.head()

# x_genesymbol_mutation
taorf_l1k_df2['pert_id']=taorf_l1k_df2['pert_id'].replace('CMAP-000', 'DMSO')
# compression_opts = dict(method='zip',archive_name='out.csv')
# taorf_l1k_df2.to_csv(procProf_dir+'preprocessed_data/TA-ORF-BBBC037-Rohban/L1000/replicate_level_l1k.csv.gz',index=False,compression=compression_opts)
saveDF_to_CSV_GZ_no_timestamp(taorf_l1k_df2,procProf_dir+'preprocessed_data/TA-ORF-BBBC037-Rohban/L1000/replicate_level_l1k.csv.gz')
print(gene_info.shape,taorf_l1k_df.shape,taorf_l1k_df2.shape)
# gene_info.head()

taorf_l1k_df2.groupby(['x_genesymbol_mutation']).size().describe()
taorf_l1k_df2.groupby(['pert_id']).size().describe()
_____no_output_____
BSD-3-Clause
0-preprocess_datasets.ipynb
carpenterlab/2021_Haghighi_submitted
Check Replicate Correlation
# df1=taorf_l1k_df2[taorf_l1k_df2['pert_id']!='CMAP-000']
df1_scaled = standardize_per_catX(taorf_l1k_df2,'det_plate',l1k_features.tolist());
df1_scaled2=df1_scaled[df1_scaled['pert_id']!='DMSO']
x=replicateCorrs(df1_scaled2,'pert_id',l1k_features,1)
here3
BSD-3-Clause
0-preprocess_datasets.ipynb
carpenterlab/2021_Haghighi_submitted
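`replicateCorrs` is imported from `utils.replicateCorrs` and its internals are not shown in this notebook. Conceptually, a replicate-correlation check of this kind computes, per perturbation, the correlation between its replicate profiles and compares that distribution to a null built from non-replicate pairs. A simplified sketch of the replicate side only (an assumption about the approach, not the actual implementation, whose full return value is what gets indexed as `[2]` and saved to the results sheet elsewhere in this notebook):

```python
import numpy as np
import pandas as pd

def median_replicate_correlation(df, group_col, feature_cols):
    """Median pairwise Pearson correlation across replicates, per perturbation (simplified sketch)."""
    out = {}
    for pert, grp in df.groupby(group_col):
        if len(grp) < 2:
            continue
        corr = np.corrcoef(grp[feature_cols].values)  # replicate-by-replicate correlation matrix
        iu = np.triu_indices_from(corr, k=1)          # unique replicate pairs only
        out[pert] = float(np.median(corr[iu]))
    return pd.Series(out, name='rep_corr')
```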
CP - TAORF
profileType=['_augmented','_normalized','_normalized_variable_selected']
plates=os.listdir(rawProf_dir+'TA-ORF-BBBC037-Rohban/')

for pt in profileType[0:1]:
    repLevelTA0=[]
    for p in plates:
        repLevelTA0.append(pd.read_csv(rawProf_dir+'TA-ORF-BBBC037-Rohban/'+p+'/'+p+pt+'.csv'))
    repLevelTA = pd.concat(repLevelTA0)

    metaTA1=pd.read_csv(rawProf_dir+'/CP_TA_ORF/metadata/metadata_TA.csv')
    metaTA2=pd.read_csv(rawProf_dir+'/CP_TA_ORF/metadata/metadata_TA_2.csv')
#     metaTA2=metaTA2.rename(columns={"Metadata_broad_sample":"Metadata_broad_sample_2",'Metadata_Treatment':'Gene Allele Name'})
    metaTA=pd.merge(metaTA2, metaTA1, how='left',on=['Metadata_broad_sample'])
#     metaTA2=metaTA2.rename(columns={"Metadata_Treatment":"Metadata_pert_name"})
#     repLevelTA2=pd.merge(repLevelTA, metaTA2, how='left',on=['Metadata_pert_name'])
    repLevelTA2=pd.merge(repLevelTA, metaTA, how='left',on=['Metadata_broad_sample'])
#     repLevelTA2=repLevelTA2.rename(columns={"Gene Allele Name":"Allele"})
    repLevelTA2['Metadata_broad_sample']=repLevelTA2['Metadata_broad_sample'].replace(np.nan, 'DMSO')
    saveDF_to_CSV_GZ_no_timestamp(repLevelTA2,procProf_dir+'/preprocessed_data/TA-ORF-BBBC037-Rohban/CellPainting/replicate_level_cp'+pt+'.csv.gz')
    print(metaTA.shape,repLevelTA.shape,repLevelTA2.shape)

# repLevelTA.head()
cp_features=repLevelTA2.columns[repLevelTA2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
cols2remove0=[i for i in cp_features if ((repLevelTA2[i].isnull()).sum(axis=0)/repLevelTA2.shape[0])>0.05]
print(cols2remove0)
repLevelTA2=repLevelTA2.drop(cols2remove0, axis=1);
# cp_features=list(set(cp_features)-set(cols2remove0))
# repLevelTA2=repLevelTA2.replace('nan', np.nan)
repLevelTA2 = repLevelTA2.interpolate()
cp_features=repLevelTA2.columns[repLevelTA2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
repLevelTA2 = standardize_per_catX(repLevelTA2,'Metadata_Plate',cp_features.tolist());

df1=repLevelTA2[~repLevelTA2['Metadata_broad_sample'].isnull()].reset_index(drop=True)
x_taorf_cp=replicateCorrs(df1,'Metadata_broad_sample',cp_features,1)
# saveAsNewSheetToExistingFile(filename,x_taorf_cp[2],'cp-taorf')

# plates
_____no_output_____
BSD-3-Clause
0-preprocess_datasets.ipynb
carpenterlab/2021_Haghighi_submitted
LINCS-Pilot1

GE - L1000 - LINCS
os.listdir(rawProf_dir+'/l1000_LINCS/2016_04_01_a549_48hr_batch1_L1000/')
os.listdir(rawProf_dir+'/l1000_LINCS/metadata/')

data_meta_match_ls=[['level_3','level_3_q2norm_n27837x978.gctx','col_meta_level_3_REP.A_A549_only_n27837.txt'],
                    ['level_4W','level_4W_zspc_n27837x978.gctx','col_meta_level_3_REP.A_A549_only_n27837.txt'],
                    ['level_4','level_4_zspc_n27837x978.gctx','col_meta_level_3_REP.A_A549_only_n27837.txt'],
                    ['level_5_modz','level_5_modz_n9482x978.gctx','col_meta_level_5_REP.A_A549_only_n9482.txt'],
                    ['level_5_rank','level_5_rank_n9482x978.gctx','col_meta_level_5_REP.A_A549_only_n9482.txt']]

lincs_dataDir=rawProf_dir+'/l1000_LINCS/'
lincs_pert_info = pd.read_csv(lincs_dataDir+"/metadata/REP.A_A549_pert_info.txt", sep="\t", dtype=str)
lincs_meta_level3 = pd.read_csv(lincs_dataDir+"/metadata/col_meta_level_3_REP.A_A549_only_n27837.txt", sep="\t", dtype=str)
# lincs_info1 = pd.read_csv(lincs_dataDir+"/metadata/REP.A_A549_pert_info.txt", sep="\t", dtype=str)
print(lincs_meta_level3.shape)
lincs_meta_level3.head()
# lincs_info2 = pd.read_csv(lincs_dataDir+"/input/TA.OE015_A549_96H.map", sep="\t", dtype=str)
# lincs_info=pd.concat([lincs_info1, lincs_info2], ignore_index=True)
# lincs_info.head()

# lincs_meta_level3.groupby('distil_id').size()
lincs_meta_level3['distil_id'].unique().shape
# lincs_meta_level3.columns.tolist()
# lincs_meta_level3.pert_id

ls /home/ubuntu/workspace_rosetta/workspace/software/2018_04_20_Rosetta/preprocessed_data/LINCS-Pilot1/CellPainting

# procProf_dir+'preprocessed_data/LINCS-Pilot1/'
procProf_dir

for el in data_meta_match_ls:
    lincs_l1k_df=parse(lincs_dataDir+"/2016_04_01_a549_48hr_batch1_L1000/"+el[1]).data_df.T.reset_index()
    lincs_meta0 = pd.read_csv(lincs_dataDir+"/metadata/"+el[2], sep="\t", dtype=str)
    lincs_meta=pd.merge(lincs_meta0, lincs_pert_info, how='left',on=['pert_id'])
    lincs_meta=lincs_meta.rename(columns={"distil_id":"cid"})
    lincs_l1k_df2=pd.merge(lincs_l1k_df, lincs_meta, how='inner',on=['cid'])
    lincs_l1k_df2['pert_id_dose']=lincs_l1k_df2['pert_id']+'_'+lincs_l1k_df2['nearest_dose'].astype(str)
    lincs_l1k_df2['pert_id_dose']=lincs_l1k_df2['pert_id_dose'].replace('DMSO_-666', 'DMSO')
#     lincs_l1k_df2.to_csv(procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+el[0]+'.csv.gz',index=False,compression='gzip')
    saveDF_to_CSV_GZ_no_timestamp(lincs_l1k_df2,procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+el[0]+'.csv.gz')

# lincs_l1k_df2

lincs_l1k_rep['pert_id_dose'].unique()

lincs_l1k_rep = pd.read_csv(procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+data_meta_match_ls[1][0]+'.csv.gz')
# l1k_features=lincs_l1k_rep.columns[lincs_l1k_rep.columns.str.contains("_at")]
# x=replicateCorrs(lincs_l1k_rep[lincs_l1k_rep['pert_iname_x']!='DMSO'].reset_index(drop=True),'pert_id',l1k_features,1)
# # saveAsNewSheetToExistingFile(filename,x[2],'l1k-lincs')
# # lincs_l1k_rep.head()
lincs_l1k_rep.pert_id.unique().shape

lincs_l1k_rep = pd.read_csv(procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+data_meta_match_ls[2][0]+'.csv.gz')
lincs_l1k_rep.columns[lincs_l1k_rep.columns.str.contains('dose')]
lincs_l1k_rep[['pert_dose', 'pert_dose_unit', 'pert_idose', 'nearest_dose']]
lincs_l1k_rep['nearest_dose'].unique()
# lincs_l1k_rep.rna_plate.unique()

lincs_l1k_rep = pd.read_csv(procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+data_meta_match_ls[2][0]+'.csv.gz')
l1k_features=lincs_l1k_rep.columns[lincs_l1k_rep.columns.str.contains("_at")]
lincs_l1k_rep = standardize_per_catX(lincs_l1k_rep,'det_plate',l1k_features.tolist());
x=replicateCorrs(lincs_l1k_rep[lincs_l1k_rep['pert_iname_x']!='DMSO'].reset_index(drop=True),'pert_id',l1k_features,1)

lincs_l1k_rep = pd.read_csv(procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+data_meta_match_ls[2][0]+'.csv.gz')
l1k_features=lincs_l1k_rep.columns[lincs_l1k_rep.columns.str.contains("_at")]
lincs_l1k_rep = standardize_per_catX(lincs_l1k_rep,'det_plate',l1k_features.tolist());
x_l1k_lincs=replicateCorrs(lincs_l1k_rep[lincs_l1k_rep['pert_iname_x']!='DMSO'].reset_index(drop=True),'pert_id_dose',l1k_features,1)
saveAsNewSheetToExistingFile(filename,x_l1k_lincs[2],'l1k-lincs')

saveAsNewSheetToExistingFile(filename,x[2],'l1k-lincs')
_____no_output_____
BSD-3-Clause
0-preprocess_datasets.ipynb
carpenterlab/2021_Haghighi_submitted
raw data
# set(repLevelLuad2)-set(Y1.columns)
# Y1[['Allele', 'Category', 'Clone ID', 'Gene Symbol']].head()
# repLevelLuad2[repLevelLuad2['PublicID']=='BRDN0000553807'][['Col','InsertLength','NCBIGeneID','Name','OtherDescriptions','PublicID','Row','Symbol','Transcript','Vector','pert_type','x_mutation_status']].head()
_____no_output_____
BSD-3-Clause
0-preprocess_datasets.ipynb
carpenterlab/2021_Haghighi_submitted
Check Replicate Correlation

CP - LINCS
# Ran the following on:
# https://ec2-54-242-99-61.compute-1.amazonaws.com:5006/notebooks/workspace_nucleolar/2020_07_20_Nucleolar_Calico/1-NucleolarSizeMetrics.ipynb

# Metadata
def recode_dose(x, doses, return_level=False):
    closest_index = np.argmin([np.abs(dose - x) for dose in doses])
    if np.isnan(x):
        return 0
    if return_level:
        return closest_index + 1
    else:
        return doses[closest_index]

primary_dose_mapping = [0.04, 0.12, 0.37, 1.11, 3.33, 10, 20]

metadata=pd.read_csv("/home/ubuntu/bucket/projects/2018_04_20_Rosetta/workspace/raw-profiles/CP_LINCS/metadata/matadata_lincs_2.csv")
metadata['Metadata_mmoles_per_liter']=metadata.mmoles_per_liter.values.round(2)
metadata=metadata.rename(columns={"Assay_Plate_Barcode": "Metadata_Plate",'broad_sample':'Metadata_broad_sample','well_position':'Metadata_Well'})

lincs_submod_root_dir="/home/ubuntu/datasetsbucket/lincs-cell-painting/"

profileType=['_augmented','_normalized','_normalized_dmso',\
             '_normalized_feature_select','_normalized_feature_select_dmso']
# profileType=['_normalized']
# plates=metadata.Assay_Plate_Barcode.unique().tolist()
plates=metadata.Metadata_Plate.unique().tolist()

for pt in profileType[4:5]:
    repLevelLINCS0=[]
    for p in plates:
        profile_add=lincs_submod_root_dir+"/profiles/2016_04_01_a549_48hr_batch1/"+p+"/"+p+pt+".csv.gz"
        if os.path.exists(profile_add):
            repLevelLINCS0.append(pd.read_csv(profile_add))
    repLevelLINCS = pd.concat(repLevelLINCS0)

    meta_lincs1=metadata.rename(columns={"broad_sample": "Metadata_broad_sample"})
#     metaCDRP1=metaCDRP1.rename(columns={"PlateName":"Metadata_Plate_Map_Name",'Well':'Metadata_Well'})
#     metaCDRP1['Metadata_Well']=metaCDRP1['Metadata_Well'].str.lower()

    repLevelLINCS2=pd.merge(repLevelLINCS,meta_lincs1,how='left', on=["Metadata_broad_sample","Metadata_Well","Metadata_Plate",'Metadata_mmoles_per_liter'])
    repLevelLINCS2 = repLevelLINCS2.assign(Metadata_dose_recode=(repLevelLINCS2.Metadata_mmoles_per_liter.apply(
        lambda x: recode_dose(x, primary_dose_mapping, return_level=False))))
    repLevelLINCS2['Metadata_pert_id_dose']=repLevelLINCS2['Metadata_pert_id']+'_'+repLevelLINCS2['Metadata_dose_recode'].astype(str)
#     repLevelLINCS2['Metadata_Sample_Dose']=repLevelLINCS2['Metadata_broad_sample']+'_'+repLevelLINCS2['Metadata_dose_recode'].astype(str)
    repLevelLINCS2['Metadata_pert_id_dose']=repLevelLINCS2['Metadata_pert_id_dose'].replace(np.nan, 'DMSO')
#     saveDF_to_CSV_GZ_no_timestamp(repLevelLINCS2,procProf_dir+'/preprocessed_data/LINCS-Pilot1/CellPainting/replicate_level_cp'+pt+'.csv.gz')
    print(meta_lincs1.shape,repLevelLINCS.shape,repLevelLINCS2.shape)

# (8120, 15) (52223, 1810) (688699, 1825)
# repLevelLINCS
# pd.merge(repLevelLINCS,meta_lincs1,how='left', on=["Metadata_broad_sample"]).shape
repLevelLINCS.shape,meta_lincs1.shape
# (8120, 15) (52223, 1238) (52223, 1253)

csv_l1k_lincs=pd.read_csv('./preprocessed_data/LINCS-Pilot1/L1000/replicate_level_l1k'+'.csv.gz')
csv_pddf=pd.read_csv('./preprocessed_data/LINCS-Pilot1/CellPainting/replicate_level_cp'+pt[0]+'.csv.gz')
csv_l1k_lincs.head()
csv_l1k_lincs.pert_id_dose.unique()
csv_pddf.Metadata_pert_id_dose.unique()
_____no_output_____
BSD-3-Clause
0-preprocess_datasets.ipynb
carpenterlab/2021_Haghighi_submitted
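A quick check of `recode_dose` behaviour on the dose mapping defined above (the input values here are chosen purely for illustration):

```python
import numpy as np

print(recode_dose(3.0, primary_dose_mapping))                     # 3.33  (nearest dose point)
print(recode_dose(3.0, primary_dose_mapping, return_level=True))  # 5     (1-based dose level)
print(recode_dose(np.nan, primary_dose_mapping))                  # 0     (missing dose, e.g. DMSO wells)
```

Note that the NaN check runs after the `argmin`, which is harmless here but could be moved to the top of the function.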
Read saved data
repLevelLINCS2.groupby(['Metadata_pert_id']).size()
repLevelLINCS2.groupby(['Metadata_pert_id_dose']).size().describe()
repLevelLINCS2.Metadata_Plate.unique().shape
repLevelLINCS2['Metadata_pert_id_dose'].unique().shape
# csv_pddf['Metadata_mmoles_per_liter'].round(0).unique()
# np.sort(csv_pddf['Metadata_mmoles_per_liter'].unique())
csv_pddf.groupby(['Metadata_dose_recode']).size()#.median()

repLevelLincs2=csv_pddf.copy()

import gc

cp_features=repLevelLincs2.columns[repLevelLincs2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
cols2remove0=[i for i in cp_features if ((repLevelLincs2[i].isnull()).sum(axis=0)/repLevelLincs2.shape[0])>0.05]
print(cols2remove0)
repLevelLincs3=repLevelLincs2.drop(cols2remove0, axis=1);
print('here0')
# cp_features=list(set(cp_features)-set(cols2remove0))
# repLevelTA2=repLevelTA2.replace('nan', np.nan)
del repLevelLincs2
gc.collect()
print('here0')
cp_features=repLevelLincs3.columns[repLevelLincs3.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
repLevelLincs3[cp_features] = repLevelLincs3[cp_features].interpolate()
print('here1')
repLevelLincs3 = standardize_per_catX(repLevelLincs3,'Metadata_Plate',cp_features.tolist());
print('here1')

# df0=repLevelCDRP3[repLevelCDRP3['Metadata_broad_sample']!='DMSO'].reset_index(drop=True)
# repSizeDF=repLevelLincs3.groupby(['Metadata_broad_sample']).size().reset_index()
repSizeDF=repLevelLincs3.groupby(['Metadata_pert_id_dose']).size().reset_index()
highRepComp=repSizeDF[repSizeDF[0]>1].Metadata_pert_id_dose.tolist()
highRepComp.remove('DMSO')
# df0=repLevelLincs3[(repLevelLincs3['Metadata_broad_sample'].isin(highRepComp)) &\
#                    (repLevelLincs3['Metadata_dose_recode']==1.11)]
df0=repLevelLincs3[(repLevelLincs3['Metadata_pert_id_dose'].isin(highRepComp))]
x_lincs_cp=replicateCorrs(df0,'Metadata_pert_id_dose',cp_features,1)
# saveAsNewSheetToExistingFile(filename,x_lincs_cp[2],'cp-lincs')

repSizeDF

# repLevelLincs2=csv_pddf.copy()
# cp_features=repLevelLincs2.columns[repLevelLincs2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
# cols2remove0=[i for i in cp_features if ((repLevelLincs2[i].isnull()).sum(axis=0)/repLevelLincs2.shape[0])>0.05]
# print(cols2remove0)
# repLevelLincs3=repLevelLincs2.drop(cols2remove0, axis=1);
# # cp_features=list(set(cp_features)-set(cols2remove0))
# # repLevelTA2=repLevelTA2.replace('nan', np.nan)
# repLevelLincs3 = repLevelLincs3.interpolate()
# repLevelLincs3 = standardize_per_catX(repLevelLincs3,'Metadata_Plate',cp_features.tolist());
# cp_features=repLevelLincs3.columns[repLevelLincs3.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
# # df0=repLevelCDRP3[repLevelCDRP3['Metadata_broad_sample']!='DMSO'].reset_index(drop=True)
# # repSizeDF=repLevelLincs3.groupby(['Metadata_broad_sample']).size().reset_index()

repSizeDF=repLevelLincs3.groupby(['Metadata_pert_id']).size().reset_index()
highRepComp=repSizeDF[repSizeDF[0]>1].Metadata_pert_id.tolist()
# highRepComp.remove('DMSO')
# df0=repLevelLincs3[(repLevelLincs3['Metadata_broad_sample'].isin(highRepComp)) &\
#                    (repLevelLincs3['Metadata_dose_recode']==1.11)]
df0=repLevelLincs3[(repLevelLincs3['Metadata_pert_id'].isin(highRepComp))]
x_lincs_cp=replicateCorrs(df0,'Metadata_pert_id',cp_features,1)
# saveAsNewSheetToExistingFile(filename,x_lincs_cp[2],'cp-lincs')
# x=replicateCorrs(df0,'Metadata_broad_sample',cp_features,1)

# highRepComp[-1]

saveAsNewSheetToExistingFile(filename,x[2],'cp-lincs')

# repLevelLincs3.Metadata_Plate
repLevelLincs3.head()

# csv_pddf[(csv_pddf['Metadata_dose_recode']==0.04) &
#          (csv_pddf['Metadata_pert_id']=="BRD-A00147595")][['Metadata_Plate','Metadata_Well']].drop_duplicates()
# csv_pddf[(csv_pddf['Metadata_dose_recode']==0.04) & (csv_pddf['Metadata_pert_id']=="BRD-A00147595") &
#          (csv_pddf['Metadata_Plate']=='SQ00015196') & (csv_pddf['Metadata_Well']=="B12")][csv_pddf.columns[1820:]].drop_duplicates()

# def standardize_per_catX(df,column_name):
column_name='Metadata_Plate'
repLevelLincs_scaled_perPlate=repLevelLincs3.copy()
repLevelLincs_scaled_perPlate[cp_features.tolist()]=repLevelLincs3[cp_features.tolist()+[column_name]].groupby(column_name).transform(lambda x: (x - x.mean()) / x.std()).values

# def standardize_per_catX(df,column_name):
# #     column_name='Metadata_Plate'
#     cp_features=df.columns[df.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
#     df_scaled_perPlate=df.copy()
#     df_scaled_perPlate[cp_features.tolist()]=\
#         df[cp_features.tolist()+[column_name]].groupby(column_name)\
#         .transform(lambda x: (x - x.mean()) / x.std()).values
#     return df_scaled_perPlate

df0=repLevelLincs_scaled_perPlate[(repLevelLincs_scaled_perPlate['Metadata_Sample_Dose'].isin(highRepComp))]
x=replicateCorrs(df0,'Metadata_broad_sample',cp_features,1)
_____no_output_____
BSD-3-Clause
0-preprocess_datasets.ipynb
carpenterlab/2021_Haghighi_submitted
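`standardize_per_catX` is imported from `utils.normalize_funcs`; the commented-out definition at the end of the cell above shows the intent, which is to z-score each feature within each category (here, each plate). For reference, a self-contained version matching that commented code, with the feature list passed explicitly as in the three-argument calls used throughout this notebook:

```python
def standardize_per_catX(df, column_name, cp_features):
    """Per-category (e.g. per-plate) z-scoring of the given feature columns."""
    df_scaled = df.copy()
    df_scaled[cp_features] = (
        df[cp_features + [column_name]]
        .groupby(column_name)
        .transform(lambda x: (x - x.mean()) / x.std())
        .values
    )
    return df_scaled
```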
Introduction

Visualization of statistics that support the claims of the Black Lives Matter movement, using data from 2015 and 2016.

Data source: https://www.theguardian.com/us-news/ng-interactive/2015/jun/01/about-the-counted

Idea from BuzzFeed article: https://www.buzzfeednews.com/article/peteraldhous/race-and-police-shootings

Imports

Libraries and data
import pandas as pd
import numpy as np

from bokeh.io import output_notebook, show, export_png
from bokeh.plotting import figure, output_file
from bokeh.models import HoverTool, ColumnDataSource, NumeralTickFormatter
from bokeh.palettes import Spectral4, PuBu4
from bokeh.transform import dodge
from bokeh.layouts import gridplot

selectcolumns=['raceethnicity','armed']

df1 = pd.read_csv('the-counted-2015.csv',usecols=selectcolumns)
df1.head()

df2 = pd.read_csv('the-counted-2016.csv',usecols=selectcolumns)
df2.head()

df=pd.concat([df1,df2])
df.shape
# df contains "The Counted" data from both 2015 and 2016
_____no_output_____
MIT
thecounted_visual.ipynb
KikiCS/thecounted
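With only the `raceethnicity` and `armed` columns loaded, the counts that the visualizations are built from can be obtained directly. For example (a sketch; the exact category labels should be checked with `df['raceethnicity'].unique()`):

```python
# Number of people killed per recorded race/ethnicity over 2015-2016
counts_by_ethnicity = df.groupby('raceethnicity').size().sort_values(ascending=False)
print(counts_by_ethnicity)

# Breakdown of the 'armed' column within each group
armed_breakdown = df.groupby(['raceethnicity', 'armed']).size().unstack(fill_value=0)
```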
Source for ethnicity percentages in 2015: https://www.statista.com/statistics/270272/percentage-of-us-population-by-ethnicities/

Source for total population: https://en.wikipedia.org/wiki/Demography_of_the_United_States#Vital_statistics_from_1935
ethndic={"White": 61.72,
         "Latino": 17.66,
         "Black": 12.38,
         "Others": (5.28+2.05+0.73+0.17)
        }
# print(type(ethndic))
print(ethndic)

population=(321442000 + 323100000)/2  # average between 2015 and 2016 data

# estimates by ethnicity
ethnestim={"White": round((population*ethndic["White"]/100)),
           "Latino": round((population*ethndic["Latino"]/100)),
           "Black": round((population*ethndic["Black"]/100)),
           "Others": round((population*ethndic["Others"]/100))
          }
print(ethnestim)
{'White': 61.72, 'Latino': 17.66, 'Black': 12.38, 'Others': 8.23} {'White': 198905661, 'Latino': 56913059, 'Black': 39897150, 'Others': 26522903}
MIT
thecounted_visual.ipynb
KikiCS/thecounted
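The per-ethnicity population estimates above are what allow raw counts from The Counted to be turned into per-capita rates, which is the comparison behind the BuzzFeed piece cited in the introduction. A sketch of that calculation; the mapping of The Counted's `raceethnicity` labels onto the four groups used here is an assumption and should be checked against `df['raceethnicity'].unique()`:

```python
# Assumed label mapping; anything not listed falls into 'Others'
label_map = {'White': 'White', 'Black': 'Black', 'Hispanic/Latino': 'Latino'}

grouped = df['raceethnicity'].map(lambda r: label_map.get(r, 'Others'))
counts = grouped.value_counts()

# Killings per million residents per year (the data cover two years)
rate_per_million = {
    group: counts.get(group, 0) / ethnestim[group] * 1e6 / 2
    for group in ethnestim
}
print(rate_per_million)
```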