Dataset columns: markdown, code, output, license, path, repo_name
Transforming the data

The World Bank reports GDP in US dollars and cents. To make the data easier to read, the GDP is converted to millions of British pounds (the author's local currency) with the following auxiliary functions, using the average 2013 dollar-to-pound conversion rate.
def roundToMillions(value):
    return round(value / 1000000)

def usdToGBP(usd):
    return usd / 1.334801

GDP = 'GDP (£m)'
gdpCountries[GDP] = gdpCountries[GDP_INDICATOR].apply(usdToGBP).apply(roundToMillions)
gdpCountries.head()

COUNTRY = 'Country Name'
headings = [COUNTRY, GDP]
gdpClean = gdpCountries[headings]
gdpClean.head()

LIFE = 'Life expectancy (years)'
lifeCountries[LIFE] = lifeCountries[LIFE_INDICATOR].apply(round)
headings = [COUNTRY, LIFE]
lifeClean = lifeCountries[headings]
lifeClean.head()

gdpVsLife = pd.merge(gdpClean, lifeClean, on=COUNTRY, how='inner')
gdpVsLife.head()
_____no_output_____
MIT
Ugwu Lilian WT-21-138/2018_LE_GDP.ipynb
ruthwaiharo/Week-5-Assessment
Calculating the correlation

To measure whether life expectancy and GDP grow together, the Spearman rank correlation coefficient is used. It is a number from -1 (perfect inverse rank correlation: if one indicator increases, the other decreases) to 1 (perfect direct rank correlation: if one indicator increases, so does the other), with 0 meaning there is no rank correlation. A perfect correlation doesn't imply any cause-effect relation between the two indicators. A p-value below 0.05 means the correlation is statistically significant.
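Before applying the coefficient to the GDP and life-expectancy columns in the cell below, here is a minimal toy illustration of its two extremes; the numbers are made up purely for demonstration:

```
from scipy.stats import spearmanr

x = [1, 2, 3, 4, 5]
rho_direct, _ = spearmanr(x, [10, 20, 30, 40, 50])    # the two series increase together
rho_inverse, _ = spearmanr(x, [50, 40, 30, 20, 10])   # one increases while the other decreases
print(rho_direct, rho_inverse)  # 1.0 and -1.0
```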
from scipy.stats import spearmanr

gdpColumn = gdpVsLife[GDP]
lifeColumn = gdpVsLife[LIFE]
(correlation, pValue) = spearmanr(gdpColumn, lifeColumn)
print('The correlation is', correlation)
if pValue < 0.05:
    print('It is statistically significant.')
else:
    print('It is not statistically significant.')
The correlation is -0.01111757436417062
It is not statistically significant.
MIT
Ugwu Lilian WT-21-138/2018_LE_GDP.ipynb
ruthwaiharo/Week-5-Assessment
The computed value is close to zero and not statistically significant, so this run does not support a direct correlation, i.e. that richer countries tend to have longer life expectancy.

Showing the data

Measures of correlation can be misleading, so it is best to see the overall picture with a scatterplot. The GDP axis uses a logarithmic scale to better display the vast range of GDP values, from a few million to several million million pounds.
%matplotlib inline
gdpVsLife.plot(x=GDP, y=LIFE, kind='scatter', grid=True, logx=True, figsize=(10, 4))
_____no_output_____
MIT
Ugwu Lilian WT-21-138/2018_LE_GDP.ipynb
ruthwaiharo/Week-5-Assessment
The plot shows there is no clear correlation: there are rich countries with low life expectancy, poor countries with high expectancy, and countries with around 10 thousand (10⁴) million pounds GDP have almost the full range of values, from below 50 to over 80 years. Towards the lower and higher end of GDP, the variation diminishes. Above 40 thousand million pounds of GDP (the 3rd tick mark to the right of 10⁴), most countries have an expectancy of 70 years or more, whilst below that threshold most countries' life expectancy is below 70 years.

Comparing the 10 poorest countries and the 10 countries with the lowest life expectancy shows that total GDP is a rather crude measure. The population size should be taken into account for a more precise definition of what 'poor' and 'rich' means. Furthermore, looking at the countries below, droughts and internal conflicts may also play a role in life expectancy.
# the 10 countries with lowest GDP
gdpVsLife.sort_values(GDP).head(10)

# the 10 countries with lowest life expectancy
gdpVsLife.sort_values(LIFE).head(10)
_____no_output_____
MIT
Ugwu Lilian WT-21-138/2018_LE_GDP.ipynb
ruthwaiharo/Week-5-Assessment
Immune disease associations of Neanderthal-introgressed SNPs

This code investigates whether Neanderthal-introgressed SNPs (present in Chen introgressed sequences) have been associated with any immune-related diseases, including infectious diseases, allergic diseases, autoimmune diseases and autoinflammatory diseases, using data from the NHGRI-EBI GWAS Catalog.

Neanderthal-introgressed SNPs from:
1. Dannemann M, Prufer K & Kelso J. Functional implications of Neandertal introgression in modern humans. *Genome Biol* 2017 **18**:61.
2. Simonti CN *et al.* The phenotypic legacy of admixture between modern humans and Neandertals. *Science* 2016 **351**:737-41.

Neanderthal-introgressed sequences by Chen *et al.* from:
* Chen L *et al.* Identifying and interpreting apparent Neanderthal ancestry in African individuals. *Cell* 2020 **180**:677-687.

GWAS summary statistics from:
* [GWAS Catalog](https://www.ebi.ac.uk/gwas/docs/file-downloads)
# Import modules
import pandas as pd
_____no_output_____
MIT
disease/neanderthal_gwas.ipynb
kshiyao/neanderthal_introgression
Get Neanderthal SNPs present in GWAS Catalog
# Load Chen Neanderthal-introgressed SNPs
chen = pd.read_excel('../chen/Additional File 1.xlsx', 'Sheet1',
                     usecols=['Chromosome', 'Position', 'Source', 'ID', 'Chen'])
neanderthal = chen.loc[chen.Chen == 'Yes'].copy()
# assign back so the drop takes effect (the original statement discarded its result)
neanderthal = neanderthal.drop('Chen', axis=1)

# Load GWAS catalog
catalog = pd.read_csv('GWAS_Catalog.tsv', sep="\t", header=0,
                      usecols=['DISEASE/TRAIT', 'CHR_ID', 'CHR_POS', 'REPORTED GENE(S)', 'MAPPED_GENE',
                               'STRONGEST SNP-RISK ALLELE', 'SNPS', 'RISK ALLELE FREQUENCY', 'P-VALUE',
                               'OR or BETA', '95% CI (TEXT)', 'MAPPED_TRAIT', 'STUDY ACCESSION'],
                      low_memory=False)
catalog = catalog.loc[catalog.CHR_ID != 'X'].copy()
catalog = catalog.loc[catalog.CHR_ID != 'Y'].copy()
catalog.rename(columns={'CHR_ID': 'Chromosome', 'CHR_POS': 'Position', 'SNPS': 'ID'}, inplace=True)

# Neanderthal SNPs present in GWAS catalog
nean_catalog = neanderthal.merge(catalog.drop(columns=['Chromosome', 'Position']), how='inner', on='ID')
nean_catalog
_____no_output_____
MIT
disease/neanderthal_gwas.ipynb
kshiyao/neanderthal_introgression
Immune-related diseases associated with Neanderthal SNPs

Infections
nean_catalog.loc[nean_catalog['DISEASE/TRAIT'].str.contains('influenza')]
nean_catalog.loc[nean_catalog['DISEASE/TRAIT'].str.contains('wart')]
nean_catalog.loc[nean_catalog['DISEASE/TRAIT'].str.contains('HIV')]
nean_catalog.loc[nean_catalog['DISEASE/TRAIT'].str.contains('Malaria')]
_____no_output_____
MIT
disease/neanderthal_gwas.ipynb
kshiyao/neanderthal_introgression
Allergic diseases
nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('allerg')]
nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('asthma')]
nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('Eczema')]
_____no_output_____
MIT
disease/neanderthal_gwas.ipynb
kshiyao/neanderthal_introgression
Autoimmune/autoinflammatory diseases
nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('lupus')]
nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('rheumatoid')]
nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('scleroderma')]
nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('Sjogren')]
nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('Grave')]
nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('glomerulonephritis')]
nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('colitis')]
nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('Crohn')]
nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('bowel')]
nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('psoriasis')]
nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('celiac')]
nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('multiple sclerosis')]
_____no_output_____
MIT
disease/neanderthal_gwas.ipynb
kshiyao/neanderthal_introgression
Do immune disease-associated Neanderthal SNPs show eQTL?
# Load eQTL data
fairfax_ori = pd.read_csv("../fairfax/tab2_a_cis_eSNPs.txt", sep="\t",
                          usecols=["SNP", "Gene", "Min.dataset", "LPS2.FDR", "LPS24.FDR", "IFN.FDR", "Naive.FDR"])

fairfax_re = pd.read_csv('overlap_filtered_fairfax.csv', usecols=['rsid', 'pvalue', 'gene_id', 'Condition', 'beta'])
fairfax_re.sort_values('pvalue', inplace=True)
fairfax_re.drop_duplicates(subset=['rsid', 'gene_id', 'Condition'], keep='first', inplace=True)

nedelec_re = pd.read_csv('overlap_filtered_nedelec.csv', usecols=['rsid', 'pvalue', 'gene_id', 'Condition', 'beta'])
nedelec_re.sort_values('pvalue', inplace=True)
nedelec_re.drop_duplicates(subset=['rsid', 'gene_id', 'Condition'], keep='first', inplace=True)

quach = pd.read_csv('overlap_filtered_quach.csv', usecols=['rsid', 'pvalue', 'gene_id', 'Condition', 'beta'])
quach.sort_values('pvalue', inplace=True)
quach.drop_duplicates(subset=['rsid', 'gene_id', 'Condition'], keep='first', inplace=True)

alasoo = pd.read_csv('overlap_filtered_alasoo.csv', usecols=['rsid', 'pvalue', 'gene_id', 'Condition', 'beta'])
alasoo.sort_values('pvalue', inplace=True)
alasoo.drop_duplicates(subset=['rsid', 'gene_id', 'Condition'], keep='first', inplace=True)

# Selected Neanderthal SNPs with immune disease associations
gwas = open('overlapped_SNPs.txt', 'r').read().splitlines()
gwas

# Overlap with original Fairfax eQTLs
ls = set(list(fairfax_ori.SNP)).intersection(gwas)
fairfax_ori.loc[fairfax_ori.SNP.isin(ls)]

# Overlap with recomputed Fairfax eQTLs
ls = set(list(fairfax_re.rsid)).intersection(gwas)
fairfax_re.loc[fairfax_re.rsid.isin(ls)]

# Overlap with recomputed Nedelec eQTLs
ls = set(list(nedelec_re.rsid)).intersection(gwas)
nedelec_re.loc[nedelec_re.rsid.isin(ls)]

# Overlap with recomputed Quach eQTLs
ls = set(list(quach.rsid)).intersection(gwas)
quach.loc[quach.rsid.isin(ls)]

# Overlap with recomputed Alasoo eQTLs
ls = set(list(alasoo.rsid)).intersection(gwas)
alasoo.loc[alasoo.rsid.isin(ls)]
_____no_output_____
MIT
disease/neanderthal_gwas.ipynb
kshiyao/neanderthal_introgression
American Gut Project example

This notebook was created from a question we received from a user of MGnify. The question was:

```
I am attempting to retrieve some of the MGnify results from samples that are part of the American Gut Project based on sample location. However latitude and longitude do not appear to be searchable fields. Is it possible to query these fields myself or to work with someone to retrieve a list of samples from a specific geographic range? I am interested in samples from people in Hawaii, so 20.5 - 20.7 and -154.0 - -161.2.
```

Let's decompose the question:
- project "American Gut Project"
- metadata filtration using the geographic location of a sample
- get samples for Hawaii: 20.5 - 20.7; -154.0 - -161.2

Each sample in MGnify is obtained from [ENA](https://www.ebi.ac.uk/ena).

Get samples

The first step is to obtain the samples using the [ENA advanced search API](https://www.ebi.ac.uk/ena/browser/advanced-search).
from pandas import DataFrame
import requests

base_url = 'https://www.ebi.ac.uk/ena/portal/api/search'

# parameters
params = {
    'result': 'sample',
    'query': ' AND '.join([
        'geo_box1(16.9175,-158.4687,21.6593,-152.7969)',
        'description="*American Gut Project*"'
    ]),
    'fields': ','.join(['secondary_sample_accession', 'lat', 'lon']),
    'format': 'json',
}

response = requests.post(base_url, data=params)
agp_samples = response.json()

df = DataFrame(columns=('secondary_sample_accession', 'lat', 'lon'))
df.index.name = 'accession'
for s in agp_samples:
    df.loc[s.get('accession')] = [
        s.get('secondary_sample_accession'),
        s.get('lat'),
        s.get('lon')
    ]
df
               secondary_sample_accession   lat    lon
accession
SAMEA104163502                 ERS1822520  19.6 -155.0
SAMEA104163503                 ERS1822521  19.6 -155.0
SAMEA104163504                 ERS1822522  19.6 -155.0
SAMEA104163505                 ERS1822523  19.6 -155.0
SAMEA104163506                 ERS1822524  19.6 -155.0
...                                   ...   ...    ...
SAMEA4588733                   ERS2409455  21.5 -157.8
SAMEA4588734                   ERS2409456  21.5 -157.8
SAMEA4786501                   ERS2606437  21.4 -157.7
SAMEA92368918                  ERS1561273  19.4 -155.0
SAMEA92936668                  ERS1562030  21.3 -157.7

[121 rows x 3 columns]
Apache-2.0
mgnify/src/notebooks/American_Gut_filter_based_in_location.ipynb
ProteinsWebTeam/ebi-metagenomics-examples
Now we can use the EMG API to get the information.
#!/bin/usr/env python
import requests
import sys


def get_links(data):
    return data["links"]["related"]


if __name__ == "__main__":
    samples_url = "https://www.ebi.ac.uk/metagenomics/api/v1/samples/"

    tsv = sys.argv[1] if len(sys.argv) == 2 else None
    if not tsv:
        print("The first arg is the tsv file")
        exit(1)

    tsv_fh = open(tsv, "r")
    # header
    next(tsv_fh)

    for record in tsv_fh:
        # get the runs first
        # mgnify references the secondary accession
        _, sec_acc, *_ = record.split("\t")
        samples_res = requests.get(samples_url + sec_acc)
        if samples_res.status_code == 404:
            print(sec_acc + " not found in MGnify")
            continue

        # then the analysis for that run
        runs_url = get_links(samples_res.json()["data"]["relationships"]["runs"])
        if not runs_url:
            print("No runs for sample " + sec_acc)
            continue

        print("Getting the runs: " + runs_url)
        run_res = requests.get(runs_url)
        if run_res.status_code != 200:
            print(runs_url + " failed", file=sys.stderr)  # fixed: the original referenced an undefined 'run_url'
            continue

        # iterate over the sample runs
        run_data = run_res.json()
        # this script doesn't consider pagination, it's just an example
        # there could be more than one page of runs
        # use links -> next to get the next page
        for run in run_data["data"]:
            analyses_url = get_links(run["relationships"]["analyses"])
            if not analyses_url:
                print("No analyses for run " + run)
                continue

            analyses_res = requests.get(analyses_url)
            if analyses_res.status_code != 200:
                print(analyses_url + " failed", file=sys.stderr)
                continue

            # dump
            print("Raw analyses data")
            print(analyses_res.json())
            print("=" * 30)

    tsv_fh.close()
_____no_output_____
Apache-2.0
mgnify/src/notebooks/American_Gut_filter_based_in_location.ipynb
ProteinsWebTeam/ebi-metagenomics-examples
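The script above deliberately skips pagination. Its comments suggest following the `links -> next` field of each JSON response to get further pages; the sketch below is one hedged way to do that, where the `links.next` key is assumed from those comments rather than verified against the API:

```
import requests

def iter_pages(url):
    """Yield every page of a paginated listing.

    Assumes each response carries a top-level links.next URL (absent or None on
    the last page), as the comments in the script above indicate.
    """
    while url:
        response = requests.get(url)
        response.raise_for_status()
        payload = response.json()
        yield payload
        url = payload.get("links", {}).get("next")

# Hypothetical usage: iterate over all runs of one sample
# for page in iter_pages("https://www.ebi.ac.uk/metagenomics/api/v1/samples/ERS1822520/runs"):
#     for run in page["data"]:
#         print(run["id"])
```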
Employee Attrition Prediction

There is a class of problems that predict whether some event happens after N years. Examples are employee attrition, hard drive failure, life expectancy, etc. Usually these kinds of problems are considered simple, and the resulting models have varying degrees of performance. Usually it is treated as a classification problem, predicting whether the event happens after exactly N years. The problem with this approach is that people care not so much about the likelihood that the event happens exactly after N years, but about the probability that the event happens today. While you can infer this using Bayes' theorem, doing it at prediction time will not give you good accuracy because the Bayesian inference will be based on one piece of data. It is better to do this kind of inference at training time, and learn the probability rather than the likelihood function. Thus, the problem is learning the conditional probability of the person quitting, given that they have not quit yet, which is similar to the hazard function in survival analysis.
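To make the distinction concrete before loading the data below, here is a minimal numeric sketch of how a per-year conditional (hazard-style) probability turns into the likelihoods used later; the constant quit probability is an illustrative assumption, mirroring the synthetic-data setup further down:

```
p = 0.01   # assumed constant yearly probability of quitting (illustrative value)
t = 5      # number of years observed

# Likelihood the person quits in year t: survives years 0..t-1, then quits
quits_in_year_t = (1 - p) ** t * p

# Likelihood the person is still there after year t: survives years 0..t
still_employed = (1 - p) ** (t + 1)

print(quits_in_year_t, still_employed)
```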
# Import
import numpy as np
import pandas as pd
import numpy.random
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
from sklearn.preprocessing import MinMaxScaler
import math
%matplotlib inline

numpy.random.seed(1239)

# Read the data
# Source: https://www.ibm.com/communities/analytics/watson-analytics-blog/hr-employee-attrition/
raw_data = pd.read_csv('data/WA_Fn-UseC_-HR-Employee-Attrition.csv')

# Check if any is nan. If no nans, we don't need to worry about dealing with them
raw_data.isna().sum().sum()


def prepare_data(raw_data):
    '''
    Prepare the data
    1. Set EmployeeNumber as the index
    2. Drop redundant columns
    3. Reorder columns to make YearsAtCompany first
    4. Change OverTime to the boolean type
    5. Do 1-hot encoding
    '''
    labels = raw_data.Attrition == 'Yes'
    employee_data = raw_data.set_index('EmployeeNumber').drop(columns=['Attrition', 'EmployeeCount', 'Over18'])
    employee_data.loc[:, 'OverTime'] = (employee_data.OverTime == 'Yes').astype('float')
    employee_data = pd.get_dummies(employee_data)
    employee_data = pd.concat([employee_data.YearsAtCompany, employee_data.drop(columns='YearsAtCompany')], axis=1)
    return employee_data, labels


# Split to features and labels
employee_data, labels = prepare_data(raw_data)
_____no_output_____
Apache-2.0
employee_attrition/attrition-tf.ipynb
mlarionov/machine_learning_POC
First we will work on the synthetic set of data; for this reason we will not split the dataset into train/test sets yet.
# Now scale the entire dataset, but not the first column (YearsAtCompany). Instead scale the dataset to a similar range
# to the first column
max_year = employee_data.YearsAtCompany.max()
scaler = MinMaxScaler(feature_range=(0, max_year))
scaled_data = pd.DataFrame(scaler.fit_transform(employee_data.values.astype('float')),
                           columns=employee_data.columns, index=employee_data.index)
_____no_output_____
Apache-2.0
employee_attrition/attrition-tf.ipynb
mlarionov/machine_learning_POC
Based on the chart it seems like a realistic data set.

Now we need to construct our loss function. It will have an additional parameter: the number of years. We define the probability $p(x, t)$ that the person quits this very day, where t is the number of years and x is the remaining features. Then the likelihood that the person has quit after year $t$ is $$P(x,t) = \left(\prod_{l=0}^{t-1} (1-p(x,l))\right) p(x,t)$$ whereas the likelihood that the person is still there after year $t$ is $$P(x,t) = \prod_{l=0}^{t} (1-p(x,l))$$ Strictly speaking x is also dependent on t, but we don't have the historical data for this, so we assume that x is independent of t.

Using the principle of maximum likelihood, we derive the loss function by taking the negative log of the likelihood function: $$\mathscr{L}(y,p) = -\sum_{l=0}^{t-1} \log(1-p(x,l)) - y \log{p} - (1-y) \log(1-p)$$ where y is an indicator of whether the person quit after working exactly t years or not. Notice that the last two terms are the cross-entropy loss function, and the first term is a historical term.

We will use a modified Cox hazard function mechanism and model the conditional probability $p(x,l)$ as a sigmoid function (for simplicity we include the bias in the list of weights, and likewise the weight for the t parameter): $$p=\frac{1}{1 + e^{-\bf{w}\bf{x}}}$$ To create a synthetic set we assume that p does not depend on anything. Then maximum likelihood gives us this simple formula: $$Pos=M p \bar{t}$$ Here Pos is the number of positive examples (people who quit), M is the total number of examples and $\bar{t}$ is the mean time (number of years).
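As a sanity check of the loss above, and separate from the TensorFlow implementation further below, here is a minimal NumPy sketch of the per-example loss; the probability values are illustrative only:

```
import numpy as np

def example_loss(p_history, p_final, y):
    """Negative log-likelihood for one person.

    p_history: probabilities p(x, l) for years l = 0 .. t-1
    p_final:   probability p(x, t) for the final observed year
    y:         1 if the person quit after exactly t years, else 0
    """
    historical_term = -np.sum(np.log(1 - p_history))
    cross_entropy = -(y * np.log(p_final) + (1 - y) * np.log(1 - p_final))
    return historical_term + cross_entropy

# Illustrative numbers: constant 1% yearly quit probability, observed for 5 years
print(example_loss(np.full(5, 0.01), 0.01, y=1))
print(example_loss(np.full(5, 0.01), 0.01, y=0))
```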
# pick a p
p = 0.01

# Get the maximum years. We need it to make sure that the product of p and YearsAtCompany never exceeds 1.
# In reality that is not a problem, but we will use it to correctly create synthetic labels
scaled_data.YearsAtCompany.max()

# Create the synthetic labels.
synthetic_labels = numpy.random.rand(employee_data.shape[0]) < p * employee_data.YearsAtCompany

# Plot the data with the synthetic labels
sns.swarmplot(y='years', x='quit',
              data=pd.DataFrame({"quit": synthetic_labels, 'years': employee_data.YearsAtCompany}));

# We expect the probability based on the synthesized data to be close to p
synthetic_labels.sum() / len(synthetic_labels) / employee_data.YearsAtCompany.mean()
_____no_output_____
Apache-2.0
employee_attrition/attrition-tf.ipynb
mlarionov/machine_learning_POC
Indeed, pretty close to the value of p we set beforehand.

Logistic Regression with the synthetic labels

In this version of the POC we will use TensorFlow. We need to add ones to the dataframe. But since we scaled everything to be between `0` and `40`, the convergence will be faster if we add `40.0` instead of `1`.
# Add 1 to the employee data.
# But to make convergence faster, use 40.0 instead
scaled_data['Ones'] = 40.0
scaled_data


def reset_graph(seed=1239):
    tf.reset_default_graph()
    tf.set_random_seed(seed)
    np.random.seed(seed)


def create_year_column(X, w, year):
    year_term = tf.reshape(X[:, 0] - year, (-1, 1)) * w[0]
    year_column = tf.reshape(X @ w - year_term, (-1,))
    return year_column * tf.cast(tf.greater(X[:, 0], year), dtype=tf.float32)


def logit(X, w):
    '''
    IMPORTANT: This assumes that the weight for the temporal variable is w[0]
    TODO: Remove this assumption and allow to specify the index of the temporal variable
    '''
    max_year_tf = tf.reduce_max(X[:, 0])
    tensors = tf.map_fn(lambda year: create_year_column(X, w, year), tf.range(max_year_tf))
    return tf.transpose(tensors)


logit_result = logit(X, weights)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    result = logit_result.eval()
result[1]


def get_loss(X, y, w):
    '''
    The loss function
    '''
    # The first term
    logit_ = logit(X, w)
    temp_tensor = tf.sigmoid(logit_) * tf.cast(tf.greater(logit_, 0), tf.float32)
    sum_loss = tf.reduce_sum(tf.log(1 - temp_tensor), 1)
    sum_loss = tf.reshape(sum_loss, (-1, 1))
    logistic_prob = tf.sigmoid(X @ w)
    return -sum_loss - y * tf.log(logistic_prob) - (1 - y) * tf.log(1 - logistic_prob)


loss_result = get_loss(X, y, weights / 100)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    result = loss_result.eval()
result

reset_graph()

learning_rate = 0.0005
l2 = 2.0

X = tf.constant(scaled_data.values, dtype=tf.float32, name="X")
y = tf.constant(synthetic_labels.values.reshape(-1, 1), dtype=tf.float32, name="y")
weights = tf.Variable(tf.random_uniform([scaled_data.values.shape[1], 1], -0.01, 0.01, seed=1239), name="weights")

loss = get_loss(X, y, weights)
l2_regularizer = tf.nn.l2_loss(weights) - 0.5 * weights[-1] ** 2
cost = tf.reduce_mean(loss) + l2 * l2_regularizer
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
training_op = optimizer.minimize(cost)
init = tf.global_variables_initializer()

n_epochs = 20000
with tf.Session() as sess:
    sess.run(init)
    for epoch in range(n_epochs):
        if epoch % 1000 == 0:
            print("Epoch", epoch, "Cost =", cost.eval())
            print(f'w: {weights[-1].eval()}')
        sess.run(training_op)
    best_theta = weights.eval()
Epoch 0 Cost = [0.4480857] w: [-0.00260041] Epoch 1000 Cost = [0.25044656] w: [-0.04913734] Epoch 2000 Cost = [0.24958777] w: [-0.06650413] Epoch 3000 Cost = [0.24919516] w: [-0.07856989] Epoch 4000 Cost = [0.2489799] w: [-0.08747929] Epoch 5000 Cost = [0.24980566] w: [-0.09409016] Epoch 6000 Cost = [0.24926803] w: [-0.09901612] Epoch 7000 Cost = [0.24923217] w: [-0.10267571] Epoch 8000 Cost = [0.24968402] w: [-0.10539492] Epoch 9000 Cost = [0.24967311] w: [-0.10741644] Epoch 10000 Cost = [0.2496681] w: [-0.10891172] Epoch 11000 Cost = [0.24966364] w: [-0.1100379] Epoch 12000 Cost = [0.24966182] w: [-0.11086603] Epoch 13000 Cost = [0.24966045] w: [-0.11149137] Epoch 14000 Cost = [0.24966016] w: [-0.11194912] Epoch 15000 Cost = [0.24965991] w: [-0.11229044] Epoch 16000 Cost = [0.24965975] w: [-0.1125449] Epoch 17000 Cost = [0.24965967] w: [-0.11273381] Epoch 18000 Cost = [0.24966054] w: [-0.1128688] Epoch 19000 Cost = [0.2496596] w: [-0.11298056]
Apache-2.0
employee_attrition/attrition-tf.ipynb
mlarionov/machine_learning_POC
The cost will never go down to zero, because of the additional term in the loss function.
# We will print the learned weights.
learned_weights = [(column_name, float(best_theta[column_num]))
                   for column_num, column_name in enumerate(scaled_data.columns)]

# We print the weights sorted by the absolute value of the value
sorted(learned_weights, key=lambda x: abs(x[1]), reverse=True)
_____no_output_____
Apache-2.0
employee_attrition/attrition-tf.ipynb
mlarionov/machine_learning_POC
To compare with the other result, we need to multiply the last weight by 40.
print(f'The predicted probability is: {float(1/(1+np.exp(-best_theta[-1]*40)))}')
The predicted probability is: 0.010747312568128109
Apache-2.0
employee_attrition/attrition-tf.ipynb
mlarionov/machine_learning_POC
Training a Boltzmann Generator for Alanine Dipeptide

This notebook introduces basic concepts behind `bgflow`. It shows how to build and train a Boltzmann generator for a small peptide. The most important aspects it will cover are:
- retrieval of molecular training data
- defining an internal coordinate transform
- defining normalizing flow classes
- combining different normalizing flows
- training a Boltzmann generator via NLL and KLL

The main purpose of this tutorial is to introduce the implementation. The network design is optimized for educational purposes rather than good performance. In the conclusions, we will discuss some aspects of the generator that are not ideal and outline improvements.

Some Preliminaries

We instruct jupyter to reload any imports automatically and define the device and datatype on which we want to perform the computations.
%load_ext autoreload
%autoreload 2

import torch

device = "cuda:3" if torch.cuda.is_available() else "cpu"
dtype = torch.float32

# a context tensor to send data to the right device and dtype via '.to(ctx)'
ctx = torch.zeros([], device=device, dtype=dtype)
_____no_output_____
MIT
BGflow_examples/Alanine_dipeptide/ala2_use_saved_model.ipynb
michellab/bgflow
Load the Data and the Molecular System

Molecular trajectories and their corresponding potential energy functions are available from the `bgmol` repository.
# import os
# from bgmol.datasets import Ala2TSF300
# target_energy = Ala2TSF300().get_energy_model(n_workers=1)

import os
import mdtraj

#dataset = mdtraj.load('output.dcd', top='ala2_fromURL.pdb')
dataset = mdtraj.load('TSFtraj.dcd', top='ala2_fromURL.pdb')
#fname = "obc_xmlsystem_savedmodel"
#coordinates = dataset.xyz
#target_energy = Ala2TSF300().get_energy_model(n_workers=1)
print(dataset)

import numpy as np

rigid_block = np.array([6, 8, 9, 10, 14])
z_matrix = np.array([
    [0, 1, 4, 6],
    [1, 4, 6, 8],
    [2, 1, 4, 0],
    [3, 1, 4, 0],
    [4, 6, 8, 14],
    [5, 4, 6, 8],
    [7, 6, 8, 4],
    [11, 10, 8, 6],
    [12, 10, 8, 11],
    [13, 10, 8, 11],
    [15, 14, 8, 16],
    [16, 14, 8, 6],
    [17, 16, 14, 15],
    [18, 16, 14, 8],
    [19, 18, 16, 14],
    [20, 18, 16, 19],
    [21, 18, 16, 19]
])


def dimensions(dataset):
    return np.prod(dataset.xyz[0].shape)


dim = dimensions(dataset)
print(dim)

from simtk import openmm

with open('ala2_xml_system.txt') as f:
    xml = f.read()
system = openmm.XmlSerializer.deserialize(xml)

from bgflow.distribution.energy.openmm import OpenMMBridge, OpenMMEnergy
from openmmtools import integrators
from simtk import unit

temperature = 300.0 * unit.kelvin
collision_rate = 1.0 / unit.picosecond
timestep = 4.0 * unit.femtosecond
integrator = integrators.LangevinIntegrator(temperature=temperature, collision_rate=collision_rate, timestep=timestep)

energy_bridge = OpenMMBridge(system, integrator, n_workers=1)
target_energy = OpenMMEnergy(int(dim), energy_bridge)
_____no_output_____
MIT
BGflow_examples/Alanine_dipeptide/ala2_use_saved_model.ipynb
michellab/bgflow
The energy model is a `bgflow.Energy` that wraps around OpenMM. The `n_workers` argument determines the number of OpenMM contexts that are used for energy evaluations. In notebooks, we set `n_workers=1` to avoid hiccups. In production, we can omit this argument so that `n_workers` is automatically set to the number of CPU cores.

Visualize Data: Ramachandran Plot for the Backbone Angles
# def compute_phi_psi(trajectory):
#     phi_atoms = [4, 6, 8, 14]
#     phi = md.compute_dihedrals(trajectory, indices=[phi_atoms])[:, 0]
#     psi_atoms = [6, 8, 14, 16]
#     psi = md.compute_dihedrals(trajectory, indices=[psi_atoms])[:, 0]
#     return phi, psi

import numpy as np
import mdtraj as md
from matplotlib import pyplot as plt
from matplotlib.colors import LogNorm

# def plot_phi_psi(ax, trajectory):
#     if not isinstance(trajectory, md.Trajectory):
#         trajectory = md.Trajectory(
#             xyz=trajectory.cpu().detach().numpy().reshape(-1, 22, 3),
#             topology=md.load('ala2_fromURL.pdb').topology
#         )
#     phi, psi = compute_phi_psi(trajectory)
#     ax.hist2d(phi, psi, 50, norm=LogNorm())
#     ax.set_xlim(-np.pi, np.pi)
#     ax.set_ylim(-np.pi, np.pi)
#     ax.set_xlabel("$\phi$")
#     _ = ax.set_ylabel("$\psi$")
#     return trajectory

import numpy as np

n_train = len(dataset) // 2
n_test = len(dataset) - n_train
permutation = np.random.permutation(n_train)

all_data = dataset.xyz.reshape(-1, dimensions(dataset))
training_data = torch.tensor(all_data[permutation]).to(ctx)
test_data = torch.tensor(all_data[permutation + n_train]).to(ctx)
#print(training_data.shape)
torch.Size([143147, 66])
MIT
BGflow_examples/Alanine_dipeptide/ala2_use_saved_model.ipynb
michellab/bgflow
Define the Internal Coordinate Transform

Rather than generating all-Cartesian coordinates, we use a mixed internal coordinate transform. The five central alanine atoms will serve as a Cartesian "anchor", from which all other atoms are placed with respect to internal coordinates (IC) defined through a z-matrix. We have deposited a valid `z_matrix` and the corresponding `rigid_block` in the `dataset.system` from `bgmol`.
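For orientation, each z-matrix row used above lists an atom index followed by the three reference atoms that define its bond length, bond angle, and torsion angle. The reading below is our interpretation of that convention rather than something stated explicitly in the notebook:

```
# Reading one row of the z_matrix defined earlier, e.g. [0, 1, 4, 6] (interpretation, not from the notebook):
#   atom 0 is placed via
#     - a bond length to atom 1,
#     - a bond angle with atoms 1 and 4,
#     - a torsion angle with atoms 1, 4 and 6.
# The atoms in rigid_block ([6, 8, 9, 10, 14]) keep their Cartesian coordinates
# and act as the anchor for this construction.
row = [0, 1, 4, 6]
atom, bond_ref, angle_ref, torsion_ref = row
```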
import bgflow as bg

# throw away 6 degrees of freedom (rotation and translation)
dim_cartesian = len(rigid_block) * 3 - 6
print(dim_cartesian)
#dim_cartesian = len(system.rigid_block) * 3

dim_bonds = len(z_matrix)
print(dim_bonds)
dim_angles = dim_bonds
dim_torsions = dim_bonds

coordinate_transform = bg.MixedCoordinateTransformation(
    data=training_data,
    z_matrix=z_matrix,
    fixed_atoms=rigid_block,
    #keepdims=None,
    keepdims=dim_cartesian,
    normalize_angles=True,
).to(ctx)
_____no_output_____
MIT
BGflow_examples/Alanine_dipeptide/ala2_use_saved_model.ipynb
michellab/bgflow
For demonstration, we transform the first 3 samples from the training data set into internal coordinates as follows:
# bonds, angles, torsions, cartesian, dlogp = coordinate_transform.forward(training_data[:3])
# bonds.shape, angles.shape, torsions.shape, cartesian.shape, dlogp.shape
# #print(bonds)
_____no_output_____
MIT
BGflow_examples/Alanine_dipeptide/ala2_use_saved_model.ipynb
michellab/bgflow
Prior Distribution

The next step is to define a prior distribution that we can easily sample from. The normalizing flow will be trained to transform such latent samples into molecular coordinates. Here, we just take a normal distribution, which is a rather naive choice for reasons that will be discussed in other notebooks.
dim_ics = dim_bonds + dim_angles + dim_torsions + dim_cartesian
mean = torch.zeros(dim_ics).to(ctx)
# passing the mean explicitly to create samples on the correct device
prior = bg.NormalDistribution(dim_ics, mean=mean)
_____no_output_____
MIT
BGflow_examples/Alanine_dipeptide/ala2_use_saved_model.ipynb
michellab/bgflow
Normalizing Flow

Next, we set up the normalizing flow by stacking together different neural networks. For now, we will do this in a rather naive way, not distinguishing between bonds, angles, and torsions. Therefore, we will first define a flow that splits the output from the prior into the different IC terms.

Split Layer
split_into_ics_flow = bg.SplitFlow(dim_bonds, dim_angles, dim_torsions, dim_cartesian)

# test
#print(prior.sample(3))
# ics = split_into_ics_flow(prior.sample(1))
# #print(_ics)
# coordinate_transform.forward(*ics, inverse=True)[0].shape
_____no_output_____
MIT
BGflow_examples/Alanine_dipeptide/ala2_use_saved_model.ipynb
michellab/bgflow
Coupling Layers

Next, we will set up so-called RealNVP coupling layers, which split the input into two channels and then learn affine transformations of channel 1 conditioned on channel 2. Here we will do the split naively between the first and second half of the degrees of freedom.
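For reference, the affine coupling transform described above can be written in standard RealNVP notation (the scale and shift networks $s$ and $t$ are generic symbols here, not names used elsewhere in this notebook):

$$y_1 = x_1 \odot \exp\big(s(x_2)\big) + t(x_2), \qquad y_2 = x_2, \qquad \log\left|\det \frac{\partial y}{\partial x}\right| = \sum_i s(x_2)_i$$

Because $x_2$ passes through unchanged, the transform is easy to invert and its log-Jacobian is cheap to evaluate, which is what makes these layers convenient building blocks.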
class RealNVP(bg.SequentialFlow):

    def __init__(self, dim, hidden):
        self.dim = dim
        self.hidden = hidden
        super().__init__(self._create_layers())

    def _create_layers(self):
        dim_channel1 = self.dim // 2
        dim_channel2 = self.dim - dim_channel1
        split_into_2 = bg.SplitFlow(dim_channel1, dim_channel2)
        layers = [
            # -- split
            split_into_2,
            # -- transform
            self._coupling_block(dim_channel1, dim_channel2),
            bg.SwapFlow(),
            self._coupling_block(dim_channel2, dim_channel1),
            # -- merge
            bg.InverseFlow(split_into_2)
        ]
        return layers

    def _dense_net(self, dim1, dim2):
        return bg.DenseNet(
            [dim1, *self.hidden, dim2],
            activation=torch.nn.ReLU()
        )

    def _coupling_block(self, dim1, dim2):
        return bg.CouplingFlow(bg.AffineTransformer(
            shift_transformation=self._dense_net(dim1, dim2),
            scale_transformation=self._dense_net(dim1, dim2)
        ))


#RealNVP(dim_ics, hidden=[128]).to(ctx).forward(prior.sample(3))[0].shape
_____no_output_____
MIT
BGflow_examples/Alanine_dipeptide/ala2_use_saved_model.ipynb
michellab/bgflow
Boltzmann Generator

Finally, we define the Boltzmann generator. It will sample molecular conformations by:
1. sampling in latent space from the normal prior distribution,
2. transforming the samples into a more complicated distribution through a number of RealNVP blocks (the parameters of these blocks will be subject to optimization),
3. splitting the output of the network into blocks that define the internal coordinates, and
4. transforming the internal coordinates into Cartesian coordinates through the inverse IC transform.
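The density of samples produced this way follows from the usual change-of-variables identity (stated here only as background; $f$ denotes the full stack of transforms and $p_Z$ the prior):

$$\log p_X(x) = \log p_Z\big(f^{-1}(x)\big) + \log\left|\det \frac{\partial f^{-1}(x)}{\partial x}\right|$$

This is the quantity that the NLL training mentioned in the introduction maximises on the training data.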
n_realnvp_blocks = 5
layers = []

for i in range(n_realnvp_blocks):
    layers.append(RealNVP(dim_ics, hidden=[128, 128, 128]))
layers.append(split_into_ics_flow)
layers.append(bg.InverseFlow(coordinate_transform))

flow = bg.SequentialFlow(layers).to(ctx)

# test
#flow.forward(prior.sample(3))[0].shape

flow.load_state_dict(torch.load('modelTSFtraj_xmlsystem_20000KLL.pt'))

# print number of trainable parameters
"#Parameters:", np.sum([np.prod(p.size()) for p in flow.parameters()])

generator = bg.BoltzmannGenerator(
    flow=flow,
    prior=prior,
    target=target_energy
)


def plot_energies(ax, samples, target_energy, test_data):
    sample_energies = target_energy.energy(samples).cpu().detach().numpy()
    md_energies = target_energy.energy(test_data[:len(samples)]).cpu().detach().numpy()
    cut = max(np.percentile(sample_energies, 80), 20)

    ax.set_xlabel("Energy [$k_B T$]")
    # y-axis on the right
    ax2 = plt.twinx(ax)
    ax.get_yaxis().set_visible(False)

    ax2.hist(sample_energies, range=(-50, cut), bins=40, density=False, label="BG")
    ax2.hist(md_energies, range=(-50, cut), bins=40, density=False, label="MD")
    ax2.set_ylabel(f"Count [#Samples / {len(samples)}]")
    ax2.legend()


def plot_energy_onlyMD(ax, target_energy, test_data):
    md_energies = target_energy.energy(test_data[:1000]).cpu().detach().numpy()

    ax.set_xlabel("Energy [$k_B T$]")
    # y-axis on the right
    ax2 = plt.twinx(ax)
    ax.get_yaxis().set_visible(False)

    #ax2.hist(sample_energies, range=(-50, cut), bins=40, density=False, label="BG")
    ax2.hist(md_energies, bins=40, density=False, label="MD")
    ax2.set_ylabel(f"Count [#Samples / 1000]")
    ax2.legend()


n_samples = 10000
samples = generator.sample(n_samples)
print(samples.shape)

fig, axes = plt.subplots(1, 2, figsize=(6, 3))
fig.tight_layout()
samplestrajectory = plot_phi_psi(axes[0], samples)
plot_energies(axes[1], samples, target_energy, test_data)
#plt.savefig(f"varysnapshots/{fname}.png", bbox_inches='tight')

#samplestrajectory.save("mytraj_full_samples.dcd")
#del samples
torch.Size([10000, 66])
MIT
BGflow_examples/Alanine_dipeptide/ala2_use_saved_model.ipynb
michellab/bgflow
bonds, angles, torsions, cartesian, dlogp = coordinate_transform.forward(samples)
print(bonds.shape)
print('1:', bonds[0])
CHbond_indices = [0, 2, 3, 7, 8, 9, 14, 15, 16]
bonds_new = bonds.clone().detach()
bonds_new[:, CHbond_indices] = 0.109
print('2:', bonds_new[0:3])
samples_corrected = coordinate_transform.forward(bonds_new, angles, torsions, cartesian, inverse=True)
print(samples_corrected[0].shape)
samplestrajectory = mdtraj.Trajectory(
    xyz=samples[0].cpu().detach().numpy().reshape(-1, 22, 3),
    topology=mdtraj.load('ala2_fromURL.pdb').topology
)
#samplestrajectory.save('mysamples_traj_correctedonce.dcd')

import nglview as nv

#samplestrajectory.save("Samplestraj.pdb")
#md.save(samplestrajectory, "obcstride10Samplestraj.dcd")
widget = nv.show_mdtraj(samplestrajectory)
widget
_____no_output_____
MIT
BGflow_examples/Alanine_dipeptide/ala2_use_saved_model.ipynb
michellab/bgflow
LassoLars Regression with Robust Scaler

This code template is for regression analysis using a simple LassoLars Regression: a lasso model implemented with the LARS algorithm, combined with feature scaling using Robust Scaler in a Pipeline.

Required Packages
import warnings
import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
from sklearn.linear_model import LassoLars

warnings.filterwarnings('ignore')
_____no_output_____
Apache-2.0
Regression/Linear Models/LassoLars_RobustScaler.ipynb
shreepad-nade/ds-seed
Initialization

Filepath of CSV file
#filepath
file_path = ""
_____no_output_____
Apache-2.0
Regression/Linear Models/LassoLars_RobustScaler.ipynb
shreepad-nade/ds-seed
List of features which are required for model training.
#x_values
features = []
_____no_output_____
Apache-2.0
Regression/Linear Models/LassoLars_RobustScaler.ipynb
shreepad-nade/ds-seed
Target feature for prediction.
#y_value
target = ''
_____no_output_____
Apache-2.0
Regression/Linear Models/LassoLars_RobustScaler.ipynb
shreepad-nade/ds-seed
Data Fetching

Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools. We will use the pandas library to read the CSV file from its storage path, and the head function to display the first few rows.
df = pd.read_csv(file_path)
df.head()
_____no_output_____
Apache-2.0
Regression/Linear Models/LassoLars_RobustScaler.ipynb
shreepad-nade/ds-seed
Feature Selection

Feature selection is the process of reducing the number of input variables when developing a predictive model. It is used to reduce the number of input variables both to lower the computational cost of modelling and, in some cases, to improve the performance of the model. We will assign all the required input features to X and the target/outcome to Y.
X = df[features]
Y = df[target]
_____no_output_____
Apache-2.0
Regression/Linear Models/LassoLars_RobustScaler.ipynb
shreepad-nade/ds-seed
Data Preprocessing

Since the majority of the machine learning models in the sklearn library don't handle string categorical data or null values, we have to explicitly remove or replace null values. The snippet below contains functions that remove null values if any exist, and convert the string classes in the dataset by encoding them to integer classes.
def NullClearner(df):
    if isinstance(df, pd.Series) and (df.dtype in ["float64", "int64"]):
        df.fillna(df.mean(), inplace=True)
        return df
    elif isinstance(df, pd.Series):
        df.fillna(df.mode()[0], inplace=True)
        return df
    else:
        return df


def EncodeX(df):
    return pd.get_dummies(df)
_____no_output_____
Apache-2.0
Regression/Linear Models/LassoLars_RobustScaler.ipynb
shreepad-nade/ds-seed
Calling preprocessing functions on the feature and target set.
x = X.columns.to_list()
for i in x:
    X[i] = NullClearner(X[i])
X = EncodeX(X)
Y = NullClearner(Y)
X.head()
_____no_output_____
Apache-2.0
Regression/Linear Models/LassoLars_RobustScaler.ipynb
shreepad-nade/ds-seed
Correlation Map

In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
f, ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt='.1f', ax=ax, mask=matrix)
plt.show()
_____no_output_____
Apache-2.0
Regression/Linear Models/LassoLars_RobustScaler.ipynb
shreepad-nade/ds-seed
Data Splitting

The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
_____no_output_____
Apache-2.0
Regression/Linear Models/LassoLars_RobustScaler.ipynb
shreepad-nade/ds-seed
Model

LassoLars is a lasso model implemented using the LARS algorithm, and unlike the implementation based on coordinate descent, this yields the exact solution, which is piecewise linear as a function of the norm of its coefficients.

Tuning parameters

> **fit_intercept** -> whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations.

> **alpha** -> Constant that multiplies the penalty term. Defaults to 1.0. alpha = 0 is equivalent to an ordinary least squares, solved by LinearRegression. For numerical reasons, using alpha = 0 with the LassoLars object is not advised and you should prefer the LinearRegression object.

> **eps** -> The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the tol parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization.

> **max_iter** -> Maximum number of iterations to perform.

> **positive** -> Restrict coefficients to be >= 0. Be aware that you might want to remove fit_intercept, which is set True by default. Under the positive restriction the model coefficients will not converge to the ordinary-least-squares solution for small values of alpha. Only coefficients up to the smallest alpha value (alphas_[alphas_ > 0.].min() when fit_path=True) reached by the stepwise Lars-Lasso algorithm are typically in congruence with the solution of the coordinate descent Lasso estimator.

> **precompute** -> Whether to use a precomputed Gram matrix to speed up calculations.

Feature Scaling

Robust Scaler scales features using statistics that are robust to outliers. This scaler removes the median and scales the data according to the quantile range (defaults to IQR: Interquartile Range). The IQR is the range between the 1st quartile (25th quantile) and the 3rd quartile (75th quantile). For more information... [click here](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.RobustScaler.html)
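The cell below uses the default LassoLars settings; the tuning parameters listed above can be passed directly to the estimator inside the pipeline. A minimal sketch, where the parameter values are illustrative only and not recommendations:

```
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.linear_model import LassoLars

# Illustrative values only; tune them for your own dataset
tuned_model = make_pipeline(
    RobustScaler(),
    LassoLars(alpha=0.1, fit_intercept=True, max_iter=500, positive=False)
)
# tuned_model.fit(x_train, y_train)
```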
model = make_pipeline(RobustScaler(), LassoLars())
model.fit(x_train, y_train)
_____no_output_____
Apache-2.0
Regression/Linear Models/LassoLars_RobustScaler.ipynb
shreepad-nade/ds-seed
Model Accuracy

We will use the trained model to make predictions on the test set, then use the predicted values to measure the accuracy of our model.

score: The score function returns the coefficient of determination R² of the prediction.
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
Accuracy score 79.97 %
Apache-2.0
Regression/Linear Models/LassoLars_RobustScaler.ipynb
shreepad-nade/ds-seed
> **r2_score**: The **r2_score** function computes the proportion of variability in the target explained by our model.

> **mae**: The **mean absolute error** function calculates the total amount of error (the average absolute distance between the real data and the predicted data) of our model.

> **mse**: The **mean squared error** function squares the errors, penalizing the model for large errors.
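For reference, these three metrics correspond to the following standard formulas, with $y_i$ the true values, $\hat{y}_i$ the predictions, and $\bar{y}$ the mean of the true values:

$$R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}, \qquad \mathrm{MAE} = \frac{1}{n}\sum_i |y_i - \hat{y}_i|, \qquad \mathrm{MSE} = \frac{1}{n}\sum_i (y_i - \hat{y}_i)^2$$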
y_pred = model.predict(x_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test, y_pred) * 100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test, y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test, y_pred)))
R2 Score: 79.97 %
Mean Absolute Error 4016.94
Mean Squared Error 30625388.66
Apache-2.0
Regression/Linear Models/LassoLars_RobustScaler.ipynb
shreepad-nade/ds-seed
Prediction Plot

First, we plot the actual observations, with the record number on the x-axis and the observed target value on the y-axis. For the prediction line, we plot the model's predictions for the same test records on the y-axis.
plt.figure(figsize=(14, 10))
plt.plot(range(20), y_test[0:20], color="green")
plt.plot(range(20), model.predict(x_test[0:20]), color="red")
plt.legend(["Actual", "prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
_____no_output_____
Apache-2.0
Regression/Linear Models/LassoLars_RobustScaler.ipynb
shreepad-nade/ds-seed
Loading the data
# Assumed imports (these live in earlier notebook cells not shown in this excerpt)
import numpy as np
import pandas as pd
from sklearn.model_selection import cross_val_score

df = pd.read_hdf("../data/car.h5")
df.sample()

SUFFIX_CAT = '__cat'
for feat in df.columns:
    if isinstance(df[feat][0], list):
        continue
    factorized_values = df[feat].factorize()[0]
    if SUFFIX_CAT in feat:
        df[feat] = factorized_values
    else:
        df[feat + SUFFIX_CAT] = factorized_values

cat_feats = [x for x in df.columns if SUFFIX_CAT in x]
cat_feats = [x for x in cat_feats if 'price' not in x]

df['param_rok-produkcji'] = df['param_rok-produkcji'].map(lambda x: -1 if str(x) == 'None' else int(x))
df['param_moc'] = df['param_moc'].map(lambda x: -1 if str(x) == 'None' else int(x.split(' ')[0]))
df['param_pojemność-skokowa'] = df['param_pojemność-skokowa'].map(
    lambda x: -1 if str(x) == 'None' else int(x.split('cm')[0].replace(' ', '')))

feats = [
    'param_rok-produkcji',
    'param_stan__cat',
    'param_napęd__cat',
    'param_skrzynia-biegów__cat',
    'param_moc',
    'param_faktura-vat__cat',
    'param_marka-pojazdu__cat',
    'param_typ__cat',
    'feature_kamera-cofania__cat',
    'param_wersja__cat',
    'param_model-pojazdu__cat',
    'param_pojemność-skokowa',
    'param_kod-silnika__cat',
    'seller_name__cat',
    'feature_wspomaganie-kierownicy__cat',
    'feature_czujniki-parkowania-przednie__cat',
    'param_uszkodzony__cat',
    'feature_system-start-stop__cat',
    'feature_regulowane-zawieszenie__cat',
    'feature_asystent-pasa-ruchu__cat',
]


def run_model(model, feats):
    X = df[feats].values
    y = df['price_value'].values
    scores = cross_val_score(model, X, y, cv=3, scoring='neg_mean_absolute_error')
    return np.mean(scores), np.std(scores)
_____no_output_____
MIT
matrix_two/matrix2_day5_hyperopt.ipynb
AardJan/dw_matrix
XGBoost
# Assumed imports (these live in earlier notebook cells not shown in this excerpt)
import xgboost as xgb
from hyperopt import hp, fmin, tpe, STATUS_OK

xgb_params = {
    'max_depth': 5,
    'n_estimatords': 50,
    'learning_rate': 0.1,
    'seed': 0,
    'nthread': 3
}
model = xgb.XGBRegressor(**xgb_params)
run_model(model, feats)


def obj_func(params):
    print("Traniang with params: ")
    print(params)
    mean_mae, score_std = run_model(xgb.XGBRegressor(**params), feats)
    return {"loss": np.abs(mean_mae), "status": STATUS_OK}


xgb_reg_params = {
    'learning_rate': hp.choice('learning_rate', np.arange(0.05, 0.31, 0.05)),
    'max_depth': hp.choice('max_depth', np.arange(5, 16, 1, dtype=int)),
    'subsample': hp.quniform('subsample', 0.5, 1, 0.05),
    'colsample_bytree': hp.quniform('colsample_bytree', 0.5, 1, 0.05),
    'objective': 'reg:squarederror',
    'n_estimatords': 100,
    'seed': 0,
    'nthread': 4
}

best = fmin(obj_func, xgb_reg_params, algo=tpe.suggest, max_evals=30)
best
Traniang with params: {'colsample_bytree': 0.6000000000000001, 'learning_rate': 0.3, 'max_depth': 5, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.9} Traniang with params: {'colsample_bytree': 0.5, 'learning_rate': 0.2, 'max_depth': 9, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.9} Traniang with params: {'colsample_bytree': 0.8, 'learning_rate': 0.25, 'max_depth': 7, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.9500000000000001} Traniang with params: {'colsample_bytree': 0.9, 'learning_rate': 0.15000000000000002, 'max_depth': 15, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.8500000000000001} Traniang with params: {'colsample_bytree': 0.7000000000000001, 'learning_rate': 0.15000000000000002, 'max_depth': 9, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.75} Traniang with params: {'colsample_bytree': 0.9500000000000001, 'learning_rate': 0.15000000000000002, 'max_depth': 5, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.9} Traniang with params: {'colsample_bytree': 0.55, 'learning_rate': 0.25, 'max_depth': 7, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.7000000000000001} Traniang with params: {'colsample_bytree': 0.5, 'learning_rate': 0.2, 'max_depth': 12, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.55} Traniang with params: {'colsample_bytree': 0.6000000000000001, 'learning_rate': 0.1, 'max_depth': 15, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.9} Traniang with params: {'colsample_bytree': 0.8, 'learning_rate': 0.05, 'max_depth': 7, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.6000000000000001} Traniang with params: {'colsample_bytree': 0.9, 'learning_rate': 0.25, 'max_depth': 8, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.8500000000000001} Traniang with params: {'colsample_bytree': 0.55, 'learning_rate': 0.3, 'max_depth': 14, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.8} Traniang with params: {'colsample_bytree': 0.9, 'learning_rate': 0.3, 'max_depth': 5, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.7000000000000001} Traniang with params: {'colsample_bytree': 0.8, 'learning_rate': 0.25, 'max_depth': 9, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.8} Traniang with params: {'colsample_bytree': 0.5, 'learning_rate': 0.05, 'max_depth': 9, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.55} Traniang with params: {'colsample_bytree': 0.9, 'learning_rate': 0.3, 'max_depth': 12, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 1.0} Traniang with params: {'colsample_bytree': 0.8, 'learning_rate': 0.05, 'max_depth': 12, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.8500000000000001} Traniang with params: {'colsample_bytree': 0.55, 'learning_rate': 0.25, 'max_depth': 5, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.65} Traniang 
with params: {'colsample_bytree': 0.9500000000000001, 'learning_rate': 0.25, 'max_depth': 12, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.9} Traniang with params: {'colsample_bytree': 0.55, 'learning_rate': 0.15000000000000002, 'max_depth': 13, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.8500000000000001} Traniang with params: {'colsample_bytree': 0.7000000000000001, 'learning_rate': 0.1, 'max_depth': 15, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 1.0} Traniang with params: {'colsample_bytree': 0.65, 'learning_rate': 0.1, 'max_depth': 15, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 1.0} Traniang with params: {'colsample_bytree': 0.7000000000000001, 'learning_rate': 0.1, 'max_depth': 15, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 1.0} Traniang with params: {'colsample_bytree': 0.75, 'learning_rate': 0.1, 'max_depth': 10, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 1.0} Traniang with params: {'colsample_bytree': 0.65, 'learning_rate': 0.1, 'max_depth': 6, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.9500000000000001} Traniang with params: {'colsample_bytree': 0.7000000000000001, 'learning_rate': 0.1, 'max_depth': 11, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.9500000000000001} Traniang with params: {'colsample_bytree': 0.75, 'learning_rate': 0.1, 'max_depth': 15, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 1.0} Traniang with params: {'colsample_bytree': 0.75, 'learning_rate': 0.1, 'max_depth': 15, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.9500000000000001} Traniang with params: {'colsample_bytree': 0.8500000000000001, 'learning_rate': 0.1, 'max_depth': 11, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.9500000000000001} Traniang with params: {'colsample_bytree': 0.75, 'learning_rate': 0.2, 'max_depth': 15, 'n_estimatords': 100, 'nthread': 4, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.8} 100%|██████████| 30/30 [19:10<00:00, 55.79s/it, best loss: 6987.881796093094]
MIT
matrix_two/matrix2_day5_hyperopt.ipynb
AardJan/dw_matrix
Cross Validation
value_array = []
error_array = []

from sklearn.model_selection import StratifiedKFold

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
for train_index, test_index in skf.split(X, Y):
    print("TRAIN:", train_index, "TEST:", test_index)
    xTrain, xTest = X[train_index], X[test_index]
    yTrain, yTest = Y[train_index], Y[test_index]
    model.fit(xTrain, yTrain)
    value = model.score(xTest, yTest)
    error = model.mean_squared_error(xTest, yTest)
    value_array.append(value)
    error_array.append(error)

np.mean(value_array)
np.mean(error_array)
_____no_output_____
CC-BY-4.0
FuzzyKNN/Esperimenti su FKNN.ipynb
ritafolisi/Tirocinio
Model Selection & Cross Validation
a = np.arange(1, 21, 2)
parameters = {"k": a}
parameters["k"]

from sklearn.model_selection import GridSearchCV

clf = GridSearchCV(model, parameters, cv=5)
clf.fit(xTrain, yTrain)
clf.score(xTest, yTest)

best_params = clf.best_params_
best_params
model = clf.best_estimator_


def MSE_membership(self, X, y):
    memb, _ = self.predict(X)
    res = []
    for t in memb:
        res.append(t[1])
    return mean_squared_error(y, res)


model.RMSE_membership(xTest, yTest)

from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.metrics import classification_report, mean_squared_error
from sklearn.metrics import accuracy_score
from sklearn.utils import shuffle

df = pd.read_csv('iris-setosa.csv')
X = df.iloc[:, 1:3].values
y = df.iloc[:, 0].values
seed = 10
X, y = shuffle(X, y, random_state=seed)

a = np.arange(1, 21, 2)
parameters = {"k": a}
N_SPLIT = 5
err = []
acc = []

skf = StratifiedKFold(n_splits=N_SPLIT, shuffle=False, random_state=5)
for train_index, validation_index in skf.split(X, y):
    print(train_index)
    X_train, X_validation = X[train_index], X[validation_index]
    y_train, y_validation = y[train_index], y[validation_index]
    model = FuzzyKNN()
    clf = GridSearchCV(model, parameters, cv=5)
    clf.fit(X_train, y_train)
    best_model = clf.best_estimator_
    best_model.fit(X_train, y_train)
    acc.append(best_model.score(X_validation, y_validation))
    val = best_model.RMSE_membership(X_validation, y_validation)
    err.append(val)

acc
err
_____no_output_____
CC-BY-4.0
FuzzyKNN/Esperimenti su FKNN.ipynb
ritafolisi/Tirocinio
Import libraries and define const values
import json

import folium
from geopandas import GeoDataFrame
from pysal.viz.mapclassify import Natural_Breaks
import requests

id_field = 'id'
value_field = 'score'
num_bins = 4
fill_color = 'YlOrRd'
fill_opacity = 0.9

REST_API_ADDRESS = 'http://10.90.46.32:4646/'
Alive_URL = REST_API_ADDRESS + 'alive'
BRS_URL = REST_API_ADDRESS + 'BRS'
Flush_URL = REST_API_ADDRESS + 'flushBuffer'
ChangeProteus_URL = REST_API_ADDRESS + 'changeProteus'
_____no_output_____
Apache-2.0
executable/REST_example.ipynb
smartdatalake/best_region_search
Identify the areas where start-ups thrive
topk = 11
eps = 0.1  # We measure distance in radians, where 1 radian is around 100km, and epsilon is the length of each side of the region
f = "null"
dist = True

keywordsColumn = "flags"
keywords = "startup-registroimprese"
keywordsColumn2 = ""
keywords2 = ""
table = "BRSflags"

data = {'topk': topk, 'eps': eps, 'f': f, 'input': table, "keywordsColumn": keywordsColumn,
        "keywords": keywords, "keywordsColumn2": keywordsColumn2, "keywords2": keywords2, "dist": dist}

response = requests.get(BRS_URL, params=data)
print(response.text)

res = json.loads(response.text)
results_geojson = {"type": "FeatureCollection", "features": []}
for region in res:
    results_geojson['features'].append({
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": region['center']},
        "properties": {"id": region['rank'], "score": region['score']}
    })
[ { "rank":1, "center":[9.191005,45.47981], "score":77.0 } ,{ "rank":2, "center":[12.50779,41.873835], "score":35.0 } ,{ "rank":3, "center":[7.661105,45.064135], "score":16.0 } ,{ "rank":4, "center":[14.238015,40.869564999999994], "score":12.0 } ,{ "rank":5, "center":[11.382850000000001,44.483135], "score":9.0 } ,{ "rank":6, "center":[9.652125,45.671640000000004], "score":7.0 } ,{ "rank":7, "center":[11.92423,45.40219000000001], "score":6.0 } ,{ "rank":8, "center":[18.183224735000003,40.369488649999994], "score":6.0 } ,{ "rank":9, "center":[11.223689069999999,43.809649345], "score":6.0 } ,{ "rank":10, "center":[13.353245000000003,38.117855000000006], "score":6.0 } ,{ "rank":11, "center":[8.93764,44.41054], "score":5.0 } ]
Apache-2.0
executable/REST_example.ipynb
smartdatalake/best_region_search
Initialize the map and visualize the output regions
m = folium.Map(
    location=[45.474989560000004, 9.205786594999998],
    tiles='Stamen Toner',
    zoom_start=11
)

gdf = GeoDataFrame.from_features(results_geojson['features'])
gdf.crs = {'init': 'epsg:4326'}
gdf['geometry'] = gdf.buffer(data['eps'] / 2).envelope

threshold_scale = Natural_Breaks(gdf[value_field], k=num_bins).bins.tolist()
threshold_scale.insert(0, gdf[value_field].min())

choropleth = folium.Choropleth(gdf, data=gdf, columns=[id_field, value_field],
                               key_on='feature.properties.{}'.format(id_field),
                               fill_color=fill_color, fill_opacity=fill_opacity,
                               threshold_scale=threshold_scale).add_to(m)

fields = list(gdf.columns.values)
fields.remove('geometry')
tooltip = folium.features.GeoJsonTooltip(fields=fields)
choropleth.geojson.add_child(tooltip)

m
_____no_output_____
Apache-2.0
executable/REST_example.ipynb
smartdatalake/best_region_search
Analyze A/B Test Results You may either submit your notebook through the workspace here, or you may work from your local machine and submit through the next page. Either way, assure that your code passes the project [RUBRIC](https://review.udacity.com/!/projects/37e27304-ad47-4eb0-a1ab-8c12f60e43d0/rubric). **Please save regularly.** This project will assure you have mastered the subjects covered in the statistics lessons. The hope is to have this project be as comprehensive a review of these topics as possible. Good luck! Table of Contents- [Introduction](intro)- [Part I - Probability](probability)- [Part II - A/B Test](ab_test)- [Part III - Regression](regression) Introduction A/B tests are very commonly performed by data analysts and data scientists. It is important that you get some practice working with the difficulties of these tests. For this project, you will be working to understand the results of an A/B test run by an e-commerce website. Your goal is to work through this notebook to help the company understand if they should implement the new page, keep the old page, or perhaps run the experiment longer to make their decision. **As you work through this notebook, follow along in the classroom and answer the corresponding quiz questions associated with each question.** The labels for each classroom concept are provided for each question. This will assure you are on the right track as you work through the project, and you can feel more confident in your final submission meeting the criteria. As a final check, assure you meet all the criteria on the [RUBRIC](https://review.udacity.com/!/projects/37e27304-ad47-4eb0-a1ab-8c12f60e43d0/rubric). Part I - Probability To get started, let's import our libraries.
import pandas as pd import numpy as np import random import matplotlib.pyplot as plt %matplotlib inline #We are setting the seed to assure you get the same answers on quizzes as we set up random.seed(42)
_____no_output_____
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
`1.` Now, read in the `ab_data.csv` data. Store it in `df`. **Use your dataframe to answer the questions in Quiz 1 of the classroom.**a. Read in the dataset and take a look at the top few rows here:
#import the dataset df = pd.read_csv('ab_data.csv') #show the first 5 rows df.head()
_____no_output_____
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
b. Use the cell below to find the number of rows in the dataset.
#show the total number of rows df.shape[0]
_____no_output_____
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
c. The number of unique users in the dataset.
#calculare the number of unique user_id len(df['user_id'].unique())
_____no_output_____
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
d. The proportion of users converted.
#calculate the converted users df['converted'].mean()
_____no_output_____
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
e. The number of times the `new_page` and `treatment` don't match.
#treatment in group will be called A and new_page in landing_page will be called B df_A_not_B = df.query('group == "treatment" & landing_page != "new_page"') df_B_not_A = df.query('group != "treatment" & landing_page == "new_page"') #calculate thenumber of time new_page and treatment don't line up len(df_A_not_B) + len(df_B_not_A)
_____no_output_____
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
f. Do any of the rows have missing values?
#view if there is any missing value df.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 294478 entries, 0 to 294477 Data columns (total 5 columns): user_id 294478 non-null int64 timestamp 294478 non-null object group 294478 non-null object landing_page 294478 non-null object converted 294478 non-null int64 dtypes: int64(2), object(3) memory usage: 11.2+ MB
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
**No missing Values** `2.` For the rows where **treatment** does not match with **new_page** or **control** does not match with **old_page**, we cannot be sure if this row truly received the new or old page. Use **Quiz 2** in the classroom to figure out how we should handle these rows. a. Now use the answer to the quiz to create a new dataset that meets the specifications from the quiz. Store your new dataframe in **df2**.
#remove the mismatch rows df1 = df.drop(df[(df.group == "treatment") & (df.landing_page != "new_page")].index) df2 = df1.drop(df1[(df1.group == "control") & (df1.landing_page != "old_page")].index) # Double Check all of the correct rows were removed - this should be 0 df2[((df2['group'] == 'treatment') == (df2['landing_page'] == 'new_page')) == False].shape[0]
_____no_output_____
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
`3.` Use **df2** and the cells below to answer questions for **Quiz3** in the classroom. a. How many unique **user_id**s are in **df2**?
#calculare the number of unique user_id len(df2['user_id'].unique())
_____no_output_____
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
b. There is one **user_id** repeated in **df2**. What is it?
#find out which user_id is duplicated df2.loc[df2.user_id.duplicated(), 'user_id']
_____no_output_____
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
c. What is the row information for the repeat **user_id**?
#show the full row information for the duplicated user_id df2.loc[df2.user_id.duplicated()]
_____no_output_____
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
d. Remove **one** of the rows with a duplicate **user_id**, but keep your dataframe as **df2**.
# Now we remove the row with the duplicated user_id (keeping the first occurrence) df2 = df2.drop_duplicates(subset='user_id') # Check again that no duplicated user_ids remain sum(df2.user_id.duplicated())
_____no_output_____
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
`4.` Use **df2** in the cells below to answer the quiz questions related to **Quiz 4** in the classroom.a. What is the probability of an individual converting regardless of the page they receive?
# Probability of an individual converting regardless of the page they receive df2['converted'].mean()
_____no_output_____
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
b. Given that an individual was in the `control` group, what is the probability they converted?
# The probability of an individual converting given that an individual was in the control group control_group = len(df2.query('group=="control" and converted==1'))/len(df2.query('group=="control"')) control_group
_____no_output_____
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
c. Given that an individual was in the `treatment` group, what is the probability they converted?
# The probability of an individual converting given that an individual was in the treatment group treatment_group = len(df2.query('group=="treatment" and converted==1'))/len(df2.query('group=="treatment"')) treatment_group
_____no_output_____
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
d. What is the probability that an individual received the new page?
# The probability of individual received new page len(df2.query('landing_page=="new_page"'))/len(df2.index)
_____no_output_____
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
e. Consider your results from parts (a) through (d) above, and explain below whether you think there is sufficient evidence to conclude that the new treatment page leads to more conversions. The conversion rates computed above for the control and treatment groups are very similar, so the descriptive statistics alone do not provide sufficient evidence that the new treatment page leads to more conversions; a formal hypothesis test is needed before drawing a conclusion. Part II - A/B Test Notice that because of the time stamp associated with each event, you could technically run a hypothesis test continuously as each observation was observed. However, then the hard question is do you stop as soon as one page is considered significantly better than another or does it need to happen consistently for a certain amount of time? How long do you run to render a decision that neither page is better than another? These questions are the difficult parts associated with A/B tests in general. `1.` For now, consider you need to make the decision just based on all the data provided. If you want to assume that the old page is better unless the new page proves to be definitely better at a Type I error rate of 5%, what should your null and alternative hypotheses be? You can state your hypothesis in terms of words or in terms of **$p_{old}$** and **$p_{new}$**, which are the converted rates for the old and new pages. $$H_0: p_{new} - p_{old} \leq 0$$ $$H_1: p_{new} - p_{old} > 0$$ In words: the null hypothesis is that the new page converts no better than the old page, and the alternative is that the new page converts better, tested at a Type I error rate of 5%. `2.` Assume under the null hypothesis, $p_{new}$ and $p_{old}$ both have "true" success rates equal to the **converted** success rate regardless of page - that is $p_{new}$ and $p_{old}$ are equal. Furthermore, assume they are equal to the **converted** rate in **ab_data.csv** regardless of the page. Use a sample size for each page equal to the ones in **ab_data.csv**. Perform the sampling distribution for the difference in **converted** between the two pages over 10,000 iterations of calculating an estimate from the null. Use the cells below to provide the necessary parts of this simulation. If this doesn't make complete sense right now, don't worry - you are going to work through the problems below to complete this problem. You can use **Quiz 5** in the classroom to make sure you are on the right track. a. What is the **conversion rate** for $p_{new}$ under the null?
p_new = len(df2.query( 'converted==1'))/len(df2.index) p_new
_____no_output_____
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
b. What is the **conversion rate** for $p_{old}$ under the null?
p_old = len(df2.query('converted==1'))/len(df2.index) p_old p_new = len(df2.query('converted==1'))/len(df2.index) p_new # probability under the null p = np.mean([p_old, p_new]) p # difference of p_new and p_old p_diff = p_new - p_old
_____no_output_____
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
Under null p_old is equal to p_new c. What is $n_{new}$, the number of individuals in the treatment group?
#calculate number of queries when landing_page is equal to new_page n_new = len(df2.query('landing_page=="new_page"')) #print n_new n_new
_____no_output_____
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
d. What is $n_{old}$, the number of individuals in the control group?
#calculate number of queries when landing_page is equal to old_page n_old = len(df2.query('landing_page=="old_page"')) #print n_old n_old
_____no_output_____
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
e. Simulate $n_{new}$ transactions with a conversion rate of $p_{new}$ under the null. Store these $n_{new}$ 1's and 0's in **new_page_converted**.
## simulate n_new transactions with a convert rate of p_new under the null new_page_converted = np.random.choice([0, 1], n_new, p = [1-p_new, p_new])
_____no_output_____
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
f. Simulate $n_{old}$ transactions with a conversion rate of $p_{old}$ under the null. Store these $n_{old}$ 1's and 0's in **old_page_converted**.
# simulate n_old transactions with a convert rate of p_old under the null old_page_converted = np.random.choice([0, 1], n_old, p = [1-p_old, p_old])
_____no_output_____
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
g. Find $p_{new}$ - $p_{old}$ for your simulated values from part (e) and (f).
# difference between the simulated conversion rates (p_new - p_old) obs_diff = new_page_converted.mean() - old_page_converted.mean() obs_diff
_____no_output_____
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
h. Create 10,000 $p_{new}$ - $p_{old}$ values using the same simulation process you used in parts (a) through (g) above. Store all 10,000 values in a NumPy array called **p_diffs**.
# Create sampling distribution for difference in p_new-p_old simulated values # with boostrapping p_diffs = [] for i in range(10000): # 1st parameter dictates the choices you want. In this case [1, 0] p_new1 = np.random.choice([1, 0],n_new,replace = True,p = [p_new, 1-p_new]) p_old1 = np.random.choice([1, 0],n_old,replace = True,p = [p_old, 1-p_old]) p_new2 = p_new1.mean() p_old2 = p_old1.mean() p_diffs.append(p_new2-p_old2) #_p_diffs = np.array(_p_diffs)
_____no_output_____
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
i. Plot a histogram of the **p_diffs**. Does this plot look like what you expected? Use the matching problem in the classroom to assure you fully understand what was computed here.
p_diffs=np.array(p_diffs) #histogram of p_diff plt.hist(p_diffs) plt.title('Graph of p_diffs')#title of graphs plt.xlabel('Page difference') # x-label of graphs plt.ylabel('Count') # y-label of graphs
_____no_output_____
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
j. What proportion of the **p_diffs** are greater than the actual difference observed in **ab_data.csv**?
#histogram of p_diff plt.hist(p_diffs); plt.title('Graph of p_diffs') #title of graphs plt.xlabel('Page difference') # x-label of graphs plt.ylabel('Count') # y-label of graphs plt.axvline(x= obs_diff, color='r');
_____no_output_____
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
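Question j. asks for the proportion itself, while the cell above only draws the reference line; note also that `obs_diff` above is a single simulated draw from part g., not the difference observed in the data. A minimal sketch of the computation, assuming the `control_group` and `treatment_group` rates from Part I are still in memory:

# Observed difference in conversion rates from the data
actual_diff = treatment_group - control_group
# Simulated p-value: share of null differences greater than the observed difference
(p_diffs > actual_diff).mean()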
k. Please explain using the vocabulary you've learned in this course what you just computed in part **j.** What is this value called in scientific studies? What does this value mean in terms of whether or not there is a difference between the new and old pages? 89.57% is the proportion of the p_diffs that are greater than the actual difference observed in ab_data.csv. In scientific studies this value is called the p-value. Because it is far above the 0.05 significance level, we cannot reject the null hypothesis: there is not sufficient evidence that the new_page has a higher conversion rate than the old_page. l. We could also use a built-in to achieve similar results. Though using the built-in might be easier to code, the above portions are a walkthrough of the ideas that are critical to correctly thinking about statistical significance. Fill in the below to calculate the number of conversions for each page, as well as the number of individuals who received each page. Let `n_old` and `n_new` refer to the number of rows associated with the old page and new pages, respectively.
import statsmodels.api as sm convert_old = len(df2.query('converted==1 and landing_page=="old_page"')) #rows converted with old_page convert_new = len(df2.query('converted==1 and landing_page=="new_page"')) #rows converted with new_page n_old = len(df2.query('landing_page=="old_page"')) #rows_associated with old_page n_new = len(df2.query('landing_page=="new_page"')) #rows associated with new_page n_new
_____no_output_____
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
m. Now use `stats.proportions_ztest` to compute your test statistic and p-value. [Here](https://docs.w3cub.com/statsmodels/generated/statsmodels.stats.proportion.proportions_ztest/) is a helpful link on using the built in.
#Computing z_score and p_value z_score, p_value = sm.stats.proportions_ztest([convert_old,convert_new], [n_old, n_new],alternative='smaller') #display z_score and p_value print(z_score,p_value)
1.31160753391 0.905173705141
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
n. What do the z-score and p-value you computed in the previous question mean for the conversion rates of the old and new pages? Do they agree with the findings in parts **j.** and **k.**?
from scipy.stats import norm norm.cdf(z_score) #how significant our z_score is norm.ppf(1-(0.05)) #critical value of 95% confidence
_____no_output_____
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
The z-score and the p-value mean that one does not reject the null: the null here is that the converted rate of the old_page is the same as or greater than the converted rate of the new_page. The p-value is 0.91 and is higher than the 0.05 significance level, which means we cannot be confident at a 95% confidence level that the converted rate of the new_page is larger than that of the old_page. These results agree with the findings in parts j. and k. Part III - A regression approach `1.` In this final part, you will see that the result you achieved in the A/B test in Part II above can also be achieved by performing regression. a. Since each row is either a conversion or no conversion, what type of regression should you be performing in this case? The dependent variable is a binary variable (converted vs not converted). Thus, you need to use a logistic regression. b. The goal is to use **statsmodels** to fit the regression model you specified in part **a.** to see if there is a significant difference in conversion based on which page a customer receives. However, you first need to create in df2 a column for the intercept, and create a dummy variable column for which page each user received. Add an **intercept** column, as well as an **ab_page** column, which is 1 when an individual receives the **treatment** and 0 if **control**.
#adding an intercept column df2['intercept'] = 1 #Create dummy variable column df2['ab_page'] = pd.get_dummies(df2['group'])['treatment'] df2.head()
_____no_output_____
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
c. Use **statsmodels** to instantiate your regression model on the two columns you created in part b., then fit the model using the two columns you created in part **b.** to predict whether or not an individual converts.
import statsmodels.api as sm model=sm.Logit(df2['converted'],df2[['intercept','ab_page']]) results=model.fit()
Optimization terminated successfully. Current function value: 0.366118 Iterations 6
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
d. Provide the summary of your model below, and use it as necessary to answer the following questions.
results.summary()
_____no_output_____
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
e. What is the p-value associated with **ab_page**? Why does it differ from the value you found in **Part II**? **Hint**: What are the null and alternative hypotheses associated with your regression model, and how do they compare to the null and alternative hypotheses in **Part II**? The p-value associated with ab_page is 0.19. It is higher than 0.05, so the coefficient is not statistically significant. The alternative hypothesis in Part II was that the conversion rate of the old_page is less than the conversion rate of the new_page, which assumes a one-tailed test. In Part III, the alternative hypothesis is that the landing_page type influences (positively or negatively) the conversion rate, i.e. that the conversion rate of the old_page is different from the conversion rate of the new_page, which assumes a two-tailed test. In both cases, the results do not support the alternative hypothesis. The p-value here differs from the 0.91 found in Part II mainly because the regression reports a two-tailed p-value while Part II used a one-tailed test (a quick check follows below). f. Now, you are considering other things that might influence whether or not an individual converts. Discuss why it is a good idea to consider other factors to add into your regression model. Are there any disadvantages to adding additional terms into your regression model? It is a good idea to consider other factors in order to identify other potential influences on the conversion rate. Disadvantages are that the model gets more complex and that correlated predictors can make individual coefficients harder to interpret. g. Now along with testing if the conversion rate changes for different pages, also add an effect based on which country a user lives in. You will need to read in the **countries.csv** dataset and merge together your datasets on the appropriate rows. [Here](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.join.html) are the docs for joining tables. Does it appear that country had an impact on conversion? Don't forget to create dummy variables for these country columns - **Hint: You will need two columns for the three dummy variables.** Provide the statistical output as well as a written response to answer this question.
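As a rough consistency check on the one-tailed versus two-tailed point above, treating the regression's test as two-tailed and the Part II test as one-tailed:

$$p_{\text{two-tailed}} \approx 2 \times (1 - 0.905) \approx 0.19$$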
# Store Countries.csv data in dataframe countries = pd.read_csv('countries.csv') countries.head() # Inner join the two datasets new = countries.set_index('user_id').join(df2.set_index('user_id'), how = 'inner') new.head() # adding dummy variables with 'CA' as the baseline new[['US', 'UK']] = pd.get_dummies(new['country'])[['US', "UK"]] new.head() new['US_ab_page'] = new['US']*new['ab_page'] new.head() new['UK_ab_page'] = new['UK']*new['ab_page'] new.head() new['intercept'] = 1 logit3 = sm.Logit(new['converted'], new[['intercept', 'ab_page', 'US', 'UK', 'US_ab_page', 'UK_ab_page']]) logit3 # Fit the model results = logit3.fit() # Check the result results.summary()
_____no_output_____
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
h. Though you have now looked at the individual factors of country and page on conversion, we would now like to look at an interaction between page and country to see if there are significant effects on conversion. Create the necessary additional columns, and fit the new model. Provide the summary results, and your conclusions based on the results. **Conclusions:** None of the variables have significant p-values. Therefore, we fail to reject the null and conclude that there is not sufficient evidence to suggest that the interaction between country and page received predicts whether a user converts or not. In the larger picture, based on the available information, we do not have sufficient evidence to suggest that the new page results in more conversions than the old page.
from subprocess import call call(['python', '-m', 'nbconvert', 'Analyze_ab_test_results_notebook.ipynb'])
_____no_output_____
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
Finishing Up> Congratulations! You have reached the end of the A/B Test Results project! You should be very proud of all you have accomplished!> **Tip**: Once you are satisfied with your work here, check over your report to make sure that it is satisfies all the areas of the rubric (found on the project submission page at the end of the lesson). You should also probably remove all of the "Tips" like this one so that the presentation is as polished as possible. Directions to Submit> Before you submit your project, you need to create a .html or .pdf version of this notebook in the workspace here. To do that, run the code cell below. If it worked correctly, you should get a return code of 0, and you should see the generated .html file in the workspace directory (click on the orange Jupyter icon in the upper left).> Alternatively, you can download this report as .html via the **File** > **Download as** submenu, and then manually upload it into the workspace directory by clicking on the orange Jupyter icon in the upper left, then using the Upload button.> Once you've done this, you can submit your project by clicking on the "Submit Project" button in the lower right here. This will create and submit a zip file with this .ipynb doc and the .html or .pdf version you created. Congratulations!
from subprocess import call call(['python', '-m', 'nbconvert', 'Analyze_ab_test_results_notebook.ipynb'])
_____no_output_____
FTL
Analyze-A-B-Results-masterAnalyze_ab_test_results_notebook.ipynb
DishaMukherjee/Analyze-A-B-Results
Chapter 2: Differentiation and Integration 2.1 Functions
# Declare the required libraries %matplotlib inline import numpy as np import matplotlib.pyplot as plt # For PDF output from IPython.display import set_matplotlib_formats set_matplotlib_formats('png', 'pdf') def f(x): return x**2 + 1 f(1) f(2)
_____no_output_____
Apache-2.0
notebooks/ch02-diff.ipynb
evilboy1973/math_dl_book_info
Figure 2-2: Plot of the points (x, f(x)) and the graph of y = f(x)
x = np.linspace(-3, 3, 601) y = f(x) x1 = np.linspace(-3, 3, 7) y1 = f(x1) plt.figure(figsize=(6,6)) plt.ylim(-2,10) plt.plot([-3,3],[0,0],c='k') plt.plot([0,0],[-2,10],c='k') plt.scatter(x1,y1,c='k',s=50) plt.grid() plt.xlabel('x',fontsize=14) plt.ylabel('y',fontsize=14) plt.show() x2 = np.linspace(-3, 3, 31) y2 = f(x2) plt.figure(figsize=(6,6)) plt.ylim(-2,10) plt.plot([-3,3],[0,0],c='k') plt.plot([0,0],[-2,10],c='k') plt.scatter(x2,y2,c='k',s=50) plt.grid() plt.xlabel('x',fontsize=14) plt.ylabel('y',fontsize=14) plt.show() plt.figure(figsize=(6,6)) plt.plot(x,y,c='k') plt.ylim(-2,10) plt.plot([-3,3],[0,0],c='k') plt.plot([0,0],[-2,10],c='k') plt.scatter([1,2],[2,5],c='k',s=50) plt.grid() plt.xlabel('x',fontsize=14) plt.ylabel('y',fontsize=14) plt.show()
_____no_output_____
Apache-2.0
notebooks/ch02-diff.ipynb
evilboy1973/math_dl_book_info
2.2 Composite Functions and Inverse Functions Figure 2-6: Graph of an inverse function
def f(x): return(x**2 + 1) def g(x): return(np.sqrt(x - 1)) xx1 = np.linspace(0.0, 4.0, 200) xx2 = np.linspace(1.0, 4.0, 200) yy1 = f(xx1) yy2 = g(xx2) plt.figure(figsize=(6,6)) plt.xlabel('$x$',fontsize=14) plt.ylabel('$y$',fontsize=14) plt.ylim(-2.0, 4.0) plt.xlim(-2.0, 4.0) plt.grid() plt.plot(xx1,yy1, linestyle='-', c='k', label='$y=x^2+1$') plt.plot(xx2,yy2, linestyle='-.', c='k', label='$y=\sqrt{x-1}$') plt.plot([-2,4],[-2,4], color='black') plt.plot([-2,4],[0,0], color='black') plt.plot([0,0],[-2,4],color='black') plt.legend(fontsize=14) plt.show()
_____no_output_____
Apache-2.0
notebooks/ch02-diff.ipynb
evilboy1973/math_dl_book_info
2.3 Differentiation and Limits Figure 2-7: How the graph of a function looks as it is magnified
from matplotlib import pyplot as plt import numpy as np def f(x): return(x**3 - x) delta = 2.0 x = np.linspace(0.5-delta, 0.5+delta, 200) y = f(x) fig = plt.figure(figsize=(6,6)) plt.ylim(-3.0/8.0-delta, -3.0/8.0+delta) plt.xlim(0.5-delta, 0.5+delta) plt.plot(x, y, 'b-', lw=1, c='k') plt.scatter([0.5], [-3.0/8.0]) plt.xlabel('x',fontsize=14) plt.ylabel('y',fontsize=14) plt.grid() plt.title('delta = %.4f' % delta, fontsize=14) plt.show() delta = 0.2 x = np.linspace(0.5-delta, 0.5+delta, 200) y = f(x) fig = plt.figure(figsize=(6,6)) plt.ylim(-3.0/8.0-delta, -3.0/8.0+delta) plt.xlim(0.5-delta, 0.5+delta) plt.plot(x, y, 'b-', lw=1, c='k') plt.scatter([0.5], [-3.0/8.0]) plt.xlabel('x',fontsize=14) plt.ylabel('y',fontsize=14) plt.grid() plt.title('delta = %.4f' % delta, fontsize=14) plt.show() delta = 0.01 x = np.linspace(0.5-delta, 0.5+delta, 200) y = f(x) fig = plt.figure(figsize=(6,6)) plt.ylim(-3.0/8.0-delta, -3.0/8.0+delta) plt.xlim(0.5-delta, 0.5+delta) plt.plot(x, y, 'b-', lw=1, c='k') plt.scatter(0.5, -3.0/8.0) plt.xlabel('x',fontsize=14) plt.ylabel('y',fontsize=14) plt.grid() plt.title('delta = %.4f' % delta, fontsize=14) plt.show()
_____no_output_____
Apache-2.0
notebooks/ch02-diff.ipynb
evilboy1973/math_dl_book_info
Figure 2-8: Slope of the straight line connecting two points on the graph of a function
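In symbols, the slope drawn in this figure and the derivative it approaches as the two points come together are:

$$\text{slope} = \frac{f(x_2) - f(x_1)}{x_2 - x_1}, \qquad f'(x_1) = \lim_{x_2 \to x_1} \frac{f(x_2) - f(x_1)}{x_2 - x_1}$$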
delta = 2.0 x = np.linspace(0.5-delta, 0.5+delta, 200) x1 = 0.6 x2 = 1.0 y = f(x) fig = plt.figure(figsize=(6,6)) plt.ylim(-1, 0.5) plt.xlim(0, 1.5) plt.plot(x, y, 'b-', lw=1, c='k') plt.scatter([x1, x2], [f(x1), f(x2)], c='k', lw=1) plt.plot([x1, x2], [f(x1), f(x2)], c='k', lw=1) plt.plot([x1, x2, x2], [f(x1), f(x1), f(x2)], c='k', lw=1) plt.tick_params(labelbottom=False, labelleft=False, labelright=False, labeltop=False) plt.tick_params(color='white') plt.show()
_____no_output_____
Apache-2.0
notebooks/ch02-diff.ipynb
evilboy1973/math_dl_book_info
Figure 2-10: Equation of the tangent line
def f(x): return(x**2 - 4*x) def g(x): return(-2*x -1) x = np.linspace(-2, 6, 500) fig = plt.figure(figsize=(6,6)) plt.scatter([1],[-3],c='k') plt.plot(x, f(x), 'b-', lw=1, c='k') plt.plot(x, g(x), 'b-', lw=1, c='b') plt.plot([x.min(), x.max()], [0, 0], lw=2, c='k') plt.plot([0, 0], [g(x).min(), f(x).max()], lw=2, c='k') plt.grid(lw=2) plt.tick_params(labelbottom=False, labelleft=False, labelright=False, labeltop=False) plt.tick_params(color='white') plt.xlabel('X') plt.show()
_____no_output_____
Apache-2.0
notebooks/ch02-diff.ipynb
evilboy1973/math_dl_book_info
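The tangent line plotted in the code cell above can be checked by hand. For $f(x) = x^2 - 4x$ at the marked point $(1, -3)$:

$$f'(x) = 2x - 4, \quad f'(1) = -2, \quad y = f(1) + f'(1)(x - 1) = -3 - 2(x - 1) = -2x - 1,$$

which is exactly the line $g(x) = -2x - 1$ used in the code.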
2.4 Local Maxima and Minima Figure 2-11: Graph of $y = x^3 - 3x$ and its local maximum and minimum
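The local maximum and minimum shown in Figure 2-11 can be located by setting the derivative to zero:

$$y' = 3x^2 - 3 = 3(x - 1)(x + 1) = 0 \;\Rightarrow\; x = \pm 1,$$

giving a local maximum at $(-1, 2)$ and a local minimum at $(1, -2)$, which matches the graph below.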
def f1(x): return(x**3 - 3*x) x = np.linspace(-3, 3, 500) y = f1(x) fig = plt.figure(figsize=(6,6)) plt.ylim(-4, 4) plt.xlim(-3, 3) plt.plot(x, y, 'b-', lw=1, c='k') plt.plot([0,0],[-4,4],c='k') plt.plot([-3,3],[0,0],c='k') plt.grid() plt.show()
_____no_output_____
Apache-2.0
notebooks/ch02-diff.ipynb
evilboy1973/math_dl_book_info
Figure 2-12: An example that is neither a local maximum nor a local minimum (the graph of $y = x^3$)
def f2(x): return(x**3) x = np.linspace(-3, 3, 500) y = f2(x) fig = plt.figure(figsize=(6,6)) plt.ylim(-4, 4) plt.xlim(-3, 3) plt.plot(x, y, 'b-', lw=1, c='k') plt.plot([0,0],[-4,4],c='k') plt.plot([-3,3],[0,0],c='k') plt.grid() plt.show()
_____no_output_____
Apache-2.0
notebooks/ch02-diff.ipynb
evilboy1973/math_dl_book_info
2.7 Differentiating Composite Functions Figure 2-14: Derivative of an inverse function
# Derivative of the inverse function def f(x): return(x**2 + 1) def g(x): return(np.sqrt(x - 1)) xx1 = np.linspace(0.0, 4.0, 200) xx2 = np.linspace(1.0, 4.0, 200) yy1 = f(xx1) yy2 = g(xx2) plt.figure(figsize=(6,6)) plt.xlabel('$x$',fontsize=14) plt.ylabel('$y$',fontsize=14) plt.ylim(-2.0, 4.0) plt.xlim(-2.0, 4.0) plt.grid() plt.plot(xx1,yy1, linestyle='-', color='blue') plt.plot(xx2,yy2, linestyle='-', color='blue') plt.plot([-2,4],[-2,4], color='black') plt.plot([-2,4],[0,0], color='black') plt.plot([0,0],[-2,4],color='black') plt.show()
_____no_output_____
Apache-2.0
notebooks/ch02-diff.ipynb
evilboy1973/math_dl_book_info
2.9 Integration Figure 2-15: Relationship between the area function S(x) and f(x)
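The relationship illustrated in Figure 2-15 is the fundamental theorem of calculus: if $S(x)$ denotes the area under $y = f(t)$ from some fixed lower limit $a$ up to $t = x$, then

$$S(x) = \int_{a}^{x} f(t)\,dt, \qquad S'(x) = f(x).$$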
def f(x) : return x**2 + 1 xx = np.linspace(-4.0, 4.0, 200) yy = f(xx) plt.figure(figsize=(6,6)) plt.xlim(-2,2) plt.ylim(-1,4) plt.plot(xx, yy) plt.plot([-2,2],[0,0],c='k',lw=1) plt.plot([0,0],[-1,4],c='k',lw=1) plt.plot([0,0],[0,f(0)],c='b') plt.plot([1,1],[0,f(1)],c='b') plt.plot([1.5,1.5],[0,f(1.5)],c='b') plt.plot([1,1.5],[f(1),f(1)],c='b') plt.tick_params(labelbottom=False, labelleft=False, labelright=False, labeltop=False) plt.tick_params(color='white') plt.show()
_____no_output_____
Apache-2.0
notebooks/ch02-diff.ipynb
evilboy1973/math_dl_book_info
Figure 2-16: Area under a graph and the definite integral
plt.figure(figsize=(6,6)) plt.xlim(-2,2) plt.ylim(-1,4) plt.plot(xx, yy) plt.plot([-2,2],[0,0],c='k',lw=1) plt.plot([0,0],[-1,4],c='k',lw=1) plt.plot([0,0],[0,f(0)],c='b') plt.plot([1,1],[0,f(1)],c='b') plt.plot([1.5,1.5],[0,f(1.5)],c='b') plt.tick_params(labelbottom=False, labelleft=False, labelright=False, labeltop=False) plt.tick_params(color='white') plt.show()
_____no_output_____
Apache-2.0
notebooks/ch02-diff.ipynb
evilboy1973/math_dl_book_info
Figure 2-17: Relationship between integration and area
def f(x) : return x**2 + 1 x = np.linspace(-1.0, 2.0, 200) y = f(x) N = 10 xx = np.linspace(0.5, 1.5, N+1) yy = f(xx) print(xx) plt.figure(figsize=(6,6)) plt.xlim(-1,2) plt.ylim(-1,4) plt.plot(x, y) plt.plot([-1,2],[0,0],c='k',lw=2) plt.plot([0,0],[-1,4],c='k',lw=2) plt.plot([0.5,0.5],[0,f(0.5)],c='b') plt.plot([1.5,1.5],[0,f(1.5)],c='b') plt.bar(xx[:-1], yy[:-1], align='edge', width=1/N*0.9) plt.tick_params(labelbottom=False, labelleft=False, labelright=False, labeltop=False) plt.tick_params(color='white') plt.grid() plt.show()
_____no_output_____
Apache-2.0
notebooks/ch02-diff.ipynb
evilboy1973/math_dl_book_info
VacationPy---- Note* Keep an eye on your API usage. Use https://developers.google.com/maps/reporting/gmp-reporting as reference for how to monitor your usage and billing.* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
# Dependencies and Setup import matplotlib.pyplot as plt import pandas as pd import numpy as np import requests import gmaps import os # Import API key from api_keys import g_key # Configure gmaps gmaps.configure(api_key=g_key)
_____no_output_____
ADSL
VacationPy/VacationPy.ipynb
kdturner83/PythonAPI_Challenge
Store Part I results into DataFrame* Load the csv exported in Part I to a DataFrame
# Create vacation dataframe #clean_city_data_df.to_csv('../Resources/city_output.csv') vacation_df = pd.read_csv('../Resources/city_output.csv') #vacation_df = vacation_df.drop(columns="Unnamed: 0") vacation_df.head()
_____no_output_____
ADSL
VacationPy/VacationPy.ipynb
kdturner83/PythonAPI_Challenge
Humidity Heatmap* Configure gmaps.* Use the Lat and Lng as locations and Humidity as the weight.* Add Heatmap layer to map.
# Store latitude and longitude in locations locations = vacation_df[["lat", "long"]] weights = vacation_df["humidity"].astype(float) fig = gmaps.figure() # Create heat layer and add it to the figure heat_layer = gmaps.heatmap_layer(locations, weights=weights, dissipating=False, max_intensity=10, point_radius=300) fig.add_layer(heat_layer) fig
_____no_output_____
ADSL
VacationPy/VacationPy.ipynb
kdturner83/PythonAPI_Challenge
Create new DataFrame fitting weather criteria* Narrow down the cities to fit weather conditions.* Drop any rows with null values.
# Weather criteria: max temp between 70 and 80, cloudiness of 0, wind speed below 10 city_weather_df = vacation_df.copy() city_weather_df.dropna(inplace = True) city_weather_df
_____no_output_____
ADSL
VacationPy/VacationPy.ipynb
kdturner83/PythonAPI_Challenge
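The cell above only copies the data and drops rows with missing values; the actual narrowing to the criteria hinted at in its comment (max temp between 70 and 80, cloudiness of 0, wind speed below 10) is still missing. A sketch of that step is below; the column names `max temp`, `cloudiness` and `wind speed` are assumptions about the Part I CSV, not confirmed by this notebook:

# Keep only the cities matching the ideal-weather criteria (column names assumed)
ideal_df = city_weather_df[
    (city_weather_df["max temp"] > 70) &
    (city_weather_df["max temp"] < 80) &
    (city_weather_df["cloudiness"] == 0) &
    (city_weather_df["wind speed"] < 10)
]
ideal_df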