| column | dtype | range |
|---|---|---|
| hexsha | stringlengths | 40–40 |
| size | int64 | 6–14.9M |
| ext | stringclasses | 1 value |
| lang | stringclasses | 1 value |
| max_stars_repo_path | stringlengths | 6–260 |
| max_stars_repo_name | stringlengths | 6–119 |
| max_stars_repo_head_hexsha | stringlengths | 40–41 |
| max_stars_repo_licenses | list | n/a |
| max_stars_count | int64 | 1–191k |
| max_stars_repo_stars_event_min_datetime | stringlengths | 24–24 |
| max_stars_repo_stars_event_max_datetime | stringlengths | 24–24 |
| max_issues_repo_path | stringlengths | 6–260 |
| max_issues_repo_name | stringlengths | 6–119 |
| max_issues_repo_head_hexsha | stringlengths | 40–41 |
| max_issues_repo_licenses | list | n/a |
| max_issues_count | int64 | 1–67k |
| max_issues_repo_issues_event_min_datetime | stringlengths | 24–24 |
| max_issues_repo_issues_event_max_datetime | stringlengths | 24–24 |
| max_forks_repo_path | stringlengths | 6–260 |
| max_forks_repo_name | stringlengths | 6–119 |
| max_forks_repo_head_hexsha | stringlengths | 40–41 |
| max_forks_repo_licenses | list | n/a |
| max_forks_count | int64 | 1–105k |
| max_forks_repo_forks_event_min_datetime | stringlengths | 24–24 |
| max_forks_repo_forks_event_max_datetime | stringlengths | 24–24 |
| avg_line_length | float64 | 2–1.04M |
| max_line_length | int64 | 2–11.2M |
| alphanum_fraction | float64 | 0–1 |
| cells | list | n/a |
| cell_types | list | n/a |
| cell_type_groups | list | n/a |
4a5e927927a22754c7f1fa8d5e9cb52932201981
236,931
ipynb
Jupyter Notebook
Date_Cleaning.ipynb
peterwei425/Interdisciplinary-Health-Data-Competition
0ae1707d70abeb1f2c484ba345129e0971a3994b
[ "MIT" ]
null
null
null
Date_Cleaning.ipynb
peterwei425/Interdisciplinary-Health-Data-Competition
0ae1707d70abeb1f2c484ba345129e0971a3994b
[ "MIT" ]
null
null
null
Date_Cleaning.ipynb
peterwei425/Interdisciplinary-Health-Data-Competition
0ae1707d70abeb1f2c484ba345129e0971a3994b
[ "MIT" ]
null
null
null
43.393956
336
0.334853
[ [ [ "# Interdisciplinary Health Data Competition - Data Cleaning", "_____no_output_____" ], [ "## Import necessary libraries", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nimport warnings", "_____no_output_____" ] ], [ [ "## Agenda", "_____no_output_____" ], [ "Step 1 - Read in Data Files\n- Read in drug and prescription files\n- Inspect their initial format\n- Inspect their initial data types\n- Inspect data distribution\n\nStep 2 - Check for Nulls\n- Check for nulls by count\n- Check for nulls by percentage\n- Replace any known nulls\n- Drop columns with ~20% or more missing values\n- Drop rows with ~10% or less missing values\n\nStep 3 - Try to Impute Nulls in Percent Change\n- analyze head of dataframe\n- analyze the cost per script rank\n- remove original percent change features\n\nStep 4 - Properly Join the Two Files\n- join prescription 2012 to prescription 2016\n- create new percent change features\n- fill in calculations for the percent change columns in prescription\n- join drug 2012 to drug 2016\n\nStep 5 - Create Lists Where Applicable\n\nStep 6 - Write cleaned data to new files", "_____no_output_____" ], [ "## Step 1 - Read in Data Files", "_____no_output_____" ] ], [ [ "# read in the first file and inspect\ndrug = pd.read_excel(\"DrugDetailMerged.xlsx\")\ndrug.head()", "_____no_output_____" ], [ "# inspect data types\ndrug.dtypes", "_____no_output_____" ], [ "# Inspect data distribution\ndrug.describe()", "_____no_output_____" ], [ "# read in the first file and inspect\nprescription = pd.read_excel(\"SummaryMerged.xlsx\")\nprescription.head()", "_____no_output_____" ], [ "# inspect data types\nprescription.dtypes", "_____no_output_____" ], [ "# Inspect data distribution\nprescription.describe()", "_____no_output_____" ] ], [ [ "## Step 2 - Check for Nulls", "_____no_output_____" ] ], [ [ "# count of nulls\ndrug.isnull().sum()", "_____no_output_____" ], [ "# Percentage of null values for easier analysis\ndrug.isnull().sum()* 100 / len(drug)", "_____no_output_____" ], [ "# count of nulls\nprescription.isnull().sum()", "_____no_output_____" ], [ "# Percentage of null values for easier analysis\nprescription.isnull().sum()* 100 / len(prescription)", "_____no_output_____" ], [ "# Replace any nulls with known values\n# 'PROPNAME' NA means generic\ndrug['PROPNAME'] = drug['PROPNAME'].fillna('GENERIC')\ndrug.isnull().sum()* 100 / len(drug)", "_____no_output_____" ], [ "# Drop columns with ~20% of more missing values\n#prescription = prescription.drop(['PCT_SCRIPTS_0_18','PCT_SCRIPTS_19_44', 'PCT_SCRIPTS_45_64',\n# 'PCT_SCRIPTS_65_PLUS', 'PCT_URBAN_CORE', 'PCT_SUBURBAN',\n# 'PCT_MICROPOLITAN', 'PCT_RURAL_SMALLTOWN'], axis = 1)", "_____no_output_____" ], [ "# Confirm columns were dropped\nprescription.isnull().sum()* 100 / len(prescription)", "_____no_output_____" ], [ "# Drop rows with ~10% or less missing values and confirm rows were dropped\n#prescription = prescription.dropna(axis=0, subset=['PCT_SCRIPTS_FEMALE', 'PCT_SCRIPTS_MALE'])\n#prescription.isnull().sum()* 100 / len(prescription)", "_____no_output_____" ], [ "# Drop rows with ~10% or less missing values and confirm rows were dropped\ndrug = drug.dropna(axis=0, subset=['DOSAGE_FORM', 'ACTIVE_STRENGTH', 'ACTIVE_STRENGTH_UNIT', 'LABELERNAME', 'LAUNCH_YEAR', 'PRODUCT_NDC'])\ndrug.isnull().sum()* 100 / len(drug)", "_____no_output_____" ] ], [ [ "## Step 3 - Try to Impute Nulls in Percent Change", "_____no_output_____" ] ], [ [ "# analyze head of dataframe\nprescription.head()", "_____no_output_____" 
], [ "# analyze cost per script rank\n\nscriptRank = prescription[['YEAR','COST_PER_USER_RANK', 'COST_PER_SCRIPT_RANK', 'COST_PER_DAYS_SUPPLY_RANK',\n 'COST_PER_UNIT_DISPENSED_RANK', 'TOTAL_SCRIPTS_FILLED_RANK',\n 'PCT_CHANGE_COST_PER_SCRIPT_RANK']]\nscriptRank.head()", "_____no_output_____" ] ], [ [ "Appears the 'PCT_CHANGE_COST_PER_SCRIPT_RANK' and 'PCT_CHANGE_COST_PER_SCRIPT' are by year. Since I do not have the prior years (2011 and 2015), I cannot calculate the missing values.\n\nInstead I will make a new feature to compare the growth between 2012 and 2016. These will replace the 'PCT_CHANGE_COST_PER_SCRIPT_RANK' and 'PCT_CHANGE_COST_PER_SCRIPT' features. If the year is 2012, the percent change will be 0. If the year is 2016 I will calculate the percent change as follows:\n\n$PCT\\_CHANGE\\_COST\\_PER\\_SCRIPT\\_RANK = \\frac{(COST\\_PER\\_SCRIPT\\_RANK\\_2016 - COST\\_PER\\_SCRIPT\\_RANK\\_2012)}{COST\\_PER\\_SCRIPT\\_RANK\\_2012} \\times 100\\%$\n\n$PCT\\_CHANGE\\_COST\\_PER\\_SCRIPT = \\frac{(COST\\_PER\\_SCRIPT\\_2016 - COST\\_PER\\_SCRIPT\\_2012)}{COST\\_PER\\_SCRIPT\\_2012} \\times 100\\%$\n\nThese formulas will be applied later.", "_____no_output_____" ] ], [ [ "# remove original percent change features and confirm removal\nprescription = prescription.drop(['PCT_CHANGE_COST_PER_SCRIPT_RANK','PCT_CHANGE_COST_PER_SCRIPT'], axis = 1)\nprescription.isnull().sum()* 100 / len(prescription)", "_____no_output_____" ] ], [ [ "## Step 4 - Properly Join the Two Files", "_____no_output_____" ] ], [ [ "# divide the prescription file into their two years\nprescription_2012 = prescription.loc[prescription['YEAR'] == 2012]\nprescription_2012 = prescription_2012.add_suffix('_2012')\nprescription_2016 = prescription.loc[prescription['YEAR'] == 2016]\nprescription_2016 = prescription_2016.add_suffix('_2016')", "_____no_output_____" ], [ "# join prescription file on ['RECORD_TYPE','NPROPNAME', 'THER_CLASS','PAYER']\nprescription_merged = prescription_2012.merge(prescription_2016, how = \"outer\", left_on = ['RECORD_TYPE_2012','NPROPNAME_2012', 'THER_CLASS_2012','PAYER_2012'],\n right_on = ['RECORD_TYPE_2016','NPROPNAME_2016', 'THER_CLASS_2016','PAYER_2016'])\nprescription_merged.head()", "_____no_output_____" ], [ "# lost a significant portion of data with this merge (lost almost 40%)\n# additionally, multiple NaN were introduced, but there is a business interpretation to this\n# if the NaN is in 2012, then a new drug could have been created and then prescribed in 2016\n# if the NaN is in 2016, then a drug is no longer being prescribed that was once available\nprescription_merged.shape", "_____no_output_____" ] ], [ [ "Upon inspection, it is possible to join the two prescription files, but it will take a lot of work. For example in the 'NPROPNAME' an item has been listed two different ways: CALCIUM PANTOTHEN and CALCIUM P. I believe these are the same but will need outsider information to confirm. 
Until then these tables will not be joined.\n\nConfirmed this fact on my own using https://www.drugbank.ca/salts/DBSALT000034 and https://www.drugs.com/international/calcium-p.html\n\nWould love if someone else could confirm", "_____no_output_____" ] ], [ [ "# create new percent change columns filled with zeros and check\nprescription_merged['PCT_CHANGE_COST_PER_SCRIPT_RANK'] = (prescription_merged['COST_PER_SCRIPT_RANK_2016'] - prescription_merged['COST_PER_SCRIPT_RANK_2012'] ) / prescription_merged['COST_PER_SCRIPT_RANK_2012'] * 100\nprescription_merged['PCT_CHANGE_COST_PER_SCRIPT'] = (prescription_merged['COST_PER_SCRIPT_2016'] - prescription_merged['COST_PER_SCRIPT_2012'] ) / prescription_merged['COST_PER_SCRIPT_2012'] * 100\nprescription_merged.head()", "_____no_output_____" ], [ "# divide the drug file into their two years\ndrug_2012 = drug.loc[drug['YEAR'] == 2012]\ndrug_2012 = drug_2012.add_suffix('_2012')\ndrug_2016 = drug.loc[drug['YEAR'] == 2016]\ndrug_2016 = drug_2016.add_suffix('_2016')", "_____no_output_____" ], [ "# join drug file on ['NDC9','PRODUCT_NDC']\ndrug_merged = drug_2012.merge(drug_2016, how = \"outer\", left_on = ['NDC9_2012','PRODUCT_NDC_2012', 'RECORD_TYPE_2012','NPROPNAME_2012', 'THER_CLASS_2012','PAYER_2012'],\n right_on = ['NDC9_2016','PRODUCT_NDC_2016', 'RECORD_TYPE_2016','NPROPNAME_2016', 'THER_CLASS_2016','PAYER_2016'])\ndrug_merged.head()", "_____no_output_____" ], [ "# lost a significant portion of data with this merge (lost almost 25%)\n# additionally, multiple NaN were introduced, but there is a business interpretation to this\n# if the NaN is in 2012, then a new drug could have been created and then prescribed in 2016\n# if the NaN is in 2016, then a drug is no longer being prescribed that was once available\ndrug_merged.shape", "_____no_output_____" ] ], [ [ "## Step 5 - Convert to lists where applicable", "_____no_output_____" ] ], [ [ "# NPROPNAME in drug and prescription\nprescription_merged['NPROPNAMES_2012'] = prescription_merged['NPROPNAME_2012'].str.split(\",\")\ndrug_merged['NPROPNAMES_2012'] = drug_merged['NPROPNAME_2012'].str.split(\",\")\n\nprescription_merged['NPROPNAMES_2016'] = prescription_merged['NPROPNAME_2016'].str.split(\",\")\ndrug_merged['NPROPNAMES_2016'] = drug_merged['NPROPNAME_2016'].str.split(\",\")\n\n# DOSAGE_FORM in drug\ndrug_merged['DOSAGE_FORMS_2012'] = drug_merged['DOSAGE_FORM_2012'].str.split(\",\")\n\ndrug_merged['DOSAGE_FORMS_2016'] = drug_merged['DOSAGE_FORM_2016'].str.split(\",\")\n\n# ACTIVE_STRENGTH in drug\ndrug_merged['ACTIVE_STRENGTHS_2012'] = drug_merged['ACTIVE_STRENGTH_2012'].str.split(\";\")\n\ndrug_merged['ACTIVE_STRENGTHS_2016'] = drug_merged['ACTIVE_STRENGTH_2016'].str.split(\";\")\n\n# ACTIVE_STRENGTH_UNIT in drug\ndrug_merged['ACTIVE_STRENGTH_UNITS_2012'] = drug_merged['ACTIVE_STRENGTH_UNIT_2012'].str.split(\";\")\ndrug_merged['ACTIVE_STRENGTH_UNITS_2016'] = drug_merged['ACTIVE_STRENGTH_UNIT_2016'].str.split(\";\")\n\n# drop original columns\nprescription_merged.drop(['NPROPNAME_2012', 'NPROPNAME_2016'], axis = 1)\ndrug_merged.drop(['NPROPNAME_2012', 'NPROPNAME_2016', 'DOSAGE_FORM_2012', 'DOSAGE_FORM_2016', \n 'ACTIVE_STRENGTH_2012', 'ACTIVE_STRENGTH_2016', 'ACTIVE_STRENGTH_UNIT_2012',\n 'ACTIVE_STRENGTH_UNIT_2016'], axis = 1)", "_____no_output_____" ] ], [ [ "## Step 6 - Write cleaned data to new files", "_____no_output_____" ] ], [ [ "drug_merged.to_excel(\"C:/Users/LMoor/Downloads/Drug_Clean_v2.xlsx\")\nprescription_merged.to_excel(\"C:/Users/LMoor/Downloads/Prescription_Clean_v2.xlsx\")", 
"_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
4a5e985b325a361ef450cbdaeba70ad917f94183
3,267
ipynb
Jupyter Notebook
kernel57378d48ba.ipynb
bhonesh1998/Naive-Bayes-on-iris-
2b8b5c3cd1ef88f90e1b002fcb75d4bc0946dbf0
[ "Apache-2.0" ]
null
null
null
kernel57378d48ba.ipynb
bhonesh1998/Naive-Bayes-on-iris-
2b8b5c3cd1ef88f90e1b002fcb75d4bc0946dbf0
[ "Apache-2.0" ]
null
null
null
kernel57378d48ba.ipynb
bhonesh1998/Naive-Bayes-on-iris-
2b8b5c3cd1ef88f90e1b002fcb75d4bc0946dbf0
[ "Apache-2.0" ]
null
null
null
31.718447
121
0.546985
[ [ [ "# This Python 3 environment comes with many helpful analytics libraries installed\n# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python\n# For example, here's several helpful packages to load in \n\nimport numpy as np # linear algebra\nimport pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)\n\n# Input data files are available in the \"../input/\" directory.\n# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory\n\ndata=pd.read_csv(\"../input/Iris.csv\")\n\nfrom sklearn import metrics\nfrom sklearn.naive_bayes import GaussianNB\n\ndata=data.drop(['Id'],axis=1)\n\nx=data.drop(['Species'],axis=1)\ny=data['Species']\n\nfrom sklearn.model_selection import train_test_split\nx_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.3,random_state=4)\nmodel=GaussianNB()\nmodel.fit(x_train,y_train)\npredicted=model.predict(x_test)\nprint(metrics.accuracy_score(y_test,predicted))\nprint(metrics.classification_report(y_test,predicted))\nprint(metrics.confusion_matrix(y_test,predicted))\nnew_data=[[3,4,5,6],[7,8,9,10],[1,3.4,5.6,7.8],[3,4,5,2],[5,4,2,2],[3, 2, 4, 0.2], [ 4.7, 3, 1.3, 0.2 ]]\nprint(model.predict(new_data))\n\n\n\n", "0.9777777777777777\n precision recall f1-score support\n\n Iris-setosa 1.00 1.00 1.00 21\nIris-versicolor 0.91 1.00 0.95 10\n Iris-virginica 1.00 0.93 0.96 14\n\n accuracy 0.98 45\n macro avg 0.97 0.98 0.97 45\n weighted avg 0.98 0.98 0.98 45\n\n[[21 0 0]\n [ 0 10 0]\n [ 0 1 13]]\n['Iris-virginica' 'Iris-virginica' 'Iris-virginica' 'Iris-virginica'\n 'Iris-virginica' 'Iris-versicolor' 'Iris-setosa']\n" ] ] ]
[ "code" ]
[ [ "code" ] ]
4a5ea7479d413bd9ee93c97a344aacdead07aba5
16,754
ipynb
Jupyter Notebook
T6.ipynb
abphilip-vit/College4
ea85d0a627fcf616d691c2b73e4e69efa0ba86c0
[ "MIT" ]
2
2020-10-16T09:34:45.000Z
2020-11-13T11:35:26.000Z
T6.ipynb
abphilip-vit/College4
ea85d0a627fcf616d691c2b73e4e69efa0ba86c0
[ "MIT" ]
1
2021-04-29T23:48:06.000Z
2021-04-30T00:02:07.000Z
T6.ipynb
allenalvin333/College4
ea85d0a627fcf616d691c2b73e4e69efa0ba86c0
[ "MIT" ]
null
null
null
26.593651
1,484
0.475289
[ [ [ "# Import Libraries", "_____no_output_____" ] ], [ [ "import nltk\nfrom nltk.corpus import stopwords \nfrom nltk.tokenize import word_tokenize, sent_tokenize ", "_____no_output_____" ] ], [ [ "# Sentences", "_____no_output_____" ] ], [ [ "sentence = [(\"the\", \"DT\"), (\"little\", \"JJ\"), (\"yellow\", \"JJ\"),(\"dog\", \"NN\"), (\"barked\", \"VBD\"), (\"at\", \"IN\"), (\"the\", \"DT\"), (\"cat\", \"NN\")]", "_____no_output_____" ], [ "sentence2 = \"Four score and seven years ago our fathers brought forth on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal\"", "_____no_output_____" ], [ "sentence3 = \"Four score and seven years ago our fathers brought forth on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal. Now we are engaged in a great civil war, testing whether that nation, or any nation so conceived and so dedicated, can long endure. We are met on a great battle-field of that war. We have come to dedicate a portion of that field, as a final resting place for those who here gave their lives that that nation might live. It is altogether fitting and proper that we should do this. But, in a larger sense, we can not dedicate—we can not consecrate—we can not hallow—this ground. The brave men, living and dead, who struggled here, have consecrated it, far above our poor power to add or detract. The world will little note, nor long remember what we say here, but it can never forget what they did here. It is for us the living, rather, to be dedicated here to the unfinished work which they who fought here have thus far so nobly advanced. It is rather for us to be here dedicated to the great task remaining before us—that from these honored dead we take increased devotion to that cause for which they gave the last full measure of devotion—that we here highly resolve that these dead shall not have died in vain—that this nation, under God, shall have a new birth of freedom—and that government of the people, by the people, for the people, shall not perish from the earth.\"", "_____no_output_____" ] ], [ [ "# Regex Function ", "_____no_output_____" ] ], [ [ "grammar = \"NP: {<DT>?<JJ>*<NN>}\"\nc = nltk.RegexpParser(grammar)\nresult = c.parse(sentence)\nprint(result)", "(S\n (NP the/DT little/JJ yellow/JJ dog/NN)\n barked/VBD\n at/IN\n (NP the/DT cat/NN))\n" ], [ "result.draw()", "_____no_output_____" ] ], [ [ "# Preprocessing - I", "_____no_output_____" ] ], [ [ "# Sentence read by Keerthivasan S M :D", "_____no_output_____" ], [ "stop_words = set(stopwords.words('english')) ", "_____no_output_____" ], [ "l = list(sentence2.split(\" \")) \nprint(l)", "['Four', 'score', 'and', 'seven', 'years', 'ago', 'our', 'fathers', 'brought', 'forth', 'on', 'this', 'continent,', 'a', 'new', 'nation,', 'conceived', 'in', 'Liberty,', 'and', 'dedicated', 'to', 'the', 'proposition', 'that', 'all', 'men', 'are', 'created', 'equal']\n" ], [ "# Geeks for Geeks\n\ntokenized = sent_tokenize(sentence2) \nfor i in tokenized: \n wordsList = nltk.word_tokenize(i) \n wordsList = [w for w in wordsList if not w in stop_words] \n tagged = nltk.pos_tag(wordsList) ", "_____no_output_____" ] ], [ [ "# Regex Function - I", "_____no_output_____" ] ], [ [ "grammar = \"NP: {<DT>?<JJ>*<NN>}\"\nc = nltk.RegexpParser(grammar)\nresult = c.parse(tagged)\nprint(result)", "(S\n Four/CD\n (NP score/NN)\n seven/CD\n years/NNS\n ago/RB\n fathers/NNS\n brought/VBD\n (NP forth/JJ continent/NN)\n ,/,\n (NP new/JJ 
nation/NN)\n ,/,\n conceived/VBD\n Liberty/NNP\n ,/,\n dedicated/VBN\n (NP proposition/NN)\n men/NNS\n created/VBD\n equal/JJ)\n" ], [ "result.draw()", "_____no_output_____" ] ], [ [ "# Assignment", "_____no_output_____" ] ], [ [ "stop_words = set(stopwords.words('english')) ", "_____no_output_____" ], [ "l = list(sentence3.split(\". \")) \nprint(l)", "['Four score and seven years ago our fathers brought forth on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal', 'Now we are engaged in a great civil war, testing whether that nation, or any nation so conceived and so dedicated, can long endure', 'We are met on a great battle-field of that war', 'We have come to dedicate a portion of that field, as a final resting place for those who here gave their lives that that nation might live', 'It is altogether fitting and proper that we should do this', 'But, in a larger sense, we can not dedicate—we can not consecrate—we can not hallow—this ground', 'The brave men, living and dead, who struggled here, have consecrated it, far above our poor power to add or detract', 'The world will little note, nor long remember what we say here, but it can never forget what they did here', 'It is for us the living, rather, to be dedicated here to the unfinished work which they who fought here have thus far so nobly advanced', 'It is rather for us to be here dedicated to the great task remaining before us—that from these honored dead we take increased devotion to that cause for which they gave the last full measure of devotion—that we here highly resolve that these dead shall not have died in vain—that this nation, under God, shall have a new birth of freedom—and that government of the people, by the people, for the people, shall not perish from the earth.']\n" ], [ "for i in range(len(l)):\n l1 = l[i].split(\" \")\n tokenized = sent_tokenize(l[i]) \n for i in tokenized: \n wordsList = nltk.word_tokenize(i) \n wordsList = [w for w in wordsList if not w in stop_words] \n tagged = nltk.pos_tag(wordsList) \n grammar = \"NP: {<DT>?<JJ>*<NN>}\"\n c = nltk.RegexpParser(grammar)\n result = c.parse(tagged)\n print(result)\n print()", "(S\n Four/CD\n (NP score/NN)\n seven/CD\n years/NNS\n ago/RB\n fathers/NNS\n brought/VBD\n (NP forth/JJ continent/NN)\n ,/,\n (NP new/JJ nation/NN)\n ,/,\n conceived/VBD\n Liberty/NNP\n ,/,\n dedicated/VBN\n (NP proposition/NN)\n men/NNS\n created/VBD\n equal/JJ)\n\n(S\n Now/RB\n engaged/VBN\n (NP great/JJ civil/JJ war/NN)\n ,/,\n testing/VBG\n whether/IN\n (NP nation/NN)\n ,/,\n (NP nation/NN)\n conceived/VBD\n dedicated/VBN\n ,/,\n (NP long/JJ endure/NN))\n\n(S We/PRP met/VBD (NP great/JJ battle-field/JJ war/NN))\n\n(S\n We/PRP\n come/VBP\n (NP dedicate/JJ portion/NN)\n (NP field/NN)\n ,/,\n final/JJ\n resting/VBG\n (NP place/NN)\n gave/VBD\n lives/NNS\n (NP nation/NN)\n might/MD\n live/VB)\n\n(S It/PRP altogether/RB fitting/VBG (NP proper/NN))\n\n(S\n But/CC\n ,/,\n larger/JJR\n (NP sense/NN)\n ,/,\n (NP dedicate—we/NN)\n (NP consecrate—we/NN)\n (NP hallow—this/NN)\n (NP ground/NN))\n\n(S\n (NP The/DT brave/NN)\n men/NNS\n ,/,\n living/VBG\n dead/JJ\n ,/,\n struggled/VBD\n ,/,\n consecrated/VBN\n ,/,\n far/RB\n (NP poor/JJ power/NN)\n (NP add/NN)\n (NP detract/NN))\n\n(S\n (NP The/DT world/NN)\n (NP little/JJ note/NN)\n ,/,\n (NP long/JJ remember/NN)\n say/VBP\n ,/,\n never/RB\n forget/VB)\n\n(S\n It/PRP\n us/PRP\n living/VBG\n ,/,\n rather/RB\n ,/,\n dedicated/VBD\n (NP unfinished/JJ work/NN)\n (NP fought/NN)\n thus/RB\n 
far/RB\n nobly/RB\n advanced/JJ)\n\n(S\n It/PRP\n rather/RB\n us/PRP\n dedicated/VBD\n (NP great/JJ task/NN)\n remaining/VBG\n us—that/RB\n honored/VBN\n (NP dead/JJ take/NN)\n increased/VBD\n (NP devotion/NN)\n (NP cause/NN)\n gave/VBD\n (NP last/JJ full/JJ measure/NN)\n devotion—that/WP\n highly/RB\n resolve/VBP\n dead/JJ\n shall/MD\n died/VBD\n (NP vain—that/DT nation/NN)\n ,/,\n God/NNP\n ,/,\n shall/MD\n (NP new/JJ birth/NN)\n (NP freedom—and/JJ government/NN)\n people/NNS\n ,/,\n people/NNS\n ,/,\n people/NNS\n ,/,\n shall/MD\n perish/VB\n (NP earth/NN)\n ./.)\n\n" ], [ "for i in range(len(l)):\n l1 = l[i].split(\" \")\n tokenized = sent_tokenize(l[i]) \n for i in tokenized: \n wordsList = nltk.word_tokenize(i) \n wordsList = [w for w in wordsList if not w in stop_words] \n tagged = nltk.pos_tag(wordsList) \n grammar = \"VP: {<MD>?<VB.*><NP|PP>}\"\n c = nltk.RegexpParser(grammar)\n result = c.parse(tagged)\n print(result)\n print()", "(S\n Four/CD\n score/NN\n seven/CD\n years/NNS\n ago/RB\n fathers/NNS\n brought/VBD\n forth/JJ\n continent/NN\n ,/,\n new/JJ\n nation/NN\n ,/,\n conceived/VBD\n Liberty/NNP\n ,/,\n dedicated/VBN\n proposition/NN\n men/NNS\n created/VBD\n equal/JJ)\n\n(S\n Now/RB\n engaged/VBN\n great/JJ\n civil/JJ\n war/NN\n ,/,\n testing/VBG\n whether/IN\n nation/NN\n ,/,\n nation/NN\n conceived/VBD\n dedicated/VBN\n ,/,\n long/JJ\n endure/NN)\n\n(S We/PRP met/VBD great/JJ battle-field/JJ war/NN)\n\n(S\n We/PRP\n come/VBP\n dedicate/JJ\n portion/NN\n field/NN\n ,/,\n final/JJ\n resting/VBG\n place/NN\n gave/VBD\n lives/NNS\n nation/NN\n might/MD\n live/VB)\n\n(S It/PRP altogether/RB fitting/VBG proper/NN)\n\n(S\n But/CC\n ,/,\n larger/JJR\n sense/NN\n ,/,\n dedicate—we/NN\n consecrate—we/NN\n hallow—this/NN\n ground/NN)\n\n(S\n The/DT\n brave/NN\n men/NNS\n ,/,\n living/VBG\n dead/JJ\n ,/,\n struggled/VBD\n ,/,\n consecrated/VBN\n ,/,\n far/RB\n poor/JJ\n power/NN\n add/NN\n detract/NN)\n\n(S\n The/DT\n world/NN\n little/JJ\n note/NN\n ,/,\n long/JJ\n remember/NN\n say/VBP\n ,/,\n never/RB\n forget/VB)\n\n(S\n It/PRP\n us/PRP\n living/VBG\n ,/,\n rather/RB\n ,/,\n dedicated/VBD\n unfinished/JJ\n work/NN\n fought/NN\n thus/RB\n far/RB\n nobly/RB\n advanced/JJ)\n\n(S\n It/PRP\n rather/RB\n us/PRP\n dedicated/VBD\n great/JJ\n task/NN\n remaining/VBG\n us—that/RB\n honored/VBN\n dead/JJ\n take/NN\n increased/VBD\n devotion/NN\n cause/NN\n gave/VBD\n last/JJ\n full/JJ\n measure/NN\n devotion—that/WP\n highly/RB\n resolve/VBP\n dead/JJ\n shall/MD\n died/VBD\n vain—that/DT\n nation/NN\n ,/,\n God/NNP\n ,/,\n shall/MD\n new/JJ\n birth/NN\n freedom—and/JJ\n government/NN\n people/NNS\n ,/,\n people/NNS\n ,/,\n people/NNS\n ,/,\n shall/MD\n perish/VB\n earth/NN\n ./.)\n\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
4a5eb6b1753056bafa7d1909aeeea8d7e0fc6dae
53,859
ipynb
Jupyter Notebook
Copy_of_pytorch_quick_start.ipynb
shivangisachan20/ML-DL-Projects
668240f37faeaab3d4526e18ac49e8c81a4ab7af
[ "MIT" ]
null
null
null
Copy_of_pytorch_quick_start.ipynb
shivangisachan20/ML-DL-Projects
668240f37faeaab3d4526e18ac49e8c81a4ab7af
[ "MIT" ]
null
null
null
Copy_of_pytorch_quick_start.ipynb
shivangisachan20/ML-DL-Projects
668240f37faeaab3d4526e18ac49e8c81a4ab7af
[ "MIT" ]
null
null
null
73.982143
26,792
0.733304
[ [ [ "<a href=\"https://colab.research.google.com/github/shivangisachan20/ML-DL-Projects/blob/master/Copy_of_pytorch_quick_start.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# PyTorch 1.2 Quickstart with Google Colab\nIn this code tutorial we will learn how to quickly train a model to understand some of PyTorch's basic building blocks to train a deep learning model. This notebook is inspired by the [\"Tensorflow 2.0 Quickstart for experts\"](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/quickstart/advanced.ipynb#scrollTo=DUNzJc4jTj6G) notebook. \n\nAfter completion of this tutorial, you should be able to import data, transform it, and efficiently feed the data in batches to a convolution neural network (CNN) model for image classification.\n\n**Author:** [Elvis Saravia](https://twitter.com/omarsar0)\n\n**Complete Code Walkthrough:** [Blog post](https://medium.com/dair-ai/pytorch-1-2-quickstart-with-google-colab-6690a30c38d)", "_____no_output_____" ] ], [ [ "", "_____no_output_____" ], [ "!pip3 install torch==1.2.0+cu92 torchvision==0.4.0+cu92 -f https://download.pytorch.org/whl/torch_stable.html", "Looking in links: https://download.pytorch.org/whl/torch_stable.html\nCollecting torch==1.2.0+cu92\n\u001b[?25l Downloading https://download.pytorch.org/whl/cu92/torch-1.2.0%2Bcu92-cp36-cp36m-manylinux1_x86_64.whl (663.1MB)\n\u001b[K |████████████████████████████████| 663.1MB 20kB/s \n\u001b[?25hCollecting torchvision==0.4.0+cu92\n\u001b[?25l Downloading https://download.pytorch.org/whl/cu92/torchvision-0.4.0%2Bcu92-cp36-cp36m-manylinux1_x86_64.whl (8.8MB)\n\u001b[K |████████████████████████████████| 8.8MB 44.2MB/s \n\u001b[?25hRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from torch==1.2.0+cu92) (1.16.4)\nRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from torchvision==0.4.0+cu92) (1.12.0)\nRequirement already satisfied: pillow>=4.1.1 in /usr/local/lib/python3.6/dist-packages (from torchvision==0.4.0+cu92) (4.3.0)\nRequirement already satisfied: olefile in /usr/local/lib/python3.6/dist-packages (from pillow>=4.1.1->torchvision==0.4.0+cu92) (0.46)\nInstalling collected packages: torch, torchvision\n Found existing installation: torch 1.1.0\n Uninstalling torch-1.1.0:\n Successfully uninstalled torch-1.1.0\n Found existing installation: torchvision 0.3.0\n Uninstalling torchvision-0.3.0:\n Successfully uninstalled torchvision-0.3.0\nSuccessfully installed torch-1.2.0+cu92 torchvision-0.4.0+cu92\n" ] ], [ [ "Note: We will be using the latest stable version of PyTorch so be sure to run the command above to install the latest version of PyTorch, which as the time of this tutorial was 1.2.0. We PyTorch belowing using the `torch` module. ", "_____no_output_____" ] ], [ [ "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torchvision\nimport torchvision.transforms as transforms", "_____no_output_____" ], [ "import fastai", "_____no_output_____" ], [ "from fastai.imports import *", "_____no_output_____" ], [ "from fastai.transforms import *\nfrom fastai.conv_learner import *", "_____no_output_____" ], [ "print(torch.__version__)", "1.2.0+cu92\n" ] ], [ [ "## Import The Data\nThe first step before training the model is to import the data. We will use the [MNIST dataset](http://yann.lecun.com/exdb/mnist/) which is like the Hello World dataset of machine learning. 
\n\nBesides importing the data, we will also do a few more things:\n- We will transform the data into tensors using the `transforms` module\n- We will use `DataLoader` to build convenient data loaders or what are referred to as iterators, which makes it easy to efficiently feed data in batches to deep learning models. \n- As hinted above, we will also create batches of the data by setting the `batch_size` parameter inside the data loader. Notice we use batches of `32` in this tutorial but you can change it to `64` if you like. I encourage you to experiment with different batches.", "_____no_output_____" ] ], [ [ "BATCH_SIZE = 32\n\n## transformations\ntransform = transforms.Compose(\n [transforms.ToTensor()])\n\n## download and load training dataset\ntrainset = torchvision.datasets.MNIST(root='./data', train=True,\n download=True, transform=transform)\ntrainloader = torch.utils.data.DataLoader(trainset, batch_size=BATCH_SIZE,\n shuffle=True, num_workers=2)\n\n## download and load testing dataset\ntestset = torchvision.datasets.MNIST(root='./data', train=False,\n download=True, transform=transform)\ntestloader = torch.utils.data.DataLoader(testset, batch_size=BATCH_SIZE,\n shuffle=False, num_workers=2)", "\r0it [00:00, ?it/s]" ] ], [ [ "## Exploring the Data\nAs a practitioner and researcher, I am always spending a bit of time and effort exploring and understanding the dataset. It's fun and this is a good practice to ensure that everything is in order. ", "_____no_output_____" ], [ "Let's check what the train and test dataset contains. I will use `matplotlib` to print out some of the images from our dataset. ", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nimport numpy as np\n\n## functions to show an image\ndef imshow(img):\n #img = img / 2 + 0.5 # unnormalize\n npimg = img.numpy()\n plt.imshow(np.transpose(npimg, (1, 2, 0)))\n\n## get some random training images\ndataiter = iter(trainloader)\nimages, labels = dataiter.next()\n\n## show images\nimshow(torchvision.utils.make_grid(images))", "_____no_output_____" ] ], [ [ "**EXERCISE:** Try to understand what the code above is doing. This will help you to better understand your dataset before moving forward. ", "_____no_output_____" ], [ "Let's check the dimensions of a batch.", "_____no_output_____" ] ], [ [ "for images, labels in trainloader:\n print(\"Image batch dimensions:\", images.shape)\n print(\"Image label dimensions:\", labels.shape)\n break", "Image batch dimensions: torch.Size([32, 1, 28, 28])\nImage label dimensions: torch.Size([32])\n" ] ], [ [ "## The Model\nNow using the classical deep learning framework pipeline, let's build a model with a single convolutional layer. \n\nHere are a few notes for those who are beginning with PyTorch:\n- The model below consists of an `__init__()` portion which is where you include the layers and components of the neural network. In our model, we have a convolutional layer denoted by `nn.Conv2d(...)`. We are dealing with an image dataset that is in grayscale, so we only need one channel going in, hence `in_channels=1`. We hope to get a nice representation of this layer, so we use `out_channels=32`. Kernel size is 3, and for the rest of the parameters we use the default values which you can find [here](https://pytorch.org/docs/stable/nn.html?highlight=conv2d#conv2d). \n- We use 2 back to back dense layers or what we refer to as linear transformations to the incoming data. Notice for `d1` I have a dimension which looks like it came out of nowhere.
128 represents the size we want as output and the (`26*26*32`) represents the dimension of the incoming data. If you would like to find out how to calculate those numbers, refer to the [PyTorch documentation](https://pytorch.org/docs/stable/nn.html?highlight=linear#conv2d). In short, the convolutional layer transforms the input data into a specific dimension that has to be considered in the linear layer. The same applies to the second linear transformation (`d2`) where the dimension of the output of the previous linear layer was added as `in_features=128`, and `10` is just the size of the output which also corresponds to the number of classes.\n- After each one of those layers, we also apply an activation function such as `ReLU`. For prediction purposes, we then apply a `softmax` layer to the last transformation and return the output of that. ", "_____no_output_____" ] ], [ [ "class MyModel(nn.Module):\n def __init__(self):\n super(MyModel, self).__init__()\n\n # 28x28x1 => 26x26x32\n self.conv1 = nn.Conv2d(in_channels=1, out_channels=32, kernel_size=3)\n self.d1 = nn.Linear(26 * 26 * 32, 128)\n self.d2 = nn.Linear(128, 10)\n\n def forward(self, x):\n # 32x1x28x28 => 32x32x26x26\n x = self.conv1(x)\n x = F.relu(x)\n\n # flatten => 32 x (32*26*26)\n x = x.flatten(start_dim = 1)\n\n # 32 x (32*26*26) => 32x128\n x = self.d1(x)\n x = F.relu(x)\n\n # logits => 32x10\n logits = self.d2(x)\n out = F.softmax(logits, dim=1)\n return out", "_____no_output_____" ] ], [ [ "As I have done in my previous tutorials, I always encourage you to test the model with 1 batch to ensure that the output dimensions are what we expect. ", "_____no_output_____" ] ], [ [ "## test the model with 1 batch\nmodel = MyModel()\nfor images, labels in trainloader:\n print(\"batch size:\", images.shape)\n out = model(images)\n print(out.shape)\n break", "batch size: torch.Size([32, 1, 28, 28])\ntorch.Size([32, 10])\n" ] ], [ [ "## Training the Model\nNow we are ready to train the model, but before that we are going to set up a loss function, an optimizer, and a function to compute the accuracy of the model.
", "_____no_output_____" ] ], [ [ "learning_rate = 0.001\nnum_epochs = 5\n\ndevice = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\nmodel = MyModel()\nmodel = model.to(device)\ncriterion = nn.CrossEntropyLoss()\noptimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)", "_____no_output_____" ], [ "## compute accuracy\ndef get_accuracy(logit, target, batch_size):\n ''' Obtain accuracy for training round '''\n corrects = (torch.max(logit, 1)[1].view(target.size()).data == target.data).sum()\n accuracy = 100.0 * corrects/batch_size\n return accuracy.item()", "_____no_output_____" ] ], [ [ "Now it's time for training.", "_____no_output_____" ] ], [ [ "for epoch in range(num_epochs):\n train_running_loss = 0.0\n train_acc = 0.0\n\n model = model.train()\n\n ## training step\n for i, (images, labels) in enumerate(trainloader):\n \n images = images.to(device)\n labels = labels.to(device)\n\n ## forward + backprop + loss\n logits = model(images)\n loss = criterion(logits, labels)\n optimizer.zero_grad()\n loss.backward()\n\n ## update model params\n optimizer.step()\n\n train_running_loss += loss.detach().item()\n train_acc += get_accuracy(logits, labels, BATCH_SIZE)\n \n model.eval()\n print('Epoch: %d | Loss: %.4f | Train Accuracy: %.2f' \\\n %(epoch, train_running_loss / i, train_acc/i)) ", "Epoch: 0 | Loss: 1.4901 | Train Accuracy: 96.97\nEpoch: 1 | Loss: 1.4808 | Train Accuracy: 97.90\nEpoch: 2 | Loss: 1.4767 | Train Accuracy: 98.34\nEpoch: 3 | Loss: 1.4748 | Train Accuracy: 98.55\nEpoch: 4 | Loss: 1.4725 | Train Accuracy: 98.81\n" ] ], [ [ "We can also compute accuracy on the testing dataset to see how well the model performs on the image classificaiton task. As you can see below, our basic CNN model is performing very well on the MNIST classification task.", "_____no_output_____" ] ], [ [ "test_acc = 0.0\nfor i, (images, labels) in enumerate(testloader, 0):\n images = images.to(device)\n labels = labels.to(device)\n outputs = model(images)\n test_acc += get_accuracy(outputs, labels, BATCH_SIZE)\n \nprint('Test Accuracy: %.2f'%( test_acc/i))", "Test Accuracy: 98.04\n" ] ], [ [ "**EXERCISE:** As a way to practise, try to include the testing part inside the code where I was outputing the training accuracy, so that you can also keep testing the model on the testing data as you proceed with the training steps. This is useful as sometimes you don't want to wait until your model has completed training to actually test the model with the testing data.", "_____no_output_____" ], [ "## Final Words\nThat's it for this tutorial! Congratulations! You are now able to implement a basic CNN model in PyTorch for image classification. If you would like, you can further extend the CNN model by adding more convolution layers and max pooling, but as you saw, you don't really need it here as results look good. If you are interested in implementing a similar image classification model using RNNs see the references below. ", "_____no_output_____" ], [ "## References\n- [Building RNNs is Fun with PyTorch and Google Colab](https://colab.research.google.com/drive/1NVuWLZ0cuXPAtwV4Fs2KZ2MNla0dBUas)\n- [CNN Basics with PyTorch by Sebastian Raschka](https://github.com/rasbt/deeplearning-models/blob/master/pytorch_ipynb/cnn/cnn-basic.ipynb)\n- [Tensorflow 2.0 Quickstart for experts](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/quickstart/advanced.ipynb#scrollTo=DUNzJc4jTj6G) ", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ] ]
4a5eb8cd36a88772c71fde2ef3f65d9dbb0f8570
16,606
ipynb
Jupyter Notebook
colab.ipynb
luuil/talking-head-anime-2-demo
b0429cdabbc9c8aa53a14b61bb26e051a3259cc7
[ "MIT" ]
null
null
null
colab.ipynb
luuil/talking-head-anime-2-demo
b0429cdabbc9c8aa53a14b61bb26e051a3259cc7
[ "MIT" ]
null
null
null
colab.ipynb
luuil/talking-head-anime-2-demo
b0429cdabbc9c8aa53a14b61bb26e051a3259cc7
[ "MIT" ]
null
null
null
36.021692
168
0.563471
[ [ [ "# Talking Head Anime from a Single Image 2: More Expressive (Manual Poser Tool)\n\n**Instruction**\n\n1. Run the four cells below, one by one, in order by clicking the \"Play\" button to the left of it. Wait for each cell to finish before going to the next one.\n2. Scroll down to the end of the last cell, and play with the GUI.\n\n**Constraints on Images**\n\n1. Must be an image of a single humanoid anime character.\n3. The head must be roughly contained in the middle box.\n5. Must have an alpha channel.\n\n**Links**\n\n* Github repository: http://github.com/pkhungurn/talking-head-anime-2-demo\n* Project writeup: http://pkhungurn.github.io/talking-head-anime-2/", "_____no_output_____" ] ], [ [ "# Clone the repository\n%cd /content\n!git clone https://github.com/pkhungurn/talking-head-anime-2-demo.git", "_____no_output_____" ], [ "# CD into the repository directory.\n%cd /content/talking-head-anime-2-demo", "_____no_output_____" ], [ "# Download model files\n!wget -O data/combiner.pt https://www.dropbox.com/s/at2r3v22xgyoxtk/combiner.pt?dl=0\n!wget -O data/eyebrow_decomposer.pt https://www.dropbox.com/s/pbomb5vgens03rk/eyebrow_decomposer.pt?dl=0\n!wget -O data/eyebrow_morphing_combiner.pt https://www.dropbox.com/s/yk9m5ok03e0ub1f/eyebrow_morphing_combiner.pt?dl=0\n!wget -O data/face_morpher.pt https://www.dropbox.com/s/77sza8qkiwd4qq5/face_morpher.pt?dl=0\n!wget -O data/two_algo_face_rotator.pt https://www.dropbox.com/s/ek261g9sspf0cqi/two_algo_face_rotator.pt?dl=0", "_____no_output_____" ], [ "import PIL.Image\nimport io\nfrom io import StringIO, BytesIO\nimport IPython.display\nimport numpy\nimport ipywidgets\nfrom tha2.util import extract_PIL_image_from_filelike, resize_PIL_image, extract_pytorch_image_from_PIL_image, convert_output_image_from_torch_to_numpy\nimport tha2.poser.modes.mode_20\nimport time\nimport threading\nimport torch\n\nFRAME_RATE = 30.0\nDEVICE_NAME = 'cuda'\ndevice = torch.device(DEVICE_NAME)\n\nlast_torch_input_image = None\ntorch_input_image = None\n\ndef show_pytorch_image(pytorch_image, output_widget=None):\n output_image = pytorch_image.detach().cpu()\n numpy_image = numpy.uint8(numpy.rint(convert_output_image_from_torch_to_numpy(output_image) * 255.0))\n pil_image = PIL.Image.fromarray(numpy_image, mode='RGBA')\n IPython.display.display(pil_image)\n\ninput_image_widget = ipywidgets.Output(\n layout={\n 'border': '1px solid black',\n 'width': '256px',\n 'height': '256px'\n })\n\nupload_input_image_button = ipywidgets.FileUpload(\n accept='.png',\n multiple=False,\n layout={\n 'width': '256px'\n }\n)\n\noutput_image_widget = ipywidgets.Output(\n layout={\n 'border': '1px solid black',\n 'width': '256px',\n 'height': '256px'\n }\n)\n\neyebrow_dropdown = ipywidgets.Dropdown(\n options=[\"troubled\", \"angry\", \"lowered\", \"raised\", \"happy\", \"serious\"],\n value=\"troubled\",\n description=\"Eyebrow:\", \n)\neyebrow_left_slider = ipywidgets.FloatSlider(\n value=0.0,\n min=0.0,\n max=1.0,\n step=0.01,\n description=\"Left:\",\n readout=True,\n readout_format=\".2f\"\n)\neyebrow_right_slider = ipywidgets.FloatSlider(\n value=0.0,\n min=0.0,\n max=1.0,\n step=0.01,\n description=\"Right:\",\n readout=True,\n readout_format=\".2f\"\n)\n\neye_dropdown = ipywidgets.Dropdown(\n options=[\"wink\", \"happy_wink\", \"surprised\", \"relaxed\", \"unimpressed\", \"raised_lower_eyelid\"],\n value=\"wink\",\n description=\"Eye:\", \n)\neye_left_slider = ipywidgets.FloatSlider(\n value=0.0,\n min=0.0,\n max=1.0,\n step=0.01,\n description=\"Left:\",\n readout=True,\n 
readout_format=\".2f\"\n)\neye_right_slider = ipywidgets.FloatSlider(\n value=0.0,\n min=0.0,\n max=1.0,\n step=0.01,\n description=\"Right:\",\n readout=True,\n readout_format=\".2f\"\n)\n\nmouth_dropdown = ipywidgets.Dropdown(\n options=[\"aaa\", \"iii\", \"uuu\", \"eee\", \"ooo\", \"delta\", \"lowered_corner\", \"raised_corner\", \"smirk\"],\n value=\"aaa\",\n description=\"Mouth:\", \n)\nmouth_left_slider = ipywidgets.FloatSlider(\n value=0.0,\n min=0.0,\n max=1.0,\n step=0.01,\n description=\"Value:\",\n readout=True,\n readout_format=\".2f\"\n)\nmouth_right_slider = ipywidgets.FloatSlider(\n value=0.0,\n min=0.0,\n max=1.0,\n step=0.01,\n description=\" \",\n readout=True,\n readout_format=\".2f\",\n disabled=True,\n)\n\ndef update_mouth_sliders(change):\n if mouth_dropdown.value == \"lowered_corner\" or mouth_dropdown.value == \"raised_corner\":\n mouth_left_slider.description = \"Left:\"\n mouth_right_slider.description = \"Right:\"\n mouth_right_slider.disabled = False\n else:\n mouth_left_slider.description = \"Value:\"\n mouth_right_slider.description = \" \"\n mouth_right_slider.disabled = True\n\nmouth_dropdown.observe(update_mouth_sliders, names='value')\n\niris_small_left_slider = ipywidgets.FloatSlider(\n value=0.0,\n min=0.0,\n max=1.0,\n step=0.01,\n description=\"Left:\",\n readout=True,\n readout_format=\".2f\"\n)\niris_small_right_slider = ipywidgets.FloatSlider(\n value=0.0,\n min=0.0,\n max=1.0,\n step=0.01,\n description=\"Right:\",\n readout=True,\n readout_format=\".2f\", \n)\niris_rotation_x_slider = ipywidgets.FloatSlider(\n value=0.0,\n min=-1.0,\n max=1.0,\n step=0.01,\n description=\"X-axis:\",\n readout=True,\n readout_format=\".2f\"\n)\niris_rotation_y_slider = ipywidgets.FloatSlider(\n value=0.0,\n min=-1.0,\n max=1.0,\n step=0.01,\n description=\"Y-axis:\",\n readout=True,\n readout_format=\".2f\", \n)\n\nhead_x_slider = ipywidgets.FloatSlider(\n value=0.0,\n min=-1.0,\n max=1.0,\n step=0.01,\n description=\"X-axis:\",\n readout=True,\n readout_format=\".2f\"\n)\nhead_y_slider = ipywidgets.FloatSlider(\n value=0.0,\n min=-1.0,\n max=1.0,\n step=0.01,\n description=\"Y-axis:\",\n readout=True,\n readout_format=\".2f\", \n)\nneck_z_slider = ipywidgets.FloatSlider(\n value=0.0,\n min=-1.0,\n max=1.0,\n step=0.01,\n description=\"Z-axis:\",\n readout=True,\n readout_format=\".2f\", \n)\n\n\ncontrol_panel = ipywidgets.VBox([\n eyebrow_dropdown,\n eyebrow_left_slider,\n eyebrow_right_slider,\n ipywidgets.HTML(value=\"<hr>\"),\n eye_dropdown,\n eye_left_slider,\n eye_right_slider,\n ipywidgets.HTML(value=\"<hr>\"),\n mouth_dropdown,\n mouth_left_slider,\n mouth_right_slider,\n ipywidgets.HTML(value=\"<hr>\"),\n ipywidgets.HTML(value=\"<center><b>Iris Shrinkage</b></center>\"),\n iris_small_left_slider,\n iris_small_right_slider,\n ipywidgets.HTML(value=\"<center><b>Iris Rotation</b></center>\"),\n iris_rotation_x_slider,\n iris_rotation_y_slider,\n ipywidgets.HTML(value=\"<hr>\"),\n ipywidgets.HTML(value=\"<center><b>Head Rotation</b></center>\"),\n head_x_slider,\n head_y_slider,\n neck_z_slider,\n])\n\ncontrols = ipywidgets.HBox([\n ipywidgets.VBox([\n input_image_widget, \n upload_input_image_button\n ]),\n control_panel,\n ipywidgets.HTML(value=\"&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;\"),\n output_image_widget,\n])\n\nposer = tha2.poser.modes.mode_20.create_poser(device)\npose_parameters = tha2.poser.modes.mode_20.get_pose_parameters()\npose_size = poser.get_num_parameters()\nlast_pose = torch.zeros(1, pose_size).to(device)\n\niris_small_left_index = 
pose_parameters.get_parameter_index(\"iris_small_left\")\niris_small_right_index = pose_parameters.get_parameter_index(\"iris_small_right\")\niris_rotation_x_index = pose_parameters.get_parameter_index(\"iris_rotation_x\")\niris_rotation_y_index = pose_parameters.get_parameter_index(\"iris_rotation_y\")\nhead_x_index = pose_parameters.get_parameter_index(\"head_x\")\nhead_y_index = pose_parameters.get_parameter_index(\"head_y\")\nneck_z_index = pose_parameters.get_parameter_index(\"neck_z\")\n\ndef get_pose():\n pose = torch.zeros(1, pose_size)\n\n eyebrow_name = f\"eyebrow_{eyebrow_dropdown.value}\"\n eyebrow_left_index = pose_parameters.get_parameter_index(f\"{eyebrow_name}_left\")\n eyebrow_right_index = pose_parameters.get_parameter_index(f\"{eyebrow_name}_right\")\n pose[0, eyebrow_left_index] = eyebrow_left_slider.value\n pose[0, eyebrow_right_index] = eyebrow_right_slider.value\n\n eye_name = f\"eye_{eye_dropdown.value}\"\n eye_left_index = pose_parameters.get_parameter_index(f\"{eye_name}_left\")\n eye_right_index = pose_parameters.get_parameter_index(f\"{eye_name}_right\")\n pose[0, eye_left_index] = eye_left_slider.value\n pose[0, eye_right_index] = eye_right_slider.value\n\n mouth_name = f\"mouth_{mouth_dropdown.value}\"\n if mouth_name == \"mouth_lowered_cornered\" or mouth_name == \"mouth_raised_corner\":\n mouth_left_index = pose_parameters.get_parameter_index(f\"{mouth_name}_left\")\n mouth_right_index = pose_parameters.get_parameter_index(f\"{mouth_name}_right\")\n pose[0, mouth_left_index] = mouth_left_slider.value\n pose[0, mouth_right_index] = mouth_right_slider.value\n else:\n mouth_index = pose_parameters.get_parameter_index(mouth_name)\n pose[0, mouth_index] = mouth_left_slider.value\n\n pose[0, iris_small_left_index] = iris_small_left_slider.value\n pose[0, iris_small_right_index] = iris_small_right_slider.value\n pose[0, iris_rotation_x_index] = iris_rotation_x_slider.value\n pose[0, iris_rotation_y_index] = iris_rotation_y_slider.value\n pose[0, head_x_index] = head_x_slider.value\n pose[0, head_y_index] = head_y_slider.value\n pose[0, neck_z_index] = neck_z_slider.value\n\n return pose.to(device)\n\ndisplay(controls)\n\ndef update(change):\n global last_pose\n global last_torch_input_image\n\n if torch_input_image is None:\n return\n\n needs_update = False\n if last_torch_input_image is None:\n needs_update = True \n else:\n if (torch_input_image - last_torch_input_image).abs().max().item() > 0:\n needs_update = True \n\n pose = get_pose()\n if (pose - last_pose).abs().max().item() > 0:\n needs_update = True\n\n if not needs_update:\n return\n\n output_image = poser.pose(torch_input_image, pose)[0]\n with output_image_widget:\n output_image_widget.clear_output(wait=True)\n show_pytorch_image(output_image, output_image_widget) \n\n last_torch_input_image = torch_input_image\n last_pose = pose\n\ndef upload_image(change):\n global torch_input_image\n for name, file_info in upload_input_image_button.value.items():\n content = io.BytesIO(file_info['content'])\n if content is not None:\n pil_image = resize_PIL_image(extract_PIL_image_from_filelike(content))\n w, h = pil_image.size\n if pil_image.mode != 'RGBA':\n with input_image_widget:\n input_image_widget.clear_output(wait=True)\n display(ipywidgets.HTML(\"Image must have an alpha channel!!!\"))\n else:\n torch_input_image = extract_pytorch_image_from_PIL_image(pil_image).to(device)\n with input_image_widget:\n input_image_widget.clear_output(wait=True)\n show_pytorch_image(torch_input_image, input_image_widget)\n 
update(None)\n\nupload_input_image_button.observe(upload_image, names='value')\neyebrow_dropdown.observe(update, 'value')\neyebrow_left_slider.observe(update, 'value')\neyebrow_right_slider.observe(update, 'value')\neye_dropdown.observe(update, 'value')\neye_left_slider.observe(update, 'value')\neye_right_slider.observe(update, 'value')\nmouth_dropdown.observe(update, 'value')\nmouth_left_slider.observe(update, 'value')\nmouth_right_slider.observe(update, 'value')\niris_small_left_slider.observe(update, 'value')\niris_small_right_slider.observe(update, 'value')\niris_rotation_x_slider.observe(update, 'value')\niris_rotation_y_slider.observe(update, 'value')\nhead_x_slider.observe(update, 'value')\nhead_y_slider.observe(update, 'value')\nneck_z_slider.observe(update, 'value')", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code" ] ]
4a5ebc70b22cb3f491edf2591f67f5b8d6269cd7
62,705
ipynb
Jupyter Notebook
scripts/fig3_hinge.ipynb
CaptainCandy/influence-release
a152486a1c130fb5f907259c6692b9fe0d2ef6d0
[ "MIT" ]
676
2017-07-14T02:07:08.000Z
2022-03-26T15:11:32.000Z
scripts/fig3_hinge.ipynb
CaptainCandy/influence-release
a152486a1c130fb5f907259c6692b9fe0d2ef6d0
[ "MIT" ]
22
2017-08-17T06:05:52.000Z
2021-07-01T06:00:12.000Z
scripts/fig3_hinge.ipynb
CaptainCandy/influence-release
a152486a1c130fb5f907259c6692b9fe0d2ef6d0
[ "MIT" ]
198
2017-07-18T08:48:52.000Z
2022-03-24T06:22:49.000Z
311.965174
56,294
0.914728
[ [ [ "%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import absolute_import\nfrom __future__ import unicode_literals \n \nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom scipy.stats import pearsonr\nfrom scipy.misc import logsumexp\n\nsns.set(color_codes=True)", "_____no_output_____" ], [ "def annotate_upper_left(ax, text, annotation_offset=(-50, 30)): \n ax.annotate(text, xy=(0, 1), xycoords='axes fraction', fontsize=18,\n xytext=annotation_offset, textcoords='offset points',\n ha='left', va='top')", "_____no_output_____" ], [ "f = np.load('../output/hinge_results.npz')\n\ntemps = f['temps']\nindices_to_remove = f['indices_to_remove']\nactual_loss_diffs = f['actual_loss_diffs']\npredicted_loss_diffs = f['predicted_loss_diffs']\ninfluences = f['influences']", "_____no_output_____" ], [ "sns.set_style('white')\nfontsize=14\n\nfig, axs = plt.subplots(1, 4, sharex=False, sharey=False, figsize=(13, 3))\n\n# Graph of approximations\nx = np.arange(-5, 15, 0.01)\nts = [0.001, 0.01, 0.1]\ny = 1 - x\ny[y < 0] = 0 \naxs[0].plot(x, y, label='t=0 (Hinge)')\nfor t in ts:\n# y = t * np.log(1 + np.exp(-(x-1)/t)) \n y = t * logsumexp(\n np.vstack((np.zeros_like(x), -(x-1)/t)),\n axis=0)\n axs[0].plot(x, y, label='t=%s' % t)\naxs[0].set_xlim((0.8, 1.2))\naxs[0].set_xticks((0.8, 0.9, 1.0, 1.1, 1.2))\naxs[0].set_ylim((0, 0.3))\naxs[0].legend(fontsize=fontsize-4)\n\naxs[0].set_xlabel('s', fontsize=fontsize)\naxs[0].set_ylabel('SmoothHinge(s)', fontsize=fontsize)\n\n# Hinge loss\nax_idx = 1\ntemp_idx = 0\nsmooth_influence_preds = influences[temp_idx, indices_to_remove[0, :]]\nprint(temps[temp_idx], pearsonr(actual_loss_diffs[0, :], smooth_influence_preds)[0])\n\naxs[ax_idx].scatter(actual_loss_diffs[0, :], smooth_influence_preds)\n\nmax_value = 1.1 * np.max([np.max(np.abs(actual_loss_diffs[0, :])), np.max(np.abs(smooth_influence_preds))])\naxs[ax_idx].set_xlim((-0.025, 0.025))\naxs[ax_idx].set_ylim(-max_value,max_value)\naxs[ax_idx].set_xlabel('Actual diff in loss', fontsize=fontsize)\naxs[ax_idx].set_ylabel('Predicted diff in loss', fontsize=fontsize)\naxs[ax_idx].plot([-0.025, 0.025], [-0.025, 0.025], 'k-', alpha=0.2, zorder=1)\naxs[ax_idx].set_title('t=0 (Hinge)', fontsize=fontsize)\n\n# t = 0.001\nax_idx = 2\ntemp_idx = 1\nsmooth_influence_preds = influences[temp_idx, indices_to_remove[0, :]]\nprint(temps[temp_idx], pearsonr(actual_loss_diffs[0, :], smooth_influence_preds)[0])\n\naxs[ax_idx].scatter(actual_loss_diffs[0, :], smooth_influence_preds)\n\nmax_value = 1.1 * np.max([np.max(np.abs(actual_loss_diffs[0, :])), np.max(np.abs(smooth_influence_preds))])\naxs[ax_idx].set_xlim((-0.025, 0.025)) \naxs[ax_idx].set_ylim((-0.025, 0.025))\naxs[ax_idx].set_aspect('equal')\naxs[ax_idx].set_xlabel('Actual diff in loss', fontsize=fontsize)\naxs[ax_idx].plot([-0.025, 0.025], [-0.025, 0.025], 'k-', alpha=0.2, zorder=1)\naxs[ax_idx].set_title('t=0.001', fontsize=fontsize)\n\n# t = 0.1\nax_idx = 3\ntemp_idx = 2\nsmooth_influence_preds = influences[temp_idx, indices_to_remove[0, :]]\nprint(temps[temp_idx], pearsonr(actual_loss_diffs[0, :], smooth_influence_preds)[0])\n\naxs[ax_idx].scatter(actual_loss_diffs[0, :], smooth_influence_preds)\n\nmax_value = 1.1 * np.max([np.max(np.abs(actual_loss_diffs[0, :])), np.max(np.abs(smooth_influence_preds))])\n\naxs[ax_idx].set_xlim((-0.025, 0.025))\naxs[ax_idx].set_ylim((-0.025, 
0.025))\naxs[ax_idx].set_aspect('equal')\naxs[ax_idx].set_xlabel('Actual diff in loss', fontsize=fontsize)\naxs[ax_idx].plot([-0.025, 0.025], [-0.025, 0.025], 'k-', alpha=0.2, zorder=1)\naxs[ax_idx].set_title('t=0.1', fontsize=fontsize)\n\n# plt.setp(axs[ax_idx].get_yticklabels(), visible=False)\n\ndef move_ax_right(ax, dist):\n bbox = ax.get_position()\n bbox.x0 += dist\n bbox.x1 += dist\n ax.set_position(bbox)\n\nmove_ax_right(axs[1], 0.05)\nmove_ax_right(axs[2], 0.06)\nmove_ax_right(axs[3], 0.07)\n\nannotate_upper_left(axs[0], '(a)', (-50, 15))\nannotate_upper_left(axs[1], '(b)', (-50, 15))\n# plt.savefig(\n# '../figs/fig-hinge.png', \n# dpi=600, bbox_inches='tight')", "0.0 0.632477617528\n0.001 0.950173523416\n0.1 0.914073494404\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code" ] ]
4a5ecb98247864654001b37df2043581bb2a46ea
48,872
ipynb
Jupyter Notebook
hermione/module_templates/__IMPLEMENTED_BASE__/src/ml/notebooks/example_notebook.ipynb
RodrigoATorres/hermione
6cbed73e309f8025a48f33165d8f29561c6a3cc7
[ "Apache-2.0" ]
null
null
null
hermione/module_templates/__IMPLEMENTED_BASE__/src/ml/notebooks/example_notebook.ipynb
RodrigoATorres/hermione
6cbed73e309f8025a48f33165d8f29561c6a3cc7
[ "Apache-2.0" ]
null
null
null
hermione/module_templates/__IMPLEMENTED_BASE__/src/ml/notebooks/example_notebook.ipynb
RodrigoATorres/hermione
6cbed73e309f8025a48f33165d8f29561c6a3cc7
[ "Apache-2.0" ]
null
null
null
38.421384
260
0.450156
[ [ [ "# Notebook example", "_____no_output_____" ], [ "Installing some necessary packages:", "_____no_output_____" ] ], [ [ "!pip install ipywidgets\n!jupyter nbextension enable --py widgetsnbextension\n!jupyter labextension install @jupyter-widgets/jupyterlab-manager", "Requirement already satisfied: ipywidgets in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (7.5.1)\nRequirement already satisfied: widgetsnbextension~=3.5.0 in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from ipywidgets) (3.5.1)\nRequirement already satisfied: ipython>=4.0.0; python_version >= \"3.3\" in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from ipywidgets) (7.9.0)\nRequirement already satisfied: traitlets>=4.3.1 in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from ipywidgets) (4.3.3)\nRequirement already satisfied: nbformat>=4.2.0 in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from ipywidgets) (4.4.0)\nRequirement already satisfied: ipykernel>=4.5.1 in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from ipywidgets) (5.1.3)\nRequirement already satisfied: notebook>=4.4.1 in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from widgetsnbextension~=3.5.0->ipywidgets) (6.0.1)\nRequirement already satisfied: setuptools>=18.5 in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from ipython>=4.0.0; python_version >= \"3.3\"->ipywidgets) (41.6.0.post20191101)\nRequirement already satisfied: pickleshare in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from ipython>=4.0.0; python_version >= \"3.3\"->ipywidgets) (0.7.5)\nRequirement already satisfied: prompt-toolkit<2.1.0,>=2.0.0 in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from ipython>=4.0.0; python_version >= \"3.3\"->ipywidgets) (2.0.10)\nRequirement already satisfied: jedi>=0.10 in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from ipython>=4.0.0; python_version >= \"3.3\"->ipywidgets) (0.15.1)\nRequirement already satisfied: pygments in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from ipython>=4.0.0; python_version >= \"3.3\"->ipywidgets) (2.4.2)\nRequirement already satisfied: decorator in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from ipython>=4.0.0; python_version >= \"3.3\"->ipywidgets) (4.4.1)\nRequirement already satisfied: colorama; sys_platform == \"win32\" in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from ipython>=4.0.0; python_version >= \"3.3\"->ipywidgets) (0.4.1)\nRequirement already satisfied: backcall in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from ipython>=4.0.0; python_version >= \"3.3\"->ipywidgets) (0.1.0)\nRequirement already satisfied: ipython-genutils in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from traitlets>=4.3.1->ipywidgets) (0.2.0)\nRequirement already satisfied: six in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from traitlets>=4.3.1->ipywidgets) 
(1.12.0)\nRequirement already satisfied: jsonschema!=2.5.0,>=2.4 in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from nbformat>=4.2.0->ipywidgets) (3.1.1)\nRequirement already satisfied: jupyter_core in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from nbformat>=4.2.0->ipywidgets) (4.5.0)\nRequirement already satisfied: tornado>=4.2 in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from ipykernel>=4.5.1->ipywidgets) (6.0.3)\nRequirement already satisfied: jupyter-client in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from ipykernel>=4.5.1->ipywidgets) (5.3.3)\nRequirement already satisfied: Send2Trash in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets) (1.5.0)\nRequirement already satisfied: nbconvert in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets) (5.6.1)\nRequirement already satisfied: prometheus-client in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets) (0.7.1)\nRequirement already satisfied: jinja2 in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets) (2.10.3)\nRequirement already satisfied: pyzmq>=17 in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets) (18.1.0)\nRequirement already satisfied: terminado>=0.8.1 in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets) (0.8.2)\nRequirement already satisfied: wcwidth in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from prompt-toolkit<2.1.0,>=2.0.0->ipython>=4.0.0; python_version >= \"3.3\"->ipywidgets) (0.1.7)\nRequirement already satisfied: parso>=0.5.0 in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from jedi>=0.10->ipython>=4.0.0; python_version >= \"3.3\"->ipywidgets) (0.5.1)\nRequirement already satisfied: importlib-metadata in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from jsonschema!=2.5.0,>=2.4->nbformat>=4.2.0->ipywidgets) (0.23)\nRequirement already satisfied: attrs>=17.4.0 in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from jsonschema!=2.5.0,>=2.4->nbformat>=4.2.0->ipywidgets) (19.3.0)\nRequirement already satisfied: pyrsistent>=0.14.0 in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from jsonschema!=2.5.0,>=2.4->nbformat>=4.2.0->ipywidgets) (0.15.5)\nRequirement already satisfied: pywin32>=1.0; sys_platform == \"win32\" in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from jupyter-client->ipykernel>=4.5.1->ipywidgets) (225)\nRequirement already satisfied: python-dateutil>=2.1 in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from jupyter-client->ipykernel>=4.5.1->ipywidgets) (2.8.1)\nRequirement already satisfied: mistune<2,>=0.8.1 
in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets) (0.8.4)\nRequirement already satisfied: entrypoints>=0.2.2 in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets) (0.3)\nRequirement already satisfied: defusedxml in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets) (0.6.0)\nRequirement already satisfied: bleach in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets) (3.1.0)\nRequirement already satisfied: testpath in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets) (0.4.4)\nRequirement already satisfied: pandocfilters>=1.4.1 in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets) (1.4.2)\nRequirement already satisfied: MarkupSafe>=0.23 in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from jinja2->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets) (1.1.1)\nRequirement already satisfied: zipp>=0.5 in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from importlib-metadata->jsonschema!=2.5.0,>=2.4->nbformat>=4.2.0->ipywidgets) (0.6.0)\nRequirement already satisfied: webencodings in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from bleach->nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets) (0.5.1)\nRequirement already satisfied: more-itertools in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from zipp>=0.5->importlib-metadata->jsonschema!=2.5.0,>=2.4->nbformat>=4.2.0->ipywidgets) (7.2.0)\n" ], [ "!pip install xgboost", "Collecting xgboost\n Downloading https://files.pythonhosted.org/packages/3d/1b/83e5dc0021d12884e9998999945e156cf3628a79dacecaed2ede9f3107cb/xgboost-1.3.3-py3-none-win_amd64.whl (95.2MB)\nRequirement already satisfied: numpy in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from xgboost) (1.17.3)\nRequirement already satisfied: scipy in c:\\users\\gusta\\appdata\\local\\continuum\\anaconda3\\envs\\octopus\\lib\\site-packages (from xgboost) (1.4.1)\nInstalling collected packages: xgboost\nSuccessfully installed xgboost-1.3.3\n" ] ], [ [ "**It is necessary to change the working directory so the project structure works properly:**", "_____no_output_____" ] ], [ [ "import sys\nsys.path.append(\"../../\")", "_____no_output_____" ] ], [ [ "From this point, it's on you!\n\n---", "_____no_output_____" ] ], [ [ "import pandas as pd\n\nfrom ml.data_source.spreadsheet import Spreadsheet\nfrom ml.preprocessing.preprocessing import Preprocessing\nfrom ml.model.trainer import TrainerSklearn\n\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.ensemble import RandomForestClassifier", "/home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages/interpret_community/common/gpu_kmeans.py:32: UserWarning: cuML is required to use GPU explainers. 
Check https://rapids.ai/start.html for more information on how to install it.\n for more information on how to install it.\")\ncuML is required to use GPU explainers. Check https://rapids.ai/start.html for more information on how to install it.\n" ], [ "df = Spreadsheet().get_data('../../../data/raw/train.csv')", "_____no_output_____" ], [ "df.columns", "_____no_output_____" ], [ "p = Preprocessing()", "_____no_output_____" ], [ "df = p.clean_data(df)\ndf = p.categ_encoding(df)", "INFO:root:Cleaning data\nINFO:root:Category encoding\n" ], [ "df.head()", "_____no_output_____" ], [ "X = df.drop(columns=[\"Survived\"])\ny = df[\"Survived\"]", "_____no_output_____" ], [ "# Ensure the same random state is passed to TrainerSklearn().train()\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=123)\nX_train.shape, X_test.shape, y_train.shape, y_test.shape", "_____no_output_____" ], [ "rf = TrainerSklearn().train(X, y, classification=True, \n                            algorithm=RandomForestClassifier, \n                            preprocessing=p,\n                            data_split=('train_test', {'test_size':.3}),\n                            random_state=123)", "Setting feature_perturbation = \"tree_path_dependent\" because no background data was given.\nINFO:interpret_community.TabularExplainer:Initialized valid explainer TreeExplainer with args {'explain_subset': None, 'features': ['Age', 'Pclass_1', 'Pclass_2', 'Pclass_3', 'Sex_female', 'Sex_male'], 'classes': None}\n" ], [ "rf.get_metrics()", "_____no_output_____" ], [ "rf.get_columns()", "_____no_output_____" ], [ "rf.predict_proba(X_test, binary=True)", "_____no_output_____" ], [ "# Predicting new data\ndef predict_new(X, model, probs=True):\n    X = p.clean_data(X)\n    X = p.categ_encoding(X)\n    \n    columns = model.get_columns()\n    for col in columns:\n        if col not in X.columns:\n            X[col] = 0\n    print(X)\n    if probs:\n        return model.predict_proba(X)\n    else:\n        return model.predict(X)", "_____no_output_____" ], [ "new_data = pd.DataFrame({\n    'Pclass':3,\n    'Sex': 'male',\n    'Age':4\n}, index=[0])\n\nnew_data", "_____no_output_____" ], [ "predict_new(new_data, rf)", "INFO:root:Cleaning data\nINFO:root:Category encoding\n" ] ], [ [ "**Data Quality:**", "_____no_output_____" ] ], [ [ "from ml.preprocessing.dataquality import DataQuality\nimport great_expectations as ge\nimport warnings\nwarnings.filterwarnings('ignore')", "_____no_output_____" ], [ "df = Spreadsheet().get_data('../../../data/raw/train.csv')", "_____no_output_____" ], [ "X_train, X_test = train_test_split(df, test_size=0.3, random_state=123)\nX_train.shape, X_test.shape", "_____no_output_____" ], [ "dq = DataQuality(discrete_cat_cols=['Sex', 'Pclass'])\ndf_ge = dq.perform(X_train, target='Survived')\n#df_ge.save_expectation_suite('/opt/ml/processing/output/expectations/expectations.json')", "_____no_output_____" ], [ "df_ge.save_expectation_suite('../../../output/expectations.json')", "INFO:great_expectations.data_asset.data_asset:	7 expectation(s) included in expectation_suite. Omitting 1 expectation(s) that failed when last run; set discard_failed_expectations=False to include them. 
result_format settings filtered.\n" ], [ "X_test.drop(columns=['Survived'], inplace=True)\ndf_ge = ge.dataset.PandasDataset(X_test)\nge_val = df_ge.validate(expectation_suite='../../../output/expectations.json', only_return_failures=False)", "_____no_output_____" ], [ "ge_val", "_____no_output_____" ] ], [ [ "**Get local explainer for each instance:**", "_____no_output_____" ] ], [ [ "# Get local explainer\nres = rf.local_interpret(X_test, len(X_test.columns))", "_____no_output_____" ], [ "res", "_____no_output_____" ] ] ]
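[ [ [ "For reference, here is a minimal plain-scikit-learn sketch of the same train/evaluate flow without the project's `ml` wrappers. This is an illustrative addition: it assumes the cleaned `X` and `y` built above, and the metric choices are illustrative rather than part of the original pipeline.", "_____no_output_____" ] ], [ [ "# Minimal sketch of the flow above using plain scikit-learn (assumes X and y exist).\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.metrics import accuracy_score, roc_auc_score\nfrom sklearn.model_selection import train_test_split\n\nX_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=123)\nclf = RandomForestClassifier(random_state=123).fit(X_tr, y_tr)\nprobs = clf.predict_proba(X_te)[:, 1]  # probability of the positive class\nprint('accuracy:', accuracy_score(y_te, clf.predict(X_te)))\nprint('roc_auc :', roc_auc_score(y_te, probs))", "_____no_output_____" ] ] ]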
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
4a5ef6ad447fedbd2bed276bb9013f72d499d6d5
1,002,250
ipynb
Jupyter Notebook
notebooks/07_decomposition_example.ipynb
noahbouchier/segregation
88bd9608251b8bc42eae9265adb7941279b9868c
[ "BSD-3-Clause" ]
92
2019-02-17T02:36:29.000Z
2022-01-22T04:29:10.000Z
notebooks/07_decomposition_example.ipynb
noahbouchier/segregation
88bd9608251b8bc42eae9265adb7941279b9868c
[ "BSD-3-Clause" ]
128
2019-02-22T03:52:40.000Z
2022-02-28T18:39:01.000Z
notebooks/07_decomposition_example.ipynb
noahbouchier/segregation
88bd9608251b8bc42eae9265adb7941279b9868c
[ "BSD-3-Clause" ]
29
2019-02-17T02:36:50.000Z
2022-03-17T04:15:49.000Z
1,184.692671
198,956
0.956971
[ [ [ "# Segregation Index Decomposition", "_____no_output_____" ], [ "## Table of Contents\n* [Decomposition framework of the PySAL *segregation* module](#Decomposition-framework-of-the-PySAL-*segregation*-module)\n\t* [Map of the composition of the Metropolitan area of Los Angeles](#Map-of-the-composition-of-the-Metropolitan-area-of-Los-Angeles)\n\t* [Map of the composition of the Metropolitan area of New York](#Map-of-the-composition-of-the-Metropolitan-area-of-New-York)\n\t* [Composition Approach (default)](#Composition-Approach-%28default%29)\n\t* [Share Approach](#Share-Approach)\n\t* [Dual Composition Approach](#Dual-Composition-Approach)\n\t* [Inspecting a different index: Relative Concentration](#Inspecting-a-different-index:-Relative-Concentration)\n", "_____no_output_____" ], [ "This is a notebook that explains a step-by-step procedure to perform decomposition on comparative segregation measures.\n\nFirst, let's import all the needed libraries.", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport pickle\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport segregation\nfrom segregation.decomposition import DecomposeSegregation", "_____no_output_____" ] ], [ [ "In this example, we are going to use census data that the user must download separately, following similar guidelines explained in https://github.com/spatialucr/geosnap/blob/master/examples/01_getting_started.ipynb where you should download the full count file of 2010. The zipped file download will have a name that looks like `LTDB_Std_All_fullcount.zip`. After extracting the zipped content, the filepath of the data should look like this:", "_____no_output_____" ] ], [ [ "#filepath = '~/LTDB_Std_2010_fullcount.csv'", "_____no_output_____" ] ], [ [ "Then, we read the data:", "_____no_output_____" ] ], [ [ "df = pd.read_csv(filepath, encoding = \"ISO-8859-1\", sep = \",\")", "_____no_output_____" ] ], [ [ "We are going to work with the variable for the non-Hispanic black population (`nhblk10`) and the total population of each unit (`pop10`). So, let's read the map of all census tracts of the US and select some specific columns for the analysis:", "_____no_output_____" ] ], [ [ "# This file can be downloaded here: https://drive.google.com/open?id=1gWF0OCn6xuR_WrEj7Ot2jY6KI2t6taIm\nwith open('data/tracts_US.pkl', 'rb') as input:\n    map_gpd = pickle.load(input)\n    \nmap_gpd['INTGEOID10'] = pd.to_numeric(map_gpd[\"GEOID10\"])\ngdf_pre = map_gpd.merge(df, left_on = 'INTGEOID10', right_on = 'tractid')\ngdf = gdf_pre[['GEOID10', 'geometry', 'pop10', 'nhblk10']]", "_____no_output_____" ] ], [ [ "In this notebook, we use the Metropolitan Statistical Areas (MSAs) of the US (we're also using the word 'cities' here to refer to them). So, let's read the correspondence table that relates the tract id with the corresponding Metropolitan area...", "_____no_output_____" ] ], [ [ "# You can download this file here: https://drive.google.com/open?id=10HUUJSy9dkZS6m4vCVZ-8GiwH0EXqIau\nwith open('data/tract_metro_corresp.pkl', 'rb') as input:\n    tract_metro_corresp = pickle.load(input).drop_duplicates()", "_____no_output_____" ] ], [ [ "...and merge them with the previous data.", "_____no_output_____" ] ], [ [ "merged_gdf = gdf.merge(tract_metro_corresp, left_on = 'GEOID10', right_on = 'geoid10')", "_____no_output_____" ] ], [ [ "We now build the composition variable (`compo`), which is the frequency of the chosen group divided by the total population. 
Let's inspect the first rows of the data.", "_____no_output_____" ] ], [ [ "merged_gdf['compo'] = np.where(merged_gdf['pop10'] == 0, 0, merged_gdf['nhblk10'] / merged_gdf['pop10'])\nmerged_gdf.head()", "_____no_output_____" ] ], [ [ "Now, we choose two different metropolitan areas to compare the degree of segregation.", "_____no_output_____" ], [ "## Map of the composition of the Metropolitan area of Los Angeles", "_____no_output_____" ] ], [ [ "la_2010 = merged_gdf.loc[(merged_gdf.name == \"Los Angeles-Long Beach-Anaheim, CA\")]\nla_2010.plot(column = 'compo', figsize = (10, 10), cmap = 'OrRd', legend = True)\nplt.axis('off')", "_____no_output_____" ] ], [ [ "## Map of the composition of the Metropolitan area of New York", "_____no_output_____" ] ], [ [ "ny_2010 = merged_gdf.loc[(merged_gdf.name == 'New York-Newark-Jersey City, NY-NJ-PA')]\nny_2010.plot(column = 'compo', figsize = (20, 10), cmap = 'OrRd', legend = True)\nplt.axis('off')", "_____no_output_____" ] ], [ [ "We first compare the Gini index of both cities. Let's import the `GiniSeg` class from `segregation`, fit both indexes and check the difference in point estimation.", "_____no_output_____" ] ], [ [ "from segregation.aspatial import GiniSeg\n\nG_la = GiniSeg(la_2010, 'nhblk10', 'pop10')\nG_ny = GiniSeg(ny_2010, 'nhblk10', 'pop10')\n\nG_la.statistic - G_ny.statistic", "_____no_output_____" ] ], [ [ "Let's decompose this difference according to *Rey, S. et al \"Comparative Spatial Segregation Analytics\". Forthcoming*. You can check the options available in this decomposition below:", "_____no_output_____" ] ], [ [ "help(DecomposeSegregation)", "Help on class DecomposeSegregation in module segregation.decomposition.decompose_segregation:\n\nclass DecomposeSegregation(builtins.object)\n |  Decompose segregation differences into spatial and attribute components.\n |  \n |  Given two segregation indices of the same type, use Shapley decomposition\n |  to measure whether the differences between index measures arise from\n |  differences in spatial structure or population structure\n |  \n |  Parameters\n |  ----------\n |  index1 : segregation.SegIndex class\n |      First SegIndex class to compare.\n |  index2 : segregation.SegIndex class\n |      Second SegIndex class to compare.\n |  counterfactual_approach : str, one of\n |                            [\"composition\", \"share\", \"dual_composition\"]\n |      The technique used to generate the counterfactual population\n |      distributions.\n |  \n |  Attributes\n |  ----------\n |  \n |  c_s : float\n |      Shapley's Spatial Component of the decomposition\n |  \n |  c_a : float\n |      Shapley's Attribute Component of the decomposition\n |  \n |  Methods\n |  ----------\n |  \n |  plot : Visualize features of the Decomposition performed\n |      plot_type : str, one of ['cdfs', 'maps']\n |      \n |      'cdfs' : visualize the cumulative distribution functions of the compositions/shares\n |      'maps' : visualize the spatial distributions for original data and counterfactuals generated and Shapley's components (only available for GeoDataFrames)\n |  \n |  Examples\n |  --------\n |  Several examples can be found at https://github.com/pysal/segregation/blob/master/notebooks/decomposition_wrapper_example.ipynb.\n |  \n |  Methods defined here:\n |  \n |  __init__(self, index1, index2, counterfactual_approach='composition')\n |      Initialize self. 
See help(type(self)) for accurate signature.\n |  \n |  plot(self, plot_type='cdfs')\n |      Plot the Segregation Decomposition Profile\n |  \n |  ----------------------------------------------------------------------\n |  Data descriptors defined here:\n |  \n |  __dict__\n |      dictionary for instance variables (if defined)\n |  \n |  __weakref__\n |      list of weak references to the object (if defined)\n\n" ] ], [ [ "## Composition Approach (default)", "_____no_output_____" ], [ "The difference of -0.10653 fitted previously can be decomposed into two components: the spatial component and the attribute component. Let's estimate both, respectively.", "_____no_output_____" ] ], [ [ "DS_composition = DecomposeSegregation(G_la, G_ny)\nDS_composition.c_s", "_____no_output_____" ], [ "DS_composition.c_a", "_____no_output_____" ] ], [ [ "So, the first thing to notice is that the attribute component, i.e. the part given by a difference in the population structure (in this case, the composition), plays a more important role in the difference, since it has a higher absolute value.\n\nThe difference in the composition can be inspected in the plotting method with the type `cdfs`:", "_____no_output_____" ] ], [ [ "DS_composition.plot(plot_type = 'cdfs')", "_____no_output_____" ] ], [ [ "If your data is a GeoDataFrame, it is also possible to visualize the counterfactual compositions with the argument `plot_type = 'maps'`.\n\nThe first and second contexts are Los Angeles and New York, respectively.", "_____no_output_____" ] ], [ [ "DS_composition.plot(plot_type = 'maps')", "_____no_output_____" ] ], [ [ "*Note that in all plotting methods, the title presents each component of the decomposition performed.*", "_____no_output_____" ], [ "## Share Approach", "_____no_output_____" ], [ "The share approach takes into consideration the share of each group in each city. Since this approach uses the focus group and the complementary group share to build the \"counterfactual\" total population of each unit, it is of interest to inspect all these four cdf's.\n\n*P.S.: The share is the population frequency of each group in each unit over the total population of that respective group.*", "_____no_output_____" ] ], [ [ "DS_share = DecomposeSegregation(G_la, G_ny, counterfactual_approach = 'share')\nDS_share.plot(plot_type = 'cdfs')", "_____no_output_____" ] ], [ [ "We can see that the curves of the two contexts are closer to each other, which represents a drop in the importance of the population structure (attribute component) to -0.062. However, this attribute component still outweighs the spatial component (-0.045) in terms of importance, given their absolute magnitudes.", "_____no_output_____" ] ], [ [ "DS_share.plot(plot_type = 'maps')", "_____no_output_____" ] ], [ [ "We can see that the counterfactual maps of the composition (outside of the main diagonal), in this case, are slightly different from the previous approach.", "_____no_output_____" ], [ "## Dual Composition Approach", "_____no_output_____" ], [ "The `dual_composition` approach is similar to the composition approach. 
However, it also uses the counterfactual composition of the cdf of the complementary group.", "_____no_output_____" ] ], [ [ "DS_dual = DecomposeSegregation(G_la, G_ny, counterfactual_approach = 'dual_composition')\nDS_dual.plot(plot_type = 'cdfs')", "_____no_output_____" ] ], [ [ "It is possible to see that the component values are very similar, with slight changes from the `composition` approach.", "_____no_output_____" ] ], [ [ "DS_dual.plot(plot_type = 'maps')", "_____no_output_____" ] ], [ [ "The counterfactual distributions are virtually the same (though not identical) as those from the `composition` approach.", "_____no_output_____" ], [ "## Inspecting a different index: Relative Concentration", "_____no_output_____" ] ], [ [ "from segregation.spatial import RelativeConcentration\n\nRCO_la = RelativeConcentration(la_2010, 'nhblk10', 'pop10')\nRCO_ny = RelativeConcentration(ny_2010, 'nhblk10', 'pop10')\n\nRCO_la.statistic - RCO_ny.statistic", "_____no_output_____" ] ], [ [ "RCO_DS_composition = DecomposeSegregation(RCO_la, RCO_ny)\nRCO_DS_composition.c_s", "_____no_output_____" ], [ "RCO_DS_composition.c_a", "_____no_output_____" ] ], [ [ "It is possible to note that, in this case, the spatial component plays a much more relevant role in the decomposition.", "_____no_output_____" ] ] ]
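[ [ [ "As a quick sanity check (an illustrative addition, assuming the `G_la`, `G_ny` and `DS_composition` objects fitted above): by construction of the Shapley decomposition, the spatial and attribute components should sum to the raw difference between the two Gini statistics (approximately -0.10653 here).", "_____no_output_____" ] ], [ [ "# The two Shapley components should add up to the total index difference.\ntotal_diff = G_la.statistic - G_ny.statistic\nshapley_sum = DS_composition.c_s + DS_composition.c_a\nprint('index difference:', round(total_diff, 5))\nprint('c_s + c_a       :', round(shapley_sum, 5))\nassert abs(total_diff - shapley_sum) < 1e-6", "_____no_output_____" ] ] ]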
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ] ]
4a5ef6bb17bdbe770be95a752467c1d63cfe690f
30,029
ipynb
Jupyter Notebook
rendering/PAG_sequenced_cells_thesis_plots.ipynb
opavon/PAG_brainrender
341b2257018508c3790a868e5f184eb7958324c6
[ "MIT" ]
null
null
null
rendering/PAG_sequenced_cells_thesis_plots.ipynb
opavon/PAG_brainrender
341b2257018508c3790a868e5f184eb7958324c6
[ "MIT" ]
null
null
null
rendering/PAG_sequenced_cells_thesis_plots.ipynb
opavon/PAG_brainrender
341b2257018508c3790a868e5f184eb7958324c6
[ "MIT" ]
null
null
null
50.983022
548
0.636718
[ [ [ "# Visualising PAG neurons in CCF space\nIn this notebook we will load the .csv file containing the metadata from our PAG_scRNAseq project and use the CCF coordinates obtained after registration with Sharp-Track to visualise our sequenced cells with Brainrender. We will also write some code to generate some figures for the thesis.", "_____no_output_____" ], [ "### 1 | Import packages and set defaults", "_____no_output_____" ] ], [ [ "import brainrender\nfrom brainrender import Scene, Animation\nfrom brainrender.actors import Points\nfrom vedo import settings as vsettings\nfrom brainrender.video import VideoMaker\nimport pandas as pd # used to load the cwv\nimport numpy as np # used to set beginning and end of a custom slice\nfrom vedo import embedWindow, Plotter, show # <- this will be used to render an embedded scene \n\n\n# // DEFAULT SETTINGS //\n# You can see all the default settings here: https://github.com/brainglobe/brainrender/blob/19c63b97a34336898871d66fb24484e8a55d4fa7/brainrender/settings.py\n\n# --------------------------- brainrender settings --------------------------- #\n# Change some of the default settings\nbrainrender.settings.BACKGROUND_COLOR = \"white\" # color of the background window (defaults to \"white\", try \"blackboard\")\nbrainrender.settings.DEFAULT_ATLAS = \"allen_mouse_25um\" # default atlas\nbrainrender.settings.DEFAULT_CAMERA = \"three_quarters\" # Default camera settings (orientation etc. see brainrender.camera.py)\nbrainrender.settings.INTERACTIVE = False # rendering interactive ?\nbrainrender.settings.LW = 2 # e.g. for silhouettes\nbrainrender.settings.ROOT_COLOR = [0.4, 0.4, 0.4] # color of the overall brain model's actor (defaults to [0.8, 0.8, 0.8])\nbrainrender.settings.ROOT_ALPHA = 0.2 # transparency of the overall brain model's actor (defaults to 0.2)\nbrainrender.settings.SCREENSHOT_SCALE = 1 # values >1 yield higher resolution screenshots\nbrainrender.settings.SHADER_STYLE = \"cartoon\" # affects the look of rendered brain regions, values can be: [\"metallic\", \"plastic\", \"shiny\", \"glossy\", \"cartoon\"] and can be changed in interactive mode\nbrainrender.settings.SHOW_AXES = False\nbrainrender.settings.WHOLE_SCREEN = True # If true render window is full screen\nbrainrender.settings.OFFSCREEN = False\n\n# ------------------------------- vedo settings ------------------------------ #\n# For transparent background with screenshots\nvsettings.screenshotTransparentBackground = True # vedo for transparent bg\nvsettings.useFXAA = False # This needs to be false for transparent bg\n\n\n# // SET PARAMETERS //\n# Save folder\nsave_folder = r\"D:\\Dropbox (UCL)\\Project_transcriptomics\\analysis\\PAG_scRNAseq_brainrender\\output\"", "_____no_output_____" ] ], [ [ "#### 1.1 | Check atlas availability\nBrainrender integrates several atlases that can be used to visualise and explore brain anatomy. 
We can check which atlases are available, take a look at the ones we have already downloaded, and render a basic scene with the axis to get an idea of the overall picture.", "_____no_output_____" ] ], [ [ "# Run this to get a list of the available atlases:\nfrom bg_atlasapi import show_atlases\nshow_atlases()", "_____no_output_____" ], [ "# Find the dimensions of an atlas:\nfrom bg_atlasapi.bg_atlas import BrainGlobeAtlas\natlas = BrainGlobeAtlas(\"allen_mouse_25um\")\nreference_image = atlas.reference\nprint(reference_image.shape)", "_____no_output_____" ] ], [ [ "Currently, among the atlases we can use for mouse data:\n* allen_mouse_10um_v1.2 - Volume dimension of \\[AP, SI, LR\\] equivalent to \\[1320, 800, 1140\\]\n* allen_mouse_25um_v1.2 - Volume dimension of \\[AP, SI, LR\\] equivalent to \\[528, 320, 456\\] (default atlas)\n* kim_mouse_10um_v1.0 - Volume dimension of \\[AP, SI, LR\\] equivalent to \\[1320, 800, 1140\\]\n* kim_unified_25um_v1.0 - Volume dimension of \\[AP, SI, LR\\] equivalent to \\[528, 320, 456\\]\n* kim_unified_50um_v1.0 - Volume dimension of \\[AP, SI, LR\\] equivalent to \\[264, 160, 228\\]\n* osten_mouse_10um_v1.1 - Volume dimension of \\[AP, SI, LR\\] equivalent to \\[1320, 800, 1140\\]\n\nThe CCF coordinates we obtained from Sharp-Track are at 10um resolution, with a Volume dimension of \\[AP, SI, LR\\] equivalent to \\[1320, 800, 1140\\]", "_____no_output_____" ] ], [ [ "embedWindow(None) # <- this will make your scene popup\n\n# Create a scene with the with the preferred atlas and check the dimensions\nbrainrender.settings.SHOW_AXES = True\nscene = Scene(root = True, atlas_name = 'allen_mouse_25um', inset = False, title = 'allen_mouse_25um', screenshots_folder = save_folder, plotter = None)\n\nscene.render(interactive = True, camera = \"sagittal\", zoom = 1) # make sure to press 'Esc' to close not 'q' or kernel dies", "_____no_output_____" ] ], [ [ "### 2 | Load metadata including CCF coordinates\nOther options to load registered points include `add_cells_from_file` and `get_probe_points_from_sharptrack`.", "_____no_output_____" ] ], [ [ "# Load data\npag_data = pd.read_csv(\"D:\\\\Dropbox (UCL - SWC)\\\\Project_transcriptomics\\\\analysis\\\\PAG_scRNAseq_brainrender\\\\PAG_scRNAseq_metadata_211018.csv\")\n\n# Look at the first 5 rows of the metadata\npag_data.head()", "_____no_output_____" ], [ "# List all column names in data\npag_data.columns", "_____no_output_____" ] ], [ [ "#### 2.1 | Scaling up coordinates\nThe CCF coordinates were obtained by registering images to the Allen Brain Atlas using Sharp-Track (see Shamash et al. bioRxiv 2018 and https://github.com/cortex-lab/allenCCF), which yields coordinates at a resolution of 10 micrometers. In the Allen Brain Atlas, a point at coordinates \\[1, 0, 0\\] is at 10um from the origin (in other words, 1 unit of the atlas space equals 10um). 
However, BrainRender's space is at 1um resolution, so the first thing we need to do is to scale up the coordinate values by 10 to get them to match correctly.", "_____no_output_____" ] ], [ [ "# Inspect the CCF coordinates for each cell\npag_data[[\"cell.id\", \"cell.type\", \"PAG.area\", \"CCF.AllenAP\", \"CCF.AllenDV\", \"CCF.AllenML\"]]", "_____no_output_____" ], [ "# Scale up data\nsharptrack_to_brainrender_scale_factor = 10 # Sharp-Track uses a 10um reference atlas so the coordinates need to be scaled to match brainrender's\n\npag_data[\"CCF.AllenAP\"] *= sharptrack_to_brainrender_scale_factor\npag_data[\"CCF.AllenDV\"] *= sharptrack_to_brainrender_scale_factor\npag_data[\"CCF.AllenML\"] *= sharptrack_to_brainrender_scale_factor\n\npag_data[[\"cell.id\", \"cell.type\", \"CCF.AllenAP\", \"CCF.AllenDV\", \"CCF.AllenML\"]]", "_____no_output_____" ] ], [ [ "### 3 | Rendering cells with Brainrender\nNow that we have the coordinates at the right scale, we can render our cells in Brainrender and colour them according to any metadata we want. We will prepare different renderings in each of the following code chunks.", "_____no_output_____" ], [ "#### 3.1 | Selecting a subset of cells\nWe can first take a look at how to use the metadata to select subsets of cells. This will be useful to either render just some of the cells, or to color cells based on some metadata info.", "_____no_output_____" ] ], [ [ "# We can subset cells using any criteria we want. For instance, let's keep cells coming from each hemisphere in separate variables:\ncells_hemisphere_right = pag_data.loc[pag_data[\"PAG.hemisphere\"] == \"right\"]\ncells_hemisphere_left = pag_data.loc[pag_data[\"PAG.hemisphere\"] == \"left\"]\ncells_hemisphere_right.head()", "_____no_output_____" ], [ "# We can also use multiple criteria at the same time, such as hemisphere and cell type:\nvgat_cells_hemisphere_left = pag_data.loc[(pag_data[\"PAG.hemisphere\"] == \"left\")&(pag_data[\"cell.type\"] == \"VGAT\")]\nvgat_cells_hemisphere_left.head()", "_____no_output_____" ] ], [ [ "We can now render a scene adding each subset independently, so we can tweak variables such as color or size separately. However, brainrender requires an array of 3 values as coordinates, so we need to get the values out instead of subsetting the dataframe and providing it as input when creating a scene: ", "_____no_output_____" ] ], [ [ "column_names = [\"CCF.AllenAP\", \"CCF.AllenDV\", \"CCF.AllenML\"] # name of the columns containing the CCF coordinates\ncells_hemisphere_right[column_names].head().values # brainrender needs a numpy array as coordinates. 
Without the .values we get a dataframe.", "_____no_output_____" ] ], [ [ "Now we can render a scene and color the cells on the right and left hemisphere differently:", "_____no_output_____" ] ], [ [ "embedWindow(None) # <- this will make your scene popup\nbrainrender.settings.SHOW_AXES = False # Set this back to False\n\n# Create a variable containing the XYZ coordinates of the cells.\ncolumn_names = [\"CCF.AllenAP\", \"CCF.AllenDV\", \"CCF.AllenML\"] # name of the columns containing the CCF coordinates\n\n# // CREATE SCENE //\nscene = Scene(root = True, atlas_name = 'allen_mouse_25um', inset = False, title = 'Aspirated Cells', screenshots_folder = save_folder, plotter = None)\n\n# // ADD REGIONS AND CELLS//\nscene.add_brain_region(\"PAG\", alpha = 0.1, color = \"darkgoldenrod\", silhouette = None, hemisphere = \"both\")\nscene.add(Points(cells_hemisphere_right[column_names].values, name = \"right hemisphere\", colors = \"salmon\", alpha = 1, radius = 20, res = 16))\nscene.add(Points(cells_hemisphere_left[column_names].values, name = \"left hemisphere\", colors = \"skyblue\", alpha = 1, radius = 20, res = 16))\n\n# // RENDER INTERACTIVELY //\n# Render interactively. You can press \"s\" to take a screenshot\nscene.render(interactive = True, camera = \"sagittal\", zoom = 1) # make sure to press 'Esc' to close, not 'q', or the kernel dies", "_____no_output_____" ] ], [ [ "### 4 | Rendering VGAT and VGluT2 cells for Chapter 2\nWe can follow the strategy above and assign each cell type to a different variable and render them separately.", "_____no_output_____" ], [ "#### 4.1 | Plot brain, PAG, and aspirated cells\n\nWe have increased the screenshot resolution, changed the save folder, and increased the radius of the cells so they are visible. \n\nWe will use \"plastic\" as shader style in this overview figure, have the axes so we can draw a scale bar, and render using the Top camera and a 1.1x zoom.\n\nOnce rendered, save a screenshot by pressing \"s\".", "_____no_output_____" ] ], [ [ "embedWindow(None) # <- this will make your scene popup\nbrainrender.settings.SHOW_AXES = True\nbrainrender.settings.SCREENSHOT_SCALE = 2 # values >1 yield higher resolution screenshots\nbrainrender.settings.SHADER_STYLE = \"plastic\" # affects the look of rendered brain regions, values can be: [\"metallic\", \"plastic\", \"shiny\", \"glossy\", \"cartoon\"] and can be changed in interactive mode\nbrainrender.settings.ROOT_COLOR = [0.4, 0.4, 0.4] # color of the overall brain model's actor (defaults to [0.8, 0.8, 0.8])\nbrainrender.settings.ROOT_ALPHA = 0.2 # transparency of the overall brain model's actor (defaults to 0.2)\nsave_folder_thesis = r\"D:\\Dropbox (UCL)\\Project_transcriptomics\\analysis\\PAG_scRNAseq_brainrender\\output\\figures_thesis_chapter_2\"\n\n# Create a variable containing the XYZ coordinates of the cells.\ncolumn_names = [\"CCF.AllenAP\", \"CCF.AllenDV\", \"CCF.AllenML\"] # name of the columns containing the CCF coordinates\n\n# Color cells according to whether they are excitatory or inhibitory:\nvgat_cells = pag_data.loc[pag_data[\"cell.type\"] == \"VGAT\"]\nvglut2_cells = pag_data.loc[pag_data[\"cell.type\"] == \"VGluT2\"]\n\n# // CREATE SCENE //\nscene = Scene(root = True, atlas_name = 'allen_mouse_25um', inset = False, title = None, screenshots_folder = save_folder_thesis, plotter = None)\n\n# // ADD REGIONS AND CELLS//\nscene.add_brain_region(\"PAG\", alpha = 0.2, color = \"darkgoldenrod\", silhouette = None, hemisphere = \"both\")\n#scene.add_brain_region(\"SCm\", alpha = 0.1, color = 
\"olivedrab\", silhouette = None, hemisphere = \"both\")\nscene.add(Points(vgat_cells[column_names].values, name = \"vgat\", colors = \"#FF8080\", alpha = 1, radius = 30, res = 16)) # colour is #FF8080 in figures, but salmon is the same\nscene.add(Points(vglut2_cells[column_names].values, name = \"vglut2\", colors = \"#0F99B2\", alpha = 1, radius = 30, res = 16)) # colour is #0F99B2 in figures, but skyblue looks good here\n\n# // RENDER INTERACTIVELY //\n# Render interactively. You can press \"s\" to take a screenshot\nscene.render(interactive = True, camera = \"top\", zoom = 1.1) # cameras can be \"sagittal\", \"sagittal2\", \"frontal\", \"top\", \"top_side\", \"three_quarters\"", "_____no_output_____" ] ], [ [ "#### 4.2 | Plot PAG and aspirated cells at different angles\n\nNow we want to render and save some images in which we zoom into the PAG to visualise the registered cells. We can color by cell type or by PAG subdivision.", "_____no_output_____" ], [ "Colouring cells by cell_type (VGAT or VGluT2), we will save the following screenshots:\n* Axes True&False, Zoom 4, Top Camera\n* Axes True&False, Zoom 4, Sagittal Camera\n* Axes True&False, Zoom 4, Three quarters Camera", "_____no_output_____" ] ], [ [ "embedWindow(None) # <- this will make your scene popup\nbrainrender.settings.SHOW_AXES = True\nbrainrender.settings.SCREENSHOT_SCALE = 2 # values >1 yield higher resolution screenshots\nbrainrender.settings.SHADER_STYLE = \"cartoon\" # affects the look of rendered brain regions, values can be: [\"metallic\", \"plastic\", \"shiny\", \"glossy\", \"cartoon\"] and can be changed in interactive mode\nbrainrender.settings.ROOT_COLOR = [0.4, 0.4, 0.4] # color of the overall brain model's actor (defaults to [0.8, 0.8, 0.8])\nbrainrender.settings.ROOT_ALPHA = 0.2 # transparency of the overall brain model's actor (defaults to 0.2)\nsave_folder_thesis = r\"D:\\Dropbox (UCL)\\Project_transcriptomics\\analysis\\PAG_scRNAseq_brainrender\\output\\figures_thesis_chapter_2\"\n\n# Create a variable containing the XYZ coordinates of the cells.\ncolumn_names = [\"CCF.AllenAP\", \"CCF.AllenDV\", \"CCF.AllenML\"] # name of the columns containing the CCF coordinates\n\n# Color cells according to whether they are excitatory or inhibitory:\nvgat_cells = pag_data.loc[pag_data[\"cell.type\"] == \"VGAT\"]\nvglut2_cells = pag_data.loc[pag_data[\"cell.type\"] == \"VGluT2\"]\n\n# // CREATE SCENE //\nscene = Scene(root = False, atlas_name = 'allen_mouse_25um', inset = False, title = None, screenshots_folder = save_folder_thesis, plotter = None)\n\n# // ADD REGIONS AND CELLS//\nscene.add_brain_region(\"PAG\", alpha = 0.2, color = \"darkgoldenrod\", silhouette = None, hemisphere = \"both\")\n#scene.add_brain_region(\"SCm\", alpha = 0.1, color = \"olivedrab\", silhouette = None, hemisphere = \"both\")\nscene.add(Points(vgat_cells[column_names].values, name = \"vgat\", colors = \"#FF8080\", alpha = 1, radius = 20, res = 16)) # colour is #FF8080 in figures, but salmon is the same\nscene.add(Points(vglut2_cells[column_names].values, name = \"vglut2\", colors = \"#0F99B2\", alpha = 1, radius = 20, res = 16)) # colour is #0F99B2 in figures, but skyblue looks good here\n\n# // RENDER INTERACTIVELY //\n# Render interactively. 
You can press \"s\" to take a screenshot\nscene.render(interactive = True, camera = \"top\", zoom = 4) # cameras can be \"sagittal\", \"sagittal2\", \"frontal\", \"top\", \"top_side\", \"three_quarters\"", "_____no_output_____" ] ], [ [ "Colouring cells by cell_type (VGAT or VGluT2), we will save the following screenshots:\n* Axes True&False, Zoom 4, Frontal Camera, sliced", "_____no_output_____" ] ], [ [ "embedWindow(None) # <- this will make your scene popup\nbrainrender.settings.SHOW_AXES = True\nbrainrender.settings.SCREENSHOT_SCALE = 2 # values >1 yield higher resolution screenshots\nbrainrender.settings.SHADER_STYLE = \"cartoon\" # affects the look of rendered brain regions, values can be: [\"metallic\", \"plastic\", \"shiny\", \"glossy\", \"cartoon\"] and can be changed in interactive mode\nbrainrender.settings.ROOT_COLOR = [0.4, 0.4, 0.4] # color of the overall brain model's actor (defaults to [0.8, 0.8, 0.8])\nbrainrender.settings.ROOT_ALPHA = 0.2 # transparency of the overall brain model's actor (defaults to 0.2)\nsave_folder_thesis = r\"D:\\Dropbox (UCL)\\Project_transcriptomics\\analysis\\PAG_scRNAseq_brainrender\\output\\figures_thesis_chapter_2\"\n\n# Create a variable containing the XYZ coordinates of the cells.\ncolumn_names = [\"CCF.AllenAP\", \"CCF.AllenDV\", \"CCF.AllenML\"] # name of the columns containing the CCF coordinates\n\n# Color cells according to whether they are excitatory or inhibitory:\nvgat_cells = pag_data.loc[pag_data[\"cell.type\"] == \"VGAT\"]\nvglut2_cells = pag_data.loc[pag_data[\"cell.type\"] == \"VGluT2\"]\n\n# // CREATE SCENE //\nscene = Scene(root = False, atlas_name = 'allen_mouse_25um', inset = False, title = None, screenshots_folder = save_folder_thesis, plotter = None)\n\n# // ADD REGIONS AND CELLS//\npag = scene.add_brain_region(\"PAG\", alpha = 0.2, color = \"darkgoldenrod\", silhouette = None, hemisphere = \"both\")\n#scene.add_brain_region(\"SCm\", alpha = 0.1, color = \"olivedrab\", silhouette = None, hemisphere = \"both\")\nscene.add(Points(vgat_cells[column_names].values, name = \"vgat\", colors = \"#FF8080\", alpha = 1, radius = 20, res = 16)) # colour is #FF8080 in figures, but salmon is the same\nscene.add(Points(vglut2_cells[column_names].values, name = \"vglut2\", colors = \"#0F99B2\", alpha = 1, radius = 20, res = 16)) # colour is #0F99B2 in figures, but skyblue looks good here\n\n# // SLICE SCENE //\nslice_start = pag.centerOfMass() + np.array([-150, 0, 0]) # X microns from center of mass towards the nose (if positive) or cerebellum (if negative)\nslice_end = pag.centerOfMass() + np.array([+800, 0, 0]) # X microns from center of mass towards the nose (if adding) or cerebellum (if subtracting)\n\nfor p, n in zip((slice_start, slice_end), (1, -1)):\n plane = scene.atlas.get_plane(pos = p, norm = (n, 0, 0))\n scene.slice(plane, actors = pag, close_actors = True)\n\n# // RENDER INTERACTIVELY //\n# Render interactively. 
You can press \"s\" to take a screenshot\nscene.render(interactive = True, camera = \"frontal\", zoom = 4) # cameras can be \"sagittal\", \"sagittal2\", \"frontal\", \"top\", \"top_side\", \"three_quarters\"", "_____no_output_____" ] ], [ [ "Colouring cells by PAG subdivision, we will save the following screenshots:\n* Axes False, Zoom 4, Top Camera\n* Axes False, Zoom 4, Sagittal Camera\n* Axes False, Zoom 4, Three quarters Camera", "_____no_output_____" ] ], [ [ "embedWindow(None) # <- this will make your scene popup\nbrainrender.settings.SHOW_AXES = False # Set this back to False\nbrainrender.settings.SCREENSHOT_SCALE = 2 # values >1 yield higher resolution screenshots\nbrainrender.settings.SHADER_STYLE = \"cartoon\" # affects the look of rendered brain regions, values can be: [\"metallic\", \"plastic\", \"shiny\", \"glossy\", \"cartoon\"] and can be changed in interactive mode\nbrainrender.settings.ROOT_COLOR = [0.4, 0.4, 0.4] # color of the overall brain model's actor (defaults to [0.8, 0.8, 0.8])\nbrainrender.settings.ROOT_ALPHA = 0.2 # transparency of the overall brain model's actor (defaults to 0.2)\nsave_folder_thesis = r\"D:\\Dropbox (UCL)\\Project_transcriptomics\\analysis\\PAG_scRNAseq_brainrender\\output\\figures_thesis_chapter_2\"\n\n# Create a variable containing the XYZ coordinates of the cells.\ncolumn_names = [\"CCF.AllenAP\", \"CCF.AllenDV\", \"CCF.AllenML\"] # name of the columns containing the CCF coordinates\n\n# Color cells according to their subdivision:\ndmpag_cells = pag_data.loc[pag_data[\"PAG.area\"] == \"dmpag\"]\ndlpag_cells = pag_data.loc[pag_data[\"PAG.area\"] == \"dlpag\"]\nlpag_cells = pag_data.loc[pag_data[\"PAG.area\"] == \"lpag\"]\nvlpag_cells = pag_data.loc[pag_data[\"PAG.area\"] == \"vlpag\"]\n\n# // CREATE SCENE //\nscene = Scene(root = False, atlas_name = 'allen_mouse_25um', inset = False, title = None, screenshots_folder = save_folder_thesis, plotter = None)\n\n# // ADD REGIONS AND CELLS//\nscene.add_brain_region(\"PAG\", alpha = 0.2, color = \"darkgoldenrod\", silhouette = None, hemisphere = \"both\")\n#scene.add_brain_region(\"SCm\", alpha = 0.1, color = \"olivedrab\", silhouette = None, hemisphere = \"both\")\nscene.add(Points(dmpag_cells[column_names].values, name = \"dmpag\", colors = \"cornflowerblue\", alpha = 1, radius = 20, res = 16))\nscene.add(Points(dlpag_cells[column_names].values, name = \"dlpag\", colors = \"darkorange\", alpha = 1, radius = 20, res = 16))\nscene.add(Points(lpag_cells[column_names].values, name = \"lpag\", colors = \"forestgreen\", alpha = 1, radius = 20, res = 16))\nscene.add(Points(vlpag_cells[column_names].values, name = \"vlpag\", colors = \"firebrick\", alpha = 1, radius = 20, res = 16))\n\n# // RENDER INTERACTIVELY //\n# Render interactively. 
You can press \"s\" to take a screenshot\nscene.render(interactive = True, camera = \"top\", zoom = 4) # choose one of the cameras: sagittal, sagittal2, frontal, top, top_side, three_quarters", "_____no_output_____" ] ], [ [ "Colouring cells by PAG subdivision, we will save the following screenshots:\n* Axes False, Zoom 4, Frontal Camera, sliced", "_____no_output_____" ] ], [ [ "embedWindow(None) # <- this will make your scene popup\nbrainrender.settings.SHOW_AXES = False # Set this back to False\nbrainrender.settings.SCREENSHOT_SCALE = 2 # values >1 yield higher resolution screenshots\nbrainrender.settings.SHADER_STYLE = \"cartoon\" # affects the look of rendered brain regions, values can be: [\"metallic\", \"plastic\", \"shiny\", \"glossy\", \"cartoon\"] and can be changed in interactive mode\nbrainrender.settings.ROOT_COLOR = [0.4, 0.4, 0.4] # color of the overall brain model's actor (defaults to [0.8, 0.8, 0.8])\nbrainrender.settings.ROOT_ALPHA = 0.2 # transparency of the overall brain model's actor (defaults to 0.2)\nsave_folder_thesis = r\"D:\\Dropbox (UCL)\\Project_transcriptomics\\analysis\\PAG_scRNAseq_brainrender\\output\\figures_thesis_chapter_2\"\n\n# Create a variable containing the XYZ coordinates of the cells.\ncolumn_names = [\"CCF.AllenAP\", \"CCF.AllenDV\", \"CCF.AllenML\"] # name of the columns containing the CCF coordinates\n\n# Color cells according to their subdivision:\ndmpag_cells = pag_data.loc[pag_data[\"PAG.area\"] == \"dmpag\"]\ndlpag_cells = pag_data.loc[pag_data[\"PAG.area\"] == \"dlpag\"]\nlpag_cells = pag_data.loc[pag_data[\"PAG.area\"] == \"lpag\"]\nvlpag_cells = pag_data.loc[pag_data[\"PAG.area\"] == \"vlpag\"]\n\n# // CREATE SCENE //\nscene = Scene(root = False, atlas_name = 'allen_mouse_25um', inset = False, title = None, screenshots_folder = save_folder_thesis, plotter = None)\n\n# // ADD REGIONS AND CELLS//\npag = scene.add_brain_region(\"PAG\", alpha = 0.2, color = \"darkgoldenrod\", silhouette = None, hemisphere = \"both\")\n#scene.add_brain_region(\"SCm\", alpha = 0.1, color = \"olivedrab\", silhouette = None, hemisphere = \"both\")\nscene.add(Points(dmpag_cells[column_names].values, name = \"dmpag\", colors = \"cornflowerblue\", alpha = 1, radius = 20, res = 16))\nscene.add(Points(dlpag_cells[column_names].values, name = \"dlpag\", colors = \"darkorange\", alpha = 1, radius = 20, res = 16))\nscene.add(Points(lpag_cells[column_names].values, name = \"lpag\", colors = \"forestgreen\", alpha = 1, radius = 20, res = 16))\nscene.add(Points(vlpag_cells[column_names].values, name = \"vlpag\", colors = \"firebrick\", alpha = 1, radius = 20, res = 16))\n\n# // SLICE SCENE //\nslice_start = pag.centerOfMass() + np.array([-150, 0, 0]) # X microns from center of mass towards the nose (if positive) or cerebellum (if negative)\nslice_end = pag.centerOfMass() + np.array([+800, 0, 0]) # X microns from center of mass towards the nose (if adding) or cerebellum (if subtracting)\n\nfor p, n in zip((slice_start, slice_end), (1, -1)):\n plane = scene.atlas.get_plane(pos = p, norm = (n, 0, 0))\n scene.slice(plane, actors = pag, close_actors = True)\n\n# // RENDER INTERACTIVELY //\n# Render interactively. 
You can press \"s\" to take a screenshot\nscene.render(interactive = True, camera = \"frontal\", zoom = 4) # choose one of the cameras: sagittal, sagittal2, frontal, top, top_side, three_quarters", "_____no_output_____" ], [ "# Get the hex codes for colors\nimport matplotlib\nprint(matplotlib.colors.to_hex(\"cornflowerblue\")) # #6495ed\nprint(matplotlib.colors.to_hex(\"darkorange\")) # #ff8c00\nprint(matplotlib.colors.to_hex(\"forestgreen\")) # #228b22\nprint(matplotlib.colors.to_hex(\"firebrick\")) # #b22222", "_____no_output_____" ] ] ]
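[ [ [ "As a side note, the 10x scaling from Section 2.1 can be wrapped in a small helper so it is applied exactly once. This is an illustrative sketch rather than part of the original analysis: the column names match the metadata file used above, and it should be run on a freshly loaded, unscaled table (i.e. before the in-place `*= 10` cells).", "_____no_output_____" ] ], [ [ "import numpy as np\n\ndef sharptrack_to_brainrender(df, scale=10, cols=('CCF.AllenAP', 'CCF.AllenDV', 'CCF.AllenML')):\n    # Sharp-Track CCF coordinates are in 10 um atlas units; brainrender works in\n    # 1 um units, so multiply by scale (default 10). The input df is not modified.\n    return df.loc[:, list(cols)].to_numpy(dtype=float) * scale\n\n# Hypothetical usage on a freshly loaded, unscaled metadata table:\n# coords = sharptrack_to_brainrender(pag_data)\n# scene.add(Points(coords, name='cells', radius=20))", "_____no_output_____" ] ] ]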
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
4a5f0650d3998056aad183a2b0f71ce3ccf4bbe2
8,164
ipynb
Jupyter Notebook
PhotoZ_images.ipynb
john-judge/PhotoZ_Image
1525beb8adb8eb7e9f0ddd24ebadd41bad4154fe
[ "MIT" ]
null
null
null
PhotoZ_images.ipynb
john-judge/PhotoZ_Image
1525beb8adb8eb7e9f0ddd24ebadd41bad4154fe
[ "MIT" ]
null
null
null
PhotoZ_images.ipynb
john-judge/PhotoZ_Image
1525beb8adb8eb7e9f0ddd24ebadd41bad4154fe
[ "MIT" ]
null
null
null
32.141732
126
0.490568
[ [ [ "import numpy as np\nfrom matplotlib import pyplot as plt\n%matplotlib", "Using matplotlib backend: Qt5Agg\n" ], [ "# if you are plotting at the rig computer and want to plot the last debugging\n# run images, set this to True.\nplot_at_rig = True \nprocessed_is_CDS_subtracted = True # whether to halve the processed_img size\n\n# to help explore possible settings\nreshape_factor = 2 # width is divided by this, height is multiplied by this\n\n# set to False if the image was already remapped in PhotoZ\nremap_quadrants = False \n\n############ COMMON TO CHANGE: ###################\n\nprog = 7 # sizes from 0-smallest to 7-largest\n\n# which images to plot\nplot_all = False\n\n# Used if plot_all is False\nimage_no_selected = [355] # [0, 150, 350]", "_____no_output_____" ], [ "path = \"C:\\\\Users\\\\RedShirtImaging\\\\Desktop\\\\PhotoZ_Jan2021\\\\PhotoZ_upgrades\\\\PhotoZ\\\\\"\n\nif not plot_at_rig:\n\n    path = \".\\\\data\\\\\"\n\n    \n    plot_all = True\n    date = \"6-29-21\"\n    path += date + \"\\\\prog\" + str(prog) + \"\\\\\"\n    \n# (internal) dimensions of the quadrants\n'''\n(= 7 - m_program in PhotoZ)\n\"DM2K_2048x512.cfg\", 7 \"200 Hz 2048x1024\"\n\"DM2K_2048x50.cfg\", 6 \"2000 Hz 2048x100\"\n\"DM2K_1024x160.cfg\", 5 \"1000 Hz 1024x320\"\n\"DM2K_1024x80.cfg\", 4 \"2000 Hz 1024x160\"\n\"DM2K_1024x80.cfg\", 3 \"2000 Hz 512x160\"\n\"DM2K_1024x40.cfg\", 2 \"4000 Hz 512x80\"\n\"DM2K_1024x30.cfg\", 1 \"5000 Hz 256x60\"\n\"DM2K_1024x20.cfg\" 0 \"7500 Hz 256x40\"\n'''\ndimensions = {\n    7 : {'height': 512,\n         'width': 2048 },\n    6 : {'height': 50,\n         'width': 2048 },\n    5 : {'height': 160,\n         'width': 1024 },\n    4 : {'height': 80,\n         'width': 1024 },\n    3 : {'height': 80,\n         'width': 1024},\n    2 : {'height': 40,\n         'width': 1024},\n    1 : {'height': 30,\n         'width': 1024},\n    0 : {'height': 20,\n         'width': 1024}\n}\nheight = int(dimensions[prog]['height'] * reshape_factor)\nwidth = int(dimensions[prog]['width'] // reshape_factor)\nquadrantSize = height * width * 4", "_____no_output_____" ], [ "def load_image(image_version, quadrantSize, height, delim=' ', load_raw=False):\n    height *= 2\n    final = np.genfromtxt(path + image_version + \".txt\", delimiter = delim) # delimiter comes from the delim argument (space-delimited by default)\n    raw_img = None\n    if load_raw:\n        raw = np.genfromtxt(path + \"raw-\" + image_version + \".txt\", delimiter = delim)\n\n        raw_img = raw[:quadrantSize,1].reshape(height,-1)\n    \n    final_size = quadrantSize\n    if processed_is_CDS_subtracted:\n        quadrantSize = quadrantSize // 2\n        \n    final_height = height\n    if not remap_quadrants: # if PhotoZ already moved quadrants\n        height //= 2\n    final_img = final[:final_size,1].reshape(height,-1) # CDS-corrected image is half the size. 
Reset rows were removed\n\n    print(\"Shaping final image to:\",final_size, quadrantSize // height)\n    return raw_img, final_img\n    \ndef plot_image(raw_img, final_img, image_version, image_no, plot_raw=False):\n    fig = plt.figure()\n    \n    n_plots = int(plot_raw) + 1\n\n    ax1 = fig.add_subplot(n_plots, 1, 1)\n    if plot_raw:\n        ax2 = fig.add_subplot(n_plots, 1, 2)\n        ax2.set_title(\"Raw, RLI frame \" + str(image_no))\n        ax2.imshow(raw_img[1:,:], aspect='auto', cmap='jet')\n        fig.subplots_adjust(hspace = 0.6)\n\n    ax1.set_title(\"Processed, RLI frame \" + str(image_no))\n    ax1.imshow(final_img, aspect='auto', cmap='jet')\n\n    plt.show()\n    # save to date-specific directory\n    plt.savefig(path + 'readout-RLI-' + image_version + \".png\")", "_____no_output_____" ], [ "def remapQuadrants(img):\n    # Place second half to the right of the first half\n    h,w = img.shape\n\n    q0 = img[:h//4,:]\n    q1 = img[h//4:h//2,:]\n    q2 = img[h//2:3*h//4,:]\n    q3 = img[3*h//4:,:]\n    img = np.zeros((h//2, w*2))\n    img[:h//4,:w] = q0 # upper left\n    img[h//4:,:w] = q1 # lower left\n    img[:h//4,w:] = q2 # upper right\n    img[h//4:,w:] = q3 # lower right\n    return img\n\ndef normalizeQuadrants(img):\n    h,w = img.shape\n    img[:h//2,:w//2] = (img[:h//2,:w//2] - np.min(img[:h//2,:w//2])) / np.max(img[:h//2,:w//2])\n    img[:h//2,w//2:] = (img[:h//2,w//2:] - np.min(img[:h//2,w//2:])) / np.max(img[:h//2,w//2:])\n    img[h//2:,:w//2] = (img[h//2:,:w//2] - np.min(img[h//2:,:w//2])) / np.max(img[h//2:,:w//2])\n    img[h//2:,w//2:] = (img[h//2:,w//2:] - np.min(img[h//2:,w//2:])) / np.max(img[h//2:,w//2:])\n    return img\n\ndef plot_remapped(raw_img, final_img, image_version, image_no, plot_raw=False):\n    raw_img_2 = remapQuadrants(raw_img)\n    final_img_2 = remapQuadrants(final_img)\n\n    plot_image(raw_img_2, \n               final_img_2, \n               image_version, \n               image_no, \n               plot_raw=plot_raw)", "_____no_output_____" ], [ "for image_no in [0, 150, 350, 355, 450]:\n    if plot_all or (image_no in image_no_selected):\n        try:\n            image_version = \"full-out\" + str(image_no)\n            raw_img, final_img = load_image(image_version, quadrantSize, height)\n            plot_image(raw_img, final_img, image_version, image_no)\n            if remap_quadrants:\n                plot_remapped(raw_img, final_img, image_version, image_no)\n            print(\"Displayed frame \" + str(image_no))\n        except Exception:\n            print(\"Can't plot frame \" + str(image_no))", "Shaping final image to: 409600 2048\nDisplayed frame 355\n" ] ] ]
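[ [ [ "A quick synthetic check of `remapQuadrants` (an illustrative addition; it assumes the function defined above): four 2x4 quadrants stacked vertically in an 8x4 array should come back tiled 2x2 as a 4x8 image, with q0/q2 on the top row and q1/q3 on the bottom row.", "_____no_output_____" ] ], [ [ "import numpy as np\n\n# Four quadrants stacked vertically (rows 0-1 = q0, 2-3 = q1, 4-5 = q2, 6-7 = q3).\ntest = np.arange(32, dtype=float).reshape(8, 4)\nremapped = remapQuadrants(test)\nprint(remapped.shape) # expected: (4, 8)\nassert remapped.shape == (4, 8)\nassert (remapped[:2, :4] == test[0:2, :]).all() # q0 -> upper left\nassert (remapped[:2, 4:] == test[4:6, :]).all() # q2 -> upper right\nassert (remapped[2:, :4] == test[2:4, :]).all() # q1 -> lower left\nassert (remapped[2:, 4:] == test[6:8, :]).all() # q3 -> lower right", "_____no_output_____" ] ] ]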
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code" ] ]
4a5f17d6737ba8110458469f7e4d640e89d74274
24,410
ipynb
Jupyter Notebook
notebooks/01_Beginners_guide/02_DEA.ipynb
aogeodh/aogeodh-cube-in-a-box
01ffbd17bf23a37511d668edc80f1b5c759207e8
[ "MIT" ]
null
null
null
notebooks/01_Beginners_guide/02_DEA.ipynb
aogeodh/aogeodh-cube-in-a-box
01ffbd17bf23a37511d668edc80f1b5c759207e8
[ "MIT" ]
null
null
null
notebooks/01_Beginners_guide/02_DEA.ipynb
aogeodh/aogeodh-cube-in-a-box
01ffbd17bf23a37511d668edc80f1b5c759207e8
[ "MIT" ]
null
null
null
71.794118
646
0.721467
[ [ [ "# Introduction to Digital Earth Australia <img align=\"right\" src=\"../Supplementary_data/dea_logo.jpg\">\n\n* **Acknowledgement**: This notebook was originally created by [Digital Earth Australia (DEA)](https://www.ga.gov.au/about/projects/geographic/digital-earth-australia) and has been modified for use in the EY Data Science Program\n* **Prerequisites**: Users of this notebook should have a basic understanding of:\n    * How to run a [Jupyter notebook](01_Jupyter_notebooks.ipynb)", "_____no_output_____" ], [ "## Background\n[Digital Earth Australia](https://www.ga.gov.au/dea) (DEA) is a digital platform that catalogues large amounts of Earth observation data covering continental Australia.\nIt is underpinned by the [Open Data Cube](https://www.opendatacube.org/) (ODC), an open source software package that has an ever growing number of users, contributors and implementations.\n\nThe ODC and DEA platforms are designed to:\n\n* Catalogue large amounts of Earth observation data\n* Provide a Python based API for high performance querying and data access\n* Give users the ability to easily perform exploratory data analysis\n* Allow scalable continent-scale processing of the stored data\n* Track the provenance of data to allow for quality control and updates\n\nThe DEA program catalogues data from a range of satellite sensors and has adopted processes and terminology that users should be aware of to enable efficient querying and use of the datasets stored within.\nThis notebook introduces these important concepts and forms the basis of understanding for the remainder of the notebooks in this beginner's guide.\nResources to further explore these concepts are recommended at the end of the notebook.", "_____no_output_____" ], [ "## Description\nThis introduction to DEA will briefly introduce the ODC and review the types of data catalogued in the DEA platform.\nIt will also cover commonly-used terminology for measurements within product datasets.\nTopics covered include:\n\n* A brief introduction to the ODC\n* A review of the satellite sensors that provide data to DEA\n* An introduction to analysis ready data and the processes to make it \n* DEA's data naming conventions\n* Coordinate reference scheme\n* Derived products\n    \n***", "_____no_output_____" ], [ "## Open Data Cube\n\n![Open Data Cube logo](../Supplementary_data/02_DEA/odc.png)\n\nThe [Open Data Cube](https://www.opendatacube.org/) (ODC) is an open-source software package for organising and analysing large quantities of Earth observation data.\nAt its core, the Open Data Cube consists of a database where data is stored, along with commands to load, view and analyse that data.\nThis functionality is delivered by the [datacube-core](https://github.com/opendatacube/datacube-core) open-source Python library.\nThe library is designed to enable and support:\n\n* Large-scale workflows on high performance computing infrastructures\n* Exploratory data analysis\n* Cloud-based services\n* Standalone applications\n\nThere are a number of existing implementations of the ODC, including DEA and [Digital Earth Africa](https://www.digitalearthafrica.org/).\nMore information can be found in the [Open Data Cube Manual](https://datacube-core.readthedocs.io/en/latest/index.html).\n", "_____no_output_____" ], [ "## Satellite datasets in DEA\nDigital Earth Australia catalogues data from a range of satellite sensors. 
\nThe earliest datasets of optical satellite imagery in DEA date from 1986.\nDEA includes data from:\n\n* [Landsat 5 TM](https://www.usgs.gov/land-resources/nli/landsat/landsat-5?qt-science_support_page_related_con=0#qt-science_support_page_related_con) (LS5 TM), operational between March 1984 and January 2013\n* [Landsat 7 ETM+](https://www.usgs.gov/land-resources/nli/landsat/landsat-7?qt-science_support_page_related_con=0#qt-science_support_page_related_con) (LS7 ETM+), operational since April 1999\n* [Landsat 8 OLI](https://www.usgs.gov/land-resources/nli/landsat/landsat-8?qt-science_support_page_related_con=0#qt-science_support_page_related_con) (LS8 OLI), operational since February 2013\n* [Sentinel 2A MSI](https://sentinel.esa.int/web/sentinel/missions/sentinel-2) (S2A MSI), operational since June 2015\n* [Sentinel 2B MSI](https://sentinel.esa.int/web/sentinel/missions/sentinel-2) (S2B MSI), operational since March 2017\n\nLandsat missions are jointly operated by the United States Geological Survey (USGS) and the National Aeronautics and Space Administration (NASA).\nSentinel missions are operated by the European Space Agency (ESA).\nOne major difference between the two programs is the spatial resolution: each Landsat pixel represents 30 x 30 m on the ground while each Sentinel-2 pixel represents 10 x 10 m to 60 x 60 m depending on the spectral band.\n\n### Spectral bands\nAll of the datasets listed above are captured by multispectral satellites.\nThis means that the satellites primarily measure light that is reflected from the Earth's surface in discrete sections of the electromagnetic spectrum, known as *spectral bands*. \nFigure 1 shows the spectral bands for recent Landsat and Sentinel-2 sensors, allowing a direct comparison of how each sensor samples the overall electromagnetic spectrum.\nLandsat 5 TM is not displayed in this image; for reference, it measured light in seven bands that covered the same regions as bands 1 to 7 on Landsat 7 ETM+.\n\n![Image](https://prd-wret.s3-us-west-2.amazonaws.com/assets/palladium/production/s3fs-public/styles/full_width/public/thumbnails/image/dmidS2LS7Comparison.png)\n\n> **Figure 1:** The bands that are detected by each of the satellites are shown in the numbered boxes and the width of each box represents the spectral range that band detects. 
\nThe y-axis has no bearing on the comparison of the satellite sensors [[source]](https://directory.eoportal.org/web/eoportal/satellite-missions/l/landsat-9).\n\nFigure 1 highlights that the numbering of the bands relative to the detected wavelengths is inconsistent between sensors.\nAs an example, in the green region of the electromagnetic spectrum (around 560 nm), Landsat 5 TM and Landsat 7 ETM+ detect a wide green region called band 2, where as Landsat 8 OLI detects a slightly narrower region and calls it band 3.\nFinally, Sentinel-2 MSI (A and B) detects a narrow green region but also calls it band 3.\nConsequently, when working with different sensors, it is important to understand the differences in their bands, and any impact this could have on an analysis.\nTo promote awareness of these differences, DEA band naming is based on both the spectral band name and sample region.\nThe naming convention will be covered in more detail in the [DEA band naming conventions section](#DEA-band-naming-conventions).", "_____no_output_____" ], [ "## Analysis Ready Data\n\nDigital Earth Australia produces Analysis Ready Data (ARD) for each of the sensors listed above.\nThe [ARD standard](http://ceos.org/ard/) for satellite data requires that data have undergone a number of processing steps, along with the creation of additional attributes for the data.\nDEA's ARD datasets include the following characteristics:\n\n* **Geometric correction:** This includes establishing ground position, accounting for terrain (orthorectification) and ground control points, and assessing absolute position accuracy. \nGeometric calibration means that imagery is positioned accurately on the Earth's surface and stacked consistently so that sequential observations can be used to track meaningful change over time.\nAdjustments for ground variability typically use a Digital Elevation Model (DEM).\n* **Surface reflectance correction:** This includes adjustments for sensor/instrument gains, biases and offsets, include adjustments for terrain illumination and sensor viewing angle with respect to the pixel position on the surface.\nOnce satellite data is processed to surface reflectance, pixel values from the same sensor can be compared consistently both spatially and over time.\n* **Observation attributes:** Per-pixel metadata such as quality flags and content attribution that enable users to make informed decisions about the suitability of the products for their use. For example, clouds, cloud shadows, missing data, saturation and water are common pixel level attributes.\n* **Metadata:** Dataset metadata including the satellite, instrument, acquisition date and time, spatial boundaries, pixel locations, mode, processing details, spectral or frequency response and grid projection.", "_____no_output_____" ], [ "### Surface reflectance\n\nOptical sensors, such as those on the Landsat and Sentinel-2 satellites, measure light that has come from the sun and been reflected by the Earth's surface.\nThe sensor measures the intensity of light in each of its spectral bands (known as \"radiance\").\nThe intensity of this light is affected by many factors including the angle of the sun relative to the ground, the angle of the sensor relative to the ground, and how the light interacts with the Earth's atmosphere on its way to the sensor. 
\nBecause radiance can be affected by so many factors, it is typically more valuable to determine how much light was originally reflected at the ground level.\nThis is known as bottom-of-atmosphere **surface reflectance**.\nSurface reflectance can be calculated by using robust physical models to correct the observed radiance values based on atmospheric conditions, the angle of the sun, sensor geometry and local topography or terrain.\n\nThere are many approaches to satellite surface reflectance correction and DEA opts to use two: NBAR and NBART.\n**Users will choose which of these measurements to load when querying the DEA datacube and so it is important to understand their major similarities and differences.**\n\n#### NBAR\nNBAR stands for *Nadir-corrected BRDF Adjusted Reflectance*, where BRDF stands for *Bidirectional reflectance distribution function*.\nThe approach involves atmospheric correction to compute bottom-of-atmosphere radiance, and bi-directional reflectance modelling to remove the effects of topography and angular variation in reflectance.\nNBAR can be useful for analyses in extremely flat areas not affected by terrain shadow, and for producing attractive data visualisations that are not affected by NBART's nodata gaps (see below).\n\n#### NBART\nNBART has the same features as NBAR but includes an additional *terrain illumination* reflectance correction; as such, it is considered to be actual surface reflectance, as it takes the surface topography into account.\nTerrain affects optical satellite images in a number of ways; for example, slopes facing the sun receive more sunlight and appear brighter compared to those facing away from the sun.\nTo obtain comparable surface reflectance from satellite images covering hilly areas, it is therefore necessary to process the images to reduce or remove the topographic effect.\nThis correction is performed with a Digital Surface Model (DSM) that has been resampled to the same resolution as the satellite data being corrected.\nNBART is typically the default choice for most analyses as removing terrain illumination and shadows allows changes in the landscape to be compared more consistently across time. \nHowever, it can be prone to distortions in extremely flat areas if noisy elevation values exist in the DSM.\n\n![Comparison between NBAR and NBART](../Supplementary_data/02_DEA/nbar_nbart_animation.gif)\n\n> **Figure 2:** The animation above demonstrates how the NBART correction results in a significantly more two-dimensional-looking image that is less affected by terrain illumination and shadow.\nBlack pixels in the NBART image represent areas of deep terrain shadow that can't be corrected as they're determined not to be viewable by either the sun or the satellite. \nThese are represented by -999 `nodata` values in the data.\n", "_____no_output_____" ], [ "### Observation Attributes\n\nThe *Observation Attributes (OA)* are a suite of measurements included in DEA's analysis ready datasets.\nThey are an assessment of each image pixel to determine if it is an unobscured, unsaturated observation of the Earth's surface, along with whether the pixel is represented in each spectral band. 
\nThe OA product allows users to exclude pixels that do not meet the quality criteria for their analysis.\nThe capacity to automatically exclude such pixels is essential for analysing any change over time, since poor-quality pixels can significantly alter the perceived change.\nThe most common use of OA is for cloud masking, where users can choose to remove images that have too much cloud, or ignore the clouds within each satellite image.\nA demonstration of how to use cloud masking can be found in the [masking data](../Frequently_used_code/Masking_data.ipynb) notebook.\n\nThe OA suite of measurements includes the following observation pixel-based attributes:\n\n* Null pixels\n* Clear pixels\n* Cloud pixels\n* Cloud shadow pixels\n* Snow pixels\n* Water pixels\n* Terrain shaded pixels\n* Spectrally contiguous pixels (i.e. whether a pixel contains data in every spectral band)\n\nAlso included is a range of pixel-based attributes related to the satellite, solar and sensing geometries:\n\n* Solar zenith\n* Solar azimuth\n* Satellite view\n* Incident angle\n* Exiting angle\n* Azimuthal incident\n* Azimuthal exiting\n* Relative azimuth\n* Timedelta\n", "_____no_output_____" ], [ "## Data format\n\n### DEA band naming conventions\n\nTo account for the various available satellite datasets, DEA uses a band naming convention to help distinguish datasets that come from the different sensors. \nThe band names are composed of the applied surface reflectance correction (NBAR or NBART) and the spectral region detected by the satellite. \nThis removes all reference to the sensor band numbering scheme (e.g. [Figure 1](#Spectral-Bands)) and assumes that users understand that the spectral region described by the DEA band name is only approximately the same between sensors, not identical.\n\n**Table 1** summarises the DEA band naming terminology for the spectral regions common to both Landsat and Sentinel, coupled with the corresponding NBAR and NBART band names for the available sensors:\n\n|Spectral region|DEA measurement name (NBAR)|DEA measurement name (NBART)|Landsat 5<br>TM|Landsat 7<br>ETM+|Landsat 8<br>OLI|Sentinel-2A,B<br>MSI|\n|----|----|----|----|----|----|----|\n|Coastal aerosol|nbar_coastal_aerosol|nbart_coastal_aerosol|||1|1|\n|Blue|nbar_blue|nbart_blue|1|1|2|2|\n|Green|nbar_green|nbart_green|2|2|3|3|\n|Red|nbar_red|nbart_red|3|3|4|4|\n|NIR (Near infra-red)|nbar_nir (Landsat)<br>nbar_nir_1 (Sentinel-2)|nbart_nir (Landsat) <br>nbart_nir_1 (Sentinel-2)|4|4|5|8|\n|SWIR 1 (Short wave infra-red 1)|nbar_swir_1 (Landsat) <br>nbar_swir_2 (Sentinel-2) |nbart_swir_1 (Landsat) <br>nbart_swir_2 (Sentinel-2)|5|5|6|11|\n|SWIR 2 (Short wave infra-red 2)|nbar_swir_2 (Landsat) <br>nbar_swir_3 (Sentinel-2) |nbart_swir_2 (Landsat) <br>nbart_swir_3 (Sentinel-2)|7|7|7|12|\n\n> **Note:** Be aware that NIR and SWIR band names differ between Landsat and Sentinel-2 due to the different number of these bands available in Sentinel-2. 
The `nbar_nir` Landsat band corresponds to the spectral region covered by Sentinel-2's `nbar_nir_1` band, the `nbar_swir_1` Landsat band corresponds to Sentinel-2's `nbar_swir_2` band, and the `nbar_swir_2` Landsat band corresponds to Sentinel-2's `nbar_swir_3` band.\n", "_____no_output_____" ], [ "### DEA satellite data projection and holdings\nIn keeping with the practices of the Landsat and Sentinel satellite programs, all DEA satellite datasets are projected using the **Universal Transverse Mercator (UTM)** coordinate reference system.\nThe World Geodetic System 84 (WGS84) ellipsoid is used to model the UTM projection. All data queries default to the WGS84 datum's coordinate reference system unless specified otherwise.\n\nBy default, the spatial extent of the DEA data holdings is approximately the Australian coastal shelf. \nThe actual extent varies based on the sensor and product. \nThe current extents of each DEA product can be viewed using the interactive [DEA Datacube Explorer](http://explorer.sandbox.dea.ga.gov.au/ga_ls8c_ard_3).", "_____no_output_____" ], [ "## Derived products\n\n![DEA products](../Supplementary_data/02_DEA/dea_products.jpg)\n\nIn addition to ARD satellite data, DEA generates a range of products that are derived from Landsat or Sentinel-2 surface reflectance data.\nThese products have been developed to characterise and monitor different aspects of Australia's natural and built environment, such as mapping the distribution of water and vegetation across the landscape through time.\nDerived DEA products include:\n\n* **Water Observations from Space (WOfS):** WOfS is the world's first continent-scale map of surface water and provides images and data showing where water has been seen in Australia from 1987 to the present. This map can be used to better understand where water usually occurs across the continent and to plan water management strategies. \n\n* **Fractional Cover (FC):** Fractional Cover (FC) is a measurement that splits the landscape into three parts, or fractions: green (leaves, grass, and growing crops), brown (branches, dry grass or hay, and dead leaf litter), and bare ground (soil or rock). DEA uses Fractional Cover to characterise every 25 m square of Australia for any point in time from 1987 to today. This measurement can inform a broad range of natural resource management issues. \n\n* **High and Low Tide Composites (HLTC):** The High and Low Tide Composites (HLTC) are imagery mosaics developed to visualise Australia's coasts, estuaries and reefs at low and high tide, whilst removing the influence of noise features such as clouds, breaking water and sun-glint. These products are highly interpretable, and provide a valuable snapshot of the coastline at different biophysical states.\n\n* **Intertidal Extents Model (ITEM):** The Intertidal Extents Model (ITEM) product utilises 30 years of Earth observation data from the Landsat archive to map the extents and topography of Australia's intertidal mudflats, beaches and reefs; the area exposed between high and low tide.\n\n* **National Intertidal Digital Elevation Model (NIDEM):** The National Intertidal Digital Elevation Model (NIDEM) is a national dataset that maps the three-dimensional structure of Australia’s intertidal zone. NIDEM provides a first-of-its-kind source of intertidal elevation data for Australia’s entire coastline. 
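\n\nWhether you are working with ARD or with these derived products, the data is queried through the same `datacube` Python API. The snippet below is a rough, illustrative sketch only: it assumes a working DEA datacube environment (such as the DEA Sandbox), the product name `ga_ls8c_ard_3` is taken from the DEA Datacube Explorer link above, and the spatial and temporal extents are placeholder values. The full workflow is covered in the [Loading data](04_Loading_data.ipynb) notebook.\n\n```python\nimport datacube\n\n# Connect to the datacube; the app name is just a label for this session\ndc = datacube.Datacube(app=\"02_DEA_sketch\")\n\n# Load Landsat 8 ARD using the NBART band names from Table 1\nds = dc.load(\n product=\"ga_ls8c_ard_3\",\n measurements=[\"nbart_red\", \"nbart_green\", \"nbart_blue\"],\n x=(153.30, 153.40), # placeholder longitude range\n y=(-27.60, -27.50), # placeholder latitude range\n time=(\"2018-01\", \"2018-03\"),\n)\n```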
\n\nEach of the products above has dataset-specific naming conventions, measurements, resolutions, data types and coordinate reference systems.\nFor more information about DEA's derived products, refer to the [DEA website](http://www.ga.gov.au/dea/products), the [Content Management Interface](https://cmi.ga.gov.au/) (CMI) containing detailed product metadata, or the \"DEA datasets\" notebooks in this repository.", "_____no_output_____" ], [ "## Recommended next steps\nFor more detailed information on the concepts introduced in this notebook, please see the [DEA User Guide](https://docs.dea.ga.gov.au/index.html#) and [Open Data Cube Manual](https://datacube-core.readthedocs.io/en/latest/).\nFor more information on the development of the DEA platform, please see [Dhu et al. 2017](https://doi.org/10.1080/20964471.2017.1402490).\n\nTo continue with the beginner's guide, the following notebooks are designed to be worked through in the following order:\n\n1. [Jupyter Notebooks](01_Jupyter_notebooks.ipynb)\n2. **Digital Earth Australia (this notebook)**\n3. [Products and Measurements](03_Products_and_measurements.ipynb)\n4. [Loading data](04_Loading_data.ipynb)\n5. [Plotting](05_Plotting.ipynb)\n6. [Performing a basic analysis](06_Basic_analysis.ipynb)\n7. [Introduction to Numpy](07_Intro_to_numpy.ipynb)\n8. [Introduction to Xarray](08_Intro_to_xarray.ipynb)\n9. [Parallel processing with Dask](09_Parallel_processing_with_dask.ipynb)", "_____no_output_____" ], [ "***\n## Additional information\n\n**License:** The code in this notebook is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0). \nDigital Earth Australia data is licensed under the [Creative Commons by Attribution 4.0](https://creativecommons.org/licenses/by/4.0/) license.\n\n**Contact:** If you need assistance, please post a question in the [Troubleshooting EY Data Science Program MS Teams Channel](https://teams.microsoft.com/l/channel/19%3a90804a73cb5a4159a60693c41a8820d2%40thread.tacv2/Troubleshooting?groupId=f6acd945-fed9-4db4-bed8-414988473a36&tenantId=5b973f99-77df-4beb-b27d-aa0c70b8482c), on the [Open Data Cube Slack channel](http://slack.opendatacube.org/), or on the [GIS Stack Exchange](https://gis.stackexchange.com/questions/ask?tags=open-data-cube) using the `open-data-cube` tag (you can view previously asked questions [here](https://gis.stackexchange.com/questions/tagged/open-data-cube)).\nIf you would like to report an issue with this notebook, you can file one on [GitHub](https://github.com/GeoscienceAustralia/dea-notebooks).\n\n**Last modified:** October 2020", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
4a5f26c4b2dde1fec0cf4bb9bafe3fc7a639587e
765,114
ipynb
Jupyter Notebook
output/Movie_Analysis.ipynb
jkeung/Movies_Analysis
5c0ae1f2cf12ab0020f39e014b31aa96790217e5
[ "Apache-2.0" ]
3
2015-10-11T04:46:52.000Z
2018-05-12T12:59:56.000Z
output/Movie_Analysis.ipynb
jkeung/Movies_Analysis
5c0ae1f2cf12ab0020f39e014b31aa96790217e5
[ "Apache-2.0" ]
null
null
null
output/Movie_Analysis.ipynb
jkeung/Movies_Analysis
5c0ae1f2cf12ab0020f39e014b31aa96790217e5
[ "Apache-2.0" ]
null
null
null
579.193036
202,984
0.927413
[ [ [ "# Given a budget of 30 million dollars (or less) and a genre, can I predict gross domestic profit using linear regression?", "_____no_output_____" ] ], [ [ "%matplotlib inline\nimport pickle\nfrom pprint import pprint\nimport pandas as pd\nimport numpy as np\nfrom dateutil.parser import parse\nimport math\n# For plotting\nimport seaborn as sb\nimport matplotlib.pyplot as plt\n\n# For linear regression\nfrom patsy import dmatrices\nfrom patsy import dmatrix\nimport statsmodels.api as sm\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn import cross_validation", "_____no_output_____" ], [ "def perform_linear_regression(df, axes, title):\n plot_data = df.sort('budget', ascending = True)\n \n\n y, X = dmatrices('log_gross ~ budget', data = plot_data, return_type = 'dataframe')\n X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=10)\n \n columns = ['budget']\n #Patsy\n model = sm.OLS(y_train,X_train)\n fitted = model.fit()\n r_squared = fitted.rsquared\n pval = fitted.pvalues\n params = fitted.params\n\n\n\n #Plotting\n axes.plot(X_train[columns], y_train, 'go')\n #axes.plot(X_test[columns], y_test, 'yo')\n #axes.plot(X_test[columns], fitted.predict(X_test), 'ro')\n axes.plot(X[columns], fitted.predict(X), '-')\n \n axes.set_title('{0} (Rsquared = {1:.2f}) p = {2:.2f} m = {3:.2f}'.format(title, r_squared, pval[1], np.exp(params[1])))\n axes.set_xlabel('Budget')\n axes.set_ylabel('ln(Gross)')\n axes.set_ylim(0, 25)\n return None\n\n", "_____no_output_____" ], [ "def perform_linear_regression1(df, axes, title):\n plot_data = df.sort('budget', ascending = True)\n \n\n y, X = dmatrices('log_gross ~ budget', data = plot_data, return_type = 'dataframe')\n X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=10)\n \n columns = ['budget']\n #Patsy\n model = sm.OLS(y_train,X_train)\n fitted = model.fit()\n r_squared = fitted.rsquared\n pval = fitted.pvalues\n params = fitted.params\n #Plotting\n #axes.plot(X_train[columns], y_train, 'go')\n axes.plot(X_test[columns], y_test, 'yo')\n #axes.plot(X_test[columns], fitted.predict(X_test), 'ro')\n \n \n axes.plot(X[columns], fitted.predict(X), '-')\n \n axes.set_title('{0} (Rsquared = {1:.2f}) p = {2:.2f}'.format(title, r_squared, pval[1]))\n axes.set_xlabel('Budget')\n axes.set_ylabel('ln(Gross)')\n axes.set_ylim(0, 25)\n return None\n\n", "_____no_output_____" ], [ "def perform_linear_regression_all(df, axes, title):\n plot_data = df.sort('budget', ascending = True)\n \n\n y, X = dmatrices('log_gross ~ budget', data = plot_data, return_type = 'dataframe')\n X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=10)\n \n columns = ['budget']\n #Patsy\n model = sm.OLS(y_train,X_train)\n fitted = model.fit()\n r_squared = fitted.rsquared\n pval = fitted.pvalues\n params = fitted.params\n \n #Plotting\n axes.plot(X_train[columns], y_train, 'go')\n axes.plot(X_test[columns], y_test, 'yo')\n \n axes.set_title('{0}'.format(title))\n axes.set_xlabel('Budget')\n axes.set_ylabel('ln(Gross)')\n axes.set_ylim(0, 25)\n return None\n\n", "_____no_output_____" ], [ "def create_genre_column(df, genre):\n return df['genre'].apply(lambda x: 1 if genre in x else 0)", "_____no_output_____" ], [ "def get_genre_dataframes(df, genre):\n columns = ['log_gross', 'gross', 'log_budget', 'budget', 'runtime']\n df_out = df.copy()[df[genre] == 1][columns]\n df_out['genre'] = genre\n return df_out",
"_____no_output_____" ] ], [ [ "###Load the movie dictionary", "_____no_output_____" ] ], [ [ "d = pickle.load(open('movie_dictionary.p'))", "_____no_output_____" ], [ "#Create a dataframe \ndf = pd.DataFrame.from_dict(d, orient = 'index')\n", "_____no_output_____" ] ], [ [ "###Clean the data and remove N/A's\nKeep only movies with a positive runtime", "_____no_output_____" ] ], [ [ "df2 = df.copy()\ndf2 = df2[['gross', 'date', 'budget', 'genre', 'runtime']]\ndf2['gross'][df2.gross == 'N/A'] = np.nan\ndf2['budget'][df2.budget == 'N/A'] = np.nan\ndf2['date'][df2.date == 'N/A'] = np.nan\ndf2['genre'][df2.genre == 'N/A'] = np.nan\ndf2['genre'][df2.genre == 'Unknown'] = np.nan\ndf2['runtime'][df2.runtime == 'N/A'] = np.nan\ndf2 = df2[df2.date > parse('01-01-2005').date()]\ndf2 = df2[df2.runtime >= 0]\n#df2 = df2[df2.budget <30]\ndf2 = df2.dropna()", "_____no_output_____" ] ], [ [ "For budget and gross, if data is missing, populate them with the mean of all the movies", "_____no_output_____" ] ], [ [ "#df2['budget'][df2['budget'].isnull()] = df2['budget'].mean()\n#df2['gross'][df2['gross'].isnull()] = df2['gross'].mean()\ndf2['date'] = pd.to_datetime(df2['date'])\ndf2['year'] = df2['date'].apply(lambda x: x.year)\ndf2['gross'] = df2['gross'].astype(float)\ndf2['budget'] = df2['budget'].astype(float)\ndf2['runtime'] = df2['runtime'].astype(float)\ndf2['genre'] = df2['genre'].astype(str)\n", "_____no_output_____" ] ], [ [ "###Create some log columns", "_____no_output_____" ] ], [ [ "df2['log_runtime'] = df2['runtime'].apply(lambda x: np.log(x))\ndf2['log_budget'] = df2['budget'].apply(lambda x: np.log(x))\ndf2['log_gross'] = df2['gross'].apply(lambda x: np.log(x))", "_____no_output_____" ] ], [ [ "###How does the gross and budget data look? (Not normally distributed)", "_____no_output_____" ] ], [ [ "fig, axes = plt.subplots(nrows=1, ncols=2,figsize=(10,5))\ndf2['gross'].plot(ax = axes[0], kind = 'hist', title = 'Gross Histogram')\ndf2['budget'].plot(ax = axes[1], kind = 'hist', title = 'Budget Histogram')", "_____no_output_____" ] ], [ [ "###Looks more normally distributed now!", "_____no_output_____" ] ], [ [ "fig, axes = plt.subplots(nrows=1, ncols=2,figsize=(10,5))\ndf2['log_gross'].plot(ax = axes[0], kind = 'hist', title = 'log(Gross)')\ndf2['budget'].plot(ax = axes[1], kind = 'hist', title = 'Budget')", "_____no_output_____" ] ], [ [ "###Check top grossing genres", "_____no_output_____" ] ], [ [ "df2.groupby('genre')[['gross']].mean().sort('gross', ascending = True).plot(figsize = (10,10), kind = 'barh', legend = False, title = 'Mean Domestic Gross by Genre')", "_____no_output_____" ], [ "test = df2.groupby('genre')[['gross']].mean()\ntest['Count'] = df2.groupby('genre')[['gross']].count()\ntest.sort('gross', ascending = False)", "_____no_output_____" ] ], [ [ "### Check top genres by count", "_____no_output_____" ] ], [ [ "df2.groupby('genre')[['gross']].count().sort('gross', ascending = True).plot(figsize = (10,10), kind = 'barh', legend = False, title = 'Count of Movies by Genre')", "_____no_output_____" ] ], [ [ "###Create categories for top unique grossing genres", "_____no_output_____" ] ], [ [ "genre_list = ['Comedy', 'Drama', 'Horror', 'Romance', 'Thriller', 'Sci-Fi', 'Music', 'Action', 'Adventure', 'Historical', \\\n 'Family', 'War', 'Sports', 'Crime', 'Animation']\n \nfor genre in genre_list:\n df2[genre] = create_genre_column(df2, genre)\n", "_____no_output_____" ] ], [ [ "###Create a new column for genres that concatenates all the individual columns", "_____no_output_____" ] ], 
[ [ "df_comedy = get_genre_dataframes(df2, 'Comedy')\ndf_drama = get_genre_dataframes(df2, 'Drama')\ndf_horror = get_genre_dataframes(df2, 'Horror')\ndf_romance = get_genre_dataframes(df2, 'Romance')\ndf_thriller = get_genre_dataframes(df2, 'Thriller')\ndf_scifi = get_genre_dataframes(df2, 'Sci-Fi')\ndf_music = get_genre_dataframes(df2, 'Music')\ndf_action = get_genre_dataframes(df2, 'Action')\ndf_adventure = get_genre_dataframes(df2, 'Adventure')\ndf_historical = get_genre_dataframes(df2, 'Historical')\ndf_family = get_genre_dataframes(df2, 'Family')\ndf_war = get_genre_dataframes(df2, 'War')\ndf_sports = get_genre_dataframes(df2, 'Sports')\ndf_crime = get_genre_dataframes(df2, 'Crime')\ndf_animation = get_genre_dataframes(df2, 'Animation')", "_____no_output_____" ], [ "final_df = df_comedy.copy()\nfinal_df = final_df.append(df_drama)\nfinal_df = final_df.append(df_horror)\nfinal_df = final_df.append(df_romance)\nfinal_df = final_df.append(df_thriller)\nfinal_df = final_df.append(df_scifi)\nfinal_df = final_df.append(df_music)\nfinal_df = final_df.append(df_action)\nfinal_df = final_df.append(df_adventure)\nfinal_df = final_df.append(df_historical)\nfinal_df = final_df.append(df_family)\nfinal_df = final_df.append(df_war)\nfinal_df = final_df.append(df_sports)\nfinal_df = final_df.append(df_crime)\nfinal_df = final_df.append(df_animation)\nfinal_df[['genre', 'budget', 'log_gross']].head()\n", "_____no_output_____" ], [ "final_df[['log_gross', 'genre']].groupby('genre').count().sort('log_gross', ascending = False).plot(kind = 'bar', legend = False, title = 'Counts of Movies by Genre')", "_____no_output_____" ], [ "temp = final_df[['gross', 'genre']].groupby('genre').mean()\ntemp['Count'] = final_df[['gross', 'genre']].groupby('genre').count()\ntemp.sort('gross',ascending = False)\ntemp = temp.rename(columns={'gross': 'Average Gross'})\ntemp.sort('Average Gross', ascending = False)", "_____no_output_____" ], [ "fig, axes = plt.subplots(nrows=4, ncols=4,figsize=(25,25))\nperform_linear_regression_all(df_comedy, axes[0,0], 'Comedy')\nperform_linear_regression_all(df_horror, axes[0,1], 'Horror')\nperform_linear_regression_all(df_romance, axes[0,2], 'Romance')\nperform_linear_regression_all(df_thriller, axes[0,3], 'Thriller')\nperform_linear_regression_all(df_scifi, axes[1,0], 'Sci_Fi')\nperform_linear_regression_all(df_music, axes[1,1], 'Music')\nperform_linear_regression_all(df_action, axes[1,2], 'Action')\nperform_linear_regression_all(df_adventure, axes[1,3], 'Adventure')\nperform_linear_regression_all(df_historical, axes[2,0], 'Historical')\nperform_linear_regression_all(df_family, axes[2,1], 'Family')\nperform_linear_regression_all(df_war, axes[2,2], 'War')\nperform_linear_regression_all(df_sports, axes[2,3], 'Sports')\nperform_linear_regression_all(df_crime, axes[3,0], 'Crime')\nperform_linear_regression_all(df_animation, axes[3,1], 'Animation')\nperform_linear_regression_all(df_drama, axes[3,2], 'Drama')\n\n", "_____no_output_____" ], [ "fig, axes = plt.subplots(nrows=4, ncols=4,figsize=(25,25))\nperform_linear_regression(df_comedy, axes[0,0], 'Comedy')\nperform_linear_regression(df_horror, axes[0,1], 'Horror')\nperform_linear_regression(df_romance, axes[0,2], 'Romance')\nperform_linear_regression(df_thriller, axes[0,3], 'Thriller')\nperform_linear_regression(df_scifi, axes[1,0], 'Sci_Fi')\nperform_linear_regression(df_music, axes[1,1], 'Music')\nperform_linear_regression(df_action, axes[1,2], 'Action')\nperform_linear_regression(df_adventure, axes[1,3], 
'Adventure')\nperform_linear_regression(df_historical, axes[2,0], 'Historical')\nperform_linear_regression(df_family, axes[2,1], 'Family')\nperform_linear_regression(df_war, axes[2,2], 'War')\nperform_linear_regression(df_sports, axes[2,3], 'Sports')\nperform_linear_regression(df_crime, axes[3,0], 'Crime')\nperform_linear_regression(df_animation, axes[3,1], 'Animation')\nperform_linear_regression(df_drama, axes[3,2], 'Drama')\n", "_____no_output_____" ] ], [ [ "### Linear Regression (held-out test points)", "_____no_output_____" ] ], [ [ "fig, axes = plt.subplots(nrows=4, ncols=4,figsize=(25,25))\nperform_linear_regression1(df_comedy, axes[0,0], 'Comedy')\nperform_linear_regression1(df_horror, axes[0,1], 'Horror')\nperform_linear_regression1(df_romance, axes[0,2], 'Romance')\nperform_linear_regression1(df_thriller, axes[0,3], 'Thriller')\nperform_linear_regression1(df_scifi, axes[1,0], 'Sci-Fi')\nperform_linear_regression1(df_music, axes[1,1], 'Music')\nperform_linear_regression1(df_action, axes[1,2], 'Action')\nperform_linear_regression1(df_adventure, axes[1,3], 'Adventure')\nperform_linear_regression1(df_historical, axes[2,0], 'Historical')\nperform_linear_regression1(df_family, axes[2,1], 'Family')\nperform_linear_regression1(df_war, axes[2,2], 'War')\nperform_linear_regression1(df_sports, axes[2,3], 'Sports')\nperform_linear_regression1(df_crime, axes[3,0], 'Crime')\nperform_linear_regression1(df_animation, axes[3,1], 'Animation')\nperform_linear_regression1(df_drama, axes[3,2], 'Drama')\n\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
4a5f36434cffbd618a37780920395c4f550bb589
195,762
ipynb
Jupyter Notebook
examples/onnx_to_pytorch.ipynb
samgd/gamma
7a22b832cece6acccc4b09c4fc7d1ca53eef1f82
[ "MIT" ]
1
2018-02-05T17:32:39.000Z
2018-02-05T17:32:39.000Z
examples/onnx_to_pytorch.ipynb
samgd/gamma
7a22b832cece6acccc4b09c4fc7d1ca53eef1f82
[ "MIT" ]
2
2018-02-20T17:49:11.000Z
2018-07-17T14:36:07.000Z
examples/onnx_to_pytorch.ipynb
samgd/gamma
7a22b832cece6acccc4b09c4fc7d1ca53eef1f82
[ "MIT" ]
3
2018-02-19T12:36:41.000Z
2018-07-12T08:46:17.000Z
88.821234
2,290
0.577426
[ [ [ "## Import a model from ONNX and run using PyTorch", "_____no_output_____" ], [ "We demonstrate how to import a model from ONNX and convert to PyTorch", "_____no_output_____" ], [ "#### Imports", "_____no_output_____" ] ], [ [ "import os\nimport operator as op\nimport warnings; warnings.simplefilter(action='ignore', category=FutureWarning)\n\n\nimport numpy as np\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\nfrom torch.autograd import Variable\nimport onnx\n\nimport gamma\nfrom gamma import convert, protobuf, utils", "_____no_output_____" ] ], [ [ "#### 1: Download the model", "_____no_output_____" ] ], [ [ "fpath = utils.get_file('https://s3.amazonaws.com/download.onnx/models/squeezenet.tar.gz')\nonnx_model = onnx.load(os.path.join(fpath, 'squeezenet/model.onnx'))\n\ninputs = [i.name for i in onnx_model.graph.input if \n i.name not in {x.name for x in onnx_model.graph.initializer}]\noutputs = [o.name for o in onnx_model.graph.output]", "_____no_output_____" ] ], [ [ "#### 2: Import into Gamma", "_____no_output_____" ] ], [ [ "graph = convert.from_onnx(onnx_model)\nconstants = {k for k, (v, i) in graph.items() if v['type'] == 'Constant'}\nutils.draw(gamma.strip(graph, constants))", "_____no_output_____" ] ], [ [ "#### 3: Convert to PyTorch", "_____no_output_____" ] ], [ [ "make_node = gamma.make_node_attr\ndef torch_padding(params):\n padding = params.get('pads', [0,0,0,0])\n assert (padding[0] == padding[2]) and (padding[1] == padding[3]) \n return (padding[0], padding[1])\n \ntorch_ops = {\n 'Add': lambda params: op.add,\n 'Concat': lambda params: (lambda *xs: torch.cat(xs, dim=params['axis'])),\n 'Constant': lambda params: nn.Parameter(torch.FloatTensor(params['value'])), \n 'Dropout': lambda params: nn.Dropout(params.get('ratio', 0.5)).eval(), #.eval() sets to inference mode. 
where should this logic live?\n 'GlobalAveragePool': lambda params: nn.AdaptiveAvgPool2d(1),\n 'MaxPool': lambda params: nn.MaxPool2d(params['kernel_shape'], stride=params.get('strides', [1,1]),\n padding=torch_padding(params), \n dilation=params.get('dilations', [1,1])),\n 'Mul': lambda params: op.mul,\n 'Relu': lambda params: nn.ReLU(),\n 'Softmax': lambda params: nn.Softmax(dim=params.get('axis', 1)), \n} \n\ndef torch_op(node, inputs):\n if node['type'] in torch_ops:\n op = torch_ops[node['type']](node['params'])\n return make_node('Torch_op', {'op': op}, inputs)\n return (node, inputs)\n\n\ndef torch_conv_node(params, x, w, b):\n ko, ki, kh, kw = w.shape\n group = params.get('group', 1)\n ki *= group\n conv = nn.Conv2d(ki, ko, (kh,kw), \n stride=tuple(params.get('strides', [1,1])),\n padding=torch_padding(params), \n dilation=tuple(params.get('dilations', [1,1])),\n groups=group)\n conv.weight = nn.Parameter(torch.FloatTensor(w))\n conv.bias = nn.Parameter(torch.FloatTensor(b))\n return make_node('Torch_op', {'op': conv}, [x])\n\ndef convert_to_torch(graph):\n v, _ = gamma.var, gamma.Wildcard\n conv_pattern = {\n v('conv'): make_node('Conv', v('params'), [v('x'), v('w'), v('b')]),\n v('w'): make_node('Constant', {'value': v('w_val')}, []),\n v('b'): make_node('Constant', {'value': v('b_val')}, [])\n }\n matches = gamma.search(conv_pattern, graph)\n g = gamma.union(graph, {m[v('conv')]: \n torch_conv_node(m[v('params')], m[v('x')], m[v('w_val')], m[v('b_val')]) \n for m in matches})\n remove = {m[x] for m in matches for x in (v('w'), v('b'))}\n g = {k: torch_op(v, i) for k, (v, i) in g.items() if k not in remove}\n return g\n\ndef torch_graph(graph):\n return gamma.FuncCache(lambda k: graph[k][0]['params']['op'](*[tg[x] for x in graph[k][1]]))", "_____no_output_____" ], [ "g = convert_to_torch(graph)\nutils.draw(g)", "_____no_output_____" ] ], [ [ "#### 4: Load test example and check PyTorch output", "_____no_output_____" ] ], [ [ "def load_onnx_tensor(fname):\n tensor = onnx.TensorProto()\n with open(fname, 'rb') as f:\n tensor.ParseFromString(f.read())\n return protobuf.unwrap(tensor)\n\ninput_0 = load_onnx_tensor(os.path.join(fpath, 'squeezenet/test_data_set_0/input_0.pb'))\noutput_0 = load_onnx_tensor(os.path.join(fpath, 'squeezenet/test_data_set_0/output_0.pb'))", "_____no_output_____" ], [ "tg = torch_graph(g)\ntg[inputs[0]] = Variable(torch.Tensor(input_0))\ntorch_outputs = tg[outputs[0]]\n\nnp.testing.assert_almost_equal(output_0, torch_outputs.data.numpy(), decimal=5)\nprint('Success!')", "Success!\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
4a5f366c1cea6747f62d8479d42ad26edbc2b6c4
28,133
ipynb
Jupyter Notebook
Starter_Code/credit_risk_resampling.ipynb
AntZam1776/Modul_12
7ad318970a2079eec79fe4593187126fe9f33f6e
[ "MIT" ]
null
null
null
Starter_Code/credit_risk_resampling.ipynb
AntZam1776/Modul_12
7ad318970a2079eec79fe4593187126fe9f33f6e
[ "MIT" ]
null
null
null
Starter_Code/credit_risk_resampling.ipynb
AntZam1776/Modul_12
7ad318970a2079eec79fe4593187126fe9f33f6e
[ "MIT" ]
null
null
null
31.017641
411
0.473536
[ [ [ "# Credit Risk Classification\n\nCredit risk poses a classification problem that’s inherently imbalanced. This is because healthy loans easily outnumber risky loans. In this Challenge, you’ll use various techniques to train and evaluate models with imbalanced classes. You’ll use a dataset of historical lending activity from a peer-to-peer lending services company to build a model that can identify the creditworthiness of borrowers.\n\n## Instructions:\n\nThis challenge consists of the following subsections:\n\n* Split the Data into Training and Testing Sets\n\n* Create a Logistic Regression Model with the Original Data\n\n* Predict a Logistic Regression Model with Resampled Training Data \n\n### Split the Data into Training and Testing Sets\n\nOpen the starter code notebook and then use it to complete the following steps.\n\n1. Read the `lending_data.csv` data from the `Resources` folder into a Pandas DataFrame.\n\n2. Create the labels set (`y`) from the “loan_status” column, and then create the features (`X`) DataFrame from the remaining columns.\n\n > **Note** A value of `0` in the “loan_status” column means that the loan is healthy. A value of `1` means that the loan has a high risk of defaulting. \n\n3. Check the balance of the labels variable (`y`) by using the `value_counts` function.\n\n4. Split the data into training and testing datasets by using `train_test_split`.\n\n### Create a Logistic Regression Model with the Original Data\n\nEmploy your knowledge of logistic regression to complete the following steps:\n\n1. Fit a logistic regression model by using the training data (`X_train` and `y_train`).\n\n2. Save the predictions on the testing data labels by using the testing feature data (`X_test`) and the fitted model.\n\n3. Evaluate the model’s performance by doing the following:\n\n * Calculate the accuracy score of the model.\n\n * Generate a confusion matrix.\n\n * Print the classification report.\n\n4. Answer the following question: How well does the logistic regression model predict both the `0` (healthy loan) and `1` (high-risk loan) labels?\n\n### Predict a Logistic Regression Model with Resampled Training Data\n\nDid you notice the small number of high-risk loan labels? Perhaps, a model that uses resampled data will perform better. You’ll thus resample the training data and then reevaluate the model. Specifically, you’ll use `RandomOverSampler`.\n\nTo do so, complete the following steps:\n\n1. Use the `RandomOverSampler` module from the imbalanced-learn library to resample the data. Be sure to confirm that the labels have an equal number of data points. \n\n2. Use the `LogisticRegression` classifier and the resampled data to fit the model and make predictions.\n\n3. Evaluate the model’s performance by doing the following:\n\n * Calculate the accuracy score of the model.\n\n * Generate a confusion matrix.\n\n * Print the classification report.\n \n4. Answer the following question: How well does the logistic regression model, fit with oversampled data, predict both the `0` (healthy loan) and `1` (high-risk loan) labels?\n\n### Write a Credit Risk Analysis Report\n\nFor this section, you’ll write a brief report that includes a summary and an analysis of the performance of both machine learning models that you used in this challenge. You should write this report as the `README.md` file included in your GitHub repository.\n\nStructure your report by using the report template that `Starter_Code.zip` includes, and make sure that it contains the following:\n\n1. 
An overview of the analysis: Explain the purpose of this analysis.\n\n2. The results: Using bulleted lists, describe the balanced accuracy scores and the precision and recall scores of both machine learning models.\n\n3. A summary: Summarize the results from the machine learning models. Compare the two versions of the dataset predictions. Include your recommendation for the model to use, if any, on the original vs. the resampled data. If you don’t recommend either model, justify your reasoning.", "_____no_output_____" ] ], [ [ "# Import the modules\nimport numpy as np\nimport pandas as pd\nfrom pathlib import Path\nfrom sklearn.metrics import balanced_accuracy_score\nfrom sklearn.metrics import confusion_matrix\nfrom imblearn.metrics import classification_report_imbalanced\n\nimport warnings\nwarnings.filterwarnings('ignore')", "_____no_output_____" ] ], [ [ "---", "_____no_output_____" ], [ "## Split the Data into Training and Testing Sets", "_____no_output_____" ], [ "### Step 1: Read the `lending_data.csv` data from the `Resources` folder into a Pandas DataFrame.", "_____no_output_____" ] ], [ [ "# Read the CSV file from the Resources folder into a Pandas DataFrame\nlending_data_df = pd.read_csv(Path(\"Resources/lending_data.csv\"))\n\n# Review the DataFrame\nlending_data_df.head()", "_____no_output_____" ] ], [ [ "### Step 2: Create the labels set (`y`) from the “loan_status” column, and then create the features (`X`) DataFrame from the remaining columns.", "_____no_output_____" ] ], [ [ "# Separate the data into labels and features\n\n# Separate the y variable, the labels\ny = lending_data_df[\"loan_status\"]\n\n# Separate the X variable, the features\nX = lending_data_df.drop(columns=\"loan_status\")", "_____no_output_____" ], [ "# Review the y variable Series\ny", "_____no_output_____" ], [ "# Review the X variable DataFrame\nX", "_____no_output_____" ] ], [ [ "### Step 3: Check the balance of the labels variable (`y`) by using the `value_counts` function.", "_____no_output_____" ] ], [ [ "# Check the balance of our target values\ny.value_counts()", "_____no_output_____" ] ], [ [ "### Step 4: Split the data into training and testing datasets by using `train_test_split`.", "_____no_output_____" ] ], [ [ "# Import the train_test_split function\nfrom sklearn.model_selection import train_test_split\n\n# Split the data using train_test_split\n# Assign a random_state of 1 to the function\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)", "_____no_output_____" ] ], [ [ "---", "_____no_output_____" ], [ "## Create a Logistic Regression Model with the Original Data", "_____no_output_____" ], [ "### Step 1: Fit a logistic regression model by using the training data (`X_train` and `y_train`).", "_____no_output_____" ] ], [ [ "# Import the LogisticRegression module from SKLearn\nfrom sklearn.linear_model import LogisticRegression\n\n# Instantiate the Logistic Regression model\n# Assign a random_state parameter of 1 to the model\nlogistical_regression = LogisticRegression(random_state=1)\n\n# Fit the model using training data\nlogistical_regression.fit(X_train, y_train)", "_____no_output_____" ] ], [ [ "### Step 2: Save the predictions on the testing data labels by using the testing feature data (`X_test`) and the fitted model.", "_____no_output_____" ] ], [ [ "# Make a prediction using the testing data\ny_prediction = logistical_regression.predict(X_test)", "_____no_output_____" ] ], [ [ "### Step 3: Evaluate the model’s performance by doing the following:\n\n* 
Calculate the accuracy score of the model.\n\n* Generate a confusion matrix.\n\n* Print the classification report.", "_____no_output_____" ] ], [ [ "# Print the balanced_accuracy score of the model\nbalanced_accuracy_score_set = balanced_accuracy_score(y_test, y_prediction)*100\nprint(f\"The balanced accuracy score is {(balanced_accuracy_score_set):.2f}%\")", "The balanced accuracy score is 95.20%\n" ], [ "# Generate a confusion matrix for the model\nconfusion_matrix_set = confusion_matrix(y_test, y_prediction)\nprint(confusion_matrix_set)", "[[18663 102]\n [ 56 563]]\n" ], [ "# Print the classification report for the model\nclassification_report = classification_report_imbalanced(y_test, y_prediction)\nprint(classification_report)", "                   pre       rec       spe        f1       geo       iba       sup\n\n          0       1.00      0.99      0.91      1.00      0.95      0.91     18765\n          1       0.85      0.91      0.99      0.88      0.95      0.90       619\n\navg / total       0.99      0.99      0.91      0.99      0.95      0.91     19384\n\n" ] ], [ [ "### Step 4: Answer the following question.", "_____no_output_____" ], [ "**Question:** How well does the logistic regression model predict both the `0` (healthy loan) and `1` (high-risk loan) labels?\n\n**Answer:** It appears that prediction of healthy loans is essentially 100% accurate; however, there is only an 85% precision (per the report above) when predicting a \"1\", or high-risk loan.", "_____no_output_____" ], [ "---", "_____no_output_____" ], [ "## Predict a Logistic Regression Model with Resampled Training Data", "_____no_output_____" ], [ "### Step 1: Use the `RandomOverSampler` module from the imbalanced-learn library to resample the data. Be sure to confirm that the labels have an equal number of data points. ", "_____no_output_____" ] ], [ [ "# Import the RandomOverSampler module from imbalanced-learn\nfrom imblearn.over_sampling import RandomOverSampler\n\n# Instantiate the random oversampler model\n# Assign a random_state parameter of 1 to the model\nrandom_over_sampler = RandomOverSampler(random_state=1)\n\n# Fit the original training data to the random_over_sampler model\nX_train_ros, y_train_ros = random_over_sampler.fit_resample(X_train, y_train)", "_____no_output_____" ], [ "# Count the distinct values of the resampled labels data\ny_train_ros.value_counts()\n", "_____no_output_____" ] ], [ [ "### Step 2: Use the `LogisticRegression` classifier and the resampled data to fit the model and make predictions.", "_____no_output_____" ] ], [ [ "# Instantiate the Logistic Regression model\n# Assign a random_state parameter of 1 to the model\nlog_ros = LogisticRegression(random_state=1)\n\n# Fit the model using the resampled training data\nlog_ros.fit(X_train_ros, y_train_ros)\n\n# Make a prediction using the testing data\ny_prediction_ros = log_ros.predict(X_test)\n", "_____no_output_____" ] ], [ [ "### Step 3: Evaluate the model’s performance by doing the following:\n\n* Calculate the accuracy score of the model.\n\n* Generate a confusion matrix.\n\n* Print the classification report.", "_____no_output_____" ] ], [ [ "# Print the balanced_accuracy score of the model \nbalanced_accuracy_score_ros = balanced_accuracy_score(y_test, y_prediction_ros)*100\nprint(f\"The balanced accuracy score for this is {(balanced_accuracy_score_ros):.2f}%\")", "The balanced accuracy score for this is 99.37%\n" ], [ "# Generate a confusion matrix for the model\nconfusion_matrix_ros = confusion_matrix(y_test, y_prediction_ros)\nprint(confusion_matrix_ros)", "[[18649 116]\n [ 4 615]]\n" ], [ "# Print the classification report for the model\nclassification_report_ros = classification_report_imbalanced(y_test, 
y_prediction_ros)\nprint(classification_report_ros)", "                   pre       rec       spe        f1       geo       iba       sup\n\n          0       1.00      0.99      0.99      1.00      0.99      0.99     18765\n          1       0.84      0.99      0.99      0.91      0.99      0.99       619\n\navg / total       0.99      0.99      0.99      0.99      0.99      0.99     19384\n\n" ] ], [ [ "### Step 4: Answer the following question", "_____no_output_____" ], [ "**Question:** How well does the logistic regression model, fit with oversampled data, predict both the `0` (healthy loan) and `1` (high-risk loan) labels?\n\n**Answer:** It appears that this has 100% accuracy for predicting \"0\", or healthy loans, while there is an 84% precision, again, when predicting \"1\", or high-risk loans. ", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ] ]
4a5f3aee9863316ceb6bf3740258b58fc03976fb
45,278
ipynb
Jupyter Notebook
Google_CoLab_DL_Recommender.ipynb
LeonVillanueva/CoLab
dfae1c35b4a44a8f02dbede09bb3da58d743feab
[ "Unlicense" ]
null
null
null
Google_CoLab_DL_Recommender.ipynb
LeonVillanueva/CoLab
dfae1c35b4a44a8f02dbede09bb3da58d743feab
[ "Unlicense" ]
null
null
null
Google_CoLab_DL_Recommender.ipynb
LeonVillanueva/CoLab
dfae1c35b4a44a8f02dbede09bb3da58d743feab
[ "Unlicense" ]
null
null
null
57.0971
21,212
0.671562
[ [ [ "<a href=\"https://colab.research.google.com/github/LeonVillanueva/CoLab/blob/master/Google_CoLab_DL_Recommender.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "### Loading Libraries", "_____no_output_____" ] ], [ [ "!pip install -q tensorflow==2.0.0-beta1", "_____no_output_____" ], [ "%%capture\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport tensorflow as tf", "_____no_output_____" ], [ "from sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import StandardScaler", "_____no_output_____" ], [ "from tensorflow.keras.layers import Input, Conv2D, Dense, Flatten, Dropout, Concatenate, GlobalMaxPooling2D, MaxPooling1D, GaussianNoise, BatchNormalization, MaxPooling2D, SimpleRNN, GRU, LSTM, GlobalMaxPooling1D, Embedding\nfrom tensorflow.keras.models import Model, Sequential\nfrom tensorflow.keras.optimizers import SGD, Adam\nfrom tensorflow.keras.preprocessing.image import ImageDataGenerator\nfrom tensorflow.keras.preprocessing.sequence import TimeseriesGenerator\nfrom tensorflow.keras.preprocessing.sequence import pad_sequences\nfrom tensorflow.keras.preprocessing.text import Tokenizer", "_____no_output_____" ], [ "from scipy import stats\nimport math\nimport seaborn as sns\nimport re\nfrom nltk.stem import WordNetLemmatizer", "_____no_output_____" ] ], [ [ "### Data", "_____no_output_____" ] ], [ [ "!wget -nc http://files.grouplens.org/datasets/movielens/ml-latest-small.zip", "File ‘ml-latest-small.zip’ already there; not retrieving.\n\n" ], [ "!unzip ml-latest-small.zip", "Archive:  ml-latest-small.zip\n   creating: ml-latest-small/\n  inflating: ml-latest-small/links.csv  \n  inflating: ml-latest-small/tags.csv  \n  inflating: ml-latest-small/ratings.csv  \n  inflating: ml-latest-small/README.txt  \n  inflating: ml-latest-small/movies.csv  \n" ], [ "df = pd.read_csv ('ml-latest-small/ratings.csv')", "_____no_output_____" ], [ "df.sort_values (by='timestamp', inplace=True, ascending=True)", "_____no_output_____" ], [ "df.head(3)", "_____no_output_____" ], [ "cutoff = int(len(df)*.90)\ndf['user_id'] = pd.Categorical (df['userId'])\ndf['user_id'] = df['user_id'].cat.codes\ndf['movie_id'] = pd.Categorical (df['movieId'])\ndf['movie_id'] = df['movie_id'].cat.codes\ntrain, test = df.iloc[:cutoff], df.iloc[cutoff:]", "_____no_output_____" ], [ "df.head(3)", "_____no_output_____" ], [ "U = len(set(df['user_id']))\nM = len(set(df['movie_id']))", "_____no_output_____" ], [ "K = 12 # embedding dimensions", "_____no_output_____" ], [ "user_ids = df['user_id'].values\nmovie_ids = df['movie_id'].values\nrating = df['rating'].values", "_____no_output_____" ], [ "len(user_ids) == len(movie_ids), len(movie_ids) == len(rating)", "_____no_output_____" ], [ "p = np.random.permutation (len(user_ids))", "_____no_output_____" ], [ "user_ids = user_ids[p]\nmovie_ids = movie_ids[p]\nrating = rating[p]", "_____no_output_____" ], [ "train_user = user_ids[:cutoff]\ntrain_movie = movie_ids[:cutoff]\ntrain_rating = rating[:cutoff]\n\ntest_user = user_ids[cutoff:]\ntest_movie = movie_ids[cutoff:]\ntest_rating = rating[cutoff:]\n\nrating_mean = train_rating.mean()", "_____no_output_____" ], [ "train_rating = train_rating - rating_mean\ntest_rating = test_rating - rating_mean", "_____no_output_____" ], [ "u = Input ((1,))\nm = Input ((1,))", "_____no_output_____" ], [ "u_emb = Embedding (U,K) (u) # samples, 1, K\nm_emb = Embedding (M,K) (m)",
"_____no_output_____" ], [ "u_emb = Flatten () (u_emb) # samples, K\nm_emb = Flatten () (m_emb)\n\nx = Concatenate () ([u_emb, m_emb])\nx = Dense (400, activation='relu') (x)\nx = Dropout (0.5) (x)\nx = Dense (400, activation='relu') (x)\nx = Dense (1, activation='relu') (x)\n\nmodel = Model(inputs=[u,m], outputs=x)", "_____no_output_____" ], [ "adam = tf.keras.optimizers.Adam (learning_rate=0.005, decay=5e-6)", "_____no_output_____" ], [ "model.compile (optimizer='adam',\n loss='mse')", "_____no_output_____" ], [ "epochs = 20\n\nr = model.fit ([train_user, train_movie], train_rating, validation_data=([test_user, test_movie], test_rating), verbose=False, epochs=epochs, batch_size=1024)", "_____no_output_____" ], [ "plt.plot (r.history['loss'], label='loss', color='#840000')\nplt.plot (r.history['val_loss'], label='validation loss', color='#00035b')\nplt.legend ()", "_____no_output_____" ], [ "re = model.evaluate ([test_user, test_movie], test_rating)", "10084/10084 [==============================] - 1s 64us/sample - loss: 0.9882\n" ], [ "re**2", "_____no_output_____" ], [ "model.summary()", "Model: \"model_4\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_5 (InputLayer) [(None, 1)] 0 \n__________________________________________________________________________________________________\ninput_6 (InputLayer) [(None, 1)] 0 \n__________________________________________________________________________________________________\nembedding_4 (Embedding) (None, 1, 12) 7320 input_5[0][0] \n__________________________________________________________________________________________________\nembedding_5 (Embedding) (None, 1, 12) 116688 input_6[0][0] \n__________________________________________________________________________________________________\nflatten_6 (Flatten) (None, 12) 0 embedding_4[0][0] \n__________________________________________________________________________________________________\nflatten_7 (Flatten) (None, 12) 0 embedding_5[0][0] \n__________________________________________________________________________________________________\nflatten_8 (Flatten) (None, 12) 0 flatten_6[0][0] \n__________________________________________________________________________________________________\nflatten_9 (Flatten) (None, 12) 0 flatten_7[0][0] \n__________________________________________________________________________________________________\nconcatenate_4 (Concatenate) (None, 24) 0 flatten_8[0][0] \n flatten_9[0][0] \n__________________________________________________________________________________________________\ndense_12 (Dense) (None, 400) 10000 concatenate_4[0][0] \n__________________________________________________________________________________________________\ndropout_4 (Dropout) (None, 400) 0 dense_12[0][0] \n__________________________________________________________________________________________________\ndense_13 (Dense) (None, 400) 160400 dropout_4[0][0] \n__________________________________________________________________________________________________\ndense_14 (Dense) (None, 1) 401 dense_13[0][0] \n==================================================================================================\nTotal params: 294,809\nTrainable params: 294,809\nNon-trainable params: 0\n__________________________________________________________________________________________________\n" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a5f49eaa8378caa2461c5a059e7cf56e783c265
144,331
ipynb
Jupyter Notebook
01-intro-to-deep-learning/02-building-simple-neural-networks.ipynb
HarryGuapo/intro-to-deep-learning
cbeb764c4280025b0be7b2186f7ea5c11b579b59
[ "Unlicense" ]
1
2020-05-02T10:05:08.000Z
2020-05-02T10:05:08.000Z
01-intro-to-deep-learning/02-building-simple-neural-networks.ipynb
HarryGuapo/intro-to-deep-learning
cbeb764c4280025b0be7b2186f7ea5c11b579b59
[ "Unlicense" ]
null
null
null
01-intro-to-deep-learning/02-building-simple-neural-networks.ipynb
HarryGuapo/intro-to-deep-learning
cbeb764c4280025b0be7b2186f7ea5c11b579b59
[ "Unlicense" ]
null
null
null
258.194991
51,712
0.913012
[ [ [ "# Building Simple Neural Networks\n\nIn this section you will:\n\n* Import the MNIST dataset from Keras.\n* Format the data so it can be used by a Sequential model with Dense layers.\n* Split the dataset into training and test sections data.\n* Build a simple neural network using Keras Sequential model and Dense layers.\n* Train that model.\n* Evaluate the performance of that model.\n\nWhile we are accomplishing these tasks, we will also stop to discuss important concepts:\n\n* Splitting data into test and training sets.\n* Training rounds, batch size, and epochs.\n* Validation data vs test data.\n* Examining results.\n\n## Importing and Formatting the Data\n\nKeras has several built-in datasets that are already well formatted and properly cleaned. These datasets are an invaluable learning resource. Collecting and processing datasets is a serious undertaking, and deep learning tactics perform poorly without large high quality datasets. We will be leveraging the [Keras built in datasets](https://keras.io/datasets/) extensively, and you may wish to explore them further on your own.\n\nIn this exercise, we will be focused on the MNIST dataset, which is a set of 70,000 images of handwritten digits each labeled with the value of the written digit. Additionally, the images have been split into training and test sets.", "_____no_output_____" ] ], [ [ "# For drawing the MNIST digits as well as plots to help us evaluate performance we\n# will make extensive use of matplotlib\nfrom matplotlib import pyplot as plt\n\n# All of the Keras datasets are in keras.datasets\nfrom tensorflow.keras.datasets import mnist\n\n# Keras has already split the data into training and test data\n(training_images, training_labels), (test_images, test_labels) = mnist.load_data()\n\n# Training images is a list of 60,000 2D lists.\n# Each 2D list is 28 by 28—the size of the MNIST pixel data.\n# Each item in the 2D array is an integer from 0 to 255 representing its grayscale\n# intensity where 0 means white, 255 means black.\nprint(len(training_images), training_images[0].shape)\n\n# training_labels are a value between 0 and 9 indicating which digit is represented.\n# The first item in the training data is a 5\nprint(len(training_labels), training_labels[0])\n", "60000 (28, 28)\n60000 5\n" ], [ "# Lets visualize the first 100 images from the dataset\nfor i in range(100):\n ax = plt.subplot(10, 10, i+1)\n ax.axis('off')\n plt.imshow(training_images[i], cmap='Greys')\n", "_____no_output_____" ] ], [ [ "## Problems With This Data\n\nThere are (at least) two problems with this data as it is currently formatted, what do you think they are?", "_____no_output_____" ], [ "1. The input data is formatted as a 2D array, but our deep neural network needs to data as a 1D vector.\n * This is because of how deep neural networks are constructed, it is simply not possible to send anything but a vector as input.\n * These vectors can be/represent anything, but from the computer's perspective they must be a 1D vector.\n2. Our labels are numbers, but we're not performing regression. We need to use a 1-hot vector encoding for our labels.\n * This is important because if we use the number values we would be training our network to think of these values as continuous.\n * If the digit is supposed to be a 2, guessing 1 and guessing 9 are both equally wrong.\n * Training the network with numbers would imply that a prediction of 1 would be \"less wrong\" than a prediction of 9, when in fact both are equally wrong. 
", "_____no_output_____" ], [ "### Fixing the data format\n\nLuckily, this is a common problem and we can use two methods to fix the data: `numpy.reshape` and `keras.utils.to_categorical`. This is nessesary because of how deep neural networks process data, there is no way to send 2D data to a `Sequential` model made of `Dense` layers.", "_____no_output_____" ] ], [ [ "from tensorflow.keras.utils import to_categorical\n\n# Preparing the dataset\n# Setup train and test splits\n(training_images, training_labels), (test_images, test_labels) = mnist.load_data()\n\n\n# 28 x 28 = 784, because that's the dimensions of the MNIST data.\nimage_size = 784\n\n# Reshaping the training_images and test_images to lists of vectors with length 784\n# instead of lists of 2D arrays. Same for the test_images\ntraining_data = training_images.reshape(training_images.shape[0], image_size) \ntest_data = test_images.reshape(test_images.shape[0], image_size)\n\n# [\n# [1,2,3]\n# [4,5,6]\n# ]\n\n# => [1,2,3,4,5,6]\n\n# Just showing the changes...\nprint(\"training data: \", training_images.shape, \" ==> \", training_data.shape)\nprint(\"test data: \", test_images.shape, \" ==> \", test_data.shape)", "training data: (60000, 28, 28) ==> (60000, 784)\ntest data: (10000, 28, 28) ==> (10000, 784)\n" ], [ "# Create 1-hot encoded vectors using to_categorical\nnum_classes = 10 # Because it's how many digits we have (0-9) \n\n# to_categorical takes a list of integers (our labels) and makes them into 1-hot vectors\ntraining_labels = to_categorical(training_labels, num_classes)\ntest_labels = to_categorical(test_labels, num_classes)", "_____no_output_____" ], [ "# Recall that before this transformation, training_labels[0] was the value 5. Look now:\nprint(training_labels[0])", "[0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]\n" ] ], [ [ "## Building a Deep Neural Network\n\nNow that we've prepared our data, it's time to build a simple neural network. To start we'll make a deep network with 3 layers—the input layer, a single hidden layer, and the output layer. In a deep neural network all the layers are 1 dimensional. The input layer has to be the shape of our input data, meaning it must have 784 nodes. Similarly, the output layer must match our labels, meaning it must have 10 nodes. We can choose the number of nodes in our hidden layer, I've chosen 32 arbitrarally.", "_____no_output_____" ] ], [ [ "from tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense\n\n# Sequential models are a series of layers applied linearly.\nmodel = Sequential()\n\n# The first layer must specify it's input_shape.\n# This is how the first two layers are added, the input layer and the hidden layer.\nmodel.add(Dense(units=32, activation='sigmoid', input_shape=(image_size,)))\n\n# This is how the output layer gets added, the 'softmax' activation function ensures\n# that the sum of the values in the output nodes is 1. Softmax is very\n# common in classification networks. 
\nmodel.add(Dense(units=num_classes, activation='softmax'))\n\n# This function provides useful text data for our network\nmodel.summary()", "Model: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense (Dense) (None, 32) 25120 \n_________________________________________________________________\ndense_1 (Dense) (None, 10) 330 \n=================================================================\nTotal params: 25,450\nTrainable params: 25,450\nNon-trainable params: 0\n_________________________________________________________________\n" ] ], [ [ "## Compiling and Training a Model\n\nOur model must be compiled and trained before it can make useful predictions. Models are trained with the training data and training labels. During this process Keras will use an optimizer, loss function, and metrics of our choosing to repeatedly make predictions and receive corrections. The loss function is used to train the model, while the metrics are only used for human evaluation of the model during and after training.\n\nTraining happens in a series of epochs which are divided into a series of rounds. Each round the network will receive `batch_size` samples from the training data, make predictions, and receive one correction based on the errors in those predictions. In a single epoch, the model will look at every item in the training set __exactly once__, which means individual data points are sampled from the training data without replacement during each round of each epoch.\n\nDuring training, the training data itself will be broken into two parts according to the `validation_split` parameter. The proportion that you specify will be left out of the training process, and used to evaluate the accuracy of the model. This is done to preserve the test data, while still having a set of data left out in order to test against (and hopefully prevent) overfitting. At the end of each epoch, predictions will be made for all the items in the validation set, but those predictions won't adjust the weights in the model. Instead, the validation metrics give you a running check on generalization; with an `EarlyStopping` callback you could halt training when validation accuracy stops improving, even if accuracy in the training set is still improving. 
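\nAs a rough back-of-envelope sketch (the numbers below just restate the `batch_size=128` and `validation_split=.1` arguments used in the next cell; this snippet is illustrative arithmetic, not part of the training code):\n\n```python\ntrain_samples = int(60_000 * 0.9)            # validation_split=.1 keeps 54,000 samples for training\nrounds_per_epoch = -(-train_samples // 128)  # ceiling division: 422 weight updates per epoch\nprint(rounds_per_epoch)\n```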
", "_____no_output_____" ] ], [ [ "# sgd stands for stochastic gradient descent.\n# categorical_crossentropy is a common loss function used for categorical classification.\n# accuracy is the percent of predictions that were correct.\nmodel.compile(optimizer=\"sgd\", loss='categorical_crossentropy', metrics=['accuracy'])\n\n# The network will make predictions for 128 flattened images per correction.\n# It will make a prediction on each item in the training set 5 times (5 epochs)\n# And 10% of the data will be used as validation data.\nhistory = model.fit(training_data, training_labels, batch_size=128, epochs=5, verbose=True, validation_split=.1)", "Train on 54000 samples, validate on 6000 samples\nEpoch 1/5\n54000/54000 [==============================] - 1s 23us/sample - loss: 1.3004 - accuracy: 0.6599 - val_loss: 0.8499 - val_accuracy: 0.8328\nEpoch 2/5\n54000/54000 [==============================] - 1s 11us/sample - loss: 0.7825 - accuracy: 0.8300 - val_loss: 0.6217 - val_accuracy: 0.8803\nEpoch 3/5\n54000/54000 [==============================] - 1s 11us/sample - loss: 0.6256 - accuracy: 0.8633 - val_loss: 0.5193 - val_accuracy: 0.8980\nEpoch 4/5\n54000/54000 [==============================] - 1s 11us/sample - loss: 0.5424 - accuracy: 0.8778 - val_loss: 0.4621 - val_accuracy: 0.9082\nEpoch 5/5\n54000/54000 [==============================] - 1s 11us/sample - loss: 0.4903 - accuracy: 0.8864 - val_loss: 0.4099 - val_accuracy: 0.9105\n" ] ], [ [ "## Evaluating Our Model\n\nNow that we've trained our model, we want to evaluate its performance. We're using the \"test data\" here although in a serious experiment, we would likely not have done nearly enough work to warrent the application of the test data. Instead, we would rely on the validation metrics as a proxy for our test results until we had models that we believe would perform well. \n\nOnce we evaluate our model on the test data, any subsequent changes we make would be based on what we learned from the test data. Meaning, we would have functionally incorporated information from the test set into our training procedure which could bias and even invalidate the results of our research. In a non-research setting the real test might be more like putting this feature into production. \n\nNevertheless, it is always wise to create a test set that is not used as an evaluative measure until the very end of an experimental lifecycle. That is, once you have a model that you believe __should__ generalize well to unseen data you should test it on the test data to test that hypothosis. If your model performs poorly on the test data, you'll have to reevaluate your model, training data, and procedure. ", "_____no_output_____" ] ], [ [ "loss, accuracy = model.evaluate(test_data, test_labels, verbose=True)\n\nplt.plot(history.history['accuracy'])\nplt.plot(history.history['val_accuracy'])\nplt.title('model accuracy')\nplt.ylabel('accuracy')\nplt.xlabel('epoch')\nplt.legend(['training', 'validation'], loc='best')\n\nplt.show()\n\nplt.plot(history.history['loss'])\nplt.plot(history.history['val_loss'])\nplt.title('model loss')\nplt.ylabel('loss')\nplt.xlabel('epoch')\nplt.legend(['training', 'validation'], loc='best')\n\nplt.show()\n\nprint(f'Test loss: {loss:.3}')\nprint(f'Test accuracy: {accuracy:.3}')", "10000/10000 [==============================] - 0s 21us/sample - loss: 0.4473 - accuracy: 0.8978\n" ] ], [ [ "## How Did Our Network Do? 
\n\n* Why do we only have one value for test loss and test accuracy, but a chart over time for training and validation loss and accuracy?\n* Our model was more accurate on the validation data than it was on the training data. \n * Is this okay? Why or why not?\n * What if our model had been more accurate on the training data than the validation data?\n* Did our model get better during each epoch?\n * If not: why might that be the case?\n * If so: should we always expect this, where each epoch strictly improves training/validation accuracy/loss?", "_____no_output_____" ], [ "### Answers:\n\n\n* Why do we only have one value for test loss and test accuracy, but a chart over time for training and validation loss and accuracy?\n * __Because we only evaluate the test data once at the very end, but we evaluate training and validation scores once per epoch.__\n* Our model was more accurate on the validation data than it was on the training data. \n * Is this okay? Why or why not?\n * __Yes, this is okay, and even good. When our validation scores are better than our training scores, it's a sign that we are probably not overfitting__\n * What if our model had been more accurate on the training data than the validation data?\n * __This would concern us, because it would suggest we are probably overfitting.__\n* Did our model get better during each epoch?\n * If not: why might that be the case?\n * __Optimizers rely on the gradient to update our weights, but the 'function' we are optimizing (our neural network) is not a ground truth. A single batch, and even a complete epoch, may very well result in an adjustment that hurts overall performance.__\n * If so: should we always expect this, where each epoch strictly improves training/validation accuracy/loss?\n * __Not at all, see the above answer.__", "_____no_output_____" ], [ "## Look at Specific Results\n\nOften, it can be illuminating to view specific results, both when the model is correct and when the model is wrong. Lets look at the images and our model's predictions for the first 16 samples in the test set.", "_____no_output_____" ] ], [ [ "from numpy import argmax\n\n# Predicting once, then we can use these repeatedly in the next cell without recomputing the predictions.\npredictions = model.predict(test_data)\n\n# For pagination & style in second cell\npage = 0\nfontdict = {'color': 'black'}", "_____no_output_____" ], [ "# Repeatedly running this cell will page through the predictions\nfor i in range(16):\n ax = plt.subplot(4, 4, i+1)\n ax.axis('off')\n plt.imshow(test_images[i + page], cmap='Greys')\n prediction = argmax(predictions[i + page])\n true_value = argmax(test_labels[i + page])\n\n fontdict['color'] = 'black' if prediction == true_value else 'red'\n plt.title(\"{}, {}\".format(prediction, true_value), fontdict=fontdict)\n\npage += 16\nplt.tight_layout()\nplt.show()", "_____no_output_____" ] ], [ [ "## Will A Different Network Perform Better?\n\nGiven what you know so far, use Keras to build and train another sequential model that you think will perform __better__ than the network we just built and trained. Then evaluate that model and compare its performance to our model. Remember to look at accuracy and loss for training and validation data over time, as well as test accuracy and loss. 
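\nIf you want a starting point, here is one possible sketch (the layer sizes and activations are arbitrary choices, not a reference answer):\n\n```python\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense\n\n# One idea: wider hidden layers with ReLU activations.\nmodel2 = Sequential()\nmodel2.add(Dense(units=128, activation='relu', input_shape=(image_size,)))\nmodel2.add(Dense(units=64, activation='relu'))\nmodel2.add(Dense(units=num_classes, activation='softmax'))\nmodel2.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])\n```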
", "_____no_output_____" ] ], [ [ "# Your code here...\n", "_____no_output_____" ] ], [ [ "## Bonus questions: Go Further\n\nHere are some questions to help you further explore the concepts in this lab.\n\n* Does the original model, or your model, fail more often on a particular digit? \n * Write some code that charts the accuracy of our model's predictions on the test data by digit.\n * Is there a clear pattern? If so, speculate about why that could be...\n* Training for longer typically improves performance, up to a point.\n * For a simple model, try training it for 20 epochs, and 50 epochs.\n * Look at the charts of accuracy and loss over time, have you reached diminishing returns after 20 epochs? after 50?\n* More complex networks require more training time, but can outperform simpler networks.\n * Build a more complex model, with at least 3 hidden layers.\n * Like before, train it for 5, 20, and 50 epochs. \n * Evaluate the performance of the model against the simple model, and compare the total amount of time it took to train.\n * Was the extra complexity worth the additional training time? \n * Do you think your complex model would get even better with more time?\n* A little perspective on this last point: Some models train for [__weeks to months__](https://openai.com/blog/ai-and-compute/). ", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
4a5f4fbaacd6f0155147284bcef5e03f92034331
333,752
ipynb
Jupyter Notebook
.ipynb_checkpoints/WaveformExtraction-checkpoint.ipynb
YangChuan80/AusculPi
53153bb95a4090684282adcf8c985af3773100a7
[ "MIT", "Unlicense" ]
3
2020-08-20T03:06:03.000Z
2021-02-12T07:50:50.000Z
.ipynb_checkpoints/WaveformExtraction-checkpoint.ipynb
YangChuan80/AusculPi-Console
53153bb95a4090684282adcf8c985af3773100a7
[ "MIT", "Unlicense" ]
null
null
null
.ipynb_checkpoints/WaveformExtraction-checkpoint.ipynb
YangChuan80/AusculPi-Console
53153bb95a4090684282adcf8c985af3773100a7
[ "MIT", "Unlicense" ]
null
null
null
384.064442
30,544
0.944537
[ [ [ "from PIL import Image\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy import stats\nimport math\n%matplotlib inline", "_____no_output_____" ] ], [ [ "# Volunteer 1", "_____no_output_____" ], [ "## 3M Littmann Data", "_____no_output_____" ] ], [ [ "image = Image.open('3Ms.bmp')\nimage", "_____no_output_____" ], [ "x = image.size[0]\ny = image.size[1]", "_____no_output_____" ], [ "print(x)\nprint(y)", "448\n99\n" ], [ "matrix = []\npoints = []\nintegrated_density = 0\n\nfor i in range(x):\n matrix.append([])\n for j in range(y):\n matrix[i].append(image.getpixel((i,j)))\n #integrated_density += image.getpixel((i,j))[1]\n #points.append(image.getpixel((i,j))[1])", "_____no_output_____" ] ], [ [ "### Extract Red Line Position", "_____no_output_____" ] ], [ [ "redMax = 0\nxStore = 0\nyStore = 0\nfor xAxis in range(x):\n for yAxis in range(y):\n currentPoint = matrix[xAxis][yAxis]\n if currentPoint[0] ==255 and currentPoint[1] < 10 and currentPoint[2] < 10:\n redMax = currentPoint[0]\n xStore = xAxis\n yStore = yAxis\n \nprint(xStore, yStore)", "445 51\n" ] ], [ [ "- The redline position is located at y = 252.", "_____no_output_____" ], [ "### Extract Blue Points", "_____no_output_____" ] ], [ [ "redline_pos = 51\nabsMax = 0\nlittmannArr = []\npoints_vertical = []\ntheOne = 0\n\nfor xAxis in range(x):\n for yAxis in range(y):\n currentPoint = matrix[xAxis][yAxis]\n # Pickup Blue points\n if currentPoint[2] == 255 and currentPoint[0] < 220 and currentPoint[1] < 220:\n points_vertical.append(yAxis)\n \n #print(points_vertical)\n \n \n # Choose the largest amplitude\n for item in points_vertical:\n \n if abs(item-redline_pos) > absMax:\n absMax = abs(item-redline_pos)\n theOne = item \n littmannArr.append((theOne-redline_pos)*800)\n \n absMax = 0 \n theOne = 0\n points_vertical = []", "_____no_output_____" ], [ "fig = plt.figure()\ns = fig.add_subplot(111)\ns.plot(littmannArr, linewidth=0.6, color='blue')", "_____no_output_____" ] ], [ [ "# Ascul Pi Data", "_____no_output_____" ] ], [ [ "pathBase = 'C://Users//triti//OneDrive//Dowrun//Text//Manuscripts//Data//YangChuan//AusculPi//'\nfilename = 'Numpy_Array_File_2020-06-21_07_54_16.npy'\nline = pathBase + filename\narr = np.load(line)\narr", "_____no_output_____" ], [ "arr.shape", "_____no_output_____" ], [ "fig = plt.figure()\ns = fig.add_subplot(111)\ns.plot(arr[0], linewidth=1.0, color='black')", "_____no_output_____" ], [ "fig = plt.figure()\ns = fig.add_subplot(111)\ns.plot(arr[:,100], linewidth=1.0, color='black')", "_____no_output_____" ], [ "start = 1830\nend = 2350\n\nstart_adj = int(start * 2583 / 3000)\nend_adj = int(end * 2583 / 3000)", "_____no_output_____" ], [ "fig = plt.figure()\ns = fig.add_subplot(111)\ns.plot(arr[start_adj:end_adj,460], linewidth=0.6, color='black')", "_____no_output_____" ], [ "fig = plt.figure()\ns = fig.add_subplot(111)\ns.plot(littmannArr, linewidth=0.6, color='blue')", "_____no_output_____" ], [ "asculArr = arr[start_adj:end_adj,460]", "_____no_output_____" ] ], [ [ "## Preprocess the two array", "_____no_output_____" ] ], [ [ "asculArr_processed = []\nlittmannArr_processed = []\n\nfor item in asculArr:\n asculArr_processed.append(abs(item))\n \nfor item in littmannArr:\n littmannArr_processed.append(abs(item))", "_____no_output_____" ], [ "fig = plt.figure()\ns = fig.add_subplot(111)\ns.plot(asculArr_processed, linewidth=0.6, color='black')", "_____no_output_____" ], [ "fig = plt.figure()\ns = fig.add_subplot(111)\ns.plot(littmannArr_processed, linewidth=0.6, color='blue')", 
"_____no_output_____" ], [ "fig = plt.figure()\ns = fig.add_subplot(111)\ns.plot(asculArr_processed[175:375], linewidth=1.0, color='black')", "_____no_output_____" ], [ "fig = plt.figure()\ns = fig.add_subplot(111)\ns.plot(littmannArr_processed[:200], linewidth=1.0, color='blue')", "_____no_output_____" ], [ "len(littmannArr)", "_____no_output_____" ], [ "len(asculArr)", "_____no_output_____" ] ], [ [ "### Coeffient", "_____no_output_____" ] ], [ [ "stats.pearsonr(asculArr_processed, littmannArr_processed)", "_____no_output_____" ], [ "stats.pearsonr(asculArr_processed[176:336], littmannArr_processed[:160])", "_____no_output_____" ], [ "fig = plt.figure()\ns = fig.add_subplot(111)\ns.plot(arr[start_adj:end_adj,460][176:336], linewidth=0.6, color='black')", "_____no_output_____" ], [ "fig = plt.figure()\ns = fig.add_subplot(111)\ns.plot(littmannArr[:160], linewidth=0.6, color='blue')", "_____no_output_____" ] ], [ [ "### Fitness", "_____no_output_____" ] ], [ [ "stats.chisquare(asculArr_processed[174:334], littmannArr_processed[:160])", "_____no_output_____" ], [ "def cosCalculate(a, b):\n l = len(a)\n \n sumXY = 0\n sumRootXSquare = 0\n sumRootYSquare = 0\n \n for i in range(l):\n sumXY = sumXY + a[i]*b[i]\n sumRootXSquare = sumRootXSquare + math.sqrt(a[i]**2)\n sumRootYSquare = sumRootYSquare + math.sqrt(b[i]**2)\n \n cosValue = sumXY / (sumRootXSquare * sumRootYSquare)\n \n return cosValue ", "_____no_output_____" ], [ "cosCalculate(asculArr_processed[175:335], littmannArr_processed[:160])", "C:\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:9: RuntimeWarning: overflow encountered in long_scalars\n if __name__ == '__main__':\n" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
4a5f5330b976755d7a4710a2e0bb00a2900d0887
65,703
ipynb
Jupyter Notebook
notebooks_type_you/3.0-snk-creating-model210620_2.ipynb
sarakohnke/type_you
0b790f7d77acc02995ca5b814791aef1cf0d8360
[ "MIT" ]
null
null
null
notebooks_type_you/3.0-snk-creating-model210620_2.ipynb
sarakohnke/type_you
0b790f7d77acc02995ca5b814791aef1cf0d8360
[ "MIT" ]
null
null
null
notebooks_type_you/3.0-snk-creating-model210620_2.ipynb
sarakohnke/type_you
0b790f7d77acc02995ca5b814791aef1cf0d8360
[ "MIT" ]
1
2020-06-17T00:55:47.000Z
2020-06-17T00:55:47.000Z
105.972581
22,812
0.868788
[ [ [ "#Set working directory\nimport os\npath=\"/Users/sarakohnke/Desktop/data_type_you/processed-final/\"\nos.chdir(path)\nos.getcwd()", "_____no_output_____" ], [ "#Import required packages\nimport pandas as pd\nimport numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline", "_____no_output_____" ], [ "#Import cleaned dataframe\ndataframe=pd.read_csv('dataframe240620.csv',index_col=0)\ndataframe.shape", "_____no_output_____" ], [ "#dataframe=pd.read_csv('dataframe240620.csv',index_col=0)\n#from pycaret.classification import *\n#exp1=setup(dataframe, target='A1C (%)')", "_____no_output_____" ], [ "#compare_models()", "_____no_output_____" ], [ "from sklearn.tree import DecisionTreeRegressor\nfrom pprint import pprint\nfrom sklearn.model_selection import train_test_split\n\nX_rf = dataframe.drop(['A1C (%)'],1)\ny_rf = dataframe['A1C (%)']\nX_train_rf, X_test_rf, y_train_rf, y_test_rf = train_test_split(X_rf, y_rf, random_state = 0)\n\nclf_rf = DecisionTreeRegressor(random_state = 0).fit(X_train_rf, y_train_rf)\nprint('Parameters currently in use:\\n')\npprint(clf_rf.get_params())\n\nX_train_rf.to_csv('X_train.csv')\ny_train_rf.to_csv('y_train.csv')", "Parameters currently in use:\n\n{'ccp_alpha': 0.0,\n 'criterion': 'mse',\n 'max_depth': None,\n 'max_features': None,\n 'max_leaf_nodes': None,\n 'min_impurity_decrease': 0.0,\n 'min_impurity_split': None,\n 'min_samples_leaf': 1,\n 'min_samples_split': 2,\n 'min_weight_fraction_leaf': 0.0,\n 'presort': 'deprecated',\n 'random_state': 0,\n 'splitter': 'best'}\n" ], [ "from sklearn.model_selection import RandomizedSearchCV\n\n\n# Number of features to consider at every split - for regressor, none is good\nmax_features = None\n\n# Maximum number of levels in tree\nmax_depth = [int(x) for x in np.linspace(10, 110, num = 11)]\nmax_depth.append(None)\n\n# Minimum number of samples required to split a node\nmin_samples_split = [2, 5, 10]\n\n# Minimum number of samples required at each leaf node\nmin_samples_leaf = [1, 2, 4]\n\n# Method of selecting samples for training each tree\nbootstrap = True\n\n# Create the random grid\nrandom_grid = {\n 'max_depth': max_depth,\n 'min_samples_split': min_samples_split,\n 'min_samples_leaf': min_samples_leaf\n \n }\npprint(random_grid)", "{'max_depth': [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, None],\n 'min_samples_leaf': [1, 2, 4],\n 'min_samples_split': [2, 5, 10]}\n" ], [ "# Use the random grid to search for best hyperparameters\n# First create the base model to tune\nclf_rf = DecisionTreeRegressor(random_state=0)\n\n# Random search of parameters, using 3 fold cross validation, \n# search across 10 different combinations, and use all available cores\nrf_random = RandomizedSearchCV(estimator = clf_rf, param_distributions = random_grid, n_iter=50,cv = 3, verbose=2, random_state=0, n_jobs = -1)\n\n# Fit the random search model\nrf_random.fit(X_train_rf, y_train_rf)\nrf_random.best_params_", "Fitting 3 folds for each of 50 candidates, totalling 150 fits\n" ], [ "# Make features and target objects\nX_rf2 = dataframe.drop(['A1C (%)'],1)\ny_rf2 = dataframe['A1C (%)']\nX_train_rf2, X_test_rf2, y_train_rf2, y_test_rf2 = train_test_split(X_rf2, y_rf2, random_state = 0)\n\n# Train model\nclf_rf2 = DecisionTreeRegressor(max_depth=10,min_samples_leaf=4,\n min_samples_split=10,random_state = 0).fit(X_train_rf2, y_train_rf2)", "_____no_output_____" ], [ "# Print r2 score\nprint('R-squared score (training): {:.3f}'\n .format(clf_rf2.score(X_train_rf2, 
y_train_rf2)))\nprint('R-squared score (test): {:.3f}'\n .format(clf_rf2.score(X_test_rf2, y_test_rf2)))", "R-squared score (training): 0.654\nR-squared score (test): 0.313\n" ], [ "#Find feature importances in model\nimport pandas as pd\nfeat_importances = pd.Series(clf_rf2.feature_importances_, index=X_rf2.columns)\nfeat_importances.to_csv('feat_importances.csv')\nfeat_importances.nlargest(10).plot(kind='barh')\n#plt.savefig('importance.png',dpi=300)", "_____no_output_____" ], [ "X_test_rf2.head()\nX_test_rf2.to_csv('xtestjun22.csv')", "_____no_output_____" ], [ "y_test_rf2.head()\ny_test_rf2.to_csv('ytestjun22.csv')", "_____no_output_____" ], [ "#Make patient lists from test data for app\npatient1_list=X_test_rf2.iloc[115,:].values.tolist()\npatient2_list=X_test_rf2.iloc[253,:].values.tolist()\npatient3_list=X_test_rf2.iloc[603,:].values.tolist()", "_____no_output_____" ], [ "#Find current predicted A1C score\nA1C_prediction_drug1_2 = clf_rf2.predict([patient1_list])\nprint('your current score: '+str(A1C_prediction_drug1_2[0]))", "your current score: 9.419999999999998\n" ], [ "#insulin - 1-3?->just1\npatient1_list[6]=1\nA1C_prediction_drug1_2 = clf_rf2.predict([patient1_list])\nprint('your predicted score with insulin: '+str(A1C_prediction_drug1_2[0]))", "your predicted score with insulin: 9.419999999999998\n" ], [ "#bmi \npatient1_list[18]=24\nA1C_prediction_drug1_2 = clf_rf2.predict([patient1_list])\nprint('your predicted score with ideal bmi: '+str(A1C_prediction_drug1_2[0]))", "your predicted score with ideal bmi: 6.659999999999999\n" ], [ "#bp \npatient1_list[3]=120\npatient1_list[4]=80\nA1C_prediction_drug1_2 = clf_rf2.predict([patient1_list])\nprint('your predicted score with ideal blood pressure: '+str(A1C_prediction_drug1_2[0]))", "your predicted score with ideal blood pressure: 9.419999999999998\n" ], [ "#triglycerides\npatient1_list[32]=150\nA1C_prediction_drug1_2 = clf_rf2.predict([patient1_list])\nprint('your predicted score with ideal triglycerides: '+str(A1C_prediction_drug1_2[0]))", "your predicted score with ideal triglycerides: 9.419999999999998\n" ], [ "#healthy diet\npatient1_list[43]=1\nA1C_prediction_drug1_2 = clf_rf2.predict([patient1_list])\nprint('your predicted score with healthy diet: '+str(A1C_prediction_drug1_2[0]))", "your predicted score with healthy diet: 9.419999999999998\n" ], [ "#cholesterol \npatient1_list[19]=170\nA1C_prediction_drug1_2 = clf_rf2.predict([patient1_list])\nprint('your predicted score with cholesterol: '+str(A1C_prediction_drug1_2[0]))", "your predicted score with cholesterol: 9.419999999999998\n" ] ], [ [ "Do the same for the other patients.", "_____no_output_____" ], [ "Test my model against a dummy regressor.", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt\npredicted_rf2 = clf_rf2.predict(X_test_rf2)\n\n\nplt.scatter(y_test_rf2,predicted_rf2)\nplt.xlabel('Actual values')\nplt.ylabel('Predicted values')\n\nplt.plot(np.unique(y_test_rf2), np.poly1d(np.polyfit(y_test_rf2, predicted_rf2, 1))(np.unique(y_test_rf2)))\n\n#plt.savefig('r2.png',dpi=300)\n", "_____no_output_____" ], [ "import numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn.dummy import DummyRegressor\ndummy_mean=DummyRegressor(strategy='mean')\ndummy_mean.fit(X_train_rf2, y_train_rf2) # the baseline must be fitted before it can predict\npredicted_rf2 = clf_rf2.predict(X_test_rf2)\npredicted_dummy=dummy_mean.predict(X_test_rf2)\n\nplt.scatter(y_test_rf2,predicted_dummy)\nplt.xlabel('Actual values')\nplt.ylabel('Predicted values')\n\nplt.plot(np.unique(y_test_rf2), np.poly1d(np.polyfit(y_test_rf2, predicted_dummy, 
1))(np.unique(y_test_rf2)))\n\nplt.show()", "_____no_output_____" ], [ "#pickle the model so can upload trained model to app\nimport pickle\n#import bz2\n#import _pickle as cPickle\nwith open('model_pkl.pickle','wb') as output_file:\n pickle.dump(clf_rf2,output_file)\n \n# Pickle a file and then compress it into a file with extension \n#def compressed_pickle(model, clf_rf2):\n# with bz2.BZ2File(model + '.bz2', 'w') as f: \n# cPickle.dump(clf_rf2, f)\n \n##file=bz2.BZ2File('c.pkl','w')\n#pickle.dump(clf_rf2,sfile)\n#compressed_pickle('model',clf_rf2)", "_____no_output_____" ], [ "# to open compressed pickle\n#def decompress_pickle(file):\n# model=bz2.BZ2File(file,'rb')\n# model=cPickle.load(model)\n# return model\n\n#model=decompress_pickle('model.pbz2')", "_____no_output_____" ], [ "#to open normal pickle\nwith open('model_pkl.pickle','rb') as input_file:\n model=pickle.load(input_file)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ] ]
4a5f60d8d8aee0910ad7b72253e8409aa63e2b59
9,090
ipynb
Jupyter Notebook
examples/01-filter/compute-volume.ipynb
pyvista/pyvista-examples
60dda52b4c8069c15d73768d04bee3f84d2f803f
[ "MIT" ]
4
2019-05-13T06:42:18.000Z
2020-12-18T05:09:29.000Z
examples/01-filter/compute-volume.ipynb
pyvista/pyvista-examples
60dda52b4c8069c15d73768d04bee3f84d2f803f
[ "MIT" ]
1
2020-07-30T11:53:38.000Z
2020-07-30T11:55:44.000Z
examples/01-filter/compute-volume.ipynb
pyvista/pyvista-examples
60dda52b4c8069c15d73768d04bee3f84d2f803f
[ "MIT" ]
2
2020-07-26T14:09:58.000Z
2020-11-03T11:27:29.000Z
34.562738
604
0.557096
[ [ [ "%matplotlib inline\nfrom pyvista import set_plot_theme\nset_plot_theme('document')", "_____no_output_____" ] ], [ [ "Volumetric Analysis\n===================\n\nCalculate mass properties such as the volume or area of datasets\n", "_____no_output_____" ] ], [ [ "# sphinx_gallery_thumbnail_number = 4\nimport numpy as np\nfrom pyvista import examples", "_____no_output_____" ] ], [ [ "Computing mass properties such as the volume or area of datasets in\nPyVista is quite easy using the\n`pyvista.DataSetFilters.compute_cell_sizes`{.interpreted-text\nrole=\"func\"} filter and the `pyvista.DataSet.volume`{.interpreted-text\nrole=\"attr\"} property on all PyVista meshes.\n\nLet\\'s get started with a simple gridded mesh:\n", "_____no_output_____" ] ], [ [ "# Load a simple example mesh\ndataset = examples.load_uniform()\ndataset.set_active_scalars(\"Spatial Cell Data\")", "_____no_output_____" ] ], [ [ "We can then calculate the volume of every cell in the array using the\n`.compute_cell_sizes` filter which will add arrays to the cell data of\nthe mesh core the volume and area by default.\n", "_____no_output_____" ] ], [ [ "# Compute volumes and areas\nsized = dataset.compute_cell_sizes()\n\n# Grab volumes for all cells in the mesh\ncell_volumes = sized.cell_arrays[\"Volume\"]", "_____no_output_____" ] ], [ [ "We can also compute the total volume of the mesh using the `.volume`\nproperty:\n", "_____no_output_____" ] ], [ [ "# Compute the total volume of the mesh\nvolume = dataset.volume", "_____no_output_____" ] ], [ [ "Okay awesome! But what if we have have a dataset that we threshold with\ntwo volumetric bodies left over in one dataset? Take this for example:\n", "_____no_output_____" ] ], [ [ "threshed = dataset.threshold_percent([0.15, 0.50], invert=True)\nthreshed.plot(show_grid=True, cpos=[-2, 5, 3])", "_____no_output_____" ] ], [ [ "We could then assign a classification array for the two bodies, compute\nthe cell sizes, then extract the volumes of each body. 
Note that there\nis a simpler implementation of this below in\n`split_vol_ref`{.interpreted-text role=\"ref\"}.\n", "_____no_output_____" ] ], [ [ "# Create a classifying array to ID each body\nrng = dataset.get_data_range()\ncval = ((rng[1] - rng[0]) * 0.20) + rng[0]\nclassifier = threshed.cell_arrays[\"Spatial Cell Data\"] > cval\n\n# Compute cell volumes\nsizes = threshed.compute_cell_sizes()\nvolumes = sizes.cell_arrays[\"Volume\"]\n\n# Split volumes based on classifier and get volumes!\nidx = np.argwhere(classifier)\nhvol = np.sum(volumes[idx])\nidx = np.argwhere(~classifier)\nlvol = np.sum(volumes[idx])\n\nprint(f\"Low grade volume: {lvol}\")\nprint(f\"High grade volume: {hvol}\")\nprint(f\"Original volume: {dataset.volume}\")", "_____no_output_____" ] ], [ [ "Or better yet, you could simply extract the largest volume from your\nthresholded dataset by passing `largest=True` to the `connectivity`\nfilter or by using `extract_largest` filter (both are equivalent).\n", "_____no_output_____" ] ], [ [ "# Grab the largest connected volume present\nlargest = threshed.connectivity(largest=True)\n# or: largest = threshed.extract_largest()\n\n# Get volume as numeric value\nlarge_volume = largest.volume\n\n# Display it!\nlargest.plot(show_grid=True, cpos=[-2, 5, 3])", "_____no_output_____" ] ], [ [ "------------------------------------------------------------------------\n\nSplitting Volumes {#split_vol_ref}\n=================\n\nWhat if instead, we wanted to split all the different connected bodies /\nvolumes in a dataset like the one above? We could use the\n`pyvista.DataSetFilters.split_bodies`{.interpreted-text role=\"func\"}\nfilter to extract all the different connected volumes in a dataset into\nblocks in a `pyvista.MultiBlock`{.interpreted-text role=\"class\"}\ndataset. For example, lets split the thresholded volume in the example\nabove:\n", "_____no_output_____" ] ], [ [ "# Load a simple example mesh\ndataset = examples.load_uniform()\ndataset.set_active_scalars(\"Spatial Cell Data\")\nthreshed = dataset.threshold_percent([0.15, 0.50], invert=True)\n\nbodies = threshed.split_bodies()\n\nfor i, body in enumerate(bodies):\n print(f\"Body {i} volume: {body.volume:.3f}\")", "_____no_output_____" ], [ "bodies.plot(show_grid=True, multi_colors=True, cpos=[-2, 5, 3])", "_____no_output_____" ] ], [ [ "------------------------------------------------------------------------\n\nA Real Dataset\n==============\n\nHere is a realistic training dataset of fluvial channels in the\nsubsurface. 
This will threshold the channels from the dataset then\nseparate each significantly large body and compute the volumes for each!\n\nLoad up the data and threshold the channels:\n", "_____no_output_____" ] ], [ [ "data = examples.load_channels()\nchannels = data.threshold([0.9, 1.1])", "_____no_output_____" ] ], [ [ "Now extract all the different bodies and compute their volumes:\n", "_____no_output_____" ] ], [ [ "bodies = channels.split_bodies()\n# Now remove all bodies with a small volume\nfor key in bodies.keys():\n b = bodies[key]\n vol = b.volume\n if vol < 1000.0:\n del bodies[key]\n continue\n # Now lets add a volume array to all blocks\n b.cell_arrays[\"TOTAL VOLUME\"] = np.full(b.n_cells, vol)", "_____no_output_____" ] ], [ [ "Print out the volumes for each body:\n", "_____no_output_____" ] ], [ [ "for i, body in enumerate(bodies):\n print(f\"Body {i:02d} volume: {body.volume:.3f}\")", "_____no_output_____" ] ], [ [ "And visualize all the different volumes:\n", "_____no_output_____" ] ], [ [ "bodies.plot(scalars=\"TOTAL VOLUME\", cmap=\"viridis\", show_grid=True)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
4a5f62b899ccdd8efbf79a63ce8625415f31d367
181,405
ipynb
Jupyter Notebook
Aula 55 - Correlacao cruzada exemplos/Correlacao Cruzada exemplos.ipynb
RicardoGMSilveira/codes_proc_de_sinais
e6a44d6322f95be3ac288c6f1bc4f7cfeb481ac0
[ "CC0-1.0" ]
8
2020-10-01T20:59:33.000Z
2021-07-27T22:46:58.000Z
Aula 55 - Correlacao cruzada exemplos/Correlacao Cruzada exemplos.ipynb
RicardoGMSilveira/codes_proc_de_sinais
e6a44d6322f95be3ac288c6f1bc4f7cfeb481ac0
[ "CC0-1.0" ]
null
null
null
Aula 55 - Correlacao cruzada exemplos/Correlacao Cruzada exemplos.ipynb
RicardoGMSilveira/codes_proc_de_sinais
e6a44d6322f95be3ac288c6f1bc4f7cfeb481ac0
[ "CC0-1.0" ]
9
2020-10-15T12:08:22.000Z
2021-04-12T12:26:53.000Z
734.433198
105,696
0.946429
[ [ [ "# Exemplo sobre a correlação cruzada\n\nA correlação cruzada é definida por\n\n\\begin{equation}\nR_{xy}(\\tau)=\\int_{-\\infty}^{\\infty}x(t)y(t+\\tau)\\mathrm{d} t\n\\tag{1}\n\\end{equation}\n\nConsiderede um navio a navegar por águas não muito conhecidas. Para navegar com segurança, o navio necessita ter uma noção da profundidade da coluna de água sobre a qual navega. É difícil inspecionar a coluna d'água por inspeção visual, já que a luz não se propaga bem na água. No entanto, podemos usar as ondas sonoras para tal. \n\n![boat.png](attachment:boat.png)\n\nAssim, o navio é equipado com uma fonte sonora e um hidrofone. A fonte emite um sinal na água, $s(t)$, que se propaga até o fundo e é, então refletido. O hidrofone, próximo à fonte sonora, captará o som direto, $s(t)$, e a reflexão - uma versão atrasada e reduzida do sinal emitido, $r_c s(t-\\Delta)$ . No entanto, ambos sinais são corrompidos por ruído, especialmente a reflexão. Assim, os sinais medidos são:\n\n\\begin{equation}\nx(t)=s(t) + n_x(t)\n\\end{equation}\n\n\\begin{equation}\ny(t)=s(t) + r_c s(t-\\Delta) + n_y(t)\n\\end{equation}\n\nVamos iniciar olhando para estes sinais.", "_____no_output_____" ] ], [ [ "# importar as bibliotecas necessárias\nimport numpy as np # arrays\nimport matplotlib.pyplot as plt # plots\nfrom scipy.stats import norm\nfrom scipy import signal\nplt.rcParams.update({'font.size': 14})\nimport IPython.display as ipd # to play signals\nimport sounddevice as sd", "_____no_output_____" ], [ "# Frequencia de amostragem e vetor temporal\nfs = 1000\ntime = np.arange(0, 2, 1/fs)\nDelta = 0.25\nr_c = 0.5\n\n# inicializar o gerador de números aleatórios\n#np.random.seed(0)\n\n# sinal s(t)\nst = np.random.normal(loc = 0, scale = 1, size = len(time))\n\n# Ruído de fundo\nn_x = np.random.normal(loc = 0, scale = 0.1, size = len(time))\nn_y = np.random.normal(loc = 0, scale = 1, size = len(time))\n\n# Sinais x(t) e y(t)\nxt = st + n_x # O sinal é totalmente contaminado por ruído\n\nyt = np.zeros(len(time)) + st + n_y # Inicialize - o sinal é totalmente contaminado por ruído\nyt[int(Delta*fs):] = yt[int(Delta*fs):] + r_c * st[:len(time)-int(Delta*fs)] # A partir de um certo instante temos a reflexão\n\n# plot signal\nplt.figure(figsize = (10, 6))\nplt.subplot(2,1,1)\nplt.plot(time, xt, linewidth = 1, color='b', alpha = 0.7)\nplt.grid(linestyle = '--', which='both')\nplt.title('Sinal emitidqo e contaminado por ruído')\nplt.ylabel(r'$x(t)$')\nplt.xlabel('Tempo [s]')\nplt.xlim((0, time[-1]))\nplt.ylim((-5, 5))\n\nplt.subplot(2,1,2)\nplt.plot(time, yt, linewidth = 1, color='b', alpha = 0.7)\nplt.grid(linestyle = '--', which='both')\nplt.title('Sinal gravado e contaminado por ruído')\nplt.ylabel(r'$y(t)$')\nplt.xlabel('Tempo [s]')\nplt.xlim((0, time[-1]))\nplt.ylim((-5, 5))\nplt.tight_layout()\n", "_____no_output_____" ] ], [ [ "# Como podemos estimar a distância até o fundo?\n\nVamos pensar em mensurar a auto-correlação de $y(t)$ e a correlação cruzada entre $x(t)$ e $y(t)$. Tente suar o conceito de estimadores ($E[\\cdot]$) para ter uma intuição a respeito. 
With these, you will be able to prove that\n\n\\begin{equation}\nR_{yy}(\\tau)=(1+r_{c}^{2})R_{ss}(\\tau) + R_{n_y n_y}(\\tau) + r_c R_{ss}(\\tau-\\Delta) + r_c R_{ss}(\\tau+\\Delta)\n\\end{equation}\n\n\\begin{equation}\nR_{xy}(\\tau)=R_{ss}(\\tau) + r_c R_{ss}(\\tau-\\Delta)\n\\end{equation}", "_____no_output_____" ] ], [ [ "# Let's compute the auto-correlation\nRyy = np.correlate(yt, yt, mode = 'same')\nRxy = np.correlate(xt, yt, mode = 'same')\n\ntau = np.linspace(-0.5*len(Rxy)/fs, 0.5*len(Rxy)/fs, len(Rxy))\n\n#tau = np.linspace(0, len(Rxy)/fs, len(Rxy))\n\n# plot the auto-correlation\nplt.figure(figsize = (10, 3))\nplt.plot(tau, Ryy/len(Ryy), linewidth = 1, color='b')\nplt.grid(linestyle = '--', which='both')\nplt.ylabel(r'$R_{yy}(\\tau)$')\n#plt.xlim((tau[0], tau[-1]))\nplt.xlabel(r'$\\tau$ [s]')\nplt.tight_layout()\n\n# plot the cross-correlation\nplt.figure(figsize = (10, 3))\nplt.plot(-tau,Rxy/len(Ryy), linewidth = 1, color='b')\nplt.grid(linestyle = '--', which='both')\nplt.ylabel(r'$R_{xy}(\\tau)$')\n#plt.xlim((tau[0], tau[-1]))\nplt.xlabel(r'$\\tau$ [s]')\nplt.tight_layout()", "_____no_output_____" ] ], [ [ "# Knowing the speed of sound in water...\n\nWe can compute the distance. $c_{a} = 1522$ [m/s].", "_____no_output_____" ] ], [ [ "find_peak = np.where(np.logical_and(Rxy/len(Ryy) >= 0.2, Rxy/len(Ryy) <= 0.5))\nlag = -tau[find_peak[0][0]]\ndistance = 0.5*1522*lag\n\nprint('The detected delay is: {:.2f} [s]'.format(lag))\nprint('The distance to the bottom is: {:.2f} [m]'.format(distance))", "The detected delay is: 0.25 [s]\nThe distance to the bottom is: 189.96 [m]\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
4a5f6f7209f920176c44b54f2459c912b9584ae2
10,268
ipynb
Jupyter Notebook
tasks.ipynb
UrFU-Python-GitHub-Classroom/da
07734a079a675a83bdb4762af0e37577d0067c72
[ "MIT" ]
null
null
null
tasks.ipynb
UrFU-Python-GitHub-Classroom/da
07734a079a675a83bdb4762af0e37577d0067c72
[ "MIT" ]
null
null
null
tasks.ipynb
UrFU-Python-GitHub-Classroom/da
07734a079a675a83bdb4762af0e37577d0067c72
[ "MIT" ]
1
2021-12-25T09:33:10.000Z
2021-12-25T09:33:10.000Z
20.373016
122
0.428711
[ [ [ "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline", "_____no_output_____" ] ], [ [ "# Загрузка данных", "_____no_output_____" ] ], [ [ "works = pd.read_csv(\"works.csv\")", "_____no_output_____" ], [ "works.head()", "_____no_output_____" ], [ "works.info()", "_____no_output_____" ] ], [ [ "## 1. Узнать общее количество записей в датасете", "_____no_output_____" ], [ "1. Узнать количество записей в DataFrame можно в информации о DataFrame в поле RangeIndex", "_____no_output_____" ] ], [ [ "works.info()", "_____no_output_____" ] ], [ [ "2. Узнать количество записей в DataFrame можно по первому элементу tuple, содержащего размерность DataFrame", "_____no_output_____" ] ], [ [ "works.shape[0]", "_____no_output_____" ] ], [ [ "3. Узнать количество записей в DataFrame можно с помощью функции len и индекса DataFrame", "_____no_output_____" ] ], [ [ "len(works.index)", "_____no_output_____" ] ], [ [ "# 2. Узнать количество мужчин и женщин в датасете", "_____no_output_____" ], [ "1. Применяя метод value_counts для столбца можно узнать все уникальные значения в данном столбце и их количество", "_____no_output_____" ] ], [ [ "works[\"gender\"].value_counts()", "_____no_output_____" ] ], [ [ "2. Чтобы найти количество определенных значений в столбце мы можем использовать булевую индексацию", "_____no_output_____" ] ], [ [ "# Мужчины\n# works[works[\"gender\"] == \"?\"]", "_____no_output_____" ], [ "# Женщины", "_____no_output_____" ] ], [ [ "# 3. Узнать сколько значений в столбце skills не NAN", "_____no_output_____" ], [ "1. Для этого применим функцию count к столбцу skills", "_____no_output_____" ] ], [ [ "", "_____no_output_____" ] ], [ [ "2. Также данную информацию можно получить с помощью метода info DataFrame", "_____no_output_____" ] ], [ [ "", "_____no_output_____" ] ], [ [ "# 4. Получить все заполненные скиллы", "_____no_output_____" ] ], [ [ "# Функция dropna", "_____no_output_____" ], [ "", "_____no_output_____" ] ], [ [ "# 5. Вывести зарплату только у тех, у которых в скиллах есть Python (Питон)", "_____no_output_____" ] ], [ [ "", "_____no_output_____" ] ], [ [ "# 6. Построить перцентили по заработной плате у мужчин и женщин.", "_____no_output_____" ] ], [ [ "# Использовать query и quantile\n# men_salary = \nmen_salary", "_____no_output_____" ], [ "fig, ax = plt.subplots()\nax.plot(percentiles, men_salary)\nplt.xlabel(\"Перцентили\")\nplt.ylabel(\"Зарплата мжучин\")\n\nplt.show()", "_____no_output_____" ], [ "# Женщины", "_____no_output_____" ], [ "fig, ax = plt.subplots()\nax.plot(percentiles, women_salary)\nplt.xlabel(\"Перцентили\")\nplt.ylabel(\"Зарплата женщин\")\n\nplt.show()", "_____no_output_____" ] ], [ [ "# 7*. Построить графики распределения по заработной плате мужчин и женщин в зависимости от высшего образования", "_____no_output_____" ] ], [ [ "# Мужчины\n# men_salary = works.groupby(\"???\").agg(\"mean\").reset_index()\nmen_salary", "_____no_output_____" ], [ "# Женщины\n# women_salary = \nwomen_salary", "_____no_output_____" ], [ "# works.query(\"gender == '??' and educationType == '??'\").hist(bins=100, alpha=0.5)", "_____no_output_____" ], [ "", "_____no_output_____" ], [ "", "_____no_output_____" ], [ "", "_____no_output_____" ], [ "", "_____no_output_____" ], [ "", "_____no_output_____" ], [ "", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a5f6faa8e8544295914720f3bad76ccdd878341
833,903
ipynb
Jupyter Notebook
Vehicle Detection.ipynb
MaximeLeybaert/CarND-Vehicle-Detection-master
24050fbafee2ac8e0f095313d87e7e0a8aac048e
[ "MIT" ]
null
null
null
Vehicle Detection.ipynb
MaximeLeybaert/CarND-Vehicle-Detection-master
24050fbafee2ac8e0f095313d87e7e0a8aac048e
[ "MIT" ]
null
null
null
Vehicle Detection.ipynb
MaximeLeybaert/CarND-Vehicle-Detection-master
24050fbafee2ac8e0f095313d87e7e0a8aac048e
[ "MIT" ]
null
null
null
454.69084
141,428
0.932586
[ [ [ "## Vehicle Detection", "_____no_output_____" ], [ "### Import ", "_____no_output_____" ], [ "Import of the used packages.", "_____no_output_____" ] ], [ [ "import numpy as np\nimport os\nimport cv2\nimport pickle\nimport glob\nimport matplotlib.image as mpimg\nimport matplotlib.pyplot as plt\nfrom moviepy.editor import VideoFileClip\nfrom IPython.display import HTML\nfrom skimage.feature import hog\nimport time\nfrom sklearn.svm import LinearSVC\nfrom sklearn.utils import shuffle\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn import linear_model\nimport Augmentor\n%matplotlib inline", "_____no_output_____" ] ], [ [ "### Load training data", "_____no_output_____" ], [ "The following code load the raw data of car and none car images that are stored in folder as .PNG images. It prints their lenght and the ratio of car to none car image. Ratio is near to 100% meaning their approximatively have the same size which is valuable for training the classifier. I have added the possibility to increase the dataset using Augmentor pipeline. \n\nHowever, the accuracy of the classifier din't imprve much meaning it tends to overfit. ", "_____no_output_____" ] ], [ [ "augment_dataCar = False\nif augment_dataCar is True : \n p = Augmentor.Pipeline('train_data/vehicles/KITTI_extracted',save_format='PNG')\n p.rotate(probability=0.8, max_left_rotation=2, max_right_rotation=2)\n p.zoom(probability=0.8, min_factor=1.1, max_factor=1.4)\n p.flip_left_right(probability=0.5)\n p.random_distortion(probability=0.6, magnitude = 1, grid_width = 8, grid_height = 8)\n p.sample(8000)\n p.process()\n\naugment_dataNonCar = False\nif augment_dataNonCar is True : \n p = Augmentor.Pipeline('train_data/non-vehicles')\n p.rotate(probability=0.8, max_left_rotation=2, max_right_rotation=2)\n p.zoom(probability=0.8, min_factor=1, max_factor=1.2)\n p.flip_left_right(probability=0.5)\n p.random_distortion(probability=0.6, magnitude = 1, grid_width = 8, grid_height = 8)\n p.sample(5000)\n p.process()", "_____no_output_____" ], [ "def renamedir() : \n dirname = \"train_data/vehicles/KITTI_extracted/output\"\n for i, filename in enumerate(os.listdir(dirname)):\n os.rename(dirname + \"/\" + filename, dirname +\"/\"+ str(i) + \".png\")\nif (augment_dataCar and augment_dataNonCar) : \n renamedir()", "_____no_output_____" ], [ "car_images = glob.glob('train_data/vehicles/KITTI_extracted/output/*.png')\nnoncar_images = glob.glob('train_data/non-vehicles/output/**/*.png')\nratio = (len(car_images)/ len(noncar_images))*100\nprint(len(car_images), len(noncar_images), round(ratio))", "13966 13968 100\n" ], [ "def show_image_compare(image1,image2) : \n f, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))\n #f.tight_layout()\n ax1.imshow(image1)\n ax1.set_title('Dataset car image', fontsize=20)\n ax2.imshow(image2)\n ax2.set_title('Data noncar image', fontsize=20)\nrand = np.random.randint(0,len(car_images))\nshow_image(mpimg.imread(car_images[rand]), mpimg.imread(noncar_images[rand]))", "_____no_output_____" ], [ "def show_image_compare_feature_extraction(image1,image2,image3,original,colorspace) : \n f, (ax1, ax2, ax3, ax4) = plt.subplots(1, 4, figsize=(16, 8))\n #f.tight_layout()\n ax1.imshow(image1, cmap='gray')\n ax1.set_title('channel1', fontsize=20)\n ax2.imshow(image2, cmap='gray')\n ax2.set_title('channel2', fontsize=20)\n ax3.imshow(image3, cmap='gray')\n ax3.set_title('channel3', fontsize=20)\n ax4.imshow(original, cmap='gray')\n ax4.set_title('original', 
fontsize=20)\n ax1.set_xlabel(colorspace)", "_____no_output_____" ] ], [ [ "### Convert image to Histogram of Oriented Gradient (HOG)", "_____no_output_____" ], [ "HOG stands for histogram of oriented gradients. A built-in function is provided within the scikit-image library. The parameters of the hog function are listed below. \n* <b>img </b>: input image \n* <b>orient </b>: number of possible orientations of the gradient\n* <b>pix_per_cell </b>: size (in pixels) of a cell\n* <b>cell_per_block </b>: Number of cells in each block\n* <b>vis </b>: Allow returning an image of the gradient\n* <b>feature_vec </b>: Allow returning the data as a feature vector\n\nThe ``get_hog_features`` function returns the extracted features and/or an example image within a ``numpy.ndarray`` depending on the value of the ``vis`` (``True`` or ``False``) parameter. \n\n<i>The code is copied from the lesson material.</i> ", "_____no_output_____" ] ], [ [ "def get_hog_features(img, orient, pix_per_cell, cell_per_block, \n vis=False, feature_vec=True):\n # Call with two outputs if vis==True\n if vis == True:\n features, hog_image = hog(img, orientations=orient, \n pixels_per_cell=(pix_per_cell, pix_per_cell),\n cells_per_block=(cell_per_block, cell_per_block), \n transform_sqrt=False, \n visualise=vis, feature_vector=feature_vec)\n return features, hog_image\n # Otherwise call with one output\n else: \n features = hog(img, orientations=orient, \n pixels_per_cell=(pix_per_cell, pix_per_cell),\n cells_per_block=(cell_per_block, cell_per_block), \n transform_sqrt=True, \n visualise=vis, feature_vector=feature_vec)\n return features", "_____no_output_____" ] ], [ [ "### Method to Extract HOG Features from an Array of Car and Non-Car Images", "_____no_output_____" ], [ "The ``extract_features`` function extracts features from a list of images and returns them as a ``list``. \n\n<u>Note</u>: This function could also be used to call bin_spatial() and color_hist() (as in the lessons) to extract flattened spatial color features and color histogram features and combine them all to be used together for classification. \n\n[Jeremy Shannon](https://github.com/jeremy-shannon/CarND-Vehicle-Detection/blob/master/vehicle_detection_project.ipynb) has provided in his GitHub an insightful study of the influence of the parameters of the ``get_hog_features`` function. He has chosen YUV color space, 11 orientations, 16 Pixels Per Cell, 2 Cells Per Block, and to use ALL of the color channels. It provided him a 98.17% accuracy and a 55.22 s extraction time for the entire dataset. His comparison can be found in his GitHub. \n\nAs concluded by this [document](https://www.researchgate.net/publication/224200365_Color_exploitation_in_hog-based_traffic_sign_detection) (which analyzed the effect of color spaces for traffic sign classification), YCrCb and CIELab color spaces perform well. YUV shares similarities with YCrCb ([Tushar Chugh](https://github.com/TusharChugh/Vehicle-Detection-HOG/blob/master/src/vehicle-detection.ipynb)), making YUV a good candidate, as shown by various Medium articles. \n\nFrom [Tushar Chugh](https://github.com/TusharChugh/Vehicle-Detection-HOG/blob/master/src/vehicle-detection.ipynb) I concluded that color histograms and spatial histograms do not provide a strong accuracy improvement while they add extraction time. Thus, their small improvement is not a strong asset in this case. \n\nIn the case of feature extraction for autonomous driving, the accuracy is as important as the extraction time. 
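\nFor scale, here is a quick sketch of the feature-vector length the parameters chosen below imply (assuming the 64x64 training images used in this project; the numbers come from the HOG block arithmetic, not from the original notebook):\n\n```python\n# Back-of-envelope HOG feature count for one 64x64 image:\norient, pix_per_cell, cell_per_block, img_size = 12, 8, 2, 64\nblocks = img_size // pix_per_cell - cell_per_block + 1      # 7 block positions per axis\nper_channel = blocks * blocks * cell_per_block**2 * orient  # 49 * 4 * 12 = 2352\nprint(per_channel * 3)                                      # 7056 features with hog_channel='ALL'\n```\n\nEach image therefore becomes a vector of a few thousand numbers before classification. 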
Hence, the choice of Jeremy Shannon was logical, but I think it might be interesting to investigate another parameter set which provides a good extraction time and good accuracy, even if neither is the best. Jeremy Shannon didn't apply dataset augmentation, which could provide an accuracy improvement. The parameter set used here brings a 1.52% accuracy decrease and a 13.19 s improvement, meaning the data augmentation should compensate for the accuracy drop to be worthwhile (this assumption turned out to be false due to overfitting). \n\nAfter some experimentation, the LAB colorspace doesn't work as well as YUV or YCrCb, hence I continued with YUV. The number of orientations affects the computation; I found that 12 possible gradient orientations is a good trade-off. \n\n<i>The code is copied from the lesson material. Color Histogram and Spatial Binning have been omitted.</i>", "_____no_output_____" ] ], [ [ "def extract_features(imgs, cspace='RGB', orient=9, \n pix_per_cell=8, cell_per_block=2, hog_channel=0):\n # Create a list to append feature vectors to\n features = []\n # Iterate through the list of images\n for file in imgs:\n # Read in each one by one\n image = mpimg.imread(file)\n #image = np.copy(np.sqrt(np.mean(np.square(image))))\n # apply color conversion if other than 'RGB'\n if cspace != 'RGB':\n if cspace == 'HSV':\n feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)\n elif cspace == 'LUV':\n feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2LUV)\n elif cspace == 'HLS':\n feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2HLS)\n elif cspace == 'YUV':\n feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2YUV)\n elif cspace == 'YCrCb':\n feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2YCrCb)\n elif cspace == 'LAB':\n feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2LAB)\n else: feature_image = np.copy(image) \n\n # Call get_hog_features() with vis=False, feature_vec=True\n if hog_channel == 'ALL':\n hog_features = []\n for channel in range(feature_image.shape[2]):\n hog_features.append(get_hog_features(feature_image[:,:,channel], \n orient, pix_per_cell, cell_per_block, \n vis=False, feature_vec=True))\n hog_features = np.ravel(hog_features) \n else:\n hog_features = get_hog_features(feature_image[:,:,hog_channel], orient, \n pix_per_cell, cell_per_block, vis=False, feature_vec=True)\n # Append the new feature vector to the features list\n features.append(hog_features)\n # Return list of feature vectors\n return features", "_____no_output_____" ] ], [ [ "### Preparing the data", "_____no_output_____" ] ], [ [ "# Feature extraction parameters\ncolorspace = 'YUV'\norient = 12\npix_per_cell = 8\ncell_per_block = 2\nhog_channel = 'ALL'", "_____no_output_____" ], [ "# Extracting features from car and noncar images\ncar_features = extract_features(car_images, cspace=colorspace, orient=orient, \n pix_per_cell=pix_per_cell, cell_per_block=cell_per_block, \n hog_channel=hog_channel)\nnotcar_features = extract_features(noncar_images, cspace=colorspace, orient=orient, \n pix_per_cell=pix_per_cell, cell_per_block=cell_per_block, \n hog_channel=hog_channel)", "_____no_output_____" ], [ "img = mpimg.imread('./test_images/1.bmp')\nimg2 = cv2.cvtColor(img,cv2.COLOR_RGB2LAB)\nt=time.time()\ntemp, im1 = get_hog_features(img2[:,:,0], orient, pix_per_cell, cell_per_block, vis=True, feature_vec=True)\ntemp, im2 = get_hog_features(img2[:,:,1], orient, pix_per_cell, cell_per_block, vis=True, feature_vec=True)\ntemp, im3 = get_hog_features(img2[:,:,2], orient, pix_per_cell, cell_per_block, vis=True, feature_vec=True)\nt2 = 
time.time()\nshow_image_compare_feature_extraction(im1,im2,im3,img,'LAB')\nprint(round(t2-t,2), 's of extraction time per image (ALL channel)')", "16.86 s of extraction time per image (ALL channel)\n" ] ], [ [ "In the code below, ```car_features``` and ```notcar_features``` are stacked vertically. The created vector is then scaled using ```StandardScaler``` (it removes the mean and scales the dataset to unit variance). The label vector is created by horizontally stacking ones (cars) and zeros (non-cars). The training and test sets are then created randomly, with a 20% test proportion, and shuffled as well. ", "_____no_output_____" ] ], [ [ "# Create an array stack of feature vectors\nX = np.vstack((car_features, notcar_features)).astype(np.float64) \n#print(X[0])\n#StandardScaler decrease classifying performances (?)\n # Fit a per-column scaler\nX_scaler = StandardScaler().fit(X)\npickle.dump(X_scaler, open('X_scaler', 'wb'))\n # Apply the scaler to X\nscaled_X = X_scaler.transform(X)\n# Fixed: dump the scaled features (this previously dumped X_scaler a second time)\npickle.dump(scaled_X, open('scaled_X', 'wb'))\n#print(scaled_X[0])\n# Define the labels vector\ny = np.hstack((np.ones(len(car_features)), np.zeros(len(notcar_features))))\n\n# Split X and y (data) into training and testing set\n#rand_state = np.random.randint(0, 100)\nX_train, X_test, y_train, y_test = train_test_split(\n scaled_X, y, test_size=0.2, random_state=42)", "_____no_output_____" ] ], [ [ "### Train a classifier", "_____no_output_____" ], [ "The SGDClassifier used here is a variant of a linear support vector machine model. I have tested Naive Bayes and decision tree models, which showed similar accuracy results. The SGDClassifier, however, provides an improved training time at the cost of some accuracy, therefore I preferred using it instead of the others. ", "_____no_output_____" ] ], [ [ "# Use a linear SVC \nSGDC = linear_model.SGDClassifier()\n# Check the training time for the SVC\nt = time.time()\nSGDC.fit(X_train, y_train)\nt2 = time.time()\nprint(round(t2-t, 2), 'Seconds to train SVC...')\n# Check the score of the SVC\nprint('Test Accuracy of SVC = ', round(SGDC.score(X_test, y_test), 4))\n# Check the prediction time for a single sample\nt=time.time()\nn_predict = 10\nprint('My SGDC predicts: ', SGDC.predict(X_test[0:n_predict]))\nprint('For these',n_predict, 'labels: ', y_test[0:n_predict])\nt2 = time.time()\nprint(round(t2-t, 5), 'Seconds to predict', n_predict,'labels with SVC')\n\n# save the model to disk\nfilename = 'model'+str(colorspace)+str(orient)+str(pix_per_cell)+str(cell_per_block)+str(hog_channel)+'.sav'\npickle.dump(SGDC, open(filename, 'wb'))", "1.15 Seconds to train SVC...\nTest Accuracy of SVC = 0.9835\nMy SGDC predicts: [ 0. 1. 1. 0. 1. 0. 1. 0. 1. 1.]\nFor these 10 labels: [ 0. 1. 1. 0. 1. 0. 1. 0. 1. 1.]\n0.01099 Seconds to predict 10 labels with SVC\n" ], [ "# load the model from disk\nfilename = 'model'+str(colorspace)+str(orient)+str(pix_per_cell)+str(cell_per_block)+hog_channel+'.sav'\nSGDC = pickle.load(open(filename, 'rb'))\n\nX_scaler = pickle.load(open('X_scaler', 'rb'))\nscaled_X = pickle.load(open('scaled_X', 'rb'))", "_____no_output_____" ] ], [ [ "### Sliding Windows", "_____no_output_____" ], [ "```find_cars``` is a function which extracts HOG features from an entire image and then applies a sliding window technique to the HOG image. Each frame taken from the sliding windows is analyzed by the SGDC classifier to predict whether the frame contains a vehicle or not. 
If a frame turns out to contain a car, the box coordinates of the predicted vehicle are calculated and returned by the function. An image with the boxed vehicles is also returned. \n\nThe function ```apply_sliding_window``` is in charge of applying the ```find_cars``` sliding window method using multiple window sizes, which allows the procedure to be scale-invariant. The y start and stop positions define a band where the sliding window technique is applied, which allows the search to focus on a region of interest (ROI). The scale of the sliding window is chosen so that an entire car fits in the frame. \n\nI am facing a computation time problem: computing the sliding window technique takes 15 s per image, which implies a tremendous amount of time to process an entire video. This problem should be solved, but I haven't found its cause yet. \n\nEven if the classifier provides great accuracy, the number of false positives and negatives is still high. Enlarging the ROI and adding sliding window searches wasn't viable due to the very high computation time. Color histogram features might avoid those false positives, therefore this possibility should be tested. \n\n<i>The code is inspired by the lesson material.</i>", "_____no_output_____" ] ], [ [ "# convert the given image into the chosen color space\ndef convert_color(img, conv='RGB2YCrCb'):\n if conv == 'RGB2YCrCb':\n return cv2.cvtColor(img, cv2.COLOR_RGB2YCrCb)\n if conv == 'BGR2YCrCb':\n return cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)\n if conv == 'RGB2LUV':\n return cv2.cvtColor(img, cv2.COLOR_RGB2LUV)\n if conv == 'RGB2YUV':\n return cv2.cvtColor(img, cv2.COLOR_RGB2YUV)\n \n# Here is your draw_boxes function from the previous exercise (from lecture)\ndef draw_boxes(img, bboxes, color=(0, 0, 255), thick=6):\n # Make a copy of the image\n imcopy = np.copy(img)\n random_color = False\n # Iterate through the bounding boxes\n for bbox in bboxes:\n if color == 'random' or random_color:\n color = (np.random.randint(0,255), np.random.randint(0,255), np.random.randint(0,255))\n random_color = True\n # Draw a rectangle given bbox coordinates\n cv2.rectangle(imcopy, bbox[0], bbox[1], color, thick)\n # Return the image copy with boxes drawn\n return imcopy", "_____no_output_____" ], [ "def find_cars(img, ystart, ystop, scale, SGDC, X_scaler, orient, pix_per_cell, cell_per_block):\n \n draw_img = np.copy(img)\n img = img.astype(np.float32)/255\n \n img_tosearch = img[ystart:ystop,:,:] # sub-sampling\n ctrans_tosearch = convert_color(img_tosearch, conv='RGB2YUV')\n if scale != 1:\n imshape = ctrans_tosearch.shape\n ctrans_tosearch = cv2.resize(ctrans_tosearch, (np.int(imshape[1]/scale), np.int(imshape[0]/scale)))\n \n ch1 = ctrans_tosearch[:,:,0]\n ch2 = ctrans_tosearch[:,:,1]\n ch3 = ctrans_tosearch[:,:,2]\n\n # Define blocks and steps as above\n nxblocks = (ch1.shape[1] // pix_per_cell) - cell_per_block + 1\n nyblocks = (ch1.shape[0] // pix_per_cell) - cell_per_block + 1 \n nfeat_per_block = orient*cell_per_block**2\n \n # 64 was the original sampling rate, with 8 cells and 8 pix per cell\n window = 64\n nblocks_per_window = (window // pix_per_cell) - cell_per_block + 1\n #nblocks_per_window = (window // pix_per_cell)-1 \n\n cells_per_step = 2 # Instead of overlap, define how many cells to step\n nxsteps = (nxblocks - nblocks_per_window) // cells_per_step\n nysteps = (nyblocks - nblocks_per_window) // cells_per_step\n \n # Compute individual channel HOG features for the entire image\n hog1 = get_hog_features(ch1, orient, pix_per_cell, cell_per_block, vis=False, feature_vec=False)\n hog2 = 
get_hog_features(ch2, orient, pix_per_cell, cell_per_block, vis=False, feature_vec=False)\n hog3 = get_hog_features(ch3, orient, pix_per_cell, cell_per_block, vis=False, feature_vec=False)\n \n bboxes = []\n for xb in range(nxsteps):\n for yb in range(nysteps):\n ypos = yb*cells_per_step\n xpos = xb*cells_per_step\n # Extract HOG for this patch\n hog_feat1 = hog1[ypos:ypos+nblocks_per_window, xpos:xpos+nblocks_per_window].ravel() \n hog_feat2 = hog2[ypos:ypos+nblocks_per_window, xpos:xpos+nblocks_per_window].ravel() \n hog_feat3 = hog3[ypos:ypos+nblocks_per_window, xpos:xpos+nblocks_per_window].ravel() \n hog_features = np.hstack((hog_feat1, hog_feat2, hog_feat3))\n\n xleft = xpos*pix_per_cell\n ytop = ypos*pix_per_cell\n\n # Extract the image patch\n subimg = cv2.resize(ctrans_tosearch[ytop:ytop+window, xleft:xleft+window], (64,64))\n \n # Get color features\n #spatial_features = bin_spatial(subimg, size=spatial_size)\n #hist_features = color_hist(subimg, nbins=hist_bins)\n\n # Scale features and make a prediction\n test_stacked = np.hstack(hog_features).reshape(1, -1)\n test_features = X_scaler.transform(test_stacked) \n #test_features = scaler.transform(np.array(features).reshape(1, -1))\n #test_features = X_scaler.transform(np.hstack((shape_feat, hist_feat)).reshape(1, -1)) \n test_prediction = SGDC.predict(test_features)\n \n if test_prediction == 1:\n xbox_left = np.int(xleft*scale)\n ytop_draw = np.int(ytop*scale)\n win_draw = np.int(window*scale)\n cv2.rectangle(draw_img,(xbox_left, ytop_draw+ystart),(xbox_left+win_draw,ytop_draw+win_draw+ystart),(0,0,255),6) \n bboxes.append(((int(xbox_left), int(ytop_draw+ystart)),(int(xbox_left+win_draw),int(ytop_draw+win_draw+ystart))))\n\n return draw_img, bboxes", "_____no_output_____" ], [ "def apply_sliding_window(image, SGDC, X_scaler, orient, pix_per_cell, cell_per_block): \n t = time.time()\n #rectangles = []\n bboxes = []\n ystart = 400\n ystop = 500 \n out_img, bboxes1 = find_cars(image, ystart, ystop, 1.0, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)\n ystart = 400\n ystop = 500 \n out_img, bboxes2 = find_cars(image, ystart, ystop, 1.3, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)\n ystart = 410\n ystop = 500 \n out_img, bboxes3 = find_cars(out_img, ystart, ystop, 1.4, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)\n ystart = 420\n ystop = 556 \n out_img, bboxes4 = find_cars(out_img, ystart, ystop, 1.6, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)\n ystart = 430\n ystop = 556 \n out_img, bboxes5 = find_cars (out_img, ystart, ystop, 1.8, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)\n ystart = 430\n ystop = 556 \n out_img, bboxes6 = find_cars (out_img, ystart, ystop, 2.0, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)\n ystart = 440\n ystop = 556 \n out_img, bboxes7 = find_cars (out_img, ystart, ystop, 1.9, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)\n ystart = 400\n ystop = 556 \n out_img, bboxes8 = find_cars (out_img, ystart, ystop, 1.3, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)\n ystart = 400\n ystop = 556 \n out_img, bboxes9 = find_cars (out_img, ystart, ystop, 2.2, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)\n ystart = 500 \n ystop = 656 \n out_img, bboxes10 = find_cars (out_img, ystart, ystop, 3.0, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)\n bboxes.extend(bboxes1)\n bboxes.extend(bboxes2)\n bboxes.extend(bboxes3)\n bboxes.extend(bboxes4)\n bboxes.extend(bboxes5)\n bboxes.extend(bboxes6)\n bboxes.extend(bboxes7)\n bboxes.extend(bboxes8)\n 
bboxes.extend(bboxes9)\n bboxes.extend(bboxes10)\n \n t2 = time.time()\n print(round(t2-t,2), 'apply sliding window')\n return out_img, bboxes", "_____no_output_____" ], [ "img = mpimg.imread('./test_images/3.bmp')\nystart = 400\nystop = 596\nscale = 1.2\n#plt.imshow(out_img)\nt=time.time()\nout, bo = apply_sliding_window(img, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)\nt2 = time.time()\nprint(round(t2-t,2), 's of execution per frame')", "14.81 apply sliding window\n14.81 s of execution per frame\n" ], [ "temp=draw_boxes(img,bo)\nplt.imshow(temp)", "_____no_output_____" ] ], [ [ "### Heatmap", "_____no_output_____" ], [ "To avoid false positives, a heat map is used. The principle is the following: first a map is created, a zero array of the same size as the analyzed image. Once a car is detected and a box is found, the boxed region of the array is incremented by 1. Therefore, a zero on the heatmap means that no car has ever been found there during the process. When the value is one, a vehicle has been found once, and when the value is greater than one, a vehicle has been found multiple times. The heatmap is thus a measure of the certainty of the prediction. Applying a threshold of 1 (meaning we consider a single positive prediction in an area not confident enough) allows one to filter out the false positives. ", "_____no_output_____" ] ], [ [ "from scipy.ndimage.measurements import label\n\ndef add_heat(heatmap, bbox_list):\n # Iterate through list of bboxes\n for box in bbox_list:\n # Add += 1 for all pixels inside each bbox\n # Assuming each \"box\" takes the form ((x1, y1), (x2, y2))\n heatmap[box[0][1]:box[1][1], box[0][0]:box[1][0]] += 1\n\n # Return updated heatmap\n return heatmap\n\ndef apply_threshold(heatmap, threshold):\n # Zero out pixels below the threshold\n heatmap[heatmap <= threshold] = 0\n # Return thresholded map\n return heatmap\n\ndef draw_labeled_bboxes(img, labels):\n # Iterate through all detected cars\n for car_number in range(1, labels[1]+1):\n # Find pixels with each car_number label value\n nonzero = (labels[0] == car_number).nonzero()\n # Identify x and y values of those pixels\n nonzeroy = np.array(nonzero[0])\n nonzerox = np.array(nonzero[1])\n # Define a bounding box based on min/max x and y\n bbox = ((np.min(nonzerox), np.min(nonzeroy)), (np.max(nonzerox), np.max(nonzeroy)))\n # Draw the box on the image\n cv2.rectangle(img, bbox[0], bbox[1], (0,0,255), 6)\n # Return the image\n return img", "_____no_output_____" ], [ "heatmap = np.zeros_like(out[:,:,0]).astype(np.float)\nheatmap = add_heat(heatmap,bo)\nheatmap = apply_threshold(heatmap, 1)\nlabels = label(heatmap)\n\nprint(labels[1], 'cars found')\nplt.imshow(labels[0], cmap='gray')\n\n# Read in the last image above\n#image = mpimg.imread('img105.jpg')\n# Draw bounding boxes on a copy of the image\ndraw_img = draw_labeled_bboxes(np.copy(img), labels)\n# Display the image\nplt.imshow(draw_img)\n\n# Note: show_image_compare_heatmap is defined in the next cell; run that cell once before this call\nshow_image_compare_heatmap(img,heatmap,temp,draw_img)", "0 cars found\n" ], [ "def show_image_compare_heatmap(img,heatmap,temp,boxed) : \n f, (ax1, ax2, ax3, ax4) = plt.subplots(1, 4, figsize=(16, 8))\n #f.tight_layout()\n ax1.imshow(img, cmap='gray')\n ax1.set_title('Original', fontsize=20)\n ax2.imshow(temp, cmap='gray')\n ax2.set_title('Boxed cars', fontsize=20)\n ax3.imshow(heatmap, cmap='gray')\n ax3.set_title('Heatmap (thresholded)', fontsize=20)\n ax4.imshow(boxed, cmap='gray')\n ax4.set_title('Filtered boxed image', fontsize=20)", "_____no_output_____" ] ], [ [ "### Vehicle detection pipeline", 
"_____no_output_____" ] ], [ [ "def vehicle_detection_piepline(image) : \n # Find cars in image using multiple sliding window method \n # Filter false positive using tresholded heatmap\n # label the vehicles\n # draw the boxes on the image\n detected_image, boxes = apply_sliding_window(image, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)\n heatmap = np.zeros_like(detected_image[:,:,0]).astype(np.float)\n heatmap = add_heat(heatmap,boxes)\n heatmap = apply_threshold(heatmap, 2)\n labels = label(heatmap)\n draw_img = draw_labeled_bboxes(np.copy(image), labels)\n return draw_img\n \nimage_output_vehicle = vehicle_detection_piepline(mpimg.imread('./test_images/3.bmp'))\nplt.imshow(image_output_vehicle)", "14.46 apply sliding window\n" ], [ "from collections import deque\nimport imageio\nimageio.plugins.ffmpeg.download()\nfrom moviepy.editor import VideoFileClip\nfrom IPython.display import HTML\n\noutput = 'test_result.mp4'\nclip = VideoFileClip(\"test_video.mp4\")\nvideo_clip = clip.fl_image(vehicle_detection_piepline)\n%time video_clip.write_videofile(output, audio=False)", "14.33 apply sliding window\n[MoviePy] >>>> Building video test_result.mp4\n[MoviePy] Writing video test_result.mp4\n" ], [ "history = deque(maxlen = 8)\noutput = 'result.mp4'\nclip = VideoFileClip(\"project_video.mp4\")\nvideo_clip = clip.fl_image(vehicle_detection_piepline)\n%time video_clip.write_videofile(output, audio=False)", "[MoviePy] >>>> Building video result.mp4\n[MoviePy] Writing video result.mp4\n" ] ], [ [ "The output video can be found [here](https://youtu.be/Gt2ZO6IfRfo)", "_____no_output_____" ], [ "### temp ", "_____no_output_____" ] ], [ [ "def apply_sliding_window(img, SGDC, X_scaler, orient, pix_per_cell, cell_per_block): \n t = time.time()\n rectangles = []\n ystart = 400\n ystop = 464\n scale = 1.0\n boxen_image, bbobx = find_cars(img, ystart, ystop, scale, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)\n rectangles.extend(bbobx)\n \n ystart = 400\n ystop = 464\n scale = 1.3\n boxen_image, bbobx8 = find_cars(img, ystart, ystop, scale, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)\n rectangles.extend(bbobx8)\n \n ystart = 416\n ystop = 480\n scale = 1.0\n boxen_image, bbobx1 = find_cars(img, ystart, ystop, scale, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)\n rectangles.extend(bbobx)\n \n ystart = 400\n ystop = 496\n scale = 1.5\n boxen_image, bbobx2 = find_cars(img, ystart, ystop, scale, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)\n rectangles.extend(bbobx2)\n\n ystart = 432\n ystop = 528\n scale = 1.5\n boxen_image, bbobx3 = find_cars(img, ystart, ystop, scale, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)\n rectangles.extend(bbobx3)\n \n ystart = 400\n ystop = 528\n scale = 2.0\n boxen_image, bbobx4 = find_cars(img, ystart, ystop, scale, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)\n rectangles.extend(bbobx4)\n \n ystart = 432\n ystop = 560\n scale = 2.0\n boxen_image, bbobx5 = find_cars(img, ystart, ystop, scale, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)\n rectangles.extend(bbobx5)\n \n ystart = 400\n ystop = 596\n scale = 1.2\n boxen_image, bbobx6 = find_cars(img, ystart, ystop, scale, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)\n rectangles.extend(bbobx6)\n \n ystart = 464\n ystop = 660\n scale = 3.5\n boxen_image, bbobx7 = find_cars(img, ystart, ystop, scale, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)\n rectangles.extend(bbobx7)\n \n ystart = 400\n ystop = 556\n scale = 1.3\n #boxen_image, bbobx9 = 
find_cars (img, ystart, ystop, scale, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)\n #rectangles.extend(bbobx9)\n \n ystart = 400\n ystop = 556\n scale = 2.2 \n #boxen_image, bbobx10 = find_cars (img, ystart, ystop, sacle, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)\n #rectangles.extend(bbobx10)\n t2 = time.time()\n print(round(t2-t,2), 'apply sliding window')\n return boxen_image, rectangles", "_____no_output_____" ], [ "def vehicle_detection_piepline(image) : \n # Find cars in image using sliding window \n # Filter false positive using tresholded heatmap\n # label the vehicles\n rectangles = []\n ystart = 400\n ystop = 464\n scale = 1.0\n boxen_image, bbobx = find_cars(img, ystart, ystop, scale, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)\n rectangles.extend(bbobx)\n \n ystart = 400\n ystop = 464\n scale = 1.3\n boxen_image, bbobx8 = find_cars(img, ystart, ystop, scale, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)\n rectangles.extend(bbobx8)\n \n ystart = 416\n ystop = 480\n scale = 1.0\n boxen_image, bbobx1 = find_cars(img, ystart, ystop, scale, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)\n rectangles.extend(bbobx)\n \n ystart = 400\n ystop = 496\n scale = 1.5\n boxen_image, bbobx2 = find_cars(img, ystart, ystop, scale, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)\n rectangles.extend(bbobx2)\n\n ystart = 432\n ystop = 528\n scale = 1.5\n boxen_image, bbobx3 = find_cars(img, ystart, ystop, scale, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)\n rectangles.extend(bbobx3)\n \n ystart = 400\n ystop = 528\n scale = 2.0\n boxen_image, bbobx4 = find_cars(img, ystart, ystop, scale, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)\n rectangles.extend(bbobx4)\n \n ystart = 432\n ystop = 560\n scale = 2.0\n boxen_image, bbobx5 = find_cars(img, ystart, ystop, scale, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)\n rectangles.extend(bbobx5)\n \n ystart = 400\n ystop = 596\n scale = 1.2\n boxen_image, bbobx6 = find_cars(img, ystart, ystop, scale, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)\n rectangles.extend(bbobx6)\n \n ystart = 464\n ystop = 660\n scale = 3.5\n boxen_image, bbobx7 = find_cars(img, ystart, ystop, scale, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)\n rectangles.extend(bbobx7)\n \n ystart = 400\n ystop = 556\n boxen_image, bbobx9 = find_cars (img, ystart, ystop, 1.3, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)\n rectangles.extend(bbobx9)\n \n ystart = 400\n ystop = 556\n boxen_image, bbobx10 = find_cars (img, ystart, ystop, 2.2, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)\n rectangles.extend(bbobx10)\n \n heatmap = np.zeros_like(boxen_image[:,:,0]).astype(np.float)\n heatmap = add_heat(heatmap,rectangles)\n heatmap = apply_threshold(heatmap, 2)\n labels = label(heatmap)\n draw_img = draw_labeled_bboxes(np.copy(image), labels)\n return draw_img\n \nimage_output_vehicle = vehicle_detection_piepline(mpimg.imread('./test_images/3.bmp'))\nplt.imshow(image_output_vehicle)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ] ]
4a5f827426ada6434e8c7d7d7a6b8b9a516da268
4,728
ipynb
Jupyter Notebook
examples/fiducial_proxy_detection/ee_imagery.ipynb
bpelto/hipp
f48bfccbabc96d2f63faec3d7f4b4e4cff980e21
[ "MIT" ]
12
2020-10-07T22:12:11.000Z
2022-02-15T23:10:53.000Z
examples/fiducial_proxy_detection/ee_imagery.ipynb
bpelto/hipp
f48bfccbabc96d2f63faec3d7f4b4e4cff980e21
[ "MIT" ]
7
2020-10-11T23:42:55.000Z
2021-12-15T23:16:43.000Z
examples/fiducial_proxy_detection/ee_imagery.ipynb
bpelto/hipp
f48bfccbabc96d2f63faec3d7f4b4e4cff980e21
[ "MIT" ]
4
2020-10-11T19:48:58.000Z
2022-03-08T21:32:13.000Z
22.62201
110
0.545051
[ [ [ "%load_ext autoreload\n%autoreload 2", "_____no_output_____" ], [ "import hipp\nfrom getpass import getpass\nimport glob\nimport os", "_____no_output_____" ] ], [ [ "## Login", "_____no_output_____" ] ], [ [ "apiKey = hipp.dataquery.EE_login(input(), getpass())", "_____no_output_____" ] ], [ [ "## Inputs", "_____no_output_____" ] ], [ [ "bounds = (-122, 49, -121.5, 48.5)\nee_project_name = 'LK000'\nyear = 1950\nmonth = 9\nday = 2\noutput_directory = './'\nee_query_max_results = 10 #do more than 1 because you may get a calibration file\nee_query_label = 'test_download' #try to use the same one for consistent behavior from the EE API", "_____no_output_____" ] ], [ [ "## Query EE\n\nand filter results by the project name you are interested in (or look at the entire ee_results_df first)", "_____no_output_____" ] ], [ [ "ULLON, ULLAT, LRLON, LRLAT = bounds\n\nee_results_df = hipp.dataquery.EE_pre_select_images(\n apiKey,\n xmin = LRLON,\n ymin = LRLAT,\n xmax = ULLON,\n ymax = ULLAT,\n startDate = f\"{year}-{month}-{day}\",\n endDate = f\"{year}-{month}-{day}\",\n maxResults = ee_query_max_results\n)\n\nee_results_df = ee_results_df[ee_results_df['project'] == ee_project_name]", "_____no_output_____" ], [ "ee_results_df", "_____no_output_____" ] ], [ [ "## Select the first image for download and download it\nIdeally you want to select an image that is representative of the set, not necessarily the first one.\n(e.g. choose and image without over or under exposed areas)", "_____no_output_____" ] ], [ [ "ee_results_df = ee_results_df.head(1)", "_____no_output_____" ], [ "images_directory, calibration_reports_directory = hipp.dataquery.EE_download_images_to_disk(\n apiKey,\n ee_results_df['entityId'].tolist(),\n ee_query_label,\n output_directory\n)\nsingle_image_file = glob.glob(os.path.join(images_directory,'*.tif'))[0]", "_____no_output_____" ] ], [ [ "## Create fiducial templates from images\n\nNote that the optional parameter buffer_distance should be tuned to the dataset you are using.", "_____no_output_____" ] ], [ [ "fiducial_template_directory = os.path.join(output_directory, 'fiducials')\n\nhipp.core.create_midside_fiducial_proxies_template(\n single_image_file,\n fiducial_template_directory,\n buffer_distance = 250 #default value\n\n)", "_____no_output_____" ] ], [ [ "## Detect principal points and crop images", "_____no_output_____" ] ], [ [ "hipp.batch.preprocess_with_fiducial_proxies(images_directory,\n fiducial_template_directory)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
4a5f8470b0ca18309e8646f4f288985e0c80f00e
3,835
ipynb
Jupyter Notebook
_notebooks/2021-04-27-scala-companion-objects.ipynb
pockerman/qubit_opus
6824a86b302377616b89f92fe7716e96c6abaa12
[ "Apache-2.0" ]
null
null
null
_notebooks/2021-04-27-scala-companion-objects.ipynb
pockerman/qubit_opus
6824a86b302377616b89f92fe7716e96c6abaa12
[ "Apache-2.0" ]
null
null
null
_notebooks/2021-04-27-scala-companion-objects.ipynb
pockerman/qubit_opus
6824a86b302377616b89f92fe7716e96c6abaa12
[ "Apache-2.0" ]
null
null
null
22.558824
253
0.539765
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
4a5f8d15af7b0f2563b17e3b206147c1a05248da
9,271
ipynb
Jupyter Notebook
PythonBasics-Functions .ipynb
MBJamshidi/PythonBasics-Functions-
2be24bd694bb1d6d7fb0e312615a745219cd24b0
[ "MIT" ]
1
2020-05-16T18:41:52.000Z
2020-05-16T18:41:52.000Z
PythonBasics-Functions .ipynb
MBJamshidi/PythonBasics-Functions-
2be24bd694bb1d6d7fb0e312615a745219cd24b0
[ "MIT" ]
null
null
null
PythonBasics-Functions .ipynb
MBJamshidi/PythonBasics-Functions-
2be24bd694bb1d6d7fb0e312615a745219cd24b0
[ "MIT" ]
1
2021-03-08T00:13:47.000Z
2021-03-08T00:13:47.000Z
27.348083
104
0.516773
[ [ [ "#Python Basics\n#Functions in Python\n#Functions take some inputs, then they produce some outputs\n#The functions are just a piece of code that you can reuse\n#You can implement your functions, but in many cases, people reuse other people's functions\n#in this case, it is important how the function work and how we can import function\n#Python has many bult-in functions\n\n#For example function\"len\" which computes the length of the lists\nlist1=[1,7,7.8, 9,3.9, 2, 8, 5.01, 6,2, 9, 11, 46, 91, 58, 2]\nn=len(list1)\nprint(\"The length of list1 is \",n, \"elements\")\n\n\nlist2=[2, 8, 5.01, 6,2, 9]\nm=len(list2)\nprint(\"The length of list2 is \",m, \"elements\")\n\n#For example function\"sum\" which returns the total of all of the elements\n\nlist3=[10,74,798, 19,3.9, 12, 8, 5.01, 6,2, 19, 11, 246, 91, 58, 2.2]\nn=sum(list3)\nprint(\"The total of list1 is \",n)\n\n\nlist4=[72, 98, 15.01, 16,2, 69.78]\nm=sum(list4)\nprint(\"The total of list2 is \",m )\n", "_____no_output_____" ], [ "#METHODS are similar to the functions\n#sort vs sorted\n#for example we have two ways to sort a list through \"sort method\" and \"sorted function\"\n#Assume we have a list, namely Num\nNum=[10,74,798, 19,3.9, 12, 8, 5.01, 6,2, 19, 11, 246, 91, 58, 2.2]\n#we can sort this variable using sorted function as follows\n#sort vs sorted\nNum_rating=sorted(Num)\nprint(Num_rating)\nprint(Num)\n#So Num is not changed, but in the case of sort method the list itself is changed\n#in this case no new list is created \nNum=[10,74,798, 19,3.9, 12, 8, 5.01, 6,2, 19, 11, 246, 91, 58, 2.2]\nprint(\"Befor appling the sort method, Num has these values:\", Num)\nNum.sort()\nprint(\"After appling the sort method, Num has these values:\", Num)", "_____no_output_____" ], [ "#Making our own functions in Python\n\n#For making a function, def FunctionName(input):\ndef Add1(InputFunc):\n OUT=InputFunc+15\n return OUT\n\n#We can reuse function Add1 among our program\nprint(Add1(3))\nprint(Add1(15))\n\nAdd1(3.144)\n\n#Example\n#a is input of the function\n#y is the output of the fuvtion\n#Whenever function Time1 is called the output is calculated\ndef Time1(a):\n y=a*15\n return y\nc=Time1(2)\nprint(c)\nd=Time1(30)\nprint(d)", "_____no_output_____" ], [ "#Documentation function using \"\"\" Documentation\"\"\"\ndef Add1(InputFunc):\n \"\"\"\n ADD Function\n \"\"\"\n OUT=InputFunc+15\n return OUT\n\n#functions with multiple parameters\ndef MULTIPARA(a, b, c):\n W1=a*b+ c\n W2=(a+b)*c\n return (W1,W2)\nprint(MULTIPARA(2,3,7))\n\n#functions with multiple parameters\ndef Mu2(a, b):\n W1=a*b+ c\n W2=(a+b)*c\n W3=15/a\n W4=65/b\n return (W1,W2,W3,W4)\nprint(Mu2(11,3))\n\ndef Mu3(a1, a2, a3, a4):\n c1=a1*a2+ a4+23\n c2=(a3+a1)*a2\n c3=15/a3\n c4=65/a4+8976*d\n return (c1,c2,c3,c4)\nprint(Mu3(0.008,0.0454,0.0323, 0.00232))\n\n#repeating a string for n times\ndef mu4(St, REPEAT):\n OUT=St*REPEAT\n return OUT\n\nprint(mu4(\"Michel Jackson\", 2))\nprint(mu4(\"Michel Jackson\", 3))\nprint(mu4(\"Michel Jackson\", 4))\nprint(mu4(\"Michel Jackson\", 5))\n", "_____no_output_____" ], [ "#In many cases functions does not have a return statement\n#In these cases, Python will return \"NONE\" Special \n\n#Assume MBJ() function with no inputs\ndef MBJ():\n print(\"M: Mohammad\")\n print(\"B:Behdad\")\n print(\"J:Jamshdi\")\n#Calling functins with no parameters\nMBJ()\n\n\ndef ERROR():\n print(\"There is something wrong in codes\")\n#Calling function with no parameters\nERROR()", "_____no_output_____" ], [ "#Function which does not do anything\ndef 
NOWORK():\n pass\n\n#Calling function NOWORK\nprint(NOWORK())\n#This function returns \"None\"", "_____no_output_____" ], [ "#LOOPS in FUNCTIONS\n#We can use loops in functions\n\n#Example\n#Force [N] to Mass [Kg] converter\ndef Force2Mass(F):\n for S,Val in enumerate(F):\n print(\"The mass of number\", S,\"is measured: \", Val/9.8, \"Kg\")\n \nFl=[344, 46783, 5623, 6357]\nForce2Mass(Fl)\n\n#Mass [Kg] to Force [N] converter\ndef Mass2Force(M):\n for S,Val in enumerate(M):\n print(\"The force of number\", S,\"is measured: \",Val*9.8, \"N\")\n we=Val*9.8\n if (we>200):\n print (\"The above item is overweight\")\nM1=[54, 71, 59, 34, 21, 16, 15]\nMass2Force(M1)", "_____no_output_____" ], [ "#Collecting arguments\ndef AI_Methods(*names):\n #The star * collects all positional arguments into a tuple\n for name in names:\n print(name)\n \n#Calling the function \nAI_Methods(\"Deep Learning\", \"Machine Learning\", \"ANNs\", \"LSTM\")\n\n#Note: passing a list this way prints the whole list as a single item; use AI_Methods(*AI1) to unpack it\nAI1=[\"Deep Learning\", \"Machine Learning\", \"ANNs\", \"LSTM\"]\nAI_Methods(AI1)\n", "_____no_output_____" ], [ "#Local scope and global scope in functions\n#Every variable within the function is counted as a local scope \n#Every variable outside of the function is counted as a global scope \n#Local scopes are only visible inside the function, while global scopes reflect the values in the main body\n\n#Assume we have Date as both a global scope and a local scope\n\n\ndef LOCAL(a):\n Date=a+15\n return(Date)\nDate=1986\ny=LOCAL(Date)\nprint(y)\n#The difference is here, look at the output of the function\n\n#Global Scope\nprint(\"Global Scope (BODY): \",Date)\n#Local Scope\nprint(\"Local Scopes (Function): \", LOCAL(Date))", "_____no_output_____" ], [ "#If the variable is not defined in the function, the function uses its value from the global scope (BODY)\n#Let's look at variable a\na=1\ndef add(b):\n return a+b\n\nc=add(10)\nprint(c)", "_____no_output_____" ], [ "def f(c):\n return sum(c)\n\nf([11, 67])", "_____no_output_____" ], [ "#Using if/else Statements and Loops in Functions\n# Function example\n\ndef type_of_album(artist, album, year_released):\n \n print(artist, album, year_released)\n if year_released > 1980:\n return \"Modern\"\n else:\n return \"Oldie\"\n \nx = type_of_album(\"Michael Jackson\", \"Thriller\", 1980)\nprint(x)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a5f9c40009b8eedd425f584f19fc70c70dbcd42
7,541
ipynb
Jupyter Notebook
p1_navigation/Navigation_Pixels.ipynb
Bhuvans/Udacity_DeepRL_ND
cb6280aea21322e711c40c1521b96cb940cfa1f5
[ "MIT" ]
null
null
null
p1_navigation/Navigation_Pixels.ipynb
Bhuvans/Udacity_DeepRL_ND
cb6280aea21322e711c40c1521b96cb940cfa1f5
[ "MIT" ]
null
null
null
p1_navigation/Navigation_Pixels.ipynb
Bhuvans/Udacity_DeepRL_ND
cb6280aea21322e711c40c1521b96cb940cfa1f5
[ "MIT" ]
null
null
null
36.606796
379
0.601644
[ [ [ "# Navigation\n\n---\n\nCongratulations for completing the first project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893)! In this notebook, you will learn how to control an agent in a more challenging environment, where it can learn directly from raw pixels! **Note that this exercise is optional!**\n\n### 1. Start the Environment\n\nWe begin by importing some necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).", "_____no_output_____" ] ], [ [ "from unityagents import UnityEnvironment\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline", "_____no_output_____" ] ], [ [ "Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.\n\n- **Mac**: `\"path/to/VisualBanana.app\"`\n- **Windows** (x86): `\"path/to/VisualBanana_Windows_x86/Banana.exe\"`\n- **Windows** (x86_64): `\"path/to/VisualBanana_Windows_x86_64/Banana.exe\"`\n- **Linux** (x86): `\"path/to/VisualBanana_Linux/Banana.x86\"`\n- **Linux** (x86_64): `\"path/to/VisualBanana_Linux/Banana.x86_64\"`\n- **Linux** (x86, headless): `\"path/to/VisualBanana_Linux_NoVis/Banana.x86\"`\n- **Linux** (x86_64, headless): `\"path/to/VisualBanana_Linux_NoVis/Banana.x86_64\"`\n\nFor instance, if you are using a Mac, then you downloaded `VisualBanana.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:\n```\nenv = UnityEnvironment(file_name=\"VisualBanana.app\")\n```", "_____no_output_____" ] ], [ [ "env = UnityEnvironment(file_name=\"...\")", "_____no_output_____" ] ], [ [ "Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.", "_____no_output_____" ] ], [ [ "# get the default brain\nbrain_name = env.brain_names[0]\nbrain = env.brains[brain_name]", "_____no_output_____" ] ], [ [ "### 2. Examine the State and Action Spaces\n\nThe simulation contains a single agent that navigates a large environment. At each time step, it has four actions at its disposal:\n- `0` - walk forward \n- `1` - walk backward\n- `2` - turn left\n- `3` - turn right\n\nThe environment state is an array of raw pixels with shape `(1, 84, 84, 3)`. *Note that this code differs from the notebook for the project, where we are grabbing **`visual_observations`** (the raw pixels) instead of **`vector_observations`**.* A reward of `+1` is provided for collecting a yellow banana, and a reward of `-1` is provided for collecting a blue banana. 
\n\nRun the code cell below to print some information about the environment.", "_____no_output_____" ] ], [ [ "# reset the environment\nenv_info = env.reset(train_mode=True)[brain_name]\n\n# number of agents in the environment\nprint('Number of agents:', len(env_info.agents))\n\n# number of actions\naction_size = brain.vector_action_space_size\nprint('Number of actions:', action_size)\n\n# examine the state space \nstate = env_info.visual_observations[0]\nprint('States look like:')\nplt.imshow(np.squeeze(state))\nplt.show()\nstate_size = state.shape\nprint('States have shape:', state.shape)", "_____no_output_____" ] ], [ [ "### 3. Take Random Actions in the Environment\n\nIn the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment.\n\nOnce this cell is executed, you will watch the agent's performance, if it selects an action (uniformly) at random with each time step. A window should pop up that allows you to observe the agent, as it moves through the environment. \n\nOf course, you'll have to change the code so that the agent is able to use its experience to gradually choose better actions when interacting with the environment!", "_____no_output_____" ] ], [ [ "env_info = env.reset(train_mode=False)[brain_name] # reset the environment\nstate = env_info.visual_observations[0] # get the current state\nscore = 0 # initialize the score\nwhile True:\n action = np.random.randint(action_size) # select an action\n env_info = env.step(action)[brain_name] # send the action to the environment\n next_state = env_info.visual_observations[0] # get the next state\n reward = env_info.rewards[0] # get the reward\n done = env_info.local_done[0] # see if episode has finished\n score += reward # update the score\n state = next_state # roll over the state to next time step\n if done: # exit loop if episode finished\n break\n \nprint(\"Score: {}\".format(score))", "_____no_output_____" ] ], [ [ "When finished, you can close the environment.", "_____no_output_____" ] ], [ [ "env.close()", "_____no_output_____" ] ], [ [ "### 4. It's Your Turn!\n\nNow it's your turn to train your own agent to solve the environment! When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following:\n```python\nenv_info = env.reset(train_mode=True)[brain_name]\n```", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
4a5f9ee9fdb689805ce8e8c43fe95d8c532e1ece
58,173
ipynb
Jupyter Notebook
matrix_one/day3.ipynb
Arbiej/dw_matrix
4d2ed1b717890978a6f2bca2dc3bc9c5d4068f1f
[ "MIT" ]
null
null
null
matrix_one/day3.ipynb
Arbiej/dw_matrix
4d2ed1b717890978a6f2bca2dc3bc9c5d4068f1f
[ "MIT" ]
null
null
null
matrix_one/day3.ipynb
Arbiej/dw_matrix
4d2ed1b717890978a6f2bca2dc3bc9c5d4068f1f
[ "MIT" ]
null
null
null
58,173
58,173
0.732797
[ [ [ "from goole.colab import drive\nimport pandas as pd\nimport numpy as np", "_____no_output_____" ], [ "!pip install datadotworld\n!pip install datadotworld[pandas]", "Collecting datadotworld\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/eb/2d/564c9b9056c414528f7a91c48bc33f2243bd5323ac07d52269002bd3d6c6/datadotworld-1.7.0-py2.py3-none-any.whl (158kB)\n\u001b[K |████████████████████████████████| 163kB 5.0MB/s \n\u001b[?25hCollecting datapackage<2.0a,>=1.6.2\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/b8/75/0a2e0853116d2fe3f6422f7844129eba9939e54c530706d6116545438401/datapackage-1.11.1-py2.py3-none-any.whl (84kB)\n\u001b[K |████████████████████████████████| 92kB 8.9MB/s \n\u001b[?25hRequirement already satisfied: python-dateutil<3.0a,>=2.6.0 in /usr/local/lib/python3.6/dist-packages (from datadotworld) (2.6.1)\nCollecting click<7.0a,>=6.0\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/34/c1/8806f99713ddb993c5366c362b2f908f18269f8d792aff1abfd700775a77/click-6.7-py2.py3-none-any.whl (71kB)\n\u001b[K |████████████████████████████████| 71kB 1.5MB/s \n\u001b[?25hRequirement already satisfied: urllib3<2.0a,>=1.15 in /usr/local/lib/python3.6/dist-packages (from datadotworld) (1.24.3)\nRequirement already satisfied: requests<3.0a,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from datadotworld) (2.21.0)\nCollecting tabulator>=1.22.0\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/56/5c/e57ce30b0751c9071f848e9eb3bda128a921fff13ea442fc1059206cceb2/tabulator-1.34.0-py2.py3-none-any.whl (65kB)\n\u001b[K |████████████████████████████████| 71kB 9.9MB/s \n\u001b[?25hCollecting configparser<4.0a,>=3.5.0\n Downloading https://files.pythonhosted.org/packages/ab/1a/ec151e5e703ac80041eaccef923611bbcec2b667c20383655a06962732e9/configparser-3.8.1-py2.py3-none-any.whl\nRequirement already satisfied: certifi>=2017.04.17 in /usr/local/lib/python3.6/dist-packages (from datadotworld) (2019.11.28)\nCollecting tableschema<2.0a,>=1.5.2\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/3c/a5/75ebdbf57b828edd1c7b059d3ff86b1c23d7bd69daae4c16199c124f4aa3/tableschema-1.12.5-py2.py3-none-any.whl (66kB)\n\u001b[K |████████████████████████████████| 71kB 9.4MB/s \n\u001b[?25hRequirement already satisfied: six<2.0a,>=1.5.0 in /usr/local/lib/python3.6/dist-packages (from datadotworld) (1.12.0)\nCollecting jsonpointer>=1.10\n Downloading https://files.pythonhosted.org/packages/18/b0/a80d29577c08eea401659254dfaed87f1af45272899e1812d7e01b679bc5/jsonpointer-2.0-py2.py3-none-any.whl\nCollecting unicodecsv>=0.14\n Downloading https://files.pythonhosted.org/packages/6f/a4/691ab63b17505a26096608cc309960b5a6bdf39e4ba1a793d5f9b1a53270/unicodecsv-0.14.1.tar.gz\nCollecting cchardet>=1.0\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/fa/4e/847feebfc3e71c773b23ee06c74687b8c50a5a6d6aaff452a0a4f4eb9a32/cchardet-2.1.5-cp36-cp36m-manylinux1_x86_64.whl (241kB)\n\u001b[K |████████████████████████████████| 245kB 52.4MB/s \n\u001b[?25hRequirement already satisfied: jsonschema>=2.5 in /usr/local/lib/python3.6/dist-packages (from datapackage<2.0a,>=1.6.2->datadotworld) (2.6.0)\nRequirement already satisfied: idna<2.9,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests<3.0a,>=2.0.0->datadotworld) (2.8)\nRequirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests<3.0a,>=2.0.0->datadotworld) (3.0.4)\nCollecting jsonlines>=1.1\n Downloading 
https://files.pythonhosted.org/packages/4f/9a/ab96291470e305504aa4b7a2e0ec132e930da89eb3ca7a82fbe03167c131/jsonlines-1.2.0-py2.py3-none-any.whl\nRequirement already satisfied: boto3>=1.9 in /usr/local/lib/python3.6/dist-packages (from tabulator>=1.22.0->datadotworld) (1.11.14)\nCollecting openpyxl>=2.6\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/95/8c/83563c60489954e5b80f9e2596b93a68e1ac4e4a730deb1aae632066d704/openpyxl-3.0.3.tar.gz (172kB)\n\u001b[K |████████████████████████████████| 174kB 52.2MB/s \n\u001b[?25hRequirement already satisfied: xlrd>=1.0 in /usr/local/lib/python3.6/dist-packages (from tabulator>=1.22.0->datadotworld) (1.1.0)\nCollecting linear-tsv>=1.0\n Downloading https://files.pythonhosted.org/packages/82/e5/03207a0f11e1d60df85b97b61704ed701b725a7c2feaf83f7bfbd0c2d83e/linear-tsv-1.1.0.tar.gz\nCollecting ijson>=2.5\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/23/42/2066f77a714ab7221542ea23710b35a96c5dd398f1933429088afd888293/ijson-2.6.1-cp36-cp36m-manylinux1_x86_64.whl (65kB)\n\u001b[K |████████████████████████████████| 71kB 9.2MB/s \n\u001b[?25hRequirement already satisfied: sqlalchemy>=0.9.6 in /usr/local/lib/python3.6/dist-packages (from tabulator>=1.22.0->datadotworld) (1.3.13)\nCollecting rfc3986>=1.1.0\n Downloading https://files.pythonhosted.org/packages/00/8d/9d56bfe43997f1864fe0891be69bc239ded98e69c9f56eb9eaa5b1789660/rfc3986-1.3.2-py2.py3-none-any.whl\nCollecting isodate>=0.5.4\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/9b/9f/b36f7774ff5ea8e428fdcfc4bb332c39ee5b9362ddd3d40d9516a55221b2/isodate-0.6.0-py2.py3-none-any.whl (45kB)\n\u001b[K |████████████████████████████████| 51kB 7.4MB/s \n\u001b[?25hRequirement already satisfied: botocore<1.15.0,>=1.14.14 in /usr/local/lib/python3.6/dist-packages (from boto3>=1.9->tabulator>=1.22.0->datadotworld) (1.14.14)\nRequirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /usr/local/lib/python3.6/dist-packages (from boto3>=1.9->tabulator>=1.22.0->datadotworld) (0.3.3)\nRequirement already satisfied: jmespath<1.0.0,>=0.7.1 in /usr/local/lib/python3.6/dist-packages (from boto3>=1.9->tabulator>=1.22.0->datadotworld) (0.9.4)\nRequirement already satisfied: jdcal in /usr/local/lib/python3.6/dist-packages (from openpyxl>=2.6->tabulator>=1.22.0->datadotworld) (1.4.1)\nRequirement already satisfied: et_xmlfile in /usr/local/lib/python3.6/dist-packages (from openpyxl>=2.6->tabulator>=1.22.0->datadotworld) (1.0.1)\nRequirement already satisfied: docutils<0.16,>=0.10 in /usr/local/lib/python3.6/dist-packages (from botocore<1.15.0,>=1.14.14->boto3>=1.9->tabulator>=1.22.0->datadotworld) (0.15.2)\nBuilding wheels for collected packages: unicodecsv, openpyxl, linear-tsv\n Building wheel for unicodecsv (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for unicodecsv: filename=unicodecsv-0.14.1-cp36-none-any.whl size=10768 sha256=aecc10bd99f4e4242461c246f30b7bbd272c8bace508c41cb190086f8c24a688\n Stored in directory: /root/.cache/pip/wheels/a6/09/e9/e800279c98a0a8c94543f3de6c8a562f60e51363ed26e71283\n Building wheel for openpyxl (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for openpyxl: filename=openpyxl-3.0.3-py2.py3-none-any.whl size=241262 sha256=e30b254953f8691a6f66b8949c3638db44417dffaad269fa001b6225fdb66697\n Stored in directory: /root/.cache/pip/wheels/b5/85/ca/e768ac132e57e75e645a151f8badac71cc0089e7225dddf76b\n Building wheel for linear-tsv (setup.py) ... 
\u001b[?25l\u001b[?25hdone\n Created wheel for linear-tsv: filename=linear_tsv-1.1.0-cp36-none-any.whl size=7383 sha256=b916e58778051ce4d110126032cadefe9a3b9e74c76d1f0f1bac70f8fde67c2e\n Stored in directory: /root/.cache/pip/wheels/3f/8a/cb/38917fd1ef4356b9870ace7331b83417dc594bf2c029bd991f\nSuccessfully built unicodecsv openpyxl linear-tsv\nInstalling collected packages: click, jsonpointer, unicodecsv, cchardet, jsonlines, openpyxl, linear-tsv, ijson, tabulator, rfc3986, isodate, tableschema, datapackage, configparser, datadotworld\n Found existing installation: Click 7.0\n Uninstalling Click-7.0:\n Successfully uninstalled Click-7.0\n Found existing installation: openpyxl 2.5.9\n Uninstalling openpyxl-2.5.9:\n Successfully uninstalled openpyxl-2.5.9\nSuccessfully installed cchardet-2.1.5 click-6.7 configparser-3.8.1 datadotworld-1.7.0 datapackage-1.11.1 ijson-2.6.1 isodate-0.6.0 jsonlines-1.2.0 jsonpointer-2.0 linear-tsv-1.1.0 openpyxl-3.0.3 rfc3986-1.3.2 tableschema-1.12.5 tabulator-1.34.0 unicodecsv-0.14.1\nRequirement already satisfied: datadotworld[pandas] in /usr/local/lib/python3.6/dist-packages (1.7.0)\nRequirement already satisfied: tabulator>=1.22.0 in /usr/local/lib/python3.6/dist-packages (from datadotworld[pandas]) (1.34.0)\nRequirement already satisfied: urllib3<2.0a,>=1.15 in /usr/local/lib/python3.6/dist-packages (from datadotworld[pandas]) (1.24.3)\nRequirement already satisfied: certifi>=2017.04.17 in /usr/local/lib/python3.6/dist-packages (from datadotworld[pandas]) (2019.11.28)\nRequirement already satisfied: configparser<4.0a,>=3.5.0 in /usr/local/lib/python3.6/dist-packages (from datadotworld[pandas]) (3.8.1)\nRequirement already satisfied: python-dateutil<3.0a,>=2.6.0 in /usr/local/lib/python3.6/dist-packages (from datadotworld[pandas]) (2.6.1)\nRequirement already satisfied: click<7.0a,>=6.0 in /usr/local/lib/python3.6/dist-packages (from datadotworld[pandas]) (6.7)\nRequirement already satisfied: requests<3.0a,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from datadotworld[pandas]) (2.21.0)\nRequirement already satisfied: datapackage<2.0a,>=1.6.2 in /usr/local/lib/python3.6/dist-packages (from datadotworld[pandas]) (1.11.1)\nRequirement already satisfied: tableschema<2.0a,>=1.5.2 in /usr/local/lib/python3.6/dist-packages (from datadotworld[pandas]) (1.12.5)\nRequirement already satisfied: six<2.0a,>=1.5.0 in /usr/local/lib/python3.6/dist-packages (from datadotworld[pandas]) (1.12.0)\nCollecting pandas<0.25; extra == \"pandas\"\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/19/74/e50234bc82c553fecdbd566d8650801e3fe2d6d8c8d940638e3d8a7c5522/pandas-0.24.2-cp36-cp36m-manylinux1_x86_64.whl (10.1MB)\n\u001b[K |████████████████████████████████| 10.1MB 5.0MB/s \n\u001b[?25hCollecting numpy<=1.16.4; extra == \"pandas\"\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/87/2d/e4656149cbadd3a8a0369fcd1a9c7d61cc7b87b3903b85389c70c989a696/numpy-1.16.4-cp36-cp36m-manylinux1_x86_64.whl (17.3MB)\n\u001b[K |████████████████████████████████| 17.3MB 237kB/s \n\u001b[?25hRequirement already satisfied: sqlalchemy>=0.9.6 in /usr/local/lib/python3.6/dist-packages (from tabulator>=1.22.0->datadotworld[pandas]) (1.3.13)\nRequirement already satisfied: jsonlines>=1.1 in /usr/local/lib/python3.6/dist-packages (from tabulator>=1.22.0->datadotworld[pandas]) (1.2.0)\nRequirement already satisfied: openpyxl>=2.6 in /usr/local/lib/python3.6/dist-packages (from tabulator>=1.22.0->datadotworld[pandas]) (3.0.3)\nRequirement already satisfied: linear-tsv>=1.0 in 
/usr/local/lib/python3.6/dist-packages (from tabulator>=1.22.0->datadotworld[pandas]) (1.1.0)\nRequirement already satisfied: xlrd>=1.0 in /usr/local/lib/python3.6/dist-packages (from tabulator>=1.22.0->datadotworld[pandas]) (1.1.0)\nRequirement already satisfied: boto3>=1.9 in /usr/local/lib/python3.6/dist-packages (from tabulator>=1.22.0->datadotworld[pandas]) (1.11.14)\nRequirement already satisfied: cchardet>=1.0 in /usr/local/lib/python3.6/dist-packages (from tabulator>=1.22.0->datadotworld[pandas]) (2.1.5)\nRequirement already satisfied: ijson>=2.5 in /usr/local/lib/python3.6/dist-packages (from tabulator>=1.22.0->datadotworld[pandas]) (2.6.1)\nRequirement already satisfied: unicodecsv>=0.14 in /usr/local/lib/python3.6/dist-packages (from tabulator>=1.22.0->datadotworld[pandas]) (0.14.1)\nRequirement already satisfied: idna<2.9,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests<3.0a,>=2.0.0->datadotworld[pandas]) (2.8)\nRequirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests<3.0a,>=2.0.0->datadotworld[pandas]) (3.0.4)\nRequirement already satisfied: jsonschema>=2.5 in /usr/local/lib/python3.6/dist-packages (from datapackage<2.0a,>=1.6.2->datadotworld[pandas]) (2.6.0)\nRequirement already satisfied: jsonpointer>=1.10 in /usr/local/lib/python3.6/dist-packages (from datapackage<2.0a,>=1.6.2->datadotworld[pandas]) (2.0)\nRequirement already satisfied: isodate>=0.5.4 in /usr/local/lib/python3.6/dist-packages (from tableschema<2.0a,>=1.5.2->datadotworld[pandas]) (0.6.0)\nRequirement already satisfied: rfc3986>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tableschema<2.0a,>=1.5.2->datadotworld[pandas]) (1.3.2)\nRequirement already satisfied: pytz>=2011k in /usr/local/lib/python3.6/dist-packages (from pandas<0.25; extra == \"pandas\"->datadotworld[pandas]) (2018.9)\nRequirement already satisfied: jdcal in /usr/local/lib/python3.6/dist-packages (from openpyxl>=2.6->tabulator>=1.22.0->datadotworld[pandas]) (1.4.1)\nRequirement already satisfied: et-xmlfile in /usr/local/lib/python3.6/dist-packages (from openpyxl>=2.6->tabulator>=1.22.0->datadotworld[pandas]) (1.0.1)\nRequirement already satisfied: jmespath<1.0.0,>=0.7.1 in /usr/local/lib/python3.6/dist-packages (from boto3>=1.9->tabulator>=1.22.0->datadotworld[pandas]) (0.9.4)\nRequirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /usr/local/lib/python3.6/dist-packages (from boto3>=1.9->tabulator>=1.22.0->datadotworld[pandas]) (0.3.3)\nRequirement already satisfied: botocore<1.15.0,>=1.14.14 in /usr/local/lib/python3.6/dist-packages (from boto3>=1.9->tabulator>=1.22.0->datadotworld[pandas]) (1.14.14)\nRequirement already satisfied: docutils<0.16,>=0.10 in /usr/local/lib/python3.6/dist-packages (from botocore<1.15.0,>=1.14.14->boto3>=1.9->tabulator>=1.22.0->datadotworld[pandas]) (0.15.2)\n\u001b[31mERROR: plotnine 0.6.0 has requirement pandas>=0.25.0, but you'll have pandas 0.24.2 which is incompatible.\u001b[0m\n\u001b[31mERROR: mizani 0.6.0 has requirement pandas>=0.25.0, but you'll have pandas 0.24.2 which is incompatible.\u001b[0m\n\u001b[31mERROR: google-colab 1.0.0 has requirement pandas~=0.25.0; python_version >= \"3.0\", but you'll have pandas 0.24.2 which is incompatible.\u001b[0m\n\u001b[31mERROR: datascience 0.10.6 has requirement folium==0.2.1, but you'll have folium 0.8.3 which is incompatible.\u001b[0m\n\u001b[31mERROR: albumentations 0.1.12 has requirement imgaug<0.2.7,>=0.2.5, but you'll have imgaug 0.2.9 which is incompatible.\u001b[0m\nInstalling 
collected packages: numpy, pandas\n Found existing installation: numpy 1.17.5\n Uninstalling numpy-1.17.5:\n Successfully uninstalled numpy-1.17.5\n Found existing installation: pandas 0.25.3\n Uninstalling pandas-0.25.3:\n Successfully uninstalled pandas-0.25.3\nSuccessfully installed numpy-1.16.4 pandas-0.24.2\n" ], [ "!dw configure", "API token (obtained at: https://data.world/settings/advanced): eyJhbGciOiJIUzUxMiJ9.eyJzdWIiOiJwcm9kLXVzZXItY2xpZW50OmFyYmllaiIsImlzcyI6ImFnZW50OmFyYmllajo6ZmE3YTAxMjktMWEzNy00OTNmLThhNDgtMDY0MjkzNTc5NzIzIiwiaWF0IjoxNTgxNjc1MDE4LCJyb2xlIjpbInVzZXJfYXBpX3JlYWQiLCJ1c2VyX2FwaV93cml0ZSJdLCJnZW5lcmFsLXB1cnBvc2UiOnRydWUsInNhbWwiOnt9fQ.zUtgxNJrQ8z5sE39-8iSQ1bHeCxHTQKttLOKmfNmNQzu2aIiDRdo29FK4ok2X91HtlfA-dFJS0BYPH_d8TFOlQ\n" ], [ "from google.colab import drive\nimport pandas as pd\nimport numpy as np\nimport datadotworld as dw\n", "_____no_output_____" ], [ "\ndrive.mount(\"/content/drive\")", "Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount(\"/content/drive\", force_remount=True).\n" ], [ "ls\n", "\u001b[0m\u001b[01;34mdrive\u001b[0m/ \u001b[01;34msample_data\u001b[0m/\n" ], [ "cd \"drive/My Drive/Colab Notebooks/dw_matrix\"", "/content/drive/My Drive/Colab Notebooks/dw_matrix\n" ], [ "!mkdir data\n", "mkdir: cannot create directory ‘data’: File exists\n" ], [ "!echo 'data'>.gitignore", "_____no_output_____" ], [ "!git add .gitignore", "_____no_output_____" ], [ "data = dw.load_dataset('datafiniti/mens-shoe-prices')\n", "_____no_output_____" ], [ "df = data.dataframes['7004_1']\ndf.shape", "_____no_output_____" ], [ "df.sample(5)", "_____no_output_____" ], [ "df.columns", "_____no_output_____" ], [ "df.prices_currency.unique()", "_____no_output_____" ], [ "df.prices_currency.value_counts()", "_____no_output_____" ], [ "df_usd=df[df.prices_currency=='USD'].copy()", "_____no_output_____" ], [ "df_usd.shape", "_____no_output_____" ], [ "df_usd['prices_amountmin']=df_usd.prices_amountmin.astype(np.float)\ndf_usd['prices_amountmin'].hist()", "_____no_output_____" ], [ "filter_max = np.percentile( df_usd['prices_amountmin'], 99 )\nfilter_max", "_____no_output_____" ], [ "df_usd_filter = df_usd[ df_usd['prices_amountmin'] < filter_max ]\ndf_usd_filter.prices_amountmin.hist(bins=100)", "_____no_output_____" ], [ "ls matrix_one\n", "day3.ipynb\n" ], [ "", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a5f9f206bf8c9a1144acf38cd0205b6ad688295
510,486
ipynb
Jupyter Notebook
BiSwitch_Cell.ipynb
Jere-Lyn12/jls_murray_2020SURF
5567e310c75fdc4c5385658578255c415d7bd8ef
[ "MIT" ]
null
null
null
BiSwitch_Cell.ipynb
Jere-Lyn12/jls_murray_2020SURF
5567e310c75fdc4c5385658578255c415d7bd8ef
[ "MIT" ]
null
null
null
BiSwitch_Cell.ipynb
Jere-Lyn12/jls_murray_2020SURF
5567e310c75fdc4c5385658578255c415d7bd8ef
[ "MIT" ]
null
null
null
408.06235
125,041
0.851244
[ [ [ "# Imports\nfrom biocrnpyler import *\nfrom genelet import *\nfrom subsbml import System, createSubsystem, combineSystems, createNewSubsystem, createBasicSubsystem, SimpleModel, SimpleReaction\nimport numpy as np\nimport pylab as plt\n\nfrom bokeh.layouts import row\nfrom bokeh.io import export_png\n\nimport warnings\nimport libsbml\nimport bokeh.io\nimport bokeh.plotting", "_____no_output_____" ] ], [ [ "## Cell 1 simple rR1 transport w/ rR1 external reservoir", "_____no_output_____" ] ], [ [ "ss1 = createSubsystem (\"SBML_Models/BiSwitch_CRN.xml\")\nss2 = createSubsystem (\"SBML_Models/rR1_external_reservoir.xml\")\n\n# Create a simple rR1 membrane \nmb1 = createSubsystem(\"SBML_Models/simplemembrane_rR1.xml\", membrane = True)\n\ncell_1 = System(\"cell_1\")\ncell_1.setInternal([ss1])\ncell_1.setExternal([ss2])\ncell_1.setMembrane(mb1)\ncell_1_model = cell_1.getModel()\n\n\n#cell_1 = System (\"cell_1\", ListOfInternalSubsystems = [ss1], \n# ListOfExternalSubsystems = [ss2], \n# ListOfMembraneSubsystems = [mb1])\n\n##### Set Species Amount #####\ncell_1_model.setSpeciesAmount('rna_rR1_e', 0, compartment = 'cell_1_external')\n#cell_1_model.setSpeciesAmount('rna_rR1', 100, compartment = 'cell_1_external')\n\ncell_1_model.setSpeciesAmount('Core1_OFF', 6e3, compartment = 'cell_1_internal')\ncell_1_model.setSpeciesAmount('Core2_OFF', 4e3, compartment = 'cell_1_internal')\ncell_1_model.setSpeciesAmount('dna_dA2', 6e3, compartment = 'cell_1_internal')\ncell_1_model.setSpeciesAmount('dna_dA1', 6e3, compartment = 'cell_1_internal')\ncell_1_model.setSpeciesAmount('protein_RNAseH', 40, compartment = 'cell_1_internal')\ncell_1_model.setSpeciesAmount('protein_RNAP', 300, compartment = 'cell_1_internal')\n#cell_1_model.getSBMLDocument().getModel().getCompartment(1).setSize(1e-4) \n\ncell_1_model.writeSBML('cell_1_model.xml')\n", "The subsystem from SBML_Models/simplemembrane_rR1.xml has multiple compartments\n" ], [ "# Calling Names\n\nX_id1 = cell_1_model.getSpeciesByName('complex_Core1_ON').getId()\nX_id2 = cell_1_model.getSpeciesByName('Core1_OFF').getId()\nX_id3 = cell_1_model.getSpeciesByName('complex_Core2_ON').getId()\nX_id4 = cell_1_model.getSpeciesByName('Core2_OFF').getId()\n\nX_id5 = cell_1_model.getSpeciesByName(\"rna_rR2\").getId()\n\nX_id6 = cell_1_model.getSpeciesByName(\"rna_rR1\", compartment = 'cell_1_internal').getId()\nX_id7 = cell_1_model.getSpeciesByName(\"rna_rR1_e\", compartment = 'cell_1_external').getId()\n\nX_id8 = cell_1_model.getSpeciesByName(\"complex_Core2_AI\").getId()\nX_id9 = cell_1_model.getSpeciesByName(\"complex_Core1_AI\").getId()\n\nX_id10 = cell_1_model.getSpeciesByName(\"dna_dA2\").getId()\nX_id11 = cell_1_model.getSpeciesByName(\"dna_dA1\").getId()\n\nprint (X_id6)", "rna_rR1_1_combined\n" ], [ "# Simulate with BioScrape\ntimepoints = np.linspace(0, 28800, 1000)\nresults_1,_ = cell_1_model.simulateWithBioscrape(timepoints)\ntimepoints = timepoints/3600\n\n# For label convenience\nx = 'Time (hours)'\ny = 'Concentration (uM)'\n\nbokeh.io.output_notebook()\na = bokeh.plotting.figure(plot_width=400, plot_height=300, x_axis_label = x, y_axis_label=y)\na.circle(timepoints, results_1[X_id1], legend_label = \"Core1 ON\" , color = \"blue\")\na.circle(timepoints, results_1[X_id3], legend_label = \"Core2 ON\", color = \"red\")\na.legend.click_policy=\"hide\"\na.legend.location=\"bottom_right\"\n\nb = bokeh.plotting.figure(plot_width=400, plot_height=300, x_axis_label = x, y_axis_label=y)\n#b.circle(timepoints, results_1[X_id5], legend_label = \"rR2\", color = 
\"red\")\nb.circle(timepoints, results_1[X_id6], legend_label = \"rR1_internal\", color = \"blue\")\nb.legend.click_policy=\"hide\"\nb.legend.location=\"center_right\"\n\nc = bokeh.plotting.figure(plot_width=400, plot_height=300, x_axis_label = x, y_axis_label=y)\nc.circle(timepoints, results_1[X_id7], legend_label = \"rR1_external\", color = \"orange\")\nc.legend.click_policy=\"hide\"\nc.legend.location=\"center_right\"\n\nbokeh.io.show(row(a, b, c))\nwarnings.filterwarnings(\"ignore\")", "C:\\Users\\Jeremiah\\anaconda3\\lib\\site-packages\\bioscrape\\sbmlutil.py:93: UserWarning: SBML model contains reversible reaction!\nPlease check rate expressions and ensure they are non-negative before doing stochastic simulations.\n 'Please check rate expressions and ensure they are non-negative before doing '+\nC:\\Users\\Jeremiah\\anaconda3\\lib\\site-packages\\bioscrape\\sbmlutil.py:208: UserWarning: Compartments, UnitDefintions, Events, and some other SBML model components are not recognized by bioscrape. Refer to the bioscrape wiki for more information.\n warnings.warn('Compartments, UnitDefintions, Events, and some other SBML model components are not recognized by bioscrape. ' +\nC:\\Users\\Jeremiah\\anaconda3\\lib\\site-packages\\scipy\\integrate\\odepack.py:248: ODEintWarning: Excess work done on this call (perhaps wrong Dfun type). Run with full_output = 1 to get quantitative information.\n warnings.warn(warning_msg, ODEintWarning)\nodeint failed with mxstep=500..." ], [ "d = bokeh.plotting.figure(plot_width=400, plot_height=300, x_axis_label = x, y_axis_label=y)\nd.circle(timepoints, results_1[X_id8], legend_label = \"rR1_i:dA1\", color = \"orange\")\nd.circle(timepoints, results_1[X_id9], legend_label = \"rR2:dA2\", color = \"magenta\")\nd.legend.click_policy=\"hide\"\nd.legend.location=\"bottom_right\"\n\n\nf = bokeh.plotting.figure(plot_width=400, plot_height=300, x_axis_label = x, y_axis_label=y)\nf.circle(timepoints, results_1[X_id10], legend_label = \"dA2\", color = \"magenta\")\nf.circle(timepoints, results_1[X_id11], legend_label = \"dA1\", color = \"orange\")\nf.legend.click_policy=\"hide\"\nf.legend.location=\"top_right\"\n\nX_id12 = cell_1_model.getSpeciesByName(\"complex_Core1_ON_protein_RNAP\").getId()\nX_id13 = cell_1_model.getSpeciesByName(\"complex_Core2_ON_protein_RNAP\").getId()\n\ng = bokeh.plotting.figure(plot_width=400, plot_height=300, x_axis_label = x, y_axis_label=y)\ng.circle(timepoints, results_1[X_id12], legend_label = \"Core1_ON_RNAP\", color = \"blue\")\n#g.circle(timepoints, results_1[X_id13], legend_label = \"Core2_ON_RNAP\", color = \"red\")\ng.legend.click_policy=\"hide\"\ng.legend.location=\"center_right\"\n\n\n\nbokeh.io.show(row(d, f, g))\nwarnings.filterwarnings(\"ignore\")", "_____no_output_____" ] ], [ [ "## Cell 2 with transport protein ##", "_____no_output_____" ] ], [ [ "ss1 = createSubsystem (\"SBML_Models/BiSwitch_CRN.xml\")\nss2 = createSubsystem (\"SBML_Models/rR1_external_reservoir.xml\")\n\n# Create a simple rR1 membrane \nmb2 = createSubsystem(\"SBML_Models/rR1_membrane1_detailed.xml\", membrane = True)\n\ncell_2 = System(\"cell_2\")\ncell_2.setInternal([ss1])\ncell_2.setExternal([ss2])\ncell_2.setMembrane(mb2)\n\ncell_2_model = cell_2.getModel()\n\n##### Set Species Amount #####\n\n#cell_2_model.setSpeciesAmount('rna_rR1_e', 0, compartment = 'cell_2_external')\ncell_2_model.setSpeciesAmount('rna_rR1_e', 100, compartment = 'cell_2_external')\n\ncell_2_model.setSpeciesAmount('Core1_OFF', 6e3, compartment = 
'cell_2_internal')\ncell_2_model.setSpeciesAmount('Core2_OFF', 5e3, compartment = 'cell_2_internal')\ncell_2_model.setSpeciesAmount('dna_dA2', 6e3, compartment = 'cell_2_internal')\ncell_2_model.setSpeciesAmount('dna_dA1', 6e3, compartment = 'cell_2_internal')\n\ncell_2_model.setSpeciesAmount('protein_RNAseH', 20, compartment = 'cell_2_internal')\n#cell_2_model.setSpeciesAmount('protein_RNAseH', 40, compartment = 'cell_2_internal')\n\ncell_2_model.setSpeciesAmount('protein_RNAP', 150, compartment = 'cell_2_internal')\n\n#cell_2_model.getSBMLDocument().getModel().getCompartment(1).setSize(1e-4) \n\ncell_2_model.writeSBML('SBML_Models/cell_2_model.xml')", "The subsystem from SBML_Models/rR1_membrane1_detailed.xml has multiple compartments\n" ], [ "# Calling Names\n\nX_id1 = cell_2_model.getSpeciesByName('complex_Core1_ON').getId()\nX_id2 = cell_2_model.getSpeciesByName('Core1_OFF').getId()\nX_id3 = cell_2_model.getSpeciesByName('complex_Core2_ON').getId()\nX_id4 = cell_2_model.getSpeciesByName('Core2_OFF').getId()\n\nX_id5 = cell_2_model.getSpeciesByName(\"rna_rR2\").getId()\nX_id6 = cell_2_model.getSpeciesByName(\"rna_rR1\", compartment = 'cell_2_internal').getId()\nX_id7 = cell_2_model.getSpeciesByName(\"rna_rR1_e\", compartment = 'cell_2_external').getId()\n\nX_id8 = cell_2_model.getSpeciesByName(\"complex_Core2_AI\").getId()\nX_id9 = cell_2_model.getSpeciesByName(\"complex_Core1_AI\").getId()\n\nX_id10 = cell_2_model.getSpeciesByName(\"dna_dA2\").getId()\nX_id11 = cell_2_model.getSpeciesByName(\"dna_dA1\").getId()", "_____no_output_____" ], [ "# Simulate with BioScrape\ntimepoints = np.linspace(0, 28800, 1000)\nresults_2,_ = cell_2_model.simulateWithBioscrape(timepoints)\ntimepoints = timepoints/3600\n\n# For label convenience\nx = 'Time (hours)'\ny = 'Concentration (uM)'\n\nbokeh.io.output_notebook()\na = bokeh.plotting.figure(plot_width=400, plot_height=300, x_axis_label = x, y_axis_label=y)\na.circle(timepoints, results_2[X_id1], legend_label = \"Core1 ON\" , color = \"blue\")\na.circle(timepoints, results_2[X_id3], legend_label = \"Core2 ON\", color = \"red\")\na.legend.click_policy=\"hide\"\na.legend.location=\"bottom_right\"\n\nb = bokeh.plotting.figure(plot_width=400, plot_height=300, x_axis_label = x, y_axis_label=y)\nb.circle(timepoints, results_2[X_id5], legend_label = \"rR2\" , color = \"red\")\nb.circle(timepoints, results_2[X_id6], legend_label = \"rR1_i\", color = \"blue\")\nb.legend.click_policy=\"hide\"\nb.legend.location=\"center_right\"\n\nc = bokeh.plotting.figure(plot_width=400, plot_height=300, x_axis_label = x, y_axis_label=y)\nc.circle(timepoints, results_2[X_id7], legend_label = \"rR1_e\", color = \"orange\")\nc.legend.click_policy=\"hide\"\nc.legend.location=\"center_right\"\n\nbokeh.io.show(row(a, c, b))", "odeint failed with mxstep=500..." 
], [ "d = bokeh.plotting.figure(plot_width=400, plot_height=300, x_axis_label = x, y_axis_label=y)\nd.circle(timepoints, results_2[X_id8], legend_label = \"rR1_i:dA1\", color = \"orange\")\nd.circle(timepoints, results_2[X_id9], legend_label = \"rR2:dA2\", color = \"magenta\")\nd.legend.click_policy=\"hide\"\nd.legend.location=\"bottom_right\"\n\n\nf = bokeh.plotting.figure(plot_width=400, plot_height=300, x_axis_label = x, y_axis_label=y)\nf.circle(timepoints, results_2[X_id10], legend_label = \"dA2\", color = \"magenta\")\nf.circle(timepoints, results_2[X_id11], legend_label = \"dA1\", color = \"orange\")\nf.legend.click_policy=\"hide\"\nf.legend.location=\"top_right\"\n\n\n\nbokeh.io.show(row(d, f))\nwarnings.filterwarnings(\"ignore\")", "_____no_output_____" ] ] ]
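Both simulations above log `odeint failed with mxstep=500...`, which usually signals a stiff ODE system. Whether the bioscrape wrapper used here exposes solver options is an assumption, so as a general remedy this sketch shows a stiff-aware solver in plain SciPy on a toy system (the equations are purely illustrative, not the BiSwitch CRN):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy stiff system: a fast-decaying species coupled to a slow one (illustrative only).
def rhs(t, y):
    return [-1000.0 * y[0] + y[1], -0.5 * y[1]]

# 'LSODA' switches automatically between stiff and non-stiff modes; 'BDF' is another stiff choice.
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 1.0], method="LSODA", rtol=1e-6, atol=1e-9)
print(sol.success, sol.y[:, -1])
```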
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
4a5fb855b268504a7faa49637e98fcc31d71da2c
514,753
ipynb
Jupyter Notebook
Naphat - lab1 -Report.ipynb
pection/MachineLearning-Practice
9aa7d61b4085d6de9d7acca9868c6035c1aab93a
[ "MIT" ]
null
null
null
Naphat - lab1 -Report.ipynb
pection/MachineLearning-Practice
9aa7d61b4085d6de9d7acca9868c6035c1aab93a
[ "MIT" ]
null
null
null
Naphat - lab1 -Report.ipynb
pection/MachineLearning-Practice
9aa7d61b4085d6de9d7acca9868c6035c1aab93a
[ "MIT" ]
1
2020-11-24T18:16:34.000Z
2020-11-24T18:16:34.000Z
90.833422
137,068
0.684144
[ [ [ "## 3. Exploring data tables with Pandas\n\n1. Use Pandas to read the house prices data. How many columns and rows are there in this dataset?\n2. The first step I usually do is to use commands like pandas.head() to print a few rows of data. Look around what kind of features are available and read data description.txt for more info. Try to understand as much as you can. Pick three features you think will be good predictors of house prices and explain what they are.\n3. How many unique conditions are there in SaleCondition? Use Pandas to find out how many samples are labeled with each condition. What do you learn from doing this?\n4. Select one variable you picked in b., do you want to know something more about that variable? Use Pandas to answer your own question and de- scribe what you did shortly here.", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nimport sklearn", "_____no_output_____" ], [ "data1 =pd.read_csv('train.csv')", "_____no_output_____" ], [ "print (data1)", " Id MSSubClass MSZoning LotFrontage LotArea Street Alley LotShape \\\n0 1 60 RL 65.0 8450 Pave NaN Reg \n1 2 20 RL 80.0 9600 Pave NaN Reg \n2 3 60 RL 68.0 11250 Pave NaN IR1 \n3 4 70 RL 60.0 9550 Pave NaN IR1 \n4 5 60 RL 84.0 14260 Pave NaN IR1 \n5 6 50 RL 85.0 14115 Pave NaN IR1 \n6 7 20 RL 75.0 10084 Pave NaN Reg \n7 8 60 RL NaN 10382 Pave NaN IR1 \n8 9 50 RM 51.0 6120 Pave NaN Reg \n9 10 190 RL 50.0 7420 Pave NaN Reg \n10 11 20 RL 70.0 11200 Pave NaN Reg \n11 12 60 RL 85.0 11924 Pave NaN IR1 \n12 13 20 RL NaN 12968 Pave NaN IR2 \n13 14 20 RL 91.0 10652 Pave NaN IR1 \n14 15 20 RL NaN 10920 Pave NaN IR1 \n15 16 45 RM 51.0 6120 Pave NaN Reg \n16 17 20 RL NaN 11241 Pave NaN IR1 \n17 18 90 RL 72.0 10791 Pave NaN Reg \n18 19 20 RL 66.0 13695 Pave NaN Reg \n19 20 20 RL 70.0 7560 Pave NaN Reg \n20 21 60 RL 101.0 14215 Pave NaN IR1 \n21 22 45 RM 57.0 7449 Pave Grvl Reg \n22 23 20 RL 75.0 9742 Pave NaN Reg \n23 24 120 RM 44.0 4224 Pave NaN Reg \n24 25 20 RL NaN 8246 Pave NaN IR1 \n25 26 20 RL 110.0 14230 Pave NaN Reg \n26 27 20 RL 60.0 7200 Pave NaN Reg \n27 28 20 RL 98.0 11478 Pave NaN Reg \n28 29 20 RL 47.0 16321 Pave NaN IR1 \n29 30 30 RM 60.0 6324 Pave NaN IR1 \n... ... ... ... ... ... ... ... ... \n1430 1431 60 RL 60.0 21930 Pave NaN IR3 \n1431 1432 120 RL NaN 4928 Pave NaN IR1 \n1432 1433 30 RL 60.0 10800 Pave Grvl Reg \n1433 1434 60 RL 93.0 10261 Pave NaN IR1 \n1434 1435 20 RL 80.0 17400 Pave NaN Reg \n1435 1436 20 RL 80.0 8400 Pave NaN Reg \n1436 1437 20 RL 60.0 9000 Pave NaN Reg \n1437 1438 20 RL 96.0 12444 Pave NaN Reg \n1438 1439 20 RM 90.0 7407 Pave NaN Reg \n1439 1440 60 RL 80.0 11584 Pave NaN Reg \n1440 1441 70 RL 79.0 11526 Pave NaN IR1 \n1441 1442 120 RM NaN 4426 Pave NaN Reg \n1442 1443 60 FV 85.0 11003 Pave NaN Reg \n1443 1444 30 RL NaN 8854 Pave NaN Reg \n1444 1445 20 RL 63.0 8500 Pave NaN Reg \n1445 1446 85 RL 70.0 8400 Pave NaN Reg \n1446 1447 20 RL NaN 26142 Pave NaN IR1 \n1447 1448 60 RL 80.0 10000 Pave NaN Reg \n1448 1449 50 RL 70.0 11767 Pave NaN Reg \n1449 1450 180 RM 21.0 1533 Pave NaN Reg \n1450 1451 90 RL 60.0 9000 Pave NaN Reg \n1451 1452 20 RL 78.0 9262 Pave NaN Reg \n1452 1453 180 RM 35.0 3675 Pave NaN Reg \n1453 1454 20 RL 90.0 17217 Pave NaN Reg \n1454 1455 20 FV 62.0 7500 Pave Pave Reg \n1455 1456 60 RL 62.0 7917 Pave NaN Reg \n1456 1457 20 RL 85.0 13175 Pave NaN Reg \n1457 1458 70 RL 66.0 9042 Pave NaN Reg \n1458 1459 20 RL 68.0 9717 Pave NaN Reg \n1459 1460 20 RL 75.0 9937 Pave NaN Reg \n\n LandContour Utilities ... 
PoolArea PoolQC Fence MiscFeature \\\n0 Lvl AllPub ... 0 NaN NaN NaN \n1 Lvl AllPub ... 0 NaN NaN NaN \n2 Lvl AllPub ... 0 NaN NaN NaN \n3 Lvl AllPub ... 0 NaN NaN NaN \n4 Lvl AllPub ... 0 NaN NaN NaN \n5 Lvl AllPub ... 0 NaN MnPrv Shed \n6 Lvl AllPub ... 0 NaN NaN NaN \n7 Lvl AllPub ... 0 NaN NaN Shed \n8 Lvl AllPub ... 0 NaN NaN NaN \n9 Lvl AllPub ... 0 NaN NaN NaN \n10 Lvl AllPub ... 0 NaN NaN NaN \n11 Lvl AllPub ... 0 NaN NaN NaN \n12 Lvl AllPub ... 0 NaN NaN NaN \n13 Lvl AllPub ... 0 NaN NaN NaN \n14 Lvl AllPub ... 0 NaN GdWo NaN \n15 Lvl AllPub ... 0 NaN GdPrv NaN \n16 Lvl AllPub ... 0 NaN NaN Shed \n17 Lvl AllPub ... 0 NaN NaN Shed \n18 Lvl AllPub ... 0 NaN NaN NaN \n19 Lvl AllPub ... 0 NaN MnPrv NaN \n20 Lvl AllPub ... 0 NaN NaN NaN \n21 Bnk AllPub ... 0 NaN GdPrv NaN \n22 Lvl AllPub ... 0 NaN NaN NaN \n23 Lvl AllPub ... 0 NaN NaN NaN \n24 Lvl AllPub ... 0 NaN MnPrv NaN \n25 Lvl AllPub ... 0 NaN NaN NaN \n26 Lvl AllPub ... 0 NaN NaN NaN \n27 Lvl AllPub ... 0 NaN NaN NaN \n28 Lvl AllPub ... 0 NaN NaN NaN \n29 Lvl AllPub ... 0 NaN NaN NaN \n... ... ... ... ... ... ... ... \n1430 Lvl AllPub ... 0 NaN NaN NaN \n1431 Lvl AllPub ... 0 NaN NaN NaN \n1432 Lvl AllPub ... 0 NaN NaN NaN \n1433 Lvl AllPub ... 0 NaN NaN NaN \n1434 Low AllPub ... 0 NaN NaN NaN \n1435 Lvl AllPub ... 0 NaN GdPrv NaN \n1436 Lvl AllPub ... 0 NaN GdWo NaN \n1437 Lvl AllPub ... 0 NaN NaN NaN \n1438 Lvl AllPub ... 0 NaN MnPrv NaN \n1439 Lvl AllPub ... 0 NaN NaN NaN \n1440 Bnk AllPub ... 0 NaN NaN NaN \n1441 Lvl AllPub ... 0 NaN NaN NaN \n1442 Lvl AllPub ... 0 NaN NaN NaN \n1443 Lvl AllPub ... 0 NaN NaN NaN \n1444 Lvl AllPub ... 0 NaN NaN NaN \n1445 Lvl AllPub ... 0 NaN NaN NaN \n1446 Lvl AllPub ... 0 NaN NaN NaN \n1447 Lvl AllPub ... 0 NaN NaN NaN \n1448 Lvl AllPub ... 0 NaN GdWo NaN \n1449 Lvl AllPub ... 0 NaN NaN NaN \n1450 Lvl AllPub ... 0 NaN NaN NaN \n1451 Lvl AllPub ... 0 NaN NaN NaN \n1452 Lvl AllPub ... 0 NaN NaN NaN \n1453 Lvl AllPub ... 0 NaN NaN NaN \n1454 Lvl AllPub ... 0 NaN NaN NaN \n1455 Lvl AllPub ... 0 NaN NaN NaN \n1456 Lvl AllPub ... 0 NaN MnPrv NaN \n1457 Lvl AllPub ... 0 NaN GdPrv Shed \n1458 Lvl AllPub ... 0 NaN NaN NaN \n1459 Lvl AllPub ... 0 NaN NaN NaN \n\n MiscVal MoSold YrSold SaleType SaleCondition SalePrice \n0 0 2 2008 WD Normal 208500 \n1 0 5 2007 WD Normal 181500 \n2 0 9 2008 WD Normal 223500 \n3 0 2 2006 WD Abnorml 140000 \n4 0 12 2008 WD Normal 250000 \n5 700 10 2009 WD Normal 143000 \n6 0 8 2007 WD Normal 307000 \n7 350 11 2009 WD Normal 200000 \n8 0 4 2008 WD Abnorml 129900 \n9 0 1 2008 WD Normal 118000 \n10 0 2 2008 WD Normal 129500 \n11 0 7 2006 New Partial 345000 \n12 0 9 2008 WD Normal 144000 \n13 0 8 2007 New Partial 279500 \n14 0 5 2008 WD Normal 157000 \n15 0 7 2007 WD Normal 132000 \n16 700 3 2010 WD Normal 149000 \n17 500 10 2006 WD Normal 90000 \n18 0 6 2008 WD Normal 159000 \n19 0 5 2009 COD Abnorml 139000 \n20 0 11 2006 New Partial 325300 \n21 0 6 2007 WD Normal 139400 \n22 0 9 2008 WD Normal 230000 \n23 0 6 2007 WD Normal 129900 \n24 0 5 2010 WD Normal 154000 \n25 0 7 2009 WD Normal 256300 \n26 0 5 2010 WD Normal 134800 \n27 0 5 2010 WD Normal 306000 \n28 0 12 2006 WD Normal 207500 \n29 0 5 2008 WD Normal 68500 \n... ... ... ... ... ... ... 
\n1430 0 7 2006 WD Normal 192140 \n1431 0 10 2009 WD Normal 143750 \n1432 0 8 2007 WD Normal 64500 \n1433 0 5 2008 WD Normal 186500 \n1434 0 5 2006 WD Normal 160000 \n1435 0 7 2008 COD Abnorml 174000 \n1436 0 5 2007 WD Normal 120500 \n1437 0 11 2008 New Partial 394617 \n1438 0 4 2010 WD Normal 149700 \n1439 0 11 2007 WD Normal 197000 \n1440 0 9 2008 WD Normal 191000 \n1441 0 5 2008 WD Normal 149300 \n1442 0 4 2009 WD Normal 310000 \n1443 0 5 2009 WD Normal 121000 \n1444 0 11 2007 WD Normal 179600 \n1445 0 5 2007 WD Normal 129000 \n1446 0 4 2010 WD Normal 157900 \n1447 0 12 2007 WD Normal 240000 \n1448 0 5 2007 WD Normal 112000 \n1449 0 8 2006 WD Abnorml 92000 \n1450 0 9 2009 WD Normal 136000 \n1451 0 5 2009 New Partial 287090 \n1452 0 5 2006 WD Normal 145000 \n1453 0 7 2006 WD Abnorml 84500 \n1454 0 10 2009 WD Normal 185000 \n1455 0 8 2007 WD Normal 175000 \n1456 0 2 2010 WD Normal 210000 \n1457 2500 5 2010 WD Normal 266500 \n1458 0 4 2010 WD Normal 142125 \n1459 0 6 2008 WD Normal 147500 \n\n[1460 rows x 81 columns]\n" ], [ "#3.1\nrows = 1460\ncolumns = 81", "_____no_output_____" ], [ "#3.2 \n# MSSubClass, LotArea and LotFrontage are features with a fairly strong impact on house prices", "_____no_output_____" ], [ "#3.3\nfor i in data1['SaleCondition'].unique():\n    print (i)", "Normal\nAbnorml\nPartial\nAdjLand\nAlloca\nFamily\n" ], [ "#3.3\n#6 unique 1.Normal 2.Abnormal 3.Partial 4.AdjLand 5.Alloca 6.Family", "_____no_output_____" ], [ "data1['LotArea'].mean()", "_____no_output_____" ], [ "#3.4 \n# Pick 'LotArea': I want the mean of the lot size in square feet, to see roughly what size most buyers go for ", "_____no_output_____" ] ], [ [ "## 4. Learning to explore data with Seaborn\n\n1. Let us first look at the variable we want to predict, SalePrice. Use Seaborn to plot a histogram of sale prices. What do you notice in the histogram?\n2. Plot the histogram of the LotArea variable. What do you notice in the histogram?\n3. Use Seaborn to plot LotArea in the x-axis and SalePrice on the y-axis. Try plotting log(LotArea) versus log(SalePrice) and see if the plot looks better.", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nimport seaborn as sns", "_____no_output_____" ], [ "sns.distplot(data1['SalePrice']) \n\n#4.1 We can see that most purchases fall in the 100,000-200,000 price range", "_____no_output_____" ], [ "sns.distplot(data1['LotArea']) ", "_____no_output_____" ], [ "#4.2 The plot shows which LotArea values people tend to have: they lie roughly in the 0-500,000 range, and the band around 100,000-200,000 is clearly the densest", "_____no_output_____" ], [ "sns.jointplot(x= 'LotArea',y ='SalePrice',data = data1)", "_____no_output_____" ], [ "data1['LotArea'] =np.log(data1['LotArea'])\ndata1['SalePrice'] =np.log(data1['SalePrice'])\nsns.jointplot(x= 'LotArea',y ='SalePrice',data = data1)", "_____no_output_____" ], [ "#4.3 The two plots above use different axis scales: after the log transform the LotArea values shrink, so nearby points cluster together more tightly and move toward the middle of the graph", "_____no_output_____" ] ], [ [ "## 5. Dealing with missing values\n\n1. Suppose we want to start the first step of house price modeling by exploring the relationship between four variables: MSSubClass, LotArea, LotFrontage and SalePrice. I have done some exploring and found out that LotFrontage has a lot of missing values, so you need to fix it.\n2. LotFrontage is the width of the front side of the property. Use Pandas to find out how many of the houses in our database are missing the LotFrontage value.\n3. 
Use Pandas to replace NaN values with another number. Since we are just exploring and not modeling yet, you can simply replace NaN with zeros for now.", "_____no_output_____" ] ], [ [ "data1[['MSSubClass','LotArea','LotFrontage','SalePrice']]", "_____no_output_____" ], [ "#5.1 more NaN in LotFrontage", "_____no_output_____" ], [ "#data1['LotFrontage'].mean()", "_____no_output_____" ], [ "#data1['LotFrontage'].fillna(70)", "_____no_output_____" ], [ "data1['LotFrontage'].isnull().sum()", "_____no_output_____" ], [ "#5.2 The .isnull().sum() call counts every value that is missing, giving 259", "_____no_output_____" ], [ "data1['LotFrontage']=data1['LotFrontage'].fillna(0)", "_____no_output_____" ], [ "print (data1['LotFrontage'])", "0 65.0\n1 80.0\n2 68.0\n3 60.0\n4 84.0\n5 85.0\n6 75.0\n7 0.0\n8 51.0\n9 50.0\n10 70.0\n11 85.0\n12 0.0\n13 91.0\n14 0.0\n15 51.0\n16 0.0\n17 72.0\n18 66.0\n19 70.0\n20 101.0\n21 57.0\n22 75.0\n23 44.0\n24 0.0\n25 110.0\n26 60.0\n27 98.0\n28 47.0\n29 60.0\n ... \n1430 60.0\n1431 0.0\n1432 60.0\n1433 93.0\n1434 80.0\n1435 80.0\n1436 60.0\n1437 96.0\n1438 90.0\n1439 80.0\n1440 79.0\n1441 0.0\n1442 85.0\n1443 0.0\n1444 63.0\n1445 70.0\n1446 0.0\n1447 80.0\n1448 70.0\n1449 21.0\n1450 60.0\n1451 78.0\n1452 35.0\n1453 90.0\n1454 62.0\n1455 62.0\n1456 85.0\n1457 66.0\n1458 68.0\n1459 75.0\nName: LotFrontage, Length: 1460, dtype: float64\n" ], [ "#5.3 As the task asks, I replaced the NaN values with 0 using .fillna(0)", "_____no_output_____" ] ], [ [ "## 6. Correlations between multiple variables\n\nOne incredible feature of Seaborn is the ability to create a correlation grid with the pairplot function. We want to create one single plot that shows us how all variables are correlated.\n1. First, you need to create a data table with four columns: MSSubClass, LotArea (with log function applied), LotFrontage (missing values replaced) and SalePrice (with log function applied).\n2. Then, use pairplot to create a grid of correlation plots. What do you observe from this plot?", "_____no_output_____" ] ], [ [ "DATA_Create =data1[['MSSubClass','LotArea','LotFrontage','SalePrice']]", "_____no_output_____" ], [ "print (DATA_Create)\n#6.1 From the earlier steps, LotArea and SalePrice are already log-transformed, and the NaN values in LotFrontage have been filled with 0", " MSSubClass LotArea LotFrontage SalePrice\n0 60 9.041922 65.0 12.247694\n1 20 9.169518 80.0 12.109011\n2 60 9.328123 68.0 12.317167\n3 70 9.164296 60.0 11.849398\n4 60 9.565214 84.0 12.429216\n5 50 9.554993 85.0 11.870600\n6 20 9.218705 75.0 12.634603\n7 60 9.247829 0.0 12.206073\n8 50 8.719317 51.0 11.774520\n9 190 8.911934 50.0 11.678440\n10 20 9.323669 70.0 11.771436\n11 60 9.386308 85.0 12.751300\n12 20 9.470240 0.0 11.877569\n13 20 9.273503 91.0 12.540758\n14 20 9.298351 0.0 11.964001\n15 45 8.719317 51.0 11.790557\n16 20 9.327323 0.0 11.911702\n17 90 9.286468 72.0 11.407565\n18 20 9.524786 66.0 11.976659\n19 20 8.930626 70.0 11.842229\n20 60 9.562053 101.0 12.692503\n21 45 8.915835 57.0 11.845103\n22 20 9.184202 75.0 12.345835\n23 120 8.348538 44.0 11.774520\n24 20 9.017484 0.0 11.944708\n25 20 9.563108 110.0 12.454104\n26 20 8.881836 60.0 11.811547\n27 20 9.348187 98.0 12.631340\n28 20 9.700208 47.0 12.242887\n29 30 8.752107 60.0 11.134589\n... ... ... ... 
...\n1430 60 9.995611 60.0 12.165980\n1431 120 8.502689 0.0 11.875831\n1432 30 9.287301 60.0 11.074421\n1433 60 9.236106 93.0 12.136187\n1434 20 9.764225 80.0 11.982929\n1435 20 9.035987 80.0 12.066811\n1436 20 9.104980 60.0 11.699405\n1437 20 9.428994 96.0 12.885671\n1438 20 8.910181 90.0 11.916389\n1439 60 9.357380 80.0 12.190959\n1440 70 9.352361 79.0 12.160029\n1441 120 8.395252 0.0 11.913713\n1442 60 9.305923 85.0 12.644328\n1443 30 9.088625 0.0 11.703546\n1444 20 9.047821 63.0 12.098487\n1445 85 9.035987 70.0 11.767568\n1446 20 10.171298 0.0 11.969717\n1447 60 9.210340 80.0 12.388394\n1448 50 9.373054 70.0 11.626254\n1449 180 7.334982 21.0 11.429544\n1450 90 9.104980 60.0 11.820410\n1451 20 9.133675 78.0 12.567551\n1452 180 8.209308 35.0 11.884489\n1453 20 9.753653 90.0 11.344507\n1454 20 8.922658 62.0 12.128111\n1455 60 8.976768 62.0 12.072541\n1456 20 9.486076 85.0 12.254863\n1457 70 9.109636 66.0 12.493130\n1458 20 9.181632 68.0 11.864462\n1459 20 9.204020 75.0 11.901583\n\n[1460 rows x 4 columns]\n" ], [ "sns.pairplot(DATA_Create)", "_____no_output_____" ], [ "#6.2 After the pairplot we can see the relationship in each subplot: when the same column is on both the X and Y axes we get a histogram, and every other panel is a scatter plot", "_____no_output_____" ] ], [ [ "## 7. Data Preparation\n\nLet's prepare train.csv for model training\n\n1. Pick columns that are numeric data and plot distributions of those data (with Seaborn). If you find a column with skewed distribution you will write a script to transform that column with a log function. Then standardize them.\n2. For categorical variables, we will simply transform categorical data into numeric data by using function `pandas.get_dummies()`.\n3. Split data into x and y. The variable x contains all the house features except the SalePrice. 
y contains only the SalePrice.", "_____no_output_____" ] ], [ [ "#data1.skew()\nNumeric_data = data1.select_dtypes(include = ['int8','int16','int32','int64','float16','float32','float64'])\nCat_dat = ['Id','MSSubClass','MoSold','YrSold','OverallQual','OverallCond','YearBuilt','YearRemodAdd','GarageYrBlt']\nColumn_new = Numeric_data.drop(columns=Cat_dat)\nfor i in Column_new.columns:\n    Column_new[i] = Column_new[i].fillna(Column_new[i].mean())\n    Column_new[i] =np.log(Column_new[i]+1)\n    sns.distplot(Column_new[i])\nColumn_new", "_____no_output_____" ], [ "#7.1 Separate out the non-numeric data, fill any missing value with that column's mean, and then apply the log transform to everything", "_____no_output_____" ], [ "from sklearn import preprocessing", "_____no_output_____" ], [ "Cat= data1.select_dtypes(exclude = ['int8','int16','int32','int64','float16','float32','float64'])\nColumndown = data1[Cat_dat]\nConCad_dat = pd.concat([Columndown,Cat],axis =1)\nDummie = pd.get_dummies(ConCad_dat)\nDummie", "_____no_output_____" ], [ "#7.2 Convert the categorical variables to numeric with .get_dummies() ", "_____no_output_____" ], [ "x = Column_new.drop(columns= ['SalePrice'],axis=1)\ny = Column_new['SalePrice']\n#7.3 x is everything except SalePrice; y is only SalePrice\nprint (x,y)", " LotFrontage LotArea MasVnrArea BsmtFinSF1 BsmtFinSF2 BsmtUnfSF \\\n0 4.189655 2.306769 5.283204 6.561031 0.000000 5.017280 \n1 4.394449 2.319395 0.000000 6.886532 0.000000 5.652489 \n2 4.234107 2.334871 5.093750 6.188264 0.000000 6.075346 \n3 4.110874 2.318881 0.000000 5.379897 0.000000 6.293419 \n4 4.442651 2.357567 5.860786 6.486161 0.000000 6.196444 \n5 4.454347 2.356599 0.000000 6.597146 0.000000 4.174387 \n6 4.330733 2.324220 5.231109 7.222566 0.000000 5.762051 \n7 0.000000 2.327066 5.484797 6.756932 3.496508 5.379897 \n8 3.951244 2.274115 0.000000 0.000000 0.000000 6.859615 \n9 3.931826 2.293740 0.000000 6.747587 0.000000 4.948760 \n10 4.262680 2.334439 0.000000 6.810142 0.000000 4.905275 \n11 4.454347 2.340488 5.659482 6.906755 0.000000 5.181784 \n12 0.000000 2.348537 0.000000 6.603944 0.000000 5.170484 \n13 4.521789 2.329568 5.726848 0.000000 0.000000 7.309881 \n14 0.000000 2.331984 5.361292 6.598509 0.000000 6.255750 \n15 3.951244 2.274115 0.000000 0.000000 0.000000 6.725034 \n16 0.000000 2.334793 5.198497 6.361302 0.000000 6.056784 \n17 4.290459 2.330829 0.000000 0.000000 0.000000 0.000000 \n18 4.204693 2.353733 0.000000 6.472346 0.000000 6.150603 \n19 4.262680 2.295624 0.000000 6.224558 0.000000 6.265301 \n20 4.624973 2.357268 5.942799 0.000000 0.000000 7.055313 \n21 4.060443 2.294133 0.000000 0.000000 0.000000 6.458338 \n22 4.330733 2.320838 5.641907 0.000000 0.000000 7.483244 \n23 3.806662 2.235220 0.000000 6.734592 0.000000 5.303305 \n24 0.000000 2.304332 0.000000 5.241747 6.505784 5.323010 \n25 4.709530 2.357368 6.463029 0.000000 0.000000 7.356918 \n26 4.110874 2.290698 0.000000 5.459586 6.188264 5.198497 \n27 4.595120 2.336811 5.303305 7.105786 0.000000 6.188264 \n28 3.871201 2.370263 0.000000 7.153052 0.000000 5.337538 \n29 4.110874 2.277483 0.000000 0.000000 0.000000 6.255750 \n... ... ... ... ... ... 
\n1430 4.110874 2.397496 0.000000 0.000000 0.000000 6.597146 \n1431 0.000000 2.251575 0.000000 6.865891 0.000000 0.000000 \n1432 4.110874 2.330910 0.000000 0.000000 0.000000 6.487684 \n1433 4.543295 2.325921 5.765191 0.000000 0.000000 6.842683 \n1434 4.394449 2.376228 0.000000 6.842683 0.000000 5.252273 \n1435 4.394449 2.306177 5.472271 0.000000 0.000000 7.185387 \n1436 4.110874 2.313028 0.000000 6.424869 0.000000 5.517453 \n1437 4.574711 2.344590 6.056784 7.198184 0.000000 6.391917 \n1438 4.510860 2.293563 0.000000 6.398595 0.000000 5.746203 \n1439 4.394449 2.337699 4.574711 5.755742 4.709530 4.744932 \n1440 4.382027 2.337215 0.000000 0.000000 0.000000 6.378426 \n1441 0.000000 2.240204 4.997212 6.548219 0.000000 5.023881 \n1442 4.454347 2.332719 5.081404 6.641182 0.000000 5.533389 \n1443 0.000000 2.311409 0.000000 0.000000 0.000000 6.859615 \n1444 4.158883 2.307356 4.672829 0.000000 0.000000 7.260523 \n1445 4.262680 2.306177 0.000000 5.236442 6.442540 0.000000 \n1446 0.000000 2.413348 5.247024 6.386879 0.000000 6.390241 \n1447 4.394449 2.323401 6.084499 6.984716 0.000000 4.955827 \n1448 4.262680 2.339212 0.000000 0.000000 0.000000 6.329721 \n1449 3.091042 2.120461 0.000000 6.317165 0.000000 4.356709 \n1450 4.110874 2.313028 0.000000 0.000000 0.000000 6.799056 \n1451 4.369448 2.315864 5.273000 0.000000 0.000000 7.361375 \n1452 3.583519 2.220215 4.394449 6.306275 0.000000 0.000000 \n1453 4.510860 2.375245 0.000000 0.000000 0.000000 7.039660 \n1454 4.143135 2.294821 0.000000 6.018593 0.000000 6.699500 \n1455 4.143135 2.300259 0.000000 0.000000 0.000000 6.860664 \n1456 4.454347 2.350048 4.787492 6.673298 5.099866 6.380123 \n1457 4.204693 2.313489 0.000000 5.620401 0.000000 6.777647 \n1458 4.234107 2.320585 0.000000 3.912023 6.937314 0.000000 \n1459 4.330733 2.322782 0.000000 6.722630 5.673323 4.919981 \n\n TotalBsmtSF 1stFlrSF 2ndFlrSF LowQualFinSF ... Fireplaces \\\n0 6.753438 6.753438 6.751101 0.000000 ... 0.000000 \n1 7.141245 7.141245 0.000000 0.000000 ... 0.693147 \n2 6.825460 6.825460 6.765039 0.000000 ... 0.693147 \n3 6.629363 6.869014 6.629363 0.000000 ... 0.693147 \n4 7.044033 7.044033 6.960348 0.000000 ... 0.693147 \n5 6.680855 6.680855 6.340359 0.000000 ... 0.000000 \n6 7.430707 7.435438 0.000000 0.000000 ... 0.693147 \n7 7.010312 7.010312 6.891626 0.000000 ... 1.098612 \n8 6.859615 6.930495 6.624065 0.000000 ... 1.098612 \n9 6.899723 6.982863 0.000000 0.000000 ... 1.098612 \n10 6.947937 6.947937 0.000000 0.000000 ... 0.000000 \n11 7.069874 7.075809 7.041412 0.000000 ... 1.098612 \n12 6.816736 6.816736 0.000000 0.000000 ... 0.000000 \n13 7.309881 7.309881 0.000000 0.000000 ... 0.693147 \n14 7.134094 7.134094 0.000000 0.000000 ... 0.693147 \n15 6.725034 6.751101 0.000000 0.000000 ... 0.000000 \n16 6.912743 6.912743 0.000000 0.000000 ... 0.693147 \n17 0.000000 7.167809 0.000000 0.000000 ... 0.000000 \n18 7.016610 7.016610 0.000000 0.000000 ... 0.000000 \n19 6.937314 7.200425 0.000000 0.000000 ... 0.000000 \n20 7.055313 7.055313 7.105786 0.000000 ... 0.693147 \n21 6.458338 7.011214 0.000000 0.000000 ... 0.693147 \n22 7.483244 7.493317 0.000000 0.000000 ... 0.693147 \n23 6.947937 6.966967 0.000000 0.000000 ... 0.693147 \n24 6.966967 6.966967 0.000000 0.000000 ... 0.693147 \n25 7.356918 7.378384 0.000000 0.000000 ... 0.693147 \n26 6.803505 6.803505 0.000000 0.000000 ... 0.000000 \n27 7.441320 7.441320 0.000000 0.000000 ... 0.693147 \n28 7.303170 7.378384 0.000000 0.000000 ... 1.098612 \n29 6.255750 6.255750 0.000000 0.000000 ... 0.000000 \n... ... ... ... ... ... ... 
\n1430 6.597146 6.599870 7.007601 0.000000 ... 0.693147 \n1431 6.865891 6.865891 0.000000 0.000000 ... 0.000000 \n1432 6.487684 6.876265 0.000000 0.000000 ... 0.000000 \n1433 6.842683 6.870053 6.722630 0.000000 ... 0.693147 \n1434 7.027315 7.027315 0.000000 0.000000 ... 0.693147 \n1435 7.185387 7.338238 0.000000 0.000000 ... 0.693147 \n1436 6.762730 6.762730 0.000000 0.000000 ... 0.000000 \n1437 7.566828 7.566828 0.000000 0.000000 ... 0.693147 \n1438 6.816736 7.120444 0.000000 0.000000 ... 0.000000 \n1439 6.291569 6.947937 6.530878 0.000000 ... 0.693147 \n1440 6.378426 7.261225 6.618739 5.953243 ... 0.693147 \n1441 6.744059 6.744059 0.000000 0.000000 ... 0.693147 \n1442 6.925595 6.934397 6.889591 0.000000 ... 0.693147 \n1443 6.859615 6.859615 0.000000 0.000000 ... 0.693147 \n1444 7.260523 7.260523 0.000000 0.000000 ... 0.000000 \n1445 6.703188 6.817831 0.000000 0.000000 ... 0.000000 \n1446 7.080868 7.080868 0.000000 0.000000 ... 0.000000 \n1447 7.107425 7.107425 6.769642 0.000000 ... 0.693147 \n1448 6.329721 6.680855 6.311735 0.000000 ... 0.000000 \n1449 6.447306 6.447306 0.000000 0.000000 ... 0.000000 \n1450 6.799056 6.799056 6.799056 0.000000 ... 0.000000 \n1451 7.361375 7.364547 0.000000 0.000000 ... 0.693147 \n1452 6.306275 6.978214 0.000000 0.000000 ... 0.000000 \n1453 7.039660 7.039660 0.000000 0.000000 ... 0.000000 \n1454 7.108244 7.108244 0.000000 0.000000 ... 0.000000 \n1455 6.860664 6.860664 6.543912 0.000000 ... 0.693147 \n1456 7.341484 7.637234 0.000000 0.000000 ... 1.098612 \n1457 7.050123 7.080868 7.050123 0.000000 ... 1.098612 \n1458 6.983790 6.983790 0.000000 0.000000 ... 0.000000 \n1459 7.136483 7.136483 0.000000 0.000000 ... 0.000000 \n\n GarageCars GarageArea WoodDeckSF OpenPorchSF EnclosedPorch \\\n0 1.098612 6.308098 0.000000 4.127134 0.000000 \n1 1.098612 6.133398 5.700444 0.000000 0.000000 \n2 1.098612 6.411818 0.000000 3.761200 0.000000 \n3 1.386294 6.466145 0.000000 3.583519 5.609472 \n4 1.386294 6.729824 5.262690 4.442651 0.000000 \n5 1.098612 6.175867 3.713572 3.433987 0.000000 \n6 1.098612 6.456770 5.545177 4.060443 0.000000 \n7 1.098612 6.184149 5.463832 5.323010 5.433722 \n8 1.098612 6.150603 4.510860 0.000000 5.327876 \n9 0.693147 5.327876 0.000000 1.609438 0.000000 \n10 0.693147 5.953243 0.000000 0.000000 0.000000 \n11 1.386294 6.602588 4.997212 3.091042 0.000000 \n12 0.693147 5.866468 4.948760 0.000000 0.000000 \n13 1.386294 6.734592 5.081404 3.526361 0.000000 \n14 0.693147 5.866468 0.000000 5.365976 5.176150 \n15 1.098612 6.357842 3.891820 4.727388 0.000000 \n16 1.098612 6.175867 0.000000 0.000000 0.000000 \n17 1.098612 6.248043 0.000000 0.000000 0.000000 \n18 1.098612 6.357842 0.000000 4.634729 0.000000 \n19 0.693147 5.686975 0.000000 0.000000 0.000000 \n20 1.386294 6.749931 5.484797 5.043425 0.000000 \n21 0.693147 5.638355 0.000000 0.000000 5.327876 \n22 1.098612 6.282267 5.147494 5.075174 0.000000 \n23 1.098612 6.350886 4.615121 4.709530 0.000000 \n24 0.693147 5.602119 6.008813 4.510860 0.000000 \n25 1.386294 6.792344 0.000000 4.043051 0.000000 \n26 1.098612 6.357842 5.407172 3.496508 0.000000 \n27 1.386294 6.650279 0.000000 3.931826 0.000000 \n28 0.693147 5.768321 5.666427 5.556828 0.000000 \n29 0.693147 5.484797 3.912023 0.000000 4.477337 \n... ... ... ... ... ... 
\n1430 1.098612 5.921578 4.615121 3.713572 0.000000 \n1431 1.098612 6.089045 0.000000 4.110874 0.000000 \n1432 0.693147 5.379897 0.000000 0.000000 0.000000 \n1433 1.098612 6.113682 0.000000 0.000000 0.000000 \n1434 1.098612 6.184149 5.690359 3.737670 0.000000 \n1435 1.098612 6.137727 0.000000 3.610918 0.000000 \n1436 1.098612 6.270988 0.000000 0.000000 0.000000 \n1437 1.386294 6.652863 0.000000 4.204693 0.000000 \n1438 1.098612 6.828712 0.000000 5.068904 5.068904 \n1439 1.098612 6.311735 0.000000 4.488636 5.379897 \n1440 1.098612 6.511745 6.068426 0.000000 0.000000 \n1441 1.098612 6.042633 5.010635 0.000000 0.000000 \n1442 1.386294 6.700731 5.129899 3.970292 0.000000 \n1443 0.693147 5.262690 0.000000 4.595120 0.000000 \n1444 1.098612 6.440947 5.262690 4.110874 0.000000 \n1445 0.693147 5.484797 0.000000 0.000000 5.533389 \n1446 0.693147 5.746203 5.568345 3.688879 0.000000 \n1447 1.098612 6.322565 0.000000 4.189655 0.000000 \n1448 0.693147 5.953243 5.129899 3.218876 0.000000 \n1449 0.000000 0.000000 0.000000 0.000000 0.000000 \n1450 0.000000 0.000000 3.496508 3.828641 0.000000 \n1451 1.386294 6.734592 0.000000 3.610918 0.000000 \n1452 1.098612 6.265301 0.000000 3.367296 0.000000 \n1453 0.000000 0.000000 3.610918 4.043051 0.000000 \n1454 1.098612 5.993961 0.000000 4.736198 0.000000 \n1455 1.098612 6.133398 0.000000 3.713572 0.000000 \n1456 1.098612 6.216606 5.857933 0.000000 0.000000 \n1457 0.693147 5.533389 0.000000 4.110874 0.000000 \n1458 0.693147 5.484797 5.905362 0.000000 4.727388 \n1459 0.693147 5.624018 6.602588 4.234107 0.000000 \n\n 3SsnPorch ScreenPorch PoolArea MiscVal \n0 0.000000 0.000000 0.0 0.000000 \n1 0.000000 0.000000 0.0 0.000000 \n2 0.000000 0.000000 0.0 0.000000 \n3 0.000000 0.000000 0.0 0.000000 \n4 0.000000 0.000000 0.0 0.000000 \n5 5.771441 0.000000 0.0 6.552508 \n6 0.000000 0.000000 0.0 0.000000 \n7 0.000000 0.000000 0.0 5.860786 \n8 0.000000 0.000000 0.0 0.000000 \n9 0.000000 0.000000 0.0 0.000000 \n10 0.000000 0.000000 0.0 0.000000 \n11 0.000000 0.000000 0.0 0.000000 \n12 0.000000 5.176150 0.0 0.000000 \n13 0.000000 0.000000 0.0 0.000000 \n14 0.000000 0.000000 0.0 0.000000 \n15 0.000000 0.000000 0.0 0.000000 \n16 0.000000 0.000000 0.0 6.552508 \n17 0.000000 0.000000 0.0 6.216606 \n18 0.000000 0.000000 0.0 0.000000 \n19 0.000000 0.000000 0.0 0.000000 \n20 0.000000 0.000000 0.0 0.000000 \n21 0.000000 0.000000 0.0 0.000000 \n22 0.000000 0.000000 0.0 0.000000 \n23 0.000000 0.000000 0.0 0.000000 \n24 0.000000 0.000000 0.0 0.000000 \n25 0.000000 0.000000 0.0 0.000000 \n26 0.000000 0.000000 0.0 0.000000 \n27 0.000000 0.000000 0.0 0.000000 \n28 0.000000 0.000000 0.0 0.000000 \n29 0.000000 0.000000 0.0 0.000000 \n... ... ... ... ... 
\n1430 0.000000 0.000000 0.0 0.000000 \n1431 0.000000 0.000000 0.0 0.000000 \n1432 0.000000 0.000000 0.0 0.000000 \n1433 0.000000 0.000000 0.0 0.000000 \n1434 0.000000 0.000000 0.0 0.000000 \n1435 0.000000 0.000000 0.0 0.000000 \n1436 0.000000 0.000000 0.0 0.000000 \n1437 5.720312 0.000000 0.0 0.000000 \n1438 0.000000 0.000000 0.0 0.000000 \n1439 0.000000 0.000000 0.0 0.000000 \n1440 0.000000 0.000000 0.0 0.000000 \n1441 0.000000 0.000000 0.0 0.000000 \n1442 0.000000 0.000000 0.0 0.000000 \n1443 0.000000 3.713572 0.0 0.000000 \n1444 0.000000 0.000000 0.0 0.000000 \n1445 0.000000 0.000000 0.0 0.000000 \n1446 0.000000 0.000000 0.0 0.000000 \n1447 0.000000 0.000000 0.0 0.000000 \n1448 0.000000 0.000000 0.0 0.000000 \n1449 0.000000 0.000000 0.0 0.000000 \n1450 0.000000 0.000000 0.0 0.000000 \n1451 0.000000 0.000000 0.0 0.000000 \n1452 0.000000 0.000000 0.0 0.000000 \n1453 0.000000 0.000000 0.0 0.000000 \n1454 0.000000 0.000000 0.0 0.000000 \n1455 0.000000 0.000000 0.0 0.000000 \n1456 0.000000 0.000000 0.0 0.000000 \n1457 0.000000 0.000000 0.0 7.824446 \n1458 0.000000 0.000000 0.0 0.000000 \n1459 0.000000 0.000000 0.0 0.000000 \n\n[1460 rows x 28 columns] 0 2.583824\n1 2.573300\n2 2.589054\n3 2.553297\n4 2.597433\n5 2.554946\n6 2.612611\n7 2.580677\n8 2.547453\n9 2.539903\n10 2.547211\n11 2.621133\n12 2.555487\n13 2.605704\n14 2.562176\n15 2.548707\n16 2.558134\n17 2.518306\n18 2.563152\n19 2.552739\n20 2.616848\n21 2.552963\n22 2.591204\n23 2.547453\n24 2.560687\n25 2.599284\n26 2.550347\n27 2.612372\n28 2.583461\n29 2.496060\n ... \n1430 2.577636\n1431 2.555352\n1432 2.491089\n1433 2.575371\n1434 2.563635\n1435 2.570075\n1436 2.541555\n1437 2.630857\n1438 2.558497\n1439 2.579532\n1440 2.577184\n1441 2.558290\n1442 2.613324\n1443 2.541881\n1444 2.572497\n1445 2.546908\n1446 2.562617\n1447 2.594388\n1448 2.535778\n1449 2.520076\n1450 2.551038\n1451 2.607681\n1452 2.556024\n1453 2.513211\n1454 2.574756\n1455 2.570514\n1456 2.584364\n1457 2.602181\n1458 2.554469\n1459 2.557350\nName: SalePrice, Length: 1460, dtype: float64\n" ] ], [ [ "## 8. Let us first fit a very simple linear regression model, just to see what we get.\n\n1. Use import LinearRegression from sklearn.linear_model and use function `fit()` to fit the model.\n2. Use function `predict()` to get house price predictions from the model (let’s call the predicted house prices yhat).\n3. Plot `y` against `yhat` to see how good your predictions are.", "_____no_output_____" ] ], [ [ "from sklearn import linear_model", "_____no_output_____" ], [ "Data_LineReg = linear_model.LinearRegression()\nData_Linefit = Data_LineReg.fit(x,y)\nprint (Data_Linefit)\n#8.1 use fit() to fit the LinearRegression model", "LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)\n" ], [ "yhat = Data_Linefit.predict(x)\nyhat2 = Data_LineReg.predict(x)\nprint (yhat)\n#8.2 predict house prices with the LinearRegression model; yhat equals [2.58400315 2.57481799 2.5877421 ... 2.57729059 2.54419389 2.5616654 ]", "[2.58400315 2.57481799 2.5877421 ... 2.57729059 2.54419389 2.5616654 ]\n" ], [ "sns.regplot(x = y,y=yhat,data=data1)\n#8.3 plot y against yhat", "_____no_output_____" ] ], [ [ "## 9. Assessing Your Model\n\nAccording to Kaggle’s official rule on this problem, they use root mean square errors (rmse) to judge the accuracy of our model. This error computes the difference between the log of actual house prices and the log of predicted house price. 
Find the mean and take its square root.\n\nWe want to see how we compare to other machine learning contestants on Kaggle so let us compute our rmse. Luckily, sklearn has done most of the work for you by providing a mean squared error function. You can use it by importing the function from sklearn.metrics. Then, you can compute the mean squared error and take a square root to get rmse.\n\nWhat’s the rmse of your current model? Check out the Kaggle Leaderboard for this problem to see how your number measures up with the other contestants.", "_____no_output_____" ] ], [ [ "from sklearn.metrics import mean_squared_error", "_____no_output_____" ], [ "root_mean = np.sqrt(mean_squared_error(y,yhat))\nprint (root_mean)", "0.013526154149753134\n" ], [ "#9 The root mean square error between y and yhat is 0.013526154149753134", "_____no_output_____" ] ], [ [ "## 10. Cross Validation\n\nAs we discussed earlier, don’t brag about your model’s accuracy until you have performed cross validation. Let us check cross-validated performance to avoid embarrassment.\n\nLuckily, scikit learn has done most of the work for us once again. You can use the function `cross_val_predict()` to train the model with the cross-validation method and output the predictions.\n\nWhat’s the rmse of your cross-validated model? Discuss what you observe in your results here. You may try plotting this new yhat with y to get better insights about this question.", "_____no_output_____" ] ], [ [ "from sklearn import datasets,linear_model", "_____no_output_____" ], [ "from sklearn.model_selection import cross_val_predict", "_____no_output_____" ], [ "Predict_cross = cross_val_predict(Data_Linefit,x,y)\nsns.distplot(Predict_cross)", "_____no_output_____" ], [ "Predict_cross", "_____no_output_____" ], [ "sns.regplot( x= y , y=Predict_cross,data= data1)", "_____no_output_____" ], [ "#10 The cross-validated predictions Predict_cross come out as [2.5827847, 2.57504129, 2.58578173, ..., 2.57913513, 2.54371586, 2.56422615];\n#we then plot y against this new yhat produced by the cross_val_predict function ", "_____no_output_____" ] ], [ [ "## 11 (Optional) Fit Better Models\nThere are other models you can fit that will perform better than linear regression. For example, you can fit linear regression with L2 regularization. This class of models has a street name of ‘Ridge Regression’ and sklearn simply calls them Ridge. As we learned last time, this model will fight the overfitting problem. Furthermore, you can try linear regression with L1 regularization (street name Lasso Regression or Lasso in sklearn). Try these models and see how you compare with other Kagglers now. You can write about your findings below.", "_____no_output_____" ] ] ]
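Section 7 imports `sklearn.preprocessing` but never actually standardizes, and section 11 leaves Ridge and Lasso as an exercise. A minimal sketch of both together, assuming the `x` and `y` built in section 7 are in scope; the `alpha` values are illustrative, not tuned:

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

for name, model in [("Ridge", Ridge(alpha=1.0)), ("Lasso", Lasso(alpha=0.001))]:
    # Scaling inside the pipeline keeps the cross-validation free of leakage.
    pipe = make_pipeline(StandardScaler(), model)
    yhat_cv = cross_val_predict(pipe, x, y, cv=5)
    print(name, np.sqrt(mean_squared_error(y, yhat_cv)))
```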
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ] ]
4a5fce72288cc0778d5aae44836deaf28f5277ac
651,526
ipynb
Jupyter Notebook
.ipynb_checkpoints/Step_2_Color_Transform_and_Gradients_Threshold-checkpoint.ipynb
Ceachi/Project-Self-Driving-Car-Advanced-Lane-Lines
ab9b101b211fde7f2eb714d0ae6b47961d63f056
[ "MIT" ]
2
2020-01-04T15:30:48.000Z
2021-09-28T15:58:53.000Z
.ipynb_checkpoints/Step_2_Color_Transform_and_Gradients_Threshold-checkpoint.ipynb
Ceachi/Project-Self-Driving-Car-Advanced-Lane-Lines
ab9b101b211fde7f2eb714d0ae6b47961d63f056
[ "MIT" ]
null
null
null
.ipynb_checkpoints/Step_2_Color_Transform_and_Gradients_Threshold-checkpoint.ipynb
Ceachi/Project-Self-Driving-Car-Advanced-Lane-Lines
ab9b101b211fde7f2eb714d0ae6b47961d63f056
[ "MIT" ]
null
null
null
2,017.108359
640,176
0.958083
[ [ [ "#Import section\nimport numpy as np\nimport cv2\nimport glob\nimport matplotlib.pyplot as plt\nimport pickle\n%matplotlib inline", "_____no_output_____" ], [ "# Loading camera calibration coefficients(matrix and camera coefficients) from pickle file\ndef getCameraCalibrationCoefficientsFromPickleFile(filePath):\n cameraCalibration = pickle.load( open(filePath, 'rb' ) )\n mtx, dist = map(cameraCalibration.get, ('mtx', 'dist'))\n return mtx, dist\n\ndef getTestImages(filePath):\n # Load test images.\n testImages = list(map(lambda imageFileName: (imageFileName, cv2.imread(imageFileName)), glob.glob(filePath)))\n return testImages", "_____no_output_____" ], [ "def undistortImageAndGetHLS(image, mtx, dist):\n # hlsOriginal = undistortAndHLS(originalImage, mtx, dist)\n \"\"\"\n Undistort the image with `mtx`, `dist` and convert it to HLS.\n \"\"\"\n undist = cv2.undistort(image, mtx, dist, None, mtx)\n hls = cv2.cvtColor(undist, cv2.COLOR_RGB2HLS)\n #extract HLS from the image\n H = hls[:,:,0] #channels\n L = hls[:,:,1]\n S = hls[:,:,2]\n return H, L, S", "_____no_output_____" ], [ "def thresh(yourChannel, threshMin = 0, threshMax = 255):\n # Apply a threshold to the S channel\n # thresh = (0, 160)\n binary_output = np.zeros_like(yourChannel)\n binary_output[(yourChannel >= threshMin) & (yourChannel <= threshMax)] = 1\n # Return a binary image of threshold result\n return binary_output", "_____no_output_____" ], [ "def applySobel(img, orient='x', sobel_kernel=3, thresh_min = 0, thresh_max = 255):\n # Apply the following steps to img\n # 1) Take the derivative in x or y given orient = 'x' or 'y'\n sobel = 0\n if orient == 'x':\n sobel = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=sobel_kernel)\n else:\n sobel = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=sobel_kernel)\n # 2) Take the absolute value of the derivative or gradient\n abs_sobel = np.absolute(sobel)\n # 3) Scale to 8-bit (0 - 255) then convert to type = np.uint8\n scaled_sobel = np.uint8(255*abs_sobel/np.max(abs_sobel))\n # 4) Create a mask of 1's where the scaled gradient magnitude is > thresh_min and < thresh_max\n binary_output = thresh(scaled_sobel,thresh_min,thresh_max) \n # 5) Return this mask as your binary_output image\n return binary_output", "_____no_output_____" ], [ "def applyActionToImages(images, action):\n return list(map(lambda img: (img[0], action(img[1])), images))\n\n# Method to plot images on cols / rows \ndef showImages(images, cols = 4, rows = 5, figsize=(15,10), cmap = None):\n imgLength = len(images)\n fig, axes = plt.subplots(rows, cols, figsize=figsize)\n indexes = range(cols * rows)\n for ax, index in zip(axes.flat, indexes):\n if index < imgLength:\n imagePathName, image = images[index]\n if cmap == None:\n ax.imshow(image)\n else:\n ax.imshow(image, cmap=cmap)\n ax.set_title(imagePathName)\n ax.axis('off')\n \n# Get camera matrix and distortion coefficient\nmtx, dist = getCameraCalibrationCoefficientsFromPickleFile('./pickled_data/camera_calibration.p')\n \n# Lambda action applied on all images\nuseSChannel = lambda img: undistortImageAndGetHLS(img, mtx, dist)[2]\n\n# Get Test images\ntestImages = getTestImages('./test_images/*.jpg')\n\n# Get all 'S' channels from all Test images\nresultSChannel = applyActionToImages(testImages, useSChannel)\n\n# Show our result\n#showImages(resultSChannel, 2, 3, (15, 13), cmap='gray')\n", "_____no_output_____" ], [ "# Apply Sobel in 'x' direction and plot images\napplySobelX = lambda img: applySobel(useSChannel(img), orient='x', thresh_min=10, thresh_max=160)\n\n# Get all 'S' 
channels from all Test images\nresultApplySobelX = applyActionToImages(testImages, applySobelX)\n\n# Show our result\n#showImages(resultApplySobelX, 2, 3, (15, 13), cmap='gray')", "_____no_output_____" ], [ "# Apply Sobel in 'y' direction and plot images\napplySobelY = lambda img: applySobel(useSChannel(img), orient='y', thresh_min=10, thresh_max=160)\n\n# Get all 'S' channels from all Test images\nresultApplySobelY = applyActionToImages(testImages, applySobelY)\n\n# Show our result\n#showImages(resultApplySobelY, 2, 3, (15, 13), cmap='gray')", "_____no_output_____" ], [ "def mag_thresh(img, sobel_kernel=3, thresh_min = 0, thresh_max = 255):\n # Apply the following steps to img\n # 1) Take the gradient in x and y separately\n sobelX = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=sobel_kernel)\n sobelY = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=sobel_kernel)\n # 2) Calculate the magnitude\n gradmag = np.sqrt(sobelX**2 + sobelY**2)\n # 3) Scale to 8-bit (0 - 255) and convert to type = np.uint8\n scale_factor = np.max(gradmag)/255 \n gradmag = (gradmag/scale_factor).astype(np.uint8) \n # 4) Create a binary mask where mag thresholds are met\n binary_output = thresh(gradmag,thresh_min, thresh_max)\n # 5) Return this mask as your binary_output image\n return binary_output", "_____no_output_____" ], [ "# Apply Magnitude in 'x' and 'y' directions in order to calcultate the magnitude of pixels and plot images\napplyMagnitude = lambda img: mag_thresh(useSChannel(img), thresh_min=5, thresh_max=160)\n\n# Apply the lamnda function to all test images\nresultMagnitudes = applyActionToImages(testImages, applyMagnitude)\n\n# Show our result\n#showImages(resultMagnitudes, 2, 3, (15, 13), cmap='gray')", "_____no_output_____" ], [ "# Define a function that applies Sobel x and y, \n# then computes the direction of the gradient\n# and applies a threshold.\ndef dir_threshold(img, sobel_kernel=3, thresh_min = 0, thresh_max = np.pi/2):\n # 1) Take the gradient in x and y separately and \n # Take the absolute value of the x and y gradients\n sobelX = np.absolute(cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=sobel_kernel))\n sobelY = np.absolute(cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=sobel_kernel))\n # 2) Use np.arctan2(abs_sobely, abs_sobelx) to calculate the direction of the gradient \n # sobelY / sobelX \n gradientDirection = np.arctan2(sobelY, sobelX)\n # 3) Create a binary mask where direction thresholds are met\n binary_output = thresh(gradientDirection, thresh_min, thresh_max)\n # 4) Return this mask as your binary_output image\n return binary_output", "_____no_output_____" ], [ "# Apply direction of the gradient\napplyDirection = lambda img: dir_threshold(useSChannel(img), thresh_min=0.79, thresh_max=1.20)\n\n# Apply the lambda function to all test images\nresultDirection = applyActionToImages(testImages, applyDirection)\n\n# Show our result\n#showImages(resultDirection, 2, 3, (15, 13), cmap='gray')", "_____no_output_____" ], [ "def combineGradients(img):\n sobelX = applySobelX(img)\n sobelY = applySobelY(img)\n magnitude = applyMagnitude(img)\n direction = applyDirection(img)\n combined = np.zeros_like(sobelX) \n combined[((sobelX == 1) & (sobelY == 1)) | ((magnitude == 1) & (direction == 1))] = 1\n return combined\n\nresultCombined = applyActionToImages(testImages, combineGradients)\n\n# Show our result\n#showImages(resultCombined, 2, 3, (15, 13), cmap='gray')", "_____no_output_____" ], [ "def show_compared_results():\n titles = ['Apply Sobel X', 'Apply Sobel Y', 'Apply Magnitude', 'Apply Direction', 'Combined']\n 
results = list(zip(resultApplySobelX, resultApplySobelY, resultMagnitudes, resultDirection, resultCombined))\n # only 5 images\n resultsAndTitle = list(map(lambda images: list(zip(titles, images)), results))[3:6]\n flattenResults = [item for sublist in resultsAndTitle for item in sublist]\n fig, axes = plt.subplots(ncols=5, nrows=len(resultsAndTitle), figsize=(25,10))\n for ax, imageTuple in zip(axes.flat, flattenResults):\n title, images = imageTuple\n imagePath, img = images\n ax.imshow(img, cmap='gray')\n ax.set_title(imagePath + '\\n' + title, fontsize=8)\n ax.axis('off')\n fig.subplots_adjust(hspace=0, wspace=0.05, bottom=0)", "_____no_output_____" ] ] ]
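One subtlety worth flagging in the cells above: `cv2.imread` returns BGR data, while `undistortImageAndGetHLS` converts with `cv2.COLOR_RGB2HLS`. Because the pipeline only uses the S channel, and S (like L) is invariant under channel permutation, the results stand; only the H channel is affected. A small self-contained check of that claim:

```python
import cv2
import numpy as np

# A 1x1 "image" holding a pure-red pixel in BGR order, as cv2.imread would store it.
bgr = np.zeros((1, 1, 3), dtype=np.uint8)
bgr[0, 0] = (0, 0, 255)  # B=0, G=0, R=255

hls_as_rgb = cv2.cvtColor(bgr, cv2.COLOR_RGB2HLS)  # mislabels the red pixel as blue
hls_as_bgr = cv2.cvtColor(bgr, cv2.COLOR_BGR2HLS)  # correct for imread-style data

# H (index 0) differs between the two; L and S (indices 1, 2) agree,
# which is why the S-channel pipeline above is unaffected.
print(hls_as_rgb[0, 0], hls_as_bgr[0, 0])
```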
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a5fd57b5fe1fbec7f07156ceb258e016657f773
4,389
ipynb
Jupyter Notebook
5. Evaluation.ipynb
rubicco/music-genre-classification
c097024388cf774e98fbab7c377e5ba12764c335
[ "MIT" ]
null
null
null
5. Evaluation.ipynb
rubicco/music-genre-classification
c097024388cf774e98fbab7c377e5ba12764c335
[ "MIT" ]
null
null
null
5. Evaluation.ipynb
rubicco/music-genre-classification
c097024388cf774e98fbab7c377e5ba12764c335
[ "MIT" ]
null
null
null
20.319444
100
0.531784
[ [ [ "from src.ModelTrainer import ModelTrainer, bert_settings, glove_settings, word2vec_settings\nfrom sklearn.metrics import classification_report\nimport os", "_____no_output_____" ], [ "label_mapping = {'Jazz': 0,\n 'Country': 1,\n 'Pop': 2,\n 'Rock': 3,\n 'Metal': 4,\n 'Hip-Hop': 5,\n 'Electronic': 6}\nlabels = [k for k,v in label_mapping.items()]\nlabels", "_____no_output_____" ] ], [ [ "# GloVe Evaluation", "_____no_output_____" ] ], [ [ "path_chkp = \"./output/GloveRun/20-03-10_200018_GloveLSTM/checkpoints/minLoss_epoch.chkp\"\nglove_settings[\"load_checkpoint\"] = path_chkp", "_____no_output_____" ], [ "model_trainer = ModelTrainer(glove_settings)", "_____no_output_____" ], [ "model_trainer.epoch_counter", "_____no_output_____" ], [ "%%time\ny_pred, y_true = model_trainer.predict()", "_____no_output_____" ], [ "print(classification_report(y_true, y_pred, target_names=labels))", "_____no_output_____" ] ], [ [ "# BERT Evaluation", "_____no_output_____" ] ], [ [ "path_chkp = \"./output/BertRun/20-03-10_200018_BertLSTM/checkpoints/minLoss_epoch.chkp\"\nbert_settings[\"load_checkpoint\"] = path_chkp", "_____no_output_____" ], [ "model_trainer = ModelTrainer(bert_settings)", "_____no_output_____" ], [ "model_trainer.epoch_counter", "_____no_output_____" ], [ "%%time\ny_pred, y_true = model_trainer.predict()", "_____no_output_____" ], [ "print(classification_report(y_true, y_pred, target_names=labels))", "_____no_output_____" ] ], [ [ "# Word2Vec Evaluation", "_____no_output_____" ] ], [ [ "# NOTE: assumed path for the Word2Vec run -- the original cell pointed at the BERT checkpoint,\n# which looks like a copy-paste slip; adjust to the actual Word2Vec checkpoint on disk.\npath_chkp = \"./output/Word2VecRun/20-03-10_200018_Word2VecLSTM/checkpoints/minLoss_epoch.chkp\"\nword2vec_settings[\"load_checkpoint\"] = path_chkp", "_____no_output_____" ], [ "model_trainer = ModelTrainer(word2vec_settings)", "_____no_output_____" ], [ "model_trainer.epoch_counter", "_____no_output_____" ], [ "%%time\ny_pred, y_true = model_trainer.predict()", "_____no_output_____" ], [ "print(classification_report(y_true, y_pred, target_names=labels))", "_____no_output_____" ] ] ]
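Beyond `classification_report`, a per-genre confusion matrix often makes the failure modes clearer. A minimal sketch, assuming `y_true`/`y_pred` from `model_trainer.predict()` and that the class indices follow `label_mapping` above:

```python
import pandas as pd
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y_true, y_pred)
# Rows are true genres, columns are predicted genres.
print(pd.DataFrame(cm, index=labels, columns=labels))
```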
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
4a5fd9a8705f3b740e09fe562500df665e0efeee
6,059
ipynb
Jupyter Notebook
numpy_basics/numpy.ipynb
aleksander-ivanov/pyda_homeworks
aea53b673f51cfd9752dafb4be46d844ff593bb0
[ "MIT" ]
null
null
null
numpy_basics/numpy.ipynb
aleksander-ivanov/pyda_homeworks
aea53b673f51cfd9752dafb4be46d844ff593bb0
[ "MIT" ]
null
null
null
numpy_basics/numpy.ipynb
aleksander-ivanov/pyda_homeworks
aea53b673f51cfd9752dafb4be46d844ff593bb0
[ "MIT" ]
null
null
null
21.409894
156
0.478132
[ [ [ "# Домашнее задание «Библиотека numpy. Вычислительные задачи»", "_____no_output_____" ], [ "### Задание 1\nСоздайте numpy array с элементами от числа N до 0 (например, для N = 10 это будет array([9, 8, 7, 6, 5, 4, 3, 2, 1, 0])).\n", "_____no_output_____" ] ], [ [ "import numpy as np", "_____no_output_____" ], [ "def build_np_array(n):\n return np.sort(np.arange(0, 10))[::-1]\n", "_____no_output_____" ], [ "build_np_array(10)", "_____no_output_____" ] ], [ [ "### Задание 2\nСоздайте диагональную матрицу с элементами от N до 0. Посчитайте сумму ее значений на диагонали.\n", "_____no_output_____" ] ], [ [ "def sum_np_diag(n,s):\n return np.trace(np.diag(np.arange(0, n, s)))", "_____no_output_____" ], [ "sum_np_diag(8, 2)", "_____no_output_____" ] ], [ [ "### Задание 3\nРешите систему уравнений:\n\n4x + 2y + z = 4\n\nx + 3y = 12\n\n5y + 4z = -3", "_____no_output_____" ] ], [ [ "from numpy import linalg\n\na = np.array( [ \n [4, 2, 1], \n [1, 3, 0], \n [0, 5, 4] ] )\n \nb = np.array( [4, 12, -3] )\n\nlinalg.solve(a,b)", "_____no_output_____" ] ], [ [ "### Задача 4 домашнего задания\nИмеется матрица покупок в интернет-магазине. Столбец А - ID пользователя. Остальные столбцы - количество покупок категорий товаров этим пользователем:", "_____no_output_____" ] ], [ [ "users_stats = np.array(\n [\n [2, 1, 0, 0, 0, 0],\n [1, 1, 2, 1, 0, 0],\n [2, 0, 1, 0, 0, 0],\n [1, 1, 2, 1, 0, 1],\n [0, 0, 1, 2, 0, 0],\n [0, 0, 0, 0, 0, 5],\n [1, 0, 0, 0, 0, 0],\n [0, 1, 1, 0, 0, 0],\n [0, 0, 0, 1, 1, 3],\n [1, 0, 0, 2, 1, 4]\n ], \n np.int32\n)", "_____no_output_____" ] ], [ [ "На сайт заходит очередной посетитель, о покупках которого известно следующее:", "_____no_output_____" ] ], [ [ "next_user_stats = np.array([0, 1, 2, 0, 0, 0])", "_____no_output_____" ] ], [ [ "Найдите самого похожего пользователя. Т. е. посчитайте косинусное сходство между этим пользователем и всеми пользователями из массива user_stats", "_____no_output_____" ] ], [ [ "def cosine( a, b ):\n \"\"\"\n Подсчет косинуса угла между векторами a, b по их координатам\n \"\"\"\n # длины векторов\n aLength = np.linalg.norm( a )\n bLength = np.linalg.norm( b )\n \n return np.dot( a, b ) / ( aLength * bLength )", "_____no_output_____" ], [ "def get_user_ids_with_most_cosine_similarity(stats, new_stat):\n\n # make dict{user_id: cosine}\n d = {i+1 : cosine(new_stat, stat) for i,stat in enumerate(stats)} # index == user_id\n\n # get smallest cosine value\n minval = min(d.values())\n\n # get user_ids with smallest cosine value\n res = list(filter(lambda x: d[x]==minval, d))\n\n return res", "_____no_output_____" ], [ "get_user_ids_with_most_cosine_similarity(users_stats, next_user_stats)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
4a5fdba09b40988e919dcee6b6f2843cc1aaa68a
398,744
ipynb
Jupyter Notebook
Movie success prediction.ipynb
Harika-hp/MovieSuccessPredictor
2f03a2d8e225967a45e249b400534790d2f37843
[ "Apache-2.0" ]
1
2021-05-20T14:57:47.000Z
2021-05-20T14:57:47.000Z
Movie success prediction.ipynb
Harika-hp/MovieSuccessPredictor
2f03a2d8e225967a45e249b400534790d2f37843
[ "Apache-2.0" ]
null
null
null
Movie success prediction.ipynb
Harika-hp/MovieSuccessPredictor
2f03a2d8e225967a45e249b400534790d2f37843
[ "Apache-2.0" ]
null
null
null
185.721472
177,954
0.854139
[ [ [ "import numpy as np\nimport re\nimport pandas as pd\nfrom sklearn.preprocessing import StandardScaler, MinMaxScaler\nfrom sklearn.decomposition import PCA\nfrom sklearn.cluster import KMeans, DBSCAN\nfrom sklearn.neighbors import NearestNeighbors\nfrom requests import get\nimport unicodedata\nfrom bs4 import BeautifulSoup\nimport seaborn as sns\nfrom mpl_toolkits.mplot3d import Axes3D\nimport matplotlib.pyplot as plt\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.ensemble import GradientBoostingClassifier\nimport xgboost as xgb\nfrom sklearn.metrics import accuracy_score\nimport sys\nreload(sys)\nsys.setdefaultencoding('utf-8')\n%matplotlib inline", "_____no_output_____" ] ], [ [ "# Reading in the data", "_____no_output_____" ] ], [ [ "df = pd.read_csv('movie_metadata.csv')", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "df.shape", "_____no_output_____" ], [ "def classify(col):\n if col['imdb_score'] >= 0 and col['imdb_score'] < 4:\n return 0\n elif col['imdb_score'] >= 4 and col['imdb_score'] < 6:\n return 1\n elif col['imdb_score'] >= 6 and col['imdb_score'] < 7:\n return 2\n elif col['imdb_score'] >= 7 and col['imdb_score'] < 8:\n return 3\n elif col['imdb_score'] >= 8 and col['imdb_score'] <= 10:\n return 4", "_____no_output_____" ], [ "df['Success'] = df.apply(classify, axis=1)", "_____no_output_____" ], [ "df.describe()", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ] ], [ [ "# Filling NAN's with median.", "_____no_output_____" ] ], [ [ "def fill_nan(col):\n df[col] = df[col].fillna(df[col].median())\n\ncols = list(df.columns)\nfill_nan(cols)", "_____no_output_____" ] ], [ [ "# Cleaning", "_____no_output_____" ] ], [ [ "def clean_backward_title(col):\n string = col.rstrip()[:-2]\n return unicodedata.normalize('NFD', unicode(string, 'utf-8')).encode('ascii', 'ignore')", "_____no_output_____" ], [ "df['movie_title'] = df['movie_title'].astype(str)", "_____no_output_____" ], [ "df['movie_title'] = df['movie_title'].apply(clean_backward_title)", "_____no_output_____" ], [ "df['movie_title']", "_____no_output_____" ] ], [ [ "# IMDB Revenue scraping script. Redundant right now.. but can be useful in other projects", "_____no_output_____" ] ], [ [ "# def revenue_parse(url, revenue_per_movie):\n# url = url + 'business'\n# response = get(url)\n# html_soup = BeautifulSoup(response.text, 'html.parser')\n# movie_containers = html_soup.find('div', {\"id\": \"tn15content\"})\n# text_spend = movie_containers.text.split('\\n')\n# if 'Gross' in text_spend:\n# gross_index = text_spend.index('Gross')\n# rev = [int(i[1:].replace(',', '')) if i[1:].replace(',', '').isdigit() else -1 for i in re.findall(r'[$]\\S*', text_spend[gross_index+1])]\n# if len(rev) == 0:\n# revenue_per_movie.append(-1)\n# else:\n# revenue_per_movie.append(max(rev))\n# else:\n# revenue_per_movie.append(-1)\n\n\n# revenue_per_movie = []\n\n# for i in df['url']:\n# revenue_parse(i, revenue_per_movie)", "_____no_output_____" ] ], [ [ "# Describing the data to find the Missing values", "_____no_output_____" ] ], [ [ "df.describe()", "_____no_output_____" ] ], [ [ "# Normalizing or Standardizing the data.. 
change the commenting as per your needs", "_____no_output_____" ] ], [ [ "col = list(df.describe().columns)\ncol.remove('Success')", "_____no_output_____" ], [ "sc = StandardScaler()\n# sc = MinMaxScaler()\ntemp = sc.fit_transform(df[col])\ndf[col] = temp\ndf.head()", "_____no_output_____" ] ], [ [ "# PCA", "_____no_output_____" ] ], [ [ "pca = PCA(n_components=3)\ndf_pca = pca.fit_transform(df[col])", "_____no_output_____" ], [ "df_pca", "_____no_output_____" ], [ "pca.explained_variance_ratio_", "_____no_output_____" ], [ "df['pca_one'] = df_pca[:, 0]\ndf['pca_two'] = df_pca[:, 1]\ndf['pca_three'] = df_pca[:, 2]", "_____no_output_____" ], [ "plt.figure(figsize=(12,12))\nplt.scatter(df['pca_one'][:50], df['pca_two'][:50], color=['orange', 'cyan', 'brown'], cmap='viridis')\n\nfor m, p1, p2 in zip(df['movie_title'][:50], df['pca_one'][:50], df['pca_two'][:50]):\n plt.text(p1, p2, s=m, color=np.random.rand(3)*0.7)", "_____no_output_____" ] ], [ [ "# KMeans", "_____no_output_____" ] ], [ [ "km = KMeans(n_clusters = 5)", "_____no_output_____" ], [ "#P_fit = km.fit(df[['gross','imdb_score','num_critic_for_reviews','director_facebook_likes','actor_1_facebook_likes','movie_facebook_likes','actor_3_facebook_likes','actor_2_facebook_likes']])\nP_fit = km.fit(df[['gross','imdb_score']])\nP_fit.labels_\n# colormap = {0:'red',1:'green',2:'blue'}\n# lc = [colormap[c] for c in colormap]\n# plt.scatter(df['pca_one'],df['pca_two'],c = lc)\n", "_____no_output_____" ], [ "df['cluster'] = P_fit.labels_", "_____no_output_____" ], [ "np.unique(P_fit.labels_)", "_____no_output_____" ], [ "for i in np.unique(P_fit.labels_):\n temp = df[df['cluster'] == i]\n plt.scatter(temp['gross'], temp['imdb_score'], color=np.random.rand(3)*0.7)", "_____no_output_____" ] ], [ [ "# DBSCAN", "_____no_output_____" ] ], [ [ "cols3 = ['director_facebook_likes','imdb_score']", "_____no_output_____" ], [ "#The min_pts are taken as >= D+1 and the eps value is estimated from the elbow in k-distance graph\ndb = DBSCAN(eps = .5, min_samples=3).fit(df[cols3])\nlen(db.core_sample_indices_)\ndf['cluster'] = db.labels_\ncolors = [plt.cm.Spectral(each) for each in np.linspace(0, 1, len(np.unique(db.labels_)))]\nplt.figure(figsize= (12,12))\nfor i in np.unique(db.labels_):\n temp = df[df['cluster'] == i]\n plt.scatter(temp['director_facebook_likes'], temp['imdb_score'], color = np.random.rand(3)*0.7)", "_____no_output_____" ] ], [ [ "# Random Forest", "_____no_output_____" ] ], [ [ "features = col", "_____no_output_____" ], [ "features.remove('imdb_score')", "_____no_output_____" ], [ "features", "_____no_output_____" ], [ "X_train, X_test, y_train, y_test = train_test_split(df[features], df['Success'], test_size=0.2)", "_____no_output_____" ], [ "# rf = RandomForestClassifier(random_state=1, n_estimators=250, min_samples_split=8, min_samples_leaf=4)\n\n# rf = GradientBoostingClassifier(random_state=0, n_estimators=250, min_samples_split=8, \n# min_samples_leaf=4, learning_rate=0.1)\n\nrf = xgb.XGBClassifier(n_estimators=250)\n\nrf.fit(X_train, y_train)\n\npredictions = rf.predict(X_test)", "_____no_output_____" ], [ "predictions = predictions.astype(int)", "_____no_output_____" ], [ "np.unique(predictions)", "_____no_output_____" ], [ "accuracy_score(y_test, predictions)", "_____no_output_____" ], [ "features.insert(0, 'imdb_score')", "_____no_output_____" ], [ "sns.heatmap(df[features].corr())", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a5fec56d61b8b95e288a2b7e18f521b263c9c97
700,873
ipynb
Jupyter Notebook
pretrained-model/vocoder/hifigan/export/universal-hifigan-512.ipynb
ishine/malaya-speech
fd34afc7107af1656dff4b3201fa51dda54fde18
[ "MIT" ]
null
null
null
pretrained-model/vocoder/hifigan/export/universal-hifigan-512.ipynb
ishine/malaya-speech
fd34afc7107af1656dff4b3201fa51dda54fde18
[ "MIT" ]
null
null
null
pretrained-model/vocoder/hifigan/export/universal-hifigan-512.ipynb
ishine/malaya-speech
fd34afc7107af1656dff4b3201fa51dda54fde18
[ "MIT" ]
null
null
null
1,033.735988
475,288
0.865723
[ [ [ "import os\n\nos.environ['CUDA_VISIBLE_DEVICES'] = ''", "_____no_output_____" ], [ "import tensorflow as tf\nimport numpy as np\nfrom glob import glob\nfrom itertools import cycle\n\nmels = glob('universal-mel/*.npy')\nfile_cycle = cycle(mels)\nf = next(file_cycle)", "_____no_output_____" ], [ "path = 'hifigan-512-combined'\nckpt_path = tf.train.latest_checkpoint(path)\nckpt_path", "_____no_output_____" ], [ "def generate(batch_max_steps = 8192, hop_size = 256):\n while True:\n f = next(file_cycle)\n mel = np.load(f)\n audio = np.load(f.replace('mels', 'audios'))\n\n yield {'mel': mel, 'audio': audio}", "_____no_output_____" ], [ "dataset = tf.data.Dataset.from_generator(\n generate,\n {'mel': tf.float32, 'audio': tf.float32},\n output_shapes = {\n 'mel': tf.TensorShape([None, 80]),\n 'audio': tf.TensorShape([None]),\n },\n)\nfeatures = dataset.make_one_shot_iterator().get_next()\nfeatures", "WARNING:tensorflow:From <ipython-input-5-64dc61424ff3>:9: DatasetV1.make_one_shot_iterator (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse `for ... in dataset:` to iterate over a dataset. If using `tf.estimator`, return the `Dataset` object directly from your input function. As a last resort, you can use `tf.compat.v1.data.make_one_shot_iterator(dataset)`.\n" ], [ "import malaya_speech\nimport malaya_speech.train\nfrom malaya_speech.train.model import hifigan\nfrom malaya_speech.train.model import stft\nimport malaya_speech.config", "_____no_output_____" ], [ "hifigan_config = malaya_speech.config.hifigan_config_v2\nhifigan_config['hifigan_generator_params']['filters'] = 512\ngenerator = hifigan.MultiGenerator(\n hifigan.GeneratorConfig(**hifigan_config['hifigan_generator_params']),\n name = 'hifigan_generator',\n)", "_____no_output_____" ], [ "y_hat = generator([features['mel']], training = False)\ny_hat", "WARNING:tensorflow:From /home/husein/.local/lib/python3.6/site-packages/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.\nInstructions for updating:\nIf using Keras pass *_constraint arguments to layers.\n" ], [ "x = tf.placeholder(tf.float32, [None, None, 80])\ny_hat_ = generator(x, training = False)\ny_hat_", "_____no_output_____" ], [ "y_hat_ = tf.identity(y_hat_, name = 'logits')", "_____no_output_____" ], [ "sess = tf.InteractiveSession()\nsess.run(tf.global_variables_initializer())", "_____no_output_____" ], [ "var_list = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)\nsaver = tf.train.Saver(var_list = var_list)\nsaver.restore(sess, ckpt_path)", "INFO:tensorflow:Restoring parameters from hifigan-512-combined/model.ckpt-690000\n" ], [ "import IPython.display as ipd", "_____no_output_____" ], [ "# %%time\n# f, y_hat_ = sess.run([features, y_hat])", "_____no_output_____" ], [ "# ipd.Audio(f['audio'], rate = 22050)", "_____no_output_____" ], [ "# ipd.Audio(y_hat_[0,:,0], rate = 22050)", "_____no_output_____" ], [ "y, _ = malaya_speech.load('shafiqah-idayu.wav', sr = 22050)\nm = malaya_speech.featurization.universal_mel(y)", "_____no_output_____" ], [ "%%time\n\ny_ = sess.run(y_hat_, feed_dict = {x: [m]})\nipd.Audio(y_[0,:,0], rate = 22050)", "CPU times: user 6.96 s, sys: 2.49 s, total: 9.44 s\nWall time: 572 ms\n" ], [ "%%time\n\ny_ = sess.run(y_hat_, feed_dict = {x: [np.load(mels[-1000])]})\nipd.Audio(y_[0,:,0], rate = 22050)", "CPU 
times: user 12 s, sys: 2.14 s, total: 14.1 s\nWall time: 799 ms\n" ], [ "saver = tf.train.Saver()\nsaver.save(sess, 'universal-hifigan-512-output/model.ckpt')", "_____no_output_____" ], [ "strings = ','.join(\n [\n n.name\n for n in tf.get_default_graph().as_graph_def().node\n if ('Variable' in n.op\n or 'gather' in n.op.lower()\n or 'Placeholder' in n.name\n or 'logits' in n.name)\n and 'adam' not in n.name\n and 'global_step' not in n.name\n and 'Assign' not in n.name\n and 'ReadVariableOp' not in n.name\n and 'Gather' not in n.name\n ]\n)\nstrings.split(',')", "_____no_output_____" ], [ "def freeze_graph(model_dir, output_node_names):\n\n if not tf.gfile.Exists(model_dir):\n raise AssertionError(\n \"Export directory doesn't exists. Please specify an export \"\n 'directory: %s' % model_dir\n )\n\n checkpoint = tf.train.get_checkpoint_state(model_dir)\n input_checkpoint = checkpoint.model_checkpoint_path\n\n absolute_model_dir = '/'.join(input_checkpoint.split('/')[:-1])\n output_graph = absolute_model_dir + '/frozen_model.pb'\n clear_devices = True\n with tf.Session(graph = tf.Graph()) as sess:\n saver = tf.train.import_meta_graph(\n input_checkpoint + '.meta', clear_devices = clear_devices\n )\n saver.restore(sess, input_checkpoint)\n output_graph_def = tf.graph_util.convert_variables_to_constants(\n sess,\n tf.get_default_graph().as_graph_def(),\n output_node_names.split(','),\n )\n with tf.gfile.GFile(output_graph, 'wb') as f:\n f.write(output_graph_def.SerializeToString())\n print('%d ops in the final graph.' % len(output_graph_def.node))", "_____no_output_____" ], [ "freeze_graph('universal-hifigan-512-output', strings)", "INFO:tensorflow:Restoring parameters from universal-hifigan-512-output/model.ckpt\nWARNING:tensorflow:From <ipython-input-22-9a7215a4e58a>:23: convert_variables_to_constants (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse `tf.compat.v1.graph_util.convert_variables_to_constants`\nWARNING:tensorflow:From /home/husein/.local/lib/python3.6/site-packages/tensorflow_core/python/framework/graph_util_impl.py:277: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse `tf.compat.v1.graph_util.extract_sub_graph`\nINFO:tensorflow:Froze 82 variables.\nINFO:tensorflow:Converted 82 variables to const ops.\n1446 ops in the final graph.\n" ], [ "from tensorflow.tools.graph_transforms import TransformGraph", "_____no_output_____" ], [ "transforms = ['add_default_attributes',\n 'remove_nodes(op=Identity, op=CheckNumerics)',\n 'fold_batch_norms',\n 'fold_old_batch_norms',\n 'quantize_weights(fallback_min=-1024, fallback_max=1024)',\n 'strip_unused_nodes',\n 'sort_by_execution_order']", "_____no_output_____" ], [ "pb = 'universal-hifigan-512-output/frozen_model.pb'", "_____no_output_____" ], [ "input_graph_def = tf.GraphDef()\nwith tf.gfile.FastGFile(pb, 'rb') as f:\n input_graph_def.ParseFromString(f.read())\n\ntransformed_graph_def = TransformGraph(input_graph_def, \n ['Placeholder'],\n ['logits'], transforms)\n \nwith tf.gfile.GFile(f'{pb}.quantized', 'wb') as f:\n f.write(transformed_graph_def.SerializeToString())", "WARNING:tensorflow:From <ipython-input-27-cb3e89a69de3>:2: FastGFile.__init__ (from tensorflow.python.platform.gfile) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse tf.gfile.GFile.\n" ], [ "b2_application_key_id = 
os.environ['b2_application_key_id']\nb2_application_key = os.environ['b2_application_key']", "_____no_output_____" ], [ "from b2sdk.v1 import *\ninfo = InMemoryAccountInfo()\nb2_api = B2Api(info)\napplication_key_id = b2_application_key_id\napplication_key = b2_application_key\nb2_api.authorize_account(\"production\", application_key_id, application_key)\nfile_info = {'how': 'good-file'}\nb2_bucket = b2_api.get_bucket_by_name('malaya-speech-model')", "_____no_output_____" ], [ "file = 'universal-hifigan-512-output/frozen_model.pb'\noutPutname = 'vocoder-hifigan/universal-512/model.pb'\nb2_bucket.upload_local_file(\n local_file=file,\n file_name=outPutname,\n file_infos=file_info,\n)\n", "_____no_output_____" ], [ "file = 'universal-hifigan-512-output/frozen_model.pb.quantized'\noutPutname = 'vocoder-hifigan/universal-512-quantized/model.pb'\nb2_bucket.upload_local_file(\n local_file=file,\n file_name=outPutname,\n file_infos=file_info,\n)\n", "_____no_output_____" ], [ "!tar -zcvf universal-hifigan-512-output.tar.gz universal-hifigan-512-output", "universal-hifigan-512-output/\nuniversal-hifigan-512-output/model.ckpt.index\nuniversal-hifigan-512-output/model.ckpt.data-00000-of-00001\nuniversal-hifigan-512-output/frozen_model.pb.quantized\nuniversal-hifigan-512-output/checkpoint\nuniversal-hifigan-512-output/model.ckpt.meta\nuniversal-hifigan-512-output/frozen_model.pb\n" ], [ "file = 'universal-hifigan-512-output.tar.gz'\noutPutname = 'pretrained/universal-hifigan-512-output.tar.gz'\nb2_bucket.upload_local_file(\n local_file=file,\n file_name=outPutname,\n file_infos=file_info,\n)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a5fed8cd0eb9e4aa220b59b779b0e7a90404e4a
47,251
ipynb
Jupyter Notebook
Notebooks/SigmaRuleImporter.ipynb
CrisRomeo/Azure-Sentinel
a8c7f8cf74bade06d92f5cc89132e25ef60583f6
[ "MIT" ]
4
2020-02-14T10:29:46.000Z
2021-03-12T02:34:27.000Z
Notebooks/SigmaRuleImporter.ipynb
CrisRomeo/Azure-Sentinel
a8c7f8cf74bade06d92f5cc89132e25ef60583f6
[ "MIT" ]
1
2022-01-22T10:38:31.000Z
2022-01-22T10:38:31.000Z
Notebooks/SigmaRuleImporter.ipynb
CrisRomeo/Azure-Sentinel
a8c7f8cf74bade06d92f5cc89132e25ef60583f6
[ "MIT" ]
3
2020-01-21T11:58:47.000Z
2022-02-24T06:46:55.000Z
49.271116
141
0.652536
[ [ [ "# Import and convert Neo23x0 Sigma scripts\[email protected]\n\nThis notebook is a is a quick and dirty Sigma to Log Analytics converter.\nIt uses the modules from sigmac package to do the conversion.\n\nOnly a subset of the Sigma rules are convertible currently. Failure to convert\ncould be for one or more of these reasons:\n- known limitations of the converter\n- mismatch between the syntax expressible in Sigma and KQL\n- data sources referenced in Sigma rules do not yet exist in Azure Sentinel\n\nThe sigmac tool is downloadable as a package from PyPi but since we are downloading\nthe rules from the repo, we also copy and import the package from the repo source.\n\nAfter conversion you can use an interactive browser to step through the rules and\nview (and copy/save) the KQL equivalents. You can also take the conversion results and \nuse them in another way (e.g.bulk save to files).\n\nThe notebook is all somewhat experimental and offered as-is without any guarantees", "_____no_output_____" ], [ "## Download and unzip the Sigma repo", "_____no_output_____" ] ], [ [ "import requests\n# Download the repo ZIP\nsigma_git_url = 'https://github.com/Neo23x0/sigma/archive/master.zip'\nr = requests.get(sigma_git_url)", "_____no_output_____" ], [ "from ipywidgets import widgets, Layout\nimport os\nfrom pathlib import Path\ndef_path = Path.joinpath(Path(os.getcwd()), \"sigma\")\npath_wgt = widgets.Text(value=str(def_path), \n description='Path to extract to zipped repo files: ', \n layout=Layout(width='50%'),\n style={'description_width': 'initial'})\npath_wgt", "_____no_output_____" ], [ "import zipfile\nimport io\nrepo_zip = io.BytesIO(r.content)\n\nzip_archive = zipfile.ZipFile(repo_zip, mode='r')\nzip_archive.extractall(path=path_wgt.value)\nRULES_REL_PATH = 'sigma-master/rules'\nrules_root = Path(path_wgt.value) / RULES_REL_PATH", "_____no_output_____" ] ], [ [ "### Check that we have the files\nYou should see a folder with folders such as application, apt, windows...", "_____no_output_____" ] ], [ [ "%ls {rules_root}", " Volume in drive E is DATADRIVE1\n Volume Serial Number is 58A4-793E\n\n Directory of e:\\src\\notebooks\\experimental\\sigma\\sigma-master\\rules\n\n05/29/2019 10:17 <DIR> .\n05/29/2019 10:17 <DIR> ..\n05/29/2019 10:17 <DIR> application\n05/29/2019 10:17 <DIR> apt\n05/29/2019 10:17 <DIR> linux\n05/29/2019 10:17 <DIR> network\n05/29/2019 10:17 <DIR> proxy\n05/29/2019 10:17 <DIR> web\n05/29/2019 10:17 <DIR> windows\n 0 File(s) 0 bytes\n 9 Dir(s) 682,085,724,160 bytes free\n" ] ], [ [ "## Convert Sigma Files to Log Analytics Kql queries", "_____no_output_____" ] ], [ [ "# Read the Sigma YAML file paths into a dict and make a\n# a copy for the target Kql queries\nfrom pathlib import Path\nfrom collections import defaultdict\nimport copy\n\ndef get_rule_files(rules_root):\n file_dict = defaultdict(dict)\n for file in Path(rules_root).resolve().rglob(\"*.yml\"):\n rel_path = Path(file).relative_to(rules_root)\n path_key = '.'.join(rel_path.parent.parts)\n file_dict[path_key][rel_path.name] = file\n return file_dict\n \nsigma_dict = get_rule_files(rules_root)\nkql_dict = copy.deepcopy(sigma_dict)\n", "_____no_output_____" ], [ "# Add downloaded sigmac tool to sys.path and import Sigmac functions\nimport os\nimport sys\nmodule_path = os.path.abspath(os.path.join('sigma/sigma-master/tools'))\nif module_path not in sys.path:\n sys.path.append(module_path)\nfrom sigma.parser.collection import SigmaCollectionParser\nfrom sigma.parser.exceptions import SigmaCollectionParseError, 
SigmaParseError\nfrom sigma.configuration import SigmaConfiguration, SigmaConfigurationChain\nfrom sigma.config.exceptions import SigmaConfigParseError, SigmaRuleFilterParseException\nfrom sigma.filter import SigmaRuleFilter\nimport sigma.backends.discovery as backends\nfrom sigma.backends.base import BackendOptions\nfrom sigma.backends.exceptions import BackendError, NotSupportedError, PartialMatchError, FullMatchError", "_____no_output_____" ], [ "# Sigma to Log Analytics Conversion\nimport yaml\n_LA_MAPPINGS = '''\nfieldmappings:\n Image: NewProcessName\n ParentImage: ProcessName\n ParentCommandLine: NO_MAPPING\n'''\n\nNOT_CONVERTIBLE = 'Not convertible'\n\ndef sigma_to_la(file_path):\n with open(file_path, 'r') as input_file:\n try:\n sigmaconfigs = SigmaConfigurationChain()\n sigmaconfig = SigmaConfiguration(_LA_MAPPINGS)\n sigmaconfigs.append(sigmaconfig)\n backend_options = BackendOptions(None, None)\n backend = backends.getBackend('ala')(sigmaconfigs, backend_options)\n parser = SigmaCollectionParser(input_file, sigmaconfigs, None)\n results = parser.generate(backend)\n kql_result = ''\n for result in results:\n kql_result += result\n except (NotImplementedError, NotSupportedError):\n kql_result = NOT_CONVERTIBLE\n input_file.seek(0,0)\n sigma_txt = input_file.read()\n if not kql_result == NOT_CONVERTIBLE:\n try:\n kql_header = \"\\n\".join(get_sigma_properties(sigma_txt))\n kql_result = kql_header + \"\\n\" + kql_result\n except Exception as e:\n print(\"exception reading sigma YAML: \", e)\n print(sigma_txt, kql_result, sep='\\n')\n return sigma_txt, kql_result\n\nsigma_keys = ['title', 'description', 'tags', 'status', \n 'author', 'logsource', 'falsepositives', 'level']\n\ndef get_sigma_properties(sigma_rule):\n sigma_docs = yaml.load_all(sigma_rule, Loader=yaml.SafeLoader)\n sigma_rule_dict = next(sigma_docs)\n for prop in sigma_keys:\n yield get_property(prop, sigma_rule_dict)\n\ndef get_property(name, sigma_rule_dict):\n sig_prop = sigma_rule_dict.get(name, 'na')\n if isinstance(sig_prop, dict):\n sig_prop = ' '.join([f\"{k}: {v}\" for k, v in sig_prop.items()])\n return f\"// {name}: {sig_prop}\"\n \n \n_KQL_FILTERS = {\n 'date': ' | where TimeGenerated >= datetime({start}) and TimeGenerated <= datetime({end}) ',\n 'host': ' | where Computer has {host_name} '\n}\n\ndef insert_at(source, insert, find_sub):\n pos = source.find(find_sub)\n if pos != -1:\n return source[:pos] + insert + source[pos:]\n else:\n return source + insert\n \ndef add_filter_clauses(source, **kwargs):\n if \"{\" in source or \"}\" in source:\n source = (\"// Warning: embedded braces in source. 
Please edit if necessary.\\n\"\n + source)\n source = source.replace('{', '{{').replace('}', '}}')\n if kwargs.get('host', False):\n source = insert_at(source, _KQL_FILTERS['host'], '|')\n if kwargs.get('date', False):\n source = insert_at(source, _KQL_FILTERS['date'], '|')\n return source\n\n\n# Run the conversion\nconv_counter = {}\nfor categ, sources in sigma_dict.items():\n src_converted = 0\n for file_name, file_path in sources.items():\n sigma, kql = sigma_to_la(file_path)\n kql_dict[categ][file_name] = (sigma, kql)\n if not kql == NOT_CONVERTIBLE:\n src_converted += 1\n conv_counter[categ] = (len(sources), src_converted)\n \nprint(\"Conversion statistics\")\nprint(\"-\" * len(\"Conversion statistics\"))\nprint('\\n'.join([f'{categ}: rules: {counter[0]}, converted: {counter[1]}'\n for categ, counter in conv_counter.items()]))", "Conversion statistics\n---------------------\napplication: rules: 5, converted: 0\napt: rules: 29, converted: 21\nlinux: rules: 14, converted: 0\nlinux.auditd: rules: 2, converted: 0\nlinux.modsecurity: rules: 1, converted: 0\nnetwork: rules: 6, converted: 0\nproxy: rules: 18, converted: 0\nweb: rules: 5, converted: 0\nwindows.builtin: rules: 57, converted: 37\nwindows.malware: rules: 5, converted: 1\nwindows.other: rules: 3, converted: 0\nwindows.powershell: rules: 12, converted: 0\nwindows.process_creation: rules: 94, converted: 92\nwindows.sysmon: rules: 46, converted: 41\n" ] ], [ [ "## Display the results in an interactive browser", "_____no_output_____" ] ], [ [ "from ipywidgets import widgets, Layout\n\n# Browser Functions\ndef on_cat_value_change(change):\n queries_w.options = kql_dict[change['new']].keys()\n queries_w.value = queries_w.options[0]\n\ndef on_query_value_change(change):\n if view_qry_check.value:\n qry_text = kql_dict[sub_cats_w.value][queries_w.value][1]\n if \"Not convertible\" not in qry_text:\n qry_text = add_filter_clauses(qry_text,\n date=add_date_filter_check.value,\n host=add_host_filter_check.value)\n query_text_w.value = qry_text.replace('|', '\\n|')\n orig_text_w.value = kql_dict[sub_cats_w.value][queries_w.value][0]\n\ndef on_view_query_value_change(change):\n vis = 'visible' if view_qry_check.value else 'hidden'\n on_query_value_change(None)\n query_text_w.layout.visibility = vis\n orig_text_w.layout.visibility = vis\n\n# Function defs for ExecuteQuery cell below\ndef click_exec_hqry(b):\n global qry_results\n query_name = queries_w.value\n query_cat = sub_cats_w.value\n query_text = query_text_w.value\n query_text = query_text.format(**qry_wgt.query_params)\n\n disp_results(query_text)\n \ndef disp_results(query_text):\n out_wgt.clear_output()\n with out_wgt:\n print(\"Running query...\", end=' ')\n qry_results = execute_kql_query(query_text)\n print(f'done. 
{len(qry_results)} rows returned.')\n        display(qry_results)\n    \nexec_hqry_button = widgets.Button(description=\"Execute query..\")\nout_wgt = widgets.Output() #layout=Layout(width='100%', height='200px', visibility='visible'))\nexec_hqry_button.on_click(click_exec_hqry)\n\n# Browser widget setup\ncategories = list(sorted(kql_dict.keys()))\nsub_cats_w = widgets.Select(options=categories, \n                            description='Category : ',\n                            layout=Layout(width='30%', height='120px'),\n                            style = {'description_width': 'initial'})\n\nqueries_w = widgets.Select(options = kql_dict[categories[0]].keys(),\n                           description='Query : ',\n                           layout=Layout(width='30%', height='120px'),\n                           style = {'description_width': 'initial'})\n\nquery_text_w = widgets.Textarea(\n    value='',\n    description='Kql Query:',\n    layout=Layout(width='100%', height='300px', visibility='hidden'),\n    disabled=False)\norig_text_w = widgets.Textarea(\n    value='',\n    description='Sigma Query:',\n    layout=Layout(width='100%', height='250px', visibility='hidden'),\n    disabled=False)\n\nquery_text_w.layout.visibility = 'hidden'\norig_text_w.layout.visibility = 'hidden'\nsub_cats_w.observe(on_cat_value_change, names='value')\nqueries_w.observe(on_query_value_change, names='value')\n\nview_qry_check = widgets.Checkbox(description=\"View query\", value=True)\nadd_date_filter_check = widgets.Checkbox(description=\"Add date filter\", value=False)\nadd_host_filter_check = widgets.Checkbox(description=\"Add host filter\", value=False)\n\nview_qry_check.observe(on_view_query_value_change, names='value')\nadd_date_filter_check.observe(on_view_query_value_change, names='value')\nadd_host_filter_check.observe(on_view_query_value_change, names='value')\n# view_qry_button.on_click(click_exec_hqry)\n# display(exec_hqry_button);\n\nvbox_opts = widgets.VBox([view_qry_check, add_date_filter_check, add_host_filter_check])\nhbox = widgets.HBox([sub_cats_w, queries_w, vbox_opts])\nvbox = widgets.VBox([hbox, orig_text_w, query_text_w])\non_view_query_value_change(None)\ndisplay(vbox)", "_____no_output_____" ] ], [ [ "## Click the `Execute query` button to run the currently displayed query\n**Notes:**\n- To run the queries, first authenticate to Log Analytics (scroll down and execute remaining cells in the notebook)\n- If you added a date filter to the query, set the date range below", "_____no_output_____" ] ], [ [ "from msticpy.nbtools.nbwidgets import QueryTime\nqry_wgt = QueryTime(units='days', before=5, after=0, max_before=30, max_after=10)\nvbox = widgets.VBox([exec_hqry_button, out_wgt])\ndisplay(vbox)", "_____no_output_____" ] ], [ [ "### Set Query Time bounds", "_____no_output_____" ] ], [ [ "qry_wgt.display()", "_____no_output_____" ] ], [ [ "### Authenticate to Azure Sentinel", "_____no_output_____" ] ], [ [ "def clean_kql_comments(query_string):\n    \"\"\"Strip // comments and newlines from a KQL query string.\"\"\"\n    import re\n    return re.sub(r'(//[^\\n]+)', '', query_string, flags=re.MULTILINE).replace('\\n', '').strip()\n\ndef execute_kql_query(query_string):\n    if not query_string or len(query_string.strip()) == 0:\n        print('No query supplied')\n        return None\n    src_query = clean_kql_comments(query_string)\n    result = get_ipython().run_cell_magic('kql', line='', cell=src_query)\n    \n    if result is not None and result.completion_query_info['StatusCode'] == 0:\n        results_frame = result.to_dataframe()\n        return results_frame\n    return []", "_____no_output_____" ], [ "import os\nfrom msticpy.nbtools.wsconfig import WorkspaceConfig\nfrom msticpy.nbtools import kql, GetEnvironmentKey\n\nws_config_file = 'config.json'\ntry:\n    ws_config = 
WorkspaceConfig(ws_config_file)\n print('Found config file')\n for cf_item in ['tenant_id', 'subscription_id', 'resource_group', 'workspace_id', 'workspace_name']:\n print(cf_item, ws_config[cf_item])\nexcept:\n ws_config = None\n\nws_id = GetEnvironmentKey(env_var='WORKSPACE_ID',\n prompt='Log Analytics Workspace Id:')\nif ws_config:\n ws_id.value = ws_config['workspace_id']\nws_id.display()", "_____no_output_____" ], [ "try:\n WORKSPACE_ID = select_ws.value\nexcept NameError:\n try:\n WORKSPACE_ID = ws_id.value\n except NameError:\n WORKSPACE_ID = None\n \nif not WORKSPACE_ID:\n raise ValueError('No workspace selected.')\n\nkql.load_kql_magic()\n\n%kql loganalytics://code().workspace(WORKSPACE_ID)", "_____no_output_____" ] ], [ [ "## Save All Converted Files", "_____no_output_____" ] ], [ [ "path_save_wgt = widgets.Text(value=str(def_path) + \"_kql_out\",\n description='Path to save KQL files: ',\n layout=Layout(width='50%'),\n style={'description_width': 'initial'})\npath_save_wgt", "_____no_output_____" ], [ "root = Path(path_save_wgt.value)\nroot.mkdir(exist_ok=True)\nfor categ, kql_files in kql_dict.items():\n sub_dir = root.joinpath(categ)\n \n for file_name, contents in kql_files.items():\n kql_txt = contents[1]\n if not kql_txt == NOT_CONVERTIBLE:\n sub_dir.mkdir(exist_ok=True)\n file_path = sub_dir.joinpath(file_name.replace('.yml', '.kql'))\n with open(file_path, 'w') as output_file:\n output_file.write(kql_txt)\n print(f\"Saved {file_path}\")\n", "Saved e:\\src\\notebooks\\experimental\\sigma_kql_out\\apt\\apt_apt29_thinktanks.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\apt\\apt_babyshark.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\apt\\apt_bear_activity_gtr19.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\apt\\apt_cloudhopper.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\apt\\apt_dragonfly.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\apt\\apt_elise.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\apt\\apt_empiremonkey.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\apt\\apt_equationgroup_dll_u_load.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\apt\\apt_hurricane_panda.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\apt\\apt_judgement_panda_gtr19.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\apt\\apt_oceanlotus_registry.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\apt\\apt_pandemic.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\apt\\apt_slingshot.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\apt\\apt_sofacy.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\apt\\apt_sofacy_zebrocy.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\apt\\apt_ta17_293a_ps.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\apt\\apt_tropictrooper.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\apt\\apt_turla_namedpipes.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\apt\\apt_unidentified_nov_18.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\apt\\apt_zxshell.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\apt\\crime_fireball.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_account_backdoor_dcsync_rights.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_account_discovery.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_admin_rdp_login.kql\nSaved 
e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_admin_share_access.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_alert_active_directory_user_control.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_alert_ad_user_backdoors.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_alert_enable_weak_encryption.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_alert_hacktool_use.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_atsvc_task.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_dcsync.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_disable_event_logging.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_GPO_scheduledtasks.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_impacket_secretdump.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_lm_namedpipe.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_mal_wceaux_dll.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_net_ntlm_downgrade.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_overpass_the_hash.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_pass_the_hash.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_rdp_localhost_login.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_rdp_reverse_tunnel.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_susp_add_sid_history.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_susp_dsrm_password_change.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_susp_failed_logon_reasons.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_susp_interactive_logons.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_susp_kerberos_manipulation.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_susp_lsass_dump.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_susp_mshta_execution.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_susp_net_recon_activity.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_susp_psexec.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_susp_raccess_sensitive_fext.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_susp_rc4_kerberos.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_susp_sdelete.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_susp_security_eventlog_cleared.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_susp_time_modification.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_svcctl_remote_service.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_user_added_to_local_administrators.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.builtin\\win_user_creation.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.malware\\win_mal_ursnif.kql\nSaved 
e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\powershell_xor_commandline.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_attrib_hiding_files.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_bypass_squiblytwo.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_cmdkey_recon.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_cmstp_com_object_access.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_etw_trace_evasion.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_exploit_cve_2015_1641.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_exploit_cve_2017_0261.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_exploit_cve_2017_11882.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_exploit_cve_2017_8759.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_hack_rubeus.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_lethalhta.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_malware_dridex.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_malware_notpetya.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_malware_script_dropper.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_malware_wannacry.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_mal_adwind.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_mal_lockergoga.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_mavinject_proc_inj.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_mshta_spawn_shell.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_netsh_fw_add.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_netsh_port_fwd.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_netsh_port_fwd_3389.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_office_shell.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_office_spawn_exe_from_users_directory.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_plugx_susp_exe_locations.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_possible_applocker_bypass.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_powershell_amsi_bypass.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_powershell_b64_shellcode.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_powershell_dll_execution.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_powershell_download.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_powershell_renamed_ps.kql\nSaved 
e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_powershell_suspicious_parameter_variation.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_process_creation_bitsadmin_download.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_proc_wrong_parent.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_psexesvc_start.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_renamed_paexec.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_sdbinst_shim_persistence.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_shell_spawn_susp_program.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_spn_enum.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_susp_bcdedit.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_susp_calc.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_susp_certutil_command.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_susp_certutil_encode.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_susp_cli_escape.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_susp_cmd_http_appdata.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_susp_control_dll_load.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_susp_csc.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_susp_execution_path.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_susp_execution_path_webserver.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_susp_exec_folder.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_susp_gup.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_susp_iss_module_install.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_susp_mmc_source.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_susp_msiexec_web_install.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_susp_net_execution.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_susp_ntdsutil.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_susp_outlook.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_susp_ping_hex_ip.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_susp_powershell_empire_lanuch.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_susp_powershell_enc_cmd.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_susp_powershell_hidden_b64_cmd.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_susp_powershell_parent_combo.kql\nSaved e:\\src\\notebooks\\experimental\\sigma_kql_out\\windows.process_creation\\win_susp_procdump.kql\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
4a5ff0b127dd03c25bc0ffd1c43ec4d4ba88053b
359,256
ipynb
Jupyter Notebook
Colab_runs.ipynb
faraz2023/FINDER-pytorch
170255f9a442b11e1a27483fe6eaf2ee61766454
[ "MIT" ]
null
null
null
Colab_runs.ipynb
faraz2023/FINDER-pytorch
170255f9a442b11e1a27483fe6eaf2ee61766454
[ "MIT" ]
null
null
null
Colab_runs.ipynb
faraz2023/FINDER-pytorch
170255f9a442b11e1a27483fe6eaf2ee61766454
[ "MIT" ]
null
null
null
215.639856
56,756
0.794548
[ [ [ "## This notebook shows how to run evaluation on our models straight from Colab environment", "_____no_output_____" ] ], [ [ "# mount GD\nfrom google.colab import drive\ndrive.mount('/content/drive')\n\n# your GD path to clone the repo\nproject_path=\"/content/drive/MyDrive/UofT_MEng/MIE1517/Project/FINDER_github/\"", "Mounted at /content/drive\n" ], [ "# Clone repo\n%cd {project_path}\n\n!git clone https://github.com/faraz2023/FINDER-pytorch.git\n\n%cd FINDER-pytorch\n%ls -a", "_____no_output_____" ], [ "# if already cloned\n%cd {project_path}/FINDER-pytorch/\n!pwd", "/content/drive/MyDrive/UofT_MEng/MIE1517/Project/FINDER_github/FINDER-pytorch\n/content/drive/MyDrive/UofT_MEng/MIE1517/Project/FINDER_github/FINDER-pytorch\n" ], [ "# install environments, NEED TO restart kernel after installation\n# torch_sparse and torch_scatter are slow on installation (normal, don't abort)\n# could takes ~ 16 min\n!pip install cython==0.29.13\n!pip install networkx==2.3\n!pip install numpy==1.17.3\n!pip install pandas==0.25.2\n!pip install scipy==1.3.1\n!pip install tqdm==4.36.1\n!pip install torchvision\n!pip install torch_sparse\n!pip install torch_scatter\n!pip install tensorflow-gpu==1.14.0 ", "Collecting cython==0.29.13\n Downloading Cython-0.29.13-cp37-cp37m-manylinux1_x86_64.whl (2.1 MB)\n\u001b[K |████████████████████████████████| 2.1 MB 14.1 MB/s \n\u001b[?25hInstalling collected packages: cython\n Attempting uninstall: cython\n Found existing installation: Cython 0.29.28\n Uninstalling Cython-0.29.28:\n Successfully uninstalled Cython-0.29.28\nSuccessfully installed cython-0.29.13\nCollecting networkx==2.3\n Downloading networkx-2.3.zip (1.7 MB)\n\u001b[K |████████████████████████████████| 1.7 MB 12.2 MB/s \n\u001b[?25hRequirement already satisfied: decorator>=4.3.0 in /usr/local/lib/python3.7/dist-packages (from networkx==2.3) (4.4.2)\nBuilding wheels for collected packages: networkx\n Building wheel for networkx (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for networkx: filename=networkx-2.3-py2.py3-none-any.whl size=1556008 sha256=83d170a05fd1803554184b1a3ff3a7bc3b6c6faafd2011774a5aeacfddf62186\n Stored in directory: /root/.cache/pip/wheels/44/e6/b8/4efaab31158e9e9ca9ed80b11f6b11130bac9a9672b3cbbeaf\nSuccessfully built networkx\nInstalling collected packages: networkx\n Attempting uninstall: networkx\n Found existing installation: networkx 2.6.3\n Uninstalling networkx-2.6.3:\n Successfully uninstalled networkx-2.6.3\n\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\nalbumentations 0.1.12 requires imgaug<0.2.7,>=0.2.5, but you have imgaug 0.2.9 which is incompatible.\u001b[0m\nSuccessfully installed networkx-2.3\nCollecting numpy==1.17.3\n Downloading numpy-1.17.3-cp37-cp37m-manylinux1_x86_64.whl (20.0 MB)\n\u001b[K |████████████████████████████████| 20.0 MB 1.2 MB/s \n\u001b[?25hInstalling collected packages: numpy\n Attempting uninstall: numpy\n Found existing installation: numpy 1.21.5\n Uninstalling numpy-1.21.5:\n Successfully uninstalled numpy-1.21.5\n\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. 
This behaviour is the source of the following dependency conflicts.\ntensorflow 2.8.0 requires tf-estimator-nightly==2.8.0.dev2021122109, which is not installed.\ntensorflow 2.8.0 requires numpy>=1.20, but you have numpy 1.17.3 which is incompatible.\ntables 3.7.0 requires numpy>=1.19.0, but you have numpy 1.17.3 which is incompatible.\nkapre 0.3.7 requires numpy>=1.18.5, but you have numpy 1.17.3 which is incompatible.\njaxlib 0.3.0+cuda11.cudnn805 requires numpy>=1.19, but you have numpy 1.17.3 which is incompatible.\njax 0.3.1 requires numpy>=1.19, but you have numpy 1.17.3 which is incompatible.\ndatascience 0.10.6 requires folium==0.2.1, but you have folium 0.8.3 which is incompatible.\nalbumentations 0.1.12 requires imgaug<0.2.7,>=0.2.5, but you have imgaug 0.2.9 which is incompatible.\u001b[0m\nSuccessfully installed numpy-1.17.3\n" ], [ "# To where you want to build modules\n# Old tf ND_cost model\n%cd {project_path}/FINDER-pytorch/code/old_FINDER_ND_cost_tf/\n\n# New torch ND_cost model\n#%cd {project_path}/FINDER-pytorch/code/FINDER_ND_cost/", "/content/drive/MyDrive/UofT_MEng/MIE1517/Project/FINDER_github/FINDER-pytorch/code/old_FINDER_ND_cost_tf\n" ], [ "# build modules\n!python setup.py build_ext -i", "running build_ext\ncythoning PrepareBatchGraph.pyx to PrepareBatchGraph.cpp\n/usr/local/lib/python3.7/dist-packages/Cython/Compiler/Main.py:369: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: /content/drive/MyDrive/UofT_MEng/MIE1517/Project/FINDER_github/FINDER-pytorch/code/old_FINDER_ND_cost_tf/PrepareBatchGraph.pxd\n tree = Parsing.p_module(s, pxd, full_module_name)\nbuilding 'PrepareBatchGraph' extension\ncreating build\ncreating build/temp.linux-x86_64-3.7\ncreating build/temp.linux-x86_64-3.7/src\ncreating build/temp.linux-x86_64-3.7/src/lib\nx86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fdebug-prefix-map=/build/python3.7-pX47U3/python3.7-3.7.12=. -fstack-protector-strong -Wformat -Werror=format-security -g -fdebug-prefix-map=/build/python3.7-pX47U3/python3.7-3.7.12=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include/python3.7m -c PrepareBatchGraph.cpp -o build/temp.linux-x86_64-3.7/PrepareBatchGraph.o -std=c++11\nx86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fdebug-prefix-map=/build/python3.7-pX47U3/python3.7-3.7.12=. -fstack-protector-strong -Wformat -Werror=format-security -g -fdebug-prefix-map=/build/python3.7-pX47U3/python3.7-3.7.12=. 
-fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include/python3.7m -c src/lib/PrepareBatchGraph.cpp -o build/temp.linux-x86_64-3.7/src/lib/PrepareBatchGraph.o -std=c++11\n(several -Wsign-compare warnings, comparison between signed and unsigned integer expressions, in src/lib/PrepareBatchGraph.cpp, src/lib/graph_struct.cpp and src/lib/utils.cpp)\ncythoning graph.pyx to graph.cpp\nbuilding 'graph' extension\ncythoning mvc_env.pyx to mvc_env.cpp\nbuilding 'mvc_env' extension\ncythoning utils.pyx to utils.cpp\nbuilding 'utils' extension\ncythoning nstep_replay_mem.pyx to nstep_replay_mem.cpp\nbuilding 'nstep_replay_mem' extension\ncythoning nstep_replay_mem_prioritized.pyx to nstep_replay_mem_prioritized.cpp\nbuilding 'nstep_replay_mem_prioritized' extension\ncythoning graph_struct.pyx to graph_struct.cpp\nbuilding 'graph_struct' extension\ncythoning FINDER.pyx to FINDER.c\nbuilding 'FINDER' extension\nx86_64-linux-gnu-gcc -pthread -shared build/temp.linux-x86_64-3.7/FINDER.o -o /content/drive/MyDrive/UofT_MEng/MIE1517/Project/FINDER_github/FINDER-pytorch/code/old_FINDER_ND_cost_tf/FINDER.cpython-37m-x86_64-linux-gnu.so\n" ], [ "# If you encounter - cannot import name 'export_saved_model' from 'tensorflow.python.keras.saving.saved_model'\n# try reinstalling tf and restarting the kernel\n!pip uninstall -y tensorflow-gpu\n!pip install tensorflow-gpu", "Found existing installation: tensorflow-gpu 1.14.0\nUninstalling tensorflow-gpu-1.14.0:\n Successfully uninstalled tensorflow-gpu-1.14.0\nCollecting tensorflow-gpu\n Downloading tensorflow_gpu-2.8.0-cp37-cp37m-manylinux2010_x86_64.whl (497.5 MB)\nCollecting tf-estimator-nightly==2.8.0.dev2021122109\n Downloading tf_estimator_nightly-2.8.0.dev2021122109-py2.py3-none-any.whl (462 kB)\nCollecting tensorboard<2.9,>=2.8\n Downloading tensorboard-2.8.0-py3-none-any.whl (5.8 MB)\nCollecting numpy>=1.20\n Downloading numpy-1.21.5-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (15.7 MB)\n(all other tensorflow-gpu requirements already satisfied)\nInstalling collected packages: numpy, tf-estimator-nightly, tensorboard, tensorflow-gpu\n Attempting uninstall: numpy\n Found existing installation: numpy 1.17.3\n Uninstalling numpy-1.17.3:\n Successfully uninstalled numpy-1.17.3\n Attempting uninstall: tensorboard\n Found existing installation: tensorboard 1.14.0\n Uninstalling tensorboard-1.14.0:\n Successfully uninstalled tensorboard-1.14.0\n\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. 
This behaviour is the source of the following dependency conflicts.\nxarray 0.18.2 requires pandas>=1.0, but you have pandas 0.25.2 which is incompatible.\nspacy 2.2.4 requires tqdm<5.0.0,>=4.38.0, but you have tqdm 4.36.1 which is incompatible.\ngoogle-colab 1.0.0 requires pandas>=1.1.0; python_version >= \"3.0\", but you have pandas 0.25.2 which is incompatible.\nfbprophet 0.7.1 requires pandas>=1.0.4, but you have pandas 0.25.2 which is incompatible.\ndatascience 0.10.6 requires folium==0.2.1, but you have folium 0.8.3 which is incompatible.\nalbumentations 0.1.12 requires imgaug<0.2.7,>=0.2.5, but you have imgaug 0.2.9 which is incompatible.\u001b[0m\nSuccessfully installed numpy-1.21.5 tensorboard-2.8.0 tensorflow-gpu-2.8.0 tf-estimator-nightly-2.8.0.dev2021122109\n" ], [ "import time\nimport sys,os\n\nimport networkx as nx\nimport numpy as np\nimport random\nimport os\nimport os\nfrom shutil import copyfile\nfrom tqdm import tqdm\n\n\n# use old module functions\nsys.path.append(f'{project_path}/FINDER-pytorch/code/old_FINDER_ND_cost_tf/')\nfrom FINDER import FINDER\n\nold_finder = FINDER()\n\n# HXA with maxcc\ndef HXA(g, method):\n # 'HDA', 'HBA', 'HPRA', 'HCA'\n sol = []\n G = g.copy()\n while (nx.number_of_edges(G)>0):\n if method == 'HDA':\n dc = nx.degree_centrality(G)\n elif method == 'HBA':\n dc = nx.betweenness_centrality(G)\n elif method == 'HCA':\n dc = nx.closeness_centrality(G)\n elif method == 'HPRA':\n dc = nx.pagerank(G)\n keys = list(dc.keys())\n values = list(dc.values())\n maxTag = np.argmax(values)\n node = keys[maxTag]\n sol.append(node)\n G.remove_node(node)\n solution = sol + list(set(g.nodes())^set(sol))\n solutions = [int(i) for i in solution]\n Robustness = old_finder.utils.getRobustness(old_finder.GenNetwork(g), solutions)\n MaxCCList = old_finder.utils.MaxWccSzList\n return Robustness,MaxCCList,solutions\n\n# modified from original EvaluateSol\ndef EvaluateSol(g, sol_file, strategyID=0, reInsertStep=20):\n #evaluate the robust given the solution, strategyID:0,count;2:rank;3:multipy\n #sys.stdout.flush()\n # g = nx.read_weighted_edgelist(data_test)\n #g = nx.read_gml(data_test)\n g_inner = old_finder.GenNetwork(g)\n print('Evaluating FINDER model')\n print('number of nodes:%d'%nx.number_of_nodes(g))\n print('number of edges:%d'%nx.number_of_edges(g))\n nodes = list(range(nx.number_of_nodes(g)))\n sol = []\n for line in open(sol_file):\n sol.append(int(line))\n\n sol_left = list(set(nodes)^set(sol))\n if strategyID > 0:\n start = time.time()\n sol_reinsert = old_finder.utils.reInsert(g_inner, sol, sol_left, strategyID, reInsertStep)\n end = time.time()\n print ('reInsert time:%.6f'%(end-start))\n else:\n sol_reinsert = sol\n solution = sol_reinsert + sol_left\n print('number of solution nodes:%d'%len(solution))\n Robustness = old_finder.utils.getRobustness(g_inner, solution)\n MaxCCList = old_finder.utils.MaxWccSzList\n return Robustness, MaxCCList, solution\n\n\n# load graph from ready to use gml (converted from datasets)\n# Network names are: \"Digg\", \"HI-II-14\"\n# Weight types are: 001, degree, random, zero\ndef build_graph_path(network_name,weight_type=\"001\"):\n return f\"{project_path}/FINDER-pytorch/data/real/cost/{network_name}_{weight_type}.gml\"\n\n# load solution files generated by model\n# Network names are: \"Digg\", \"HI-II-14\"\n# Model names are: FINDER_ND_cost, old_FINDER_ND_cost_tf etc.\n# step_ratio are: 0.0100, etc.\n# Weight types are: 001, degree, random, zero\ndef 
build_solution_path(network_name,model_name=\"FINDER_CN_cost\",step_ratio=\"0.0100\",weight_type=\"001\"):\n data_folder=\"\"\n if(weight_type!=\"\"):\n weight_type=f\"_{weight_type}\"\n data_folder=f\"Data{weight_type}/\"\n return f\"{project_path}/FINDER-pytorch/code/results/{model_name}/real/{data_folder}StepRatio_{step_ratio}/{network_name}{weight_type}.txt\"\n\n", "WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/compat/v2_compat.py:107: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.\nInstructions for updating:\nnon-resource variables are not supported in the long term\n" ], [ "def get_node_weights(g):\n sum=0.0\n for i,v in g.nodes(data=True):\n sum+=v[\"weight\"]\n return sum\n\n# compute the ratio of cost of removed nodes / totol cost\n# TODO, add step (or not, since it's test dataset, step is just trick at training stage)\ndef get_frac_cost_of_removed_nodes(g,solutions,ND_cost=False,verbose=0):\n num_nodes = nx.number_of_nodes(g) \n if(ND_cost):\n total_weight = get_node_weights(g)\n else:\n total_weight = g.size()\n\n g_mod = g.copy()\n if(verbose>0):\n print(\"\\nOriginal # of nodes: \",num_nodes)\n print(\"Original total weight: \",total_weight)\n print(\"Solution: \", len(solutions), \" = \", solutions , \"\\n\")\n\n frac_cost_list=[]\n for rm_node in tqdm(solutions):\n #for rm_node in reversed(solutions):\n g_mod.remove_node(rm_node)\n if(ND_cost):\n left_weight = get_node_weights(g_mod)\n else:\n left_weight = g_mod.size()\n\n frac_cost = (total_weight - left_weight) / total_weight\n frac_cost_list.append(frac_cost)\n if(verbose>1):\n print(\"Removed node: \", rm_node)\n print(\"left_weight: \", left_weight)\n print(\"Frac cost of removed nodes: \", frac_cost)\n \n return frac_cost_list\n\n", "_____no_output_____" ] ], [ [ "## ND_COST", "_____no_output_____" ] ], [ [ "# ND, cost\n\n# load network\nnetwork_file_path = build_graph_path(\"HI-II-14\",\"degree\")\ng = nx.read_gml(network_file_path, destringizer=int)\n\n\n# get HDA solution\nHDA_robustness, HDA_maxcclist,HDA_solutions = HXA(g, \"HDA\")\nprint(\"From HDA:\",HDA_robustness, HDA_maxcclist[0:5],HDA_solutions[0:5])\n\n# get our torch FINDER solution\nFINDER_torch_solution_file_path = build_solution_path(\"HI-II-14\",model_name=\"FINDER_ND_cost\",weight_type=\"degree\")\nprint(\"\\nSolution file:\", FINDER_torch_solution_file_path)\nFINDER_robustness, FINDER_maxcclist,FINDER_solutions = EvaluateSol(g, FINDER_torch_solution_file_path)\nprint(\"From FINDER:\",FINDER_robustness, FINDER_maxcclist[0:5],FINDER_solutions[0:5])\n\n# get old FINDER solution\nOFINDER_torch_solution_file_path = build_solution_path(\"HI-II-14\",model_name=\"old_FINDER_ND_cost_tf\",weight_type=\"degree\")\nprint(\"\\nSolution file:\", OFINDER_torch_solution_file_path)\nOFINDER_robustness, OFINDER_maxcclist,OFINDER_solutions = EvaluateSol(g, OFINDER_torch_solution_file_path)\nprint(\"From Old FINDER:\",OFINDER_robustness, OFINDER_maxcclist[0:5],OFINDER_solutions[0:5])\n", "From HDA: 0.37932884470358313 [1.0, 0.9961584633853542, 0.9913565426170469, 0.9858343337334934, 0.9810324129651861] [53, 124, 66, 33, 236]\n\nSolution file: /content/drive/MyDrive/UofT_MEng/MIE1517/Project/FINDER_github//FINDER-pytorch/code/results/FINDER_ND_cost/real/Data_degree/StepRatio_0.0100/HI-II-14_degree.txt\nEvaluating FINDER model\nnumber of nodes:4165\nnumber of edges:13087\nnumber of solution nodes:4165\nFrom FINDER: 0.25076498391820207 [1.0, 
0.9997599039615847, 0.9995198079231693, 0.999279711884754, 0.9990396158463385] [1058, 1185, 1418, 3575, 2871]\n\nSolution file: /content/drive/MyDrive/UofT_MEng/MIE1517/Project/FINDER_github//FINDER-pytorch/code/results/old_FINDER_ND_cost_tf/real/Data_degree/StepRatio_0.0100/HI-II-14_degree.txt\nEvaluating FINDER model\nnumber of nodes:4165\nnumber of edges:13087\nnumber of solution nodes:4165\nFrom Old FINDER: 0.1802623058851249 [1.0, 0.9997599039615847, 0.9995198079231693, 0.999279711884754, 0.9990396158463385] [3426, 3089, 2574, 2440, 3751]\n" ], [ "# get ratio of cost of removed nodes / totol cost per remove nodes\n\nFINDER_frac_cost_list = get_frac_cost_of_removed_nodes(g,FINDER_solutions,ND_cost=True,verbose=1)\nOFINDER_frac_cost_list = get_frac_cost_of_removed_nodes(g,OFINDER_solutions,ND_cost=True,verbose=1)\nHDA_frac_cost_list = get_frac_cost_of_removed_nodes(g,HDA_solutions,ND_cost=True)", " 7%|▋ | 284/4165 [00:00<00:01, 2831.25it/s]" ], [ "# plot\nfrom matplotlib import pyplot as plt\n\nplt.figure(figsize=(12,12))\nplt.plot(FINDER_frac_cost_list, FINDER_maxcclist, label=\"FINDER torch\")\nplt.plot(OFINDER_frac_cost_list, OFINDER_maxcclist, label=\"FINDER tf\")\nplt.plot(HDA_frac_cost_list, HDA_maxcclist, label=\"HDA\")\n\nplt.legend()\nplt.xlabel(\"fraction of cost\")\nplt.ylabel(\"Residual gcc size\")", "_____no_output_____" ] ], [ [ "## ND", "_____no_output_____" ] ], [ [ "# ND, without cost\n\n# load network\nnetwork_file_path = build_graph_path(\"HI-II-14\",\"zero\")\nprint(\"Graph Network:\", network_file_path)\ng = nx.read_gml(network_file_path, destringizer=int)\n\n\n# get HDA solution\nHDA_robustness, HDA_maxcclist,HDA_solutions = HXA(g, \"HDA\")\nprint(\"From HDA:\",HDA_robustness, HDA_maxcclist[0:5],HDA_solutions[0:5])\n\n# get our torch FINDER solution\nFINDER_torch_solution_file_path = build_solution_path(\"HI-II-14\",model_name=\"FINDER_ND\",weight_type=\"\")\nprint(\"\\nSolution file:\", FINDER_torch_solution_file_path)\nFINDER_robustness, FINDER_maxcclist,FINDER_solutions = EvaluateSol(g, FINDER_torch_solution_file_path)\nprint(\"From FINDER:\",FINDER_robustness, FINDER_maxcclist[0:5],FINDER_solutions[0:5])\n\n# get old FINDER solution\nOFINDER_torch_solution_file_path = build_solution_path(\"HI-II-14\",model_name=\"old_FINDER_ND_tf\",weight_type=\"\")\nprint(\"\\nSolution file:\", OFINDER_torch_solution_file_path)\nOFINDER_robustness, OFINDER_maxcclist,OFINDER_solutions = EvaluateSol(g, OFINDER_torch_solution_file_path)\nprint(\"From Old FINDER:\",OFINDER_robustness, OFINDER_maxcclist[0:5],OFINDER_solutions[0:5])\n", "Graph Network: /content/drive/MyDrive/UofT_MEng/MIE1517/Project/FINDER_github//FINDER-pytorch/data/real/cost/HI-II-14_zero.gml\nFrom HDA: nan [1.0, 0.9961584633853542, 0.9913565426170469, 0.9858343337334934, 0.9810324129651861] [53, 124, 66, 33, 236]\n\nSolution file: /content/drive/MyDrive/UofT_MEng/MIE1517/Project/FINDER_github//FINDER-pytorch/code/results/FINDER_ND/real/StepRatio_0.0100/HI-II-14.txt\nEvaluating FINDER model\nnumber of nodes:4165\nnumber of edges:13087\nnumber of solution nodes:4165\nFrom FINDER: nan [1.0, 0.9995198079231693, 0.9990396158463385, 0.9983193277310924, 0.9980792316926771] [246, 55, 104, 376, 2094]\n\nSolution file: /content/drive/MyDrive/UofT_MEng/MIE1517/Project/FINDER_github//FINDER-pytorch/code/results/old_FINDER_ND_tf/real/StepRatio_0.0100/HI-II-14.txt\nEvaluating FINDER model\nnumber of nodes:4165\nnumber of edges:13087\nnumber of solution nodes:4165\nFrom Old FINDER: nan [1.0, 0.9997599039615847, 
0.9995198079231693, 0.999279711884754, 0.9985594237695078] [2406, 2645, 1429, 1437, 366]\n" ], [ "# get the ratio of removed-node cost to total cost after each removal\n\nFINDER_frac_cost_list = get_frac_cost_of_removed_nodes(g,FINDER_solutions)\nOFINDER_frac_cost_list = get_frac_cost_of_removed_nodes(g,OFINDER_solutions,verbose=1)\nHDA_frac_cost_list = get_frac_cost_of_removed_nodes(g,HDA_solutions,verbose=1)", "100%|██████████| 4165/4165 [00:03<00:00, 1161.56it/s]\n 1%|▏ | 57/4165 [00:00<00:07, 569.74it/s]" ], [ "#g.size(weight='weight')\ng.size()", "_____no_output_____" ], [ "# plot\nfrom matplotlib import pyplot as plt\n\nplt.figure(figsize=(12,12))\nplt.plot(FINDER_frac_cost_list, FINDER_maxcclist, label=\"FINDER torch\")\nplt.plot(OFINDER_frac_cost_list, OFINDER_maxcclist, label=\"FINDER tf\")\nplt.plot(HDA_frac_cost_list, HDA_maxcclist, label=\"HDA\")\n\nplt.legend()\nplt.xlabel(\"fraction of cost\")\nplt.ylabel(\"Residual gcc size\")", "_____no_output_____" ], [ "def load_maxcc_file(f):\n # read one float score per line; the file is closed when the block exits\n with open(f, \"r\") as FINDER_f:\n scores = []\n for score in FINDER_f:\n scores.append(float(score))\n\n return scores", "_____no_output_____" ], [ "# maxcclist read directly from the score files\nFINDER_maxcclist2=load_maxcc_file(f\"{project_path}/FINDER-pytorch/code/results/FINDER_ND/real/StepRatio_0.0100/MaxCCList_Strategy_HI-II-14.txt\")\nOFINDER_maxcclist2=load_maxcc_file(f\"{project_path}/FINDER-pytorch/code/results/old_FINDER_ND_tf/real/StepRatio_0.0100/MaxCCList_Strategy_HI-II-14.txt\")\n\n# plot\nfrom matplotlib import pyplot as plt\n\nplt.figure(figsize=(12,12))\nplt.plot(FINDER_frac_cost_list, FINDER_maxcclist2, label=\"FINDER torch\")\nplt.plot(OFINDER_frac_cost_list, OFINDER_maxcclist2, label=\"FINDER tf\")\nplt.plot(HDA_frac_cost_list, HDA_maxcclist, label=\"HDA\")\n\nplt.legend()\nplt.xlabel(\"fraction of cost\")\nplt.ylabel(\"Residual gcc size\")", "_____no_output_____" ] ], [ [ "# Data pre-processing", "_____no_output_____" ] ], [ [ "# Load the raw HI-II-14 data from .tsv format\nimport pandas as pd\n\n# The data and web portal are made available to the public under the CC BY 4.0 license. \n# Users of the web portal or its data should cite the web portal and the HuRI publication.\n# Data source: http://www.interactome-atlas.org/\n# HuRI publication: https://pubmed.ncbi.nlm.nih.gov/25416956/\n\ndata_url = \"http://www.interactome-atlas.org/data/HI-II-14.tsv\"\nraw_edge_list = pd.read_csv(data_url, sep='\\t', names=['node_from','node_to'])\n\nraw_edge_list", "_____no_output_____" ], [ "# As we can see there are several self-referencing edges, so we need to clean those up first\nuni_edge_list = raw_edge_list[raw_edge_list.node_from != raw_edge_list.node_to]\n\n# Now we need to map all protein labels [GENCODE (v27)] such as ENSG00000204889 to index numbers\nedge_list = uni_edge_list.stack().rank(method='dense').unstack().astype(int)\nprint(edge_list.sort_values(by=['node_from']))\n\n# Then we need to shift all indices down by 1, so they start at 0\nedge_list['node_from']-=1\nedge_list['node_to']-=1\n\nprint(edge_list)\n", " node_from node_to\n0 1 2547\n1 2 2174\n2 3 517\n3 3 1813\n4 3 2500\n... ... ...\n13621 4055 4077\n13627 4058 4075\n13628 4058 4096\n13626 4058 4064\n13629 4058 4110\n\n[13115 rows x 2 columns]\n node_from node_to\n0 0 2546\n1 1 2173\n2 2 516\n3 2 1812\n4 2 2499\n... ... ...\n13623 4054 4104\n13626 4057 4063\n13627 4057 4074\n13628 4057 4095\n13629 4057 4109\n\n[13115 rows x 2 columns]\n" ], [ "# Now we use the networkx lib to convert it into a graph\nG = nx.from_pandas_edgelist(edge_list, source='node_from', target='node_to')\n\n# We add weights to the nodes (note: these are node weights, not edge weights)\nnx.set_node_attributes(G, 0.0, \"weight\")\n\nnx.write_gml(G, f\"{project_path}/FINDER-pytorch/data/raw_HI-II-14.gml\")\n\nprint(G.nodes[0]['weight'])", "0.0\n" ], [ "# ND, without cost\n\n# load network\nnetwork_file_path = f\"{project_path}/FINDER-pytorch/data/raw_HI-II-14.gml\"\nprint(\"Graph Network:\", network_file_path)\ng = nx.read_gml(network_file_path, destringizer=int)\n\n\n# get HDA solution\nHDA_robustness, HDA_maxcclist,HDA_solutions = HXA(g, \"HDA\")\nprint(\"From HDA:\",HDA_robustness, HDA_maxcclist[0:5],HDA_solutions[0:5])\n\n", "Graph Network: /content/drive/MyDrive/UofT_MEng/MIE1517/Project/FINDER_github//FINDER-pytorch/data/raw_HI-II-14.gml\nFrom HDA: nan [0.9695863746958637, 0.9656934306569344, 0.9618004866180049, 0.9615571776155718, 0.95669099756691] [3810, 2767, 4028, 4054, 4057]\n" ], [ "%cd {project_path}/FINDER-pytorch/code/old_FINDER_ND_tf/\n\n# build modules\n!python setup.py build_ext -i", "_____no_output_____" ], [ "import sys,os\n\n# use old module functions for now\nsys.path.append(f'{project_path}/FINDER-pytorch/code/old_FINDER_ND_tf/')\nfrom FINDER import FINDER\n\nimport numpy as np\nfrom tqdm import tqdm\nimport time\nimport networkx as nx\nimport pandas as pd\nimport pickle as cp\nimport random\n\ndef mkdir(path):\n if not os.path.exists(path):\n os.mkdir(path)\n\n# Predict\nmodel_file = f\"{project_path}/FINDER-pytorch/code/old_FINDER_ND_tf/models/nrange_30_50_iter_78000.ckpt\"\nprint('The chosen model is: %s' % model_file)\ndqn = FINDER()\ndqn.LoadModel(model_file)\n\n\n# Directory to save results\nsave_dir = f\"{project_path}/FINDER-pytorch/code/results/raw_HI-II-14\"\nif not os.path.exists(save_dir):\n os.makedirs(save_dir, exist_ok=True)\n# input data\ndata_test = network_file_path\nstepRatio=0.01\n# named time_cost so it does not shadow the imported time module\nsolution, time_cost = dqn.EvaluateRealData(model_file, data_test, save_dir, stepRatio)\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
4a5ff6c1a6a620ac19f802965ebc64fa4a02a44d
206,323
ipynb
Jupyter Notebook
demo.ipynb
sdogsq/ctrip-hotel-spider
1893ded886167d475953ab46a30f3f0c9f49a376
[ "Apache-2.0" ]
2
2019-03-14T03:14:46.000Z
2020-07-30T08:23:58.000Z
demo.ipynb
sdogsq/ctrip-hotel-spider
1893ded886167d475953ab46a30f3f0c9f49a376
[ "Apache-2.0" ]
null
null
null
demo.ipynb
sdogsq/ctrip-hotel-spider
1893ded886167d475953ab46a30f3f0c9f49a376
[ "Apache-2.0" ]
1
2020-02-28T13:50:09.000Z
2020-02-28T13:50:09.000Z
173.526493
65,324
0.840192
[ [ [ "%matplotlib inline\nimport re\nimport time\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom numpy import nan\nfrom selenium import webdriver\nfrom selenium.webdriver.common.action_chains import ActionChains\nfrom selenium.webdriver.support.wait import WebDriverWait", "_____no_output_____" ], [ "## create a pandas dataframe to store the scraped data\ndf = pd.DataFrame(\n columns=['hotel', 'rating', 'distance', 'score', 'recommendation_ratio', 'review_number', 'lowest_price'])\n\n## launch the browser driver; set my_path to the location of your driver executable\n# use chromedriver for Chrome, geckodriver for Firefox, or edgedriver for Edge\n\n# headless mode \nchrome_options = webdriver.ChromeOptions()\nchrome_options.add_argument('--headless')\nchrome_options.add_argument('--disable-gpu')\nchrome_options.add_argument('--window-size=1920x1080') # indispensable in headless mode\n\n\nmy_path = r\"chromedriver.exe\" # choose your own path\nbrowser = webdriver.Chrome(chrome_options=chrome_options, executable_path=my_path) # webdriver.Chrome for chromedriver\nbrowser.maximize_window()", "C:\\Users\\Saplace\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:17: DeprecationWarning: use options instead of chrome_options\n" ], [ "def get_elements(xpath, attr = 'text', pattern = ''):\n elements = browser.find_elements_by_xpath(xpath) # find the elements matching the conditions stated in the xpath\n if (attr == 'text'):\n res = list(map(lambda x: x.text, elements))\n else:\n res = list(map(lambda x: x.get_attribute(attr),elements))\n \n return res", "_____no_output_____" ], [ "columns=['hotel', 'rating', 'distance', 'score', 'recommendation_ratio', 'review_number', 'lowest_price']\ndf = pd.DataFrame(columns=columns)\n\nplace = '旺角' # choose a place in HK (旺角 = Mong Kok)\nurl = r\"http://hotels.ctrip.com/hotel/hong%20kong58/k1\" + place\ntry:\n browser.get(url)\n\n star3 = browser.find_element_by_id(\"star-3\")\n star4 = browser.find_element_by_id(\"star-4\")\n star5 = browser.find_element_by_id(\"star-5\")\n # choose hotels that are >= 3 stars\n ActionChains(browser).click(star3).perform()\n ActionChains(browser).click(star4).perform()\n ActionChains(browser).click(star5).perform()\n\n time.sleep(4) # a better way would be WebDriverWait\n\n tst = WebDriverWait(browser, 5).until(\n lambda x: x.find_element_by_link_text(\"下一页\")) # 下一页 means next page\n\n clo = browser.find_element_by_id('appd_wrap_close') # close the pop-up\n ActionChains(browser).move_to_element(clo).click(clo).perform()\n\n\n page = 0\n while (tst.get_attribute('class') != 'c_down_nocurrent'): # until the last page\n page += 1\n # hotel brand\n hotel_xpath = \"//h2[@class='hotel_name']/a\"\n hotel = get_elements(hotel_xpath,'title')\n\n hnum = len(hotel) # number of hotels on the current page\n\n # hotel rating\n rating_xpath = \"//span[@class='hotel_ico']/span[starts-with(@class,'hotel_diamond')]\"\n rating = get_elements(rating_xpath,'class')\n rating = [rating[i][-1:] for i in range(hnum)]\n\n\n # distance \n distance_xpath = \"//p[@class='hotel_item_htladdress']/span[@class='dest_distance']\"\n distance = get_elements(distance_xpath)\n distance_pattern = re.compile(r\"\\D+(\\d+\\.\\d+)\\D+\")\n distance = list(map(lambda x: distance_pattern.match(x).group(1), distance))\n\n # score\n score_xpath = \"//div[@class='hotelitem_judge_box']/a | //div[@class='hotelitem_judge_box']/span[@class='no_grade']\"\n score = get_elements(score_xpath,'title')\n score_pattern = 
re.compile(r\"\\D+(\\d+\\.\\d+)\\D+?\")\n score = list(map(lambda x: '/' if x=='暂无评分' or x=='' else score_pattern.match(x).group(1), score)) # 暂无评分 means no rating yet\n\n # recommendation ratio\n ratio_xpath = \"//div[@class='hotelitem_judge_box']/a/span[@class='total_judgement_score']/span | //div[@class='hotelitem_judge_box']/span[@class='no_grade']\"\n ratio = get_elements(ratio_xpath)\n\n # review count\n review_xpath = \"//div[@class='hotelitem_judge_box']/a/span[@class='hotel_judgement']/span | //div[@class='hotelitem_judge_box']/span[@class='no_grade'] \"\n review = get_elements(review_xpath)\n\n # lowest price\n lowest_price_xpath = \"//span[@class='J_price_lowList']\"\n price = get_elements(lowest_price_xpath)\n\n rows = np.array([hotel, rating, distance, score, ratio, review, price]).T\n dfrows = pd.DataFrame(rows,columns=columns)\n df = df.append(dfrows,ignore_index=True)\n\n ActionChains(browser).click(tst).perform() # go to the next page\n tst = WebDriverWait(browser, 10).until(\n lambda x: x.find_element_by_link_text(\"下一页\"))\n\n print(tst.get_attribute('class'), page)\nexcept Exception as e:\n print(e.__doc__)\n print(e) # Exception objects have no .message attribute in Python 3\nfinally:\n browser.quit()\n\n## create a csv file in our working directory with our scraped data\ndf.to_csv(place+\"_hotel.csv\", index=False,encoding='utf_8_sig')\nprint('Scraping is done!')", "c_down 1\nc_down 2\nc_down 3\nc_down 4\nc_down 5\nc_down 6\nc_down 7\nc_down_nocurrent 8\nScraping is done!\n" ], [ "df.score = pd.to_numeric(df.score, errors='coerce')\ndf.rating = pd.to_numeric(df.rating, errors='coerce')\n#df.recommendation_ratio = pd.to_numeric(df.recommendation_ratio,errors='coerce')\ndf['distance']=pd.to_numeric(df['distance'], errors='coerce')\ndf.review_number = pd.to_numeric(df.review_number, errors='coerce')\ndf.lowest_price = pd.to_numeric(df.lowest_price,errors='coerce')\ndf=df.sort_values(by='distance')", "_____no_output_____" ], [ "df", "_____no_output_____" ], [ "def piepic():\n plt.figure(num='Rpie',dpi=100) \n labels = ['3 stars', '4 stars', '5 stars']\n sizes = [df.rating[df.rating==k].count() for k in [3,4,5] ]\n colors = ['gold', 'lightcoral', 'lightskyblue']\n explode = (0.01, 0.01, 0.01) # slightly separate every slice\n def atxt(pct, allvals):\n absolute = int(pct/100.*np.sum(allvals))\n return \"{:.1f}%\\\\n({:d})\".format(pct, absolute)\n # Plot\n plt.pie(sizes, labels=labels, colors=colors, explode=explode, autopct=lambda pct: atxt(pct, sizes),\n shadow=True, startangle=140)\n plt.legend(labels,\n title=\"hotel rating\")\n plt.axis('equal')\n plt.savefig('Rpie.jpg')\n plt.show()\n plt.close()", "_____no_output_____" ], [ "def DvPpic(): # distance vs price\n plt.figure(num='DvP',dpi=100) \n plt.plot(df.distance[df.rating==3],df.lowest_price[df.rating==3],'x-',label='3 stars')\n plt.plot(df.distance[df.rating==4],df.lowest_price[df.rating==4],'*-',label='4 stars')\n plt.plot(df.distance[df.rating==5],df.lowest_price[df.rating==5],'rD-',label='5 stars')\n plt.legend()\n plt.xlabel('Distance (km)')\n plt.ylabel('Price (Yuan)')\n plt.grid()\n plt.title('Distance vs. 
Price')\n plt.savefig('DvP.jpg')\n plt.show()\n plt.close()", "_____no_output_____" ], [ "def Pdensity():\n plt.figure(num='Pdensity',dpi=100) \n df.lowest_price[df.rating==3].plot(kind='density',label='3 stars')\n df.lowest_price[df.rating==4].plot(kind='density',label='4 stars')\n df.lowest_price[df.rating==5].plot(kind='density',label='5 stars')\n plt.grid()\n plt.legend()\n plt.xlabel('Price (Yuan)')\n plt.title('Distribution of Price')\n plt.savefig('Pdensity.jpg')\n plt.show()", "_____no_output_____" ], [ "def Sbox():\n plt.figure(num='Sbox',dpi=200) \n data = pd.concat([df.score[df.rating==3].rename('3 stars'),\n df.score[df.rating==4].rename('4 stars'),\n df.score[df.rating==5].rename('5 stars')],\n axis=1)\n data.plot.box()\n plt.minorticks_on()\n plt.ylabel('score')\n # plt.grid(b=True, which='minor', color='r', linestyle='--')\n plt.title('Boxplot of Scores')\n plt.savefig('Sbox.jpg')\n #data.plot.box()\n ", "_____no_output_____" ], [ "piepic()", "_____no_output_____" ], [ "DvPpic()", "_____no_output_____" ], [ "Pdensity()", "_____no_output_____" ], [ "Sbox()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a5ffd77ab0ac695c0f5e917d21284d434356033
8,776
ipynb
Jupyter Notebook
notebooks/stats/Generic_Segmentation-Copy4.ipynb
utkarshojha/rewriting
347495da9f4a2c9802553e1ac79bb70929c60bca
[ "MIT" ]
526
2020-07-29T01:25:29.000Z
2022-03-22T02:52:29.000Z
notebooks/stats/Generic_Segmentation-Copy4.ipynb
utkarshojha/rewriting
347495da9f4a2c9802553e1ac79bb70929c60bca
[ "MIT" ]
8
2020-08-05T11:44:14.000Z
2021-06-22T06:48:37.000Z
notebooks/stats/Generic_Segmentation-Copy4.ipynb
utkarshojha/rewriting
347495da9f4a2c9802553e1ac79bb70929c60bca
[ "MIT" ]
74
2020-07-30T22:17:42.000Z
2022-03-02T06:06:10.000Z
25.964497
153
0.512534
[ [ [ "%pushd ../../", "_____no_output_____" ], [ "%env CUDA_VISIBLE_DEVICES=3", "_____no_output_____" ], [ "import json\nimport logging\n\nimport os\nimport sys\nimport tempfile\nfrom tqdm.auto import tqdm\n\nimport torch\nimport torchvision\nfrom torchvision import transforms\nfrom PIL import Image\nimport numpy as np\n\ntorch.cuda.set_device(0)", "_____no_output_____" ], [ "from netdissect import setting", "_____no_output_____" ], [ "segopts = 'netpqc'", "_____no_output_____" ], [ "segmodel, seglabels, _ = setting.load_segmenter(segopts)", "_____no_output_____" ], [ "class UnsupervisedImageFolder(torchvision.datasets.ImageFolder):\n def __init__(self, root, transform=None, max_size=None, get_path=False):\n self.temp_dir = tempfile.TemporaryDirectory()\n os.symlink(root, os.path.join(self.temp_dir.name, 'dummy'))\n root = self.temp_dir.name\n super().__init__(root, transform=transform)\n self.get_path = get_path\n self.perm = None\n if max_size is not None:\n actual_size = super().__len__()\n if actual_size > max_size:\n self.perm = torch.randperm(actual_size)[:max_size].clone()\n logging.info(f\"{root} has {actual_size} images, downsample to {max_size}\")\n else:\n logging.info(f\"{root} has {actual_size} images <= max_size={max_size}\")\n\n def _find_classes(self, dir):\n return ['./dummy'], {'./dummy': 0}\n\n def __getitem__(self, key):\n if self.perm is not None:\n key = self.perm[key].item()\n \n if isinstance(key, str):\n path = key\n else:\n path, target = self.samples[key] # look up by the (possibly permuted) integer index\n sample = self.loader(path)\n if self.transform is not None:\n sample = self.transform(sample)\n \n if self.get_path:\n return sample, path\n else:\n return sample\n \n\n def __len__(self):\n if self.perm is not None:\n return self.perm.size(0)\n else:\n return super().__len__()", "_____no_output_____" ], [ "len(seglabels)", "_____no_output_____" ], [ "class Sampler(torch.utils.data.Sampler):\n def __init__(self, dataset, seg_path):\n self.todos = []\n for path, _ in dataset.samples:\n k = os.path.splitext(os.path.basename(path))[0]\n if not os.path.exists(os.path.join(seg_path, k + '.pth')):\n self.todos.append(path)\n \n def __len__(self):\n return len(self.todos)\n \n def __iter__(self):\n yield from self.todos", "_____no_output_____" ], [ "transform = transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),\n ])\n", "_____no_output_____" ], [ "def process(img_path, seg_path, device='cuda', batch_size=128, **kwargs):\n os.makedirs(seg_path, exist_ok=True)\n\n dataset = UnsupervisedImageFolder(img_path, transform=transform, get_path=True)\n sampler = Sampler(dataset, seg_path)\n loader = torch.utils.data.DataLoader(dataset, num_workers=24, batch_size=batch_size, pin_memory=True, sampler=sampler) \n \n with torch.no_grad():\n for x, paths in tqdm(loader):\n segs = segmodel.segment_batch(x.to(device), **kwargs).detach().cpu()\n for path, seg in zip(paths, segs):\n k = os.path.splitext(os.path.basename(path))[0]\n torch.save(seg, os.path.join(seg_path, k + '.pth'))\n del segs", "_____no_output_____" ], [ "import glob", "_____no_output_____" ], [ "torch.backends.cudnn.benchmark=True", "_____no_output_____" ], [ "!ls churches/dome2tree", "_____no_output_____" ], [ "!ls notebooks/stats/churches/", "_____no_output_____" ], [ "process(\n '/data/vision/torralba/ganprojects/placesgan/tracer/baselines/pyflow/dome2spire_all_256/naive',\n 'notebooks/stats/churches/dome2spire_all/naive',\n batch_size=8,\n)", "_____no_output_____" ], [ "process(\n 
'/data/vision/torralba/ganprojects/placesgan/tracer/baselines/pyflow/dome2spire_all_256/poisson',\n 'notebooks/stats/churches/dome2spire_all/poisson',\n batch_size=8,\n)", "_____no_output_____" ], [ "process(\n '/data/vision/torralba/ganprojects/placesgan/tracer/baselines/pyflow/dome2spire_all_256/laplace',\n 'notebooks/stats/churches/dome2spire_all/laplace',\n batch_size=8,\n)", "_____no_output_____" ], [ "process(\n '/data/vision/torralba/distillation/gan_rewriting/results/ablations/stylegan-church-dome2tree-8-1-2001-0.0001-overfitdomes_filtered/images',\n 'churches/dome2tree/overfit',\n batch_size=8)", "_____no_output_____" ], [ "process(\n '/data/vision/torralba/ganprojects/placesgan/tracer/utils/samples/domes',\n 'churches/domes',\n batch_size=12)", "_____no_output_____" ], [ "process(\n '/data/vision/torralba/ganprojects/placesgan/tracer/utils/samples/dome2tree',\n 'churches/dome2tree/ours',\n batch_size=8)", "_____no_output_____" ], [ "process(\n '/data/vision/torralba/ganprojects/placesgan/tracer/utils/samples/dome2spire',\n 'churches/dome2spire/ours',\n batch_size=8)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
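Editor's note on the segmentation row above: the custom `Sampler` makes the long batch job resumable by yielding only the image paths whose `.pth` output is missing, so a crashed or interrupted run can be restarted without redoing finished work. A framework-free sketch of the same skip-existing pattern follows; `pending_items`, the `*.png` glob, and `segment()` are hypothetical placeholders, not part of the original repo.

```python
import glob
import os

def pending_items(img_dir, out_dir, pattern="*.png", ext=".pth"):
    """Yield (input_path, output_path) pairs whose output file does not exist yet."""
    os.makedirs(out_dir, exist_ok=True)
    for path in sorted(glob.glob(os.path.join(img_dir, pattern))):
        key = os.path.splitext(os.path.basename(path))[0]
        out = os.path.join(out_dir, key + ext)
        if not os.path.exists(out):
            yield path, out

# Usage sketch (segment() stands in for segmodel.segment_batch on one image):
# for in_path, out_path in pending_items("images", "segs"):
#     torch.save(segment(in_path), out_path)
```

The design choice is that the output file itself acts as the checkpoint: no extra bookkeeping state is needed, at the cost of one `os.path.exists` check per input.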
4a600069d32f0f139540d45f3d63429c2f337ce9
92,162
ipynb
Jupyter Notebook
book/distributions/change-of-variables.ipynb
willettk/stats-ds-book
06bc751a7e82f73f9d7419f32fe5882ec5742f2f
[ "MIT" ]
41
2020-08-18T12:14:43.000Z
2022-03-31T16:37:17.000Z
book/distributions/change-of-variables.ipynb
willettk/stats-ds-book
06bc751a7e82f73f9d7419f32fe5882ec5742f2f
[ "MIT" ]
7
2020-08-19T04:22:24.000Z
2020-12-22T15:18:24.000Z
book/distributions/change-of-variables.ipynb
willettk/stats-ds-book
06bc751a7e82f73f9d7419f32fe5882ec5742f2f
[ "MIT" ]
13
2020-08-19T02:57:47.000Z
2022-03-03T15:24:07.000Z
155.154882
20,440
0.882815
[ [ [ "# How do distributions transform under a change of variables?\n\nKyle Cranmer, March 2016", "_____no_output_____" ] ], [ [ "%pylab inline --no-import-all", "Populating the interactive namespace from numpy and matplotlib\n" ] ], [ [ "We are interested in understanding how distributions transform under a change of variables.\nLet's start with a simple example. Think of a spinner like on a game of Twister. \n\n<!--<img src=\"http://cdn.krrb.com/post_images/photos/000/273/858/DSCN3718_large.jpg?1393271975\" width=300 />-->\n\nWe flick the spinner and it stops. Let's call the angle of the pointer $x$. It seems a safe assumption that the distribution of $x$ is uniform between $[0,2\\pi)$... so $p_x(x) = 1/(2\\pi)$\n\nNow let's say that we change variables to $y=\\cos(x)$ (sorry if the names are confusing here, don't think about x- and y-coordinates, these are just names for generic variables). The question is this:\n**what is the distribution of y?** Let's call it $p_y(y)$\n\nWell, it's easy to check with a simulation, so let's try it out", "_____no_output_____" ] ], [ [ "# generate samples for x, evaluate y=cos(x)\nn_samples = 100000\nx = np.random.uniform(0,2*np.pi,n_samples)\ny = np.cos(x)", "_____no_output_____" ], [ "# make a histogram of x\nn_bins = 50\ncounts, bins, patches = plt.hist(x, bins=50, density=True, alpha=0.3)\nplt.plot([0,2*np.pi], (1./2/np.pi, 1./2/np.pi), lw=2, c='r')\nplt.xlim(0,2*np.pi)\nplt.xlabel('x')\nplt.ylabel('$p_x(x)$')", "_____no_output_____" ] ], [ [ "Ok, now let's make a histogram for $y=\\cos(x)$", "_____no_output_____" ] ], [ [ "counts, y_bins, patches = plt.hist(y, bins=50, density=True, alpha=0.3)\nplt.xlabel('y')\nplt.ylabel('$p_y(y)$')", "_____no_output_____" ] ], [ [ "It's not uniform! Why is that? Let's look at the $x-y$ relationship", "_____no_output_____" ] ], [ [ "# make a scatter of x,y\nplt.scatter(x[:300],y[:300]) #just the first 300 points\n\nxtest = .2\nplt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='r')\nplt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='r')\nxtest = xtest+.1\nplt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='r')\nplt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='r')\n\nxtest = 2*np.pi-xtest\nplt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='g')\nplt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='g')\nxtest = xtest+.1\nplt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='g')\nplt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='g')\n\n\nxtest = np.pi/2\nplt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='r')\nplt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='r')\nxtest = xtest+.1\nplt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='r')\nplt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='r')\n\nxtest = 2*np.pi-xtest\nplt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='g')\nplt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='g')\nxtest = xtest+.1\nplt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='g')\nplt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='g')\n\n\nplt.ylim(-1.5,1.5)\nplt.xlim(-1,7)", "_____no_output_____" ] ], [ [ "The two sets of vertical lines are both separated by $0.1$. The probability $P(a < x < b)$ must equal the probability of $P(\\cos(b) < y < \\cos(a))$. In this example there are two different values of $x$ that give the same $y$ (see green and red lines), so we need to take that into account. 
For now, let's just focus on the first part of the curve with $x<\\pi$.\n\nSo we can write (this is the important equation):\n\n\\begin{equation}\n\\int_a^b p_x(x) dx = \\int_{y_b}^{y_a} p_y(y) dy \n\\end{equation}\nwhere $y_a = \\cos(a)$ and $y_b = \\cos(b)$.\n\nand we can re-write the integral on the right by using a change of variables (pure calculus)\n\n\\begin{equation}\n\\int_a^b p_x(x) dx = \\int_{y_b}^{y_a} p_y(y) dy = \\int_a^b p_y(y(x)) \\left| \\frac{dy}{dx}\\right| dx \n\\end{equation}\n\nnotice that the limits of integration and integration variable are the same for the left and right sides of the equation, so the integrands must be the same too. Therefore:\n\n\\begin{equation}\np_x(x) = p_y(y) \\left| \\frac{dy}{dx}\\right| \n\\end{equation}\nand equivalently\n\\begin{equation}\np_y(y) = p_x(x) \\,/ \\,\\left| \\, {dy}/{dx}\\, \\right | \n\\end{equation}\n\nThe factor $\\left|\\frac{dy}{dx} \\right|$ is called a Jacobian. When it is large it is stretching the probability in $x$ over a large range of $y$, so it makes sense that it is in the denominator.", "_____no_output_____" ] ], [ [ "plt.plot((0.,1), (0,.3))\nplt.plot((0.,1), (0,0), lw=2)\nplt.plot((1.,1), (0,.3))\nplt.ylim(-.1,.4)\nplt.xlim(-.1,1.6)\nplt.text(0.5,0.2, '1', color='b')\nplt.text(0.2,0.03, 'x', color='black')\nplt.text(0.5,-0.05, 'y=cos(x)', color='g')\nplt.text(1.02,0.1, '$\\sin(x)=\\sqrt{1-y^2}$', color='r')", "_____no_output_____" ] ], [ [ "In our case:\n\\begin{equation}\n\\left|\\frac{dy}{dx} \\right| = \\sin(x)\n\\end{equation}\n\nLooking at the right-triangle above you can see $\\sin(x)=\\sqrt{1-y^2}$ and finally there will be an extra factor of 2 for $p_y(y)$ to take into account $x>\\pi$. So we arrive at\n\\begin{equation}\np_y(y) = 2 \\times \\frac{1}{2 \\pi} \\frac{1}{\\sin(x)} = \\frac{1}{\\pi} \\frac{1}{\\sin(\\arccos(y))} = \\frac{1}{\\pi} \\frac{1}{\\sqrt{1-y^2}}\n\\end{equation}\n\n Notice that when $y=\\pm 1$ the pdf is diverging. This is called a [caustic](http://www.phikwadraat.nl/huygens_cusp_of_tea/) and you see them in your coffee and rainbows!\n\n| | |\n|---|---|\n| <img src=\"http://www.nanowerk.com/spotlight/id19915_1.jpg\" size=200 /> | <img src=\"http://www.ams.org/featurecolumn/images/february2009/caustic.gif\" size=200> | \n\n\n**Let's check our prediction**", "_____no_output_____" ] ], [ [ "counts, y_bins, patches = plt.hist(y, bins=50, density=True, alpha=0.3)\npdf_y = (1./np.pi)/np.sqrt(1.-y_bins**2)\nplt.plot(y_bins, pdf_y, c='r', lw=2)\nplt.ylim(0,5)\nplt.xlabel('y')\nplt.ylabel('$p_y(y)$')", "_____no_output_____" ] ], [ [ "Perfect!", "_____no_output_____" ], [ "## A trick using the cumulative distribution function (cdf) to generate random numbers\n\nLet's consider a different variable transformation now -- it is a special one that we can use to our advantage. 
\n\\begin{equation}\ny(x) = \\textrm{cdf}(x) = \\int_{-\\infty}^x p_x(x') dx'\n\\end{equation}\n\nHere's a plot of a distribution and cdf for a Gaussian.\n\n(Note: the axes are different for the pdf and the cdf; see http://matplotlib.org/examples/api/two_scales.html)", "_____no_output_____" ] ], [ [ "from scipy.stats import norm", "_____no_output_____" ], [ "x_for_plot = np.linspace(-3,3, 30)\nfig, ax1 = plt.subplots()\n\nax1.plot(x_for_plot, norm.pdf(x_for_plot), c='b')\nax1.set_ylabel('p(x)', color='b')\nfor tl in ax1.get_yticklabels():\n tl.set_color('b')\n \nax2 = ax1.twinx()\nax2.plot(x_for_plot, norm.cdf(x_for_plot), c='r')\nax2.set_ylabel('cdf(x)', color='r')\nfor tl in ax2.get_yticklabels():\n tl.set_color('r')", "_____no_output_____" ] ], [ [ "Ok, so let's use our result about how distributions transform under a change of variables to predict the distribution of $y=cdf(x)$. We need to calculate \n\n\\begin{equation}\n\\frac{dy}{dx} = \\frac{d}{dx} \\int_{-\\infty}^x p_x(x') dx'\n\\end{equation}\n\nJust like particles and anti-particles, when derivatives meet anti-derivatives they annihilate. So $\\frac{dy}{dx} = p_x(x)$, which shouldn't be a surprise... the slope of the cdf is the pdf.\n\nSo putting these together we find the distribution for $y$ is:\n\n\\begin{equation}\np_y(y) = p_x(x) \\, / \\, \\frac{dy}{dx} = p_x(x) /p_x(x) = 1\n\\end{equation}\n\nSo it's just a uniform distribution from $[0,1]$, which is perfect for random numbers.\n\nWe can turn this around and generate a uniformly random number between $[0,1]$, take the inverse of the cdf and we should have the distribution we want for $x$.\n\nLet's try it for a Gaussian. The inverse of the cdf for a Gaussian is called [ppf](http://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.stats.norm.html)\n", "_____no_output_____" ] ], [ [ "norm.ppf.__doc__", "_____no_output_____" ], [ "#check it out\nnorm.cdf(0), norm.ppf(0.5)", "_____no_output_____" ] ], [ [ "Ok, let's use the CDF trick to generate Normally-distributed (aka Gaussian-distributed) random numbers", "_____no_output_____" ] ], [ [ "rand_cdf = np.random.uniform(0,1,10000)\nrand_norm = norm.ppf(rand_cdf)", "_____no_output_____" ], [ "_ = plt.hist(rand_norm, bins=30, density=True, alpha=0.3)\nplt.xlabel('x')", "_____no_output_____" ] ], [ [ "**Pros**: The great thing about this technique is it is very efficient. You only generate one random number per random $x$.\n\n**Cons**: the downside is you need to know how to compute the inverse cdf for $p_x(x)$ and that can be difficult. It works for a distribution like a Gaussian, but for some random distribution this might be even more computationally expensive than the accept/reject approach. This approach also doesn't really work if your distribution is for more than one variable.", "_____no_output_____" ], [ "## Going full circle\n\nOk, let's try it for our distribution of $y=\\cos(x)$ above. 
We found \n\n\\begin{equation}\np_y(y) = \\frac{1}{\\pi} \\frac{1}{\\sqrt{1-y^2}}\n\\end{equation}\n\nSo the CDF is (see Wolfram alpha for [integral](http://www.wolframalpha.com/input/?i=integrate%5B1%2Fsqrt%5B1-x%5E2%5D%2FPi%5D) )\n\\begin{equation}\ncdf(y') = \\int_{-1}^{y'} \\frac{1}{\\pi} \\frac{1}{\\sqrt{1-y^2}} = \\frac{1}{\\pi}\\arcsin(y') + C\n\\end{equation}\nand we know that for $y=-1$ the CDF must be 0, so the constant is $1/2$ and by looking at the plot or remembering some trig you know that it's also $cdf(y') = (1/\\pi) \\arccos(y')$.\n\nSo to apply the trick, we need to generate uniformly random variables $z$ between 0 and 1, and then take the inverse of the cdf to get $y$. Ok, so what would that be:\n\\begin{equation}\ny = \\textrm{cdf}^{-1}(z) = \\cos(\\pi z)\n\\end{equation}\n\n**Of course!** that's how we started in the first place, we started with a uniform $x$ in $[0,2\\pi]$ and then defined $y=\\cos(x)$. So we just worked backwards to get where we started. The only difference here is that we only evaluate the first half: $\\cos(x < \\pi)$\n\n\n\n\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ] ]
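Editor's note on the change-of-variables row above: the derivation ends with inverse-CDF sampling of $p_y(y) = 1/(\pi\sqrt{1-y^2})$ via $y=\cos(\pi z)$. A quick numerical check of that claim (the sample size and test points are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.uniform(0.0, 1.0, 200_000)
y = np.cos(np.pi * z)  # inverse-CDF sampling; z and 1-z are both uniform, so the sign convention drops out

# Compare the empirical CDF against cdf(t) = 1 - arccos(t)/pi at a few points:
for t in (-0.5, 0.0, 0.5):
    print(t, (y < t).mean(), 1 - np.arccos(t) / np.pi)
# Expected: roughly 0.3333, 0.5, 0.6667 for the three test points.
# The density itself is 1/(pi*sqrt(1-t^2)), diverging at the caustics t = +/- 1.
```

Checking the CDF rather than a histogram sidesteps the divergence at $y=\pm 1$, where any finite-bin density estimate necessarily disagrees with the closed form.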
4a600b289da09e25f1c886cd97652a3b7096b8aa
3,253
ipynb
Jupyter Notebook
NumericDataType.ipynb
akankshawaghmare/Microspectra
fdc2d1ae8059779d4dc8436733f8b0ed0dbcc821
[ "Apache-2.0" ]
null
null
null
NumericDataType.ipynb
akankshawaghmare/Microspectra
fdc2d1ae8059779d4dc8436733f8b0ed0dbcc821
[ "Apache-2.0" ]
null
null
null
NumericDataType.ipynb
akankshawaghmare/Microspectra
fdc2d1ae8059779d4dc8436733f8b0ed0dbcc821
[ "Apache-2.0" ]
null
null
null
27.336134
241
0.460498
[ [ [ "<a href=\"https://colab.research.google.com/github/akankshawaghmare/Microspectra/blob/main/NumericDataType.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "# program for conversion of data types\r\nx=int(input(\"Enter the value for x : \"))\r\ny=int(input(\"Enter the value for y : \"))\r\nz=x/y\r\nprint(\"The value of z : \",z)\r\nprint(int(z))\r\nprint(str(z))\r\n", "Enter the value for x : 8\nEnter the value for y : 4\nThe value of z : 2.0\n2\n2.0\n" ], [ "# program for converting an integer to binary, hexadecimal and octal\r\na=9\r\nprint(\"The value of a in binary form is :\",bin(a))\r\nprint(\"The value of a in hexadecimal form is :\",hex(a))\r\nprint(\"The value of a in octal form is :\",oct(a))", "The value of a in binary form is : 0b1001\nThe value of a in hexadecimal form is : 0x9\nThe value of a in octal form is : 0o11\n" ], [ "# program for complex numbers\r\na=4+8j\r\nprint(type(a))\r\nb=58\r\nc=complex(b) # conversion of int to complex is possible, but the reverse (complex to int) is not\r\nprint(c)", "<class 'complex'>\n(58+0j)\n" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ] ]
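Editor's note on the numeric-types row above: `bin`/`hex`/`oct` only go one way; the built-in `int(text, base)` parses those strings back, and base 0 infers the base from the `0b`/`0x`/`0o` prefix. A minimal sketch, which also demonstrates the complex-to-int restriction mentioned in the notebook's comment:

```python
a = 9
for text in (bin(a), hex(a), oct(a)):  # '0b1001', '0x9', '0o11'
    # base 0 tells int() to read the base from the prefix
    assert int(text, 0) == a

# complex -> int really is blocked, unlike int -> complex:
try:
    int(4 + 8j)
except TypeError as e:
    print(e)  # e.g. "can't convert complex to int" (exact wording may vary by version)
```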
4a601e94f59db0af776a8e5bcffb147f38e2006a
47,061
ipynb
Jupyter Notebook
notebooks/docker_and_kubernetes/labs/1_intro_docker.ipynb
acresende/asl-ml-immersion
914446de08c5c78a132d248e4a084ee18c8388b0
[ "Apache-2.0" ]
null
null
null
notebooks/docker_and_kubernetes/labs/1_intro_docker.ipynb
acresende/asl-ml-immersion
914446de08c5c78a132d248e4a084ee18c8388b0
[ "Apache-2.0" ]
null
null
null
notebooks/docker_and_kubernetes/labs/1_intro_docker.ipynb
acresende/asl-ml-immersion
914446de08c5c78a132d248e4a084ee18c8388b0
[ "Apache-2.0" ]
null
null
null
42.131603
2,919
0.601751
[ [ [ "# Introduction to Docker\n\n**Learning Objectives**\n * Build and run Docker containers\n * Pull Docker images from Docker Hub and Google Container Registry\n * Push Docker images to Google Container Registry", "_____no_output_____" ], [ "## Overview\n\nDocker is an open platform for developing, shipping, and running applications. With Docker, you can separate your applications from your infrastructure and treat your infrastructure like a managed application. Docker helps you ship code faster, test faster, deploy faster, and shorten the cycle between writing code and running code.\n\nDocker does this by combining kernel containerization features with workflows and tooling that helps you manage and deploy your applications.\n\nDocker containers can be directly used in Kubernetes, which allows them to be run in the Kubernetes Engine with ease. After learning the essentials of Docker, you will have the skillset to start developing Kubernetes and containerized applications.", "_____no_output_____" ], [ "## Basic Docker commands", "_____no_output_____" ], [ "See what docker images you have. ", "_____no_output_____" ] ], [ [ "!docker images", "REPOSITORY TAG IMAGE ID CREATED SIZE\ngcr.io/qwiklabs-gcp-00-eeb852ce8ccb/taxifare_training_container latest 8d499a588f26 5 days ago 7.81GB\nus-docker.pkg.dev/vertex-ai/training/tf-cpu.2-5 latest b5b158ef259d 2 months ago 7.81GB\ngcr.io/inverting-proxy/agent <none> fe507176d0e6 7 months ago 1.73GB\n" ] ], [ [ "If this is the first time working with docker you won't have any repositories listed. \n\n**Note**. If you are running this in an AI Notebook, then you should see a single image `gcr.io/inverting-proxy/agent`. This is the container that is currently running the AI Notebook. \n\nLet's use `docker run` to pull a docker image called `hello-world` from the public registry. The docker daemon will search for the `hello-world` image, if it doesn't find the image locally, it pulls the image from a public registry called Docker Hub, creates a container from that image, and runs the container for you.", "_____no_output_____" ] ], [ [ "!docker run hello-world", "Unable to find image 'hello-world:latest' locally\nlatest: Pulling from library/hello-world\n\n\u001b[1B9710123e: Pull complete 479kB/2.479kBB\u001b[1A\u001b[2KDigest: sha256:37a0b92b08d4919615c3ee023f7ddb068d12b8387475d64c622ac30f45c29c51\nStatus: Downloaded newer image for hello-world:latest\n\nHello from Docker!\nThis message shows that your installation appears to be working correctly.\n\nTo generate this message, Docker took the following steps:\n 1. The Docker client contacted the Docker daemon.\n 2. The Docker daemon pulled the \"hello-world\" image from the Docker Hub.\n (amd64)\n 3. The Docker daemon created a new container from that image which runs the\n executable that produces the output you are currently reading.\n 4. 
The Docker daemon streamed that output to the Docker client, which sent it\n to your terminal.\n\nTo try something more ambitious, you can run an Ubuntu container with:\n $ docker run -it ubuntu bash\n\nShare images, automate workflows, and more with a free Docker ID:\n https://hub.docker.com/\n\nFor more examples and ideas, visit:\n https://docs.docker.com/get-started/\n\n" ] ], [ [ "Now when we look at our docker images we should see `hello-world` there as well.", "_____no_output_____" ] ], [ [ "!docker images", "REPOSITORY TAG IMAGE ID CREATED SIZE\ngcr.io/qwiklabs-gcp-00-eeb852ce8ccb/taxifare_training_container latest 8d499a588f26 5 days ago 7.81GB\nhello-world latest feb5d9fea6a5 2 weeks ago 13.3kB\nus-docker.pkg.dev/vertex-ai/training/tf-cpu.2-5 latest b5b158ef259d 2 months ago 7.81GB\ngcr.io/inverting-proxy/agent <none> fe507176d0e6 7 months ago 1.73GB\n" ] ], [ [ "This is the image pulled from the Docker Hub public registry. The Image ID is in `SHA256` hash format—this field specifies the Docker image that's been provisioned. When the docker daemon can't find an image locally, it will by default search the public registry for the image. Let's run the container again:", "_____no_output_____" ], [ "Now, if we want to run `docker run hello-world` again, it won't have to download from the container registry.", "_____no_output_____" ], [ "To see all docker containers running, use `docker ps`.", "_____no_output_____" ] ], [ [ "!docker ps", "CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n58ba5b916652 gcr.io/inverting-proxy/agent \"/bin/sh -c '/opt/bi…\" 2 weeks ago Up 2 weeks proxy-agent\n" ] ], [ [ "There are no running containers. **Note. If you are running this in an AI Notebook, you'll see one container running.**\n\nThe `hello-world` containers you ran previously already exited. In order to see all containers, including ones that have finished executing, run `docker ps -a`:", "_____no_output_____" ] ], [ [ "!docker ps -a", "CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\nca883e1632d6 hello-world \"/hello\" 22 seconds ago Exited (0) 20 seconds ago gracious_booth\n58ba5b916652 gcr.io/inverting-proxy/agent \"/bin/sh -c '/opt/bi…\" 2 weeks ago Up 2 weeks proxy-agent\n" ] ], [ [ "This shows you the Container ID, a UUID generated by Docker to identify the container, and more metadata about the run. The container Names are also randomly generated but can be specified with `docker run --name [container-name] hello-world`.", "_____no_output_____" ], [ "## Build a Docker container", "_____no_output_____" ], [ "Let's build a Docker image that's based on a simple node application.", "_____no_output_____" ], [ "**Exercise**\n\nOpen the text file called `intro.docker` in the `dockerfiles` folder and complete the TODO there. ", "_____no_output_____" ], [ "Your dockerfile should have the following steps\n\n 1. use `FROM` to inherit an official Node runtime as the parent image; e.g. node:6\n 2. use `WORKDIR` to set the working directory to /app\n 3. use `ADD` to copy the current directory to the container at /app\n 4. use `EXPOSE` to make the container's port 80 available to the outside world\n 5. 
use `CMD` to run the command `node ./src/app.js`", "_____no_output_____" ], [ "This file instructs the Docker daemon on how to build your image.\n\nThe initial line specifies the base parent image, which in this case is the official Docker image for node version 6.\nIn the second, we set the working (current) directory of the container.\nIn the third, we add the current directory's contents (indicated by the \".\" ) into the container.\nThen we expose the container's port so it can accept connections on that port and finally run the node command to start the application.\n\nCheck out the other [Docker command references](https://docs.docker.com/engine/reference/builder/#known-issues-run) to understand what each line does.", "_____no_output_____" ], [ "We're going to use this Docker container to run a simple node.js app. Have a look at `app.js`. This is a simple HTTP server that listens on port 80 and returns \"Hello World.\"\n", "_____no_output_____" ], [ "Now let's build the image. Note again the \"`.`\", which means current directory so you need to run this command from within the directory that has the Dockerfile.\n\nThe `-t` is to name and tag an image with the `name:tag` syntax. The name of the image is `node-app` and the tag is `0.1`. The tag is highly recommended when building Docker images. If you don't specify a tag, the tag will default to latest and it becomes more difficult to distinguish newer images from older ones. Also notice how each line in the Dockerfile above results in intermediate container layers as the image is built.", "_____no_output_____" ], [ "**Exercise**\n\nUse `docker build` to build the docker image at `dockerfiles/intro.docker`. Tag the image `node-app:0.1`. ", "_____no_output_____" ] ], [ [ "!docker build -t node-app:0.1 -f dockerfiles/intro.docker .", "Sending build context to Docker daemon 85.5kB\nStep 1/5 : FROM node:6\n6: Pulling from library/node\n\n\u001b[1B55d5a1d1: Pulling fs layer \n\u001b[1B80d00ae9: Pulling fs layer \n\u001b[1Bb3117dca: Pulling fs layer \n\u001b[1Ba19181b2: Pulling fs layer \n\u001b[1B7b2a5bcc: Pulling fs layer \n\u001b[1B12c70287: Pulling fs layer \n\u001b[1B5386a42d: Pulling fs layer \n\u001b[1BDigest: 
sha256:e133e66ec3bfc98da0440e552f452e5cdf6413319d27a2db3b01ac4b319759b3[5A\u001b[2K\u001b[4A\u001b[2K\u001b[5A\u001b[2K\u001b[8A\u001b[2K\u001b[5A\u001b[2K\u001b[8A\u001b[2K\u001b[3A\u001b[2K\u001b[8A\u001b[2K\u001b[5A\u001b[2K\u001b[8A\u001b[2K\u001b[4A\u001b[2K\u001b[8A\u001b[2K\u001b[8A\u001b[2K\u001b[2A\u001b[2K\u001b[4A\u001b[2K\u001b[2A\u001b[2K\u001b[8A\u001b[2K\u001b[4A\u001b[2K\u001b[8A\u001b[2K\u001b[4A\u001b[2K\u001b[8A\u001b[2K\u001b[4A\u001b[2K\u001b[8A\u001b[2K\u001b[4A\u001b[2K\u001b[8A\u001b[2K\u001b[4A\u001b[2K\u001b[8A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[8A\u001b[2K\u001b[4A\u001b[2K\u001b[8A\u001b[2K\u001b[4A\u001b[2K\u001b[8A\u001b[2K\u001b[4A\u001b[2K\u001b[8A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[8A\u001b[2K\u001b[8A\u001b[2K\u001b[8A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[7A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[6A\u001b[2K\u001b[5A\u001b[2K\u001b[5A\u001b[2K\u001b[5A\u001b[2K\u001b[5A\u001b[2K\u001b[5A\u001b[2K\u001b[5A\u001b[2K\u001b[5A\u001b[2K\u001b[5A\u001b[2K\u001b[5A\u001b[2K\u001b[5A\u001b[2K\u001b[5A\u001b[2K\u001b[5A\u001b[2K\u001b[5A\u001b[2K\u001b[5A\u001b[2K\u001b[5A\u001b[2K\u001b[5A\u001b[2K\u001b[5A\u001b[2K\u001b[5A\u001b[2K\u001b[5A\u001b[2K\u001b[5A\u001b[2K\u001b[5A\u001b[2K\u001b[5A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[4A\u001b[2K\u001b[3A\u001b[2K\u001b[2A\u001b[2K\u001b[2A\u001b[2K\u001b[2A\u001b[2K\u001b[2A\u001b[2K\u001b[2A\u001b[2K\u001b[2A\u001b[2K\u001b[2A\u001b[2K\u001b[2A\u001b[2K\u001b[2A\u001b[2K\u001b[1A\u001b[2K\u001b[1A\u001b[2K\u001b[1A\u001b[2K\nStatus: Downloaded newer image for node:6\n ---> ab290b853066\nStep 2/5 : WORKDIR /app\n ---> Running in 247ed7473e7c\nRemoving intermediate container 247ed7473e7c\n ---> ba7828831d5c\nStep 3/5 : ADD . /app\n ---> f053a9d238b2\nStep 4/5 : EXPOSE 80\n ---> Running in a79e82a5fb92\nRemoving intermediate container a79e82a5fb92\n ---> d8c27c23e64e\nStep 5/5 : CMD [\"node\", \"./src/app.js\"]\n ---> Running in f51820a3e65c\nRemoving intermediate container f51820a3e65c\n ---> 2dfd7e7f964e\nSuccessfully built 2dfd7e7f964e\nSuccessfully tagged node-app:0.1\n" ] ], [ [ "Let's check that the image has been created correctly. 
", "_____no_output_____" ] ], [ [ "!docker images", "REPOSITORY TAG IMAGE ID CREATED SIZE\nnode-app 0.1 2dfd7e7f964e 21 seconds ago 884MB\ngcr.io/qwiklabs-gcp-00-eeb852ce8ccb/taxifare_training_container latest 8d499a588f26 6 days ago 7.81GB\nhello-world latest feb5d9fea6a5 2 weeks ago 13.3kB\nus-docker.pkg.dev/vertex-ai/training/tf-cpu.2-5 latest b5b158ef259d 2 months ago 7.81GB\ngcr.io/inverting-proxy/agent <none> fe507176d0e6 7 months ago 1.73GB\nnode 6 ab290b853066 2 years ago 884MB\n" ] ], [ [ "You should see a `node-app` repository that was created only seconds ago. \n\nNotice `node` is the base image and `node-app` is the image you built. You can't remove `node` without removing `node-app` first. The size of the image is relatively small compared to VMs. Other versions of the node image such as `node:slim` and `node:alpine` can give you even smaller images for easier portability. The topic of slimming down container sizes is further explored in Advanced Topics. You can view all versions in the official repository here.\n\nNote, you can remove an image from your docker images using `docker rmi [repository]:[tag]`.", "_____no_output_____" ], [ "## Run a Docker container\n\nNow we'll run the container based on the image you built above using the `docker run` command. The `--name` flag allows you to name the container if you like. And `-p` instructs Docker to map the host's port 4000 to the container's port 80. This allows you to reach the server at http://localhost:4000. Without port mapping, you would not be able to reach the container at localhost.", "_____no_output_____" ] ], [ [ "!docker ps -a", "CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\nca883e1632d6 hello-world \"/hello\" 12 minutes ago Exited (0) 12 minutes ago gracious_booth\n58ba5b916652 gcr.io/inverting-proxy/agent \"/bin/sh -c '/opt/bi…\" 2 weeks ago Up 2 weeks proxy-agent\n" ] ], [ [ "**Exercise**\n\nUse `docker run` to run the container you just build called `node-app:0.1`. Assign the host port `4000` to port `80` and assign it the name `my-app`.", "_____no_output_____" ] ], [ [ "%%bash\ndocker run -p 4000:80 --name my-app node-app:0.1", "Process is terminated.\n" ] ], [ [ "To test out the server, open a terminal window and type the following command:\n\n```bash\ncurl http://localhost:4000\n```\n\nYou should see the server respond with `Hello World`", "_____no_output_____" ], [ "The container will run as long as the initial terminal is running. If you want to stop the container, run the following command in the terminal to stop and remove the container:\n\n```bash\ndocker stop my-app && docker rm my-app\n```\nAfter a few moments the container will stop. You should notice the cell above will complete execution.\n\n#### Running the container in the background\nIf you want to the container to run in the background (not tied to the terminal's session), you need to specify the `-d` flag.\nNow run the following command to start the container in the background", "_____no_output_____" ], [ "**Exercise**\n\nModify your command above with `-d` flag to run `my-app` in the background.", "_____no_output_____" ] ], [ [ "%%bash\ndocker run -p 4000:80 --name my-app node-app:0.1 -d", "docker: Error response from daemon: Conflict. The container name \"/my-app\" is already in use by container \"a06bd758a9db8accab89d4a777a6dbff373320bc1099c94efa57d048bfad139a\". You have to remove (or rename) that container to be able to reuse that name.\nSee 'docker run --help'.\n" ] ], [ [ "Your container is now running in the background. 
You can check the status of your running container using `docker ps`", "_____no_output_____" ] ], [ [ "!docker ps", "CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n58ba5b916652 gcr.io/inverting-proxy/agent \"/bin/sh -c '/opt/bi…\" 2 weeks ago Up 2 weeks proxy-agent\n" ] ], [ [ "Notice the container is running in the output of docker ps. You can look at the logs by executing `docker logs [container_id]`. ", "_____no_output_____" ] ], [ [ "# Note, your container id will be different\n!docker logs b9d5fd6b8e33", "Error: No such container: b9d5fd6b8e33\n" ] ], [ [ "You should see \n```bash\nServer running at http://0.0.0.0:80/\n```\nIf you want to follow the log's output as the container is running, use the `-f` option.", "_____no_output_____" ], [ "## Modify & Publish\n\nLet's modify the application and push it to your Google Container Registry (gcr). After that you'll remove all local containers and images to simulate a fresh environment, and then pull and run your containers from gcr. This will demonstrate the portability of Docker containers.\n\n### Edit `app.js`\nOpen the file `./src/app.js` with the text editor and replace \"Hello World\" with another string. Then build this new image. ", "_____no_output_____" ], [ "**Exercise**\n\nAfter modifying the `app.js` file, use `docker build` to build a new container called `node-app:0.2` from the same docker file. ", "_____no_output_____" ] ], [ [ "%%bash\ndocker build -t node-app:0.2 -f dockerfiles/intro.docker .", "Sending build context to Docker daemon 92.67kB\nStep 1/5 : FROM node:6\n ---> ab290b853066\nStep 2/5 : WORKDIR /app\n ---> Using cache\n ---> ba7828831d5c\nStep 3/5 : ADD . /app\n ---> Using cache\n ---> f5acbde48bbb\nStep 4/5 : EXPOSE 80\n ---> Using cache\n ---> 5163390ca297\nStep 5/5 : CMD [\"node\", \"./src/app.js\"]\n ---> Using cache\n ---> 71d3fa5c16e8\nSuccessfully built 71d3fa5c16e8\nSuccessfully tagged node-app:0.2\n" ] ], [ [ "Notice in `Step 2` of the output we are using an existing cache layer. From `Step 3` and on, the layers are modified because we made a change in `app.js`.\n\nRun another container with the new image version. Notice how we map the host's port 8000 instead of 4000. We can't use host port 4000 because it's already in use. ", "_____no_output_____" ], [ "**Exercise**\n\nRun this new container in the background using a different port and with the name `my-app-2`.", "_____no_output_____" ] ], [ [ "!docker run -p 8000:80 --name my-app-4 -d node-app:0.2 -d", "9582c559bedfc62450704266940e1d73bcadb440d14f8fa23775d15a200b7312\ndocker: Error response from daemon: driver failed programming external connectivity on endpoint my-app-4 (87e42023e12a7fe211747974d0f12971a051f5e68267fc16ec77ec72c2fcc18e): Bind for 0.0.0.0:8000 failed: port is already allocated.\n" ] ], [ [ "You can check that both containers are running using `docker ps`.", "_____no_output_____" ] ], [ [ "!docker ps", "CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n58ba5b916652 gcr.io/inverting-proxy/agent \"/bin/sh -c '/opt/bi…\" 2 weeks ago Up 2 weeks proxy-agent\n" ] ], [ [ "And let's test both containers using `curl` as before:", "_____no_output_____" ] ], [ [ "!curl http://localhost:8000", "Modified string!\n" ], [ "!curl http://localhost:4000", "curl: (7) Failed to connect to localhost port 4000: Connection refused\n" ] ], [ [ "Recall, to stop a container running, you can execute the following command either in a terminal or (because they are running in the background) in a cell in this notebook. 
", "_____no_output_____" ], [ "### Publish to gcr\n\nNow you're going to push your image to the Google Container Registry (gcr). To push images to your private registry hosted by gcr, you need to tag the images with a registry name. The format is `[hostname]/[project-id]/[image]:[tag]`.\n\nFor gcr:\n\n * `[hostname]`= gcr.io\n * `[project-id]`= your project's ID\n * `[image]`= your image name\n * `[tag]`= any string tag of your choice. If unspecified, it defaults to \"latest\".", "_____no_output_____" ] ], [ [ "import os\n\nPROJECT_ID = \"qwiklabs-gcp-00-eeb852ce8ccb\" # REPLACE WITH YOUR PROJECT NAME\n\nos.environ[\"PROJECT_ID\"] = PROJECT_ID", "_____no_output_____" ] ], [ [ "Let's tag `node-app:0.2`.", "_____no_output_____" ] ], [ [ "!docker images", "REPOSITORY TAG IMAGE ID CREATED SIZE\nnode-app 0.1 71d3fa5c16e8 3 minutes ago 884MB\nnode-app 0.2 71d3fa5c16e8 3 minutes ago 884MB\n<none> <none> 2dfd7e7f964e 22 minutes ago 884MB\ngcr.io/qwiklabs-gcp-00-eeb852ce8ccb/taxifare_training_container latest 8d499a588f26 6 days ago 7.81GB\nhello-world latest feb5d9fea6a5 2 weeks ago 13.3kB\nus-docker.pkg.dev/vertex-ai/training/tf-cpu.2-5 latest b5b158ef259d 2 months ago 7.81GB\ngcr.io/inverting-proxy/agent <none> fe507176d0e6 7 months ago 1.73GB\nnode 6 ab290b853066 2 years ago 884MB\n" ] ], [ [ "**Exercise**\n\nTag the `node-app:0.2` image with a new image name conforming to the naming convention `gcr.io/[project-id]/[image]:[tag]`. Keep the image and tag names the same.", "_____no_output_____" ] ], [ [ "%%bash\ndocker tag node-app:0.2 gcr.io/${PROJECT_ID}/node-app:0.2", "_____no_output_____" ] ], [ [ "Now when we list our docker images we should see this newly tagged repository.", "_____no_output_____" ] ], [ [ "!docker images", "REPOSITORY TAG IMAGE ID CREATED SIZE\nnode-app 0.1 71d3fa5c16e8 4 minutes ago 884MB\nnode-app 0.2 71d3fa5c16e8 4 minutes ago 884MB\ngcr.io/qwiklabs-gcp-00-eeb852ce8ccb/node-app 0.2 71d3fa5c16e8 4 minutes ago 884MB\n<none> <none> 2dfd7e7f964e 22 minutes ago 884MB\ngcr.io/qwiklabs-gcp-00-eeb852ce8ccb/taxifare_training_container latest 8d499a588f26 6 days ago 7.81GB\nhello-world latest feb5d9fea6a5 2 weeks ago 13.3kB\nus-docker.pkg.dev/vertex-ai/training/tf-cpu.2-5 latest b5b158ef259d 2 months ago 7.81GB\ngcr.io/inverting-proxy/agent <none> fe507176d0e6 7 months ago 1.73GB\nnode 6 ab290b853066 2 years ago 884MB\n" ] ], [ [ "Next, let's push this image to gcr.", "_____no_output_____" ], [ "**Exercise**\n\nPush this new image to the gcr.", "_____no_output_____" ] ], [ [ "%%bash\ndocker push gcr.io/${PROJECT_ID}/node-app:0.2", "The push refers to repository [gcr.io/qwiklabs-gcp-00-eeb852ce8ccb/node-app]\n1fcc761d5933: Preparing\nfdddc305ad06: Preparing\nf39151891503: Preparing\nf1965d3c206f: Preparing\na27518e43e49: Preparing\n910d7fd9e23e: Preparing\n4230ff7f2288: Preparing\n2c719774c1e1: Preparing\nec62f19bb3aa: Preparing\nf94641f1fe1f: Preparing\n910d7fd9e23e: Waiting\n4230ff7f2288: Waiting\n2c719774c1e1: Waiting\nec62f19bb3aa: Waiting\nf94641f1fe1f: Waiting\na27518e43e49: Layer already exists\nf1965d3c206f: Layer already exists\nf39151891503: Layer already exists\n910d7fd9e23e: Layer already exists\n2c719774c1e1: Layer already exists\n4230ff7f2288: Layer already exists\nec62f19bb3aa: Layer already exists\nf94641f1fe1f: Layer already exists\n1fcc761d5933: Pushed\nfdddc305ad06: Pushed\n0.2: digest: sha256:127fcba3a5ffa164e60d46677fa5099a272c24e498f7c869d235db6eb40c92fc size: 2424\n" ] ], [ [ "Check that the image exists in `gcr` by visiting the image registry Cloud Console. 
You can navigate via the console to `Navigation menu > Container Registry` or visit the url from the cell below:", "_____no_output_____" ] ], [ [ "%%bash\necho \"http://gcr.io/${PROJECT_ID}/node-app\"", "http://gcr.io/qwiklabs-gcp-00-eeb852ce8ccb/node-app\n" ] ], [ [ "### Test the published gcr image\n\nLet's test this image. You could start a new VM, ssh into that VM, and install gcloud. For simplicity, we'll just remove all containers and images to simulate a fresh environment.\n\nFirst, stop and remove all containers using `docker stop` and `docker rm`. **Be careful not to stop the container running this AI Notebook!**", "_____no_output_____" ] ], [ [ "!docker stop my-app && docker rm my-app", "_____no_output_____" ], [ "!docker stop my-app-2 && docker rm my-app-2", "_____no_output_____" ] ], [ [ "Now remove the docker images you've created above using `docker rmi`.", "_____no_output_____" ] ], [ [ "!docker images", "_____no_output_____" ], [ "%%bash\ndocker rmi node-app:0.2\ndocker rmi gcr.io/${PROJECT_ID}/node-app:0.2\ndocker rmi node-app:0.1\ndocker rmi node:6 \ndocker rmi -f hello-world:latest", "Untagged: node-app:0.2\nUntagged: gcr.io/qwiklabs-gcp-00-eeb852ce8ccb/node-app:0.2\nUntagged: gcr.io/qwiklabs-gcp-00-eeb852ce8ccb/node-app@sha256:127fcba3a5ffa164e60d46677fa5099a272c24e498f7c869d235db6eb40c92fc\nUntagged: node:6\nUntagged: node@sha256:e133e66ec3bfc98da0440e552f452e5cdf6413319d27a2db3b01ac4b319759b3\nUntagged: hello-world:latest\nUntagged: hello-world@sha256:37a0b92b08d4919615c3ee023f7ddb068d12b8387475d64c622ac30f45c29c51\nDeleted: sha256:feb5d9fea6a5e9606aa995e879d862b825965ba48de054caab5ef356dc6b3412\n" ] ], [ [ "Confirm all images are removed with `docker images`.", "_____no_output_____" ] ], [ [ "!docker images", "_____no_output_____" ] ], [ [ "At this point you should have a pseudo-fresh environment. Now, pull the image and run it.", "_____no_output_____" ] ], [ [ "%%bash\ndocker pull gcr.io/${PROJECT_ID}/node-app:0.2\ndocker run -p 4000:80 -d gcr.io/${PROJECT_ID}/node-app:0.2", "0.2: Pulling from qwiklabs-gcp-00-eeb852ce8ccb/node-app\nDigest: sha256:127fcba3a5ffa164e60d46677fa5099a272c24e498f7c869d235db6eb40c92fc\nStatus: Downloaded newer image for gcr.io/qwiklabs-gcp-00-eeb852ce8ccb/node-app:0.2\ngcr.io/qwiklabs-gcp-00-eeb852ce8ccb/node-app:0.2\nbbf3e63f755d031c58df9add7fad1ec82dfb29339084fa3ec599dbfb5df9b174\n" ] ], [ [ "You can check that it's running as expected using `curl` as before:", "_____no_output_____" ] ], [ [ "!curl http://localhost:4000", "Modified string!\n" ] ], [ [ "Copyright 2020 Google LLC Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
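Editor's note on the Docker row above: the tag-and-push exercise follows the `[hostname]/[project-id]/[image]:[tag]` convention. The same workflow can also be scripted from Python instead of `%%bash` cells; a minimal sketch, where the project id is the placeholder value from the notebook and error handling is reduced to `check=True`:

```python
import subprocess

PROJECT_ID = "qwiklabs-gcp-00-eeb852ce8ccb"  # placeholder project id copied from the notebook
LOCAL = "node-app:0.2"
REMOTE = f"gcr.io/{PROJECT_ID}/node-app:0.2"  # [hostname]/[project-id]/[image]:[tag]

def run(*args):
    # check=True raises CalledProcessError if docker exits non-zero
    subprocess.run(args, check=True)

run("docker", "tag", LOCAL, REMOTE)   # add the registry-qualified name
run("docker", "push", REMOTE)         # upload the image layers to gcr
```

Passing the command as a list of arguments (rather than one shell string) avoids shell-quoting issues with the tag names.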
4a60275299def6c97a41cb46d1c463001511867b
21,097
ipynb
Jupyter Notebook
leukemia-classifier.ipynb
darsh004/Leukemia-classifier
d4ded7da19dc67c5a2506608eb9e449f62cb1ca7
[ "MIT" ]
1
2021-12-08T15:09:27.000Z
2021-12-08T15:09:27.000Z
leukemia-classifier.ipynb
darsh004/Leukemia-classifier
d4ded7da19dc67c5a2506608eb9e449f62cb1ca7
[ "MIT" ]
null
null
null
leukemia-classifier.ipynb
darsh004/Leukemia-classifier
d4ded7da19dc67c5a2506608eb9e449f62cb1ca7
[ "MIT" ]
null
null
null
21,097
21,097
0.713466
[ [ [ "# This Python 3 environment comes with many helpful analytics libraries installed\n# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python\n# For example, here's several helpful packages to load\n\nimport numpy as np # linear algebra\nimport pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)\n\n# Input data files are available in the read-only \"../input/\" directory\n# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory\n\nimport os\nfor dirname, _, filenames in os.walk('/kaggle/input'):\n for filename in filenames:\n print(os.path.join(dirname, filename))\n\n# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using \"Save & Run All\" \n# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session", "_____no_output_____" ], [ "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport glob\nimport os", "_____no_output_____" ], [ "from skimage.io import imread,imshow\nfrom skimage.transform import resize\nfrom sklearn.utils import shuffle\nfrom tqdm import tqdm\nfrom tensorflow.keras.preprocessing.image import ImageDataGenerator\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import InputLayer,Conv2D,MaxPool2D,BatchNormalization,Dropout,Flatten,Dense\nfrom tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau\n%matplotlib inline", "_____no_output_____" ], [ "from skimage.io import imread,imshow\nfrom skimage.transform import resize\nfrom sklearn.utils import shuffle\nfrom tqdm import tqdm\nfrom tensorflow.keras.preprocessing.image import ImageDataGenerator\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import InputLayer,Conv2D,MaxPool2D,BatchNormalization,Dropout,Flatten,Dense\nfrom tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau\n%matplotlib inline", "_____no_output_____" ], [ "# my Contribution: Reduce Dataset \n\ntrain_dataset_0_all=glob.glob('../input/leukemia-classification/C-NMC_Leukemia/training_data/fold_0/all/*.bmp')\ntrain_dataset_0_hem=glob.glob('../input/leukemia-classification/C-NMC_Leukemia/training_data/fold_0/hem/*.bmp')\ntrain_dataset_1_all=glob.glob('../input/leukemia-classification/C-NMC_Leukemia/training_data/fold_1/all/*.bmp')\ntrain_dataset_1_hem=glob.glob('../input/leukemia-classification/C-NMC_Leukemia/training_data/fold_1/hem/*.bmp')", "_____no_output_____" ], [ "valid_data=pd.read_csv('../input/leukemia-classification/C-NMC_Leukemia/validation_data/C-NMC_test_prelim_phase_data_labels.csv')", "_____no_output_____" ], [ "Test_dataset = glob.glob('../input/leukemia-classification/C-NMC_Leukemia/testing_data/C-NMC_test_final_phase_data')", "_____no_output_____" ], [ "a,b=len(train_dataset_0_all),len(train_dataset_1_all)\nd=a+b\nprint('count:',d)", "_____no_output_____" ], [ "valid_data.head()", "_____no_output_____" ], [ "A=[]\nH=[]\nA.extend(train_dataset_0_all)\nA.extend(train_dataset_1_all)\nH.extend(train_dataset_0_hem)\nH.extend(train_dataset_1_hem)\nprint(len(A))\nprint(len(H))", "_____no_output_____" ], [ "A=np.array(A)\nH=np.array(H)", "_____no_output_____" ], [ "fig, ax = plt.subplots(nrows = 1, ncols = 5, figsize = (20,20))\nfor i in tqdm(range(0,5)):\n rand=np.random.randint(len(A))\n img=imread(A[rand])\n img=resize(img,(128,128))\n ax[i].imshow(img)", "_____no_output_____" 
], [ "fig, ax = plt.subplots(nrows = 1, ncols = 5, figsize = (20,20))\nfor i in tqdm(range(0,5)):\n rand=np.random.randint(len(H))\n img=imread(H[rand])\n img=resize(img,(128,128))\n ax[i].imshow(img)", "_____no_output_____" ], [ "image=[]\nlabel=[]\nfor i in tqdm(range(len(A))):\n img=imread(A[i])\n img=resize(img,(128,128))\n image.append(img)\n label.append(1)\n \nfor i in tqdm(range(len(H))):\n img=imread(H[i]) # bug fix: was img_, so every hem entry reused the previous image\n img=resize(img,(128,128))\n image.append(img)\n label.append(0)\n\nimage=np.array(image)\nlabel=np.array(label)", "_____no_output_____" ], [ "del A \ndel H", "_____no_output_____" ], [ "image, label = shuffle(image, label, random_state = 42)", "_____no_output_____" ], [ "fig, ax = plt.subplots(nrows = 1, ncols = 5, figsize = (20,20))\nfor i in tqdm(range(0,5)):\n rand=np.random.randint(len(image))\n ax[i].imshow(image[rand])\n a=label[rand]\n if a==1:\n ax[i].set_title('diseased')\n else:\n ax[i].set_title('fine')", "_____no_output_____" ], [ "X=image\ny=label", "_____no_output_____" ], [ "del image\ndel label", "_____no_output_____" ], [ "X_val = []\n\nfor image_name in valid_data.new_names:\n # Loading images\n img = imread('../input/leukemia-classification/C-NMC_Leukemia/validation_data/C-NMC_test_prelim_phase_data/' + image_name)\n # Resizing \n img = resize(img, (128,128))\n # Appending them into list\n X_val.append(img)\n \n# Converting into array\nX_val = np.array(X_val)\n\n\n# Storing target values as well \ny_val = valid_data.labels.values", "_____no_output_____" ], [ "train_datagen = ImageDataGenerator(horizontal_flip=True,\n vertical_flip=True,\n zoom_range = 0.2)\n\ntrain_datagen.fit(X)", "_____no_output_____" ], [ "# Contribution: made some changes in the layers and added some hidden layers\n\nmodel=Sequential()\nmodel.add(InputLayer(input_shape=(128,128,3)))\n\nmodel.add(Conv2D(filters=32,kernel_size=(3,3),padding='valid',activation='relu'))\nmodel.add(BatchNormalization())\nmodel.add(Dense(4))\nmodel.add(MaxPool2D(pool_size=(2,2),padding='valid'))\nmodel.add(Dropout(.2))\n\nmodel.add(Conv2D(filters=64,kernel_size=(3,3),padding='valid',activation='relu'))\nmodel.add(BatchNormalization())\nmodel.add(Dense(2))\nmodel.add(MaxPool2D(pool_size=(2,2),padding='valid'))\nmodel.add(Dropout(.2))\n\nmodel.add(Conv2D(filters=128,kernel_size=(3,3),padding='valid',activation='relu'))\nmodel.add(BatchNormalization())\nmodel.add(Dense(1))\nmodel.add(MaxPool2D(pool_size=(2,2),padding='valid'))\nmodel.add(Dropout(.2))\n\nmodel.add(Conv2D(filters=256,kernel_size=(3,3),padding='valid',activation='relu')) #contribution\nmodel.add(BatchNormalization())#contribution\n\nmodel.add(Flatten())\n\nmodel.add(Dense(units = 128, activation = 'relu')) #contribution\nmodel.add(Dropout(0.3))\n\nmodel.add(Dense(units = 64, activation = 'relu')) #contribution\nmodel.add(Dropout(0.3))\n\n\nmodel.add(Dense(units=1, activation='sigmoid'))\n", "_____no_output_____" ], [ "model.summary()", "_____no_output_____" ], [ "model.compile(optimizer='adam',loss='binary_crossentropy',metrics=['accuracy'])\n\n", "_____no_output_____" ], [ "filepath = './best_weights.hdf5'\n\nearlystopping = EarlyStopping(monitor = 'val_accuracy', \n mode = 'max' , \n patience = 15)\n\ncheckpoint = ModelCheckpoint(filepath, \n monitor = 'val_accuracy', \n mode='max', \n save_best_only=True, \n verbose = 1)\n\ncallback_list = [earlystopping, checkpoint]", "_____no_output_____" ], [ "len(X),len(X_val)", "_____no_output_____" ], [ "#contribution\nlr_reduction = ReduceLROnPlateau(monitor='val_loss', \n patience=10, \n verbose=2, \n 
factor=.75)\n\n\nmodel_checkpoint= ModelCheckpoint(\"/best_result_checkpoint\", monitor='val_loss', save_best_only=True, verbose=0)", "_____no_output_____" ], [ "#history1 = model.fit(train_datagen.flow(X, y, batch_size = 512),\n #validation_data = (X_val, y_val),\n # epochs = 2,\n #verbose = 1,\n # callbacks =[earlystopping])", "_____no_output_____" ], [ "#history2 = model.fit(train_datagen.flow(X, y, batch_size = 512),\n # validation_data = (X_val, y_val),\n # epochs = 4,\n #verbose = 1,\n #callbacks =[earlystopping])", "_____no_output_____" ], [ "# contribution: reduced the number of epochs and the batch size (the output layer keeps its sigmoid activation)\nimport tensorflow as tf # needed for tf.keras.models.save_model below\n\nhistory = model.fit(train_datagen.flow(X, y, batch_size = 212),\n validation_data = (X_val, y_val),\n epochs = 6,\n verbose = 1,\n callbacks =[earlystopping])\n\ntf.keras.models.save_model(model,'my_model.hdf5')", "_____no_output_____" ], [ "model.summary()", "_____no_output_____" ], [ "#my Contribution\n\nprint(history.history['accuracy'])\n\nplt.plot(history.history['accuracy'],'--', label='accuracy on training set')\nplt.plot(history.history['val_accuracy'], label='accuracy on validation set')\nplt.xlabel('epoch')\nplt.ylabel('accuracy')\nplt.legend()\nplt.show()", "_____no_output_____" ], [ "#my Contribution\n\nplt.plot(history.history['loss'])\nplt.plot(history.history['val_loss'])\nplt.title('model loss')\nplt.ylabel('loss')\nplt.xlabel('epoch')\nplt.legend(['train', 'test'], loc='upper left')\nplt.show()", "_____no_output_____" ], [ "import numpy as np\nimport matplotlib.pyplot as plt\n \n \n# creating the dataset\ndata = {'Reference Accuracy':65, 'My Accuracy':97}\ncourses = list(data.keys())\nvalues = list(data.values())\n \nfig = plt.figure(figsize = (10, 5))\n \n# creating the bar plot\nplt.bar(courses, values, color ='maroon',\n width = 0.4)\n \nplt.xlabel(\"Comparison Between My Accuracy and Reference Accuracy \")\nplt.ylabel(\"Accuracy\")\nplt.show()", "_____no_output_____" ], [ "# References:\n# https://www.kaggle.com/dimaorizz/kravtsov-lab7\n# https://towardsdatascience.com/cnn-architectures-a-deep-dive-a99441d18049\n# https://ieeexplore.ieee.org/document/9071471\n# google.com/leukemia classification\n", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
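Editor's note on the leukemia row above: the C-NMC training folds contain more `all` images than `hem` images, and the notebook addresses that imbalance only by shuffling. A common mitigation is passing `class_weight` to `model.fit`; a minimal sketch, where the inverse-frequency weighting is a standard heuristic and the counts are illustrative only, not something the notebook itself does:

```python
import numpy as np

def inverse_frequency_weights(labels):
    """Map each class to n_samples / (n_classes * class_count)."""
    classes, counts = np.unique(labels, return_counts=True)
    total = labels.shape[0]
    return {int(c): total / (len(classes) * n) for c, n in zip(classes, counts)}

# `label` in the notebook is a 0/1 array (1 = all, 0 = hem).
# Illustrative counts only; substitute the real len(A) and len(H):
y = np.array([1] * 5000 + [0] * 2500)
class_weight = inverse_frequency_weights(y)
print(class_weight)  # {0: 1.5, 1: 0.75}
# model.fit(train_datagen.flow(X, y, ...), class_weight=class_weight, ...)
```

Upweighting the minority class this way penalizes misclassified `hem` samples more, which matters here because plain accuracy can look good while the minority class is under-predicted.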
4a603e06876af35e384636a29097b3b6865127e4
133,595
ipynb
Jupyter Notebook
Tutorial_4_Flax_Zero2Hero_Colab.ipynb
gordicaleksa/get-started-with-JAX
036f5227a57f902861031f95001df368a5572ef7
[ "MIT" ]
234
2021-10-31T15:12:13.000Z
2022-03-30T21:43:33.000Z
Tutorial_4_Flax_Zero2Hero_Colab.ipynb
gordicaleksa/get-started-with-JAX
036f5227a57f902861031f95001df368a5572ef7
[ "MIT" ]
null
null
null
Tutorial_4_Flax_Zero2Hero_Colab.ipynb
gordicaleksa/get-started-with-JAX
036f5227a57f902861031f95001df368a5572ef7
[ "MIT" ]
28
2021-10-31T16:35:24.000Z
2022-03-20T23:02:56.000Z
119.708781
51,716
0.855818
[ [ [ "[<img src=\"https://deepnote.com/buttons/launch-in-deepnote-small.svg\">](https://deepnote.com/launch?url=https%3A%2F%2Fgithub.com%2Fgordicaleksa%2Fget-started-with-JAX%2Fblob%2Fmain%2FTutorial_4_Flax_Zero2Hero_Colab.ipynb)", "_____no_output_____" ], [ "<a href=\"https://colab.research.google.com/github/gordicaleksa/get-started-with-JAX/blob/main/Tutorial_4_Flax_Zero2Hero_Colab.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# Flax: From Zero to Hero!\n\nThis notebook heavily relies on the [official Flax docs](https://flax.readthedocs.io/en/latest/) and [examples](https://github.com/google/flax/blob/main/examples/) + some additional code/modifications, comments/notes, etc.", "_____no_output_____" ], [ "### Enter Flax - the basics ❤️\n\nBefore you jump into the Flax world I strongly recommend you check out my JAX tutorials, as I won't be covering the details of JAX here.\n\n* (Tutorial 1) ML with JAX: From Zero to Hero ([video](https://www.youtube.com/watch?v=SstuvS-tVc0), [notebook](https://github.com/gordicaleksa/get-started-with-JAX/blob/main/Tutorial_1_JAX_Zero2Hero_Colab.ipynb))\n* (Tutorial 2) ML with JAX: from Hero to Hero Pro+ ([video](https://www.youtube.com/watch?v=CQQaifxuFcs), [notebook](https://github.com/gordicaleksa/get-started-with-JAX/blob/main/Tutorial_2_JAX_HeroPro%2B_Colab.ipynb))\n* (Tutorial 3) ML with JAX: Coding a Neural Network from Scratch in Pure JAX ([video](https://www.youtube.com/watch?v=6_PqUPxRmjY), [notebook](https://github.com/gordicaleksa/get-started-with-JAX/blob/main/Tutorial_3_JAX_Neural_Network_from_Scratch_Colab.ipynb))\n\nThat out of the way - let's start with the basics!", "_____no_output_____" ] ], [ [ "# Install Flax and JAX\n!pip install --upgrade -q \"jax[cuda11_cudnn805]\" -f https://storage.googleapis.com/jax-releases/jax_releases.html\n!pip install --upgrade -q git+https://github.com/google/flax.git\n!pip install --upgrade -q git+https://github.com/deepmind/dm-haiku # Haiku is here just for comparison purposes", "_____no_output_____" ], [ "import jax\nfrom jax import lax, random, numpy as jnp\n\n# NN lib built on top of JAX developed by Google Research (Brain team)\n# Flax was \"designed for flexibility\" hence the name (Flexibility + JAX -> Flax)\nimport flax\nfrom flax.core import freeze, unfreeze\nfrom flax import linen as nn # nn notation also used in PyTorch and in Flax's older API\nfrom flax.training import train_state # a useful dataclass to keep train state\n\n# DeepMind's NN JAX lib - just for comparison purposes, we're not learning Haiku here\nimport haiku as hk \n\n# JAX optimizers - a separate lib developed by DeepMind\nimport optax\n\n# Flax doesn't have its own data loading functions - we'll be using PyTorch dataloaders\nfrom torchvision.datasets import MNIST\nfrom torch.utils.data import DataLoader\n\n# Python libs\nimport functools # useful utilities for functional programs\nfrom typing import Any, Callable, Sequence, Optional\n\n# Other important 3rd party libs\nimport numpy as np\nimport matplotlib.pyplot as plt", "_____no_output_____" ] ], [ [ "The goal of this notebook is to get you started with Flax!\n\nI'll only cover the most essential parts of Flax (and Optax) - just as much as needed to get you started with training NNs!", "_____no_output_____" ] ], [ [ "# Let's start with the simplest model possible: a single feed-forward layer (linear regression)\nmodel = nn.Dense(features=5)\n\n# All of the Flax NN 
layers inherit from the Module class (similarly to PyTorch)\nprint(nn.Dense.__bases__)", "_____no_output_____" ] ], [ [ "So how can we do inference with this simple model? 2 steps: init and apply!", "_____no_output_____" ] ], [ [ "# Step 1: init\nseed = 23\nkey1, key2 = random.split(random.PRNGKey(seed))\nx = random.normal(key1, (10,)) # create a dummy input, a 10-dimensional random vector\n\n# Initialization call - this gives us the actual model weights \n# (remember JAX handles state externally!)\ny, params = model.init_with_output(key2, x) \nprint(y)\nprint(jax.tree_map(lambda x: x.shape, params))\n\n# Note1: automatic shape inference\n# Note2: immutable structure (hence FrozenDict)\n# Note3: init_with_output if you care, for whatever reason, about the output here", "_____no_output_____" ], [ "# Step 2: apply\ny = model.apply(params, x) # this is how you run prediction in Flax, state is external!\nprint(y)", "_____no_output_____" ], [ "try:\n y = model(x) # this doesn't work anymore (bye bye PyTorch syntax)\nexcept Exception as e:\n print(e)", "_____no_output_____" ], [ "# todo: a small coding exercise - let's contrast Flax with Haiku", "_____no_output_____" ], [ "#@title Haiku vs Flax solution\nmodel = hk.transform(lambda x: hk.Linear(output_size=5)(x))\n\nseed = 23\nkey1, key2 = random.split(random.PRNGKey(seed))\nx = random.normal(key1, (10,)) # create a dummy input, a 10-dimensional random vector\n\nparams = model.init(key2, x)\nout = model.apply(params, None, x)\nprint(out)\n\nprint(hk.Linear.__bases__)", "_____no_output_____" ] ], [ [ "All of this might (initially!) be overwhelming if you're used to a stateful, object-oriented paradigm.\n\nWhat Flax offers is high performance and flexibility (similarly to JAX).\n\nHere are some [benchmark numbers](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) from the HuggingFace 
team.\n\n*(A benchmark table was embedded here as a base64-encoded image; the raw image data is omitted from this dump. See the linked HuggingFace text-classification benchmarks for the actual numbers.)*", "_____no_output_____" ], [ "Now that we have an answer to \"why should I learn Flax?\" - let's start our descent into Flaxlandia!\n\n### A toy example 🚚 - training a linear regression model\n\nWe'll first implement a pure-JAX approach and then we'll do it the Flax-way.", "_____no_output_____" ] ], [ [ "# Defining a toy dataset\nn_samples = 150\nx_dim = 2 # putting small numbers here so that we can visualize the data easily\ny_dim = 1\nnoise_amplitude = 0.1\n\n# Generate (random) ground truth W and b\n# Note: we could get W, b from a randomly initialized nn.Dense here, being closer to JAX for now \nkey, w_key, b_key = random.split(random.PRNGKey(seed), num=3)\nW = random.normal(w_key, (x_dim, y_dim)) # weight\nb = random.normal(b_key, (y_dim,)) # bias\n\n# This is the structure that Flax expects (recall from the previous section!)\ntrue_params = freeze({'params': {'bias': b, 'kernel': W}})\n\n# Generate samples with additional noise\nkey, x_key, noise_key = random.split(key, num=3)\nxs = random.normal(x_key, (n_samples, x_dim))\nys = 
jnp.dot(xs, W) + b\nys += noise_amplitude * random.normal(noise_key, (n_samples, y_dim))\nprint(f'xs shape = {xs.shape} ; ys shape = {ys.shape}')", "_____no_output_____" ], [ "# Let's visualize our data (becoming one with the data paradigm <3)\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\nassert xs.shape[-1] == 2 and ys.shape[-1] == 1 # low dimensional data so that we can plot it\nax.scatter(xs[:, 0], xs[:, 1], zs=ys)\n\n# todo: exercise - let's show that our data lies on the 2D plane embedded in 3D\n# option 1: analytic approach\n# option 2: data-driven approach", "_____no_output_____" ], [ "def make_mse_loss(xs, ys):\n \n def mse_loss(params):\n \"\"\"Gives the value of the loss on the (xs, ys) dataset for the given model (params).\"\"\"\n \n # Define the squared loss for a single pair (x,y)\n def squared_error(x, y):\n pred = model.apply(params, x)\n # Inner because 'y' could have in general more than 1 dims\n return jnp.inner(y-pred, y-pred) / 2.0\n\n # Batched version via vmap\n return jnp.mean(jax.vmap(squared_error)(xs, ys), axis=0)\n\n return jax.jit(mse_loss) # and finally we jit the result (mse_loss is a pure function)\n\nmse_loss = make_mse_loss(xs, ys)\nvalue_and_grad_fn = jax.value_and_grad(mse_loss)", "_____no_output_____" ], [ "# Let's reuse the simple feed-forward layer since it trivially implements linear regression\nmodel = nn.Dense(features=y_dim)\nparams = model.init(key, xs)\nprint(f'Initial params = {params}')\n\n# Let's set some reasonable hyperparams\nlr = 0.3\nepochs = 20\nlog_period_epoch = 5\n\nprint('-' * 50)\nfor epoch in range(epochs):\n loss, grads = value_and_grad_fn(params)\n # SGD (closer to JAX again, but we'll progressively go towards how stuff is done in Flax)\n params = jax.tree_multimap(lambda p, g: p - lr * g, params, grads)\n\n if epoch % log_period_epoch == 0:\n print(f'epoch {epoch}, loss = {loss}')\n\nprint('-' * 50)\nprint(f'Learned params = {params}')\nprint(f'Gt params = {true_params}')", "_____no_output_____" ] ], [ [ "Now let's do the same thing but this time with dedicated optimizers!\n\nEnter DeepMind's optax! ❤️🔥", "_____no_output_____" ] ], [ [ "opt_sgd = optax.sgd(learning_rate=lr)\nopt_state = opt_sgd.init(params) # always the same pattern - handling state externally\nprint(opt_state)\n# todo: exercise - compare Adam's and SGD's states", "_____no_output_____" ], [ "params = model.init(key, xs) # let's start with fresh params again\n\nfor epoch in range(epochs):\n loss, grads = value_and_grad_fn(params)\n updates, opt_state = opt_sgd.update(grads, opt_state) # arbitrary optim logic!\n params = optax.apply_updates(params, updates)\n\n if epoch % log_period_epoch == 0:\n print(f'epoch {epoch}, loss = {loss}')\n\n# Note 1: as expected we get the same loss values\n# Note 2: we'll later see more concise ways to handle all of these state components (hint: TrainState)", "_____no_output_____" ] ], [ [ "In this toy SGD example Optax may not seem that useful but it's very powerful.\n\nYou can build arbitrary optimizers with arbitrary hyperparam schedules, chaining, param freezing, etc. 
You can check the [official docs here](https://optax.readthedocs.io/en/latest/).", "_____no_output_____" ] ], [ [ "#@title Optax Advanced Examples\n# This cell won't \"compile\" (no ml_collections package) and serves just as an example\n\n# Example from Flax (ImageNet example)\n# https://github.com/google/flax/blob/main/examples/imagenet/train.py#L88\ndef create_learning_rate_fn(\n config: ml_collections.ConfigDict,\n base_learning_rate: float,\n steps_per_epoch: int):\n \"\"\"Create learning rate schedule.\"\"\"\n warmup_fn = optax.linear_schedule(\n init_value=0., end_value=base_learning_rate,\n transition_steps=config.warmup_epochs * steps_per_epoch)\n cosine_epochs = max(config.num_epochs - config.warmup_epochs, 1)\n cosine_fn = optax.cosine_decay_schedule(\n init_value=base_learning_rate,\n decay_steps=cosine_epochs * steps_per_epoch)\n schedule_fn = optax.join_schedules(\n schedules=[warmup_fn, cosine_fn],\n boundaries=[config.warmup_epochs * steps_per_epoch])\n return schedule_fn\n\ntx = optax.sgd(\n learning_rate=learning_rate_fn,\n momentum=config.momentum,\n nesterov=True,\n)\n\n# Example from Haiku (ImageNet example)\n# https://github.com/deepmind/dm-haiku/blob/main/examples/imagenet/train.py#L116\ndef make_optimizer() -> optax.GradientTransformation:\n \"\"\"SGD with nesterov momentum and a custom lr schedule.\"\"\"\n return optax.chain(\n optax.trace(\n decay=FLAGS.optimizer_momentum,\n nesterov=FLAGS.optimizer_use_nesterov),\n optax.scale_by_schedule(lr_schedule), optax.scale(-1))", "_____no_output_____" ] ], [ [ "Now let's go beyond these extremely simple models!", "_____no_output_____" ], [ "### Creating custom models ⭐", "_____no_output_____" ] ], [ [ "class MLP(nn.Module):\n num_neurons_per_layer: Sequence[int] # data field (nn.Module is Python's dataclass)\n\n def setup(self): # because dataclass is implicitly using the __init__ function... :')\n self.layers = [nn.Dense(n) for n in self.num_neurons_per_layer]\n\n def __call__(self, x):\n activation = x\n for i, layer in enumerate(self.layers):\n activation = layer(activation)\n if i != len(self.layers) - 1:\n activation = nn.relu(activation)\n return activation\n\nx_key, init_key = random.split(random.PRNGKey(seed))\n\nmodel = MLP(num_neurons_per_layer=[16, 8, 1]) # define an MLP model\nx = random.uniform(x_key, (4,4)) # dummy input\nparams = model.init(init_key, x) # initialize via init\ny = model.apply(params, x) # do a forward pass via apply\n\nprint(jax.tree_map(jnp.shape, params))\nprint(f'Output: {y}')\n\n# todo: exercise - use @nn.compact pattern instead\n# todo: check out https://realpython.com/python-data-classes/", "_____no_output_____" ] ], [ [ "Great! 
\n\nNow that we know how to build more complex models let's dive deeper and understand how the 'nn.Dense' module is designed itself.\n\n#### Introducing \"param\"", "_____no_output_____" ] ], [ [ "class MyDenseImp(nn.Module):\n num_neurons: int\n weight_init: Callable = nn.initializers.lecun_normal()\n bias_init: Callable = nn.initializers.zeros\n\n @nn.compact\n def __call__(self, x):\n weight = self.param('weight', # parameter name (as it will appear in the FrozenDict)\n self.weight_init, # initialization function, RNG passed implicitly through init fn\n (x.shape[-1], self.num_neurons)) # shape info\n bias = self.param('bias', self.bias_init, (self.num_neurons,))\n\n return jnp.dot(x, weight) + bias\n\nx_key, init_key = random.split(random.PRNGKey(seed))\n\nmodel = MyDenseImp(num_neurons=3) # initialize the model\nx = random.uniform(x_key, (4,4)) # dummy input\nparams = model.init(init_key, x) # initialize via init\ny = model.apply(params, x) # do a forward pass via apply\n\nprint(jax.tree_map(jnp.shape, params))\nprint(f'Output: {y}')\n\n# todo: exercise - check out the source code:\n# https://github.com/google/flax/blob/main/flax/linen/linear.py\n# https://github.com/google/jax/blob/main/jax/_src/nn/initializers.py#L150 <- to see why lecun_normal() vs zeros (no brackets)", "_____no_output_____" ], [ "from inspect import signature\n\n# You can see it expects a PRNG key and it is passed implicitly through the init fn (same for zeros)\nprint(signature(nn.initializers.lecun_normal()))", "_____no_output_____" ] ], [ [ "So far we've only seen **trainable** params. \n\nML models oftentimes have variables which are part of the state but are not optimized via gradient descent.\n\nLet's see how we can handle them using a simple (and contrived) example!\n\n#### Introducing \"variable\"\n\n*Note on terminology: variable is a broader term and it includes both params (trainable variables) as well as non-trainable vars.*", "_____no_output_____" ] ], [ [ "class BiasAdderWithRunningMean(nn.Module):\n decay: float = 0.99\n\n @nn.compact\n def __call__(self, x):\n is_initialized = self.has_variable('batch_stats', 'ema')\n\n # 'batch_stats' is not an arbitrary name!\n # Flax uses that name in its implementation of BatchNorm (hard-coded, probably not the best of designs?)\n ema = self.variable('batch_stats', 'ema', lambda shape: jnp.zeros(shape), x.shape[1:])\n\n # self.param will by default add this variable to 'params' collection (vs 'batch_stats' above)\n # Again some idiosyncrasies here: we need to pass a key even though we don't actually use it...\n bias = self.param('bias', lambda key, shape: jnp.zeros(shape), x.shape[1:])\n\n if is_initialized:\n # self.variable returns a reference hence .value\n ema.value = self.decay * ema.value + (1.0 - self.decay) * jnp.mean(x, axis=0, keepdims=True)\n\n return x - ema.value + bias\n\nx_key, init_key = random.split(random.PRNGKey(seed))\n\nmodel = BiasAdderWithRunningMean()\nx = random.uniform(x_key, (10,4)) # dummy input\nvariables = model.init(init_key, x)\nprint(f'Multiple collections = {variables}') # we can now see a new collection 'batch_stats'\n\n# We have to use mutable since regular params are not modified during the forward\n# pass, but these variables are. 
We can't keep state internally (because JAX) so we have to return it.\ny, updated_non_trainable_params = model.apply(variables, x, mutable=['batch_stats'])\nprint(updated_non_trainable_params)", "_____no_output_____" ], [ "# Let's see how we could train such a model!\ndef update_step(opt, apply_fn, x, opt_state, params, non_trainable_params):\n\n def loss_fn(params):\n y, updated_non_trainable_params = apply_fn(\n {'params': params, **non_trainable_params}, \n x, mutable=list(non_trainable_params.keys()))\n \n loss = ((x - y) ** 2).sum() # not doing anything really, just for the demo purpose\n\n return loss, updated_non_trainable_params\n\n (loss, non_trainable_params), grads = jax.value_and_grad(loss_fn, has_aux=True)(params)\n updates, opt_state = opt.update(grads, opt_state)\n params = optax.apply_updates(params, updates)\n \n return opt_state, params, non_trainable_params # all of these represent the state - ugly, for now\n\nmodel = BiasAdderWithRunningMean()\nx = jnp.ones((10,4)) # dummy input, using ones because it's easier to see what's going on\n\nvariables = model.init(random.PRNGKey(seed), x)\nnon_trainable_params, params = variables.pop('params')\ndel variables # delete variables to avoid wasting resources (this pattern is used in the official code)\n\nsgd_opt = optax.sgd(learning_rate=0.1) # originally you'll see them use the 'tx' naming (from opTaX)\nopt_state = sgd_opt.init(params)\n\nfor _ in range(3):\n # We'll later see how TrainState abstraction will make this step much more elegant!\n opt_state, params, non_trainable_params = update_step(sgd_opt, model.apply, x, opt_state, params, non_trainable_params)\n print(non_trainable_params)", "_____no_output_____" ] ], [ [ "Let's go a level up in abstraction again now that we understand params and variables!\n\nCertain layers like BatchNorm will use variables in the background.\n\nLet's see a last example that is conceptually as complicated as it gets when it comes to Flax's idiosyncrasies, and high-level at the same time.", "_____no_output_____" ] ], [ [ "class DDNBlock(nn.Module):\n \"\"\"Dense, dropout + batchnorm combo.\n\n Contains trainable variables (params), non-trainable variables (batch stats),\n and stochasticity in the forward pass (because of dropout).\n \"\"\"\n num_neurons: int\n training: bool\n\n @nn.compact\n def __call__(self, x):\n x = nn.Dense(self.num_neurons)(x)\n x = nn.Dropout(rate=0.5, deterministic=not self.training)(x)\n x = nn.BatchNorm(use_running_average=not self.training)(x)\n return x\n\nkey1, key2, key3, key4 = random.split(random.PRNGKey(seed), 4)\n\nmodel = DDNBlock(num_neurons=3, training=True)\nx = random.uniform(key1, (3,4,4))\n\n# New: because of Dropout we now have to include its unique key - kinda weird, but you get used to it\nvariables = model.init({'params': key2, 'dropout': key3}, x)\nprint(variables)\n\n# And same here, everything else remains the same as the previous example\ny, non_trainable_params = model.apply(variables, x, rngs={'dropout': key4}, mutable=['batch_stats'])\n\n# Let's run these model variables during \"evaluation\":\neval_model = DDNBlock(num_neurons=3, training=False)\n# Because training=False we don't have stochasticity in the forward pass, nor do we update the stats\ny = eval_model.apply(variables, x)", "_____no_output_____" ] ], [ [ "### A fully-fledged CNN on MNIST example in Flax! 
💥\n\nModified the official MNIST example here: https://github.com/google/flax/tree/main/examples/mnist\n\nWe'll be using PyTorch dataloading instead of TFDS.\n\nLet's start by defining a model:", "_____no_output_____" ] ], [ [ "class CNN(nn.Module): # lots of hardcoding, but it serves a purpose for a simple demo\n @nn.compact\n def __call__(self, x):\n x = nn.Conv(features=32, kernel_size=(3, 3))(x)\n x = nn.relu(x)\n x = nn.avg_pool(x, window_shape=(2, 2), strides=(2, 2))\n x = nn.Conv(features=64, kernel_size=(3, 3))(x)\n x = nn.relu(x)\n x = nn.avg_pool(x, window_shape=(2, 2), strides=(2, 2))\n x = x.reshape((x.shape[0], -1)) # flatten\n x = nn.Dense(features=256)(x)\n x = nn.relu(x)\n x = nn.Dense(features=10)(x)\n x = nn.log_softmax(x)\n return x", "_____no_output_____" ] ], [ [ "Let's add the data loading support in PyTorch!\n\nI'll be reusing code from [tutorial #3](https://github.com/gordicaleksa/get-started-with-JAX/blob/main/Tutorial_3_JAX_Neural_Network_from_Scratch_Colab.ipynb):", "_____no_output_____" ] ], [ [ "def custom_transform(x):\n # A couple of modifications here compared to tutorial #3 since we're using a CNN\n # Input: (28, 28) uint8 [0, 255] torch.Tensor, Output: (28, 28, 1) float32 [0, 1] np array\n return np.expand_dims(np.array(x, dtype=np.float32), axis=2) / 255.\n\ndef custom_collate_fn(batch):\n \"\"\"Provides us with batches of numpy arrays and not PyTorch's tensors.\"\"\"\n transposed_data = list(zip(*batch))\n\n labels = np.array(transposed_data[1])\n imgs = np.stack(transposed_data[0])\n\n return imgs, labels\n\nmnist_img_size = (28, 28, 1)\nbatch_size = 128\n\ntrain_dataset = MNIST(root='train_mnist', train=True, download=True, transform=custom_transform)\ntest_dataset = MNIST(root='test_mnist', train=False, download=True, transform=custom_transform)\n\ntrain_loader = DataLoader(train_dataset, batch_size, shuffle=True, collate_fn=custom_collate_fn, drop_last=True)\ntest_loader = DataLoader(test_dataset, batch_size, shuffle=False, collate_fn=custom_collate_fn, drop_last=True)\n\n# optimization - loading the whole dataset into memory\ntrain_images = jnp.array(train_dataset.data)\ntrain_lbls = jnp.array(train_dataset.targets)\n\n# np.expand_dims is to convert shape from (10000, 28, 28) -> (10000, 28, 28, 1)\n# We don't have to do this for training images because custom_transform does it for us.\n# NB: these in-memory arrays stay as raw uint8 in [0, 255] (there is no /255. like in custom_transform),\n# which is why the test loss printed further down sits on a much larger scale than the train loss.\ntest_images = np.expand_dims(jnp.array(test_dataset.data), axis=3)\ntest_lbls = jnp.array(test_dataset.targets)", "_____no_output_____" ], [ "# Visualize a single image\nimgs, lbls = next(iter(test_loader))\nimg = imgs[0].reshape(mnist_img_size)[:, :, 0]\ngt_lbl = lbls[0]\n\nprint(gt_lbl)\nplt.imshow(img); plt.show()", "7\n" ] ], [ [ "Great - we have our data pipeline ready and the model architecture defined.\n\nNow let's define core training functions:", "_____no_output_____" ] ], [ [ "@jax.jit\ndef train_step(state, imgs, gt_labels):\n def loss_fn(params):\n logits = CNN().apply({'params': params}, imgs)\n one_hot_gt_labels = jax.nn.one_hot(gt_labels, num_classes=10)\n loss = -jnp.mean(jnp.sum(one_hot_gt_labels * logits, axis=-1))\n return loss, logits\n \n (_, logits), grads = jax.value_and_grad(loss_fn, has_aux=True)(state.params)\n state = state.apply_gradients(grads=grads) # this is the whole update now! concise!\n metrics = compute_metrics(logits=logits, gt_labels=gt_labels) # duplicating loss calculation but it's a bit cleaner\n return state, metrics\n\n@jax.jit\ndef eval_step(state, imgs, gt_labels):\n logits = CNN().apply({'params': state.params}, imgs)\n return compute_metrics(logits=logits, gt_labels=gt_labels)", "_____no_output_____" ], [ "def train_one_epoch(state, dataloader, epoch):\n \"\"\"Train for 1 epoch on the training set.\"\"\"\n batch_metrics = []\n for cnt, (imgs, labels) in enumerate(dataloader):\n state, metrics = train_step(state, imgs, labels)\n batch_metrics.append(metrics)\n\n # Aggregate the metrics\n batch_metrics_np = jax.device_get(batch_metrics) # pull from the accelerator onto host (CPU)\n epoch_metrics_np = {\n k: np.mean([metrics[k] for metrics in batch_metrics_np])\n for k in batch_metrics_np[0]\n }\n\n return state, epoch_metrics_np\n\ndef evaluate_model(state, test_imgs, test_lbls):\n \"\"\"Evaluate on the validation set.\"\"\"\n metrics = eval_step(state, test_imgs, test_lbls)\n metrics = jax.device_get(metrics) # pull from the accelerator onto host (CPU)\n metrics = jax.tree_map(lambda x: x.item(), metrics) # np.ndarray -> scalar\n return metrics", "_____no_output_____" ], [ "# This one will keep things nice and tidy compared to our previous examples\ndef create_train_state(key, learning_rate, momentum):\n cnn = CNN()\n params = cnn.init(key, jnp.ones([1, *mnist_img_size]))['params']\n sgd_opt = optax.sgd(learning_rate, momentum)\n # TrainState is a simple built-in wrapper class that makes things a bit cleaner\n return train_state.TrainState.create(apply_fn=cnn.apply, params=params, tx=sgd_opt)\n\ndef compute_metrics(*, logits, gt_labels):\n one_hot_gt_labels = jax.nn.one_hot(gt_labels, num_classes=10)\n\n loss = -jnp.mean(jnp.sum(one_hot_gt_labels * logits, axis=-1))\n accuracy = jnp.mean(jnp.argmax(logits, -1) == gt_labels)\n\n metrics = {\n 'loss': loss,\n 'accuracy': accuracy,\n }\n return metrics", "_____no_output_____" ], [ "# Finally let's define the high-level training/val loops\nseed = 0 # needless to say these should be in a config or defined as flags\nlearning_rate = 0.1\nmomentum = 0.9\nnum_epochs = 2\nbatch_size = 32 # note: unused - the dataloaders above were already built with batch_size = 128\n\ntrain_state = create_train_state(jax.random.PRNGKey(seed), learning_rate, momentum)\n\nfor epoch in range(1, num_epochs + 1):\n train_state, train_metrics = train_one_epoch(train_state, train_loader, epoch)\n print(f\"Train epoch: {epoch}, loss: {train_metrics['loss']}, accuracy: {train_metrics['accuracy'] * 100}\")\n\n test_metrics = evaluate_model(train_state, test_images, test_lbls)\n print(f\"Test epoch: {epoch}, loss: {test_metrics['loss']}, accuracy: {test_metrics['accuracy'] * 100}\")\n\n# todo: exercise - how would we go about adding dropout? What about BatchNorm? What would have to change?", "Train epoch: 1, loss: 0.2903152406215668, accuracy: 91.86198115348816\nTest epoch: 1, loss: 44.35035705566406, accuracy: 94.77999806404114\nTrain epoch: 2, loss: 0.058339256793260574, accuracy: 98.23551177978516\nTest epoch: 2, loss: 17.13631820678711, accuracy: 97.33999967575073\n" ] ], [ [ "Bonus point: a walk-through of the \"non-toy\", distributed ImageNet CNN training example.\n\nHead over to https://github.com/google/flax/tree/main/examples/imagenet\n\nYou'll keep seeing the same pattern/structure in all official Flax examples.", "_____no_output_____" ], [ "### Further learning resources 📚\n\nAside from the [official docs](https://flax.readthedocs.io/en/latest/) and [examples](https://github.com/google/flax/tree/main/examples) I found [HuggingFace's Flax examples](https://github.com/huggingface/transformers/tree/master/examples/flax) and the resources from their [\"community week\"](https://github.com/huggingface/transformers/tree/master/examples/research_projects/jax-projects) useful as well.\n\nFinally, [source code](https://github.com/google/flax) is also your friend, as the library is still evolving.", "_____no_output_____" ], [ "### Connect with me ❤️\n\nLast but not least, I regularly post AI-related stuff (paper summaries, AI news, etc.) on my Twitter/LinkedIn. We also have an ever-increasing Discord community (1600+ members at the time of writing this). If you care about any of these, I encourage you to connect! \n\nSocial: <br/>\n💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ <br/>\n🐦 Twitter - https://twitter.com/gordic_aleksa <br/>\n👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE <br/>\n🙏 Patreon - https://www.patreon.com/theaiepiphany <br/>\n\nContent: <br/>\n📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ <br/>\n📚 Medium - https://gordicaleksa.medium.com/ <br/>\n💻 GitHub - https://github.com/gordicaleksa <br/>\n📢 AI Newsletter - https://aiepiphany.substack.com/ <br/>", "_____no_output_____" ] ] ]
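Returning to the dropout/BatchNorm exercise posed in the training-loop cell above: here is a minimal sketch of one possible answer. It is not part of the original notebook; the module name `CNNWithDropoutBN`, the dropout `rate=0.5`, and the exact layer placement are illustrative assumptions. The two changes that matter are that `nn.Dropout` needs its own RNG stream (passed via `rngs={'dropout': ...}`), and `nn.BatchNorm` introduces a mutable `batch_stats` variable collection that `apply` has to be told about:

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class CNNWithDropoutBN(nn.Module):  # hypothetical variant of the CNN above, for illustration
    @nn.compact
    def __call__(self, x, train: bool = True):
        x = nn.Conv(features=32, kernel_size=(3, 3))(x)
        x = nn.BatchNorm(use_running_average=not train)(x)  # adds a 'batch_stats' collection
        x = nn.relu(x)
        x = nn.avg_pool(x, window_shape=(2, 2), strides=(2, 2))
        x = nn.Conv(features=64, kernel_size=(3, 3))(x)
        x = nn.BatchNorm(use_running_average=not train)(x)
        x = nn.relu(x)
        x = nn.avg_pool(x, window_shape=(2, 2), strides=(2, 2))
        x = x.reshape((x.shape[0], -1))  # flatten
        x = nn.Dense(features=256)(x)
        x = nn.relu(x)
        x = nn.Dropout(rate=0.5, deterministic=not train)(x)  # active only in training mode
        x = nn.Dense(features=10)(x)
        return nn.log_softmax(x)

model = CNNWithDropoutBN()
variables = model.init(jax.random.PRNGKey(0), jnp.ones([1, 28, 28, 1]), train=False)

# Training-mode forward pass: dropout consumes an RNG, BatchNorm mutates batch_stats
logits, updates = model.apply(
    variables, jnp.ones([8, 28, 28, 1]), train=True,
    rngs={'dropout': jax.random.PRNGKey(1)},
    mutable=['batch_stats'])
print(logits.shape)  # (8, 10); updates['batch_stats'] holds the new running statistics
```

The training loop would then have to carry `updates['batch_stats']` between steps, e.g. by subclassing `TrainState` with a `batch_stats` field; that is essentially the pattern the official ImageNet example mentioned above uses.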
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ] ]
4a604c0b37d35b4fe87c2cdc7c1a507064aa6fe6
42,194
ipynb
Jupyter Notebook
QSVM.ipynb
PCesteban/QMLforDDoS
18c5eab4fd0a2c8f8737ec53fe6a271317043695
[ "Apache-2.0" ]
null
null
null
QSVM.ipynb
PCesteban/QMLforDDoS
18c5eab4fd0a2c8f8737ec53fe6a271317043695
[ "Apache-2.0" ]
null
null
null
QSVM.ipynb
PCesteban/QMLforDDoS
18c5eab4fd0a2c8f8737ec53fe6a271317043695
[ "Apache-2.0" ]
null
null
null
140.179402
20,032
0.770299
[ [ [ "import numpy as np\nimport scipy\nfrom scipy.linalg import expm\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom sklearn import datasets\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import StandardScaler, MinMaxScaler\nfrom sklearn.decomposition import PCA", "_____no_output_____" ], [ "import pandas as pd\n\ndef SSDP_DDoS(training_size, test_size, n, PLOT_DATA=True):\n class_labels = [r'BENING', r'DrDoS_SSDP']\n \n data = pd.read_csv('DrDoS_SSDP_features_removed.csv', skiprows=[i for i in range(1,141550)], skipfooter=141547, engine=\"python\")\n x = StandardScaler().fit_transform(np.array(data.drop(columns=['Label'])))\n y = np.array(data['Label'].astype('category').cat.codes.astype(int))\n \n X_train, X_test, Y_train, Y_test = train_test_split(x, y, stratify=y, test_size=0.3, random_state=109)\n \n pca = PCA(n_components=n).fit(X_train)\n X_train = pca.transform(X_train)\n X_test = pca.transform(X_test)\n \n samples = np.append(X_train, X_test, axis=0)\n minmax_scale = MinMaxScaler((-1, 1)).fit(samples)\n X_train = minmax_scale.transform(X_train)\n X_test = minmax_scale.transform(X_test)\n\n training_input = {key: (X_train[Y_train == k, :])[:training_size] for k, key in enumerate(class_labels)}\n test_input = {key: (X_test[Y_test == k, :])[:test_size] for k, key in enumerate(class_labels)}\n \n if PLOT_DATA:\n for k in range(0, 2):\n x_axis_data = X_train[Y_train == k, 0][:training_size]\n y_axis_data = X_train[Y_train == k, 1][:training_size]\n label = 'DDoS' if k == 1 else 'Benign'\n plt.scatter(x_axis_data, y_axis_data, label=label)\n\n plt.title(\"DDoS_SSDP Dataset (Dimensionality Reduced With PCA)\")\n plt.legend()\n plt.show()\n \n\n return X_train, training_input, test_input, class_labels", "_____no_output_____" ], [ "from qiskit.aqua.utils import split_dataset_to_data_and_labels\n\nn = 2 # How many features to use (dimensionality)\ntraining_dataset_size = 1033\ntesting_dataset_size = 443\n\nsample_Total, training_input, test_input, class_labels = SSDP_DDoS(training_dataset_size, testing_dataset_size, n)\n\ndatapoints, class_to_label = split_dataset_to_data_and_labels(test_input)\nprint(class_to_label)", "_____no_output_____" ], [ "%load_ext memory_profiler", "_____no_output_____" ], [ "from qiskit import BasicAer\nfrom qiskit.ml.datasets import *\nfrom qiskit.circuit.library import ZZFeatureMap\nfrom qiskit.aqua.utils import split_dataset_to_data_and_labels, map_label_to_class_name\nfrom qiskit.aqua import QuantumInstance\nfrom qiskit.aqua.algorithms import QSVM\n\nseed = 10598\n\n\nfeature_map = ZZFeatureMap(feature_dimension=2, reps=2, entanglement='linear')\nqsvm = QSVM(feature_map, training_input, test_input, datapoints[0])\n\nbackend = BasicAer.get_backend('statevector_simulator')\nquantum_instance = QuantumInstance(backend, shots=1024, seed_simulator=seed, seed_transpiler=seed)", "_____no_output_____" ], [ "%%time\n%memit result2 = qsvm.run(quantum_instance)", "peak memory: 5874.83 MiB, increment: 5528.37 MiB\nCPU times: user 2d 2h 49min 10s, sys: 2h 16min 24s, total: 2d 5h 5min 34s\nWall time: 2d 6h 49min 41s\n" ], [ "print(\"ground truth: {}\".format(datapoints[1]))\nprint(\"prediction: {}\".format(result2['predicted_labels']))\nprint(\"predicted class: {}\".format(result2['predicted_classes']))\nprint(\"accuracy: {}\".format(result2['testing_accuracy']))", "ground truth: [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1]\nprediction: [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1]\npredicted class: ['BENING', 
'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'DrDoS_SSDP', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 
'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'BENING', 'DrDoS_SSDP', 'BENING', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'BENING', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 
'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP', 'DrDoS_SSDP']\naccuracy: 0.9966101694915255\n" ], [ 
"from sklearn.metrics import classification_report, recall_score\nfrom sklearn.metrics import f1_score, accuracy_score, precision_score, make_scorer\n\n#Metrics\nclassification = classification_report(datapoints[1], result2['predicted_labels'])\nconfusion = confusion_matrix(datapoints[1], result2['predicted_labels'])\n \n # Accuracy\naccuracy = round(accuracy_score(datapoints[1], result2['predicted_labels']),5)\n \n # Recall\nrecall = round(recall_score(datapoints[1], result2['predicted_labels'], average='macro')*100,5)\n \n # Precision\nprecision = round(precision_score(datapoints[1], result2['predicted_labels'], average='weighted')*100,5)\n \n # F1\nf1 = round(f1_score(datapoints[1], result2['predicted_labels'], average='weighted')*100,5)", "_____no_output_____" ], [ "print(accuracy)\nprint(recall)\nprint(precision)\nprint(f1)\nprint(1-accuracy)", "0.99661\n99.66089\n99.66127\n99.66102\n0.003390000000000004\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a60516852bf2fcc1b0cc7de5bd99c44b25d662d
276,160
ipynb
Jupyter Notebook
kursovaya/machine_data_analysis_1.ipynb
TimBrighton/EmptyJupyterNoteebookForBinder
4c1e5045581dfbcf7645609b1ff994f6a746cd4c
[ "MIT" ]
null
null
null
kursovaya/machine_data_analysis_1.ipynb
TimBrighton/EmptyJupyterNoteebookForBinder
4c1e5045581dfbcf7645609b1ff994f6a746cd4c
[ "MIT" ]
null
null
null
kursovaya/machine_data_analysis_1.ipynb
TimBrighton/EmptyJupyterNoteebookForBinder
4c1e5045581dfbcf7645609b1ff994f6a746cd4c
[ "MIT" ]
null
null
null
228.231405
54,192
0.862417
[ [ [ "# Load dependencies\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport keras\nfrom keras.models import Sequential\nfrom keras.layers import Dense\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import mean_squared_error\nfrom scipy import stats\nfrom statsmodels.graphics.gofplots import qqplot", "Using TensorFlow backend.\n" ], [ "# Load the prepared dataset\ndataset = pd.read_csv('prepared_data.csv')\ndataset.head(10)", "_____no_output_____" ], [ "X = dataset.drop(columns=['age']).values\nY = dataset['age'].values", "_____no_output_____" ], [ "X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.2)", "_____no_output_____" ], [ "# Set the parameters of the neural network structure.\n\n\ninput_layer_size = 12\n\n\nfirst_hidden_layer_size = 10\nsecond_hidden_layer_size = 15\n\noutput_layer_size = 1\n\n\nepochs_number = 200\nbatch_size = 16", "_____no_output_____" ], [ "# Create a feedforward neural network; for now it is empty, i.e. it contains no layers or neurons.\nmodel = Sequential()\n\n# Input layer and first hidden layer, activation function - ReLU\nmodel.add(Dense(first_hidden_layer_size, input_dim=input_layer_size, activation='relu'))\n\n# Second hidden layer, also ReLU\nmodel.add(Dense(second_hidden_layer_size, activation='relu'))\n\n\n# Output layer: a single linear neuron, since this is a regression task\nmodel.add(Dense(output_layer_size, activation='linear'))", "_____no_output_____" ], [ "model.summary()", "Model: \"sequential_1\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense_1 (Dense) (None, 10) 130 \n_________________________________________________________________\ndense_2 (Dense) (None, 15) 165 \n_________________________________________________________________\ndense_3 (Dense) (None, 1) 16 \n=================================================================\nTotal params: 311\nTrainable params: 311\nNon-trainable params: 0\n_________________________________________________________________\n" ], [ "# Configure the neural network.\nmodel.compile(loss='mean_squared_error', optimizer='adam', metrics=['mean_absolute_error', 'mean_squared_error'])", "_____no_output_____" ], [ "# Train the neural network.\nhistory = model.fit(X_train, Y_train, validation_data = (X_test,Y_test), epochs=epochs_number, batch_size=batch_size)", "Train on 242 samples, validate on 61 samples\nEpoch 1/200\n242/242 [==============================] - 1s 3ms/step - loss: 1.8921 - mean_absolute_error: 1.1543 - mean_squared_error: 1.8921 - val_loss: 1.5856 - val_mean_absolute_error: 1.0687 - val_mean_squared_error: 1.5856\nEpoch 2/200\n242/242 [==============================] - 0s 1ms/step - loss: 1.5387 - mean_absolute_error: 1.0438 - mean_squared_error: 1.5387 - val_loss: 1.2853 - val_mean_absolute_error: 0.9514 - val_mean_squared_error: 1.2853\nEpoch 3/200\n242/242 [==============================] - 0s 804us/step - loss: 1.3205 - mean_absolute_error: 0.9708 - mean_squared_error: 1.3205 - val_loss: 1.1364 - val_mean_absolute_error: 0.8892 - val_mean_squared_error: 1.1364\nEpoch 4/200\n242/242 [==============================] - 0s 451us/step - loss: 1.2033 - mean_absolute_error: 0.9284 - mean_squared_error: 1.2033 - val_loss: 1.0224 - val_mean_absolute_error: 0.8399 - val_mean_squared_error: 1.0224\nEpoch 5/200\n242/242 [==============================] - 0s 484us/step - loss: 1.1190 - 
mean_absolute_error: 0.8959 - mean_squared_error: 1.1190 - val_loss: 0.9561 - val_mean_absolute_error: 0.8088 - val_mean_squared_error: 0.9561\nEpoch 6/200\n242/242 [==============================] - 0s 438us/step - loss: 1.0628 - mean_absolute_error: 0.8735 - mean_squared_error: 1.0628 - val_loss: 0.8978 - val_mean_absolute_error: 0.7799 - val_mean_squared_error: 0.8978\nEpoch 7/200\n242/242 [==============================] - 0s 754us/step - loss: 1.0124 - mean_absolute_error: 0.8489 - mean_squared_error: 1.0124 - val_loss: 0.8418 - val_mean_absolute_error: 0.7482 - val_mean_squared_error: 0.8418\nEpoch 8/200\n242/242 [==============================] - 0s 461us/step - loss: 0.9620 - mean_absolute_error: 0.8262 - mean_squared_error: 0.9620 - val_loss: 0.8060 - val_mean_absolute_error: 0.7228 - val_mean_squared_error: 0.8060\nEpoch 9/200\n242/242 [==============================] - 0s 729us/step - loss: 0.9210 - mean_absolute_error: 0.8049 - mean_squared_error: 0.9210 - val_loss: 0.7663 - val_mean_absolute_error: 0.6929 - val_mean_squared_error: 0.7663\nEpoch 10/200\n242/242 [==============================] - 0s 454us/step - loss: 0.8856 - mean_absolute_error: 0.7835 - mean_squared_error: 0.8856 - val_loss: 0.7352 - val_mean_absolute_error: 0.6667 - val_mean_squared_error: 0.7352\nEpoch 11/200\n242/242 [==============================] - 0s 798us/step - loss: 0.8595 - mean_absolute_error: 0.7687 - mean_squared_error: 0.8595 - val_loss: 0.7187 - val_mean_absolute_error: 0.6502 - val_mean_squared_error: 0.7187\nEpoch 12/200\n242/242 [==============================] - 0s 784us/step - loss: 0.8378 - mean_absolute_error: 0.7545 - mean_squared_error: 0.8378 - val_loss: 0.6984 - val_mean_absolute_error: 0.6396 - val_mean_squared_error: 0.6984\nEpoch 13/200\n242/242 [==============================] - 0s 457us/step - loss: 0.8253 - mean_absolute_error: 0.7452 - mean_squared_error: 0.8253 - val_loss: 0.6824 - val_mean_absolute_error: 0.6345 - val_mean_squared_error: 0.6824\nEpoch 14/200\n242/242 [==============================] - 0s 426us/step - loss: 0.8133 - mean_absolute_error: 0.7375 - mean_squared_error: 0.8133 - val_loss: 0.6787 - val_mean_absolute_error: 0.6321 - val_mean_squared_error: 0.6787\nEpoch 15/200\n242/242 [==============================] - 0s 724us/step - loss: 0.8042 - mean_absolute_error: 0.7330 - mean_squared_error: 0.8042 - val_loss: 0.6683 - val_mean_absolute_error: 0.6291 - val_mean_squared_error: 0.6683\nEpoch 16/200\n242/242 [==============================] - 0s 442us/step - loss: 0.7966 - mean_absolute_error: 0.7294 - mean_squared_error: 0.7966 - val_loss: 0.6651 - val_mean_absolute_error: 0.6283 - val_mean_squared_error: 0.6651\nEpoch 17/200\n242/242 [==============================] - 0s 429us/step - loss: 0.7871 - mean_absolute_error: 0.7221 - mean_squared_error: 0.7871 - val_loss: 0.6527 - val_mean_absolute_error: 0.6239 - val_mean_squared_error: 0.6527\nEpoch 18/200\n242/242 [==============================] - 0s 421us/step - loss: 0.7805 - mean_absolute_error: 0.7161 - mean_squared_error: 0.7805 - val_loss: 0.6328 - val_mean_absolute_error: 0.6171 - val_mean_squared_error: 0.6328\nEpoch 19/200\n242/242 [==============================] - 0s 406us/step - loss: 0.7745 - mean_absolute_error: 0.7151 - mean_squared_error: 0.7745 - val_loss: 0.6329 - val_mean_absolute_error: 0.6166 - val_mean_squared_error: 0.6329\nEpoch 20/200\n242/242 [==============================] - 0s 417us/step - loss: 0.7655 - mean_absolute_error: 0.7113 - mean_squared_error: 0.7655 - val_loss: 0.6218 - 
val_mean_absolute_error: 0.6131 - val_mean_squared_error: 0.6218\nEpoch 21/200\n242/242 [==============================] - 0s 417us/step - loss: 0.7596 - mean_absolute_error: 0.7072 - mean_squared_error: 0.7596 - val_loss: 0.6130 - val_mean_absolute_error: 0.6094 - val_mean_squared_error: 0.6130\nEpoch 22/200\n242/242 [==============================] - 0s 410us/step - loss: 0.7580 - mean_absolute_error: 0.7066 - mean_squared_error: 0.7580 - val_loss: 0.6070 - val_mean_absolute_error: 0.6076 - val_mean_squared_error: 0.6070\nEpoch 23/200\n242/242 [==============================] - 0s 426us/step - loss: 0.7504 - mean_absolute_error: 0.6997 - mean_squared_error: 0.7504 - val_loss: 0.5878 - val_mean_absolute_error: 0.6005 - val_mean_squared_error: 0.5878\nEpoch 24/200\n242/242 [==============================] - 0s 425us/step - loss: 0.7442 - mean_absolute_error: 0.7001 - mean_squared_error: 0.7442 - val_loss: 0.5999 - val_mean_absolute_error: 0.6032 - val_mean_squared_error: 0.5999\nEpoch 25/200\n242/242 [==============================] - 0s 403us/step - loss: 0.7422 - mean_absolute_error: 0.6967 - mean_squared_error: 0.7422 - val_loss: 0.5733 - val_mean_absolute_error: 0.5926 - val_mean_squared_error: 0.5733\nEpoch 26/200\n242/242 [==============================] - 0s 355us/step - loss: 0.7345 - mean_absolute_error: 0.6954 - mean_squared_error: 0.7345 - val_loss: 0.5875 - val_mean_absolute_error: 0.5983 - val_mean_squared_error: 0.5875\nEpoch 27/200\n242/242 [==============================] - 0s 121us/step - loss: 0.7312 - mean_absolute_error: 0.6950 - mean_squared_error: 0.7312 - val_loss: 0.5767 - val_mean_absolute_error: 0.5930 - val_mean_squared_error: 0.5767\nEpoch 28/200\n242/242 [==============================] - 0s 402us/step - loss: 0.7243 - mean_absolute_error: 0.6903 - mean_squared_error: 0.7243 - val_loss: 0.5643 - val_mean_absolute_error: 0.5878 - val_mean_squared_error: 0.5643\nEpoch 29/200\n242/242 [==============================] - 0s 394us/step - loss: 0.7245 - mean_absolute_error: 0.6869 - mean_squared_error: 0.7245 - val_loss: 0.5564 - val_mean_absolute_error: 0.5843 - val_mean_squared_error: 0.5564\nEpoch 30/200\n242/242 [==============================] - 0s 366us/step - loss: 0.7215 - mean_absolute_error: 0.6838 - mean_squared_error: 0.7215 - val_loss: 0.5537 - val_mean_absolute_error: 0.5838 - val_mean_squared_error: 0.5537\nEpoch 31/200\n242/242 [==============================] - 0s 335us/step - loss: 0.7137 - mean_absolute_error: 0.6841 - mean_squared_error: 0.7137 - val_loss: 0.5667 - val_mean_absolute_error: 0.5880 - val_mean_squared_error: 0.5667\nEpoch 32/200\n242/242 [==============================] - 0s 91us/step - loss: 0.7147 - mean_absolute_error: 0.6855 - mean_squared_error: 0.7147 - val_loss: 0.5628 - val_mean_absolute_error: 0.5855 - val_mean_squared_error: 0.5628\nEpoch 33/200\n242/242 [==============================] - 0s 364us/step - loss: 0.7117 - mean_absolute_error: 0.6838 - mean_squared_error: 0.7117 - val_loss: 0.5577 - val_mean_absolute_error: 0.5839 - val_mean_squared_error: 0.5577\nEpoch 34/200\n242/242 [==============================] - 0s 361us/step - loss: 0.7057 - mean_absolute_error: 0.6800 - mean_squared_error: 0.7057 - val_loss: 0.5427 - val_mean_absolute_error: 0.5767 - val_mean_squared_error: 0.5427\nEpoch 35/200\n242/242 [==============================] - 0s 100us/step - loss: 0.7047 - mean_absolute_error: 0.6771 - mean_squared_error: 0.7047 - val_loss: 0.5385 - val_mean_absolute_error: 0.5746 - val_mean_squared_error: 0.5385\nEpoch 
36/200\n242/242 [==============================] - 0s 114us/step - loss: 0.7055 - mean_absolute_error: 0.6763 - mean_squared_error: 0.7055 - val_loss: 0.5340 - val_mean_absolute_error: 0.5718 - val_mean_squared_error: 0.5340\nEpoch 37/200\n242/242 [==============================] - 0s 353us/step - loss: 0.7024 - mean_absolute_error: 0.6793 - mean_squared_error: 0.7024 - val_loss: 0.5507 - val_mean_absolute_error: 0.5780 - val_mean_squared_error: 0.5507\nEpoch 38/200\n242/242 [==============================] - 0s 368us/step - loss: 0.6976 - mean_absolute_error: 0.6753 - mean_squared_error: 0.6976 - val_loss: 0.5341 - val_mean_absolute_error: 0.5719 - val_mean_squared_error: 0.5341\nEpoch 39/200\n242/242 [==============================] - 0s 807us/step - loss: 0.6940 - mean_absolute_error: 0.6729 - mean_squared_error: 0.6940 - val_loss: 0.5290 - val_mean_absolute_error: 0.5691 - val_mean_squared_error: 0.5290\nEpoch 40/200\n242/242 [==============================] - 0s 440us/step - loss: 0.6913 - mean_absolute_error: 0.6712 - mean_squared_error: 0.6913 - val_loss: 0.5295 - val_mean_absolute_error: 0.5687 - val_mean_squared_error: 0.5295\nEpoch 41/200\n242/242 [==============================] - 0s 410us/step - loss: 0.6910 - mean_absolute_error: 0.6739 - mean_squared_error: 0.6910 - val_loss: 0.5389 - val_mean_absolute_error: 0.5723 - val_mean_squared_error: 0.5389\nEpoch 42/200\n242/242 [==============================] - 0s 408us/step - loss: 0.6922 - mean_absolute_error: 0.6727 - mean_squared_error: 0.6922 - val_loss: 0.5254 - val_mean_absolute_error: 0.5668 - val_mean_squared_error: 0.5254\nEpoch 43/200\n242/242 [==============================] - 0s 421us/step - loss: 0.6870 - mean_absolute_error: 0.6711 - mean_squared_error: 0.6870 - val_loss: 0.5306 - val_mean_absolute_error: 0.5686 - val_mean_squared_error: 0.5306\nEpoch 44/200\n242/242 [==============================] - 0s 717us/step - loss: 0.6842 - mean_absolute_error: 0.6705 - mean_squared_error: 0.6842 - val_loss: 0.5258 - val_mean_absolute_error: 0.5672 - val_mean_squared_error: 0.5258\nEpoch 45/200\n242/242 [==============================] - 0s 399us/step - loss: 0.6827 - mean_absolute_error: 0.6661 - mean_squared_error: 0.6827 - val_loss: 0.5146 - val_mean_absolute_error: 0.5623 - val_mean_squared_error: 0.5146\nEpoch 46/200\n242/242 [==============================] - 0s 97us/step - loss: 0.6859 - mean_absolute_error: 0.6725 - mean_squared_error: 0.6859 - val_loss: 0.5358 - val_mean_absolute_error: 0.5707 - val_mean_squared_error: 0.5358\nEpoch 47/200\n242/242 [==============================] - 0s 422us/step - loss: 0.6813 - mean_absolute_error: 0.6697 - mean_squared_error: 0.6813 - val_loss: 0.5213 - val_mean_absolute_error: 0.5640 - val_mean_squared_error: 0.5213\nEpoch 48/200\n242/242 [==============================] - 0s 407us/step - loss: 0.6790 - mean_absolute_error: 0.6672 - mean_squared_error: 0.6790 - val_loss: 0.5185 - val_mean_absolute_error: 0.5637 - val_mean_squared_error: 0.5185\nEpoch 49/200\n242/242 [==============================] - 0s 108us/step - loss: 0.6748 - mean_absolute_error: 0.6621 - mean_squared_error: 0.6748 - val_loss: 0.5073 - val_mean_absolute_error: 0.5583 - val_mean_squared_error: 0.5073\nEpoch 50/200\n242/242 [==============================] - 0s 117us/step - loss: 0.6745 - mean_absolute_error: 0.6648 - mean_squared_error: 0.6745 - val_loss: 0.5168 - val_mean_absolute_error: 0.5610 - val_mean_squared_error: 0.5168\nEpoch 51/200\n242/242 [==============================] - 0s 348us/step - loss: 
0.6692 - mean_absolute_error: 0.6610 - mean_squared_error: 0.6692 - val_loss: 0.5025 - val_mean_absolute_error: 0.5535 - val_mean_squared_error: 0.5025\nEpoch 52/200\n242/242 [==============================] - 0s 371us/step - loss: 0.6696 - mean_absolute_error: 0.6609 - mean_squared_error: 0.6696 - val_loss: 0.5067 - val_mean_absolute_error: 0.5566 - val_mean_squared_error: 0.5067\nEpoch 53/200\n242/242 [==============================] - 0s 102us/step - loss: 0.6677 - mean_absolute_error: 0.6609 - mean_squared_error: 0.6677 - val_loss: 0.5100 - val_mean_absolute_error: 0.5581 - val_mean_squared_error: 0.5100\nEpoch 54/200\n242/242 [==============================] - 0s 358us/step - loss: 0.6655 - mean_absolute_error: 0.6599 - mean_squared_error: 0.6655 - val_loss: 0.5086 - val_mean_absolute_error: 0.5590 - val_mean_squared_error: 0.5086\nEpoch 55/200\n242/242 [==============================] - 0s 353us/step - loss: 0.6647 - mean_absolute_error: 0.6595 - mean_squared_error: 0.6647 - val_loss: 0.5127 - val_mean_absolute_error: 0.5598 - val_mean_squared_error: 0.5127\nEpoch 56/200\n242/242 [==============================] - 0s 363us/step - loss: 0.6642 - mean_absolute_error: 0.6619 - mean_squared_error: 0.6642 - val_loss: 0.5250 - val_mean_absolute_error: 0.5675 - val_mean_squared_error: 0.5250\nEpoch 57/200\n242/242 [==============================] - 0s 108us/step - loss: 0.6619 - mean_absolute_error: 0.6603 - mean_squared_error: 0.6619 - val_loss: 0.5160 - val_mean_absolute_error: 0.5622 - val_mean_squared_error: 0.5160\nEpoch 58/200\n242/242 [==============================] - 0s 361us/step - loss: 0.6594 - mean_absolute_error: 0.6578 - mean_squared_error: 0.6594 - val_loss: 0.5107 - val_mean_absolute_error: 0.5597 - val_mean_squared_error: 0.5107\nEpoch 59/200\n242/242 [==============================] - 0s 360us/step - loss: 0.6576 - mean_absolute_error: 0.6562 - mean_squared_error: 0.6576 - val_loss: 0.5104 - val_mean_absolute_error: 0.5584 - val_mean_squared_error: 0.5104\nEpoch 60/200\n242/242 [==============================] - 0s 114us/step - loss: 0.6552 - mean_absolute_error: 0.6554 - mean_squared_error: 0.6552 - val_loss: 0.5137 - val_mean_absolute_error: 0.5615 - val_mean_squared_error: 0.5137\nEpoch 61/200\n242/242 [==============================] - 0s 361us/step - loss: 0.6551 - mean_absolute_error: 0.6538 - mean_squared_error: 0.6551 - val_loss: 0.5026 - val_mean_absolute_error: 0.5572 - val_mean_squared_error: 0.5026\nEpoch 62/200\n242/242 [==============================] - 0s 369us/step - loss: 0.6579 - mean_absolute_error: 0.6522 - mean_squared_error: 0.6579 - val_loss: 0.5004 - val_mean_absolute_error: 0.5543 - val_mean_squared_error: 0.5004\nEpoch 63/200\n242/242 [==============================] - 0s 101us/step - loss: 0.6531 - mean_absolute_error: 0.6537 - mean_squared_error: 0.6531 - val_loss: 0.5147 - val_mean_absolute_error: 0.5649 - val_mean_squared_error: 0.5147\nEpoch 64/200\n242/242 [==============================] - 0s 297us/step - loss: 0.6474 - mean_absolute_error: 0.6502 - mean_squared_error: 0.6474 - val_loss: 0.5046 - val_mean_absolute_error: 0.5587 - val_mean_squared_error: 0.5046\nEpoch 65/200\n242/242 [==============================] - 0s 368us/step - loss: 0.6499 - mean_absolute_error: 0.6489 - mean_squared_error: 0.6499 - val_loss: 0.5036 - val_mean_absolute_error: 0.5579 - val_mean_squared_error: 0.5036\nEpoch 66/200\n242/242 [==============================] - 0s 98us/step - loss: 0.6495 - mean_absolute_error: 0.6469 - mean_squared_error: 0.6495 - 
val_loss: 0.4984 - val_mean_absolute_error: 0.5537 - val_mean_squared_error: 0.4984\nEpoch 67/200\n242/242 [==============================] - 0s 356us/step - loss: 0.6429 - mean_absolute_error: 0.6471 - mean_squared_error: 0.6429 - val_loss: 0.5057 - val_mean_absolute_error: 0.5587 - val_mean_squared_error: 0.5057\nEpoch 68/200\n242/242 [==============================] - 0s 363us/step - loss: 0.6431 - mean_absolute_error: 0.6498 - mean_squared_error: 0.6431 - val_loss: 0.5066 - val_mean_absolute_error: 0.5595 - val_mean_squared_error: 0.5066\nEpoch 69/200\n242/242 [==============================] - 0s 349us/step - loss: 0.6417 - mean_absolute_error: 0.6487 - mean_squared_error: 0.6417 - val_loss: 0.5018 - val_mean_absolute_error: 0.5586 - val_mean_squared_error: 0.5018\nEpoch 70/200\n242/242 [==============================] - 0s 154us/step - loss: 0.6432 - mean_absolute_error: 0.6457 - mean_squared_error: 0.6432 - val_loss: 0.4941 - val_mean_absolute_error: 0.5522 - val_mean_squared_error: 0.4941\nEpoch 71/200\n242/242 [==============================] - 0s 311us/step - loss: 0.6382 - mean_absolute_error: 0.6426 - mean_squared_error: 0.6382 - val_loss: 0.4952 - val_mean_absolute_error: 0.5542 - val_mean_squared_error: 0.4952\nEpoch 72/200\n242/242 [==============================] - 0s 367us/step - loss: 0.6362 - mean_absolute_error: 0.6442 - mean_squared_error: 0.6362 - val_loss: 0.4971 - val_mean_absolute_error: 0.5532 - val_mean_squared_error: 0.4971\nEpoch 73/200\n242/242 [==============================] - 0s 109us/step - loss: 0.6376 - mean_absolute_error: 0.6426 - mean_squared_error: 0.6376 - val_loss: 0.4958 - val_mean_absolute_error: 0.5530 - val_mean_squared_error: 0.4958\nEpoch 74/200\n242/242 [==============================] - 0s 350us/step - loss: 0.6341 - mean_absolute_error: 0.6455 - mean_squared_error: 0.6341 - val_loss: 0.5093 - val_mean_absolute_error: 0.5614 - val_mean_squared_error: 0.5093\nEpoch 75/200\n242/242 [==============================] - 0s 349us/step - loss: 0.6357 - mean_absolute_error: 0.6467 - mean_squared_error: 0.6357 - val_loss: 0.5014 - val_mean_absolute_error: 0.5587 - val_mean_squared_error: 0.5014\nEpoch 76/200\n242/242 [==============================] - 0s 100us/step - loss: 0.6318 - mean_absolute_error: 0.6417 - mean_squared_error: 0.6318 - val_loss: 0.4979 - val_mean_absolute_error: 0.5551 - val_mean_squared_error: 0.4979\nEpoch 77/200\n242/242 [==============================] - 0s 357us/step - loss: 0.6310 - mean_absolute_error: 0.6448 - mean_squared_error: 0.6310 - val_loss: 0.5007 - val_mean_absolute_error: 0.5555 - val_mean_squared_error: 0.5007\nEpoch 78/200\n242/242 [==============================] - 0s 358us/step - loss: 0.6280 - mean_absolute_error: 0.6427 - mean_squared_error: 0.6280 - val_loss: 0.4988 - val_mean_absolute_error: 0.5562 - val_mean_squared_error: 0.4988\nEpoch 79/200\n242/242 [==============================] - 0s 103us/step - loss: 0.6303 - mean_absolute_error: 0.6402 - mean_squared_error: 0.6303 - val_loss: 0.4929 - val_mean_absolute_error: 0.5550 - val_mean_squared_error: 0.4929\nEpoch 80/200\n242/242 [==============================] - 0s 347us/step - loss: 0.6282 - mean_absolute_error: 0.6371 - mean_squared_error: 0.6282 - val_loss: 0.4892 - val_mean_absolute_error: 0.5524 - val_mean_squared_error: 0.4892\nEpoch 81/200\n242/242 [==============================] - 0s 369us/step - loss: 0.6284 - mean_absolute_error: 0.6427 - mean_squared_error: 0.6284 - val_loss: 0.5034 - val_mean_absolute_error: 0.5610 - 
val_mean_squared_error: 0.5034\nEpoch 82/200\n242/242 [==============================] - 0s 370us/step - loss: 0.6258 - mean_absolute_error: 0.6427 - mean_squared_error: 0.6258 - val_loss: 0.4906 - val_mean_absolute_error: 0.5527 - val_mean_squared_error: 0.4906\nEpoch 83/200\n242/242 [==============================] - 0s 101us/step - loss: 0.6264 - mean_absolute_error: 0.6365 - mean_squared_error: 0.6264 - val_loss: 0.4872 - val_mean_absolute_error: 0.5475 - val_mean_squared_error: 0.4872\nEpoch 84/200\n242/242 [==============================] - 0s 342us/step - loss: 0.6219 - mean_absolute_error: 0.6382 - mean_squared_error: 0.6219 - val_loss: 0.5013 - val_mean_absolute_error: 0.5575 - val_mean_squared_error: 0.5013\nEpoch 85/200\n242/242 [==============================] - 0s 342us/step - loss: 0.6236 - mean_absolute_error: 0.6436 - mean_squared_error: 0.6236 - val_loss: 0.5088 - val_mean_absolute_error: 0.5629 - val_mean_squared_error: 0.5088\nEpoch 86/200\n242/242 [==============================] - 0s 97us/step - loss: 0.6192 - mean_absolute_error: 0.6400 - mean_squared_error: 0.6192 - val_loss: 0.4937 - val_mean_absolute_error: 0.5532 - val_mean_squared_error: 0.4937\nEpoch 87/200\n242/242 [==============================] - 0s 768us/step - loss: 0.6183 - mean_absolute_error: 0.6377 - mean_squared_error: 0.6183 - val_loss: 0.4935 - val_mean_absolute_error: 0.5516 - val_mean_squared_error: 0.4935\nEpoch 88/200\n242/242 [==============================] - 0s 356us/step - loss: 0.6175 - mean_absolute_error: 0.6356 - mean_squared_error: 0.6175 - val_loss: 0.4886 - val_mean_absolute_error: 0.5499 - val_mean_squared_error: 0.4886\nEpoch 89/200\n242/242 [==============================] - 0s 106us/step - loss: 0.6171 - mean_absolute_error: 0.6319 - mean_squared_error: 0.6171 - val_loss: 0.4860 - val_mean_absolute_error: 0.5485 - val_mean_squared_error: 0.4860\nEpoch 90/200\n242/242 [==============================] - 0s 118us/step - loss: 0.6149 - mean_absolute_error: 0.6317 - mean_squared_error: 0.6149 - val_loss: 0.4904 - val_mean_absolute_error: 0.5517 - val_mean_squared_error: 0.4904\nEpoch 91/200\n242/242 [==============================] - 0s 360us/step - loss: 0.6135 - mean_absolute_error: 0.6306 - mean_squared_error: 0.6135 - val_loss: 0.4831 - val_mean_absolute_error: 0.5459 - val_mean_squared_error: 0.4831\nEpoch 92/200\n242/242 [==============================] - 0s 353us/step - loss: 0.6128 - mean_absolute_error: 0.6295 - mean_squared_error: 0.6128 - val_loss: 0.4809 - val_mean_absolute_error: 0.5442 - val_mean_squared_error: 0.4809\nEpoch 93/200\n242/242 [==============================] - 0s 105us/step - loss: 0.6122 - mean_absolute_error: 0.6307 - mean_squared_error: 0.6122 - val_loss: 0.4846 - val_mean_absolute_error: 0.5473 - val_mean_squared_error: 0.4846\nEpoch 94/200\n242/242 [==============================] - 0s 360us/step - loss: 0.6089 - mean_absolute_error: 0.6311 - mean_squared_error: 0.6089 - val_loss: 0.5041 - val_mean_absolute_error: 0.5605 - val_mean_squared_error: 0.5041\nEpoch 95/200\n242/242 [==============================] - 0s 363us/step - loss: 0.6106 - mean_absolute_error: 0.6352 - mean_squared_error: 0.6106 - val_loss: 0.4960 - val_mean_absolute_error: 0.5556 - val_mean_squared_error: 0.4960\nEpoch 96/200\n242/242 [==============================] - 0s 342us/step - loss: 0.6107 - mean_absolute_error: 0.6294 - mean_squared_error: 0.6107 - val_loss: 0.4812 - val_mean_absolute_error: 0.5463 - val_mean_squared_error: 0.4812\nEpoch 97/200\n242/242 
[==============================] - 0s 112us/step - loss: 0.6057 - mean_absolute_error: 0.6256 - mean_squared_error: 0.6057 - val_loss: 0.4844 - val_mean_absolute_error: 0.5466 - val_mean_squared_error: 0.4844\nEpoch 98/200\n242/242 [==============================] - 0s 357us/step - loss: 0.6037 - mean_absolute_error: 0.6253 - mean_squared_error: 0.6037 - val_loss: 0.4876 - val_mean_absolute_error: 0.5488 - val_mean_squared_error: 0.4876\nEpoch 99/200\n242/242 [==============================] - 0s 357us/step - loss: 0.6046 - mean_absolute_error: 0.6262 - mean_squared_error: 0.6046 - val_loss: 0.4831 - val_mean_absolute_error: 0.5468 - val_mean_squared_error: 0.4831\nEpoch 100/200\n242/242 [==============================] - 0s 95us/step - loss: 0.6057 - mean_absolute_error: 0.6316 - mean_squared_error: 0.6057 - val_loss: 0.5057 - val_mean_absolute_error: 0.5635 - val_mean_squared_error: 0.5057\nEpoch 101/200\n242/242 [==============================] - 0s 376us/step - loss: 0.6027 - mean_absolute_error: 0.6278 - mean_squared_error: 0.6027 - val_loss: 0.4785 - val_mean_absolute_error: 0.5464 - val_mean_squared_error: 0.4785\nEpoch 102/200\n242/242 [==============================] - 0s 375us/step - loss: 0.5998 - mean_absolute_error: 0.6204 - mean_squared_error: 0.5998 - val_loss: 0.4763 - val_mean_absolute_error: 0.5454 - val_mean_squared_error: 0.4763\nEpoch 103/200\n242/242 [==============================] - 0s 339us/step - loss: 0.6014 - mean_absolute_error: 0.6207 - mean_squared_error: 0.6014 - val_loss: 0.4750 - val_mean_absolute_error: 0.5437 - val_mean_squared_error: 0.4750\nEpoch 104/200\n242/242 [==============================] - 0s 98us/step - loss: 0.5973 - mean_absolute_error: 0.6175 - mean_squared_error: 0.5973 - val_loss: 0.4683 - val_mean_absolute_error: 0.5398 - val_mean_squared_error: 0.4683\nEpoch 105/200\n242/242 [==============================] - 0s 372us/step - loss: 0.5975 - mean_absolute_error: 0.6179 - mean_squared_error: 0.5975 - val_loss: 0.4733 - val_mean_absolute_error: 0.5408 - val_mean_squared_error: 0.4733\nEpoch 106/200\n242/242 [==============================] - 0s 357us/step - loss: 0.5952 - mean_absolute_error: 0.6182 - mean_squared_error: 0.5952 - val_loss: 0.4749 - val_mean_absolute_error: 0.5423 - val_mean_squared_error: 0.4749\nEpoch 107/200\n242/242 [==============================] - 0s 101us/step - loss: 0.5924 - mean_absolute_error: 0.6206 - mean_squared_error: 0.5924 - val_loss: 0.4826 - val_mean_absolute_error: 0.5463 - val_mean_squared_error: 0.4826\nEpoch 108/200\n242/242 [==============================] - 0s 350us/step - loss: 0.5929 - mean_absolute_error: 0.6198 - mean_squared_error: 0.5929 - val_loss: 0.4745 - val_mean_absolute_error: 0.5424 - val_mean_squared_error: 0.4745\nEpoch 109/200\n242/242 [==============================] - 0s 375us/step - loss: 0.5958 - mean_absolute_error: 0.6229 - mean_squared_error: 0.5958 - val_loss: 0.4845 - val_mean_absolute_error: 0.5493 - val_mean_squared_error: 0.4845\nEpoch 110/200\n242/242 [==============================] - 0s 355us/step - loss: 0.5933 - mean_absolute_error: 0.6179 - mean_squared_error: 0.5933 - val_loss: 0.4658 - val_mean_absolute_error: 0.5395 - val_mean_squared_error: 0.4658\nEpoch 111/200\n242/242 [==============================] - 0s 107us/step - loss: 0.5881 - mean_absolute_error: 0.6131 - mean_squared_error: 0.5881 - val_loss: 0.4800 - val_mean_absolute_error: 0.5443 - val_mean_squared_error: 0.4800\nEpoch 112/200\n242/242 [==============================] - 0s 355us/step - loss: 0.5867 
- mean_absolute_error: 0.6163 - mean_squared_error: 0.5867 - val_loss: 0.4705 - val_mean_absolute_error: 0.5395 - val_mean_squared_error: 0.4705\nEpoch 113/200\n242/242 [==============================] - 0s 363us/step - loss: 0.5850 - mean_absolute_error: 0.6122 - mean_squared_error: 0.5850 - val_loss: 0.4750 - val_mean_absolute_error: 0.5434 - val_mean_squared_error: 0.4750\nEpoch 114/200\n242/242 [==============================] - 0s 96us/step - loss: 0.5847 - mean_absolute_error: 0.6150 - mean_squared_error: 0.5847 - val_loss: 0.4748 - val_mean_absolute_error: 0.5437 - val_mean_squared_error: 0.4748\nEpoch 115/200\n242/242 [==============================] - 0s 357us/step - loss: 0.5849 - mean_absolute_error: 0.6128 - mean_squared_error: 0.5849 - val_loss: 0.4729 - val_mean_absolute_error: 0.5416 - val_mean_squared_error: 0.4729\nEpoch 116/200\n242/242 [==============================] - 0s 369us/step - loss: 0.5829 - mean_absolute_error: 0.6100 - mean_squared_error: 0.5829 - val_loss: 0.4703 - val_mean_absolute_error: 0.5404 - val_mean_squared_error: 0.4703\nEpoch 117/200\n242/242 [==============================] - 0s 349us/step - loss: 0.5849 - mean_absolute_error: 0.6160 - mean_squared_error: 0.5849 - val_loss: 0.4778 - val_mean_absolute_error: 0.5446 - val_mean_squared_error: 0.4778\nEpoch 118/200\n242/242 [==============================] - 0s 118us/step - loss: 0.5825 - mean_absolute_error: 0.6134 - mean_squared_error: 0.5825 - val_loss: 0.4759 - val_mean_absolute_error: 0.5434 - val_mean_squared_error: 0.4759\nEpoch 119/200\n242/242 [==============================] - 0s 381us/step - loss: 0.5806 - mean_absolute_error: 0.6088 - mean_squared_error: 0.5806 - val_loss: 0.4665 - val_mean_absolute_error: 0.5415 - val_mean_squared_error: 0.4665\nEpoch 120/200\n242/242 [==============================] - 0s 359us/step - loss: 0.5857 - mean_absolute_error: 0.6075 - mean_squared_error: 0.5857 - val_loss: 0.4701 - val_mean_absolute_error: 0.5412 - val_mean_squared_error: 0.4701\nEpoch 121/200\n242/242 [==============================] - 0s 351us/step - loss: 0.5764 - mean_absolute_error: 0.6071 - mean_squared_error: 0.5764 - val_loss: 0.4856 - val_mean_absolute_error: 0.5504 - val_mean_squared_error: 0.4856\nEpoch 122/200\n242/242 [==============================] - 0s 101us/step - loss: 0.5821 - mean_absolute_error: 0.6189 - mean_squared_error: 0.5821 - val_loss: 0.4922 - val_mean_absolute_error: 0.5617 - val_mean_squared_error: 0.4922\nEpoch 123/200\n242/242 [==============================] - 0s 762us/step - loss: 0.5758 - mean_absolute_error: 0.6128 - mean_squared_error: 0.5758 - val_loss: 0.4767 - val_mean_absolute_error: 0.5469 - val_mean_squared_error: 0.4767\nEpoch 124/200\n242/242 [==============================] - 0s 372us/step - loss: 0.5792 - mean_absolute_error: 0.6067 - mean_squared_error: 0.5792 - val_loss: 0.4623 - val_mean_absolute_error: 0.5357 - val_mean_squared_error: 0.4623\nEpoch 125/200\n242/242 [==============================] - 0s 98us/step - loss: 0.5753 - mean_absolute_error: 0.6034 - mean_squared_error: 0.5753 - val_loss: 0.4738 - val_mean_absolute_error: 0.5496 - val_mean_squared_error: 0.4738\nEpoch 126/200\n242/242 [==============================] - 0s 367us/step - loss: 0.5754 - mean_absolute_error: 0.6076 - mean_squared_error: 0.5754 - val_loss: 0.4774 - val_mean_absolute_error: 0.5506 - val_mean_squared_error: 0.4774\nEpoch 127/200\n242/242 [==============================] - 0s 378us/step - loss: 0.5695 - mean_absolute_error: 0.6081 - mean_squared_error: 0.5695 - 
val_loss: 0.4888 - val_mean_absolute_error: 0.5609 - val_mean_squared_error: 0.4888\nEpoch 128/200\n242/242 [==============================] - 0s 376us/step - loss: 0.5733 - mean_absolute_error: 0.6128 - mean_squared_error: 0.5733 - val_loss: 0.4856 - val_mean_absolute_error: 0.5595 - val_mean_squared_error: 0.4856\nEpoch 129/200\n242/242 [==============================] - 0s 90us/step - loss: 0.5739 - mean_absolute_error: 0.6076 - mean_squared_error: 0.5739 - val_loss: 0.4699 - val_mean_absolute_error: 0.5483 - val_mean_squared_error: 0.4699\nEpoch 130/200\n242/242 [==============================] - 0s 344us/step - loss: 0.5656 - mean_absolute_error: 0.6034 - mean_squared_error: 0.5656 - val_loss: 0.4834 - val_mean_absolute_error: 0.5552 - val_mean_squared_error: 0.4834\nEpoch 131/200\n242/242 [==============================] - 0s 360us/step - loss: 0.5696 - mean_absolute_error: 0.6124 - mean_squared_error: 0.5696 - val_loss: 0.4824 - val_mean_absolute_error: 0.5538 - val_mean_squared_error: 0.4824\nEpoch 132/200\n242/242 [==============================] - 0s 371us/step - loss: 0.5718 - mean_absolute_error: 0.6040 - mean_squared_error: 0.5718 - val_loss: 0.4700 - val_mean_absolute_error: 0.5439 - val_mean_squared_error: 0.4700\nEpoch 133/200\n242/242 [==============================] - 0s 104us/step - loss: 0.5675 - mean_absolute_error: 0.6059 - mean_squared_error: 0.5675 - val_loss: 0.4778 - val_mean_absolute_error: 0.5544 - val_mean_squared_error: 0.4778\nEpoch 134/200\n242/242 [==============================] - 0s 371us/step - loss: 0.5682 - mean_absolute_error: 0.6029 - mean_squared_error: 0.5682 - val_loss: 0.4696 - val_mean_absolute_error: 0.5458 - val_mean_squared_error: 0.4696\nEpoch 135/200\n242/242 [==============================] - 0s 343us/step - loss: 0.5631 - mean_absolute_error: 0.6048 - mean_squared_error: 0.5631 - val_loss: 0.4820 - val_mean_absolute_error: 0.5529 - val_mean_squared_error: 0.4820\nEpoch 136/200\n242/242 [==============================] - 0s 114us/step - loss: 0.5640 - mean_absolute_error: 0.6056 - mean_squared_error: 0.5640 - val_loss: 0.4734 - val_mean_absolute_error: 0.5503 - val_mean_squared_error: 0.4734\nEpoch 137/200\n242/242 [==============================] - 0s 120us/step - loss: 0.5648 - mean_absolute_error: 0.6007 - mean_squared_error: 0.5648 - val_loss: 0.4695 - val_mean_absolute_error: 0.5475 - val_mean_squared_error: 0.4695\nEpoch 138/200\n242/242 [==============================] - 0s 390us/step - loss: 0.5638 - mean_absolute_error: 0.6031 - mean_squared_error: 0.5638 - val_loss: 0.4858 - val_mean_absolute_error: 0.5539 - val_mean_squared_error: 0.4858\nEpoch 139/200\n242/242 [==============================] - 0s 331us/step - loss: 0.5631 - mean_absolute_error: 0.6061 - mean_squared_error: 0.5631 - val_loss: 0.4870 - val_mean_absolute_error: 0.5591 - val_mean_squared_error: 0.4870\nEpoch 140/200\n242/242 [==============================] - 0s 99us/step - loss: 0.5596 - mean_absolute_error: 0.5996 - mean_squared_error: 0.5596 - val_loss: 0.4721 - val_mean_absolute_error: 0.5514 - val_mean_squared_error: 0.4721\nEpoch 141/200\n242/242 [==============================] - 0s 130us/step - loss: 0.5609 - mean_absolute_error: 0.5995 - mean_squared_error: 0.5609 - val_loss: 0.4728 - val_mean_absolute_error: 0.5541 - val_mean_squared_error: 0.4728\nEpoch 142/200\n242/242 [==============================] - 0s 356us/step - loss: 0.5591 - mean_absolute_error: 0.5964 - mean_squared_error: 0.5591 - val_loss: 0.4601 - val_mean_absolute_error: 0.5422 - 
val_mean_squared_error: 0.4601\nEpoch 143/200\n242/242 [==============================] - 0s 402us/step - loss: 0.5588 - mean_absolute_error: 0.5957 - mean_squared_error: 0.5588 - val_loss: 0.4723 - val_mean_absolute_error: 0.5500 - val_mean_squared_error: 0.4723\nEpoch 144/200\n242/242 [==============================] - 0s 396us/step - loss: 0.5559 - mean_absolute_error: 0.5946 - mean_squared_error: 0.5559 - val_loss: 0.4683 - val_mean_absolute_error: 0.5504 - val_mean_squared_error: 0.4683\nEpoch 145/200\n242/242 [==============================] - 0s 340us/step - loss: 0.5572 - mean_absolute_error: 0.5948 - mean_squared_error: 0.5572 - val_loss: 0.4657 - val_mean_absolute_error: 0.5508 - val_mean_squared_error: 0.4657\nEpoch 146/200\n242/242 [==============================] - 0s 135us/step - loss: 0.5561 - mean_absolute_error: 0.5971 - mean_squared_error: 0.5561 - val_loss: 0.4829 - val_mean_absolute_error: 0.5689 - val_mean_squared_error: 0.4829\nEpoch 147/200\n242/242 [==============================] - 0s 385us/step - loss: 0.5587 - mean_absolute_error: 0.5971 - mean_squared_error: 0.5587 - val_loss: 0.4665 - val_mean_absolute_error: 0.5547 - val_mean_squared_error: 0.4665\nEpoch 148/200\n242/242 [==============================] - 0s 347us/step - loss: 0.5547 - mean_absolute_error: 0.5931 - mean_squared_error: 0.5547 - val_loss: 0.4732 - val_mean_absolute_error: 0.5576 - val_mean_squared_error: 0.4732\nEpoch 149/200\n242/242 [==============================] - 0s 403us/step - loss: 0.5533 - mean_absolute_error: 0.5961 - mean_squared_error: 0.5533 - val_loss: 0.4762 - val_mean_absolute_error: 0.5558 - val_mean_squared_error: 0.4762\nEpoch 150/200\n242/242 [==============================] - 0s 333us/step - loss: 0.5525 - mean_absolute_error: 0.5939 - mean_squared_error: 0.5525 - val_loss: 0.4635 - val_mean_absolute_error: 0.5472 - val_mean_squared_error: 0.4635\nEpoch 151/200\n242/242 [==============================] - 0s 107us/step - loss: 0.5532 - mean_absolute_error: 0.5929 - mean_squared_error: 0.5532 - val_loss: 0.4673 - val_mean_absolute_error: 0.5502 - val_mean_squared_error: 0.4673\nEpoch 152/200\n242/242 [==============================] - 0s 357us/step - loss: 0.5527 - mean_absolute_error: 0.5987 - mean_squared_error: 0.5527 - val_loss: 0.4823 - val_mean_absolute_error: 0.5590 - val_mean_squared_error: 0.4823\nEpoch 153/200\n242/242 [==============================] - 0s 371us/step - loss: 0.5506 - mean_absolute_error: 0.5976 - mean_squared_error: 0.5506 - val_loss: 0.4770 - val_mean_absolute_error: 0.5570 - val_mean_squared_error: 0.4770\nEpoch 154/200\n242/242 [==============================] - 0s 382us/step - loss: 0.5528 - mean_absolute_error: 0.5935 - mean_squared_error: 0.5528 - val_loss: 0.4729 - val_mean_absolute_error: 0.5516 - val_mean_squared_error: 0.4729\nEpoch 155/200\n242/242 [==============================] - 0s 123us/step - loss: 0.5514 - mean_absolute_error: 0.5984 - mean_squared_error: 0.5514 - val_loss: 0.4862 - val_mean_absolute_error: 0.5663 - val_mean_squared_error: 0.4862\nEpoch 156/200\n242/242 [==============================] - 0s 114us/step - loss: 0.5468 - mean_absolute_error: 0.5934 - mean_squared_error: 0.5468 - val_loss: 0.4577 - val_mean_absolute_error: 0.5495 - val_mean_squared_error: 0.4577\nEpoch 157/200\n242/242 [==============================] - 0s 436us/step - loss: 0.5481 - mean_absolute_error: 0.5897 - mean_squared_error: 0.5481 - val_loss: 0.4639 - val_mean_absolute_error: 0.5532 - val_mean_squared_error: 0.4639\nEpoch 158/200\n242/242 
[==============================] - 0s 332us/step - loss: 0.5465 - mean_absolute_error: 0.5940 - mean_squared_error: 0.5465 - val_loss: 0.4933 - val_mean_absolute_error: 0.5755 - val_mean_squared_error: 0.4933\nEpoch 159/200\n242/242 [==============================] - 0s 386us/step - loss: 0.5535 - mean_absolute_error: 0.5993 - mean_squared_error: 0.5535 - val_loss: 0.4694 - val_mean_absolute_error: 0.5623 - val_mean_squared_error: 0.4694\nEpoch 160/200\n242/242 [==============================] - 0s 393us/step - loss: 0.5482 - mean_absolute_error: 0.5943 - mean_squared_error: 0.5482 - val_loss: 0.4716 - val_mean_absolute_error: 0.5562 - val_mean_squared_error: 0.4716\nEpoch 161/200\n242/242 [==============================] - 0s 122us/step - loss: 0.5473 - mean_absolute_error: 0.5914 - mean_squared_error: 0.5473 - val_loss: 0.4633 - val_mean_absolute_error: 0.5508 - val_mean_squared_error: 0.4633\nEpoch 162/200\n242/242 [==============================] - 0s 129us/step - loss: 0.5439 - mean_absolute_error: 0.5893 - mean_squared_error: 0.5439 - val_loss: 0.4699 - val_mean_absolute_error: 0.5547 - val_mean_squared_error: 0.4699\nEpoch 163/200\n242/242 [==============================] - 0s 365us/step - loss: 0.5425 - mean_absolute_error: 0.5894 - mean_squared_error: 0.5425 - val_loss: 0.4655 - val_mean_absolute_error: 0.5500 - val_mean_squared_error: 0.4655\nEpoch 164/200\n242/242 [==============================] - 0s 374us/step - loss: 0.5448 - mean_absolute_error: 0.5880 - mean_squared_error: 0.5448 - val_loss: 0.4640 - val_mean_absolute_error: 0.5489 - val_mean_squared_error: 0.4640\nEpoch 165/200\n242/242 [==============================] - 0s 355us/step - loss: 0.5457 - mean_absolute_error: 0.5865 - mean_squared_error: 0.5457 - val_loss: 0.4635 - val_mean_absolute_error: 0.5464 - val_mean_squared_error: 0.4635\nEpoch 166/200\n242/242 [==============================] - 0s 124us/step - loss: 0.5408 - mean_absolute_error: 0.5856 - mean_squared_error: 0.5408 - val_loss: 0.4719 - val_mean_absolute_error: 0.5530 - val_mean_squared_error: 0.4719\nEpoch 167/200\n242/242 [==============================] - 0s 139us/step - loss: 0.5462 - mean_absolute_error: 0.5920 - mean_squared_error: 0.5462 - val_loss: 0.4736 - val_mean_absolute_error: 0.5611 - val_mean_squared_error: 0.4736\nEpoch 168/200\n242/242 [==============================] - 0s 375us/step - loss: 0.5448 - mean_absolute_error: 0.5861 - mean_squared_error: 0.5448 - val_loss: 0.4651 - val_mean_absolute_error: 0.5494 - val_mean_squared_error: 0.4651\nEpoch 169/200\n242/242 [==============================] - 0s 371us/step - loss: 0.5449 - mean_absolute_error: 0.5921 - mean_squared_error: 0.5449 - val_loss: 0.4819 - val_mean_absolute_error: 0.5632 - val_mean_squared_error: 0.4819\nEpoch 170/200\n242/242 [==============================] - 0s 389us/step - loss: 0.5401 - mean_absolute_error: 0.5889 - mean_squared_error: 0.5401 - val_loss: 0.4652 - val_mean_absolute_error: 0.5500 - val_mean_squared_error: 0.4652\nEpoch 171/200\n242/242 [==============================] - 0s 387us/step - loss: 0.5364 - mean_absolute_error: 0.5838 - mean_squared_error: 0.5364 - val_loss: 0.4723 - val_mean_absolute_error: 0.5568 - val_mean_squared_error: 0.4723\nEpoch 172/200\n242/242 [==============================] - 0s 404us/step - loss: 0.5433 - mean_absolute_error: 0.5914 - mean_squared_error: 0.5433 - val_loss: 0.4803 - val_mean_absolute_error: 0.5653 - val_mean_squared_error: 0.4803\nEpoch 173/200\n242/242 [==============================] - 0s 110us/step - loss: 
0.5381 - mean_absolute_error: 0.5886 - mean_squared_error: 0.5381 - val_loss: 0.4834 - val_mean_absolute_error: 0.5691 - val_mean_squared_error: 0.4834\nEpoch 174/200\n242/242 [==============================] - 0s 113us/step - loss: 0.5386 - mean_absolute_error: 0.5888 - mean_squared_error: 0.5386 - val_loss: 0.4752 - val_mean_absolute_error: 0.5585 - val_mean_squared_error: 0.4752\nEpoch 175/200\n242/242 [==============================] - 0s 382us/step - loss: 0.5358 - mean_absolute_error: 0.5871 - mean_squared_error: 0.5358 - val_loss: 0.4824 - val_mean_absolute_error: 0.5671 - val_mean_squared_error: 0.4824\nEpoch 176/200\n242/242 [==============================] - 0s 377us/step - loss: 0.5368 - mean_absolute_error: 0.5853 - mean_squared_error: 0.5368 - val_loss: 0.4742 - val_mean_absolute_error: 0.5663 - val_mean_squared_error: 0.4742\nEpoch 177/200\n242/242 [==============================] - 0s 375us/step - loss: 0.5364 - mean_absolute_error: 0.5852 - mean_squared_error: 0.5364 - val_loss: 0.4833 - val_mean_absolute_error: 0.5687 - val_mean_squared_error: 0.4833\nEpoch 178/200\n242/242 [==============================] - 0s 364us/step - loss: 0.5325 - mean_absolute_error: 0.5840 - mean_squared_error: 0.5325 - val_loss: 0.4746 - val_mean_absolute_error: 0.5606 - val_mean_squared_error: 0.4746\nEpoch 179/200\n242/242 [==============================] - 0s 111us/step - loss: 0.5338 - mean_absolute_error: 0.5814 - mean_squared_error: 0.5338 - val_loss: 0.4726 - val_mean_absolute_error: 0.5588 - val_mean_squared_error: 0.4726\nEpoch 180/200\n242/242 [==============================] - 0s 376us/step - loss: 0.5314 - mean_absolute_error: 0.5841 - mean_squared_error: 0.5314 - val_loss: 0.4831 - val_mean_absolute_error: 0.5667 - val_mean_squared_error: 0.4831\nEpoch 181/200\n242/242 [==============================] - 0s 376us/step - loss: 0.5318 - mean_absolute_error: 0.5803 - mean_squared_error: 0.5318 - val_loss: 0.4703 - val_mean_absolute_error: 0.5634 - val_mean_squared_error: 0.4703\nEpoch 182/200\n242/242 [==============================] - 0s 397us/step - loss: 0.5344 - mean_absolute_error: 0.5798 - mean_squared_error: 0.5344 - val_loss: 0.4672 - val_mean_absolute_error: 0.5555 - val_mean_squared_error: 0.4672\nEpoch 183/200\n242/242 [==============================] - 0s 357us/step - loss: 0.5347 - mean_absolute_error: 0.5841 - mean_squared_error: 0.5347 - val_loss: 0.4798 - val_mean_absolute_error: 0.5662 - val_mean_squared_error: 0.4798\nEpoch 184/200\n242/242 [==============================] - 0s 118us/step - loss: 0.5290 - mean_absolute_error: 0.5808 - mean_squared_error: 0.5290 - val_loss: 0.4776 - val_mean_absolute_error: 0.5656 - val_mean_squared_error: 0.4776\nEpoch 185/200\n242/242 [==============================] - 0s 377us/step - loss: 0.5287 - mean_absolute_error: 0.5803 - mean_squared_error: 0.5287 - val_loss: 0.4714 - val_mean_absolute_error: 0.5611 - val_mean_squared_error: 0.4714\nEpoch 186/200\n242/242 [==============================] - 0s 392us/step - loss: 0.5315 - mean_absolute_error: 0.5784 - mean_squared_error: 0.5315 - val_loss: 0.4706 - val_mean_absolute_error: 0.5590 - val_mean_squared_error: 0.4706\nEpoch 187/200\n242/242 [==============================] - 0s 354us/step - loss: 0.5289 - mean_absolute_error: 0.5789 - mean_squared_error: 0.5289 - val_loss: 0.4849 - val_mean_absolute_error: 0.5693 - val_mean_squared_error: 0.4849\nEpoch 188/200\n242/242 [==============================] - 0s 395us/step - loss: 0.5283 - mean_absolute_error: 0.5795 - mean_squared_error: 
0.5283 - val_loss: 0.4848 - val_mean_absolute_error: 0.5696 - val_mean_squared_error: 0.4848\nEpoch 189/200\n242/242 [==============================] - 0s 120us/step - loss: 0.5291 - mean_absolute_error: 0.5817 - mean_squared_error: 0.5291 - val_loss: 0.4818 - val_mean_absolute_error: 0.5672 - val_mean_squared_error: 0.4818\nEpoch 190/200\n242/242 [==============================] - 0s 346us/step - loss: 0.5248 - mean_absolute_error: 0.5768 - mean_squared_error: 0.5248 - val_loss: 0.4749 - val_mean_absolute_error: 0.5657 - val_mean_squared_error: 0.4749\nEpoch 191/200\n242/242 [==============================] - 0s 387us/step - loss: 0.5319 - mean_absolute_error: 0.5823 - mean_squared_error: 0.5319 - val_loss: 0.4884 - val_mean_absolute_error: 0.5750 - val_mean_squared_error: 0.4884\nEpoch 192/200\n242/242 [==============================] - 0s 379us/step - loss: 0.5272 - mean_absolute_error: 0.5767 - mean_squared_error: 0.5272 - val_loss: 0.4713 - val_mean_absolute_error: 0.5651 - val_mean_squared_error: 0.4713\nEpoch 193/200\n242/242 [==============================] - 0s 390us/step - loss: 0.5272 - mean_absolute_error: 0.5758 - mean_squared_error: 0.5272 - val_loss: 0.4769 - val_mean_absolute_error: 0.5683 - val_mean_squared_error: 0.4769\nEpoch 194/200\n242/242 [==============================] - 0s 388us/step - loss: 0.5229 - mean_absolute_error: 0.5752 - mean_squared_error: 0.5229 - val_loss: 0.4764 - val_mean_absolute_error: 0.5653 - val_mean_squared_error: 0.4764\nEpoch 195/200\n242/242 [==============================] - 0s 116us/step - loss: 0.5279 - mean_absolute_error: 0.5778 - mean_squared_error: 0.5279 - val_loss: 0.4787 - val_mean_absolute_error: 0.5675 - val_mean_squared_error: 0.4787\nEpoch 196/200\n242/242 [==============================] - 0s 175us/step - loss: 0.5260 - mean_absolute_error: 0.5771 - mean_squared_error: 0.5260 - val_loss: 0.4813 - val_mean_absolute_error: 0.5694 - val_mean_squared_error: 0.4813\nEpoch 197/200\n242/242 [==============================] - 0s 355us/step - loss: 0.5221 - mean_absolute_error: 0.5749 - mean_squared_error: 0.5221 - val_loss: 0.4792 - val_mean_absolute_error: 0.5654 - val_mean_squared_error: 0.4792\nEpoch 198/200\n242/242 [==============================] - 0s 366us/step - loss: 0.5251 - mean_absolute_error: 0.5785 - mean_squared_error: 0.5251 - val_loss: 0.4889 - val_mean_absolute_error: 0.5751 - val_mean_squared_error: 0.4889\nEpoch 199/200\n242/242 [==============================] - 0s 402us/step - loss: 0.5271 - mean_absolute_error: 0.5785 - mean_squared_error: 0.5271 - val_loss: 0.4732 - val_mean_absolute_error: 0.5655 - val_mean_squared_error: 0.4732\nEpoch 200/200\n242/242 [==============================] - 0s 413us/step - loss: 0.5259 - mean_absolute_error: 0.5785 - mean_squared_error: 0.5259 - val_loss: 0.4936 - val_mean_absolute_error: 0.5726 - val_mean_squared_error: 0.4936\n" ], [ "# Plot the mean absolute error as a function of the training epoch.\nplt.plot(history.history['mean_absolute_error'])\nplt.plot(history.history['val_mean_absolute_error']) \nplt.title('Model MAE') \nplt.ylabel('MAE') \nplt.xlabel('Epoch') \nplt.legend(['Train', 'Test'], loc='upper right') \nplt.show()", "_____no_output_____" ], [ "# Plot the mean squared error, i.e.
the loss value, as a function of the training epoch.\nplt.plot(history.history['mean_squared_error'])\nplt.plot(history.history['val_mean_squared_error']) \nplt.title('Model MSE') \nplt.ylabel('MSE') \nplt.xlabel('Epoch') \nplt.legend(['Train', 'Test'], loc='upper right') \nplt.show()", "_____no_output_____" ], [ "# Predictions of the trained neural network on the training set:\nY_pred_train = model.predict(X_train).flatten()", "_____no_output_____" ], [ "# Compare the reference values Y_train with the trained network's predictions Y_pred_train on the training set.\nplt.plot(Y_train, Y_pred_train, 'bo')\nplt.plot([-2,2], [-2,2], 'r-')\nplt.title('Train vs Pred_train') \nplt.ylabel('Pred_train') \nplt.xlabel('Train') \nplt.show()", "_____no_output_____" ], [ "# Plot the Y_train and Y_pred_train values themselves.\nplt.plot(Y_train)\nplt.plot(Y_pred_train)\nplt.show()", "_____no_output_____" ], [ "# Predictions of the trained neural network on the test set:\nY_pred_test = model.predict(X_test).flatten()", "_____no_output_____" ], [ "# Compare the reference values Y_test with the trained network's predictions Y_pred_test on the test set.\nplt.plot(Y_test, Y_pred_test, 'bo')\nplt.plot([-2,2], [-2,2], 'r-')\nplt.title('Test vs Pred_test') \nplt.ylabel('Pred_test') \nplt.xlabel('Test') \nplt.show()", "_____no_output_____" ], [ "# Plot the Y_test and Y_pred_test values themselves.\nplt.plot(Y_test)\nplt.plot(Y_pred_test)\nplt.show()", "_____no_output_____" ], [ "# Compare the root-mean-squared errors (loss values) for the training and test sets.\nprint(np.sqrt(mean_squared_error(Y_train, Y_pred_train)))\nprint(np.sqrt(mean_squared_error(Y_test, Y_pred_test)))", "0.720609485794342\n0.7025312274919665\n" ], [ "# Test the differences of the pairs (Y_train, Y_pred_train), (Y_test, Y_pred_test) for normality\nk_train, p_train = stats.shapiro(Y_train - Y_pred_train)\nprint('Train k = {0}, p = {1}'.format(k_train, p_train))\n\nk_test, p_test = stats.shapiro(Y_test - Y_pred_test)\nprint('Test k = {0}, p = {1}'.format(k_test, p_test))", "Train k = 0.9919413328170776, p = 0.20750486850738525\nTest k = 0.9738291501998901, p = 0.21504180133342743\n" ], [ "# For the full sample (Y, Y_pred), apply two statistical tests: shapiro and normaltest.\nY_pred = model.predict(X).flatten()\n\nk_s, p_s = stats.shapiro(Y - Y_pred)\nprint('k_s = {0}, p_s = {1}'.format(k_s, p_s))\n\nk_n, p_n = stats.normaltest(Y - Y_pred)\nprint('k_n = {0}, p_n = {1}'.format(k_n, p_n))", "k_s = 0.9914296269416809, p_s = 0.07569187134504318\nk_n = 4.3792271843804915, p_n = 0.11196000247931377\n" ], [ "# And the same visually, using quantile-quantile plots.\n# Training set\nqqplot(Y_train - Y_pred_train)\nplt.show()", "_____no_output_____" ], [ "# Test set\nqqplot(Y_test - Y_pred_test)\nplt.show()", "_____no_output_____" ], [ "# Full sample\nqqplot(Y - Y_pred)\nplt.show()", "_____no_output_____" ], [ "plt.hist(Y - Y_pred, bins=50)\nplt.show()", "_____no_output_____" ], [ "model.save('SimpleNeuralNetwork.h5')", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a605fe9575636cfee5d5fe068b7acd804f2bd9e
854
ipynb
Jupyter Notebook
hacker-rank/Python/Numpy/Min and Max.ipynb
izan-majeed/archives
89af2a24f4a6f07bda8ee38d99ae8667d42727f4
[ "Apache-2.0" ]
null
null
null
hacker-rank/Python/Numpy/Min and Max.ipynb
izan-majeed/archives
89af2a24f4a6f07bda8ee38d99ae8667d42727f4
[ "Apache-2.0" ]
null
null
null
hacker-rank/Python/Numpy/Min and Max.ipynb
izan-majeed/archives
89af2a24f4a6f07bda8ee38d99ae8667d42727f4
[ "Apache-2.0" ]
null
null
null
18.977778
79
0.521077
[ [ [ "import numpy\ni,j = map(int, input().split())\na = numpy.array([numpy.array(input().split(), int) for _ in range(i)])\nprint(numpy.max(a.min(axis=1)))", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code" ] ]
4a6074c2ab765b01f08ea02311843dbbe1569aab
4,448
ipynb
Jupyter Notebook
notebooks/archived/HR_Network_analysis.ipynb
caselawanalytics/case-law-research
9311e141c854ad2e73df14ab2953f7cf96d95a24
[ "Apache-2.0" ]
1
2018-11-30T14:21:43.000Z
2018-11-30T14:21:43.000Z
notebooks/archived/HR_Network_analysis.ipynb
caselawanalytics/case-law-research
9311e141c854ad2e73df14ab2953f7cf96d95a24
[ "Apache-2.0" ]
null
null
null
notebooks/archived/HR_Network_analysis.ipynb
caselawanalytics/case-law-research
9311e141c854ad2e73df14ab2953f7cf96d95a24
[ "Apache-2.0" ]
null
null
null
20.592593
128
0.51911
[ [ [ "import pandas as pd\nimport caselawnet\nimport networkx as nx\nimport os", "_____no_output_____" ], [ "datapath = '/media/sf_VBox_Shared/CaseLaw/graphs/lido/'", "_____no_output_____" ], [ "nodes = pd.read_csv(os.path.join(datapath, 'hr_enriched_nodes_2.csv'))", "_____no_output_____" ], [ "links = pd.read_csv(os.path.join(datapath, 'hr_simple_links.csv'))", "_____no_output_____" ], [ "links = links.rename(columns={'link_id': 'id'})", "_____no_output_____" ], [ "print(nodes.shape)\nlinks_eclis = set(links['source']).union(set(links['target']))\nprint(len(links_eclis))", "(68334, 8)\n15139\n" ], [ "# Filter out nodes without links\nnodes_filtered = nodes[nodes['id'].isin(links_eclis)]\nnodes_filtered.shape", "_____no_output_____" ], [ "graph = caselawnet.network_analysis.get_network(nodes_filtered.to_dict(orient='records'), links.to_dict(orient='records'))", "_____no_output_____" ], [ "hubs, authorities = caselawnet.network_analysis.get_hits(graph)", "_____no_output_____" ], [ "statistics = {\n 'degree': graph.degree(),\n 'in_degree': graph.in_degree(),\n 'out_degree': graph.out_degree(),\n\n 'degree_centrality': nx.degree_centrality(graph),\n 'in_degree_centrality': nx.in_degree_centrality(graph),\n 'out_degree_centrality': nx.out_degree_centrality(graph),\n 'betweenness_centrality': nx.betweenness_centrality(graph),\n 'closeness_centrality': nx.closeness_centrality(graph),\n 'pagerank': caselawnet.network_analysis.get_pagerank(graph),\n 'hubs': hubs,\n 'authorities': authorities\n}", "_____no_output_____" ], [ "nodes_filtered.nunique()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a607796adba7c534a409c56d2551bbcef508339
3,258
ipynb
Jupyter Notebook
src/db.ipynb
crawles/data-science-training
5f5077faee792578f3d7ff3727576492354f1156
[ "MIT" ]
5
2017-06-21T22:33:38.000Z
2020-05-25T20:46:06.000Z
src/db.ipynb
crawles/data-science-training
5f5077faee792578f3d7ff3727576492354f1156
[ "MIT" ]
null
null
null
src/db.ipynb
crawles/data-science-training
5f5077faee792578f3d7ff3727576492354f1156
[ "MIT" ]
3
2017-06-21T22:35:35.000Z
2018-08-02T15:42:50.000Z
24.313433
131
0.502455
[ [ [ "import ConfigParser\nimport os\n\nimport pandas as pd\nimport pandas.io.sql as psql\nimport psycopg2\n\nfrom sqlalchemy import create_engine", "_____no_output_____" ], [ "%load_ext sql_magic", "_____no_output_____" ], [ "def fetchDBCredentials(dbcred_file, section='database_creds'):\n \"\"\"\n Read database access credentials from dbcred_file and return \n dict with host, database, user, password, port.\n \"\"\"\n #Read database credentials from user supplied file\n conf = ConfigParser.ConfigParser()\n conf.read(dbcred_file)\n return dict(conf.items(section))", "_____no_output_____" ], [ "cred_file = '../.dbcred'\ntry:\n conf = fetchDBCredentials(cred_file)\nexcept:\n try:\n conf = fetchDBCredentials('../' + cred_file)\n except:\n pass\nconn = psycopg2.connect(**conf)\nconn.autocommit = True\n%config SQL.conn_name = 'conn'", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code" ] ]
4a607bfba05bec5a92121d0037d613f8a3ffc7f1
232,573
ipynb
Jupyter Notebook
ClothingIdentification.ipynb
vinicius-17/Clothing-Identification-Using-Deep-Learning
026f2c35550ac84a4e5088777953a101fe551798
[ "MIT" ]
null
null
null
ClothingIdentification.ipynb
vinicius-17/Clothing-Identification-Using-Deep-Learning
026f2c35550ac84a4e5088777953a101fe551798
[ "MIT" ]
1
2020-12-11T08:43:57.000Z
2020-12-11T08:43:57.000Z
ClothingIdentification.ipynb
vinicius-17/Clothing-Identification-Using-Deep-Learning
026f2c35550ac84a4e5088777953a101fe551798
[ "MIT" ]
null
null
null
73.506005
5,806
0.680432
[ [ [ "#Importing libraries\nimport tensorflow as tf\nfrom tensorflow import keras\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport pandas as pd\nimport numpy as np\nfrom sklearn.metrics import classification_report\n\n#Setting the visualization\n%matplotlib inline\n%config InlineBackend.figure_format = 'svg'\nmpl.style.use( 'ggplot' )\nplt.style.use('fivethirtyeight')\nsns.set(context=\"notebook\", palette=\"dark\", style = 'whitegrid' , color_codes=True)", "_____no_output_____" ], [ "#Upload Fashion MNIST data\n(X_train_orig, y_train_orig), (X_test_orig, y_test_orig) = keras.datasets.fashion_mnist.load_data()", "Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-idx1-ubyte.gz\n32768/29515 [=================================] - 0s 1us/step\nDownloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-images-idx3-ubyte.gz\n26427392/26421880 [==============================] - 4s 0us/step\nDownloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-labels-idx1-ubyte.gz\n8192/5148 [===============================================] - 0s 0us/step\nDownloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-images-idx3-ubyte.gz\n4423680/4422102 [==============================] - 1s 0us/step\n" ], [ "#Labels\nclass_names = ['T-shirt/top', 'Trouser', 'Pullover',\n 'Dress', 'Coat', 'Sandal', 'Shirt',\n 'Sneaker', 'Bag', 'Ankie boot']", "_____no_output_____" ], [ "#See the dimensionality of DataFrames\nprint('Dimensionality of DataFrames:')\nprint('X_train_orig:', X_train_orig.shape)\nprint('y_train_orig:', y_train_orig.shape)\nprint('X_test_orig:', X_test_orig.shape)\nprint('y_test_orig:', y_test_orig.shape)\n\n#See a slice of the image\nprint('\\n\\nImage converted to array:\\n', X_train_orig[0][:5][:5])", "Dimensionality of DataFrames:\nX_train_orig: (60000, 28, 28)\ny_train_orig: (60000,)\nX_test_orig: (10000, 28, 28)\ny_test_orig: (10000,)\n\n\nImage converted to array:\n [[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0]\n [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0]\n [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0]\n [ 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 13 73 0\n 0 1 4 0 0 0 0 1 1 0]\n [ 0 0 0 0 0 0 0 0 0 0 0 0 3 0 36 136 127 62\n 54 0 0 0 1 3 4 0 0 3]]\n" ], [ "#Check unique values by class (training)\nprint('y_train_orig:')\nnp.unique(y_train_orig, return_counts=True)", "y_train_orig:\n" ], [ "#Check unique values by classes (test)\nprint('y_test_orig:')\nnp.unique(y_test_orig, return_counts=True)", "y_test_orig:\n" ], [ "#See some sample images\nplt.figure(figsize=(6,6))\n\nfor i in range(25):\n plt.subplot(5, 5, i+1)\n plt.xticks([])\n plt.yticks([])\n plt.grid(False)\n plt.imshow(X_train_orig[i], cmap=plt.cm.binary)\n plt.xlabel(class_names[y_train_orig[i]])\nplt.tight_layout()", "_____no_output_____" ], [ "#Create lambda function that turns into float32 and normalizes pixels\nf = lambda x: (x / 255.0).astype('float32')\n\n#Apply the lambda function to the X_train and X_test datasets\nX_train = f(X_train_orig)\nX_test = f(X_test_orig)", "_____no_output_____" ], [ "#Resize images\nX_train = X_train.reshape((X_train.shape[0], 28, 28, 1))\nX_test = X_test.reshape((X_test.shape[0], 28, 28, 1))\n\nprint('X_train:{}'.format(X_train.shape))\nprint('X_test:\\t{}'.format(X_test.shape))", "X_train:(60000, 28, 28, 1)\nX_test:\t(10000, 28, 28, 1)\n" ], [ "#One-Hot Encoding\nexample = np.array([1, 3, 4, 
2, 0])\nprint('Example before Encoding:')\nprint(example)\n\nexample_encoded = keras.utils.to_categorical(example)\nprint('\\nExample after Encoding')\nprint(example_encoded)", "Example before Encoding:\n[1 3 4 2 0]\n\nExample after Encoding\n[[0. 1. 0. 0. 0.]\n [0. 0. 0. 1. 0.]\n [0. 0. 0. 0. 1.]\n [0. 0. 1. 0. 0.]\n [1. 0. 0. 0. 0.]]\n" ], [ "y_train = keras.utils.to_categorical(y_train_orig)\ny_test = keras.utils.to_categorical(y_test_orig)", "_____no_output_____" ], [ "#First CONV => RELU => CONV => RELU => POOL layer set\nmodel = keras.models.Sequential()\nmodel.add(keras.layers.Conv2D(32, 3, padding=\"same\", activation='relu',))\nmodel.add(keras.layers.BatchNormalization(axis=1))\nmodel.add(keras.layers.Conv2D(32, (3, 3), padding=\"same\", activation='relu'))\nmodel.add(keras.layers.BatchNormalization(axis=1))\nmodel.add(keras.layers.MaxPooling2D(pool_size=(2, 2)))\nmodel.add(keras.layers.Dropout(0.25))\n\n#Second CONV => RELU => CONV => RELU => POOL layer set\nmodel.add(keras.layers.Conv2D(64, (3, 3), padding=\"same\", activation='relu'))\nmodel.add(keras.layers.BatchNormalization(axis=1))\nmodel.add(keras.layers.Conv2D(64, (3, 3), padding=\"same\", activation='relu'))\nmodel.add(keras.layers.BatchNormalization(axis=1))\nmodel.add(keras.layers.MaxPooling2D(pool_size=(2, 2)))\nmodel.add(keras.layers.Dropout(0.25))\n\n#First (and only) set of FC => RELU layers\nmodel.add(keras.layers.Flatten())\nmodel.add(keras.layers.Dense(512, activation='relu'))\nmodel.add(keras.layers.BatchNormalization())\nmodel.add(keras.layers.Dropout(0.5))\n\n# Softmax classifier\nmodel.add(keras.layers.Dense(10, activation='softmax'))", "_____no_output_____" ], [ "#Model.compile(optimizer='adam', loss=\"sparse_categorical_crossentropy\", metrics=['accuracy'])\nmodel.compile(optimizer='adam', loss=\"categorical_crossentropy\", metrics=['accuracy'])\n\n#Train the model and save the information in history\nhistory = model.fit(X_train, y_train, epochs=10, validation_split=0.3)", "Epoch 1/10\n1313/1313 [==============================] - 126s 96ms/step - loss: 0.5206 - accuracy: 0.8189 - val_loss: 0.3592 - val_accuracy: 0.8789\nEpoch 2/10\n1313/1313 [==============================] - 126s 96ms/step - loss: 0.3442 - accuracy: 0.8772 - val_loss: 0.2777 - val_accuracy: 0.9003\nEpoch 3/10\n1313/1313 [==============================] - 125s 96ms/step - loss: 0.2917 - accuracy: 0.8944 - val_loss: 0.2665 - val_accuracy: 0.9092\nEpoch 4/10\n1313/1313 [==============================] - 126s 96ms/step - loss: 0.2740 - accuracy: 0.9008 - val_loss: 0.2826 - val_accuracy: 0.8989\nEpoch 5/10\n1313/1313 [==============================] - 126s 96ms/step - loss: 0.2602 - accuracy: 0.9067 - val_loss: 0.2304 - val_accuracy: 0.9182\nEpoch 6/10\n1313/1313 [==============================] - 127s 97ms/step - loss: 0.2368 - accuracy: 0.9140 - val_loss: 0.2233 - val_accuracy: 0.9218\nEpoch 7/10\n1313/1313 [==============================] - 128s 97ms/step - loss: 0.2374 - accuracy: 0.9140 - val_loss: 0.2272 - val_accuracy: 0.9193\nEpoch 8/10\n1313/1313 [==============================] - 131s 100ms/step - loss: 0.2139 - accuracy: 0.9214 - val_loss: 0.2285 - val_accuracy: 0.9238\nEpoch 9/10\n1313/1313 [==============================] - 134s 102ms/step - loss: 0.2003 - accuracy: 0.9254 - val_loss: 0.2046 - val_accuracy: 0.9276\nEpoch 10/10\n1313/1313 [==============================] - 132s 101ms/step - loss: 0.1935 - accuracy: 0.9290 - val_loss: 0.1961 - val_accuracy: 0.9298\n" ], [ "#Evaluating the model\ny_hat = 
model.predict(X_test)\ny_hat_classes = np.argmax(y_hat, axis=1)\nprint(classification_report(y_test_orig, y_hat_classes, target_names=class_names))", "              precision    recall  f1-score   support\n\n T-shirt/top       0.90      0.84      0.87      1000\n     Trouser       1.00      0.98      0.99      1000\n    Pullover       0.87      0.91      0.89      1000\n       Dress       0.93      0.91      0.92      1000\n        Coat       0.87      0.88      0.88      1000\n      Sandal       0.99      0.98      0.99      1000\n       Shirt       0.75      0.79      0.77      1000\n     Sneaker       0.96      0.98      0.97      1000\n         Bag       0.99      0.98      0.99      1000\n  Ankle boot       0.98      0.97      0.97      1000\n\n    accuracy                           0.92     10000\n   macro avg       0.92      0.92      0.92     10000\nweighted avg       0.92      0.92      0.92     10000\n\n" ], [ "#Plot optimization history\npd.DataFrame(history.history).plot()\nplt.show()", "_____no_output_____" ], [ "score = model.evaluate(X_test, y_test)\n\n#Check model performance\n\nprint('Loss: {:.4f}'.format(score[0]))\nprint('Accuracy: {:.4f}'.format(score[1]))", "313/313 [==============================] - 5s 16ms/step - loss: 0.2137 - accuracy: 0.9221\nLoss: 0.2137\nAccuracy: 0.9221\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a6081afce6d397c1b3ea3f127badd362310b18d
40,299
ipynb
Jupyter Notebook
DeepLearning/ipython(guide)/cnn_for_classification.ipynb
ZeinabTaghavi/text_multi_label_classification
dbbeafab16e201bdaed56c581562be68ddaeba7c
[ "MIT" ]
null
null
null
DeepLearning/ipython(guide)/cnn_for_classification.ipynb
ZeinabTaghavi/text_multi_label_classification
dbbeafab16e201bdaed56c581562be68ddaeba7c
[ "MIT" ]
null
null
null
DeepLearning/ipython(guide)/cnn_for_classification.ipynb
ZeinabTaghavi/text_multi_label_classification
dbbeafab16e201bdaed56c581562be68ddaeba7c
[ "MIT" ]
null
null
null
40,299
40,299
0.61875
[ [ [ "from google.colab import drive\ndrive.mount('/content/drive')", "Mounted at /content/drive\n" ], [ "!pip3 install glove_python", "Requirement already satisfied: glove_python in /usr/local/lib/python3.6/dist-packages (0.1.0)\nRequirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from glove_python) (1.4.1)\nRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from glove_python) (1.19.4)\n" ], [ "import os\nos.chdir('/content/drive/MyDrive/sharif/DeepLearning/ipython(guide)')", "_____no_output_____" ], [ "import numpy as np\nimport codecs\nimport os\nimport random\nimport pandas\nfrom keras import backend as K\nfrom keras.models import Model\nfrom keras.layers.embeddings import Embedding\nfrom keras.layers import Input, Dense, Lambda, Permute, Dropout\nfrom keras.layers import Conv2D, MaxPooling1D\nfrom keras.optimizers import SGD\nimport ast\nfrom glove import Glove,Corpus", "_____no_output_____" ], [ "import re\nfrom sklearn.preprocessing import MultiLabelBinarizer\n\nlimit_number = 750\ndata = pandas.read_csv('../Data/limited_to_'+str(limit_number)+'.csv',index_col=0,converters={'body': eval})\ndata = data.dropna().reset_index(drop=True)\nX = data[\"body\"].values.tolist()\ny = pandas.read_csv('../Data/limited_to_'+str(limit_number)+'.csv')\nlabels = []\ntag=[]\nfor item in y['tag']:\n labels += [i for i in re.sub('\\\"|\\[|\\]|\\'| |=','',item.lower()).split(\",\") if i!='' and i!=' ']\n tag.append([i for i in re.sub('\\\"|\\[|\\]|\\'| |=','',item.lower()).split(\",\") if i!='' and i!=' '])\nlabels = list(set(labels))\nmlb = MultiLabelBinarizer()\nY=mlb.fit_transform(tag)", "_____no_output_____" ], [ "len(labels)", "_____no_output_____" ], [ "data.head()", "_____no_output_____" ], [ "sentence_maxlen = max(map(len, (d for d in X)))\nprint('sentence maxlen', sentence_maxlen)", "sentence maxlen 300\n" ], [ "# vocab = []\n# for d in X:\n# for w in d:\n# if w not in vocab: vocab.append(w)\n# vocab = sorted(vocab)\n# vocab_size = len(vocab)\n# print('vocab examples:', vocab[:10])\n# pandas.DataFrame(vocab).to_csv(\"vocab.csv\",header=None)", "_____no_output_____" ], [ "import re\n\nfreq_dist = pandas.read_csv('../Data/FreqDist_sorted_'+str(limit_number)+'.csv',index_col=False)\nvocab=[]\nfor item in freq_dist[\"word\"]:\n try:\n word = re.sub(r\"[\\\\u200c]\",\"\",item.replace(\" \",\"\"))\n if word!=' ' and word!= None and word!='':\n vocab.append(word)\n except:\n # print(word)\n pass\n ", "_____no_output_____" ], [ "len(vocab) , len(freq_dist)", "_____no_output_____" ], [ "vocab = sorted(vocab)\nvocab_size = len(vocab)", "_____no_output_____" ], [ "print('vocab size', len(vocab))\nw2i = {w:i for i,w in enumerate(vocab)}\n# i2w = {i:w for i,w in enumerate(vocab)}", "vocab size 186516\n" ], [ "def vectorize(data, sentence_maxlen, w2i):\n vec_data = []\n \n for d in data:\n try:\n \n vec = [w2i[w] for w in d if w in w2i]\n pad_len = max(0, sentence_maxlen - len(vec))\n vec += [0] * pad_len\n vec_data.append(vec)\n # print('-----------------------')\n except:\n print(type(d), d==None)\n \n vec_data = np.array(vec_data)\n \n return vec_data\n\nvecX = vectorize(X, sentence_maxlen, w2i)\nvecY=Y", "_____no_output_____" ], [ "from sklearn.model_selection import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(vecX, vecY, test_size=0.2)\nX_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25)\nprint('train: ', X_train.shape , '\\ntest: ', X_test.shape , '\\nval: ', X_val.shape)", "train: 
(12935, 300) \ntest: (4312, 300) \nval: (4312, 300)\n" ], [ "corpus = Corpus() \r\n#Training the corpus to generate the co occurence matrix which is used in GloVe\r\ncorpus.fit(X, window=10)\r\n\r\nglove = Glove(no_components=5, learning_rate=0.05) \r\nglove.fit(corpus.matrix, epochs=30, no_threads=4, verbose=True)\r\nglove.add_dictionary(corpus.dictionary)\r\nglove.save('glove.model')", "Performing 30 training epochs with 4 threads\nEpoch 0\nEpoch 1\nEpoch 2\nEpoch 3\nEpoch 4\nEpoch 5\nEpoch 6\nEpoch 7\nEpoch 8\nEpoch 9\nEpoch 10\nEpoch 11\nEpoch 12\nEpoch 13\nEpoch 14\nEpoch 15\nEpoch 16\nEpoch 17\nEpoch 18\nEpoch 19\nEpoch 20\nEpoch 21\nEpoch 22\nEpoch 23\nEpoch 24\nEpoch 25\nEpoch 26\nEpoch 27\nEpoch 28\nEpoch 29\n" ], [ "from gensim.scripts.glove2word2vec import glove2word2vec\r\nglove2word2vec(glove_input_file='glove.model', word2vec_output_file=\"gensim_glove_vectors.txt\") \r\nfrom gensim.models.keyedvectors import KeyedVectors\r\nglove_embd_w = KeyedVectors.load_word2vec_format(\"gensim_glove_vectors.txt\", binary=False)", "_____no_output_____" ], [ "# def load_glove_weights(glove_dir, embd_dim, vocab_size, word_index):\n# embeddings_index = {}\n# f = open(os.path.join(glove_dir, 'glove.6B.' + str(embd_dim) + 'd.txt'))\n# for line in f:\n# values = line.split()\n# word = values[0]\n# coefs = np.asarray(values[1:], dtype='float32')\n# embeddings_index[word] = coefs\n# f.close()\n\n# print('Found %s word vectors.' % len(embeddings_index)) \n# embedding_matrix = np.zeros((vocab_size, embd_dim))\n# print('embed_matrix.shape', embedding_matrix.shape)\n# for word, i in word_index.items():\n# embedding_vector = embeddings_index.get(word)\n# if embedding_vector is not None:\n# # words not found in embedding index will be all-zeros.\n# embedding_matrix[i] = embedding_vector\n\n# return embedding_matrix\n\n# embd_dim = 300\n# glove_embd_w = load_glove_weights('./dataset', embd_dim, vocab_size, w2i)\n###########################\n# import gensim\n\n# embd_dim = 300\n# embed_model = gensim.models.Word2Vec(X, size=embd_dim, window=5, min_count=5)\n# embed_model.save('word2vec_model')\n\n# embed_model=gensim.models.Word2Vec.load('word2vec_model')\n############################\n\n\n", "Performing 30 training epochs with 4 threads\nEpoch 0\nEpoch 1\nEpoch 2\nEpoch 3\nEpoch 4\nEpoch 5\nEpoch 6\nEpoch 7\nEpoch 8\nEpoch 9\nEpoch 10\nEpoch 11\nEpoch 12\nEpoch 13\nEpoch 14\nEpoch 15\nEpoch 16\nEpoch 17\nEpoch 18\nEpoch 19\nEpoch 20\nEpoch 21\nEpoch 22\nEpoch 23\nEpoch 24\nEpoch 25\nEpoch 26\nEpoch 27\nEpoch 28\n" ], [ "embd_dim=300\nglove_embd_w = np.zeros((vocab_size, embd_dim))\nfor word, i in w2i.items():\n # if word in embed_model.wv.vocab:\n embedding_vector =glove[word]\n \n # words not found in embedding index will be all-zeros.\n glove_embd_w[i] = embedding_vector", "_____no_output_____" ], [ "def Net(vocab_size, embd_size, sentence_maxlen, glove_embd_w):\n sentence = Input((sentence_maxlen,), name='SentenceInput')\n \n # embedding\n embd_layer = Embedding(input_dim=vocab_size, \n output_dim=embd_size, \n weights=[glove_embd_w], \n trainable=False,\n name='shared_embd')\n embd_sentence = embd_layer(sentence)\n embd_sentence = Permute((2,1))(embd_sentence)\n embd_sentence = Lambda(lambda x: K.expand_dims(x, -1))(embd_sentence)\n \n # cnn\n cnn = Conv2D(1, \n kernel_size=(3, sentence_maxlen),\n activation='relu')(embd_sentence)\n cnn = Lambda(lambda x: K.sum(x, axis=3))(cnn)\n cnn = MaxPooling1D(3)(cnn)\n cnn = Lambda(lambda x: K.sum(x, axis=2))(cnn)\n out = Dense(len(labels), 
activation='sigmoid')(cnn)\n    \n    sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)\n    model = Model(inputs=sentence, outputs=out, name='sentence_classification')\n    model.compile(optimizer=sgd, loss='binary_crossentropy', metrics=[\"accuracy\", \"binary_accuracy\",\n                                                                      \"categorical_accuracy\",]) \n    return model\n\nmodel = Net(vocab_size, embd_dim, sentence_maxlen, glove_embd_w)\nprint(model.summary())\n", "Model: \"sentence_classification\"\n_________________________________________________________________\nLayer (type)                 Output Shape              Param #   \n=================================================================\nSentenceInput (InputLayer)   [(None, 4425)]            0         \n_________________________________________________________________\nshared_embd (Embedding)      (None, 4425, 300)         55954800  \n_________________________________________________________________\npermute_2 (Permute)          (None, 300, 4425)         0         \n_________________________________________________________________\nlambda_6 (Lambda)            (None, 300, 4425, 1)      0         \n_________________________________________________________________\nconv2d_2 (Conv2D)            (None, 298, 1, 1)         13276     \n_________________________________________________________________\nlambda_7 (Lambda)            (None, 298, 1)            0         \n_________________________________________________________________\nmax_pooling1d_2 (MaxPooling1 (None, 99, 1)             0         \n_________________________________________________________________\nlambda_8 (Lambda)            (None, 99)                0         \n_________________________________________________________________\ndense_2 (Dense)              (None, 78)                7800      \n=================================================================\nTotal params: 55,975,876\nTrainable params: 21,076\nNon-trainable params: 55,954,800\n_________________________________________________________________\nNone\n" ], [ "model.fit(X_train, y_train,\n          batch_size=32,\n          epochs=5,\n          validation_data=(X_val, y_val)\n          )", "Epoch 1/5\n405/405 [==============================] - 472s 1s/step - loss: 0.5487 - accuracy: 0.0169 - categorical_accuracy: 0.0169 - val_loss: 0.5098 - val_accuracy: 0.0039 - val_categorical_accuracy: 0.0039\nEpoch 2/5\n405/405 [==============================] - 463s 1s/step - loss: 0.4770 - accuracy: 0.0036 - categorical_accuracy: 0.0036 - val_loss: 0.4466 - val_accuracy: 0.0039 - val_categorical_accuracy: 0.0039\nEpoch 3/5\n405/405 [==============================] - 460s 1s/step - loss: 0.4208 - accuracy: 0.0036 - categorical_accuracy: 0.0036 - val_loss: 0.3968 - val_accuracy: 0.0039 - val_categorical_accuracy: 0.0039\nEpoch 4/5\n405/405 [==============================] - 461s 1s/step - loss: 0.3762 - accuracy: 0.0036 - categorical_accuracy: 0.0036 - val_loss: 0.3570 - val_accuracy: 0.0039 - val_categorical_accuracy: 0.0039\nEpoch 5/5\n405/405 [==============================] - 458s 1s/step - loss: 0.3404 - accuracy: 0.0036 - categorical_accuracy: 0.0036 - val_loss: 0.3249 - val_accuracy: 0.0039 - val_categorical_accuracy: 0.0039\n" ], [ "model.save('cnn_model_'+str(limit_number)+'_accuracy_categoricalaccuracy_.h5')\n\n# from keras.models import load_model\n# model = load_model('cnn_model.h5')", "_____no_output_____" ] ], [ [ "Evaluation", "_____no_output_____" ] ], [ [ "pred=model.predict(X_test)\r\n# For evaluation: If the probability > 0.5 you can say that it belongs to the class.", "_____no_output_____" ], [ "print(pred[0])#example", "[0.26067388 0.26465452 0.26532543 0.25120264 0.25749493 0.25086457\n 0.26477438 0.25336024 0.25230414 0.26047125 0.2650743  0.25736323\n 0.26310614 0.2512095  0.25120574 0.2645912  0.2627194  0.26353997\n 0.260335   0.2559106 
0.2621843 0.26167887 0.2528109 0.26419723\n 0.2569863 0.26507106 0.264779 0.26402062 0.26212293 0.2640186\n 0.25563282 0.25845373 0.2621318 0.2650153 0.25685564 0.2645614\n 0.26466 0.25580838 0.25476915 0.25268584 0.2652459 0.2607805\n 0.2579322 0.26467067 0.26471263 0.25154263 0.25319153 0.2646209\n 0.26456147 0.25665778 0.25230342 0.2518284 0.25381112 0.2581999\n 0.26293337 0.25329822 0.25526166 0.25463665 0.26458776 0.26419026\n 0.26517963 0.254416 0.26569057 0.26478577 0.2578693 0.26475787\n 0.25303414 0.25950643 0.26462162 0.2648151 0.26297373 0.26481962\n 0.26045814 0.25309712 0.25223267 0.2641736 0.25619102 0.26191378]\n" ], [ "y_pred=[]\nfor l in pred:\n temp=[]\n for value in l:\n if value>=0.5:\n temp.append(1)\n else:\n temp.append(0)\n y_pred.append(temp)", "_____no_output_____" ], [ "from sklearn.metrics import classification_report\r\n\r\nprint(classification_report(y_test, y_pred))", " precision recall f1-score support\n\n 0 0.00 0.00 0.00 116\n 1 0.00 0.00 0.00 150\n 2 0.00 0.00 0.00 146\n 3 0.00 0.00 0.00 16\n 4 0.00 0.00 0.00 78\n 5 0.00 0.00 0.00 15\n 6 0.00 0.00 0.00 163\n 7 0.00 0.00 0.00 32\n 8 0.00 0.00 0.00 26\n 9 0.00 0.00 0.00 104\n 10 0.00 0.00 0.00 152\n 11 0.00 0.00 0.00 69\n 12 0.00 0.00 0.00 138\n 13 0.00 0.00 0.00 12\n 14 0.00 0.00 0.00 15\n 15 0.00 0.00 0.00 148\n 16 0.00 0.00 0.00 139\n 17 0.00 0.00 0.00 185\n 18 0.00 0.00 0.00 103\n 19 0.00 0.00 0.00 71\n 20 0.00 0.00 0.00 123\n 21 0.00 0.00 0.00 140\n 22 0.00 0.00 0.00 47\n 23 0.00 0.00 0.00 161\n 24 0.00 0.00 0.00 63\n 25 0.00 0.00 0.00 145\n 26 0.00 0.00 0.00 142\n 27 0.00 0.00 0.00 144\n 28 0.00 0.00 0.00 121\n 29 0.00 0.00 0.00 132\n 30 0.00 0.00 0.00 68\n 31 0.00 0.00 0.00 83\n 32 0.00 0.00 0.00 112\n 33 0.00 0.00 0.00 152\n 34 0.00 0.00 0.00 91\n 35 0.00 0.00 0.00 149\n 36 0.00 0.00 0.00 151\n 37 0.00 0.00 0.00 87\n 38 0.00 0.00 0.00 49\n 39 0.00 0.00 0.00 22\n 40 0.00 0.00 0.00 147\n 41 0.00 0.00 0.00 109\n 42 0.00 0.00 0.00 66\n 43 0.00 0.00 0.00 141\n 44 0.00 0.00 0.00 147\n 45 0.00 0.00 0.00 20\n 46 0.00 0.00 0.00 24\n 47 0.00 0.00 0.00 148\n 48 0.00 0.00 0.00 167\n 49 0.00 0.00 0.00 75\n 50 0.00 0.00 0.00 35\n 51 0.00 0.00 0.00 22\n 52 0.00 0.00 0.00 43\n 53 0.00 0.00 0.00 98\n 54 0.00 0.00 0.00 128\n 55 0.00 0.00 0.00 33\n 56 0.00 0.00 0.00 61\n 57 0.00 0.00 0.00 56\n 58 0.00 0.00 0.00 148\n 59 0.00 0.00 0.00 144\n 60 0.00 0.00 0.00 146\n 61 0.00 0.00 0.00 50\n 62 0.00 0.00 0.00 129\n 63 0.00 0.00 0.00 145\n 64 0.00 0.00 0.00 87\n 65 0.00 0.00 0.00 150\n 66 0.00 0.00 0.00 30\n 67 0.00 0.00 0.00 108\n 68 0.00 0.00 0.00 158\n 69 0.00 0.00 0.00 141\n 70 0.00 0.00 0.00 128\n 71 0.00 0.00 0.00 160\n 72 0.00 0.00 0.00 145\n 73 0.00 0.00 0.00 32\n 74 0.00 0.00 0.00 22\n 75 0.00 0.00 0.00 156\n 76 0.00 0.00 0.00 63\n 77 0.00 0.00 0.00 116\n\n micro avg 0.00 0.00 0.00 7838\n macro avg 0.00 0.00 0.00 7838\nweighted avg 0.00 0.00 0.00 7838\n samples avg 0.00 0.00 0.00 7838\n\n" ], [ "", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
4a60901d64cc853f1a7f09f616217d60e901a5a6
32,250
ipynb
Jupyter Notebook
code-advcd-best-practice.ipynb
snowdj/coding-for-economists
2493c94fafccde7f36c933dfef658d8fc9147330
[ "MIT" ]
null
null
null
code-advcd-best-practice.ipynb
snowdj/coding-for-economists
2493c94fafccde7f36c933dfef658d8fc9147330
[ "MIT" ]
null
null
null
code-advcd-best-practice.ipynb
snowdj/coding-for-economists
2493c94fafccde7f36c933dfef658d8fc9147330
[ "MIT" ]
null
null
null
50.233645
608
0.615721
[ [ [ "(code-advcd-best-practice)=\n# Tools for Better Coding", "_____no_output_____" ], [ "## Introduction\n\nThis chapter covers the tools that will help you to write better code. This includes practical topics such as debugging code, logging, linting, and the magic of auto-formatting.\n\nAs ever, you may need to `conda install packagename` or `pip install packagename` on the terminal before being able to use some of the packages that are featured.", "_____no_output_____" ], [ "## Auto-magically improving your code\n\nIn the previous chapter, we met the idea of *code style*: even for code that runs, *how* you write it matters for readability. (And it goes without saying that you don't want bugs in your code that stop it running at all.) It is possible to catch some errors, to flag style issues, and even to re-format code to comply with a code style automatically. In this section, we'll see how to use tools to perform these functions automatically.\n\n### Linting\n\nLinters are tools that analyse code for programmatic and stylistic errors, assuming you have declared a style. A linting tool flags any potential errors and deviations from style before you even run it. When you run a linter, you get a report of what line the issue is on and why it has been raised. They are supposedly named after the lint trap in a clothes dryer because of the way they catch small errors that could have big effects.\n\nSome of the most popular linters in Python are [**flake8**](https://flake8.pycqa.org/en/latest/), [**pycodestyle**](https://pycodestyle.pycqa.org/en/latest/intro.html), and [**pylint**](https://www.pylint.org/).\n\nLet's see an example of running a linter. VS Code has direct integration with a range of linters. To get going, use `⇧⌘P` (Mac) and then type 'Python Select Linter'. In the example below, we'll use **flake8** (and **pylance**, another VS Code extension). Let's pretend we have a script, `test.py`, containing\n\n```python\nlist_defn = [1,5, 6,\n7]\n\ndef this_is_a_func():\n print('hello')\n\nprint(X)\n\nimport numpy as np\n```\n\nTo see the linting report, press <kbd>^</kbd> + <kbd>\\`</kbd> (Mac) or <kbd>ctrl</kbd> + <kbd>`</kbd> (otherwise) and navigate to the 'Problems' tab. We get a whole load of error messages about this script, here are a few:\n\n- ⓧ missing whitespace after ',' flake8(E231) 1, 15\n- ⓧ continuation line under-indented for visual indent flake8(E128) 2, 1\n- ⓧ expected 2 blank lines, found 1 flake8(E302) 4, 1\n- ⓧ indentation is not a multiple of 4 flake8(E111) 5, 4\n- ⓧ undefined name 'X' flake8(F821) 7, 7\n- ⓧ module level import not at top of file flake8(E402) 9, 1\n- ⓧ 'numpy as np' imported but unused flake8(F3401) 9, 1\n- ⚠ \"X\" is not defined Pylance(reportUndefinedVariable) 7, 7\n- ⚠ no newline at end of file flake8(W292) 78, 338\n\neach message is a warning or error that says what the problem is (for example, missing whitespace after ','), what library is reporting it (mostly flake8 here), the name of the rule that has been broken (E231), and the line, row position (1, 15). Very helpfully, we get an undefined name message for variable `X`, this is especially handy because it would cause an error on execution otherwise. The same goes for the indentation message (indentation matters!). You can customise your [linting settings](https://code.visualstudio.com/docs/python/linting) in VS Code too.\n\nAlthough the automatic linting offered by an IDE is very convenient, it's not the only way to use linting tools. You can also run them from the command line. 
For example, for **flake8**, the command is `flake8 test.py`.\n\n### Formatting\n\nIt's great to find out all the ways in which you are failing with respect to code style from a linter but wouldn't it be *even* better if you could fix those style issues automatically? The answer is clearly yes! This is where formatters come in; they can take valid code and forcibly apply a code style to them. This is really handy in practice for all kinds of reasons.\n\nThe most popular code formatters in Python are probably: [**yapf**](https://github.com/google/yapf), 'yet another Python formatter', from Google; [**autopep8**](https://github.com/hhatto/autopep8), which applies PEP8 to your code; and [**black**](https://black.readthedocs.io/en/stable/), the 'uncompromising formatter' that is very opinionated (\"any colour, as long as it's black\").\n\nThere are two ways to use formatters, line-by-line (though **black** doesn't work in this mode) or on an entire script at once. VS Code offers an integration with formatters. To select a formatter in VS Code, bring up the settings using <kbd>⌘</kbd> + <kbd>,</kbd> (Mac) or <kbd>ctrl</kbd> + <kbd>,</kbd> (otherwise) and type 'python formatting provider' and you can choose from autopep8, black, and yapf. \n\nIf you choose **autopep8** and then open a script you can format a *selection* of code by pressing <kbd>⌘</kbd> + <kbd>k</kbd>, <kbd>⌘</kbd> + <kbd>f</kbd> (Mac) or <kbd>ctrl</kbd> + <kbd>k</kbd>, <kbd>ctrl</kbd> + <kbd>f</kbd> (otherwise). They can also (and only, in the case of **black**) be used from the command line. For instance, to use **black**, the command is `black test.py`, assuming you have it installed.\n\nLet's see an example of a poorly styled script and see what happens when we select all lines and use <kbd>ctrl</kbd> + <kbd>k</kbd>, <kbd>ctrl</kbd> + <kbd>f</kbd> to auto format with **autopep8**. The contents of `test.py` before formatting are:\n\n```python\ndef very_important_function(y_val,debug = False, keyword_arg=0, another_arg =2):\n X = np.linspace(0,10,5)\n return X+ y_val +keyword_arg\nvery_important_function(2)\n\nlist_defn = [1,\n2,\n3,\n5,\n6,\n7]\n\nimport numpy as np\n```\n\nand, after running the auto-formatting command,\n\n```python\nimport numpy as np\n\n\ndef very_important_function(y_val, debug=False, keyword_arg=0, another_arg=2):\n X = np.linspace(0, 10, 5)\n return X + y_val + keyword_arg\n\n\nvery_important_function(2)\n\nlist_defn = [1,\n 2,\n 3,\n 5,\n 6,\n 7]\n```\n\nSo what did the formatter do? Many things. It moved the import to the top, put two blank lines after the function definition, removed whitespace around keyword arguments, added a new line at the end, and fixed some of the indentation. The different formatters have different strengths and weaknesses; for example, **black** is not so good at putting imports in the right place but excels at splitting up troublesome wide lines. If you need a formatter that deals specifically with module imports, check out [**isort**](https://pycqa.github.io/isort/).\n\nApart from taking the pressure off you to always be thinking about code style, formatters can be useful when working collaboratively too. For some open source packages, maintainers ask that new code or changed code be run through a particular formatter if it is to be incorporated into the main branch. This helps ensure the code style is consistent regardless of who is writing it. 
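\n\nIf you want to experiment with a formatter without leaving Python (handy in a notebook), **black** can also be used as a library rather than only as a command line tool. Here is a minimal sketch (assuming **black** is installed; `format_str` and `FileMode` are its library entry points, and the messy one-liner is purely illustrative):\n\n```python\nimport black\n\n# a call expression with messy spacing, borrowed from the example above\nmessy = 'very_important_function( 1,2,   keyword_arg = 0 )'\n\n# format_str applies black's style and returns the formatted code as a string\nprint(black.format_str(messy, mode=black.FileMode()))\n```\n\nBecause the formatted code comes back as a string, it is easy to compare the before and after versions programmatically.\n\n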
Running the code formatter can even be automated to happen every time someone *commits* some code to a shared code repository too, using something called a *pre-commit hook*.\n\nThere is a package that can run **Black** on Jupyter Notebooks too: [**black-nb**](https://pypi.org/project/black-nb/).", "_____no_output_____" ], [ "## Debugging code\n\nComputers are *very* literal, so literal that unless you're perfectly precise about what you want, they will end up doing something different. When that happens, one of the most difficult issues in programming is to understand *why* the code isn't doing what you expected. When the code doesn't do what we expect, it's called a bug.\n\nBugs could be fundamental issues with the code you're using (in fact, the term originated because of a moth causing a problem in an early computer) and, if you find one of these, you should file an issue with the maintainers of the code. However, what's much more likely is that the instructions you gave aren't quite what is needed to produce the outcome that you want. And, in this case, you might need to *debug* the code: to find out which part of it isn't doing what you expect.\n\nEven with a small code base, it can be tricky to track down where the bug is: but don't fear, there are tools on hand to help you find it.\n\n### Print statements\n\nThe simplest, and I'm afraid to say the most common, way to debug code is to plonk `print` statements in the code. Let's take a common example in which we perform some simple array operations, here multiplying an array and then summing it with another array:", "_____no_output_____" ], [ "```python\nimport numpy as np\n\n\ndef array_operations(in_arr_one, in_arr_two):\n    out_arr = in_arr_one*1.5\n    out_arr = out_arr + in_arr_two\n    return out_arr\n\n\nin_vals_one = np.array([3, 2, 5, 16, '7', 8, 9, 22])\nin_vals_two = np.array([4, 7, 3, 23, 6, 8, 0])\n\nresult = array_operations(in_vals_one, in_vals_two)\nresult\n```\n\n```python\n---------------------------------------------------------------------------\nUFuncTypeError                            Traceback (most recent call last)\n<ipython-input-1-166160824d19> in <module>\n     11 in_vals_two = np.array([4, 7, 3, 23, 6, 8, 0])\n     12 \n---> 13 result = array_operations(in_vals_one, in_vals_two)\n     14 result\n\n<ipython-input-1-166160824d19> in array_operations(in_arr_one, in_arr_two)\n      3 \n      4 def array_operations(in_arr_one, in_arr_two):\n----> 5     out_arr = in_arr_one*1.5\n      6     out_arr = out_arr + in_arr_two\n      7     return out_arr\n\nUFuncTypeError: ufunc 'multiply' did not contain a loop with signature matching types (dtype('<U32'), dtype('<U32')) -> dtype('<U32')\n```", "_____no_output_____" ], [ "Oh no! We've got a `UFuncTypeError` here, perhaps not the most illuminating error message we've ever seen. We'd like to know what's going wrong here. 
The `Traceback` did give us a hint about where the issue occurred though; it happens in the multiplication line of the function we wrote.\n\nTo debug the error with print statements, we might re-run the code like this:", "_____no_output_____" ], [ "```python\ndef array_operations(in_arr_one, in_arr_two):\n print(f'in_arr_one is {in_arr_one}')\n out_arr = in_arr_one*1.5\n out_arr = out_arr + in_arr_two\n return out_arr\n\n\nin_vals_one = np.array([3, 2, 5, 16, '7', 8, 9, 22])\nin_vals_two = np.array([4, 7, 3, 23, 6, 8, 0])\n\nresult = array_operations(in_vals_one, in_vals_two)\nresult\n```\n\n```\nin_arr_one is ['3' '2' '5' '16' '7' '8' '9' '22']\n```\n\n```python\n---------------------------------------------------------------------------\nUFuncTypeError Traceback (most recent call last)\n<ipython-input-2-6a04719bc0ff> in <module>\n 9 in_vals_two = np.array([4, 7, 3, 23, 6, 8, 0])\n 10 \n---> 11 result = array_operations(in_vals_one, in_vals_two)\n 12 result\n\n<ipython-input-2-6a04719bc0ff> in array_operations(in_arr_one, in_arr_two)\n 1 def array_operations(in_arr_one, in_arr_two):\n 2 print(f'in_arr_one is {in_arr_one}')\n----> 3 out_arr = in_arr_one*1.5\n 4 out_arr = out_arr + in_arr_two\n 5 return out_arr\n\nUFuncTypeError: ufunc 'multiply' did not contain a loop with signature matching types (dtype('<U32'), dtype('<U32')) -> dtype('<U32')\n```", "_____no_output_____" ], [ "What can we tell from the values of `in_arr_one` that are now being printed? Well, they seem to have quote marks around them and what that means is that they're strings, *not* floating point numbers or integers! Multiplying a string by 1.5 doesn't make sense here, so that's our error. If we did this, we might then trace the origin of that array back to find out where it was defined and see that instead of `np.array([3, 2, 5, 16, 7, 8, 9, 22])` being declared, we have `np.array([3, 2, 5, 16, '7', 8, 9, 22])` instead and `numpy` decides to cast the whole array as a string to ensure consistency.\n\nLet's fix that problem by turning `'7'` into `7` and run it again:", "_____no_output_____" ], [ "```python\ndef array_operations(in_arr_one, in_arr_two):\n out_arr = in_arr_one*1.5\n out_arr = out_arr + in_arr_two\n return out_arr\n\n\nin_vals_one = np.array([3, 2, 5, 16, 7, 8, 9, 22])\nin_vals_two = np.array([4, 7, 3, 23, 6, 8, 0])\n\nresult = array_operations(in_vals_one, in_vals_two)\nresult\n```\n\n```python\n---------------------------------------------------------------------------\nValueError Traceback (most recent call last)\n<ipython-input-3-ebd3efde9b3e> in <module>\n 8 in_vals_two = np.array([4, 7, 3, 23, 6, 8, 0])\n 9 \n---> 10 result = array_operations(in_vals_one, in_vals_two)\n 11 result\n\n<ipython-input-3-ebd3efde9b3e> in array_operations(in_arr_one, in_arr_two)\n 1 def array_operations(in_arr_one, in_arr_two):\n 2 out_arr = in_arr_one*1.5\n----> 3 out_arr = out_arr + in_arr_two\n 4 return out_arr\n 5 \n\nValueError: operands could not be broadcast together with shapes (8,) (7,) \n```", "_____no_output_____" ], [ "Still not working! But we've moved on to a different error now. 
We can still use a print statement to debug this one, which seems to be related to the shapes of variables passed into the function:", "_____no_output_____" ], [ "```python\ndef array_operations(in_arr_one, in_arr_two):\n    print(f'in_arr_one shape is {in_arr_one.shape}')\n    out_arr = in_arr_one*1.5\n    print(f'intermediate out_arr shape is {out_arr.shape}')\n    print(f'in_arr_two shape is {in_arr_two.shape}')\n    out_arr = out_arr + in_arr_two\n    return out_arr\n\n\nin_vals_one = np.array([3, 2, 5, 16, 7, 8, 9, 22])\nin_vals_two = np.array([4, 7, 3, 23, 6, 8, 0])\n\nresult = array_operations(in_vals_one, in_vals_two)\nresult\n```\n\n```\nin_arr_one shape is (8,)\nintermediate out_arr shape is (8,)\nin_arr_two shape is (7,)\n```\n\n```python\n---------------------------------------------------------------------------\nValueError Traceback (most recent call last)\n<ipython-input-4-4961f476c7eb> in <module>\n 11 in_vals_two = np.array([4, 7, 3, 23, 6, 8, 0])\n 12 \n---> 13 result = array_operations(in_vals_one, in_vals_two)\n 14 result\n\n<ipython-input-4-4961f476c7eb> in array_operations(in_arr_one, in_arr_two)\n 4 print(f'intermediate out_arr shape is {out_arr.shape}')\n 5 print(f'in_arr_two shape is {in_arr_two.shape}')\n----> 6 out_arr = out_arr + in_arr_two\n 7 return out_arr\n 8 \n\nValueError: operands could not be broadcast together with shapes (8,) (7,) \n```", "_____no_output_____" ], [ "The print statements now tell us the shapes of the arrays as we go through the function. We can see that in the line before the `return` statement the two arrays that are being combined using the `+` operator don't have the same shape, so we're effectively adding two vectors from two differently dimensioned vector spaces and, understandably, we are being called out on our nonsense. To fix this problem, we would have to ensure that the input arrays are the same shape (it looks like we may have just missed a value from `in_vals_two`).\n\n`print` statements are great for a quick bit of debugging and you are likely to want to use them more frequently than any other debugging tool. However, for complex, nested code debugging, they aren't always very efficient and you will sometimes feel like you are playing battleships, continually refining where they should go until you have pinpointed the actual problem, so they're far from perfect. Fortunately, there are other tools in the debugging toolbox...\n", "_____no_output_____" ], [ "### Icecream and better print statements\n\nTyping `print` statements with arguments that help you debug code can become tedious. There are better ways to work, which we'll come to, but we must also recognise that `print` is used widely in practice. So what if we had a function that was as easy to use as `print` but better geared toward debugging? Well, we do, and it's called [**icecream**](https://github.com/gruns/icecream), and it's available in most major languages, including Python, Dart, Rust, JavaScript, C++, PHP, Go, Ruby, and Java. \n\nLet's take an example from earlier in this chapter, where we used a `print` statement to display the contents of `in_arr_one` in advance of the line that caused an error being run. 
All we will do now is switch out `print(f'in_arr_one is {in_arr_one}')` for `ic(in_arr_one)`.", "_____no_output_____" ], [ "```python\nfrom icecream import ic\n\ndef array_operations(in_arr_one, in_arr_two):\n    # Old debug line using `print`\n    # print(f'in_arr_one is {in_arr_one}')\n    # new debug line:\n    ic(in_arr_one)\n    out_arr = in_arr_one*1.5\n    out_arr = out_arr + in_arr_two\n    return out_arr\n\n\nin_vals_one = np.array([3, 2, 5, 16, '7', 8, 9, 22])\nin_vals_two = np.array([4, 7, 3, 23, 6, 8, 0])\n\narray_operations(in_vals_one, in_vals_two)\n```\n\n```\nic| in_arr_one: array(['3', '2', '5', '16', '7', '8', '9', '22'], dtype='<U21')\n---------------------------------------------------------------------------\nUFuncTypeError Traceback (most recent call last)\n<ipython-input-6-9efd5fc1a1fe> in <module>\n 14 in_vals_two = np.array([4, 7, 3, 23, 6, 8, 0])\n 15 \n---> 16 array_operations(in_vals_one, in_vals_two)\n\n<ipython-input-6-9efd5fc1a1fe> in array_operations(in_arr_one, in_arr_two)\n 6 # new debug line:\n 7 ic(in_arr_one)\n----> 8 out_arr = in_arr_one*1.5\n 9 out_arr = out_arr + in_arr_two\n 10 return out_arr\n\nUFuncTypeError: ufunc 'multiply' did not contain a loop with signature matching types (dtype('<U32'), dtype('<U32')) -> dtype('<U32')\n```", "_____no_output_____" ], [ "What we get in terms of debugging output is `ic| in_arr_one: array(['3', '2', '5', '16', '7', '8', '9', '22'], dtype='<U21')`, which is quite similar to before apart from three important differences, all of which are advantages:\n\n1. it is easier and quicker to write `ic(in_arr_one)` than `print(f'in_arr_one is {in_arr_one}')`\n\n2. **icecream** automatically picks up the name of the variable, `in_arr_one`, and clearly displays its contents\n\n3. **icecream** shows us that `in_arr_one` is of `type` array and that it has the `dtype` of `U`, which stands for Unicode (i.e. a string). `<U21` just means that all strings in the array are less than 21 characters long.\n\n**icecream** has some other advantages relative to print statements too; for instance, it can tell you which lines were executed in which scripts if you call it without arguments:\n", "_____no_output_____" ], [ "```python\ndef foo():\n    ic()\n    print('first')\n    \n    if 10 < 20:\n        ic()\n        print('second')\n    else:\n        ic()\n        print('Never executed')\n\nfoo()\n```\n\n```\nic| <ipython-input-7-8ced0f8fcf82>:2 in foo() at 00:58:19.962\nic| <ipython-input-7-8ced0f8fcf82>:6 in foo() at 00:58:19.979\nfirst\nsecond\n```", "_____no_output_____" ], [ "And it can wrap assignments rather than living on its own lines:", "_____no_output_____" ], [ "```python\ndef half(i):\n    return ic(i) / 2\n\na = 6\nb = ic(half(a))\n```\n\n```\nic| i: 6\nic| half(a): 3.0\n```", "_____no_output_____" ], [ "All in all, if you find yourself using `print` to debug, you might find a one-time import of **icecream** followed by use of `ic` instead both more convenient and more effective.", "_____no_output_____" ], [ "### **rich** for beautiful debugging\n\n[**rich**](https://github.com/willmcgugan/rich) is much more than just a tool for debugging: it's a way to create beautiful renderings of all objects both in the terminal and in interactive Python windows. You can use it to build fantastic-looking command-line interfaces. 
Here, we'll see how it can help us debug what value a variable takes *and* what methods can be used on a variable via its `inspect` function.", "_____no_output_____" ] ], [ [ "from rich import inspect\n\nmy_list = [\"foo\", \"bar\"]\ninspect(my_list, methods=True)", "_____no_output_____" ] ], [ [ "Check out all of the many options for `inspect` using `help(inspect)`. We ran it here with `methods=True`, but there are plenty of other options.\n\n```{admonition} Exercise\nCreate a dictionary (it doesn't matter what's in it, but you could map the integer 1 to the letter \"a\"). Then use `inspect` with `methods=True` to find out about all the methods you can call on an object of type dict.\n```", "_____no_output_____" ], [ "### Debugging with the IDE\n\nIn this section, we'll learn about how your Integrated Development Environment, or IDE, can aid you with debugging. While we'll talk through the use of Visual Studio Code, which is free, directly supports Python, R, and other languages, and is especially rich, many of the features will be present in other IDEs too and the ideas are somewhat general. \n\nTo begin debugging using Visual Studio Code, get a script ready, for example `script.py`, that you'd like to debug. If your script has an error in it, a debug run will automatically run into it and stop on the error; alternatively you can click to the left of the line number in your script to create a *breakpoint* that your code will stop at anyway when in debug mode.\n\nTo begin a debug session, click on the play button partially covered by a bug that's on the left-hand ribbon of the VS Code window. It will bring up a menu. Click 'Run and debug' and select 'Python file'. The debugger will now start running the script you had open. When it reaches an error or a breakpoint, it will stop. \n\nWhy is this useful? Once the code stops, you can hover over any variables and see what's 'inside' them, which is useful for working out what's going on. Remember, in the examples above, we only saw variables that we asked for. Using the debugger, we can hover over any variable we're interested in without having to decide ahead of time! We can also see other useful bits of info such as the *call stack* of functions that have been called, what local (within the current scope) and global (available everywhere) variables have been defined, and we can nominate variables to watch too.\n\nPerhaps you now want to progress the code on from a breakpoint; you can do this too. You'll see that a menu has appeared with stop, restart, play, and other buttons on it. To skip over the next line of code, use the curved arrow over the dot. To dig into the next line of code, for example if it's a function, use the arrow pointing toward a dot. To carry on running the code, use the play button.\n\nThis is only really scratching the surface of what you can do with IDE-based debugging, but even that surface layer provides lots of really useful tools for finding out what's going on when your code executes.\n\nYou can find a short 'hello world!' debugging tutorial in the [official VS Code documentation](https://code.visualstudio.com/docs/python/python-tutorial#_configure-and-run-the-debugger).", "_____no_output_____" ], [ "## Logging\n\nLogging is a means of tracking events that happen when software runs. 
An event is recorded with a descriptive message that can optionally contain data about variables that are defined as the code is executing.\n\nLogging has two main purposes: to record events of interest, such as an error, and to act as an auditable account of what happened after the fact.\n\nAlthough Python has a built-in logger, we will see an example of logging using [**loguru**](https://github.com/Delgan/loguru), a package that makes logging a little easier and has some nice settings.\n\nLet's see how to log a debug message:", "_____no_output_____" ] ], [ [ "from loguru import logger\n\nlogger.debug(\"Simple logging!\")", "_____no_output_____" ] ], [ [ "The default message includes the time, the type of log entry it is, what bit of code it happened in (including a line number), and the message itself (basically all the info we need). There are different levels of log messages. They are:\n\n- CRITICAL\n- ERROR\n- WARNING\n- SUCCESS\n- INFO\n- DEBUG\n- TRACE\n\nYou can find advice on what level to use for what message [here](https://reflectoring.io/logging-levels/), but it will depend a bit on what you're using your logs for.\n\nWhat we've just seen are logging messages written out to the console, which doesn't persist. This is clearly no good for auditing what happened long after the fact (and it may not be that good for debugging either) so we also need a way to write a log to a file. This snippet of code\n\n```python\nlogger.add(\"file_{time}.log\")\n```\n\ntells **loguru** to send your logging messages to a *log file* instead of to the console. This is really handy for auditing what happened when your code executed long after it ran. You can choose any name for your log file; using \"{time}\" as part of the string is a shorthand that tells **loguru** to use the current datetime to name the file.\n\nLog files can become quite numerous and quite large, which you might not want. Those logs from 6 months ago may just be taking up space and not be all that useful, for example. So what you can also do is tell **loguru** to roll over to a new log file. Some examples of this would be `logger.add(\"file_1.log\", rotation=\"500 MB\")` to clean up a file after it reaches 500 MB in size, `rotation=\"12:00\"` to refresh the log file at lunch time, and `retention=\"10 days\"` to keep the file for 10 days.\n\nOne further feature that is worth being aware of is the capability to trace what caused errors, including the trace back through functions and modules, and report them in the log. Of course, you can debug these using the console, but sometimes having such complex errors written to a file (in full) can be handy. This example of a full traceback comes from the **loguru** documentation. The script would have:\n\n```python\nlogger.add(\"out.log\", backtrace=True, diagnose=True) # Caution, may leak sensitive data if used in production\n\ndef func(a, b):\n    return a / b\n\ndef nested(c):\n    try:\n        func(5, c)\n    except ZeroDivisionError:\n        logger.exception(\"What?!\")\n\nnested(0)\n```\n\nwhile the log file would record:\n\n```\n2018-07-17 01:38:43.975 | ERROR | __main__:nested:10 - What?!\nTraceback (most recent call last):\n\n File \"test.py\", line 12, in <module>\n nested(0)\n └ <function nested at 0x7f5c755322f0>\n\n> File \"test.py\", line 8, in nested\n func(5, c)\n │ └ 0\n └ <function func at 0x7f5c79fc2e18>\n\n File \"test.py\", line 4, in func\n return a / b\n │ └ 0\n └ 5\n\nZeroDivisionError: division by zero\n```", "_____no_output_____" ] ] ]
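To pull the logging ideas above together, here is a minimal sketch of how the pieces can be combined. The file name, rotation and retention values below are illustrative assumptions chosen for the example, not settings from the text; only `logger.add`, the level methods, and the `logger.catch` decorator are loguru's own API.

```python
from loguru import logger

# illustrative sink: the file name pattern and the rotation/retention
# values are assumptions, pick whatever suits your project
logger.add("analysis_{time}.log", level="INFO",
           rotation="10 MB", retention="10 days")


@logger.catch  # any uncaught exception inside the function is logged in full
def mean_of(values):
    logger.info("Computing the mean of {} values", len(values))
    if len(values) == 0:
        logger.warning("Empty input passed in")
    return sum(values) / len(values)  # raises ZeroDivisionError on empty input


mean_of([2, 4, 6])  # recorded at INFO level
mean_of([])         # the ZeroDivisionError ends up in the log with a traceback
```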
[ "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ] ]
4a6092886068d8b77fcfdc6d2174e2ad19e3b2d8
24,195
ipynb
Jupyter Notebook
week06/Feature Engineering.ipynb
romankop/SiriusDL
9aa848732ff530c45b5c2c83a18157927fc44d62
[ "MIT" ]
4
2021-04-02T15:52:52.000Z
2021-04-07T06:10:08.000Z
week06/Feature Engineering.ipynb
Firyuza/SiriusDL
4f3c69b8ca2d640efb01c16a66f1e26c36d8646e
[ "MIT" ]
2
2021-06-14T21:11:02.000Z
2021-06-30T20:03:39.000Z
week06/Feature Engineering.ipynb
romankop/SiriusDL
9aa848732ff530c45b5c2c83a18157927fc44d62
[ "MIT" ]
8
2021-04-07T07:38:20.000Z
2021-04-24T06:08:01.000Z
64.177719
1,521
0.581897
[ [ [ "import warnings\nwarnings.filterwarnings(\"ignore\")\n\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nplt.style.use('ggplot')\n\nimport torch\n\nprint(torch.__version__)\n\nimport torch.nn as nn\nimport torch.optim as optim\nimport torch.utils.data as data_utils\n\nfrom torch.utils.data import DataLoader, Dataset, Sampler\nfrom torch.utils.data.dataloader import default_collate\nfrom torch.utils.tensorboard import SummaryWriter\nfrom pytorch_lightning.metrics import Accuracy\n\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import LabelEncoder", "1.5.0\n" ], [ "INPUT_SIZE = 36\nHIDDEN_SIZE = 25\nOUTPUT_SIZE = 5\nLEARNING_RATE = 1e-2\nEPOCHS = 400\nBATCH_SIZE = 256\nEMBEDDING_SIZE = 5", "_____no_output_____" ], [ "class CustomDataset(Dataset):\n # Конструктор, где считаем датасет\n def __init__(self):\n X = pd.read_csv('./data/X_cat.csv', sep='\\t', index_col=0)\n target = pd.read_csv('./data/y_cat.csv', sep='\\t', index_col=0, names=['status']) # header=-1,\n\n weekday_columns = ['Weekday_0', 'Weekday_1', 'Weekday_2',\n 'Weekday_3', 'Weekday_4', 'Weekday_5', 'Weekday_6']\n weekdays = np.argmax(X[weekday_columns].values, axis=1)\n\n X.drop(weekday_columns, axis=1, inplace=True)\n\n \n X['Weekday_cos'] = np.cos(2 * np.pi / 7.) * weekdays\n X['Weekday_sin'] = np.sin(2 * np.pi / 7.) * weekdays\n\n X['Hour_cos'] = np.cos(2 * np.pi / 24.) * X['Hour'].values\n X['Hour_sin'] = np.sin(2 * np.pi / 24.) * X['Hour'].values\n\n X['Month_cos'] = np.cos(2 * np.pi / 12.) * X['Month'].values\n X['Month_sin'] = np.sin(2 * np.pi / 12.) * X['Month'].values\n\n X['Gender'] = np.argmax(X[['Sex_Female', 'Sex_Male', 'Sex_Unknown']].values, axis=1)\n\n X.drop(['Sex_Female', 'Sex_Male', 'Sex_Unknown'], axis=1, inplace=True)\n\n print(X.shape)\n print(X.head())\n\n target = target.iloc[:, :].values\n target[target == 'Died'] = 'Euthanasia'\n\n le = LabelEncoder()\n self.y = le.fit_transform(target)\n\n self.X = X.values\n\n self.columns = X.columns.values\n\n self.embedding_column = 'Gender'\n self.nrof_emb_categories = 3\n self.numeric_columns = ['IsDog', 'Age', 'HasName', 'NameLength', 'NameFreq', 'MixColor', 'ColorFreqAsIs',\n 'ColorFreqBase', 'TabbyColor', 'MixBreed', 'Domestic', 'Shorthair', 'Longhair',\n 'Year', 'Day', 'Breed_Chihuahua Shorthair Mix', 'Breed_Domestic Medium Hair Mix',\n 'Breed_Domestic Shorthair Mix', 'Breed_German Shepherd Mix', 'Breed_Labrador Retriever Mix',\n 'Breed_Pit Bull Mix', 'Breed_Rare',\n 'SexStatus_Flawed', 'SexStatus_Intact', 'SexStatus_Unknown',\n 'Weekday_cos', 'Weekday_sin', 'Hour_cos', 'Hour_sin',\n 'Month_cos', 'Month_sin']\n\n return\n\n def __len__(self):\n return len(self.X)\n\n # Переопределяем метод,\n # который достает по индексу наблюдение из датасет\n def __getitem__(self, idx):\n\n row = self.X[idx, :]\n\n row = {col: torch.tensor(row[i]) for i, col in enumerate(self.columns)}\n\n return row, self.y[idx]", "_____no_output_____" ], [ "class MLPNet(nn.Module):\n\n def __init__(self, input_size, hidden_size, output_size, nrof_cat, emb_dim,\n emb_columns, numeric_columns):\n super(MLPNet, self).__init__()\n self.emb_columns = emb_columns\n self.numeric_columns = numeric_columns\n\n self.emb_layer = torch.nn.Embedding(nrof_cat, emb_dim)\n\n self.feature_bn = torch.nn.BatchNorm1d(input_size)\n\n self.linear1 = torch.nn.Linear(input_size, hidden_size)\n self.linear1.apply(self.init_weights)\n self.bn1 = torch.nn.BatchNorm1d(hidden_size)\n\n self.linear2 = 
torch.nn.Linear(hidden_size, hidden_size)\n        self.linear2.apply(self.init_weights)\n        self.bn2 = torch.nn.BatchNorm1d(hidden_size)\n\n        self.linear3 = torch.nn.Linear(hidden_size, output_size)\n\n    def init_weights(self, m):\n        if type(m) == nn.Linear:\n            torch.nn.init.xavier_uniform_(m.weight)  # in-place Xavier initialisation\n            # m.bias.data.fill_(0.001)\n\n    def forward(self, x):\n        # the batch is a dict of tensors; category ids must be int64 for the embedding\n        emb_output = self.emb_layer(x[self.emb_columns].long())\n        numeric_feats = torch.tensor(pd.DataFrame(x)[self.numeric_columns].values, dtype=torch.float32)\n\n        concat_input = torch.cat([numeric_feats, emb_output], dim=1)\n        output = self.feature_bn(concat_input)\n\n        output = self.linear1(output)\n        output = self.bn1(output)\n        output = torch.relu(output)\n\n        output = self.linear2(output)\n        output = self.bn2(output)\n        output = torch.relu(output)\n\n        output = self.linear3(output)\n\n        # return raw logits: nn.CrossEntropyLoss applies log-softmax internally,\n        # so passing softmax probabilities through it would distort the loss\n        return output", "_____no_output_____" ], [ "def run_train(model, train_loader):\n    step = 0\n    for epoch in range(EPOCHS):\n        model.train()\n\n        for features, label in train_loader:\n            # Reset gradients\n            optimizer.zero_grad()\n\n            output = model(features)\n            # Calculate error and backpropagate\n            loss = criterion(output, label)\n            loss.backward()\n            acc = accuracy(output, label).item()\n\n            # Update weights with gradients\n            optimizer.step()\n\n            step += 1\n\n            if step % 100 == 0:\n                print('EPOCH %d STEP %d : train_loss: %f train_acc: %f' %\n                      (epoch, step, loss.item(), acc))\n\n\n    return step", "_____no_output_____" ], [ "animal_dataset = CustomDataset()\ntrain_loader = data_utils.DataLoader(dataset=animal_dataset,\n                                     batch_size=BATCH_SIZE, shuffle=True)\n\nmodel = MLPNet(INPUT_SIZE, HIDDEN_SIZE, OUTPUT_SIZE, animal_dataset.nrof_emb_categories,\n               EMBEDDING_SIZE,\n               animal_dataset.embedding_column, animal_dataset.numeric_columns)\n\ncriterion = nn.CrossEntropyLoss()\naccuracy = Accuracy()\n\noptimizer = optim.Adam(model.parameters(), lr=LEARNING_RATE)", "(26729, 29)\n IsDog Age HasName NameLength NameFreq MixColor ColorFreqAsIs \\\n0 1 365.0 1 7 0.000157 1 0.032919 \n1 0 365.0 1 5 0.000655 0 0.008092 \n2 1 730.0 1 6 0.000052 1 0.026293 \n3 0 21.0 0 7 0.285871 0 0.000471 \n4 1 730.0 0 7 0.285871 0 0.023831 \n\n ColorFreqBase TabbyColor MixBreed ... Breed_Domestic Shorthair Mix \\\n0 0.463624 0 1 ... 0 \n1 0.015005 1 1 ... 1 \n2 0.357521 0 1 ... 0 \n3 0.058418 0 1 ... 1 \n4 0.075353 0 0 ... 
0 \n\n Breed_German Shepherd Mix Breed_Labrador Retriever Mix \\\n0 0 0 \n1 0 0 \n2 0 0 \n3 0 0 \n4 0 0 \n\n Breed_Pit Bull Mix Breed_Rare SexStatus_Flawed SexStatus_Intact \\\n0 0 1 1 0 \n1 0 0 1 0 \n2 1 0 1 0 \n3 0 0 0 1 \n4 0 1 1 0 \n\n SexStatus_Unknown Weekday Gender \n0 0 1.795196 1 \n1 0 5.385587 0 \n2 0 4.487990 1 \n3 0 3.590392 1 \n4 0 3.590392 1 \n\n[5 rows x 29 columns]\n" ], [ "step = run_train(model, train_loader)", "EPOCH 0 STEP 100 : train_loss: 1.290938 train_acc: 0.601562\nEPOCH 1 STEP 200 : train_loss: 1.214884 train_acc: 0.687500\nEPOCH 2 STEP 300 : train_loss: 1.233881 train_acc: 0.660156\nEPOCH 3 STEP 400 : train_loss: 1.200605 train_acc: 0.703125\nEPOCH 4 STEP 500 : train_loss: 1.250242 train_acc: 0.652344\nEPOCH 5 STEP 600 : train_loss: 1.283096 train_acc: 0.609375\nEPOCH 6 STEP 700 : train_loss: 1.257185 train_acc: 0.636719\nEPOCH 7 STEP 800 : train_loss: 1.228623 train_acc: 0.664062\nEPOCH 8 STEP 900 : train_loss: 1.234338 train_acc: 0.664062\nEPOCH 9 STEP 1000 : train_loss: 1.246991 train_acc: 0.652344\nEPOCH 10 STEP 1100 : train_loss: 1.232167 train_acc: 0.679688\nEPOCH 11 STEP 1200 : train_loss: 1.259292 train_acc: 0.640625\nEPOCH 12 STEP 1300 : train_loss: 1.214962 train_acc: 0.683594\nEPOCH 13 STEP 1400 : train_loss: 1.250455 train_acc: 0.652344\nEPOCH 14 STEP 1500 : train_loss: 1.220163 train_acc: 0.679688\nEPOCH 15 STEP 1600 : train_loss: 1.232081 train_acc: 0.671875\nEPOCH 16 STEP 1700 : train_loss: 1.258399 train_acc: 0.644531\nEPOCH 17 STEP 1800 : train_loss: 1.211118 train_acc: 0.691406\nEPOCH 18 STEP 1900 : train_loss: 1.207053 train_acc: 0.707031\nEPOCH 19 STEP 2000 : train_loss: 1.260475 train_acc: 0.632812\nEPOCH 19 STEP 2100 : train_loss: 1.251267 train_acc: 0.657143\nEPOCH 20 STEP 2200 : train_loss: 1.221773 train_acc: 0.679688\nEPOCH 21 STEP 2300 : train_loss: 1.207833 train_acc: 0.687500\nEPOCH 22 STEP 2400 : train_loss: 1.183208 train_acc: 0.710938\nEPOCH 23 STEP 2500 : train_loss: 1.252097 train_acc: 0.660156\nEPOCH 24 STEP 2600 : train_loss: 1.203610 train_acc: 0.699219\nEPOCH 25 STEP 2700 : train_loss: 1.215616 train_acc: 0.691406\nEPOCH 26 STEP 2800 : train_loss: 1.257802 train_acc: 0.652344\nEPOCH 27 STEP 2900 : train_loss: 1.192939 train_acc: 0.710938\n" ] ] ]
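The sine/cosine transforms in `CustomDataset` above are an instance of cyclical feature encoding. The following standalone sketch (toy values, not the competition data) shows why it is used: on the unit circle, hour 23 ends up next to hour 0, whereas the raw integers put them 23 apart.

```python
import numpy as np

hours = np.array([0, 6, 12, 23])

# map each hour to an angle on the unit circle and keep its (cos, sin) pair
hour_cos = np.cos(2 * np.pi * hours / 24.0)
hour_sin = np.sin(2 * np.pi * hours / 24.0)
points = np.stack([hour_cos, hour_sin], axis=1)

# the distance between hour 23 and hour 0 in the encoded space is small (~0.26),
# while the raw integer representation makes them look maximally far apart
print(np.linalg.norm(points[3] - points[0]))
```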
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ] ]
4a60c0335690c74958a9169bafa053a5d5f6cb58
8,854
ipynb
Jupyter Notebook
dataxHWSp2021/In-class_Assignment_Feb9/solution/In-class_Assignment_Feb9.ipynb
UCBerkeley-SCET/DataX-Berkeley
f912d22c838b511d3ada4ecfa3548afd80437b74
[ "Apache-2.0" ]
28
2020-06-15T23:53:36.000Z
2022-03-19T09:27:02.000Z
dataxHWSp2021/In-class_Assignment_Feb9/solution/In-class_Assignment_Feb9.ipynb
UCBerkeley-SCET/DataX-Berkeley
f912d22c838b511d3ada4ecfa3548afd80437b74
[ "Apache-2.0" ]
4
2020-06-24T22:20:31.000Z
2022-02-28T01:37:36.000Z
dataxHWSp2021/In-class_Assignment_Feb9/solution/In-class_Assignment_Feb9.ipynb
UCBerkeley-SCET/DataX-Berkeley
f912d22c838b511d3ada4ecfa3548afd80437b74
[ "Apache-2.0" ]
78
2020-06-19T09:41:01.000Z
2022-02-05T00:13:29.000Z
22.819588
198
0.544838
[ [ [ "# Initialize Otter Grader\nimport otter\ngrader = otter.Notebook()", "_____no_output_____" ] ], [ [ "![data-x](https://raw.githubusercontent.com/afo/data-x-plaksha/master/imgsource/dx_logo.png)\n", "_____no_output_____" ], [ "\n# In-class Assignment (Feb 9)", "_____no_output_____" ], [ "Run the following two cells to load the required modules and read the data.", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.metrics import mean_squared_error", "_____no_output_____" ], [ "df = pd.read_csv(\"Video_Games_Sales_cleaned_sampled.csv\")\ndf.head(5)", "_____no_output_____" ] ], [ [ "## Exploring Data with Pandas", "_____no_output_____" ], [ "### Q1: \nHow many data points (rows) are there in this dataset? Store it in ```num_rows```.\n\n<!--\nBEGIN QUESTION\nname: q1\nmanual: false\n-->", "_____no_output_____" ] ], [ [ "# your code here\nnum_rows = df.shape[0] # SOLUTION\nprint(num_rows)", "_____no_output_____" ] ], [ [ "### Q2\nWhat are the max and min values in Global Sales? What about the quartiles (25%, 50%, and 75%)? Can you answer this question with a one-liner code?\n\n<!--\nBEGIN QUESTION\nname: q2\nmanual: false\n-->", "_____no_output_____" ] ], [ [ "# your code here\ndf[\"Global_Sales\"].describe() # SOLUTION", "_____no_output_____" ] ], [ [ "### Q3\nWhat are the unique genres and consoles that the dataset contains? Store them in ```genre_unique``` and ```console_unique```.\n\n<!--\nBEGIN QUESTION\nname: q3\nmanual: false\n-->", "_____no_output_____" ] ], [ [ "# your code here\ngenre_unique = df[\"Genre\"].unique() # SOLUTION\nconsole_unique = df[\"Console\"].unique() # SOLUTION\nprint(\"All genres:\", genre_unique)\nprint(\"All consoles:\", console_unique)", "_____no_output_____" ] ], [ [ "### Q4\nWhat are the top five games with the most global sales? \n\n<!--\nBEGIN QUESTION\nname: q4\nmanual: false\n-->", "_____no_output_____" ] ], [ [ "# your code here\ndf.sort_values(by=\"Global_Sales\",ascending=False).head(5) # SOLUTION", "_____no_output_____" ] ], [ [ "### Q5 (Optional: Do it if you had enough time)\nHow many games in the dataset are developed by Nintendo? What are their names?\n\n<!--\nBEGIN QUESTION\nname: q5\nmanual: false\n-->", "_____no_output_____" ] ], [ [ "# your code here\n# BEGIN SOLUTION\narr_name_by_nintendo = df.loc[df[\"Developer\"] == \"Nintendo\",\"Name\"]\nprint (arr_name_by_nintendo.nunique())\nprint (arr_name_by_nintendo.unique())\n# END SOLUTION", "_____no_output_____" ] ], [ [ "## Linear Regression", "_____no_output_____" ], [ "Suppose that you want to regress the global sales on four features: Critic_Score, Critic_Count, User_Score, and User_Count. \n\nThe input matrix $X$ and the output $y$ are given to you below.", "_____no_output_____" ] ], [ [ "## No need for modification, just run this cell \nX = df[['Critic_Score', 'Critic_Count', 'User_Score', 'User_Count']].values\ny = df[['Global_Sales']].values", "_____no_output_____" ] ], [ [ "### Q6\nUse train_test_split function in sklearn to split the dataset into training and test sets. Set 80% of the dataset aside for training and use the rest for testing. 
(set random_state=0)\n\n<!--\nBEGIN QUESTION\nname: q6\nmanual: false\n-->", "_____no_output_____" ] ], [ [ "# your code here\n# BEGIN SOLUTION\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=0)\n# END SOLUTION", "_____no_output_____" ] ], [ [ "### Q7\nTrain your linear regression model using the training set you obtained above. Then, store the coefficients and the intercept of your model in ```coefs``` and ```intercept```, respectively. \n\n<!--\nBEGIN QUESTION\nname: q7\nmanual: false\n-->", "_____no_output_____" ] ], [ [ "# your code here\n# BEGIN SOLUTION NO PROMPT\nmodel = LinearRegression()\nmodel.fit(X_train,y_train)\n# END SOLUTION\ncoefs = model.coef_ # SOLUTION\nintercept = model.intercept_ # SOLUTION\nprint(\"Coefficients:\", coefs)\nprint(\"Intercept:\", intercept)", "_____no_output_____" ] ], [ [ "### Q8 (Optional: Do it if you had enough time.)\nCompute the mean-squared-error of your model's prediction on the training and test sets and store them in ```train_error``` and ```test_error```, respectively. \n\n<!--\nBEGIN QUESTION\nname: q8\nmanual: false\n-->", "_____no_output_____" ] ], [ [ "# your code here\n# BEGIN SOLUTION NO PROMPT\ny_pred_train = model.predict(X_train)\ny_pred_test = model.predict(X_test)\n# END SOLUTION\ntrain_error = mean_squared_error(y_train, y_pred_train) # SOLUTION\ntest_error = mean_squared_error(y_test, y_pred_test) # SOLUTION\nprint(train_error)\nprint(test_error)", "_____no_output_____" ] ], [ [ "# Submit\nMake sure you have run all cells in your notebook in order before running the cell below, so that all images/graphs appear in the output.\n**Please save before submitting!**", "_____no_output_____" ] ], [ [ "# Save your notebook first, then run this cell to create a pdf for your reference.", "_____no_output_____" ] ] ]
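As a sanity check on what the fitted model from Q7 actually stores, here is a small sketch on synthetic data (not the video-games file; the coefficient values are made up for the example) showing that `predict` is nothing more than the feature matrix dotted with `coef_` plus `intercept_`:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X_demo = rng.normal(size=(100, 4))            # four features, as in Q6/Q7
true_coefs = np.array([1.0, -2.0, 0.5, 3.0])  # made-up ground truth
y_demo = X_demo @ true_coefs + 4.0            # noiseless linear data

model = LinearRegression().fit(X_demo, y_demo)

# reproduce model.predict by hand from the learned parameters
manual = X_demo @ model.coef_ + model.intercept_
assert np.allclose(manual, model.predict(X_demo))
print(model.coef_, model.intercept_)  # recovers ~[1, -2, 0.5, 3] and ~4
```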
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
4a60c9c9b8e75e5c654855217930eed3a58f50b3
265,864
ipynb
Jupyter Notebook
examples/tutorial.ipynb
tkoolen/SymPy.jl
8a1bea113344fdc7119547b863e1e0d3ee410919
[ "MIT" ]
44
2015-01-20T04:08:45.000Z
2022-03-26T04:42:25.000Z
examples/tutorial.ipynb
tkoolen/SymPy.jl
8a1bea113344fdc7119547b863e1e0d3ee410919
[ "MIT" ]
68
2015-01-21T22:38:43.000Z
2020-11-29T22:18:18.000Z
examples/tutorial.ipynb
tkoolen/SymPy.jl
8a1bea113344fdc7119547b863e1e0d3ee410919
[ "MIT" ]
18
2015-02-16T17:36:45.000Z
2019-04-04T09:58:36.000Z
559.713684
62,013
0.838162
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
4a60d90d6fb0ca74024f03641ad73f8bfb03f536
38,202
ipynb
Jupyter Notebook
units/SLU10_Classification/Exercise notebook - SLU10 (Classification).ipynb
manuboat/LDSSA
6d79f9c48d08561e1243cdadd5c3ddf02afba3b0
[ "MIT" ]
15
2018-04-15T22:07:10.000Z
2021-11-15T11:31:50.000Z
units/SLU10_Classification/Exercise notebook - SLU10 (Classification).ipynb
manuboat/LDSSA
6d79f9c48d08561e1243cdadd5c3ddf02afba3b0
[ "MIT" ]
17
2018-04-14T21:32:09.000Z
2021-03-21T15:08:59.000Z
units/SLU10_Classification/Exercise notebook - SLU10 (Classification).ipynb
manuboat/LDSSA
6d79f9c48d08561e1243cdadd5c3ddf02afba3b0
[ "MIT" ]
13
2018-04-14T19:28:25.000Z
2022-02-22T13:16:07.000Z
30.36725
335
0.552379
[ [ [ "# SLU10 - Classification: Exercise notebook", "_____no_output_____" ] ], [ [ "import pandas as pd \nimport numpy as np ", "_____no_output_____" ] ], [ [ "In this notebook you will practice the following: \n\n - What classification is for\n - Logistic regression\n - Cost function\n - Binary classification\n \nYou thought that you would get away without implementing your own little Logistic Regression? Hah!\n\n\n# Exercise 1. Implement the Logistic Function\n*aka the sigmoid function*\n\nAs a very simple warmup, you will implement the logistic function. Let's keep this simple!\n\nHere's a quick reminder of the formula:\n\n$$\\hat{p} = \\frac{1}{1 + e^{-z}}$$\n\n**Complete here:**", "_____no_output_____" ] ], [ [ "def logistic_function(z):\n \"\"\" \n Implementation of the logistic function by hand\n \n Args:\n z (np.float64): a float\n\n Returns:\n proba (np.float64): the predicted probability for a given observation\n\n \"\"\"\n \n # define the numerator and the denominator and obtain the predicted probability \n # clue: you can use np.exp()\n numerator = None\n denominator = None\n proba = None\n # YOUR CODE HERE\n raise NotImplementedError()\n return proba", "_____no_output_____" ], [ "z = 1.2\nprint('Predicted probability: %.2f' % logistic_function(z))", "_____no_output_____" ] ], [ [ "Expected output:\n\n Predicted probability: 0.77", "_____no_output_____" ] ], [ [ "z = 3.4\nassert np.isclose(np.round(logistic_function(z),2), 0.97)\n\nz = -2.1\nassert np.isclose(np.round(logistic_function(z),2), 0.11)", "_____no_output_____" ] ], [ [ "# Exercise 2: Make Predictions From Observations\n\nThe next step is to implement a function that receives observations and returns predicted probabilities.\n\nFor instance, remember that for an observation with two variables we have:\n\n$$z = \\beta_0 + \\beta_1 x_1 + \\beta_2 x_2$$\n\nwhere $\\beta_0$ is the intercept and $\\beta_1, \\beta_2$ are the coefficients.\n\n**Complete here:**", "_____no_output_____" ] ], [ [ "def predict_proba(x, coefficients):\n \"\"\" \n Implementation of a function that returns a predicted probability for a given data observation\n \n Args:\n x (np.array): a numpy array of shape (n,)\n - n: number of variables\n coefficients (np.array): a numpy array of shape (n + 1,)\n - coefficients[0]: intercept\n - coefficients[1:]: remaining coefficients\n\n Returns:\n proba (np.array): the predicted probability for a given data observation\n\n \"\"\"\n \n # start by assigning the intercept to z \n # clue: the intercept is the first element of the list of coefficients\n z = None\n # YOUR CODE HERE\n raise NotImplementedError()\n \n # sum the remaining variable * coefficient products to z\n # clue: the variables and coefficients indeces are not exactly aligned, but correctly ordered\n for i in range(None): # iterate through the observation variables (clue: you can use len())\n z += None # multiply the variable value by its coefficient and add to z\n # YOUR CODE HERE\n raise NotImplementedError()\n \n # obtain the predicted probability from z\n # clue: we already implemented something that can give us that\n proba = None\n # YOUR CODE HERE\n raise NotImplementedError()\n \n return proba", "_____no_output_____" ], [ "x = np.array([0.2,2.32,1.3,3.2])\ncoefficients = np.array([2.1,0.22,-2, 0.4, 0.1])\nprint('Predicted probability: %.3f' % predict_proba(x, coefficients))", "_____no_output_____" ] ], [ [ "Expected output:\n\n Predicted probability: 0.160", "_____no_output_____" ] ], [ [ "x = np.array([1,0,2,3.2])\ncoefficients = 
np.array([-0.2,2,-6, 1.2, -1])\nassert np.isclose(np.round(predict_proba(x, coefficients),2), 0.73)\n\nx = np.array([3.2,1.2,-1.2])\ncoefficients = np.array([-1.,3.1,-3,4])\nassert np.isclose(np.round(predict_proba(x, coefficients),2), 0.63)", "_____no_output_____" ] ], [ [ "# Exercise 3: Compute the Cross-Entropy Cost Function\n\nAs you will implement stochastic gradient descent, you only have to do the following for each prediction: \n\n$$H_{\\hat{p}}(y) = - (y \\log(\\hat{p}) + (1-y) \\log (1-\\hat{p}))$$\n\n**Complete here:**", "_____no_output_____" ] ], [ [ "def cross_entropy(y, proba):\n \"\"\" \n Implementation of a function that returns the Cross-Entropy loss\n \n Args:\n y (np.int64): an integer\n proba (np.float64): a float\n\n Returns:\n loss (np.float): a float with the resulting loss for a given prediction\n\n \"\"\"\n \n # compute the inner left side of the loss function (for when y == 1)\n # clue: use np.log()\n left = None \n # YOUR CODE HERE\n raise NotImplementedError()\n \n # compute the inner right side of the loss function (for when y == 0)\n right = None \n # YOUR CODE HERE\n raise NotImplementedError()\n \n # compute the total loss\n # clue: do not forget the minus sign\n loss = None\n # YOUR CODE HERE\n raise NotImplementedError()\n return loss", "_____no_output_____" ], [ "y = 1\nproba = 0.7\nprint('Computed loss: %.3f' % cross_entropy(y, proba))", "_____no_output_____" ] ], [ [ "Expected output:\n \n Computed loss: 0.357", "_____no_output_____" ] ], [ [ "y = 1\nproba = 0.35\nassert np.isclose(np.round(cross_entropy(y, proba),3), 1.050)\n\ny = 1\nproba = 0.77\nassert np.isclose(np.round(cross_entropy(y, proba),3), 0.261)", "_____no_output_____" ] ], [ [ "# Exercise 4: Obtain the Optimized Coefficients \nNow that the warmup is done, let's do the most interesting exercise. Here you will implement the optimized coefficients through Stochastic Gradient Descent.\n\nQuick reminders:\n\n$$H_{\\hat{p}}(y) = - \\frac{1}{N}\\sum_{i=1}^{N} \\left [{ y_i \\ \\log(\\hat{p}_i) + (1-y_i) \\ \\log (1-\\hat{p}_i)} \\right ]$$\n\nand\n\n$$\\beta_{0(t+1)} = \\beta_{0(t)} - learning\\_rate \\frac{\\partial H_{\\hat{p}}(y)}{\\partial \\beta_{0(t)}}$$\n\n$$\\beta_{t+1} = \\beta_t - learning\\_rate \\frac{\\partial H_{\\hat{p}}(y)}{\\partial \\beta_t}$$\n\nwhich can be simplified to\n\n$$\\beta_{0(t+1)} = \\beta_{0(t)} + learning\\_rate \\left [(y - \\hat{p}) \\ \\hat{p} \\ (1 - \\hat{p})\\right]$$\n\n$$\\beta_{t+1} = \\beta_t + learning\\_rate \\left [(y - \\hat{p}) \\ \\hat{p} \\ (1 - \\hat{p}) \\ x \\right]$$\n\nYou will have to initialize a numpy array full of zeros for the coefficients. If you have a training set $X$, you can initialize it this way:\n```python\ncoefficients = np.zeros(X.shape[1]+1)\n```\n\nwhere the $+1$ is adding the intercept.\n\nYou will also iterate through the training set $X$ alongside their respective labels $Y$. 
To do so simultaneously you can do it this way:\n```python\nfor x_sample, y_sample in zip(X, Y):\n ...\n```\n\n**Complete here:**", "_____no_output_____" ] ], [ [ "def compute_coefficients(x_train, y_train, learning_rate = 0.1, n_epoch = 50, verbose = False):\n \"\"\" \n Implementation of a function that returns the optimized intercept and coefficients\n \n Args:\n x_train (np.array): a numpy array of shape (m, n)\n m: number of training observations\n n: number of variables\n y_train (np.array): a numpy array of shape (m,)\n learning_rate (np.float64): a float\n n_epoch (np.int64): an integer of the number of full training cycles to perform on the training set\n\n Returns:\n coefficients (np.array): a numpy array of shape (n+1,)\n\n \"\"\"\n \n # initialize the coefficients array with zeros\n # clue: use np.zeros()\n coefficients = None\n # YOUR CODE HERE\n raise NotImplementedError()\n \n # run the stochastic gradient descent algorithm n_epoch times and update the coefficients\n for epoch in range(None): # iterate n_epoch times\n loss = None # initialize the cross entropy loss with an empty list\n for x, y in zip(None, None): # iterate through the training set observations and labels\n proba = None # compute the predicted probability\n loss.append(None) # compute the cross entropy loss and append it to the list\n coefficients[0] += None # update the intercept \n for i in range(None): # iterate through the observation variables (clue: use len())\n coefficients[i + 1] += None # update each coefficient\n loss = None # average the obtained cross entropies (clue: use np.mean())\n # YOUR CODE HERE\n raise NotImplementedError()\n \n if((epoch%10==0) & verbose):\n print('>epoch=%d, learning_rate=%.3f, error=%.3f' % (epoch, learning_rate, loss))\n return coefficients", "_____no_output_____" ], [ "x_train = np.array([[1,2,3], [2,5,9], [3,1,4], [8,2,9]])\ny_train = np.array([0,1,0,1])\nlearning_rate = 0.1\nn_epoch = 50\ncoefficients = compute_coefficients(x_train, y_train, learning_rate=learning_rate, n_epoch=n_epoch, verbose=True)\nprint('Computed coefficients:')\nprint(coefficients)", "_____no_output_____" ] ], [ [ "Expected output:\n \n >epoch=0, learning_rate=0.100, error=0.811\n >epoch=10, learning_rate=0.100, error=0.675\n >epoch=20, learning_rate=0.100, error=0.640\n >epoch=30, learning_rate=0.100, error=0.606\n >epoch=40, learning_rate=0.100, error=0.574\n Computed coefficients:\n [-0.82964483 0.02698239 -0.04632395 0.27761155]", "_____no_output_____" ] ], [ [ "x_train = np.array([[3,1,3], [1,0,9], [3,3,4], [2,-1,10]])\ny_train = np.array([0,1,0,1])\nlearning_rate = 0.3\nn_epoch = 100\ncoefficients = compute_coefficients(x_train, y_train, learning_rate=learning_rate, n_epoch=n_epoch, verbose=False)\nassert np.allclose(coefficients, np.array([-0.25917811, -1.15128387, -0.85317139, 0.55286134]))\n\nx_train = np.array([[3,-1,-2], [-6,9,3], [3,-1,4], [5,1,6]])\ncoefficients = compute_coefficients(x_train, y_train, learning_rate=learning_rate, n_epoch=n_epoch, verbose=False)\nassert np.allclose(coefficients, np.array([-0.53111811, -0.16120628, 2.20202909, 0.27270437]))", "_____no_output_____" ] ], [ [ "# Exercise 5: Normalize Data\n\nJust a quick and easy function to normalize the data. 
It is crucial that your variables are adjusted between $[0;1]$ (normalized) or standardized so that you can correctly analyse the logistic regression coefficients for a possible future employer.\n\nYou only have to implement this formula\n\n$$ x_{normalized} = \frac{x - x_{min}}{x_{max} - x_{min}}$$\n\nDon't forget that the `axis` argument is critical when obtaining the maximum, minimum and mean values! As you want to obtain the maximum and minimum values of each individual feature, you have to specify `axis=0`. Thus, if you wanted to obtain the maximum values of each feature of data $X$, you would do the following:\n\n```python\nX_max = np.max(X, axis=0)\n```\n\n**Complete here:**", "_____no_output_____" ] ], [ [ "def normalize_data(data):\n    \"\"\" \n    Implementation of a function that normalizes your data variables\n    \n    Args:\n        data (np.array): a numpy array of shape (m, n)\n            m: number of observations\n            n: number of variables\n\n    Returns:\n        normalized_data (np.array): a numpy array of shape (m, n)\n\n    \"\"\"\n    # compute the numerator\n    # clue: use np.min()\n    numerator = None\n    # YOUR CODE HERE\n    raise NotImplementedError()\n    \n    # compute the denominator\n    # clue: use np.max() and np.min()\n    denominator = None\n    # YOUR CODE HERE\n    raise NotImplementedError()\n    \n    # obtain the normalized data\n    normalized_data = None\n    # YOUR CODE HERE\n    raise NotImplementedError()\n    \n    return normalized_data", "_____no_output_____" ], [ "data = np.array([[9,5,2], [7,7,3], [2,2,11], [1,5,2], [10,1,3], [0,9,5]])\nnormalized_data = normalize_data(data)\nprint('Before normalization:')\nprint(data)\nprint('\\n-------------------\\n')\nprint('After normalization:')\nprint(normalized_data)", "_____no_output_____" ] ], [ [ "Expected output:\n \n    Before normalization:\n    [[ 9 5 2]\n    [ 7 7 3]\n    [ 2 2 11]\n    [ 1 5 2]\n    [10 1 3]\n    [ 0 9 5]]\n\n    -------------------\n\n    After normalization:\n    [[0.9 0.5 0. ]\n    [0.7 0.75 0.11111111]\n    [0.2 0.125 1. ]\n    [0.1 0.5 0. ]\n    [1. 0. 0.11111111]\n    [0. 1. 0.33333333]]", "_____no_output_____" ] ], [ [ "data = np.array([[9,5,2,6], [7,5,1,3], [2,2,11,1]])\nnormalized_data = normalize_data(data)\nassert np.allclose(normalized_data, np.array([[1., 1., 0.1, 1.],[0.71428571, 1., 0., 0.4],[0., 0., 1., 0.]]))\n\ndata = np.array([[9,5,3,1], [1,3,1,3], [2,2,4,6]])\nnormalized_data = normalize_data(data)\nassert np.allclose(normalized_data, np.array([[1., 1., 0.66666667, 0.],[0., 0.33333333, 0., 0.4],\n                                              [0.125, 0., 1., 1.]]))", "_____no_output_____" ] ], [ [ "# Exercise 6: Putting it All Together\n\nThe Wisconsin Breast Cancer Diagnostic dataset is another data science classic. It is the result of extracting characteristics of breast cells' nuclei to understand which of them are the most relevant for developing breast cancer.\n\nYour quest is to first analyze this dataset using the materials that you've learned in the previous SLUs and then create a logistic regression model that can correctly distinguish cancer cells from healthy ones.\n\nDataset description:\n\n    1. Sample code number: id number \n    2. Clump Thickness\n    3. Uniformity of Cell Size\n    4. Uniformity of Cell Shape\n    5. Marginal Adhesion \n    6. Single Epithelial Cell Size\n    7. Bare Nuclei\n    8. Bland Chromatin\n    9. Normal Nucleoli\n    10. Mitoses \n    11. 
Class: (2 for benign, 4 for malignant) > We will modify to (0 for benign, 1 for malignant) for simplicity\n \nThe data is loaded for you below.", "_____no_output_____" ] ], [ [ "columns = ['Sample code number','Clump Thickness','Uniformity of Cell Size','Uniformity of Cell Shape',\n           'Marginal Adhesion','Single Epithelial Cell Size','Bare Nuclei','Bland Chromatin','Normal Nucleoli',\n           'Mitoses','Class']\ndata = pd.read_csv('data/breast-cancer-wisconsin.csv',names=columns, index_col=0)\ndata[\"Bare Nuclei\"] = data[\"Bare Nuclei\"].replace(['?'],np.nan)\ndata = data.dropna()\ndata[\"Bare Nuclei\"] = data[\"Bare Nuclei\"].map(int)\ndata.Class = data.Class.map(lambda x: 1 if x == 4 else 0)\nX = data.drop('Class', axis=1).values  # axis=1: drop the column, not a row\ny_train = data.Class.values", "_____no_output_____" ] ], [ [ "You will also have to return several values, such as the number of cancer and healthy cells. To do so, remember that you can use boolean masks on numpy arrays. If you had a numpy array of labels called `labels` and wanted to obtain the ones with label $3$, you would do the following:\n\n```python\nfiltered_labels = labels[labels==3]\n```\n\nYou will additionally be asked to obtain the number of correct cancer cell predictions. Imagine that you have a numpy array with the predictions called `predictions` and a numpy array with the correct labels called `labels` and you wanted to obtain the number of correct predictions of a label $4$. You would do the following:\n\n```python\nn_correct_predictions = labels[(labels==4) & (predictions==4)].shape[0]\n```\n\nAlso, don't forget to use these values for your logistic regression!", "_____no_output_____" ] ], [ [ "# Hyperparameters\nlearning_rate = 0.01\nn_epoch = 100\n\n# For validation\nverbose = True", "_____no_output_____" ] ], [ [ "Now let's do this!\n\n**Complete here:**", "_____no_output_____" ] ], [ [ "# STEP ONE: Initial analysis and data processing\n# How many cells have cancer? (clue: use y_train)\nn_cancer = None\n# YOUR CODE HERE\nraise NotImplementedError()\n\n# How many cells are healthy? (clue: use y_train)\nn_healthy = None\n# YOUR CODE HERE\nraise NotImplementedError()\n\n# Normalize the training data X (clue: we have already implemented this)\nx_train = None\n# YOUR CODE HERE\nraise NotImplementedError()\n\nprint(\"Number of cells with cancer: %i\" % n_cancer)\n\nprint(\"\\nThe last three normalized rows:\")\nprint(x_train[-3:])", "_____no_output_____" ] ], [ [ "Expected output:\n\n    Number of cells with cancer: 239\n\n    The last three normalized rows:\n    [[0.44444444 1. 1. 0.22222222 0.66666667 0.22222222\n    0.77777778 1. 0.11111111 1. ]\n    [0.33333333 0.77777778 0.55555556 0.33333333 0.22222222 0.33333333\n    1. 0.55555556 0. 1. ]\n    [0.33333333 0.77777778 0.77777778 0.44444444 0.33333333 0.44444444\n    1. 0.33333333 0. 1. ]]", "_____no_output_____" ] ], [ [ "# STEP TWO: Model training and predictions\n# What coefficients can we get? 
(clue: we have already implemented this)\n# note: don't forget to use all the hyperparameters defined above\ncoefficients = None\n# YOUR CODE HERE\nraise NotImplementedError()\n\n# Initialize the predicted probabilities list\nprobas = None\n# YOUR CODE HERE\nraise NotImplementedError()\n\n# What are the predicted probabilities on the training data?\nfor x in None: # iterate through the training data x_train\n    probas.append(None) # append the predicted probability to the list (clue: we already implemented this)\n    \n# YOUR CODE HERE\nraise NotImplementedError()\n\n# If we had to say whether a cell had breast cancer, what are the predictions?\n# clue 1: Hard assign the predicted probabilities by rounding them to the nearest integer\n# clue 2: use np.round()\npreds = None\n# YOUR CODE HERE\nraise NotImplementedError()\n\nprint(\"\\nThe last three coefficients:\")\nprint(coefficients[-3:])\n\nprint(\"\\nThe last three obtained probas:\")\nprint(probas[-3:])\n\nprint(\"\\nThe last three predictions:\")\nprint(preds[-3:])", "_____no_output_____" ] ], [ [ "Expected output:\n\n    >epoch=0, learning_rate=0.010, error=0.617\n    >epoch=10, learning_rate=0.010, error=0.209\n    >epoch=20, learning_rate=0.010, error=0.143\n    >epoch=30, learning_rate=0.010, error=0.114\n    >epoch=40, learning_rate=0.010, error=0.097\n    >epoch=50, learning_rate=0.010, error=0.086\n    >epoch=60, learning_rate=0.010, error=0.077\n    >epoch=70, learning_rate=0.010, error=0.071\n    >epoch=80, learning_rate=0.010, error=0.066\n    >epoch=90, learning_rate=0.010, error=0.062\n\n    The last three coefficients:\n    [0.70702475 0.33306501 3.27480969]\n\n    The last three obtained probas:\n    [0.9679181578309998, 0.9356364708465178, 0.9482109014966041]\n\n    The last three predictions:\n    [1. 1. 1.]", "_____no_output_____" ] ], [ [ "# STEP THREE: Results analysis\n# How many cells were predicted to have breast cancer? (clue: use preds and len() or .shape)\nn_predicted_cancer = None\n# YOUR CODE HERE\nraise NotImplementedError()\n\n# How many cells with cancer were correctly detected? (clue: use y_train, preds and len() or .shape)\nn_correct_cancer_predictions = None\n# YOUR CODE HERE\nraise NotImplementedError()\n\nprint(\"Number of correct cancer predictions: %i\" % n_correct_cancer_predictions)", "_____no_output_____" ] ], [ [ "Expected output:\n\n    Number of correct cancer predictions: 239", "_____no_output_____" ] ], [ [ "print('You have a dataset with %s cells with cancer and %s healthy cells. \\n\\n'\n      'After analysing the data and training your own logistic regression classifier, you find out that it correctly '\n      'identified %s out of %s cancer cells, which was all of them. You feel very lucky and happy. However, shortly '\n      'afterwards you grow somewhat suspicious of such amazing results. You feel that they should not be '\n      'that good, but you do not know how to be sure of it. This is because you trained and tested on the same '\n      'dataset, which does not seem right! You say to yourself that you will definitely give your best focus when '\n      'doing the next Small Learning Unit 11, which will tackle exactly that.' %\n      (n_cancer, n_healthy, n_predicted_cancer, n_correct_cancer_predictions))", "_____no_output_____" ], [ "assert np.allclose(probas[:3], np.array([0.05075437808498781, 0.30382227212694596, 0.05238389294132284]))\nassert np.isclose(n_predicted_cancer, 239)\nassert np.allclose(coefficients[:3], np.array([-3.22309346, 0.40712798, 0.80696792]))\nassert np.isclose(n_correct_cancer_predictions, 239)", "_____no_output_____" ] ] ]
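One practical footnote to the exercises above: `np.log` returns `-inf` when a predicted probability hits exactly 0 or 1, so real implementations usually clip probabilities first. Here is a small sketch of that idea (the epsilon value is a common convention, not a value from this notebook), checked against the expected numbers in the tests:

```python
import numpy as np
from scipy.special import expit  # a vectorized, numerically stable sigmoid

def safe_cross_entropy(y, proba, eps=1e-12):
    # keep log() away from exactly 0 and exactly 1
    proba = np.clip(proba, eps, 1 - eps)
    return -(y * np.log(proba) + (1 - y) * np.log(1 - proba))

print(round(float(expit(3.4)), 2))                  # 0.97, matching the Exercise 1 assert
print(round(float(safe_cross_entropy(1, 0.7)), 3))  # 0.357, matching the Exercise 3 example
print(safe_cross_entropy(1, 1.0))                   # finite instead of infinite
```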
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
4a60d9f4289c43c94f1ea8dc59200857b114837a
294,665
ipynb
Jupyter Notebook
seminar1/seminar01_me.ipynb
aaptss/ML2022_seminars
afc4d0626d3f167d63e2150420ba35bdbd65fe0d
[ "MIT" ]
null
null
null
seminar1/seminar01_me.ipynb
aaptss/ML2022_seminars
afc4d0626d3f167d63e2150420ba35bdbd65fe0d
[ "MIT" ]
null
null
null
seminar1/seminar01_me.ipynb
aaptss/ML2022_seminars
afc4d0626d3f167d63e2150420ba35bdbd65fe0d
[ "MIT" ]
null
null
null
99.213805
24,968
0.792456
[ [ [ "<a href=\"https://colab.research.google.com/github/adasegroup/ML2022_seminars/blob/master/seminar1/seminar01.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# Seminar 1. Machine learning on Titanic data\n\nThe notebook provides an intro to the exploratory analysis of the data, data preprocessing and application of machine learning methods.\n\nThe notebook is based on kaggle kernel \"Titanic: Machine Learning from Disaster\" https://www.kaggle.com/omarelgabry/a-journey-through-titanic by Omar El Gabry.\nData is from a toy competition on kaggle named Titanic.\nThe goal of the competition is to predict who survived and who died during the sinking of the RMS Titanic.", "_____no_output_____" ], [ "### Documentation to go through:\n\n* https://docs.python.org/3/\n* https://pandas.pydata.org/docs\n* https://matplotlib.org/contents.html\n* https://docs.scipy.org/doc/\n* http://scikit-learn.org/stable/documentation.html", "_____no_output_____" ], [ "### Some additional info:\n\n\n* http://www.scipy-lectures.org/\n* https://www.kaggle.com/\n* https://pydata.org/", "_____no_output_____" ] ], [ [ "# importing data processing tools: pandas and numpy\nimport numpy as np\nimport pandas as pd\nfrom pandas import Series, DataFrame", "_____no_output_____" ] ], [ [ "# Load data", "_____no_output_____" ] ], [ [ "# get titanic files as a DataFrame\ntitanic_dataframe = pd.read_csv(\"https://raw.githubusercontent.com/adasegroup/ML2022_seminars/master/seminar1/titanic/train.csv\", index_col='PassengerId')", "_____no_output_____" ] ], [ [ "# Look through the data", "_____no_output_____" ] ], [ [ "# preview the data\ntitanic_dataframe.head()", "_____no_output_____" ], [ "# list the features\nprint(titanic_dataframe.keys())", "Index(['Survived', 'Pclass', 'Name', 'Sex', 'Age', 'SibSp', 'Parch', 'Ticket',\n 'Fare', 'Cabin', 'Embarked'],\n dtype='object')\n" ], [ "# column selection by name\ntitanic_dataframe['Age']", "_____no_output_____" ], [ "# row selection by id\ntitanic_dataframe.loc[1]", "_____no_output_____" ], [ "# column selection by index\ntitanic_dataframe.iloc[:, 0]", "_____no_output_____" ], [ "# row selection by index\ntitanic_dataframe.iloc[0, :]", "_____no_output_____" ] ], [ [ "### Hints and tips\n\nYou can use ```%time``` or ```tqdm``` to track the code timing.\n\nNote that ```pandas``` is column oriented data structure.\n", "_____no_output_____" ] ], [ [ "%time titanic_dataframe['Fare'].mean()", "CPU times: user 2.1 ms, sys: 540 µs, total: 2.64 ms\nWall time: 2.03 ms\n" ], [ "data_titanic_transpose = titanic_dataframe.T\n%time data_titanic_transpose.loc['Fare'].mean()", "CPU times: user 317 µs, sys: 79 µs, total: 396 µs\nWall time: 401 µs\n" ], [ "from tqdm import tqdm\nfor i in tqdm(range(100000000)):\n pass", "100%|████████████████████████| 100000000/100000000 [00:14<00:00, 6700120.11it/s]\n" ] ], [ [ "## Data Dictionary\n\n| Variable | Definition | Key |\n| ------------- |:-------------|: -----|\n| survival | Survival | 0 = No, 1 = Yes | \n| pclass | Ticket class | 1 = 1st, 2 = 2nd, 3 = 3rd |\n| sex | Sex | |\n| Age | Age in years | |\n| sibsp | # of siblings / spouses aboard the Titanic | |\n| parch | # of parents / children aboard the Titanic | |\n| ticket | Ticket number | |\n| fare | Passenger fare | |\n| cabin | Cabin number | |\n| embarked | Port of Embarkation | C = Cherbourg, Q = Queenstown, S = Southampton |", "_____no_output_____" ] ], [ [ "titanic_dataframe.info()", 
"<class 'pandas.core.frame.DataFrame'>\nInt64Index: 891 entries, 1 to 891\nData columns (total 11 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Survived 891 non-null int64 \n 1 Pclass 891 non-null int64 \n 2 Name 891 non-null object \n 3 Sex 891 non-null object \n 4 Age 714 non-null float64\n 5 SibSp 891 non-null int64 \n 6 Parch 891 non-null int64 \n 7 Ticket 891 non-null object \n 8 Fare 891 non-null float64\n 9 Cabin 204 non-null object \n 10 Embarked 889 non-null object \ndtypes: float64(2), int64(4), object(5)\nmemory usage: 115.8+ KB\n" ], [ "titanic_dataframe.describe()", "_____no_output_____" ] ], [ [ "### Hints and tips\n\nWrite ```?``` after the function you are interested in or just press ``` Shift + Tab``` for the function short referense. \n\nDouble ``` Shift + Tab``` will expand to the full reference.", "_____no_output_____" ] ], [ [ "# call information for the function \ntitanic_dataframe.drop?", "_____no_output_____" ], [ "# drop unnecessary columns, these columns won't be useful in analysis and prediction\ntitanic_dataframe.drop(['Name','Ticket'], axis=1, inplace=True)\ntitanic_dataframe.info()", "_____no_output_____" ], [ "for column_name in titanic_dataframe.columns:\n print(column_name, 'null', titanic_dataframe[column_name].isnull().sum())\n\n# It has a lot of NaN values, so it won't cause a remarkable impact on prediction\ntitanic_dataframe.drop(\"Cabin\", axis=1, inplace=True)", "Survived null 0\nPclass null 0\nSex null 0\nAge null 177\nSibSp null 0\nParch null 0\nFare null 0\nCabin null 687\nEmbarked null 2\n" ], [ "# Count various embarked values\nprint(titanic_dataframe[\"Embarked\"].value_counts())\n\n# Fill the two missing values with the most occurred value, which is \"S\".\ntitanic_dataframe[\"Embarked\"] = titanic_dataframe[\"Embarked\"].fillna(\"S\")\nprint(titanic_dataframe[\"Embarked\"].value_counts())", "S 644\nC 168\nQ 77\nName: Embarked, dtype: int64\nS 646\nC 168\nQ 77\nName: Embarked, dtype: int64\n" ], [ "# Groupby \ntitanic_dataframe.groupby(\"Survived\").count()", "_____no_output_____" ] ], [ [ "### Tasks:\n\n1. What is the mean value and stds of ages for every passanger class?\n2. In what port of embarked the absolute difference between the amount men and women was the greatest?\n3. What is a number of NaN values in every column?\n4. 
Replace NaN values in age with the median value and calculate the std value.\n", "_____no_output_____" ] ], [ [ "titanic_dataframe.groupby(\"Pclass\").Age.mean()", "_____no_output_____" ], [ "titanic_dataframe.groupby(\"Pclass\").Age.std()", "_____no_output_____" ], [ "titanic_dataframe.groupby([\"Embarked\", \"Sex\"]).count()", "_____no_output_____" ], [ "titanic_dataframe.isnull().sum()", "_____no_output_____" ], [ "median_age = titanic_dataframe[\"Age\"].median()", "_____no_output_____" ], [ "median_age", "_____no_output_____" ], [ "titanic_dataframe[\"Age\"] = titanic_dataframe[\"Age\"].fillna(median_age)\ntitanic_dataframe.info()", "<class 'pandas.core.frame.DataFrame'>\nInt64Index: 891 entries, 1 to 891\nData columns (total 8 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Survived 891 non-null int64 \n 1 Pclass 891 non-null int64 \n 2 Sex 891 non-null object \n 3 Age 891 non-null float64\n 4 SibSp 891 non-null int64 \n 5 Parch 891 non-null int64 \n 6 Fare 891 non-null float64\n 7 Embarked 891 non-null object \ndtypes: float64(2), int64(4), object(2)\nmemory usage: 94.9+ KB\n" ] ], [ [ "# Plotting", "_____no_output_____" ] ], [ [ "# visualization tools: matplotlib, seaborn\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set_style('whitegrid')\n%matplotlib inline", "_____no_output_____" ], [ "# Simple plot\nx = titanic_dataframe['Age']\ny = titanic_dataframe['Fare']\nplt.plot(x, y, 'o')\nplt.xlabel('Age')\nplt.ylabel('Fare')", "_____no_output_____" ] ], [ [ "![matplotlib_figure_axes_axis.png](attachment:matplotlib_figure_axes_axis.png)", "_____no_output_____" ] ], [ [ "# The catplot shows the share of survived passengers for the different embarkation ports\nsns.catplot?", "_____no_output_____" ], [ "sns.catplot(x = 'Embarked', y = 'Survived', data=titanic_dataframe, height=4, aspect=3, kind = 'point')\n\nfigure_handle, (axis1, axis2, axis3) = plt.subplots(1, 3, figsize=(15, 5))\n\nsns.countplot(x='Embarked', data=titanic_dataframe, ax=axis1)\nsns.countplot(x='Survived', hue=\"Embarked\", data=titanic_dataframe, order=[1, 0], ax=axis2)\n\n# group by embarked, and get the mean for survived passengers for each value in Embarked\nsns.barplot(x='Embarked', y='Survived', data=titanic_dataframe[[\"Embarked\", \"Survived\"]], order=['S','C','Q'], ax=axis3)", "_____no_output_____" ] ], [ [ "![1_3hdYEX5eixaV4F3wT5OmBg.png](attachment:1_3hdYEX5eixaV4F3wT5OmBg.png)", "_____no_output_____" ] ], [ [ "# consider Embarked column in predictions,\n# and remove \"S\" dummy variable, \n# and leave \"C\" & \"Q\", since they seem to have a good rate for Survival.\n\n# OR, don't create dummy variables for Embarked column, just drop it, \n# because logically, Embarked doesn't seem to be useful in prediction.\n\nembark_dummies_titanic = pd.get_dummies(titanic_dataframe['Embarked'])", "_____no_output_____" ], [ "embark_dummies_titanic", "_____no_output_____" ], [ "embark_dummies_titanic.drop(['S'], axis=1, inplace=True)\n\ntitanic_dataframe = titanic_dataframe.join(embark_dummies_titanic)\n\ntitanic_dataframe.drop(['Embarked'], axis=1, inplace=True)", "_____no_output_____" ], [ "# Examine fare variable\n\n# convert from float to int\ntitanic_dataframe['Fare'] = titanic_dataframe['Fare'].astype(int)\n\n# get fares for passengers who survived and who didn't \nfare_not_survived = titanic_dataframe[\"Fare\"][titanic_dataframe[\"Survived\"] == 0]\nfare_survived = titanic_dataframe[\"Fare\"][titanic_dataframe[\"Survived\"] == 1]\n\n# get average and std for fare of survived/not 
survived passengers\naverage_fare = DataFrame([fare_not_survived.mean(), fare_survived.mean()])\nstd_fare = DataFrame([fare_not_survived.std(), fare_survived.std()])\n\n# plot\ntitanic_dataframe['Fare'].plot(kind='hist', figsize=(15, 3), bins=100, xlim=(0, 50))\n\nstd_fare.index.names = [\"Survived\"]\naverage_fare.index.names = [\"Survived\"]\naverage_fare.plot(yerr=std_fare, kind='bar', legend=False)", "_____no_output_____" ], [ "# Do the same for the Pclass variable, with no confidence interval shown\n# note: printing the GroupBy object itself is not informative; chain .mean() to see per-class survival rates\nprint(titanic_dataframe[[\"Pclass\", \"Survived\"]].groupby(['Pclass'], as_index=True))\n# adjust the figure size\nplt.figure(figsize=[10,5])\nsns.barplot(x='Pclass', y='Survived', data=titanic_dataframe[[\"Pclass\", \"Survived\"]], order=[1, 2, 3])", "<pandas.core.groupby.generic.DataFrameGroupBy object at 0x7f6de5e1c070>\n" ], [ "# Age \nfig, (axis1, axis2) = plt.subplots(1, 2, figsize=(15, 4))\naxis1.set_title('Original Age values - Titanic')\naxis2.set_title('New Age values - Titanic')\n\n# get average, std, and number of NaN values in titanic_dataframe\n# NOTE: Age was already filled with the median in the task above, so\n# count_nan_age_titanic is 0 here and the random fill below is a no-op on this run\naverage_age_titanic = titanic_dataframe[\"Age\"].mean()\nstd_age_titanic = titanic_dataframe[\"Age\"].std()\ncount_nan_age_titanic = titanic_dataframe[\"Age\"].isnull().sum()\n\n# generate random numbers between (mean - std) & (mean + std)\nrandom_ages = np.random.randint(average_age_titanic - std_age_titanic, \n                                average_age_titanic + std_age_titanic, \n                                size=count_nan_age_titanic)\n\n# plot original Age values\n# NOTE: drop all null values, and convert to int\ntitanic_dataframe['Age'].dropna().astype(int).hist(bins=70, ax=axis1)\n\n# fill NaN values in the Age column with the random values generated\ntitanic_dataframe.loc[np.isnan(titanic_dataframe[\"Age\"]), \"Age\"] = random_ages\n\n# convert from float to int\ntitanic_dataframe['Age'] = titanic_dataframe['Age'].astype(int)\n\n# plot new Age values\ntitanic_dataframe['Age'].hist(bins=70, ax=axis2)", "_____no_output_____" ], [ "# ... 
continue with plotting of the Age column\n\n# peaks for survived/not survived passengers by their age\nfacet = sns.FacetGrid(titanic_dataframe, hue=\"Survived\", aspect=3)\nfacet.map(sns.kdeplot, 'Age', shade=True)\nfacet.set(xlim=(0, titanic_dataframe['Age'].max()))\nfacet.add_legend()\n\n# average survival rate by age\nfigure_handle, axis1 = plt.subplots(1, 1, figsize=(18, 4))\naverage_age = titanic_dataframe[[\"Age\", \"Survived\"]].groupby(['Age'], as_index=False).mean()\nsns.barplot(x='Age', y='Survived', data=average_age)", "_____no_output_____" ], [ "# Instead of having the two columns Parch & SibSp,\n# we can have a single column that represents whether a passenger had any family member aboard,\n# i.e. whether having any family member (parent, sibling, etc.) increases the chances of survival.\ntitanic_dataframe['Family'] = titanic_dataframe[\"Parch\"] + titanic_dataframe[\"SibSp\"]\ntitanic_dataframe.loc[titanic_dataframe['Family'] > 0, 'Family'] = 1\ntitanic_dataframe.loc[titanic_dataframe['Family'] == 0, 'Family'] = 0\n\n# drop Parch & SibSp\ntitanic_dataframe.drop(['SibSp','Parch'], axis=1, inplace=True)", "_____no_output_____" ], [ "# plot Family\nfigure_handle, (axis1, axis2) = plt.subplots(1, 2, sharex=True, figsize=(10, 5))\n\nsns.countplot(x='Family', data=titanic_dataframe, order=[1, 0], ax=axis1)\naxis1.set_xticklabels([\"With Family\", \"Alone\"], rotation=0)\n\n# average survival rate for those who had/didn't have any family member aboard\nsns.barplot(x='Family', y='Survived', data=titanic_dataframe[[\"Family\", \"Survived\"]], order=[1, 0], ax=axis2)", "_____no_output_____" ], [ "# Sex variable\n\n# As we saw, children (age < ~16) aboard seem to have a high chance of survival.\n# So we can classify passengers as male, female, or child.\ndef get_person(passenger):\n    age, sex = passenger\n    return 'child' if age < 16 else sex\n\ntitanic_dataframe['Person'] = titanic_dataframe[['Age','Sex']].apply(get_person, axis=1)\n\n# Sex is superseded by Person, but keep it for now: the tasks below still use it\n# (it is dropped before modelling)\n#titanic_dataframe.drop(['Sex'], axis=1, inplace=True)\n\n# create dummy variables for the Person column, & drop Male as it has the lowest average of survived passengers\nperson_dummies_titanic = pd.get_dummies(titanic_dataframe['Person'])\nperson_dummies_titanic.columns = ['Child', 'Female', 'Male']\nperson_dummies_titanic.drop(['Male'], axis=1, inplace=True)\n\ntitanic_dataframe = titanic_dataframe.join(person_dummies_titanic)\n\nfigure_handle, (axis1, axis2) = plt.subplots(1, 2, figsize=(10, 5))\n\nsns.countplot(x='Person', data=titanic_dataframe, ax=axis1)\n\n# average survival rate for each Person value (male, female, or child)\nsns.barplot(x='Person', y='Survived', data=titanic_dataframe[[\"Person\", \"Survived\"]], \n            ax=axis2, order=['male', 'female', 'child'])\n\n# we don't need the Person variable after introducing the corresponding dummy variables\ntitanic_dataframe.drop(['Person'], axis=1, inplace=True)", "_____no_output_____" ], [ "# Pclass\nsns.catplot('Pclass', 'Survived', order=[1, 2, 3], data=titanic_dataframe, height=5, kind = 'point')\n\n# The goal is to create dummy variables for class and join them to the initial dataframe\n# create dummy variables for the Pclass column, & drop 3rd class as it has the lowest average of survived passengers\npclass_dummies_titanic = pd.get_dummies(titanic_dataframe['Pclass'])\npclass_dummies_titanic.columns = ['Class_1', 'Class_2', 'Class_3']\npclass_dummies_titanic.drop(['Class_3'], axis=1, inplace=True)\n\ntitanic_dataframe = 
titanic_dataframe.join(pclass_dummies_titanic)\n", "/home/aaptss/.local/lib/python3.8/site-packages/seaborn/_decorators.py:36: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.\n  warnings.warn(\n" ] ], [ [ "### Task\n\n1. Is the distribution of age similar for men and women?\n2. Compare the Age distribution across all three classes.", "_____no_output_____" ] ], [ [ "gender_titanic1 = titanic_dataframe[titanic_dataframe.Sex == \"male\"]\ngender_titanic2 = titanic_dataframe[titanic_dataframe.Sex == \"female\"]\n\nfig, (axis1, axis2) = plt.subplots(1, 2, figsize=(15, 4))\n\naxis1.set_title('Age histogram - Titanic, male')\naxis2.set_title('Age histogram - Titanic, female')\n\ngender_titanic1[\"Age\"].hist(ax = axis1, bins = 50)\ngender_titanic2[\"Age\"].hist(ax = axis2, bins = 50)", "_____no_output_____" ], [ "cls_titanic1 = titanic_dataframe.loc[(titanic_dataframe.Class_1 == 1) & (titanic_dataframe.Class_2 == 0)]\ncls_titanic2 = titanic_dataframe.loc[(titanic_dataframe.Class_2 == 1) & (titanic_dataframe.Class_1 == 0)]\ncls_titanic3 = titanic_dataframe.loc[(titanic_dataframe.Class_1 == 0) & (titanic_dataframe.Class_2 == 0)]\n\nfig, (axis1, axis2, axis3) = plt.subplots(1, 3, figsize=(15, 4))\n\naxis1.set_title('Age histogram - Titanic, Class1')\naxis2.set_title('Age histogram - Titanic, Class2')\naxis3.set_title('Age histogram - Titanic, Class3')\n\ncls_titanic1[\"Age\"].hist(ax = axis1, bins = 50)\ncls_titanic2[\"Age\"].hist(ax = axis2, bins = 50)\ncls_titanic3[\"Age\"].hist(ax = axis3, bins = 50)", "_____no_output_____" ] ], [ [ "## It's time for Machine learning!", "_____no_output_____" ], [ "![MLearning](https://media.giphy.com/media/BdrSy2gqURFEk/giphy.gif)", "_____no_output_____" ] ], [ [ "# machine learning tools: various methods from scikit-learn\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.svm import SVC, LinearSVC\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import accuracy_score", "_____no_output_____" ], [ "titanic_dataframe.head()", "_____no_output_____" ], [ "# drop columns the models can't use: Person is superseded by its dummies\n# (errors='ignore' keeps this safe if it was already dropped above) and Sex is a raw string column\ntitanic_dataframe.drop(['Person', 'Sex'], axis=1, inplace=True, errors='ignore')", "_____no_output_____" ], [ "titanic_dataframe.head()", "_____no_output_____" ], [ "train, test = train_test_split(titanic_dataframe, train_size=0.5, test_size=0.5)", "_____no_output_____" ], [ "train_x = train.drop(['Survived'], axis=1)\ntrain_y = train['Survived']\ntest_x = test.drop(['Survived'], axis=1)\ntest_y = test['Survived']", "_____no_output_____" ], [ "# Logistic Regression\n\nlogistic_regression_model = LogisticRegression(solver='liblinear')\nlogistic_regression_model.fit(train_x, train_y)\ntrain_prediction = logistic_regression_model.predict(train_x)\ntest_prediction = logistic_regression_model.predict(test_x)\ntrain_accuracy = accuracy_score(train_y, train_prediction)\ntest_accuracy = accuracy_score(test_y, test_prediction)\n\nprint('Train Accuracy:', train_accuracy)\nprint('Test Accuracy:', test_accuracy)", "Train Accuracy: 0.8247191011235955\nTest Accuracy: 0.7914798206278026\n" ], [ "# get the coefficient estimate for each feature from the Logistic Regression model\ncoeff_df = DataFrame(titanic_dataframe.columns.delete(0))\ncoeff_df.columns = ['Features']\ncoeff_df[\"Coefficient Estimate\"] = 
pd.Series(logistic_regression_model.coef_[0])\n\n# preview\ncoeff_df", "_____no_output_____" ], [ "# Support Vector Machines\n\nsvm_model = SVC(C=1.0, gamma=0.5)\nsvm_model.fit(train_x, train_y)\ntrain_prediction = svm_model.predict(train_x)\ntest_prediction = svm_model.predict(test_x)\ntrain_accuracy = accuracy_score(train_y, train_prediction)\ntest_accuracy = accuracy_score(test_y, test_prediction)\n\nprint('Train Accuracy:', train_accuracy)\nprint('Test Accuracy:', test_accuracy)", "Train Accuracy: 0.952808988764045\nTest Accuracy: 0.6390134529147982\n" ], [ "# Random Forests\n\nrandom_forest_model = RandomForestClassifier(n_estimators=10)\nrandom_forest_model.fit(train_x, train_y)\ntrain_prediction = random_forest_model.predict(train_x)\ntest_prediction = random_forest_model.predict(test_x)\ntrain_accuracy = accuracy_score(train_y, train_prediction)\ntest_accuracy = accuracy_score(test_y, test_prediction)\n\nprint('Train Accuracy:', train_accuracy)\nprint('Test Accuracy:', test_accuracy)", "Train Accuracy: 0.9685393258426966\nTest Accuracy: 0.7690582959641256\n" ], [ "# K nearest neighbours \n\nknn_model = KNeighborsClassifier(n_neighbors=1)\nknn_model.fit(train_x, train_y)\ntrain_prediction = knn_model.predict(train_x)\ntest_prediction = knn_model.predict(test_x)\ntrain_accuracy = accuracy_score(train_y, train_prediction)\ntest_accuracy = accuracy_score(test_y, test_prediction)\n\nprint('Train Accuracy:', train_accuracy)\nprint('Test Accuracy:', test_accuracy)", "Train Accuracy: 0.9797752808988764\nTest Accuracy: 0.6771300448430493\n" ] ], [ [ "### Task 3\n\nExplore **sklearn** and find the best classifier!", "_____no_output_____" ] ] ]
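The four classifier cells in this record repeat the same fit/predict/score boilerplate. As a minimal sketch, assuming the `train_x`, `train_y`, `test_x`, `test_y` splits defined in the notebook above (the hyperparameters simply mirror the notebook's), the comparison can be collapsed into one helper:

```python
# Sketch only: compare the notebook's classifiers with a single helper.
# Assumes train_x, train_y, test_x, test_y already exist as in the cells above.
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

def evaluate(model, train_x, train_y, test_x, test_y):
    """Fit a classifier and return (train_accuracy, test_accuracy)."""
    model.fit(train_x, train_y)
    train_acc = accuracy_score(train_y, model.predict(train_x))
    test_acc = accuracy_score(test_y, model.predict(test_x))
    return train_acc, test_acc

candidates = {
    "logistic_regression": LogisticRegression(solver="liblinear"),
    "svm": SVC(C=1.0, gamma=0.5),
    "random_forest": RandomForestClassifier(n_estimators=10),
    "knn": KNeighborsClassifier(n_neighbors=1),
}
for name, model in candidates.items():
    train_acc, test_acc = evaluate(model, train_x, train_y, test_x, test_y)
    print(f"{name}: train={train_acc:.3f}, test={test_acc:.3f}")
```

The large train/test gaps printed for SVM, random forest, and 1-NN in the recorded outputs are a classic overfitting signature, which is what makes a side-by-side loop like this useful.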
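Task 3 asks the reader to explore sklearn for the best classifier. A hedged sketch of one standard route is cross-validated grid search; the parameter grid below is an illustrative assumption, not a choice made in the notebook:

```python
# Sketch only: cross-validated hyperparameter search over one candidate model.
# The grid values are illustrative; assumes the same train/test splits as above.
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier

param_grid = {"n_estimators": [10, 50, 100], "max_depth": [None, 5, 10]}
search = GridSearchCV(RandomForestClassifier(), param_grid, cv=5, scoring="accuracy")
search.fit(train_x, train_y)

print("best params:", search.best_params_)
print("cv accuracy:", search.best_score_)
print("test accuracy:", search.score(test_x, test_y))  # refit on best params by default
```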
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ] ]
4a60e1efb9326fe0f52aae9f8fc896911d849c60
234,123
ipynb
Jupyter Notebook
experiments/tuned_1v2/oracle.run2_limited/trials/5/trial.ipynb
stevester94/csc500-notebooks
4c1b04c537fe233a75bed82913d9d84985a89177
[ "MIT" ]
null
null
null
experiments/tuned_1v2/oracle.run2_limited/trials/5/trial.ipynb
stevester94/csc500-notebooks
4c1b04c537fe233a75bed82913d9d84985a89177
[ "MIT" ]
null
null
null
experiments/tuned_1v2/oracle.run2_limited/trials/5/trial.ipynb
stevester94/csc500-notebooks
4c1b04c537fe233a75bed82913d9d84985a89177
[ "MIT" ]
null
null
null
100.481974
73,480
0.793997
[ [ [ "# PTN Template\nThis notebook serves as a template for single dataset PTN experiments \nIt can be run on its own by setting STANDALONE to True (do a find for \"STANDALONE\" to see where) \nBut it is intended to be executed as part of a *papermill.py script. See any of the \nexperimentes with a papermill script to get started with that workflow. ", "_____no_output_____" ] ], [ [ "%load_ext autoreload\n%autoreload 2\n%matplotlib inline\n\n \nimport os, json, sys, time, random\nimport numpy as np\nimport torch\nfrom torch.optim import Adam\nfrom easydict import EasyDict\nimport matplotlib.pyplot as plt\n\nfrom steves_models.steves_ptn import Steves_Prototypical_Network\n\nfrom steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper\nfrom steves_utils.iterable_aggregator import Iterable_Aggregator\nfrom steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig\nfrom steves_utils.torch_sequential_builder import build_sequential\nfrom steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader\nfrom steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)\nfrom steves_utils.PTN.utils import independent_accuracy_assesment\n\nfrom steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory\n\nfrom steves_utils.ptn_do_report import (\n get_loss_curve,\n get_results_table,\n get_parameters_table,\n get_domain_accuracies,\n)\n\nfrom steves_utils.transforms import get_chained_transform", "_____no_output_____" ] ], [ [ "# Required Parameters\nThese are allowed parameters, not defaults\nEach of these values need to be present in the injected parameters (the notebook will raise an exception if they are not present)\n\nPapermill uses the cell tag \"parameters\" to inject the real parameters below this cell.\nEnable tags to see what I mean", "_____no_output_____" ] ], [ [ "required_parameters = {\n \"experiment_name\",\n \"lr\",\n \"device\",\n \"seed\",\n \"dataset_seed\",\n \"labels_source\",\n \"labels_target\",\n \"domains_source\",\n \"domains_target\",\n \"num_examples_per_domain_per_label_source\",\n \"num_examples_per_domain_per_label_target\",\n \"n_shot\",\n \"n_way\",\n \"n_query\",\n \"train_k_factor\",\n \"val_k_factor\",\n \"test_k_factor\",\n \"n_epoch\",\n \"patience\",\n \"criteria_for_best\",\n \"x_transforms_source\",\n \"x_transforms_target\",\n \"episode_transforms_source\",\n \"episode_transforms_target\",\n \"pickle_name\",\n \"x_net\",\n \"NUM_LOGS_PER_EPOCH\",\n \"BEST_MODEL_PATH\",\n \"torch_default_dtype\"\n}", "_____no_output_____" ], [ "\n\nstandalone_parameters = {}\nstandalone_parameters[\"experiment_name\"] = \"STANDALONE PTN\"\nstandalone_parameters[\"lr\"] = 0.0001\nstandalone_parameters[\"device\"] = \"cuda\"\n\nstandalone_parameters[\"seed\"] = 1337\nstandalone_parameters[\"dataset_seed\"] = 1337\n\n\nstandalone_parameters[\"num_examples_per_domain_per_label_source\"]=100\nstandalone_parameters[\"num_examples_per_domain_per_label_target\"]=100\n\nstandalone_parameters[\"n_shot\"] = 3\nstandalone_parameters[\"n_query\"] = 2\nstandalone_parameters[\"train_k_factor\"] = 1\nstandalone_parameters[\"val_k_factor\"] = 2\nstandalone_parameters[\"test_k_factor\"] = 2\n\n\nstandalone_parameters[\"n_epoch\"] = 100\n\nstandalone_parameters[\"patience\"] = 10\nstandalone_parameters[\"criteria_for_best\"] = \"target_accuracy\"\n\nstandalone_parameters[\"x_transforms_source\"] = [\"unit_power\"]\nstandalone_parameters[\"x_transforms_target\"] = 
[\"unit_power\"]\nstandalone_parameters[\"episode_transforms_source\"] = []\nstandalone_parameters[\"episode_transforms_target\"] = []\n\nstandalone_parameters[\"torch_default_dtype\"] = \"torch.float32\" \n\n\n\nstandalone_parameters[\"x_net\"] = [\n {\"class\": \"nnReshape\", \"kargs\": {\"shape\":[-1, 1, 2, 256]}},\n {\"class\": \"Conv2d\", \"kargs\": { \"in_channels\":1, \"out_channels\":256, \"kernel_size\":(1,7), \"bias\":False, \"padding\":(0,3), },},\n {\"class\": \"ReLU\", \"kargs\": {\"inplace\": True}},\n {\"class\": \"BatchNorm2d\", \"kargs\": {\"num_features\":256}},\n\n {\"class\": \"Conv2d\", \"kargs\": { \"in_channels\":256, \"out_channels\":80, \"kernel_size\":(2,7), \"bias\":True, \"padding\":(0,3), },},\n {\"class\": \"ReLU\", \"kargs\": {\"inplace\": True}},\n {\"class\": \"BatchNorm2d\", \"kargs\": {\"num_features\":80}},\n {\"class\": \"Flatten\", \"kargs\": {}},\n\n {\"class\": \"Linear\", \"kargs\": {\"in_features\": 80*256, \"out_features\": 256}}, # 80 units per IQ pair\n {\"class\": \"ReLU\", \"kargs\": {\"inplace\": True}},\n {\"class\": \"BatchNorm1d\", \"kargs\": {\"num_features\":256}},\n\n {\"class\": \"Linear\", \"kargs\": {\"in_features\": 256, \"out_features\": 256}},\n]\n\n# Parameters relevant to results\n# These parameters will basically never need to change\nstandalone_parameters[\"NUM_LOGS_PER_EPOCH\"] = 10\nstandalone_parameters[\"BEST_MODEL_PATH\"] = \"./best_model.pth\"\n\n# uncomment for CORES dataset\nfrom steves_utils.CORES.utils import (\n ALL_NODES,\n ALL_NODES_MINIMUM_1000_EXAMPLES,\n ALL_DAYS\n)\n\n\nstandalone_parameters[\"labels_source\"] = ALL_NODES\nstandalone_parameters[\"labels_target\"] = ALL_NODES\n\nstandalone_parameters[\"domains_source\"] = [1]\nstandalone_parameters[\"domains_target\"] = [2,3,4,5]\n\nstandalone_parameters[\"pickle_name\"] = \"cores.stratified_ds.2022A.pkl\"\n\n\n# Uncomment these for ORACLE dataset\n# from steves_utils.ORACLE.utils_v2 import (\n# ALL_DISTANCES_FEET,\n# ALL_RUNS,\n# ALL_SERIAL_NUMBERS,\n# )\n# standalone_parameters[\"labels_source\"] = ALL_SERIAL_NUMBERS\n# standalone_parameters[\"labels_target\"] = ALL_SERIAL_NUMBERS\n# standalone_parameters[\"domains_source\"] = [8,20, 38,50]\n# standalone_parameters[\"domains_target\"] = [14, 26, 32, 44, 56]\n# standalone_parameters[\"pickle_name\"] = \"oracle.frame_indexed.stratified_ds.2022A.pkl\"\n# standalone_parameters[\"num_examples_per_domain_per_label_source\"]=1000\n# standalone_parameters[\"num_examples_per_domain_per_label_target\"]=1000\n\n# Uncomment these for Metahan dataset\n# standalone_parameters[\"labels_source\"] = list(range(19))\n# standalone_parameters[\"labels_target\"] = list(range(19))\n# standalone_parameters[\"domains_source\"] = [0]\n# standalone_parameters[\"domains_target\"] = [1]\n# standalone_parameters[\"pickle_name\"] = \"metehan.stratified_ds.2022A.pkl\"\n# standalone_parameters[\"n_way\"] = len(standalone_parameters[\"labels_source\"])\n# standalone_parameters[\"num_examples_per_domain_per_label_source\"]=200\n# standalone_parameters[\"num_examples_per_domain_per_label_target\"]=100\n\n\nstandalone_parameters[\"n_way\"] = len(standalone_parameters[\"labels_source\"])", "_____no_output_____" ], [ "# Parameters\nparameters = {\n \"experiment_name\": \"tuned_1v2:oracle.run2_limited\",\n \"device\": \"cuda\",\n \"lr\": 0.0001,\n \"labels_source\": [\n \"3123D52\",\n \"3123D65\",\n \"3123D79\",\n \"3123D80\",\n \"3123D54\",\n \"3123D70\",\n \"3123D7B\",\n \"3123D89\",\n \"3123D58\",\n \"3123D76\",\n \"3123D7D\",\n \"3123EFE\",\n 
\"3123D64\",\n \"3123D78\",\n \"3123D7E\",\n \"3124E4A\",\n ],\n \"labels_target\": [\n \"3123D52\",\n \"3123D65\",\n \"3123D79\",\n \"3123D80\",\n \"3123D54\",\n \"3123D70\",\n \"3123D7B\",\n \"3123D89\",\n \"3123D58\",\n \"3123D76\",\n \"3123D7D\",\n \"3123EFE\",\n \"3123D64\",\n \"3123D78\",\n \"3123D7E\",\n \"3124E4A\",\n ],\n \"episode_transforms_source\": [],\n \"episode_transforms_target\": [],\n \"domains_source\": [8, 32, 50],\n \"domains_target\": [14, 20, 26, 38, 44],\n \"num_examples_per_domain_per_label_source\": 2000,\n \"num_examples_per_domain_per_label_target\": 2000,\n \"n_shot\": 3,\n \"n_way\": 16,\n \"n_query\": 2,\n \"train_k_factor\": 3,\n \"val_k_factor\": 2,\n \"test_k_factor\": 2,\n \"torch_default_dtype\": \"torch.float32\",\n \"n_epoch\": 50,\n \"patience\": 3,\n \"criteria_for_best\": \"target_accuracy\",\n \"x_net\": [\n {\"class\": \"nnReshape\", \"kargs\": {\"shape\": [-1, 1, 2, 256]}},\n {\n \"class\": \"Conv2d\",\n \"kargs\": {\n \"in_channels\": 1,\n \"out_channels\": 256,\n \"kernel_size\": [1, 7],\n \"bias\": False,\n \"padding\": [0, 3],\n },\n },\n {\"class\": \"ReLU\", \"kargs\": {\"inplace\": True}},\n {\"class\": \"BatchNorm2d\", \"kargs\": {\"num_features\": 256}},\n {\n \"class\": \"Conv2d\",\n \"kargs\": {\n \"in_channels\": 256,\n \"out_channels\": 80,\n \"kernel_size\": [2, 7],\n \"bias\": True,\n \"padding\": [0, 3],\n },\n },\n {\"class\": \"ReLU\", \"kargs\": {\"inplace\": True}},\n {\"class\": \"BatchNorm2d\", \"kargs\": {\"num_features\": 80}},\n {\"class\": \"Flatten\", \"kargs\": {}},\n {\"class\": \"Linear\", \"kargs\": {\"in_features\": 20480, \"out_features\": 256}},\n {\"class\": \"ReLU\", \"kargs\": {\"inplace\": True}},\n {\"class\": \"BatchNorm1d\", \"kargs\": {\"num_features\": 256}},\n {\"class\": \"Linear\", \"kargs\": {\"in_features\": 256, \"out_features\": 256}},\n ],\n \"NUM_LOGS_PER_EPOCH\": 10,\n \"BEST_MODEL_PATH\": \"./best_model.pth\",\n \"pickle_name\": \"oracle.Run2_10kExamples_stratified_ds.2022A.pkl\",\n \"x_transforms_source\": [\"unit_mag\"],\n \"x_transforms_target\": [\"unit_mag\"],\n \"dataset_seed\": 154325,\n \"seed\": 154325,\n}\n", "_____no_output_____" ], [ "# Set this to True if you want to run this template directly\nSTANDALONE = False\nif STANDALONE:\n print(\"parameters not injected, running with standalone_parameters\")\n parameters = standalone_parameters\n\nif not 'parameters' in locals() and not 'parameters' in globals():\n raise Exception(\"Parameter injection failed\")\n\n#Use an easy dict for all the parameters\np = EasyDict(parameters)\n\nsupplied_keys = set(p.keys())\n\nif supplied_keys != required_parameters:\n print(\"Parameters are incorrect\")\n if len(supplied_keys - required_parameters)>0: print(\"Shouldn't have:\", str(supplied_keys - required_parameters))\n if len(required_parameters - supplied_keys)>0: print(\"Need to have:\", str(required_parameters - supplied_keys))\n raise RuntimeError(\"Parameters are incorrect\")\n\n", "_____no_output_____" ], [ "###################################\n# Set the RNGs and make it all deterministic\n###################################\nnp.random.seed(p.seed)\nrandom.seed(p.seed)\ntorch.manual_seed(p.seed)\n\ntorch.use_deterministic_algorithms(True) ", "_____no_output_____" ], [ "###########################################\n# The stratified datasets honor this\n###########################################\ntorch.set_default_dtype(eval(p.torch_default_dtype))", "_____no_output_____" ], [ "###################################\n# Build the network(s)\n# 
Note: It's critical to do this AFTER setting the RNG\n# (This is due to the randomized initial weights)\n###################################\nx_net = build_sequential(p.x_net)", "_____no_output_____" ], [ "start_time_secs = time.time()", "_____no_output_____" ], [ "###################################\n# Build the dataset\n###################################\n\nif p.x_transforms_source == []: x_transform_source = None\nelse: x_transform_source = get_chained_transform(p.x_transforms_source) \n\nif p.x_transforms_target == []: x_transform_target = None\nelse: x_transform_target = get_chained_transform(p.x_transforms_target)\n\nif p.episode_transforms_source == []: episode_transform_source = None\nelse: raise Exception(\"episode_transform_source not implemented\")\n\nif p.episode_transforms_target == []: episode_transform_target = None\nelse: raise Exception(\"episode_transform_target not implemented\")\n\n\neaf_source = Episodic_Accessor_Factory(\n labels=p.labels_source,\n domains=p.domains_source,\n num_examples_per_domain_per_label=p.num_examples_per_domain_per_label_source,\n iterator_seed=p.seed,\n dataset_seed=p.dataset_seed,\n n_shot=p.n_shot,\n n_way=p.n_way,\n n_query=p.n_query,\n train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),\n pickle_path=os.path.join(get_datasets_base_path(), p.pickle_name),\n x_transform_func=x_transform_source,\n example_transform_func=episode_transform_source,\n \n)\ntrain_original_source, val_original_source, test_original_source = eaf_source.get_train(), eaf_source.get_val(), eaf_source.get_test()\n\n\neaf_target = Episodic_Accessor_Factory(\n labels=p.labels_target,\n domains=p.domains_target,\n num_examples_per_domain_per_label=p.num_examples_per_domain_per_label_target,\n iterator_seed=p.seed,\n dataset_seed=p.dataset_seed,\n n_shot=p.n_shot,\n n_way=p.n_way,\n n_query=p.n_query,\n train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),\n pickle_path=os.path.join(get_datasets_base_path(), p.pickle_name),\n x_transform_func=x_transform_target,\n example_transform_func=episode_transform_target,\n)\ntrain_original_target, val_original_target, test_original_target = eaf_target.get_train(), eaf_target.get_val(), eaf_target.get_test()\n\n\ntransform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only\n\ntrain_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda)\nval_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda)\ntest_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda)\n\ntrain_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda)\nval_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda)\ntest_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda)\n\ndatasets = EasyDict({\n \"source\": {\n \"original\": {\"train\":train_original_source, \"val\":val_original_source, \"test\":test_original_source},\n \"processed\": {\"train\":train_processed_source, \"val\":val_processed_source, \"test\":test_processed_source}\n },\n \"target\": {\n \"original\": {\"train\":train_original_target, \"val\":val_original_target, \"test\":test_original_target},\n \"processed\": {\"train\":train_processed_target, \"val\":val_processed_target, \"test\":test_processed_target}\n },\n})", "_____no_output_____" ], [ "# Some quick unit tests on the data\nfrom steves_utils.transforms import get_average_power, 
get_average_magnitude\n\nq_x, q_y, s_x, s_y, truth = next(iter(train_processed_source))\n\nassert q_x.dtype == eval(p.torch_default_dtype)\nassert s_x.dtype == eval(p.torch_default_dtype)\n\nprint(\"Visually inspect these to see if they line up with expected values given the transforms\")\nprint('x_transforms_source', p.x_transforms_source)\nprint('x_transforms_target', p.x_transforms_target)\nprint(\"Average magnitude, source:\", get_average_magnitude(q_x[0].numpy()))\nprint(\"Average power, source:\", get_average_power(q_x[0].numpy()))\n\nq_x, q_y, s_x, s_y, truth = next(iter(train_processed_target))\nprint(\"Average magnitude, target:\", get_average_magnitude(q_x[0].numpy()))\nprint(\"Average power, target:\", get_average_power(q_x[0].numpy()))\n", "Visually inspect these to see if they line up with expected values given the transforms\nx_transforms_source ['unit_mag']\nx_transforms_target ['unit_mag']\nAverage magnitude, source: 1.0\nAverage power, source: 1.2736319\n" ], [ "###################################\n# Build the model\n###################################\nmodel = Steves_Prototypical_Network(x_net, device=p.device, x_shape=(2,256))\noptimizer = Adam(params=model.parameters(), lr=p.lr)", "(2, 256)\n" ], [ "###################################\n# train\n###################################\njig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device)\n\njig.train(\n train_iterable=datasets.source.processed.train,\n source_val_iterable=datasets.source.processed.val,\n target_val_iterable=datasets.target.processed.val,\n num_epochs=p.n_epoch,\n num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,\n patience=p.patience,\n optimizer=optimizer,\n criteria_for_best=p.criteria_for_best,\n)", "epoch: 1, [batch: 1 / 2520], examples_per_second: 123.5700, train_label_loss: 2.7747, \n" ], [ "total_experiment_time_secs = time.time() - start_time_secs", "_____no_output_____" ], [ "###################################\n# Evaluate the model\n###################################\nsource_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)\ntarget_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)\n\nsource_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)\ntarget_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)\n\nhistory = jig.get_history()\n\ntotal_epochs_trained = len(history[\"epoch_indices\"])\n\nval_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val))\n\nconfusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl)\nper_domain_accuracy = per_domain_accuracy_from_confusion(confusion)\n\n# Add a key to per_domain_accuracy for if it was a source domain\nfor domain, accuracy in per_domain_accuracy.items():\n per_domain_accuracy[domain] = {\n \"accuracy\": accuracy,\n \"source?\": domain in p.domains_source\n }\n\n# Do an independent accuracy assesment JUST TO BE SURE!\n# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)\n# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)\n# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)\n# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)\n\n# assert(_source_test_label_accuracy == source_test_label_accuracy)\n# assert(_target_test_label_accuracy == 
target_test_label_accuracy)\n# assert(_source_val_label_accuracy == source_val_label_accuracy)\n# assert(_target_val_label_accuracy == target_val_label_accuracy)\n\nexperiment = {\n \"experiment_name\": p.experiment_name,\n \"parameters\": dict(p),\n \"results\": {\n \"source_test_label_accuracy\": source_test_label_accuracy,\n \"source_test_label_loss\": source_test_label_loss,\n \"target_test_label_accuracy\": target_test_label_accuracy,\n \"target_test_label_loss\": target_test_label_loss,\n \"source_val_label_accuracy\": source_val_label_accuracy,\n \"source_val_label_loss\": source_val_label_loss,\n \"target_val_label_accuracy\": target_val_label_accuracy,\n \"target_val_label_loss\": target_val_label_loss,\n \"total_epochs_trained\": total_epochs_trained,\n \"total_experiment_time_secs\": total_experiment_time_secs,\n \"confusion\": confusion,\n \"per_domain_accuracy\": per_domain_accuracy,\n },\n \"history\": history,\n \"dataset_metrics\": get_dataset_metrics(datasets, \"ptn\"),\n}", "_____no_output_____" ], [ "ax = get_loss_curve(experiment)\nplt.show()", "_____no_output_____" ], [ "get_results_table(experiment)", "_____no_output_____" ], [ "get_domain_accuracies(experiment)", "_____no_output_____" ], [ "print(\"Source Test Label Accuracy:\", experiment[\"results\"][\"source_test_label_accuracy\"], \"Target Test Label Accuracy:\", experiment[\"results\"][\"target_test_label_accuracy\"])\nprint(\"Source Val Label Accuracy:\", experiment[\"results\"][\"source_val_label_accuracy\"], \"Target Val Label Accuracy:\", experiment[\"results\"][\"target_val_label_accuracy\"])", "Source Test Label Accuracy: 0.5996527777777778 Target Test Label Accuracy: 0.4965104166666667\nSource Val Label Accuracy: 0.6015625 Target Val Label Accuracy: 0.49223958333333334\n" ], [ "json.dumps(experiment)", "_____no_output_____" ] ] ]
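The training jig in this record delegates the episodic few-shot logic to `Steves_Prototypical_Network`, whose internals are not shown. As an illustration of the general prototypical-network idea only, not that class's actual implementation, a single episode step computes one prototype per class from the support set and scores queries by negative squared distance:

```python
# Sketch only: the core prototypical-network episode step.
# The embedding network and episode shapes below are toy assumptions
# chosen to match this record's n_way=16, n_shot=3, n_query=2, x_shape=(2, 256).
import torch

def prototypical_logits(support_x, support_y, query_x, embed):
    """Embed support and query sets, average support embeddings per class
    into prototypes, and score each query by negative squared Euclidean
    distance to every prototype (higher logit = closer prototype)."""
    z_support = embed(support_x)                     # (n_support, d)
    z_query = embed(query_x)                         # (n_query, d)
    classes = torch.unique(support_y)                # sorted class ids
    prototypes = torch.stack(
        [z_support[support_y == c].mean(dim=0) for c in classes]
    )                                                # (n_way, d)
    return -torch.cdist(z_query, prototypes) ** 2    # (n_query, n_way)

# toy usage with a random "embedding" network
embed = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(2 * 256, 64))
support_x = torch.randn(16 * 3, 2, 256)              # 16 classes, 3 shots each
support_y = torch.arange(16).repeat_interleave(3)
query_x = torch.randn(16 * 2, 2, 256)                # 2 queries per class
logits = prototypical_logits(support_x, support_y, query_x, embed)
predictions = logits.argmax(dim=1)
```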
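The "Required Parameters" cell relies on papermill's `parameters` cell tag for injection. A minimal sketch of driving this template with papermill follows; the file names and the deliberately incomplete parameter dict are illustrative assumptions, since the actual *papermill.py driver script is not part of this record:

```python
# Hypothetical driver: executes the template notebook with injected parameters.
# A real run must supply every key in required_parameters; only a few
# illustrative values are shown here.
import papermill as pm

pm.execute_notebook(
    "trial.ipynb",        # template containing a cell tagged "parameters"
    "trial.out.ipynb",    # executed copy with the parameters cell overridden
    parameters={
        "experiment_name": "demo",
        "lr": 1e-4,
        "seed": 1337,
        # ... remaining required_parameters keys go here ...
    },
)
```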
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a60e49061bb98f9aff61b411989b3c580cf013f
107,207
ipynb
Jupyter Notebook
Problem-1/.ipynb_checkpoints/Checkpoint-4_2-checkpoint.ipynb
grassknoted/AML-Hackathon
cfab046a0144944e111e617e62d7999b0ed1da7f
[ "MIT" ]
null
null
null
Problem-1/.ipynb_checkpoints/Checkpoint-4_2-checkpoint.ipynb
grassknoted/AML-Hackathon
cfab046a0144944e111e617e62d7999b0ed1da7f
[ "MIT" ]
null
null
null
Problem-1/.ipynb_checkpoints/Checkpoint-4_2-checkpoint.ipynb
grassknoted/AML-Hackathon
cfab046a0144944e111e617e62d7999b0ed1da7f
[ "MIT" ]
null
null
null
82.277053
1,413
0.631862
[ [ [ "import pandas as pd\nfrom os.path import join", "_____no_output_____" ], [ "data_path = \"../Dataset-1/selfie_dataset.txt\"#join(\"..\", \"..\", \"Dataset-1\", \"selfie_dataset.txt\")\nimage_path = \"../Dataset-1/images\"#join(\"..\", \"..\", \"Dataset-1\", \"selfie_dataset.txt\")#join(\"..\", \"..\", \"Dataset-1\", \"images\")", "_____no_output_____" ], [ "headers = [\n \"image_name\", \"score\", \"partial_faces\" ,\"is_female\" ,\"baby\" ,\"child\" ,\"teenager\" ,\"youth\" ,\"middle_age\" ,\"senior\" ,\"white\" ,\"black\" ,\"asian\" ,\"oval_face\" ,\"round_face\" ,\"heart_face\" ,\"smiling\" ,\"mouth_open\" ,\"frowning\" ,\"wearing_glasses\" ,\"wearing_sunglasses\" ,\"wearing_lipstick\" ,\"tongue_out\" ,\"duck_face\" ,\"black_hair\" ,\"blond_hair\" ,\"brown_hair\" ,\"red_hair\" ,\"curly_hair\" ,\"straight_hair\" ,\"braid_hair\" ,\"showing_cellphone\" ,\"using_earphone\" ,\"using_mirror\", \"braces\" ,\"wearing_hat\" ,\"harsh_lighting\", \"dim_lighting\"\n]\ndf_image_details = pd.read_csv(data_path, names=headers, delimiter=\" \")\ndf_image_details.head(3)", "_____no_output_____" ], [ "df_image_details = df_image_details[df_image_details.is_female != 0]\ndf_image_details.replace(to_replace=-1, value=0, inplace=True)", "_____no_output_____" ], [ "image_names = df_image_details.image_name.values.copy()\nimage_scores = df_image_details[headers[1]].values.copy()\nimage_attrs = df_image_details[headers[2:]].values.copy()", "_____no_output_____" ], [ "image_paths = [join(image_path, iname) + '.jpg' for iname in image_names]", "_____no_output_____" ], [ "image_paths_train, image_paths_test = image_paths[:-1000], image_paths[-1000:]\nimage_attrs_train, image_attrs_test = image_attrs[:-1000], image_attrs[-1000:]\nimage_scores_train, image_scores_test = image_scores[:-1000], image_scores[-1000:]", "_____no_output_____" ], [ "from keras.utils import Sequence\nimport numpy as np\nimport cv2", "Using TensorFlow backend.\n" ], [ "class ImageGenerator(Sequence):\n def __init__(self, x_set, y_set, batch_size):\n self.x, self.y = x_set, y_set\n self.batch_size = batch_size\n\n def __len__(self):\n return int(np.ceil(len(self.x) / float(self.batch_size)))\n\n def __getitem__(self, idx):\n batch_x = self.x[idx * self.batch_size:(idx + 1) * self.batch_size]\n batch_y1 = self.y[0][idx * self.batch_size:(idx + 1) * self.batch_size]\n batch_y2 = self.y[1][idx * self.batch_size:(idx + 1) * self.batch_size]\n\n # read your data here using the batch lists, batch_x and batch_y\n x = [self.read_image(filename) for filename in batch_x] \n y1 = [atrributes for atrributes in batch_y1]\n y2 = [atrributes for atrributes in batch_y2]\n return [np.array(x), np.array(x)], [np.array(y1), np.array(y2)]\n \n def read_image(self, fname):\n im = cv2.imread(fname)\n im = cv2.resize(im, (224, 224), interpolation=cv2.INTER_CUBIC)\n return im / 255.", "_____no_output_____" ] ], [ [ "# Training Model\n", "_____no_output_____" ] ], [ [ "from keras.applications import resnet50\nfrom keras.layers import Dense, Conv2D, MaxPool2D, Input, Flatten, concatenate, Dropout\nfrom keras.models import Model, load_model\nfrom keras.callbacks import ModelCheckpoint, ReduceLROnPlateau", "_____no_output_____" ], [ "model_rnet = load_model(\"resnet50.hdf5\")", "/home/npc/anaconda3/envs/py36/lib/python3.6/site-packages/keras/engine/saving.py:292: UserWarning: No training configuration found in save file: the model was *not* compiled. 
Compile it manually.\n warnings.warn('No training configuration found in save file: '\n" ], [ "for layer in model_rnet.layers:\n layer.trainable = False", "_____no_output_____" ], [ "model_classification = Dense(1024, activation='relu')(model_rnet.get_layer('avg_pool').output)\nmodel_classification = Dropout(0.5)(model_classification)\n\nmodel_classification = Dense(512, activation='relu')(model_classification)\n\nmodel_classification = Dense(36, activation='sigmoid', name='classification')(model_classification)", "_____no_output_____" ], [ "model_regression_input = Input((224, 224, 3), name='input_regression')\nmodel_regression = Conv2D(16, 3)(model_regression_input)\nmodel_regression = MaxPool2D()(model_regression)\n\nmodel_regression = Conv2D(24, 5)(model_regression)\nmodel_regression = MaxPool2D()(model_regression)\n\nmodel_regression = Conv2D(32, 5)(model_regression)\nmodel_regression = MaxPool2D()(model_regression)\n\nmodel_regression = Flatten()(model_regression)\nmodel_regression = Dense(128)(model_regression)\n\nmodel_regression = concatenate([model_regression, model_classification])\n\nmodel_regression = Dense(1, name='regression')(model_regression)", "_____no_output_____" ], [ "model = Model(inputs=[model_rnet.input, model_regression_input], outputs=[model_classification, model_regression])", "_____no_output_____" ], [ "model.summary()", "__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_1 (InputLayer) (None, 224, 224, 3) 0 \n__________________________________________________________________________________________________\nconv1_pad (ZeroPadding2D) (None, 230, 230, 3) 0 input_1[0][0] \n__________________________________________________________________________________________________\nconv1 (Conv2D) (None, 112, 112, 64) 9472 conv1_pad[0][0] \n__________________________________________________________________________________________________\nbn_conv1 (BatchNormalization) (None, 112, 112, 64) 256 conv1[0][0] \n__________________________________________________________________________________________________\nactivation_1 (Activation) (None, 112, 112, 64) 0 bn_conv1[0][0] \n__________________________________________________________________________________________________\npool1_pad (ZeroPadding2D) (None, 114, 114, 64) 0 activation_1[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_1 (MaxPooling2D) (None, 56, 56, 64) 0 pool1_pad[0][0] \n__________________________________________________________________________________________________\nres2a_branch2a (Conv2D) (None, 56, 56, 64) 4160 max_pooling2d_1[0][0] \n__________________________________________________________________________________________________\nbn2a_branch2a (BatchNormalizati (None, 56, 56, 64) 256 res2a_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_2 (Activation) (None, 56, 56, 64) 0 bn2a_branch2a[0][0] \n__________________________________________________________________________________________________\nres2a_branch2b (Conv2D) (None, 56, 56, 64) 36928 activation_2[0][0] \n__________________________________________________________________________________________________\nbn2a_branch2b (BatchNormalizati (None, 56, 56, 64) 256 res2a_branch2b[0][0] 
\n__________________________________________________________________________________________________\nactivation_3 (Activation) (None, 56, 56, 64) 0 bn2a_branch2b[0][0] \n__________________________________________________________________________________________________\nres2a_branch2c (Conv2D) (None, 56, 56, 256) 16640 activation_3[0][0] \n__________________________________________________________________________________________________\nres2a_branch1 (Conv2D) (None, 56, 56, 256) 16640 max_pooling2d_1[0][0] \n__________________________________________________________________________________________________\nbn2a_branch2c (BatchNormalizati (None, 56, 56, 256) 1024 res2a_branch2c[0][0] \n__________________________________________________________________________________________________\nbn2a_branch1 (BatchNormalizatio (None, 56, 56, 256) 1024 res2a_branch1[0][0] \n__________________________________________________________________________________________________\nadd_1 (Add) (None, 56, 56, 256) 0 bn2a_branch2c[0][0] \n bn2a_branch1[0][0] \n__________________________________________________________________________________________________\nactivation_4 (Activation) (None, 56, 56, 256) 0 add_1[0][0] \n__________________________________________________________________________________________________\nres2b_branch2a (Conv2D) (None, 56, 56, 64) 16448 activation_4[0][0] \n__________________________________________________________________________________________________\nbn2b_branch2a (BatchNormalizati (None, 56, 56, 64) 256 res2b_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_5 (Activation) (None, 56, 56, 64) 0 bn2b_branch2a[0][0] \n__________________________________________________________________________________________________\nres2b_branch2b (Conv2D) (None, 56, 56, 64) 36928 activation_5[0][0] \n__________________________________________________________________________________________________\nbn2b_branch2b (BatchNormalizati (None, 56, 56, 64) 256 res2b_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_6 (Activation) (None, 56, 56, 64) 0 bn2b_branch2b[0][0] \n__________________________________________________________________________________________________\nres2b_branch2c (Conv2D) (None, 56, 56, 256) 16640 activation_6[0][0] \n__________________________________________________________________________________________________\nbn2b_branch2c (BatchNormalizati (None, 56, 56, 256) 1024 res2b_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_2 (Add) (None, 56, 56, 256) 0 bn2b_branch2c[0][0] \n activation_4[0][0] \n__________________________________________________________________________________________________\nactivation_7 (Activation) (None, 56, 56, 256) 0 add_2[0][0] \n__________________________________________________________________________________________________\nres2c_branch2a (Conv2D) (None, 56, 56, 64) 16448 activation_7[0][0] \n__________________________________________________________________________________________________\nbn2c_branch2a (BatchNormalizati (None, 56, 56, 64) 256 res2c_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_8 (Activation) (None, 56, 56, 64) 0 bn2c_branch2a[0][0] 
\n__________________________________________________________________________________________________\nres2c_branch2b (Conv2D) (None, 56, 56, 64) 36928 activation_8[0][0] \n__________________________________________________________________________________________________\nbn2c_branch2b (BatchNormalizati (None, 56, 56, 64) 256 res2c_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_9 (Activation) (None, 56, 56, 64) 0 bn2c_branch2b[0][0] \n__________________________________________________________________________________________________\nres2c_branch2c (Conv2D) (None, 56, 56, 256) 16640 activation_9[0][0] \n__________________________________________________________________________________________________\nbn2c_branch2c (BatchNormalizati (None, 56, 56, 256) 1024 res2c_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_3 (Add) (None, 56, 56, 256) 0 bn2c_branch2c[0][0] \n activation_7[0][0] \n__________________________________________________________________________________________________\nactivation_10 (Activation) (None, 56, 56, 256) 0 add_3[0][0] \n__________________________________________________________________________________________________\nres3a_branch2a (Conv2D) (None, 28, 28, 128) 32896 activation_10[0][0] \n__________________________________________________________________________________________________\nbn3a_branch2a (BatchNormalizati (None, 28, 28, 128) 512 res3a_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_11 (Activation) (None, 28, 28, 128) 0 bn3a_branch2a[0][0] \n__________________________________________________________________________________________________\nres3a_branch2b (Conv2D) (None, 28, 28, 128) 147584 activation_11[0][0] \n__________________________________________________________________________________________________\nbn3a_branch2b (BatchNormalizati (None, 28, 28, 128) 512 res3a_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_12 (Activation) (None, 28, 28, 128) 0 bn3a_branch2b[0][0] \n__________________________________________________________________________________________________\nres3a_branch2c (Conv2D) (None, 28, 28, 512) 66048 activation_12[0][0] \n__________________________________________________________________________________________________\nres3a_branch1 (Conv2D) (None, 28, 28, 512) 131584 activation_10[0][0] \n__________________________________________________________________________________________________\nbn3a_branch2c (BatchNormalizati (None, 28, 28, 512) 2048 res3a_branch2c[0][0] \n__________________________________________________________________________________________________\nbn3a_branch1 (BatchNormalizatio (None, 28, 28, 512) 2048 res3a_branch1[0][0] \n__________________________________________________________________________________________________\nadd_4 (Add) (None, 28, 28, 512) 0 bn3a_branch2c[0][0] \n bn3a_branch1[0][0] \n__________________________________________________________________________________________________\nactivation_13 (Activation) (None, 28, 28, 512) 0 add_4[0][0] \n__________________________________________________________________________________________________\nres3b_branch2a (Conv2D) (None, 28, 28, 128) 65664 activation_13[0][0] 
\n__________________________________________________________________________________________________\nbn3b_branch2a (BatchNormalizati (None, 28, 28, 128) 512 res3b_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_14 (Activation) (None, 28, 28, 128) 0 bn3b_branch2a[0][0] \n__________________________________________________________________________________________________\nres3b_branch2b (Conv2D) (None, 28, 28, 128) 147584 activation_14[0][0] \n__________________________________________________________________________________________________\nbn3b_branch2b (BatchNormalizati (None, 28, 28, 128) 512 res3b_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_15 (Activation) (None, 28, 28, 128) 0 bn3b_branch2b[0][0] \n__________________________________________________________________________________________________\nres3b_branch2c (Conv2D) (None, 28, 28, 512) 66048 activation_15[0][0] \n__________________________________________________________________________________________________\nbn3b_branch2c (BatchNormalizati (None, 28, 28, 512) 2048 res3b_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_5 (Add) (None, 28, 28, 512) 0 bn3b_branch2c[0][0] \n activation_13[0][0] \n__________________________________________________________________________________________________\nactivation_16 (Activation) (None, 28, 28, 512) 0 add_5[0][0] \n__________________________________________________________________________________________________\nres3c_branch2a (Conv2D) (None, 28, 28, 128) 65664 activation_16[0][0] \n__________________________________________________________________________________________________\nbn3c_branch2a (BatchNormalizati (None, 28, 28, 128) 512 res3c_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_17 (Activation) (None, 28, 28, 128) 0 bn3c_branch2a[0][0] \n__________________________________________________________________________________________________\nres3c_branch2b (Conv2D) (None, 28, 28, 128) 147584 activation_17[0][0] \n__________________________________________________________________________________________________\nbn3c_branch2b (BatchNormalizati (None, 28, 28, 128) 512 res3c_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_18 (Activation) (None, 28, 28, 128) 0 bn3c_branch2b[0][0] \n__________________________________________________________________________________________________\nres3c_branch2c (Conv2D) (None, 28, 28, 512) 66048 activation_18[0][0] \n__________________________________________________________________________________________________\nbn3c_branch2c (BatchNormalizati (None, 28, 28, 512) 2048 res3c_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_6 (Add) (None, 28, 28, 512) 0 bn3c_branch2c[0][0] \n activation_16[0][0] \n__________________________________________________________________________________________________\nactivation_19 (Activation) (None, 28, 28, 512) 0 add_6[0][0] \n__________________________________________________________________________________________________\nres3d_branch2a (Conv2D) (None, 28, 28, 128) 65664 activation_19[0][0] 
\n__________________________________________________________________________________________________\nbn3d_branch2a (BatchNormalizati (None, 28, 28, 128) 512 res3d_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_20 (Activation) (None, 28, 28, 128) 0 bn3d_branch2a[0][0] \n__________________________________________________________________________________________________\nres3d_branch2b (Conv2D) (None, 28, 28, 128) 147584 activation_20[0][0] \n__________________________________________________________________________________________________\nbn3d_branch2b (BatchNormalizati (None, 28, 28, 128) 512 res3d_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_21 (Activation) (None, 28, 28, 128) 0 bn3d_branch2b[0][0] \n__________________________________________________________________________________________________\nres3d_branch2c (Conv2D) (None, 28, 28, 512) 66048 activation_21[0][0] \n__________________________________________________________________________________________________\nbn3d_branch2c (BatchNormalizati (None, 28, 28, 512) 2048 res3d_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_7 (Add) (None, 28, 28, 512) 0 bn3d_branch2c[0][0] \n activation_19[0][0] \n__________________________________________________________________________________________________\nactivation_22 (Activation) (None, 28, 28, 512) 0 add_7[0][0] \n__________________________________________________________________________________________________\nres4a_branch2a (Conv2D) (None, 14, 14, 256) 131328 activation_22[0][0] \n__________________________________________________________________________________________________\nbn4a_branch2a (BatchNormalizati (None, 14, 14, 256) 1024 res4a_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_23 (Activation) (None, 14, 14, 256) 0 bn4a_branch2a[0][0] \n__________________________________________________________________________________________________\nres4a_branch2b (Conv2D) (None, 14, 14, 256) 590080 activation_23[0][0] \n__________________________________________________________________________________________________\nbn4a_branch2b (BatchNormalizati (None, 14, 14, 256) 1024 res4a_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_24 (Activation) (None, 14, 14, 256) 0 bn4a_branch2b[0][0] \n__________________________________________________________________________________________________\nres4a_branch2c (Conv2D) (None, 14, 14, 1024) 263168 activation_24[0][0] \n__________________________________________________________________________________________________\nres4a_branch1 (Conv2D) (None, 14, 14, 1024) 525312 activation_22[0][0] \n__________________________________________________________________________________________________\nbn4a_branch2c (BatchNormalizati (None, 14, 14, 1024) 4096 res4a_branch2c[0][0] \n__________________________________________________________________________________________________\nbn4a_branch1 (BatchNormalizatio (None, 14, 14, 1024) 4096 res4a_branch1[0][0] \n__________________________________________________________________________________________________\nadd_8 (Add) (None, 14, 14, 1024) 0 bn4a_branch2c[0][0] \n bn4a_branch1[0][0] 
\n__________________________________________________________________________________________________\nactivation_25 (Activation) (None, 14, 14, 1024) 0 add_8[0][0] \n__________________________________________________________________________________________________\nres4b_branch2a (Conv2D) (None, 14, 14, 256) 262400 activation_25[0][0] \n__________________________________________________________________________________________________\nbn4b_branch2a (BatchNormalizati (None, 14, 14, 256) 1024 res4b_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_26 (Activation) (None, 14, 14, 256) 0 bn4b_branch2a[0][0] \n__________________________________________________________________________________________________\nres4b_branch2b (Conv2D) (None, 14, 14, 256) 590080 activation_26[0][0] \n__________________________________________________________________________________________________\nbn4b_branch2b (BatchNormalizati (None, 14, 14, 256) 1024 res4b_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_27 (Activation) (None, 14, 14, 256) 0 bn4b_branch2b[0][0] \n__________________________________________________________________________________________________\nres4b_branch2c (Conv2D) (None, 14, 14, 1024) 263168 activation_27[0][0] \n__________________________________________________________________________________________________\nbn4b_branch2c (BatchNormalizati (None, 14, 14, 1024) 4096 res4b_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_9 (Add) (None, 14, 14, 1024) 0 bn4b_branch2c[0][0] \n activation_25[0][0] \n__________________________________________________________________________________________________\nactivation_28 (Activation) (None, 14, 14, 1024) 0 add_9[0][0] \n__________________________________________________________________________________________________\nres4c_branch2a (Conv2D) (None, 14, 14, 256) 262400 activation_28[0][0] \n__________________________________________________________________________________________________\nbn4c_branch2a (BatchNormalizati (None, 14, 14, 256) 1024 res4c_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_29 (Activation) (None, 14, 14, 256) 0 bn4c_branch2a[0][0] \n__________________________________________________________________________________________________\nres4c_branch2b (Conv2D) (None, 14, 14, 256) 590080 activation_29[0][0] \n__________________________________________________________________________________________________\nbn4c_branch2b (BatchNormalizati (None, 14, 14, 256) 1024 res4c_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_30 (Activation) (None, 14, 14, 256) 0 bn4c_branch2b[0][0] \n__________________________________________________________________________________________________\nres4c_branch2c (Conv2D) (None, 14, 14, 1024) 263168 activation_30[0][0] \n__________________________________________________________________________________________________\nbn4c_branch2c (BatchNormalizati (None, 14, 14, 1024) 4096 res4c_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_10 (Add) (None, 14, 14, 1024) 0 bn4c_branch2c[0][0] \n activation_28[0][0] 
\n__________________________________________________________________________________________________\nactivation_31 (Activation) (None, 14, 14, 1024) 0 add_10[0][0] \n__________________________________________________________________________________________________\nres4d_branch2a (Conv2D) (None, 14, 14, 256) 262400 activation_31[0][0] \n__________________________________________________________________________________________________\nbn4d_branch2a (BatchNormalizati (None, 14, 14, 256) 1024 res4d_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_32 (Activation) (None, 14, 14, 256) 0 bn4d_branch2a[0][0] \n__________________________________________________________________________________________________\nres4d_branch2b (Conv2D) (None, 14, 14, 256) 590080 activation_32[0][0] \n__________________________________________________________________________________________________\nbn4d_branch2b (BatchNormalizati (None, 14, 14, 256) 1024 res4d_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_33 (Activation) (None, 14, 14, 256) 0 bn4d_branch2b[0][0] \n__________________________________________________________________________________________________\nres4d_branch2c (Conv2D) (None, 14, 14, 1024) 263168 activation_33[0][0] \n__________________________________________________________________________________________________\nbn4d_branch2c (BatchNormalizati (None, 14, 14, 1024) 4096 res4d_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_11 (Add) (None, 14, 14, 1024) 0 bn4d_branch2c[0][0] \n activation_31[0][0] \n__________________________________________________________________________________________________\nactivation_34 (Activation) (None, 14, 14, 1024) 0 add_11[0][0] \n__________________________________________________________________________________________________\nres4e_branch2a (Conv2D) (None, 14, 14, 256) 262400 activation_34[0][0] \n__________________________________________________________________________________________________\nbn4e_branch2a (BatchNormalizati (None, 14, 14, 256) 1024 res4e_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_35 (Activation) (None, 14, 14, 256) 0 bn4e_branch2a[0][0] \n__________________________________________________________________________________________________\nres4e_branch2b (Conv2D) (None, 14, 14, 256) 590080 activation_35[0][0] \n__________________________________________________________________________________________________\nbn4e_branch2b (BatchNormalizati (None, 14, 14, 256) 1024 res4e_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_36 (Activation) (None, 14, 14, 256) 0 bn4e_branch2b[0][0] \n__________________________________________________________________________________________________\nres4e_branch2c (Conv2D) (None, 14, 14, 1024) 263168 activation_36[0][0] \n__________________________________________________________________________________________________\nbn4e_branch2c (BatchNormalizati (None, 14, 14, 1024) 4096 res4e_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_12 (Add) (None, 14, 14, 1024) 0 bn4e_branch2c[0][0] \n activation_34[0][0] 
\n__________________________________________________________________________________________________\nactivation_37 (Activation) (None, 14, 14, 1024) 0 add_12[0][0] \n__________________________________________________________________________________________________\nres4f_branch2a (Conv2D) (None, 14, 14, 256) 262400 activation_37[0][0] \n__________________________________________________________________________________________________\nbn4f_branch2a (BatchNormalizati (None, 14, 14, 256) 1024 res4f_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_38 (Activation) (None, 14, 14, 256) 0 bn4f_branch2a[0][0] \n__________________________________________________________________________________________________\nres4f_branch2b (Conv2D) (None, 14, 14, 256) 590080 activation_38[0][0] \n__________________________________________________________________________________________________\nbn4f_branch2b (BatchNormalizati (None, 14, 14, 256) 1024 res4f_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_39 (Activation) (None, 14, 14, 256) 0 bn4f_branch2b[0][0] \n__________________________________________________________________________________________________\nres4f_branch2c (Conv2D) (None, 14, 14, 1024) 263168 activation_39[0][0] \n__________________________________________________________________________________________________\nbn4f_branch2c (BatchNormalizati (None, 14, 14, 1024) 4096 res4f_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_13 (Add) (None, 14, 14, 1024) 0 bn4f_branch2c[0][0] \n activation_37[0][0] \n__________________________________________________________________________________________________\nactivation_40 (Activation) (None, 14, 14, 1024) 0 add_13[0][0] \n__________________________________________________________________________________________________\nres5a_branch2a (Conv2D) (None, 7, 7, 512) 524800 activation_40[0][0] \n__________________________________________________________________________________________________\nbn5a_branch2a (BatchNormalizati (None, 7, 7, 512) 2048 res5a_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_41 (Activation) (None, 7, 7, 512) 0 bn5a_branch2a[0][0] \n__________________________________________________________________________________________________\nres5a_branch2b (Conv2D) (None, 7, 7, 512) 2359808 activation_41[0][0] \n__________________________________________________________________________________________________\nbn5a_branch2b (BatchNormalizati (None, 7, 7, 512) 2048 res5a_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_42 (Activation) (None, 7, 7, 512) 0 bn5a_branch2b[0][0] \n__________________________________________________________________________________________________\nres5a_branch2c (Conv2D) (None, 7, 7, 2048) 1050624 activation_42[0][0] \n__________________________________________________________________________________________________\nres5a_branch1 (Conv2D) (None, 7, 7, 2048) 2099200 activation_40[0][0] \n__________________________________________________________________________________________________\nbn5a_branch2c (BatchNormalizati (None, 7, 7, 2048) 8192 res5a_branch2c[0][0] 
\n__________________________________________________________________________________________________\nbn5a_branch1 (BatchNormalizatio (None, 7, 7, 2048) 8192 res5a_branch1[0][0] \n__________________________________________________________________________________________________\nadd_14 (Add) (None, 7, 7, 2048) 0 bn5a_branch2c[0][0] \n bn5a_branch1[0][0] \n__________________________________________________________________________________________________\nactivation_43 (Activation) (None, 7, 7, 2048) 0 add_14[0][0] \n__________________________________________________________________________________________________\nres5b_branch2a (Conv2D) (None, 7, 7, 512) 1049088 activation_43[0][0] \n__________________________________________________________________________________________________\nbn5b_branch2a (BatchNormalizati (None, 7, 7, 512) 2048 res5b_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_44 (Activation) (None, 7, 7, 512) 0 bn5b_branch2a[0][0] \n__________________________________________________________________________________________________\nres5b_branch2b (Conv2D) (None, 7, 7, 512) 2359808 activation_44[0][0] \n__________________________________________________________________________________________________\nbn5b_branch2b (BatchNormalizati (None, 7, 7, 512) 2048 res5b_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_45 (Activation) (None, 7, 7, 512) 0 bn5b_branch2b[0][0] \n__________________________________________________________________________________________________\nres5b_branch2c (Conv2D) (None, 7, 7, 2048) 1050624 activation_45[0][0] \n__________________________________________________________________________________________________\nbn5b_branch2c (BatchNormalizati (None, 7, 7, 2048) 8192 res5b_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_15 (Add) (None, 7, 7, 2048) 0 bn5b_branch2c[0][0] \n activation_43[0][0] \n__________________________________________________________________________________________________\nactivation_46 (Activation) (None, 7, 7, 2048) 0 add_15[0][0] \n__________________________________________________________________________________________________\nres5c_branch2a (Conv2D) (None, 7, 7, 512) 1049088 activation_46[0][0] \n__________________________________________________________________________________________________\nbn5c_branch2a (BatchNormalizati (None, 7, 7, 512) 2048 res5c_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_47 (Activation) (None, 7, 7, 512) 0 bn5c_branch2a[0][0] \n__________________________________________________________________________________________________\nres5c_branch2b (Conv2D) (None, 7, 7, 512) 2359808 activation_47[0][0] \n__________________________________________________________________________________________________\nbn5c_branch2b (BatchNormalizati (None, 7, 7, 512) 2048 res5c_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_48 (Activation) (None, 7, 7, 512) 0 bn5c_branch2b[0][0] \n__________________________________________________________________________________________________\nres5c_branch2c (Conv2D) (None, 7, 7, 2048) 1050624 activation_48[0][0] 
\n__________________________________________________________________________________________________\ninput_regression (InputLayer) (None, 224, 224, 3) 0 \n__________________________________________________________________________________________________\nbn5c_branch2c (BatchNormalizati (None, 7, 7, 2048) 8192 res5c_branch2c[0][0] \n__________________________________________________________________________________________________\nconv2d_10 (Conv2D) (None, 222, 222, 16) 448 input_regression[0][0] \n__________________________________________________________________________________________________\nadd_16 (Add) (None, 7, 7, 2048) 0 bn5c_branch2c[0][0] \n activation_46[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_10 (MaxPooling2D) (None, 111, 111, 16) 0 conv2d_10[0][0] \n__________________________________________________________________________________________________\nactivation_49 (Activation) (None, 7, 7, 2048) 0 add_16[0][0] \n__________________________________________________________________________________________________\nconv2d_11 (Conv2D) (None, 107, 107, 24) 9624 max_pooling2d_10[0][0] \n__________________________________________________________________________________________________\navg_pool (GlobalAveragePooling2 (None, 2048) 0 activation_49[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_11 (MaxPooling2D) (None, 53, 53, 24) 0 conv2d_11[0][0] \n__________________________________________________________________________________________________\ndense_1 (Dense) (None, 1024) 2098176 avg_pool[0][0] \n__________________________________________________________________________________________________\nconv2d_12 (Conv2D) (None, 49, 49, 32) 19232 max_pooling2d_11[0][0] \n__________________________________________________________________________________________________\ndropout_1 (Dropout) (None, 1024) 0 dense_1[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_12 (MaxPooling2D) (None, 24, 24, 32) 0 conv2d_12[0][0] \n__________________________________________________________________________________________________\ndense_2 (Dense) (None, 512) 524800 dropout_1[0][0] \n__________________________________________________________________________________________________\nflatten_4 (Flatten) (None, 18432) 0 max_pooling2d_12[0][0] \n__________________________________________________________________________________________________\nclassification (Dense) (None, 36) 18468 dense_2[0][0] \n__________________________________________________________________________________________________\ndense_6 (Dense) (None, 128) 2359424 flatten_4[0][0] \n__________________________________________________________________________________________________\nconcatenate_4 (Concatenate) (None, 164) 0 dense_6[0][0] \n classification[0][0] \n__________________________________________________________________________________________________\nregression (Dense) (None, 1) 165 concatenate_4[0][0] \n==================================================================================================\nTotal params: 28,618,049\nTrainable params: 5,030,337\nNon-trainable params: 23,587,712\n__________________________________________________________________________________________________\n" ], [ "model.compile(\n optimizer='adam',\n loss={\n 'regression': 'mean_squared_error',\n 'classification': 
'binary_crossentropy'\n },\n metrics=[\n 'accuracy'\n ]\n)", "_____no_output_____" ], [ "train_gen = ImageGenerator(image_paths_train, (image_attrs_train, image_scores_train), batch_size=128)\ntest_gen = ImageGenerator(image_paths_test, (image_attrs_test, image_scores_test), batch_size=128)", "_____no_output_____" ], [ "train_len = len(image_paths_train)\ntest_len = len(image_paths_test)\ntrain_len, test_len", "_____no_output_____" ], [ "model.fit_generator(train_gen, validation_data=test_gen, epochs=200, \n steps_per_epoch=train_len // 128,\n validation_steps=10, use_multiprocessing=False,\n callbacks=[\n ReduceLROnPlateau(patience=2, verbose=1),\n ModelCheckpoint('chpt-4_2.hdf5', verbose=1, save_best_only=True)\n ])", "Epoch 1/200\n345/345 [==============================] - 270s 781ms/step - loss: 12.5174 - classification_loss: 0.4885 - regression_loss: 12.0289 - classification_acc: 0.8355 - regression_acc: 4.3026e-04 - val_loss: 0.8073 - val_classification_loss: 0.3461 - val_regression_loss: 0.4612 - val_classification_acc: 0.8316 - val_regression_acc: 0.0000e+00\n\nEpoch 00001: val_loss improved from inf to 0.80734, saving model to chpt-4_2.hdf5\nEpoch 2/200\n345/345 [==============================] - 263s 764ms/step - loss: 0.5436 - classification_loss: 0.3002 - regression_loss: 0.2434 - classification_acc: 0.8759 - regression_acc: 5.2084e-04 - val_loss: 0.6198 - val_classification_loss: 0.3069 - val_regression_loss: 0.3129 - val_classification_acc: 0.8852 - val_regression_acc: 0.0000e+00\n\nEpoch 00002: val_loss improved from 0.80734 to 0.61983, saving model to chpt-4_2.hdf5\nEpoch 3/200\n345/345 [==============================] - 264s 765ms/step - loss: 0.5086 - classification_loss: 0.2881 - regression_loss: 0.2205 - classification_acc: 0.8797 - regression_acc: 5.6613e-04 - val_loss: 0.5796 - val_classification_loss: 0.3053 - val_regression_loss: 0.2743 - val_classification_acc: 0.8752 - val_regression_acc: 7.9618e-04\n\nEpoch 00003: val_loss improved from 0.61983 to 0.57959, saving model to chpt-4_2.hdf5\nEpoch 4/200\n345/345 [==============================] - 261s 756ms/step - loss: 0.4933 - classification_loss: 0.2815 - regression_loss: 0.2118 - classification_acc: 0.8816 - regression_acc: 5.2084e-04 - val_loss: 0.5763 - val_classification_loss: 0.2987 - val_regression_loss: 0.2776 - val_classification_acc: 0.8930 - val_regression_acc: 0.0016\n\nEpoch 00004: val_loss improved from 0.57959 to 0.57632, saving model to chpt-4_2.hdf5\nEpoch 5/200\n345/345 [==============================] - 257s 744ms/step - loss: 0.4833 - classification_loss: 0.2786 - regression_loss: 0.2047 - classification_acc: 0.8824 - regression_acc: 5.4348e-04 - val_loss: 0.5914 - val_classification_loss: 0.3227 - val_regression_loss: 0.2687 - val_classification_acc: 0.8679 - val_regression_acc: 7.9618e-04\n\nEpoch 00005: val_loss did not improve from 0.57632\nEpoch 6/200\n345/345 [==============================] - 257s 746ms/step - loss: 0.4750 - classification_loss: 0.2757 - regression_loss: 0.1992 - classification_acc: 0.8836 - regression_acc: 5.8877e-04 - val_loss: 0.6103 - val_classification_loss: 0.3253 - val_regression_loss: 0.2850 - val_classification_acc: 0.8839 - val_regression_acc: 7.9618e-04\n\nEpoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.\n\nEpoch 00006: val_loss did not improve from 0.57632\nEpoch 7/200\n345/345 [==============================] - 257s 745ms/step - loss: 0.4478 - classification_loss: 0.2683 - regression_loss: 0.1795 - classification_acc: 0.8860 - 
regression_acc: 6.7935e-04 - val_loss: 0.6084 - val_classification_loss: 0.3335 - val_regression_loss: 0.2749 - val_classification_acc: 0.8753 - val_regression_acc: 7.9618e-04\n\nEpoch 00007: val_loss did not improve from 0.57632\nEpoch 8/200\n345/345 [==============================] - 258s 749ms/step - loss: 0.4444 - classification_loss: 0.2675 - regression_loss: 0.1769 - classification_acc: 0.8864 - regression_acc: 5.8877e-04 - val_loss: 0.6206 - val_classification_loss: 0.3287 - val_regression_loss: 0.2919 - val_classification_acc: 0.8878 - val_regression_acc: 0.0016\n\nEpoch 00008: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.\n\nEpoch 00008: val_loss did not improve from 0.57632\nEpoch 9/200\n345/345 [==============================] - 258s 747ms/step - loss: 0.4394 - classification_loss: 0.2665 - regression_loss: 0.1728 - classification_acc: 0.8868 - regression_acc: 6.3406e-04 - val_loss: 0.6302 - val_classification_loss: 0.3345 - val_regression_loss: 0.2956 - val_classification_acc: 0.8774 - val_regression_acc: 7.9618e-04\n\nEpoch 00009: val_loss did not improve from 0.57632\nEpoch 10/200\n345/345 [==============================] - 258s 747ms/step - loss: 0.4383 - classification_loss: 0.2656 - regression_loss: 0.1728 - classification_acc: 0.8872 - regression_acc: 6.3406e-04 - val_loss: 0.6139 - val_classification_loss: 0.3317 - val_regression_loss: 0.2822 - val_classification_acc: 0.8769 - val_regression_acc: 7.9618e-04\n\nEpoch 00010: ReduceLROnPlateau reducing learning rate to 1.0000000656873453e-06.\n\nEpoch 00010: val_loss did not improve from 0.57632\nEpoch 11/200\n345/345 [==============================] - 259s 750ms/step - loss: 0.4375 - classification_loss: 0.2651 - regression_loss: 0.1724 - classification_acc: 0.8874 - regression_acc: 5.6613e-04 - val_loss: 0.6176 - val_classification_loss: 0.3345 - val_regression_loss: 0.2831 - val_classification_acc: 0.8771 - val_regression_acc: 7.9618e-04\n\nEpoch 00011: val_loss did not improve from 0.57632\nEpoch 12/200\n345/345 [==============================] - 258s 748ms/step - loss: 0.4379 - classification_loss: 0.2659 - regression_loss: 0.1720 - classification_acc: 0.8872 - regression_acc: 6.3406e-04 - val_loss: 0.6158 - val_classification_loss: 0.3276 - val_regression_loss: 0.2882 - val_classification_acc: 0.8860 - val_regression_acc: 0.0016\n\nEpoch 00012: ReduceLROnPlateau reducing learning rate to 1.0000001111620805e-07.\n\nEpoch 00012: val_loss did not improve from 0.57632\nEpoch 13/200\n345/345 [==============================] - 258s 748ms/step - loss: 0.4382 - classification_loss: 0.2664 - regression_loss: 0.1718 - classification_acc: 0.8868 - regression_acc: 6.3406e-04 - val_loss: 0.6185 - val_classification_loss: 0.3341 - val_regression_loss: 0.2843 - val_classification_acc: 0.8765 - val_regression_acc: 7.9618e-04\n\nEpoch 00013: val_loss did not improve from 0.57632\nEpoch 14/200\n345/345 [==============================] - 258s 747ms/step - loss: 0.4374 - classification_loss: 0.2656 - regression_loss: 0.1718 - classification_acc: 0.8871 - regression_acc: 5.8877e-04 - val_loss: 0.6162 - val_classification_loss: 0.3319 - val_regression_loss: 0.2843 - val_classification_acc: 0.8766 - val_regression_acc: 7.9618e-04\n\nEpoch 00014: ReduceLROnPlateau reducing learning rate to 1.000000082740371e-08.\n\nEpoch 00014: val_loss did not improve from 0.57632\nEpoch 15/200\n345/345 [==============================] - 258s 747ms/step - loss: 0.4374 - classification_loss: 0.2655 - regression_loss: 0.1719 - 
classification_acc: 0.8871 - regression_acc: 5.8877e-04 - val_loss: 0.6191 - val_classification_loss: 0.3346 - val_regression_loss: 0.2844 - val_classification_acc: 0.8771 - val_regression_acc: 7.9618e-04\n\nEpoch 00015: val_loss did not improve from 0.57632\nEpoch 16/200\n345/345 [==============================] - 259s 750ms/step - loss: 0.4377 - classification_loss: 0.2656 - regression_loss: 0.1721 - classification_acc: 0.8874 - regression_acc: 6.3406e-04 - val_loss: 0.6172 - val_classification_loss: 0.3276 - val_regression_loss: 0.2896 - val_classification_acc: 0.8860 - val_regression_acc: 0.0016\n\nEpoch 00016: ReduceLROnPlateau reducing learning rate to 1.000000082740371e-09.\n\nEpoch 00016: val_loss did not improve from 0.57632\nEpoch 17/200\n345/345 [==============================] - 258s 748ms/step - loss: 0.4385 - classification_loss: 0.2659 - regression_loss: 0.1725 - classification_acc: 0.8870 - regression_acc: 6.3406e-04 - val_loss: 0.6191 - val_classification_loss: 0.3342 - val_regression_loss: 0.2850 - val_classification_acc: 0.8764 - val_regression_acc: 7.9618e-04\n\nEpoch 00017: val_loss did not improve from 0.57632\nEpoch 18/200\n345/345 [==============================] - 258s 748ms/step - loss: 0.4380 - classification_loss: 0.2663 - regression_loss: 0.1717 - classification_acc: 0.8870 - regression_acc: 6.1142e-04 - val_loss: 0.6163 - val_classification_loss: 0.3319 - val_regression_loss: 0.2843 - val_classification_acc: 0.8766 - val_regression_acc: 7.9618e-04\n\nEpoch 00018: ReduceLROnPlateau reducing learning rate to 1.000000082740371e-10.\n\nEpoch 00018: val_loss did not improve from 0.57632\nEpoch 19/200\n345/345 [==============================] - 258s 747ms/step - loss: 0.4375 - classification_loss: 0.2645 - regression_loss: 0.1730 - classification_acc: 0.8878 - regression_acc: 6.1142e-04 - val_loss: 0.6191 - val_classification_loss: 0.3346 - val_regression_loss: 0.2844 - val_classification_acc: 0.8771 - val_regression_acc: 7.9618e-04\n" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a60e49aceda195e1c1f09f50b2245585044d1d6
61,411
ipynb
Jupyter Notebook
docs/notebooks/RailroadDiagrams.ipynb
robi-y/fuzzingbook
402116fc19f89d1d572afc99a048ac97ed8f4013
[ "MIT" ]
1
2019-02-02T19:04:36.000Z
2019-02-02T19:04:36.000Z
docs/notebooks/RailroadDiagrams.ipynb
robi-y/fuzzingbook
402116fc19f89d1d572afc99a048ac97ed8f4013
[ "MIT" ]
null
null
null
docs/notebooks/RailroadDiagrams.ipynb
robi-y/fuzzingbook
402116fc19f89d1d572afc99a048ac97ed8f4013
[ "MIT" ]
1
2018-12-01T16:34:30.000Z
2018-12-01T16:34:30.000Z
37.400122
257
0.437707
[ [ [ "# Railroad Diagrams\n\nThe code in this notebook helps with drawing syntax-diagrams. It is a (slightly customized) copy of the [excellent library from Tab Atkins jr.](https://github.com/tabatkins/railroad-diagrams), which unfortunately is not available as a Python package.", "_____no_output_____" ], [ "**Prerequisites**\n\n* This notebook needs some understanding on advanced concepts in Python and Graphics, notably \n * classes\n * the Python `with` statement\n * Scalable Vector Graphics", "_____no_output_____" ] ], [ [ "import fuzzingbook_utils", "_____no_output_____" ], [ "import re\nimport io", "_____no_output_____" ], [ "class C:\n # Display constants\n DEBUG = False # if true, writes some debug information into attributes\n VS = 8 # minimum vertical separation between things. For a 3px stroke, must be at least 4\n AR = 10 # radius of arcs\n DIAGRAM_CLASS = 'railroad-diagram' # class to put on the root <svg>\n STROKE_ODD_PIXEL_LENGTH = True # is the stroke width an odd (1px, 3px, etc) pixel length?\n INTERNAL_ALIGNMENT = 'center' # how to align items when they have extra space. left/right/center\n CHAR_WIDTH = 8.5 # width of each monospace character. play until you find the right value for your font\n COMMENT_CHAR_WIDTH = 7 # comments are in smaller text by default\n\n DEFAULT_STYLE = '''\\\n svg.railroad-diagram {\n background-color:hsl(100,100%,100%);\n }\n svg.railroad-diagram path {\n stroke-width:3;\n stroke:black;\n fill:rgba(0,0,0,0);\n }\n svg.railroad-diagram text {\n font:bold 14px monospace;\n text-anchor:middle;\n }\n svg.railroad-diagram text.label{\n text-anchor:start;\n }\n svg.railroad-diagram text.comment{\n font:italic 12px monospace;\n }\n svg.railroad-diagram rect{\n stroke-width:3;\n stroke:black;\n fill:hsl(0,62%,82%);\n }\n'''", "_____no_output_____" ], [ "def e(text):\n text = re.sub(r\"&\",'&amp;', str(text))\n text = re.sub(r\"<\",'&lt;', str(text))\n text = re.sub(r\">\",'&gt;', str(text))\n return str(text)\n\ndef determineGaps(outer, inner):\n diff = outer - inner\n if C.INTERNAL_ALIGNMENT == 'left':\n return 0, diff\n elif C.INTERNAL_ALIGNMENT == 'right':\n return diff, 0\n else:\n return diff/2, diff/2\n\ndef doubleenumerate(seq):\n length = len(list(seq))\n for i,item in enumerate(seq):\n yield i, i-length, item\n\ndef addDebug(el):\n if not C.DEBUG:\n return\n el.attrs['data-x'] = \"{0} w:{1} h:{2}/{3}/{4}\".format(type(el).__name__, el.width, el.up, el.height, el.down)", "_____no_output_____" ], [ "class DiagramItem(object):\n def __init__(self, name, attrs=None, text=None):\n self.name = name\n # up = distance it projects above the entry line\n # height = distance between the entry/exit lines\n # down = distance it projects below the exit line\n self.height = 0\n self.attrs = attrs or {}\n self.children = [text] if text else []\n self.needsSpace = False\n\n def format(self, x, y, width):\n raise NotImplementedError # Virtual\n\n def addTo(self, parent):\n parent.children.append(self)\n return self\n\n def writeSvg(self, write):\n write(u'<{0}'.format(self.name))\n for name, value in sorted(self.attrs.items()):\n write(u' {0}=\"{1}\"'.format(name, e(value)))\n write(u'>')\n if self.name in [\"g\", \"svg\"]:\n write(u'\\n')\n for child in self.children:\n if isinstance(child, DiagramItem):\n child.writeSvg(write)\n else:\n write(e(child))\n write(u'</{0}>'.format(self.name))\n\n def __eq__(self, other):\n return type(self) == type(other) and self.__dict__ == other.__dict__\n\n def __ne__(self, other):\n return not (self == other)", 
"_____no_output_____" ], [ "class Path(DiagramItem):\n def __init__(self, x, y):\n self.x = x\n self.y = y\n DiagramItem.__init__(self, 'path', {'d': 'M%s %s' % (x, y)})\n\n def m(self, x, y):\n self.attrs['d'] += 'm{0} {1}'.format(x,y)\n return self\n\n def l(self, x, y):\n self.attrs['d'] += 'l{0} {1}'.format(x,y)\n return self\n\n def h(self, val):\n self.attrs['d'] += 'h{0}'.format(val)\n return self\n\n def right(self, val):\n return self.h(max(0, val))\n\n def left(self, val):\n return self.h(-max(0, val))\n\n def v(self, val):\n self.attrs['d'] += 'v{0}'.format(val)\n return self\n\n def down(self, val):\n return self.v(max(0, val))\n\n def up(self, val):\n return self.v(-max(0, val))\n\n def arc_8(self, start, dir):\n # 1/8 of a circle\n arc = C.AR\n s2 = 1/math.sqrt(2) * arc\n s2inv = (arc - s2)\n path = \"a {0} {0} 0 0 {1} \".format(arc, \"1\" if dir == 'cw' else \"0\")\n sd = start+dir\n if sd == 'ncw':\n offset = [s2, s2inv]\n elif sd == 'necw':\n offset = [s2inv, s2]\n elif sd == 'ecw':\n offset = [-s2inv, s2]\n elif sd == 'secw':\n offset = [-s2, s2inv]\n elif sd == 'scw':\n offset = [-s2, -s2inv]\n elif sd == 'swcw':\n offset = [-s2inv, -s2]\n elif sd == 'wcw':\n offset = [s2inv, -s2]\n elif sd == 'nwcw':\n offset = [s2, -s2inv]\n elif sd == 'nccw':\n offset = [-s2, s2inv]\n elif sd == 'nwccw':\n offset = [-s2inv, s2]\n elif sd == 'wccw':\n offset = [s2inv, s2]\n elif sd == 'swccw':\n offset = [s2, s2inv]\n elif sd == 'sccw':\n offset = [s2, -s2inv]\n elif sd == 'seccw':\n offset = [s2inv, -s2]\n elif sd == 'eccw':\n offset = [-s2inv, -s2]\n elif sd == 'neccw':\n offset = [-s2, -s2inv]\n\n path += \" \".join(str(x) for x in offset)\n self.attrs['d'] += path\n return self\n\n def arc(self, sweep):\n x = C.AR\n y = C.AR\n if sweep[0] == 'e' or sweep[1] == 'w':\n x *= -1\n if sweep[0] == 's' or sweep[1] == 'n':\n y *= -1\n cw = 1 if sweep == 'ne' or sweep == 'es' or sweep == 'sw' or sweep == 'wn' else 0\n self.attrs['d'] += 'a{0} {0} 0 0 {1} {2} {3}'.format(C.AR, cw, x, y)\n return self\n\n\n def format(self):\n self.attrs['d'] += 'h.5'\n return self\n\n def __repr__(self):\n return 'Path(%r, %r)' % (self.x, self.y)", "_____no_output_____" ], [ "def wrapString(value):\n return value if isinstance(value, DiagramItem) else Terminal(value)", "_____no_output_____" ], [ "class Style(DiagramItem):\n def __init__(self, css):\n self.name = 'style'\n self.css = css\n self.height = 0\n self.width = 0\n self.needsSpace = False\n\n def __repr__(self):\n return 'Style(%r)' % css\n\n def format(self, x, y, width):\n return self\n\n def writeSvg(self, write):\n # Write included stylesheet as CDATA. 
See https://developer.mozilla.org/en-US/docs/Web/SVG/Element/style\n cdata = u'/* <![CDATA[ */\\n{css}\\n/* ]]> */\\n'.format(css=self.css)\n write(u'<style>{cdata}</style>'.format(cdata=cdata))", "_____no_output_____" ], [ "class Diagram(DiagramItem):\n def __init__(self, *items, **kwargs):\n # Accepts a type=[simple|complex] kwarg\n DiagramItem.__init__(self, 'svg', {'class': C.DIAGRAM_CLASS, 'xmlns': "http://www.w3.org/2000/svg"})\n self.type = kwargs.get("type", "simple")\n self.items = [wrapString(item) for item in items]\n if items and not isinstance(items[0], Start):\n self.items.insert(0, Start(self.type))\n if items and not isinstance(items[-1], End):\n self.items.append(End(self.type))\n self.css = kwargs.get("css", C.DEFAULT_STYLE)\n if self.css:\n self.items.insert(0, Style(self.css))\n self.up = 0\n self.down = 0\n self.height = 0\n self.width = 0\n for item in self.items:\n if isinstance(item, Style):\n continue\n self.width += item.width + (20 if item.needsSpace else 0)\n self.up = max(self.up, item.up - self.height)\n self.height += item.height\n self.down = max(self.down - item.height, item.down)\n if self.items[0].needsSpace:\n self.width -= 10\n if self.items[-1].needsSpace:\n self.width -= 10\n self.formatted = False\n\n def __repr__(self):\n if self.css:\n items = ', '.join(map(repr, self.items[2:-1]))\n else:\n items = ', '.join(map(repr, self.items[1:-1]))\n pieces = [] if not items else [items]\n if self.css != C.DEFAULT_STYLE:\n pieces.append('css=%r' % self.css)\n if self.type != 'simple':\n pieces.append('type=%r' % self.type)\n return 'Diagram(%s)' % ', '.join(pieces)\n\n def format(self, paddingTop=20, paddingRight=None, paddingBottom=None, paddingLeft=None):\n if paddingRight is None:\n paddingRight = paddingTop\n if paddingBottom is None:\n paddingBottom = paddingTop\n if paddingLeft is None:\n paddingLeft = paddingRight\n x = paddingLeft\n y = paddingTop + self.up\n g = DiagramItem('g')\n if C.STROKE_ODD_PIXEL_LENGTH:\n g.attrs['transform'] = 'translate(.5 .5)'\n for item in self.items:\n if item.needsSpace:\n Path(x, y).h(10).addTo(g)\n x += 10\n item.format(x, y, item.width).addTo(g)\n x += item.width\n y += item.height\n if item.needsSpace:\n Path(x, y).h(10).addTo(g)\n x += 10\n self.attrs['width'] = self.width + paddingLeft + paddingRight\n self.attrs['height'] = self.up + self.height + self.down + paddingTop + paddingBottom\n self.attrs['viewBox'] = "0 0 {width} {height}".format(**self.attrs)\n g.addTo(self)\n self.formatted = True\n return self\n\n\n def writeSvg(self, write):\n if not self.formatted:\n self.format()\n return DiagramItem.writeSvg(self, write)\n\n def parseCSSGrammar(self, text):\n token_patterns = {\n 'keyword': r"[\w-]+\(?",\n 'type': r"<[\w-]+(\(\))?>",\n 'char': r"[/,()]",\n 'literal': r"'(.)'",\n 'openbracket': r"\[",\n 'closebracket': r"\]",\n 'closebracketbang': r"\]!",\n 'bar': r"\|",\n 'doublebar': r"\|\|",\n 'doubleand': r"&&",\n 'multstar': r"\*",\n 'multplus': r"\+",\n 'multhash': r"#",\n 'multnum1': r"{\s*(\d+)\s*}",\n 'multnum2': r"{\s*(\d+)\s*,\s*(\d*)\s*}",\n 'multhashnum1': r"#{\s*(\d+)\s*}",\n 'multhashnum2': r"{\s*(\d+)\s*,\s*(\d*)\s*}"\n }\n\n\nclass Sequence(DiagramItem):\n def __init__(self, *items):\n DiagramItem.__init__(self, 'g')\n self.items = [wrapString(item) for item in items]\n self.needsSpace = True\n self.up = 0\n self.down = 0\n self.height = 0\n self.width = 0\n for item in self.items:\n self.width += item.width + (20 if
item.needsSpace else 0)\n self.up = max(self.up, item.up - self.height)\n self.height += item.height\n self.down = max(self.down - item.height, item.down)\n if self.items[0].needsSpace:\n self.width -= 10\n if self.items[-1].needsSpace:\n self.width -= 10\n addDebug(self)\n\n def __repr__(self):\n items = ', '.join(map(repr, self.items))\n return 'Sequence(%s)' % items\n\n def format(self, x, y, width):\n leftGap, rightGap = determineGaps(width, self.width)\n Path(x, y).h(leftGap).addTo(self)\n Path(x+leftGap+self.width, y+self.height).h(rightGap).addTo(self)\n x += leftGap\n for i,item in enumerate(self.items):\n if item.needsSpace and i > 0:\n Path(x, y).h(10).addTo(self)\n x += 10\n item.format(x, y, item.width).addTo(self)\n x += item.width\n y += item.height\n if item.needsSpace and i < len(self.items)-1:\n Path(x, y).h(10).addTo(self)\n x += 10\n return self", "_____no_output_____" ], [ "class Stack(DiagramItem):\n def __init__(self, *items):\n DiagramItem.__init__(self, 'g')\n self.items = [wrapString(item) for item in items]\n self.needsSpace = True\n self.width = max(item.width + (20 if item.needsSpace else 0) for item in self.items)\n # pretty sure that space calc is totes wrong\n if len(self.items) > 1:\n self.width += C.AR*2\n self.up = self.items[0].up\n self.down = self.items[-1].down\n self.height = 0\n last = len(self.items) - 1\n for i,item in enumerate(self.items):\n self.height += item.height\n if i > 0:\n self.height += max(C.AR*2, item.up + C.VS)\n if i < last:\n self.height += max(C.AR*2, item.down + C.VS)\n addDebug(self)\n\n def __repr__(self):\n items = ', '.join(repr(item) for item in self.items)\n return 'Stack(%s)' % items\n\n def format(self, x, y, width):\n leftGap, rightGap = determineGaps(width, self.width)\n Path(x, y).h(leftGap).addTo(self)\n x += leftGap\n xInitial = x\n if len(self.items) > 1:\n Path(x, y).h(C.AR).addTo(self)\n x += C.AR\n innerWidth = self.width - C.AR*2\n else:\n innerWidth = self.width\n for i,item in enumerate(self.items):\n item.format(x, y, innerWidth).addTo(self)\n x += innerWidth\n y += item.height\n if i != len(self.items)-1:\n (Path(x,y)\n .arc('ne').down(max(0, item.down + C.VS - C.AR*2))\n .arc('es').left(innerWidth)\n .arc('nw').down(max(0, self.items[i+1].up + C.VS - C.AR*2))\n .arc('ws').addTo(self))\n y += max(item.down + C.VS, C.AR*2) + max(self.items[i+1].up + C.VS, C.AR*2)\n x = xInitial + C.AR\n if len(self.items) > 1:\n Path(x, y).h(C.AR).addTo(self)\n x += C.AR\n Path(x, y).h(rightGap).addTo(self)\n return self\n\n\nclass OptionalSequence(DiagramItem):\n def __new__(cls, *items):\n if len(items) <= 1:\n return Sequence(*items)\n else:\n return super(OptionalSequence, cls).__new__(cls)\n\n def __init__(self, *items):\n DiagramItem.__init__(self, 'g')\n self.items = [wrapString(item) for item in items]\n self.needsSpace = False\n self.width = 0\n self.up = 0\n self.height = sum(item.height for item in self.items)\n self.down = self.items[0].down\n heightSoFar = 0\n for i,item in enumerate(self.items):\n self.up = max(self.up, max(C.AR * 2, item.up + C.VS) - heightSoFar)\n heightSoFar += item.height\n if i > 0:\n self.down = max(self.height + self.down, heightSoFar + max(C.AR*2, item.down + C.VS)) - self.height\n itemWidth = item.width + (20 if item.needsSpace else 0)\n if i == 0:\n self.width += C.AR + max(itemWidth, C.AR)\n else:\n self.width += C.AR*2 + max(itemWidth, C.AR) + C.AR\n addDebug(self)\n\n def __repr__(self):\n items = ', '.join(repr(item) for item in self.items)\n return 'OptionalSequence(%s)' % 
items\n\n def format(self, x, y, width):\n leftGap, rightGap = determineGaps(width, self.width)\n Path(x, y).right(leftGap).addTo(self)\n Path(x + leftGap + self.width, y + self.height).right(rightGap).addTo(self)\n x += leftGap\n upperLineY = y - self.up\n last = len(self.items) - 1\n for i,item in enumerate(self.items):\n itemSpace = 10 if item.needsSpace else 0\n itemWidth = item.width + itemSpace\n if i == 0:\n # Upper skip\n (Path(x,y)\n .arc('se')\n .up(y - upperLineY - C.AR*2)\n .arc('wn')\n .right(itemWidth - C.AR)\n .arc('ne')\n .down(y + item.height - upperLineY - C.AR*2)\n .arc('ws')\n .addTo(self))\n # Straight line\n (Path(x, y)\n .right(itemSpace + C.AR)\n .addTo(self))\n item.format(x + itemSpace + C.AR, y, item.width).addTo(self)\n x += itemWidth + C.AR\n y += item.height\n elif i < last:\n # Upper skip\n (Path(x, upperLineY)\n .right(C.AR*2 + max(itemWidth, C.AR) + C.AR)\n .arc('ne')\n .down(y - upperLineY + item.height - C.AR*2)\n .arc('ws')\n .addTo(self))\n # Straight line\n (Path(x,y)\n .right(C.AR*2)\n .addTo(self))\n item.format(x + C.AR*2, y, item.width).addTo(self)\n (Path(x + item.width + C.AR*2, y + item.height)\n .right(itemSpace + C.AR)\n .addTo(self))\n # Lower skip\n (Path(x,y)\n .arc('ne')\n .down(item.height + max(item.down + C.VS, C.AR*2) - C.AR*2)\n .arc('ws')\n .right(itemWidth - C.AR)\n .arc('se')\n .up(item.down + C.VS - C.AR*2)\n .arc('wn')\n .addTo(self))\n x += C.AR*2 + max(itemWidth, C.AR) + C.AR\n y += item.height\n else:\n # Straight line\n (Path(x, y)\n .right(C.AR*2)\n .addTo(self))\n item.format(x + C.AR*2, y, item.width).addTo(self)\n (Path(x + C.AR*2 + item.width, y + item.height)\n .right(itemSpace + C.AR)\n .addTo(self))\n # Lower skip\n (Path(x,y)\n .arc('ne')\n .down(item.height + max(item.down + C.VS, C.AR*2) - C.AR*2)\n .arc('ws')\n .right(itemWidth - C.AR)\n .arc('se')\n .up(item.down + C.VS - C.AR*2)\n .arc('wn')\n .addTo(self))\n return self", "_____no_output_____" ], [ "class AlternatingSequence(DiagramItem):\n def __new__(cls, *items):\n if len(items) == 2:\n return super(AlternatingSequence, cls).__new__(cls)\n else:\n raise Exception("AlternatingSequence takes exactly two arguments got " + str(len(items)))\n\n def __init__(self, *items):\n DiagramItem.__init__(self, 'g')\n self.items = [wrapString(item) for item in items]\n self.needsSpace = False\n\n arc = C.AR\n vert = C.VS\n first = self.items[0]\n second = self.items[1]\n\n arcX = 1 / math.sqrt(2) * arc * 2\n arcY = (1 - 1 / math.sqrt(2)) * arc * 2\n crossY = max(arc, vert)\n crossX = (crossY - arcY) + arcX\n\n firstOut = max(arc + arc, crossY/2 + arc + arc, crossY/2 + vert + first.down)\n self.up = firstOut + first.height + first.up\n\n secondIn = max(arc + arc, crossY/2 + arc + arc, crossY/2 + vert + second.up)\n self.down = secondIn + second.height + second.down\n\n self.height = 0\n\n firstWidth = (20 if first.needsSpace else 0) + first.width\n secondWidth = (20 if second.needsSpace else 0) + second.width\n self.width = 2*arc + max(firstWidth, crossX, secondWidth) + 2*arc\n addDebug(self)\n\n def __repr__(self):\n items = ', '.join(repr(item) for item in self.items)\n return 'AlternatingSequence(%s)' % items\n\n def format(self, x, y, width):\n arc = C.AR\n gaps = determineGaps(width, self.width)\n Path(x,y).right(gaps[0]).addTo(self)\n x += gaps[0]\n Path(x+self.width, y).right(gaps[1]).addTo(self)\n # bounding box\n # Path(x+gaps[0], y).up(self.up).right(self.width).down(self.up+self.down).left(self.width).up(self.down).addTo(self)\n first = self.items[0]\n second =
self.items[1]\n\n # top\n firstIn = self.up - first.up\n firstOut = self.up - first.up - first.height\n Path(x,y).arc('se').up(firstIn-2*arc).arc('wn').addTo(self)\n first.format(x + 2*arc, y - firstIn, self.width - 4*arc).addTo(self)\n Path(x + self.width - 2*arc, y - firstOut).arc('ne').down(firstOut - 2*arc).arc('ws').addTo(self)\n\n # bottom\n secondIn = self.down - second.down - second.height\n secondOut = self.down - second.down\n Path(x,y).arc('ne').down(secondIn - 2*arc).arc('ws').addTo(self)\n second.format(x + 2*arc, y + secondIn, self.width - 4*arc).addTo(self)\n Path(x + self.width - 2*arc, y + secondOut).arc('se').up(secondOut - 2*arc).arc('wn').addTo(self)\n\n # crossover\n arcX = 1 / math.sqrt(2) * arc * 2\n arcY = (1 - 1 / math.sqrt(2)) * arc * 2\n crossY = max(arc, C.VS)\n crossX = (crossY - arcY) + arcX\n crossBar = (self.width - 4*arc - crossX)/2\n (Path(x+arc, y - crossY/2 - arc).arc('ws').right(crossBar)\n .arc_8('n', 'cw').l(crossX - arcX, crossY - arcY).arc_8('sw', 'ccw')\n .right(crossBar).arc('ne').addTo(self))\n (Path(x+arc, y + crossY/2 + arc).arc('wn').right(crossBar)\n .arc_8('s', 'ccw').l(crossX - arcX, -(crossY - arcY)).arc_8('nw', 'cw')\n .right(crossBar).arc('se').addTo(self))\n\n return self", "_____no_output_____" ], [ "class Choice(DiagramItem):\n def __init__(self, default, *items):\n DiagramItem.__init__(self, 'g')\n assert default < len(items)\n self.default = default\n self.items = [wrapString(item) for item in items]\n self.width = C.AR * 4 + max(item.width for item in self.items)\n self.up = self.items[0].up\n self.down = self.items[-1].down\n self.height = self.items[default].height\n for i, item in enumerate(self.items):\n if i in [default-1, default+1]:\n arcs = C.AR*2\n else:\n arcs = C.AR\n if i < default:\n self.up += max(arcs, item.height + item.down + C.VS + self.items[i+1].up)\n elif i == default:\n continue\n else:\n self.down += max(arcs, item.up + C.VS + self.items[i-1].down + self.items[i-1].height)\n self.down -= self.items[default].height # already counted in self.height\n addDebug(self)\n\n def __repr__(self):\n items = ', '.join(repr(item) for item in self.items)\n return 'Choice(%r, %s)' % (self.default, items)\n\n def format(self, x, y, width):\n leftGap, rightGap = determineGaps(width, self.width)\n\n # Hook up the two sides if self is narrower than its stated width.\n Path(x, y).h(leftGap).addTo(self)\n Path(x + leftGap + self.width, y + self.height).h(rightGap).addTo(self)\n x += leftGap\n\n innerWidth = self.width - C.AR * 4\n default = self.items[self.default]\n\n # Do the elements that curve above\n above = self.items[:self.default][::-1]\n if above:\n distanceFromY = max(\n C.AR * 2,\n default.up\n + C.VS\n + above[0].down\n + above[0].height)\n for i,ni,item in doubleenumerate(above):\n Path(x, y).arc('se').up(distanceFromY - C.AR * 2).arc('wn').addTo(self)\n item.format(x + C.AR * 2, y - distanceFromY, innerWidth).addTo(self)\n Path(x + C.AR * 2 + innerWidth, y - distanceFromY + item.height).arc('ne') \\\n .down(distanceFromY - item.height + default.height - C.AR*2).arc('ws').addTo(self)\n if ni < -1:\n distanceFromY += max(\n C.AR,\n item.up\n + C.VS\n + above[i+1].down\n + above[i+1].height)\n\n # Do the straight-line path.\n Path(x, y).right(C.AR * 2).addTo(self)\n self.items[self.default].format(x + C.AR * 2, y, innerWidth).addTo(self)\n Path(x + C.AR * 2 + innerWidth, y+self.height).right(C.AR * 2).addTo(self)\n\n # Do the elements that curve below\n below = self.items[self.default + 1:]\n if below:\n distanceFromY
= max(\n C.AR * 2,\n default.height\n + default.down\n + C.VS\n + below[0].up)\n for i, item in enumerate(below):\n Path(x, y).arc('ne').down(distanceFromY - C.AR * 2).arc('ws').addTo(self)\n item.format(x + C.AR * 2, y + distanceFromY, innerWidth).addTo(self)\n Path(x + C.AR * 2 + innerWidth, y + distanceFromY + item.height).arc('se') \\\n .up(distanceFromY - C.AR * 2 + item.height - default.height).arc('wn').addTo(self)\n distanceFromY += max(\n C.AR,\n item.height\n + item.down\n + C.VS\n + (below[i + 1].up if i+1 < len(below) else 0))\n return self\n\n\nclass MultipleChoice(DiagramItem):\n def __init__(self, default, type, *items):\n DiagramItem.__init__(self, 'g')\n assert 0 <= default < len(items)\n assert type in [\"any\", \"all\"]\n self.default = default\n self.type = type\n self.needsSpace = True\n self.items = [wrapString(item) for item in items]\n self.innerWidth = max(item.width for item in self.items)\n self.width = 30 + C.AR + self.innerWidth + C.AR + 20\n self.up = self.items[0].up\n self.down = self.items[-1].down\n self.height = self.items[default].height\n for i, item in enumerate(self.items):\n if i in [default-1, default+1]:\n minimum = 10 + C.AR\n else:\n minimum = C.AR\n if i < default:\n self.up += max(minimum, item.height + item.down + C.VS + self.items[i+1].up)\n elif i == default:\n continue\n else:\n self.down += max(minimum, item.up + C.VS + self.items[i-1].down + self.items[i-1].height)\n self.down -= self.items[default].height # already counted in self.height\n addDebug(self)\n\n def __repr__(self):\n items = ', '.join(map(repr, self.items))\n return 'MultipleChoice(%r, %r, %s)' % (self.default, self.type, items)\n\n def format(self, x, y, width):\n leftGap, rightGap = determineGaps(width, self.width)\n\n # Hook up the two sides if self is narrower than its stated width.\n Path(x, y).h(leftGap).addTo(self)\n Path(x + leftGap + self.width, y + self.height).h(rightGap).addTo(self)\n x += leftGap\n\n default = self.items[self.default]\n\n # Do the elements that curve above\n above = self.items[:self.default][::-1]\n if above:\n distanceFromY = max(\n 10 + C.AR,\n default.up\n + C.VS\n + above[0].down\n + above[0].height)\n for i,ni,item in doubleenumerate(above):\n (Path(x + 30, y)\n .up(distanceFromY - C.AR)\n .arc('wn')\n .addTo(self))\n item.format(x + 30 + C.AR, y - distanceFromY, self.innerWidth).addTo(self)\n (Path(x + 30 + C.AR + self.innerWidth, y - distanceFromY + item.height)\n .arc('ne')\n .down(distanceFromY - item.height + default.height - C.AR - 10)\n .addTo(self))\n if ni < -1:\n distanceFromY += max(\n C.AR,\n item.up\n + C.VS\n + above[i+1].down\n + above[i+1].height)\n\n # Do the straight-line path.\n Path(x + 30, y).right(C.AR).addTo(self)\n self.items[self.default].format(x + 30 + C.AR, y, self.innerWidth).addTo(self)\n Path(x + 30 + C.AR + self.innerWidth, y + self.height).right(C.AR).addTo(self)\n\n # Do the elements that curve below\n below = self.items[self.default + 1:]\n if below:\n distanceFromY = max(\n 10 + C.AR,\n default.height\n + default.down\n + C.VS\n + below[0].up)\n for i, item in enumerate(below):\n (Path(x+30, y)\n .down(distanceFromY - C.AR)\n .arc('ws')\n .addTo(self))\n item.format(x + 30 + C.AR, y + distanceFromY, self.innerWidth).addTo(self)\n (Path(x + 30 + C.AR + self.innerWidth, y + distanceFromY + item.height)\n .arc('se')\n .up(distanceFromY - C.AR + item.height - default.height - 10)\n .addTo(self))\n distanceFromY += max(\n C.AR,\n item.height\n + item.down\n + C.VS\n + (below[i + 1].up if i+1 < len(below) else 
0))\n text = DiagramItem('g', attrs={\"class\": \"diagram-text\"}).addTo(self)\n DiagramItem('title', text=\"take one or more branches, once each, in any order\" if self.type==\"any\" else \"take all branches, once each, in any order\").addTo(text)\n DiagramItem('path', attrs={\n \"d\": \"M {x} {y} h -26 a 4 4 0 0 0 -4 4 v 12 a 4 4 0 0 0 4 4 h 26 z\".format(x=x+30, y=y-10),\n \"class\": \"diagram-text\"\n }).addTo(text)\n DiagramItem('text', text=\"1+\" if self.type==\"any\" else \"all\", attrs={\n \"x\": x + 15,\n \"y\": y + 4,\n \"class\": \"diagram-text\"\n }).addTo(text)\n DiagramItem('path', attrs={\n \"d\": \"M {x} {y} h 16 a 4 4 0 0 1 4 4 v 12 a 4 4 0 0 1 -4 4 h -16 z\".format(x=x+self.width-20, y=y-10),\n \"class\": \"diagram-text\"\n }).addTo(text)\n DiagramItem('text', text=u\"↺\", attrs={\n \"x\": x + self.width - 10,\n \"y\": y + 4,\n \"class\": \"diagram-arrow\"\n }).addTo(text)\n return self\n\n\nclass HorizontalChoice(DiagramItem):\n def __new__(cls, *items):\n if len(items) <= 1:\n return Sequence(*items)\n else:\n return super(HorizontalChoice, cls).__new__(cls)\n\n def __init__(self, *items):\n DiagramItem.__init__(self, 'g')\n self.items = [wrapString(item) for item in items]\n allButLast = self.items[:-1]\n middles = self.items[1:-1]\n first = self.items[0]\n last = self.items[-1]\n self.needsSpace = False\n\n self.width = (C.AR # starting track\n + C.AR*2 * (len(self.items)-1) # inbetween tracks\n + sum(x.width + (20 if x.needsSpace else 0) for x in self.items) #items\n + (C.AR if last.height > 0 else 0) # needs space to curve up\n + C.AR) #ending track\n\n # Always exits at entrance height\n self.height = 0\n\n # All but the last have a track running above them\n self._upperTrack = max(\n C.AR*2,\n C.VS,\n max(x.up for x in allButLast) + C.VS\n )\n self.up = max(self._upperTrack, last.up)\n\n # All but the first have a track running below them\n # Last either straight-lines or curves up, so has different calculation\n self._lowerTrack = max(\n C.VS,\n max(x.height+max(x.down+C.VS, C.AR*2) for x in middles) if middles else 0,\n last.height + last.down + C.VS\n )\n if first.height < self._lowerTrack:\n # Make sure there's at least 2*C.AR room between first exit and lower track\n self._lowerTrack = max(self._lowerTrack, first.height + C.AR*2)\n self.down = max(self._lowerTrack, first.height + first.down)\n\n addDebug(self)\n\n def format(self, x, y, width):\n # Hook up the two sides if self is narrower than its stated width.\n leftGap, rightGap = determineGaps(width, self.width)\n Path(x, y).h(leftGap).addTo(self)\n Path(x + leftGap + self.width, y + self.height).h(rightGap).addTo(self)\n x += leftGap\n\n first = self.items[0]\n last = self.items[-1]\n\n # upper track\n upperSpan = (sum(x.width+(20 if x.needsSpace else 0) for x in self.items[:-1])\n + (len(self.items) - 2) * C.AR*2\n - C.AR)\n (Path(x,y)\n .arc('se')\n .up(self._upperTrack - C.AR*2)\n .arc('wn')\n .h(upperSpan)\n .addTo(self))\n\n # lower track\n lowerSpan = (sum(x.width+(20 if x.needsSpace else 0) for x in self.items[1:])\n + (len(self.items) - 2) * C.AR*2\n + (C.AR if last.height > 0 else 0)\n - C.AR)\n lowerStart = x + C.AR + first.width+(20 if first.needsSpace else 0) + C.AR*2\n (Path(lowerStart, y+self._lowerTrack)\n .h(lowerSpan)\n .arc('se')\n .up(self._lowerTrack - C.AR*2)\n .arc('wn')\n .addTo(self))\n\n # Items\n for [i, item] in enumerate(self.items):\n # input track\n if i == 0:\n (Path(x,y)\n .h(C.AR)\n .addTo(self))\n x += C.AR\n else:\n (Path(x, y - self._upperTrack)\n .arc('ne')\n 
.v(self._upperTrack - C.AR*2)\n .arc('ws')\n .addTo(self))\n x += C.AR*2\n\n # item\n itemWidth = item.width + (20 if item.needsSpace else 0)\n item.format(x, y, itemWidth).addTo(self)\n x += itemWidth\n\n # output track\n if i == len(self.items)-1:\n if item.height == 0:\n (Path(x,y)\n .h(C.AR)\n .addTo(self))\n else:\n (Path(x,y+item.height)\n .arc('se')\n .addTo(self))\n elif i == 0 and item.height > self._lowerTrack:\n # Needs to arc up to meet the lower track, not down.\n if item.height - self._lowerTrack >= C.AR*2:\n (Path(x, y+item.height)\n .arc('se')\n .v(self._lowerTrack - item.height + C.AR*2)\n .arc('wn')\n .addTo(self))\n else:\n # Not enough space to fit two arcs\n # so just bail and draw a straight line for now.\n (Path(x, y+item.height)\n .l(C.AR*2, self._lowerTrack - item.height)\n .addTo(self))\n else:\n (Path(x, y+item.height)\n .arc('ne')\n .v(self._lowerTrack - item.height - C.AR*2)\n .arc('ws')\n .addTo(self))\n return self", "_____no_output_____" ], [ "def Optional(item, skip=False):\n return Choice(0 if skip else 1, Skip(), item)", "_____no_output_____" ], [ "class OneOrMore(DiagramItem):\n def __init__(self, item, repeat=None):\n DiagramItem.__init__(self, 'g')\n repeat = repeat or Skip()\n self.item = wrapString(item)\n self.rep = wrapString(repeat)\n self.width = max(self.item.width, self.rep.width) + C.AR * 2\n self.height = self.item.height\n self.up = self.item.up\n self.down = max(\n C.AR * 2,\n self.item.down + C.VS + self.rep.up + self.rep.height + self.rep.down)\n self.needsSpace = True\n addDebug(self)\n\n def format(self, x, y, width):\n leftGap, rightGap = determineGaps(width, self.width)\n\n # Hook up the two sides if self is narrower than its stated width.\n Path(x, y).h(leftGap).addTo(self)\n Path(x + leftGap + self.width, y +self.height).h(rightGap).addTo(self)\n x += leftGap\n\n # Draw item\n Path(x, y).right(C.AR).addTo(self)\n self.item.format(x + C.AR, y, self.width - C.AR * 2).addTo(self)\n Path(x + self.width - C.AR, y + self.height).right(C.AR).addTo(self)\n\n # Draw repeat arc\n distanceFromY = max(C.AR*2, self.item.height + self.item.down + C.VS + self.rep.up)\n Path(x + C.AR, y).arc('nw').down(distanceFromY - C.AR * 2) \\\n .arc('ws').addTo(self)\n self.rep.format(x + C.AR, y + distanceFromY, self.width - C.AR*2).addTo(self)\n Path(x + self.width - C.AR, y + distanceFromY + self.rep.height).arc('se') \\\n .up(distanceFromY - C.AR * 2 + self.rep.height - self.item.height).arc('en').addTo(self)\n\n return self\n\n def __repr__(self):\n return 'OneOrMore(%r, repeat=%r)' % (self.item, self.rep)", "_____no_output_____" ], [ "def ZeroOrMore(item, repeat=None, skip=False):\n result = Optional(OneOrMore(item, repeat), skip)\n return result", "_____no_output_____" ], [ "class Start(DiagramItem):\n def __init__(self, type=\"simple\", label=None):\n DiagramItem.__init__(self, 'g')\n if label:\n self.width = max(20, len(label) * C.CHAR_WIDTH + 10)\n else:\n self.width = 20\n self.up = 10\n self.down = 10\n self.type = type\n self.label = label\n addDebug(self)\n\n def format(self, x, y, _width):\n path = Path(x, y-10)\n if self.type == \"complex\":\n path.down(20).m(0, -10).right(self.width).addTo(self)\n else:\n path.down(20).m(10, -20).down(20).m(-10, -10).right(self.width).addTo(self)\n if self.label:\n DiagramItem('text', attrs={\"x\":x, \"y\":y-15, \"style\":\"text-anchor:start\"}, text=self.label).addTo(self)\n return self\n\n def __repr__(self):\n return 'Start(type=%r, label=%r)' % (self.type, self.label)", "_____no_output_____" ], [ "class 
End(DiagramItem):\n def __init__(self, type="simple"):\n DiagramItem.__init__(self, 'path')\n self.width = 20\n self.up = 10\n self.down = 10\n self.type = type\n addDebug(self)\n\n def format(self, x, y, _width):\n if self.type == "simple":\n self.attrs['d'] = 'M {0} {1} h 20 m -10 -10 v 20 m 10 -20 v 20'.format(x, y)\n elif self.type == "complex":\n self.attrs['d'] = 'M {0} {1} h 20 m 0 -10 v 20'.format(x, y)\n return self\n\n def __repr__(self):\n return 'End(type=%r)' % self.type", "_____no_output_____" ], [ "class Terminal(DiagramItem):\n def __init__(self, text, href=None, title=None):\n DiagramItem.__init__(self, 'g', {'class': 'terminal'})\n self.text = text\n self.href = href\n self.title = title\n self.width = len(text) * C.CHAR_WIDTH + 20\n self.up = 11\n self.down = 11\n self.needsSpace = True\n addDebug(self)\n\n def __repr__(self):\n return 'Terminal(%r, href=%r, title=%r)' % (self.text, self.href, self.title)\n\n def format(self, x, y, width):\n leftGap, rightGap = determineGaps(width, self.width)\n\n # Hook up the two sides if self is narrower than its stated width.\n Path(x, y).h(leftGap).addTo(self)\n Path(x + leftGap + self.width, y).h(rightGap).addTo(self)\n\n DiagramItem('rect', {'x': x + leftGap, 'y': y - 11, 'width': self.width,\n 'height': self.up + self.down, 'rx': 10, 'ry': 10}).addTo(self)\n text = DiagramItem('text', {'x': x + width / 2, 'y': y + 4}, self.text)\n if self.href is not None:\n a = DiagramItem('a', {'xlink:href':self.href}, text).addTo(self)\n text.addTo(a)\n else:\n text.addTo(self)\n if self.title is not None:\n DiagramItem('title', {}, self.title).addTo(self)\n return self", "_____no_output_____" ], [ "class NonTerminal(DiagramItem):\n def __init__(self, text, href=None, title=None):\n DiagramItem.__init__(self, 'g', {'class': 'non-terminal'})\n self.text = text\n self.href = href\n self.title = title\n self.width = len(text) * C.CHAR_WIDTH + 20\n self.up = 11\n self.down = 11\n self.needsSpace = True\n addDebug(self)\n\n def __repr__(self):\n return 'NonTerminal(%r, href=%r, title=%r)' % (self.text, self.href, self.title)\n\n def format(self, x, y, width):\n leftGap, rightGap = determineGaps(width, self.width)\n\n # Hook up the two sides if self is narrower than its stated width.\n Path(x, y).h(leftGap).addTo(self)\n Path(x + leftGap + self.width, y).h(rightGap).addTo(self)\n\n DiagramItem('rect', {'x': x + leftGap, 'y': y - 11, 'width': self.width,\n 'height': self.up + self.down}).addTo(self)\n text = DiagramItem('text', {'x': x + width / 2, 'y': y + 4}, self.text)\n if self.href is not None:\n a = DiagramItem('a', {'xlink:href':self.href}, text).addTo(self)\n text.addTo(a)\n else:\n text.addTo(self)\n if self.title is not None:\n DiagramItem('title', {}, self.title).addTo(self)\n return self", "_____no_output_____" ], [ "class Comment(DiagramItem):\n def __init__(self, text, href=None, title=None):\n DiagramItem.__init__(self, 'g')\n self.text = text\n self.href = href\n self.title = title\n self.width = len(text) * C.COMMENT_CHAR_WIDTH + 10\n self.up = 11\n self.down = 11\n self.needsSpace = True\n addDebug(self)\n\n def __repr__(self):\n return 'Comment(%r, href=%r, title=%r)' % (self.text, self.href, self.title)\n\n def format(self, x, y, width):\n leftGap, rightGap = determineGaps(width, self.width)\n\n # Hook up the two sides if self is narrower than its stated width.\n Path(x, y).h(leftGap).addTo(self)\n Path(x + leftGap + self.width, y).h(rightGap).addTo(self)\n\n text = DiagramItem('text', {'x': x + width / 2, 'y': y + 5, 'class':
'comment'}, self.text)\n if self.href is not None:\n a = DiagramItem('a', {'xlink:href':self.href}, text).addTo(self)\n text.addTo(a)\n else:\n text.addTo(self)\n if self.title is not None:\n DiagramItem('title', {}, self.title).addTo(self)\n return self", "_____no_output_____" ], [ "class Skip(DiagramItem):\n def __init__(self):\n DiagramItem.__init__(self, 'g')\n self.width = 0\n self.up = 0\n self.down = 0\n addDebug(self)\n\n def format(self, x, y, width):\n Path(x, y).right(width).addTo(self)\n return self\n\n def __repr__(self):\n return 'Skip()'", "_____no_output_____" ], [ "def show_diagram(graph, log=False):\n with io.StringIO() as f:\n d = Diagram(graph)\n if log:\n print(d)\n d.writeSvg(f.write)\n mysvg = f.getvalue()\n return mysvg", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a60fe963901da4ce1f4bfedc0a5d147e21e14cb
50,862
ipynb
Jupyter Notebook
RNN/.ipynb_checkpoints/RNN_passengers_number_v2-checkpoint.ipynb
danhtaihoang/classification-tensorflow
2fe345f49ac98e353e5aff89d2f3526636943292
[ "MIT" ]
null
null
null
RNN/.ipynb_checkpoints/RNN_passengers_number_v2-checkpoint.ipynb
danhtaihoang/classification-tensorflow
2fe345f49ac98e353e5aff89d2f3526636943292
[ "MIT" ]
null
null
null
RNN/.ipynb_checkpoints/RNN_passengers_number_v2-checkpoint.ipynb
danhtaihoang/classification-tensorflow
2fe345f49ac98e353e5aff89d2f3526636943292
[ "MIT" ]
1
2020-11-28T17:26:07.000Z
2020-11-28T17:26:07.000Z
153.198795
21,696
0.896897
[ [ [ "Adapted from https://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\nimport tensorflow as tf\n\nfrom tensorflow.keras import Sequential\nfrom tensorflow.keras.layers import Dense,Conv2D,MaxPool2D,Flatten,Dropout,LSTM\nfrom tensorflow.keras.utils import plot_model\n\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.metrics import confusion_matrix,mean_squared_error\nfrom sklearn.preprocessing import MinMaxScaler\n\nfrom matplotlib import pyplot as plt\n%matplotlib inline", "_____no_output_____" ], [ "np.random.seed(11)", "_____no_output_____" ], [ "# load the dataset\npath = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/airline-passengers.csv'\ndf = pd.read_csv(path, header=0, index_col=0, squeeze=True)", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "# convert an array of values into a dataset matrix\ndef create_dataset(dataset, look_back=1):\n\tdataX, dataY = [], []\n\tfor i in range(len(dataset)-look_back-1):\n\t\ta = dataset[i:(i+look_back), 0]\n\t\tdataX.append(a)\n\t\tdataY.append(dataset[i + look_back, 0])\n\treturn np.array(dataX), np.array(dataY)", "_____no_output_____" ], [ "# retrieve the values\nvalues = df.values.astype('float32')", "_____no_output_____" ], [ "values[0:20]", "_____no_output_____" ], [ "# normalize the dataset\nscaler = MinMaxScaler(feature_range=(0, 1))\nvalues = scaler.fit_transform(values.reshape(-1,1))", "_____no_output_____" ], [ "# split into train and test sets\ntrain_size = int(len(values) * 0.6)\ntest_size = len(values) - train_size\ntrain, test = values[0:train_size,:], values[train_size:len(values),:]", "_____no_output_____" ], [ "# reshape into X=t and Y=t+1\nlook_back = 1\nx_train, y_train = create_dataset(train, look_back)\nx_test, y_test = create_dataset(test, look_back)", "_____no_output_____" ], [ "# reshape input to be [samples, time steps, features]\nx_train = np.reshape(x_train, (x_train.shape[0], 1, x_train.shape[1]))\nx_test = np.reshape(x_test, (x_test.shape[0], 1, x_test.shape[1]))", "_____no_output_____" ], [ "# create and fit the LSTM network\nmodel = Sequential()\nmodel.add(LSTM(4, input_shape=(1, look_back)))\nmodel.add(Dense(1))\n\nmodel.compile(loss='mean_squared_error', optimizer='adam')", "_____no_output_____" ], [ "model.summary()", "Model: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nlstm (LSTM) (None, 4) 96 \n_________________________________________________________________\ndense (Dense) (None, 1) 5 \n=================================================================\nTotal params: 101\nTrainable params: 101\nNon-trainable params: 0\n_________________________________________________________________\n" ], [ "history = model.fit(x_train, y_train, epochs=100, batch_size=1, verbose=0)", "_____no_output_____" ], [ "# make predictions\ny_pred = model.predict(x_test)", "_____no_output_____" ], [ "plt.plot(y_pred)", "_____no_output_____" ], [ "plt.plot(y_test)", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a610603d13cb30dc6989c378665122413e97879
211,995
ipynb
Jupyter Notebook
LB10-Regression+TimeSeries/Python/Python_Linear_REGRESSION_Advertising_v1.ipynb
FENICOLAS/BINA-FS22-WORK
8d960afa4f6438cc613a48c8d0935d644c07a6ce
[ "CC0-1.0" ]
null
null
null
LB10-Regression+TimeSeries/Python/Python_Linear_REGRESSION_Advertising_v1.ipynb
FENICOLAS/BINA-FS22-WORK
8d960afa4f6438cc613a48c8d0935d644c07a6ce
[ "CC0-1.0" ]
null
null
null
LB10-Regression+TimeSeries/Python/Python_Linear_REGRESSION_Advertising_v1.ipynb
FENICOLAS/BINA-FS22-WORK
8d960afa4f6438cc613a48c8d0935d644c07a6ce
[ "CC0-1.0" ]
7
2022-02-21T07:59:00.000Z
2022-03-14T09:46:50.000Z
129.186472
41,546
0.826241
[ [ [ "# **Jupyter Notebook to demonstrate (simple) Linear Regression for Advertising/Sales Prediction**\n\nLinear Regression is a simple yet powerful and widely used algorithm in data science. There are a plethora of real-world applications of Linear Regression.\n\nThe purpose of this tutorial/notebook is to get a clear idea on how a linear regression can be used to solve a marketing problem, such as selecting the right channels to advertise a product.\n\nProblem Statement and Example Data: Build a model which predicts sales based on the money spent on different platforms for marketing. Using this notebook, we will build a linear regression model to predict Sales using an appropriate predictor variable, based on the advertising dataset.\n\n\n### Useful resources:\n* [Kaggle](https://www.kaggle.com) Multiple Linear Regression Notebooks and Tutorials\n* [Analytics Vidhya Blog](https://www.analyticsvidhya.com/blog/2021/10/everything-you-need-to-know-about-linear-regression/) \"Everything you need to Know about Linear Regression\"\n* [CodeSource](https://codesource.io/building-a-regression-model-to-predict-sales-revenue-using-sci-kit-learn/) \"Building a Regression Model to Predict Sales Revenue using Sci-Kit Learn\"\n\n---\nAuthor: \n* dr.daniel benninger [> Linkedin](https://www.linkedin.com/in/danielbenninger/)\n\nHistory: \n* v1, June 2021, dbe --- adapted version for CAS DA4", "_____no_output_____", "## Setup Environment - Load necessary Libraries and Functions\n\nFirst, we need to import some libraries: \n* pandas: data manipulation and analysis\n* numpy : library for scientific computing in Python, used for working with arrays and matrices\n* matplotlib : plotting library for data visualization\n* rcParams: To change the matplotlib properties like figure size\n* seaborn: data visualization library based on matplotlib\n\n* sklearn: machine learning library, used here for the train/test split and for evaluation metrics\n* statsmodels: Using statsmodels module classes and functions for linear regression \n \n**Note:** Make sure they are installed already before importing them", "_____no_output_____" ] ], [ [ "# Suppress Warnings\nimport warnings\nwarnings.filterwarnings('ignore')\n\n# Import the numpy and pandas package\nimport numpy as np\nimport pandas as pd\n\n# Import the data visualization package\nimport matplotlib.pyplot as plt \nimport seaborn as sns\n\nfrom pylab import rcParams\n\n# configure plot area\nrcParams['figure.figsize'] = 12, 8", "_____no_output_____" ] ], [ [ "## Reading and Understanding the Data", "_____no_output_____" ] ], [ [ "# Input data files - in colab environment - are available in the \"/content/sample_data\" directory.\n# For example: check the current files in the input directory\n\nfrom subprocess import check_output\nprint(check_output([\"ls\", \"/content/sample_data\"]).decode(\"utf8\"))", "anscombe.json\ncalifornia_housing_test.csv\ncalifornia_housing_train.csv\nDATA_Werbung.csv\nmnist_test.csv\nmnist_train_small.csv\nREADME.md\n\n" ], [ "#advertising = pd.DataFrame(pd.read_csv(\"../input/advertising.csv\"))\n#load Advertising dataset from local directory\nadvertising = pd.read_csv(\"/content/sample_data/DATA_Werbung.csv\")\nadvertising.head()", "_____no_output_____" ] ], [ [ "## Data Inspection", "_____no_output_____" ] ], [ [ "advertising.shape", "_____no_output_____" ], [ "advertising.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 200 entries, 0 to 199\nData columns (total 5 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Unnamed: 0 200 non-null int64 \n 1 TV 200 non-null 
float64\n 2 Radio 200 non-null float64\n 3 Newspaper 200 non-null float64\n 4 Sales 200 non-null float64\ndtypes: float64(4), int64(1)\nmemory usage: 7.9 KB\n" ], [ "advertising.describe()", "_____no_output_____" ] ], [ [ "## Data Cleaning & Analysis", "_____no_output_____" ] ], [ [ "# Checking Null values\nadvertising.isnull().sum()*100/advertising.shape[0]", "_____no_output_____" ] ], [ [ "Note: There are no *NULL* values in the dataset, hence it is clean.", "_____no_output_____" ] ], [ [ "# Analysis with Box-Whisker Plots (location parameter analysis)\nfig, axs = plt.subplots(3, figsize = (10,5))\n\nplt1 = sns.boxplot(advertising['TV'], ax = axs[0])\nplt2 = sns.boxplot(advertising['Newspaper'], ax = axs[1])\nplt3 = sns.boxplot(advertising['Radio'], ax = axs[2])\n\nplt.tight_layout()", "_____no_output_____" ] ], [ [ "Note: There are no considerable *outliers* present in the data.", "_____no_output_____" ] ], [ [ "## Diagnostic Analytics - Exploratory Data Analysis", "_____no_output_____", "### Univariate Analysis\nFocus on the **Sales** (Target) Variable", "_____no_output_____" ] ], [ [ "# Analysis with Box-Whisker Plots (location parameter analysis)\nfig, axs = plt.subplots(1, figsize = (10,5))\n\nsns.boxplot(advertising['Sales'])\nplt.show()", "_____no_output_____" ] ], [ [ "### Bivariate Analysis\nFocus on the Observation and Target Variable combinations", "_____no_output_____" ] ], [ [ "# Analysis with Scatterplot (Pairplots)\n# Let's see how Sales are related with other variables\n\nsns.pairplot(advertising, x_vars=['TV', 'Newspaper', 'Radio'], y_vars='Sales', height=4, aspect=1, kind='scatter')\nplt.title('Pairwise Scatterplots')\nplt.show()", "_____no_output_____" ], [ "# Analysis with Heatmap\n# Let's see the correlation between different variables\n\nsns.heatmap(advertising.corr(), cmap=\"YlOrBr\", annot = True)\nplt.show()", "_____no_output_____" ] ], [ [ "Note: As is visible from the *pairplot* and the *heatmap*, the variable `TV` seems to be most correlated with `Sales`. \nSo let's go ahead and perform **simple linear regression** using `TV` as our feature variable.", "_____no_output_____", "## Model Building - Linear Regression\n#### Performing Simple Linear Regression\nThe equation of linear regression:<br>\n$y = c + m_1x_1 + m_2x_2 + ... + m_nx_n$\n\n- $y$ is the response (*target*)\n- $c$ is the intercept\n- $m_1$ is the coefficient for the first feature (*observation*)\n- $m_n$ is the coefficient for the nth feature<br>\n\nIn our case:\n\n$y = c + m_1 \\times TV$\n\nThe $m$ values are called the model **coefficients** or **model parameters**.\n\n---", "_____no_output_____", "### Generic Steps in Model Building\n1) We first assign \n* the feature (**observation**) (`TV`, in this case) to the **variable `X`** \n* and the response (**target**) variable (`Sales`) to the **variable `y`**.", "_____no_output_____" ] ], [ [ "X = advertising['TV']\ny = advertising['Sales']", "_____no_output_____" ] ], [ [ "2) Then split the dataset into train and test parts\n\nYou now need to split the variables into training and testing sets. You'll perform this by importing `train_test_split` from the `sklearn.model_selection` library. 
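After running the split in the next cell, a quick sanity check (my addition, not part of the original notebook) can confirm the resulting sizes:\n\n```python\n# hypothetical check: 70% of the 200 rows should land in the train set\nprint(X_train.shape, X_test.shape) # expect roughly (140,) and (60,)\n```\n\n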
It is usually a good practice to keep 70% of the data in your train dataset and the rest 30% in your test dataset", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, train_size = 0.7, test_size = 0.3, random_state = 100)", "_____no_output_____" ], [ "# Let's have a look at the TRAIN dataset\n\nX_train.head()", "_____no_output_____" ], [ "y_train.head()", "_____no_output_____" ] ], [ [ "### Build a Linear Model\n\nYou first need to import the `statsmodels.api` library using which you'll perform the linear regression.", "_____no_output_____" ] ], [ [ "import statsmodels.api as sm", "_____no_output_____" ] ], [ [ "By default, the `statsmodels` library fits a line on the dataset which passes through the origin. But in order to have an intercept, you need to manually use the `add_constant` attribute of `statsmodels`. \n\nAnd once you've added the constant to your `X_train` dataset, you can go ahead and fit a regression line using the **Ordinary Least Squares** (`OLS`) attribute of `statsmodels` as shown below", "_____no_output_____" ] ], [ [ "# Add a constant to get an intercept\nX_train_sm = sm.add_constant(X_train)\n\n# Fit the Regression Line using 'OLS' (ordinary least square)\nlr = sm.OLS(y_train, X_train_sm).fit()\n", "_____no_output_____" ], [ "# Print the regression parameters, \n# i.e. the intercept and the slope of the fitted regression line\nlr.params", "_____no_output_____" ], [ "# Performing a summary operation lists out all the different parameters of the regression line fitted\nprint(lr.summary())", " OLS Regression Results \n==============================================================================\nDep. Variable: Sales R-squared: 0.613\nModel: OLS Adj. R-squared: 0.611\nMethod: Least Squares F-statistic: 219.0\nDate: Mon, 17 Jan 2022 Prob (F-statistic): 2.84e-30\nTime: 14:00:42 Log-Likelihood: -370.62\nNo. Observations: 140 AIC: 745.2\nDf Residuals: 138 BIC: 751.1\nDf Model: 1 \nCovariance Type: nonrobust \n==============================================================================\n coef std err t P>|t| [0.025 0.975]\n------------------------------------------------------------------------------\nconst 6.9897 0.548 12.762 0.000 5.907 8.073\nTV 0.0465 0.003 14.798 0.000 0.040 0.053\n==============================================================================\nOmnibus: 0.995 Durbin-Watson: 1.983\nProb(Omnibus): 0.608 Jarque-Bera (JB): 0.970\nSkew: -0.008 Prob(JB): 0.616\nKurtosis: 2.593 Cond. No. 328.\n==============================================================================\n\nWarnings:\n[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.\n" ] ], [ [ "#### Key Statistics of the Linear Model\nLooking at the key values from the summary of the linear model above, we are concerned with: \n1. The coefficients and significance (**p-values**)\n2. **R-squared**\n3. **F statistic** and its significance", "_____no_output_____", "##### 1. The coefficient for TV is 0.0465, with a very low p value\n*The coefficient is statistically significant*. So the association is not purely by chance. ", "_____no_output_____", "##### 2. R-squared is 0.613\nMeaning that 61.3% of the variance in `Sales` is explained by `TV`\n\n*This is a decent R-squared value.*", "_____no_output_____", "##### 3. 
F statistic has a very low p value (practically zero)\nMeaning that *the model fit is statistically significant*, and the explained variance isn't purely by chance.", "_____no_output_____", "---\nNote: **The fit is significant** \nLet's visualize how well the model fit the data. \nFrom the parameters that we get, our linear regression equation becomes:\n\n$ Sales = 6.99 + 0.0465 \\times TV $", "_____no_output_____" ] ], [ [ "# Visualize Original Data and Linear Model \n\nplt.scatter(X_train, y_train)\nplt.plot(X_train, 6.9897 + 0.0465*X_train, 'r')\n\nplt.title('Original Data and Linear Model')\n# Set x-axis label\nplt.xlabel('TV')\n# Set y-axis label\nplt.ylabel('Sales')\n\nplt.show()", "_____no_output_____" ] ], [ [ "## Model Evaluation\n\n**Residual analysis**, to validate assumptions of the model, and hence the reliability for inference", "_____no_output_____", "#### Distribution of the Error Terms\nWe need to check if the error terms are also normally distributed (which is, in fact, one of the major assumptions of linear regression); let us plot the *histogram of the error terms* and see what it looks like.", "_____no_output_____" ] ], [ [ "y_train_pred = lr.predict(X_train_sm)\nres = (y_train - y_train_pred)", "_____no_output_____" ], [ "plt.figure()\nsns.distplot(res, bins = 15)\n\nplt.title('Model Evaluation: Distribution of the Error Terms', fontsize = 15)\nplt.xlabel('y_train - y_train_pred', fontsize = 15) # X-label\nplt.show()", "_____no_output_____" ] ], [ [ "Note: The residuals follow a *normal distribution with mean 0*. All good!", "_____no_output_____", "#### Looking for Patterns in the Residuals", "_____no_output_____" ] ], [ [ "plt.scatter(X_train,res)\n\nplt.title('Model Evaluation: Residual Patterns', fontsize = 15)\nplt.ylabel('y_train - y_train_pred', fontsize = 15) # Y-label\n\nplt.show()", "_____no_output_____" ] ], [ [ "We are confident that the model fit isn't by chance, and has decent predictive power. The normality of residual terms allows some inference on the coefficients. \n**Although**, the variance of residuals increasing with X indicates that there is significant variation that this model is unable to explain. \n\nAs you can see, the regression line is a pretty good fit to the data!", "_____no_output_____", "### Predictions on the Test Set", "_____no_output_____", "Now that you have fitted a regression line on your train dataset, it's time to make some predictions on the test data. 
For this, you first need to add a constant to the `X_test` data like you did for `X_train` and then you can simply go on and predict the y values corresponding to `X_test` using the `predict` attribute of the fitted regression line.", "_____no_output_____" ] ], [ [ "# Add a constant to X_test\nX_test_sm = sm.add_constant(X_test)\n\n# Predict the y values corresponding to X_test_sm\ny_pred = lr.predict(X_test_sm)", "_____no_output_____" ], [ "y_pred.head()", "_____no_output_____" ], [ "from sklearn.metrics import mean_squared_error\nfrom sklearn.metrics import r2_score", "_____no_output_____" ] ], [ [ "##### Looking at the Root Mean Squared Error (RMSE) on the test set", "_____no_output_____" ] ], [ [ "#Returns the mean squared error (MSE); we'll take a square root to get the RMSE\nRMSE = np.sqrt(mean_squared_error(y_test, y_pred))\nprint('Root Mean Squared Error (RMSE): ', RMSE)", "Root Mean Squared Error (RMSE): 2.8241456288327007\n" ] ], [ [ "##### Checking the R-squared on the test set", "_____no_output_____" ] ], [ [ "r_squared = r2_score(y_test, y_pred)\nprint('R-squared: ',r_squared)", "R-squared: 0.5942987267783302\n" ] ], [ [ "##### Visualizing the Fit on the Test Set", "_____no_output_____" ] ], [ [ "plt.scatter(X_test, y_test)\nplt.plot(X_test, 6.9897 + 0.0465 * X_test, 'r')\n\nplt.title('Model Evaluation: Visualizing the Fit on the Test Set', fontsize = 15)\nplt.ylabel('Sales') # Y-label\nplt.xlabel('TV') # X-label\nplt.show()", "_____no_output_____" ] ], [ [ "## Model Deployment\n\n1) Define Linear Model Function", "_____no_output_____" ] ], [ [ "def lr_model_prediction (Xarg):\n intercept = 6.9897\n coeff_X = 0.0465\n\n result = intercept + coeff_X * Xarg\n return result", "_____no_output_____" ] ], [ [ "2) Apply Linear Model Function to new observation values", "_____no_output_____" ] ], [ [ "TV_ads = 375\nprint('Sales Prediction:',lr_model_prediction(TV_ads),' for TV ads volume: ', TV_ads)", "Sales Prediction: 24.498 for TV ads volume: 375\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
4a610b62455d68e8ed37741ca53e903307cdb83d
6,836
ipynb
Jupyter Notebook
docs/tutorials/simulations.ipynb
matech96/federated
b30a26d66162bd02a89a12f119e17925d161a26b
[ "Apache-2.0" ]
1
2020-05-02T05:08:14.000Z
2020-05-02T05:08:14.000Z
docs/tutorials/simulations.ipynb
RITESG/STATIC
cfe9d3e35ba033b1c4e47d347427a83f682f41de
[ "Apache-2.0" ]
null
null
null
docs/tutorials/simulations.ipynb
RITESG/STATIC
cfe9d3e35ba033b1c4e47d347427a83f682f41de
[ "Apache-2.0" ]
1
2020-06-23T13:30:15.000Z
2020-06-23T13:30:15.000Z
29.721739
91
0.493125
[ [ [ "# High-performance simulations with TFF\n\nThis tutorial will describe how to setup high-performance simulations with TFF\nin a variety of common scenarios.\n\nTODO(b/134543154): Populate the content, some of the things to cover here:\n- using GPUs in a single-machine setup,\n- multi-machine setup on GCP/GKE, with and without TPUs,\n- interfacing MapReduce-like backends,\n- current limitations and when/how they will be relaxed.", "_____no_output_____" ], [ "## Before we begin\n\nFirst, make sure your notebook is connected to a backend that has the relevant\ncomponents (including gRPC dependencies for multi-machine scenarios) compiled.", "_____no_output_____" ], [ "Now, let's start by loading the MNIST example from the TFF website, and\ndeclaring the Python function that will run a small experiment loop over\na group of 10 clients.", "_____no_output_____" ] ], [ [ "#@test {\"skip\": true}\n!pip install --quiet --upgrade tensorflow_federated", "/bin/sh: pip: command not found\n" ], [ "import collections\nimport time\n\nimport tensorflow as tf\n\nimport tensorflow_federated as tff\n\nsource, _ = tff.simulation.datasets.emnist.load_data()\n\n\ndef map_fn(example):\n return collections.OrderedDict(\n x=tf.reshape(example['pixels'], [-1, 784]), y=example['label'])\n\n\ndef client_data(n):\n ds = source.create_tf_dataset_for_client(source.client_ids[n])\n return ds.repeat(10).shuffle(500).batch(20).map(map_fn)\n\n\ntrain_data = [client_data(n) for n in range(10)]\nelement_spec = train_data[0].element_spec\n\ndef model_fn():\n model = tf.keras.models.Sequential([\n tf.keras.layers.Input(shape=(784,)),\n tf.keras.layers.Dense(units=10, kernel_initializer='zeros'),\n tf.keras.layers.Softmax(),\n ])\n return tff.learning.from_keras_model(\n model,\n input_spec=element_spec,\n loss=tf.keras.losses.SparseCategoricalCrossentropy(),\n metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])\n\n\ntrainer = tff.learning.build_federated_averaging_process(\n model_fn, client_optimizer_fn=lambda: tf.keras.optimizers.SGD(0.02))\n\n\ndef evaluate(num_rounds=10):\n state = trainer.initialize()\n for _ in range(num_rounds):\n t1 = time.time()\n state, metrics = trainer.next(state, train_data)\n t2 = time.time()\n print('loss {}, round time {}'.format(metrics.loss, t2 - t1))", "_____no_output_____" ] ], [ [ "## Single-machine simulations\n\nNow on by default.", "_____no_output_____" ] ], [ [ "evaluate()", "loss 3.0367836952209473, round time 4.970079183578491\nloss 2.778421401977539, round time 3.4929888248443604\nloss 2.521284341812134, round time 4.029532432556152\nloss 2.3498423099517822, round time 3.4987425804138184\nloss 2.0624916553497314, round time 3.5738046169281006\nloss 1.9093912839889526, round time 3.041914463043213\nloss 1.7627369165420532, round time 3.6436498165130615\nloss 1.5839917659759521, round time 3.193682909011841\nloss 1.5063327550888062, round time 3.22552227973938\nloss 1.4204730987548828, round time 3.399146795272827\n" ] ], [ [ "## Multi-machine simulations on GCP/GKE, GPUs, TPUs, and beyond...\n\nComing very soon.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
4a611ae481bf4f3eab20c4e15bf79520bf19f597
14,800
ipynb
Jupyter Notebook
src/python/tensorflow_cloud/core/tests/examples/dogs_classification.ipynb
anukaal/cloud
2b6c9127734ea22b6c1aaa070ac06124e5960c57
[ "Apache-2.0" ]
342
2020-02-10T18:55:31.000Z
2022-03-01T04:30:12.000Z
src/python/tensorflow_cloud/core/tests/examples/dogs_classification.ipynb
anukaal/cloud
2b6c9127734ea22b6c1aaa070ac06124e5960c57
[ "Apache-2.0" ]
179
2020-02-14T22:35:42.000Z
2022-01-22T23:06:23.000Z
src/python/tensorflow_cloud/core/tests/examples/dogs_classification.ipynb
anukaal/cloud
2b6c9127734ea22b6c1aaa070ac06124e5960c57
[ "Apache-2.0" ]
140
2020-02-10T19:11:50.000Z
2022-02-23T13:04:44.000Z
29.83871
334
0.511892
[ [ [ "# Copyright 2020 Google LLC. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "_____no_output_____" ] ], [ [ "# TensorFlow Cloud - Putting it all together\nIn this example, we will use all of the features outlined in the [Keras cloud guide](https://www.tensorflow.org/guide/keras/training_keras_models_on_cloud) to train a state-of-the-art model to classify dog breeds using feature extraction. Let's begin by installing TensorFlow Cloud and importing a few important packages.\n\n## Setup", "_____no_output_____" ] ], [ [ "!pip install tensorflow-cloud", "_____no_output_____" ], [ "import datetime\nimport os\n\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nimport tensorflow_cloud as tfc\nimport tensorflow_datasets as tfds\n\nfrom tensorflow import keras\nfrom tensorflow.keras import layers\nfrom tensorflow.keras.models import Model", "_____no_output_____" ] ], [ [ "### Cloud Configuration\n\nIn order to run TensorFlow Cloud from a Colab notebook, we'll need to upload our [authentication key](https://cloud.google.com/docs/authentication/getting-started) and specify our [Cloud storage bucket](https://cloud.google.com/storage/docs/creating-buckets) for image building and publishing.", "_____no_output_____" ] ], [ [ "if not tfc.remote():\n from google.colab import files\n\n key_upload = files.upload()\n key_path = list(key_upload.keys())[0]\n os.environ[\"GOOGLE_APPLICATION_CREDENTIALS\"] = key_path\n os.system(f\"gcloud auth activate-service-account --key-file {key_path}\")", "_____no_output_____" ], [ "GCP_BUCKET = \"[your-bucket-name]\" #@param {type:\"string\"}", "_____no_output_____" ] ], [ [ "## Model Creation\n\n### Dataset preprocessing\nWe'll be loading our training data from TensorFlow Datasets: ", "_____no_output_____" ] ], [ [ "(ds_train, ds_test), metadata = tfds.load(\n \"stanford_dogs\",\n split=[\"train\", \"test\"],\n shuffle_files=True,\n with_info=True,\n as_supervised=True,\n)\n \nNUM_CLASSES = metadata.features[\"label\"].num_classes", "_____no_output_____" ] ], [ [ "Let's visualize this dataset:", "_____no_output_____" ] ], [ [ "print(\"Number of training samples: %d\" % tf.data.experimental.cardinality(ds_train))\nprint(\"Number of test samples: %d\" % tf.data.experimental.cardinality(ds_test))\nprint(\"Number of classes: %d\" % NUM_CLASSES)", "_____no_output_____" ], [ "plt.figure(figsize=(10, 10))\nfor i, (image, label) in enumerate(ds_train.take(9)):\n ax = plt.subplot(3, 3, i + 1)\n plt.imshow(image)\n plt.title(int(label))\n plt.axis(\"off\")", "_____no_output_____" ] ], [ [ "Here we will resize and rescale our images to fit into our model's input, as well as create batches. 
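One optional tweak, not in the original notebook, is to cache the resized dataset in memory so that later epochs skip the decode-and-resize work; a minimal sketch, assuming the dataset fits in RAM:\n\n```python\n# hypothetical addition: keep the resized images in memory between epochs\nds_train = ds_train.cache()\n```\n\nTreat this as an experiment rather than a default, since caching a dataset larger than memory will hurt rather than help.\n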
", "_____no_output_____" ] ], [ [ "IMG_SIZE = 224\nBATCH_SIZE = 64\nBUFFER_SIZE = 2\n \nsize = (IMG_SIZE, IMG_SIZE)\nds_train = ds_train.map(lambda image, label: (tf.image.resize(image, size), label))\nds_test = ds_test.map(lambda image, label: (tf.image.resize(image, size), label))\n \ndef input_preprocess(image, label):\n image = tf.keras.applications.resnet50.preprocess_input(image)\n return image, label", "_____no_output_____" ], [ "ds_train = ds_train.map(\n input_preprocess, num_parallel_calls=tf.data.experimental.AUTOTUNE\n)\n \nds_train = ds_train.batch(batch_size=BATCH_SIZE, drop_remainder=True)\nds_train = ds_train.prefetch(tf.data.experimental.AUTOTUNE)\n \nds_test = ds_test.map(input_preprocess)\nds_test = ds_test.batch(batch_size=BATCH_SIZE, drop_remainder=True)", "_____no_output_____" ] ], [ [ "### Model Architecture\nWe're using ResNet50 pretrained on ImageNet, from the Keras Applications module. ", "_____no_output_____" ] ], [ [ "inputs = tf.keras.layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3))\nbase_model = tf.keras.applications.ResNet50(\n weights=\"imagenet\", include_top=False, input_tensor=inputs\n)\nx = tf.keras.layers.GlobalAveragePooling2D()(base_model.output)\nx = tf.keras.layers.Dropout(0.5)(x)\noutputs = tf.keras.layers.Dense(NUM_CLASSES)(x)\n \nmodel = tf.keras.Model(inputs, outputs)", "_____no_output_____" ], [ "base_model.trainable = False", "_____no_output_____" ] ], [ [ "### Callbacks using Cloud Storage", "_____no_output_____" ] ], [ [ "MODEL_PATH = \"resnet-dogs\"\ncheckpoint_path = os.path.join(\"gs://\", GCP_BUCKET, MODEL_PATH, \"save_at_{epoch}\")\ntensorboard_path = os.path.join(\n \"gs://\", GCP_BUCKET, \"logs\", datetime.datetime.now().strftime(\"%Y%m%d-%H%M%S\")\n)\n\ncallbacks = [\n # TensorBoard will store logs for each epoch and graph performance for us. \n keras.callbacks.TensorBoard(log_dir=tensorboard_path, histogram_freq=1),\n # ModelCheckpoint will save models after each epoch for retrieval later.\n keras.callbacks.ModelCheckpoint(checkpoint_path),\n # EarlyStopping will terminate training when val_loss ceases to improve. \n keras.callbacks.EarlyStopping(monitor=\"val_loss\", patience=3),\n]", "_____no_output_____" ], [ "model.compile(\n optimizer=tf.keras.optimizers.Adam(learning_rate=1e-2),\n loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=[\"accuracy\"],\n)", "_____no_output_____" ] ], [ [ "Here, we're using the `tfc.remote()` flag to designate a smaller number of epochs than intended for the full training job when running locally. This enables easy debugging on Colab.", "_____no_output_____" ] ], [ [ "if tfc.remote():\n epochs = 500\n train_data = ds_train\n test_data = ds_test\nelse:\n epochs = 1\n train_data = ds_train.take(5)\n test_data = ds_test.take(5)\n callbacks = None\n \nmodel.fit(\n train_data, epochs=epochs, callbacks=callbacks, validation_data=test_data, verbose=2\n)", "_____no_output_____" ], [ "if tfc.remote():\n SAVE_PATH = os.path.join(\"gs://\", GCP_BUCKET, MODEL_PATH)\n model.save(SAVE_PATH)", "_____no_output_____" ] ], [ [ "Our model requires two additional libraries. 
We'll create a `requirements.txt` which specifies those libraries:", "_____no_output_____" ] ], [ [ "requirements = [\"tensorflow-datasets\", \"matplotlib\"]\n\nf = open(\"requirements.txt\", 'w')\nf.write('\\n'.join(requirements))\nf.close()", "_____no_output_____" ] ], [ [ "Let's add a job label so we can document our job logs later:", "_____no_output_____" ] ], [ [ "job_labels = {\"job\":\"resnet-dogs\"}", "_____no_output_____" ] ], [ [ "### Train on Cloud\n\nAll that's left to do is run our model on Cloud. To recap, our `run()` call enables:\n- A model that will be trained and stored on Cloud, including checkpoints\n- Tensorboard callback logs that will be accessible through tensorboard.dev\n- Specific python library requirements that will be fulfilled\n- Customizable job labels for log documentation\n- Real-time streaming logs printed in Colab\n- Deeply customizable machine configuration (ours will use two Tesla T4s)\n- An automatic resolution of distribution strategy for this configuration", "_____no_output_____" ] ], [ [ "tfc.run(\n requirements_txt=\"requirements.txt\",\n distribution_strategy=\"auto\",\n chief_config=tfc.MachineConfig(\n cpu_cores=8,\n memory=30,\n accelerator_type=tfc.AcceleratorType.NVIDIA_TESLA_T4,\n accelerator_count=2,\n ),\n docker_config=tfc.DockerConfig(\n image_build_bucket=GCP_BUCKET,\n ),\n job_labels=job_labels,\n stream_logs=True,\n)", "_____no_output_____" ] ], [ [ "### Evaluate your model\n\nWe'll use the cloud storage directories we saved for callbacks in order to load tensorboard and retrieve the saved model. Tensorboard logs can be used to monitor training performance in real-time", "_____no_output_____" ] ], [ [ "!tensorboard dev upload --logdir $tensorboard_path --name \"ResNet Dogs\"", "_____no_output_____" ], [ "if tfc.remote():\n model = tf.keras.models.load_model(SAVE_PATH)\nmodel.evaluate(test_data)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
4a6120ae128f92a4bf026e0f85f1c7cde8a52b3e
9,162
ipynb
Jupyter Notebook
_notebooks/2021-11-3-Project_Euler_Problems_1_and_2.ipynb
Nhyland28/Blog
f0e941c10a5684d9b969a5930ffb53a0b86a5bc3
[ "Apache-2.0" ]
null
null
null
_notebooks/2021-11-3-Project_Euler_Problems_1_and_2.ipynb
Nhyland28/Blog
f0e941c10a5684d9b969a5930ffb53a0b86a5bc3
[ "Apache-2.0" ]
null
null
null
_notebooks/2021-11-3-Project_Euler_Problems_1_and_2.ipynb
Nhyland28/Blog
f0e941c10a5684d9b969a5930ffb53a0b86a5bc3
[ "Apache-2.0" ]
null
null
null
36.501992
683
0.604017
[ [ [ "# Project Euler Problems 1 and 2\n> Multiples of 3 or 5 and even Fibonacci numbers in Python\n\n- toc: false \n- badges: true\n- comments: true\n- categories: [euler, programming]", "_____no_output_____", "In order to stay fresh with general programming skills I am going to attempt various Project Euler problems and walk through my solutions. For those of you that do not know [Project Euler](https://projecteuler.net/about) it \"is a series of challenging mathematical/computer programming problems.\" Simply put, it is a great way to practice your computer programming skills.\n\nIn this blog I'm going to work on Project Euler problems [1](https://projecteuler.net/problem=1) and [2](https://projecteuler.net/problem=2).", "_____no_output_____", "## Problem 1: Multiples of 3 or 5\n\n*If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.*\n\n*Find the sum of all the multiples of 3 or 5 below 1000.*", "_____no_output_____", "#### Breaking down the problem:\n\nIt's pretty straightforward. We want all the natural numbers (so positive integers... no decimals) under 1,000 that are multiples of 3 or 5. A multiple is simply \"a number that can be divided by another number without a remainder.\" So, multiples of 3 include 3, 6, 9, 12...\n\nThen we just want to take the sum of all those numbers.\n\nI am going to create a function with 2 arguments:\n1. A list of the numbers we want multiples of (so in this case 3 and 5)\n2. The max number we want (in this case 1000)\n\nThe function will then do the following steps:\n1. Initialize a list (`sum_list`). This is where we will store the multiples of the numbers we are interested in. For the case of this problem, I mean all the multiples of 3 and 5.\n2. Loop through the numbers from 1 up to the `max_number` (1000 for this particular problem). We'll call this number `i`.\n3. Loop through each of the dividers (e.g. 3 and 5) and use the modulo operator, `%`, to determine whether the number, `i`, is divisible by that divider. The modulo operator returns the remainder of a division problem (e.g. `4 % 3 = 1`). \n4. If the number `i` has no remainder when divided by a divider we will `.append` (or add) it to the `sum_list` we created earlier.\n5. One problem this creates is there could be duplicates. For example, 15 would show up twice in `sum_list` as both 3 and 5 go into it. We can solve this by removing duplicates in the `sum_list`. The [`set()`](https://www.w3schools.com/python/ref_func_set.asp) function is an easy way to do this. The function converts the object into a Python [set](https://www.w3schools.com/python/python_sets.asp) (one of the 4 major Python data classes along with lists, tuples, and dictionaries). Sets \"do not allow duplicate values\" so the use of `list(set(sum_list))` will convert the sum_list into a set, effectively dropping duplicate values, then converting it back into a list.\n6. 
The last step is to use the `sum()` function to calculate the sum of all the multiples stored in `sum_list`.", "_____no_output_____" ] ], [ [ "def euler_1(dividers, max_number):\n sum_list = []\n for i in range(1, max_number):\n for div in dividers:\n if i % div == 0:\n sum_list.append(i)\n sum_list = list(set(sum_list))\n return(sum(sum_list))", "_____no_output_____" ] ], [ [ "Running our function with the arguments from the question (multiples of 3 and 5 up to 1000) we get an answer of 233,168.", "_____no_output_____" ] ], [ [ "euler_1([3,5], 1000)", "_____no_output_____" ] ], [ [ "## Problem 2: Even Fibonacci numbers\n\n*Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be:*\n\n*1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...*\n\n*By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms.*", "_____no_output_____", "#### Breaking down the problem:\n\nThis problem will involve 3 parts:\n1. Creating a Fibonacci sequence that stops right before 4,000,000\n2. Sorting out only the even numbers\n3. Finding the sum of all those even numbers\n\nSimilar to the first Euler problem, I'm going to create a function that takes two arguments:\n1. The maximum number for the Fibonacci sequence (4,000,000) in this case\n2. And, a boolean variable determining whether we want to keep only the even numbers\n\nThe function will then do the following steps:\n1. Create a variable called `remainder` that will allow us to toggle whether we sum the even-valued terms (remainder 0 when dividing by 2, as in this problem) or the odd-valued terms (remainder 1)\n2. Create a list, `fibonacci`, with the first two terms of the Fibonacci sequence (1 and 2)\n3. Create a variable, `i`, which will serve as an iterator\n4. Put `i` into a while loop. Then, add together the last two numbers of the `fibonacci` list and make that sum the value of `i`. For example, on the first iteration we will sum 1 and 2 making `i` equal to 3.\n5. As long as the value of `i` is less than the `max_number` (4,000,000 in this case) add it to the end of `fibonacci` using `append()`. If the value of `i` exceeds `max_number` do not add it to the list and break the loop.\n6. Once the loop is broken we create a new list, `fibonacci_portion`. Use a Python list comprehension to go through the `fibonacci` list and only add even numbers. It uses the same modulo trick we used to find multiples in the previous problem.\n7. Finally, return the sum of the fibonacci_portion list (so all even numbers in the Fibonacci sequence up to 4,000,000) ", "_____no_output_____" ] ], [ [ "def euler_2(max_number, even = True):\n if even == True:\n remainder = 0\n else:\n remainder = 1\n \n fibonacci = [1,2]\n i = 1\n while i > 0:\n i = sum(fibonacci[-2:])\n if i > max_number:\n break\n fibonacci.append(i)\n \n fibonacci_portion = [j for j in fibonacci if j % 2 == remainder]\n \n return(sum(fibonacci_portion))", "_____no_output_____" ] ], [ [ "Running our function with the arguments from the question (a maximum of 4,000,000 and looking at even numbers) we get an answer of 4,613,732.", "_____no_output_____" ] ], [ [ "euler_2(4000000, even = True)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
4a61232fc72b181daf02398c6e68946f47c5a100
72,652
ipynb
Jupyter Notebook
2_T_autompg_xgboost.ipynb
ginttone/test_visuallization
bd73af65bec070a42f89728dda4f1011b8130177
[ "Apache-2.0" ]
null
null
null
2_T_autompg_xgboost.ipynb
ginttone/test_visuallization
bd73af65bec070a42f89728dda4f1011b8130177
[ "Apache-2.0" ]
null
null
null
2_T_autompg_xgboost.ipynb
ginttone/test_visuallization
bd73af65bec070a42f89728dda4f1011b8130177
[ "Apache-2.0" ]
null
null
null
32.189632
246
0.36138
[ [ [ "<a href=\"https://colab.research.google.com/github/ginttone/test_visuallization/blob/master/2_T_autompg_xgboost.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____", "## Data loading", "_____no_output_____" ] ], [ [ "import pandas as pd\ndf = pd.read_csv('./auto-mpg.csv', header=None)\ndf.columns = ['mpg','cylinders','displacement','horsepower','weight',\n 'acceleration','model year','origin','name'] \ndf.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 398 entries, 0 to 397\nData columns (total 9 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 mpg 398 non-null float64\n 1 cylinders 398 non-null int64 \n 2 displacement 398 non-null float64\n 3 horsepower 398 non-null object \n 4 weight 398 non-null float64\n 5 acceleration 398 non-null float64\n 6 model year 398 non-null int64 \n 7 origin 398 non-null int64 \n 8 name 398 non-null object \ndtypes: float64(4), int64(3), object(2)\nmemory usage: 28.1+ KB\n" ], [ "df[['horsepower','name']].describe(include='all')", "_____no_output_____" ] ], [ [ "## replace()", "_____no_output_____" ] ], [ [ "df['horsepower'].value_counts()", "_____no_output_____" ], [ "df['horsepower'].unique()", "_____no_output_____" ], [ "df_horsepower = df['horsepower'].replace(to_replace='?', value=None, inplace=False)\ndf_horsepower.unique()", "_____no_output_____" ], [ "df_horsepower = df_horsepower.astype('float')\ndf_horsepower.mean()", "_____no_output_____" ], [ "df['horsepower'] = df_horsepower.fillna(104)\ndf.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 398 entries, 0 to 397\nData columns (total 9 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 mpg 398 non-null float64\n 1 cylinders 398 non-null int64 \n 2 displacement 398 non-null float64\n 3 horsepower 398 non-null float64\n 4 weight 398 non-null float64\n 5 acceleration 398 non-null float64\n 6 model year 398 non-null int64 \n 7 origin 398 non-null int64 \n 8 name 398 non-null object \ndtypes: float64(5), int64(3), object(1)\nmemory usage: 28.1+ KB\n" ], [ "df['name'].unique()", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ] ], [ [ "## Separating categorical and continuous columns", "_____no_output_____" ] ], [ [ "df.head(8)", "_____no_output_____" ] ], [ [ "### check columns \n- Continuous: displacement, horsepower, weight, acceleration, mpg\n- Categorical: model year, name, cylinders, origin", "_____no_output_____" ] ], [ [ "df['name'].value_counts()", "_____no_output_____" ], [ "df['origin'].value_counts()", "_____no_output_____" ], [ "df['mpg'].describe(include='all')", "_____no_output_____" ], [ "df['mpg'].value_counts()", "_____no_output_____" ] ], [ [ "## Normalization step", "_____no_output_____" ] ], [ [ "Y = df['mpg']\nX_contiuns = df[['displacement', 'horsepower', 'weight', 'acceleration']]\nX_category = df[['model year', 'cylinders', 'origin']]", "_____no_output_____" ], [ "from sklearn import preprocessing", "_____no_output_____" ], [ "scaler = preprocessing.StandardScaler()\ntype(scaler)", "_____no_output_____" ], [ "scaler.fit(X_contiuns)", "_____no_output_____" ], [ "X = scaler.transform(X_contiuns)", "_____no_output_____" ], [ "from sklearn.linear_model import LinearRegression", "_____no_output_____" ], [ "lr = LinearRegression()\ntype(lr)", "_____no_output_____" ], [ "lr.fit(X,Y)", "_____no_output_____" ], [ "lr.score(X,Y)", "_____no_output_____" ], [ "df.head(1)", "_____no_output_____" ] ], [ [ "### X_contiuns = df[['displacement', 
'horsepower', 'weight', 'acceleration']]\n", "_____no_output_____" ] ], [ [ "x_cusmter = scaler.transform([[307.0,130.0,3504.0,12.0]])\nx_cusmter.shape", "_____no_output_____" ], [ "lr.predict(x_cusmter)", "_____no_output_____" ] ], [ [ "### XGboost", "_____no_output_____" ] ], [ [ "import xgboost as xgb\nmodel_xgb = xgb.XGBRegressor()\nmodel_xgb.fit(X, Y)\n", "[07:37:54] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.\n" ], [ "model_xgb.score(X,Y)", "_____no_output_____" ], [ "model_xgb.predict(x_cusmter)", "_____no_output_____" ] ], [ [ "### LightXGboost", "_____no_output_____" ] ], [ [ "from lightgbm import LGBMRegressor", "_____no_output_____" ], [ "model_lxgb = LGBMRegressor()\nmodel_lxgb.fit(X, Y)\nmodel_lxgb.score(X, Y)", "_____no_output_____" ] ], [ [ "## pickle", "_____no_output_____" ] ], [ [ "import pickle\npickle.dump(lr, open('./autompg_lr.pkl','wb'))", "_____no_output_____" ] ], [ [ "", "_____no_output_____" ] ], [ [ "!ls -l ./autompg_lr.pkl", "-rw-r--r-- 1 root root 567 Jul 6 07:37 ./autompg_lr.pkl\n" ], [ "pickle.load(open('./autompg_lr.pkl', 'rb'))", "_____no_output_____" ], [ "pickle.dump(scaler, open('./autompg_standardscaler.pkl','wb'))", "_____no_output_____" ], [ "", "_____no_output_____" ] ], [ [ "## One hot encoding", "_____no_output_____" ] ], [ [ "X_category.head(3)", "_____no_output_____" ], [ "X_category['origin'].value_counts()\n# 1, 2, 3\n#? | ? | ?\n# 1 | 0 | 0 -> 1\n# 0 | 1 | 0 -> 2\n# 0 | 0 | 1 -> 3", "_____no_output_____" ], [ "# data, prefix=None\ndf_origin = pd.get_dummies(X_category['origin'], prefix='origin')", "_____no_output_____" ], [ "df_cylinders = pd.get_dummies(X_category['cylinders'], prefix='cylinders')", "_____no_output_____" ], [ "df_origin.shape, df_cylinders.shape", "_____no_output_____" ], [ "X_contiuns.head(3)", "_____no_output_____" ], [ "# X_contiuns + df_cylinders + df_origin\n# objs, axis=0\nX = pd.concat([X_contiuns, df_cylinders, df_origin], axis='columns')\nX.head(5)", "_____no_output_____" ], [ "#Apply a StandardScaler again (the scaler above was fit on only the 4 continuous columns; there are more columns now, so fit a new one)\nscaler_xgb = preprocessing.StandardScaler()\nscaler_xgb.fit(X)\nX=scaler_xgb.transform(X)\nX\n", "_____no_output_____" ] ], [ [ "At the serving stage (predict) after xgboost:\n\none hot encoding\n\nTargets: origin, cylinders\n\nThe one-hot vector has to be built by hand,\ne.g. if cylinder == 5:\n[0,0,1,0,0]\n\nso it does not need to be carried in a pickle ", "_____no_output_____" ] ], [ [ "import pickle\npickle.dump(scaler_xgb,open('./scaler_xgb.pkl','wb'))", "_____no_output_____" ], [ "from sklearn.model_selection import train_test_split\nx_train, x_test, y_train, y_test = train_test_split(X, Y) \nx_train.shape, x_test.shape, y_train.shape, y_test.shape", "_____no_output_____" ], [ "import xgboost", "_____no_output_____" ], [ "xgb = xgboost.XGBRegressor()\nxgb", "_____no_output_____" ], [ "xgb.fit(x_train, y_train)", "[07:56:32] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.\n" ], [ "pickle.dump(xgb,open('./xgb_model.pkl','wb'))", "_____no_output_____" ], [ "xgb.score(x_train, y_train)", "_____no_output_____" ] ], [ [ "", "_____no_output_____" ] ], [ [ "xgb.score(x_test, y_test)", "_____no_output_____" ], [ "X[0]", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
4a612be6fcba32db5c27b91e9315ab6bec016741
4,239
ipynb
Jupyter Notebook
serving/model_server.ipynb
zilbermanor/functions
a1ef1411089314b8a264a70077a64ea77ccc0558
[ "Apache-2.0" ]
null
null
null
serving/model_server.ipynb
zilbermanor/functions
a1ef1411089314b8a264a70077a64ea77ccc0558
[ "Apache-2.0" ]
null
null
null
serving/model_server.ipynb
zilbermanor/functions
a1ef1411089314b8a264a70077a64ea77ccc0558
[ "Apache-2.0" ]
null
null
null
25.383234
104
0.488323
[ [ [ "# Model Server", "_____no_output_____" ] ], [ [ "# nuclio: ignore\nimport nuclio", "_____no_output_____" ], [ "%nuclio config kind=\"nuclio:serving\"\n%nuclio env MODEL_CLASS=ClassifierModel", "%nuclio: setting kind to 'nuclio:serving'\n%nuclio: setting 'MODEL_CLASS' environment variable\n" ], [ "%nuclio config spec.build.baseImage = \"mlrun/mlrun:0.4.5\"", "%nuclio: setting spec.build.baseImage to 'mlrun/mlrun:0.4.5'\n" ], [ "%%nuclio cmd -c\npython -m pip install -U kfserving \npython -m pip install numpy cloudpickle", "_____no_output_____" ], [ "import os\nfrom cloudpickle import load\n\nimport kfserving\n\nimport numpy as np\n\nfrom typing import List", "_____no_output_____" ], [ "class ClassifierModel(kfserving.KFModel):\n def __init__(self, name: str, model_dir: str, classifier = None):\n super().__init__(name)\n self.name = name\n self.model_dir = model_dir\n if classifier:\n self.classifier = classifier\n self.ready = True\n\n def load(self):\n \"\"\"Load model from KubeFlow storage.\n\n HACK FOR NOW\n ============\n \n * need instructions on how to recreate whatever is in this \n model_dir, \n * currently we grab the first file that ends with 'pkl' and \n set it as model for inference. It could be a json file,\n onnx, etc...\n \"\"\"\n if self.model_dir.endswith('.pkl'):\n model_file = self.model_dir\n else:\n for file in os.listdir(self.model_dir):\n if file.endswith('.pkl'):\n model_file = os.path.join(self.model_dir, file)\n break\n self.classifier = load(open(model_file, 'rb'))\n self.ready = True\n\n def predict(self, body: dict) -> List:\n \"\"\"Generate model predictions from sample.\n \n :param body : A dict of observations, each of which is a 1-dimensional feature vector.\n \n Returns model predictions as a `List`, one for each row in the `body` input `List`.\n \"\"\"\n try:\n feats = np.asarray(body['instances'])\n result: np.ndarray = self.classifier.predict(feats)\n return result.tolist()\n except Exception as e:\n raise Exception(f\"Failed to predict {e}\")", "_____no_output_____" ], [ "# nuclio: end-code", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ] ]
4a612ef7e6f08794b00ffd8d1e1b1a7cf6d78001
170,077
ipynb
Jupyter Notebook
Ali/ProlificAcademic.ipynb
venkateshtaduvayi/ML-for-Good-Hackathon
72d152efe6f228ac575909cfebee4c4ca2808dfc
[ "Apache-2.0" ]
null
null
null
Ali/ProlificAcademic.ipynb
venkateshtaduvayi/ML-for-Good-Hackathon
72d152efe6f228ac575909cfebee4c4ca2808dfc
[ "Apache-2.0" ]
null
null
null
Ali/ProlificAcademic.ipynb
venkateshtaduvayi/ML-for-Good-Hackathon
72d152efe6f228ac575909cfebee4c4ca2808dfc
[ "Apache-2.0" ]
2
2021-11-24T17:23:42.000Z
2021-11-28T17:01:34.000Z
302.090586
36,072
0.926133
[ [ [ "##### Load the data to dataframes", "_____no_output_____" ] ], [ [ "from pathlib import Path\nimport pandas as pd\n%run util.ipynb\nfrom termcolor import colored\n\nfile_april_2020_parent_address = \"../Data/ProlificAcademic/April 2020/Data/CRISIS_Parent_April_2020.csv\"\nfile_april_2020_adult_address = \"../Data/ProlificAcademic/April 2020/Data/CRISIS_Adult_April_2020.csv\"\n\nfile_may_2020_parent_address = \"../Data/ProlificAcademic/May 2020/Data/CRISIS_Parent_May_2020.csv\"\nfile_may_2020_adult_address = \"../Data/ProlificAcademic/May 2020/Data/CRISIS_Adult_May_2020.csv\"\n\n\nfile_april_2021_parent_address = \"../Data/ProlificAcademic/April 2021/Data/CRISIS_Parent_April_2021.csv\"\nfile_april_2021_adult_address = \"../Data/ProlificAcademic/April 2021/Data/CRISIS_Adult_April_2021.csv\"\n\n\n\nfile_april_2020_parent = Path(file_april_2020_parent_address)\nfile_april_2020_adult = Path(file_april_2020_adult_address)\n\nfile_may_2020_parent = Path(file_may_2020_parent_address)\nfile_may_2020_adult = Path(file_may_2020_adult_address)\n\nfile_april_2021_parent = Path(file_april_2021_parent_address)\nfile_april_2021_adult = Path(file_april_2021_adult_address)\n\nif file_april_2020_parent.exists():\n print(\"Reading file {}.\".format(file_april_2020_parent))\n df_april_2020_parent = pd.read_csv(file_april_2020_parent)\n #print(df_april_2020_parent.head())\nelse: \n print(\"file {} does not exist.\".format(file_april_2020_parent))\n\nif file_april_2020_adult.exists():\n print(\"Reading file {}.\".format(file_april_2020_adult))\n df_april_2020_adult = pd.read_csv(file_april_2020_adult)\n #print(df_april_2020_adult.head())\nelse: \n print(\"file {} does not exist.\".format(file_april_2020_adult))\n \n \nif file_may_2020_parent.exists():\n print(\"Reading file {}.\".format(file_may_2020_parent))\n df_may_2020_parent = pd.read_csv(file_may_2020_parent)\n #print(df_may_2020_parent.head())\nelse: \n print(\"file {} does not exist.\".format(file_may_2020_parent))\n\nif file_may_2020_adult.exists():\n print(\"Reading file {}.\".format(file_may_2020_adult))\n df_may_2020_adult = pd.read_csv(file_may_2020_adult)\n #print(df_may_2020_adult.head())\nelse: \n print(\"file {} does not exist.\".format(file_may_2020_adult))\n \n \nif file_april_2021_parent.exists():\n print(\"Reading file {}.\".format(file_april_2021_parent))\n df_april_2021_parent = pd.read_csv(file_april_2021_parent)\n #print(df_april_2021_parent.head())\nelse: \n print(\"file {} does not exist.\".format(file_april_2021_parent))\n\nif file_april_2021_adult.exists():\n print(\"Reading file {}.\".format(file_april_2021_adult))\n df_april_2021_adult = pd.read_csv(file_april_2021_adult)\n #print(df_april_2021_adult.head())\nelse: \n print(\"file {} does not exist.\".format(file_april_2021_adult))", "Reading file ../Data/ProlificAcademic/April 2020/Data/CRISIS_Parent_April_2020.csv.\nReading file ../Data/ProlificAcademic/April 2020/Data/CRISIS_Adult_April_2020.csv.\nReading file ../Data/ProlificAcademic/May 2020/Data/CRISIS_Parent_May_2020.csv.\nReading file ../Data/ProlificAcademic/May 2020/Data/CRISIS_Adult_May_2020.csv.\nReading file ../Data/ProlificAcademic/April 2021/Data/CRISIS_Parent_April_2021.csv.\nReading file ../Data/ProlificAcademic/April 2021/Data/CRISIS_Adult_April_2021.csv.\n" ], [ "pd.set_option('display.max_columns', None)", "_____no_output_____" ] ], [ [ "##### Check `polarity` and `subjective` of specifypositive\nCheck to see how positive (or negative) the statements are and whether they are public opinions or 
factual information.", "_____no_output_____" ] ], [ [ "df_specifypositive_april_2020_adult = df_april_2020_adult.filter(['specifypositive'], axis=1)\ndf_specifypositive_april_2020_adult = clean_data(df_specifypositive_april_2020_adult)\n\ndf_specifypositive_may_2020_adult = df_may_2020_adult.filter(['specifypositive'], axis=1)\ndf_specifypositive_may_2020_adult = clean_data(df_specifypositive_may_2020_adult)\n\ndf_specifypositive_april_2021_adult = df_april_2021_adult.filter(['specifypositive'], axis=1)\ndf_specifypositive_april_2021_adult = clean_data(df_specifypositive_april_2021_adult)", "_____no_output_____" ], [ "from textblob import TextBlob\nimport matplotlib.pyplot as plt\n\ndf_specifypositive_april_2020_adult['polarity'] = df_specifypositive_april_2020_adult['specifypositive'].apply(lambda x: TextBlob(x).polarity)\ndf_specifypositive_may_2020_adult['polarity'] = df_specifypositive_may_2020_adult['specifypositive'].apply(lambda x: TextBlob(x).polarity)\ndf_specifypositive_april_2021_adult['polarity'] = df_specifypositive_april_2021_adult['specifypositive'].apply(lambda x: TextBlob(x).polarity)\n\n#Plot\nfig, (ax1,ax2,ax3) = plt.subplots(1,3, figsize=(10,4)) # 1 row, 3 columns\n\ndf_specifypositive_april_2020_adult['polarity'].plot.hist(color='salmon', title='Comments Polarity, April 2020',ax=ax1)\ndf_specifypositive_may_2020_adult['polarity'].plot.hist(color='salmon', title='Comments Polarity, May 2020',ax=ax2)\ndf_specifypositive_april_2021_adult['polarity'].plot.hist(color='salmon', title='Comments Polarity, April 2021',ax=ax3)", "_____no_output_____" ], [ "df_specifypositive_april_2020_adult['subjective'] = df_specifypositive_april_2020_adult['specifypositive'].apply(lambda x: TextBlob(x).subjectivity)\ndf_specifypositive_may_2020_adult['subjective'] = df_specifypositive_may_2020_adult['specifypositive'].apply(lambda x: TextBlob(x).subjectivity)\ndf_specifypositive_april_2021_adult['subjective'] = df_specifypositive_april_2021_adult['specifypositive'].apply(lambda x: TextBlob(x).subjectivity)\n\n\n#Plot\nfig, (ax1,ax2,ax3) = plt.subplots(1,3, figsize=(12,4)) # 1 row, 3 columns\n\ndf_specifypositive_april_2020_adult['subjective'].plot.hist(color='salmon', title='Comments subjective, April 2020',ax=ax1)\ndf_specifypositive_may_2020_adult['subjective'].plot.hist(color='salmon', title='Comments subjective, May 2020',ax=ax2)\ndf_specifypositive_april_2021_adult['subjective'].plot.hist(color='salmon', title='Comments subjective, April 2021',ax=ax3)", "_____no_output_____" ] ], [ [ "- Find correlation between positivechange and selected columns `(social media, isolated, video game, outdoor)`\n\n\n  - Normalize the data\n\n  - The correlation is calculated for April 2020, May 2020 and April 2021\n\n  - Ex: What is the correlation between spending time outdoors and positivechange?\n\n  - positivechange : Has COVID-19 led to any positive changes in your child's life?\n\n  - outdoor: ...how many days per week did your child spend time outdoors?", "_____no_output_____" ] ], [ [ "# Select columns and clean data (removing null values)\n# Three different lists of column names are needed since the column headers differ across the three waves. 
\nselected_col_names_2021 = ['positivechange','tvmedia','socialmedia','peopletoturnto','peopletotalkto','isolated','leftout','lackcompanionship','videogames','outdoors']\nselected_col_names_2020_1 = ['positivechange','priortvmedia','priorsocialmedia','peopletoturnto','peopletotalkto','isolated','priorlonely','lackcompanionship','priorvideogames','outdoorsprior']\nselected_col_names_2020_2 = ['positivechange','priortvmedia_2','priorsocialmedia_2','peopletoturnto_2','peopletotalkto','isolated','priorlonely_2','lackcompanionship','priorvideogames_2','outdoorsprior_2']\n\ndf_corr_april_2021_adult = df_april_2021_adult.filter(selected_col_names_2021, axis=1)\ndf_corr_april_2021_adult = clean_data(df_corr_april_2021_adult)\n\ndf_corr_april_2020_adult = df_april_2020_adult.filter(selected_col_names_2020_1, axis=1)\ndf_corr_april_2020_adult = clean_data(df_corr_april_2020_adult)\n\ndf_corr_may_2020_adult = df_may_2020_adult.filter(selected_col_names_2020_2, axis=1)\ndf_corr_may_2020_adult = clean_data(df_corr_may_2020_adult)\n", "_____no_output_____" ], [ "from sklearn import preprocessing\n\n# Normalize (z-score) the data so that all three waves are on a comparable scale. \ndf_normalized_april_2020_adult = zscore(df_corr_april_2020_adult,selected_col_names_2020_1)\ndf_normalized_may_2020_adult = zscore(df_corr_may_2020_adult,selected_col_names_2020_2)\ndf_normalized_april_2021_adult = zscore(df_corr_april_2021_adult,selected_col_names_2021)", "_____no_output_____" ], [ "# Check the correlation between positivechange and the 4 selected features.\n\ncorr_positivechange_socialmedia_april_2020 = df_normalized_april_2020_adult['positivechange'].corr(df_normalized_april_2020_adult['priorsocialmedia'])\ncorr_positivechange_isolated_april_2020 = df_normalized_april_2020_adult['positivechange'].corr(df_normalized_april_2020_adult['isolated'])\ncorr_positivechange_videogame_april_2020 = df_normalized_april_2020_adult['positivechange'].corr(df_normalized_april_2020_adult['priorvideogames'])\ncorr_positivechange_outdoor_april_2020 = df_normalized_april_2020_adult['positivechange'].corr(df_normalized_april_2020_adult['outdoorsprior'])\n\nprint(colored('April 2020 - social media:','blue'),colored(corr_positivechange_socialmedia_april_2020,'green'))\nprint(colored(\"April 2020 - isolated:\",'blue'),colored(corr_positivechange_isolated_april_2020,'green'))\nprint(colored(\"April 2020 - video game:\",'blue'),colored(corr_positivechange_videogame_april_2020,'green'))\nprint(colored(\"April 2020 - outdoor:\",'blue'),colored(corr_positivechange_outdoor_april_2020,'green'))\n\n\ncorr_positivechange_socialmedia_may_2020 = df_normalized_may_2020_adult['positivechange'].corr(df_normalized_may_2020_adult['priorsocialmedia_2'])\ncorr_positivechange_isolated_may_2020 = df_normalized_may_2020_adult['positivechange'].corr(df_normalized_may_2020_adult['isolated'])\ncorr_positivechange_videogame_may_2020 = df_normalized_may_2020_adult['positivechange'].corr(df_normalized_may_2020_adult['priorvideogames_2'])\ncorr_positivechange_outdoor_may_2020 = df_normalized_may_2020_adult['positivechange'].corr(df_normalized_may_2020_adult['outdoorsprior_2'])\nprint()\nprint(colored(\"May 2020 - social media:\",'blue'),colored(corr_positivechange_socialmedia_may_2020,'green'))\nprint(colored(\"May 2020 - isolated:\",'blue'),colored(corr_positivechange_isolated_may_2020,'green'))\nprint(colored(\"May 2020 - video game:\",'blue'),colored(corr_positivechange_videogame_may_2020,'green'))\nprint(colored(\"May 2020 - outdoor:\",'blue'),colored(corr_positivechange_outdoor_may_2020,'green'))\n\ncorr_positivechange_socialmedia_april_2021 = df_normalized_april_2021_adult['positivechange'].corr(df_normalized_april_2021_adult['socialmedia'])\ncorr_positivechange_isolated_april_2021 = df_normalized_april_2021_adult['positivechange'].corr(df_normalized_april_2021_adult['isolated'])\ncorr_positivechange_videogame_april_2021 = df_normalized_april_2021_adult['positivechange'].corr(df_normalized_april_2021_adult['videogames'])\ncorr_positivechange_outdoor_april_2021 = df_normalized_april_2021_adult['positivechange'].corr(df_normalized_april_2021_adult['outdoors'])\nprint()\nprint(colored(\"April 2021 - social media:\",'blue'),colored(corr_positivechange_socialmedia_april_2021,'green'))\nprint(colored(\"April 2021 - isolated:\",'blue'),colored(corr_positivechange_isolated_april_2021,'green'))\nprint(colored(\"April 2021 - video game:\",'blue'),colored(corr_positivechange_videogame_april_2021,'green'))\nprint(colored(\"April 2021 - outdoor:\",'blue'),colored(corr_positivechange_outdoor_april_2021,'green'))", "\u001b[34mApril 2020 - social media:\u001b[0m \u001b[32m-0.2439750182371333\u001b[0m\n\u001b[34mApril 2020 - isolated:\u001b[0m \u001b[32m-0.3273268353539886\u001b[0m\n\u001b[34mApril 2020 - video game:\u001b[0m \u001b[32m0.2182178902359924\u001b[0m\n\u001b[34mApril 2020 - outdoor:\u001b[0m \u001b[32m0.0\u001b[0m\n\n\u001b[34mMay 2020 - social media: \u001b[0m \u001b[32m-0.2439750182371333\u001b[0m\n\u001b[34mMay 2020 - isolated:\u001b[0m \u001b[32m-0.3273268353539886\u001b[0m\n\u001b[34mMay 2020 - video game:\u001b[0m \u001b[32m0.2182178902359924\u001b[0m\n\u001b[34mMay 2020 - outdoor:\u001b[0m \u001b[32m0.2182178902359924\u001b[0m\n\n\u001b[34mApril 2021 - social media:\u001b[0m \u001b[32m-0.2439750182371333\u001b[0m\n\u001b[34mApril 2021 - isolated:\u001b[0m \u001b[32m-0.3273268353539886\u001b[0m\n\u001b[34mapril 2021 - video game:\u001b[0m \u001b[32m0.2182178902359924\u001b[0m\n\u001b[34mapril 2021 - outdoor:\u001b[0m \u001b[32m0.0\u001b[0m\n" ], [ "import seaborn as sns\n\n\nfig, axes = plt.subplots(1, 3, figsize=(15, 5), sharey=True)\nfig.suptitle('Positive Change - Social Media')\n\nsns.regplot(ax=axes[0],x=df_normalized_april_2020_adult[\"positivechange\"], y=df_normalized_april_2020_adult[\"priorsocialmedia\"]).set(title='april_2020_adult')\nsns.regplot(ax=axes[1],x=df_normalized_may_2020_adult[\"positivechange\"], y=df_normalized_may_2020_adult[\"priorsocialmedia_2\"]).set(title='may_2020_adult')\nsns.regplot(ax=axes[2],x=df_normalized_april_2021_adult[\"positivechange\"], y=df_normalized_april_2021_adult[\"socialmedia\"]).set(title='april_2021_adult')\n", "_____no_output_____" ] ], [ [ "`Longitudinal study` of `positivechange` and `Social media` shows that positivechange and social media have an <font color='red'>Inverse Correlation</font>.", "_____no_output_____" ] ], [ [ "fig, axes = plt.subplots(1, 3, figsize=(15, 5), sharey=True)\nfig.suptitle('Positive Change - isolated')\n\nsns.regplot(ax=axes[0],x=df_normalized_april_2020_adult[\"positivechange\"], y=df_normalized_april_2020_adult[\"isolated\"]).set(title='april_2020_adult')\nsns.regplot(ax=axes[1],x=df_normalized_may_2020_adult[\"positivechange\"], y=df_normalized_may_2020_adult[\"isolated\"]).set(title='may_2020_adult')\nsns.regplot(ax=axes[2],x=df_normalized_april_2021_adult[\"positivechange\"], y=df_normalized_april_2021_adult[\"isolated\"]).set(title='april_2021_adult')\n", "_____no_output_____" ] ], [ [ "`Longitudinal study` of `positivechange` and `isolated` shows that positivechange and isolated have an <font color='red'>Inverse Correlation</font>.", "_____no_output_____" ] ], [ [ "fig, axes = plt.subplots(1, 3, figsize=(15, 5), sharey=True)\nfig.suptitle('Positive Change - videogame')\n\nsns.regplot(ax=axes[0],x=df_normalized_april_2020_adult[\"positivechange\"], y=df_normalized_april_2020_adult[\"priorvideogames\"]).set(title='april_2020_adult')\nsns.regplot(ax=axes[1],x=df_normalized_may_2020_adult[\"positivechange\"], y=df_normalized_may_2020_adult[\"priorvideogames_2\"]).set(title='may_2020_adult')\nsns.regplot(ax=axes[2],x=df_normalized_april_2021_adult[\"positivechange\"], y=df_normalized_april_2021_adult[\"videogames\"]).set(title='april_2021_adult')\n", "_____no_output_____" ] ], [ [ "`Longitudinal study` of `positivechange` and `video game` shows that positivechange and video game have a <font color='green'>Direct Correlation</font>.", "_____no_output_____" ] ], [ [ "fig, axes = plt.subplots(1, 3, figsize=(15, 5), sharey=True)\nfig.suptitle('Positive Change - outdoor')\n\nsns.regplot(ax=axes[0],x=df_normalized_april_2020_adult[\"positivechange\"], y=df_normalized_april_2020_adult[\"outdoorsprior\"]).set(title='april_2020_adult')\nsns.regplot(ax=axes[1],x=df_normalized_may_2020_adult[\"positivechange\"], y=df_normalized_may_2020_adult[\"outdoorsprior_2\"]).set(title='may_2020_adult')\nsns.regplot(ax=axes[2],x=df_normalized_april_2021_adult[\"positivechange\"], y=df_normalized_april_2021_adult[\"outdoors\"]).set(title='april_2021_adult')\n", "_____no_output_____" ] ], [ [ "`Longitudinal study` of `positivechange` and `outdoor` shows that positivechange and outdoor have <font color='blue'>No Relation</font>.", "_____no_output_____" ] ] ]
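Note: the `clean_data` and `zscore` helpers used in the record above are pulled in via `%run util.ipynb` and are not included in this record. A minimal sketch of what a z-score helper of that shape might look like follows; it is an assumption about the helper's behavior, not the authors' actual implementation.

# Hypothetical stand-in for the zscore() helper loaded from util.ipynb --
# an illustrative assumption, not the authors' actual code.
import pandas as pd

def zscore(df, cols):
    # Standardize each listed column to zero mean and unit variance.
    out = df.copy()
    for col in cols:
        out[col] = (out[col] - out[col].mean()) / out[col].std()
    return out

Pearson correlation is itself invariant to this kind of linear rescaling, so the standardization mainly helps the side-by-side regression plots share one scale.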
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
4a613537990c67cc3f393fc2c08674dceb332c8c
1,828
ipynb
Jupyter Notebook
result/.ipynb_checkpoints/Untitled-checkpoint.ipynb
jasonjnie/Painting-to-Photo-Transfer
28263149f9f949cfe00f686bf0be0ab88c504fa5
[ "BSD-3-Clause" ]
null
null
null
result/.ipynb_checkpoints/Untitled-checkpoint.ipynb
jasonjnie/Painting-to-Photo-Transfer
28263149f9f949cfe00f686bf0be0ab88c504fa5
[ "BSD-3-Clause" ]
null
null
null
result/.ipynb_checkpoints/Untitled-checkpoint.ipynb
jasonjnie/Painting-to-Photo-Transfer
28263149f9f949cfe00f686bf0be0ab88c504fa5
[ "BSD-3-Clause" ]
null
null
null
16.321429
51
0.496718
[ [ [ "# Painting-to-Photo Transfer", "_____no_output_____" ], [ "## Zhonghao Wang, Jiaxi Nie, Rui Lan", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown" ] ]
4a613919d78fd6b63259381ecb45d4400cc87381
2,975
ipynb
Jupyter Notebook
book/lessons/0_preparation.ipynb
f-lingo/skimage-tutorials
926331f05f0c5dc2b0b1be7b9460a68a5399039e
[ "CC-BY-4.0" ]
8
2019-07-24T19:23:28.000Z
2022-02-18T20:36:37.000Z
book/lessons/0_preparation.ipynb
f-lingo/skimage-tutorials
926331f05f0c5dc2b0b1be7b9460a68a5399039e
[ "CC-BY-4.0" ]
5
2019-11-15T02:00:26.000Z
2021-01-06T04:26:40.000Z
book/lessons/0_preparation.ipynb
f-lingo/skimage-tutorials
926331f05f0c5dc2b0b1be7b9460a68a5399039e
[ "CC-BY-4.0" ]
3
2020-11-30T14:58:35.000Z
2022-02-14T16:33:46.000Z
27.546296
112
0.558992
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
4a61392823e4a01f42c408d35e90599b3d98bbdc
51,682
ipynb
Jupyter Notebook
Chapter 07/fashionmnist_keras_distributed.ipynb
bpbpublications/Mastering-TensorFlow-2.x
fc169692e6f38f3d6b78f956f47bcc7c884a9647
[ "MIT" ]
1
2022-02-15T07:36:18.000Z
2022-02-15T07:36:18.000Z
Chapter 07/fashionmnist_keras_distributed.ipynb
bpbpublications/Mastering-TensorFlow-2.x
fc169692e6f38f3d6b78f956f47bcc7c884a9647
[ "MIT" ]
null
null
null
Chapter 07/fashionmnist_keras_distributed.ipynb
bpbpublications/Mastering-TensorFlow-2.x
fc169692e6f38f3d6b78f956f47bcc7c884a9647
[ "MIT" ]
null
null
null
121.604706
19,992
0.84722
[ [ [ "# Distributed Training with Keras", "_____no_output_____" ], [ "## Import dependencies", "_____no_output_____" ] ], [ [ "import tensorflow_datasets as tfds\nimport tensorflow as tf\nfrom tensorflow import keras\n\nimport os\nprint(tf.__version__)", "2.3.0\n" ] ], [ [ "## Dataset - Fashion MNIST", "_____no_output_____" ] ], [ [ "#datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)\n#mnist_train, mnist_test = datasets['train'], datasets['test']\n\nfashion_mnist = keras.datasets.fashion_mnist\n(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()", "_____no_output_____" ] ], [ [ "## Define a distribution Strategy", "_____no_output_____" ] ], [ [ "strategy = tf.distribute.MirroredStrategy()", "WARNING:tensorflow:There are non-GPU devices in `tf.distribute.Strategy`, not using nccl allreduce.\nINFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:CPU:0',)\n" ], [ "print('Number of devices: {}'.format(strategy.num_replicas_in_sync))", "Number of devices: 1\n" ], [ "num_train_examples = len(train_images)#info.splits['train'].num_examples\nprint(num_train_examples)\nnum_test_examples = len(test_images) #info.splits['test'].num_examples\nprint(num_test_examples)\n\nBUFFER_SIZE = 10000\n\nBATCH_SIZE_PER_REPLICA = 64\nBATCH_SIZE = BATCH_SIZE_PER_REPLICA * strategy.num_replicas_in_sync", "60000\n10000\n" ], [ "#train_dataset = train_images.map(scale).cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE)\n#eval_dataset = test_images.map(scale).batch(BATCH_SIZE)", "_____no_output_____" ], [ "class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',\n 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']", "_____no_output_____" ], [ "with strategy.scope():\n model = keras.Sequential([\n keras.layers.Flatten(input_shape=(28, 28)),\n keras.layers.Dense(128, activation='relu'),\n keras.layers.Dense(10)\n ])\n\n model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n optimizer='adam',\n metrics=['accuracy'])", "_____no_output_____" ], [ "# Define the checkpoint directory to store the checkpoints\n\ncheckpoint_dir = './training_checkpoints'\n# Name of the checkpoint files\ncheckpoint_prefix = os.path.join(checkpoint_dir, \"ckpt_{epoch}\")", "_____no_output_____" ], [ "# Function for decaying the learning rate.\n# You can define any decay function you need.\ndef decay(epoch):\n if epoch < 3:\n return 1e-3\n elif epoch >= 3 and epoch < 7:\n return 1e-4\n else:\n return 1e-5", "_____no_output_____" ], [ "class PrintLR(tf.keras.callbacks.Callback):\n def on_epoch_end(self, epoch, logs=None):\n print('\\nLearning rate for epoch {} is {}'.format(epoch + 1,\n model.optimizer.lr.numpy()))", "_____no_output_____" ], [ "from tensorflow.keras.callbacks import ModelCheckpoint\n\n#checkpoint = ModelCheckpoint(ckpt_model, \n# monitor='val_accuracy',\n# verbose=1,\n# save_best_only=True,\n# mode='max')", "_____no_output_____" ], [ "callbacks = [\n tf.keras.callbacks.TensorBoard(log_dir='./logs'),\n tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_prefix,\n save_weights_only=True),\n tf.keras.callbacks.LearningRateScheduler(decay),\n PrintLR()\n]", "_____no_output_____" ], [ "#model.fit(train_dataset, epochs=12, callbacks=callbacks)\nhistory = model.fit(train_images, train_labels,validation_data=(test_images, test_labels), \n epochs=15,callbacks=callbacks)", "Epoch 1/15\n 1/1875 [..............................] 
- ETA: 0s - loss: 0.6464 - accuracy: 0.8125WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0010s vs `on_train_batch_end` time: 0.0159s). Check your callbacks.\n1840/1875 [============================>.] - ETA: 0s - loss: 0.5935 - accuracy: 0.7878 ETA: 0s - loss: 0.6025 - accu\nLearning rate for epoch 1 is 0.0010000000474974513\n1875/1875 [==============================] - 2s 1ms/step - loss: 0.5930 - accuracy: 0.7878 - val_loss: 0.5607 - val_accuracy: 0.8050\nEpoch 2/15\n1835/1875 [============================>.] - ETA: 0s - loss: 0.5369 - accuracy: 0.8128\nLearning rate for epoch 2 is 0.0010000000474974513\n1875/1875 [==============================] - 2s 1ms/step - loss: 0.5362 - accuracy: 0.8129 - val_loss: 0.5793 - val_accuracy: 0.8014\nEpoch 3/15\n1840/1875 [============================>.] - ETA: 0s - loss: 0.5291 - accuracy: 0.8174\nLearning rate for epoch 3 is 0.0010000000474974513\n1875/1875 [==============================] - 2s 1ms/step - loss: 0.5296 - accuracy: 0.8171 - val_loss: 0.6655 - val_accuracy: 0.7981\nEpoch 4/15\n1844/1875 [============================>.] - ETA: 0s - loss: 0.4334 - accuracy: 0.8484\nLearning rate for epoch 4 is 9.999999747378752e-05\n1875/1875 [==============================] - 2s 1ms/step - loss: 0.4327 - accuracy: 0.8485 - val_loss: 0.4920 - val_accuracy: 0.8374\nEpoch 5/15\n1868/1875 [============================>.] - ETA: 0s - loss: 0.4155 - accuracy: 0.8540\nLearning rate for epoch 5 is 9.999999747378752e-05\n1875/1875 [==============================] - 2s 1ms/step - loss: 0.4151 - accuracy: 0.8541 - val_loss: 0.4790 - val_accuracy: 0.8405\nEpoch 6/15\n1845/1875 [============================>.] - ETA: 0s - loss: 0.4064 - accuracy: 0.8560\nLearning rate for epoch 6 is 9.999999747378752e-05\n1875/1875 [==============================] - 2s 1ms/step - loss: 0.4063 - accuracy: 0.8561 - val_loss: 0.4781 - val_accuracy: 0.8391\nEpoch 7/15\n1834/1875 [============================>.] - ETA: 0s - loss: 0.4001 - accuracy: 0.8579\nLearning rate for epoch 7 is 9.999999747378752e-05\n1875/1875 [==============================] - 3s 1ms/step - loss: 0.4004 - accuracy: 0.8577 - val_loss: 0.4722 - val_accuracy: 0.8430\nEpoch 8/15\n1848/1875 [============================>.] - ETA: 0s - loss: 0.3866 - accuracy: 0.8628\nLearning rate for epoch 8 is 9.999999747378752e-06\n1875/1875 [==============================] - 2s 1ms/step - loss: 0.3870 - accuracy: 0.8626 - val_loss: 0.4655 - val_accuracy: 0.8450\nEpoch 9/15\n1839/1875 [============================>.] - ETA: 0s - loss: 0.3844 - accuracy: 0.8624\nLearning rate for epoch 9 is 9.999999747378752e-06\n1875/1875 [==============================] - 2s 1ms/step - loss: 0.3844 - accuracy: 0.8626 - val_loss: 0.4652 - val_accuracy: 0.8450\nEpoch 10/15\n1852/1875 [============================>.] - ETA: 0s - loss: 0.3833 - accuracy: 0.8632\nLearning rate for epoch 10 is 9.999999747378752e-06\n1875/1875 [==============================] - 2s 1ms/step - loss: 0.3833 - accuracy: 0.8633 - val_loss: 0.4637 - val_accuracy: 0.8448\nEpoch 11/15\n1865/1875 [============================>.] - ETA: 0s - loss: 0.3820 - accuracy: 0.8626\nLearning rate for epoch 11 is 9.999999747378752e-06\n1875/1875 [==============================] - 2s 1ms/step - loss: 0.3824 - accuracy: 0.8626 - val_loss: 0.4652 - val_accuracy: 0.8456\nEpoch 12/15\n1859/1875 [============================>.] 
- ETA: 0s - loss: 0.3814 - accuracy: 0.8633\nLearning rate for epoch 12 is 9.999999747378752e-06\n1875/1875 [==============================] - 3s 1ms/step - loss: 0.3816 - accuracy: 0.8632 - val_loss: 0.4635 - val_accuracy: 0.8452\nEpoch 13/15\n1867/1875 [============================>.] - ETA: 0s - loss: 0.3810 - accuracy: 0.8637\nLearning rate for epoch 13 is 9.999999747378752e-06\n1875/1875 [==============================] - 2s 1ms/step - loss: 0.3809 - accuracy: 0.8636 - val_loss: 0.4642 - val_accuracy: 0.8462\nEpoch 14/15\n1849/1875 [============================>.] - ETA: 0s - loss: 0.3801 - accuracy: 0.8639\nLearning rate for epoch 14 is 9.999999747378752e-06\n1875/1875 [==============================] - 2s 1ms/step - loss: 0.3801 - accuracy: 0.8638 - val_loss: 0.4642 - val_accuracy: 0.8452\nEpoch 15/15\n1858/1875 [============================>.] - ETA: 0s - loss: 0.3798 - accuracy: 0.8639\nLearning rate for epoch 15 is 9.999999747378752e-06\n1875/1875 [==============================] - 2s 1ms/step - loss: 0.3793 - accuracy: 0.8642 - val_loss: 0.4642 - val_accuracy: 0.8471\n" ], [ "history.history.keys()", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.plot(history.history['accuracy'])\nplt.plot(history.history['val_accuracy'])\nplt.title('Model Accuracy')\nplt.ylabel('accuracy')\nplt.xlabel('epoch')\nplt.legend(['train', 'test'])\nplt.show()", "_____no_output_____" ], [ "plt.plot(history.history['loss'])\nplt.plot(history.history['val_loss'])\nplt.title('Model Loss')\nplt.ylabel('Loss')\nplt.xlabel('epoch')\nplt.legend(['train', 'test'])\nplt.show()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a613b156dca4f7661a2b1169863c5d553d7f849
65,730
ipynb
Jupyter Notebook
ml/Dimensionality Reduction Algorithms/PCA/principal component analysis.ipynb
Siddhant-K-code/AlgoBook
a37f1fbcb11ae27801bc47c01357ed90035a4e82
[ "MIT" ]
191
2020-09-28T10:00:20.000Z
2022-03-06T14:36:55.000Z
ml/Dimensionality Reduction Algorithms/PCA/principal component analysis.ipynb
Siddhant-K-code/AlgoBook
a37f1fbcb11ae27801bc47c01357ed90035a4e82
[ "MIT" ]
210
2020-09-28T10:06:36.000Z
2022-03-05T03:44:24.000Z
ml/Dimensionality Reduction Algorithms/PCA/principal component analysis.ipynb
Siddhant-K-code/AlgoBook
a37f1fbcb11ae27801bc47c01357ed90035a4e82
[ "MIT" ]
320
2020-09-28T09:56:14.000Z
2022-02-12T16:45:57.000Z
165.566751
15,824
0.89434
[ [ [ "import matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd", "_____no_output_____" ] ], [ [ "Manually Principal Component Analysis", "_____no_output_____" ] ], [ [ "#Reading wine data\ndf_wine = pd.read_csv('https://archive.ics.uci.edu/ml/'\n 'machine-learning-databases/wine/wine.data',\n header=None)\n# in the data first column is class label and rest\n# 13 columns are different features\nX,y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values", "_____no_output_____" ], [ "#Splitting Data into training set and test set\n#using scikit-learn\nfrom sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = \\\n train_test_split(X, y, test_size=0.3,\n stratify=y, random_state=0) ", "_____no_output_____" ], [ "#Standardarising all the columns\nfrom sklearn.preprocessing import StandardScaler\nsc = StandardScaler()\nX_train_std = sc.fit_transform(X_train)\nX_test_std = sc.transform(X_test)", "_____no_output_____" ], [ "# covariance matrix using numpy \ncov_mat = np.cov(X_train_std.T)\n# eigen pair\neigen_vals, eigen_vecs = np.linalg.eig(cov_mat)\nprint('\\nEigenvalues \\n%s' % eigen_vecs[:3])\n# only three rows are printed", "\nEigenvalues \n[[-1.37242175e-01 5.03034778e-01 -1.37748734e-01 -3.29610003e-03\n -2.90625226e-01 2.99096847e-01 7.90529293e-02 -3.68176414e-01\n -3.98377017e-01 -9.44869777e-02 3.74638877e-01 -1.27834515e-01\n 2.62834263e-01]\n [ 2.47243265e-01 1.64871190e-01 9.61503863e-02 5.62646692e-01\n 8.95378697e-02 6.27036396e-01 -2.74002014e-01 -1.25775752e-02\n 1.10458230e-01 2.63652406e-02 -1.37405597e-01 8.06401578e-02\n -2.66769211e-01]\n [-2.54515927e-02 2.44564761e-01 6.77775667e-01 -1.08977111e-01\n -1.60834991e-01 3.89128239e-04 1.32328045e-01 1.77578177e-01\n 3.82496856e-01 1.42747511e-01 4.61583035e-01 1.67924873e-02\n -1.15542548e-01]]\n" ], [ "# representing relative importance of features\ntot = eigen_vals.sum()\nvar_exp = [(i/tot) for i in sorted(eigen_vals, reverse=True)]\ncum_var_exp = np.cumsum(var_exp)\nimport matplotlib.pyplot as plt\nplt.bar(range(1,14), var_exp, alpha=0.5, align='center',\n label='Individual explained variance')\nplt.step(range(1,14), cum_var_exp, where='mid',\n label='CUmmulative explained variance')\nplt.ylabel('Explained variance ratio')\nplt.xlabel('Principal component index')\nplt.legend(loc='best')\nplt.tight_layout()\nplt.show()\n# plots explained variance ration of the features\n# Explained variance is variance of one feature / sum of all the variances", "_____no_output_____" ], [ "# sorting the eigenpairs by decreasing order of the eigenvalues:\n# list of (eigenvalue, eigenvector) tuples\neigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])\n for i in range(len(eigen_vals))]\n# Sort the (eigenvalue, eigenvector) tuples from high to low\neigen_pairs.sort(key=lambda k:k[0], reverse=True)", "_____no_output_____" ], [ "# We take first two features which account for about 60% of variance\nw = np.hstack((eigen_pairs[0][1][:, np.newaxis],\n eigen_pairs[1][1][:, np.newaxis]))\n# w is projection matrix\nprint('Matrix W:\\n', w)", "Matrix W:\n [[-0.13724218 0.50303478]\n [ 0.24724326 0.16487119]\n [-0.02545159 0.24456476]\n [ 0.20694508 -0.11352904]\n [-0.15436582 0.28974518]\n [-0.39376952 0.05080104]\n [-0.41735106 -0.02287338]\n [ 0.30572896 0.09048885]\n [-0.30668347 0.00835233]\n [ 0.07554066 0.54977581]\n [-0.32613263 -0.20716433]\n [-0.36861022 -0.24902536]\n [-0.29669651 0.38022942]]\n" ], [ "# converting 13 feature data to 2 feature data\nX_train_pca = X_train_std.dot(w)", 
"_____no_output_____" ], [ "# Plotting the features on\ncolors = ['r', 'b', 'g']\nmarkers = ['s', 'x', 'o']\nfor l,c,m in zip(np.unique(y_train), colors, markers):\n plt.scatter(X_train_pca[y_train==l, 0],\n X_train_pca[y_train==l, 1],\n c = c, label=l, marker = m)\nplt.xlabel('PC 1')\nplt.ylabel('PC 2')\nplt.legend(loc='best')\nplt.tight_layout()\nplt.show()", "_____no_output_____" ] ], [ [ "Using Scikit Learn", "_____no_output_____" ] ], [ [ "# Class to plot decision region\nfrom matplotlib.colors import ListedColormap\ndef plot_decision_regions(X, y, classifier, resolution=0.02):\n markers = ('s', 'x', 'o', '^', 'v')\n colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')\n cmap = ListedColormap(colors[:len(np.unique(y))])\n \n x1_min, x1_max = X[:, 0].min()-1, X[:, 0].max()+1\n x2_min, x2_max = X[:, 1].min()-1, X[:, 1].max()+1\n \n xx1,xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),\n np.arange(x2_min, x2_max, resolution))\n Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)\n Z = Z.reshape(xx1.shape)\n plt.contourf(xx1,xx2,Z, alpha=0.4, cmap=cmap)\n plt.xlim(xx1.min(), xx1.max())\n plt.ylim(xx2.min(), xx2.max())\n \n for idx, cl in enumerate(np.unique(y)):\n plt.scatter(x = X[y==cl, 0], \n y = X[y==cl, 1],\n alpha = 0.6,\n color = cmap(idx),\n edgecolor='black',\n marker=markers[idx],\n label=cl)", "_____no_output_____" ], [ "# Plotting decision region of training set after applying PCA\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.decomposition import PCA\npca = PCA(n_components=2)\nlr = LogisticRegression(multi_class='ovr', \n random_state=1,\n solver = 'lbfgs')\nX_train_pca = pca.fit_transform(X_train_std)\nX_test_pca = pca.transform(X_test_std)\n\nlr.fit(X_train_pca, y_train)\nplot_decision_regions(X_train_pca, y_train, classifier=lr)\nplt.xlabel('PC 1')\nplt.ylabel('PC 2')\n\nplt.legend(loc='lower left')\nplt.tight_layout()\nplt.show()", "_____no_output_____" ], [ "# plotting decision regions of test data set after applying PCA\nplot_decision_regions(X_test_pca, y_test, classifier=lr)\nplt.xlabel('PC 1')\nplt.ylabel('PC 2')\n\nplt.legend(loc='lower left')\nplt.tight_layout()\nplt.show()", "_____no_output_____" ], [ "# finding explained variance ratio using scikit learn\npca1 = PCA(n_components=None)\nX_train_pca1 = pca1.fit_transform(X_train_std)\npca1.explained_variance_ratio_", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
4a6141b45f96473c72d3477157806de270d86be5
133,338
ipynb
Jupyter Notebook
Classification_inception_v4/Implementation_inception_v4 mark1.ipynb
oscarscaro/TeamEve
e43d1929b06d13583ab1eefc7cf1548041792385
[ "Apache-2.0" ]
null
null
null
Classification_inception_v4/Implementation_inception_v4 mark1.ipynb
oscarscaro/TeamEve
e43d1929b06d13583ab1eefc7cf1548041792385
[ "Apache-2.0" ]
null
null
null
Classification_inception_v4/Implementation_inception_v4 mark1.ipynb
oscarscaro/TeamEve
e43d1929b06d13583ab1eefc7cf1548041792385
[ "Apache-2.0" ]
1
2020-02-04T10:20:32.000Z
2020-02-04T10:20:32.000Z
86.191338
172
0.648465
[ [ [ "import os, shutil, csv\n\noriginal_dataset_dir = '/Users/mithyyin/Documents/GitHub/TeamEve/Classfication_small_datasets_inception_v3/waste_original_dataset' #directory name of your biendata\n#original_dataset_dir =r'C:\\Users\\oscarscaro\\Documents\\GitHub\\TeamEve\\Classfication_small_datasets_inception_v3\\images_withoutrect'\nbase_dir = './data_small' #create a directory for the data subset\n\n#os.mkdir(base_dir)\n\n#creating a new folder for each set\ntrain_dir = os.path.join(base_dir, 'train') \n#os.mkdir(train_dir)\nvalidation_dir = os.path.join(base_dir, 'validation') \n#os.mkdir(validation_dir) \ntest_dir = os.path.join(base_dir, 'test') \n#os.mkdir(test_dir)", "_____no_output_____" ], [ "batch_size = 16 #try 32, 128\nepoch = 100 # try 50, 100. \n#data_augmentation = True", "_____no_output_____" ], [ "#Pre-process steps 2, \ntrain_datagen = ImageDataGenerator(rescale=1./255)\n\n#changed here, major changes\ntest_datagen = ImageDataGenerator(rescale=1./255)\n\ntrain_generator = train_datagen.flow_from_directory(\n train_dir, #train_dir is the path where you store all the validaiton folder, chnage this\n target_size = (img_rows,img_cols), #try 1920,1080\n batch_size = batch_size,\n class_mode = 'categorical')\n\n#tr_crops = crop_generator(train_generator)\n\nvalidation_generator = test_datagen.flow_from_directory( #debug here\n validation_dir, #train_dir is the path where you store all the validation folder, change this\n target_size = (img_cols,img_cols),\n batch_size = batch_size,\n class_mode = 'categorical')\n\n#val_crops = crop_generator(val_gen)", "_____no_output_____" ], [ "import numpy as np\n\n# Sys\nimport warnings\n# Keras Core\nfrom keras.layers.convolutional import MaxPooling2D, Convolution2D, AveragePooling2D\nfrom keras.layers import Input, Dropout, Dense, Flatten, Activation\nfrom keras.layers.normalization import BatchNormalization\nfrom keras.layers.merge import concatenate\nfrom keras import regularizers\nfrom keras import initializers\nfrom keras.models import Model\n# Backend\nfrom keras import backend as K\n# Utils\nfrom keras.utils.layer_utils import convert_all_kernels_in_model\nfrom keras.utils.data_utils import get_file\n\n\n#########################################################################################\n# Implements the Inception Network v4 (http://arxiv.org/pdf/1602.07261v1.pdf) in Keras. #\n#########################################################################################\n\nWEIGHTS_PATH = 'https://github.com/kentsommer/keras-inceptionV4/releases/download/2.1/inception-v4_weights_tf_dim_ordering_tf_kernels.h5'\nWEIGHTS_PATH_NO_TOP = 'https://github.com/kentsommer/keras-inceptionV4/releases/download/2.1/inception-v4_weights_tf_dim_ordering_tf_kernels_notop.h5'\n\n\ndef preprocess_input(x):\n x = np.divide(x, 255.0)\n x = np.subtract(x, 0.5)\n x = np.multiply(x, 2.0)\n return x\n\n\ndef conv2d_bn(x, nb_filter, num_row, num_col,\n padding='same', strides=(1, 1), use_bias=False):\n \"\"\"\n Utility function to apply conv + BN. 
\n (Slightly modified from https://github.com/fchollet/keras/blob/master/keras/applications/inception_v3.py)\n \"\"\"\n if K.image_data_format() == 'channels_first':\n channel_axis = 1\n else:\n channel_axis = -1\n x = Convolution2D(nb_filter, (num_row, num_col),\n strides=strides,\n padding=padding,\n use_bias=use_bias,\n kernel_regularizer=regularizers.l2(0.00004),\n kernel_initializer=initializers.VarianceScaling(scale=2.0, mode='fan_in', distribution='normal', seed=None))(x)\n x = BatchNormalization(axis=channel_axis, momentum=0.9997, scale=False)(x)\n x = Activation('relu')(x)\n return x\n\n\ndef block_inception_a(input):\n if K.image_data_format() == 'channels_first':\n channel_axis = 1\n else:\n channel_axis = -1\n\n branch_0 = conv2d_bn(input, 96, 1, 1)\n\n branch_1 = conv2d_bn(input, 64, 1, 1)\n branch_1 = conv2d_bn(branch_1, 96, 3, 3)\n\n branch_2 = conv2d_bn(input, 64, 1, 1)\n branch_2 = conv2d_bn(branch_2, 96, 3, 3)\n branch_2 = conv2d_bn(branch_2, 96, 3, 3)\n\n branch_3 = AveragePooling2D((3,3), strides=(1,1), padding='same')(input)\n branch_3 = conv2d_bn(branch_3, 96, 1, 1)\n\n x = concatenate([branch_0, branch_1, branch_2, branch_3], axis=channel_axis)\n return x\n\n\ndef block_reduction_a(input):\n if K.image_data_format() == 'channels_first':\n channel_axis = 1\n else:\n channel_axis = -1\n\n branch_0 = conv2d_bn(input, 384, 3, 3, strides=(2,2), padding='valid')\n\n branch_1 = conv2d_bn(input, 192, 1, 1)\n branch_1 = conv2d_bn(branch_1, 224, 3, 3)\n branch_1 = conv2d_bn(branch_1, 256, 3, 3, strides=(2,2), padding='valid')\n\n branch_2 = MaxPooling2D((3,3), strides=(2,2), padding='valid')(input)\n\n x = concatenate([branch_0, branch_1, branch_2], axis=channel_axis)\n return x\n\n\ndef block_inception_b(input):\n if K.image_data_format() == 'channels_first':\n channel_axis = 1\n else:\n channel_axis = -1\n\n branch_0 = conv2d_bn(input, 384, 1, 1)\n\n branch_1 = conv2d_bn(input, 192, 1, 1)\n branch_1 = conv2d_bn(branch_1, 224, 1, 7)\n branch_1 = conv2d_bn(branch_1, 256, 7, 1)\n\n branch_2 = conv2d_bn(input, 192, 1, 1)\n branch_2 = conv2d_bn(branch_2, 192, 7, 1)\n branch_2 = conv2d_bn(branch_2, 224, 1, 7)\n branch_2 = conv2d_bn(branch_2, 224, 7, 1)\n branch_2 = conv2d_bn(branch_2, 256, 1, 7)\n\n branch_3 = AveragePooling2D((3,3), strides=(1,1), padding='same')(input)\n branch_3 = conv2d_bn(branch_3, 128, 1, 1)\n\n x = concatenate([branch_0, branch_1, branch_2, branch_3], axis=channel_axis)\n return x\n\n\ndef block_reduction_b(input):\n if K.image_data_format() == 'channels_first':\n channel_axis = 1\n else:\n channel_axis = -1\n\n branch_0 = conv2d_bn(input, 192, 1, 1)\n branch_0 = conv2d_bn(branch_0, 192, 3, 3, strides=(2, 2), padding='valid')\n\n branch_1 = conv2d_bn(input, 256, 1, 1)\n branch_1 = conv2d_bn(branch_1, 256, 1, 7)\n branch_1 = conv2d_bn(branch_1, 320, 7, 1)\n branch_1 = conv2d_bn(branch_1, 320, 3, 3, strides=(2,2), padding='valid')\n\n branch_2 = MaxPooling2D((3, 3), strides=(2, 2), padding='valid')(input)\n\n x = concatenate([branch_0, branch_1, branch_2], axis=channel_axis)\n return x\n\n\ndef block_inception_c(input):\n if K.image_data_format() == 'channels_first':\n channel_axis = 1\n else:\n channel_axis = -1\n\n branch_0 = conv2d_bn(input, 256, 1, 1)\n\n branch_1 = conv2d_bn(input, 384, 1, 1)\n branch_10 = conv2d_bn(branch_1, 256, 1, 3)\n branch_11 = conv2d_bn(branch_1, 256, 3, 1)\n branch_1 = concatenate([branch_10, branch_11], axis=channel_axis)\n\n\n branch_2 = conv2d_bn(input, 384, 1, 1)\n branch_2 = conv2d_bn(branch_2, 448, 3, 1)\n branch_2 = 
conv2d_bn(branch_2, 512, 1, 3)\n branch_20 = conv2d_bn(branch_2, 256, 1, 3)\n branch_21 = conv2d_bn(branch_2, 256, 3, 1)\n branch_2 = concatenate([branch_20, branch_21], axis=channel_axis)\n\n branch_3 = AveragePooling2D((3, 3), strides=(1, 1), padding='same')(input)\n branch_3 = conv2d_bn(branch_3, 256, 1, 1)\n\n x = concatenate([branch_0, branch_1, branch_2, branch_3], axis=channel_axis)\n return x\n\n\ndef inception_v4_base(input):\n if K.image_data_format() == 'channels_first':\n channel_axis = 1\n else:\n channel_axis = -1\n\n # Input Shape is 299 x 299 x 3 (th) or 3 x 299 x 299 (th)\n net = conv2d_bn(input, 32, 3, 3, strides=(2,2), padding='valid')\n net = conv2d_bn(net, 32, 3, 3, padding='valid')\n net = conv2d_bn(net, 64, 3, 3)\n\n branch_0 = MaxPooling2D((3,3), strides=(2,2), padding='valid')(net)\n\n branch_1 = conv2d_bn(net, 96, 3, 3, strides=(2,2), padding='valid')\n\n net = concatenate([branch_0, branch_1], axis=channel_axis)\n\n branch_0 = conv2d_bn(net, 64, 1, 1)\n branch_0 = conv2d_bn(branch_0, 96, 3, 3, padding='valid')\n\n branch_1 = conv2d_bn(net, 64, 1, 1)\n branch_1 = conv2d_bn(branch_1, 64, 1, 7)\n branch_1 = conv2d_bn(branch_1, 64, 7, 1)\n branch_1 = conv2d_bn(branch_1, 96, 3, 3, padding='valid')\n\n net = concatenate([branch_0, branch_1], axis=channel_axis)\n\n branch_0 = conv2d_bn(net, 192, 3, 3, strides=(2,2), padding='valid')\n branch_1 = MaxPooling2D((3,3), strides=(2,2), padding='valid')(net)\n\n net = concatenate([branch_0, branch_1], axis=channel_axis)\n\n # 35 x 35 x 384\n # 4 x Inception-A blocks\n for idx in range(4):\n \tnet = block_inception_a(net)\n\n # 35 x 35 x 384\n # Reduction-A block\n net = block_reduction_a(net)\n\n # 17 x 17 x 1024\n # 7 x Inception-B blocks\n for idx in range(7):\n \tnet = block_inception_b(net)\n\n # 17 x 17 x 1024\n # Reduction-B block\n net = block_reduction_b(net)\n\n # 8 x 8 x 1536\n # 3 x Inception-C blocks\n for idx in range(3):\n \tnet = block_inception_c(net)\n\n return net\n\n\ndef inception_v4(num_classes, dropout_keep_prob, weights, include_top):\n '''\n Creates the inception v4 network\n Args:\n \tnum_classes: number of classes\n \tdropout_keep_prob: float, the fraction to keep before final layer.\n \n Returns: \n \tlogits: the logits outputs of the model.\n '''\n\n # Input Shape is 299 x 299 x 3 (tf) or 3 x 299 x 299 (th)\n if K.image_data_format() == 'channels_first':\n inputs = Input((3, 299, 299))\n else:\n inputs = Input((299, 299, 3))\n\n # Make inception base\n x = inception_v4_base(inputs)\n\n\n # Final pooling and prediction\n if include_top:\n # 1 x 1 x 1536\n x = AveragePooling2D((8,8), padding='valid')(x)\n x = Dropout(dropout_keep_prob)(x)\n x = Flatten()(x)\n # 1536\n x = Dense(units=num_classes, activation='softmax')(x)\n\n model = Model(inputs, x, name='inception_v4')\n\n # load weights\n if weights == 'imagenet':\n if K.image_data_format() == 'channels_first':\n if K.backend() == 'tensorflow':\n warnings.warn('You are using the TensorFlow backend, yet you '\n 'are using the Theano '\n 'image data format convention '\n '(`image_data_format=\"channels_first\"`). 
'\n 'For best performance, set '\n '`image_data_format=\"channels_last\"` in '\n 'your Keras config '\n 'at ~/.keras/keras.json.')\n if include_top:\n weights_path = get_file(\n 'inception-v4_weights_tf_dim_ordering_tf_kernels.h5',\n WEIGHTS_PATH,\n cache_subdir='models',\n md5_hash='9fe79d77f793fe874470d84ca6ba4a3b')\n else:\n weights_path = get_file(\n 'inception-v4_weights_tf_dim_ordering_tf_kernels_notop.h5',\n WEIGHTS_PATH_NO_TOP,\n cache_subdir='models',\n md5_hash='9296b46b5971573064d12e4669110969')\n model.load_weights(weights_path, by_name=True)\n return model\n\n\ndef create_model(num_classes=204, dropout_prob=0.2, weights=None, include_top=True):\n return inception_v4(num_classes, dropout_prob, weights, include_top)", "Using TensorFlow backend.\n" ], [ "model = create_model()", "_____no_output_____" ], [ "model.summary()", "__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_1 (InputLayer) (None, 299, 299, 3) 0 \n__________________________________________________________________________________________________\nconv2d_1 (Conv2D) (None, 149, 149, 32) 864 input_1[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_1 (BatchNor (None, 149, 149, 32) 96 conv2d_1[0][0] \n__________________________________________________________________________________________________\nactivation_1 (Activation) (None, 149, 149, 32) 0 batch_normalization_1[0][0] \n__________________________________________________________________________________________________\nconv2d_2 (Conv2D) (None, 147, 147, 32) 9216 activation_1[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_2 (BatchNor (None, 147, 147, 32) 96 conv2d_2[0][0] \n__________________________________________________________________________________________________\nactivation_2 (Activation) (None, 147, 147, 32) 0 batch_normalization_2[0][0] \n__________________________________________________________________________________________________\nconv2d_3 (Conv2D) (None, 147, 147, 64) 18432 activation_2[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_3 (BatchNor (None, 147, 147, 64) 192 conv2d_3[0][0] \n__________________________________________________________________________________________________\nactivation_3 (Activation) (None, 147, 147, 64) 0 batch_normalization_3[0][0] \n__________________________________________________________________________________________________\nconv2d_4 (Conv2D) (None, 73, 73, 96) 55296 activation_3[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_4 (BatchNor (None, 73, 73, 96) 288 conv2d_4[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_1 (MaxPooling2D) (None, 73, 73, 64) 0 activation_3[0][0] \n__________________________________________________________________________________________________\nactivation_4 (Activation) (None, 73, 73, 96) 0 batch_normalization_4[0][0] \n__________________________________________________________________________________________________\nconcatenate_1 (Concatenate) (None, 73, 73, 160) 0 max_pooling2d_1[0][0] \n 
activation_4[0][0] \n__________________________________________________________________________________________________\nconv2d_7 (Conv2D) (None, 73, 73, 64) 10240 concatenate_1[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_7 (BatchNor (None, 73, 73, 64) 192 conv2d_7[0][0] \n__________________________________________________________________________________________________\nactivation_7 (Activation) (None, 73, 73, 64) 0 batch_normalization_7[0][0] \n__________________________________________________________________________________________________\nconv2d_8 (Conv2D) (None, 73, 73, 64) 28672 activation_7[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_8 (BatchNor (None, 73, 73, 64) 192 conv2d_8[0][0] \n__________________________________________________________________________________________________\nactivation_8 (Activation) (None, 73, 73, 64) 0 batch_normalization_8[0][0] \n__________________________________________________________________________________________________\nconv2d_5 (Conv2D) (None, 73, 73, 64) 10240 concatenate_1[0][0] \n__________________________________________________________________________________________________\nconv2d_9 (Conv2D) (None, 73, 73, 64) 28672 activation_8[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_5 (BatchNor (None, 73, 73, 64) 192 conv2d_5[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_9 (BatchNor (None, 73, 73, 64) 192 conv2d_9[0][0] \n__________________________________________________________________________________________________\nactivation_5 (Activation) (None, 73, 73, 64) 0 batch_normalization_5[0][0] \n__________________________________________________________________________________________________\nactivation_9 (Activation) (None, 73, 73, 64) 0 batch_normalization_9[0][0] \n__________________________________________________________________________________________________\nconv2d_6 (Conv2D) (None, 71, 71, 96) 55296 activation_5[0][0] \n__________________________________________________________________________________________________\nconv2d_10 (Conv2D) (None, 71, 71, 96) 55296 activation_9[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_6 (BatchNor (None, 71, 71, 96) 288 conv2d_6[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_10 (BatchNo (None, 71, 71, 96) 288 conv2d_10[0][0] \n__________________________________________________________________________________________________\nactivation_6 (Activation) (None, 71, 71, 96) 0 batch_normalization_6[0][0] \n__________________________________________________________________________________________________\nactivation_10 (Activation) (None, 71, 71, 96) 0 batch_normalization_10[0][0] \n__________________________________________________________________________________________________\nconcatenate_2 (Concatenate) (None, 71, 71, 192) 0 activation_6[0][0] \n activation_10[0][0] \n__________________________________________________________________________________________________\nconv2d_11 (Conv2D) (None, 35, 35, 192) 331776 concatenate_2[0][0] 
\n__________________________________________________________________________________________________\nbatch_normalization_11 (BatchNo (None, 35, 35, 192) 576 conv2d_11[0][0] \n__________________________________________________________________________________________________\nactivation_11 (Activation) (None, 35, 35, 192) 0 batch_normalization_11[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_2 (MaxPooling2D) (None, 35, 35, 192) 0 concatenate_2[0][0] \n__________________________________________________________________________________________________\nconcatenate_3 (Concatenate) (None, 35, 35, 384) 0 activation_11[0][0] \n max_pooling2d_2[0][0] \n__________________________________________________________________________________________________\nconv2d_15 (Conv2D) (None, 35, 35, 64) 24576 concatenate_3[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_15 (BatchNo (None, 35, 35, 64) 192 conv2d_15[0][0] \n__________________________________________________________________________________________________\nactivation_15 (Activation) (None, 35, 35, 64) 0 batch_normalization_15[0][0] \n__________________________________________________________________________________________________\nconv2d_13 (Conv2D) (None, 35, 35, 64) 24576 concatenate_3[0][0] \n__________________________________________________________________________________________________\nconv2d_16 (Conv2D) (None, 35, 35, 96) 55296 activation_15[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_13 (BatchNo (None, 35, 35, 64) 192 conv2d_13[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_16 (BatchNo (None, 35, 35, 96) 288 conv2d_16[0][0] \n__________________________________________________________________________________________________\nactivation_13 (Activation) (None, 35, 35, 64) 0 batch_normalization_13[0][0] \n__________________________________________________________________________________________________\nactivation_16 (Activation) (None, 35, 35, 96) 0 batch_normalization_16[0][0] \n__________________________________________________________________________________________________\naverage_pooling2d_1 (AveragePoo (None, 35, 35, 384) 0 concatenate_3[0][0] \n__________________________________________________________________________________________________\nconv2d_12 (Conv2D) (None, 35, 35, 96) 36864 concatenate_3[0][0] \n__________________________________________________________________________________________________\nconv2d_14 (Conv2D) (None, 35, 35, 96) 55296 activation_13[0][0] \n__________________________________________________________________________________________________\nconv2d_17 (Conv2D) (None, 35, 35, 96) 82944 activation_16[0][0] \n__________________________________________________________________________________________________\nconv2d_18 (Conv2D) (None, 35, 35, 96) 36864 average_pooling2d_1[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_12 (BatchNo (None, 35, 35, 96) 288 conv2d_12[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_14 (BatchNo (None, 35, 35, 96) 288 conv2d_14[0][0] 
\n__________________________________________________________________________________________________\n[model.summary() layer listing, continued — the rows follow the Inception-v4 pattern: Inception-A blocks at 35×35×384 (parallel 1×1, 3×3, and double-3×3 branches plus an average-pooling→1×1 branch; each Conv2D is followed by BatchNormalization and Activation, and the four branch outputs merge in a Concatenate layer), a Reduction-A block down to 17×17×1024, seven Inception-B blocks at 17×17×1024 built from factorized 1×7/7×1 convolutions, a Reduction-B block down to 8×8×1536, and Inception-C blocks at 8×8×1536 with split 1×3/3×1 branches; layers conv2d_17 through conv2d_148 with their BatchNormalization/Activation pairs and per-layer parameter counts appear here in the original output]\n
0 concatenate_22[0][0] \n__________________________________________________________________________________________________\nconv2d_140 (Conv2D) (None, 8, 8, 256) 393216 concatenate_22[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_142 (BatchN (None, 8, 8, 256) 768 conv2d_142[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_143 (BatchN (None, 8, 8, 256) 768 conv2d_143[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_147 (BatchN (None, 8, 8, 256) 768 conv2d_147[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_148 (BatchN (None, 8, 8, 256) 768 conv2d_148[0][0] \n__________________________________________________________________________________________________\nconv2d_149 (Conv2D) (None, 8, 8, 256) 393216 average_pooling2d_14[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_140 (BatchN (None, 8, 8, 256) 768 conv2d_140[0][0] \n__________________________________________________________________________________________________\nactivation_142 (Activation) (None, 8, 8, 256) 0 batch_normalization_142[0][0] \n__________________________________________________________________________________________________\nactivation_143 (Activation) (None, 8, 8, 256) 0 batch_normalization_143[0][0] \n__________________________________________________________________________________________________\nactivation_147 (Activation) (None, 8, 8, 256) 0 batch_normalization_147[0][0] \n__________________________________________________________________________________________________\nactivation_148 (Activation) (None, 8, 8, 256) 0 batch_normalization_148[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_149 (BatchN (None, 8, 8, 256) 768 conv2d_149[0][0] \n__________________________________________________________________________________________________\nactivation_140 (Activation) (None, 8, 8, 256) 0 batch_normalization_140[0][0] \n__________________________________________________________________________________________________\nconcatenate_23 (Concatenate) (None, 8, 8, 512) 0 activation_142[0][0] \n activation_143[0][0] \n__________________________________________________________________________________________________\nconcatenate_24 (Concatenate) (None, 8, 8, 512) 0 activation_147[0][0] \n activation_148[0][0] \n__________________________________________________________________________________________________\nactivation_149 (Activation) (None, 8, 8, 256) 0 batch_normalization_149[0][0] \n__________________________________________________________________________________________________\nconcatenate_25 (Concatenate) (None, 8, 8, 1536) 0 activation_140[0][0] \n concatenate_23[0][0] \n concatenate_24[0][0] \n activation_149[0][0] \n__________________________________________________________________________________________________\naverage_pooling2d_15 (AveragePo (None, 1, 1, 1536) 0 concatenate_25[0][0] \n__________________________________________________________________________________________________\ndropout_1 (Dropout) (None, 1, 1, 1536) 0 average_pooling2d_15[0][0] 
\n__________________________________________________________________________________________________\nflatten_1 (Flatten)             (None, 1536)         0           dropout_1[0][0]                  \n__________________________________________________________________________________________________\ndense_1 (Dense)                 (None, 204)          313548      flatten_1[0][0]                  \n==================================================================================================\nTotal params: 41,487,948\nTrainable params: 41,424,780\nNon-trainable params: 63,168\n__________________________________________________________________________________________________\n" ], [ "# compile the model\nmodel.compile('Adam', \n              loss='categorical_crossentropy',\n              metrics=['accuracy'])", "_____no_output_____" ], [ "# model checkpoint: save the weights of the best epoch, judged by validation accuracy\nfilepath = \"waste_sort_weights_best_updated.h5\"\n\ncheckpoint = ModelCheckpoint(filepath, monitor='val_acc', \n                             verbose=1, save_best_only=True, mode='max')\n\n# early stopping on validation loss; restore_best_weights requires a recent Keras version\nearly = EarlyStopping(monitor=\"val_loss\", \n                      mode=\"min\", \n                      patience=150, restore_best_weights=True)\ncallbacks_list = [checkpoint, early]", "_____no_output_____" ], [ "# train the model\nhistory = net.fit_generator(\n    train_generator,\n    steps_per_epoch=100,\n    epochs=epoch,\n    verbose=1,\n    validation_data=validation_generator,\n    validation_steps=50,\n    callbacks=callbacks_list)", "_____no_output_____" ], [ "# Training plots\nepochs = [i for i in range(1, len(history.history['loss'])+1)]\n\nplt.plot(epochs, history.history['loss'], color='blue', label=\"training_loss\")\nplt.plot(epochs, history.history['val_loss'], color='red', label=\"validation_loss\")\nplt.legend(loc='best')\nplt.title('loss')\nplt.xlabel('epoch')\nplt.savefig(TRAINING_PLOT_FILE, bbox_inches='tight')\nplt.show()\n\nplt.plot(epochs, history.history['acc'], color='blue', label=\"training_accuracy\")\nplt.plot(epochs, history.history['val_acc'], color='red', label=\"validation_accuracy\")\nplt.legend(loc='best')\nplt.title('accuracy')\nplt.xlabel('epoch')\nplt.savefig(VALIDATION_PLOT_FILE, bbox_inches='tight')\nplt.show()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a6149d5dac26cecb9fa7e9f38f2378fe023469b
10,507
ipynb
Jupyter Notebook
example_loan_prediction.ipynb
rongchuhe2/workshop_data_analysis_python
c2e44e1e77530e4b2cfff8f333c0496635d78355
[ "MIT" ]
null
null
null
example_loan_prediction.ipynb
rongchuhe2/workshop_data_analysis_python
c2e44e1e77530e4b2cfff8f333c0496635d78355
[ "MIT" ]
null
null
null
example_loan_prediction.ipynb
rongchuhe2/workshop_data_analysis_python
c2e44e1e77530e4b2cfff8f333c0496635d78355
[ "MIT" ]
null
null
null
22.260593
449
0.539164
[ [ [ "# Problem Statement\n\n## About Company\n\nCompany deals in all home loans. They have presence across all urban, semi urban and rural areas. Customer first apply for home loan after that company validates the customer eligibility for loan.\n\n## Problem\nCompany wants to automate the loan eligibility process (real time) based on customer detail provided while filling online application form. These details are Gender, Marital Status, Education, Number of Dependents, Income, Loan Amount, Credit History and others. To automate this process, they have given a problem to identify the customers segments, those are eligible for loan amount so that they can specifically target these customers. \n\n\n## Data\n\n| Variable | Description|\n|---|------|\n|Loan_ID|Unique Loan ID|\n|Gender| Male/ Female|\n|Married| Applicant married (Y/N)\n|Dependents| Number of dependents\n|Education| Applicant Education (Graduate/ Under Graduate)\n|Self_Employed| Self employed (Y/N)\n|ApplicantIncome| Applicant income\n|CoapplicantIncome| Coapplicant income\n|LoanAmount| Loan amount in thousands\n|Loan_Amount_Term | Term of loan in months\n|Credit_History |credit history meets guidelines\n|Property_Area | Urban/ Semi Urban/ Rural\n|Loan_Status |Loan approved (Y/N)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "_____no_output_____" ] ], [ [ "%matplotlib inline\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "df = pd.read_csv(\"data/loan_prediction_train.csv\")", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ] ], [ [ "## Clear Data", "_____no_output_____" ] ], [ [ "df.apply(lambda x: sum(x.isnull()), axis=0)", "_____no_output_____" ], [ "df = df.dropna()\ndf.apply(lambda x: sum(x.isnull()), axis=0)", "_____no_output_____" ], [ "df.describe()", "_____no_output_____" ], [ "df['Property_Area'].value_counts()", "_____no_output_____" ] ], [ [ "## Distribution analysis", "_____no_output_____" ] ], [ [ "df['ApplicantIncome'].hist(bins=50)", "_____no_output_____" ], [ "df.boxplot(column='ApplicantIncome')", "_____no_output_____" ], [ "df.boxplot(column='ApplicantIncome', by='Education')", "_____no_output_____" ], [ "df['LoanAmount'].hist(bins=50)", "_____no_output_____" ], [ "df.boxplot(column='LoanAmount')", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ] ], [ [ "## Building a Preditive Model", "_____no_output_____" ] ], [ [ "df.dtypes", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "from sklearn.preprocessing import LabelEncoder\n\nvar_mod = ['Gender', 'Married', 'Dependents', 'Education', 'Self_Employed', 'Property_Area', 'Loan_Status']\nle = LabelEncoder()\nfor var in var_mod:\n df[var] = le.fit_transform(df[var])", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "df.corr()", "_____no_output_____" ], [ "plt.scatter(df['Credit_History'], df['Loan_Status'], alpha=0.1)", "_____no_output_____" ], [ "noise1 = np.random.normal(0, 0.1, len(df))\nnoise2 = np.random.normal(0, 0.1, len(df))\nplt.scatter(df['Credit_History']+noise1, df['Loan_Status'] + noise2, alpha=0.1)", "_____no_output_____" ], [ "from sklearn.linear_model import LogisticRegression\n", "_____no_output_____" ], [ "logit_model = LogisticRegression()\nlogit_model", "_____no_output_____" ], [ "predictors = df[['Credit_History']]", "_____no_output_____" ], [ "logit_model.fit(df[['Credit_History']], df['Loan_Status'])", "_____no_output_____" ], [ "from sklearn.cross_validation import KFold\n", "_____no_output_____" ], [ "df.shape", 
"_____no_output_____" ], [ "kf = KFold(len(df), n_folds=5)\n\nerror = []\nfor train, test in kf:\n train_predictors = df[['Credit_History']].iloc[train,:]\n train_target = df['Loan_Status'].iloc[train]\n logit_model.fit(train_predictors, train_target)\n \n error.append(logit_model.score(df[['Credit_History']].iloc[test,:], df['Loan_Status'].iloc[test]))\n\nprint(\"Cross-Validation Score \", np.mean(error))", "_____no_output_____" ], [ "def fit_model(model, data, predictors, outcome, num_fold=5):\n kf =KFold(data.shape[0], n_folds=num_fold)\n error = []\n for train, test in kf:\n train_predictors = data[predictors].iloc[train,:]\n train_target = data[outcome].iloc[train]\n model.fit(train_predictors, train_target)\n error.append(model.score(data[predictors].iloc[test,:], data[outcome].iloc[test]))\n \n print(\"Cross-Validation Score :\", np.mean(error))\n\n model.fit(data[predictors], data[outcome])\n accuracy = model.score(data[predictors], data[outcome])\n print(\"Accuracy: \", accuracy)\n return model\n", "_____no_output_____" ], [ "logit_model = LogisticRegression()\nlogit_model = fit_model(logit_model, df, ['Credit_History'], 'Loan_Status')", "_____no_output_____" ], [ "df.columns", "_____no_output_____" ], [ "predictor_list = ['Gender', 'Married', 'Dependents', 'Education',\n 'Self_Employed', 'ApplicantIncome', 'CoapplicantIncome',\n 'LoanAmount','Loan_Amount_Term', 'Credit_History', 'Property_Area']\nlogit_model = fit_model(logit_model, df, predictor_list, 'Loan_Status')", "_____no_output_____" ], [ "from sklearn.tree import DecisionTreeClassifier\n\ndecision_tree_model = DecisionTreeClassifier()\npredictor_var = ['Credit_History', 'Gender', 'Married', 'Education']\noutcome_var = 'Loan_Status'\ndecision_tree_model = fit_model(decision_tree_model, df, predictor_var, outcome_var)", "_____no_output_____" ], [ "predictor_var = ['Credit_History', 'Loan_Amount_Term', 'LoanAmount']\ndecision_tree_model = fit_model(decision_tree_model, df, predictor_var, outcome_var)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a614a6a07d65d4e7c0a2b90de9fd5a62064f00c
59,516
ipynb
Jupyter Notebook
Data_processing/Data_processing_scripts/schools.ipynb
uwescience/DSSG2017-Equity
b48247017824629be57df35661fda89f8ab07e3e
[ "MIT" ]
3
2017-06-27T21:43:51.000Z
2018-10-11T16:50:50.000Z
Data_processing/Data_processing_scripts/schools.ipynb
uwescience/DSSG2017-Equity
b48247017824629be57df35661fda89f8ab07e3e
[ "MIT" ]
28
2017-06-27T18:29:17.000Z
2017-08-18T21:37:11.000Z
Data_processing/Data_processing_scripts/schools.ipynb
uwescience/DSSG2017-Equity
b48247017824629be57df35661fda89f8ab07e3e
[ "MIT" ]
5
2017-06-27T18:20:17.000Z
2020-03-21T21:04:57.000Z
42.633238
94
0.241313
[ [ [ "import pandas", "_____no_output_____" ], [ "schools = pandas.read_csv('2016_SSD.csv')", "_____no_output_____" ], [ "schools.head(10)", "_____no_output_____" ], [ "hs = schools.loc[schools['Grade Level'] == 'HS']", "_____no_output_____" ], [ "hs.head(72)", "_____no_output_____" ], [ "for index, row in ms.iterrows():\n #print(index)\n count = 0\n if index > 0:\n #print(row)\n if row['Proficient on state ELA test - Grade 6'] != '-':\n three = float(row['Proficient on state ELA test - Grade 6'])\n count = count + 1\n else:\n three = 0\n if row['Proficient on state ELA test - Grade 7'] != '-':\n four = float(row['Proficient on state ELA test - Grade 7'])\n count = count + 1\n else:\n four = 0\n if row['Proficient on state ELA test - Grade 8'] != '-':\n five = float(row['Proficient on state ELA test - Grade 8'])\n count = count + 1\n else:\n five = 0\n if count > 0:\n avg_prof = (three + four + five)/count\n else:\n avg_prof = 0\n print(avg_prof)\n #print(index)", "0.6654933333333334\n0.5120433333333333\n0.6683199999999999\n0.21429\n0.8100266666666668\n0.5991266666666667\n0.7796233333333333\n0.7638433333333333\n0.7679300000000001\n0.6344500000000001\n0.40278\n0.6519666666666667\n0.4333133333333334\n0.7183966666666667\n0.6595399999999999\n0.4361566666666667\n0.5710500000000001\n0.7237666666666667\n0\n0.38583999999999996\n0.6476833333333333\n0.69164\n0.70011\n" ], [ "for index, row in hs.iterrows():\n #print(index)\n if index > 0:\n print(row['School demographics: % Free/Reduced Lunch'])\n #print(index)", "0.42481\n0.1534\n0.13333\n0.67206\n0.6722100000000001\n0.72324\n0.37689\n0.29944\n0.81215\n0.40171\n0.337\n0.34884\n0.8231799999999999\n0.14145\n0.975\n0.9583299999999999\n0.2\n0.33962\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ] ]
4a6161b04428f6d1a0c818f1761422a0847cd670
308,253
ipynb
Jupyter Notebook
experiments/notebooks/val-2.ipynb
luismangubat/kingston-abm
7af71660822b3e973f78d98f4917ead8c105808d
[ "MIT" ]
null
null
null
experiments/notebooks/val-2.ipynb
luismangubat/kingston-abm
7af71660822b3e973f78d98f4917ead8c105808d
[ "MIT" ]
null
null
null
experiments/notebooks/val-2.ipynb
luismangubat/kingston-abm
7af71660822b3e973f78d98f4917ead8c105808d
[ "MIT" ]
2
2022-02-02T04:04:14.000Z
2022-03-21T01:51:54.000Z
163.964362
45,720
0.827765
[ [ [ "# Run a batch of samples on the HPC cluster\n\nThis experiment is part of a series which should help us validate the Kingston, ON model.", "_____no_output_____" ], [ "## Set-up orchistration and compute environments", "_____no_output_____" ], [ "To set-up access to the remote compute server:\n\n1. On the local host generate keys:\n\n```\nssh-keygen -t rsa\n```\n\n1. Copy those keys to the remote host:\n\n```\ncat ~/.ssh/id_rsa.pub | ssh user@hostname 'cat >> .ssh/authorized_keys'\n```", "_____no_output_____" ] ], [ [ "!cat /src/ansible/playbooks/hosts", "_____no_output_____" ] ], [ [ "### Install simulator on compute environments", "_____no_output_____" ] ], [ [ "!ansible-playbook -i /src/ansible/playbooks/hosts /src/ansible/playbooks/covid19sim.yml --extra-vars 'host=mulab' --user=paredes", "_____no_output_____" ] ], [ [ "### Copy Kingston configuration file to compute environments", "_____no_output_____" ] ], [ [ "!ansible -i /src/ansible/playbooks/hosts mulab --user=paredes -m copy -a \"src=params/kingston_0xdfc056a4fdb804e60e964b2cc5aae6ea.yml dest=~/COVI-AgentSim/src/covid19sim/configs/simulation/region/kingston0xdfc056a4fdb804e60e964b2cc5aae6ea.yaml\"", "_____no_output_____" ] ], [ [ "### Generate random seeds", "_____no_output_____" ] ], [ [ "!/opt/conda/bin/conda run -n covisim conda install numpy -y", "_____no_output_____" ], [ "cpus = 16\ncompute = 16\nn_samples = cpus * compute\nn_samples", "_____no_output_____" ], [ "import numpy as np\nfrom numpy.random import default_rng\nimport pandas as pd\n\nrng = default_rng()\n\nseed_list = [rng.integers(low=0, high=1e4) for _ in range(n_samples)]\n\nlen(np.unique(seed_list)), pd.DataFrame(seed_list).hist()", "_____no_output_____" ] ], [ [ "### Build run commands", "_____no_output_____" ] ], [ [ "args_dict = {'region': 'kingston0xdfc056a4fdb804e60e964b2cc5aae6ea',\n 'n_people': 3000,\n 'simulation_days': 60,\n 'init_fraction_sick': 0.002,\n 'N_BEHAVIOR_LEVELS': 2,\n 'intervention': 'no_intervention',\n 'tune': True,\n 'track': 'light',\n 'GLOBAL_MOBILITY_SCALING_FACTOR': 0.85,\n 'APP_UPTAKE': -1,\n 'USE_INFERENCE_SERVER': False,\n 'INTERVENTION_DAY': -1}\nargs_dict", "_____no_output_____" ], [ "args_str = ' '.join([f'{k}={v}' for k, v in args_dict.items()])\nargs_str", "_____no_output_____" ], [ "import random\nimport subprocess\n\nrun_id = hex(random.getrandbits(128))\n\nargs_list = [f'~/.conda/envs/covisim/bin/python ~/COVI-AgentSim/src/covid19sim/run.py seed={s} outdir=~/kingston-abm/experiments/validation/results/data/{run_id} {args_str}\\n' for s in seed_list]\n\nfile_name = f'val-2-{run_id}.cmd'\n\nwith open(file_name, 'w') as arg_file:\n arg_file.writelines(args_list)\n \n#subprocess.run(f'cat {file_name} | parallel -j4, shell=True, capture_output=True)\n\nfile_name", "_____no_output_____" ] ], [ [ "## Run simulations", "_____no_output_____" ] ], [ [ "!cat val-2-0x57e7aa91fbc783d1f3cd1bf719dd5a35.cmd | parallel --sshloginfile nodefile", "_____no_output_____" ] ], [ [ "## Set up the local analysis environment", "_____no_output_____" ] ], [ [ "!ansible-playbook -i /src/ansible/playbooks/hosts /src/ansible/playbooks/covid19sim.yml", "\nPLAY [Set-up covid19sim] *******************************************************\n\nTASK [Gathering Facts] *********************************************************\n\u001b[0;32mok: [localhost]\u001b[0m\n\nTASK [Create a new viritual environment] ***************************************\n\u001b[0;33mchanged: [localhost]\u001b[0m\n\nTASK [Clone ctt module, required to run simulator] 
*****************************\n\u001b[0;33mchanged: [localhost]\u001b[0m\n\nTASK [Use fixed requirements.txt] **********************************************\n\u001b[0;33mchanged: [localhost]\u001b[0m\n\nTASK [Use fixed setup.py] ******************************************************\n\u001b[0;33mchanged: [localhost]\u001b[0m\n\nTASK [Version of ctt module should be commit hash] *****************************\n\u001b[0;33mchanged: [localhost]\u001b[0m\n\nTASK [Install ctt module] ******************************************************\n\u001b[0;33mchanged: [localhost]\u001b[0m\n\nTASK [Checkout a viable commit] ************************************************\n\u001b[0;33mchanged: [localhost]\u001b[0m\n\nTASK [Version of covid19sim module should be commit hash] **********************\n\u001b[0;33mchanged: [localhost]\u001b[0m\n\nTASK [Install simulator] *******************************************************\n\u001b[0;33mchanged: [localhost]\u001b[0m\n\nPLAY RECAP *********************************************************************\n\u001b[0;33mlocalhost\u001b[0m : \u001b[0;32mok=10 \u001b[0m \u001b[0;33mchanged=9 \u001b[0m unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 \n\n" ], [ "!ipython kernel install --name covisim", "_____no_output_____" ], [ "!/opt/conda/bin/conda run -n covisim conda install ipykernel -y", "_____no_output_____" ], [ "!/opt/conda/bin/conda run -n covisim conda install matplotlib -y", "Collecting package metadata (current_repodata.json): ...working... done\r\nSolving environment: ...working... done\r\n\r\n## Package Plan ##\r\n\r\n environment location: /opt/conda/envs/covisim\r\n\r\n added / updated specs:\r\n - matplotlib\r\n\r\n\r\nThe following packages will be downloaded:\r\n\r\n package | build\r\n ---------------------------|-----------------\r\n blas-1.0 | mkl 6 KB\r\n cycler-0.10.0 | py38_0 14 KB\r\n intel-openmp-2021.2.0 | h06a4308_610 1.3 MB\r\n kiwisolver-1.3.1 | py38h2531618_0 80 KB\r\n lcms2-2.12 | h3be6417_0 312 KB\r\n libtiff-4.2.0 | h85742a9_0 502 KB\r\n libwebp-base-1.2.0 | h27cfd23_0 437 KB\r\n libxml2-2.9.12 | h03d6c58_0 1.2 MB\r\n lz4-c-1.9.3 | h2531618_0 186 KB\r\n matplotlib-3.3.4 | py38h06a4308_0 26 KB\r\n matplotlib-base-3.3.4 | py38h62a2d02_0 5.1 MB\r\n mkl-2021.2.0 | h06a4308_296 144.3 MB\r\n mkl-service-2.3.0 | py38h27cfd23_1 57 KB\r\n mkl_fft-1.3.0 | py38h42c9631_2 180 KB\r\n mkl_random-1.2.1 | py38ha9443f7_2 305 KB\r\n numpy-1.20.2 | py38h2d18471_0 23 KB\r\n numpy-base-1.20.2 | py38hfae3a4d_0 4.6 MB\r\n olefile-0.46 | py_0 33 KB\r\n pillow-8.2.0 | py38he98fc37_0 628 KB\r\n zstd-1.4.9 | haebb681_0 480 KB\r\n ------------------------------------------------------------\r\n Total: 159.7 MB\r\n\r\nThe following NEW packages will be INSTALLED:\r\n\r\n blas pkgs/main/linux-64::blas-1.0-mkl\r\n cycler pkgs/main/linux-64::cycler-0.10.0-py38_0\r\n dbus pkgs/main/linux-64::dbus-1.13.18-hb2f20db_0\r\n expat pkgs/main/linux-64::expat-2.4.1-h2531618_2\r\n fontconfig pkgs/main/linux-64::fontconfig-2.13.1-h6c09931_0\r\n freetype pkgs/main/linux-64::freetype-2.10.4-h5ab3b9f_0\r\n glib pkgs/main/linux-64::glib-2.68.2-h36276a3_0\r\n gst-plugins-base pkgs/main/linux-64::gst-plugins-base-1.14.0-h8213a91_2\r\n gstreamer pkgs/main/linux-64::gstreamer-1.14.0-h28cd5cc_2\r\n icu pkgs/main/linux-64::icu-58.2-he6710b0_3\r\n intel-openmp pkgs/main/linux-64::intel-openmp-2021.2.0-h06a4308_610\r\n jpeg pkgs/main/linux-64::jpeg-9b-h024ee3a_2\r\n kiwisolver pkgs/main/linux-64::kiwisolver-1.3.1-py38h2531618_0\r\n lcms2 pkgs/main/linux-64::lcms2-2.12-h3be6417_0\r\n 
libpng pkgs/main/linux-64::libpng-1.6.37-hbc83047_0\r\n libtiff pkgs/main/linux-64::libtiff-4.2.0-h85742a9_0\r\n libuuid pkgs/main/linux-64::libuuid-1.0.3-h1bed415_2\r\n libwebp-base pkgs/main/linux-64::libwebp-base-1.2.0-h27cfd23_0\r\n libxcb pkgs/main/linux-64::libxcb-1.14-h7b6447c_0\r\n libxml2 pkgs/main/linux-64::libxml2-2.9.12-h03d6c58_0\r\n lz4-c pkgs/main/linux-64::lz4-c-1.9.3-h2531618_0\r\n matplotlib pkgs/main/linux-64::matplotlib-3.3.4-py38h06a4308_0\r\n matplotlib-base pkgs/main/linux-64::matplotlib-base-3.3.4-py38h62a2d02_0\r\n mkl pkgs/main/linux-64::mkl-2021.2.0-h06a4308_296\r\n mkl-service pkgs/main/linux-64::mkl-service-2.3.0-py38h27cfd23_1\r\n mkl_fft pkgs/main/linux-64::mkl_fft-1.3.0-py38h42c9631_2\r\n mkl_random pkgs/main/linux-64::mkl_random-1.2.1-py38ha9443f7_2\r\n numpy pkgs/main/linux-64::numpy-1.20.2-py38h2d18471_0\r\n numpy-base pkgs/main/linux-64::numpy-base-1.20.2-py38hfae3a4d_0\r\n olefile pkgs/main/noarch::olefile-0.46-py_0\r\n pcre pkgs/main/linux-64::pcre-8.45-h295c915_0\r\n pillow pkgs/main/linux-64::pillow-8.2.0-py38he98fc37_0\r\n pyparsing pkgs/main/noarch::pyparsing-2.4.7-pyhd3eb1b0_0\r\n pyqt pkgs/main/linux-64::pyqt-5.9.2-py38h05f1152_4\r\n qt pkgs/main/linux-64::qt-5.9.7-h5867ecd_1\r\n sip pkgs/main/linux-64::sip-4.19.13-py38he6710b0_0\r\n zstd pkgs/main/linux-64::zstd-1.4.9-haebb681_0\r\n\r\n\r\nDownloading and Extracting Packages\r\nPreparing transaction: ...working... done\r\nVerifying transaction: ...working... done\r\nExecuting transaction: ...working... done\r\n\r\n" ], [ "import os\n\nrun_id = '0x57e7aa91fbc783d1f3cd1bf719dd5a35'\n\nsamples = os.listdir(f'../data/{run_id}')\nlen(samples)", "_____no_output_____" ], [ "import pandas as pd\n\ndata = [os.listdir(f'../data/{run_id}/{s}') for s in samples]\n\ndata_df = pd.DataFrame(data, columns=['params', 'log', 'metrics'])\ndata_df['sample'] = samples\ndata_df.head()", "_____no_output_____" ], [ "import pickle\n\ncases_list = []\ni=0\nfor _, data in data_df.iterrows():\n    with open(f'/src/experiments/data/{run_id}/{data[\"sample\"]}/{data[\"metrics\"]}', 'rb') as tracker:\n        tracker_dict = pickle.load(tracker)\n    cases_list.append(pd.DataFrame(tracker_dict['cases_per_day'], columns=[i]))\n    i += 1\ncases_df = pd.concat(cases_list, axis=1)\ncases_df.head()", "_____no_output_____" ], [ "!ansible -i /src/ansible/playbooks/hosts mulab --user=paredes -a \"ls ~/kingston-abm/experiments/validation/results/data/0x57e7aa91fbc783d1f3cd1bf719dd5a35/\"", "_____no_output_____" ], [ "cases_df.transpose().head()", "_____no_output_____" ], [ "%matplotlib inline", "_____no_output_____" ], [ "cases_df.transpose().boxplot(figsize=(20,10))", "_____no_output_____" ], [ "!/opt/conda/bin/conda run -n covisim conda install seaborn -y", "Collecting package metadata (current_repodata.json): ...working... done\r\nSolving environment: ...working... 
done\r\n\r\n## Package Plan ##\r\n\r\n environment location: /opt/conda/envs/covisim\r\n\r\n added / updated specs:\r\n - seaborn\r\n\r\n\r\nThe following packages will be downloaded:\r\n\r\n package | build\r\n ---------------------------|-----------------\r\n libgfortran-ng-7.5.0 | ha8ba4b0_17 22 KB\r\n libgfortran4-7.5.0 | ha8ba4b0_17 995 KB\r\n pandas-1.2.5 | py38h295c915_0 8.8 MB\r\n pytz-2021.1 | pyhd3eb1b0_0 181 KB\r\n scipy-1.6.2 | py38had2a1c9_1 15.6 MB\r\n seaborn-0.11.1 | pyhd3eb1b0_0 212 KB\r\n ------------------------------------------------------------\r\n Total: 25.8 MB\r\n\r\nThe following NEW packages will be INSTALLED:\r\n\r\n libgfortran-ng pkgs/main/linux-64::libgfortran-ng-7.5.0-ha8ba4b0_17\r\n libgfortran4 pkgs/main/linux-64::libgfortran4-7.5.0-ha8ba4b0_17\r\n pandas pkgs/main/linux-64::pandas-1.2.5-py38h295c915_0\r\n pytz pkgs/main/noarch::pytz-2021.1-pyhd3eb1b0_0\r\n scipy pkgs/main/linux-64::scipy-1.6.2-py38had2a1c9_1\r\n seaborn pkgs/main/noarch::seaborn-0.11.1-pyhd3eb1b0_0\r\n\r\n\r\nDownloading and Extracting Packages\r\nPreparing transaction: ...working... done\r\nVerifying transaction: ...working... done\r\nExecuting transaction: ...working... 
done\r\n\r\n" ] ], [ [ "### Plot compartment model", "_____no_output_____" ] ], [ [ "import pickle\n\ns_list = []\ne_list = []\ni_list = []\nr_list = []\n\ni=0\nfor _, data in data_df.iterrows():\n with open(f'/src/experiments/data/{run_id}/{data[\"sample\"]}/{data[\"metrics\"]}', 'rb') as tracker:\n tracker_dict = pickle.load(tracker)\n s_list.append(pd.DataFrame(tracker_dict['s'], columns=[i]))\n e_list.append(pd.DataFrame(tracker_dict['e'], columns=[i]))\n i_list.append(pd.DataFrame(tracker_dict['i'], columns=[i]))\n r_list.append(pd.DataFrame(tracker_dict['r'], columns=[i]))\n i += 1\n \ns_df = pd.concat(s_list, axis=1)\ne_df = pd.concat(e_list, axis=1)\ni_df = pd.concat(i_list, axis=1)\nr_df = pd.concat(r_list, axis=1)\n\ns_df.head(), \\\ne_df.head(), \\\ni_df.head(), \\\nr_df.head()", "_____no_output_____" ], [ "s_df.transpose().boxplot(figsize=(20,10))", "_____no_output_____" ], [ "e_df.transpose().boxplot(figsize=(20,10))", "_____no_output_____" ], [ "i_df.transpose().boxplot(figsize=(20,10))", "_____no_output_____" ], [ "r_df.transpose().boxplot(figsize=(20,10))", "_____no_output_____" ], [ "s_df['day'] = s_df.index\ne_df['day'] = e_df.index\ni_df['day'] = i_df.index\nr_df['day'] = r_df.index\ns_long_df = pd.melt(s_df, id_vars=['day'], value_vars=[i for i in range(256)])\ne_long_df = pd.melt(e_df, id_vars=['day'], value_vars=[i for i in range(256)])\ni_long_df = pd.melt(i_df, id_vars=['day'], value_vars=[i for i in range(256)])\nr_long_df = pd.melt(r_df, id_vars=['day'], value_vars=[i for i in range(256)])\ns_long_df['seir'] = 's'\ne_long_df['seir'] = 'e'\ni_long_df['seir'] = 'i'\nr_long_df['seir'] = 'r'\nseir_long_df = pd.concat([s_long_df, e_long_df, i_long_df, r_long_df])\nseir_long_df.head()", "_____no_output_____" ], [ "import seaborn as sns\n\n%matplotlib inline\n\nsns.lineplot(data=seir_long_df, \n x='day', \n y='value', \n hue='seir', \n estimator='mean', \n ci=68)", "_____no_output_____" ] ], [ [ "## All configuration p", "_____no_output_____" ] ], [ [ "import glob\nimport os\nimport yaml\n\n#os.path.abspath('../data/*dd5a35/*seed-1054*')\n\nwith open(glob.glob('../data/*dd5a35/*seed-1054*/*.yaml')[0], 'r') as f:\n config = yaml.safe_load(f)\n\nconfig.keys()", "_____no_output_____" ], [ "config['n_people'], config['simulation_days']", "_____no_output_____" ] ], [ [ "## Other metrics", "_____no_output_____" ] ], [ [ "import pickle\n\nfile_name = '/src/experiments/validation/results/data/0x57e7aa91fbc783d1f3cd1bf719dd5a35/sim_v2_people-3000_days-60_init-0.002_uptake--1_seed-1054_20210706-135347_580349/tracker_data_n_3000_seed_1054_20210706-140112.pkl'\nwith open(file_name, 'rb') as results_file:\n tracker = pickle.load(results_file)\ntracker.keys()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
4a61686b059dcbcf4334784b3d4182f10f6f7028
767,043
ipynb
Jupyter Notebook
monte-carlo/Monte_Carlo.ipynb
soon-yau/deep-reinforcement-learning
169c07671b7948da50446894410c7683d2d0778b
[ "MIT" ]
1
2021-04-05T16:00:21.000Z
2021-04-05T16:00:21.000Z
monte-carlo/Monte_Carlo.ipynb
soon-yau/deep-reinforcement-learning
169c07671b7948da50446894410c7683d2d0778b
[ "MIT" ]
null
null
null
monte-carlo/Monte_Carlo.ipynb
soon-yau/deep-reinforcement-learning
169c07671b7948da50446894410c7683d2d0778b
[ "MIT" ]
null
null
null
1,100.492109
381,200
0.953017
[ [ [ "# Monte Carlo Methods\n\nIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. \n\nWhile we have provided some starter code, you are welcome to erase these hints and write your code from scratch.\n\n### Part 0: Explore BlackjackEnv\n\nWe begin by importing the necessary packages.", "_____no_output_____" ] ], [ [ "import sys\nimport gym\nimport numpy as np\nfrom collections import defaultdict\n\nfrom plot_utils import plot_blackjack_values, plot_policy\n\nimport sys\nprint(sys.version)", "3.5.2 (default, Nov 12 2018, 13:43:14) \n[GCC 5.4.0 20160609]\n" ] ], [ [ "Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.", "_____no_output_____" ] ], [ [ "env = gym.make('Blackjack-v0')", "_____no_output_____" ] ], [ [ "Each state is a 3-tuple of:\n- the player's current sum $\\in \\{0, 1, \\ldots, 31\\}$,\n- the dealer's face up card $\\in \\{1, \\ldots, 10\\}$, and\n- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).\n\nThe agent has two potential actions:\n\n```\n STICK = 0\n HIT = 1\n```\nVerify this by running the code cell below.", "_____no_output_____" ] ], [ [ "print(env.observation_space)\nprint(env.action_space)", "Tuple(Discrete(32), Discrete(11), Discrete(2))\nDiscrete(2)\n" ], [ "print(env.action_space)", "Discrete(2)\n" ] ], [ [ "Execute the code cell below to play Blackjack with a random policy. \n\n(_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)", "_____no_output_____" ] ], [ [ "for i_episode in range(3):\n state = env.reset()\n while True: \n action = env.action_space.sample()\n print(state, action)\n state, reward, done, info = env.step(action)\n print(state)\n if done:\n print('End game! Reward: ', reward)\n print('You won :)\\n') if reward > 0 else print('You lost :(\\n')\n break", "(4, 6, False) 0\n(4, 6, False)\nEnd game! Reward: -1.0\nYou lost :(\n\n(16, 2, False) 1\n(20, 2, False)\n(20, 2, False) 1\n(30, 2, False)\nEnd game! Reward: -1\nYou lost :(\n\n(13, 1, False) 0\n(13, 1, False)\nEnd game! Reward: -1.0\nYou lost :(\n\n" ] ], [ [ "### Part 1: MC Prediction\n\nIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). \n\nWe will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. \n\nThe function accepts as **input**:\n- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.\n\nIt returns as **output**:\n- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \\ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. 
In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.", "_____no_output_____" ] ], [ [ "def generate_episode_from_limit_stochastic(bj_env):\n episode = []\n state = bj_env.reset()\n while True:\n probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]\n action = np.random.choice(np.arange(2), p=probs)\n next_state, reward, done, info = bj_env.step(action)\n episode.append((state, action, reward))\n state = next_state\n if done:\n break\n return episode", "_____no_output_____" ] ], [ [ "Execute the code cell below to play Blackjack with the policy. \n\n(*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)", "_____no_output_____" ] ], [ [ "for i in range(3):\n print(generate_episode_from_limit_stochastic(env))", "[((12, 7, False), 1, 0), ((13, 7, False), 1, 0), ((19, 7, False), 0, -1.0)]\n[((21, 10, True), 0, 1.0)]\n[((19, 6, False), 0, 1.0)]\n" ] ], [ [ "Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.\n\nYour algorithm has three arguments:\n- `env`: This is an instance of an OpenAI Gym environment.\n- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.\n- `generate_episode`: This is a function that returns an episode of interaction.\n- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).\n\nThe algorithm returns as output:\n- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.", "_____no_output_____" ] ], [ [ "N = defaultdict(lambda: np.zeros(env.action_space.n))\nprint(N)", "defaultdict(<function <lambda> at 0x7f425429cae8>, {})\n" ], [ "import time\n\ndef mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):\n # initialize empty dictionaries of arrays\n start=time.time()\n returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))\n N = defaultdict(lambda: np.zeros(env.action_space.n))\n Q = defaultdict(lambda: np.zeros(env.action_space.n))\n # loop over episodes\n for i_episode in range(1, num_episodes+1):\n # monitor progress\n if i_episode % 1000 == 0:\n print(\"\\rEpisode {}/{}.\".format(i_episode, num_episodes), end=\"\")\n sys.stdout.flush()\n \n ## TODO: complete the function\n # generate an episode\n episode = generate_episode(env)\n states, actions, rewards = zip(*episode)\n #discounts = np.array([gamma**i for i in range(len(episode))])\n reward = episode[-1][-1]\n #print(episode, len(episode), reward)\n for i, state in enumerate(states):\n action = actions[i]\n g = gamma**(len(episode)-1-i)*reward\n #g = sum(discounts[:len(states)-i]*rewards[i:])\n returns_sum[state][action] += g\n N[state][action]+= 1\n Q[state][action]= returns_sum[state][action]/N[state][action]\n print(\"elapsed:\", time.time()-start)\n return Q\nQ = mc_prediction_q(env, 1, generate_episode_from_limit_stochastic)", "elapsed: 0.00030732154846191406\n" ] ], [ [ "Use the cell below to obtain the action-value function estimate $Q$. 
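Once `Q` is computed, individual estimates can be read off directly; for example (an illustrative lookup added here, with values that vary from run to run):\n\n```python\n# estimated action values (STICK, HIT) for: player sum 18, dealer shows 10, no usable ace\nQ[(18, 10, False)]\n```\n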
We have also plotted the corresponding state-value function.\n\nTo check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.", "_____no_output_____" ] ], [ [ "# obtain the action-value function\nQ = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)\n\n# obtain the corresponding state-value function\nV_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \\\n for k, v in Q.items())\n\n# plot the state-value function\nplot_blackjack_values(V_to_plot)", "Episode 500000/500000.elapsed: 41.55307722091675\n" ] ], [ [ "### Part 2: MC Control\n\nIn this section, you will write your own implementation of constant-$\\alpha$ MC control. \n\nYour algorithm has four arguments:\n- `env`: This is an instance of an OpenAI Gym environment.\n- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.\n- `alpha`: This is the step-size parameter for the update step.\n- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).\n\nThe algorithm returns as output:\n- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.\n- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.\n\n(_Feel free to define additional functions to help you to organize your code._)", "_____no_output_____" ] ], [ [ "def get_prob(Q_state, epsilon):\n    probs = epsilon*np.ones_like(Q_state)/len(Q_state)\n    probs[np.argmax(Q_state)]+=1-epsilon  # extra (1-epsilon) mass goes to the greedy action from Q_state, not probs\n    return probs\n\n#get_prob([40, 2], 0.1)\n\ndef generate_episode_epsilon_greedy(env, Q, epsilon):\n    episode = []\n    state = env.reset()\n    nA = env.action_space.n\n    \n    while True:\n        # get probability \n        if state in Q:\n            probs = get_prob(Q[state], epsilon)\n        else:\n            probs = np.ones_like(Q[state])/nA\n        \n        action = np.random.choice(np.arange(nA), p=probs)\n        next_state, reward, done, info = env.step(action)\n        episode.append((state, action, reward))\n        state = next_state\n        if done:\n            break\n    return episode\n\ndef update_Q(env, Q, episode, gamma, alpha):\n    states, actions, rewards = zip(*episode)\n    #discounts = np.array([gamma**i for i in range(len(episode))])\n    reward = episode[-1][-1]\n    #print(episode, len(episode), reward)\n    for i, state in enumerate(states):\n        action = actions[i]\n        g = gamma**(len(episode)-1-i)*reward\n        #g = sum(discounts[:len(states)-i]*rewards[i:])\n        Q[state][action] += alpha*(g-Q[state][action])\n    return Q\n\ndef mc_control(env, num_episodes, alpha, gamma=1.0, epsilon_start=1.0, epsilon_decay=0.999999, epsilon_min=0.05):\n    nA = env.action_space.n\n    # initialize empty dictionary of arrays\n    Q = defaultdict(lambda: np.zeros(nA))\n    # loop over episodes\n    epsilon=epsilon_start\n    for i_episode in range(1, num_episodes+1):\n        # monitor progress\n        if i_episode % 1000 == 0:\n            print(\"\\rEpisode {}/{} Epsilon {}.\".format(i_episode, num_episodes, epsilon), end=\"\")\n            sys.stdout.flush()\n        \n        ## TODO: complete the function\n        # generate episode using epsilon-greedy\n        epsilon = max(epsilon*epsilon_decay, epsilon_min)\n        episode = generate_episode_epsilon_greedy(env, Q, epsilon)\n        \n        # update Q using constant alpha\n        Q = update_Q(env, Q, episode, gamma, alpha)\n    policy = dict((k,np.argmax(v)) for k,v in Q.items())\n    return policy, Q", "_____no_output_____" ] ], [ [ "Use the cell below to obtain the estimated optimal 
policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.", "_____no_output_____" ] ], [ [ "# obtain the estimated optimal policy and action-value function\npolicy, Q = mc_control(env, 1000000, 0.1)", "Episode 1000000/1000000 Epsilon 0.05.26065815754." ] ], [ [ "Next, we plot the corresponding state-value function.", "_____no_output_____" ] ], [ [ "# obtain the corresponding state-value function\nV = dict((k,np.max(v)) for k, v in Q.items())\n\n# plot the state-value function\nplot_blackjack_values(V)", "_____no_output_____" ] ], [ [ "Finally, we visualize the policy that is estimated to be optimal.", "_____no_output_____" ] ], [ [ "# plot the policy\nplot_policy(policy)", "_____no_output_____" ] ], [ [ "The **true** optimal policy $\\pi_*$ can be found in Figure 5.2 of the [textbook](http://go.udacity.com/rl-textbook) (and appears below). Compare your final estimate to the optimal policy - how close are you able to get? If you are not happy with the performance of your algorithm, take the time to tweak the decay rate of $\\epsilon$, change the value of $\\alpha$, and/or run the algorithm for more episodes to attain better results.\n\n![True Optimal Policy](images/optimal.png)", "_____no_output_____" ] ], [ [ "for k, v in policy.items():\n if k[2]:\n print(k,v)", "(17, 8, True) 1\n(12, 10, True) 1\n(12, 1, True) 1\n(21, 3, True) 0\n(12, 5, True) 1\n(14, 2, True) 1\n(14, 1, True) 1\n(16, 4, True) 1\n(16, 7, True) 1\n(14, 6, True) 1\n(16, 1, True) 1\n(20, 9, True) 0\n(16, 8, True) 1\n(18, 4, True) 0\n(20, 7, True) 0\n(15, 8, True) 1\n(18, 10, True) 0\n(20, 5, True) 0\n(13, 5, True) 1\n(15, 2, True) 1\n(15, 3, True) 0\n(13, 8, True) 1\n(15, 4, True) 1\n(17, 1, True) 1\n(19, 5, True) 0\n(19, 6, True) 0\n(21, 9, True) 0\n(14, 8, True) 1\n(15, 1, True) 1\n(14, 5, True) 1\n(17, 10, True) 1\n(16, 2, True) 1\n(12, 8, True) 1\n(21, 5, True) 0\n(21, 2, True) 0\n(14, 3, True) 1\n(21, 4, True) 0\n(15, 7, True) 0\n(21, 6, True) 0\n(18, 6, True) 1\n(18, 5, True) 1\n(20, 6, True) 0\n(18, 9, True) 1\n(20, 1, True) 0\n(13, 1, True) 1\n(15, 6, True) 1\n(13, 7, True) 1\n(20, 10, True) 0\n(17, 4, True) 1\n(17, 5, True) 1\n(19, 9, True) 1\n(19, 10, True) 1\n(13, 10, True) 1\n(21, 10, True) 0\n(17, 3, True) 1\n(19, 3, True) 0\n(19, 4, True) 0\n(21, 8, True) 0\n(17, 6, True) 1\n(17, 9, True) 1\n(12, 4, True) 1\n(21, 1, True) 0\n(12, 7, True) 1\n(14, 10, True) 1\n(14, 7, True) 1\n(21, 7, True) 0\n(12, 9, True) 1\n(20, 8, True) 0\n(13, 2, True) 1\n(16, 10, True) 1\n(18, 2, True) 1\n(18, 1, True) 1\n(20, 2, True) 0\n(16, 5, True) 1\n(12, 2, True) 1\n(17, 2, True) 0\n(15, 9, True) 1\n(15, 10, True) 1\n(18, 8, True) 0\n(18, 7, True) 0\n(20, 4, True) 0\n(13, 3, True) 1\n(14, 4, True) 0\n(13, 9, True) 1\n(18, 3, True) 1\n(13, 6, True) 1\n(15, 5, True) 1\n(16, 9, True) 1\n(17, 7, True) 1\n(14, 9, True) 1\n(19, 7, True) 0\n(19, 8, True) 0\n(16, 3, True) 1\n(16, 6, True) 0\n(13, 4, True) 0\n(20, 3, True) 0\n(19, 1, True) 0\n(19, 2, True) 0\n(12, 3, True) 1\n(12, 6, True) 1\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
4a616eaf368c5af15d5dabac9af7ff21d9b62424
8,082
ipynb
Jupyter Notebook
MicroStrategy World 2021 demos/Narrowcast migration.ipynb
czyzq/mstrio-py
b25fd19936b659d503a7eaaa96c8d0b4e118cb7c
[ "Apache-2.0" ]
1
2022-02-15T13:18:04.000Z
2022-02-15T13:18:04.000Z
MicroStrategy World 2021 demos/Narrowcast migration.ipynb
czyzq/mstrio-py
b25fd19936b659d503a7eaaa96c8d0b4e118cb7c
[ "Apache-2.0" ]
null
null
null
MicroStrategy World 2021 demos/Narrowcast migration.ipynb
czyzq/mstrio-py
b25fd19936b659d503a7eaaa96c8d0b4e118cb7c
[ "Apache-2.0" ]
null
null
null
27.120805
113
0.571641
[ [ [ "# Narowcast Server service migration to Distribution Services", "_____no_output_____" ], [ "## 1. Getting data from NC", "_____no_output_____" ], [ "### 1.1 List of NC Services", "_____no_output_____" ] ], [ [ "# Run this SQL code against Narrocast Server database\n\"\"\"\nselect \nnames1.MR_OBJECT_ID AS serviceID, \nnames1.MR_OBJECT_NAME AS service_name, \nparent1.MR_OBJECT_NAME AS foldername, \nnames2.MR_OBJECT_NAME AS publication_name, \nnames3.MR_OBJECT_NAME AS document_name, \ninfo3.MR_OBJECT_SUBTYPE AS doc_type, \nnames4.MR_OBJECT_ID AS info_obj_id,\nnames4.MR_OBJECT_NAME AS info_obj_name, \ninfo4.MR_OBJECT_SUBTYPE AS info_obj_subtype \nfrom \nMSTROBJNAMES names1, \nMSTROBJINFO info1, \nMSTROBJNAMES parent1, \nMSTROBJDEPN dpns,\nMSTROBJNames names2, \nMSTROBJDEPN dpns2,\nMSTROBJNames names3,\nMSTROBJINFO info3, \nMSTROBJDEPN dpns3,\nMSTROBJNames names4,\nMSTROBJInfo info4 \nwhere names1.MR_OBJECT_ID = dpns.MR_INDEP_OBJID\nand names1.MR_OBJECT_ID = info1.MR_OBJECT_ID\nand info1.MR_PARENT_ID = parent1.MR_OBJECT_ID\nand dpns.MR_DEPN_OBJID = names2.MR_OBJECT_ID\nand names2.MR_OBJECT_ID = dpns2.MR_INDEP_OBJID\nand dpns2.MR_DEPN_OBJID = names3.MR_OBJECT_ID\nand names3.MR_OBJECT_ID = dpns3.MR_INDEP_OBJID\nand names3.MR_OBJECT_ID = info3.MR_OBJECT_ID\nand dpns3.MR_DEPN_OBJID = names4.MR_OBJECT_ID\nand dpns3.MR_DEPN_OBJID = info4.MR_OBJECT_ID\nand names1.MR_Object_Type = 19\nand names2.MR_Object_Type = 16\nand names3.MR_Object_Type = 14\nand names4.MR_Object_Type = 4\nand info4.MR_OBJECT_SubType <> 1\n\"\"\"", "_____no_output_____" ] ], [ [ "<img src=\"Images/NC_services.png\">", "_____no_output_____" ], [ "### 1.2 NC Service details", "_____no_output_____" ] ], [ [ "\"\"\"\nselect \nnames1.MR_OBJECT_ID AS serviceID, --This is Service ID\nnames1.MR_OBJECT_NAME AS service_name, \nnames2.MR_OBJECT_NAME AS subset_name, \na11.MR_ADD_DISPLAY AS dispname, \na11.MR_PHYSICAL_ADD AS email, \na13.MR_USER_NAME,\nsp.MR_INFOSOURCE_ID, \nsp.MR_QUES_OBJ_ID, \npo.mr_seq, \nsp.MR_USER_PREF,\npo.MR_PREF_OBJ\nfrom \nMSTROBJNames names1,\nMSTROBJINFO info1, \nMSTROBJDEPN dpns,\nMSTROBJNames names2,\n\nMSTRSUBSCRIPTIONS a12,\nMSTRADDRESSES a11,\nMSTRUSERS a13,\nMSTRSUBPREF sp,\nMSTRPREFOBJS po\n\nwhere names1.MR_Object_Type = 19\nand names2.MR_Object_Type = 17\nand info1.MR_STATUS =1\n\nand names1.MR_OBJECT_ID = info1.MR_OBJECT_ID\nand names1.MR_OBJECT_ID = dpns.MR_INDEP_OBJID\nand dpns.MR_DEPN_OBJID = names2.MR_OBJECT_ID\nand names2.MR_OBJECT_ID = a12.MR_SUB_SET_ID\nand a11.MR_ADDRESS_ID = a12.MR_ADDRESS_ID\nand a12.MR_SUB_GUID = sp.MR_SUB_GUID\nand sp.MR_PREF_OBJ_ID = po.MR_PREF_OBJ_ID\nand a12.MR_USER_ID = a13.MR_USER_ID\nand names1.MR_OBJECT_ID = '047886F8A7474F4A929EC6DD135F0A98' --Filter for Service ID\n\"\"\"", "_____no_output_____" ] ], [ [ "<img src=\"Images/service_details.png\">", "_____no_output_____" ] ], [ [ "with open('narrowcast_emails.csv', encoding=\"utf8\", newline='') as f:\n email_list = [x.strip() for x in f]", "_____no_output_____" ] ], [ [ "## Automate tasks in MicroStrategy", "_____no_output_____" ] ], [ [ "from mstrio.connection import Connection\nfrom mstrio.distribution_services import EmailSubscription, Content\nfrom mstrio.users_and_groups.user import list_users\nfrom datetime import datetime\n\n#### Parameters ####\napi_login, api_password = 'administrator', ''\nbase_url = 'Insert Env URL'\nproject_id = 'Insert Project ID'\n\nconn = Connection(base_url,api_login,api_password)", "_____no_output_____" ] ], [ [ "### Get users' default addresses", "_____no_output_____" ] ], [ [ 
"users = list_users(connection=conn)\ndefault_addresses=[]\n\nfor u in users:\n if u.addresses:\n user_addresses = [[u.name, u.id, uad['value']] for uad in u.addresses if uad['isDefault']==True]\n default_addresses.extend(user_addresses)", "_____no_output_____" ] ], [ [ "### Create a list of recipients", "_____no_output_____" ] ], [ [ "# From MSTR Metadata\nfor d in default_addresses:\n print(d)", "_____no_output_____" ], [ "# From Narrowcast\nfor e in email_list:\n print(e)", "_____no_output_____" ], [ "# Match Metadata with Narrowcast\nmatched_emails = [d[1] for d in default_addresses if d[2] in email_list]\nfor m in matched_emails:\n print(m)", "_____no_output_____" ] ], [ [ "### Create a subscription", "_____no_output_____" ] ], [ [ "# create an email subscription\nrecipient_ids = matched_emails[:]\ncontent_id = 'Insert Content ID'\nschedule_id = 'Insert Schedule ID'\nsubscription_name = 'REST_API_'+datetime.now().strftime(\"%Y-%m-%d__%H-%M\")\nsubject_txt='Email Subject'\nmessage_txt=\"Message Text\"\nEmailSubscription.create(connection=conn,\n name=subscription_name, \n project_id=project_id,\n send_now = True,\n contents=[Content(id=content_id, type='report', name='Report 1',\n personalization=Content.Properties(format_type='EXCEL'))],\n schedules_ids=[schedule_id],\n recipients=recipient_ids,\n email_subject=subject_txt,\n email_message=message_txt,\n email_send_content_as=\"data\")\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
4a6177c9f7f13c8819a43480db1cb88a28e90ddf
148,751
ipynb
Jupyter Notebook
clases/02-10 Sintaxis de Julia y dados.ipynb
dpsanders/metodos-monte-carlo
6dbf474b5dad84366dc6bd9754b4dd935bce65c3
[ "MIT" ]
10
2015-02-09T04:59:41.000Z
2022-02-20T01:59:04.000Z
clases/02-10 Sintaxis de Julia y dados.ipynb
dpsanders/metodos-monte-carlo
6dbf474b5dad84366dc6bd9754b4dd935bce65c3
[ "MIT" ]
null
null
null
clases/02-10 Sintaxis de Julia y dados.ipynb
dpsanders/metodos-monte-carlo
6dbf474b5dad84366dc6bd9754b4dd935bce65c3
[ "MIT" ]
21
2015-02-05T20:34:20.000Z
2021-08-03T03:43:53.000Z
60.394235
27,657
0.762442
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
4a61807bd8a25f52947ad390f91997be426c7f7e
13,548
ipynb
Jupyter Notebook
deep-learning/multi-frameworks/notebooks/Keras_Theano_CNN.ipynb
AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
352dd6d9a785e22fde0ce53a6b0c2e56f4964950
[ "Apache-2.0" ]
3,266
2017-08-06T16:51:46.000Z
2022-03-30T07:34:24.000Z
deep-learning/multi-frameworks/notebooks/Keras_Theano_CNN.ipynb
AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
352dd6d9a785e22fde0ce53a6b0c2e56f4964950
[ "Apache-2.0" ]
150
2017-08-28T14:59:36.000Z
2022-03-11T23:21:35.000Z
deep-learning/multi-frameworks/notebooks/Keras_Theano_CNN.ipynb
AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
352dd6d9a785e22fde0ce53a6b0c2e56f4964950
[ "Apache-2.0" ]
1,449
2017-08-06T17:40:59.000Z
2022-03-31T12:03:24.000Z
31.877647
247
0.536906
[ [ [ "# High-level Keras (Theano) Example", "_____no_output_____" ] ], [ [ "# Lots of warnings!\n# Not sure why Keras creates model with float64?", "_____no_output_____" ], [ "%%writefile ~/.theanorc\n[global]\ndevice = cuda0\nforce_device= True\nfloatX = float32\nwarn_float64 = warn", "Overwriting /home/iliauk/.theanorc\n" ], [ "import os\nimport sys\nimport numpy as np\nos.environ['KERAS_BACKEND'] = \"theano\"\nimport theano\nimport keras as K\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Dropout, Flatten\nfrom keras.layers import Conv2D, MaxPooling2D\nfrom common.params import *\nfrom common.utils import *", "Using cuDNN version 6021 on context None\nMapped name None to device cuda0: Tesla P100-PCIE-16GB (BC4B:00:00.0)\nUsing Theano backend.\n" ], [ "# Force one-gpu\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"0\"", "_____no_output_____" ], [ "# Performance Improvement\n# 1. Make sure channels-first (not last)\nK.backend.set_image_data_format('channels_first')\n# 2. CuDNN auto-tune\ntheano.config.dnn.conv.algo_fwd = \"time_once\"\ntheano.config.dnn.conv.algo_bwd_filter = \"time_once\"\ntheano.config.dnn.conv.algo_bwd_data = \"time_once\"", "_____no_output_____" ], [ "print(\"OS: \", sys.platform)\nprint(\"Python: \", sys.version)\nprint(\"Keras: \", K.__version__)\nprint(\"Numpy: \", np.__version__)\nprint(\"Theano: \", theano.__version__)\nprint(K.backend.backend())\nprint(K.backend.image_data_format())\nprint(\"GPU: \", get_gpu_name())\nprint(get_cuda_version())\nprint(\"CuDNN Version \", get_cudnn_version())", "OS: linux\nPython: 3.5.2 |Anaconda custom (64-bit)| (default, Jul 2 2016, 17:53:06) \n[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)]\nKeras: 2.1.4\nNumpy: 1.14.1\nTheano: 1.0.1\ntheano\nchannels_first\nGPU: ['Tesla P100-PCIE-16GB', 'Tesla P100-PCIE-16GB']\nCUDA Version 8.0.61\nCuDNN Version 6.0.21\n" ], [ "def create_symbol(n_classes=N_CLASSES):\n model = Sequential()\n \n model.add(Conv2D(50, kernel_size=(3, 3), padding='same', activation='relu',\n input_shape=(3, 32, 32)))\n model.add(Conv2D(50, kernel_size=(3, 3), padding='same', activation='relu')) \n model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))\n model.add(Dropout(0.25))\n \n model.add(Conv2D(100, kernel_size=(3, 3), padding='same', activation='relu'))\n model.add(Conv2D(100, kernel_size=(3, 3), padding='same', activation='relu')) \n model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))\n model.add(Dropout(0.25))\n \n model.add(Flatten())\n model.add(Dense(512, activation='relu'))\n model.add(Dropout(0.5))\n model.add(Dense(n_classes, activation='softmax'))\n return model", "_____no_output_____" ], [ "def init_model(m, lr=LR, momentum=MOMENTUM):\n m.compile(\n loss = \"categorical_crossentropy\",\n optimizer = K.optimizers.SGD(lr, momentum),\n metrics = ['accuracy'])\n return m", "_____no_output_____" ], [ "%%time\n# Data into format for library\nx_train, x_test, y_train, y_test = cifar_for_library(channel_first=True, one_hot=True)\nprint(x_train.shape, x_test.shape, y_train.shape, y_test.shape)\nprint(x_train.dtype, x_test.dtype, y_train.dtype, y_test.dtype)", "Preparing train set...\nPreparing test set...\n(50000, 3, 32, 32) (10000, 3, 32, 32) (50000, 10) (10000, 10)\nfloat32 float32 int32 int32\nCPU times: user 650 ms, sys: 604 ms, total: 1.25 s\nWall time: 1.25 s\n" ], [ "%%time\n# Load symbol\nsym = create_symbol()", "/anaconda/envs/py35/lib/python3.5/site-packages/theano/compile/function_module.py:1519: UserWarning: You are creating a TensorVariable with float64 dtype. 
You requested an action via the Theano flag warn_float64={ignore,warn,raise,pdb}.\n optimizer_profile = optimizer(fgraph)\n/anaconda/envs/py35/lib/python3.5/site-packages/theano/compile/function_module.py:1519: UserWarning: You are creating a TensorVariable with float64 dtype. You requested an action via the Theano flag warn_float64={ignore,warn,raise,pdb}.\n optimizer_profile = optimizer(fgraph)\n/anaconda/envs/py35/lib/python3.5/site-packages/theano/compile/function_module.py:1519: UserWarning: You are creating a TensorVariable with float64 dtype. You requested an action via the Theano flag warn_float64={ignore,warn,raise,pdb}.\n optimizer_profile = optimizer(fgraph)\n" ], [ "%%time\n# Initialise model\nmodel = init_model(sym)", "CPU times: user 12.2 ms, sys: 0 ns, total: 12.2 ms\nWall time: 12 ms\n" ], [ "model.summary()", "_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv2d_1 (Conv2D) (None, 50, 32, 32) 1400 \n_________________________________________________________________\nconv2d_2 (Conv2D) (None, 50, 32, 32) 22550 \n_________________________________________________________________\nmax_pooling2d_1 (MaxPooling2 (None, 50, 16, 16) 0 \n_________________________________________________________________\ndropout_1 (Dropout) (None, 50, 16, 16) 0 \n_________________________________________________________________\nconv2d_3 (Conv2D) (None, 100, 16, 16) 45100 \n_________________________________________________________________\nconv2d_4 (Conv2D) (None, 100, 16, 16) 90100 \n_________________________________________________________________\nmax_pooling2d_2 (MaxPooling2 (None, 100, 8, 8) 0 \n_________________________________________________________________\ndropout_2 (Dropout) (None, 100, 8, 8) 0 \n_________________________________________________________________\nflatten_1 (Flatten) (None, 6400) 0 \n_________________________________________________________________\ndense_1 (Dense) (None, 512) 3277312 \n_________________________________________________________________\ndropout_3 (Dropout) (None, 512) 0 \n_________________________________________________________________\ndense_2 (Dense) (None, 10) 5130 \n=================================================================\nTotal params: 3,441,592\nTrainable params: 3,441,592\nNon-trainable params: 0\n_________________________________________________________________\n" ], [ "%%time\n# Main training loop: 1m33s\nmodel.fit(x_train,\n y_train,\n batch_size=BATCHSIZE,\n epochs=EPOCHS,\n verbose=1)", "Epoch 1/10\n50000/50000 [==============================] - 8s 163us/step - loss: 1.8260 - acc: 0.3317\nEpoch 2/10\n50000/50000 [==============================] - 8s 158us/step - loss: 1.3923 - acc: 0.4949\nEpoch 3/10\n50000/50000 [==============================] - 9s 181us/step - loss: 1.1576 - acc: 0.5866\nEpoch 4/10\n50000/50000 [==============================] - 8s 155us/step - loss: 1.0030 - acc: 0.6450\nEpoch 5/10\n50000/50000 [==============================] - 8s 153us/step - loss: 0.8973 - acc: 0.6837\nEpoch 6/10\n50000/50000 [==============================] - 8s 155us/step - loss: 0.8053 - acc: 0.7170\nEpoch 7/10\n50000/50000 [==============================] - 9s 181us/step - loss: 0.7338 - acc: 0.7404\nEpoch 8/10\n50000/50000 [==============================] - 8s 154us/step - loss: 0.6722 - acc: 0.7627\nEpoch 9/10\n50000/50000 [==============================] - 8s 160us/step - loss: 0.6091 - acc: 0.7848\nEpoch 
10/10\n50000/50000 [==============================] - 8s 160us/step - loss: 0.5626 - acc: 0.8016\nCPU times: user 42.6 s, sys: 23.5 s, total: 1min 6s\nWall time: 1min 33s\n" ], [ "%%time\n# Main evaluation loop: 2.47s\ny_guess = model.predict(x_test, batch_size=BATCHSIZE)\ny_guess = np.argmax(y_guess, axis=-1)\ny_truth = np.argmax(y_test, axis=-1)", "CPU times: user 886 ms, sys: 191 ms, total: 1.08 s\nWall time: 2.47 s\n" ], [ "print(\"Accuracy: \", 1.*sum(y_guess == y_truth)/len(y_guess))", "Accuracy: 0.7525\n" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a6188f1492f7f838aef34b86975df08c958e33b
63,309
ipynb
Jupyter Notebook
FlaskApp/app/stations - rain and dry.ipynb
aoifeosullivan19/comp30670project
cabaa0939a7fe0629100413f1ac2500a2e2b389e
[ "MIT" ]
null
null
null
FlaskApp/app/stations - rain and dry.ipynb
aoifeosullivan19/comp30670project
cabaa0939a7fe0629100413f1ac2500a2e2b389e
[ "MIT" ]
null
null
null
FlaskApp/app/stations - rain and dry.ipynb
aoifeosullivan19/comp30670project
cabaa0939a7fe0629100413f1ac2500a2e2b389e
[ "MIT" ]
null
null
null
37.483126
381
0.533842
[ [ [ "from sqlalchemy import create_engine\nimport pandas as pd\nimport matplotlib.pyplot as plot\nimport json\nimport pymysql\nimport statsmodels.formula.api as sm\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn import metrics\nfrom sklearn.cross_validation import cross_val_score\nfrom collections import OrderedDict, defaultdict\nimport pickle\n%matplotlib inline", "/root/anaconda3/lib/python3.6/site-packages/sklearn/cross_validation.py:41: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.\n \"This module will be removed in 0.20.\", DeprecationWarning)\n" ], [ "conn_str = \"mysql+pymysql://dublinbikesadmin:dublinbikes2018@dublinbikes.cglcinwmtg3w.eu-west-1.rds.amazonaws.com/dublinbikes\"\nconn = create_engine(conn_str)\n\nquery = \"\"\"\n SELECT * from bike_dynamic \n\"\"\"\ndf_bike = pd.read_sql_query(con=conn, sql=query)\n\nquery = \"\"\"\n SELECT * from weather_info\n\"\"\"\n\ndf_weather = pd.read_sql_query(con=conn, sql=query)", "_____no_output_____" ], [ "df_weather.to_csv ('weather.csv', index=False)", "_____no_output_____" ], [ "# Convert csv and json files into dataframes\ndf_w = pd.read_csv('weather_clean.csv')\ndf_w.head(30)", "_____no_output_____" ], [ "df_w.rename(columns={'dt_txt':'last_update'}, inplace=True)\ndf_w['last_update'] = pd.to_datetime(df_w['last_update'], errors='coerce')\ndf_bike['last_update'] = pd.to_datetime(df_bike['last_update'])", "_____no_output_____" ], [ "df_bike['day'] = df_bike['last_update'].dt.weekday_name\ndf_bike['hour'] = df_bike['last_update'].dt.hour\ndf_w['day'] = df_w['last_update'].dt.weekday_name\ndf_w['hour'] = df_w['last_update'].dt.hour\ndf_w['date'] = df_w['last_update'].dt.day\ndf_bike['date'] = df_bike['last_update'].dt.day", "_____no_output_____" ], [ "merged_w= pd.merge(df_bike, df_w, how='right', on=['date', 'hour', 'day'])\nmerged_weather = pd.get_dummies(merged_w, columns=[\"day\"])\nmerged_weather.shape", "_____no_output_____" ], [ "merged_weather.mainDescription.unique()", "_____no_output_____" ], [ "df_rain = merged_weather.loc[(merged_weather['mainDescription'] == 'Rain') | (merged_weather['mainDescription'] == 'Drizzle') | (merged_weather['mainDescription'] == 'Fog') | (merged_weather['mainDescription'] == 'Mist') | (merged_weather['mainDescription'] == 'Snow')]", "_____no_output_____" ], [ "df_dry = merged_weather.loc[(merged_weather['mainDescription'] == 'Clear') | (merged_weather['mainDescription'] == 'Clouds')]", "_____no_output_____" ], [ "df_rain.shape", "_____no_output_____" ], [ "df_dry.shape", "_____no_output_____" ], [ "# creating df for each station for rain data\ndfRain_1 = df_rain[df_rain['number'] == 1]\ndfRain_2 = df_rain[df_rain['number'] == 2]\ndfRain_3 = df_rain[df_rain['number'] == 3]\ndfRain_4 = df_rain[df_rain['number'] == 4]\ndfRain_5 = df_rain[df_rain['number'] == 5]\ndfRain_6 = df_rain[df_rain['number'] == 6]\ndfRain_7 = df_rain[df_rain['number'] == 7]\ndfRain_8 = df_rain[df_rain['number'] == 8]\ndfRain_9 = df_rain[df_rain['number'] == 9]\ndfRain_10 = df_rain[df_rain['number'] == 10]\ndfRain_11 = df_rain[df_rain['number'] == 11]\ndfRain_12 = df_rain[df_rain['number'] == 12]\ndfRain_13 = df_rain[df_rain['number'] == 13]\ndfRain_14 = df_rain[df_rain['number'] == 14]\ndfRain_15 = df_rain[df_rain['number'] == 15]\ndfRain_16 = 
df_rain[df_rain['number'] == 16]\ndfRain_17 = df_rain[df_rain['number'] == 17]\ndfRain_18 = df_rain[df_rain['number'] == 18]\ndfRain_19 = df_rain[df_rain['number'] == 19]\ndfRain_20 = df_rain[df_rain['number'] == 20]\ndfRain_21 = df_rain[df_rain['number'] == 21]\ndfRain_22 = df_rain[df_rain['number'] == 22]\ndfRain_23 = df_rain[df_rain['number'] == 23]\ndfRain_24 = df_rain[df_rain['number'] == 24]\ndfRain_25 = df_rain[df_rain['number'] == 25]\ndfRain_26 = df_rain[df_rain['number'] == 26]\ndfRain_27 = df_rain[df_rain['number'] == 27]\ndfRain_28 = df_rain[df_rain['number'] == 28]\ndfRain_29 = df_rain[df_rain['number'] == 29]\ndfRain_30 = df_rain[df_rain['number'] == 30]\ndfRain_31 = df_rain[df_rain['number'] == 31]\ndfRain_32 = df_rain[df_rain['number'] == 32]\ndfRain_33 = df_rain[df_rain['number'] == 33]\ndfRain_34 = df_rain[df_rain['number'] == 34]\ndfRain_35 = df_rain[df_rain['number'] == 35]\ndfRain_36 = df_rain[df_rain['number'] == 36]\ndfRain_37 = df_rain[df_rain['number'] == 37]\ndfRain_38 = df_rain[df_rain['number'] == 38]\ndfRain_39 = df_rain[df_rain['number'] == 39]\ndfRain_40 = df_rain[df_rain['number'] == 40]\ndfRain_41 = df_rain[df_rain['number'] == 41]\ndfRain_42 = df_rain[df_rain['number'] == 42]\ndfRain_43 = df_rain[df_rain['number'] == 43]\ndfRain_44 = df_rain[df_rain['number'] == 44]\ndfRain_45 = df_rain[df_rain['number'] == 45]\ndfRain_46 = df_rain[df_rain['number'] == 46]\ndfRain_47 = df_rain[df_rain['number'] == 47]\ndfRain_48 = df_rain[df_rain['number'] == 48]\ndfRain_49 = df_rain[df_rain['number'] == 49]\ndfRain_50 = df_rain[df_rain['number'] == 50]\ndfRain_51 = df_rain[df_rain['number'] == 51]\ndfRain_52 = df_rain[df_rain['number'] == 52]\ndfRain_53 = df_rain[df_rain['number'] == 53]\ndfRain_54 = df_rain[df_rain['number'] == 54]\ndfRain_55 = df_rain[df_rain['number'] == 55]\ndfRain_56 = df_rain[df_rain['number'] == 56]\ndfRain_57 = df_rain[df_rain['number'] == 57]\ndfRain_58 = df_rain[df_rain['number'] == 58]\ndfRain_59 = df_rain[df_rain['number'] == 59]\ndfRain_60 = df_rain[df_rain['number'] == 60]\ndfRain_61 = df_rain[df_rain['number'] == 61]\ndfRain_62 = df_rain[df_rain['number'] == 62]\ndfRain_63 = df_rain[df_rain['number'] == 63]\ndfRain_64 = df_rain[df_rain['number'] == 64]\ndfRain_65 = df_rain[df_rain['number'] == 65]\ndfRain_66 = df_rain[df_rain['number'] == 66]\ndfRain_67 = df_rain[df_rain['number'] == 67]\ndfRain_68 = df_rain[df_rain['number'] == 68]\ndfRain_69 = df_rain[df_rain['number'] == 69]\ndfRain_70 = df_rain[df_rain['number'] == 70]\ndfRain_71 = df_rain[df_rain['number'] == 71]\ndfRain_72 = df_rain[df_rain['number'] == 72]\ndfRain_73 = df_rain[df_rain['number'] == 73]\ndfRain_74 = df_rain[df_rain['number'] == 74]\ndfRain_75 = df_rain[df_rain['number'] == 75]\ndfRain_76 = df_rain[df_rain['number'] == 76]\ndfRain_77 = df_rain[df_rain['number'] == 77]\ndfRain_78 = df_rain[df_rain['number'] == 78]\ndfRain_79 = df_rain[df_rain['number'] == 79]\ndfRain_80 = df_rain[df_rain['number'] == 80]\ndfRain_81 = df_rain[df_rain['number'] == 81]\ndfRain_82 = df_rain[df_rain['number'] == 82]\ndfRain_83 = df_rain[df_rain['number'] == 83]\ndfRain_84 = df_rain[df_rain['number'] == 84]\ndfRain_85 = df_rain[df_rain['number'] == 85]\ndfRain_86 = df_rain[df_rain['number'] == 86]\ndfRain_87 = df_rain[df_rain['number'] == 87]\ndfRain_88 = df_rain[df_rain['number'] == 88]\ndfRain_89 = df_rain[df_rain['number'] == 89]\ndfRain_90 = df_rain[df_rain['number'] == 90]\ndfRain_91 = df_rain[df_rain['number'] == 91]\ndfRain_92 = df_rain[df_rain['number'] == 92]\ndfRain_93 = 
df_rain[df_rain['number'] == 93]\ndfRain_94 = df_rain[df_rain['number'] == 94]\ndfRain_95 = df_rain[df_rain['number'] == 95]\ndfRain_96 = df_rain[df_rain['number'] == 96]\ndfRain_97 = df_rain[df_rain['number'] == 97]\ndfRain_98 = df_rain[df_rain['number'] == 98]\ndfRain_99 = df_rain[df_rain['number'] == 99]\ndfRain_100 = df_rain[df_rain['number'] == 100]\ndfRain_101 = df_rain[df_rain['number'] == 101]\ndfRain_102 = df_rain[df_rain['number'] == 102]\ndfRain_103 = df_rain[df_rain['number'] == 103]\ndfRain_104 = df_rain[df_rain['number'] == 104]\ndfRain_105 = df_rain[df_rain['number'] == 105]", "_____no_output_____" ], [ "# creating df for each station (sunny weather)\ndfDry_1 = df_dry[df_dry['number'] == 1]\ndfDry_2 = df_dry[df_dry['number'] == 2]\ndfDry_3 = df_dry[df_dry['number'] == 3]\ndfDry_4 = df_dry[df_dry['number'] == 4]\ndfDry_5 = df_dry[df_dry['number'] == 5]\ndfDry_6 = df_dry[df_dry['number'] == 6]\ndfDry_7 = df_dry[df_dry['number'] == 7]\ndfDry_8 = df_dry[df_dry['number'] == 8]\ndfDry_9 = df_dry[df_dry['number'] == 9]\ndfDry_10 = df_dry[df_dry['number'] == 10]\ndfDry_11 = df_dry[df_dry['number'] == 11]\ndfDry_12 = df_dry[df_dry['number'] == 12]\ndfDry_13 = df_dry[df_dry['number'] == 13]\ndfDry_14 = df_dry[df_dry['number'] == 14]\ndfDry_15 = df_dry[df_dry['number'] == 15]\ndfDry_16 = df_dry[df_dry['number'] == 16]\ndfDry_17 = df_dry[df_dry['number'] == 17]\ndfDry_18 = df_dry[df_dry['number'] == 18]\ndfDry_19 = df_dry[df_dry['number'] == 19]\ndfDry_20 = df_dry[df_dry['number'] == 20]\ndfDry_21 = df_dry[df_dry['number'] == 21]\ndfDry_22 = df_dry[df_dry['number'] == 22]\ndfDry_23 = df_dry[df_dry['number'] == 23]\ndfDry_24 = df_dry[df_dry['number'] == 24]\ndfDry_25 = df_dry[df_dry['number'] == 25]\ndfDry_26 = df_dry[df_dry['number'] == 26]\ndfDry_27 = df_dry[df_dry['number'] == 27]\ndfDry_28 = df_dry[df_dry['number'] == 28]\ndfDry_29 = df_dry[df_dry['number'] == 29]\ndfDry_30 = df_dry[df_dry['number'] == 30]\ndfDry_31 = df_dry[df_dry['number'] == 31]\ndfDry_32 = df_dry[df_dry['number'] == 32]\ndfDry_33 = df_dry[df_dry['number'] == 33]\ndfDry_34 = df_dry[df_dry['number'] == 34]\ndfDry_35 = df_dry[df_dry['number'] == 35]\ndfDry_36 = df_dry[df_dry['number'] == 36]\ndfDry_37 = df_dry[df_dry['number'] == 37]\ndfDry_38 = df_dry[df_dry['number'] == 38]\ndfDry_39 = df_dry[df_dry['number'] == 39]\ndfDry_40 = df_dry[df_dry['number'] == 40]\ndfDry_41 = df_dry[df_dry['number'] == 41]\ndfDry_42 = df_dry[df_dry['number'] == 42]\ndfDry_43 = df_dry[df_dry['number'] == 43]\ndfDry_44 = df_dry[df_dry['number'] == 44]\ndfDry_45 = df_dry[df_dry['number'] == 45]\ndfDry_46 = df_dry[df_dry['number'] == 46]\ndfDry_47 = df_dry[df_dry['number'] == 47]\ndfDry_48 = df_dry[df_dry['number'] == 48]\ndfDry_49 = df_dry[df_dry['number'] == 49]\ndfDry_50 = df_dry[df_dry['number'] == 50]\ndfDry_51 = df_dry[df_dry['number'] == 51]\ndfDry_52 = df_dry[df_dry['number'] == 52]\ndfDry_53 = df_dry[df_dry['number'] == 53]\ndfDry_54 = df_dry[df_dry['number'] == 54]\ndfDry_55 = df_dry[df_dry['number'] == 55]\ndfDry_56 = df_dry[df_dry['number'] == 56]\ndfDry_57 = df_dry[df_dry['number'] == 57]\ndfDry_58 = df_dry[df_dry['number'] == 58]\ndfDry_59 = df_dry[df_dry['number'] == 59]\ndfDry_60 = df_dry[df_dry['number'] == 60]\ndfDry_61 = df_dry[df_dry['number'] == 61]\ndfDry_62 = df_dry[df_dry['number'] == 62]\ndfDry_63 = df_dry[df_dry['number'] == 63]\ndfDry_64 = df_dry[df_dry['number'] == 64]\ndfDry_65 = df_dry[df_dry['number'] == 65]\ndfDry_66 = df_dry[df_dry['number'] == 66]\ndfDry_67 = df_dry[df_dry['number'] == 67]\ndfDry_68 = 
df_dry[df_dry['number'] == 68]\ndfDry_69 = df_dry[df_dry['number'] == 69]\ndfDry_70 = df_dry[df_dry['number'] == 70]\ndfDry_71 = df_dry[df_dry['number'] == 71]\ndfDry_72 = df_dry[df_dry['number'] == 72]\ndfDry_73 = df_dry[df_dry['number'] == 73]\ndfDry_74 = df_dry[df_dry['number'] == 74]\ndfDry_75 = df_dry[df_dry['number'] == 75]\ndfDry_76 = df_dry[df_dry['number'] == 76]\ndfDry_77 = df_dry[df_dry['number'] == 77]\ndfDry_78 = df_dry[df_dry['number'] == 78]\ndfDry_79 = df_dry[df_dry['number'] == 79]\ndfDry_80 = df_dry[df_dry['number'] == 80]\ndfDry_81 = df_dry[df_dry['number'] == 81]\ndfDry_82 = df_dry[df_dry['number'] == 82]\ndfDry_83 = df_dry[df_dry['number'] == 83]\ndfDry_84 = df_dry[df_dry['number'] == 84]\ndfDry_85 = df_dry[df_dry['number'] == 85]\ndfDry_86 = df_dry[df_dry['number'] == 86]\ndfDry_87 = df_dry[df_dry['number'] == 87]\ndfDry_88 = df_dry[df_dry['number'] == 88]\ndfDry_89 = df_dry[df_dry['number'] == 89]\ndfDry_90 = df_dry[df_dry['number'] == 90]\ndfDry_91 = df_dry[df_dry['number'] == 91]\ndfDry_92 = df_dry[df_dry['number'] == 92]\ndfDry_93 = df_dry[df_dry['number'] == 93]\ndfDry_94 = df_dry[df_dry['number'] == 94]\ndfDry_95 = df_dry[df_dry['number'] == 95]\ndfDry_96 = df_dry[df_dry['number'] == 96]\ndfDry_97 = df_dry[df_dry['number'] == 97]\ndfDry_98 = df_dry[df_dry['number'] == 98]\ndfDry_99 = df_dry[df_dry['number'] == 99]\ndfDry_100 = df_dry[df_dry['number'] == 100]\ndfDry_101 = df_dry[df_dry['number'] == 101]\ndfDry_102 = df_dry[df_dry['number'] == 102]\ndfDry_103 = df_dry[df_dry['number'] == 103]\ndfDry_104 = df_dry[df_dry['number'] == 104]\ndfDry_105 = df_dry[df_dry['number'] == 105]\n", "_____no_output_____" ], [ "def predictionsRain(x):\n lm_r = sm.ols(formula=\"available_bikes ~ humidity + temp_max + hour + day_Monday + day_Tuesday + day_Wednesday + day_Thursday + day_Friday + day_Sunday\", data=x).fit()\n return lm_r\n", "_____no_output_____" ], [ "def predictionsDry(x):\n lm_d = sm.ols(formula=\"available_bikes ~ deg + humidity + temp_max\", data=x).fit()\n return lm_d", "_____no_output_____" ], [ "def create_df (dfNumber, predictor):\n rain_predictions = pd.DataFrame({'Monday': dfNumber.day_Monday, 'Tuesday': dfNumber.day_Tuesday, 'Wednesday': dfNumber.day_Wednesday, 'Thursday': dfNumber.day_Thursday, 'Friday': dfNumber.day_Friday, 'Saturday': dfNumber.day_Saturday, 'Sunday': dfNumber.day_Sunday, 'hour': dfNumber.hour, 'PredictedBikes': predictor.predict(dfNumber)})\n return rain_predictions\n ", "_____no_output_____" ] ], [ [ "Creating dataframes for each station now for rainy weather", "_____no_output_____" ] ], [ [ "lmRain_1 = predictionsRain(dfRain_1)\nlmRain_2 = predictionsRain(dfRain_2)\nlmRain_3 = predictionsRain(dfRain_3)\nlmRain_4 = predictionsRain(dfRain_4)\nlmRain_5 = predictionsRain(dfRain_5)\nlmRain_6 = predictionsRain(dfRain_6)\nlmRain_7 = predictionsRain(dfRain_7)\nlmRain_8 = predictionsRain(dfRain_8)\nlmRain_9 = predictionsRain(dfRain_9)\nlmRain_10 = predictionsRain(dfRain_10)\nlmRain_11 = predictionsRain(dfRain_11)\nlmRain_12 = predictionsRain(dfRain_12)\nlmRain_13 = predictionsRain(dfRain_13)\nlmRain_14 = predictionsRain(dfRain_14)\nlmRain_15 = predictionsRain(dfRain_15)\nlmRain_16 = predictionsRain(dfRain_16)\nlmRain_17 = predictionsRain(dfRain_17)\nlmRain_18 = predictionsRain(dfRain_18)\nlmRain_19 = predictionsRain(dfRain_19)\n#lmRain_20 = predictionsRain(dfRain_20)\nlmRain_21 = predictionsRain(dfRain_21)\nlmRain_22 = predictionsRain(dfRain_22)\nlmRain_23 = predictionsRain(dfRain_23)\nlmRain_24 = predictionsRain(dfRain_24)\nlmRain_25 = 
predictionsRain(dfRain_25)\nlmRain_26 = predictionsRain(dfRain_26)\nlmRain_27 = predictionsRain(dfRain_27)\nlmRain_28 = predictionsRain(dfRain_28)\nlmRain_29 = predictionsRain(dfRain_29)\nlmRain_30 = predictionsRain(dfRain_30)\nlmRain_31 = predictionsRain(dfRain_31)\nlmRain_32 = predictionsRain(dfRain_32)\nlmRain_33 = predictionsRain(dfRain_33)\nlmRain_34 = predictionsRain(dfRain_34)\nlmRain_35 = predictionsRain(dfRain_35)\nlmRain_36 = predictionsRain(dfRain_36)\nlmRain_37 = predictionsRain(dfRain_37)\nlmRain_38 = predictionsRain(dfRain_38)\nlmRain_39 = predictionsRain(dfRain_39)\nlmRain_40 = predictionsRain(dfRain_40)\nlmRain_41 = predictionsRain(dfRain_41)\nlmRain_42 = predictionsRain(dfRain_42)\nlmRain_43 = predictionsRain(dfRain_43)\nlmRain_44 = predictionsRain(dfRain_44)\nlmRain_45 = predictionsRain(dfRain_45)\nlmRain_46 = predictionsRain(dfRain_46)\nlmRain_47= predictionsRain(dfRain_47)\nlmRain_48 = predictionsRain(dfRain_48)\nlmRain_49 = predictionsRain(dfRain_49)\nlmRain_50 = predictionsRain(dfRain_50)\nlmRain_51 = predictionsRain(dfRain_51)\nlmRain_52= predictionsRain(dfRain_52)\nlmRain_53 = predictionsRain(dfRain_53)\nlmRain_54 = predictionsRain(dfRain_54)\nlmRain_55 = predictionsRain(dfRain_55)\nlmRain_56= predictionsRain(dfRain_56)\nlmRain_57 = predictionsRain(dfRain_57)\nlmRain_58 = predictionsRain(dfRain_58)\nlmRain_59 = predictionsRain(dfRain_59)\nlmRain_60= predictionsRain(dfRain_60)\nlmRain_61 = predictionsRain(dfRain_61)\nlmRain_62= predictionsRain(dfRain_62)\nlmRain_63 = predictionsRain(dfRain_63)\nlmRain_64= predictionsRain(dfRain_64)\nlmRain_65 = predictionsRain(dfRain_65)\nlmRain_66= predictionsRain(dfRain_66)\nlmRain_67 = predictionsRain(dfRain_67)\nlmRain_68= predictionsRain(dfRain_68)\nlmRain_69 = predictionsRain(dfRain_69)\nlmRain_70= predictionsRain(dfRain_70)\nlmRain_71 = predictionsRain(dfRain_71)\nlmRain_72= predictionsRain(dfRain_72)\nlmRain_73 = predictionsRain(dfRain_73)\nlmRain_74= predictionsRain(dfRain_74)\nlmRain_75= predictionsRain(dfRain_75)\nlmRain_76= predictionsRain(dfRain_76)\nlmRain_77 = predictionsRain(dfRain_77)\nlmRain_78= predictionsRain(dfRain_78)\nlmRain_79 = predictionsRain(dfRain_79)\nlmRain_80= predictionsRain(dfRain_80)\nlmRain_81 = predictionsRain(dfRain_81)\nlmRain_82= predictionsRain(dfRain_82)\nlmRain_83 = predictionsRain(dfRain_83)\nlmRain_84= predictionsRain(dfRain_84)\nlmRain_85 = predictionsRain(dfRain_85)\nlmRain_86= predictionsRain(dfRain_86)\nlmRain_87 = predictionsRain(dfRain_87)\nlmRain_88= predictionsRain(dfRain_88)\nlmRain_89 = predictionsRain(dfRain_89)\nlmRain_90= predictionsRain(dfRain_90)\nlmRain_91 = predictionsRain(dfRain_91)\nlmRain_92= predictionsRain(dfRain_92)\nlmRain_93 = predictionsRain(dfRain_93)\nlmRain_94= predictionsRain(dfRain_94)\nlmRain_95 = predictionsRain(dfRain_95)\nlmRain_96= predictionsRain(dfRain_96)\nlmRain_97 = predictionsRain(dfRain_97)\nlmRain_98= predictionsRain(dfRain_98)\nlmRain_99 = predictionsRain(dfRain_99)\nlmRain_100= predictionsRain(dfRain_100)\nlmRain_101 = predictionsRain(dfRain_101)\nlmRain_102= predictionsRain(dfRain_102)\nlmRain_103 = predictionsRain(dfRain_103)\nlmRain_104= predictionsRain(dfRain_104)\nlmRain_105= predictionsRain(dfRain_105)", "_____no_output_____" ], [ "df_weather_rain = df_weather.loc[df_weather['mainDescription'] == 'Rain']\ndf_weather_rain.shape", "_____no_output_____" ], [ "df_weather_dry = df_weather.loc[df_weather['mainDescription'] == 'Clouds']\ndf_weather_dry.shape", "_____no_output_____" ], [ "df = df_bike[['day', 'hour']].copy()\n\ndf= 
df.drop_duplicates()\ndf = df.reset_index()\n\ndf_weather_rain = df_weather_rain[200:368]\ndf_weather_rain = df_weather_rain.reset_index()\n\nresult = pd.merge(df, df_weather_rain, right_index=True, left_index=True)\n\nresult = pd.get_dummies(result, columns=[\"day\"])", "_____no_output_____" ], [ "df_dry = df_bike[['day', 'hour']].copy()\n\ndf_dry = df_dry.drop_duplicates()\ndf_dry = df_dry.reset_index()\n\ndf_weather_dry = df_weather_dry[1100:1268]\ndf_weather_dry = df_weather_dry.reset_index()\n\nresult_dry = pd.merge(df_dry, df_weather_dry, right_index=True, left_index=True)\n\nresult_dry = pd.get_dummies(result_dry, columns=[\"day\"])", "_____no_output_____" ], [ "stationRain1=create_df(result, lmRain_1)\nstationRain2=create_df(result, lmRain_2)\nstationRain3=create_df(result, lmRain_3)\nstationRain4=create_df(result, lmRain_4)\nstationRain5=create_df(result, lmRain_5)\nstationRain6=create_df(result, lmRain_6)\nstationRain7=create_df(result, lmRain_7)\nstationRain8=create_df(result, lmRain_8)\nstationRain9=create_df(result, lmRain_9)\nstationRain10=create_df(result, lmRain_10)\nstationRain11=create_df(result, lmRain_11)\nstationRain12=create_df(result, lmRain_12)\nstationRain13=create_df(result, lmRain_13)\nstationRain14=create_df(result, lmRain_14)\nstationRain15=create_df(result, lmRain_15)\nstationRain16=create_df(result, lmRain_16)\nstationRain17=create_df(result, lmRain_17)\nstationRain18=create_df(result, lmRain_18)\nstationRain19=create_df(result, lmRain_19)\n#stationRain20=create_df(result, lmRain_20)\nstationRain21=create_df(result, lmRain_21)\nstationRain22=create_df(result, lmRain_22)\nstationRain23=create_df(result, lmRain_23)\nstationRain24=create_df(result, lmRain_24)\nstationRain25=create_df(result, lmRain_25)\nstationRain26=create_df(result, lmRain_26)\nstationRain27=create_df(result, lmRain_27)\nstationRain28=create_df(result, lmRain_28)\nstationRain29=create_df(result, lmRain_29)\nstationRain30=create_df(result, lmRain_30)\nstationRain31=create_df(result, lmRain_31)\nstationRain32=create_df(result, lmRain_32)\nstationRain33=create_df(result, lmRain_33)\nstationRain34=create_df(result, lmRain_34)\nstationRain35=create_df(result, lmRain_35)\nstationRain36=create_df(result, lmRain_36)\nstationRain37=create_df(result, lmRain_37)\nstationRain38=create_df(result, lmRain_38)\nstationRain39=create_df(result, lmRain_39)\nstationRain40=create_df(result, lmRain_40)\nstationRain41=create_df(result, lmRain_41)\nstationRain42=create_df(result, lmRain_42)\nstationRain43=create_df(result, lmRain_43)\nstationRain44=create_df(result, lmRain_44)\nstationRain45=create_df(result, lmRain_45)\nstationRain46=create_df(result, lmRain_46)\nstationRain47=create_df(result, lmRain_47)\nstationRain48=create_df(result, lmRain_48)\nstationRain49=create_df(result, lmRain_49)\nstationRain50=create_df(result, lmRain_50)\nstationRain51=create_df(result, lmRain_51)\nstationRain52=create_df(result, lmRain_52)\nstationRain53=create_df(result, lmRain_53)\nstationRain54=create_df(result, lmRain_54)\nstationRain55=create_df(result, lmRain_55)\nstationRain56=create_df(result, lmRain_56)\nstationRain57=create_df(result, lmRain_57)\nstationRain58=create_df(result, lmRain_58)\nstationRain59=create_df(result, lmRain_59)\nstationRain60=create_df(result, lmRain_60)\nstationRain61=create_df(result, lmRain_61)\nstationRain62=create_df(result, lmRain_62)\nstationRain63=create_df(result, lmRain_63)\nstationRain64=create_df(result, lmRain_64)\nstationRain65=create_df(result, lmRain_65)\nstationRain66=create_df(result, lmRain_66)\nstationRain67=create_df(result, lmRain_67)\nstationRain68=create_df(result, lmRain_68)\nstationRain69=create_df(result, lmRain_69)\nstationRain70=create_df(result, lmRain_70)\nstationRain71=create_df(result, lmRain_71)\nstationRain72=create_df(result, lmRain_72)\nstationRain73=create_df(result, lmRain_73)\nstationRain74=create_df(result, lmRain_74)\nstationRain75=create_df(result, lmRain_75)\nstationRain76=create_df(result, lmRain_76)\nstationRain77=create_df(result, lmRain_77)\nstationRain78=create_df(result, lmRain_78)\nstationRain79=create_df(result, lmRain_79)\nstationRain80=create_df(result, lmRain_80)\nstationRain81=create_df(result, lmRain_81)\nstationRain82=create_df(result, lmRain_82)\nstationRain83=create_df(result, lmRain_83)\nstationRain84=create_df(result, lmRain_84)\nstationRain85=create_df(result, lmRain_85)\nstationRain86=create_df(result, lmRain_86)\nstationRain87=create_df(result, lmRain_87)\nstationRain88=create_df(result, lmRain_88)\nstationRain89=create_df(result, lmRain_89)\nstationRain90=create_df(result, lmRain_90)\nstationRain91=create_df(result, lmRain_91)\nstationRain92=create_df(result, lmRain_92)\nstationRain93=create_df(result, lmRain_93)\nstationRain94=create_df(result, lmRain_94)\nstationRain95=create_df(result, lmRain_95)\nstationRain96=create_df(result, lmRain_96)\nstationRain97=create_df(result, lmRain_97)\nstationRain98=create_df(result, lmRain_98)\nstationRain99=create_df(result, lmRain_99)\nstationRain100=create_df(result, lmRain_100)\nstationRain101=create_df(result, lmRain_101)\nstationRain102=create_df(result, lmRain_102)\nstationRain103=create_df(result, lmRain_103)\nstationRain104=create_df(result, lmRain_104)\nstationRain105=create_df(result, lmRain_105)\n", "_____no_output_____" ], [ "stationRain90.shape", "_____no_output_____" ] ], [ [ "Creating dataframes for each station now for dry weather", "_____no_output_____" ] ], [ [ "lmDry_1 = predictionsDry(dfDry_1)\nlmDry_2 = predictionsDry(dfDry_2)\nlmDry_3 = predictionsDry(dfDry_3)\nlmDry_4 = predictionsDry(dfDry_4)\nlmDry_5 = predictionsDry(dfDry_5)\nlmDry_6 = predictionsDry(dfDry_6)\nlmDry_7 = predictionsDry(dfDry_7)\nlmDry_8 = predictionsDry(dfDry_8)\nlmDry_9 = predictionsDry(dfDry_9)\nlmDry_10 = predictionsDry(dfDry_10)\nlmDry_11 = predictionsDry(dfDry_11)\nlmDry_12 = predictionsDry(dfDry_12)\nlmDry_13 = predictionsDry(dfDry_13)\nlmDry_14 = predictionsDry(dfDry_14)\nlmDry_15 = predictionsDry(dfDry_15)\nlmDry_16 = predictionsDry(dfDry_16)\nlmDry_17 = predictionsDry(dfDry_17)\nlmDry_18 = predictionsDry(dfDry_18)\nlmDry_19 = predictionsDry(dfDry_19)\n#lmDry_20 = predictionsDry(dfDry_20)\nlmDry_21 = predictionsDry(dfDry_21)\nlmDry_22 = predictionsDry(dfDry_22)\nlmDry_23 = predictionsDry(dfDry_23)\nlmDry_24 = predictionsDry(dfDry_24)\nlmDry_25 = predictionsDry(dfDry_25)\nlmDry_26 = predictionsDry(dfDry_26)\nlmDry_27 = predictionsDry(dfDry_27)\nlmDry_28 = predictionsDry(dfDry_28)\nlmDry_29 = predictionsDry(dfDry_29)\nlmDry_30 = predictionsDry(dfDry_30)\nlmDry_31 = predictionsDry(dfDry_31)\nlmDry_32 = predictionsDry(dfDry_32)\nlmDry_33 = predictionsDry(dfDry_33)\nlmDry_34 = predictionsDry(dfDry_34)\nlmDry_35 = predictionsDry(dfDry_35)\nlmDry_36 = predictionsDry(dfDry_36)\nlmDry_37 = predictionsDry(dfDry_37)\nlmDry_38 = predictionsDry(dfDry_38)\nlmDry_39 = predictionsDry(dfDry_39)\nlmDry_40 = predictionsDry(dfDry_40)\nlmDry_41 = predictionsDry(dfDry_41)\nlmDry_42 = predictionsDry(dfDry_42)\nlmDry_43 = predictionsDry(dfDry_43)\nlmDry_44 = predictionsDry(dfDry_44)\nlmDry_45 = 
predictionsDry(dfDry_45)\nlmDry_46 = predictionsDry(dfDry_46)\nlmDry_47= predictionsDry(dfDry_47)\nlmDry_48 = predictionsDry(dfDry_48)\nlmDry_49 = predictionsDry(dfDry_49)\nlmDry_50 = predictionsDry(dfDry_50)\nlmDry_51 = predictionsDry(dfDry_51)\nlmDry_52= predictionsDry(dfDry_52)\nlmDry_53 = predictionsDry(dfDry_53)\nlmDry_54 = predictionsDry(dfDry_54)\nlmDry_55 = predictionsDry(dfDry_55)\nlmDry_56= predictionsDry(dfDry_56)\nlmDry_57 = predictionsDry(dfDry_57)\nlmDry_58 = predictionsDry(dfDry_58)\nlmDry_59 = predictionsDry(dfDry_59)\nlmDry_60= predictionsDry(dfDry_60)\nlmDry_61 = predictionsDry(dfDry_61)\nlmDry_62= predictionsDry(dfDry_62)\nlmDry_63 = predictionsDry(dfDry_63)\nlmDry_64= predictionsDry(dfDry_64)\nlmDry_65 = predictionsDry(dfDry_65)\nlmDry_66= predictionsDry(dfDry_66)\nlmDry_67 = predictionsDry(dfDry_67)\nlmDry_68= predictionsDry(dfDry_68)\nlmDry_69 = predictionsDry(dfDry_69)\nlmDry_70= predictionsDry(dfDry_70)\nlmDry_71 = predictionsDry(dfDry_71)\nlmDry_72= predictionsDry(dfDry_72)\nlmDry_73 = predictionsDry(dfDry_73)\nlmDry_74= predictionsDry(dfDry_74)\nlmDry_75= predictionsDry(dfDry_75)\nlmDry_76= predictionsDry(dfDry_76)\nlmDry_77 = predictionsDry(dfDry_77)\nlmDry_78= predictionsDry(dfDry_78)\nlmDry_79 = predictionsDry(dfDry_79)\nlmDry_80= predictionsDry(dfDry_80)\nlmDry_81 = predictionsDry(dfDry_81)\nlmDry_82= predictionsDry(dfDry_82)\nlmDry_83 = predictionsDry(dfDry_83)\nlmDry_84= predictionsDry(dfDry_84)\nlmDry_85 = predictionsDry(dfDry_85)\nlmDry_86= predictionsDry(dfDry_86)\nlmDry_87 = predictionsDry(dfDry_87)\nlmDry_88= predictionsDry(dfDry_88)\nlmDry_89 = predictionsDry(dfDry_89)\nlmDry_90= predictionsDry(dfDry_90)\nlmDry_91 = predictionsDry(dfDry_91)\nlmDry_92= predictionsDry(dfDry_92)\nlmDry_93 = predictionsDry(dfDry_93)\nlmDry_94= predictionsDry(dfDry_94)\nlmDry_95 = predictionsDry(dfDry_95)\nlmDry_96= predictionsDry(dfDry_96)\nlmDry_97 = predictionsDry(dfDry_97)\nlmDry_98= predictionsDry(dfDry_98)\nlmDry_99 = predictionsDry(dfDry_99)\nlmDry_100= predictionsDry(dfDry_100)\nlmDry_101 = predictionsDry(dfDry_101)\nlmDry_102= predictionsDry(dfDry_102)\nlmDry_103 = predictionsDry(dfDry_103)\nlmDry_104= predictionsDry(dfDry_104)\nlmDry_105= predictionsDry(dfDry_105)\n", "_____no_output_____" ], [ "stationDry1=create_df(result_dry, lmDry_1)\nstationDry2=create_df(result_dry, lmDry_2)\nstationDry3=create_df(result_dry, lmDry_3)\nstationDry4=create_df(result_dry, lmDry_4)\nstationDry5=create_df(result_dry, lmDry_5)\nstationDry6=create_df(result_dry, lmDry_6)\nstationDry7=create_df(result_dry, lmDry_7)\nstationDry8=create_df(result_dry, lmDry_8)\nstationDry9=create_df(result_dry, lmDry_9)\nstationDry10=create_df(result_dry, lmDry_10)\nstationDry11=create_df(result_dry, lmDry_11)\nstationDry12=create_df(result_dry, lmDry_12)\nstationDry13=create_df(result_dry, lmDry_13)\nstationDry14=create_df(result_dry, lmDry_14)\nstationDry15=create_df(result_dry, lmDry_15)\nstationDry16=create_df(result_dry, lmDry_16)\nstationDry17=create_df(result_dry, lmDry_17)\nstationDry18=create_df(result_dry, lmDry_18)\nstationDry19=create_df(result_dry, lmDry_19)\n#stationDry20=create_df(result_dry, lmDry_20)\nstationDry21=create_df(result_dry, lmDry_21)\nstationDry22=create_df(result_dry, lmDry_22)\nstationDry23=create_df(result_dry, lmDry_23)\nstationDry24=create_df(result_dry, lmDry_24)\nstationDry25=create_df(result_dry, lmDry_25)\nstationDry26=create_df(result_dry, lmDry_26)\nstationDry27=create_df(result_dry, lmDry_27)\nstationDry28=create_df(result_dry, 
lmDry_28)\nstationDry29=create_df(result_dry, lmDry_29)\nstationDry30=create_df(result_dry, lmDry_30)\nstationDry31=create_df(result_dry, lmDry_31)\nstationDry32=create_df(result_dry, lmDry_32)\nstationDry33=create_df(result_dry, lmDry_33)\nstationDry34=create_df(result_dry, lmDry_34)\nstationDry35=create_df(result_dry, lmDry_35)\nstationDry36=create_df(result_dry, lmDry_36)\nstationDry37=create_df(result_dry, lmDry_37)\nstationDry38=create_df(result_dry, lmDry_38)\nstationDry39=create_df(result_dry, lmDry_39)\nstationDry40=create_df(result_dry, lmDry_40)\nstationDry41=create_df(result_dry, lmDry_41)\nstationDry42=create_df(result_dry, lmDry_42)\nstationDry43=create_df(result_dry, lmDry_43)\nstationDry44=create_df(result_dry, lmDry_44)\nstationDry45=create_df(result_dry, lmDry_45)\nstationDry46=create_df(result_dry, lmDry_46)\nstationDry47=create_df(result_dry, lmDry_47)\nstationDry48=create_df(result_dry, lmDry_48)\nstationDry49=create_df(result_dry, lmDry_49)\nstationDry50=create_df(result_dry, lmDry_50)\nstationDry51=create_df(result_dry, lmDry_51)\nstationDry52=create_df(result_dry, lmDry_52)\nstationDry53=create_df(result_dry, lmDry_53)\nstationDry54=create_df(result_dry, lmDry_54)\nstationDry55=create_df(result_dry, lmDry_55)\nstationDry56=create_df(result_dry, lmDry_56)\nstationDry57=create_df(result_dry, lmDry_57)\nstationDry58=create_df(result_dry, lmDry_58)\nstationDry59=create_df(result_dry, lmDry_59)\nstationDry60=create_df(result_dry, lmDry_60)\nstationDry61=create_df(result_dry, lmDry_61)\nstationDry62=create_df(result_dry, lmDry_62)\nstationDry63=create_df(result_dry, lmDry_63)\nstationDry64=create_df(result_dry, lmDry_64)\nstationDry65=create_df(result_dry, lmDry_65)\nstationDry66=create_df(result_dry, lmDry_66)\nstationDry67=create_df(result_dry, lmDry_67)\nstationDry68=create_df(result_dry, lmDry_68)\nstationDry69=create_df(result_dry, lmDry_69)\nstationDry70=create_df(result_dry, lmDry_70)\nstationDry71=create_df(result_dry, lmDry_71)\nstationDry72=create_df(result_dry, lmDry_72)\nstationDry73=create_df(result_dry, lmDry_73)\nstationDry74=create_df(result_dry, lmDry_74)\nstationDry75=create_df(result_dry, lmDry_75)\nstationDry76=create_df(result_dry, lmDry_76)\nstationDry77=create_df(result_dry, lmDry_77)\nstationDry78=create_df(result_dry, lmDry_78)\nstationDry79=create_df(result_dry, lmDry_79)\nstationDry80=create_df(result_dry, lmDry_80)\nstationDry81=create_df(result_dry, lmDry_81)\nstationDry82=create_df(result_dry, lmDry_82)\nstationDry83=create_df(result_dry, lmDry_83)\nstationDry84=create_df(result_dry, lmDry_84)\nstationDry85=create_df(result_dry, lmDry_85)\nstationDry86=create_df(result_dry, lmDry_86)\nstationDry87=create_df(result_dry, lmDry_87)\nstationDry88=create_df(result_dry, lmDry_88)\nstationDry89=create_df(result_dry, lmDry_89)\nstationDry90=create_df(result_dry, lmDry_90)\nstationDry91=create_df(result_dry, lmDry_91)\nstationDry92=create_df(result_dry, lmDry_92)\nstationDry93=create_df(result_dry, lmDry_93)\nstationDry94=create_df(result_dry, lmDry_94)\nstationDry95=create_df(result_dry, lmDry_95)\nstationDry96=create_df(result_dry, lmDry_96)\nstationDry97=create_df(result_dry, lmDry_97)\nstationDry98=create_df(result_dry, lmDry_98)\nstationDry99=create_df(result_dry, lmDry_99)\nstationDry100=create_df(result_dry, lmDry_100)\nstationDry101=create_df(result_dry, lmDry_101)\nstationDry102=create_df(result_dry, lmDry_102)\nstationDry103=create_df(result_dry, lmDry_103)\nstationDry104=create_df(result_dry, lmDry_104)\nstationDry105=create_df(result_dry, 
lmDry_105)\n", "_____no_output_____" ], [ "def convert(station,strstation):\n    \n    station = eval(station)\n\n    df_rp1 = station[['Monday','Tuesday','Wednesday','Thursday','Friday','Saturday','Sunday']].copy()\n\n    df_rp1 = pd.get_dummies(df_rp1).idxmax(1)\n\n    dfrp1 = df_rp1.to_frame()\n\n    df = station.join(dfrp1)\n\n    df = df.rename(columns={0: 'day'})\n\n    station = df[['PredictedBikes','day','hour']].copy()\n\n    names = [ 'Monday', 'Tuesday', 'Wednesday', 'Thursday','Friday', 'Saturday', 'Sunday']\n    station['day'] = station['day'].astype('category', categories=names, ordered=True)\n    \n    parse(station,strstation)", "_____no_output_____" ], [ "def parse(station, strstation):\n    \n    d = defaultdict(list)\n\n    y = station.groupby(['day','hour'])['PredictedBikes'].mean()\n    y = OrderedDict(y)\n\n    i = 0\n    for k,v in y.items():\n\n        if v < 0:\n            d[k[1]].append(0)\n        else:\n            d[k[1]].append(v)\n    \n    file = \"pickleFiles/\"+strstation+\".pickle\"\n\n    pickle_out = open(file,\"wb\")\n    pickle.dump(d, pickle_out)\n    pickle_out.close()", "_____no_output_____" ], [ "for i in range(1,106):  # stations run 1..105; 20 has no service\n    if i == 20:\n        pass\n    else:\n        station = \"stationRain\"+str(i)\n        convert(station, station)", "_____no_output_____" ], [ "for i in range(1,106):  # stations run 1..105; 20 has no service\n    if i == 20:\n        pass\n    else:\n        station = \"stationDry\"+str(i)\n        convert(station, station)", "_____no_output_____" ], [ "day = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']\n\npickle_out = open(\"pickleFiles/day.pickle\",\"wb\")\npickle.dump(day, pickle_out)\npickle_out.close()", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ] ]
4a6193a0201b4fe2bf1abdfe037fbd48c5de84eb
299,641
ipynb
Jupyter Notebook
DRU/API_Training/Python/Solutions/6. Python API Training - Continuous Model Training [Solution].ipynb
harunpehlivan/tutorials-for-data-scientists
03acf8a72bafc1f82ed806e46468d13f4ee0e449
[ "Apache-2.0" ]
51
2020-05-15T14:01:21.000Z
2022-02-03T09:45:30.000Z
DRU/API_Training/Python/Solutions/6. Python API Training - Continuous Model Training [Solution].ipynb
harunpehlivan/tutorials-for-data-scientists
03acf8a72bafc1f82ed806e46468d13f4ee0e449
[ "Apache-2.0" ]
5
2020-07-06T17:17:59.000Z
2022-02-14T16:49:42.000Z
DRU/API_Training/Python/Solutions/6. Python API Training - Continuous Model Training [Solution].ipynb
harunpehlivan/tutorials-for-data-scientists
03acf8a72bafc1f82ed806e46468d13f4ee0e449
[ "Apache-2.0" ]
40
2020-05-27T14:36:49.000Z
2022-02-13T06:13:21.000Z
37.80005
295
0.222483
[ [ [ "### 6. Python API Training - Continuous Model Training [Solution]\n\n<b>Author:</b> Thodoris Petropoulos <br>\n<b>Contributors:</b> Rajiv Shah\n\nThis is the 6th exercise to complete in order to finish your `Python API Training for DataRobot` course! This exercise teaches you how to deploy a trained model, make predictions (**Warning**: Multiple ways of getting predictions out of DataRobot), and monitor drift to replace a model.\n\nHere are the actual sections of the notebook alongside time to complete: \n\n1. Connect to DataRobot. [3min]<br>\n2. Retrieve the first project created in `Exercise 4 - Model Factory`. [5min]\n3. Search for the `recommended for deployment` model and deploy it as a rest API. [20min]\n4. Create a scoring procedure using dataset (1) that will force data drift on that deployment. [25min]\n5. Check data drift. Does it look like data is drifting?. [3min]\n6. Create a new project using data (2). [5min]\n7. Replace the previously deployed model with the new `recommended for deployment` model from the new project. [10min]\n\nEach section will have specific instructions so do not worry if things are still blurry!\n\nAs always, consult:\n\n- [API Documentation](https://datarobot-public-api-client.readthedocs-hosted.com)\n- [Samples](https://github.com/datarobot-community/examples-for-data-scientists)\n- [Tutorials](https://github.com/datarobot-community/tutorials-for-data-scientists)\n\nThe last two links should provide you with the snippets you need to complete most of these exercises.\n\n<b>Data</b>\n\n(1) The dataset we will be using throughout these exercises is the well-known `readmissions dataset`. You can access it or directly download it through DataRobot's public S3 bucket [here](https://s3.amazonaws.com/datarobot_public_datasets/10k_diabetes.csv).\n\n(2) This dataset will be used to retrain the model. It can be accessed [here](https://s3.amazonaws.com/datarobot_public_datasets/10k_diabetes_scoring.csv) through DataRobot's public S3 bucket.", "_____no_output_____" ], [ "### Import Libraries\nImport libraries here as you start finding out what libraries are needed. The DataRobot package is already included for your convenience.", "_____no_output_____" ] ], [ [ "import datarobot as dr\n\n#Proposed Libraries needed\nimport pandas as pd", "_____no_output_____" ] ], [ [ "### 1. Connect to DataRobot [3min]", "_____no_output_____" ] ], [ [ "#Possible solution\ndr.Client(config_path='../../github/config.yaml')", "_____no_output_____" ] ], [ [ "### 2. Retrieve the first project created in `Exercise 4 - Model Factory` . [5min]\n\nThis should be the first project created during the exercise. Not one of the projects created using a sample of `readmission_type_id`.", "_____no_output_____" ] ], [ [ "#Proposed Solution\nproject = dr.Project.get('YOUR_PROJECT_ID')", "_____no_output_____" ] ], [ [ "### 3. Search for the `recommended for deployment` model and deploy it as a rest API. [10min]\n\n**Hint**: The recommended model can be found using the `DataRobot.ModelRecommendation` method. 
\n\n**Hint 2**: Use the `update_drift_tracking_settings` method on the DataRobot Deployment object to enable data drift tracking.", "_____no_output_____" ] ], [ [ "# Proposed Solution\n\n#Find the recommended model\nrecommended_model = dr.ModelRecommendation.get(project.id).get_model()\n\n#Deploy the model\nprediction_server = dr.PredictionServer.list()[0]\n\ndeployment = dr.Deployment.create_from_learning_model(recommended_model.id, label='Readmissions Deployment', default_prediction_server_id=prediction_server.id)\ndeployment.update_drift_tracking_settings(feature_drift_enabled=True)", "_____no_output_____" ] ], [ [ "### 4. Create a scoring procedure using dataset (1) that will force data drift on that deployment. [25min]\n\n**Instructions**\n1. Take the first 100 rows of dataset (1) and save them to a Pandas DataFrame\n2. Score 5 times using these observations to force drift.\n3. Use the deployment you created during `question 3`.\n\n**Hint**: The easiest way to score using a deployed model in DataRobot is to go to the `Deployments` page within DataRobot and navigate to the `Integrations` and `scoring code` tab. There you will find sample code for Python that you can use to score.\n\n**Hint 2**: The only thing you will have to change for the code to work is change the filename variable to point to the csv file to be scored and create a for loop.", "_____no_output_____" ] ], [ [ "# Proposed Solution \n\n#Save the dataset that is going to be scored as a csv file\nscoring_dataset = pd.read_csv('https://s3.amazonaws.com/datarobot_public_datasets/10k_diabetes.csv').head(100)\nscoring_dataset.to_csv('scoring_dataset.csv', index=False)\n\n#This has been copied from the `integrations` tab. \n#The only thing you actually have to do is change the filename variable in the bottom of the script and\n#create the for loop.\n\n\"\"\"\nUsage:\n python datarobot-predict.py <input-file.csv>\n \nThis example uses the requests library which you can install with:\n pip install requests\nWe highly recommend that you update SSL certificates with:\n pip install -U urllib3[secure] certifi\n\"\"\"\nimport sys\nimport json\nimport requests\n \nDATAROBOT_KEY = ''\nAPI_KEY = ''\nUSERNAME = ''\n \nDEPLOYMENT_ID = ''\nMAX_PREDICTION_FILE_SIZE_BYTES = 52428800 # 50 MB\n \n \nclass DataRobotPredictionError(Exception):\n \"\"\"Raised if there are issues getting predictions from DataRobot\"\"\"\n \n \ndef make_datarobot_deployment_predictions(data, deployment_id):\n \"\"\"\n Make predictions on data provided using DataRobot deployment_id provided.\n See docs for details:\n https://app.eu.datarobot.com/docs/users-guide/predictions/api/new-prediction-api.html\n \n Parameters\n ----------\n data : str\n Feature1,Feature2\n numeric_value,string\n deployment_id : str\n The ID of the deployment to make predictions with.\n \n Returns\n -------\n Response schema:\n https://app.eu.datarobot.com/docs/users-guide/predictions/api/new-prediction-api.html#response-schema\n \n Raises\n ------\n DataRobotPredictionError if there are issues getting predictions from DataRobot\n \"\"\"\n # Set HTTP headers. 
The charset should match the contents of the file.\n    headers = {'Content-Type': 'text/plain; charset=UTF-8', 'datarobot-key': DATAROBOT_KEY}\n \n    url = 'https://cfds.orm.eu.datarobot.com/predApi/v1.0/deployments/{deployment_id}/'\\\n          'predictions'.format(deployment_id=deployment_id)\n    # Make API request for predictions\n    predictions_response = requests.post(\n        url,\n        auth=(USERNAME, API_KEY),\n        data=data,\n        headers=headers,\n    )\n    _raise_dataroboterror_for_status(predictions_response)\n    # Return a Python dict following the schema in the documentation\n    return predictions_response.json()\n \n \ndef _raise_dataroboterror_for_status(response):\n    \"\"\"Raise DataRobotPredictionError if the request fails along with the response returned\"\"\"\n    try:\n        response.raise_for_status()\n    except requests.exceptions.HTTPError:\n        err_msg = '{code} Error: {msg}'.format(\n            code=response.status_code, msg=response.text)\n        raise DataRobotPredictionError(err_msg)\n \n \ndef main(filename, deployment_id):\n    \"\"\"\n    Return an exit code on script completion or error. Codes > 0 are errors to the shell.\n    Also useful as a usage demonstration of\n    `make_datarobot_deployment_predictions(data, deployment_id)`\n    \"\"\"\n    if not filename:\n        print(\n            'Input file is a required argument. '\n            'Usage: python datarobot-predict.py <input-file.csv>')\n        return 1\n    data = open(filename, 'rb').read()\n    data_size = sys.getsizeof(data)\n    if data_size >= MAX_PREDICTION_FILE_SIZE_BYTES:\n        print(\n            'Input file is too large: {} bytes. '\n            'Max allowed size is: {} bytes.'.format(data_size, MAX_PREDICTION_FILE_SIZE_BYTES))\n        return 1\n    try:\n        predictions = make_datarobot_deployment_predictions(data, deployment_id)\n    except DataRobotPredictionError as exc:\n        print(exc)\n        return 1\n    print(json.dumps(predictions, indent=4))\n    return 0\n \nfor i in range(0,5):\n    filename = 'scoring_dataset.csv'\n    main(filename, DEPLOYMENT_ID)", "{\n    \"data\": [\n        {\n            \"predictionValues\": [\n                {\n                    \"value\": 0.1951537877,\n                    \"label\": 1.0\n                },\n                {\n                    \"value\": 0.8048462123,\n                    \"label\": 0.0\n                }\n            ],\n            \"predictionThreshold\": 0.5,\n            \"prediction\": 0.0,\n            \"rowId\": 0\n        },\n        {\n            \"predictionValues\": [\n                {\n                    \"value\": 0.2464775145,\n                    \"label\": 1.0\n                },\n                {\n                    \"value\": 0.7535224855,\n                    \"label\": 0.0\n                }\n            ],\n            \"predictionThreshold\": 0.5,\n            \"prediction\": 0.0,\n            \"rowId\": 1\n        },\n        {\n            \"predictionValues\": [\n                {\n                    \"value\": 0.523460269,\n                    \"label\": 1.0\n                },\n                {\n                    \"value\": 0.476539731,\n                    \"label\": 0.0\n                }\n            ],\n            \"predictionThreshold\": 0.5,\n            \"prediction\": 1.0,\n            \"rowId\": 2\n        },\n        {\n            \"predictionValues\": [\n                {\n                    \"value\": 0.1581269652,\n                    \"label\": 1.0\n                },\n                {\n                    \"value\": 0.8418730348,\n                    \"label\": 0.0\n                }\n            ],\n            \"predictionThreshold\": 0.5,\n            \"prediction\": 0.0,\n            \"rowId\": 3\n        },\n        {\n            \"predictionValues\": [\n                {\n                    \"value\": 0.2535447478,\n                    \"label\": 1.0\n                },\n                {\n                    \"value\": 0.7464552522,\n                    \"label\": 0.0\n                }\n            ],\n            \"predictionThreshold\": 0.5,\n            \"prediction\": 0.0,\n            \"rowId\": 4\n        },\n        {\n            \"predictionValues\": [\n                {\n                    \"value\": 0.3509267569,\n                    \"label\": 1.0\n                },\n                {\n                    \"value\": 0.6490732431,\n                    \"label\": 0.0\n                }\n            ],\n            \"predictionThreshold\": 0.5,\n            \"prediction\": 0.0,\n            \"rowId\": 5\n        },\n        {\n            \"predictionValues\": [\n                {\n                    \"value\": 0.1231950298,\n                    \"label\": 1.0\n                },\n                {\n                    \"value\": 0.8768049702,\n                    \"label\": 0.0\n                }\n            ],\n            \"predictionThreshold\": 0.5,\n            \"prediction\": 0.0,\n            \"rowId\": 6\n        },\n        {\n            \"predictionValues\": [\n                {\n                    \"value\": 0.5941886306,\n                    \"label\": 1.0\n                },\n                {\n                    \"value\": 0.4058113694,\n                    
\"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 1.0,\n \"rowId\": 7\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.5549402833,\n \"label\": 1.0\n },\n {\n \"value\": 0.4450597167,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 1.0,\n \"rowId\": 8\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.2812143266,\n \"label\": 1.0\n },\n {\n \"value\": 0.7187856734,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 9\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.2439627945,\n \"label\": 1.0\n },\n {\n \"value\": 0.7560372055,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 10\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.2654249072,\n \"label\": 1.0\n },\n {\n \"value\": 0.7345750928,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 11\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.2779132724,\n \"label\": 1.0\n },\n {\n \"value\": 0.7220867276,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 12\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.5093356967,\n \"label\": 1.0\n },\n {\n \"value\": 0.4906643033,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 1.0,\n \"rowId\": 13\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.2065299749,\n \"label\": 1.0\n },\n {\n \"value\": 0.7934700251,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 14\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.420971185,\n \"label\": 1.0\n },\n {\n \"value\": 0.579028815,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 15\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.30439502,\n \"label\": 1.0\n },\n {\n \"value\": 0.69560498,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 16\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.596169591,\n \"label\": 1.0\n },\n {\n \"value\": 0.403830409,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 1.0,\n \"rowId\": 17\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.3204616904,\n \"label\": 1.0\n },\n {\n \"value\": 0.6795383096,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 18\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.7294729948,\n \"label\": 1.0\n },\n {\n \"value\": 0.2705270052,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 1.0,\n \"rowId\": 19\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.2662279308,\n \"label\": 1.0\n },\n {\n \"value\": 0.7337720692,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 20\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.3128296733,\n \"label\": 1.0\n },\n {\n \"value\": 0.6871703267,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 21\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.5447424054,\n \"label\": 1.0\n },\n {\n \"value\": 0.4552575946,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 1.0,\n \"rowId\": 22\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.2899021208,\n \"label\": 1.0\n },\n {\n \"value\": 0.7100978792,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 23\n },\n {\n 
\"predictionValues\": [\n {\n \"value\": 0.3890560269,\n \"label\": 1.0\n },\n {\n \"value\": 0.6109439731,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 24\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.4990068078,\n \"label\": 1.0\n },\n {\n \"value\": 0.5009931922,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 25\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.4304368794,\n \"label\": 1.0\n },\n {\n \"value\": 0.5695631206,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 26\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.5026999116,\n \"label\": 1.0\n },\n {\n \"value\": 0.4973000884,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 1.0,\n \"rowId\": 27\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.1702546179,\n \"label\": 1.0\n },\n {\n \"value\": 0.8297453821,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 28\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.3628830016,\n \"label\": 1.0\n },\n {\n \"value\": 0.6371169984,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 29\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.2132720202,\n \"label\": 1.0\n },\n {\n \"value\": 0.7867279798,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 30\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.2383863032,\n \"label\": 1.0\n },\n {\n \"value\": 0.7616136968,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 31\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.2441482544,\n \"label\": 1.0\n },\n {\n \"value\": 0.7558517456,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 32\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.193837598,\n \"label\": 1.0\n },\n {\n \"value\": 0.806162402,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 33\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.3429098129,\n \"label\": 1.0\n },\n {\n \"value\": 0.6570901871,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 34\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.5072881579,\n \"label\": 1.0\n },\n {\n \"value\": 0.4927118421,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 1.0,\n \"rowId\": 35\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.4164822102,\n \"label\": 1.0\n },\n {\n \"value\": 0.5835177898,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 36\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.6304177046,\n \"label\": 1.0\n },\n {\n \"value\": 0.3695822954,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 1.0,\n \"rowId\": 37\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.1049734578,\n \"label\": 1.0\n },\n {\n \"value\": 0.8950265422,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 38\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.4578552544,\n \"label\": 1.0\n },\n {\n \"value\": 0.5421447456,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 39\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.8032473922,\n \"label\": 1.0\n },\n {\n \"value\": 
0.1967526078,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 1.0,\n \"rowId\": 40\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.2727117538,\n \"label\": 1.0\n },\n {\n \"value\": 0.7272882462,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 41\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.384485513,\n \"label\": 1.0\n },\n {\n \"value\": 0.615514487,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 42\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.3710546196,\n \"label\": 1.0\n },\n {\n \"value\": 0.6289453804,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 43\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.4369978905,\n \"label\": 1.0\n },\n {\n \"value\": 0.5630021095,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 44\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.7259251475,\n \"label\": 1.0\n },\n {\n \"value\": 0.2740748525,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 1.0,\n \"rowId\": 45\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.2407976985,\n \"label\": 1.0\n },\n {\n \"value\": 0.7592023015,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 46\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.4246424735,\n \"label\": 1.0\n },\n {\n \"value\": 0.5753575265,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 47\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.2639210224,\n \"label\": 1.0\n },\n {\n \"value\": 0.7360789776,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 48\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.2094423324,\n \"label\": 1.0\n },\n {\n \"value\": 0.7905576676,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 49\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.325514406,\n \"label\": 1.0\n },\n {\n \"value\": 0.674485594,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 50\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.2901072204,\n \"label\": 1.0\n },\n {\n \"value\": 0.7098927796,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 51\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.4683373272,\n \"label\": 1.0\n },\n {\n \"value\": 0.5316626728,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 52\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.2019754201,\n \"label\": 1.0\n },\n {\n \"value\": 0.7980245799,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 53\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.8148747087,\n \"label\": 1.0\n },\n {\n \"value\": 0.1851252913,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 1.0,\n \"rowId\": 54\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.5285876393,\n \"label\": 1.0\n },\n {\n \"value\": 0.4714123607,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 1.0,\n \"rowId\": 55\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.4858893752,\n \"label\": 1.0\n },\n {\n \"value\": 0.5141106248,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n 
\"rowId\": 56\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.3918590546,\n \"label\": 1.0\n },\n {\n \"value\": 0.6081409454,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 57\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.4460092485,\n \"label\": 1.0\n },\n {\n \"value\": 0.5539907515,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 58\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.1893992573,\n \"label\": 1.0\n },\n {\n \"value\": 0.8106007427,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 59\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.3643003106,\n \"label\": 1.0\n },\n {\n \"value\": 0.6356996894,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 60\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.5428369641,\n \"label\": 1.0\n },\n {\n \"value\": 0.4571630359,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 1.0,\n \"rowId\": 61\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.3130823076,\n \"label\": 1.0\n },\n {\n \"value\": 0.6869176924,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 62\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.4086145461,\n \"label\": 1.0\n },\n {\n \"value\": 0.5913854539,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 63\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.3327098787,\n \"label\": 1.0\n },\n {\n \"value\": 0.6672901213,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 64\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.4195334911,\n \"label\": 1.0\n },\n {\n \"value\": 0.5804665089,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 65\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.2339870185,\n \"label\": 1.0\n },\n {\n \"value\": 0.7660129815,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 66\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.1668866128,\n \"label\": 1.0\n },\n {\n \"value\": 0.8331133872,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 67\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.5952532291,\n \"label\": 1.0\n },\n {\n \"value\": 0.4047467709,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 1.0,\n \"rowId\": 68\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.3510763943,\n \"label\": 1.0\n },\n {\n \"value\": 0.6489236057,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 69\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.355173111,\n \"label\": 1.0\n },\n {\n \"value\": 0.644826889,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 70\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.3211461604,\n \"label\": 1.0\n },\n {\n \"value\": 0.6788538396,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 71\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.6033878326,\n \"label\": 1.0\n },\n {\n \"value\": 0.3966121674,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 1.0,\n \"rowId\": 72\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.3709016442,\n \"label\": 1.0\n 
},\n {\n \"value\": 0.6290983558,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 73\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.2929402292,\n \"label\": 1.0\n },\n {\n \"value\": 0.7070597708,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 74\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.2088063508,\n \"label\": 1.0\n },\n {\n \"value\": 0.7911936492,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 75\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.512416482,\n \"label\": 1.0\n },\n {\n \"value\": 0.487583518,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 1.0,\n \"rowId\": 76\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.2619113326,\n \"label\": 1.0\n },\n {\n \"value\": 0.7380886674,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 77\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.342654258,\n \"label\": 1.0\n },\n {\n \"value\": 0.657345742,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 78\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.3954042196,\n \"label\": 1.0\n },\n {\n \"value\": 0.6045957804,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 79\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.3474768996,\n \"label\": 1.0\n },\n {\n \"value\": 0.6525231004,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 80\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.6875383258,\n \"label\": 1.0\n },\n {\n \"value\": 0.3124616742,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 1.0,\n \"rowId\": 81\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.4663459063,\n \"label\": 1.0\n },\n {\n \"value\": 0.5336540937,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 82\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.3457750678,\n \"label\": 1.0\n },\n {\n \"value\": 0.6542249322,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 83\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.1136166826,\n \"label\": 1.0\n },\n {\n \"value\": 0.8863833174,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 84\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.6986195445,\n \"label\": 1.0\n },\n {\n \"value\": 0.3013804555,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 1.0,\n \"rowId\": 85\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.4779470861,\n \"label\": 1.0\n },\n {\n \"value\": 0.5220529139,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 86\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.1691306978,\n \"label\": 1.0\n },\n {\n \"value\": 0.8308693022,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 87\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.1437647939,\n \"label\": 1.0\n },\n {\n \"value\": 0.8562352061,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 88\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.1588821262,\n \"label\": 1.0\n },\n {\n \"value\": 0.8411178738,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n 
\"prediction\": 0.0,\n \"rowId\": 89\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.3533524275,\n \"label\": 1.0\n },\n {\n \"value\": 0.6466475725,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 90\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.6037442088,\n \"label\": 1.0\n },\n {\n \"value\": 0.3962557912,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 1.0,\n \"rowId\": 91\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.2442641109,\n \"label\": 1.0\n },\n {\n \"value\": 0.7557358891,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 92\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.2253168821,\n \"label\": 1.0\n },\n {\n \"value\": 0.7746831179,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 93\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.3057530224,\n \"label\": 1.0\n },\n {\n \"value\": 0.6942469776,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 94\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.3431742191,\n \"label\": 1.0\n },\n {\n \"value\": 0.6568257809,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 95\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.7758949399,\n \"label\": 1.0\n },\n {\n \"value\": 0.2241050601,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 1.0,\n \"rowId\": 96\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.1636782736,\n \"label\": 1.0\n },\n {\n \"value\": 0.8363217264,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 97\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.4720363021,\n \"label\": 1.0\n },\n {\n \"value\": 0.5279636979,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 98\n },\n {\n \"predictionValues\": [\n {\n \"value\": 0.4349449873,\n \"label\": 1.0\n },\n {\n \"value\": 0.5650550127,\n \"label\": 0.0\n }\n ],\n \"predictionThreshold\": 0.5,\n \"prediction\": 0.0,\n \"rowId\": 99\n }\n ]\n}\n" ] ], [ [ "### 5. Check data drift. Does it look like data is drifting?. [3min]\n\nCheck data drift from within the `Deployments` page in the UI. Is data drift marked as red?", "_____no_output_____" ], [ "### 6. Create a new project using data (2). [5min]\n\nLink to data: https://s3.amazonaws.com/datarobot_public_datasets/10k_diabetes_scoring.csv", "_____no_output_____" ] ], [ [ "#Proposed solution\nnew_project = dr.Project.create(sourcedata = 'https://s3.amazonaws.com/datarobot_public_datasets/10k_diabetes_scoring.csv',\n project_name = '06_New_Project')\n\nnew_project.set_target(target = 'readmitted', mode = 'quick', worker_count = -1)\nnew_project.wait_for_autopilot()", "In progress: 4, queued: 9 (waited: 0s)\nIn progress: 4, queued: 9 (waited: 1s)\nIn progress: 4, queued: 9 (waited: 2s)\nIn progress: 4, queued: 9 (waited: 4s)\nIn progress: 4, queued: 9 (waited: 6s)\nIn progress: 4, queued: 9 (waited: 8s)\nIn progress: 4, queued: 9 (waited: 13s)\n" ] ], [ [ "### 7. Replace the previously deployed model with the new `recommended for deployment` model from the new project. [10min]\n\n**Hint**: You will have to provide a reason why you are replacing the model. 
Try: `dr.enums.MODEL_REPLACEMENT_REASON.DATA_DRIFT`.", "_____no_output_____" ] ], [ [ "#Proposed Solution\nnew_recommended_model = dr.ModelRecommendation.get(new_project.id).get_model()\ndeployment.replace_model(new_recommended_model.id, dr.enums.MODEL_REPLACEMENT_REASON.DATA_DRIFT)", "_____no_output_____" ] ] ]
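The drift check in step 5 can also be done from the API instead of the UI. A minimal sketch, assuming a datarobot client version that exposes the deployment drift accessors (`get_target_drift`/`get_feature_drift`; availability varies by client version, so treat these method names as an assumption and check your client's documentation):

```python
import datarobot as dr

deployment = dr.Deployment.get('YOUR_DEPLOYMENT_ID')

# Assumption: these drift accessors exist in your datarobot client version.
target_drift = deployment.get_target_drift()
print('target drift score:', target_drift.drift_score)

for feature in deployment.get_feature_drift():
    print(feature.name, feature.drift_score)
```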
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
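Taken together, steps 4-7 amount to a simple continuous-training loop. A sketch of that loop using only calls that appear in this notebook (the drift decision itself is left as a placeholder, since the notebook checks drift in the UI):

```python
def retrain_and_replace(deployment, training_csv):
    """Train a fresh project on new data and swap its best model into the deployment."""
    project = dr.Project.create(sourcedata=training_csv, project_name='retrain')
    project.set_target(target='readmitted', mode='quick', worker_count=-1)
    project.wait_for_autopilot()
    best = dr.ModelRecommendation.get(project.id).get_model()
    deployment.replace_model(best.id, dr.enums.MODEL_REPLACEMENT_REASON.DATA_DRIFT)
    return best

# if drift_detected:  # placeholder: decided in the UI (or via the sketch above)
#     retrain_and_replace(deployment, 'path/to/new_training_data.csv')
```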
4a6197f931f672a09d015ab6eb4da970fed88335
87,910
ipynb
Jupyter Notebook
data.ipynb
HernandezM22/Wifi-ProbeReq-Rpi-Sniffer
7f39019a57b07aecef887ff2a1c849aca59f214d
[ "MIT" ]
1
2021-12-02T05:24:20.000Z
2021-12-02T05:24:20.000Z
data.ipynb
HernandezM22/Wifi-ProbeReq-Rpi-Sniffer
7f39019a57b07aecef887ff2a1c849aca59f214d
[ "MIT" ]
null
null
null
data.ipynb
HernandezM22/Wifi-ProbeReq-Rpi-Sniffer
7f39019a57b07aecef887ff2a1c849aca59f214d
[ "MIT" ]
null
null
null
67.260903
17,458
0.682084
[ [ [ "import pandas as pd\nimport math", "_____no_output_____" ], [ "import matplotlib.pyplot as plt", "_____no_output_____" ], [ "df = pd.read_csv(\"probes.csv\")", "_____no_output_____" ], [ "df", "_____no_output_____" ], [ "intensities = []\n\nfor intens in df[\" intens\"]:\n intensities.append(int(intens[0:3]))\n\ndf[\"n\"] = intensities\ndf\n", "_____no_output_____" ], [ "df[\"FSPL\"] = 20+93.5-(df[\"n\"]+93.5)\ndf", "_____no_output_____" ], [ "df[\"distance\"] = 10**((df[\"FSPL\"]-90.45-20*math.log10(2.4))/20)", "_____no_output_____" ], [ "df", "_____no_output_____" ], [ "df[\"distance\"].plot()", "_____no_output_____" ], [ "df.describe()", "_____no_output_____" ], [ "df[\"distance\"].hist()", "_____no_output_____" ], [ "real_distances = []\n\nfor distance in df[\"distance\"]:\n if df[\"distance\"].min() < distance <= 0.558842:\n real_distances.append(distance*3)\n elif 0.558842 < distance <= 1.115036:\n real_distances.append(distance*3)\n elif 1.115036 < distance <= 1.767212:\n real_distances.append(distance*4)\n else:\n real_distances.append(distance*4.3)\n\n\ndf[\"real_distances\"] = real_distances\n\ndf", "_____no_output_____" ], [ "df.describe()", "_____no_output_____" ], [ "df[\"real_distances\"].hist()", "_____no_output_____" ], [ "df[\"real_distances\"].plot()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
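The per-row loop that builds `real_distances` can also be written as a vectorized piecewise scale. A sketch with `numpy.select`, reusing the notebook's thresholds and factors:

```python
import numpy as np

conditions = [
    df["distance"] <= 0.558842,
    df["distance"] <= 1.115036,
    df["distance"] <= 1.767212,
]
factors = [3, 3, 4]  # np.select picks the first matching condition
df["real_distances"] = df["distance"] * np.select(conditions, factors, default=4.3)
```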
4a619db545a327ea8c7c2875a00c8b0e9c2ff892
721,571
ipynb
Jupyter Notebook
lab_9/.ipynb_checkpoints/Two body problem-checkpoint.ipynb
kamilok1965/CMfSaT
0d63eea12291789800df29b23bda0772f6af6695
[ "MIT" ]
1
2019-01-09T18:59:11.000Z
2019-01-09T18:59:11.000Z
lab_9/.ipynb_checkpoints/Two body problem-checkpoint.ipynb
kamilok1965/CMfSaT
0d63eea12291789800df29b23bda0772f6af6695
[ "MIT" ]
null
null
null
lab_9/.ipynb_checkpoints/Two body problem-checkpoint.ipynb
kamilok1965/CMfSaT
0d63eea12291789800df29b23bda0772f6af6695
[ "MIT" ]
1
2018-12-13T10:05:17.000Z
2018-12-13T10:05:17.000Z
489.532564
84,852
0.765428
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
4a61ace9bd832dccfbed960d96dc75bc40366655
43,760
ipynb
Jupyter Notebook
site/ru/guide/keras/overview.ipynb
kweinmeister/docs-l10n
e87bd94e18f7df4e3614cbcfece4b051c910bdca
[ "Apache-2.0" ]
null
null
null
site/ru/guide/keras/overview.ipynb
kweinmeister/docs-l10n
e87bd94e18f7df4e3614cbcfece4b051c910bdca
[ "Apache-2.0" ]
null
null
null
site/ru/guide/keras/overview.ipynb
kweinmeister/docs-l10n
e87bd94e18f7df4e3614cbcfece4b051c910bdca
[ "Apache-2.0" ]
null
null
null
33.975155
743
0.542185
[ [ [ "##### Copyright 2019 The TensorFlow Authors.", "_____no_output_____" ] ], [ [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "_____no_output_____" ] ], [ [ "# Обзор Keras", "_____no_output_____" ], [ "<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/guide/keras/overview\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />Смотрите на TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ru/guide/keras/overview.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Запустите в Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ru/guide/keras/overview.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />Изучайте код на GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ru/guide/keras/overview.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Скачайте ноутбук</a>\n </td>\n</table>", "_____no_output_____" ], [ "Note: Вся информация в этом разделе переведена с помощью русскоговорящего Tensorflow сообщества на общественных началах. Поскольку этот перевод не является официальным, мы не гарантируем что он на 100% аккуратен и соответствует [официальной документации на английском языке](https://www.tensorflow.org/?hl=en). Если у вас есть предложение как исправить этот перевод, мы будем очень рады увидеть pull request в [tensorflow/docs](https://github.com/tensorflow/docs) репозиторий GitHub. Если вы хотите помочь сделать документацию по Tensorflow лучше (сделать сам перевод или проверить перевод подготовленный кем-то другим), напишите нам на [[email protected] list](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-ru).", "_____no_output_____" ], [ "Это руководство даст вам основы для начала работы с Keras. Чтение займет 10 минут.", "_____no_output_____" ], [ "## Импортируйте tf.keras\n\n`tf.keras` является реализацией TensorFlow\n[спецификации Keras API](https://keras.io). 
Это высокоуровневый\nAPI для построения и обучения моделей включающий первоклассную поддержку для\nTensorFlow-специфичной функциональности, такой как [eager execution](../eager.ipynb),\nконвейеры `tf.data`, и [Estimators](../estimator.ipynb).\n`tf.keras` делает использование TensorFlow проще не жертвуя при этом гибкостью и\nпроизводительностью.\n\nДля начала, импортируйте `tf.keras` как часть установки вашей TensorFlow:", "_____no_output_____" ] ], [ [ "from __future__ import absolute_import, division, print_function, unicode_literals\n\ntry:\n # %tensorflow_version существуют только в Colab.\n %tensorflow_version 2.x\nexcept Exception:\n pass\nimport tensorflow as tf\n\nfrom tensorflow import keras", "_____no_output_____" ] ], [ [ "`tf.keras` может выполнять любой Keras-совместимый код, но имейте ввиду:\n\n* Версия `tf.keras` в последнем релизе TensorFlow может отличаться от\n последней версии `keras` в PyPI. Проверьте `tf.keras.__version__`.\n* При [сохранении весоа моделей](./save_and_serialize.ipynb), `tf.keras` делает это по умолчанию\n [в формате checkpoint](../checkpoint.ipynb). Передайте параметр `save_format='h5'` для\n использования HDF5 (или добавьте к имени файла расширение `.h5`).", "_____no_output_____" ], [ "## Постройте простую модель\n\n### Последовательная модель\n\nВ Keras, вы собираете *слои (layers)* для построения *моделей (models)*. Модель это (обычно) граф\nслоев. Наиболее распространенным видом модели является стек слоев:\nмодель `tf.keras.Sequential`.\n\nПостроение простой полносвязной сети (т.е. многослойного перцептрона):", "_____no_output_____" ] ], [ [ "from tensorflow.keras import layers\n\nmodel = tf.keras.Sequential()\n# Добавим полносвязный слой с 64 узлами к модели:\nmodel.add(layers.Dense(64, activation='relu'))\n# Добавим другой слой:\nmodel.add(layers.Dense(64, activation='relu'))\n# Добавим слой softmax с 10 выходами:\nmodel.add(layers.Dense(10, activation='softmax'))", "_____no_output_____" ] ], [ [ "Вы можете найти короткий, но полный пример того, как использовать последовательные (Sequential) модели [здесь](https://www.tensorflow.org/tutorials/quickstart/beginner).\n\nЧтобы узнать о построении более сложных чем последовательные (Sequential), см:\n- [Руководство по Keras Functional API](./functional.ipynb)\n- [Руководство по написанию слоев и моделей с сабклассингом с нуля](./custom_layers_and_models.ipynb)", "_____no_output_____" ], [ "### Настройте слои\n\nДоступно много разновидностей слоев `tf.keras.layers`. Большинство из них используют общий конструктор\nаргументов:\n\n* `activation`: Установка функции активации для слоя. В этом параметре\n указывается имя встроенной функции или вызываемый объект. У параметра\n нет значения по умолчанию.\n* `kernel_initializer` И `bias_initializer`: Схемы инициализации\n создающие веса слоя (ядро и сдвиг). В этом параметре может быть имя\n или вызываемый объект. По умолчанию используется инициализатор `\"Glorot uniform\"`.\n* `kernel_regularizer` и `bias_regularizer`: Схемы регуляризации\n добавляемые к весам слоя (ядро и сдвиг), такие как L1 или L2\n регуляризации. 
По умолчанию регуляризация не устанавливается.\n\nСледующие примеры слоев `tf.keras.layers.Dense` используют \nаргументы конструктора:", "_____no_output_____" ] ], [ [ "# Создать слой с сигмоидой:\nlayers.Dense(64, activation='sigmoid')\n# Или:\nlayers.Dense(64, activation=tf.keras.activations.sigmoid)\n\n# Линейный слой с регуляризацией L1 с коэфициентом 0.01 примененной к матрице ядра:\nlayers.Dense(64, kernel_regularizer=tf.keras.regularizers.l1(0.01))\n\n# Линейный слой с регуляризацией L2 с коэффициентом 0.01 примененной к вектору сдвига:\nlayers.Dense(64, bias_regularizer=tf.keras.regularizers.l2(0.01))\n\n# Линейный слой с ядром инициализированным случайной ортогональной матрицей:\nlayers.Dense(64, kernel_initializer='orthogonal')\n\n# Линейный слой с вектором сдвига инициализированным значениями 2.0:\nlayers.Dense(64, bias_initializer=tf.keras.initializers.Constant(2.0))", "_____no_output_____" ] ], [ [ "## Обучение и оценка\n\n### Настройка обучения\n\nПосле того как модель сконструирована, настройте процесс ее обучения вызовом\nметода `compile`:", "_____no_output_____" ] ], [ [ "model = tf.keras.Sequential([\n# Добавляем полносвязный слой с 64 узлами к модели:\nlayers.Dense(64, activation='relu', input_shape=(32,)),\n# Добавляем другой слой:\nlayers.Dense(64, activation='relu'),\n# Добавляем слой softmax с 10 выходами:\nlayers.Dense(10, activation='softmax')])\n\nmodel.compile(optimizer=tf.keras.optimizers.Adam(0.01),\n loss='categorical_crossentropy',\n metrics=['accuracy'])", "_____no_output_____" ] ], [ [ "`tf.keras.Model.compile` принимает три важных аргумента:\n\n* `optimizer`: Этот объект определяет процедуру обучения. Передайте в него экземпляры\n оптимизатора из модуля `tf.keras.optimizers`, такие как\n `tf.keras.optimizers.Adam` или\n `tf.keras.optimizers.SGD`. Если вы просто хотите использовать значения по умолчанию, вы также можете указать оптимизаторы ключевыми словами, такими как `'adam'` или `'sgd'`.\n* `loss`: Это функция которая минимизируется в процессе обучения. Среди распространенных вариантов\n mean square error (`mse`), `categorical_crossentropy`, и\n `binary_crossentropy`. Функции потерь указываются по имени или по\n передаче вызываемого объекта из модуля `tf.keras.losses`.\n* `metrics`: Используются для мониторинга обучения. Это строковые имена или вызываемые объекты из\n модуля `tf.keras.metrics`.\n* Кроме того, чтобы быть уверенным, что модель обучается и оценивается eagerly, проверьте что вы передали компилятору параметр `run_eagerly=True`\n\n\nДалее посмотрим несколько примеров конфигурации модели для обучения:", "_____no_output_____" ] ], [ [ "# Сконфигурируем модель для регрессии со среднеквадратичной ошибкой.\nmodel.compile(optimizer=tf.keras.optimizers.Adam(0.01),\n loss='mse', # срееднеквадратичная ошибка\n metrics=['mae']) # средняя абсолютная ошибка\n\n# Сконфигурируем модель для категориальной классификации.\nmodel.compile(optimizer=tf.keras.optimizers.RMSprop(0.01),\n loss=tf.keras.losses.CategoricalCrossentropy(),\n metrics=[tf.keras.metrics.CategoricalAccuracy()])", "_____no_output_____" ] ], [ [ "### Обучение на данных NumPy\n\nДля небольших датасетов используйте помещающиеся в память массивы [NumPy](https://www.numpy.org/)\nдля обучения и оценки модели. 
Модель \"обучается\" на тренировочных даннных\nиспользуя метод `fit`:", "_____no_output_____" ] ], [ [ "import numpy as np\n\ndata = np.random.random((1000, 32))\nlabels = np.random.random((1000, 10))\n\nmodel.fit(data, labels, epochs=10, batch_size=32)", "_____no_output_____" ] ], [ [ "`tf.keras.Model.fit` принимает три важных аргумента:\n\n* `epochs`: Обучение разбито на *эпохи*. Эпоха это одна итерация\n по всем входным данным (это делается небольшими партиями).\n* `batch_size`: При передаче данных NumPy, модель разбивает данные на меньшие\n блоки (batches) и итерирует по этим блокам во время обучения. Это число\n указывает размер каждого блока данных. Помните, что последний блок может быть меньшего\n размера если общее число записей не делится на размер партии.\n* `validation_data`: При прототипировании модели вы хотите легко отслеживать её\n производительность на валидационных данных. Передача с этим аргументом кортежа входных данных\n и меток позволяет модели отопражать значения функции потерь и метрики в режиме вывода\n для передаваемых данных в конце каждой эпохи.\n\nВот пример использования `validation_data`:", "_____no_output_____" ] ], [ [ "import numpy as np\n\ndata = np.random.random((1000, 32))\nlabels = np.random.random((1000, 10))\n\nval_data = np.random.random((100, 32))\nval_labels = np.random.random((100, 10))\n\nmodel.fit(data, labels, epochs=10, batch_size=32,\n validation_data=(val_data, val_labels))", "_____no_output_____" ] ], [ [ "### Обучение с использованием наборов данных tf.data\n\nИспользуйте [Datasets API](../data.ipynb) для масштабирования больших баз данных\nили обучения на нескольких устройствах. Передайте экземпляр `tf.data.Dataset` в метод\n`fit`:", "_____no_output_____" ] ], [ [ "# Создает экземпляр учебного датасета:\ndataset = tf.data.Dataset.from_tensor_slices((data, labels))\ndataset = dataset.batch(32)\n\nmodel.fit(dataset, epochs=10)", "_____no_output_____" ] ], [ [ "Поскольку `Dataset` выдает данные пакетами, этот кусок кода не требует аргумента `batch_size`.\n\nДатасеты могут быть также использованы для валидации:", "_____no_output_____" ] ], [ [ "dataset = tf.data.Dataset.from_tensor_slices((data, labels))\ndataset = dataset.batch(32)\n\nval_dataset = tf.data.Dataset.from_tensor_slices((val_data, val_labels))\nval_dataset = val_dataset.batch(32)\n\nmodel.fit(dataset, epochs=10,\n validation_data=val_dataset)", "_____no_output_____" ] ], [ [ "### Оценка и предсказание\n\nМетоды `tf.keras.Model.evaluate` и `tf.keras.Model.predict` могут использовать данные\nNumPy и `tf.data.Dataset`.\n\nВот так можно *оценить* потери в режиме вывода и метрики для предоставленных данных:", "_____no_output_____" ] ], [ [ "# С массивом Numpy\ndata = np.random.random((1000, 32))\nlabels = np.random.random((1000, 10))\n\nmodel.evaluate(data, labels, batch_size=32)\n\n# С датасетом\ndataset = tf.data.Dataset.from_tensor_slices((data, labels))\ndataset = dataset.batch(32)\n\nmodel.evaluate(dataset)", "_____no_output_____" ] ], [ [ "А вот как *предсказать* вывод последнего уровня в режиме вывода для предоставленных данных,\nв виде массива NumPy:", "_____no_output_____" ] ], [ [ "result = model.predict(data, batch_size=32)\nprint(result.shape)", "_____no_output_____" ] ], [ [ "Полное руководство по обучению и оценке модели, включая описание написания пользовательских циклов обучения с нуля, см. 
в [руководстве по обучению и оценке] (./ train_and_evaluate.ipynb).", "_____no_output_____" ], [ "## Построение сложных моделей\n\n### The Functional API\n\n Модель `tf.keras.Sequential` это простой стек слоев с помощью которого\nнельзя представить произвольную модель. Используйте\n[Keras functional API](./functional.ipynb)\nдля построения сложных топологий моделей, таких как:\n\n* Модели с несколькими входами,\n* Модели с несколькими выходами,\n* Модели с общими слоями (один и тот же слой вызывается несколько раз),\n* Модели с непоследовательными потоками данных (напр. остаточные связи).\n\nПостроение модели с functional API работает следующим образом:\n\n1. Экземпляр слоя является вызываемым и возвращает тензор.\n2. Входные и выходные тензоры используются для определения экземпляра\n `tf.keras.Model`\n3. Эта модель обучается точно так же как и `Sequential` модель.\n\nСледующий пример использует functional API для построения простой, полносвязной\nсети:", "_____no_output_____" ] ], [ [ "inputs = tf.keras.Input(shape=(32,)) # Возвращает входной плейсхолдер\n\n# Экземпляр слоя вызывается на тензор и возвращает тензор.\nx = layers.Dense(64, activation='relu')(inputs)\nx = layers.Dense(64, activation='relu')(x)\npredictions = layers.Dense(10, activation='softmax')(x)", "_____no_output_____" ] ], [ [ "Создайте экземпляр модели с данными входами и выходами.", "_____no_output_____" ] ], [ [ "model = tf.keras.Model(inputs=inputs, outputs=predictions)\n\n# Шаг компиляции определяет конфигурацию обучения.\nmodel.compile(optimizer=tf.keras.optimizers.RMSprop(0.001),\n loss='categorical_crossentropy',\n metrics=['accuracy'])\n\n# Обучение за 5 эпох\nmodel.fit(data, labels, batch_size=32, epochs=5)", "_____no_output_____" ] ], [ [ "### Сабклассинг моделей\n\nСоздайте полностью настраиваемую модель с помощью сабклассинга `tf.keras.Model` и определения\nвашего собственного прямого распространения. Создайте слои в методе `__init__` и установите их как\nатрибуты экземпляра класса. Определите прямое распространение в методе `call`.\n\nСабклассинг модели особенно полезен когда включен\n[eager execution](../eager.ipynb), поскольку он позволяет написать \nпрямое распространение императивно.\n\nПримечание: если вам нужно чтобы ваша модель *всегда* выполнялась императивно, вы можете установить `dynamic=True` когда вызываете конструктор `super`.\n\n> Ключевой момент: Используйте правильный API для работы. Хоть сабклассинг модели обеспечивает\nгибкость, за нее приходится платить большей сложностью и большими возможностями для\nпользовательских ошибок. 
Если это возможно выбирайте functional API.\n\nСледующий пример показывает сабклассированную `tf.keras.Model` использующую пользовательское прямое\nраспространение, которое не обязательно выполнять императивно:", "_____no_output_____" ] ], [ [ "class MyModel(tf.keras.Model):\n\n def __init__(self, num_classes=10):\n super(MyModel, self).__init__(name='my_model')\n self.num_classes = num_classes\n # Определим свои слои тут.\n self.dense_1 = layers.Dense(32, activation='relu')\n self.dense_2 = layers.Dense(num_classes, activation='sigmoid')\n\n def call(self, inputs):\n # Определим тут свое прямое распространение,\n # с использованием ранее определенных слоев (в `__init__`).\n x = self.dense_1(inputs)\n return self.dense_2(x)", "_____no_output_____" ] ], [ [ "Создайте экземпляр класса новой модели:", "_____no_output_____" ] ], [ [ "model = MyModel(num_classes=10)\n\n# Шаг компиляции определяет конфигурацию обучения.\nmodel.compile(optimizer=tf.keras.optimizers.RMSprop(0.001),\n loss='categorical_crossentropy',\n metrics=['accuracy'])\n\n# Обучение за 5 эпох.\nmodel.fit(data, labels, batch_size=32, epochs=5)", "_____no_output_____" ] ], [ [ "### Пользовательские слои\n\nСоздайте пользовательский слой сабклассингом `tf.keras.layers.Layer` и реализацией\nследующих методов:\n\n* `__init__`: Опционально определите подслои которые будут использоваться в этом слое.\n* `build`: Создайте веса слоя. Добавьте веса при помощи метода\n `add_weight`.\n* `call`: Определите прямое распространение.\n* Опционально, слой может быть сериализован реализацией метода `get_config`\n и метода класса `from_config`.\n\nНиже пример пользовательского слоя который осуществляет умножение матрицы (`matmul`) поданной на вход\nс матрицей ядра:", "_____no_output_____" ] ], [ [ "class MyLayer(layers.Layer):\n\n def __init__(self, output_dim, **kwargs):\n self.output_dim = output_dim\n super(MyLayer, self).__init__(**kwargs)\n\n def build(self, input_shape):\n # Создадим обучаемую весовую переменную для этого слоя.\n self.kernel = self.add_weight(name='kernel',\n shape=(input_shape[1], self.output_dim),\n initializer='uniform',\n trainable=True)\n\n def call(self, inputs):\n return tf.matmul(inputs, self.kernel)\n\n def get_config(self):\n base_config = super(MyLayer, self).get_config()\n base_config['output_dim'] = self.output_dim\n return base_config\n\n @classmethod\n def from_config(cls, config):\n return cls(**config)", "_____no_output_____" ] ], [ [ "Создайте модель с использованием вашего пользовательского слоя:", "_____no_output_____" ] ], [ [ "model = tf.keras.Sequential([\n MyLayer(10),\n layers.Activation('softmax')])\n\n# Шаг компиляции определяет конфигурацию обучения\nmodel.compile(optimizer=tf.keras.optimizers.RMSprop(0.001),\n loss='categorical_crossentropy',\n metrics=['accuracy'])\n\n# Обучение за 5 эпох.\nmodel.fit(data, labels, batch_size=32, epochs=5)", "_____no_output_____" ] ], [ [ "Узнайте больше о создании новых слоев и моделей с нуля с помощью сабклассинга в [Руководстве написания слоев и моделей с нуля](./custom_layers_and_models.ipynb).", "_____no_output_____" ], [ "## Колбеки\n\nКолбек это объект переданный модели чтобы кастомизировать и расширить ее поведение\nво время обучения. 
You can write your own custom callback, or use one of the built-in\n`tf.keras.callbacks`, which include:\n\n* `tf.keras.callbacks.ModelCheckpoint`: Save checkpoints of the model at\n regular intervals.\n* `tf.keras.callbacks.LearningRateScheduler`: Dynamically change the learning\n rate.\n* `tf.keras.callbacks.EarlyStopping`: Stop training when\n validation performance has stopped improving.\n* `tf.keras.callbacks.TensorBoard`: Monitor the model's behavior with\n [TensorBoard](https://tensorflow.org/tensorboard).\n\nTo use a `tf.keras.callbacks.Callback`, pass it to the model's `fit` method:", "_____no_output_____" ] ], [ [ "callbacks = [\n # Stop training if `val_loss` stops improving for 2 epochs\n tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),\n # Write TensorBoard logs to the `./logs` directory\n tf.keras.callbacks.TensorBoard(log_dir='./logs')\n]\nmodel.fit(data, labels, batch_size=32, epochs=5, callbacks=callbacks,\n validation_data=(val_data, val_labels))", "_____no_output_____" ] ], [ [ "<a name='save_and_restore'></a>\n## Save and restore", "_____no_output_____" ], [ "<a name=\"weights_only\"></a>\n### Save just the weight values\n\nSave and load the weights of a model using `tf.keras.Model.save_weights`:", "_____no_output_____" ] ], [ [ "model = tf.keras.Sequential([\nlayers.Dense(64, activation='relu', input_shape=(32,)),\nlayers.Dense(10, activation='softmax')])\n\nmodel.compile(optimizer=tf.keras.optimizers.Adam(0.001),\n loss='categorical_crossentropy',\n metrics=['accuracy'])", "_____no_output_____" ], [ "# Save the weights to a TensorFlow Checkpoint file\nmodel.save_weights('./weights/my_model')\n\n# Restore the model's state;\n# this requires a model with the same architecture.\nmodel.load_weights('./weights/my_model')", "_____no_output_____" ] ], [ [ "By default, the model's weights are saved in the\n[TensorFlow checkpoint](../checkpoint.ipynb) format. The weights can also be\nsaved in the Keras HDF5 format (the default for the multi-backend\nimplementation of Keras):", "_____no_output_____" ] ], [ [ "# Save the weights to an HDF5 file\nmodel.save_weights('my_model.h5', save_format='h5')\n\n# Restore the model's state\nmodel.load_weights('my_model.h5')", "_____no_output_____" ] ], [ [ "### Save just the model configuration\n\nA model's configuration can be saved: this serializes the model architecture\nwithout any weights. The saved configuration can recreate and initialize the same\nmodel, even without the code that defined the original model. Keras supports\nthe JSON and YAML serialization formats:", "_____no_output_____" ] ], [ [ "# Serialize the model into JSON format\njson_string = model.to_json()\njson_string", "_____no_output_____" ], [ "import json\nimport pprint\npprint.pprint(json.loads(json_string))", "_____no_output_____" ] ], [ [ "Recreate the model (freshly initialized) from the JSON:", "_____no_output_____" ] ], [ [ "fresh_model = tf.keras.models.model_from_json(json_string)", "_____no_output_____" ] ], [ [ "Serializing a model into YAML format requires `pyyaml` to be installed *before TensorFlow is imported*:", "_____no_output_____" ] ], [ [ "yaml_string = model.to_yaml()\nprint(yaml_string)", "_____no_output_____" ] ], [ [ "Recreate the model from the YAML:", "_____no_output_____" ] ], [ [ "fresh_model = tf.keras.models.model_from_yaml(yaml_string)", "_____no_output_____" ] ], [ [ "Caution: subclassed models are not serializable this way, because their architecture\nis defined by the Python code in the body of the `call` method.", "_____no_output_____" ], [ "\n### Save the entire model in a single file\n\nThe entire model can be saved to a file that contains the weight values, the\nmodel's configuration, and even the optimizer's configuration. This lets you\ncheckpoint a model and resume training later from exactly the same state,\neven without access to the original code.", "_____no_output_____" ] ], [ [ "# Create a simple model\nmodel = tf.keras.Sequential([\n layers.Dense(10, activation='softmax', input_shape=(32,)),\n layers.Dense(10, activation='softmax')\n])\nmodel.compile(optimizer='rmsprop',\n loss='categorical_crossentropy',\n metrics=['accuracy'])\nmodel.fit(data, labels, batch_size=32, epochs=5)\n\n\n# Save the entire model to an HDF5 file\nmodel.save('my_model.h5')\n\n# Recreate exactly the same model, including the weights and the optimizer.\nmodel = tf.keras.models.load_model('my_model.h5')", "_____no_output_____" ] ], [ [ "Learn more about saving and serializing Keras models in the [guide to saving and serializing models](./save_and_serialize.ipynb).", "_____no_output_____" ], [ "<a name=\"eager_execution\"></a>\n## Eager execution\n\n[Eager execution](../eager.ipynb) is an imperative programming\nenvironment that evaluates operations immediately. It is not required for\nKeras, but it is supported by `tf.keras` and is useful for inspecting your program and for\ndebugging.\n\nAll of the `tf.keras` model-building APIs are compatible with eager execution.\nAnd while the `Sequential` and functional APIs can be used, eager execution\nis especially helpful when *subclassing models* and building *custom layers*: these APIs\nrequire you to write the forward pass as code (instead of the APIs that\ncreate models by assembling existing layers).\n\nSee the [eager execution guide](../eager.ipynb) for\nexamples of using Keras models with custom training loops and `tf.GradientTape`.\nYou can also find a complete, short example [here](https://www.tensorflow.org/tutorials/quickstart/advanced).", "_____no_output_____" ], [ "## Distribution\n", "_____no_output_____" ], [ "### Multiple GPUs\n\n`tf.keras` models can be run on multiple GPUs using\n`tf.distribute.Strategy`. This API provides distributed\ntraining on multiple GPUs with almost no changes to existing code.\n\nCurrently, `tf.distribute.MirroredStrategy` is the only supported\ndistribution strategy. `MirroredStrategy` performs in-graph replication with\nsynchronous training using all-reduce on a single machine. To use\n`distribute.Strategy`, nest the optimizer instantiation and the model construction and compilation in the `Strategy`'s `.scope()`, then\ntrain the model.\n\nThe following example distributes a `tf.keras.Model` across multiple GPUs on a\nsingle machine.\n\nFirst, define the model inside the scope of the distributed strategy:", "_____no_output_____" ] ], [ [ "strategy = tf.distribute.MirroredStrategy()\n\nwith strategy.scope():\n model = tf.keras.Sequential()\n model.add(layers.Dense(16, activation='relu', input_shape=(10,)))\n model.add(layers.Dense(1, activation='sigmoid'))\n\n optimizer = tf.keras.optimizers.SGD(0.2)\n\n model.compile(loss='binary_crossentropy', optimizer=optimizer)\n\nmodel.summary()", "_____no_output_____" ] ], [ [ "Then, train the model on data as usual:", "_____no_output_____" ] ], [ [ "x = np.random.random((1024, 10))\ny = np.random.randint(2, size=(1024, 1))\nx = tf.cast(x, tf.float32)\ndataset = tf.data.Dataset.from_tensor_slices((x, y))\ndataset = dataset.shuffle(buffer_size=1024).batch(32)\n\nmodel.fit(dataset, epochs=1)", "_____no_output_____" ] ], [ [ "For more information, see the [full guide on distributed training in TensorFlow](../distributed_training.ipynb).", "_____no_output_____" ] ] ]
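The guide above mentions writing your own custom callback but does not show one. As a minimal, hypothetical sketch (the class name and the printed message are illustrative additions, not taken from this document), a custom callback subclasses `tf.keras.callbacks.Callback` and overrides one of its hook methods:

```python
import tensorflow as tf

class EpochLossLogger(tf.keras.callbacks.Callback):
    """Hypothetical callback: print the loss at the end of every epoch."""

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        # `logs` holds the metrics tracked by `fit`, e.g. 'loss' and 'val_loss'.
        print('epoch %d: loss=%.4f' % (epoch, logs.get('loss', float('nan'))))

# It would then be passed to `fit` like the built-in callbacks:
# model.fit(data, labels, epochs=5, callbacks=[EpochLossLogger()])
```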
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
4a61ae71011a2168d0d79ff77be50fed9d9fc9c3
664,700
ipynb
Jupyter Notebook
car_sale.ipynb
maxlz/carsale
167741f5d1c01a503f420e2fb1cc30463cd428eb
[ "Apache-2.0" ]
null
null
null
car_sale.ipynb
maxlz/carsale
167741f5d1c01a503f420e2fb1cc30463cd428eb
[ "Apache-2.0" ]
null
null
null
car_sale.ipynb
maxlz/carsale
167741f5d1c01a503f420e2fb1cc30463cd428eb
[ "Apache-2.0" ]
null
null
null
149.13619
275,384
0.841465
[ [ [ "import pandas as pd", "_____no_output_____" ], [ "df = pd.read_csv('X.txt',sep=',')", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "df.收益率 = df.收益率.str.strip(to_strip='%')", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "df.sort_values(by=['收益率','天数','车型','城市车商','众筹金额'],ascending=False).head()", "_____no_output_____" ], [ "df.to_csv('X.csv',index=False,encoding='utf8')", "_____no_output_____" ], [ "df.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 20250 entries, 0 to 20249\nData columns (total 8 columns):\n城市车商 20250 non-null object\n第几期 20250 non-null object\n车型 20250 non-null object\n众筹金额 20250 non-null float64\n分红 20250 non-null float64\n每万元分红 20250 non-null float64\n天数 20250 non-null object\n收益率 19521 non-null object\ndtypes: float64(3), object(5)\nmemory usage: 1.2+ MB\n" ], [ "df.城市车商.value_counts()", "_____no_output_____" ], [ "df[(df.城市车商.str[0]>=u'\\u4e00') & (df.城市车商.str[0]<=u'\\u9fff')].to_csv('zw_data.csv',index=False)", "_____no_output_____" ], [ "df.城市车商 = df.城市车商.str.strip()", "_____no_output_____" ], [ "df[(df.城市车商.str[0]<u'\\u4e00') | (df.城市车商.str[0]>u'\\u9fff')].to_csv('en_data.csv',index=False)", "_____no_output_____" ], [ "df = pd.read_csv('en_data.csv')", "_____no_output_____" ], [ "df.groupby(by='城市车商').mean().sort_values(by='收益率',ascending=False)", "_____no_output_____" ], [ "df['城市']= df.城市车商.str[:4]", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "df.groupby(by='城市').mean().sort_values(by='收益率',ascending=False).head(10)", "_____no_output_____" ], [ "df.groupby(by='城市').mean().sort_values(by='收益率',ascending=False).head(10).收益率.plot(kind='bar')", "_____no_output_____" ], [ "df[df.城市=='CNVB'].groupby(by='城市车商').mean().sort_values(by='收益率',ascending=False).head(10).收益率.plot(kind='bar')", "_____no_output_____" ], [ "df[df.城市=='NWVB'].groupby(by='城市车商').mean().sort_values(by='收益率',ascending=False).head(10).收益率.plot(kind='bar')", "_____no_output_____" ], [ "df[df.城市=='NWVB'].groupby(by='车型').agg(['mean','count']).sort_values(by=[('收益率','count'),('收益率','mean')],ascending=False).head(20)", "_____no_output_____" ], [ "df.groupby(by='车型').agg(['mean','count'])[df.groupby(by='车型').agg(['mean','count'])[('收益率','count')]>10].sort_values(by=[('收益率','mean'),('收益率','count')],ascending=False).head(20)", "_____no_output_____" ], [ "df.groupby(by='车型').agg(['mean','count'])[df.groupby(by='车型').agg(['mean','count'])[('收益率','count')]>10].sort_values(by=[('收益率','mean'),('收益率','count')],ascending=False).head(20)", "_____no_output_____" ], [ "df2 = df[df.城市=='NWVB'].groupby(by='车型').agg(['mean','count']).reset_index()", "_____no_output_____" ] ], [ [ "### NWVB城市的收益率车型排行,大于5辆车以上:", "_____no_output_____" ] ], [ [ "df2[df2[('收益率','count')]>5].sort_values(by=[('收益率','mean'),('收益率','count')],ascending=False).head(20)", "_____no_output_____" ] ], [ [ "### 失败率?", "_____no_output_____" ] ], [ [ "df = pd.read_csv('X.csv')", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "df[df.天数.str.isdigit()].to_csv('X2.csv',encoding='utf8')", "_____no_output_____" ], [ "df = pd.read_csv('X2.csv')", "_____no_output_____" ], [ "df[(df.收益率>=7)&(df.收益率<=9)&(df.天数>85)].groupby(by='车型')['收益率'].count().reset_index().sort_values(by='收益率',ascending=False).head(10)", "_____no_output_____" ], [ "df.groupby(by='车型')['收益率'].count().reset_index().sort_values(by='收益率',ascending=False).head(10)", "_____no_output_____" ], [ "df.sample(1400).收益率.plot(kind='hist',bins=8)", "_____no_output_____" ], [ 
"df.reset_index().sample(2000)[(df.收益率<100)&(df.天数<90)].plot(kind='scatter',x=['天数'],y=['收益率'],s=1)", "/Users/max/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:1: UserWarning: Boolean Series key will be reindexed to match DataFrame index.\n \"\"\"Entry point for launching an IPython kernel.\n" ], [ "df['天数乘收益率'] = df.天数*df.收益率", "_____no_output_____" ], [ "df.groupby(by='车型')['车型','天数乘收益率'].agg(['mean','count']).sort_values(by=('天数乘收益率','mean'),ascending=False).head(10)", "_____no_output_____" ] ], [ [ "## 保存带BOM的UTF8,否则excel乱码", "_____no_output_____" ] ], [ [ "df.to_csv('X2.csv',encoding='utf-8-sig',index=False)", "_____no_output_____" ], [ "from scipy.stats import zscore", "_____no_output_____" ], [ "import seaborn as sns", "_____no_output_____" ], [ "import numpy as np\ndf = pd.read_csv('X2.csv')", "_____no_output_____" ], [ "numeric_cols = df.select_dtypes(include=[np.number]).columns\nnumeric_cols", "_____no_output_____" ], [ "df[['天数乘收益率']].apply(zscore).head()", "_____no_output_____" ], [ "import numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nfrom matplotlib.font_manager import FontProperties\nfont=FontProperties(fname='/Users/max/Library/Fonts/msyh.ttf',size=10)\n\n\nplt.rcParams['font.sans-serif'] = ['Microsoft YaHei'] # 中文字体设置-黑体\nplt.rcParams['axes.unicode_minus'] = False # 解决保存图像是负号'-'显示为方块的问题\nsns.set(font='Microsoft YaHei') # 解决Seaborn中文显示问题", "_____no_output_____" ], [ "sns.pairplot(df[numeric_cols].sample(2000))", "_____no_output_____" ] ], [ [ "import matplotlib\na=sorted([f.name for f in matplotlib.font_manager.fontManager.ttflist])\n \nfor i in a:\n print(i)", "_____no_output_____" ] ], [ [ "### 可见,天数越长,每万元收益越高,即90天卖不出去的车是每万元收益最高的。\n### 另外一种策略,是投资年化高的,不断滚动,使总收益最大化。", "_____no_output_____" ] ], [ [ "sns.jointplot(x='天数',y='每万元分红',data=df.sample(1000),xlim=(0,100),kind=\"reg\",color=\"m\");", "_____no_output_____" ], [ "df['收益率除天数'] = df.收益率/df.天数", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "df2 = df.groupby(by='车型')['车型','收益率除天数'].agg(['median','count','std'])", "_____no_output_____" ] ], [ [ "### 收益率除天数最差情况,8% 90天,即0.089", "_____no_output_____" ] ], [ [ "df2[df2[('收益率除天数','count')]>3].sort_values(by=('收益率除天数','median'),ascending=False).head(20)", "_____no_output_____" ], [ "df[df.车型=='现代劳恩斯']", "_____no_output_____" ], [ "df.to_csv('X3.csv',encoding='utf-8-sig',index=False)", "_____no_output_____" ], [ "df2.to_csv('X32.csv',encoding='utf-8-sig',index=False)", "_____no_output_____" ], [ "df = pd.read_csv('X3.csv')", "_____no_output_____" ] ], [ [ "### 下图可见,随着天数增加,收益与天数比值快速降低,收益与天数比值高的集中于20天以内", "_____no_output_____" ] ], [ [ "sns.jointplot(y='收益率除天数',x='天数',data=df.sample(400),kind=\"scatter\",color=\"m\",ylim=(0,3),xlim=(0,90));", "/Users/max/anaconda3/lib/python3.6/site-packages/scipy/stats/stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. 
In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.\n return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval\n" ] ], [ [ "### 下图可见,众筹金额在20万以内的,往往收益率与天数比值更可能大", "_____no_output_____" ] ], [ [ "sns.jointplot(y='收益率除天数',x='众筹金额',data=df.sample(1000),kind=\"kde\",color=\"m\",ylim=(0,3),xlim=(0,400000),s=2,cbar=True);", "/Users/max/anaconda3/lib/python3.6/site-packages/matplotlib/contour.py:1004: UserWarning: The following kwargs were not used by contour: 's'\n s)\n" ], [ "sns.set_context(\"notebook\", font_scale=1.2, rc={\"lines.linewidth\": 2.5})", "_____no_output_____" ], [ "sns.kdeplot(df.sample(1000).众筹金额.astype(float),df.sample(1000).收益率除天数,cmap=\"Reds\", shade=True, shade_lowest=False,kernel='epa',cbar=True).set(ylim=(0,3),xlim=(0,500000));", "_____no_output_____" ], [ "df.收益率除天数.agg(['mean','std'])", "_____no_output_____" ], [ "plt.hist(df.收益率除天数,range=(0,3));", "_____no_output_____" ], [ "sns.kdeplot(df.sample(2000).收益率除天数,cumulative=True,shade=True,cbar=True).set(xlim=(0,3))", "_____no_output_____" ], [ "sns.jointplot(x='天数',y='众筹金额',data=df.sample(1000),kind='hex',bins=10,xlim=(0,90),ylim=(0,500000))", "_____no_output_____" ] ], [ [ "### 优质车商:回款快,收益率高", "_____no_output_____" ] ], [ [ "df2 = df.groupby(by='城市车商')['城市车商','收益率除天数'].agg(['median','count','std','min','max'])", "_____no_output_____" ], [ "df2[df2[('收益率除天数','count')]>3].reset_index().sort_values(by=('收益率除天数','median'),ascending=False).head(20)", "_____no_output_____" ] ] ]
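The roll-over strategy mentioned above compares deals by annualized rather than absolute yield. As a small, hypothetical sketch (the column name `年化收益率` is an illustrative addition, not in the original notebook), the 收益率/天数 ratio can be scaled to an annualized figure:

```python
# Hypothetical sketch: annualize the per-deal yield, assuming 收益率 is in
# percent and 天数 is the holding period in days (as in the notebook above).
df['年化收益率'] = df.收益率 / df.天数 * 365
df.groupby(by='城市车商')['年化收益率'].median().sort_values(ascending=False).head(10)
```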
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "raw", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "raw" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
4a61b968bb74125554224a20e4819cc0ca6bb8a5
41,661
ipynb
Jupyter Notebook
Chapter02/Working_with_VCF.ipynb
AlfonsoOC/Bioinformatics-with-Python-Cookbook-Second-Edition
8ddca0d4bcf88f974cbe7f322e3450a2ffca4e2d
[ "MIT" ]
null
null
null
Chapter02/Working_with_VCF.ipynb
AlfonsoOC/Bioinformatics-with-Python-Cookbook-Second-Edition
8ddca0d4bcf88f974cbe7f322e3450a2ffca4e2d
[ "MIT" ]
null
null
null
Chapter02/Working_with_VCF.ipynb
AlfonsoOC/Bioinformatics-with-Python-Cookbook-Second-Edition
8ddca0d4bcf88f974cbe7f322e3450a2ffca4e2d
[ "MIT" ]
null
null
null
165.321429
35,900
0.89897
[ [ [ "# Getting the necessary data", "_____no_output_____" ], [ "You just need to do this only once", "_____no_output_____" ] ], [ [ "!rm -f genotypes.vcf.gz 2>/dev/null\n!tabix -fh ftp://ftp-trace.ncbi.nih.gov/1000genomes/ftp/release/20130502/supporting/vcf_with_sample_level_annotation/ALL.chr22.phase3_shapeit2_mvncall_integrated_v5_extra_anno.20130502.genotypes.vcf.gz 22:1-17000000|bgzip -c > genotypes.vcf.gz\n!tabix -p vcf genotypes.vcf.gz", "[kftp_pasv_connect] kftp_pasv_prep() is not called before hand.\r\n[kftp_connect_file] 425 Unable to build data connection: Connection refused\r\n[main] fail to open the data file.\r\n" ], [ "from collections import defaultdict\n\n%matplotlib inline\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\nimport vcf", "_____no_output_____" ], [ "v = vcf.Reader(filename='genotypes.vcf.gz')\n\nprint('Variant Level information')\ninfos = v.infos\nfor info in infos:\n print(info)\n\nprint('Sample Level information')\nfmts = v.formats\nfor fmt in fmts:\n print(fmt)", "Variant Level information\nCIEND\nCIPOS\nCS\nEND\nIMPRECISE\nMC\nMEINFO\nMEND\nMLEN\nMSTART\nSVLEN\nSVTYPE\nTSD\nAC\nAF\nNS\nAN\nASN_AF\nEUR_AF\nAFR_AF\nAMR_AF\nSAN_AF\nDP\nSample Level information\nGT\nDP\n" ], [ "v = vcf.Reader(filename='genotypes.vcf.gz')\nrec = next(v)\nprint(rec.CHROM, rec.POS, rec.ID, rec.REF, rec.ALT, rec.QUAL, rec.FILTER)\nprint(rec.INFO)\nprint(rec.FORMAT)\nsamples = rec.samples\nprint(len(samples))\nsample = samples[0]\nprint(sample.called, sample.gt_alleles, sample.is_het, sample.is_variant, sample.phased)\nprint(int(sample['DP']))", "22 16050075 None A [G] 100 []\n{'NS': 2504, 'SAN_AF': [0.0], 'AN': 5008, 'AF': [0.000199681], 'AMR_AF': [0.0], 'EAS_AF': [''], 'DP': [8012], 'EUR_AF': [0.0], 'ASN_AF': [0.0], 'SAS_AF': ['0.0010'], 'AC': [1], 'AFR_AF': [0.0]}\nGT:DP\n2504\nTrue ['0', '0'] False False True\n1\n" ], [ "f = vcf.Reader(filename='DOG.vcf')\n\nmy_type = defaultdict(int)\nnum_alts = defaultdict(int)\n\nfor rec in f:\n my_type[rec.var_type, rec.var_subtype] += 1\n if rec.is_snp:\n num_alts[len(rec.ALT)] += 1\nprint(my_type)\nprint(num_alts)", "defaultdict(<class 'int'>, {('indel', 'ins'): 835, ('snp', 'tv'): 377, ('snp', 'ts'): 66})\ndefaultdict(<class 'int'>, {1: 443})\n" ], [ "f = vcf.Reader(filename='DOG.vcf')\n\nsample_dp = defaultdict(int)\nfor rec in f:\n if not rec.is_snp or len(rec.ALT) != 1:\n continue\n for sample in rec.samples:\n dp = sample['DP']\n if dp is None:\n dp = 0\n dp = int(dp)\n sample_dp[dp] += 1", "_____no_output_____" ], [ "dps = list(sample_dp.keys())\ndps.sort()\ndp_dist = [sample_dp[x] for x in dps]\nfig, ax = plt.subplots(figsize=(16, 9))\nax.plot(dp_dist[:100], 'r')\nax.axvline(dp_dist.index(max(dp_dist)))", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ] ]
4a61c12e903c2ff9b77784d36f7b051faa2e47bc
1,767
ipynb
Jupyter Notebook
homegrown/homegrown_results_print.ipynb
tszumowski/custom-vision-study
603f91a7f187accfae2db43ed5f4b5766fc29b67
[ "Apache-2.0" ]
5
2019-03-12T21:03:04.000Z
2019-10-24T13:27:26.000Z
homegrown/homegrown_results_print.ipynb
tszumowski/custom-vision-study
603f91a7f187accfae2db43ed5f4b5766fc29b67
[ "Apache-2.0" ]
null
null
null
homegrown/homegrown_results_print.ipynb
tszumowski/custom-vision-study
603f91a7f187accfae2db43ed5f4b5766fc29b67
[ "Apache-2.0" ]
4
2019-03-12T21:03:13.000Z
2019-10-17T07:56:38.000Z
22.948052
132
0.434069
[ [ [ "# todo", "_____no_output_____" ] ], [ [ "import perfreport\nimport glob\n\nresults_directory = '{PATH TO HOMEGROWN MODEL RESULT PICKLES}'\n\n# Update the glob to filter to the pickles of interest\nresults_list = glob.glob(results_directory+\"*.p\")\n\nprint(\"RESULTS TO SHOW\")\nfor result_file in results_list:\n print(\" \"+result_file)\n\nprint(\"\")\nprint(\"\")\nprint(\"\")\n\nfor result_file in results_list:\n print(\"############################################################################################################\")\n print(result_file)\n print(\"############################################################################################################\")\n perfreport.print_metrics(result_file)\n print(\"\")\n print(\"\")\n print(\"\")\n ", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code" ] ]
4a61cb87dd3a72fac5a4473fb3f489694ac8c6f4
8,223
ipynb
Jupyter Notebook
EngineeringOptimization/GitLab/Lab14.ipynb
dalexa10/EngineeringDesignOptimization
eb5b5e4edd773aef629f59aea8a9771af41bd224
[ "MIT" ]
null
null
null
EngineeringOptimization/GitLab/Lab14.ipynb
dalexa10/EngineeringDesignOptimization
eb5b5e4edd773aef629f59aea8a9771af41bd224
[ "MIT" ]
null
null
null
EngineeringOptimization/GitLab/Lab14.ipynb
dalexa10/EngineeringDesignOptimization
eb5b5e4edd773aef629f59aea8a9771af41bd224
[ "MIT" ]
null
null
null
26.271565
138
0.488143
[ [ [ "<div>\n<img src=\"figures/svtLogo.png\"/>\n</div>\n<h1><center>Mathematical Optimization for Engineers</center></h1>\n<h2><center>Lab 14 - Uncertainty</center></h2>", "_____no_output_____" ], [ "We want to optimize the total annualized cost of a heating and electric power system. Three different technologies are present: \n- a gas boiler\n- a combined heat and power plant\n- a photovoltaic module\n\nWe first the the nominal case without uncertanties. \nNext, we will consider a two-stage approach to consider uncertainties in the electricity demand and the power producable via PV. \nUncertain variables are the solar power and the power demand. ", "_____no_output_____" ] ], [ [ "# import cell\nfrom scipy.optimize import minimize, NonlinearConstraint, Bounds", "_____no_output_____" ], [ "class Boiler():\n \"\"\"Boiler \n Gas in, heat out\n \"\"\"\n \n def __init__(self):\n self.M = 0.75 \n \n def invest_cost(self, Qdot_nom):\n inv = 100 * Qdot_nom ** self.M\n return inv\n \n def oper_cost(self, Qdot_nom, op_load): \n cost_gas = 60\n cost_gas_oper = Qdot_nom * cost_gas * op_load\n \n return cost_gas_oper\n \n def heat(self, Qdot_nom, op_load):\n eta_th = 0.9 - (1 - op_load) * 0.05\n return Qdot_nom * op_load * eta_th\n ", "_____no_output_____" ], [ "class CHP():\n \"\"\"Combined-heat-and-power (CHP) engine \n Gas in, heat and power out\n \"\"\"\n\n def __init__(self):\n self.c_ref = 150\n self.M = 0.85 # [-], cost exponent\n self.cost_gas = 60\n \n def invest_cost(self, Qdot_nom):\n inv = self.c_ref * (Qdot_nom) ** self.M\n return inv\n \n def oper_cost(self, Qdot_nom, op_load): \n cost_gas_oper = Qdot_nom * op_load * self.cost_gas\n return cost_gas_oper\n \n def elec_out(self, Qdot_nom, op_load):\n eta_el = 0.3 - (1 - op_load) * 0.1\n out_pow = eta_el * Qdot_nom * op_load\n return out_pow\n \n def heat(self, Qdot_nom, op_load): \n eta_th = 0.6 - (1-op_load) * 0.05 \n return Qdot_nom * eta_th * op_load\n", "_____no_output_____" ], [ "class PV:\n \"\"\"Photovoltaic modules (PV) \n solar \n \"\"\" \n \n def __init__(self): \n self.M = 0.9 # [-], cost exponent\n \n def invest_cost(self, p_nom):\n inv = 200 * p_nom ** self.M\n return inv\n \n def oper_cost(self, out_nom): \n return 0\n \n def elec_out(self, p_nom, op_load, solar):\n return p_nom * op_load * solar\n ", "_____no_output_____" ], [ "def objective_function(x, PV, Boiler, CHP, scenarios):\n total_cost = 0\n design_PV = x[0] \n design_boiler = x[1] \n design_CHP = x[2] \n \n # investment costs\n # your code here\n \n # expected operating costs\n # your code here\n \n return total_cost", "_____no_output_____" ], [ "def constraint_function(x, PV, Boiler, CHP, scenarios): \n heat_demand = 200\n \n design_PV = x[0] \n design_boiler = x[1] \n design_CHP = x[2] \n\n # loop over all uncertatintes\n \n \n # heat demand\n \n # electricty demand \n \n \n return c", "_____no_output_____" ], [ "def print_solution(x):\n print('PV design: ', x[0])\n print('Boiler design: ', x[1])\n print('CHP design: ', x[2])\n ", "_____no_output_____" ], [ "# nominal case\nscenario1 = {\"p\": 1.0, \"solar\":1.0, \"elec\": 100}\nscenarios = [scenario1] # base scenario\n", "_____no_output_____" ], [ "# now consider different scenarios\nmyPV = PV()\nmyBoiler = Boiler()\nmyCHP = CHP()\ncons = lambda x: constraint_function(x, myPV, myBoiler, myCHP, scenarios)\nobj = lambda x: objective_function(x, myPV, myBoiler, myCHP, scenarios)\n# constraints need bounds\n# your code here\n# bounds for operation 0 . 
1\nx_guess = [200,200,200, 1,1,1 ]\n# bounds for decision variables\n# your code here\nbnds = Bounds(lbs, ubs)\n", "_____no_output_____" ], [ "res = minimize(obj, x_guess, method = 'SLSQP', bounds=bnds,\n constraints = nonlinear_constraints,\n options={\"maxiter\": 15, 'iprint': 2, 'disp': True})\n", "_____no_output_____" ], [ "print_solution(res.x)", "_____no_output_____" ], [ "# nominal \n# uncertanties: power demand and solar power (relative 1.0)\nscenario1 = {\"p\": 0.40, \"solar\":1.0, \"elec\": 100}\nscenario2 = {\"p\": 0.3, \"solar\":1.0, \"elec\": 120}\nscenario3 = {\"p\": 0.3, \"solar\":0.5, \"elec\": 80}\n\n# put scenarios together\n# your code here", "_____no_output_____" ], [ "myPV = PV()\nmyBoiler = Boiler()\nmyCHP = CHP()\ncons = lambda x: constraint_function(x, myPV, myBoiler, myCHP, scenarios)\nobj = lambda x: objective_function(x, myPV, myBoiler, myCHP, scenarios)\n# bounds and constraints\n# your code here\n\n\nres = minimize(obj, x_guess, method = 'SLSQP', bounds=bnds,\n constraints = nonlinear_constraints,\n options={\"maxiter\": 15, 'iprint': 2, 'disp': True})", "_____no_output_____" ], [ "print_solution(res.x)", "_____no_output_____" ] ] ]
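The exercise above deliberately leaves the expected-cost computation blank. One possible sketch of the two-stage idea, under the assumption that the trailing entries of `x` hold one operating load per technology per scenario (this layout matches the length of `x_guess` for the single nominal scenario but is otherwise an assumption, not the official solution):

```python
# Hypothetical sketch of a two-stage objective: investment cost once
# (first stage), plus probability-weighted operating cost over scenarios
# (second stage / recourse).
def expected_total_cost(x, pv, boiler, chp, scenarios):
    design_pv, design_boiler, design_chp = x[0], x[1], x[2]
    total = (pv.invest_cost(design_pv)
             + boiler.invest_cost(design_boiler)
             + chp.invest_cost(design_chp))
    for i, sc in enumerate(scenarios):
        # assumed layout: loads for (PV, boiler, CHP) of scenario i
        load_pv, load_boiler, load_chp = x[3 + 3*i : 6 + 3*i]
        total += sc["p"] * (pv.oper_cost(design_pv)
                            + boiler.oper_cost(design_boiler, load_boiler)
                            + chp.oper_cost(design_chp, load_chp))
    return total
```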
[ "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a61cf7e65af0e35a754dc676d161893235b8f1d
22,790
ipynb
Jupyter Notebook
Gan-Tutorial/Demo1/Exercise.ipynb
hscspring/AI-Methods
dedec9ba0f1894135d51349504983133d4f823fa
[ "MIT" ]
3
2020-04-07T09:25:34.000Z
2022-01-24T01:59:29.000Z
Gan-Tutorial/Demo1/Exercise.ipynb
hscspring/Note_GAN
dedec9ba0f1894135d51349504983133d4f823fa
[ "MIT" ]
null
null
null
Gan-Tutorial/Demo1/Exercise.ipynb
hscspring/Note_GAN
dedec9ba0f1894135d51349504983133d4f823fa
[ "MIT" ]
null
null
null
67.426036
13,972
0.786617
[ [ [ "import os\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.gridspec as gridspec\n\nimport tensorflow as tf\nfrom tensorflow.examples.tutorials.mnist import input_data", "_____no_output_____" ] ], [ [ "注意点: \n\n- b 零初始值\n- w 初始化要用 tf,不要用 np", "_____no_output_____" ] ], [ [ "# 读取数据集MNIST,并放在当前目录data文件夹下MNIST文件夹中,如果该地址没有数据,则下载数据至该文件夹\n# 一张图片有 28*28=784 个像素点,每个点用一个浮点数表示其亮度;\nmnist = input_data.read_data_sets(\"./data/MNIST/\", one_hot=True)", "Extracting ./data/MNIST/train-images-idx3-ubyte.gz\nExtracting ./data/MNIST/train-labels-idx1-ubyte.gz\nExtracting ./data/MNIST/t10k-images-idx3-ubyte.gz\nExtracting ./data/MNIST/t10k-labels-idx1-ubyte.gz\n" ], [ "#该函数用于输出生成图片\n\ndef plot(samples):\n fig = plt.figure(figsize=(4, 4))\n gs = gridspec.GridSpec(4, 4)\n gs.update(wspace=0.05, hspace=0.05)\n\n for i, sample in enumerate(samples):\n ax = plt.subplot(gs[i])\n plt.axis('off')\n ax.set_xticklabels([])\n ax.set_yticklabels([])\n ax.set_aspect('equal')\n plt.imshow(sample.reshape(28, 28), cmap='Greys_r')\n\n return fig", "_____no_output_____" ], [ "batch_size = 128\nz_dim = 200", "_____no_output_____" ], [ "def variable_init(size):\n # He initialization: sqrt(2./dim of the previous layer)\n # np.random.randn(layers_dims[l], layers_dims[l-1]) * np.sqrt(2./layers_dims[l-1])\n in_dim = size[0]\n return tf.random_normal(shape=size, stddev=np.sqrt(2./in_dim))", "_____no_output_____" ], [ "# 定义并初始化变量\n\nX = tf.placeholder(tf.float32, shape=(None, 784))\nZ = tf.placeholder(tf.float32, shape=(None, z_dim))\n\nDW1 = tf.Variable(variable_init([784, 128]))\nDb1 = tf.Variable(tf.zeros(shape=[128]))\nDW2 = tf.Variable(variable_init([128, 1]))\nDb2 = tf.Variable(tf.zeros(shape=[1]))\ntheta_D = [DW1, DW1, Db1, Db2]\n\nGW1 = tf.Variable(variable_init([z_dim, 128]))\nGb1 = tf.Variable(tf.zeros(shape=[128]))\nGW2 = tf.Variable(variable_init([128, 784]))\nGb2 = tf.Variable(tf.zeros(shape=[784]))\ntheta_G = [GW1, GW2, Gb1, Gb2]", "_____no_output_____" ], [ "# 定义随机噪声生成器\n# 函数 Z,生成 z\ndef noise_maker(m, n):\n return np.random.uniform(-1.0, 1.0, size=[m, n])", "_____no_output_____" ], [ "# 定义数据生成器,将 z 变成 概率分布\n# 生成的结果为:是不是图片\n# 生成 N * 784 的结果\n\ndef generator(z):\n \n # tanh, relu。。。都可以\n Gh1 = tf.nn.relu(tf.matmul(z, GW1) + Gb1)\n G_logit = tf.matmul(Gh1, GW2) + Gb2\n # 这里用 sigmoid 是因为不需要加起来概率等于 1\n G_prob = tf.nn.sigmoid(G_logit)\n return G_prob", "_____no_output_____" ], [ "# 定义判别器\n\ndef discriminator(x):\n \n # tanh relu。。。\n Dh1 = tf.nn.relu(tf.matmul(x, DW1) + Db1)\n D_logit = tf.matmul(Dh1, DW2) + Db2\n# D_prob = tf.nn.sigmoid(D_logit)\n return D_logit # , D_prob", "_____no_output_____" ], [ "# 定义损失函数\n\nD_real_logit = discriminator(X) # D_real_prob, \nD_fake_logit = discriminator(generator(Z)) # D_fake_prob, \n\nD_X = tf.concat([D_real_logit, D_fake_logit], 1)\nD_y = tf.concat([tf.ones_like(D_real_logit), tf.zeros_like(D_fake_logit)], 1)\n\nD_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=D_X, labels=D_y))\nG_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=D_fake_logit, labels=tf.ones_like(D_fake_logit)))\n\nD_opt = tf.train.AdamOptimizer().minimize(D_loss, var_list=theta_D)\nG_opt = tf.train.AdamOptimizer().minimize(G_loss, var_list=theta_G)", "_____no_output_____" ], [ "sess = tf.Session()\n\nsess.run(tf.global_variables_initializer())\n\nif not os.path.exists('out_exercise/'):\n os.makedirs('out_exercise/')\n\ni = 0\nfor it in range(20000):\n \n if it % 2000 == 0:\n # 16 幅图\n samples = sess.run(generator(Z), feed_dict={Z: 
noise_maker(16, z_dim)})\n fig = plot(samples)\n plt.savefig('out_exercise/{}.png'.format(str(i).zfill(3)), bbox_inches='tight')\n i += 1\n plt.close(fig)\n \n X_mb, _ = mnist.train.next_batch(batch_size)\n \n _, D_loss_curr = sess.run([D_opt, D_loss], feed_dict={X: X_mb, Z: noise_maker(batch_size, z_dim)})\n _, G_loss_curr = sess.run([G_opt, G_loss], feed_dict={Z: noise_maker(batch_size, z_dim)})\n\n# sam,fakeprob,fakelogit = sess.run([generator(Z), D_fake_prob, D_fake_logit], \n# feed_dict={X: X_mb, Z: noise_maker(batch_size, z_dim)})\n\n \n if it % 2000 == 0:\n print('Iter: {} D_loss: {:.4}, G_loss: {:.4}'.format(it, D_loss_curr, G_loss_curr))", "Iter: 0 D_loss: 0.8844, G_loss: 6.359\nIter: 2000 D_loss: 0.005452, G_loss: 8.601\nIter: 4000 D_loss: 0.03211, G_loss: 5.053\nIter: 6000 D_loss: 0.05437, G_loss: 6.873\nIter: 8000 D_loss: 0.08955, G_loss: 4.662\nIter: 10000 D_loss: 0.1231, G_loss: 3.741\nIter: 12000 D_loss: 0.2174, G_loss: 2.919\nIter: 14000 D_loss: 0.2096, G_loss: 3.576\nIter: 16000 D_loss: 0.2404, G_loss: 2.987\nIter: 18000 D_loss: 0.3235, G_loss: 2.675\n" ], [ "samples.shape", "_____no_output_____" ], [ "plot(samples)", "_____no_output_____" ] ] ]
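Since the discriminator above returns raw logits, one quick way to judge the balance of the adversarial game is to compare its mean logit on real and generated batches. A hypothetical diagnostic (not part of the original exercise), reusing the notebook's own tensors and session:

```python
# Hypothetical diagnostic: positive mean logits mean "judged real";
# a very large gap between the two suggests the discriminator is winning.
X_mb, _ = mnist.train.next_batch(batch_size)
real_logit, fake_logit = sess.run(
    [D_real_logit, D_fake_logit],
    feed_dict={X: X_mb, Z: noise_maker(batch_size, z_dim)})
print('mean real logit:', real_logit.mean(), 'mean fake logit:', fake_logit.mean())
```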
[ "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a61d04b0144b437e57ed76d6d4971ac634d3923
37,475
ipynb
Jupyter Notebook
analysis/Figure_5.ipynb
tee-lab/PercolationModels
687cb8189fafeb2e0d205ea4d8a660bd953bd7b1
[ "BSD-3-Clause" ]
null
null
null
analysis/Figure_5.ipynb
tee-lab/PercolationModels
687cb8189fafeb2e0d205ea4d8a660bd953bd7b1
[ "BSD-3-Clause" ]
null
null
null
analysis/Figure_5.ipynb
tee-lab/PercolationModels
687cb8189fafeb2e0d205ea4d8a660bd953bd7b1
[ "BSD-3-Clause" ]
1
2021-09-11T17:25:25.000Z
2021-09-11T17:25:25.000Z
196.204188
30,904
0.889847
[ [ [ "import pickle\nimport numpy as np\nimport collections\nimport matplotlib.pyplot as plt\nimport copy\nimport matplotlib.ticker as ticker\nimport mpmath as mp\nfrom mpmath import gammainc", "_____no_output_____" ], [ "def power_law(x,s_min, s_max, alpha):\n C = ((1-alpha)/(s_max**(1-alpha)-s_min**(1-alpha)))\n return [C*x[i]**(-alpha) for i in range(len(x))]\n\ndef stretched_exponential(x,s_min,lamda,beta):\n C = beta*np.exp((s_min**beta)/lamda)/lamda\n return [C*(x[i]**(beta-1))*np.exp(-(x[i]**beta)/lamda) for i in range(len(x))]\n\ndef exponential(x,s_min,lamda):\n C = lamda*np.exp(lamda*s_min)\n return [C*np.exp(-lamda*x[i]) for i in range(len(x))]\n\ndef truncated_power_law(x,s_min,alpha,lamda):\n C = float(((lamda)**(1-alpha))/gammainc(1-alpha, s_min*lamda))\n return [C*(x[i]**(-alpha))*(np.exp(-lamda*x[i])) for i in range(len(x))]", "_____no_output_____" ], [ "grid_sizes = [128,256,512,1024]\ndensity = 0.59\nreplicates = [1,2,3,4,5]\nnumbers_per_replicate = 2000000\ndpt = []\nnpt = []\ndictionary = {}\nfor grid_size in grid_sizes:\n del dpt\n del npt\n dpt = []\n npt = []\n for replicate in replicates:\n filename_dp = \"dp_transformations_\" + str(grid_size) + \"_\" + str(density) + \"_\" + str(numbers_per_replicate) + \"_\" + str(replicate) + \".txt\"\n filename_np = \"np_transformations_\" + str(grid_size) + \"_\" + str(density) + \"_\" + str(numbers_per_replicate) + \"_\" + str(replicate) + \".txt\"\n with open(filename_dp) as f:\n dpt.extend([tuple(map(int, i.split(' '))) for i in f])\n with open(filename_np) as f:\n npt.extend([tuple(map(int, i.split(' '))) for i in f])\n\n npscs = [abs(i[1]-i[0]) for i in npt]\n npsc_freqs = dict(collections.Counter(npscs))\n npsc_freqs = {k: v / (len(npscs)) for k, v in npsc_freqs.items()}\n\n dpscs = [abs(i[1]-i[0]) for i in dpt]\n dpsc_freqs = dict(collections.Counter(dpscs))\n dpsc_freqs = {k: v / (len(dpscs)) for k, v in dpsc_freqs.items()}\n\n np_lists = sorted(npsc_freqs.items()) \n dp_lists = sorted(dpsc_freqs.items()) \n\n np_x, np_y = zip(*np_lists)\n dp_x, dp_y = zip(*dp_lists)\n \n cy_dp = []\n cy_np = []\n for i in range(len(dp_y)):\n cy_dp.append(sum([dp_y[j] for j in range(i,len(dp_y))]))\n for i in range(len(np_y)):\n cy_np.append(sum([np_y[j] for j in range(i,len(np_y))]))\n \n dictionary[str(grid_size)+'_np_x'] = np_x\n dictionary[str(grid_size)+'_dp_x'] = dp_x\n dictionary[str(grid_size)+'_np_y'] = np_y\n dictionary[str(grid_size)+'_dp_y'] = dp_y\n dictionary[str(grid_size)+'_cy_dp'] = cy_dp\n dictionary[str(grid_size)+'_cy_np'] = cy_np", "_____no_output_____" ], [ "# Determined using the Fitting Notebook \nx_max = max(np_x)\nx_min = 5\nalpha = 1.608\nlamda = 1/224091.1\n\nx = [i for i in range(x_min,x_max+1)]\ny = truncated_power_law(x, x_min, alpha, lamda)\nA = (np_y[4]/y[0])\ny = [(np_y[4]/y[0])*i for i in y]\n\n\nfig, ax1 = plt.subplots(figsize=(5,4))\n\n# These are in unitless percentages of the figure size. 
(0,0 is bottom left)\nleft, bottom, width, height = [0.58, 0.58, 0.3, 0.27]\nax2 = fig.add_axes([left, bottom, width, height])\n\n\n\nax1.scatter(dictionary['1024_dp_x'], dictionary['1024_dp_y'],s=10,marker='o',label='DP',color='red',facecolors='none')\nax1.scatter(dictionary['1024_np_x'], dictionary['1024_np_y'],s=10,marker='d',label='P',color='black',facecolors='none')\nax1.plot(x,y,color=\"blue\",linewidth=2,label='fit')\nax1.set_xscale('log')\nax1.set_yscale('log')\nax1.set_xlabel('$|\\Delta s|$')\nax1.set_ylabel('$P(S=|\\Delta s|)$')\nax1.legend(loc=\"lower left\")\nax1.set_ylim(5*10**(-8),10**(0))\nax1.set_xlim(1,300000)\nax1.tick_params(bottom=True,top=True,left=True,right=True)\n\n\nax2.scatter(dictionary['1024_np_x'], dictionary['1024_cy_np'],marker='.',s=0.8,color='red',label='1024')\nax2.scatter(dictionary['512_np_x'], dictionary['512_cy_np'],marker='.',s=0.8,color='blue',label='512')\nax2.scatter(dictionary['256_np_x'], dictionary['256_cy_np'],marker='.',s=0.8,color='black',label='256')\nax2.scatter(dictionary['128_np_x'], dictionary['128_cy_np'],marker='.',s=0.8,color='orange',label='128')\nax2.set_xscale('log')\nax2.set_yscale('log')\nax2.set_xlabel('$|\\Delta s|$')\nax2.set_ylabel('$P(S<|\\Delta s|)$')\n#ax2.legend(loc=\"lower left\")\nax2.set_ylim(5*10**(-8),10**(0))\nax2.set_xlim(1,300000)\nax2.text(0.55, 0.85, 'P', transform=ax2.transAxes, ha=\"right\")\nax1.set_title('$P: \\\\rho = $'+str(density))\n\n#plt.savefig(\"Figure_5.png\",dpi=300)\n\ndel dictionary ", "_____no_output_____" ] ] ]
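Since each fitted distribution above carries its own normalization constant, a quick numerical check that the truncated power law integrates to one over the fitted range can catch sign or parameter slips. A hypothetical sanity check (not in the original notebook), reusing the fitted `x_min`, `alpha`, and `lamda` from the cell above:

```python
# Hypothetical sanity check: the truncated power-law density should
# integrate to ~1 over [x_min, infinity) if C is computed correctly.
import mpmath as mp
C = float((lamda ** (1 - alpha)) / gammainc(1 - alpha, x_min * lamda))
total = mp.quad(lambda t: C * t ** (-alpha) * mp.exp(-lamda * t), [x_min, mp.inf])
print(total)  # expect a value close to 1.0
```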
[ "code" ]
[ [ "code", "code", "code", "code" ] ]
4a61d22c02924e0974ee3c76b7a456b5773bb84a
618,558
ipynb
Jupyter Notebook
Week_10/Convolutional_Neural_Network.ipynb
Chau-Xochitl/INFO_7390
3ab4a3bf7af5c0b57d9604def26d64dc31405333
[ "MIT" ]
null
null
null
Week_10/Convolutional_Neural_Network.ipynb
Chau-Xochitl/INFO_7390
3ab4a3bf7af5c0b57d9604def26d64dc31405333
[ "MIT" ]
null
null
null
Week_10/Convolutional_Neural_Network.ipynb
Chau-Xochitl/INFO_7390
3ab4a3bf7af5c0b57d9604def26d64dc31405333
[ "MIT" ]
1
2019-12-02T06:48:30.000Z
2019-12-02T06:48:30.000Z
249.217566
301,242
0.903703
[ [ [ "# TensorFlow Tutorial #02\n# Convolutional Neural Network\n\nThese lessons are adapted from [tutorials](https://github.com/Hvass-Labs/TensorFlow-Tutorials) \nby [Magnus Erik Hvass Pedersen](http://www.hvass-labs.org/) / [GitHub](https://github.com/Hvass-Labs/TensorFlow-Tutorials) / [Videos on YouTube](https://www.youtube.com/playlist?list=PL9Hr9sNUjfsmEu1ZniY0XpHSzl5uihcXZ) \nwhich are published under the [MIT License](https://github.com/Hvass-Labs/TensorFlow-Tutorials/blob/master/LICENSE) which allows very broad use for both academic and commercial purposes.\n", "_____no_output_____" ], [ "## Introduction\n\nThe previous tutorial showed that a simple linear model had about 91% classification accuracy for recognizing hand-written digits in the MNIST data-set.\n\nIn this tutorial we will implement a simple Convolutional Neural Network in TensorFlow which has a classification accuracy of about 99%, or more if you make some of the suggested exercises.\n\nConvolutional Networks work by moving small filters across the input image. This means the filters are re-used for recognizing patterns throughout the entire input image. This makes the Convolutional Networks much more powerful than Fully-Connected networks with the same number of variables. This in turn makes the Convolutional Networks faster to train.\n\nYou should be familiar with basic linear algebra, Python and the Jupyter Notebook editor. Beginners to TensorFlow may also want to study the first tutorial before proceeding to this one.", "_____no_output_____" ], [ "## Flowchart", "_____no_output_____" ], [ "The following chart shows roughly how the data flows in the Convolutional Neural Network that is implemented below.", "_____no_output_____" ] ], [ [ "from IPython.display import Image\nImage('images/02_network_flowchart.png')", "_____no_output_____" ] ], [ [ "The input image is processed in the first convolutional layer using the filter-weights. This results in 16 new images, one for each filter in the convolutional layer. The images are also down-sampled so the image resolution is decreased from 28x28 to 14x14.\n\nThese 16 smaller images are then processed in the second convolutional layer. We need filter-weights for each of these 16 channels, and we need filter-weights for each output channel of this layer. There are 36 output channels so there are a total of 16 x 36 = 576 filters in the second convolutional layer. The resulting images are down-sampled again to 7x7 pixels.\n\nThe output of the second convolutional layer is 36 images of 7x7 pixels each. These are then flattened to a single vector of length 7 x 7 x 36 = 1764, which is used as the input to a fully-connected layer with 128 neurons (or elements). This feeds into another fully-connected layer with 10 neurons, one for each of the classes, which is used to determine the class of the image, that is, which number is depicted in the image.\n\nThe convolutional filters are initially chosen at random, so the classification is done randomly. The error between the predicted and true class of the input image is measured as the so-called cross-entropy. The optimizer then automatically propagates this error back through the Convolutional Network using the chain-rule of differentiation and updates the filter-weights so as to improve the classification error. 
This is done iteratively thousands of times until the classification error is sufficiently low.\n\nThese particular filter-weights and intermediate images are the results of one optimization run and may look different if you re-run this Notebook.\n\nNote that the computation in TensorFlow is actually done on a batch of images instead of a single image, which makes the computation more efficient. This means the flowchart actually has one more data-dimension when implemented in TensorFlow.", "_____no_output_____" ], [ "## Convolutional Layer", "_____no_output_____" ], [ "![Convolutional Networks](http://nikbearbrown.com/YouTube/MachineLearning/IMG/Convolutional_Networks.png)\n\nConvolutional Networks [https://youtu.be/jajksuQW4mc](https://youtu.be/jajksuQW4mc)\n\n![Introduction to Deep Learning: What Are Convolutional Neural Networks?](http://nikbearbrown.com/YouTube/MachineLearning/IMG/What_Are_Convolutional_Neural_Networks.png )\n\nIntroduction to Deep Learning: What Are Convolutional Neural Networks? [https://youtu.be/ixF5WNpTzCA](https://youtu.be/ixF5WNpTzCA)\n\n![MIT 6.S191 Lecture 3: Convolutional Neural Networks](http://nikbearbrown.com/YouTube/MachineLearning/IMG/Convolutional_Neural_Networks.png )\nMIT 6.S191 Lecture 3: Convolutional Neural Networks [https://youtu.be/v5JvvbP0d44](https://youtu.be/v5JvvbP0d44)\n\nThe following chart shows the basic idea of processing an image in the first convolutional layer. The input image depicts the number 7 and four copies of the image are shown here, so we can see more clearly how the filter is being moved to different positions of the image. For each position of the filter, the dot-product is being calculated between the filter and the image pixels under the filter, which results in a single pixel in the output image. So moving the filter across the entire input image results in a new image being generated.\n\nThe red filter-weights means that the filter has a positive reaction to black pixels in the input image, while blue pixels means the filter has a negative reaction to black pixels.\n\nIn this case it appears that the filter recognizes the horizontal line of the 7-digit, as can be seen from its stronger reaction to that line in the output image.", "_____no_output_____" ] ], [ [ "Image('images/02_convolution.png')", "_____no_output_____" ] ], [ [ "The step-size for moving the filter across the input is called the stride. There is a stride for moving the filter horizontally (x-axis) and another stride for moving vertically (y-axis).\n\nIn the source-code below, the stride is set to 1 in both directions, which means the filter starts in the upper left corner of the input image and is being moved 1 pixel to the right in each step. When the filter reaches the end of the image to the right, then the filter is moved back to the left side and 1 pixel down the image. This continues until the filter has reached the lower right corner of the input image and the entire output image has been generated.\n\nWhen the filter reaches the end of the right-side as well as the bottom of the input image, then it can be padded with zeroes (white pixels). This causes the output image to be of the exact same dimension as the input image.\n\nFurthermore, the output of the convolution may be passed through a so-called Rectified Linear Unit (ReLU), which merely ensures that the output is positive because negative values are set to zero. 
The output may also be down-sampled by so-called max-pooling, which considers small windows of 2x2 pixels and only keeps the largest of those pixels. This halves the resolution of the input image e.g. from 28x28 to 14x14 pixels.\n\nNote that the second convolutional layer is more complicated because it takes 16 input channels. We want a separate filter for each input channel, so we need 16 filters instead of just one. Furthermore, we want 36 output channels from the second convolutional layer, so in total we need 16 x 36 = 576 filters for the second convolutional layer. It can be a bit challenging to understand how this works.", "_____no_output_____" ], [ "## Imports", "_____no_output_____" ] ], [ [ "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nimport numpy as np\nfrom sklearn.metrics import confusion_matrix\nimport time\nfrom datetime import timedelta\nimport math", "_____no_output_____" ] ], [ [ "This was developed using Python 3.5.2 (Anaconda) and TensorFlow version:", "_____no_output_____" ] ], [ [ "tf.__version__", "_____no_output_____" ] ], [ [ "## Configuration of Neural Network\n\nThe configuration of the Convolutional Neural Network is defined here for convenience, so you can easily find and change these numbers and re-run the Notebook.", "_____no_output_____" ] ], [ [ "# Convolutional Layer 1.\nfilter_size1 = 5 # Convolution filters are 5 x 5 pixels.\nnum_filters1 = 16 # There are 16 of these filters.\n\n# Convolutional Layer 2.\nfilter_size2 = 5 # Convolution filters are 5 x 5 pixels.\nnum_filters2 = 36 # There are 36 of these filters.\n\n# Fully-connected layer.\nfc_size = 128 # Number of neurons in fully-connected layer.", "_____no_output_____" ] ], [ [ "## Load Data", "_____no_output_____" ], [ "The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.", "_____no_output_____" ] ], [ [ "from tensorflow.examples.tutorials.mnist import input_data\ndata = input_data.read_data_sets('data/MNIST/', one_hot=True)\n", "Extracting data/MNIST/train-images-idx3-ubyte.gz\nExtracting data/MNIST/train-labels-idx1-ubyte.gz\nExtracting data/MNIST/t10k-images-idx3-ubyte.gz\nExtracting data/MNIST/t10k-labels-idx1-ubyte.gz\n" ] ], [ [ "The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.", "_____no_output_____" ] ], [ [ "print(\"Size of:\")\nprint(\"- Training-set:\\t\\t{}\".format(len(data.train.labels)))\nprint(\"- Test-set:\\t\\t{}\".format(len(data.test.labels)))\nprint(\"- Validation-set:\\t{}\".format(len(data.validation.labels)))", "Size of:\n- Training-set:\t\t55000\n- Test-set:\t\t10000\n- Validation-set:\t5000\n" ] ], [ [ "The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test-set, so we calculate it now.", "_____no_output_____" ] ], [ [ "data.test.cls = np.argmax(data.test.labels, axis=1)", "_____no_output_____" ] ], [ [ "## Data Dimensions", "_____no_output_____" ], [ "The data dimensions are used in several places in the source-code below. 
They are defined once so we can use these variables instead of numbers throughout the source-code below.", "_____no_output_____" ] ], [ [ "# We know that MNIST images are 28 pixels in each dimension.\nimg_size = 28\n\n# Images are stored in one-dimensional arrays of this length.\nimg_size_flat = img_size * img_size\n\n# Tuple with height and width of images used to reshape arrays.\nimg_shape = (img_size, img_size)\n\n# Number of colour channels for the images: 1 channel for gray-scale.\nnum_channels = 1\n\n# Number of classes, one class for each of 10 digits.\nnum_classes = 10", "_____no_output_____" ] ], [ [ "### Helper-function for plotting images", "_____no_output_____" ], [ "Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.", "_____no_output_____" ] ], [ [ "def plot_images(images, cls_true, cls_pred=None):\n assert len(images) == len(cls_true) == 9\n \n # Create figure with 3x3 sub-plots.\n fig, axes = plt.subplots(3, 3)\n fig.subplots_adjust(hspace=0.3, wspace=0.3)\n\n for i, ax in enumerate(axes.flat):\n # Plot image.\n ax.imshow(images[i].reshape(img_shape), cmap='binary')\n\n # Show true and predicted classes.\n if cls_pred is None:\n xlabel = \"True: {0}\".format(cls_true[i])\n else:\n xlabel = \"True: {0}, Pred: {1}\".format(cls_true[i], cls_pred[i])\n\n # Show the classes as the label on the x-axis.\n ax.set_xlabel(xlabel)\n \n # Remove ticks from the plot.\n ax.set_xticks([])\n ax.set_yticks([])\n \n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()", "_____no_output_____" ] ], [ [ "### Plot a few images to see if data is correct", "_____no_output_____" ] ], [ [ "# Get the first images from the test-set.\nimages = data.test.images[0:9]\n\n# Get the true classes for those images.\ncls_true = data.test.cls[0:9]\n\n# Plot the images and labels using our helper-function above.\nplot_images(images=images, cls_true=cls_true)", "_____no_output_____" ] ], [ [ "## TensorFlow Graph\n\nThe entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.\n\nTensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.\n\nTensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.\n\nA TensorFlow graph consists of the following parts which will be detailed below:\n\n* Placeholder variables used for inputting data to the graph.\n* Variables that are going to be optimized so as to make the convolutional network perform better.\n* The mathematical formulas for the convolutional network.\n* A cost measure that can be used to guide the optimization of the variables.\n* An optimization method which updates the variables.\n\nIn addition, the TensorFlow graph may also contain various debugging statements e.g. 
for logging data to be displayed using TensorBoard, which is not covered in this tutorial.\n\n### TensorFlow graphs\n\nTensorFlow is based on graph-based computation – “what on earth is that?”, you might say. It’s an alternative way of conceptualising mathematical calculations. TensorFlow is the second machine learning framework that Google created and used to design, build, and train deep learning models. You can use the TensorFlow library to do numerical computations, which in itself doesn’t seem all too special, but these computations are done with data flow graphs. In these graphs, nodes represent mathematical operations, while the edges represent the data (usually multidimensional arrays, or tensors) that is communicated between the nodes.\n\nYou see? The name “TensorFlow” is derived from the operations which neural networks perform on multidimensional data arrays or tensors! It’s literally a flow of tensors. \n\nLet's start with the basics of TensorFlow graphs: for example, a graph that defines two constants and prints their product would first be built (e.g. `a = tf.constant(3.0)` and `b = tf.constant(4.0)`) and only evaluated later by running `a * b` in a session. The same deferred-execution pattern is used for the network built below.", "_____no_output_____" ], [ "### Helper-functions for creating new variables", "_____no_output_____" ], [ "Functions for creating new TensorFlow variables in the given shape and initializing them with random values. Note that the initialization is not actually done at this point, it is merely being defined in the TensorFlow graph.", "_____no_output_____" ] ], [ [ "def new_weights(shape):\n return tf.Variable(tf.truncated_normal(shape, stddev=0.05))", "_____no_output_____" ], [ "def new_biases(length):\n return tf.Variable(tf.constant(0.05, shape=[length]))", "_____no_output_____" ] ], [ [ "### Helper-function for creating a new Convolutional Layer", "_____no_output_____" ], [ "This function creates a new convolutional layer in the computational graph for TensorFlow. Nothing is actually calculated here, we are just adding the mathematical formulas to the TensorFlow graph.\n\nIt is assumed that the input is a 4-dim tensor with the following dimensions:\n\n1. Image number.\n2. Y-axis of each image.\n3. X-axis of each image.\n4. Channels of each image.\n\nNote that the input channels may either be colour-channels, or it may be filter-channels if the input is produced from a previous convolutional layer.\n\nThe output is another 4-dim tensor with the following dimensions:\n\n1. Image number, same as input.\n2. Y-axis of each image. If 2x2 pooling is used, then the height and width of the input images is divided by 2.\n3. X-axis of each image. Ditto.\n4. Channels produced by the convolutional filters.", "_____no_output_____" ] ], [ [ "def new_conv_layer(input, # The previous layer.\n num_input_channels, # Num. channels in prev. layer.\n filter_size, # Width and height of each filter.\n num_filters, # Number of filters.\n use_pooling=True): # Use 2x2 max-pooling.\n\n # Shape of the filter-weights for the convolution.\n # This format is determined by the TensorFlow API.\n shape = [filter_size, filter_size, num_input_channels, num_filters]\n\n # Create new weights aka. filters with the given shape.\n weights = new_weights(shape=shape)\n\n # Create new biases, one for each filter.\n biases = new_biases(length=num_filters)\n\n # Create the TensorFlow operation for convolution.\n # Note the strides are set to 1 in all dimensions.\n # The first and last stride must always be 1,\n # because the first is for the image-number and\n # the last is for the input-channel.\n # But e.g. 
strides=[1, 2, 2, 1] would mean that the filter\n # is moved 2 pixels across the x- and y-axis of the image.\n # The padding is set to 'SAME' which means the input image\n # is padded with zeroes so the size of the output is the same.\n layer = tf.nn.conv2d(input=input,\n filter=weights,\n strides=[1, 1, 1, 1],\n padding='SAME')\n\n # Add the biases to the results of the convolution.\n # A bias-value is added to each filter-channel.\n layer += biases\n\n # Use pooling to down-sample the image resolution?\n if use_pooling:\n # This is 2x2 max-pooling, which means that we\n # consider 2x2 windows and select the largest value\n # in each window. Then we move 2 pixels to the next window.\n layer = tf.nn.max_pool(value=layer,\n ksize=[1, 2, 2, 1],\n strides=[1, 2, 2, 1],\n padding='SAME')\n\n # Rectified Linear Unit (ReLU).\n # It calculates max(x, 0) for each input pixel x.\n # This adds some non-linearity to the formula and allows us\n # to learn more complicated functions.\n layer = tf.nn.relu(layer)\n\n # Note that ReLU is normally executed before the pooling,\n # but since relu(max_pool(x)) == max_pool(relu(x)) we can\n # save 75% of the relu-operations by max-pooling first.\n\n # We return both the resulting layer and the filter-weights\n # because we will plot the weights later.\n return layer, weights", "_____no_output_____" ] ], [ [ "### Helper-function for flattening a layer\n\nA convolutional layer produces an output tensor with 4 dimensions. We will add fully-connected layers after the convolution layers, so we need to reduce the 4-dim tensor to 2-dim which can be used as input to the fully-connected layer.", "_____no_output_____" ] ], [ [ "def flatten_layer(layer):\n # Get the shape of the input layer.\n layer_shape = layer.get_shape()\n\n # The shape of the input layer is assumed to be:\n # layer_shape == [num_images, img_height, img_width, num_channels]\n\n # The number of features is: img_height * img_width * num_channels\n # We can use a function from TensorFlow to calculate this.\n num_features = layer_shape[1:4].num_elements()\n \n # Reshape the layer to [num_images, num_features].\n # Note that we just set the size of the second dimension\n # to num_features and the size of the first dimension to -1\n # which means the size in that dimension is calculated\n # so the total size of the tensor is unchanged from the reshaping.\n layer_flat = tf.reshape(layer, [-1, num_features])\n\n # The shape of the flattened layer is now:\n # [num_images, img_height * img_width * num_channels]\n\n # Return both the flattened layer and the number of features.\n return layer_flat, num_features", "_____no_output_____" ] ], [ [ "### Helper-function for creating a new Fully-Connected Layer", "_____no_output_____" ], [ "This function creates a new fully-connected layer in the computational graph for TensorFlow. Nothing is actually calculated here, we are just adding the mathematical formulas to the TensorFlow graph.\n\nIt is assumed that the input is a 2-dim tensor of shape `[num_images, num_inputs]`. The output is a 2-dim tensor of shape `[num_images, num_outputs]`.", "_____no_output_____" ] ], [ [ "def new_fc_layer(input, # The previous layer.\n num_inputs, # Num. inputs from prev. layer.\n num_outputs, # Num. 
outputs.\n use_relu=True): # Use Rectified Linear Unit (ReLU)?\n\n # Create new weights and biases.\n weights = new_weights(shape=[num_inputs, num_outputs])\n biases = new_biases(length=num_outputs)\n\n # Calculate the layer as the matrix multiplication of\n # the input and weights, and then add the bias-values.\n layer = tf.matmul(input, weights) + biases\n\n # Use ReLU?\n if use_relu:\n layer = tf.nn.relu(layer)\n\n return layer", "_____no_output_____" ] ], [ [ "### Placeholder variables", "_____no_output_____" ], [ "Placeholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below.\n\nFirst we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional vector or matrix. The data-type is set to `float32` and the shape is set to `[None, img_size_flat]`, where `None` means that the tensor may hold an arbitrary number of images with each image being a vector of length `img_size_flat`.", "_____no_output_____" ] ], [ [ "x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')", "_____no_output_____" ] ], [ [ "The convolutional layers expect `x` to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead `[num_images, img_height, img_width, num_channels]`. Note that `img_height == img_width == img_size` and `num_images` can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is:", "_____no_output_____" ] ], [ [ "x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])", "_____no_output_____" ] ], [ [ "Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable `x`. The shape of this placeholder variable is `[None, num_classes]` which means it may hold an arbitrary number of labels and each label is a vector of length `num_classes` which is 10 in this case.", "_____no_output_____" ] ], [ [ "y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')", "_____no_output_____" ] ], [ [ "We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.", "_____no_output_____" ] ], [ [ "y_true_cls = tf.argmax(y_true, dimension=1)", "_____no_output_____" ] ], [ [ "### Convolutional Layer 1\n\nCreate the first convolutional layer. It takes `x_image` as input and creates `num_filters1` different filters, each having width and height equal to `filter_size1`. Finally we wish to down-sample the image so it is half the size by using 2x2 max-pooling.", "_____no_output_____" ] ], [ [ "layer_conv1, weights_conv1 = \\\n new_conv_layer(input=x_image,\n num_input_channels=num_channels,\n filter_size=filter_size1,\n num_filters=num_filters1,\n use_pooling=True)", "_____no_output_____" ] ], [ [ "Check the shape of the tensor that will be output by the convolutional layer. 
It is (?, 14, 14, 16) which means that there is an arbitrary number of images (this is the ?), each image is 14 pixels wide and 14 pixels high, and there are 16 different channels, one channel for each of the filters.", "_____no_output_____" ] ], [ [ "layer_conv1", "_____no_output_____" ] ], [ [ "### Convolutional Layer 2\n\nCreate the second convolutional layer, which takes as input the output from the first convolutional layer. The number of input channels corresponds to the number of filters in the first convolutional layer.", "_____no_output_____" ] ], [ [ "layer_conv2, weights_conv2 = \\\n new_conv_layer(input=layer_conv1,\n num_input_channels=num_filters1,\n filter_size=filter_size2,\n num_filters=num_filters2,\n use_pooling=True)", "_____no_output_____" ] ], [ [ "Check the shape of the tensor that will be output from this convolutional layer. The shape is (?, 7, 7, 36) where the ? again means that there is an arbitrary number of images, with each image having width and height of 7 pixels, and there are 36 channels, one for each filter.", "_____no_output_____" ] ], [ [ "layer_conv2", "_____no_output_____" ] ], [ [ "### Flatten Layer\n\nThe convolutional layers output 4-dim tensors. We now wish to use these as input in a fully-connected network, which requires for the tensors to be reshaped or flattened to 2-dim tensors.", "_____no_output_____" ] ], [ [ "layer_flat, num_features = flatten_layer(layer_conv2)", "_____no_output_____" ] ], [ [ "Check that the tensors now have shape (?, 1764) which means there's an arbitrary number of images which have been flattened to vectors of length 1764 each. Note that 1764 = 7 x 7 x 36.", "_____no_output_____" ] ], [ [ "layer_flat", "_____no_output_____" ], [ "num_features", "_____no_output_____" ] ], [ [ "### Fully-Connected Layer 1\n\nAdd a fully-connected layer to the network. The input is the flattened layer from the previous convolution. The number of neurons or nodes in the fully-connected layer is `fc_size`. ReLU is used so we can learn non-linear relations.", "_____no_output_____" ] ], [ [ "layer_fc1 = new_fc_layer(input=layer_flat,\n num_inputs=num_features,\n num_outputs=fc_size,\n use_relu=True)", "_____no_output_____" ] ], [ [ "Check that the output of the fully-connected layer is a tensor with shape (?, 128) where the ? means there is an arbitrary number of images and `fc_size` == 128.", "_____no_output_____" ] ], [ [ "layer_fc1", "_____no_output_____" ] ], [ [ "### Fully-Connected Layer 2\n\nAdd another fully-connected layer that outputs vectors of length 10 for determining which of the 10 classes the input image belongs to. Note that ReLU is not used in this layer.", "_____no_output_____" ] ], [ [ "layer_fc2 = new_fc_layer(input=layer_fc1,\n num_inputs=fc_size,\n num_outputs=num_classes,\n use_relu=False)", "_____no_output_____" ], [ "layer_fc2", "_____no_output_____" ] ], [ [ "### Predicted Class", "_____no_output_____" ], [ "The second fully-connected layer estimates how likely it is that the input image belongs to each of the 10 classes. However, these estimates are a bit rough and difficult to interpret because the numbers may be very small or large, so we want to normalize them so that each element is limited between zero and one and the 10 elements sum to one. 
This is calculated using the so-called softmax function and the result is stored in `y_pred`.", "_____no_output_____" ] ], [ [ "y_pred = tf.nn.softmax(layer_fc2)", "_____no_output_____" ] ], [ [ "The class-number is the index of the largest element.", "_____no_output_____" ] ], [ [ "y_pred_cls = tf.argmax(y_pred, dimension=1)", "_____no_output_____" ] ], [ [ "### Cost-function to be optimized", "_____no_output_____" ], [ "To make the model better at classifying the input images, we must somehow change the variables for all the network layers. To do this we first need to know how well the model currently performs by comparing the predicted output of the model `y_pred` to the desired output `y_true`.\n\nThe cross-entropy is a performance measure used in classification. The cross-entropy is a continuous function that is always positive and if the predicted output of the model exactly matches the desired output then the cross-entropy equals zero. The goal of optimization is therefore to minimize the cross-entropy so it gets as close to zero as possible by changing the variables of the network layers.\n\nTensorFlow has a built-in function for calculating the cross-entropy. Note that the function calculates the softmax internally so we must use the output of `layer_fc2` directly rather than `y_pred` which has already had the softmax applied.", "_____no_output_____" ] ], [ [ "cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=layer_fc2,\n labels=y_true)", "_____no_output_____" ] ], [ [ "We have now calculated the cross-entropy for each of the image classifications so we have a measure of how well the model performs on each image individually. But in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cross-entropy for all the image classifications.", "_____no_output_____" ] ], [ [ "cost = tf.reduce_mean(cross_entropy)", "_____no_output_____" ] ], [ [ "### Optimization Method", "_____no_output_____" ], [ "Now that we have a cost measure that must be minimized, we can then create an optimizer. In this case it is the `AdamOptimizer` which is an advanced form of Gradient Descent.\n\nNote that optimization is not performed at this point. 
In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.", "_____no_output_____" ] ], [ [ "optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(cost)", "_____no_output_____" ] ], [ [ "### Performance Measures", "_____no_output_____" ], [ "We need a few more performance measures to display the progress to the user.\n\nThis is a vector of booleans whether the predicted class equals the true class of each image.", "_____no_output_____" ] ], [ [ "correct_prediction = tf.equal(y_pred_cls, y_true_cls)", "_____no_output_____" ] ], [ [ "This calculates the classification accuracy by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then calculating the average of these numbers.", "_____no_output_____" ] ], [ [ "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))", "_____no_output_____" ] ], [ [ "## TensorFlow Run", "_____no_output_____" ], [ "### Create TensorFlow session\n\nOnce the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.", "_____no_output_____" ] ], [ [ "session = tf.Session()", "_____no_output_____" ] ], [ [ "### Initialize variables\n\nThe variables for `weights` and `biases` must be initialized before we start optimizing them.", "_____no_output_____" ] ], [ [ "session.run(tf.global_variables_initializer())", "_____no_output_____" ] ], [ [ "### Helper-function to perform optimization iterations", "_____no_output_____" ], [ "There are 55,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.\n\nIf your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.", "_____no_output_____" ] ], [ [ "train_batch_size = 64", "_____no_output_____" ] ], [ [ "Function for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. 
The progress is printed every 100 iterations.", "_____no_output_____" ] ], [ [ "# Counter for total number of iterations performed so far.\ntotal_iterations = 0\n\ndef optimize(num_iterations):\n # Ensure we update the global variable rather than a local copy.\n global total_iterations\n\n # Start-time used for printing time-usage below.\n start_time = time.time()\n\n for i in range(total_iterations,\n total_iterations + num_iterations):\n\n # Get a batch of training examples.\n # x_batch now holds a batch of images and\n # y_true_batch are the true labels for those images.\n x_batch, y_true_batch = data.train.next_batch(train_batch_size)\n\n # Put the batch into a dict with the proper names\n # for placeholder variables in the TensorFlow graph.\n feed_dict_train = {x: x_batch,\n y_true: y_true_batch}\n\n # Run the optimizer using this batch of training data.\n # TensorFlow assigns the variables in feed_dict_train\n # to the placeholder variables and then runs the optimizer.\n session.run(optimizer, feed_dict=feed_dict_train)\n\n # Print status every 100 iterations.\n if i % 100 == 0:\n # Calculate the accuracy on the training-set.\n acc = session.run(accuracy, feed_dict=feed_dict_train)\n\n # Message for printing.\n msg = \"Optimization Iteration: {0:>6}, Training Accuracy: {1:>6.1%}\"\n\n # Print it.\n print(msg.format(i + 1, acc))\n\n # Update the total number of iterations performed.\n total_iterations += num_iterations\n\n # Ending time.\n end_time = time.time()\n\n # Difference between start and end-times.\n time_dif = end_time - start_time\n\n # Print the time-usage.\n print(\"Time usage: \" + str(timedelta(seconds=int(round(time_dif)))))", "_____no_output_____" ] ], [ [ "### Helper-function to plot example errors", "_____no_output_____" ], [ "Function for plotting examples of images from the test-set that have been mis-classified.", "_____no_output_____" ] ], [ [ "def plot_example_errors(cls_pred, correct):\n # This function is called from print_test_accuracy() below.\n\n # cls_pred is an array of the predicted class-number for\n # all images in the test-set.\n\n # correct is a boolean array whether the predicted class\n # is equal to the true class for each image in the test-set.\n\n # Negate the boolean array.\n incorrect = (correct == False)\n \n # Get the images from the test-set that have been\n # incorrectly classified.\n images = data.test.images[incorrect]\n \n # Get the predicted classes for those images.\n cls_pred = cls_pred[incorrect]\n\n # Get the true classes for those images.\n cls_true = data.test.cls[incorrect]\n \n # Plot the first 9 images.\n plot_images(images=images[0:9],\n cls_true=cls_true[0:9],\n cls_pred=cls_pred[0:9])", "_____no_output_____" ] ], [ [ "### Helper-function to plot confusion matrix", "_____no_output_____" ] ], [ [ "def plot_confusion_matrix(cls_pred):\n # This is called from print_test_accuracy() below.\n\n # cls_pred is an array of the predicted class-number for\n # all images in the test-set.\n\n # Get the true classifications for the test-set.\n cls_true = data.test.cls\n \n # Get the confusion matrix using sklearn.\n cm = confusion_matrix(y_true=cls_true,\n y_pred=cls_pred)\n\n # Print the confusion matrix as text.\n print(cm)\n\n # Plot the confusion matrix as an image.\n plt.matshow(cm)\n\n # Make various adjustments to the plot.\n plt.colorbar()\n tick_marks = np.arange(num_classes)\n plt.xticks(tick_marks, range(num_classes))\n plt.yticks(tick_marks, range(num_classes))\n plt.xlabel('Predicted')\n plt.ylabel('True')\n\n # Ensure the 
plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()", "_____no_output_____" ] ], [ [ "### Helper-function for showing the performance", "_____no_output_____" ], [ "Function for printing the classification accuracy on the test-set.\n\nIt takes a while to compute the classification for all the images in the test-set, that's why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function.\n\nNote that this function can use a lot of computer memory, which is why the test-set is split into smaller batches. If you have little RAM in your computer and it crashes, then you can try and lower the batch-size.", "_____no_output_____" ] ], [ [ "# Split the test-set into smaller batches of this size.\ntest_batch_size = 256\n\ndef print_test_accuracy(show_example_errors=False,\n show_confusion_matrix=False):\n\n # Number of images in the test-set.\n num_test = len(data.test.images)\n\n # Allocate an array for the predicted classes which\n # will be calculated in batches and filled into this array.\n cls_pred = np.zeros(shape=num_test, dtype=np.int)\n\n # Now calculate the predicted classes for the batches.\n # We will just iterate through all the batches.\n # There might be a more clever and Pythonic way of doing this.\n\n # The starting index for the next batch is denoted i.\n i = 0\n\n while i < num_test:\n # The ending index for the next batch is denoted j.\n j = min(i + test_batch_size, num_test)\n\n # Get the images from the test-set between index i and j.\n images = data.test.images[i:j, :]\n\n # Get the associated labels.\n labels = data.test.labels[i:j, :]\n\n # Create a feed-dict with these images and labels.\n feed_dict = {x: images,\n y_true: labels}\n\n # Calculate the predicted class using TensorFlow.\n cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)\n\n # Set the start-index for the next batch to the\n # end-index of the current batch.\n i = j\n\n # Convenience variable for the true class-numbers of the test-set.\n cls_true = data.test.cls\n\n # Create a boolean array whether each image is correctly classified.\n correct = (cls_true == cls_pred)\n\n # Calculate the number of correctly classified images.\n # When summing a boolean array, False means 0 and True means 1.\n correct_sum = correct.sum()\n\n # Classification accuracy is the number of correctly classified\n # images divided by the total number of images in the test-set.\n acc = float(correct_sum) / num_test\n\n # Print the accuracy.\n msg = \"Accuracy on Test-Set: {0:.1%} ({1} / {2})\"\n print(msg.format(acc, correct_sum, num_test))\n\n # Plot some examples of mis-classifications, if desired.\n if show_example_errors:\n print(\"Example errors:\")\n plot_example_errors(cls_pred=cls_pred, correct=correct)\n\n # Plot the confusion matrix, if desired.\n if show_confusion_matrix:\n print(\"Confusion Matrix:\")\n plot_confusion_matrix(cls_pred=cls_pred)", "_____no_output_____" ] ], [ [ "## Performance before any optimization\n\nThe accuracy on the test-set is very low because the model variables have only been initialized and not optimized at all, so it just classifies the images randomly.", "_____no_output_____" ] ], [ [ "print_test_accuracy()", "Accuracy on Test-Set: 10.3% (1033 / 10000)\n" ] ], [ [ "## Performance after 1 optimization iteration\n\nThe classification accuracy does not improve much from just 1 optimization iteration, because the learning-rate for the optimizer is set very low.", 
"_____no_output_____" ] ], [ [ "optimize(num_iterations=1)", "Optimization Iteration: 1, Training Accuracy: 12.5%\nTime usage: 0:00:00\n" ], [ "print_test_accuracy()", "Accuracy on Test-Set: 12.7% (1267 / 10000)\n" ] ], [ [ "## Performance after 100 optimization iterations\n\nAfter 100 optimization iterations, the model has significantly improved its classification accuracy.", "_____no_output_____" ] ], [ [ "optimize(num_iterations=99) # We already performed 1 iteration above.", "Time usage: 0:00:04\n" ], [ "print_test_accuracy(show_example_errors=True)", "Accuracy on Test-Set: 69.1% (6909 / 10000)\nExample errors:\n" ] ], [ [ "## Performance after 1000 optimization iterations\n\nAfter 1000 optimization iterations, the model has greatly increased its accuracy on the test-set to more than 90%.", "_____no_output_____" ] ], [ [ "optimize(num_iterations=900) # We performed 100 iterations above.", "Optimization Iteration: 101, Training Accuracy: 68.8%\nOptimization Iteration: 201, Training Accuracy: 73.4%\nOptimization Iteration: 301, Training Accuracy: 89.1%\nOptimization Iteration: 401, Training Accuracy: 84.4%\nOptimization Iteration: 501, Training Accuracy: 87.5%\nOptimization Iteration: 601, Training Accuracy: 93.8%\nOptimization Iteration: 701, Training Accuracy: 96.9%\nOptimization Iteration: 801, Training Accuracy: 92.2%\nOptimization Iteration: 901, Training Accuracy: 92.2%\nTime usage: 0:00:37\n" ], [ "print_test_accuracy(show_example_errors=True)", "Accuracy on Test-Set: 92.9% (9293 / 10000)\nExample errors:\n" ] ], [ [ "## Performance after 10,000 optimization iterations\n\nAfter 10,000 optimization iterations, the model has a classification accuracy on the test-set of about 99%.", "_____no_output_____" ] ], [ [ "optimize(num_iterations=9000) # We performed 1000 iterations above.", "Optimization Iteration: 1001, Training Accuracy: 93.8%\nOptimization Iteration: 1101, Training Accuracy: 89.1%\nOptimization Iteration: 1201, Training Accuracy: 89.1%\nOptimization Iteration: 1301, Training Accuracy: 96.9%\nOptimization Iteration: 1401, Training Accuracy: 87.5%\nOptimization Iteration: 1501, Training Accuracy: 95.3%\nOptimization Iteration: 1601, Training Accuracy: 90.6%\nOptimization Iteration: 1701, Training Accuracy: 87.5%\nOptimization Iteration: 1801, Training Accuracy: 95.3%\nOptimization Iteration: 1901, Training Accuracy: 93.8%\nOptimization Iteration: 2001, Training Accuracy: 95.3%\nOptimization Iteration: 2101, Training Accuracy: 93.8%\nOptimization Iteration: 2201, Training Accuracy: 89.1%\nOptimization Iteration: 2301, Training Accuracy: 96.9%\nOptimization Iteration: 2401, Training Accuracy: 98.4%\nOptimization Iteration: 2501, Training Accuracy: 92.2%\nOptimization Iteration: 2601, Training Accuracy: 96.9%\nOptimization Iteration: 2701, Training Accuracy: 100.0%\nOptimization Iteration: 2801, Training Accuracy: 96.9%\nOptimization Iteration: 2901, Training Accuracy: 98.4%\nOptimization Iteration: 3001, Training Accuracy: 98.4%\nOptimization Iteration: 3101, Training Accuracy: 95.3%\nOptimization Iteration: 3201, Training Accuracy: 98.4%\nOptimization Iteration: 3301, Training Accuracy: 93.8%\nOptimization Iteration: 3401, Training Accuracy: 98.4%\nOptimization Iteration: 3501, Training Accuracy: 96.9%\nOptimization Iteration: 3601, Training Accuracy: 100.0%\nOptimization Iteration: 3701, Training Accuracy: 96.9%\nOptimization Iteration: 3801, Training Accuracy: 98.4%\nOptimization Iteration: 3901, Training Accuracy: 96.9%\nOptimization Iteration: 4001, Training Accuracy: 
98.4%\nOptimization Iteration: 4101, Training Accuracy: 100.0%\nOptimization Iteration: 4201, Training Accuracy: 96.9%\nOptimization Iteration: 4301, Training Accuracy: 96.9%\nOptimization Iteration: 4401, Training Accuracy: 98.4%\nOptimization Iteration: 4501, Training Accuracy: 98.4%\nOptimization Iteration: 4601, Training Accuracy: 100.0%\nOptimization Iteration: 4701, Training Accuracy: 96.9%\nOptimization Iteration: 4801, Training Accuracy: 100.0%\nOptimization Iteration: 4901, Training Accuracy: 98.4%\nOptimization Iteration: 5001, Training Accuracy: 100.0%\nOptimization Iteration: 5101, Training Accuracy: 98.4%\nOptimization Iteration: 5201, Training Accuracy: 98.4%\nOptimization Iteration: 5301, Training Accuracy: 98.4%\nOptimization Iteration: 5401, Training Accuracy: 100.0%\nOptimization Iteration: 5501, Training Accuracy: 96.9%\nOptimization Iteration: 5601, Training Accuracy: 100.0%\nOptimization Iteration: 5701, Training Accuracy: 98.4%\nOptimization Iteration: 5801, Training Accuracy: 98.4%\nOptimization Iteration: 5901, Training Accuracy: 100.0%\nOptimization Iteration: 6001, Training Accuracy: 100.0%\nOptimization Iteration: 6101, Training Accuracy: 96.9%\nOptimization Iteration: 6201, Training Accuracy: 96.9%\nOptimization Iteration: 6301, Training Accuracy: 98.4%\nOptimization Iteration: 6401, Training Accuracy: 98.4%\nOptimization Iteration: 6501, Training Accuracy: 98.4%\nOptimization Iteration: 6601, Training Accuracy: 95.3%\nOptimization Iteration: 6701, Training Accuracy: 96.9%\nOptimization Iteration: 6801, Training Accuracy: 98.4%\nOptimization Iteration: 6901, Training Accuracy: 100.0%\nOptimization Iteration: 7001, Training Accuracy: 100.0%\nOptimization Iteration: 7101, Training Accuracy: 98.4%\nOptimization Iteration: 7201, Training Accuracy: 95.3%\nOptimization Iteration: 7301, Training Accuracy: 98.4%\nOptimization Iteration: 7401, Training Accuracy: 96.9%\nOptimization Iteration: 7501, Training Accuracy: 100.0%\nOptimization Iteration: 7601, Training Accuracy: 95.3%\nOptimization Iteration: 7701, Training Accuracy: 100.0%\nOptimization Iteration: 7801, Training Accuracy: 98.4%\nOptimization Iteration: 7901, Training Accuracy: 96.9%\nOptimization Iteration: 8001, Training Accuracy: 95.3%\nOptimization Iteration: 8101, Training Accuracy: 98.4%\nOptimization Iteration: 8201, Training Accuracy: 95.3%\nOptimization Iteration: 8301, Training Accuracy: 100.0%\nOptimization Iteration: 8401, Training Accuracy: 98.4%\nOptimization Iteration: 8501, Training Accuracy: 100.0%\nOptimization Iteration: 8601, Training Accuracy: 96.9%\nOptimization Iteration: 8701, Training Accuracy: 96.9%\nOptimization Iteration: 8801, Training Accuracy: 98.4%\nOptimization Iteration: 8901, Training Accuracy: 96.9%\nOptimization Iteration: 9001, Training Accuracy: 100.0%\nOptimization Iteration: 9101, Training Accuracy: 98.4%\nOptimization Iteration: 9201, Training Accuracy: 98.4%\nOptimization Iteration: 9301, Training Accuracy: 100.0%\nOptimization Iteration: 9401, Training Accuracy: 100.0%\nOptimization Iteration: 9501, Training Accuracy: 100.0%\nOptimization Iteration: 9601, Training Accuracy: 100.0%\nOptimization Iteration: 9701, Training Accuracy: 100.0%\nOptimization Iteration: 9801, Training Accuracy: 100.0%\nOptimization Iteration: 9901, Training Accuracy: 96.9%\nTime usage: 0:06:55\n" ], [ "print_test_accuracy(show_example_errors=True,\n show_confusion_matrix=True)", "Accuracy on Test-Set: 98.5% (9848 / 10000)\nExample errors:\n" ] ], [ [ "## Visualization of Weights and 
Layers\n\nIn trying to understand why the convolutional neural network can recognize handwritten digits, we will now visualize the weights of the convolutional filters and the resulting output images.", "_____no_output_____" ], [ "### Helper-function for plotting convolutional weights", "_____no_output_____" ] ], [ [ "def plot_conv_weights(weights, input_channel=0):\n # Assume weights are TensorFlow ops for 4-dim variables\n # e.g. weights_conv1 or weights_conv2.\n \n # Retrieve the values of the weight-variables from TensorFlow.\n # A feed-dict is not necessary because nothing is calculated.\n w = session.run(weights)\n\n # Get the lowest and highest values for the weights.\n # This is used to correct the colour intensity across\n # the images so they can be compared with each other.\n w_min = np.min(w)\n w_max = np.max(w)\n\n # Number of filters used in the conv. layer.\n num_filters = w.shape[3]\n\n # Number of grids to plot.\n # Rounded-up, square-root of the number of filters.\n num_grids = math.ceil(math.sqrt(num_filters))\n \n # Create figure with a grid of sub-plots.\n fig, axes = plt.subplots(num_grids, num_grids)\n\n # Plot all the filter-weights.\n for i, ax in enumerate(axes.flat):\n # Only plot the valid filter-weights.\n if i<num_filters:\n # Get the weights for the i'th filter of the input channel.\n # See new_conv_layer() for details on the format\n # of this 4-dim tensor.\n img = w[:, :, input_channel, i]\n\n # Plot image.\n ax.imshow(img, vmin=w_min, vmax=w_max,\n interpolation='nearest', cmap='seismic')\n \n # Remove ticks from the plot.\n ax.set_xticks([])\n ax.set_yticks([])\n \n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()", "_____no_output_____" ] ], [ [ "### Helper-function for plotting the output of a convolutional layer", "_____no_output_____" ] ], [ [ "def plot_conv_layer(layer, image):\n # Assume layer is a TensorFlow op that outputs a 4-dim tensor\n # which is the output of a convolutional layer,\n # e.g. layer_conv1 or layer_conv2.\n\n # Create a feed-dict containing just one image.\n # Note that we don't need to feed y_true because it is\n # not used in this calculation.\n feed_dict = {x: [image]}\n\n # Calculate and retrieve the output values of the layer\n # when inputting that image.\n values = session.run(layer, feed_dict=feed_dict)\n\n # Number of filters used in the conv. 
layer.\n num_filters = values.shape[3]\n\n # Number of grids to plot.\n # Rounded-up, square-root of the number of filters.\n num_grids = math.ceil(math.sqrt(num_filters))\n \n # Create figure with a grid of sub-plots.\n fig, axes = plt.subplots(num_grids, num_grids)\n\n # Plot the output images of all the filters.\n for i, ax in enumerate(axes.flat):\n # Only plot the images for valid filters.\n if i<num_filters:\n # Get the output image from applying the i'th filter.\n # See new_conv_layer() for details on the format\n # of this 4-dim tensor.\n img = values[0, :, :, i]\n\n # Plot image.\n ax.imshow(img, interpolation='nearest', cmap='binary')\n \n # Remove ticks from the plot.\n ax.set_xticks([])\n ax.set_yticks([])\n \n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()", "_____no_output_____" ] ], [ [ "### Input Images", "_____no_output_____" ], [ "Helper-function for plotting an image.", "_____no_output_____" ] ], [ [ "def plot_image(image):\n plt.imshow(image.reshape(img_shape),\n interpolation='nearest',\n cmap='binary')\n\n plt.show()", "_____no_output_____" ] ], [ [ "Plot an image from the test-set which will be used as an example below.", "_____no_output_____" ] ], [ [ "image1 = data.test.images[0]\nplot_image(image1)", "_____no_output_____" ] ], [ [ "Plot another example image from the test-set.", "_____no_output_____" ] ], [ [ "image2 = data.test.images[13]\nplot_image(image2)", "_____no_output_____" ] ], [ [ "### Convolution Layer 1", "_____no_output_____" ], [ "Now plot the filter-weights for the first convolutional layer.\n\nNote that positive weights are red and negative weights are blue.", "_____no_output_____" ] ], [ [ "plot_conv_weights(weights=weights_conv1)", "_____no_output_____" ] ], [ [ "Applying each of these convolutional filters to the first input image gives the following output images, which are then used as input to the second convolutional layer. Note that these images are down-sampled to 14 x 14 pixels which is half the resolution of the original input image.", "_____no_output_____" ] ], [ [ "plot_conv_layer(layer=layer_conv1, image=image1)", "_____no_output_____" ] ], [ [ "The following images are the results of applying the convolutional filters to the second image.", "_____no_output_____" ] ], [ [ "plot_conv_layer(layer=layer_conv1, image=image2)", "_____no_output_____" ] ], [ [ "It is difficult to see from these images what the purpose of the convolutional filters might be. It appears that they have merely created several variations of the input image, as if light was shining from different angles and casting shadows in the image.", "_____no_output_____" ], [ "### Convolution Layer 2", "_____no_output_____" ], [ "Now plot the filter-weights for the second convolutional layer.\n\nThere are 16 output channels from the first conv-layer, which means there are 16 input channels to the second conv-layer. The second conv-layer has a set of filter-weights for each of its input channels. We start by plotting the filter-weights for the first channel.\n\nNote again that positive weights are red and negative weights are blue.", "_____no_output_____" ] ], [ [ "plot_conv_weights(weights=weights_conv2, input_channel=0)", "_____no_output_____" ] ], [ [ "There are 16 input channels to the second convolutional layer, so we can make another 15 plots of filter-weights like this. We just make one more with the filter-weights for the second channel. ", "_____no_output_____" ] ], [ [ "plot_conv_weights(weights=weights_conv2, input_channel=1)", "_____no_output_____" ] ], [ [ "It can be difficult to understand and keep track of how these filters are applied because of the high dimensionality.\n\nApplying these convolutional filters to the images that were output from the first conv-layer gives the following images.\n\nNote that these are down-sampled yet again to 7 x 7 pixels which is half the resolution of the images from the first conv-layer.", "_____no_output_____" ] ], [ [ "plot_conv_layer(layer=layer_conv2, image=image1)", "_____no_output_____" ] ], [ [ "And these are the results of applying the filter-weights to the second image.", "_____no_output_____" ] ], [ [ "plot_conv_layer(layer=layer_conv2, image=image2)", "_____no_output_____" ] ], [ [ "From these images, it looks like the second convolutional layer might detect lines and patterns in the input images, which are less sensitive to local variations in the original input images.\n\nThese images are then flattened and input to the fully-connected layer, but that is not shown here.", "_____no_output_____" ], [ "### Close TensorFlow Session", "_____no_output_____" ], [ "We are now done using TensorFlow, so we close the session to release its resources.", "_____no_output_____" ] ], [ [ "# Comment this out if you want to modify and experiment\n# with the Notebook without having to restart it.\nsession.close()", "_____no_output_____" ] ], [ [ "## Conclusion\n\nWe have seen that a Convolutional Neural Network works much better at recognizing hand-written digits than the simple linear model in Tutorial #01. The Convolutional Network gets a classification accuracy of about 99%, or even more if you make some adjustments, compared to only 91% for the simple linear model.\n\nHowever, the Convolutional Network is also much more complicated to implement, and it is not obvious from looking at the filter-weights why it works and why it sometimes fails.\n\nSo we would like an easier way to program Convolutional Neural Networks and we would also like a better way of visualizing their inner workings.", "_____no_output_____" ], [ "## Exercises\n\nThese are a few suggestions for exercises that may help improve your skills with TensorFlow. It is important to get hands-on experience with TensorFlow in order to learn how to use it properly.\n\nYou may want to back up this Notebook before making any changes.\n\n* Do you get the exact same results if you run the Notebook multiple times without changing any parameters? What are the sources of randomness?\n* Run another 10,000 optimization iterations. Are the results better?\n* Change the learning-rate for the optimizer.\n* Change the configuration of the layers, such as the number of convolutional filters, the size of those filters, the number of neurons in the fully-connected layer, etc.\n* Add a so-called drop-out layer after the fully-connected layer. Note that the drop-out probability should be zero when calculating the classification accuracy, so you will need a placeholder variable for this probability.\n* Change the order of ReLU and max-pooling in the convolutional layer. Does it calculate the same thing? What is the fastest way of computing it? How many calculations are saved? Does it also work for Sigmoid-functions and average-pooling?\n* Add one or more convolutional and fully-connected layers. Does it help performance?\n* What is the smallest possible configuration that still gives good results?\n* Try using ReLU in the last fully-connected layer. Does the performance change? Why?\n* Try not using pooling in the convolutional layers. Does it change the classification accuracy and training time?\n* Try using a 2x2 stride in the convolution instead of max-pooling. What is the difference?\n* Remake the program yourself without looking too much at this source-code.\n* Explain to a friend how the program works.", "_____no_output_____" ], [ "### MIT 6.S191: Convolutional Neural Networks\n\n![MIT 6.S191: Convolutional Neural Networks](http://nikbearbrown.com/YouTube/MachineLearning/IMG/MIT_6.S191_Convolutional_Neural_Networks.png)\n\nMIT 6.S191: Convolutional Neural Networks [https://youtu.be/NVH8EYPHi30](https://youtu.be/NVH8EYPHi30) \n\n![MIT 6.S191: Issues in Image Classification](http://nikbearbrown.com/YouTube/MachineLearning/IMG/MIT_6.S191_Issues_in_Image_Classification.png)\n\nMIT 6.S191: Issues in Image Classification [https://youtu.be/QYwESy6isuc](https://youtu.be/QYwESy6isuc)\n\n \n\nMIT 6.S191: Introduction to Deep Learning [http://introtodeeplearning.com/](http://introtodeeplearning.com) ", "_____no_output_____" ], [ "## License (MIT)\n\nCopyright (c) 2016 by [Magnus Erik Hvass Pedersen](http://www.hvass-labs.org/)\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ] ]
4a61dc2fce321319545728898033183040be6bd0
6,522
ipynb
Jupyter Notebook
example/notebook/Sommerfeld-factor.ipynb
lucydot/CarrierCapture.jl
59f3eab8dd123b00781d06fcb0777a5127f4004e
[ "MIT" ]
1
2021-10-12T10:03:01.000Z
2021-10-12T10:03:01.000Z
example/notebook/Sommerfeld-factor.ipynb
lucydot/CarrierCapture.jl
59f3eab8dd123b00781d06fcb0777a5127f4004e
[ "MIT" ]
null
null
null
example/notebook/Sommerfeld-factor.ipynb
lucydot/CarrierCapture.jl
59f3eab8dd123b00781d06fcb0777a5127f4004e
[ "MIT" ]
null
null
null
23.977941
107
0.510886
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
4a61de5f8914f3c53b46927442f41f8f8a2099f4
2,326
ipynb
Jupyter Notebook
notebooks/workflows/ocean_heat_content/00_intro.ipynb
khallock/ncar-python-tutorial
340dc0ac1c92637bfac6c9be3fe1e70c4bf40885
[ "CC-BY-4.0" ]
null
null
null
notebooks/workflows/ocean_heat_content/00_intro.ipynb
khallock/ncar-python-tutorial
340dc0ac1c92637bfac6c9be3fe1e70c4bf40885
[ "CC-BY-4.0" ]
null
null
null
notebooks/workflows/ocean_heat_content/00_intro.ipynb
khallock/ncar-python-tutorial
340dc0ac1c92637bfac6c9be3fe1e70c4bf40885
[ "CC-BY-4.0" ]
null
null
null
31.432432
407
0.624248
[ [ [ "<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Computing-Ocean-Heat-Content-(OHC)\" data-toc-modified-id=\"Computing-Ocean-Heat-Content-(OHC)-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Computing Ocean Heat Content (OHC)</a></span></li></ul></div>", "_____no_output_____" ], [ "# Computing Ocean Heat Content (OHC)\n\nNow that you're familiar with the Jupyter Notebook workspace, let's use some Python in a way that mirrors a potential usecase and integrates the teaching of Python geoscience tools when you would need them. We've prepared a series of 4 notebooks that demonstrate Python Tools through the calculation of Ocean Heat Content (OHC). Throughout these notebooks, we will introduce the following concepts:\n\n- Python modules\n- Xarray Library\n\n\nThe contents in each notebook takes build up on the previous notebook. Therefore, we recommend following these notebooks in order (1-4) until you are familiar with all the concepts presented. ", "_____no_output_____" ], [ "<div class=\"alert alert-block alert-success\">\n <p>Next: <a href=\"01_modules_and_xarray_datasets.ipynb\">Book 1 of 4: Modules and Xarray Datasets</a></p>\n</div>", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown", "markdown" ] ]
4a61ed6c4626e2eaffa6aea79a2c6789f38ca1de
69,376
ipynb
Jupyter Notebook
A2/.ipynb_checkpoints/Assignment2-910-checkpoint.ipynb
tyrionz/data-wrangling-python
b4f7727450e72ffeb44a21a02faf720c9829494e
[ "CC0-1.0" ]
null
null
null
A2/.ipynb_checkpoints/Assignment2-910-checkpoint.ipynb
tyrionz/data-wrangling-python
b4f7727450e72ffeb44a21a02faf720c9829494e
[ "CC0-1.0" ]
null
null
null
A2/.ipynb_checkpoints/Assignment2-910-checkpoint.ipynb
tyrionz/data-wrangling-python
b4f7727450e72ffeb44a21a02faf720c9829494e
[ "CC0-1.0" ]
null
null
null
34.057928
1,621
0.441089
[ [ [ "Assessment Requirements \nEach group is required to complete the following two tasks: \n1. Generate a sparse representation for Paper Bodies (i.e. paper text without Title, Authors, Abstract and References). The sparse representation consists of two files: a. Vocabulary index file b. Sparse count vectors file \n \n2. Generate a CSV file (stats.csv) containing three columns: a. Top 10 most frequent terms appearing in all Titles b. Top 10 most frequent Authors c. Top 10 most frequent terms appearing in all Abstracts \n \nNote: In case of ties in any of the above fields, settle the tie based on alphabetical ascending order. (example: if the author named John appeared as many times as Mark, then John shall be selected over Mark) ", "_____no_output_____" ] ], [ [ "!pip install vocab", "Collecting vocab\n Downloading https://files.pythonhosted.org/packages/c2/75/6f312f7159f7353ce679b4e27296a77593e24cd266d6cd513ab37401450a/vocab-0.0.4.tar.gz\nBuilding wheels for collected packages: vocab\n Building wheel for vocab (setup.py): started\n Building wheel for vocab (setup.py): finished with status 'done'\n Stored in directory: C:\\Users\\Bot1\\AppData\\Local\\pip\\Cache\\wheels\\e3\\d8\\e7\\c6aa517ea6ac4c3ed8155741f0b3da0a23380585367d3cea84\nSuccessfully built vocab\nInstalling collected packages: vocab\nSuccessfully installed vocab-0.0.4\n" ], [ "import re\nimport pandas as pd\nimport requests\nimport os\n#import pdfminer\nimport tqdm\nfrom tqdm import tqdm_notebook as tqdm\n\n# segmentation first, then find capital words, then loop through and lower each word, then tokenize.\nimport io\nfrom io import StringIO\n#from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter\n#from pdfminer.converter import TextConverter\n#from pdfminer.layout import LAParams\n#from pdfminer.pdfpage import PDFPage\nimport os\nimport sys, getopt\n\nimport nltk.data\nsent_detector = nltk.data.load('tokenizers/punkt/english.pickle')\n\nfrom nltk.collocations import BigramCollocationFinder \nfrom nltk.metrics import BigramAssocMeasures\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.naive_bayes import MultinomialNB\nfrom nltk.tokenize import MWETokenizer\nfrom itertools import chain\nfrom sklearn.feature_extraction.text import TfidfVectorizer\n\nfrom nltk.tokenize import RegexpTokenizer\nfrom nltk.tokenize import MWETokenizer\n\n\nfrom nltk.probability import *\nfrom nltk.tokenize import word_tokenize\n\n\nfrom nltk.collocations import BigramCollocationFinder \nfrom nltk.metrics import BigramAssocMeasures\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.naive_bayes import MultinomialNB\nfrom nltk.tokenize import MWETokenizer\nfrom itertools import chain\nfrom sklearn.feature_extraction.text import TfidfVectorizer\n\nfrom nltk.stem.porter import *\nnltk.download('punkt')\n\n# segmentation first, then find capital words, then loop through and lower each word, then tokenize.\nimport io\nfrom io import StringIO\n#from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter\n#from pdfminer.converter import TextConverter\n#from pdfminer.layout import LAParams\n#from pdfminer.pdfpage import PDFPage\nimport os\nimport sys, getopt\n\nimport nltk.data\nsent_detector = nltk.data.load('tokenizers/punkt/english.pickle')\n\nfrom itertools import chain", "[nltk_data] Downloading package punkt to\n[nltk_data] C:\\Users\\Caddy'sLenovo\\AppData\\Roaming\\nltk_data...\n[nltk_data] Package punkt is already up-to-date!\n" ], [ "#!pip install Tabula-py #uncomment to 
install Tabula-py\n#!pip install pdfminer.six #uncomment to install pdfminer.six\n#!pip install pdfminer3k #uncomment to install pdfminer3k\n#!pip install tqdm\n#pdfminer3k is a Python 3 port of pdfminer", "_____no_output_____" ], [ "import tabula\n# reading the PDF file that contains the table data\n# you can find the pdf file with the complete code below\n# read_pdf will save the pdf table into a Pandas Dataframe\ndf = tabula.read_pdf(\"Group102.pdf\", pages='all')\n# in order to print first 5 lines of Table", "_____no_output_____" ], [ "df = df[df['filename'] != 'filename']", "_____no_output_____" ], [ "if not os.path.exists('data'):\n os.makedirs('data') # make a data directory to store all the pdf files downloaded\n for each in tqdm(df.iterrows()):\n response = requests.get(each[1][1])\n with open('data/'+ str(each[1][0]),'wb') as f:\n f.write(response.content)", "_____no_output_____" ], [ "# these pdfminer imports are commented out in the setup cell above,\n# but convert() needs them\nfrom pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter\nfrom pdfminer.converter import TextConverter\nfrom pdfminer.layout import LAParams\nfrom pdfminer.pdfpage import PDFPage\n\n#converts pdf, returns its text content as a string\ndef convert(fname, pages=None):\n if not pages:\n pagenums = set()\n else:\n pagenums = set(pages)\n\n output = io.StringIO()\n manager = PDFResourceManager()\n converter = TextConverter(manager, output, laparams=LAParams())\n interpreter = PDFPageInterpreter(manager, converter)\n\n infile = open(fname, 'rb')\n for page in PDFPage.get_pages(infile, pagenums):\n interpreter.process_page(page)\n infile.close()\n converter.close()\n text = output.getvalue()\n output.close()\n return text \n\n#converts all pdfs in directory pdfDir, saves all resulting txt files to txtDir\ndef convertMultiple(pdfDir, txtDir):\n if pdfDir == \"\": pdfDir = os.getcwd() + \"\\\\\" #if no pdfDir passed in \n for pdf in tqdm(os.listdir(pdfDir)): #iterate through pdfs in pdf directory\n fileExtension = pdf.split(\".\")[-1]\n if fileExtension == \"pdf\":\n pdfFilename = pdfDir + pdf \n text = convert(pdfFilename) #get string of text content of pdf\n textFilename = txtDir + pdf + \".txt\"\n textFile = open(textFilename, \"w\") #make text file\n textFile.write(text) #write text to text file\n\n# set paths accordingly:\npdfDir = \"C:/your_path_here/\"\ntxtDir = \"C://your_path_here/\"", "_____no_output_____" ], [ "#An empty list to store all the given stopwords\nstopwords=[]\n\n#Opening the given stopwords file and storing the words in the stopwords list\nwith open('stopwords_en.txt') as f:\n stopwords = f.read().splitlines() ", "_____no_output_____" ], [ "def selective_lower(sentence):\n aux_sentence = ''\n # drop 'the'/'The' and repair common PDF-extraction artifacts\n sentence = sentence.replace('the','')\n sentence = sentence.replace('The','')\n sentence = sentence.replace('\ufb01','fi') # fi ligature -> 'fi'\n sentence = sentence.replace('\ufb00','ff') # ff ligature -> 'ff'\n sentence = sentence.replace('- ','')\n sentence = sentence.replace('-\\n','')\n sentence = sentence.replace('\\n',' ')\n cap_set = re.findall(r'(?!^)\\b([A-Z]\\w+)',sentence)\n for word in sentence.split(\" \"):\n if (len(word) > 2) and (word not in stopwords):\n if (word not in cap_set):\n aux_sentence += word.lower() + str(' ')\n elif (len(word) > 2) and (word not in stopwords):\n aux_sentence += word + str(' ')\n \n aux_sentence = re.sub('[^A-Za-z]+', ' ', aux_sentence.strip())\n \n return aux_sentence.strip()", "_____no_output_____" ], [ "import os\nfrom tqdm import tqdm\n\nsent_detector = nltk.data.load('tokenizers/punkt/english.pickle')\nbody_dict={}\n\ndef get_data(directory):\n shortword = re.compile(r'\\W*\\b\\w{1,2}\\b') \n body_dict={} \n for filename in tqdm(os.listdir(directory)):\n filepdf = filename.replace('.pdf','')\n raw_body = convert(str(os.path.join(directory, filename)))\n \n sentence_list = sent_detector.tokenize(raw_body.strip())\n\n\n body = []\n start = 0\n stop = 0\n for i in range(len(sentence_list)):\n if 'Paper Body' in sentence_list[i]:\n start = i\n sentence_list[i] = sentence_list[i].replace('Paper Body','')\n if '2 Reference' in sentence_list[i]:\n stop = i\n sentence_list[i] = sentence_list[i].replace('w indows','windows')\n sentence_list[i] = sentence_list[i].replace('W indows','Windows')\n # this is to find the start and stop of Paper body\n for i in range(start, stop):\n body.append(sentence_list[i])\n for i in range(len(body)):\n body[i] = body[i].replace('\n',' ') #replace all the new lines\n body[i] = selective_lower(body[i]).strip() #selective lower-casing and stopword removal\n body[i] = shortword.sub('',body[i]) #make sure short words are removed\n body_dict[filepdf] = \" \".join(body)\n return body_dict\n", "_____no_output_____" ], [ "pathpdf = 'data/'\ntokenizer = RegexpTokenizer(\"[A-Za-z]\\w+(?:[-'?]\\w+)?\")\nbody_dict = get_data(pathpdf)", "\n 0%| | 0/200 [00:00<?, ?it/s]\n 0%|▍ | 1/200 [00:00<03:06, 1.07it/s]\n 1%|▊ | 2/200 [00:01<03:09, 1.05it/s]\n 2%|█▏ | 3/200 [00:02<02:55, 1.12it/s]\n 2%|█▋ | 4/200 [00:05<04:28, 1.37s/it]\n 2%|██ | 5/200 [00:06<04:18, 1.32s/it]\n 3%|██▍ | 6/200 [00:07<03:58, 1.23s/it]\n 4%|██▊ | 7/200 [00:08<03:42, 1.15s/it]\n 4%|███▎ | 8/200 [00:09<03:26, 1.08s/it]\n 4%|███▋ | 9/200 [00:10<03:19, 1.04s/it]\n 5%|████ | 10/200 [00:11<03:15, 1.03s/it]\n 6%|████▍ | 11/200 [00:12<03:23, 1.08s/it]\n 6%|████▊ | 12/200 [00:13<03:13, 1.03s/it]\n 6%|█████▎ | 13/200 [00:14<03:11, 1.03s/it]\n 7%|█████▋ | 14/200 [00:15<03:15, 1.05s/it]\n 8%|██████ | 15/200 [00:16<03:02, 1.02it/s]\n 8%|██████▍ | 16/200 [00:17<03:06, 1.01s/it]\n 8%|██████▉ | 17/200 [00:18<02:59, 1.02it/s]\n 9%|███████▎ | 18/200 [00:19<02:50, 1.07it/s]\n 10%|███████▋ | 19/200 [00:20<02:57, 1.02it/s]\n 10%|████████ | 20/200 [00:21<03:36, 1.20s/it]\n 10%|████████▌ | 21/200 [00:23<03:36, 1.21s/it]\n 11%|████████▉ | 22/200 [00:24<03:21, 1.13s/it]\n 12%|█████████▎ | 23/200 [00:25<03:18, 1.12s/it]\n 12%|█████████▋ | 24/200 [00:26<03:07, 1.07s/it]\n 12%|██████████▏ | 25/200 [00:27<02:59, 1.02s/it]\n 13%|██████████▌ | 26/200 [00:28<02:57, 1.02s/it]\n 14%|██████████▉ | 27/200 [00:29<02:54, 1.01s/it]\n 14%|███████████▎ | 28/200 [00:30<03:01, 1.06s/it]\n 14%|███████████▋ | 29/200 [00:31<02:59, 1.05s/it]\n 15%|████████████▏ | 30/200 [00:32<02:59, 1.06s/it]\n 16%|████████████▌ | 31/200 [00:33<03:04, 1.09s/it]\n 16%|████████████▉ | 32/200 [00:35<03:46, 1.35s/it]\n 16%|█████████████▎ | 33/200 [00:36<03:27, 1.24s/it]\n 17%|█████████████▊ | 34/200 [00:37<03:21, 1.21s/it]\n 18%|██████████████▏ | 35/200 [00:38<03:12, 1.16s/it]\n 18%|██████████████▌ | 36/200 [00:39<02:53, 1.06s/it]\n 18%|██████████████▉ | 37/200 [00:40<02:49, 1.04s/it]\n 19%|███████████████▍ | 38/200 [00:41<02:45, 1.02s/it]\n 20%|███████████████▊ | 39/200 [00:42<02:53, 1.08s/it]\n 20%|████████████████▏ | 40/200 [00:43<02:50, 1.06s/it]\n 20%|████████████████▌ | 41/200 [00:44<02:47, 1.05s/it]\n 21%|█████████████████ | 42/200 [00:45<02:50, 1.08s/it]\n 22%|█████████████████▍ | 43/200 [00:46<02:52, 1.10s/it]\n 22%|█████████████████▊ | 44/200 [00:47<02:41, 1.04s/it]\n 22%|██████████████████▏ | 45/200 [00:48<02:43, 1.05s/it]\n 23%|██████████████████▋ | 46/200 [00:50<03:26, 1.34s/it]" ], [ "data = pd.DataFrame.from_dict(body_dict,orient='index')\ndata = 
data.reset_index()\ndata.columns = ['filename','body']", "_____no_output_____" ], [ "def tokenize(body):\n tokenized_body = tokenizer.tokenize(body_dict[body]) #tokenizing the string\n return (body, tokenized_body) # return a tuple of file name and a list of tokens\n\n#calling the tokenize method in a loop for all the elements in the dictionary\ntokenized_body = dict(tokenize(body) for body in body_dict.keys()) \n#data['tokenized'] = data['body'].apply(tokenizer.tokenize)", "_____no_output_____" ], [ "all_tokens = list(chain.from_iterable(tokenized_body.values())) #maybe this should be done finally", "_____no_output_____" ], [ "from sklearn.feature_extraction.text import CountVectorizer\nvectorizer = CountVectorizer(analyzer = \"word\") ", "_____no_output_____" ], [ "data_features = vectorizer.fit_transform([' '.join(value) for value in tokenized_body.values()])\nprint (data_features.shape)", "(200, 17019)\n" ], [ "words = list(chain.from_iterable(tokenized_body.values()))\nvocab = set(words)", "_____no_output_____" ], [ "from sklearn.feature_extraction.text import CountVectorizer\nvectorizer = CountVectorizer(analyzer = \"word\") ", "_____no_output_____" ], [ "words_per_doc = list(chain.from_iterable([set(value) for value in tokenized_body.values()]))\nwpd = FreqDist(words_per_doc)\nwpd.most_common(25)", "_____no_output_____" ], [ "word_to_remove = []\nfor word, count in wpd.items():\n if (count/200 < 0.03) or (count/200 > 0.95):\n word_to_remove.append(word)", "_____no_output_____" ], [ "len(word_to_remove)", "_____no_output_____" ], [ "all_tokens = [token for token in all_tokens if token not in word_to_remove]", "_____no_output_____" ], [ "len(set(all_tokens))", "_____no_output_____" ], [ "for file in list(tokenized_body.keys()):\n tokenized_body[file] = [token for token in tokenized_body[file] if token not in word_to_remove]", "_____no_output_____" ], [ "#Finding the top 200 bigrams\nfinder=BigramCollocationFinder.from_words(all_tokens)\nbigrams=finder.nbest(BigramAssocMeasures.likelihood_ratio, 200)\nbigrams_list=[x for x in bigrams if not any(c.isdigit() for c in x)] ", "_____no_output_____" ], [ "bigrams_list", "_____no_output_____" ], [ "#Eliminating numbers from bigrams\nbigrams_list=[x for x in bigrams if not any(c.isdigit() for c in x)] ", "_____no_output_____" ], [ "#Preserving these bigrams and putting it back in the dictionary, along with the unigrams\nmwetokenizer = MWETokenizer(bigrams_list,separator='__')", "_____no_output_____" ], [ "#colloc_body is a dictionary that contains both the bigrams as well as the unigrams\ncolloc_body = dict((body, mwetokenizer.tokenize(data)) for body,data in tokenized_body.items())", "_____no_output_____" ], [ "#Using the porterstemmer method\nps = PorterStemmer()\n#An empty string to store the content of a particular body\nstrcontent=''\n#An empty dictionary to append the stemmed data back \nstemmed_dict=dict()\n\n#Looping to stem each value in the dictionary\nfor key,body in colloc_body.items(): \n for word in body:\n if (word[0].islower()) and ('__' not in word):\n #Temporarily storing the data in an empty string\n strcontent=strcontent+ ' ' + ps.stem(word)\n \n #Assigning the string to the respective key\n stemmed_dict[key]=strcontent\n #Again emptying the string to store the next resume\n strcontent=''\n\n#Loop to again word tokenize each body in the dictionary and assigning it back to its body number \nfor key,body in stemmed_dict.items():\n stemmed_dict[key]=word_tokenize(body)", "_____no_output_____" ], [ "for key,body in 
colloc_body.items(): \n for word in body:\n if (word[0].isupper()) and ('__' in word):\n stemmed_dict[key].append(word)\n", "_____no_output_____" ], [ "uni_tokens = []\nfor file in list(stemmed_dict.keys()):\n for word in stemmed_dict[file]:\n uni_tokens.append(word)", "_____no_output_____" ], [ "len(set(uni_tokens))", "_____no_output_____" ], [ "vocab = list(set(uni_tokens))", "_____no_output_____" ], [ "# map each vocabulary term to a unique integer index\nword2index = {token: token_index for token_index, token in enumerate(vocab)}", "_____no_output_____" ], [ "vocab", "_____no_output_____" ], [ "from vocab import Vocab, UnkVocab\nv = Vocab()\nvocab_index = v.word2index(vocab, train=True)\nvocab_serial = dict(zip(vocab,vocab_index))\n\n\nfile = open('Group102_vocab.txt', \"w\")\n\nfor term, idx in vocab_serial.items():\n file.write(str(term) + ':'+ str(idx) + '\n')\n\nfile.close()", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a61fac25d298924f3ad53e18f890d275a971150
14,962
ipynb
Jupyter Notebook
Python/Python_Strings/PythonStrings.ipynb
SouravKr2001/winter-of-contributing
94cb13e6260422e75c9a666c948b53ba8ce09d19
[ "MIT" ]
null
null
null
Python/Python_Strings/PythonStrings.ipynb
SouravKr2001/winter-of-contributing
94cb13e6260422e75c9a666c948b53ba8ce09d19
[ "MIT" ]
null
null
null
Python/Python_Strings/PythonStrings.ipynb
SouravKr2001/winter-of-contributing
94cb13e6260422e75c9a666c948b53ba8ce09d19
[ "MIT" ]
null
null
null
29.510848
202
0.493517
[ [ [ "# Python Strings\n\n- Strings is one of the most important data types.\n- Let's know how it is declared, defined, accessed and common operations performed on the python strings.", "_____no_output_____" ], [ "## Strings\n\n- A string is a collection or series of characters.\n- An important point to remember is that python strings are **immutable**. Once,we created a string object, we cannot change them. \n- However, observe that python doesn't have character data type. It just treats a single character as a string of length one.\n- Anything enclosed in quotes(either single or double) is considered as a string.\n- Indexing in strings starts from 0.", "_____no_output_____" ], [ "## Declaration Of Strings\n\n- The declaration of a string is very simple. It is similar to declaration as that of any other basic data type.\n- **Syntax**\n> *var_name = 'some text within quotes'* or *var_name = \"some text within quotes\"*\n- Lets look at an example!", "_____no_output_____" ] ], [ [ "MyString = \"Hello World!\"", "_____no_output_____" ] ], [ [ "## Accessing Strings\n- Now, that we know how to initialise strings, lets look at how to access them.\n- We can use name of the variable to which the string is assigned, if we want to access the complete string.\n- To access only one character of a string ,we use indices.\n- To access a substring (a subsequence/part of total string), we use slicing.\n\n**Syntax**:\n> 1) *var_name* - to access the whole string.\\\n 2) *var_name[index]* - to access a certain character at 'index' position.\\\n 3) *var_name[index1:index2]* - to access a sequence of characters from index1 to index2-1 positions in the string.\n\n- Let's look at few examples of accessing strings.\n", "_____no_output_____" ] ], [ [ "MyString = \"I love GirlScript\"\nprint(MyString)\nprint(MyString[0])\nprint(MyString[7:]) #if index2 is left empty it prints upto the end.", "I love GirlScript\nI\nGirlScript\n" ] ], [ [ "## Operations On Strings\n\n Now, we had the basic knowledge of what string is, how they are declared and accessed . Let's see the operations that can be performed on strings.\\\n Some of the common operations are \n- length \n- Copy\n- Concatenation\n- Comparison\n- Finding if a substring exists in a string.\n\nThese are some of the common operations performed on strings. Let's look at how these are performed.\n\n### Finding length of string\n\n- The *len()* function in python gives us the length of string. \n- This can be done as follows:\n\n", "_____no_output_____" ] ], [ [ "MyString = \"I love GirlScript\"\nprint(\"The length of MyString is \"+ str(len(MyString))) # At this point, all you have to notice is that len(MyString) is giving the length of MyString variable.\n# I'll get back to the part of str() and '+' later in this tutorial.", "The length of MyString is 17\n" ] ], [ [ "### Copying Strings\n\n- There are many ways to copy strings in python. However, most of them creates a copy of the object but the new and old variables points same object.\n- Lets look into the ways how strings can be copied.\n1. Using **Slice operator**:\n- We can create a new variable and then use the actual variable that holds the string along with [ : ].\n- Let's look at an example.", "_____no_output_____" ] ], [ [ "MyString = \"I love GirlScript\"\nMyString2 = MyString[:]\nprint(MyString2)", "I love GirlScript\n" ] ], [ [ "2. 
Using **Assignment Operator**\n- We could simply use '=' to make a copy of a string; both the old and new variables then refer to the same string object.\n- An example is as shown:", "_____no_output_____" ] ], [ [ "MyString = \"I love GirlScript\"\nMyString2 = MyString\nprint(MyString2)", "I love GirlScript\n" ] ], [ [ "3. Using **str()**\n- Now, let's learn about the str() function in Python.\n- str() converts any value within its parentheses into a string object and returns it.\n- Copying via str() can be done as shown:", "_____no_output_____" ] ], [ [ "MyString = \"I love GirlScript\"\nMyString2 = str(MyString)\nprint(MyString2)", "I love GirlScript\n" ] ], [ [ "4. We can even come up with our own program to copy strings. Here is one:\n", "_____no_output_____" ] ], [ [ "MyString = \"I love GirlScript\"\nMyString2 = '' # currently an empty string.\nfor i in MyString:\n    MyString2 += i # append i to MyString2 and assign the new value back to MyString2. + is a form of concatenation, which we shall learn in detail in the next section.\nprint(MyString2)", "I love GirlScript\n" ] ], [ [ "## Concatenation\n\nConcatenation is adding two or more strings to form a larger string. This can be done using the '+' operator.\\\nAn example is as shown:", "_____no_output_____" ] ], [ [ "MyString = \"I love GirlScript\"\nMyString2 = \" Winter of Contribution\"\nMyString3 = MyString + MyString2\nprint(MyString3)", "I love GirlScript Winter of Contribution\n" ] ], [ [ "However, print() has its own special way of treating its parameters.\\\nAnything written in print() separated by commas (,) is first converted to a string object, and the values are concatenated with a space added in between. Let's see an example:\n", "_____no_output_____" ] ], [ [ "print(MyString,MyString2)", "I love GirlScript Winter of Contribution\n" ] ], [ [ "## Comparison\n\nString comparison is one of the most frequently used operations. \\\nStrings can be compared just like basic data types such as int, and comparison can be done using \n- ==\n- &lt;\n- &gt;\n- &lt;=\n- &gt;=\n\nWhat these comparison operators generally do is check the ASCII value of each character until a mismatched value occurs; if there's no mismatch, the strings are equal. \n\n", "_____no_output_____" ] ], [ [ "print('a' > 'b') #output is False as the ASCII value of 'a' (97) is less than the ASCII value of 'b' (98)\nprint('ab' == 'ab') #returns True as the strings are the same\nprint('ac' >= 'abd') #prints True as 'c' (99) is greater than 'b' (98)\nprint('ac' <= 'abd') #False as 'c' is greater than 'b'\nprint('ac' < 'abd') #result follows from the above discussion", "False\nTrue\nTrue\nFalse\nFalse\n" ] ], [ [ "## Finding a substring in a given string\n\nfind() returns the index at which the substring occurs in the given string, and -1 if the substring is not found.\\\nfind() takes in 3 parameters:\n- str -> the substring to search for\n- beginning index -> specifies the starting index from where the search should start (this is 0 by default)\n- ending index -> the ending index; by default it is equal to the length of the string.\n\nLet's look at an example of how it's done:", "_____no_output_____" ] ], [ [ "MyString = \"I love GirlScript\"\nSubstr = \"GirlScript\"\nprint(MyString.find(Substr))", "7\n" ] ], [ [ "These are the common operations that are performed on strings. There are several other built-in string methods such as islower(), isupper(), strip(), split(), capitalize(), count(), etc. ", "_____no_output_____" ], [ "## Conclusion\n\nPython strings are very helpful and easy to handle. 
However, one must perform manipulations on strings keeping in mind that they are immutable. Performing an operation such as:\n\n> MyString[1] = '$'\n\nthrows an error. Every programmer needs a firm knowledge of strings.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
4a6201a17ddeee7e8c2a24666018c944d59029ba
4,145
ipynb
Jupyter Notebook
mytests/preprocess_smartlab.ipynb
charithmu/Open3D-ML
8f11ef7416b8f3437d9735603b6e18ac7f8e7c96
[ "MIT" ]
3
2021-03-18T17:09:32.000Z
2021-06-26T20:58:12.000Z
mytests/preprocess_smartlab.ipynb
charithmu/Open3D-ML
8f11ef7416b8f3437d9735603b6e18ac7f8e7c96
[ "MIT" ]
null
null
null
mytests/preprocess_smartlab.ipynb
charithmu/Open3D-ML
8f11ef7416b8f3437d9735603b6e18ac7f8e7c96
[ "MIT" ]
1
2021-06-26T11:04:29.000Z
2021-06-26T11:04:29.000Z
26.069182
89
0.505428
[ [ [ "import glob\nimport os\nimport pprint\nimport random\nimport shutil\nfrom itertools import islice\n\nimport numpy as np\nimport pandas as pd\nfrom pypcd import pypcd", "_____no_output_____" ], [ "def convert_to_npy(path):\n cloud = pypcd.PointCloud.from_path(path)\n df = pd.DataFrame(cloud.pc_data)\n result = df[[\"x\", \"y\", \"z\", \"label\"]].to_numpy()\n return result", "_____no_output_____" ], [ "dataset_path = \"/home/threedee/datasets/SmartLab/\"\nannotated_path = \"/home/threedee/annotation_workspace/Data/smartlab/labled/\"\n\nsenarios = [\"senario1\", \"senario2\", \"senario3\", \"senario4\"]\n\nnpy_output_path = dataset_path + \"npy/\"", "_____no_output_____" ], [ "# save *.pcd files as *.npy files in given folder.\n\nfor sc in senarios:\n\n print(\"Processing files in: \" + sc)\n anno_path_sc = annotated_path + sc + \"/\"\n save_path_sc = npy_output_path + sc + \"/\"\n\n list = glob.glob(anno_path_sc + \"*.pcd\")\n for i in range(len(list)):\n\n print(list[i])\n\n path, file = os.path.split(list[i])\n save_path = save_path_sc + file[:-4] + \".npy\"\n\n if not os.path.exists(save_path_sc):\n os.makedirs(save_path_sc)\n\n npy_cloud = convert_to_npy(list[i])\n np.save(save_path, npy_cloud)", "_____no_output_____" ], [ "# clean the train,val and test folders for new data\n\ntrain_set = glob.glob(dataset_path + \"train/*\")\nval_set = glob.glob(dataset_path + \"val/*\")\ntest_set = glob.glob(dataset_path + \"test/*\")\nall_set = train_set + val_set + test_set\nfor f in all_set:\n # print(\"Removing: \" + f)\n os.remove(f)\nprint(\"Removed \" + str(len(all_set)) + \" files.\")", "_____no_output_____" ], [ "# copy files in to train,val and test folders with random sampling\n\ntotal = 218\nsplit_val = [152, 44, 22]\nsplit_names = [\"train\", \"val\", \"test\"]\n\nfiles = glob.glob(npy_output_path + \"/**/*.npy\", recursive=True)\n\nprint(str(len(files)) + \" files found.\")\n# pprint.pprint(files[0])\n\nfilearr = np.array(files)\nnp.random.shuffle(filearr)\n\n# pprint.pprint(filearr[0])\n\nsplits = np.split(filearr, [152, 196])\n\nfor (idx, name) in zip(range(3), split_names):\n\n print(\"Copying \" + str(len(splits[idx])) + \" files into split: \" + name)\n\n for file in splits[idx]:\n filename = os.path.split(file)[1]\n copy_path = dataset_path + name + \"/\" + filename\n\n shutil.copy(file, copy_path)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code" ] ]
4a620332c4050da0c1cd93140a4e9d4e583d2220
618,608
ipynb
Jupyter Notebook
new_experiment/viz_expression_data.ipynb
jjc2718/generic-expression-patterns
99961ac3647d2447268ca73a94cab8b09ee08237
[ "BSD-3-Clause" ]
null
null
null
new_experiment/viz_expression_data.ipynb
jjc2718/generic-expression-patterns
99961ac3647d2447268ca73a94cab8b09ee08237
[ "BSD-3-Clause" ]
null
null
null
new_experiment/viz_expression_data.ipynb
jjc2718/generic-expression-patterns
99961ac3647d2447268ca73a94cab8b09ee08237
[ "BSD-3-Clause" ]
null
null
null
361.759064
254,532
0.923937
[ [ [ "# Visualize gene expression\n\nThis notebook visualizes the gene expression data for the template and simulated experiments in order to:\n1. Validate that the structure of the gene expression data and simulated data are consistent\n2. To visualize the signal that is in the experiments", "_____no_output_____" ] ], [ [ "%load_ext autoreload\n%load_ext rpy2.ipython\n%autoreload 2", "_____no_output_____" ], [ "import os\nimport pandas as pd\nimport umap\nimport pickle\nimport glob\nimport seaborn as sns\nfrom sklearn.decomposition import PCA\nfrom keras.models import load_model\nimport plotnine as pn\n\nfrom ponyo import utils\nfrom generic_expression_patterns_modules import plot", "/home/alexandra/anaconda3/envs/generic_expression/lib/python3.7/site-packages/matplotlib/__init__.py:886: MatplotlibDeprecationWarning: \nexamples.directory is deprecated; in the future, examples will be found relative to the 'datapath' directory.\n \"found relative to the 'datapath' directory.\".format(key))\nUsing TensorFlow backend.\n" ] ], [ [ "## Load config parameters", "_____no_output_____" ] ], [ [ "# Read in config variables\nbase_dir = os.path.abspath(os.path.join(os.getcwd(), \"../\"))\n\nconfig_filename = os.path.abspath(\n os.path.join(base_dir, \"configs\", \"config_pseudomonas_33245.tsv\")\n)\n\nparams = utils.read_config(config_filename)", "_____no_output_____" ], [ "# Load config params\n\nlocal_dir = params['local_dir']\nproject_id = params['project_id']\nnum_simulated = params['num_simulated']\n\npval_name = \"adj.P.Val\"\nlogFC_name = \"logFC\"\nrun=0\n\n# Manual settings to visualize/troubleshoot volcano plots for other datasets\n# Will pull these out to archive later\n\n\"\"\"vae_model_dir = params['vae_model_dir']\ntemplate_filename = params['mapped_template_filename']\nnormalized_compendium_filename = params['normalized_compendium_filename']\nscaler_filename = params['scaler_filename']\"\"\"\n\n\n# Settings for running visualization using pseudomonas config file\nvae_model_dir = os.path.join(base_dir,\"pseudomonas_analysis\", \"models\", \"NN_2500_30\")\ntemplate_filename = params['processed_template_filename']\nnormalized_compendium_filename = params['normalized_compendium_filename']\nscaler_filename = params['scaler_filename']\n\n\"\"\"# Settings for running visualization using human cancer config file\nvae_model_dir = os.path.join(base_dir,\"human_cancer_analysis\", \"models\", \"NN_2500_30\")\ntemplate_filename = params['processed_template_filename']\nnormalized_compendium_filename = params['normalized_compendium_filename']\nscaler_filename = params['scaler_filename']\"\"\"\n\n\n\"\"\"# Settings for running visualization using human_general config file\nvae_model_dir = os.path.join(base_dir,\"human_general_analysis\", \"models\", \"NN_2500_30\")\ntemplate_filename = os.path.join(base_dir,\"human_general_analysis\", params['processed_template_filename'])\nnormalized_compendium_filename = params['normalized_compendium_filename']\nscaler_filename = os.path.join(base_dir, \"human_general_analysis\", params['scaler_filename'])\"\"\"", "_____no_output_____" ] ], [ [ "## Volcano plots", "_____no_output_____" ] ], [ [ "# Check number of DEGs\ntemplate_DE_stats_filename = os.path.join(\n local_dir,\n \"DE_stats\",\n f\"DE_stats_template_data_{project_id}_real.txt\"\n)\n\ntemplate_DE_stats = pd.read_csv(\n template_DE_stats_filename, \n sep=\"\\t\", \n header=0, \n index_col=0\n)\n\nselected = template_DE_stats[(template_DE_stats[pval_name]<0.01) & 
(abs(template_DE_stats[logFC_name])>1)]\nprint(selected.shape)", "(122, 6)\n" ], [ "plot.make_volcano_plot_template(\n    template_DE_stats_filename,\n    project_id,\n    pval_name,\n    logFC_name\n)", "/home/alexandra/anaconda3/envs/generic_expression/lib/python3.7/site-packages/matplotlib/font_manager.py:1238: UserWarning: findfont: Font family ['Verdana'] not found. Falling back to DejaVu Sans.\n  (prop.get_family(), self.defaultFamily[fontext]))\n" ], [ "simulated_DE_stats_dir = os.path.join(local_dir, \"DE_stats\")\n\nplot.make_volcano_plot_simulated(\n    simulated_DE_stats_dir,\n    project_id,\n    pval_name,\n    logFC_name,\n    num_simulated,\n    5,\n    5,\n    20,\n    15\n)", "_____no_output_____" ] ], [ [ "## Plot distribution of DE stats", "_____no_output_____" ] ], [ [ "sns.distplot(template_DE_stats[logFC_name], kde=False)", "_____no_output_____" ], [ "simulated_DE_stats_filename = os.path.join(\n    simulated_DE_stats_dir,\n    f\"DE_stats_simulated_data_{project_id}_{run}.txt\"\n)\n\nsimulated_DE_stats = pd.read_csv(\n    simulated_DE_stats_filename, \n    sep=\"\\t\", \n    header=0, \n    index_col=0\n)\n\nsns.distplot(simulated_DE_stats[logFC_name], kde=False)", "_____no_output_____" ] ], [ [ "## UMAP in gene space", "_____no_output_____" ] ], [ [ "# Get decoded simulated experiment\nsimulated_filename = os.path.join(\n    local_dir,\n    \"pseudo_experiment\",\n    f\"selected_simulated_data_{project_id}_{run}.txt\"\n)", "_____no_output_____" ], [ "normalized_compendium_data = pd.read_csv(normalized_compendium_filename, sep=\"\\t\", index_col=0, header=0)\ntemplate_data = pd.read_csv(template_filename, sep=\"\\t\", index_col=0, header=0)\nsimulated_data = pd.read_csv(simulated_filename, sep=\"\\t\", index_col=0, header=0)", "_____no_output_____" ], [ "print(template_data.shape)\ntemplate_data", "(4, 5549)\n" ], [ "sns.distplot(template_data.iloc[0:2].mean(), kde=False)\nsns.distplot(template_data.iloc[2:].mean(), kde=False)", "_____no_output_____" ], [ "sns.distplot(simulated_data.iloc[0:2].mean(), kde=False)\nsns.distplot(simulated_data.iloc[2:].mean(), kde=False)", "_____no_output_____" ], [ "# Normalize simulated_data\n# Load pickled file\nwith open(scaler_filename, \"rb\") as scaler_fh:\n    scaler = pickle.load(scaler_fh)\n\nnormalized_simulated_data = scaler.transform(simulated_data)\n\nnormalized_simulated_data = pd.DataFrame(\n    normalized_simulated_data,\n    columns=simulated_data.columns,\n    index=simulated_data.index,\n)\n\nprint(normalized_simulated_data.shape)\nnormalized_simulated_data.head()", "(4, 5549)\n" ], [ "# If template experiment included in training compendium\n# Get normalized template data\nsample_ids = list(template_data.index)\nnormalized_template_data = normalized_compendium_data.loc[sample_ids]\n\nprint(normalized_template_data.shape)\nnormalized_template_data.head()", "(4, 5549)\n" ], [ "\"\"\"# If template experiment NOT included in training compendium\nwith open(scaler_filename, \"rb\") as scaler_fh:\n    scaler = pickle.load(scaler_fh)\n\nnormalized_template_data = scaler.transform(template_data)\n\nnormalized_template_data = pd.DataFrame(\n    normalized_template_data,\n    columns=template_data.columns,\n    index=template_data.index,\n)\"\"\"", "_____no_output_____" ], [ "# Label samples \nnormalized_compendium_data['sample group'] = \"compendium\"\nnormalized_template_data['sample group'] = \"template\"\nnormalized_simulated_data['sample group'] = \"simulated\"", "_____no_output_____" ], [ "normalized_all_data = pd.concat([normalized_template_data,\n                                 normalized_simulated_data,\n                                 
normalized_compendium_data\n])", "_____no_output_____" ], [ "# Plot\n\n# Drop label column\nnormalized_all_data_numeric = normalized_all_data.drop(['sample group'], axis=1)\n\nmodel = umap.UMAP(random_state=1).fit(normalized_all_data_numeric)\n\nnormalized_all_data_UMAPencoded = model.transform(normalized_all_data_numeric)\nnormalized_all_data_UMAPencoded_df = pd.DataFrame(data=normalized_all_data_UMAPencoded,\n index=normalized_all_data.index,\n columns=['1','2'])\n\n# Add back label column\nnormalized_all_data_UMAPencoded_df['sample group'] = normalized_all_data['sample group']\n\n# Plot\nfig = pn.ggplot(normalized_all_data_UMAPencoded_df, pn.aes(x='1', y='2'))\nfig += pn.geom_point(pn.aes(color='sample group'), alpha=0.4)\nfig += pn.labs(x ='UMAP 1',\n y = 'UMAP 2',\n title = 'Gene expression data in gene space')\nfig += pn.theme_bw()\nfig += pn.theme(\n legend_title_align = \"center\",\n plot_background=pn.element_rect(fill='white'),\n legend_key=pn.element_rect(fill='white', colour='white'), \n legend_title=pn.element_text(family='sans-serif', size=15),\n legend_text=pn.element_text(family='sans-serif', size=12),\n plot_title=pn.element_text(family='sans-serif', size=15),\n axis_text=pn.element_text(family='sans-serif', size=12),\n axis_title=pn.element_text(family='sans-serif', size=15)\n )\nfig += pn.scale_color_manual(['#bdbdbd', 'red', 'blue'])\nfig += pn.guides(colour=pn.guide_legend(override_aes={'alpha': 1}))\n\nprint(fig)", "/home/alexandra/anaconda3/envs/generic_expression/lib/python3.7/site-packages/matplotlib/__init__.py:886: MatplotlibDeprecationWarning: \nexamples.directory is deprecated; in the future, examples will be found relative to the 'datapath' directory.\n \"found relative to the 'datapath' directory.\".format(key))\n" ] ], [ [ "## PCA in latent space", "_____no_output_____" ] ], [ [ "# Model files\nmodel_encoder_filename = glob.glob(os.path.join(vae_model_dir, \"*_encoder_model.h5\"))[0]\nweights_encoder_filename = glob.glob(os.path.join(vae_model_dir, \"*_encoder_weights.h5\"))[0]\nmodel_decoder_filename = glob.glob(os.path.join(vae_model_dir, \"*_decoder_model.h5\"))[0]\nweights_decoder_filename = glob.glob(os.path.join(vae_model_dir, \"*_decoder_weights.h5\"))[0]", "_____no_output_____" ], [ "# Load saved models\nloaded_model = load_model(model_encoder_filename, compile=False)\nloaded_decode_model = load_model(model_decoder_filename, compile=False)\n\nloaded_model.load_weights(weights_encoder_filename)\nloaded_decode_model.load_weights(weights_decoder_filename)", "WARNING:tensorflow:From /home/alexandra/anaconda3/envs/generic_expression/lib/python3.7/site-packages/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.\nInstructions for updating:\nIf using Keras pass *_constraint arguments to layers.\n" ], [ "# PCA model\npca = PCA(n_components=2)", "_____no_output_____" ], [ "# Encode compendium\nnormalized_compendium = normalized_compendium_data.drop(['sample group'], axis=1)\ncompendium_encoded = loaded_model.predict_on_batch(normalized_compendium)\n\ncompendium_encoded_df = pd.DataFrame(data=compendium_encoded, \n index=normalized_compendium.index)\n\n# Get and save PCA model\nmodel1 = pca.fit(compendium_encoded_df)\n\ncompendium_PCAencoded = model1.transform(compendium_encoded_df)\n\ncompendium_PCAencoded_df = pd.DataFrame(data=compendium_PCAencoded,\n index=compendium_encoded_df.index,\n 
columns=['1','2'])\n\n# Add label\ncompendium_PCAencoded_df['sample group'] = 'compendium'", "WARNING:tensorflow:From /home/alexandra/anaconda3/envs/generic_expression/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:422: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.\n\n" ], [ "# Encode template experiment\nnormalized_template_data = normalized_template_data.drop(['sample group'], axis=1)\n\ntemplate_encoded = loaded_model.predict_on_batch(normalized_template_data)\ntemplate_encoded_df = pd.DataFrame(data=template_encoded,\n                                   index=normalized_template_data.index)\n\ntemplate_PCAencoded = model1.transform(template_encoded_df)\n\ntemplate_PCAencoded_df = pd.DataFrame(data=template_PCAencoded,\n                                      index=template_encoded_df.index,\n                                      columns=['1','2'])\n\n# Add back label column\ntemplate_PCAencoded_df['sample group'] = 'template'", "_____no_output_____" ], [ "# Use stored encoded simulated data\n# Note: We cannot encode the decoded simulated experiment since we are not using tied weights\n# Re-encoding the decoded simulated experiment would not yield a linear latent space shift\nencoded_simulated_filename = os.path.join(\n    local_dir,\n    \"pseudo_experiment\",\n    f\"selected_simulated_encoded_data_{project_id}_{run}.txt\"\n)\n\nsimulated_encoded_df = pd.read_csv(encoded_simulated_filename, header=0, sep='\\t', index_col=0)\n\nsample_ids = list(template_data.index)\nsimulated_encoded_df = simulated_encoded_df.loc[sample_ids]\n\nsimulated_PCAencoded = model1.transform(simulated_encoded_df)\n\nsimulated_PCAencoded_df = pd.DataFrame(data=simulated_PCAencoded,\n                                       index=simulated_encoded_df.index,\n                                       columns=['1','2'])\n\n# Add back label column\nsimulated_PCAencoded_df['sample group'] = 'simulated'", "_____no_output_____" ], [ "# Concatenate dataframes\ncombined_PCAencoded_df = pd.concat([compendium_PCAencoded_df, \n                                    template_PCAencoded_df,\n                                    simulated_PCAencoded_df])\n\nprint(combined_PCAencoded_df.shape)\ncombined_PCAencoded_df.head()", "(958, 3)\n" ], [ "# Plot\nfig1 = pn.ggplot(combined_PCAencoded_df, pn.aes(x='1', y='2'))\nfig1 += pn.geom_point(pn.aes(color='sample group'), alpha=0.4)\nfig1 += pn.labs(x ='PC 1',\n                y = 'PC 2',\n                title = 'Gene expression data in latent space')\nfig1 += pn.theme_bw()\nfig1 += pn.theme(\n    legend_title_align = \"center\",\n    plot_background=pn.element_rect(fill='white'),\n    legend_key=pn.element_rect(fill='white', colour='white'), \n    legend_title=pn.element_text(family='sans-serif', size=15),\n    legend_text=pn.element_text(family='sans-serif', size=12),\n    plot_title=pn.element_text(family='sans-serif', size=15),\n    axis_text=pn.element_text(family='sans-serif', size=12),\n    axis_title=pn.element_text(family='sans-serif', size=15)\n    )\nfig1 += pn.scale_color_manual(['#bdbdbd', 'red', 'blue'])\nfig1 += pn.guides(colour=pn.guide_legend(override_aes={'alpha': 1}))\n\nprint(fig1)", "/home/alexandra/anaconda3/envs/generic_expression/lib/python3.7/site-packages/matplotlib/__init__.py:886: MatplotlibDeprecationWarning: \nexamples.directory is deprecated; in the future, examples will be found relative to the 'datapath' directory.\n  \"found relative to the 'datapath' directory.\".format(key))\n" ] ], [ [ "## UMAP of latent space", "_____no_output_____" ] ], [ [ "# Fit UMAP model on the encoded compendium\nmodel2 = umap.UMAP(random_state=1).fit(compendium_encoded_df)\n\ncompendium_UMAPencoded = model2.transform(compendium_encoded_df)\n\ncompendium_UMAPencoded_df = pd.DataFrame(data=compendium_UMAPencoded,\n                                         index=compendium_encoded_df.index,\n                                         
columns=['1','2'])\n\n# Add label\ncompendium_UMAPencoded_df['sample group'] = 'compendium'", "_____no_output_____" ], [ "template_UMAPencoded = model2.transform(template_encoded_df)\n\ntemplate_UMAPencoded_df = pd.DataFrame(data=template_UMAPencoded,\n index=template_encoded_df.index,\n columns=['1','2'])\n\n# Add back label column\ntemplate_UMAPencoded_df['sample group'] = 'template'", "_____no_output_____" ], [ "simulated_UMAPencoded = model2.transform(simulated_encoded_df)\n\nsimulated_UMAPencoded_df = pd.DataFrame(data=simulated_UMAPencoded,\n index=simulated_encoded_df.index,\n columns=['1','2'])\n\n# Add back label column\nsimulated_UMAPencoded_df['sample group'] = 'simulated'", "_____no_output_____" ], [ "# Concatenate dataframes\ncombined_UMAPencoded_df = pd.concat([compendium_UMAPencoded_df, \n template_UMAPencoded_df,\n simulated_UMAPencoded_df])\n\nprint(combined_UMAPencoded_df.shape)\ncombined_UMAPencoded_df.head()", "(958, 3)\n" ], [ "# Plot\nfig2 = pn.ggplot(combined_UMAPencoded_df, pn.aes(x='1', y='2'))\nfig2 += pn.geom_point(pn.aes(color='sample group'), alpha=0.4)\nfig2 += pn.labs(x ='UMAP 1',\n y = 'UMAP 2',\n title = 'Gene expression data in latent space')\nfig2 += pn.theme_bw()\nfig2 += pn.theme(\n legend_title_align = \"center\",\n plot_background=pn.element_rect(fill='white'),\n legend_key=pn.element_rect(fill='white', colour='white'), \n legend_title=pn.element_text(family='sans-serif', size=15),\n legend_text=pn.element_text(family='sans-serif', size=12),\n plot_title=pn.element_text(family='sans-serif', size=15),\n axis_text=pn.element_text(family='sans-serif', size=12),\n axis_title=pn.element_text(family='sans-serif', size=15)\n )\nfig2 += pn.scale_color_manual(['#bdbdbd', 'red', 'blue'])\nfig2 += pn.guides(colour=pn.guide_legend(override_aes={'alpha': 1}))\n\nprint(fig2)", "/home/alexandra/anaconda3/envs/generic_expression/lib/python3.7/site-packages/matplotlib/__init__.py:886: MatplotlibDeprecationWarning: \nexamples.directory is deprecated; in the future, examples will be found relative to the 'datapath' directory.\n \"found relative to the 'datapath' directory.\".format(key))\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
4a6209608993f0c223c885993ae3a80001f24d5e
29,250
ipynb
Jupyter Notebook
tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb
wmlzhx926/tensorflow
d309d1a2f3de12f16afa6d1b33031814428c8b70
[ "Apache-2.0" ]
4
2020-06-28T08:25:36.000Z
2021-08-12T12:41:34.000Z
tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb
wmlzhx926/tensorflow
d309d1a2f3de12f16afa6d1b33031814428c8b70
[ "Apache-2.0" ]
5
2020-07-17T17:36:44.000Z
2020-08-05T20:18:02.000Z
tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb
wmlzhx926/tensorflow
d309d1a2f3de12f16afa6d1b33031814428c8b70
[ "Apache-2.0" ]
4
2019-11-28T12:18:07.000Z
2021-08-01T16:12:17.000Z
33.352338
572
0.554667
[ [ [ "##### Copyright 2019 The TensorFlow Authors.", "_____no_output_____" ] ], [ [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "_____no_output_____" ] ], [ [ "# Text classification with TensorFlow Lite Model Maker", "_____no_output_____" ], [ "<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/lite/tutorials/model_maker_text_classification\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>", "_____no_output_____" ], [ "The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow neural-network model to particular input data when deploying this model for on-device ML applications.\n\nThis notebook shows an end-to-end example that utilizes this Model Maker library to illustrate the adaption and conversion of a commonly-used text classification model to classify movie reviews on a mobile device.", "_____no_output_____" ], [ "## Prerequisites\n\nTo run this example, we first need to install several required packages, including Model Maker package that in github [repo](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).", "_____no_output_____" ] ], [ [ "!pip install git+https://github.com/tensorflow/examples.git#egg=tensorflow-examples[model_maker]", "_____no_output_____" ] ], [ [ "Import the required packages.", "_____no_output_____" ] ], [ [ "import numpy as np\nimport os\n\nimport tensorflow as tf\nassert tf.__version__.startswith('2')\n\nfrom tensorflow_examples.lite.model_maker.core.data_util.text_dataloader import TextClassifierDataLoader\nfrom tensorflow_examples.lite.model_maker.core.task.model_spec import AverageWordVecModelSpec\nfrom tensorflow_examples.lite.model_maker.core.task.model_spec import BertClassifierModelSpec\nfrom tensorflow_examples.lite.model_maker.core.task import text_classifier", "_____no_output_____" ] ], [ [ "## Simple End-to-End Example", "_____no_output_____" ], [ "### Get the data path\nLet's get some texts to play with this simple end-to-end example.", "_____no_output_____" ] ], [ [ "data_path = tf.keras.utils.get_file(\n fname='aclImdb',\n 
origin='http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz',\n      untar=True)", "_____no_output_____" ] ], [ [ " You could replace it with your own text folders. To upload data to Colab, you can find the upload button in the left sidebar, shown in the image below with the red rectangle. Just try uploading a zip file and unzipping it. The root file path is the current path.\n\n<img src=\"https://storage.googleapis.com/download.tensorflow.org/models/tflite/screenshots/model_maker_text_classification.png\" alt=\"Upload File\" width=\"800\" hspace=\"100\">\n", "_____no_output_____" ], [ "If you prefer not to upload your data to the cloud, you could try to run the library locally following the [guide](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker) on GitHub.", "_____no_output_____" ], [ "### Run the example\n\nThe example just consists of 6 lines of code as shown below, representing 5 steps of the overall process.", "_____no_output_____" ], [ "Step 0. Choose a `model_spec` that represents a model for the text classifier.", "_____no_output_____" ] ], [ [ "model_spec = AverageWordVecModelSpec()", "_____no_output_____" ] ], [ [ "Step 1. Load train and test data specific to an on-device ML app and preprocess the data according to the specific `model_spec`.", "_____no_output_____" ] ], [ [ "train_data = TextClassifierDataLoader.from_folder(os.path.join(data_path, 'train'), model_spec=model_spec, class_labels=['pos', 'neg'])\ntest_data = TextClassifierDataLoader.from_folder(os.path.join(data_path, 'test'), model_spec=model_spec, is_training=False, shuffle=False)", "_____no_output_____" ] ], [ [ "Step 2. Customize the TensorFlow model.", "_____no_output_____" ] ], [ [ "model = text_classifier.create(train_data, model_spec=model_spec)", "_____no_output_____" ] ], [ [ "Step 3. Evaluate the model.", "_____no_output_____" ] ], [ [ "loss, acc = model.evaluate(test_data)", "_____no_output_____" ] ], [ [ "Step 4. Export to a TensorFlow Lite model.\nYou can download it from the left sidebar, the same way as in the uploading part, for your own use.", "_____no_output_____" ] ], [ [ "model.export(export_dir='.')", "_____no_output_____" ] ], [ [ "After these 5 simple steps, we can use the TensorFlow Lite model file and label file in on-device applications, like in the [text classification](https://github.com/tensorflow/examples/tree/master/lite/examples/text_classification) reference app.", "_____no_output_____" ], [ "## Detailed Process\n\nIn the above, we tried the simple end-to-end example. The following walks through the example step by step to show more detail.", "_____no_output_____" ], [ "### Step 0: Choose a model_spec that represents a model for the text classifier.\n\nEach `model_spec` object represents a specific model for the text classifier. Currently, we support the averaging word embedding model and the BERT-base model.", "_____no_output_____" ] ], [ [ "model_spec = AverageWordVecModelSpec()", "_____no_output_____" ] ], [ [ "### Step 1: Load Input Data Specific to an On-device ML App\n\nThe IMDB dataset contains 25000 movie reviews for training and 25000 movie reviews for testing from the [Internet Movie Database](https://www.imdb.com/). 
The dataset has two classes: positive and negative movie reviews.\n\nDownload the archive version of the dataset and untar it.\n\nThe IMDB dataset has the following directory structure:\n\n<pre>\n<b>aclImdb</b>\n|__ <b>train</b>\n      |______ <b>pos</b>: [1962_10.txt, 2499_10.txt, ...]\n      |______ <b>neg</b>: [104_3.txt, 109_2.txt, ...]\n      |______ unsup: [12099_0.txt, 1424_0.txt, ...]\n|__ <b>test</b>\n      |______ <b>pos</b>: [1384_9.txt, 191_9.txt, ...]\n      |______ <b>neg</b>: [1629_1.txt, 21_1.txt]\n\n</pre>\n\nNote that the text data under the `train/unsup` folder are unlabeled documents for unsupervised learning, and such data should be ignored in this tutorial.\n", "_____no_output_____" ] ], [ [ "data_path = tf.keras.utils.get_file(\n      fname='aclImdb',\n      origin='http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz',\n      untar=True)", "_____no_output_____" ] ], [ [ "Use `TextClassifierDataLoader` to load data.\n\nThe `from_folder()` method loads data from a folder. It assumes that the text data of the same class are in the same subdirectory and that the subfolder name is the class name. Each text file contains one movie review sample.\n\nThe `class_labels` parameter specifies which subfolders should be considered. For the `train` folder, this parameter is used to skip the `unsup` subfolder.\n", "_____no_output_____" ] ], [ [ "train_data = TextClassifierDataLoader.from_folder(os.path.join(data_path, 'train'), model_spec=model_spec, class_labels=['pos', 'neg'])\ntest_data = TextClassifierDataLoader.from_folder(os.path.join(data_path, 'test'), model_spec=model_spec, is_training=False, shuffle=False)\ntrain_data, validation_data = train_data.split(0.9)", "_____no_output_____" ] ], [ [ "### Step 2: Customize the TensorFlow Model\n\nCreate a custom text classifier model based on the loaded data. Currently, we support the averaging word embedding model and the BERT-base model.", "_____no_output_____" ] ], [ [ "model = text_classifier.create(train_data, model_spec=model_spec, validation_data=validation_data)", "_____no_output_____" ] ], [ [ "Have a look at the detailed model structure.", "_____no_output_____" ] ], [ [ "model.summary()", "_____no_output_____" ] ], [ [ "### Step 3: Evaluate the Customized Model\n\nEvaluate the result of the model and get the loss and accuracy of the model.\n\nEvaluate the loss and accuracy on `test_data`. If no data is given, the results are evaluated on the data that's split off in the `create` method.", "_____no_output_____" ] ], [ [ "loss, acc = model.evaluate(test_data)", "_____no_output_____" ] ], [ [ "### Step 4: Export to TensorFlow Lite Model\n\nConvert the existing model to TensorFlow Lite model format that can later be used in an on-device ML application. Meanwhile, save the text labels in a label file and the vocabulary in a vocab file. The default TFLite filename is `model.tflite`, the default label filename is `label.txt`, and the default vocab filename is `vocab`.", "_____no_output_____" ] ], [ [ "model.export(export_dir='.')", "_____no_output_____" ] ], [ [ "The TensorFlow Lite model file and label file could be used in the [text classification](https://github.com/tensorflow/examples/tree/master/lite/examples/text_classification) reference app.\n\nIn detail, we could add `movie_review_classifier.tflite`, `text_label.txt` and `vocab.txt` to the [assets directory](https://github.com/tensorflow/examples/tree/master/lite/examples/text_classification/android/app/src/main/assets) folder. 
Meanwhile, change the filenames in the [code](https://github.com/tensorflow/examples/blob/master/lite/examples/text_classification/android/app/src/main/java/org/tensorflow/lite/examples/textclassification/TextClassificationClient.java#L43). ", "_____no_output_____" ], [ "Here, we also demonstrate how to use the above files to run and evaluate the TensorFlow Lite model.", "_____no_output_____" ] ], [ [ "# Read TensorFlow Lite model from TensorFlow Lite file.\nwith tf.io.gfile.GFile('model.tflite', 'rb') as f:\n  model_content = f.read()\n\n# Read label names from label file.\nwith tf.io.gfile.GFile('labels.txt', 'r') as f:\n  label_names = f.read().split('\\n')\n\n# Initialize the TensorFlow Lite interpreter.\ninterpreter = tf.lite.Interpreter(model_content=model_content)\ninterpreter.allocate_tensors()\ninput_index = interpreter.get_input_details()[0]['index']\noutput = interpreter.tensor(interpreter.get_output_details()[0][\"index\"])\n\n# Run predictions on each test sample and calculate accuracy.\naccurate_count = 0\nfor text, label in test_data.dataset:\n  # Add batch dimension and convert to float32 to match with the model's input\n  # data format.\n  text = tf.expand_dims(text, 0)\n\n  # Run inference.\n  interpreter.set_tensor(input_index, text)\n  interpreter.invoke()\n\n  # Post-processing: remove batch dimension and find the label with highest\n  # probability.\n  predict_label = np.argmax(output()[0])\n  # Get label name with label index.\n  predict_label_name = label_names[predict_label]\n  accurate_count += (predict_label == label.numpy())\n\naccuracy = accurate_count * 1.0 / test_data.size\nprint('TensorFlow Lite model accuracy = %.4f' % accuracy)", "_____no_output_____" ] ], [ [ "Note that preprocessing for inference should be the same as for training. Currently, preprocessing consists of splitting the text into tokens by '\\W', encoding the tokens to ids, then padding the text with `pad_id` to the length `seq_len`.", "_____no_output_____" ], [ "## Advanced Usage\n\nThe `create` function is the critical part of this library, in which the parameter `model_spec` defines the specification of the model; currently `AverageWordVecModelSpec` and `BertClassifierModelSpec` are supported. The `create` function contains the following steps for `AverageWordVecModelSpec`:\n\n1. Tokenize the text and select the top `num_words` most frequent words to generate the vocabulary. The default value of `num_words` in the `AverageWordVecModelSpec` object is `10000`.\n2. Encode the text string tokens to int ids.\n3. Create the text classifier model. Currently, this library supports one model: average the word embeddings of the text with RELU activation, then leverage a softmax dense layer for classification. As for the [Embedding layer](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding), the input dimension is the size of the vocabulary, the output dimension is the `AverageWordVecModelSpec` object's variable `wordvec_dim` whose default value is `16`, and the input length is the `AverageWordVecModelSpec` object's variable `seq_len` whose default value is `256`.\n4. Train the classifier model. 
The default number of epochs is `2` and the default batch size is `32`.\n\nIn this section, we describe several advanced topics, including adjusting the model, changing the training hyperparameters, etc.\n", "_____no_output_____" ], [ "### Adjust the model\n\nWe can adjust the model architecture via variables like `wordvec_dim` and `seq_len` in the `AverageWordVecModelSpec` class.\n", "_____no_output_____" ], [ "* `wordvec_dim`: Dimension of the word embedding.\n* `seq_len`: Length of the sequence.\n\nFor example, we could train with a larger `wordvec_dim`. If we change the model, we need to construct a new `model_spec` first.", "_____no_output_____" ] ], [ [ "new_model_spec = AverageWordVecModelSpec(wordvec_dim=32)", "_____no_output_____" ] ], [ [ "Secondly, we should get the preprocessed data accordingly.", "_____no_output_____" ] ], [ [ "new_train_data = TextClassifierDataLoader.from_folder(os.path.join(data_path, 'train'), model_spec=new_model_spec, class_labels=['pos', 'neg'])\nnew_train_data, new_validation_data = new_train_data.split(0.9)", "_____no_output_____" ] ], [ [ "Finally, we could train the new model.", "_____no_output_____" ] ], [ [ "model = text_classifier.create(new_train_data, model_spec=new_model_spec, validation_data=new_validation_data)", "_____no_output_____" ] ], [ [ "### Change the training hyperparameters\nWe could also change training hyperparameters like `epochs` and `batch_size` that affect the model accuracy. For instance,\n\n* `epochs`: more epochs could achieve better accuracy, but may lead to overfitting.\n* `batch_size`: number of samples to use in one training step.\n\nFor example, we could train with more epochs.", "_____no_output_____" ] ], [ [ "model = text_classifier.create(train_data, model_spec=model_spec, validation_data=validation_data, epochs=5)", "_____no_output_____" ] ], [ [ "Evaluate the newly retrained model with 5 training epochs.", "_____no_output_____" ] ], [ [ "loss, accuracy = model.evaluate(test_data)", "_____no_output_____" ] ], [ [ "### Change the Model\n\nWe could change the model by changing the `model_spec`. The following shows how we change to the BERT-base model.\n\nFirst, we change `model_spec` to `BertClassifierModelSpec`.", "_____no_output_____" ] ], [ [ "model_spec = BertClassifierModelSpec()", "_____no_output_____" ] ], [ [ "The remaining steps remain the same.\n\nLoad data and preprocess the data according to `model_spec`.", "_____no_output_____" ] ], [ [ "train_data = TextClassifierDataLoader.from_folder(os.path.join(data_path, 'train'), model_spec=model_spec, class_labels=['pos', 'neg'])\ntest_data = TextClassifierDataLoader.from_folder(os.path.join(data_path, 'test'), model_spec=model_spec, is_training=False, shuffle=False)", "_____no_output_____" ] ], [ [ "Then retrain the model. Note that it could take a long time to retrain the BERT model. We just set `epochs` equal to 1 to demonstrate it.", "_____no_output_____" ] ], [ [ "model = text_classifier.create(train_data, model_spec=model_spec, epochs=1)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
4a6223f8c885e53cae1f7a5e7c3447fee6671c19
44,894
ipynb
Jupyter Notebook
test_matplotlib.ipynb
Drixitel/astr-119-session-5
1ee3c6cfb6d5deeeebeefc46fa5535486473f94a
[ "MIT" ]
null
null
null
test_matplotlib.ipynb
Drixitel/astr-119-session-5
1ee3c6cfb6d5deeeebeefc46fa5535486473f94a
[ "MIT" ]
null
null
null
test_matplotlib.ipynb
Drixitel/astr-119-session-5
1ee3c6cfb6d5deeeebeefc46fa5535486473f94a
[ "MIT" ]
null
null
null
227.888325
24,852
0.922573
[ [ [ "# Using % inline and imports ", "_____no_output_____" ] ], [ [ "%matplotlib inline \n#note the plt.show needs the %mat inline, but since the image is being saved \n#you no longer need the plt.show \nimport numpy as np #when we plot it will show us in the next line of code\nimport matplotlib.pyplot as plt", "_____no_output_____" ] ], [ [ "## Plot a function \n* create an array \n* create another array that passes the values of the first \n* plot those arrays \n* define labels and legend etc. through matplotlib tech. ", "_____no_output_____" ], [ "## Question: \n* how does this plt. know to pass a line? \n* we have an array of values \n* we pass it through anther math function that transforms the values\n* would it not plot points vs. a curve? \n", "_____no_output_____" ] ], [ [ "x = np.arange(0,5,0.1) #exclusive (start,end,increment)\ny = np.sin(x) #exclusive? \nz = np.cos(x)\n\nplt.plot(x,y, label = 'sin(x)')\nplt.plot(x,z, label = 'cos(x)')\n\nplt.xlabel('x')\nplt.ylabel('y = sin(x), cos(x)')\n\nplt.legend()\n#plt.show()\n\n\nplt.savefig('sinx_cosx.png',bbox_inches='tight',dpi=600)\n#bbox bounding type is little lightspace\n#this saves to a file \n\n", "_____no_output_____" ] ], [ [ "### Exclusive and Inclusive \n* this is a mathematical describtion \n> * (0,5) is exclusive \n> * [0,5] is inclusive ", "_____no_output_____" ], [ "## Another way to write the plot above ", "_____no_output_____" ] ], [ [ "x = np.arange(0,5,0.1) # start, end, step [0,5) not inclusive \nprint(x)", "[0. 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1. 1.1 1.2 1.3 1.4 1.5 1.6 1.7\n 1.8 1.9 2. 2.1 2.2 2.3 2.4 2.5 2.6 2.7 2.8 2.9 3. 3.1 3.2 3.3 3.4 3.5\n 3.6 3.7 3.8 3.9 4. 4.1 4.2 4.3 4.4 4.5 4.6 4.7 4.8 4.9]\n" ], [ "x = np.arange(0,5,0.1) #array of values for x-axis\n\n\n\ny = np.sin(x) #function that takes x values \n\nplt.plot(x,y) #plot the vector y(x)=<x,sinx> \n\nplt.xlabel('x')\nplt.ylabel('sin(x)')\n\nplt.show()\n\n", "_____no_output_____" ], [ "x = np.arange(0,5,0.1) \n\ny = np.sin(x) \n\nplt.plot(x,y) \n\nplt.xlabel('x')\nplt.ylabel('sin(x)')\n\n#plt.savefig('sinx.png', bbox_inches='tight', dpi=300)\n#used to save files \n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ] ]
4a622a8846d3f015016ea5a54321f43a83e014ff
105,709
ipynb
Jupyter Notebook
body_brain.ipynb
matthew-brett/bio145
b160e2ee649311a64828a29e092f417063599e07
[ "CC-BY-4.0" ]
null
null
null
body_brain.ipynb
matthew-brett/bio145
b160e2ee649311a64828a29e092f417063599e07
[ "CC-BY-4.0" ]
null
null
null
body_brain.ipynb
matthew-brett/bio145
b160e2ee649311a64828a29e092f417063599e07
[ "CC-BY-4.0" ]
null
null
null
123.78103
17,368
0.88387
[ [ [ "# Regression, body and brain\n\n## About this page\n\nThis is a Jupyter Notebook. It can be run as an interactive demo, or you can\nread it as a web page.\n\nYou don't need to understand the code on this page, the text will tell you\nwhat the code is doing.\n\nYou can also [run this demo\ninteractively](https://mybinder.org/v2/gh/matthew-brett/bio145/master?filepath=on_correlation.ipynb).\n\n## The example problem\n\nWe are going to do regression of body weights and brain weights of some animals, and then look at the correlation.", "_____no_output_____" ], [ "## Some setup\n\nWe first need to get our environment set up to run the code and plots we\nneed.", "_____no_output_____" ] ], [ [ "# Code to get set up. If you are running interactively, you can execute\n# this cell by pressing Shift and Enter at the same time.\n# Libraries for arrays and plotting\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n# Make plots look a little bit more fancy\nplt.style.use('fivethirtyeight')\n# Import library for statistical routines\nimport scipy.stats as sps\n# Print array numbers to 4 digits of precisiono\nnp.set_printoptions(precision=4, suppress=True)", "_____no_output_____" ] ], [ [ "## Starting with a line\n\nHere are the body weights (in kilograms) from the 8 animals:", "_____no_output_____" ] ], [ [ "body_weight = np.array([3.3, 465, 27.7, 521, 192, 2.5, 0.3, 55.5])", "_____no_output_____" ] ], [ [ "These are the corresponding brain weights (in grams):", "_____no_output_____" ] ], [ [ "brain_weight = np.array([25.6, 423, 115, 655, 180, 12.1, 1.9, 175])", "_____no_output_____" ] ], [ [ "We believe that there is some relationship between `body_weight` and `brain_weight`.\nPlotting them together we get:", "_____no_output_____" ] ], [ [ "plt.plot(body_weight, brain_weight, '+')\nplt.xlabel('Body')\nplt.ylabel('Brain');", "_____no_output_____" ] ], [ [ "It looks like there may be some sort of straight line relationship. We could\ntry to find a good line to fit the data. Here I will do some magic to work\nout a good line.", "_____no_output_____" ] ], [ [ "slope, intercept, r, p, se = sps.linregress(body_weight, brain_weight)\nprint(f'Slope: {slope:.4f}')\nprint(f'Intercept: {intercept:.4f}')", "Slope: 1.0195\nIntercept: 36.9484\n" ] ], [ [ "We also got the correlation *r* value from this calculation. Here it is, for\nfuture reference. We will come back to this later:", "_____no_output_____" ] ], [ [ "# Correlation \"r\" value\nprint(f'Correlation r: {r:.4f}')", "Correlation r: 0.9594\n" ] ], [ [ "This is the squared *r* value ($r^2$):", "_____no_output_____" ] ], [ [ "r ** 2", "_____no_output_____" ] ], [ [ "Here is the line drawn on the plot of the data:", "_____no_output_____" ] ], [ [ "# Plot data with the prediction\nplt.plot(body_weight, brain_weight, 'k+')\nmx = max(body_weight)\nx_vals = [0, mx]\ny_vals = [intercept, intercept + slope * mx]\nplt.plot(x_vals, y_vals, 'b')\nplt.xlabel('Body')\nplt.ylabel('Brain')\nplt.title('Body vs Brain with nice line');", "_____no_output_____" ] ], [ [ "## How do we chose a good line?", "_____no_output_____" ], [ "The line gives a *prediction* of what `brain_weight` should be, for any value\nof `body_weight`. 
If we have some value `x` for `body_weight`, then we can\npredict the value `y` of `brain_weight`, with `y = intercept + slope * x`.\n\nFor example, here are the first values for `body_weight` and `brain_weight`:", "_____no_output_____" ] ], [ [ "print(f'First body_weight value {body_weight[0]}')\nprint(f'First brain_weight value {brain_weight[0]}')", "First body_weight value 3.3\nFirst brain_weight value 25.6\n" ] ], [ [ "The second value is the *actual* value of `brain_weight`. The *predicted*\nvalue of `brain_weight`, for this value of `body_weight` is:", "_____no_output_____" ] ], [ [ "predicted = intercept + slope * body_weight[0]\npredicted", "_____no_output_____" ] ], [ [ "The *error* for our line, is the difference between the actual and predicted\nvalue.", "_____no_output_____" ] ], [ [ "actual = brain_weight[0]\nerror = actual - predicted\nerror", "_____no_output_____" ] ], [ [ "This is the error for the first value. We can get the errors for all the\nvalues in the same way.", "_____no_output_____" ], [ "This is the calculation of error for all 8 values. As usual, you don't need\nto understand the code in detail:", "_____no_output_____" ] ], [ [ "all_predicted = intercept + body_weight * slope\nall_errors = brain_weight - all_predicted\nall_errors", "_____no_output_____" ] ], [ [ "Notice the first value for `all_errors` is the same as the value for `error`\nwe saw above.\n\nThe errors here are the distances between the prediction line and the points\non the plot. Here I show the errors as red lines. Don't worry about the code\nbelow, it's not important to the idea.", "_____no_output_____" ] ], [ [ "# Plot data with the prediction and errors\nplt.plot(body_weight, brain_weight, 'k+', ms=15)\nmx = max(body_weight)\nx_vals = [0, mx]\ny_vals = [intercept, intercept + slope * mx]\nplt.plot(x_vals, y_vals, 'b')\n# Draw the error lines\nfor i in range(len(body_weight)):\n    x_vals = [body_weight[i], body_weight[i]]\n    y_vals = [all_predicted[i], brain_weight[i]]\n    plt.plot(x_vals, y_vals, 'r')\nplt.xlabel('Body weight')\nplt.ylabel('Brain weight')\nplt.title('body_weight vs brain_weight, and errors');", "_____no_output_____" ] ], [ [ "A good line will make the errors as small as possible. Therefore, a good line\nwill make the lengths of the red lines as short as possible.\n\nWe need to generate a single number, from the errors, that gives an overall\nmeasure of the size of the errors.\n\nWe cannot just add up the errors, because the negative and positive errors\nwill cancel out. Even if the errors are a mixture of large positive and large\nnegative, the sum could be very small.\n\nThe usual thing to do, is to square all the errors, to make sure they are all\npositive. Then we add all the squared errors. This gives the *sum of squared\nerror* or SSE.", "_____no_output_____" ] ], [ [ "# A reminder of the errors we calculated above\nall_errors", "_____no_output_____" ], [ "# Square all the errors\nsquared_errors = all_errors ** 2\nsquared_errors", "_____no_output_____" ], [ "# Calculate the sum of the squared errors\nSSE = sum(squared_errors)\nSSE", "_____no_output_____" ] ], [ [ "The line is a good one when SSE is small. In fact, the usual \"best fit\" line\nchosen by packages such as Excel, is the line that gives the lowest SSE value,\nof all possible lines.\n\nIt is the line that minimizes the squared error, often called the *least squares* line.\n\nThis is the line that I found by sort-of magic, above. If you like, try other\nslopes and intercepts. 
You will find that they always have a higher SSE value\nthan the slope and intercept I have used here.", "_____no_output_____" ], [ "## Regression and correlation\n\nAbove, you have seen regression, using the *least squares* line.\n\nCorrelation is a version of the same thing, but where we have *standardized*\nthe data.\n\nWe standardize data by subtracting the mean, and dividing by the standard\ndeviation.\n\nWe do this, to put the x and y values onto the same scale.\n\nFor example, here is a histogram of the `body_weight` values, to give you an idea\nof their position and spread.", "_____no_output_____" ] ], [ [ "plt.hist(body_weight)\nplt.title(\"Body weight values\");", "_____no_output_____" ] ], [ [ "In correlation, we are interested to know whether the *variation* in the (e.g)\n`body_weight` values, predicts the variation in the (e.g) `brain_weight` values.\nVariation, is variation around the mean. To show variation, we subtract the\nmean. We refer to the values, with the mean subtracted, as *mean centered*.", "_____no_output_____" ] ], [ [ "centered_x = body_weight - body_weight.mean()\nplt.hist(centered_x)\nplt.title('Mean-centered body weight values');", "_____no_output_____" ] ], [ [ "Finally, the values for the spread either side of zero depends on the units of\nthe measurement. We measure the spread, with standard deviation:", "_____no_output_____" ] ], [ [ "std_x = np.std(centered_x)\nstd_x", "_____no_output_____" ] ], [ [ "We would like to re-express our data to have a standard spread, that is\ncomparable for the `x` / `body_weight` values and the `y` / `brain_weight` values.\nFor example, we might like to ensure the data have a standard deviation of 1.\nTo do this, we divide the centered values by the standard deviation.", "_____no_output_____" ] ], [ [ "standard_x = centered_x / std_x\nplt.hist(standard_x)\nplt.title('Standardized body weight values');", "_____no_output_____" ] ], [ [ "You will see below, that the mean of these values is now 0, and the standard deviation is 1.", "_____no_output_____" ] ], [ [ "print(f'Mean of standard x: {np.mean(standard_x):.4f}')\nprint(f'Standard deviation: {np.std(standard_x):.4f}')", "Mean of standard x: 0.0000\nStandard deviation: 1.0000\n" ] ], [ [ "Our `body_weight` values are now *standardized*.\n\nWe do the same for our `y` / `brain_weight` values:", "_____no_output_____" ] ], [ [ "# Standarize the y / brain_weight values\ncentered_y = brain_weight - brain_weight.mean()\nstandard_y = centered_y / np.std(centered_y)\nprint(f'Mean of standard y: {np.mean(standard_y):.4f}')\nprint(f'Standard deviation: {np.std(standard_y):.4f}')", "Mean of standard y: 0.0000\nStandard deviation: 1.0000\n" ] ], [ [ "The correlation value *r* is just the slope of the regression line relating\nour standardized `x` / `body_weight` and standardized `y` / `brain_weight`:", "_____no_output_____" ] ], [ [ "std_slope, std_intercept, std_r, p, se = sps.linregress(standard_x, standard_y)\nprint(f'Standardized slope (=correlation r): {std_slope:.4f}')\nprint(f'Standardized intercept: {std_intercept:.4f}')", "Standardized slope (=correlation r): 0.9594\nStandardized intercept: 0.0000\n" ] ], [ [ "It turns out that, when we standardize the x and y values, as we did here, the\n*intercept* for the least-squares line must be zero, for mathematical reasons\nthat are not important for our current purpose.\n\nNotice that the slope above is the same as the `r` value for the original\nregression line:", "_____no_output_____" ] ], [ [ "print(f'Standardized slope: 
{std_slope:.4f}')\nprint(f'Original r for regression: {r:.4f}')", "Standardized slope: 0.9594\nOriginal r for regression: 0.9594\n" ] ], [ [ "Here is the plot of standardized `body_weight` against standardized `brain_weight`,\nwith the least-squares line:", "_____no_output_____" ] ], [ [ "# Plot standard data with the prediction\nplt.plot(standard_x, standard_y, '+')\nmx = max(standard_x)\nmn = min(standard_x)\nx_vals = [mn, mx]\ny_vals = [std_intercept + std_slope * mn, std_intercept + std_slope * mx]\nplt.plot(x_vals, y_vals, 'b')\nplt.title('Standardized body weight against standardized brain weight');", "_____no_output_____" ] ], [ [ "Notice that the plot has the point (0, 0) at its center, and that the line\ngoes through the (0, 0) point. The slope of the line, is the correlation\nvalue *r*.\n\nIt turns out that, if we do this standardization procedure, the slope of the\nline can only vary between 1 (where the standardized `x` values are the same as\nthe standardized `y` values) and -1 (where the standardized `x` values are the\nexact negative of the standardized `y` values).", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]