hexsha (stringlengths 40-40) | size (int64 6-14.9M) | ext (stringclasses 1 value) | lang (stringclasses 1 value) | max_stars_repo_path (stringlengths 6-260) | max_stars_repo_name (stringlengths 6-119) | max_stars_repo_head_hexsha (stringlengths 40-41) | max_stars_repo_licenses (list) | max_stars_count (int64 1-191k ⌀) | max_stars_repo_stars_event_min_datetime (stringlengths 24-24 ⌀) | max_stars_repo_stars_event_max_datetime (stringlengths 24-24 ⌀) | max_issues_repo_path (stringlengths 6-260) | max_issues_repo_name (stringlengths 6-119) | max_issues_repo_head_hexsha (stringlengths 40-41) | max_issues_repo_licenses (list) | max_issues_count (int64 1-67k ⌀) | max_issues_repo_issues_event_min_datetime (stringlengths 24-24 ⌀) | max_issues_repo_issues_event_max_datetime (stringlengths 24-24 ⌀) | max_forks_repo_path (stringlengths 6-260) | max_forks_repo_name (stringlengths 6-119) | max_forks_repo_head_hexsha (stringlengths 40-41) | max_forks_repo_licenses (list) | max_forks_count (int64 1-105k ⌀) | max_forks_repo_forks_event_min_datetime (stringlengths 24-24 ⌀) | max_forks_repo_forks_event_max_datetime (stringlengths 24-24 ⌀) | avg_line_length (float64 2-1.04M) | max_line_length (int64 2-11.2M) | alphanum_fraction (float64 0-1) | cells (list) | cell_types (list) | cell_type_groups (list) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
d0edeb0b5de3a9a436290075f61cb5687508fd24 | 77,437 | ipynb | Jupyter Notebook | Notebooks/ETL Pipeline Preparation.ipynb | leandrominer85/Disaster-Response-Pipelines | d800bb493f805180965c34852676d5cded755020 | ["OLDAP-2.4"] | null | null | null | Notebooks/ETL Pipeline Preparation.ipynb | leandrominer85/Disaster-Response-Pipelines | d800bb493f805180965c34852676d5cded755020 | ["OLDAP-2.4"] | null | null | null | Notebooks/ETL Pipeline Preparation.ipynb | leandrominer85/Disaster-Response-Pipelines | d800bb493f805180965c34852676d5cded755020 | ["OLDAP-2.4"] | null | null | null | 36.927515 | 292 | 0.386986 |
[
[
[
"# ETL Pipeline Preparation\nFollow the instructions below to help you create your ETL pipeline.\n### 1. Import libraries and load datasets.\n- Import Python libraries\n- Load `messages.csv` into a dataframe and inspect the first few lines.\n- Load `categories.csv` into a dataframe and inspect the first few lines.",
"_____no_output_____"
]
],
[
[
"# import libraries\nimport pandas as pd\nimport sqlite3\nfrom sqlalchemy import create_engine",
"_____no_output_____"
],
[
"# load messages dataset\nmessages = pd.read_csv(r'E:\\Dropbox\\Pessoal\\Python\\Udacity\\Disaster-Response-Pipelines\\data\\messages.csv')\nmessages.head()",
"_____no_output_____"
],
[
"# load categories dataset\ncategories = pd.read_csv(r'E:\\Dropbox\\Pessoal\\Python\\Udacity\\Disaster-Response-Pipelines\\data\\categories.csv')\ncategories.head()",
"_____no_output_____"
]
],
[
[
"### 2. Merge datasets.\n- Merge the messages and categories datasets using the common id\n- Assign this combined dataset to `df`, which will be cleaned in the following steps",
"_____no_output_____"
]
],
[
[
"# merge datasets\ndf = messages.merge(categories, on='id')\ndf.head()",
"_____no_output_____"
]
],
[
[
"### 3. Split `categories` into separate category columns.\n- Split the values in the `categories` column on the `;` character so that each value becomes a separate column. You'll find [this method](https://pandas.pydata.org/pandas-docs/version/0.23/generated/pandas.Series.str.split.html) very helpful! Make sure to set `expand=True`.\n- Use the first row of categories dataframe to create column names for the categories data.\n- Rename columns of `categories` with new column names.",
"_____no_output_____"
]
],
[
[
"# create a dataframe of the 36 individual category columns\ncategories = categories['categories'].str.split(';', expand=True)\n\n#Creating a list of the columns names with the first row value and then extract the final and assign to the DF \ncol_list = list(categories.iloc[0].values)\ncategories.columns = [x.split(\"-\")[0] for x in col_list] \ncategories.head()",
"_____no_output_____"
]
],
[
[
"### 4. Convert category values to just numbers 0 or 1.\n- Iterate through the category columns in df to keep only the last character of each string (the 1 or 0). For example, `related-0` becomes `0`, `related-1` becomes `1`. Convert the string to a numeric value.\n- You can perform [normal string actions on Pandas Series](https://pandas.pydata.org/pandas-docs/stable/text.html#indexing-with-str), like indexing, by including `.str` after the Series. You may need to first convert the Series to be of type string, which you can do with `astype(str)`.",
"_____no_output_____"
]
],
[
[
"for column in categories:\n # set each value to be the last character of the string\n categories[column] = categories[column].str.split('-',expand=True)[1]\n \n # convert column from string to numeric\n categories[column] = categories[column].astype(int)\ncategories.head()",
"_____no_output_____"
]
],
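The instructions above also mention `.str` indexing as a way to grab the last character; a small alternative sketch of the same conversion (it assumes `categories` still holds the raw `name-value` strings, i.e. it would replace the loop above rather than run after it):

```python
# Alternative conversion using .str indexing: keep the last character and cast to int
for column in categories:
    categories[column] = categories[column].astype(str).str[-1].astype(int)
categories.head()
```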
[
[
"### 5. Replace `categories` column in `df` with new category columns.\n- Drop the categories column from the df dataframe since it is no longer needed.\n- Concatenate df and categories data frames.",
"_____no_output_____"
]
],
[
[
"# drop the original categories column from `df`\ndf.drop(\"categories\", inplace = True, axis=1)\n\ndf.head()",
"_____no_output_____"
],
[
"# concatenate the original dataframe with the new `categories` dataframe\ndf = pd.concat([df,categories], axis=1, join=\"inner\")\ndf.head()",
"_____no_output_____"
],
[
"pd.set_option('display.max_columns', None)\ndf['related'] = df.loc[df['related'] == 2] = 1\ndf.describe()",
"_____no_output_____"
]
],
[
[
"### 6. Remove duplicates.\n- Check how many duplicates are in this dataset.\n- Drop the duplicates.\n- Confirm duplicates were removed.",
"_____no_output_____"
]
],
[
[
"# check number of duplicates\nsum(df.id.duplicated())",
"_____no_output_____"
],
[
"# drop duplicates\ndf.drop_duplicates(['id'],inplace=True)",
"_____no_output_____"
],
[
"# check number of duplicates\nsum(df.id.duplicated())",
"_____no_output_____"
]
],
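The cells above count and drop duplicates by `id` only. If a check on fully identical rows is also wanted (an optional extra, not part of the original notebook), something like this could be used:

```python
# count rows that are duplicated across all columns
print(df.duplicated().sum())

# drop them as well, keeping the first occurrence
df = df.drop_duplicates()
```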
[
[
"### 7. Save the clean dataset into an sqlite database.\nYou can do this with pandas [`to_sql` method](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_sql.html) combined with the SQLAlchemy library. Remember to import SQLAlchemy's `create_engine` in the first cell of this notebook to use it below.",
"_____no_output_____"
]
],
[
[
"conn = sqlite3.connect(r'E:\\Dropbox\\Pessoal\\Python\\Udacity\\Disaster-Response-Pipelines\\databases\\Disaster.db')\n\ndf.to_sql('Disaster', con=conn, index=False)",
"_____no_output_____"
]
],
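The markdown above suggests pairing `to_sql` with SQLAlchemy's `create_engine` (already imported in the first cell), whereas the cell uses a raw `sqlite3` connection. A sketch of the `create_engine` route follows; the relative database path here is illustrative, not the notebook's actual path:

```python
from sqlalchemy import create_engine

# write the cleaned dataframe through a SQLAlchemy engine instead of sqlite3
engine = create_engine('sqlite:///Disaster.db')
df.to_sql('Disaster', engine, index=False, if_exists='replace')
```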
[
[
"### 8. Use this notebook to complete `etl_pipeline.py`\nUse the template file attached in the Resources folder to write a script that runs the steps above to create a database based on new datasets specified by the user. Alternatively, you can complete `etl_pipeline.py` in the classroom on the `Project Workspace IDE` coming later.",
"_____no_output_____"
]
],
[
[
"def load_data (data1,data2,db_name='Disaster',table_name='Disaster'):\n\n '''\n This function receives two paths to data files in the .csv format (data1, data 2).\n After that it merges both dataframes in the common column ('id'). Uses the second dataframe\n (with categories column) to create a new dataframe with columns names based on the first row name\n (the string before the \"-\" character). The values of the columns of this new dataframe are the numeric part\n of the end of each row (the number after the \"-\" character). Then it concats this dataframe with\n the one merged and drop the old \"categories\" column. Finally it drops the \"id\" duplicates and saves the\n dataframe on a table in a database.\n The user can change the database and table names in the function db_name and table_name.\n \n '''\n \n \n #Read the dfs\n df1 = pd.read_csv(data1)\n df2 = pd.read_csv(data2)\n \n #Merge the dfs\n df = df1.merge(df2, on='id')\n \n \n # create a dataframe of the 36 individual category columns\n categories = df2['categories'].str.split(';', expand=True)\n\n #Creating a list of the columns names with the first row value and then extract the final and assign to the DF \n col_list = list(categories.iloc[0].values)\n categories.columns = [x.split(\"-\")[0] for x in col_list] \n \n for column in categories:\n # set each value to be the last character of the string\n categories[column] = categories[column].str.split('-',expand=True)[1]\n\n # convert column from string to numeric\n categories[column] = categories[column].astype(int)\n # drop the original categories column from `df`\n df.drop(\"categories\", inplace = True, axis=1)\n\n # concatenate the original dataframe with the new `categories` dataframe\n df = pd.concat([df,categories], axis=1)\n \n # drop duplicates\n df.drop_duplicates(['id'],inplace=True)\n \n engine = create_engine('sqlite:///{}.db'.format(db_name))\n df.to_sql('{}'.format(table_name), engine, index=False)\n \n return",
"_____no_output_____"
],
[
"conn = sqlite3.connect(r'E:\\Dropbox\\Pessoal\\Python\\Udacity\\Disaster-Response-Pipelines\\databases\\Disaster.db')\ndf = pd.read_sql('select * from disaster', con = conn)\n",
"_____no_output_____"
],
[
"genre_counts = df.groupby('genre').count()['message']\n \ngenre_names = list(genre_counts.index)\n ",
"_____no_output_____"
],
[
"genre_counts",
"_____no_output_____"
],
[
"genre_names",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0ededc92cce65fc8a1149ac785ccbbafcf36d78 | 5,299 | ipynb | Jupyter Notebook | 02-Data-Import-Export/Assignment-01.ipynb | nikkiii66/Intro-to-Data-Analytics | 7f0353bc93169202161ee8f175a6847dc0f1dc2d | ["MIT"] | 19 | 2020-11-07T20:34:58.000Z | 2021-12-29T22:54:52.000Z | 02-Data-Import-Export/Assignment-01.ipynb | nikkiii66/Intro-to-Data-Analytics | 7f0353bc93169202161ee8f175a6847dc0f1dc2d | ["MIT"] | 1 | 2020-12-02T03:44:02.000Z | 2020-12-02T03:44:02.000Z | 02-Data-Import-Export/Assignment-01.ipynb | nikkiii66/Intro-to-Data-Analytics | 7f0353bc93169202161ee8f175a6847dc0f1dc2d | ["MIT"] | 49 | 2020-11-09T11:16:36.000Z | 2022-01-26T00:28:22.000Z | 5,299 | 5,299 | 0.632006 |
[
[
[
"# Assignment 1\n\nThis assignment is to test your understanding of Python basics.\n\nAnswer the questions and complete the tasks outlined below; use the specific method described, if applicable. In order to get complete points on your homework assigment you have to a) complete this notebook, b) based on your results answer the multiple choice questions on QoestromTools. \n\n**Important note:** make sure you spend some time to review the basics of python notebooks under the folder `00-Python-Basics` in course repo or [A Whirlwind Tour of Python](https://www.oreilly.com/programming/free/files/a-whirlwind-tour-of-python.pdf).",
"_____no_output_____"
],
[
"# Question 1\n**What is 9 to the power of 7?**",
"_____no_output_____"
]
],
[
[
"# Your answer goes here\n",
"_____no_output_____"
]
],
[
[
"# Question 2\n\n**What is the quotient and remainder of 453634/34?**",
"_____no_output_____"
]
],
[
[
"# Your answer goes here\nprint('Quotient of 453634/34:')\nprint('Remainder of 453634/34:')",
"Quotient of 453634/34:\nRemainder of 453634/34:\n"
]
],
[
[
"# Question 3\n\nWrite a statement to check whether `a` is a multiple of 12 and within the range of [1000, 1800) or (0, 300]. \n\n**What is the outcome of `a = 780`?**\n\nNote: (0, 300] represents a range from 0 to 300, where 0 is not included in the range, but 300 is.",
"_____no_output_____"
]
],
[
[
"a = 780",
"_____no_output_____"
],
[
"# Your answer goes here\n",
"_____no_output_____"
]
],
[
[
"# Question 4\n\n**Given this nested list, what indexing yields to the word \"hello\"?**",
"_____no_output_____"
]
],
[
[
"lst = [[5,[100,200,{'target':[1,2,3,'hello']}],23,11],1,71,2,[3,4],'bye']\nprint(lst)",
"[[5, [100, 200, {'target': [1, 2, 3, 'hello']}], 23, 11], 1, 71, 2, [3, 4], 'bye']\n"
],
[
"# Your answer goes here\n",
"_____no_output_____"
]
],
[
[
"# Question 5\n\nUsing a list comprehension, create a new list out of the list `L1`, which contains only the even numbers from `L1`, and converts them into absolute values (using `abs()` function). Call this new list `L2`.\n\n**What is the sum of all of the elements of `L2`?** \n\nHint: Use `sum(L2)` to get the sum of all the elements.",
"_____no_output_____"
]
],
[
[
"L1 = [64, 34, 112, 91, 62, 40, 117, 80, 96, 34, 48, -9, -33,\n 99, 16, 118, -51, 60, 115, 4, -10, 82, -7, 77, -33, -40,\n 77, 90, -9, 52, -44, 25, -43, 28, -37, 92, 25, -45, 3,\n 103, 22, 39, -52, 74, -54, -76, -10, 5, -54, 95, -59, -2,\n 110, 63, -53, 113, -43, 18, 49, -20, 81, -67, 1, 38, -24,\n 57, -11, -69, -66, -67, -68, -16, 64, -34, 52, -37, -7, -40,\n 11, -3, 76, 91, -57, -48, -10, -16, 14, 13, -65]",
"_____no_output_____"
],
[
"# Your answer goes here\n",
"_____no_output_____"
]
],
[
[
"# Question 6\n\nWrite a function that receives a list of integer numbers and returns a list of numbers that are multiples of 4. Call this function `mult4_filter()`.\n\n**Given the list `L3` below how many elements the outcome of `mult4_filter(L3)` has?**\n\nHint: use `len(mult4_filter(L3))` to get the number of elements.",
"_____no_output_____"
]
],
[
[
"L3 = [15, 11, 1, 3, 13, 3, 14, 16, 17, 17, 6, 18, 10, 19, 8, 1, 18,\n 17, 14, 1, 5, 2, 13, 0, 1, 13, 16, 8, 5, 11, 12, 8, 17, 14,\n 10, 18, 17, 16, 3, 7, 8, 15, 18, 7, 10, 5, 7, 16, 6, 5]",
"_____no_output_____"
],
[
"# Your answer goes here\ndef mult4_filter(L):\n # Your code goes here\n return",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0edeecee64acccefd26b96e8f7e1c2654c27ba9 | 6,927 | ipynb | Jupyter Notebook | 2 Supervised learning/Lectures notebooks/2 sklearn cross validation/sklearn.cross_validation.ipynb | MaxSu77/ML-DA-Coursera-Yandex-MIPT | 22b1a01c9db1b0aead39da7dc7e0f93c1062dddc | ["MIT"] | 1 | 2020-09-01T12:47:25.000Z | 2020-09-01T12:47:25.000Z | 2 Supervised learning/Lectures notebooks/2 sklearn cross validation/sklearn.cross_validation.ipynb | MaxSu77/ML-DA-Coursera-Yandex-MIPT | 22b1a01c9db1b0aead39da7dc7e0f93c1062dddc | ["MIT"] | null | null | null | 2 Supervised learning/Lectures notebooks/2 sklearn cross validation/sklearn.cross_validation.ipynb | MaxSu77/ML-DA-Coursera-Yandex-MIPT | 22b1a01c9db1b0aead39da7dc7e0f93c1062dddc | ["MIT"] | 1 | 2022-03-03T06:29:29.000Z | 2022-03-03T06:29:29.000Z | 21.851735 | 153 | 0.537895 |
[
[
[
"# Sklearn",
"_____no_output_____"
],
[
"## sklearn.model_selection",
"_____no_output_____"
],
[
"документация: http://scikit-learn.org/stable/modules/cross_validation.html",
"_____no_output_____"
]
],
[
[
"from sklearn import model_selection, datasets\n\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"### Разовое разбиение данных на обучение и тест с помощью train_test_split",
"_____no_output_____"
]
],
[
[
"iris = datasets.load_iris()",
"_____no_output_____"
],
[
"train_data, test_data, train_labels, test_labels = model_selection.train_test_split(iris.data, iris.target, \n test_size = 0.3)",
"_____no_output_____"
],
[
"#убедимся, что тестовая выборка действительно составляет 0.3 от всех данных\nfloat(len(test_labels))/len(iris.data)",
"_____no_output_____"
],
[
"print 'Размер обучающей выборки: {} объектов \\nРазмер тестовой выборки: {} объектов'.format(len(train_data),\n len(test_data))",
"_____no_output_____"
],
[
"print 'Обучающая выборка:\\n', train_data[:5]\nprint '\\n'\nprint 'Тестовая выборка:\\n', test_data[:5]",
"_____no_output_____"
],
[
"print 'Метки классов на обучающей выборке:\\n', train_labels\nprint '\\n'\nprint 'Метки классов на тестовой выборке:\\n', test_labels",
"_____no_output_____"
]
],
[
[
"### Стратегии проведения кросс-валидации",
"_____no_output_____"
]
],
[
[
"#сгенерируем короткое подобие датасета, где элементы совпадают с порядковым номером\nX = range(0,10)",
"_____no_output_____"
]
],
[
[
"#### KFold",
"_____no_output_____"
]
],
[
[
"kf = model_selection.KFold(n_splits = 5)\nfor train_indices, test_indices in kf.split(X):\n print train_indices, test_indices",
"_____no_output_____"
],
[
"kf = model_selection.KFold(n_splits = 2, shuffle = True)\nfor train_indices, test_indices in kf.split(X):\n print train_indices, test_indices",
"_____no_output_____"
],
[
"kf = model_selection.KFold(n_splits = 2, shuffle = True, random_state = 1)\nfor train_indices, test_indices in kf.split(X):\n print train_indices, test_indices",
"_____no_output_____"
]
],
[
[
"#### StratifiedKFold",
"_____no_output_____"
]
],
[
[
"y = np.array([0] * 5 + [1] * 5)\nprint y\n\nskf = model_selection.StratifiedKFold(n_splits = 2, shuffle = True, random_state = 0)\nfor train_indices, test_indices in skf.split(X, y):\n print train_indices, test_indices",
"_____no_output_____"
],
[
"target = np.array([0, 1] * 5)\nprint target\n\nskf = model_selection.StratifiedKFold(n_splits = 2,shuffle = True)\nfor train_indices, test_indices in skf.split(X, target):\n print train_indices, test_indices",
"_____no_output_____"
]
],
[
[
"#### ShuffleSplit",
"_____no_output_____"
]
],
[
[
"ss = model_selection.ShuffleSplit(n_splits = 10, test_size = 0.2)\n\nfor train_indices, test_indices in ss.split(X):\n print train_indices, test_indices",
"_____no_output_____"
]
],
[
[
"#### StratifiedShuffleSplit",
"_____no_output_____"
]
],
[
[
"target = np.array([0] * 5 + [1] * 5)\nprint target\n\nsss = model_selection.StratifiedShuffleSplit(n_splits = 4, test_size = 0.2)\nfor train_indices, test_indices in sss.split(X, target):\n print train_indices, test_indices",
"_____no_output_____"
]
],
[
[
"#### Leave-One-Out",
"_____no_output_____"
]
],
[
[
"loo = model_selection.LeaveOneOut()\n\nfor train_indices, test_index in loo.split(X):\n print train_indices, test_index",
"_____no_output_____"
]
],
[
[
"Больше стратегий проведения кросс-валидации доступно здесь: http://scikit-learn.org/stable/modules/cross_validation.html#cross-validation-iterators",
"_____no_output_____"
]
]
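As a closing example of how these splitters are typically used (not part of the original notebook), any of them can be passed to `cross_val_score` through the `cv` argument:

```python
from sklearn import datasets, linear_model, model_selection

iris = datasets.load_iris()
clf = linear_model.LogisticRegression(max_iter=200)

# any splitter (KFold, StratifiedKFold, ShuffleSplit, ...) can be passed via cv
scores = model_selection.cross_val_score(clf, iris.data, iris.target,
                                         cv=model_selection.StratifiedKFold(n_splits=5))
print(scores)         # one accuracy score per fold
print(scores.mean())  # average cross-validated accuracy
```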
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0edef5680e646294c0ff00704185de3079029b2 | 763,171 | ipynb | Jupyter Notebook | examples/bl/bl_base_charts.ipynb | roprisor/alphamodel | ecf2f16df3495de7b20f597eee756c7ecc30e342 | ["Apache-2.0"] | 4 | 2021-01-15T10:27:07.000Z | 2022-02-16T03:40:01.000Z | examples/bl/bl_base_charts.ipynb | roprisor/alphamodel | ecf2f16df3495de7b20f597eee756c7ecc30e342 | ["Apache-2.0"] | 1 | 2021-05-07T16:10:43.000Z | 2021-08-13T23:00:58.000Z | examples/bl/bl_base_charts.ipynb | roprisor/alphamodel | ecf2f16df3495de7b20f597eee756c7ecc30e342 | ["Apache-2.0"] | 6 | 2021-05-19T09:17:57.000Z | 2022-03-08T16:34:02.000Z | 542.410092 | 247,568 | 0.934335 |
[
[
[
"# Black Litterman with Investor Views Optimization: Oldest Country ETFs\n# Charts",
"_____no_output_____"
],
[
"## 1. Data Fetching",
"_____no_output_____"
],
[
"### 1.1 Model configuration",
"_____no_output_____"
]
],
[
[
"import os\nimport sys\nimport datetime as dt\nimport logging\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom hmmlearn import hmm\nimport cvxportfolio as cp\nimport alphamodel as am\n\nconfig = {'name': 'bl_sim_charts',\n 'universe':\n {'list': ['SPY', 'EWA', 'EWC', 'EWG', 'EWH', 'EWJ', 'EWS', 'EWU', 'EWW'],\n 'ticker_col': 'Symbol',\n 'risk_free_symbol': 'USDOLLAR'},\n 'data':\n {'name': 'eod_returns',\n 'source': 'quandl',\n 'table': 'EOD',\n 'api_key': \"6XyApK2BBj_MraQg2TMD\"},\n 'model':\n {'start_date': '19970102',\n 'end_date': '20091231',\n 'halflife': 65,\n 'min_periods': 3,\n 'hidden_states': 2,\n 'train_len': 1700,\n 'process': 'none',\n 'data_dir': '/Users/razvan/PyRepo/research_masc/data_store/bl/',\n 'returns':\n {'sampling_freq': 'daily'},\n 'covariance':\n {'method' : 'SS',\n 'sampling_freq' : 'monthly',\n 'train_days': 360}\n }\n }\n\n# Logging\nlogger = logging.getLogger()\nlogger.setLevel(logging.WARNING)",
"_____no_output_____"
]
],
[
[
"### 1.2 Fetch return data",
"_____no_output_____"
]
],
[
[
"# Fetch returns / volumes\nss = am.SingleStockBLEWM(config)\nss.train(force=True)\n\n# Realized Data for Simulation\nprices = ss.get('prices', 'realized', ss.cfg['returns']['sampling_freq']).iloc[1:,:]\nreturns = ss.get('returns', 'realized', ss.cfg['returns']['sampling_freq'])\nvolumes = ss.get('volumes', 'realized', ss.cfg['returns']['sampling_freq'])\nsigmas = ss.get('sigmas', 'realized', ss.cfg['returns']['sampling_freq'])\n\nsimulated_tcost = cp.TcostModel(half_spread=0.0005/2., nonlin_coeff=1., sigma=sigmas, volume=volumes)\nsimulated_hcost = cp.HcostModel(borrow_costs=0.0001)\nsimulator = cp.MarketSimulator(returns, costs=[simulated_tcost, simulated_hcost],\n market_volumes=volumes, cash_key=ss.risk_free_symbol)",
"downloading SPY from 19970102 to 20091231\ndownloading EWA from 19970102 to 20091231\ndownloading EWC from 19970102 to 20091231\ndownloading EWG from 19970102 to 20091231\ndownloading EWH from 19970102 to 20091231\ndownloading EWJ from 19970102 to 20091231\ndownloading EWS from 19970102 to 20091231\ndownloading EWU from 19970102 to 20091231\ndownloading EWW from 19970102 to 20091231\ndownloading USDOLLAR from 19970102 to 20091231\nRemoving these days from dataset:\n nan price\n1999-04-02 9\n2001-09-13 9\n2001-09-14 9\n2007-01-02 9\n2007-04-06 9\n remaining nan price\nSPY 0\nEWA 0\nEWC 0\nEWG 0\nEWH 0\nEWJ 0\nEWS 0\nEWU 0\nEWW 0\nUSDOLLAR 25\nProceeding with forward fills to remove remaining NaNs\n"
]
],
[
[
"### 1.3 Plot return data",
"_____no_output_____"
]
],
[
[
"# Process returns for charting\nchart_returns = returns[returns.index >= dt.datetime(2005, 1, 2)]\nchart_growth = (chart_returns + 1).cumprod()\nchart_returns_cum = chart_growth - 1\nchart_returns_cum = chart_returns_cum.stack().reset_index()\nchart_returns_cum.columns = ['Date', 'Ticker', 'Value']",
"_____no_output_____"
],
[
"plt.figure(figsize=(15,8))\nsns.set(font_scale=1.5)\nwith sns.axes_style('ticks'):\n data = chart_returns_cum\n ax = sns.lineplot(x='Date', y='Value', hue='Ticker', data=data)\n ax.set(xlabel='Date', ylabel='Return')\n plt.savefig(ss.cfg['data_dir'] + 'bl_asset_returns.png')",
"_____no_output_____"
]
],
[
[
"## 2. Model fitting",
"_____no_output_____"
],
[
"### 2.1 Extract Black Litterman equilibrium returns",
"_____no_output_____"
]
],
[
[
"# Aggregate market stats for cal\nmarket_stats = pd.DataFrame({'MarketCap/GDP': [1.25, 1, 1.25, 0.45, 3.5, 0.8, 2, 1.25, 0.3, 0],\n 'GDP': [2543500, 150000, 239000, 853000, 22500, 1037500, 10000, 422500, 164500, 0]},\n index=ss.universe + ['USDOLLAR'])\nmarket_stats.loc[:, 'MarketCap'] = market_stats.loc[:, 'MarketCap/GDP'] * market_stats.loc[:, 'GDP']\nmarket_stats.loc[:, 'MarketCap Weights'] = market_stats.loc[:, 'MarketCap'] / market_stats.loc[:, 'MarketCap'].sum()\nmarket_stats",
"_____no_output_____"
],
[
"# Generate market cap weights pandas.Series\nw_mktcap = pd.Series(index=market_stats.index, data=market_stats.loc[:, 'MarketCap Weights'])\nw_mktcap['USDOLLAR'] = 0.",
"_____no_output_____"
]
],
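For context, the Black-Litterman prior built from these market-cap weights is normally obtained by reverse optimization, pi = delta * Sigma * w_mkt. The `ss.predict(...)` calls below presumably handle this internally; the sketch here only illustrates the formula with toy numbers, not the notebook's actual covariance:

```python
import pandas as pd

def implied_equilibrium_returns(sigma: pd.DataFrame, w_mkt: pd.Series, delta: float) -> pd.Series:
    """Reverse-optimize CAPM-style equilibrium excess returns: pi = delta * Sigma * w_mkt."""
    return delta * sigma.dot(w_mkt)

# toy two-asset example (hypothetical numbers)
sigma_toy = pd.DataFrame([[0.04, 0.01], [0.01, 0.09]], index=['A', 'B'], columns=['A', 'B'])
w_toy = pd.Series({'A': 0.6, 'B': 0.4})
print(implied_equilibrium_returns(sigma_toy, w_toy, delta=2.5))
```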
[
[
"### 2.2 Generate BL posterior returns/covariance",
"_____no_output_____"
]
],
[
[
"# Parameters that match simulations\nrisk_aversion = 2.5\nconfidence = 0.8\nvconf = 0.7\ngamma_risk = 0.1\ngamma_trade = 0.1\ngamma_hold = 0",
"_____no_output_____"
]
],
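As a reference for what these gammas control, the single-period problem solved by `cp.SinglePeriodOpt` is roughly of the form below (a sketch of the standard formulation from the multi-period trading literature, not a quote of cvxportfolio's documentation):

```latex
\max_{w_t}\;\; \hat r_t^{\top} w_t
 \;-\; \gamma^{\mathrm{risk}}\, \psi_t(w_t)
 \;-\; \gamma^{\mathrm{trade}}\, \phi^{\mathrm{trade}}_t(w_t - w_{t-1})
 \;-\; \gamma^{\mathrm{hold}}\, \phi^{\mathrm{hold}}_t(w_t)
\quad \text{subject to the portfolio constraints (here: leverage} \le 1\text{, zero cash, long only).}
```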
[
[
"#### 2.2.1 Correct View",
"_____no_output_____"
]
],
[
[
"# Predicted Data for Optimization\n# US underperforms Germany 4% per year - correct view\nss.predict(w_market_cap_init=w_mktcap, risk_aversion=risk_aversion, c=confidence,\n P_view=np.array([-1, 0, 0, 1, 0, 0, 0, 0, 0, 0]), Q_view=np.array(0.04 / 252),\n view_confidence=vconf\n )\n\n# Black Litterman output\nr_cor_pred = ss.get('returns', 'predicted')\ncovariance_cor_pred = ss.get('covariance', 'predicted')\nvolumes_cor_pred = ss.get('volumes', 'predicted')\nsigmas_cor_pred = ss.get('sigmas', 'predicted')",
"Typical variance of returns: 0.000319834\n"
]
],
[
[
"#### 2.2.2 Incorrect View",
"_____no_output_____"
]
],
[
[
"# Predicted Data for Optimization\n# US outperforms Germany 4% per year - correct view\nss.predict(w_market_cap_init=w_mktcap, risk_aversion=risk_aversion, c=confidence,\n P_view=np.array([1, 0, 0, -1, 0, 0, 0, 0, 0, 0]), Q_view=np.array(0.04 / 252),\n view_confidence=vconf\n )\n\n# Black Litterman output\nr_incor_pred = ss.get('returns', 'predicted')\ncovariance_incor_pred = ss.get('covariance', 'predicted')\nvolumes_incor_pred = ss.get('volumes', 'predicted')\nsigmas_incor_pred = ss.get('sigmas', 'predicted')",
"Typical variance of returns: 0.000319834\n"
]
],
[
[
"## 3. Simulation Results",
"_____no_output_____"
],
[
"### Input Data",
"_____no_output_____"
]
],
[
[
"# Start and end date\nstart_date = dt.datetime(2005, 1, 2)\nend_date = dt.datetime.strptime(config['model']['end_date'], '%Y%m%d')\n\n# Predicted costs\noptimization_tcost = cp.TcostModel(half_spread=0.0005/2., nonlin_coeff=1.,\n sigma=sigmas_cor_pred,\n volume=volumes_cor_pred)\noptimization_hcost=cp.HcostModel(borrow_costs=0.0001)",
"_____no_output_____"
]
],
[
[
"## 3.1 Single Period Optimization for Allocation",
"_____no_output_____"
],
[
"### 3.1.1 Market Capitalization Weights",
"_____no_output_____"
]
],
[
[
"%%time\n\n# Market cap weights\nmktcap_rebalance = cp.Hold(trading_freq=\"once\")\n\n# Backtest\nmarket_cap_w = simulator.run_multiple_backtest(1E6*w_mktcap,\n start_time=start_date, end_time=end_date,\n policies=[mktcap_rebalance],\n loglevel=logging.WARNING, parallel=True)\nmarket_cap_w[0].summary()",
"Number of periods 1259\nInitial timestamp 2005-01-03 00:00:00\nFinal timestamp 2009-12-31 00:00:00\nPortfolio return (%) 5.262\nExcess return (%) 2.501\nExcess risk (%) 24.951\nSharpe ratio 0.100\nMax. drawdown 57.101\nTurnover (%) 0.000\nAverage policy time (sec) 0.000\nAverage simulator time (sec) 0.004\nCPU times: user 48.9 ms, sys: 43.1 ms, total: 91.9 ms\nWall time: 21.1 s\n"
],
[
"market_cap_w[0].v.plot(figsize=(17,7))",
"_____no_output_____"
]
],
[
[
"### 3.1.2 Black Litterman Returns & Covariance Simulation",
"_____no_output_____"
]
],
[
[
"# Optimization parameters\nleverage_limit = cp.LeverageLimit(1)\nfully_invested = cp.ZeroCash()\nlong_only = cp.LongOnly()",
"_____no_output_____"
]
],
[
[
"#### 3.1.2.1 Correct View",
"_____no_output_____"
]
],
[
[
"%%time\n\n# Covariance setup\nbl_cor_risk_model = cp.FullSigma(covariance_cor_pred)\n\n# Optimization policy\nbl_cor_policy = cp.SinglePeriodOpt(return_forecast=r_cor_pred, \n costs=[gamma_risk*bl_cor_risk_model,\n gamma_trade*optimization_tcost,\n gamma_hold*optimization_hcost],\n constraints=[leverage_limit, fully_invested, long_only],\n trading_freq='hour')\n\n# Backtest\nbl_cor_results = simulator.run_multiple_backtest(1E6*w_mktcap,\n start_time=start_date, end_time=end_date,\n policies=[bl_cor_policy],\n loglevel=logging.WARNING, parallel=True)\nbl_cor_results[0].summary()",
"2020-09-13 17:41:18,067 BasePolicy: trading_freq hour is not supported, the policy will only trade once.\n"
],
[
"bl_cor_results[0].v.plot(figsize=(17,7))",
"_____no_output_____"
],
[
"bl_cor_results[0].w.plot(figsize=(17,6))",
"_____no_output_____"
]
],
[
[
"#### 3.1.2.2 Incorrect View",
"_____no_output_____"
]
],
[
[
"%%time\n\n# Covariance setup\nbl_incor_risk_model = cp.FullSigma(covariance_incor_pred)\n\n# Optimization policy\nbl_incor_policy = cp.SinglePeriodOpt(return_forecast=r_incor_pred, \n costs=[gamma_risk*bl_incor_risk_model,\n gamma_trade*optimization_tcost,\n gamma_hold*optimization_hcost],\n constraints=[leverage_limit, fully_invested, long_only],\n trading_freq='hour')\n\n# Backtest\nbl_incor_results = simulator.run_multiple_backtest(1E6*w_mktcap,\n start_time=start_date, end_time=end_date,\n policies=[bl_incor_policy],\n loglevel=logging.WARNING, parallel=True)\nbl_incor_results[0].summary()",
"2020-09-13 17:41:48,552 BasePolicy: trading_freq hour is not supported, the policy will only trade once.\n"
],
[
"bl_incor_results[0].v.plot(figsize=(17,7))",
"_____no_output_____"
],
[
"bl_incor_results[0].w.plot(figsize=(17,6))",
"_____no_output_____"
]
],
[
[
"### 3.1.3 Weight Allocation Difference",
"_____no_output_____"
]
],
[
[
"# Market capitalization weights\nw_mktcap\nw_mktcap.name = 'Equilibrium'\n\n# Correct view weights\nw_bl_cor = bl_cor_results[0].w.iloc[1,:]\nw_bl_cor.name = 'Correct View'\n\n#Incorrect view weights\nw_bl_incor = bl_incor_results[0].w.iloc[1,:]\nw_bl_incor.name = 'Incorrect View'\n\n# Construct weight dataframe\nbl_weights = pd.concat([w_mktcap, w_bl_cor, w_bl_incor], axis=1)\nbl_weights = bl_weights.stack().reset_index()\nbl_weights.columns = ['Ticker', 'Scenario', 'Value']",
"_____no_output_____"
],
[
"%matplotlib inline \n\nwith sns.axes_style('ticks', {'figure.figsize': (15,8), 'font_scale': 1.5}):\n data = bl_weights\n ax = sns.catplot(x='Ticker', y='Value', hue='Scenario', data=data, kind='bar', palette='muted', height=10)\n ax.set(xlabel='Scenario', ylabel='Portfolio Weight')\n ax.fig.set_size_inches(12,5)\n plt.xticks(rotation=30, horizontalalignment='right')\n plt.savefig(ss.cfg['data_dir'] + 'bl_view_weights.png', bbox_inches=\"tight\")",
"_____no_output_____"
]
],
[
[
"### 3.1.4 View Confidence Sharpe Difference",
"_____no_output_____"
]
],
[
[
"# Grab Black-Litterman view simulation results\nbl_eq_results = market_cap_w[0]\nbl_eq = pd.DataFrame.from_dict({'Ex-Post View': ['Equilibrium'],\n 'view_confidence': [0],\n 'excess_return': [bl_eq_results.excess_returns.mean() * 100 * bl_eq_results.ppy],\n 'excess_risk': [bl_eq_results.excess_returns.std() * 100 * np.sqrt(bl_eq_results.ppy)]})\n\nbl_cor_view = pd.read_csv(ss.cfg['data_dir'] + 'bl_ewm_corview.csv')\nbl_cor = bl_cor_view[['view_confidence', 'excess_return', 'excess_risk']].copy()\nbl_cor.loc[:, 'Ex-Post View'] = 'Correct View'\n\nbl_incor_view = pd.read_csv(ss.cfg['data_dir'] + 'bl_ewm_incorview.csv')\nbl_incor = bl_incor_view[['view_confidence', 'excess_return', 'excess_risk']].copy()\nbl_incor.loc[:, 'Ex-Post View'] = 'Incorrect View'\n\nbl_results = pd.concat([bl_eq, bl_cor, bl_incor])\nbl_results.loc[:, 'sharpe'] = bl_results.loc[:, 'excess_return'] / bl_results.loc[:, 'excess_risk']\nbl_results",
"_____no_output_____"
],
[
"plt.figure(figsize=(15,8))\n\nwith sns.axes_style('ticks', {'font_scale': 1.5}):\n data = bl_results\n ax = sns.lineplot(x='view_confidence', y='sharpe', hue='Ex-Post View', style='Ex-Post View', data=data, markers=True)\n ax.set(xlabel='Static View Confidence', ylabel='Sharpe Ratio')\n ax.axhline(0.100230, ls='--')\n plt.savefig(ss.cfg['data_dir'] + 'bl_view_sharpe.png')",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0edf0e83a6f7c468d8ce31e799a866f32627d70 | 33,744 | ipynb | Jupyter Notebook | C1/W2/ungraded_labs/C1_W2_Lab_1_beyond_hello_world_completed.ipynb | Conscious-Mind/TensorFlow-Course-DeepLearning.AI | ec1b50e7f9846862c48d36f2594582d147e695ea | ["Apache-2.0"] | null | null | null | C1/W2/ungraded_labs/C1_W2_Lab_1_beyond_hello_world_completed.ipynb | Conscious-Mind/TensorFlow-Course-DeepLearning.AI | ec1b50e7f9846862c48d36f2594582d147e695ea | ["Apache-2.0"] | null | null | null | C1/W2/ungraded_labs/C1_W2_Lab_1_beyond_hello_world_completed.ipynb | Conscious-Mind/TensorFlow-Course-DeepLearning.AI | ec1b50e7f9846862c48d36f2594582d147e695ea | ["Apache-2.0"] | null | null | null | 38.697248 | 790 | 0.553076 |
[
[
[
"<a href=\"https://colab.research.google.com/github/Conscious-Mind/TensorFlow-Course-DeepLearning.AI/blob/main/C1/W2/ungraded_labs/C1_W2_Lab_1_beyond_hello_world_completed.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Ungraded Lab: Beyond Hello World, A Computer Vision Example\nIn the previous exercise, you saw how to create a neural network that figured out the problem you were trying to solve. This gave an explicit example of learned behavior. Of course, in that instance, it was a bit of overkill because it would have been easier to write the function `y=2x-1` directly instead of bothering with using machine learning to learn the relationship between `x` and `y`.\n\nBut what about a scenario where writing rules like that is much more difficult -- for example a computer vision problem? Let's take a look at a scenario where you will build a neural network to recognize different items of clothing, trained from a dataset containing 10 different types.",
"_____no_output_____"
],
[
"## Start Coding\n\nLet's start with our import of TensorFlow.",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\n\nprint(tf.__version__)",
"_____no_output_____"
]
],
[
[
"The [Fashion MNIST dataset](https://github.com/zalandoresearch/fashion-mnist) is a collection of grayscale 28x28 pixel clothing images. Each image is associated with a label as shown in this table⁉\n\n| Label | Description |\n| --- | --- |\n| 0 | T-shirt/top |\n| 1 | Trouser |\n| 2 | Pullover |\n| 3 | Dress |\n| 4 | Coat |\n| 5 | Sandal |\n| 6 | Shirt |\n| 7 | Sneaker |\n| 8 | Bag |\n| 9 | Ankle boot |\n\nThis dataset is available directly in the [tf.keras.datasets](https://www.tensorflow.org/api_docs/python/tf/keras/datasets) API and you load it like this:",
"_____no_output_____"
]
],
[
[
"# Load the Fashion MNIST dataset\nfmnist = tf.keras.datasets.fashion_mnist",
"_____no_output_____"
]
],
[
[
"Calling `load_data()` on this object will give you two tuples with two lists each. These will be the training and testing values for the graphics that contain the clothing items and their labels.\n",
"_____no_output_____"
]
],
[
[
"# Load the training and test split of the Fashion MNIST dataset\n(training_images, training_labels), (test_images, test_labels) = fmnist.load_data()",
"_____no_output_____"
]
],
[
[
"What does these values look like? Let's print a training image (both as an image and a numpy array), and a training label to see. Experiment with different indices in the array. For example, also take a look at index `42`. That's a different boot than the one at index `0`.\n",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\n\n# You can put between 0 to 59999 here\nindex = 0\n\n# Set number of characters per row when printing\nnp.set_printoptions(linewidth=320)\n\n# Print the label and image\nprint(f'LABEL: {training_labels[index]}')\nprint(f'\\nIMAGE PIXEL ARRAY:\\n {training_images[index]}')\n\n# Visualize the image\nplt.imshow(training_images[index])",
"_____no_output_____"
],
[
"# You can put between 0 to 59999 here\nindex = 22\n\n# Set number of characters per row when printing\nnp.set_printoptions(linewidth=320)\n\n# Print the label and image\nprint(f'LABEL: {training_labels[index]}')\nprint(f'\\nIMAGE PIXEL ARRAY:\\n {training_images[index]}')\n\n# Visualize the image\nplt.imshow(training_images[index])",
"_____no_output_____"
]
],
[
[
"You'll notice that all of the values in the number are between 0 and 255. If you are training a neural network especially in image processing, for various reasons it will usually learn better if you scale all values to between 0 and 1. It's a process called _normalization_ and fortunately in Python, it's easy to normalize an array without looping. You do it like this:",
"_____no_output_____"
]
],
[
[
"# Normalize the pixel values of the train and test images\ntraining_images = training_images / 255.0\ntest_images = test_images / 255.0",
"_____no_output_____"
]
],
[
[
"Now you might be wondering why the dataset is split into two: training and testing? Remember we spoke about this in the intro? The idea is to have 1 set of data for training, and then another set of data that the model hasn't yet seen. This will be used to evaluate how good it would be at classifying values.",
"_____no_output_____"
],
[
"Let's now design the model. There's quite a few new concepts here. But don't worry, you'll get the hang of them. ",
"_____no_output_____"
]
],
[
[
"# Build the classification model\nmodel = tf.keras.models.Sequential([tf.keras.layers.Flatten(), \n tf.keras.layers.Dense(128, activation=tf.nn.relu), \n tf.keras.layers.Dense(10, activation=tf.nn.softmax)])",
"_____no_output_____"
]
],
[
[
"[Sequential](https://keras.io/api/models/sequential/): That defines a sequence of layers in the neural network.\n\n[Flatten](https://keras.io/api/layers/reshaping_layers/flatten/): Remember earlier where our images were a 28x28 pixel matrix when you printed them out? Flatten just takes that square and turns it into a 1-dimensional array.\n\n[Dense](https://keras.io/api/layers/core_layers/dense/): Adds a layer of neurons\n\nEach layer of neurons need an [activation function](https://keras.io/api/layers/activations/) to tell them what to do. There are a lot of options, but just use these for now: \n\n[ReLU](https://keras.io/api/layers/activations/#relu-function) effectively means:\n\n```\nif x > 0: \n return x\n\nelse: \n return 0\n```\n\nIn other words, it it only passes values 0 or greater to the next layer in the network.\n\n[Softmax](https://keras.io/api/layers/activations/#softmax-function) takes a list of values and scales these so the sum of all elements will be equal to 1. When applied to model outputs, you can think of the scaled values as the probability for that class. For example, in your classification model which has 10 units in the output dense layer, having the highest value at `index = 4` means that the model is most confident that the input clothing image is a coat. If it is at index = 5, then it is a sandal, and so forth. See the short code block below which demonstrates these concepts. You can also watch this [lecture](https://www.youtube.com/watch?v=LLux1SW--oM&ab_channel=DeepLearningAI) if you want to know more about the Softmax function and how the values are computed.\n",
"_____no_output_____"
]
],
[
[
"# Declare sample inputs and convert to a tensor\ninputs = np.array([[1.0, 3.0, 4.0, 2.0]])\ninputs = tf.convert_to_tensor(inputs)\nprint(f'input to softmax function: {inputs.numpy()}')\n\n# Feed the inputs to a softmax activation function\noutputs = tf.keras.activations.softmax(inputs)\nprint(f'output of softmax function: {outputs.numpy()}')\n\n# Get the sum of all values after the softmax\nsum = tf.reduce_sum(outputs)\nprint(f'sum of outputs: {sum}')\n\n# Get the index with highest value\nprediction = np.argmax(outputs)\nprint(f'class with highest probability: {prediction}')",
"_____no_output_____"
]
],
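In the same spirit as the softmax demo above, the ReLU behavior described earlier (anything not greater than 0 becomes 0, positives pass through) can be checked directly:

```python
import tensorflow as tf

# inputs spanning negative, zero, and positive values
x = tf.constant([-2.0, 0.0, 3.5])

# ReLU zeroes out non-positive values and passes positives unchanged
print(tf.keras.activations.relu(x).numpy())  # expected: [0.  0.  3.5]
```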
[
[
"The next thing to do, now that the model is defined, is to actually build it. You do this by compiling it with an optimizer and loss function as before -- and then you train it by calling `model.fit()` asking it to fit your training data to your training labels. It will figure out the relationship between the training data and its actual labels so in the future if you have inputs that looks like the training data, then it can predict what the label for that input is.",
"_____no_output_____"
]
],
[
[
"model.compile(optimizer = tf.optimizers.Adam(),\n loss = 'sparse_categorical_crossentropy',\n metrics=['accuracy'])\n\nmodel.fit(training_images, training_labels, epochs=5)",
"_____no_output_____"
]
],
[
[
"Once it's done training -- you should see an accuracy value at the end of the final epoch. It might look something like `0.9098`. This tells you that your neural network is about 91% accurate in classifying the training data. That is, it figured out a pattern match between the image and the labels that worked 91% of the time. Not great, but not bad considering it was only trained for 5 epochs and done quite quickly.\n\nBut how would it work with unseen data? That's why we have the test images and labels. We can call [`model.evaluate()`](https://keras.io/api/models/model_training_apis/#evaluate-method) with this test dataset as inputs and it will report back the loss and accuracy of the model. Let's give it a try:",
"_____no_output_____"
]
],
[
[
"# Evaluate the model on unseen data\nmodel.evaluate(test_images, test_labels)",
"_____no_output_____"
]
],
[
[
"You can expect the accuracy here to be about `0.88` which means it was 88% accurate on the entire test set. As expected, it probably would not do as well with *unseen* data as it did with data it was trained on! As you go through this course, you'll look at ways to improve this. ",
"_____no_output_____"
],
[
"# Exploration Exercises\n\nTo explore further and deepen your understanding, try the below exercises:",
"_____no_output_____"
],
[
"### Exercise 1:\nFor this first exercise run the below code: It creates a set of classifications for each of the test images, and then prints the first entry in the classifications. The output, after you run it is a list of numbers. Why do you think this is, and what do those numbers represent? ",
"_____no_output_____"
]
],
[
[
"classifications = model.predict(test_images)\n\nprint(classifications[0])",
"_____no_output_____"
],
[
"print(classifications)",
"_____no_output_____"
]
],
[
[
"**Hint:** try running `print(test_labels[0])` -- and you'll get a `9`. Does that help you understand why this list looks the way it does? ",
"_____no_output_____"
]
],
[
[
"print(test_labels[0])",
"_____no_output_____"
]
],
[
[
"### E1Q1: What does this list represent?\n\n\n1. It's 10 random meaningless values\n2. It's the first 10 classifications that the computer made\n3. It's the probability that this item is each of the 10 classes\n",
"_____no_output_____"
],
[
"<details><summary>Click for Answer</summary>\n<p>\n\n#### Answer: \nThe correct answer is (3)\n\nThe output of the model is a list of 10 numbers. These numbers are a probability that the value being classified is the corresponding value (https://github.com/zalandoresearch/fashion-mnist#labels), i.e. the first value in the list is the probability that the image is of a '0' (T-shirt/top), the next is a '1' (Trouser) etc. Notice that they are all VERY LOW probabilities.\n\nFor index 9 (Ankle boot), the probability was in the 90's, i.e. the neural network is telling us that the image is most likely an ankle boot.\n\n</p>\n</details>",
"_____no_output_____"
],
[
"### E1Q2: How do you know that this list tells you that the item is an ankle boot?\n\n\n1. There's not enough information to answer that question\n2. The 10th element on the list is the biggest, and the ankle boot is labelled 9\n2. The ankle boot is label 9, and there are 0->9 elements in the list\n",
"_____no_output_____"
],
[
"<details><summary>Click for Answer</summary>\n<p>\n\n#### Answer\nThe correct answer is (2). Both the list and the labels are 0 based, so the ankle boot having label 9 means that it is the 10th of the 10 classes. The list having the 10th element being the highest value means that the Neural Network has predicted that the item it is classifying is most likely an ankle boot\n\n</p>\n</details>",
"_____no_output_____"
],
[
"### Exercise 2: \nLet's now look at the layers in your model. Experiment with different values for the dense layer with 512 neurons. What different results do you get for loss, training time etc? Why do you think that's the case? \n",
"_____no_output_____"
]
],
[
[
"mnist = tf.keras.datasets.mnist\n\n(training_images, training_labels) , (test_images, test_labels) = mnist.load_data()\n\ntraining_images = training_images/255.0\ntest_images = test_images/255.0\n\nmodel = tf.keras.models.Sequential([tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(1024, activation=tf.nn.relu), # Try experimenting with this layer\n tf.keras.layers.Dense(10, activation=tf.nn.softmax)])\n\nmodel.compile(optimizer = 'adam',\n loss = 'sparse_categorical_crossentropy')\n\nmodel.fit(training_images, training_labels, epochs=5)\n\nmodel.evaluate(test_images, test_labels)\n\nclassifications = model.predict(test_images)\n\nprint(classifications[0])\nprint(test_labels[0])",
"_____no_output_____"
]
],
[
[
"### E2Q1: Increase to 1024 Neurons -- What's the impact?\n\n1. Training takes longer, but is more accurate\n2. Training takes longer, but no impact on accuracy\n3. Training takes the same time, but is more accurate\n",
"_____no_output_____"
],
[
"<details><summary>Click for Answer</summary>\n<p>\n\n#### Answer\nThe correct answer is (1) by adding more Neurons we have to do more calculations, slowing down the process, but in this case they have a good impact -- we do get more accurate. That doesn't mean it's always a case of 'more is better', you can hit the law of diminishing returns very quickly!\n\n</p>\n</details>",
"_____no_output_____"
],
[
"### Exercise 3: \n\n### E3Q1: What would happen if you remove the Flatten() layer. Why do you think that's the case? \n\n<details><summary>Click for Answer</summary>\n<p>\n\n#### Answer\nYou get an error about the shape of the data. It may seem vague right now, but it reinforces the rule of thumb that the first layer in your network should be the same shape as your data. Right now our data is 28x28 images, and 28 layers of 28 neurons would be infeasible, so it makes more sense to 'flatten' that 28,28 into a 784x1. Instead of writng all the code to handle that ourselves, we add the Flatten() layer at the begining, and when the arrays are loaded into the model later, they'll automatically be flattened for us.\n\n</p>\n</details>",
"_____no_output_____"
]
],
[
[
"mnist = tf.keras.datasets.mnist\n\n(training_images, training_labels) , (test_images, test_labels) = mnist.load_data()\n\ntraining_images = training_images/255.0\ntest_images = test_images/255.0\n\nmodel = tf.keras.models.Sequential([#tf.keras.layers.Flatten(), #Try removing this layer\n tf.keras.layers.Dense(64, activation=tf.nn.relu),\n tf.keras.layers.Dense(10, activation=tf.nn.softmax)])\n\nmodel.compile(optimizer = 'adam',\n loss = 'sparse_categorical_crossentropy')\n\nmodel.fit(training_images, training_labels, epochs=5)\n\nmodel.evaluate(test_images, test_labels)\n\nclassifications = model.predict(test_images)\n\nprint(classifications[0])\nprint(test_labels[0])",
"_____no_output_____"
]
],
[
[
"### Exercise 4: \n\nConsider the final (output) layers. Why are there 10 of them? What would happen if you had a different amount than 10? For example, try training the network with 5.\n\n<details><summary>Click for Answer</summary>\n<p>\n\n#### Answer\nYou get an error as soon as it finds an unexpected value. Another rule of thumb -- the number of neurons in the last layer should match the number of classes you are classifying for. In this case it's the digits 0-9, so there are 10 of them, hence you should have 10 neurons in your final layer.\n\n</p>\n</details>",
"_____no_output_____"
]
],
[
[
"mnist = tf.keras.datasets.mnist\n\n(training_images, training_labels) , (test_images, test_labels) = mnist.load_data()\n\ntraining_images = training_images/255.0\ntest_images = test_images/255.0\n\nmodel = tf.keras.models.Sequential([tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(64, activation=tf.nn.relu),\n tf.keras.layers.Dense(5, activation=tf.nn.softmax) # Try experimenting with this layer\n ])\n\nmodel.compile(optimizer = 'adam',\n loss = 'sparse_categorical_crossentropy')\n\nmodel.fit(training_images, training_labels, epochs=5)\n\nmodel.evaluate(test_images, test_labels)\n\nclassifications = model.predict(test_images)\n\nprint(classifications[0])\nprint(test_labels[0])",
"_____no_output_____"
]
],
[
[
"### Exercise 5: \n\nConsider the effects of additional layers in the network. What will happen if you add another layer between the one with 512 and the final layer with 10. \n\n<details><summary>Click for Answer</summary>\n<p>\n\n#### Answer \nThere isn't a significant impact -- because this is relatively simple data. For far more complex data (including color images to be classified as flowers that you'll see in the next lesson), extra layers are often necessary. \n\n</p>\n</details>",
"_____no_output_____"
]
],
[
[
"mnist = tf.keras.datasets.mnist\n\n(training_images, training_labels) , (test_images, test_labels) = mnist.load_data()\n\ntraining_images = training_images/255.0\ntest_images = test_images/255.0\n\nmodel = tf.keras.models.Sequential([tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(512, activation=tf.nn.relu), # Add a layer here \n tf.keras.layers.Dense(256, activation=tf.nn.relu),\n tf.keras.layers.Dense(10, activation=tf.nn.softmax) # Add a layer here\n ])\n\nmodel.compile(optimizer = 'adam',\n loss = 'sparse_categorical_crossentropy')\n\nmodel.fit(training_images, training_labels, epochs=5)\n\nmodel.evaluate(test_images, test_labels)\n\nclassifications = model.predict(test_images)\n\nprint(classifications[0])\nprint(test_labels[0])",
"_____no_output_____"
]
],
[
[
"### Exercise 6: \n\n### E6Q1: Consider the impact of training for more or less epochs. Why do you think that would be the case? \n\n- Try 15 epochs -- you'll probably get a model with a much better loss than the one with 5\n- Try 30 epochs -- you might see the loss value stops decreasing, and sometimes increases.\n\nThis is a side effect of something called 'overfitting' which you can learn about later and it's something you need to keep an eye out for when training neural networks. There's no point in wasting your time training if you aren't improving your loss, right! :)",
"_____no_output_____"
]
],
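One way to actually see the overfitting described above is to track validation loss during training; a small sketch that reuses a compiled `model` and the normalized train/test arrays defined in the cells above:

```python
# if val_loss starts rising while loss keeps falling, the model is overfitting
history = model.fit(training_images, training_labels,
                    validation_data=(test_images, test_labels),
                    epochs=30)
print(min(history.history['val_loss']))
```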
[
[
"mnist = tf.keras.datasets.mnist\n\n(training_images, training_labels) , (test_images, test_labels) = mnist.load_data()\n\ntraining_images = training_images/255.0\ntest_images = test_images/255.0\n\nmodel = tf.keras.models.Sequential([tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(128, activation=tf.nn.relu),\n tf.keras.layers.Dense(10, activation=tf.nn.softmax)])\n\nmodel.compile(optimizer = 'adam',\n loss = 'sparse_categorical_crossentropy')\n\nmodel.fit(training_images, training_labels, epochs=15) # Experiment with the number of epochs\n\nmodel.evaluate(test_images, test_labels)\n\nclassifications = model.predict(test_images)\n\nprint(classifications[34])\nprint(test_labels[34])",
"_____no_output_____"
],
[
"mnist = tf.keras.datasets.mnist\n\n(training_images, training_labels) , (test_images, test_labels) = mnist.load_data()\n\ntraining_images = training_images/255.0\ntest_images = test_images/255.0\n\nmodel = tf.keras.models.Sequential([tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(128, activation=tf.nn.relu),\n tf.keras.layers.Dense(10, activation=tf.nn.softmax)])\n\nmodel.compile(optimizer = 'adam',\n loss = 'sparse_categorical_crossentropy')\n\nmodel.fit(training_images, training_labels, epochs=30) # Experiment with the number of epochs\n\nmodel.evaluate(test_images, test_labels)\n\nclassifications = model.predict(test_images)\n\nprint(classifications[34])\nprint(test_labels[34])",
"_____no_output_____"
]
],
[
[
"### Exercise 7: \n\nBefore you trained, you normalized the data, going from values that were 0-255 to values that were 0-1. What would be the impact of removing that? Here's the complete code to give it a try. Why do you think you get different results? ",
"_____no_output_____"
]
],
[
[
"mnist = tf.keras.datasets.mnist\n(training_images, training_labels), (test_images, test_labels) = mnist.load_data()\n# training_images=training_images/255.0 # Experiment with removing this line\n# test_images=test_images/255.0 # Experiment with removing this line\nmodel = tf.keras.models.Sequential([\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(512, activation=tf.nn.relu),\n tf.keras.layers.Dense(10, activation=tf.nn.softmax)\n])\nmodel.compile(optimizer='adam', loss='sparse_categorical_crossentropy')\nmodel.fit(training_images, training_labels, epochs=5)\nmodel.evaluate(test_images, test_labels)\nclassifications = model.predict(test_images)\nprint(classifications[0])\nprint(test_labels[0])",
"_____no_output_____"
]
],
[
[
"### Exercise 8: \n\nEarlier when you trained for extra epochs you had an issue where your loss might change. It might have taken a bit of time for you to wait for the training to do that, and you might have thought 'wouldn't it be nice if I could stop the training when I reach a desired value?' -- i.e. 95% accuracy might be enough for you, and if you reach that after 3 epochs, why sit around waiting for it to finish a lot more epochs....So how would you fix that? Like any other program...you have callbacks! Let's see them in action...",
"_____no_output_____"
]
],
[
[
"class myCallback(tf.keras.callbacks.Callback):\n def on_epoch_end(self, epoch, logs={}):\n if(logs.get('accuracy') >= 0.9): # Experiment with changing this value\n print(\"\\nReached 90% accuracy so cancelling training!\")\n self.model.stop_training = True\n\ncallbacks = myCallback()\nmnist = tf.keras.datasets.fashion_mnist\n(training_images, training_labels), (test_images, test_labels) = mnist.load_data()\ntraining_images=training_images/255.0\ntest_images=test_images/255.0\nmodel = tf.keras.models.Sequential([\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(1024, activation=tf.nn.relu),\n tf.keras.layers.Dense(512, activation=tf.nn.relu),\n tf.keras.layers.Dense(10, activation=tf.nn.softmax)\n])\nmodel.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])\nmodel.fit(training_images, training_labels, epochs=10, callbacks=[callbacks])\n",
"_____no_output_____"
]
]
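For comparison, Keras also ships a built-in callback that stops training once a monitored metric stops improving; a minimal sketch (the fit call is commented out and assumes the `model` and data from the cell above):

```python
import tensorflow as tf

# stop if training accuracy has not improved for 2 consecutive epochs
early_stop = tf.keras.callbacks.EarlyStopping(monitor='accuracy', patience=2)

# model.fit(training_images, training_labels, epochs=30, callbacks=[early_stop])
```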
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0ee02dbd5486cbca306238c1e5a4a0832e35a3d | 10,638 | ipynb | Jupyter Notebook | Notebooks/CUDEM.ipynb | envidynxlab/Barriers_Development | dbf7cdba149875c68fb37a9f8b7da02b67b60867 | ["MIT"] | null | null | null | Notebooks/CUDEM.ipynb | envidynxlab/Barriers_Development | dbf7cdba149875c68fb37a9f8b7da02b67b60867 | ["MIT"] | null | null | null | Notebooks/CUDEM.ipynb | envidynxlab/Barriers_Development | dbf7cdba149875c68fb37a9f8b7da02b67b60867 | ["MIT"] | null | null | null | 41.554688 | 267 | 0.494454 |
[
[
[
" # <strong>Road networks and robustness to flooding on US Atlantic and Gulf barrier islands</strong>\n ## <strong>- Download elevations for the US Atlantic and Gulf barrier islands -</strong>\n ### The purpose of this notebook is to download CUDEM tiles from https://coast.noaa.gov/htdata/raster2/elevation/NCEI_ninth_Topobathy_2014_8483/. These tiles will be clipped to the extent of the study area and used to retrieve elevations for each network node.",
"_____no_output_____"
]
],
[
[
"### Packages\n\nimport os\nimport requests\nimport urllib\nimport ssl\nfrom osgeo import gdal, ogr\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport rasterio\nfrom rasterio.merge import merge\nfrom rasterio.plot import show\nfrom rasterio.plot import show_hist\nfrom rasterio.mask import mask\nfrom shapely.geometry import box\nimport geopandas as gpd\nfrom fiona.crs import from_epsg\nimport pycrs\nimport rasterio\nimport glob\n%matplotlib inline",
"_____no_output_____"
],
[
"### Set working directory\n\npath='' # introduce path to your working directory\nos.chdir(path)",
"_____no_output_____"
],
[
"### Download CUDEM tiles\n\n# Create folders if they don't exist\noutdir= './Data/CUDEM'\nif not os.path.exists(outdir):\n os.makedirs(outdir)\n\noutdir = './Data/CUDEM/Tiles'\nif not os.path.exists(outdir):\n os.makedirs(outdir)\n\n# Retrive url list with cudem tiles and save it\nurl = 'https://coast.noaa.gov/htdata/raster2/elevation/NCEI_ninth_Topobathy_2014_8483/urllist8483.txt'\nr = requests.get(url, allow_redirects=True)\nopen('./Data/CUDEM/urllist8483.txt', 'wb').write(r.content)\n\n## If urlopen error pops up [SSL: CERTIFICATE_VERIFY_FAILED] use this line of code but keep in mind that allowing the use of unverified ssl can introduce security risks if the \n## data source is not trustworthy\n# ssl._create_default_https_context = ssl._create_unverified_context \n\n# Function to get the name of the file from the url\ndef get_filename(url):\n \"\"\"\n Parses filename from given url\n \"\"\"\n if url.find('/'):\n return url.rsplit('/', 1)[1]\n\n# Download tiles using url list\nwith open('./Data/CUDEM/urllist8483.txt') as f:\n flat_list=[word for line in f for word in line.split()]\nurl_list = flat_list\n\nfor url in url_list:\n # parse filename\n fname = get_filename(url)\n outfp = os.path.join(outdir, fname)\n # download the file if it does not exist already\n try:\n if not os.path.exists(outfp):\n print('Downloading', fname)\n r = urllib.request.urlretrieve(url, outfp)\n except:\n continue",
"_____no_output_____"
],
[
"### Check if extent of raster tile and barrier island shp (200m buffer) overlap. If true, clip raster using polygon and save it \n\ndef getFeatures(gdf):\n \"\"\"Function to parse features from GeoDataFrame in such a manner that rasterio wants them\"\"\"\n import json\n return [json.loads(gdf.to_json())['features'][0]['geometry']]\n\n# Create folder if it does no exist\noutdir= './Data/CUDEM/CUDEM_Clip'\nif not os.path.exists(outdir):\n os.makedirs(outdir)\n\nfor filename1 in os.listdir('./Data/CUDEM/Tiles'):\n if filename1.endswith('.tif'):\n raster_dir=('./Data/CUDEM/Tiles/{0}'.format(filename1))\n raster_name=filename1.replace ('.tif', '')\n raster = gdal.Open(raster_dir)\n \n # get raster geometry\n transform = raster.GetGeoTransform()\n pixelWidth = transform[1]\n pixelHeight = transform[5]\n cols = raster.RasterXSize\n rows = raster.RasterYSize\n xLeft = transform[0]\n yTop = transform[3]\n xRight = xLeft+cols*pixelWidth\n yBottom = yTop+rows*pixelHeight\n ring = ogr.Geometry(ogr.wkbLinearRing)\n ring.AddPoint(xLeft, yTop)\n ring.AddPoint(xLeft, yBottom)\n ring.AddPoint(xRight, yBottom)\n ring.AddPoint(xRight, yTop)\n ring.AddPoint(xLeft, yTop)\n rasterGeometry = ogr.Geometry(ogr.wkbPolygon)\n rasterGeometry.AddGeometry(ring)\n \n for filename2 in os.listdir('./Data/Barriers/Buffers_200m'):\n if filename2.endswith('.shp'):\n vector_dir=('./Data/Barriers/Buffers_200m/{0}'.format(filename2))\n vector_name=filename2.replace('.shp', '')\n vector = ogr.Open(vector_dir)\n \n # get vector geometry\n layer = vector.GetLayer()\n feature = layer.GetFeature(0)\n vectorGeometry = feature.GetGeometryRef()\n \n # check if they intersect and if they do clip raster tile using polygon\n if rasterGeometry.Intersect(vectorGeometry) == True:\n # output clipped raster\n out_tif = os.path.join('./Data/CUDEM/CUDEM_Clip/{0}_{1}.tif'.format(vector_name,raster_name))\n # read the data\n data = rasterio.open(raster_dir)\n barrier = gpd.read_file(vector_dir)\n # project the Polygon into same CRS as the grid\n barrier = barrier.to_crs(crs=data.crs.data)\n coords = getFeatures(barrier)\n # clip raster with polygon\n out_img, out_transform = mask(dataset=data, shapes=coords, crop=True)\n # copy the metadata\n out_meta = data.meta.copy()\n out_meta.update({\"driver\": \"GTiff\",\n \"height\": out_img.shape[1],\n \"width\": out_img.shape[2],\n \"transform\": out_transform,\n \"crs\": \"EPSG:4269\"})\n # write clipped raster to disk\n with rasterio.open(out_tif, \"w\", **out_meta) as dest:\n dest.write(out_img) \n else:\n continue",
"_____no_output_____"
],
[
"### With the clipped rasters, create CUDEM mosaic for each barrier \n\n# Create folder if it does no exist\noutdir= './Data/CUDEM/CUDEM_Mosaic'\nif not os.path.exists(outdir):\n os.makedirs(outdir)\n\n# Merge all clipped rasters that start with the same name (belong to the same barrier) in one mosaic\nfor vector in os.listdir('./Data/Barriers/Buffers_200m'):\n if vector.endswith('.shp'):\n vector_name= vector.replace('.shp', '')\n # list for the source files\n src_files_to_mosaic = []\n for raster in os.listdir('./Data/CUDEM/CUDEM_Clip'):\n if raster.startswith(vector_name):\n src = rasterio.open('./Data/CUDEM/CUDEM_Clip/{0}'.format(raster))\n src_files_to_mosaic.append(src)\n # merge function returns a single mosaic array and the transformation info\n mosaic, out_trans = merge(src_files_to_mosaic)\n # copy the metadata\n out_meta = src.meta.copy()\n # update the metadata\n out_meta.update({\"driver\": \"GTiff\",\n \"height\": mosaic.shape[1],\n \"width\": mosaic.shape[2],\n \"transform\": out_trans,\n \"crs\": \"EPSG:4269\"})\n # write the mosaic raster to disk\n with rasterio.open(\"./Data/CUDEM/CUDEM_Mosaic/{0}.tif\".format(vector_name), \"w\", **out_meta) as dest:\n dest.write(mosaic)\n \n else:\n continue\n ",
"_____no_output_____"
]
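,
    [
        "### Illustration (not from the original notebook): retrieving an elevation for each network node\n# The intro says the CUDEM mosaics are used to look up an elevation for every network node. This is a\n# minimal sketch of that step with rasterio, assuming a point layer of nodes in the mosaic CRS\n# (EPSG:4269); the file and barrier names below are hypothetical placeholders.\nnodes = gpd.read_file('./Data/Networks/example_barrier_nodes.shp').to_crs('EPSG:4269')  # hypothetical path\nwith rasterio.open('./Data/CUDEM/CUDEM_Mosaic/example_barrier.tif') as src:  # hypothetical mosaic name\n    coords = [(geom.x, geom.y) for geom in nodes.geometry]\n    # src.sample() yields one array of band values per coordinate; band 1 holds the elevation\n    nodes['elevation'] = [val[0] for val in src.sample(coords)]\nprint(nodes['elevation'].describe())",
        "_____no_output_____"
    ]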
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
d0ee1d26e0eff6c103a288132638dec3135f960d
| 4,620 |
ipynb
|
Jupyter Notebook
|
content/lectures/lecture12/notebook/L3_1.ipynb
|
DavidAssaraf106/2021-CS109B
|
502970bf8bb653222eade0ff42d1fc2c54d3fa74
|
[
"MIT"
] | 2 |
2021-02-06T02:56:05.000Z
|
2021-03-24T22:53:43.000Z
|
content/lectures/lecture12/notebook/L3_1.ipynb
|
DavidAssaraf106/2021-CS109B
|
502970bf8bb653222eade0ff42d1fc2c54d3fa74
|
[
"MIT"
] | null | null | null |
content/lectures/lecture12/notebook/L3_1.ipynb
|
DavidAssaraf106/2021-CS109B
|
502970bf8bb653222eade0ff42d1fc2c54d3fa74
|
[
"MIT"
] | null | null | null | 24.315789 | 148 | 0.444589 |
[
[
[
"## Title :\nBayes - Exercise 1\n\n## Description :\nModel y as normal distribution with unknown mean and std dev, ignoring x\n\n- After completing this exercise you should see following trace plots: \n\n<img src=\"../fig/fig1.png\" style=\"width: 500px;\">\n\n\n## Hints: \n\n<a href=\"https://docs.pymc.io/api/distributions/continuous.html#pymc3.distributions.continuous.Normal\" target=\"_blank\">pymc3 Normal</a>\n\nRefer to lecture notebook.\n\nDo not change any other code except the blanks.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\n\nimport pymc3 as pm\nimport warnings\nwarnings.filterwarnings('ignore')\n\nfrom matplotlib import pyplot\n%matplotlib inline",
"_____no_output_____"
],
[
"df = pd.read_csv('data3.csv')\ndf.head()",
"_____no_output_____"
],
[
"### edTest(test_pm_model) ###\nnp.random.seed(109)\nwith pm.Model() as model:\n #Set priors for unknown model parameters\n alpha = pm.Normal('alpha',mu=0,tau=1000)\n \n # Likelihood (sampling distribution) of observations\n tau_obs = pm.Gamma('tau', alpha=0.001, beta=0.001)\n obs = pm.Normal(____________) #Parameters to set: name, mu, tau, observed\n # create trace plots \n trace = pm.sample(2000, tune=2000)\n pm.traceplot(trace, compact=False);\n",
"_____no_output_____"
],
[
"#posterior means\nnp.mean(trace['alpha']) , np.mean(trace['tau'])",
"_____no_output_____"
]
]
] |
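,
[
    [
        "# A minimal sketch for illustration only (it does not fill in the exercise blank): the pm.Normal\n# likelihood pattern from the hint, shown on synthetic data. The names y_sim and demo_model are made\n# up for this sketch.\nimport numpy as np\nimport pymc3 as pm\n\ny_sim = np.random.normal(loc=3.0, scale=1.5, size=200)  # synthetic observations\nwith pm.Model() as demo_model:\n    mu = pm.Normal('mu', mu=0, sigma=10)        # prior on the unknown mean\n    sigma = pm.HalfNormal('sigma', sigma=5)     # prior on the unknown std dev\n    y_obs = pm.Normal('y_obs', mu=mu, sigma=sigma, observed=y_sim)  # likelihood of the data\n    demo_trace = pm.sample(1000, tune=1000)\nprint(np.mean(demo_trace['mu']), np.mean(demo_trace['sigma']))",
        "_____no_output_____"
    ]
]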
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d0ee2b8a07a9cc4897ff0a98a02c4751e9709eba
| 72,779 |
ipynb
|
Jupyter Notebook
|
python/.ipynb_checkpoints/mnli-checkpoint.ipynb
|
debajyotidatta/multiNLI_mod
|
d94e30ddd628a2df65859424ebec7d212d3227b5
|
[
"Apache-2.0"
] | null | null | null |
python/.ipynb_checkpoints/mnli-checkpoint.ipynb
|
debajyotidatta/multiNLI_mod
|
d94e30ddd628a2df65859424ebec7d212d3227b5
|
[
"Apache-2.0"
] | null | null | null |
python/.ipynb_checkpoints/mnli-checkpoint.ipynb
|
debajyotidatta/multiNLI_mod
|
d94e30ddd628a2df65859424ebec7d212d3227b5
|
[
"Apache-2.0"
] | null | null | null | 44.677103 | 1,401 | 0.49605 |
[
[
[
"\"\"\"\nTraining script to train a model on MultiNLI and, optionally, on SNLI data as well.\nThe \"alpha\" hyperparamaters set in paramaters.py determines if SNLI data is used in training. If alpha = 0, no SNLI data is used in training. If alpha > 0, then down-sampled SNLI data is used in training. \n\"\"\"\n\n\n%tb\n\nimport tensorflow as tf\nimport os\nimport importlib\nimport random\nfrom util import logger\nimport util.parametersipynb as params\nfrom util.data_processing_ipynb import *\nfrom util.evaluate import *\n\n\nargs = params.argparser(\"cbow petModel-0 --keep_rate 0.9 --seq_length 25 --emb_train\")\nFIXED_PARAMETERS = params.load_parameters(args)\ntest_matched = \"{}/multinli_0.9/multinli_0.9_test_matched_unlabeled.jsonl\".format(args.datapath)\n\nif os.path.isfile(test_matched):\n test_matched = \"{}/multinli_0.9/multinli_0.9_test_matched_unlabeled.jsonl\".format(args.datapath)\n test_mismatched = \"{}/multinli_0.9/multinli_0.9_test_matched_unlabeled.jsonl\".format(args.datapath)\n test_path = \"{}/multinli_0.9/\".format(args.datapath)\nelse:\n test_path = \"{}/multinli_0.9/\".format(args.datapath)\n temp_file = os.path.join(test_path, \"temp.jsonl\")\n io.open(temp_file, \"wb\")\n test_matched = temp_file\n test_mismatched = temp_file\n\nmodname = FIXED_PARAMETERS[\"model_name\"]\nlogpath = os.path.join(FIXED_PARAMETERS[\"log_path\"], modname) + \".log\"\nlogger = logger.Logger(logpath)\n\nmodel = FIXED_PARAMETERS[\"model_type\"]\n\nmodule = importlib.import_module(\".\".join(['models', model])) \nMyModel = getattr(module, 'MyModel')\n\n# Logging parameter settings at each launch of training script\n# This will help ensure nothing goes awry in reloading a model and we consistenyl use the same hyperparameter settings. \nlogger.Log(\"FIXED_PARAMETERS\\n %s\" % FIXED_PARAMETERS)\n\n\n######################### LOAD DATA #############################\n\nlogger.Log(\"Loading data\")\ntraining_snli = load_nli_data(FIXED_PARAMETERS[\"training_snli\"], snli=True)\ndev_snli = load_nli_data(FIXED_PARAMETERS[\"dev_snli\"], snli=True)\ntest_snli = load_nli_data(FIXED_PARAMETERS[\"test_snli\"], snli=True)\n\ntraining_mnli = load_nli_data(FIXED_PARAMETERS[\"training_mnli\"])\ndev_matched = load_nli_data(FIXED_PARAMETERS[\"dev_matched\"])\ndev_mismatched = load_nli_data(FIXED_PARAMETERS[\"dev_mismatched\"])\n# test_matched = load_nli_data(FIXED_PARAMETERS[\"test_matched\"])\n# test_mismatched = load_nli_data(FIXED_PARAMETERS[\"test_mismatched\"])",
"_____no_output_____"
],
[
"# if 'temp.jsonl' in FIXED_PARAMETERS[\"test_matched\"]:\n# # Removing temporary empty file that was created in parameters.py\n# os.remove(FIXED_PARAMETERS[\"test_matched\"])\n# logger.Log(\"Created and removed empty file called temp.jsonl since test set is not available.\")\n\ndictpath = os.path.join(FIXED_PARAMETERS[\"log_path\"], modname) + \".p\"\n\nif not os.path.isfile(dictpath): \n logger.Log(\"Building dictionary\")\n if FIXED_PARAMETERS[\"alpha\"] == 0:\n word_indices_uni, word_indices_bi, word_indices_tri = build_dictionary_ngrams([training_mnli])\n else:\n word_indices_uni, word_indices_bi, word_indices_tri = build_dictionary_ngrams([training_mnli, training_snli])\n \n logger.Log(\"Padding and indexifying sentences\")\n sentences_to_padded_index_sequences_ngrams(word_indices_uni, word_indices_bi, word_indices_tri, [training_mnli, training_snli, dev_matched, dev_mismatched, dev_snli, test_snli])\n# pickle.dump(word_indices_uni, word_indices_bi, word_indices_tri, open(dictpath, \"wb\"))\n\nelse:\n logger.Log(\"Loading dictionary from %s\" % (dictpath))\n word_indices_uni, word_indices_bi, word_indices_tri = pickle.load(open(dictpath, \"rb\"))\n logger.Log(\"Padding and indexifying sentences\")\n sentences_to_padded_index_sequences_ngrams(word_indices_uni, word_indices_bi, word_indices_tri, [training_mnli, training_snli, dev_matched, dev_mismatched, dev_snli, test_snli])\n\nlogger.Log(\"Loading embeddings\")\nloaded_embeddings = loadEmbedding_rand(FIXED_PARAMETERS[\"embedding_data_path\"], word_indices_uni)",
"[1] Building dictionary\n[1] Padding and indexifying sentences\n[1] Loading embeddings\n"
],
[
"word_indices_bi[PADDING]",
"_____no_output_____"
],
[
"training_mnli[0]",
"_____no_output_____"
],
[
"sent = training_mnli[0]['sentence1']",
"_____no_output_____"
],
[
"def tokenize(string):\n string = re.sub(r'\\(|\\)', '', string)\n return string.split()",
"_____no_output_____"
],
[
"bigrammed_sent = list(nltk.bigrams(tokenize(sent)))",
"_____no_output_____"
],
[
"bigrammed_sent",
"_____no_output_____"
],
[
"# bigrams",
"_____no_output_____"
],
[
"sent2 = [0]*FIXED_PARAMETERS[\"seq_length\"]\n\ntoken_sequence = list(nltk.bigrams(tokenize(sent)))\npadding = FIXED_PARAMETERS[\"seq_length\"] - len(token_sequence)\n\nfor i in range(FIXED_PARAMETERS[\"seq_length\"]):\n if i >= len(token_sequence):\n index = bigrams[PADDING]\n else:\n if token_sequence[i] in bigrams:\n index = bigrams[token_sequence[i]]\n else:\n index = bigrams[UNKNOWN]\n sent2[i] = index",
"_____no_output_____"
],
[
"sent2",
"_____no_output_____"
],
[
"bigrams = word_indices_bi.values()",
"_____no_output_____"
],
[
"len(word_indices_bi)",
"_____no_output_____"
],
[
"len(word_indices_uni)",
"_____no_output_____"
],
[
"bigrams = collections.Counter(word_indices_bi)",
"_____no_output_____"
],
[
"most_common = bigrams.most_common(500)\nleast_common = bigrams.most_common()[-500:]",
"_____no_output_____"
],
[
"words_to_get_rid_off = dict(most_common+least_common)",
"_____no_output_____"
],
[
"words_to_get_rid_off = words_to_get_rid_off.keys()",
"_____no_output_____"
],
[
"bigrams_fin = {k:v for k,v in bigrams.iteritems() if k not in words_to_get_rid_off}",
"_____no_output_____"
],
[
"bigrams_fin",
"_____no_output_____"
],
[
"a = [i for i in range(10)]\nb = [i for i in range(20)]",
"_____no_output_____"
],
[
"a,b",
"_____no_output_____"
],
[
"import pickle",
"_____no_output_____"
],
[
"pickle.dump(a,b,open('./test1.p',\"wb\"))",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0ee2e27df9b27aaf9822df537722c7ca8e11c20
| 14,017 |
ipynb
|
Jupyter Notebook
|
Beranque_Assessment_W7.ipynb
|
M4rkAdrian/CPEN21A-1-2
|
fa56ce18c89e3c2ffc3cb9fced34a0fe7af7e11c
|
[
"Apache-2.0"
] | null | null | null |
Beranque_Assessment_W7.ipynb
|
M4rkAdrian/CPEN21A-1-2
|
fa56ce18c89e3c2ffc3cb9fced34a0fe7af7e11c
|
[
"Apache-2.0"
] | null | null | null |
Beranque_Assessment_W7.ipynb
|
M4rkAdrian/CPEN21A-1-2
|
fa56ce18c89e3c2ffc3cb9fced34a0fe7af7e11c
|
[
"Apache-2.0"
] | null | null | null | 21.10994 | 241 | 0.37854 |
[
[
[
"<a href=\"https://colab.research.google.com/github/M4rkAdrian/CPEN21A-1-2/blob/main/Beranque_Assessment_W7.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"### Python Indention",
"_____no_output_____"
]
],
[
[
"if 5>2:\n print(\"Five is greater than two!\")",
"Five is greater than two!\n"
]
],
[
[
"### Python Comments\n",
"_____no_output_____"
]
],
[
[
"#This is a comment\nprint(\"Hello,World\")",
"Hello,World\n"
]
],
[
[
"### Python Variable",
"_____no_output_____"
]
],
[
[
"x = 1\na,b=0,-1\na,b,c=0,-1,-2\nprint(x)\nprint(a,b,c)\nb= \"Sally\" #This is a type of string\n",
"1\n0 -1 -2\n"
]
],
[
[
"### Casting",
"_____no_output_____"
]
],
[
[
"x = 10\ny = \"John\"\nz = -5\nprint(type(x))\nprint(type(y))\nprint(type(z))",
"<class 'int'>\n<class 'str'>\n<class 'int'>\n"
]
],
[
[
"### Type Function",
"_____no_output_____"
]
],
[
[
"x = 5\ny = \"John\"\nz = \"Marie\"\nprint(type(x))\nprint(type(y))\nprint(type(z))",
"<class 'int'>\n<class 'str'>\n<class 'str'>\n"
]
],
[
[
"### Double quotes or Single quote",
"_____no_output_____"
]
],
[
[
"y = \"John\"\ny = 'John'\nprint(y)\nx = \"Marie\"\nx = 'Marie'\nprint(x)",
"John\nMarie\n"
]
],
[
[
"### Case Sensitive",
"_____no_output_____"
]
],
[
[
"a = 4\nA = \"Sally\"\nprint(A)\n\nb = 7\nB = \"Senpai\"\nprint(B)",
"Sally\nSenpai\n"
]
],
[
[
"### Multiple Variables",
"_____no_output_____"
]
],
[
[
"x,y,z=\"one\",\"two\",\"three\"\nprint(x)\nprint(y)\nprint(z)",
"one\ntwo\nthree\n"
]
],
[
[
"### Output Variables",
"_____no_output_____"
]
],
[
[
"x = \"enjoying\"\nprint(\"Python programming is\" \" \"+ x)",
"Python programming is enjoying\n"
],
[
"x = \"Python is\"\" \"\ny = \"enjoying\"\nz = x+y\nprint(z)",
"Python is enjoying\n"
]
],
[
[
"### Arithmetic Operations",
"_____no_output_____"
]
],
[
[
"x = 7\ny = 5\nprint(x+y)",
"12\n"
],
[
"x = 5\ny = 8\nsum=x+y\nprint(sum)",
"13\n"
]
],
[
[
"### Assignment Operators",
"_____no_output_____"
]
],
[
[
"x,y,z=0,-2,6\nx += 5\nprint(x)",
"5\n"
],
[
"a,b,c=-2,6,8\nb *= 7\nprint(b)",
"42\n"
]
],
[
[
"### Comparison Operators",
"_____no_output_____"
]
],
[
[
"a,b,c=4,-2, 5\nb != c",
"_____no_output_____"
]
],
[
[
"### Logical Operators",
"_____no_output_____"
]
],
[
[
"a,b,c=2,6,9\na<b and b>c",
"_____no_output_____"
],
[
"a<b or b<c",
"_____no_output_____"
],
[
"not(a<b and b>c)",
"_____no_output_____"
]
],
[
[
"### Identity Operators",
"_____no_output_____"
]
],
[
[
"x,y,z=-2,-4,6 \nprint(x is x)\nprint(x is not z)",
"True\nTrue\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0ee2ffbf3a0a92b4953096ffe295997826732c7
| 19,838 |
ipynb
|
Jupyter Notebook
|
zz_study_ws_01.ipynb
|
Mouatez/PetaDroid
|
e7f554cf8830d2da1a68f683ccb65bc16d18b28d
|
[
"MIT"
] | 2 |
2021-06-25T13:20:14.000Z
|
2022-02-21T13:36:21.000Z
|
zz_study_ws_01.ipynb
|
Mouatez/PetaDroid
|
e7f554cf8830d2da1a68f683ccb65bc16d18b28d
|
[
"MIT"
] | null | null | null |
zz_study_ws_01.ipynb
|
Mouatez/PetaDroid
|
e7f554cf8830d2da1a68f683ccb65bc16d18b28d
|
[
"MIT"
] | 1 |
2021-12-14T10:35:13.000Z
|
2021-12-14T10:35:13.000Z
| 43.31441 | 131 | 0.534076 |
[
[
[
"import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\n\nfrom torch.optim.lr_scheduler import ReduceLROnPlateau\nfrom torch.utils.data import DataLoader\nfrom torch.utils.data import Dataset\nfrom torch.utils.tensorboard import SummaryWriter\n\nimport numpy as np\nimport pandas as pd\nimport hashlib\nimport shutil\nimport glob\nimport time\nimport re\nimport os\n\nfrom tqdm import tqdm\nfrom datetime import datetime\nfrom sklearn.metrics import f1_score, recall_score, precision_score, accuracy_score\n \nclass Net(nn.Module):\n def __init__(self, sequenceSize=20000, embeddingDim=128, vocabularySize=2**16, filterWidth=5, filterNumber=1024):\n super(Net, self).__init__()\n self.sequenceSize = sequenceSize\n self.embeddingDim = embeddingDim\n self.vocabularySize = vocabularySize\n self.filterWidth = filterWidth\n self.filterNumber = filterNumber \n \n self.embedding = nn.Embedding(self.vocabularySize, self.embeddingDim)\n self.conv = nn.Sequential(\n nn.Conv2d(1, self.filterNumber, (self.filterWidth, self.embeddingDim)),\n nn.BatchNorm2d(self.filterNumber),\n nn.ReLU()\n )\n \n self.fc = nn.Sequential(\n nn.Linear(self.filterNumber , 512),\n nn.BatchNorm1d(512),\n nn.ReLU(),\n \n nn.Linear(512, 256),\n nn.BatchNorm1d(256),\n nn.ReLU(),\n \n nn.Linear(256, 1),\n nn.Sigmoid()\n )\n\n def forward(self, x):\n x = self.embedding(x)\n #print(x.size())\n \n x = self.conv(x)\n #print(x.size())\n \n x = x.max(dim=2)[0]\n #print(x.size())\n\n x = x.view(-1, self.filterNumber)\n x = self.fc(x)\n return x\n\nclass SampleDataset(Dataset):\n def __init__(self, filePathList, labels, sequenceSize=20000, featureName='functionMethodCallsArgs'):\n self.filePathList = filePathList\n self.labels = labels\n self.sequenceSize = sequenceSize\n self.featureName = featureName\n \n def __len__(self):\n return len(self.filePathList)\n\n def __getitem__(self, idx):\n df = pd.read_parquet(self.filePathList[idx])\n seed = int(round(time.time()%1, 6) * 1000000)\n x = np.concatenate(df.iloc[np.random.RandomState(seed).permutation(len(df))][self.featureName].values)\n\n if len(x) > self.sequenceSize:\n x = x[:self.sequenceSize]\n else:\n x = np.concatenate((x, np.zeros([self.sequenceSize - len(x)])))\n \n sample = torch.from_numpy(x)\n return (sample.long(), self.labels[idx], self.filePathList[idx])\n\ndef train(model, optimizer, dataLoader, device):\n running_loss = 0.0 \n label_lst = list()\n predicted_lst = list()\n\n model.train()\n for inputs, labels, _ in dataLoader:\n \n #\n inputs = inputs.unsqueeze(1).to(device)\n labels = labels.to(device)\n\n #\n optimizer.zero_grad()\n\n #\n outputs = model(inputs)\n predicted = (outputs > 0.5).squeeze().long()\n loss = F.binary_cross_entropy(outputs.squeeze(), labels.float())\n\n #\n loss.backward()\n optimizer.step()\n\n #\n label_lst.append(labels.cpu().numpy())\n predicted_lst.append(predicted.cpu().numpy()) \n running_loss += loss.item() \n\n labels = np.concatenate(label_lst)\n predicted = np.concatenate(predicted_lst)\n loss = running_loss / len(predicted)\n \n return labels, predicted, loss\n\ndef assess(model, dataLoader, device):\n running_loss = 0.0 \n label_lst = list()\n predicted_lst = list()\n proba_lst = list()\n path_lst = list()\n\n with torch.no_grad():\n model.eval()\n for inputs, labels, paths in dataLoader:\n #\n inputs = inputs.unsqueeze(1).to(device)\n labels = labels.to(device)\n\n #\n outputs = model(inputs)\n predicted = (outputs > 0.5).squeeze().long()\n loss = F.binary_cross_entropy(outputs.squeeze(), 
labels.float())\n\n #\n if len(inputs) > 1:\n label_lst.append(labels.cpu().numpy())\n predicted_lst.append(predicted.cpu().numpy())\n proba_lst.append(outputs.squeeze().cpu().numpy())\n path_lst.append(paths)\n running_loss += loss.item() \n \n labels = np.concatenate(label_lst)\n predicted = np.concatenate(predicted_lst)\n proba = np.concatenate(proba_lst)\n paths = np.concatenate(path_lst)\n loss = running_loss / len(predicted)\n \n return labels, predicted, loss, proba, paths\n\ndef trainModel(ws, modelTag, epochNum, trainLoader, validLoader, device, lr=3e-4, weightDecay=9e-5):\n #\n model = Net()\n model = model.to(device)\n optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=weightDecay)\n scheduler = ReduceLROnPlateau(optimizer, 'min', verbose=True, patience=5, factor=0.8)\n\n outputlogFilePath = f'./traces/{ws}/logs'\n outputtracesPath = f'./traces/{ws}'\n #shutil.rmtree(outputtracesPath)\n #os.mkdir(outputtracesPath)\n\n result_lst = list()\n\n message = '----------'\n with open(outputlogFilePath, 'a') as writer:\n writer.write(message + '\\n')\n print(message)\n \n for epoch in range(epochNum):\n\n tlabel, tpredicted, tloss = train(model, optimizer, trainLoader, device)\n vlabel, vpredicted, vloss, vproba, vproba = assess(model, validLoader, device)\n\n message = f'Train: {modelTag} '\n message += '[{:04d}] '.format(epoch)\n\n tf1score = f1_score(tlabel, tpredicted)\n message += 'TF1: {:2.4f}, '.format(tf1score*100)\n message += 'Tloss: {:2.8f}, '.format(tloss)\n\n vf1score = f1_score(vlabel, vpredicted)\n message += 'VF1: {:2.4f}, '.format(vf1score*100)\n message += 'VLoss: {:2.8f},'.format(vloss) \n \n with open(outputlogFilePath, 'a') as writer:\n writer.write(message + '\\n')\n print(message)\n\n modelOutputPath = f'{outputtracesPath}/model_{modelTag}_{epoch:03d}.pth'\n torch.save(model.state_dict(), modelOutputPath)\n result_lst.append((epoch, modelOutputPath, vlabel, vpredicted, vproba, vf1score, vloss, tf1score, tloss))\n\n scheduler.step(tloss)\n\n df = pd.DataFrame(result_lst, \n columns=['epoch', 'path', 'labels', 'predicted', 'proba', 'vf1score', 'vloss', 'tf1score', 'tloss'])\n df.to_parquet(f'{outputtracesPath}/{modelTag}.parquet')\n\n message = '----------'\n with open(outputlogFilePath, 'a') as writer:\n writer.write(message + '\\n')\n print(message)\n\n return df\n\ndef evaluate(ws, modelPathList, dataloader, device, numberFragments=1):\n modelResultList = []\n outputlogFilePath = f'./traces/{ws}/logs'\n \n for modelPath in modelPathList:\n for fragment in range(numberFragments):\n mdl = Net().to(device)\n mdl.load_state_dict(torch.load(modelPath))\n mdl.eval()\n modelResult = assess(mdl, dataloader, device)\n modelF1Score = f1_score(modelResult[0], modelResult[1])\n modelResultList.append((modelPath, modelF1Score,) + modelResult)\n message = f'Evaluate: '\n message += f'ModelPath={modelPath} Fragment={fragment:02d} '\n message += f'score={modelF1Score}'\n print(message)\n with open(outputlogFilePath, 'a') as writer:\n writer.write(message + '\\n')\n return pd.DataFrame(modelResultList, columns=['name', 'f1score', 'Truth', 'Predicted', 'loss', 'Proba', 'Path'])\n\ndef getDataloaders(dataset_df, test_df, batchSize=32, numWorkers=16, trainPercentage=0.7, validPercentage=0.8):\n rand_idx = np.random.permutation(len(dataset_df))\n train_df = dataset_df.iloc[rand_idx[:int(trainPercentage * len(dataset_df))]]\n valid_df = dataset_df.iloc[rand_idx[int(trainPercentage * len(dataset_df)):]]\n #test_df = dataset_df.iloc[rand_idx[int(validPercentage * 
len(dataset_df)):]]\n\n print(len(train_df))\n print(train_df.label.value_counts())\n print(len(valid_df))\n print(valid_df.label.value_counts())\n print(len(test_df))\n print(test_df.label.value_counts())\n \n trainDataset = SampleDataset(train_df.filePath.values, train_df.label.values)\n trainLoader = DataLoader(trainDataset, batch_size=batchSize, shuffle=True, num_workers=numWorkers)\n\n validDataset = SampleDataset(valid_df.filePath.values, valid_df.label.values)\n validLoader = DataLoader(validDataset, batch_size=2*batchSize, shuffle=False, num_workers=numWorkers)\n\n testDataset = SampleDataset(test_df.filePath.values, test_df.label.values)\n testLoader = DataLoader(testDataset, batch_size=2*batchSize, shuffle=False, num_workers=numWorkers)\n \n return trainLoader, validLoader, testLoader\n\ndef evalDataset(ws, result_df, probaUpperBorn = 0.9, probaLowerBorn = 0.1):\n outputlogFilePath = f'./traces/{ws}/logs'\n results = np.vstack(result_df.Proba.values)\n\n truth = result_df.Truth.iloc[0]\n paths = result_df.Path.iloc[0]\n result_mean = results.mean(axis=0)\n predicted = (result_mean > 0.5).astype('int')\n f1score = f1_score(truth, predicted)\n\n vtruth = truth[(result_mean >= probaUpperBorn) | (result_mean <= probaLowerBorn)]\n vpaths = paths[(result_mean >= probaUpperBorn) | (result_mean <= probaLowerBorn)]\n vresult_prob = result_mean[(result_mean >= probaUpperBorn) | (result_mean <= probaLowerBorn)]\n vpredicted = (vresult_prob > 0.5).astype('int')\n vcoverage = (len(vtruth)/len(truth))\n vextendSize = len(vtruth)\n vf1score = f1_score(vtruth, vpredicted)\n\n etruth = truth[(result_mean < probaUpperBorn) & (result_mean > probaLowerBorn)]\n epaths = paths[(result_mean < probaUpperBorn) & (result_mean > probaLowerBorn)]\n eresult_prob = result_mean[(result_mean < probaUpperBorn) & (result_mean > probaLowerBorn)]\n epredicted = (eresult_prob > 0.5).astype('int')\n ecoverage = (len(etruth)/len(truth))\n erestSize = len(etruth)\n ef1score = f1_score(etruth, epredicted)\n\n message = f'Extend: '\n message += f'f1score={f1score*100:2.4f}, '\n message += f'vcoverage={vcoverage*100:2.4f}, vf1score={vf1score*100:2.4f}, vexentdSize={vextendSize}, '\n message += f'ecoverage={ecoverage*100:2.4f}, ef1score={ef1score*100:2.4f}, erestSize={erestSize}'\n\n print(message)\n with open(outputlogFilePath, 'a') as writer:\n writer.write(message + '\\n')",
"_____no_output_____"
],
[
"# \nws = 'studyWS01'\nepochNum = 100\ndevice = torch.device('cuda:5')\nensembleSize = 10\ntrainPercentageParam = 0.8\nvalidPercentageParam = 0.9\n\noutputlogFilePath = f'./traces/{ws}/logs'\noutputtracesPath = f'./traces/{ws}'\nos.mkdir(outputtracesPath)",
"_____no_output_____"
],
[
"test_df = pd.read_parquet('dataset/androzooDone_meta.parquet')\ntest_df['label'] = (test_df.vt_detection == 0).apply(int)\ntest_df['filePath'] = '/ws/mnt/local/data/output/datasets/zoo/' + test_df.sha256",
"_____no_output_____"
],
[
"dataset_metaList = [10000, 20000, 50000, 100000]\nfor sizeMeta in dataset_metaList:\n\n currentTag = str(sizeMeta)\n\n message = '######## '\n message += currentTag\n\n with open(outputlogFilePath, 'a') as writer:\n writer.write(message + '\\n')\n print(message)\n\n #\n dataset_df = test_df.sample(sizeMeta, random_state=54)\n\n #\n trainLoader, validLoader, testLoader = getDataloaders(dataset_df, test_df, trainPercentage=trainPercentageParam, \n validPercentage=validPercentageParam)\n\n #\n models_df = trainModel(ws, f'train_{currentTag}', epochNum, trainLoader, validLoader, device)\n models_df.sort_values(by=['vloss', 'tloss'], inplace=True)\n selectedModelPaths = models_df.path.iloc[:ensembleSize].tolist()\n\n #\n evalresult_df = evaluate(ws, selectedModelPaths, testLoader, device)\n\n #\n evalDataset(ws, evalresult_df, probaUpperBorn = 0.8, probaLowerBorn = 0.2)\n\n #\n outputPath = f'traces/{ws}/{currentTag}.pickle'\n currentResults = pd.DataFrame([(currentTag, models_df, evalresult_df)], columns=['TimeTag', 'models', 'evalResuls'])\n currentResults.to_pickle(outputPath)\n\n #\n message = '########'\n with open(outputlogFilePath, 'a') as writer:\n writer.write(message + '\\n')\n print(message)",
"######## 10000\n8000\n1 6450\n0 1550\nName: label, dtype: int64\n2000\n1 1608\n0 392\nName: label, dtype: int64\n3671721\n1 2960498\n0 711223\nName: label, dtype: int64\n----------\nTrain: train_10000 [0000] TF1: 91.6001, Tloss: 0.01118719, VF1: 94.1808, VLoss: 0.00434575,\nTrain: train_10000 [0001] TF1: 93.9708, Tloss: 0.00859743, VF1: 93.9897, VLoss: 0.00422207,\nTrain: train_10000 [0002] TF1: 94.6473, Tloss: 0.00780043, VF1: 93.7150, VLoss: 0.00498325,\nTrain: train_10000 [0003] TF1: 94.5860, Tloss: 0.00762491, VF1: 94.4229, VLoss: 0.00417069,\nTrain: train_10000 [0004] TF1: 95.2834, Tloss: 0.00699199, VF1: 93.5897, VLoss: 0.00510785,\nTrain: train_10000 [0005] TF1: 95.5208, Tloss: 0.00671948, VF1: 93.9003, VLoss: 0.00478910,\nTrain: train_10000 [0006] TF1: 95.5640, Tloss: 0.00658481, VF1: 93.3667, VLoss: 0.00446015,\nTrain: train_10000 [0007] TF1: 95.5589, Tloss: 0.00651030, VF1: 93.2519, VLoss: 0.00613624,\nTrain: train_10000 [0008] TF1: 95.6456, Tloss: 0.00626072, VF1: 69.4787, VLoss: 0.01247621,\nTrain: train_10000 [0009] TF1: 95.8245, Tloss: 0.00612385, VF1: 94.1004, VLoss: 0.00528070,\nTrain: train_10000 [0010] TF1: 96.0170, Tloss: 0.00602754, VF1: 93.8453, VLoss: 0.00553848,\nTrain: train_10000 [0011] TF1: 95.8166, Tloss: 0.00603877, VF1: 94.0887, VLoss: 0.00420522,\nTrain: train_10000 [0012] TF1: 96.1293, Tloss: 0.00580834, VF1: 94.3396, VLoss: 0.00538907,\nTrain: train_10000 [0013] TF1: 96.2261, Tloss: 0.00560039, VF1: 94.7054, VLoss: 0.00467081,\nTrain: train_10000 [0014] TF1: 96.1737, Tloss: 0.00570706, VF1: 94.2813, VLoss: 0.00440096,\nTrain: train_10000 [0015] TF1: 96.1705, Tloss: 0.00557351, VF1: 94.4099, VLoss: 0.00563976,\nTrain: train_10000 [0016] TF1: 96.4492, Tloss: 0.00553318, VF1: 94.6391, VLoss: 0.00452580,\nTrain: train_10000 [0017] TF1: 96.5439, Tloss: 0.00546701, VF1: 94.2399, VLoss: 0.00488487,\nTrain: train_10000 [0018] TF1: 96.1132, Tloss: 0.00535311, VF1: 94.9653, VLoss: 0.00430818,\nTrain: train_10000 [0019] TF1: 96.5753, Tloss: 0.00505385, VF1: 94.6262, VLoss: 0.00426432,\nTrain: train_10000 [0020] TF1: 96.2412, Tloss: 0.00540487, VF1: 84.2509, VLoss: 0.00924311,\nTrain: train_10000 [0021] TF1: 96.5051, Tloss: 0.00525261, VF1: 94.0830, VLoss: 0.00532114,\nTrain: train_10000 [0022] TF1: 96.5350, Tloss: 0.00519163, VF1: 95.3846, VLoss: 0.00465617,\nTrain: train_10000 [0023] TF1: 96.6297, Tloss: 0.00514930, VF1: 92.7673, VLoss: 0.00501266,\nTrain: train_10000 [0024] TF1: 96.4522, Tloss: 0.00508962, VF1: 94.2683, VLoss: 0.00442061,\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code"
]
] |
d0ee30827688c2df4b2c06ee94a004ff0a6ad660
| 17,635 |
ipynb
|
Jupyter Notebook
|
scripts/Session3.ipynb
|
jurquiza/Practical_systems_biology_plants
|
77259979223b240628a350e2da9f22250f9a3f2c
|
[
"MIT"
] | null | null | null |
scripts/Session3.ipynb
|
jurquiza/Practical_systems_biology_plants
|
77259979223b240628a350e2da9f22250f9a3f2c
|
[
"MIT"
] | null | null | null |
scripts/Session3.ipynb
|
jurquiza/Practical_systems_biology_plants
|
77259979223b240628a350e2da9f22250f9a3f2c
|
[
"MIT"
] | null | null | null | 169.567308 | 15,436 | 0.913184 |
[
[
[
"# Transcriptional regulation ",
"_____no_output_____"
]
],
[
[
"%pylab inline \nimport pandas as pd\nimport tellurium as te",
"Populating the interactive namespace from numpy and matplotlib\n"
],
[
"trans_regulation = te.loadAntimonyModel('Transcriptional_regulation.txt')\nresults = trans_regulation.simulate(0,40,100)\nplot(results['time'],results['[B]'])\n\ntrans_regulation = te.loadAntimonyModel('Transcriptional_regulation.txt')\ntrans_regulation.setValue('kd',4)\nresults = trans_regulation.simulate(0,40,100)\nplot(results['time'],results['[B]'])\n\ntrans_regulation = te.loadAntimonyModel('Transcriptional_regulation.txt')\ntrans_regulation.setValue('kd',1)\nresults = trans_regulation.simulate(0,40,100)\nplot(results['time'],results['[B]'])\nylim(0,3)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code"
]
] |
d0ee40be81c8dd2e2396b986b0fece6caee617f6
| 4,200 |
ipynb
|
Jupyter Notebook
|
dictionary.ipynb
|
manoj1002-wq/LetsUpgrade-python
|
59adc19de68b1107cc01f71ef4d0f797f49bb2b0
|
[
"MIT"
] | null | null | null |
dictionary.ipynb
|
manoj1002-wq/LetsUpgrade-python
|
59adc19de68b1107cc01f71ef4d0f797f49bb2b0
|
[
"MIT"
] | null | null | null |
dictionary.ipynb
|
manoj1002-wq/LetsUpgrade-python
|
59adc19de68b1107cc01f71ef4d0f797f49bb2b0
|
[
"MIT"
] | null | null | null | 17.79661 | 112 | 0.45 |
[
[
[
"# Dictionary and its functions",
"_____no_output_____"
]
],
[
[
"dict = {\"name\":\"Manoj\",\"age\":22,\"brand\":\"audi\",\"number\":\"9999\",\"color\":\"white\"}",
"_____no_output_____"
],
[
"dict",
"_____no_output_____"
],
[
"# Returns the value of the specified key\n\ndict.get(\"brand\")",
"_____no_output_____"
],
[
"# Returns a list containing a tuple for each key value pair\n\ndict.items()",
"_____no_output_____"
],
[
"# Returns a list containing the dictionary's keys.\n\ndict.keys()",
"_____no_output_____"
],
[
"# Removes the element with the specified key\n\ndict.pop(\"name\")",
"_____no_output_____"
],
[
"# Returns a copy of the dictionary\n\ndict.copy()",
"_____no_output_____"
],
[
"# Removes all the elements from the dictionary\n\ndict.clear()",
"_____no_output_____"
],
[
"dict",
"_____no_output_____"
],
[
"# Return the number of items in a list\n\nlen(dict)",
"_____no_output_____"
]
],
[
[
"# The End",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d0ee4123c81bb4fb2d6a2b5615c0ccff93619918
| 31,608 |
ipynb
|
Jupyter Notebook
|
visualize.ipynb
|
olafmersmann/lhc-generator
|
a62db225e582cc8a25cae77179c294a03fabe5f4
|
[
"MIT"
] | null | null | null |
visualize.ipynb
|
olafmersmann/lhc-generator
|
a62db225e582cc8a25cae77179c294a03fabe5f4
|
[
"MIT"
] | 1 |
2021-01-07T13:42:15.000Z
|
2021-01-12T17:26:50.000Z
|
visualize.ipynb
|
olafmersmann/lhc-generator
|
a62db225e582cc8a25cae77179c294a03fabe5f4
|
[
"MIT"
] | null | null | null | 445.183099 | 30,312 | 0.953176 |
[
[
[
"import pandas as pd\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"x = np.loadtxt(\"x.txt\")\nplt.scatter(x[:,0], x[:,2])",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code"
]
] |
d0ee5a810f42384aa02bd125297469b01b61c3e1
| 8,981 |
ipynb
|
Jupyter Notebook
|
test/learning/tensorflow/test_TF_notebook.ipynb
|
AndyLc/snorkel
|
4fde903615bd4da1db106d41c05f14d563e3ac4f
|
[
"Apache-2.0"
] | 30 |
2019-08-22T19:27:59.000Z
|
2022-03-13T22:03:15.000Z
|
test/learning/tensorflow/test_TF_notebook.ipynb
|
AndyLc/snorkel
|
4fde903615bd4da1db106d41c05f14d563e3ac4f
|
[
"Apache-2.0"
] | 2 |
2019-08-22T16:51:58.000Z
|
2022-03-21T02:59:18.000Z
|
test/learning/tensorflow/test_TF_notebook.ipynb
|
AndyLc/snorkel
|
4fde903615bd4da1db106d41c05f14d563e3ac4f
|
[
"Apache-2.0"
] | 31 |
2019-08-22T19:28:08.000Z
|
2022-03-23T12:50:49.000Z
| 24.947222 | 313 | 0.566863 |
[
[
[
"# Testing `TFNoiseAwareModel`\n\nWe'll start by testing the `textRNN` model on a categorical problem from `tutorials/crowdsourcing`. In particular we'll test for (a) basic performance and (b) proper construction / re-construction of the TF computation graph both after (i) repeated notebook calls, and (ii) with `GridSearch` in particular.",
"_____no_output_____"
]
],
[
[
"%load_ext autoreload\n%autoreload 2\n%matplotlib inline\n\nimport os\nos.environ['SNORKELDB'] = 'sqlite:///{0}{1}crowdsourcing.db'.format(os.getcwd(), os.sep)\n\nfrom snorkel import SnorkelSession\nsession = SnorkelSession()",
"_____no_output_____"
]
],
[
[
"### Load candidates and training marginals",
"_____no_output_____"
]
],
[
[
"from snorkel.models import candidate_subclass\nfrom snorkel.contrib.models.text import RawText\nTweet = candidate_subclass('Tweet', ['tweet'], cardinality=5)\ntrain_tweets = session.query(Tweet).filter(Tweet.split == 0).order_by(Tweet.id).all()\nlen(train_tweets)",
"_____no_output_____"
],
[
"from snorkel.annotations import load_marginals\ntrain_marginals = load_marginals(session, train_tweets, split=0)\ntrain_marginals.shape",
"_____no_output_____"
]
],
[
[
"### Train `LogisticRegression`",
"_____no_output_____"
]
],
[
[
"# Simple unigram featurizer\ndef get_unigram_tweet_features(c):\n for w in c.tweet.text.split():\n yield w, 1\n\n# Construct feature matrix\nfrom snorkel.annotations import FeatureAnnotator\nfeaturizer = FeatureAnnotator(f=get_unigram_tweet_features)\n\n%time F_train = featurizer.apply(split=0)\nF_train",
"_____no_output_____"
],
[
"%time F_test = featurizer.apply_existing(split=1)\nF_test",
"_____no_output_____"
],
[
"from snorkel.learning.tensorflow import LogisticRegression\n\nmodel = LogisticRegression(cardinality=Tweet.cardinality)\nmodel.train(F_train.todense(), train_marginals)",
"_____no_output_____"
]
],
[
[
"### Train `SparseLogisticRegression`\n\nNote: Testing doesn't currently work with `LogisticRegression` above, but no real reason to use that over this...",
"_____no_output_____"
]
],
[
[
"from snorkel.learning.tensorflow import SparseLogisticRegression\n\nmodel = SparseLogisticRegression(cardinality=Tweet.cardinality)\nmodel.train(F_train, train_marginals, n_epochs=50, print_freq=10)",
"_____no_output_____"
],
[
"import numpy as np\ntest_labels = np.load('crowdsourcing_test_labels.npy')\nacc = model.score(F_test, test_labels)\nprint(acc)\nassert acc > 0.6",
"_____no_output_____"
],
[
"# Test with batch size s.t. N % batch_size == 1...\nmodel.score(F_test, test_labels, batch_size=9)",
"_____no_output_____"
]
],
[
[
"### Train basic LSTM\n\nWith dev set scoring during execution (note we use test set here to be simple)",
"_____no_output_____"
]
],
[
[
"from snorkel.learning.tensorflow import TextRNN\ntest_tweets = session.query(Tweet).filter(Tweet.split == 1).order_by(Tweet.id).all()\n\ntrain_kwargs = {\n 'dim': 100,\n 'lr': 0.001,\n 'n_epochs': 25,\n 'dropout': 0.2,\n 'print_freq': 5\n}\nlstm = TextRNN(seed=123, cardinality=Tweet.cardinality)\nlstm.train(train_tweets, train_marginals, X_dev=test_tweets, Y_dev=test_labels, **train_kwargs)",
"_____no_output_____"
],
[
"acc = lstm.score(test_tweets, test_labels)\nprint(acc)\nassert acc > 0.60",
"_____no_output_____"
],
[
"# Test with batch size s.t. N % batch_size == 1...\nlstm.score(test_tweets, test_labels, batch_size=9)",
"_____no_output_____"
]
],
[
[
"### Run `GridSearch`",
"_____no_output_____"
]
],
[
[
"from snorkel.learning.utils import GridSearch\n\n# Searching over learning rate\nparam_ranges = {'lr': [1e-3, 1e-4], 'dim': [50, 100]}\nmodel_class_params = {'seed' : 123, 'cardinality': Tweet.cardinality}\nmodel_hyperparams = {\n 'dim': 100,\n 'n_epochs': 20,\n 'dropout': 0.1,\n 'print_freq': 10\n}\nsearcher = GridSearch(TextRNN, param_ranges, train_tweets, train_marginals,\n model_class_params=model_class_params,\n model_hyperparams=model_hyperparams)\n\n# Use test set here (just for testing)\nlstm, run_stats = searcher.fit(test_tweets, test_labels)",
"_____no_output_____"
],
[
"acc = lstm.score(test_tweets, test_labels)\nprint(acc)\nassert acc > 0.60",
"_____no_output_____"
]
],
[
[
"### Reload saved model outside of `GridSearch`",
"_____no_output_____"
]
],
[
[
"lstm = TextRNN(seed=123, cardinality=Tweet.cardinality)\nlstm.load('TextRNN_best', save_dir='checkpoints/grid_search')\nacc = lstm.score(test_tweets, test_labels)\nprint(acc)\nassert acc > 0.60",
"_____no_output_____"
]
],
[
[
"### Reload a model with different structure",
"_____no_output_____"
]
],
[
[
"lstm.load('TextRNN_0', save_dir='checkpoints/grid_search')\nacc = lstm.score(test_tweets, test_labels)\nprint(acc)\nassert acc < 0.60",
"_____no_output_____"
]
],
[
[
"# Testing `GenerativeModel`",
"_____no_output_____"
],
[
"### Testing `GridSearch` on crowdsourcing data",
"_____no_output_____"
]
],
[
[
"from snorkel.annotations import load_label_matrix\nimport numpy as np\n\nL_train = load_label_matrix(session, split=0)\ntrain_labels = np.load('crowdsourcing_train_labels.npy')",
"_____no_output_____"
],
[
"from snorkel.learning import GenerativeModel\n\n# Searching over learning rate\nsearcher = GridSearch(GenerativeModel, {'epochs': [0, 10, 30]}, L_train)\n\n# Use training set labels here (just for testing)\ngen_model, run_stats = searcher.fit(L_train, train_labels)",
"_____no_output_____"
],
[
"acc = gen_model.score(L_train, train_labels)\nprint(acc)\nassert acc > 0.97",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0ee6b941f1ad91ced34a107bbaeb27eac2314c2
| 15,075 |
ipynb
|
Jupyter Notebook
|
JupySessions/USA-covidtracking-com.ipynb
|
AlainLich/COVID-Data
|
43d7f950c86270bfe411af8bc899464f0599f48e
|
[
"MIT"
] | null | null | null |
JupySessions/USA-covidtracking-com.ipynb
|
AlainLich/COVID-Data
|
43d7f950c86270bfe411af8bc899464f0599f48e
|
[
"MIT"
] | 3 |
2020-05-16T07:29:01.000Z
|
2021-08-29T10:04:17.000Z
|
JupySessions/USA-covidtracking-com.ipynb
|
AlainLich/COVID-Data
|
43d7f950c86270bfe411af8bc899464f0599f48e
|
[
"MIT"
] | null | null | null | 28.389831 | 192 | 0.564842 |
[
[
[
"# Analyze population data from https://covidtracking.com\n\n\n**Note:** This is a Jupyter notebook which is also available as its executable export as a Python 3 script (therefore with automatically generated comments).",
"_____no_output_____"
],
[
"### Sept 29,2021: Obsolete data\nOur source https://covidtracking.com/data/api says:\n- `As of March 7, 2021 we are no longer collecting new data. Learn about available federal data at https://covidtracking.com/analysis-updates/federal-covid-data-101-how-to-find-data.`\n - https://covidtracking.com/analysis-updates/simple-covid-data\n - https://covidtracking.com/about-data/data-summary\n - https://covidtracking.com/about-data/federal-resources\n\n**The following loads and analyses data up to March 7, 2021.**",
"_____no_output_____"
],
[
"# Libraries",
"_____no_output_____"
]
],
[
[
"import sys,os\naddPath= [os.path.abspath(\"../venv/lib/python3.9/site-packages/\"),\n os.path.abspath(\"../source\")]\naddPath.extend(sys.path)\nsys.path = addPath",
"_____no_output_____"
],
[
"# Sys import\nimport sys, os, re\n# Common imports\nimport math\nimport numpy as NP\nimport numpy.random as RAND\nimport scipy.stats as STATS\nfrom scipy import sparse\nfrom scipy import linalg\n\n# Better formatting functions\nfrom IPython.display import display, HTML\nfrom IPython import get_ipython\n\nimport matplotlib as MPL\nimport matplotlib.pyplot as PLT\nimport seaborn as SNS\nSNS.set(font_scale=1)\n\n# Python programming\nfrom itertools import cycle\nfrom time import time\nimport datetime\n\n# Using pandas\nimport pandas as PAN\nimport xlrd",
"_____no_output_____"
],
[
"sys.path.append('/home/alain/test/MachLearn/COVID/source')",
"_____no_output_____"
],
[
"import libApp.appUSA as appUSA",
"_____no_output_____"
],
[
"import warnings\nwarnings.filterwarnings('ignore')\nprint(\"For now, reduce python warnings, I will look into this later\")",
"_____no_output_____"
]
],
[
[
"### Import my own modules\nThe next cell attempts to give user some information if things improperly setup.\nIntended to work both in Jupyter and when executing the Python file directly.",
"_____no_output_____"
]
],
[
[
"if not get_ipython() is None and os.path.abspath(\"../source/\") not in sys.path:\n sys.path.append(os.path.abspath(\"../source/\"))\ntry:\n from lib.utilities import *\n from lib.figureHelpers import *\n from lib.DataMgrJSON import *\n from lib.DataMgr import *\n from lib.pandaUtils import *\nexcept Exception as err:\n print(\"Could not find library 'lib' with contents 'DataGouvFr' \")\n if get_ipython() is None:\n print(\"Check the PYTHONPATH environment variable which should point to 'source' wich contains 'lib'\")\n else:\n print(\"You are supposed to be running in JupySessions, and '../source/lib' should exist\")\n raise err",
"_____no_output_____"
],
[
"from IPython.core.display import display, HTML\ndisplay(HTML(\"<style>.container { width:100% !important; }</style>\"))",
"_____no_output_____"
]
],
[
[
"## Check environment\n\nIt is expected that:\n- your working directory is named `JupySessions`, \n- that it has subdirectories \n - `images/*` where generated images may be stored to avoid overcrowding. \n- At the same level as your working dir there should be directories \n - `../data` for storing input data and \n - `../source` for python scripts.\n \nMy package library is in `../source/lib`, and users running under Python (not in Jupyter) should\nset their PYTHONPATH to include \"../source\" ( *or whatever appropriate* ).",
"_____no_output_____"
]
],
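[
    [
        "# A plain-Python sketch of the directory-layout check described above, for illustration only; the\n# actual check is presumably performed by checkSetup() in the next cell.\nimport os\nfor d in ('images', '../data', '../source'):\n    status = 'ok' if os.path.isdir(d) else 'MISSING'\n    print(f'{d}: {status}')",
        "_____no_output_____"
    ]
],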
[
[
"checkSetup(chap=\"Chap04\")\nImgMgr = ImageMgr(chapdir=\"Chap04\")",
"_____no_output_____"
]
],
[
[
"# Load Data",
"_____no_output_____"
],
[
"## Functions",
"_____no_output_____"
],
[
"## Load CSV and XLSX data from remote \nThe `dataFileVMgr` will manage a cache of data files in `../dataUSCovidTrack`.\n\nWe check what is in the cache/data directory; for each file, we identify the latest version, \nand list this below to make sure. Files of interest are documented in `.filespecs.json`\n\nConsulted: https://github.com/COVID19Tracking/covid-tracking-api\n \nDownloaded: see `.filespecs.json`",
"_____no_output_____"
]
],
[
[
"dataFileVMgr = manageAndCacheFilesJSONHandwritten(\"../dataUSCovidTrack\")",
"_____no_output_____"
],
[
"dataFileVMgr.getRemoteInfo()\ndataFileVMgr.updatePrepare()\ndataFileVMgr.cacheUpdate()",
"_____no_output_____"
],
[
"print(\"Most recent versions of files in data directory:\")\nfor f in dataFileVMgr.listMostRecent() :\n print(f\"\\t{f}\")",
"_____no_output_____"
],
[
"last = lambda x: dataFileVMgr.getRecentVersion(x,default=True)",
"_____no_output_____"
]
],
[
[
"This ensures we load the most recent version, so that it is not required to update the list \nbelow. The timestamps shown in the following sequence will be update by the call to `getRecentVersion`.",
"_____no_output_____"
]
],
[
[
"USStatesDailyCSV = last('CTStatesDaily.csv' ) \nUSStatesInfoCSV = last('CTStatesInfo.csv')\nUSDailyCSV = last('CTUSDaily.csv')\n\nUSAPopChangeCSV = last('USACensusPopchange.csv') \nUSAPopChangeRankCSV = last('USACensusPopchangeRanks.csv')",
"_____no_output_____"
]
],
[
[
"Now load the stuff",
"_____no_output_____"
]
],
[
[
"ad = lambda x: \"../dataUSCovidTrack/\"+x\n\ndata_USStatesDaily = read_csvPandas(ad(USStatesDailyCSV) , error_bad_lines=False, sep=\",\" )\ndata_USStatesInfo = read_csvPandas(ad(USStatesInfoCSV), error_bad_lines=False, sep=\",\" )\ndata_USDaily = read_csvPandas(ad(USDailyCSV), error_bad_lines=False, sep=\",\" )\ndata_USAPopChange = read_csvPandas(ad(USAPopChangeCSV) , error_bad_lines=False, sep=\",\" )\ndata_USAPopChangeRank = read_csvPandas(ad(USAPopChangeRankCSV), error_bad_lines=False, sep=\",\" )",
"_____no_output_____"
]
],
[
[
"Show the shape of the loaded data:",
"_____no_output_____"
]
],
[
[
"def showBasics(data,dataName):\n print(f\"{dataName:24}\\thas shape {data.shape}\")\n\ndataListDescr = ( (data_USStatesDaily, \"data_USStatesDaily\"),\n (data_USStatesInfo, \"data_USStatesInfo\"),\n (data_USDaily , \"data_USDaily\"),\n (data_USAPopChange, \"data_USAPopChange\"),\n (data_USAPopChangeRank, \"data_USAPopChangeRank\"),\n )\n \nfor (dat,name) in dataListDescr:\n showBasics(dat,name)\n",
"_____no_output_____"
],
[
"for (dat,name) in dataListDescr:\n if name[0:5]==\"meta_\": continue\n print(f\"\\nDescription of data in '{name}'\\n\")\n display(dat.describe().transpose())",
"_____no_output_____"
],
[
"for (dat,name) in dataListDescr:\n if name[0:5]==\"meta_\": continue\n print(f\"\\nInformation about '{name}'\\n\")\n dat.info()",
"_____no_output_____"
]
],
[
[
"### Get demographics information\nThe metadata is in `../dataUSCovidTrack/*.pdf`. We need to preprocess the demographics information for ease of use below. Notice that column `STATE` features state's **FIPS codes**.",
"_____no_output_____"
]
],
[
[
"demogrCols=(\"SUMLEV\",\"STATE\",\"NAME\",\"POPESTIMATE2019\" )\ndemogrX = data_USAPopChange.loc[:,demogrCols]\ndemogrX[\"SUMLEV\"]== 40\ndemogr = demogrX[demogrX[\"SUMLEV\"]== 40 ].copy() ",
"_____no_output_____"
],
[
"dtCols = ('date','fips', 'state', \n 'positive', 'negative', \n 'hospitalizedCurrently', 'hospitalizedCumulative', \n 'inIcuCurrently', 'inIcuCumulative',\n 'onVentilatorCurrently', 'onVentilatorCumulative', \n 'recovered','death', 'hospitalized'\n )",
"_____no_output_____"
],
[
"dt = data_USStatesDaily.loc[ :, dtCols].copy()\ndt[\"dateNum\"] = PAN.to_datetime(dt.loc[:,\"date\"], format=\"%Y%m%d\")\ndateStart = dt[\"dateNum\"].min()\ndateEnd = dt[\"dateNum\"].max() \ndateSpan = dateEnd - dateStart \nprint(f\"Our statistics span {dateSpan.days+1} days, start: {dateStart} and end {dateEnd}\")\ndt[\"elapsedDays\"] = (dt[\"dateNum\"] - dateStart).dt.days\n\ndt = dt.set_index(\"state\")\ndtg = dt.groupby(\"state\")\n\n#dtx = dt[dt.index == \"Europe\"]\n#dtg = dtx.groupby(\"countriesAndTerritories\")",
"_____no_output_____"
]
],
[
[
"Now, the figure making process is generalized into this class, since we plan to emit multiple figures.",
"_____no_output_____"
],
[
"First attempt, just get the first!",
"_____no_output_____"
]
],
[
[
"plotCols=(\"recovered\",\"death\",\"hospitalized\")\n\npsFig = appUSA.perStateFigure(dateStart)\npsFig.getDemographics(data_USAPopChange)\npsFig.initPainter(subnodeSpec=15, maxCol=3)\npsFig.mkImage(dtg,plotCols)\nImgMgr.save_fig(\"FIG001\")\nprint(f\"Had issues with state encodings:{psFig.abbrevIssueList}\")",
"_____no_output_____"
]
],
[
[
"## Now select States according to multiple criteria\n### Start with most populated states",
"_____no_output_____"
]
],
[
[
"tble = psFig.getPopStateTble(dtg)",
"_____no_output_____"
],
[
"mostPopulated = tble.sort_values(by=[\"pop\"], ascending=False,).iloc[:15,0].values",
"_____no_output_____"
],
[
"psFig2 = appUSA.perStateSelected(dateStart,mostPopulated)\npsFig2.getDemographics(data_USAPopChange)\npsFig2.initPainter(subnodeSpec=15, maxCol=3)\npsFig2.mkImage(dtg,plotCols)\nImgMgr.save_fig(\"FIG002\")\nprint(f\"Had issues with state encodings:{psFig2.abbrevIssueList}\")",
"_____no_output_____"
],
[
"dtgMax = dtg.max().loc[:,[\"fips\",\"death\",\"recovered\",\"hospitalized\"]]\n\ndtgMerged = PAN.merge(dtgMax.reset_index(), demogr, left_on=\"fips\", right_on=\"STATE\")\ndtgMerged[\"deathPM\"]= dtgMerged.loc[:,\"death\"]/dtgMerged.loc[:,\"POPESTIMATE2019\"]*1.0e6\n\nmostDeadly = dtgMerged.sort_values(by=[\"deathPM\"], ascending=False,).iloc[:15,0].values",
"_____no_output_____"
],
[
"psFig3 = appUSA.perStateSelected(dateStart,mostDeadly)\npsFig3.getDemographics(data_USAPopChange)\npsFig3.initPainter(subnodeSpec=15, maxCol=3)\npsFig3.mkImage(dtg,plotCols)\nImgMgr.save_fig(\"FIG003\")\nprint(f\"Had issues with state encodings:{psFig3.abbrevIssueList}\")",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
d0ee6d574bc5a13a4d47bcb50201643f54fb545b
| 107,529 |
ipynb
|
Jupyter Notebook
|
analysis/check_1M_tags.ipynb
|
ludazhao/art_history_net
|
396f7b99b3c4fe4f9da752d8f7eb0f3395752f57
|
[
"MIT"
] | 15 |
2016-12-30T21:03:48.000Z
|
2022-01-31T08:09:09.000Z
|
analysis/check_1M_tags.ipynb
|
ludazhao/ArtHistoryNet
|
396f7b99b3c4fe4f9da752d8f7eb0f3395752f57
|
[
"MIT"
] | null | null | null |
analysis/check_1M_tags.ipynb
|
ludazhao/ArtHistoryNet
|
396f7b99b3c4fe4f9da752d8f7eb0f3395752f57
|
[
"MIT"
] | null | null | null | 628.824561 | 103,208 | 0.93891 |
[
[
[
"import h5py\nimport os\nimport cPickle as pickle\nimport numpy as np\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimport collections\n\n%matplotlib inline\n%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
],
[
"(image_metadata, book_metadata, image_to_idx) = pickle.load(open(\"/data/all_metadata.pkl\", 'r'))\nimage_hdf5 = h5py.File('/data/image_data.hdf5','r')",
"_____no_output_____"
],
[
"labels = []\nwith open(\"/data/10k_aug_outputs/output_labels9800.txt\", 'r') as ifile:\n for line in ifile:\n labels.append(line.rstrip())\nprint labels",
"['animals', 'nature', 'text', 'maps', 'people', 'seals', 'miniatures', 'objects', 'architecture', 'decorations', 'landscapes', 'diagrams']\n"
],
[
"tag_to_count = collections.defaultdict(lambda: 0)\nfor i in range(98,99):\n chunk_file = \"/data/1M_tags/Chunk{}.pkl\".format(i)\n print chunk_file\n scores = pickle.load(open(chunk_file, 'r'))\n\n for idx in range(len(scores.keys())):\n tag = labels[np.argmax(scores[idx])]\n tag_to_count[np.argmax(scores[idx])] += 1\n #continue\n image_metadata[i * 5000 + idx][-1] = tag\n if tag == 'landscapes':\n print scores[idx]\n\n [img, date] = image_metadata[i * 5000 + idx][:2]\n print img, date\n plt.imshow(image_hdf5[\"Chunk{}\".format(i)][idx][:,:,0], cmap=mpl.cm.gray)\n\n break\n break",
"/data/1M_tags/Chunk98.pkl\n[ 2.94249057e-05 2.47128744e-04 2.27261207e-05 8.68173011e-05\n 1.44324731e-04 9.38013764e-05 3.33539174e-05 5.98136056e-03\n 7.21738040e-02 8.06584239e-07 9.21015680e-01 1.70685511e-04]\n000156506_0_000107_1_ 1883\n"
],
[
"chunk_file = \"/data/1M_tags/Chunk20.pkl\".format(i)\npickle.load(open(chunk_file, 'r'))[]\n#plt.imshow(image_hdf5[\"Chunk5\".format(i)][idx][:,:,0], cmap=mpl.cm.gray)",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code"
]
] |
d0ee974640389beb635661be926d4f9aa45d96ca
| 2,097 |
ipynb
|
Jupyter Notebook
|
MaterialCursoPython/Fase 2 - Manejo de datos y optimizacion/Tema 05 - Entradas y salidas de datos/Apuntes/Leccion 1 (Apuntes) - Entradas.ipynb
|
mangrovex/CursoPython
|
85b3d8a920f79a1f184b8508cf011fda238eada0
|
[
"MIT"
] | 105 |
2016-07-08T19:43:03.000Z
|
2018-10-20T14:00:14.000Z
|
Fase 2 - Manejo de datos y optimizacion/Tema 05 - Entradas y salidas de datos/Apuntes/Leccion 1 (Apuntes) - Entradas.ipynb
|
ruben69695/python-course
|
a3d3532279510fa0315a7636c373016c7abe4f0a
|
[
"MIT"
] | null | null | null |
Fase 2 - Manejo de datos y optimizacion/Tema 05 - Entradas y salidas de datos/Apuntes/Leccion 1 (Apuntes) - Entradas.ipynb
|
ruben69695/python-course
|
a3d3532279510fa0315a7636c373016c7abe4f0a
|
[
"MIT"
] | 145 |
2016-09-26T14:02:55.000Z
|
2018-10-27T06:49:28.000Z
| 19.063636 | 177 | 0.51216 |
[
[
[
"# Entradas por teclado\nYa conocemos la función input() que lee una cadena por teclado. Su único inconveniente es que debemos transformar el valor a numérico si deseamos hacer operaciones con él:",
"_____no_output_____"
]
],
[
[
"decimal = float( input(\"Introduce un número decimal con punto: \") )",
"Introduce un número decimal con punto: 3.14\n"
],
[
"valores = []",
"_____no_output_____"
],
[
"print(\"Introduce 3 valores\")\nfor x in range(3):\n valores.append( input(\"Introduce un valor >\") )",
"Introduce 3 valores\nIntroduce un valor >10\nIntroduce un valor >sdkjsdk\nIntroduce un valor >skdjs\n"
],
[
"valores",
"_____no_output_____"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d0ee9e2c5e7299974619caeaf545d1624caf6c1f
| 864,827 |
ipynb
|
Jupyter Notebook
|
10-BoostingPractice/.ipynb_checkpoints/CatBoost-checkpoint.ipynb
|
samstikhin/mlmathmech
|
de09dc86b98fb0d8ab4c954afc070438887f0752
|
[
"MIT"
] | 16 |
2019-09-17T12:50:15.000Z
|
2021-01-27T12:49:29.000Z
|
10-BoostingPractice/CatBoost.ipynb
|
samstikhin/mlmathmech
|
de09dc86b98fb0d8ab4c954afc070438887f0752
|
[
"MIT"
] | null | null | null |
10-BoostingPractice/CatBoost.ipynb
|
samstikhin/mlmathmech
|
de09dc86b98fb0d8ab4c954afc070438887f0752
|
[
"MIT"
] | 14 |
2019-09-21T21:18:26.000Z
|
2020-01-23T10:35:59.000Z
| 263.988706 | 34,119 | 0.619392 |
[
[
[
"from scripts.setup_libs import *",
"_____no_output_____"
]
],
[
[
"# [CatBoost](https://github.com/catboost/catboost)\n\nБустинг от Яндекса для категориальных фичей и много чего еще.\n\nДля начала настоятельно рекомендуется посмотреть видео. Там идет основная теория по CatBoost",
"_____no_output_____"
]
],
[
[
"from IPython.display import YouTubeVideo\nYouTubeVideo('UYDwhuyWYSo', width=640, height=360)",
"_____no_output_____"
]
],
[
[
"Резюмируя видео:\n\nCatboost строится на **Obvious Decision Tree** (ODT полное бинарное дерево) - это значит, что на каждом уровне дерева во всех вершинах идет разбиение по одному и тому же признаку. Дерево полное и симметричное. Листов - $2^H$, где $H$ - высота дерева и количество используемых фич.\n\nВ Catboost куча фичей для скорости и регуляризации.\n\nРегуляризация (стараемся делать как можно более разные деревья):\n* Чтобы базовое дерево было небольшое, обычно берется какая-то часть фич (max_features) например $0.1$ от общего числа. В силу большого количества деревьев в композиции, информация не потеряется.\n\n* При построении дерева можно использовать **бутстреп для выборки**.\n\n* При слитинге в дереве к скору можно добавлять случайную величину.\n\nСкорость: \n* Так как мы еще до обучения знаем схему дерева (потому что ODT) - мы знаем количество листьев. Количество разных значений будет равно количеству листьев, поэтому на шаге обучения базового дерева давайте приближать не **полный вектор антиградиентов** (который размера количества фич), а **вектор листов**. В этом случае сильно сокращается время выбора наилучшего сплита на каждом этапе обучения базового дерева.\n\n* Бинаризация численных данных, для ускорения нахождения наилучшего сплита. Слабая - равномерная или медианная. Хорошие **MaxLogSum**, **GreedyLogSum**\n\n* На верхних вершинах дерева делаем только один градиентный шаг, на нижних можно несколько. \n\n* **Ordered boosting**",
"_____no_output_____"
],
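[
"A minimal parameter sketch of how the ideas above surface in the CatBoost API (the values are illustrative assumptions, not recommendations from the video):\n\n```python\nfrom catboost import CatBoostRegressor\n\nmodel = CatBoostRegressor(\n    depth=6,                             # oblivious tree of height H=6, i.e. 2**6 leaves\n    boosting_type='Ordered',             # ordered boosting\n    bootstrap_type='Bayesian',           # bootstrap the sample when building each tree\n    random_strength=1.0,                 # random noise added to split scores (regularization)\n    border_count=128,                    # number of bins used to binarize numeric features\n    feature_border_type='GreedyLogSum',  # binarization strategy\n    verbose=False,\n)\n```",
"_____no_output_____"
],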
[
"# [Примеры](https://catboost.ai/docs/concepts/python-usages-examples.html#custom-objective-function) работы с CatBoost ",
"_____no_output_____"
],
[
"Еще одно очень полезное видео, но теперь уже с практикой.",
"_____no_output_____"
]
],
[
[
"from IPython.display import YouTubeVideo\nYouTubeVideo('xl1fwCza9C8', width=640, height=360)",
"_____no_output_____"
]
],
[
[
"## Простой пример",
"_____no_output_____"
]
],
[
[
"train_data = [[1, 4, 5, 6],\n [4, 5, 6, 7],\n [30, 40, 50, 60]]\n\neval_data = [[2, 4, 6, 8],\n [1, 4, 50, 60]]\n\ntrain_labels = [10, 20, 30]\n# Initialize CatBoostRegressor\nmodel = CatBoostRegressor(iterations=2,\n learning_rate=1,\n depth=2)\n# Fit model\nmodel.fit(train_data, train_labels)\n# Get predictions\npreds = model.predict(eval_data)",
"0:\tlearn: 6.1237244\ttotal: 164us\tremaining: 164us\n1:\tlearn: 4.5927933\ttotal: 519us\tremaining: 0us\n"
]
],
[
[
"## Визуализация",
"_____no_output_____"
]
],
[
[
"rng = np.random.RandomState(31337)\n\nboston = load_boston()\ny = boston['target']\nX = boston['data']\n\nkf = KFold(n_splits=3, shuffle=True, random_state=rng)\n\nX_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.25)\nX_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5)",
"_____no_output_____"
],
[
"cb = CatBoostRegressor(silent=True, eval_metric=\"MAE\", custom_metric=[\"MAPE\"])",
"_____no_output_____"
]
],
[
[
"Тут включена крутая визуализация, с которой можно поиграться, она не работает в Jupyter Lab, но работает в Jupyter Notebook",
"_____no_output_____"
]
],
[
[
"cb.fit(X_train, y_train, eval_set=[(X_val , y_val ), (X_test, y_test)], plot=True)",
"_____no_output_____"
]
],
[
[
"## Бинаризации float",
"_____no_output_____"
],
[
" Выбрать стратегию бинаризации можно установив параметр *feature_border_type*.\n \n - **Uniform**. Границы выбираются равномерно по значениям;\n - **Median**. В каждый бин попадает примерно одинаковое число различных значений;\n - **UniformAndQuantiles**. Uniform + Median;\n - **MaxLogSum, GreedyLogSum**. Максимизируется значение формулы $\\sum_{i=1}^K \\log(n_i)$, где $K$ - требуемое кол-во бинов, $n_i$ число объектов в этом бакете;\n - **MinEntropy**. Аналогично, но максимизируется энтропия: $-\\sum_{i=1}^K n_i \\log(n_i)$",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import GridSearchCV\n\nparams = {\"feature_border_type\": [\n \"Uniform\",\n \"Median\",\n \"UniformAndQuantiles\",\n \"MaxLogSum\",\n \"GreedyLogSum\",\n \"MinEntropy\"\n]}\n\ncb = CatBoostRegressor(silent=True)\ngrid = GridSearchCV(cb, params)\ngrid.fit(X, y)\n\nfor score, strategy in sorted(zip(grid.cv_results_['mean_test_score'],\n grid.cv_results_['param_feature_border_type'].data)):\n print(\"MSE: {}, strategy: {}\".format(score, strategy))",
"MSE: 0.6691938464447091, strategy: Uniform\nMSE: 0.6748412401429629, strategy: GreedyLogSum\nMSE: 0.6799599408912946, strategy: MaxLogSum\nMSE: 0.6800117523745861, strategy: MinEntropy\nMSE: 0.6809778425359567, strategy: UniformAndQuantiles\nMSE: 0.6871689084536771, strategy: Median\n"
]
],
[
[
"## Feature importance",
"_____no_output_____"
]
],
[
[
"cb = CatBoostRegressor(silent=True)\ncb.fit(X_train, y_train)\nfor value, name in sorted(zip(cb.get_feature_importance(fstr_type=\"FeatureImportance\"),\n boston[\"feature_names\"])):\n print(\"{}\\t{}\".format(name, value))\n",
"CHAS\t0.7931402360726668\nZN\t1.0494204022222386\nINDUS\t1.7426222080011684\nRAD\t2.448575683482915\nTAX\t2.793079491612796\nNOX\t3.4660541597836496\nAGE\t3.6293317827732454\nB\t3.7497167241771456\nCRIM\t6.265450147752076\nPTRATIO\t6.32854168803267\nDIS\t8.319406311024368\nLSTAT\t26.740053619030157\nRM\t32.6746075460349\n"
]
],
[
[
"# Categorical features",
"_____no_output_____"
]
],
[
[
"from catboost.datasets import titanic\ntitanic_df = titanic()\n\nX = titanic_df[0].drop('Survived',axis=1)\ny = titanic_df[0].Survived",
"_____no_output_____"
],
[
"X.head(5)",
"_____no_output_____"
],
[
"is_cat = (X.dtypes != float)\nis_cat.to_dict()",
"_____no_output_____"
],
[
"is_cat = (X.dtypes != float)\nfor feature, feat_is_cat in is_cat.to_dict().items():\n if feat_is_cat:\n X[feature].fillna(\"NAN\", inplace=True)\n\ncat_features_index = np.where(is_cat)[0]",
"_____no_output_____"
],
[
"cat_features_index",
"_____no_output_____"
],
[
"X.columns",
"_____no_output_____"
]
],
[
[
"Аналогом для класса DMatrix в катбусте служит класс **catboost.Pool**. Помимо прочего, содержит индексы категориальных факторов и описание пар для режима попарного обучения.\n\n[Подробнее](https://tech.yandex.com/catboost/doc/dg/concepts/python-reference_pool-docpage/)",
"_____no_output_____"
]
],
[
[
"from catboost import Pool\nX_train, X_test, y_train, y_test = train_test_split(X, y, train_size=.85, random_state=1234)\n\ntrain_pool = Pool(data=X_train, \n label=y_train, \n cat_features=cat_features_index, # в явном виде передаем категориальные фичи, которыми хотим работать\n feature_names=list(X_train.columns)) # названия фич, для удобной визуализации и дебага\n\ntest_pool = Pool(data=X_test, \n label=y_test, \n cat_features=cat_features_index, \n feature_names=list(X_test.columns))",
"_____no_output_____"
],
[
"from catboost import CatBoostClassifier\nfrom sklearn.metrics import roc_auc_score\n\nmodel = CatBoostClassifier(eval_metric='Accuracy', use_best_model=True, random_seed=42)\nmodel.fit(train_pool, eval_set=test_pool, metric_period=100)\ny_pred = model.predict_proba(X_test)\nroc_auc_score(y_test, y_pred[:, 1])",
"Learning rate set to 0.029583\n0:\tlearn: 0.8124174\ttest: 0.8059701\tbest: 0.8059701 (0)\ttotal: 8.47ms\tremaining: 8.47s\n100:\tlearn: 0.8533686\ttest: 0.8059701\tbest: 0.8059701 (0)\ttotal: 317ms\tremaining: 2.82s\n200:\tlearn: 0.8837517\ttest: 0.8208955\tbest: 0.8208955 (200)\ttotal: 623ms\tremaining: 2.48s\n300:\tlearn: 0.9009247\ttest: 0.8283582\tbest: 0.8283582 (300)\ttotal: 894ms\tremaining: 2.07s\n400:\tlearn: 0.9194188\ttest: 0.8358209\tbest: 0.8358209 (400)\ttotal: 1.2s\tremaining: 1.79s\n500:\tlearn: 0.9352708\ttest: 0.8358209\tbest: 0.8358209 (400)\ttotal: 1.55s\tremaining: 1.54s\n600:\tlearn: 0.9379128\ttest: 0.8432836\tbest: 0.8432836 (600)\ttotal: 1.87s\tremaining: 1.24s\n700:\tlearn: 0.9418758\ttest: 0.8358209\tbest: 0.8432836 (600)\ttotal: 2.17s\tremaining: 925ms\n800:\tlearn: 0.9471598\ttest: 0.8358209\tbest: 0.8432836 (600)\ttotal: 2.54s\tremaining: 631ms\n900:\tlearn: 0.9524439\ttest: 0.8432836\tbest: 0.8432836 (600)\ttotal: 2.85s\tremaining: 313ms\n999:\tlearn: 0.9603699\ttest: 0.8283582\tbest: 0.8432836 (600)\ttotal: 3.16s\tremaining: 0us\n\nbestTest = 0.8432835821\nbestIteration = 600\n\nShrink model to first 601 iterations.\n"
]
],
[
[
"На самом деле в Catboost происходит еще много чего интересного при обработке категорий:\n - среднее сглаживается некоторым априорным приближением;\n - по факту обучается несколько (3) модели на разных перестановках;\n - рассматриваются композиции категориальных факторов (max_ctr_complexity);\n - в момент применения модели, новые объекты приписываются в конец перестановки по обучающей выборке и, таким образом, статистика для них считается по всем имеющимся данным;\n - таргето-независимые счетчики считаются по всем данным.\n - для факторов с небольшим числом различных значений производится OneHotEncoding (параметр one_hot_max_size - максимальное значение для OneHotEncoding'а)",
"_____no_output_____"
],
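[
"A small sketch (an assumption for illustration, reusing the train_pool and test_pool built above) of how the knobs mentioned in this list are passed; the values are arbitrary:\n\n```python\nfrom catboost import CatBoostClassifier\n\nmodel_ctr = CatBoostClassifier(\n    one_hot_max_size=4,     # categories with <= 4 distinct values are one-hot encoded\n    max_ctr_complexity=2,   # consider combinations of up to 2 categorical factors\n    random_seed=42,\n    verbose=False,\n)\nmodel_ctr.fit(train_pool, eval_set=test_pool)\n```",
"_____no_output_____"
],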
[
"# [Категориальные статистики](https://catboost.ai/docs/concepts/algorithm-main-stages_cat-to-numberic.html)\n\nОдно из основных преимуществ катбуста - обработка категориальных факторов.\n\nТакие факторы заменяются на \"счетчики\": для каждого значения кат.фактора **по таргету** вычисляется некоторая **статистика** этого значения (счетчик, ctr), например, среднее значение таргета по объектам, которые имеют данное значение категориального фактора. Далее категориальный фактор заменяется на подсчитанные для него статистики (каждое значение фактора на свою статистику).\n\nБудем использовать технику кодирования категориальных признаков средним значением целевого признака.\nОсновная идея – для каждого значения категориального признака посчитать среднее значение целевого признака и заменить категориальный признак на посчитанные средние. \n\nДавайте попробуем сделать следующую операцию: \n* Возьмем категориальную фичу (один столбец). Пусть фича принимает $m$ значений: $l_1, \\ldots, l_m$\n* Заменим значение $l_k$ на $\\frac{1}{N_{l_k}}\\sum_{i \\in l_k}y_i$ - среднее значение целевой переменной для данного значения категориальной фичи.\n* Переменной в тесте будут приравниваться все средние значение данных",
"_____no_output_____"
]
],
[
[
"df_train = pd.DataFrame({'float':[1,2,3,4,5], \n 'animal': ['cat', 'dog', 'cat', 'dog', 'cat'],\n 'sign': ['rock', 'rock', 'paper', 'paper', 'paper']})\ny_train = np.array([0,1,0,1, 0])\n\ndf_test = pd.DataFrame({'float':[6,7,8,9], \n 'animal': ['cat', 'dog', 'cat', 'dog'],\n 'sign': ['rock', 'rock', 'paper', 'paper']})",
"_____no_output_____"
],
[
"import warnings\nwarnings.filterwarnings(\"ignore\")\n\ndef mean_target(df_train, y_train, df_test):\n n = len(df_train)\n cat_features = df_train.columns[df_train.dtypes == 'object'].tolist()\n float_features = df_train.columns[df_train.dtypes != 'object'].tolist()\n \n new_X_train = df_train.copy()\n new_X_train['y'] = y_train\n new_X_test = df_test.copy()\n \n \n for col in cat_features:\n mean_dict = new_X_train.groupby(col)['y'].mean().to_dict()\n new_X_train[col + '_mean'] = df_train[col].map(mean_dict) \n new_X_test[col + '_mean'] = df_test[col].map(mean_dict)\n \n return new_X_train, new_X_test",
"_____no_output_____"
],
[
"X_train, X_test = mean_target(df_train, y_train, df_test)",
"_____no_output_____"
],
[
"X_train",
"_____no_output_____"
],
[
"X_test",
"_____no_output_____"
]
],
[
[
"Данный подход лучше чем One-Hot, так как при нем мы можем серьезно вылететь за пределы памяти.",
"_____no_output_____"
],
[
"#### Важный момент. \nВ ходе подсчета статистики мы по сути сильно привязываемся к данным. Из-за чего может произойти сильное **переобучение**. ",
"_____no_output_____"
],
[
"## Накопительные статистики",
"_____no_output_____"
],
[
"Такие манипуляции очень легко могут привести к переобучению, потому что в данные подливается информация о метках объектов, после чего происходит обучение. \n\nПоэтому в катбусте делают **накопительные статистики**\n\nОсобенности работы с категориальными факторами: \n - объекты перемешиваются в случайном порядке;\n - для i-го объекта и j-го признака в перестановке **статистика** (счетчик) вычисляется по всем объектам, идущим **до него** с таким же значением признака\n - заменяем все категориальные факторы в выборке и обучаем модель\n - Тестовую же выборку просто приравниваем к средним значениям по ",
"_____no_output_____"
]
],
[
[
"def late_mean_target(df_train, df_test, y_train):\n n = len(df_train)\n cat_features = df_train.columns[df_train.dtypes == 'object'].tolist()\n num_features = df_train.columns[df_train.dtypes != 'object'].tolist()\n \n new_X_test = df_test.copy()\n new_X_train = df_train.copy()\n new_X_train['y'] = y_train\n new_X_train = new_X_train.sample(frac=1).reset_index() #shuffling\n new_X_train['ones'] = np.ones((len(X_train),)) \n \n for col in cat_features:\n mean_dict = new_X_train.groupby(col)['y'].mean().to_dict()\n new_X_test[col + '_mean'] = df_test[col].map(mean_dict) / n\n \n count = new_X_train.groupby([col])['ones'].apply(lambda x: x.cumsum())\n cum = new_X_train.groupby([col])['y'].apply(lambda x: x.cumsum())\n \n new_X_train[col + '_mean'] = (cum - new_X_train['y'])/count\n \n \n return new_X_train, new_X_test",
"_____no_output_____"
],
[
"df_train = pd.DataFrame({'float':[1,2,3,4,5], \n 'animal': ['cat', 'dog', 'cat', 'dog', 'cat'],\n 'sign': ['rock', 'rock', 'paper', 'paper', 'paper']})\ny_train = np.array([0,1,0,1, 0])\n\ndf_test = pd.DataFrame({'float':[6,7,8,9], \n 'animal': ['cat', 'dog', 'cat', 'dog'],\n 'sign': ['rock', 'rock', 'paper', 'paper']})",
"_____no_output_____"
],
[
"X_train, X_test = late_mean_target(df_train, df_test, y_train)",
"_____no_output_____"
],
[
"X_train",
"_____no_output_____"
],
[
"X_test",
"_____no_output_____"
]
],
[
[
"# Полезные ссылки \n* [Tutorial](https://github.com/catboost/tutorials)\n* [Github Catboost](https://github.com/catboost/catboost)\n* [Статья о Catboost на arxiv](https://arxiv.org/pdf/1706.09516.pdf)",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d0eea4aff0b11fc8a6e504b668ae185c64b41a63
| 65,297 |
ipynb
|
Jupyter Notebook
|
examen_1.ipynb
|
sebastiangonzalezv/examen_estadistica
|
13429fb7a0f425c650a6f3ba5fa051696a2e23c3
|
[
"MIT"
] | null | null | null |
examen_1.ipynb
|
sebastiangonzalezv/examen_estadistica
|
13429fb7a0f425c650a6f3ba5fa051696a2e23c3
|
[
"MIT"
] | 4 |
2021-08-23T20:44:21.000Z
|
2022-03-12T00:29:34.000Z
|
examen_1.ipynb
|
sebastiangonzalezv/examen_estadistica
|
13429fb7a0f425c650a6f3ba5fa051696a2e23c3
|
[
"MIT"
] | null | null | null | 59.414923 | 9,944 | 0.587362 |
[
[
[
"### Evaluación 1. Parte Computacional (60 puntos) \n#### (Elementos de Probabilidad y Estadística: 3008450)\n\nSe tiene información acerca de 694 propiedades ubicadas en el valle de aburra. La base de datos fue recolectada en el año 2015, e incluye las siguientes variables: \n1. valor comercial de la propiedad en millones de pesos `precio`\n2. el área de la propiedad en metros cuadrados `mt2`\n3. el sector donde está ubicada la propiedad `ubicacion`\n4. el estrato socieconómico al que pertenece `estrato`\n5. el número de alcobas `alcobas`\n6. el número de baños `banos`\n7. si tiene o no balcón `balcon`\n8. si tiene o no parqueadero `parqueadero`\n9. el valor de la administración en millones de pesos `administracion`\n10. el avalúo catastral en millones de pesos `avaluo` y si la propiedad tiene o no mejoras desde que fue entregado como nuevo `terminado`. \n\nLa base de datos está disponible en: [**https://tinyurl.com/hwhb769**](https://tinyurl.com/hwhb769)",
"_____no_output_____"
],
[
"Con estos datos resuelva los siguientes puntos:\n\n1. Recategorice la variable estrato así: `medio-bajo` para los estratos 2 y 3, `medio-alto` para los estratos 4 y 5, y `alto` para el estrato 6.\n2. Realice un análisis descriptivo de las variables de la base de datos y resalte las características más destacadas de cada una de ellas. Utilice todos los gráficos y resúmenes que considere pertinentes.\n3. Elabore una tabla de doble entrada donde relacione si la propiedad tiene o no mejoras con la nueva categorización de la variable estrato. ¿En cuál de los nuevos estratos se presenta la mayor proporción de propiedades que tienen mejoras? Explique.\n4. Elabore un histograma de frecuencias relativas para la variable precio. ¿Qué observa? Comente\n5. ¿Es muy diferente el comportamiento de los precios de las propiedades de acuerdo a la ubicación? Comente. Elabore los gráficos y/o resúmenes que considere pertinentes. \n6. Grafique un diagrama de dispersión de las variables avalúo y precio. ¿Qué observa? ¿Qué pasa si los puntos graficados se separan por color de acuerdo a la nueva categorización del estrato?. Explique.\n7. Calcule el coeficiente de correlación de Pearson entre precio y avalúo. ¿Qué se puede decir sobre este valor?\n8. Realice un análisis descriptivo adicional que considere pertinente sobre una o varias de las variables restantes de la base de datos.",
"_____no_output_____"
]
],
[
[
"# importar librerias\nimport pandas as pd\nfrom bokeh.server.server import Server\nfrom bokeh.application import Application\nfrom bokeh.application.handlers.function import FunctionHandler\nfrom bokeh.io import output_notebook, push_notebook, show\nfrom bokeh.layouts import column, row, gridplot\nfrom bokeh.models import ColumnDataSource, DataTable, TableColumn, NumberFormatter, Select, MultiSelect, BoxSelectTool, LassoSelectTool, CategoricalColorMapper\nfrom bokeh.plotting import figure\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom bokeh.palettes import Spectral6\nfrom bokeh.transform import factor_cmap\noutput_notebook()",
"_____no_output_____"
],
[
"df = pd.read_csv('https://raw.githubusercontent.com/fhernanb/datos/master/aptos2015',sep='\\s+', header=4)\ndf['estrato'] = df.estrato.apply(lambda x: 'medio-bajo' if x < 3.1 else('medio-alto' if x < 5.1 else 'alto'))",
"_____no_output_____"
]
],
[
[
"Recategorice la variable estrato así: medio-bajo para los estratos 2 y 3, medio-alto para los estratos 4 y 5, y alto para el estrato 6. <br>\nRealice un análisis descriptivo de las variables de la base de datos y resalte las características más destacadas de cada una de ellas. Utilice todos los gráficos y resúmenes que considere pertinentes.",
"_____no_output_____"
]
],
[
[
"def modify_doc(doc):\n \n def update():\n\n current = df[(df.estrato.isin([str(x) for x in estrato.value])) & (df.ubicacion.isin([str(x) for x in ubicacion.value])) & \n (df.alcobas.isin([int(x) for x in alcobas.value])) & (df.banos.isin([int(x) for x in banos.value])) &\n (df.parqueadero.isin([str(x) for x in parqueadero.value])) & (df.terminado.isin([str(x) for x in terminado.value]))]\n source.data = current\n source_static.data = current.describe()[['precio', 'mt2', 'alcobas', 'banos', 'administracion', 'avaluo']].reset_index()\n corr.xaxis.axis_label = ticker1.value\n corr.yaxis.axis_label = ticker2.value\n plot_source.data = {'x': current[ticker1.value], 'y': current[ticker2.value], 'legend': [str(x) for x in current[ticker4.value]],\n 'color': current[ticker4.value].replace(dict(zip(current[ticker4.value].unique(),Spectral6)))}\n hist_1, hedges_1 = np.histogram(current[ticker1.value], density=False, bins=ticker3.value)\n ph.quad(bottom=0, left=hedges_1[:-1], right=hedges_1[1:], top=hist_1)\n hist_2, hedges_2 = np.histogram(current[ticker2.value], density=False, bins=ticker3.value)\n pv.quad(bottom=hedges_2[:-1], left=0, right=hist_2, top=hedges_2[1:])\n \n def ticker_change(attr, old, new):\n update()\n \n\n # Create Input controls\n estrato = MultiSelect(title=\"estrato\", value=[str(x) for x in df.estrato.unique()], options=[str(x) for x in df.estrato.unique()])\n estrato.on_change('value',ticker_change)\n ubicacion = MultiSelect(title=\"ubicacion\", value=[str(x) for x in df.ubicacion.unique()], options=[str(x) for x in df.ubicacion.unique()])\n ubicacion.on_change('value',ticker_change)\n alcobas = MultiSelect(title=\"# de alcobas\", value=[str(x) for x in df.alcobas.unique()], options=[str(x) for x in df.alcobas.unique()])\n alcobas.on_change('value',ticker_change)\n banos = MultiSelect(title=\"# de baños\", value=[str(x) for x in df.banos.unique()], options=[str(x) for x in df.banos.unique()])\n banos.on_change('value',ticker_change)\n parqueadero = MultiSelect(title=\"parqueadero\", value=[str(x) for x in df.parqueadero.unique()], options=[str(x) for x in df.parqueadero.unique()])\n parqueadero.on_change('value',ticker_change)\n terminado = MultiSelect(title=\"terminado\", value=[str(x) for x in df.terminado.unique()], options=[str(x) for x in df.terminado.unique()])\n terminado.on_change('value',ticker_change)\n # create table view\n source = ColumnDataSource(data=dict(mt2=[], ubicacion=[], estrato=[], alcobas=[], banos=[], balcon=[], parqueadero=[], administracion=[],\n avaluo=[], terminado=[]))\n source_static = ColumnDataSource(data=dict())\n columns = [TableColumn(field=\"mt2\", title=\"tamaño\", formatter=NumberFormatter(format=\"0,0.00\")),\n TableColumn(field=\"ubicacion\", title=\"ubicacion\"),\n TableColumn(field=\"estrato\", title=\"estrato\"),\n TableColumn(field=\"alcobas\", title=\"alcobas\"),\n TableColumn(field=\"banos\", title=\"baños\"),\n TableColumn(field=\"balcon\", title=\"balcon\"),\n TableColumn(field=\"parqueadero\", title=\"parqueadero\"),\n TableColumn(field=\"administracion\", title=\"administracion\", formatter=NumberFormatter(format=\"$0.0\")),\n TableColumn(field=\"avaluo\", title=\"avaluo\", formatter=NumberFormatter(format=\"$0.0\")),\n TableColumn(field=\"terminado\", title=\"terminado\")]\n \n columns_static = [TableColumn(field=\"metrica\", title=\"metrica\"),\n TableColumn(field=\"precio\", title=\"precio\"),\n TableColumn(field=\"mt2\", title=\"tamaño\"),\n TableColumn(field=\"alcobas\", title=\"alcobas\"),\n TableColumn(field=\"banos\", 
title=\"baños\"),\n TableColumn(field=\"administracion\", title=\"administracion\"),\n TableColumn(field=\"avaluo\", title=\"avaluo\")]\n data_table = DataTable(source=source, columns=columns)\n static_table = DataTable(source=source_static, columns=columns_static)\n \n # plot\n ticker_options = ['precio', 'mt2', 'administracion', 'avaluo']\n ticker1 = Select(value='precio', options=ticker_options)\n ticker2 = Select(value='mt2', options=ticker_options)\n ticker3 = Select(value='auto', options=['auto', 'Freedman Diaconis Estimator', 'scott', 'rice', 'sturges', 'doane', 'sqrt'])\n ticker4 = Select(value='estrato', options=['estrato', 'alcobas', 'ubicacion', 'banos', 'parqueadero', 'terminado'])\n #plot_source = ColumnDataSource(data=dict(x=[], y=[], color=[]))\n plot_source = ColumnDataSource(data=dict(x=[], y=[], color=[], legend=[]))\n ticker1.on_change('value', ticker_change)\n ticker2.on_change('value', ticker_change)\n ticker3.on_change('value', ticker_change)\n ticker4.on_change('value', ticker_change)\n tools = \"pan,wheel_zoom,box_select,lasso_select,reset,box_zoom,reset\"\n corr = figure(plot_width=350, plot_height=350, tools=tools)\n corr.circle(x='x', y='y', size=2, source=plot_source, color='color', legend='legend')\n ph = figure(toolbar_location=None, plot_width=corr.plot_width, plot_height=200, x_range=corr.x_range, min_border=10, min_border_left=50, y_axis_location=\"right\")\n pv = figure(toolbar_location=None, plot_width=200, plot_height=corr.plot_height, y_range=corr.y_range, min_border=10, y_axis_location=\"right\")\n corr.select(BoxSelectTool).select_every_mousemove = False\n corr.select(LassoSelectTool).select_every_mousemove = False\n ph.xgrid.grid_line_color = None\n ph.yaxis.major_label_orientation = np.pi/4\n pv.ygrid.grid_line_color = None\n pv.xaxis.major_label_orientation = np.pi/4\n # grid\n widgets = column(estrato, ubicacion, alcobas, banos, parqueadero, terminado)\n data_panel = column(data_table, static_table)\n plots_panel = gridplot([[corr, pv], [ph, None]], merge_tools=False)\n top_layout = row(widgets, data_panel)\n down_layout = row(column(ticker1, ticker2, ticker3, ticker4), plots_panel)\n layout = column(top_layout, down_layout)\n update()\n doc.add_root(layout)\n \n\nhandler = FunctionHandler(modify_doc)\napp = Application(handler)\nshow(app, notebook_url=\"localhost:8888\")",
"_____no_output_____"
]
],
[
[
"Elabore una tabla de doble entrada donde relacione si la propiedad tiene o no mejoras con la nueva categorización de la variable estrato. ¿En cuál de los nuevos estratos se presenta la mayor proporción de propiedades que tienen mejoras? Explique.",
"_____no_output_____"
]
],
[
[
"\nfrec_clase_peso = pd.pivot_table(df, columns='terminado', index='estrato', values='administracion', aggfunc='count')\nfrec_rel_clase_peso = frec_clase_peso/frec_clase_peso.sum().sum()\npd.concat([frec_clase_peso, frec_rel_clase_peso], axis=1)\nfrec_rel_clase_peso.plot(kind='bar')\nplt.show()",
"_____no_output_____"
]
],
[
[
"el estrarto alto es el que mas modificaciones ha hecho a sus vivienda esto se debe a su poder adqusitivo puede gastar en comodities",
"_____no_output_____"
],
[
"Elabore un histograma de frecuencias relativas para la variable precio. ¿Qué observa? Comente",
"_____no_output_____"
]
],
[
[
"hist, hedges = np.histogram(df.precio, density=True, bins='sturges')\nplot = figure(plot_width=350, plot_height=350)\nplot.quad(bottom=0, left=hedges[:-1], right=hedges[1:], top=hist)\n\nshow(plot, notebook_url=\"localhost:8888\")",
"_____no_output_____"
]
],
[
[
"Right-Skewed <br>\nMode < Median < Mean",
"_____no_output_____"
],
[
"5. ¿Es muy diferente el comportamiento de los precios de las propiedades de acuerdo a la ubicación? Comente. Elabore los gráficos y/o resúmenes que considere pertinentes. \n6. Grafique un diagrama de dispersión de las variables avalúo y precio. ¿Qué observa? ¿Qué pasa si los puntos graficados se separan por color de acuerdo a la nueva categorización del estrato?. Explique.\n7. Calcule el coeficiente de correlación de Pearson entre precio y avalúo. ¿Qué se puede decir sobre este valor?\n8. Realice un análisis descriptivo adicional que considere pertinente sobre una o varias de las variables restantes de la base de datos.",
"_____no_output_____"
],
[
"* para analizar todos estos puntos se puede usar los resultados dinamicos del punto 2 donde esta un dashboard se obsevra una rango de precio muy uniforme en todas exeptuando en el sectro del poblado. me atreveria a tratar el resto de las zonas para el analisis de precio como un sola componente\n* si se separa por colores ya que existe una correlacion entre las variables donde el precio es directamente porporcional al avaluo y al estrato.\n* existe una alta correlacion entre precio y avaluo de 0.79 pearson, 0.73 kendall y 0.89 spearman. son directamente proporcionales estas variable",
"_____no_output_____"
]
],
[
[
"print('pearson')\ndf.corr(method='pearson')",
"pearson\n"
],
[
"print('kendall')\ndf.corr(method='kendall')\n",
"kendall\n"
],
[
"print('spearman')\ndf.corr(method='spearman')",
"spearman\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0eeaa8f3a534187e3c419583a85a58e2545f83b
| 7,688 |
ipynb
|
Jupyter Notebook
|
data-streaming-with-kafka/notebooks/test/OLAP-Notebook.ipynb
|
natanascimento/programacao-avancada
|
ab6f1ba56903feedc2096ebee342e403bfe0f4aa
|
[
"MIT"
] | null | null | null |
data-streaming-with-kafka/notebooks/test/OLAP-Notebook.ipynb
|
natanascimento/programacao-avancada
|
ab6f1ba56903feedc2096ebee342e403bfe0f4aa
|
[
"MIT"
] | 2 |
2021-05-17T17:29:19.000Z
|
2021-06-08T03:36:58.000Z
|
data-streaming-with-kafka/notebooks/test/OLAP-Notebook.ipynb
|
natanascimento/programacao-avancada
|
ab6f1ba56903feedc2096ebee342e403bfe0f4aa
|
[
"MIT"
] | null | null | null | 44.183908 | 172 | 0.468392 |
[
[
[
"from pyspark.sql import SparkSession\nfrom pyspark.sql.types import *",
"_____no_output_____"
],
[
"spark = SparkSession.builder.appName('myAppName').getOrCreate()",
"_____no_output_____"
],
[
"schema = StructType().add(\"customer_id\", \"string\").add(\"customer_username\", \"string\").add(\"customer_name\", \"string\")\\\n .add(\"customer_gender\", \"string\").add(\"customer_address\", \"string\")\\\n .add(\"customer_purchase_price\", \"string\").add(\"customer_country\", \"string\")\\\n .add(\"createdAt\", \"string\")",
"_____no_output_____"
],
[
"df = spark.read.option(\"header\", \"true\").schema(schema).csv('data/my_csv.csv')",
"_____no_output_____"
],
[
"df.show()",
"+--------------------+-----------------+------------------+---------------+--------------------+-----------------------+--------------------+-------------------+\n| customer_id|customer_username| customer_name|customer_gender| customer_address|customer_purchase_price| customer_country| createdAt|\n+--------------------+-----------------+------------------+---------------+--------------------+-----------------------+--------------------+-------------------+\n|65c250a9-d28f-498...| russellsandoval| Laura Hansen| F|Unit 4703 Box 943...| $46,581.80| Slovenia|31/05/2021 14:38:49|\n| customer_id|customer_username| customer_name|customer_gender| customer_address| customer_purchase...| customer_country| createdAt|\n|d19afc5d-45d5-437...| brentwoodard| Kelsey Combs| F|061 Alexandra Wal...| $183.69| Niger|31/05/2021 14:38:49|\n| customer_id|customer_username| customer_name|customer_gender| customer_address| customer_purchase...| customer_country| createdAt|\n|2ee807bc-02c9-431...| vickifigueroa| Rodney Morgan| M|189 Terri Union S...| $229.43|United States of ...|31/05/2021 14:38:49|\n| customer_id|customer_username| customer_name|customer_gender| customer_address| customer_purchase...| customer_country| createdAt|\n|75d2e8c3-0298-47d...| emmamichael| James Nichols| M|40239 Lewis Valle...| $2.91| Cameroon|31/05/2021 14:38:49|\n| customer_id|customer_username| customer_name|customer_gender| customer_address| customer_purchase...| customer_country| createdAt|\n|f08f7d3b-9a5f-4d0...|williamsonjessica|Jacqueline Bennett| F|Unit 7553 Box 706...| $305.57| Chile|31/05/2021 14:38:49|\n| customer_id|customer_username| customer_name|customer_gender| customer_address| customer_purchase...| customer_country| createdAt|\n|f27a103a-abad-4e2...| brentlawson| Kelly Parsons| F|000 Levine Traffi...| $18.43| Cameroon|31/05/2021 14:38:50|\n| customer_id|customer_username| customer_name|customer_gender| customer_address| customer_purchase...| customer_country| createdAt|\n|cc8759ac-6ebf-493...| ywashington| Ashley Brown| F|26495 Johnson Lod...| $69,607.90| Sudan|31/05/2021 14:38:50|\n| customer_id|customer_username| customer_name|customer_gender| customer_address| customer_purchase...| customer_country| createdAt|\n|efdc7e8d-cc7b-454...| karenwhite| Mitchell Wilkins| M|PSC 8910, Box 351...| $2,802.45| Slovenia|31/05/2021 14:38:50|\n| customer_id|customer_username| customer_name|customer_gender| customer_address| customer_purchase...| customer_country| createdAt|\n|bd2bea97-7f96-4ec...| zimmermanjohn| Anthony Morris| M|5303 Mills Points...| $0.38| Taiwan|31/05/2021 14:38:50|\n| customer_id|customer_username| customer_name|customer_gender| customer_address| customer_purchase...| customer_country| createdAt|\n|a6b8f062-04d9-409...| cookpaula| Steven Allen| M|387 Samuel Row - ...| $4,839.94| China|31/05/2021 14:38:51|\n| customer_id|customer_username| customer_name|customer_gender| customer_address| customer_purchase...| customer_country| createdAt|\n+--------------------+-----------------+------------------+---------------+--------------------+-----------------------+--------------------+-------------------+\nonly showing top 20 rows\n\n"
],
[
"countryCount = df.groupBy('customer_country').count()",
"_____no_output_____"
],
[
"countryCount.show()",
"+--------------------+-----+\n| customer_country|count|\n+--------------------+-----+\n| Chad| 2|\n| Anguilla| 1|\n| Macao| 1|\n|Heard Island and ...| 2|\n| Yemen| 3|\n| Sweden| 1|\n| Tokelau| 2|\n|French Southern T...| 1|\n| Kiribati| 2|\n| Guyana| 2|\n| Eritrea| 1|\n| Philippines| 1|\n| Norfolk Island| 3|\n| Tonga| 1|\n| Malaysia| 2|\n| Fiji| 1|\n| Turkey| 2|\n|United States Vir...| 1|\n| Western Sahara| 1|\n| Malawi| 1|\n+--------------------+-----+\nonly showing top 20 rows\n\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0eeb0e43dc62adfb42ebb5df828f8d7b7e0956f
| 47,454 |
ipynb
|
Jupyter Notebook
|
DataImport.ipynb
|
avisionh/Sorting-Madonna-Songs
|
bbad1749212a12811f1460ac2b0149c1b1612f56
|
[
"Unlicense"
] | null | null | null |
DataImport.ipynb
|
avisionh/Sorting-Madonna-Songs
|
bbad1749212a12811f1460ac2b0149c1b1612f56
|
[
"Unlicense"
] | null | null | null |
DataImport.ipynb
|
avisionh/Sorting-Madonna-Songs
|
bbad1749212a12811f1460ac2b0149c1b1612f56
|
[
"Unlicense"
] | null | null | null | 31.912576 | 541 | 0.351393 |
[
[
[
"# Sorting Madonna Songs\nThis project will randomly order the entire backlog of Madonna's songs. This was motivated following a colleague's offhand remark about one's favourite song being *Material Girl* by Madge, which triggered another colleague to provide the challenge of ranking all of Madonna's songs. \n\nIn particular, there are two key stages to this random ordering:\n1. Assign a random, distinct integer number next to each song to act as its preference ranking\n1. Employ a sorting algorithm to sort this list via the preference ranking column just created\n\nWill try a variety of sorting algorithms detailed below\n- **Quicksort**\n- **Bubble sort**\n- **Breadth-first search**\n- **Heapsort**\n- **Insertion sort**\n- **Shell sort**",
"_____no_output_____"
],
[
"## Set-up\nNeed to load in the relevant packages, set-up the right environemnt, and import the dataset.",
"_____no_output_____"
]
],
[
[
"# Export our environment, \"NewEnv\" and save it as \"anomaly-detection.yml\"\n!conda env export -n NewEnv -f environment_anaconda.yml\n\n# Check working directory to ensure user notebook is easily transferable\nimport os\nos.getcwd()",
"_____no_output_____"
],
[
"# Import required libraries\nimport numpy as np\nimport pandas as pd\nimport xlrd\nimport csv",
"_____no_output_____"
]
],
[
[
"### Convert to CSV\nDo not have Excel installed so cannot convert it via that. Instead, get round it via the *xlrd* and *csv* packages.\nNote, could directly read in Excel file and play with that. However, learn less that way!\n\nCode for function was taken from this [link](https://stackoverflow.com/questions/9884353/xls-to-csv-converter). However, first encountered an issue on using subfolders. This was resolved in this [link](https://stackoverflow.com/questions/7165749/open-file-in-a-relative-location-in-python). Then encountered an issue concerning the reading of entries as `bytes` instead of `str` which was resolved in this [link](https://stackoverflow.com/questions/33054527/typeerror-a-bytes-like-object-is-required-not-str-when-writing-to-a-file-in).",
"_____no_output_____"
]
],
[
[
"def csv_from_excel(file_input, file_output, sheet_index):\n\n wb = xlrd.open_workbook(filename = file_input)\n sh = wb.sheet_by_index(sheet_index)\n file_csv = open(file = file_output, mode = 'wt')\n wr = csv.writer(file_csv, quoting = csv.QUOTE_ALL)\n\n for rownum in range(sh.nrows):\n wr.writerow(sh.row_values(rownum))\n\n file_csv.close()",
"_____no_output_____"
],
[
"# run function to output .csv file\ncsv_from_excel(file_input = 'data\\songs_madonna.xlsx', file_output = 'data\\songs_madonna.csv', sheet_index = 0)",
"_____no_output_____"
]
],
[
[
"## Data Wrangle\nLoad in our .csv file so that we can add distinct random numbers as a column which we will use to sort on.\n\nNote: File is encoded as *ANSI* which is `mbcs` in the `pd.red_csv()`.",
"_____no_output_____"
]
],
[
[
"# import data\ndata_madge = pd.read_csv(filepath_or_buffer = 'data\\songs_madonna.csv', encoding = 'mbcs')",
"_____no_output_____"
],
[
"# display data\ndata_madge.head()",
"_____no_output_____"
]
],
[
[
"In the code below, are following a naive method for creating a column of distinct random numbers. This will be in steps:\n\n1. Store the number of rows in a variable.\n1. Generate a random sample without replacement using the number of rows as our region of interest.\n1. Bind this random sample onto our `data_madge` dataframe.\n",
"_____no_output_____"
]
],
[
[
"# import package for random-sampling\nimport random",
"_____no_output_____"
],
[
"# set random seed\nseed_random = np.random.RandomState(123)\n\n# 1. store number of rows in a variable\nn_rows = len(data_madge.index)\n\n# 2. generate random sample without replacement\n# note: using try-catch logic to ensure we generate a sample\ntry:\n sample_random = random.sample(population = range(0, n_rows), k = n_rows)\n print('Random sample generated is of object type: ', type(sample_random))\nexcept ValueError:\n print('Sample size exceeded population size.')\n\n# 3. bind random sample onto dataframe\ndata_madge['Preference_Avision'] = sample_random\ndata_madge = data_madge[['Songs', 'Preference_Avision']]",
"Random sample generated is of object type: <class 'list'>\n"
],
[
"# check new dataframe\ndata_madge.head(57)",
"_____no_output_____"
]
],
[
[
"### Specific preferences\nWhilst broadly indifferent between the vast majority of Madge's discology, two songs stand out to the author:\n- Material Girl\n- La Isla Bonita\nWhat one wants to do is thus ensure that these two songs have the highest two preference rankings, `0` and `1`.",
"_____no_output_____"
]
],
[
[
"# 1. find songs randomly classified as favourites\nvalue_top_preferences_random = [0, 1]\ndata_top_random = data_madge[data_madge.Preference_Avision.isin(value_top_preferences_random)]",
"_____no_output_____"
],
[
"# 2. find 'Material Girl' and 'La Isla Bonita'\nsongs_top_own = ['Material Girl', 'La Isla Bonita']\ndata_top_own = data_madge[data_madge.Songs.isin(songs_top_own)]",
"_____no_output_____"
],
[
"# 3. rename columns so can distinguish them\ndata_top_random = data_top_random.rename(columns = {\"Songs\": \"SongsRandom\", \n \"Preference_Avision\": \"PreferenceRandom\"})\ndata_top_random",
"_____no_output_____"
],
[
"data_top_own = data_top_own.rename(columns = {\"Preference_Avision\": \"Preference\"})\ndata_top_own",
"_____no_output_____"
],
[
"# 3. append dataframes together\n# need to reset indices to do so\ndata_temp = pd.concat(objs = [data_top_own.reset_index(drop = True), \n data_top_random.reset_index(drop = True)],\n axis = 1)\ndata_temp",
"_____no_output_____"
],
[
"# 4.i select correct preference columns now to accurately map preferences\ndata_top_random = data_temp.loc[:, ['SongsRandom', 'Preference']]\ndata_top_own = data_temp.loc[:, ['Songs', 'PreferenceRandom']]\n\n# 4.ii rename columns so we can append/union\ndata_top_random = data_top_random.rename(columns = {\"SongsRandom\": \"Song\"})\ndata_top_own = data_top_own.rename(columns = {\"PreferenceRandom\": \"Preference\",\n \"Songs\": \"Song\"})",
"_____no_output_____"
],
[
"# 5. append/union two dataframes together\ndata_temp = pd.concat(objs = [data_top_random, data_top_own], ignore_index = True)\ndata_temp",
"_____no_output_____"
],
[
"# 6. bring back to original dataframe\ndata_preference = pd.merge(left = data_madge, right = data_temp,\n left_on = 'Songs', right_on = 'Song',\n how = 'left')\n\ndel(data_madge, data_temp, data_top_own, data_top_random, value_top_preferences_random, songs_top_own)\ndata_preference.head()",
"_____no_output_____"
],
[
"# 7. create a final preference column\ndata_preference['SongPreference'] = np.where(data_preference.Preference.isnull(),\n data_preference['Preference_Avision'],\n data_preference['Preference'])\n\ndata_preference = data_preference.loc[:, ['Songs', 'SongPreference']]\ndata_preference = data_preference.rename(columns = {'Songs': 'Song'})\ndata_preference.head(58)",
"_____no_output_____"
]
],
[
[
"## Data Export\nNow that we have the dataframe in a suitable format and reflective of the author's preferences, can export it as a .csv file for use in Java when we will apply assorted sorting algorithms on it.",
"_____no_output_____"
]
],
[
[
"data_preference.to_csv(path_or_buf = 'output\\output_preference.csv',\n sep = ',', encoding = \n )",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0eebc9d8ee55b12f17747aaf1be45f1fd1ea9c8
| 87,177 |
ipynb
|
Jupyter Notebook
|
homework/key-hypothesis_testing.ipynb
|
nishadalal120/NEU-365P-385L-Spring-2021
|
eff075482913a6c72737c578f1c5fc42527c12bb
|
[
"Unlicense"
] | 12 |
2021-01-05T18:26:42.000Z
|
2021-03-11T19:26:07.000Z
|
homework/key-hypothesis_testing.ipynb
|
nishadalal120/NEU-365P-385L-Spring-2021
|
eff075482913a6c72737c578f1c5fc42527c12bb
|
[
"Unlicense"
] | 1 |
2021-04-21T00:57:10.000Z
|
2021-04-21T00:57:10.000Z
|
homework/key-hypothesis_testing.ipynb
|
nishadalal120/NEU-365P-385L-Spring-2021
|
eff075482913a6c72737c578f1c5fc42527c12bb
|
[
"Unlicense"
] | 22 |
2021-01-21T18:52:41.000Z
|
2021-04-15T20:22:20.000Z
| 188.287257 | 21,380 | 0.887562 |
[
[
[
"# Homework (16 pts) - Hypothesis Testing",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport scipy.stats as st\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"1. You measure the duration of high frequency bursts of action potentials under two different experimental conditions (call them conditions A and B). Based on your measured data below, determine if the conditions affect the mean burst duration or if differences are just due to random fluctuations? See 1a-d below.",
"_____no_output_____"
]
],
[
[
"burstDurationsA_ms = np.array([180.38809356, 118.54316518, 47.36070342, 258.43152543,\n 157.58441772, 53.00241256, 97.87549106, 98.58339172,\n 3.82151168, 149.63437886, 78.36434292, 207.1499196 ,\n 249.99308288, 52.33575872, 177.16295745, 20.90902826,\n 355.53831638, 17.14676607, 194.82448255, 364.30099202,\n 10.46025411, 63.80995802, 186.96964679, 16.76391482,\n 66.04825185, 169.95991378, 174.85051452, 95.51534595,\n 164.81818483, 165.92316127, 21.99840476, 176.27450914,\n 367.20238806, 53.55081561, 18.54310649, 309.36915353,\n 34.8110391 , 170.70514854, 4.80755719, 185.70861565,\n 42.81031454, 77.63480453, 22.78673497, 27.15480627,\n 81.19289909, 7.5754338 , 143.53588895, 1.45355329,\n 56.93153072, 35.7227909 , 120.88947208, 268.68459917,\n 36.56451611, 335.29492244, 18.88246351, 698.21607381,\n 47.24456065, 68.47935918, 246.50352868, 39.17939247,\n 130.00962739, 12.63485608, 16.5060213 , 85.73872575,\n 30.34193446, 12.18596266, 133.13145381, 39.68448593,\n 227.5104642 , 274.45272375, 167.76767172, 23.93871685,\n 319.05649273, 6.3491122 , 35.14797547, 170.29631475,\n 33.54342976, 2.71282041, 134.5042415 , 42.498552 ,\n 144.87658813, 122.78633957, 46.58727698, 143.74260009,\n 27.95191179, 462.66535543, 187.17111074, 21.05730056,\n 27.92875799, 73.0405984 , 137.67114744, 25.51076087,\n 68.71066451, 188.46823412, 20.58525518, 18.06289499,\n 388.79209834, 9.42246312, 270.11609469, 20.51123798])\nburstDurationsB_ms = np.array([ 19.1579061 , 103.28099491, 155.40048778, 54.00532297,\n 19.60552475, 38.33218511, 172.39377537, 100.60095889,\n 123.39067736, 32.30752807, 140.81577413, 10.03036383,\n 76.95250023, 111.4112118 , 106.77958145, 100.03741994,\n 54.40736747, 169.72641863, 170.51048794, 84.31738796,\n 32.48573515, 71.14968724, 18.07487628, 48.27775752,\n 249.00817236, 40.88078534, 149.55876359, 171.68318734,\n 64.7972247 , 179.67199065, 211.24354393, 49.54367304,\n 5.97816835, 270.82356699, 99.33133967, 14.35603709,\n 61.8917307 , 48.13722571, 65.23703418, 119.95425274,\n 64.3948595 , 57.40459219, 18.76680104, 37.37173184,\n 143.4622583 , 21.6463496 , 45.86107014, 3.98511098,\n 11.8424448 , 105.59224929, 71.49909777, 29.64941255,\n 117.62835465, 31.33284437, 124.17263642, 249.31437673,\n 92.15958114, 66.2842341 , 5.01333126, 18.53478564,\n 44.09316335, 119.8752612 , 52.31171617, 3.03888107,\n 109.94031571, 5.52411681, 43.88839751, 48.63036147,\n 22.71317076, 30.20052081, 32.10942778, 117.08796453,\n 53.83369891, 68.82006208, 92.29204674, 93.829404 ,\n 0.67985216, 10.42751195, 4.35827727, 127.21452508,\n 42.69414115, 34.9520911 , 20.16096766, 178.44190716,\n 43.04340469, 89.11997718, 163.48474361, 277.29716851,\n 17.08902205, 103.74782303, 49.29308393, 72.1459098 ,\n 11.4600829 , 4.09194418, 51.55511185, 91.81103802,\n 31.36955782, 23.24407568, 90.13594215, 69.37118937])",
"_____no_output_____"
]
],
[
[
"1a. (1 pt) State the null and alternative hypotheses.",
"_____no_output_____"
],
[
"H0: Conditions have no affect on mean burst durations.\n\nHa: Mean burst duration differs between conditions.",
"_____no_output_____"
],
[
"1b. (3 ps) Plot the burst distributions for conditions A and B overlaid with your best estimate for the probability density function that describes them.",
"_____no_output_____"
]
],
[
[
"distA = st.expon(loc=0, scale=burstDurationsA_ms.mean())\ndistB = st.expon(loc=0, scale=burstDurationsB_ms.mean())\n\nplt.hist(burstDurationsA_ms, bins=20, density=True, alpha=0.25, label='A')\nplt.hist(burstDurationsB_ms, bins=20, density=True, alpha=0.25, label='B')\ndur_ms = np.linspace(0, 500, 100)\nplt.plot(dur_ms, distA.pdf(dur_ms), label='dist A')\nplt.plot(dur_ms, distB.pdf(dur_ms), label='dist B')\nplt.xlabel('Burst Duration (ms)')\nplt.ylabel('pdf')\nplt.legend();",
"_____no_output_____"
]
],
[
[
"1c. (3 pts) Use a permutation test with 1000 permutations to test your null hypothesis. Compute the difference between mean burst durations for all 1000 permutations of the datasets.",
"_____no_output_____"
]
],
[
[
"nA = len(burstDurationsA_ms)\nnB = len(burstDurationsB_ms)\n\nallBurstDurations = np.zeros((nA + nB,))\nallBurstDurations[:nA] = burstDurationsA_ms\nallBurstDurations[-nB:] = burstDurationsB_ms\n\nnumPermutations = 1000\npermutedMeanBurstDurationDiffs = np.zeros((numPermutations,))\n\nfor i in range(numPermutations):\n np.random.shuffle(allBurstDurations)\n \n permutedBurstDurationsA = allBurstDurations[:nA]\n permutedBurstDurationsB = allBurstDurations[-nB:]\n \n permutedMeanBurstDurationDiffs[i] = permutedBurstDurationsB.mean() - permutedBurstDurationsA.mean()",
"_____no_output_____"
]
],
[
[
"1d. (3 pts) Plot the distribtuion of mean burst time differences from each permutation and use vertical dashed lines ot indicate the 95% confidence interval and a vertical solid line to indicate the measured mean burst time difference between the actual datasets. Finally, answer the original question, do the conditions affect mean burst duration?",
"_____no_output_____"
]
],
[
[
"# plot the distribution differences between taus for each permutation\nplt.hist(permutedMeanBurstDurationDiffs, bins=50, alpha=0.25, label='Expected under H0');\nplt.xlabel('Mean Burst Duration Diff B - A (ms)')\nplt.ylabel('# Permutations');\n\n# add 95% confidence intervals to the plot\nlb, ub = np.quantile(permutedMeanBurstDurationDiffs, [0.025, 0.975])\nplt.axvline(lb, linestyle='--', label='95% CI')\nplt.axvline(ub, linestyle='--');\n\n# add measured difference to plot\nmeasuredMeanBurstDurationDiff = burstDurationsB_ms.mean() - burstDurationsA_ms.mean()\nplt.axvline(measuredMeanBurstDurationDiff, color='r', label='Measured')\n\nplt.legend();",
"_____no_output_____"
]
],
[
[
"Reject H0 as measured difference falls outside of 95% confidence interval for expected differenece if H0 was true.\n\nThus, we infer that condition B did affect the mean burst duration as compared to condition A.",
"_____no_output_____"
],
[
"2. You record the resting potential of a cell (see below). See 2a-c below.",
"_____no_output_____"
]
],
[
[
"restingPotential_mV = np.array([-85.06885608, -68.0333149 , -77.04147864, -70.82636201,\n -73.11516394, -70.87124656, -69.8945143 , -71.35017797,\n -78.97700081, -76.06762065, -80.16301496, -75.53757879,\n -66.29208026, -84.46635021, -74.99594162, -81.64926101,\n -69.43971079, -60.09946296, -66.79822251, -60.85633766,\n -54.32637416, -66.45195357, -82.98456323, -81.95661922,\n -60.47209247, -80.55272128, -62.85999264, -86.59379859,\n -78.64488589, -68.84506935, -80.77647186, -67.85623328,\n -74.45114227, -89.65579119, -82.64751201, -63.75968145,\n -74.22283582, -59.31586296, -93.0908073 , -73.64374549,\n -62.68738212, -57.96506437, -72.3717666 , -86.33058942,\n -78.92751452, -58.80136699, -85.71378949, -57.19191734,\n -91.30229149, -75.05287933, -75.33300218, -62.74969485,\n -79.59156555, -52.61256484, -77.21434863, -83.18228806,\n -62.06267252, -68.56599363, -74.33860286, -74.25433867,\n -67.10062548, -70.91001388, -74.54319772, -89.15247536,\n -72.25311527, -88.42966306, -77.76328165, -68.46582471,\n -75.94389499, -58.47565688, -71.13726886, -82.4352595 ,\n -61.93586705, -83.83289675, -51.7473573 , -72.18052423,\n -77.19392687, -87.97762782, -68.17409172, -62.04925685,\n -72.86214908, -69.43243604, -82.89191418, -67.91943956,\n -59.00530849, -62.53955662, -68.66192422, -73.86176431,\n -63.33605874, -84.78928316, -79.38590405, -85.06698722,\n -77.99176887, -70.8097979 , -70.458364 , -77.83905415,\n -79.05549124, -67.7530506 , -86.29135786, -60.87285052,\n -68.75028368, -69.48216823, -87.97546221, -74.25401398,\n -72.00639248, -73.25242423, -99.49034043, -81.86020062,\n -78.38191113, -68.64333415, -62.26209287, -75.46279644,\n -82.18768283, -77.45752358, -79.82870353, -69.4572625 ,\n -78.32253067, -73.59782921, -72.25046001, -80.64590368,\n -76.92874101, -90.79517065, -73.90324566, -81.67875556,\n -67.59862905, -81.49491813, -75.79660561, -81.14508062,\n -78.95641057, -80.56089537, -80.23390812, -72.4244641 ,\n -87.47818531, -73.59907449, -66.92882851, -67.87048944,\n -69.79223622, -67.11253617, -64.8935525 , -80.52556846,\n -78.19259758, -62.10604477, -95.98603544, -75.95599522,\n -66.3355366 , -80.87436998, -81.5009947 , -88.22430255,\n -83.72971765, -75.86416506, -82.52663772, -53.76916602,\n -66.21196557, -72.93868097, -91.42283677, -80.22444843,\n -75.08391826, -52.05541454, -72.0154604 , -80.24943593,\n -65.97047566, -81.62631839, -73.18646105, -70.85923137,\n -66.05248632, -60.82923084, -59.49883812, -78.38967591,\n -84.79797173, -95.00305539, -78.06355062, -71.60393851,\n -70.37115932, -86.7155815 , -65.38955127, -76.78546928,\n -79.85586826, -76.65572665, -71.50214043, -83.65681821,\n -59.9250123 , -76.05986927, -82.68107711, -70.01703154,\n -74.46337865, -63.38903087, -78.73136431, -76.56253395,\n -72.43137511, -52.60067507, -54.23945626, -63.68117735,\n -88.19424095, -76.29322833, -77.01457066, -72.88256829,\n -67.46931905, -60.91331725, -79.17094879, -74.96126989])",
"_____no_output_____"
]
],
[
[
"2a. (3 pts) You only have one sample (above) with a single mean. Use the Central Limit Theorem to estimate the distribution of mean resting potentials were you to collect a bunch more samples. Plot this distribution and indicate its 95% confidence interval with vertical lines on the plot.",
"_____no_output_____"
]
],
[
[
"mu = restingPotential_mV.mean()\nsem = restingPotential_mV.std() / np.sqrt(len(restingPotential_mV))\n\nmeanDist = st.norm(mu, sem)\nmV = np.linspace(-77, -71, 101)\nplt.plot(mV, meanDist.pdf(mV))\nplt.xlabel('Mean Resting Potential (mV)')\nplt.ylabel('pdf')\nplt.title('Central Limit Theorem')\nlb, ub = meanDist.ppf([0.025, 0.975])\nplt.axvline(lb, linestyle='--', label='95% CI')\nplt.axvline(ub, linestyle='--')\nplt.legend();",
"_____no_output_____"
]
],
[
[
"2b. (3 pts) Use 1000 bootstrapped samples to estimate the 95% confidence interval for the mean resting potential. Plot the distribution of bootstrap mean resting potentials and indicate the 95% confidence intervals with vertical lines. How do these compare to that obtained by the Central Limit Theorem?",
"_____no_output_____"
]
],
[
[
"numBootstraps = 1000\nbootstrappedMeans = np.zeros((numBootstraps,))\nfor i in range(numBootstraps):\n bootstrappedRestingPotentials_mV = \\\n np.random.choice(restingPotential_mV, size=restingPotential_mV.shape, replace=True)\n bootstrappedMeans[i] = bootstrappedRestingPotentials_mV.mean()\n\nbootstrappedMeansCI = np.quantile(bootstrappedMeans, [0.025, 0.975])\n\nplt.hist(bootstrappedMeans, bins=30, alpha=0.25, label='Bootstrapped')\nplt.axvline(bootstrappedMeansCI[0], linestyle='--', label='95% CI')\nplt.axvline(bootstrappedMeansCI[1], linestyle='--')\nplt.xlabel('Mean Resting Potential (mV)')\nplt.ylabel('# Bootstrap Samples')\nplt.legend();",
"_____no_output_____"
]
],
[
[
"Bootstrap confidence interval is very similar to that based on Central Limit Theorem.",
"_____no_output_____"
]
],
[
[
"2c. (3 pts) Use a t-Test to determine whether this cell belongs to a set of cells that you previously determined have a resting potential of -60 mV?",
"_____no_output_____"
]
],
[
[
"# I didn't specifically ask for the normality test, so it is ok if it was not included.\n# But you should do some sort of check for normality if you are using a t-Test.\nstat, pvalue = st.normaltest(restingPotential_mV)\nisNormallyDistributed = pvalue >= 0.05\nisNormallyDistributed",
"_____no_output_____"
],
[
"t, pvalue = st.ttest_1samp(restingPotential_mV, -60)\npvalue",
"_____no_output_____"
]
],
[
[
"p-value < 0.025, so reject null hypothesis that cell has a resting potential of -60 mV. Thus, we infer that this cell differs from the previous set of cells. This is also what we would infer from the plots in 2a-b.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"raw",
"markdown",
"code",
"raw"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"raw"
],
[
"markdown"
],
[
"code",
"code"
],
[
"raw"
]
] |
d0eeff2486a749012377be8c03fad45ad782fb8c
| 38,039 |
ipynb
|
Jupyter Notebook
|
TDD_for_data_cleaning_with_pytest.ipynb
|
surfaceowl/pythontalk_tdd_for_data
|
ee901661df8375d5eca47b39247e930350fcda34
|
[
"MIT"
] | null | null | null |
TDD_for_data_cleaning_with_pytest.ipynb
|
surfaceowl/pythontalk_tdd_for_data
|
ee901661df8375d5eca47b39247e930350fcda34
|
[
"MIT"
] | null | null | null |
TDD_for_data_cleaning_with_pytest.ipynb
|
surfaceowl/pythontalk_tdd_for_data
|
ee901661df8375d5eca47b39247e930350fcda34
|
[
"MIT"
] | null | null | null | 29.305855 | 210 | 0.509609 |
[
[
[
"\n### TDD for data with pytest\n\n\nTDD is great for software engineering, but did you know TDD can add a lot of speed and quality to Data Science projects too?\n\nWe'll learn how we can use TDD to save you time - and quickly improve functions which extract and process data.",
"_____no_output_____"
],
[
"# About me\n\n**Chris Brousseau**\n\n*Surface Owl - Founder & Data Scientist*\n<br>\n*Pragmatic AI Labs - Cloud-Native ML*\n<br>\n\n<br>\nPrior work at Accenture & USAF\n<br>\nEngineering @Boston University",
"_____no_output_____"
],
[
"<img src=\"data/images/detective_and_murderer.jpg\" alt=\"Filipe Fortes circa 2013\" style=\"width: 1400px;\">",
"_____no_output_____"
],
[
"# 0 - Problem to solve\n- speed up development & improve quality on data science projects\n<br><br>\n\n# Two main cases\n 1. test tidy input *(matrix - columns = var, row = observation)* **(tidy data != clean data)**\n<br><br>\n 2. test ingest/transformations of complex input *(creating tidy & clean data)*\n",
"_____no_output_____"
],
[
"# Our Objectives\n- Intro TDD (Test Driven Development)\n- Learn about two packages: pytest & datatest \n 1. For tidy data - *see datatest in action* \n 2. For data engineering - *see TDD for complex input*\n- Understand When not to use TDD\n- Get links to Resources",
"_____no_output_____"
],
[
"### Why TDD?\n<img src=data/images/debugging_switches.200w.webp style=\"height: 600px;\"/>",
"_____no_output_____"
],
[
"### What is TDD\n- process for software development\n- **themes:** intentional -> small -> explicit -> automated\n\n\n### How does it work?\n\n- confirm requirements\n- write a failing test (vs ONLY these requirements!)\n- write code to pass the test (keep it small)\n- refactor & retest\n- automate",
"_____no_output_____"
],
[
"### Why TDD?\n\n1. first - focus on requirements and outcomes\n\n2. save time debugging\n\n3. boost confidence in your code\n\n4. improve refactoring - *speed and confidence*\n\n5. encourages \"clean code\" - *SRP, organization*\n\n6. speed up onboarding new team members - *read 1K lines, or a test?*\n\n\n### Why TDD for data?\n1. all the above\n2. confidence in pipeline",
"_____no_output_____"
],
[
"# Relevant Packages: pytest\n<br>\n\n**pytest:**\nframework for writing and running tests\n- pypi\n- auto-discovery of your tests (prefix `test` on files, classes & functions)\n- runs unittest and nose tests\n- write functions not classes\n- useful plugins (coverage, aws, selenium, databases, etc)\n- [Human-readable usage here](https://gist.github.com/kwmiebach/3fd49612ef7a52b5ce3a)\n<br><br>\n",
"_____no_output_____"
],
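[
"A minimal sketch of what a pytest test file looks like (an illustration for this talk, not one of the bundled test files): plain functions prefixed with `test_`, using bare `assert` statements. Running `pytest -v` from the project root collects and runs them automatically.\n\n```python\n# content of test_sample.py -- discovered via the test_ prefix\ndef add(x, y):\n    return x + y\n\ndef test_add():\n    assert add(2, 3) == 5\n\ndef test_add_negative():\n    assert add(-1, 1) == 0\n```",
"_____no_output_____"
],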
[
"# Relevant Packages: datatest\n\n**datatest:**\nhelps speed up and formalize data-wrangling and data validation tasks\n\n- pypi\n- sits on top of pytest or unittest\n- Test data pipeline components and end-to-end behavior\n\n**ipytest:**\nhelper package - run tests inside jupyter notebook\n(labs coming)",
"_____no_output_____"
],
[
"# 1- TDD for tidy data\n\n\n### datatest deets!\n\n- *core functions:*\n 1. validation\n 2. error reporting\n 3. acceptance declarations (data is dirty!)\n <br><br>\n- built-in classes: selecting, querying, and iterating over data \n- both pytest & unittest styles\n- works with Pandas\n- useful for pipelines\n\n\n- https://github.com/shawnbrown/datatest\n- https://datatest.readthedocs.io/en/stable/index.html\n",
"_____no_output_____"
],
[
"#### datatest - what does it do for you?\n\n- **validation:** check that raw data meets requirements you specify\n - columns exist\n - values are in: specific set, range, types\n - match order and sequences@specific index, mapping\n - fuzzy\n\n- **compute differences** between inputs & test conditions\n\n- **acceptances** - based on differences\n - tolerance - absolute\n - tolerance - percentage\n - fuzzy, others\n - composable - construct acceptance criteria based on *intersection of lower-level datatest acceptances*\n\n- **all in a test framework**\n\n**[link: validate docs](https://datatest.readthedocs.io/en/stable/reference/datatest-core.html#datatest.validate)**",
"_____no_output_____"
],
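[
"The same kinds of checks can also be written in pytest style with datatest's functional `validate()` helper (a rough sketch; treat the exact call forms as an approximation of the API linked above rather than a definitive usage):\n\n```python\nimport pandas as pd\nimport datatest as dt\n\ndf = pd.read_csv('data/test_datatest/movies.csv')\n\ndef test_columns():\n    # column names must match the required set\n    dt.validate(df.columns, {'title', 'rating', 'year', 'runtime'})\n\ndef test_year_is_int():\n    # every value in the column must be an int\n    dt.validate(df['year'], int)\n```",
"_____no_output_____"
],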
[
"### Example 0 - datatest cases\n\n#### sources:\n- https://datatest.readthedocs.io/en/stable/tutorial/dataframe.html\n<br><br>\n- https://github.com/moshez/interactive-unit-test/blob/master/unit_testing.ipynb",
"_____no_output_____"
]
],
[
[
"# setup - thank you Moshe!\nimport unittest\n\ndef test(klass):\n loader = unittest.TestLoader()\n # suite=loader.loadTestsFromTestCase(klass) # original\n suite=loader.loadTestsFromModule(klass) # to work with datatest example\n runner = unittest.TextTestRunner()\n runner.run(suite)\n\n# other helpful setup\n# ipytest - https://github.com/chmp/ipytest\nimport ipytest\nimport ipytest.magics\n# enable pytest's assertions and ipytest's magics\nipytest.config(rewrite_asserts=False, magics=True)\n\n\n# load datatest example\nimport pandas as pd\ndf = pd.read_csv(\"./data/test_datatest/movies.csv\")\ndf.head(5)\n",
"_____no_output_____"
],
[
"# %load tests/test_01_datatest_movies_df_unit\n#!/usr/bin/env python\nimport pandas as pd\nimport datatest as dt\nimport os\n\n\ndef setUpModule():\n global df\n print(os.getcwd())\n df = pd.read_csv('data/test_datatest/movies.csv')\n\n\nclass TestMovies(dt.DataTestCase):\n @dt.mandatory\n def test_columns(self):\n self.assertValid(\n df.columns,\n {'title', 'rating', 'year', 'runtime'},\n )\n\n def test_title(self):\n self.assertValidRegex(df['title'], r'^[A-Z]')\n\n def test_rating(self):\n self.assertValidSuperset(\n df['rating'],\n {'G', 'PG', 'PG-13', 'R', 'NC-17'},\n )\n\n def test_year(self):\n self.assertValid(df['year'], int)\n\n def test_runtime(self):\n self.assertValid(df['runtime'], int)\n\ntest(TestMovies())\n",
".F.F."
],
[
"# what is going on with our original data?\ndf_fixed = pd.read_csv('data/test_datatest/movies.csv')\nprint(df_fixed.iloc[7:11, :]) #looks better w/print",
" title rating year runtime\n7 Cool Hand Luke GP 1967 127\n8 The Craft R 1996 101\n9 Doctor Zhivago PG-13 1965 197\n10 el Topo Not Rated 1970 125\n"
],
[
"# fix the bad data - through pipeline or manually\ndf_fixed = pd.read_csv('data/test_datatest/movies_fixed.csv')\nprint(df_fixed.iloc[7:11, :])",
" title rating year runtime\n7 Cool Hand Luke PG-13 1967 127\n8 The Craft R 1996 101\n9 Doctor Zhivago PG-13 1965 197\n10 El Topo R 1970 125\n"
],
[
"# clear existing test objects in jupyter notebook - similar to reset\n%reset_selective -f df \n%reset_selective -f TestMovies",
"_____no_output_____"
],
[
"# fixed data - rerun tests\ndef setUpModule():\n global df\n print(os.getcwd())\n df = pd.read_csv('data/test_datatest/movies_fixed.csv') # note new source\n\n\nclass TestMovies(dt.DataTestCase):\n @dt.mandatory\n def test_columns(self):\n self.assertValid(\n df.columns,\n {'title', 'rating', 'year', 'runtime'},\n )\n\n def test_title(self):\n self.assertValidRegex(df['title'], r'^[A-Z]')\n\n def test_rating(self):\n self.assertValidSuperset(\n df['rating'],\n {'G', 'PG', 'PG-13', 'R', 'NC-17'},\n )\n\n def test_year(self):\n self.assertValid(df['year'], int)\n\n def test_runtime(self):\n self.assertValid(df['runtime'], int)\n\ntest(TestMovies())",
"....."
]
],
[
[
"# 2 - TDD for data engineering\n\n\n### Example 1 - finding urls in excel\n\n- url test case\n- multiple url test case which breaks prior tests\n- regex101.com illustration (edit function to make tests pass)\n https://regex101.com/\n- final regex to rule them all\n\n\n",
"_____no_output_____"
],
[
"#### Sample data (under /data/test_cais) - needs transformation\n\n|example 1 |~ |example 2 |\n|:--- |:--- |:---|\n| <img src=\"data/images/excel_sample2013.png\" alt=\"excel example 1\" style=\"height: 900px;\"> | ...|<img src=\"data/images/excel_sample2018.png\" alt=\"excel example 2\" style=\"height: 900px;\"> |\n ",
"_____no_output_____"
]
],
[
[
"# %load tests/test_02_cais_find_single_url\n\"\"\"\ntest functions to find url in cell content from an excel worksheet\n\nfunctions below have \"do_this_later_\" prefix to prevent tests from running during early part of talk\nremove prefix as we walk through examples, and re-run tests\n\"\"\"\nfrom src.excel_find_url import find_url\n\n\ndef test_find_single_url():\n \"\"\"\n unit test to find url in a single text string\n :return: None\n \"\"\"\n # the find_url function we are testing takes cell content as a string, and current results dict\n # pass an empty results dict, so no existing value is found\n result = {}\n\n # inputs we expect to pass\n input01 = \"Coeducational Boarding/Day School Grades 6-12; Enrollment 350 www.prioryca.org\"\n\n # declare result we expect to find here\n assert find_url(input01, result) == \"www.prioryca.org\"\n",
"_____no_output_____"
],
[
"# %load src/excel_find_url.py\n# %load src/excel_find_url.py\n# %%writefile src/excel_find_url.py\n\n\nimport re\nfrom src.excel_read_cell_info import check_if_already_found\n\ndef find_url(content, result):\n \"\"\"\n finds url of school if it exists in cell\n :param content: cell content from spreadsheet\n :type content: string\n :param result: dict of details on current school\n :type result: dict\n :return: url\n :rtype: basestring\n \"\"\"\n if check_if_already_found(\"url\", result):\n return result['url']\n\n # different regex to use during python talk\n # https://regex101.com\n\n # regex = re.compile(r\"w{3}.*\", re.IGNORECASE)\n # regex = re.compile(r\"(http|https):\\/\\/.*\", re.IGNORECASE) # EDIT THIS LIVE\n\n regex = re.compile(\n r\"((http|https):\\/\\/)?[a-zA-Z0-9.\\/?::-_=#]+\\.([a-zA-Z]){2,6}([a-zA-Z0-9..\\/&\\/\\-_=#])*\",\n re.IGNORECASE)\n\n try:\n match = re.search(regex,\n str(content))\n except TypeError:\n raise TypeError\n\n if match:\n url = str(match.group()).strip()\n return url\n else:\n return None\n",
"_____no_output_____"
],
[
"%ls \"tests/\"",
" Volume in drive D is Local.storage\n Volume Serial Number is FA8E-D32B\n\n Directory of D:\\Dropbox\\0.SurfaceOwl\\dev\\pythontalk_tdd_for_data\\tests\n\n2019-07-24 05:33 PM <DIR> .\n2019-07-24 05:33 PM <DIR> ..\n2019-07-24 05:33 PM <DIR> __pycache__\n2019-07-24 05:33 PM 227 test_00_simple_pytest_example.py\n2019-07-24 04:34 PM 789 test_01_datatest_movies_df_unit.py\n2019-07-24 04:41 PM 802 test_02_cais_find_single_url.py\n2019-07-24 04:41 PM 900 test_03_cais_find_https_url.py\n2019-07-24 04:41 PM 1,611 test_04_cais_find_multi_url.py\n2019-07-24 04:41 PM 3,275 test_05_cais_name_count_2013.py\n2019-07-24 04:41 PM 14,826 test_06_cais_name_count_2018.py\n 7 File(s) 22,430 bytes\n 3 Dir(s) 129,539,538,944 bytes free\n"
],
[
"test02 = \"tests/test_02_cais_find_single_url.py\"\n\n__file__ = test02\n\nipytest.clean_tests()\nipytest.config.addopts=['-v']\n# ['-k test_03_cais_find_https_url.py']\n\nipytest.run()",
"================================================= test session starts =================================================\nplatform win32 -- Python 3.7.3, pytest-5.0.1, py-1.8.0, pluggy-0.12.0 -- d:\\dropbox\\0.surfaceowl\\dev\\pythontalk_tdd_for_data\\venv\\scripts\\python.exe\ncachedir: .pytest_cache\nrootdir: D:\\Dropbox\\0.SurfaceOwl\\dev\\pythontalk_tdd_for_data\nplugins: datatest-0.9.6, cov-2.7.1\ncollecting ... collected 1 item\n\ntests/test_02_cais_find_single_url.py::test_find_single_url PASSED [100%]\n\n============================================== 1 passed in 0.02 seconds ===============================================\n"
],
[
"# %load tests/test_03_cais_find_https_url.py\n\"\"\"\ntest functions to find url in cell content from an excel worksheet\n\nfunctions below have \"do_this_later_\" prefix to prevent tests from running during early part of talk\nremove prefix as we walk through examples, and re-run tests\n\"\"\"\nfrom src.excel_find_url import find_url\n\n\ndef test_find_https_url():\n \"\"\"\n unit test multiple strings for urls in bulk - rather than separate test functions for each\n one way to rapidly iterate on your code, nicely encapsulates similar cases\n\n requires editing REGEX in excel_read_cell_info.find_url to make this test pass\n \"\"\"\n result = {}\n\n # inputs we expect to pass\n input01 = \"Coed Boarding/Day School Grades 6-12; Enrollment 350 http://www.prioryca.org\"\n input02 = \"https://windwardschool.org\"\n\n assert find_url(input01, result) == \"http://www.prioryca.org\"\n assert find_url(input02, result) == \"https://windwardschool.org\"\n",
"_____no_output_____"
],
[
"# %load src/excel_find_url.py\n# %load src/excel_find_url.py\n# %%writefile src/excel_find_url.py\n\n\nimport re\nfrom src.excel_read_cell_info import check_if_already_found\n\ndef find_url(content, result):\n \"\"\"\n finds url of school if it exists in cell\n :param content: cell content from spreadsheet\n :type content: string\n :param result: dict of details on current school\n :type result: dict\n :return: url\n :rtype: basestring\n \"\"\"\n if check_if_already_found(\"url\", result):\n return result['url']\n\n # different regex to use during python talk\n # https://regex101.com\n\n # regex = re.compile(r\"w{3}.*\", re.IGNORECASE)\n # regex = re.compile(r\"(http|https):\\/\\/.*\", re.IGNORECASE) # EDIT THIS LIVE\n\n regex = re.compile(\n r\"((http|https):\\/\\/)?[a-zA-Z0-9.\\/?::-_=#]+\\.([a-zA-Z]){2,6}([a-zA-Z0-9..\\/&\\/\\-_=#])*\",\n re.IGNORECASE)\n\n try:\n match = re.search(regex,\n str(content))\n except TypeError:\n raise TypeError\n\n if match:\n url = str(match.group()).strip()\n return url\n else:\n return None\n### Switch to PyCharm",
"_____no_output_____"
]
],
[
[
"### [regex101](https://regex101.com)\n\n w{3}.*\n\n (http|https):\\/\\/.*\n\n ((http|https):\\/\\/)?[a-zA-Z0-9.\\/?::-_=#]+\\.([a-zA-Z]){2,6}([a-zA-Z0-9..\\/&\\/\\-_=#])*\n \n www.prioryca.org\n http://www.prioryca.org\n https://prioryca.org\n\n\n### Switch to PyCharm or your IDE to edit code and run multiple tests",
"_____no_output_____"
],
[
"### Example 2 - finding names & use of supplementary data summaries\n\n- use of expected results file bundled with data as pytest input\n- structured discovery of edge cases\n\n- **objective:** find school names in messy excel document\n- **strategy:** find names by finding specific formats - removing stopwords & addresses\n- **test goals:** confirm code finds same # of names as we do manually\n- **test approach:** summarize names manually in new tab, *then test code results vs. manual results*\n",
"_____no_output_____"
],
[
"#### Recall our data (under /data/test_cais) - needs transformation\n\n|example 1 |~ |example 2 |\n|:--- |:--- |:---|\n| <img src=\"data/images/excel_sample2013.png\" alt=\"excel example 1\" style=\"height: 900px;\"> | ...|<img src=\"data/images/excel_sample2018.png\" alt=\"excel example 2\" style=\"height: 900px;\"> |",
"_____no_output_____"
],
[
"#### Review input files (/data/test_cais)\n<br><br>\n<img src=\"data/images/excel_summarize_expected_results.png\" alt=\"excel example 1\" style=\"height: 900px;\">",
"_____no_output_____"
]
],
[
[
"\"\"\"\ntests focused on ability to pull all the names from a cais excel file\n\"\"\"\n\ndef test_find_2013_cais_name_table10():\n \"\"\"\n test finding names in first member schools tab\n test function to dynamically look up names vs. expected result from separate file\n :return: True or False\n \"\"\"\n test_file = \"School_Directory_2013-2014-converted.xlsx\"\n results_file = \"cais_name_counts_manual_2013-2014.xlsx\"\n table_num = 10\n\n found_in_table_10, expected_in_table_10 = common_search(test_file, results_file, table_num)\n\n assert found_in_table_10 == expected_in_table_10\n",
"_____no_output_____"
]
],
[
[
"#### Data Driven transformation accuracy (/data/test_cais)\n<br><br>\n<img src=\"data/images/test_results.excel_table_accuracy.png\" alt=\"dynamic input testing\">",
"_____no_output_____"
],
[
"\n# 3 - When not to use TDD for data?\n\n- EDA\n- quick prototypes\n- data source is complete & managed\n- cost / time >> benefits\n",
"_____no_output_____"
],
[
"# 4 - Resources\n\n**this talk:** https://github.com/surfaceowl/pythontalk_tdd_for_data\n<br><br>\n**pytest**\n\n[pytest on pypi](https://pypi.org/project/pytest/) [pytest docs](https://docs.pytest.org/en/latest/)\n<br><br>\n**ipytest**\n\n[ipytest pypi](https://pypi.org/project/ipytest/) [ipytest github](https://github.com/chmp/ipytest)\n<br><br>\n**datatest**\n\n[datatest on pypi](https://pypi.org/project/datatest/) [github](https://github.com/shawnbrown/datatest) [docs](https://datatest.readthedocs.io/en/stable/)\n\n**TDD for data**\n\n[towards data science article](https://towardsdatascience.com/tdd-datascience-689c98492fcc)\n",
"_____no_output_____"
],
[
"# Recap: Our Objectives\n\n- Intro TDD (Test Driven Development)\n- Learned about pytest & datatest \n- Saw testing in action for:\n 1. tidy data\n 2. transformation / data engineering\n- Understand When not to use TDD\n- Have links to Resources",
"_____no_output_____"
],
[
"# END",
"_____no_output_____"
],
[
"### setup notes\n\nvenv, then pip install -r requirements.txt\nconftest.py must be in project root\n\nrun pytest from terminal - must be in tests dir\n\npycharm setup -- set test runner to pytest\n\n\n### resources\n\nhttps://nbviewer.jupyter.org/github/agostontorok/tdd_data_analysis/blob/master/TDD%20in%20data%20analysis%20-%20Step-by-step%20tutorial.ipynb#Step-by-step-TDD-in-a-data-science-task\n\nhttp://www.tdda.info/\n\nfix pytest Module not found\nhttps://medium.com/@dirk.avery/pytest-modulenotfounderror-no-module-named-requests-a770e6926ac5\n\n\n#### regex\nhttps://regex101.com/\n",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d0ef0ad7b5bc4e884adb482279c744ad29b9a1e4
| 43,093 |
ipynb
|
Jupyter Notebook
|
notebooks/InformationFlow.ipynb
|
R3dFruitRollUp/fuzzingbook
|
02d4800b6a9aeb21b5a01ee57ddbcb8ec0ae5738
|
[
"MIT"
] | 1 |
2019-02-02T19:04:36.000Z
|
2019-02-02T19:04:36.000Z
|
notebooks/InformationFlow.ipynb
|
FOGSEC/fuzzingbook
|
02d4800b6a9aeb21b5a01ee57ddbcb8ec0ae5738
|
[
"MIT"
] | null | null | null |
notebooks/InformationFlow.ipynb
|
FOGSEC/fuzzingbook
|
02d4800b6a9aeb21b5a01ee57ddbcb8ec0ae5738
|
[
"MIT"
] | 1 |
2018-12-01T16:34:30.000Z
|
2018-12-01T16:34:30.000Z
| 26.244214 | 343 | 0.489964 |
[
[
[
"# Information Flow\n\nIn this chapter, we detail how to track information flows in python by tainting input strings, and tracking the taint across string operations.",
"_____no_output_____"
],
[
"Some material on `eval` exploitation is adapted from the excellent [blog post](https://nedbatchelder.com/blog/201206/eval_really_is_dangerous.html) by Ned Batchelder.",
"_____no_output_____"
],
[
"**Prerequisites**\n\n* You should have read the [chapter on coverage](Coverage.ipynb).",
"_____no_output_____"
],
[
"Setting up our infrastructure",
"_____no_output_____"
]
],
[
[
"import fuzzingbook_utils",
"_____no_output_____"
],
[
"from ExpectError import ExpectError",
"_____no_output_____"
],
[
"import inspect\nimport enum",
"_____no_output_____"
],
[
"%%html\n<div>\n<style>\ndiv.todo {\n color:red;\n font-weight: bold;\n}\ndiv.todo::before {\n content: \"TODO: \";\n}\ndiv.done {\n color:blue;\n font-weight: bold;\n}\ndiv.done::after {\n content: \" :DONE\";\n}\n\n</style>\n<script>\n function todo_toggle() {\n if (todo_shown){\n $('div.todo').hide('500');\n $('div.done').hide('500');\n $('#toggleButton').val('Show Todo')\n } else {\n $('div.todo').show('500');\n $('div.done').show('500');\n $('#toggleButton').val('Hide Todo')\n }\n todo_shown = !todo_shown\n }\n $( document ).ready(function(){\n todo_shown=false;\n $('div.todo').hide()\n });\n</script>\n<form action=\"javascript:todo_toggle()\"><input type=\"submit\" id=\"toggleButton\" value=\"Show Todo\"></form>",
"_____no_output_____"
]
],
[
[
"Say we want to implement a calculator service in Python. A really simple way to do that is to rely on the `eval()` function in Python. Since we do not want our users to be able to execute arbitrary commands on our server, we use `eval()` with empty `locals` and `globals`",
"_____no_output_____"
]
],
[
[
"def my_calculator(my_input):\n result = eval(my_input, {}, {})\n print(\"The result of %s was %d\" % (my_input, result))",
"_____no_output_____"
]
],
[
[
"It wors as expected:",
"_____no_output_____"
]
],
[
[
"my_calculator('1+2')",
"_____no_output_____"
]
],
[
[
"Does it?",
"_____no_output_____"
]
],
[
[
"with ExpectError():\n my_calculator('__import__(\"os\").popen(\"ls\").read()')",
"_____no_output_____"
]
],
[
[
"As you can see from the error, `eval()` completed successfully, with the system command `ls` executing successfully. It is easy enough for the user to see the output if needed.",
"_____no_output_____"
]
],
[
[
"my_calculator(\"1 if __builtins__['print'](__import__('os').popen('ls').read()) else 0\")",
"_____no_output_____"
]
],
[
[
"The problem is that the Python `__builtins__` is [inserted by default](https://docs.python.org/3/library/functions.html#eval) when one uses `eval()`. We can avoid this by restricting `__builtins__` in `eval` explicitly.",
"_____no_output_____"
]
],
[
[
"def my_calculator(my_input):\n result = eval(my_input, {\"__builtins__\":None}, {})\n print(\"The result of %s was %d\" % (my_input, result))",
"_____no_output_____"
]
],
[
[
"Does it help?",
"_____no_output_____"
]
],
[
[
"with ExpectError():\n my_calculator(\"1 if __builtins__['print'](__import__('os').popen('ls').read()) else 0\")",
"_____no_output_____"
]
],
[
[
"But does it actually?",
"_____no_output_____"
]
],
[
[
"my_calculator(\"1 if [x['print'](x['__import__']('os').popen('ls').read()) for x in ([x for x in (1).__class__.__base__.__subclasses__() if x.__name__ == 'Sized'][0].__len__.__globals__['__builtins__'],)] else 0\")",
"_____no_output_____"
]
],
[
[
"The problem here is that when the user has a way to inject **uninterpreted strings** that can reach a dangerous routine such as `eval()` or an `exec()`, it makes it possible for them to inject dangerous code. What we need is a way to restrict the ability of uninterpreted input string fragments from reaching dangerous portions of code.",
"_____no_output_____"
],
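[
"As a preview of where this chapter is heading, the guard we would like to have looks roughly like this (a sketch only; it relies on the `tstr` taint-tracking class developed below, and rejecting anything still tainted is just one possible policy, not the chapter's final approach):\n\n```python\ndef guarded_calculator(my_input):\n    # my_input is assumed to be a tstr as defined in the next section\n    if my_input.has_taint():\n        raise ValueError('tainted input must not reach eval()')\n    result = eval(my_input, {'__builtins__': None}, {})\n    print('The result of %s was %d' % (my_input, result))\n```",
"_____no_output_____"
],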
[
"## A Simple Taint Tracker",
"_____no_output_____"
],
[
"For capturing information flows we need a new string class. The idea is to use the new tainted string class `tstr` as a wrapper on the original `str` class.\n\n We need to write the `tstr.__new__()` method because we want to track the parent object responsible for the taint (essentially because we want to customize the object creation, and `__init__` is [too late](https://docs.python.org/3/reference/datamodel.html#basic-customization) for that.).\n\nThe taint map in variable `_taint` contains non-overlapping taints mapped to the original string.",
"_____no_output_____"
]
],
[
[
"class tstr_(str):\n def __new__(cls, value, *args, **kw):\n return super(tstr_, cls).__new__(cls, value)\n\n\nclass tstr(tstr_):\n def __init__(self, value, taint=None, parent=None, **kwargs):\n self.parent = parent\n l = len(self)\n if taint:\n if isinstance(taint, int):\n self._taint = list(range(taint, taint + len(self)))\n else:\n assert len(taint) == len(self)\n self._taint = taint\n else:\n self._taint = list(range(0, len(self)))\n\n def has_taint(self):\n return any(True for i in self._taint if i >= 0)\n\n def __repr__(self):\n return str.__repr__(self)\n\n def __str__(self):\n return str.__str__(self)",
"_____no_output_____"
],
[
"t = tstr('hello')\nt.has_taint(), t._taint",
"_____no_output_____"
],
[
"t = tstr('world', taint = 6)\nt._taint",
"_____no_output_____"
]
],
[
[
"By default, when we wrap a string, it is tainted. Hence we also need a way to `untaint` the string.",
"_____no_output_____"
]
],
[
[
"class tstr(tstr):\n def untaint(self):\n self._taint = [-1] * len(self)\n return self",
"_____no_output_____"
],
[
"t = tstr('hello world')\nt.untaint()\nt.has_taint()",
"_____no_output_____"
]
],
[
[
"However, the taint does not transition from the whole string to parts.",
"_____no_output_____"
]
],
[
[
"with ExpectError():\n t = tstr('hello world')\n t[0:5].has_taint()",
"_____no_output_____"
]
],
[
[
"### Slice",
"_____no_output_____"
],
[
"The Python `slice` operator `[n:m]` relies on the object being an `iterator`. Hence, we define the `__iter__()` method.",
"_____no_output_____"
]
],
[
[
"class tstr(tstr):\n def __iter__(self):\n return tstr_iterator(self)\n \n def create(self, res, taint):\n return tstr(res, taint, self)\n\n def __getitem__(self, key):\n res = super().__getitem__(key)\n if type(key) == int:\n key = len(self) + key if key < 0 else key\n return self.create(res, [self._taint[key]])\n elif type(key) == slice:\n return self.create(res, self._taint[key])\n else:\n assert False",
"_____no_output_____"
]
],
[
[
"The Python `slice` operator `[n:m]` relies on the object being an `iterator`. Hence, we define the `__iter__()` method.",
"_____no_output_____"
],
[
"#### The iterator class\nThe `__iter__()` method requires a supporting `iterator` object.",
"_____no_output_____"
]
],
[
[
"class tstr_iterator():\n def __init__(self, tstr):\n self._tstr = tstr\n self._str_idx = 0\n\n def __next__(self):\n if self._str_idx == len(self._tstr): raise StopIteration\n # calls tstr getitem should be tstr\n c = self._tstr[self._str_idx]\n assert type(c) is tstr\n self._str_idx += 1\n return c",
"_____no_output_____"
],
[
"t = tstr('hello world')\nt[0:5].has_taint()",
"_____no_output_____"
]
],
[
[
"### Helper Methods\nWe define a few helper methods that deals with the mapped taint index.",
"_____no_output_____"
]
],
[
[
"class tstr(tstr):\n class TaintException(Exception):\n pass\n\n def x(self, i=0):\n v = self._x(i)\n if v < 0:\n raise taint.TaintException('Invalid mapped char idx in tstr')\n return v\n\n def _x(self, i=0):\n return self.get_mapped_char_idx(i)\n\n def get_mapped_char_idx(self, i):\n if self._taint:\n return self._taint[i]\n else:\n raise taint.TaintException('Invalid request idx')\n\n def get_first_mapped_char(self):\n for i in self._taint:\n if i >= 0:\n return i\n return -1\n\n def is_tpos_contained(self, tpos):\n return tpos in self._taint\n\n def is_idx_tainted(self, idx):\n return self._taint[idx] != -1",
"_____no_output_____"
],
[
"my_str = tstr('abcdefghijkl', taint=list(range(4,16)))\nmy_str[0].x(),my_str[-1].x(),my_str[-2].x()",
"_____no_output_____"
],
[
"s = my_str[0:4]\ns.x(0),s.x(3)",
"_____no_output_____"
],
[
"s = my_str[0:-1]\nlen(s),s.x(10)",
"_____no_output_____"
]
],
[
[
"### Concatenation",
"_____no_output_____"
],
[
"Implementing concatenation is straight forward:",
"_____no_output_____"
]
],
[
[
"class tstr(tstr):\n def __add__(self, other):\n if type(other) is tstr:\n return self.create(str.__add__(self, other), (self._taint + other._taint))\n else:\n return self.create(str.__add__(self, other), (self._taint + [-1 for i in other]))",
"_____no_output_____"
]
],
[
[
"Testing concatenations",
"_____no_output_____"
]
],
[
[
"my_str1 = tstr(\"hello\")\nmy_str2 = tstr(\"world\", taint=6)\nmy_str3 = \"bye\"\nv = my_str1 + my_str2\nprint(v._taint)\n\nw = my_str1 + my_str3 + my_str2\nprint(w._taint)",
"_____no_output_____"
],
[
"class tstr(tstr):\n def __radd__(self, other): #concatenation (+) -- other is not tstr\n if type(other) is tstr:\n return self.create(str.__add__(other, self), (other._taint + self._taint))\n else:\n return self.create(str.__add__(other, self), ([-1 for i in other] + self._taint))",
"_____no_output_____"
],
[
"my_str1 = \"hello\"\nmy_str2 = tstr(\"world\")\nv = my_str1 + my_str2\nv._taint",
"_____no_output_____"
]
],
[
[
"### Replace",
"_____no_output_____"
]
],
[
[
"class tstr(tstr):\n def replace(self, a, b, n=None):\n old_taint = self._taint\n b_taint = b._taint if type(b) is tstr else [-1] * len(b)\n mystr = str(self)\n i = 0\n while True:\n if n and i >= n: break\n idx = mystr.find(a)\n if idx == -1: break\n last = idx + len(a)\n mystr = mystr.replace(a, b, 1)\n partA, partB = old_taint[0:idx], old_taint[last:]\n old_taint = partA + b_taint + partB\n i += 1\n return self.create(mystr, old_taint)",
"_____no_output_____"
],
[
"my_str = tstr(\"aa cde aa\")\nres = my_str.replace('aa', 'bb')\nres, res._taint",
"_____no_output_____"
]
],
[
[
"### Split",
"_____no_output_____"
],
[
"We essentially have to re-implement split operations, and split by space is slightly different from other splits.",
"_____no_output_____"
]
],
[
[
"class tstr(tstr):\n def _split_helper(self, sep, splitted):\n result_list = []\n last_idx = 0\n first_idx = 0\n sep_len = len(sep)\n\n for s in splitted:\n last_idx = first_idx + len(s)\n item = self[first_idx:last_idx]\n result_list.append(item)\n first_idx = last_idx + sep_len\n return result_list\n\n def _split_space(self, splitted):\n result_list = []\n last_idx = 0\n first_idx = 0\n sep_len = 0\n for s in splitted:\n last_idx = first_idx + len(s)\n item = self[first_idx:last_idx]\n result_list.append(item)\n v = str(self[last_idx:])\n sep_len = len(v) - len(v.lstrip(' '))\n first_idx = last_idx + sep_len\n return result_list\n\n def rsplit(self, sep=None, maxsplit=-1):\n splitted = super().rsplit(sep, maxsplit)\n if not sep:\n return self._split_space(splitted)\n return self._split_helper(sep, splitted)\n\n def split(self, sep=None, maxsplit=-1):\n splitted = super().split(sep, maxsplit)\n if not sep:\n return self._split_space(splitted)\n return self._split_helper(sep, splitted)",
"_____no_output_____"
],
[
"my_str = tstr('ab cdef ghij kl')\nab, cdef, ghij, kl = my_str.rsplit(sep=' ')\nprint(ab._taint, cdef._taint, ghij._taint, kl._taint)\n\nmy_str = tstr('ab cdef ghij kl', taint=100)\nab, cdef, ghij, kl = my_str.rsplit()\nprint(ab._taint, cdef._taint, ghij._taint, kl._taint)",
"_____no_output_____"
],
[
"my_str = tstr('ab cdef ghij kl', taint=list(range(0, 15)))\nab, cdef, ghij, kl = my_str.split(sep=' ')\nprint(ab._taint, cdef._taint, kl._taint)\n\nmy_str = tstr('ab cdef ghij kl', taint=list(range(0, 20)))\nab, cdef, ghij, kl = my_str.split()\nprint(ab._taint, cdef._taint, kl._taint)",
"_____no_output_____"
]
],
[
[
"### Strip",
"_____no_output_____"
]
],
[
[
"class tstr(tstr):\n def strip(self, cl=None):\n return self.lstrip(cl).rstrip(cl)\n\n def lstrip(self, cl=None):\n res = super().lstrip(cl)\n i = self.find(res)\n return self[i:]\n\n def rstrip(self, cl=None):\n res = super().rstrip(cl)\n return self[0:len(res)]\n",
"_____no_output_____"
],
[
"my_str1 = tstr(\" abc \")\nv = my_str1.strip()\nv, v._taint",
"_____no_output_____"
],
[
"my_str1 = tstr(\" abc \")\nv = my_str1.lstrip()\nv, v._taint",
"_____no_output_____"
],
[
"my_str1 = tstr(\" abc \")\nv = my_str1.rstrip()\nv, v._taint",
"_____no_output_____"
]
],
[
[
"### Expand Tabs",
"_____no_output_____"
]
],
[
[
"class tstr(tstr):\n def expandtabs(self, n=8):\n parts = self.split('\\t')\n res = super().expandtabs(n)\n all_parts = []\n for i, p in enumerate(parts):\n all_parts.extend(p._taint)\n if i < len(parts) - 1:\n l = len(all_parts) % n\n all_parts.extend([p._taint[-1]] * l)\n return self.create(res, all_parts)",
"_____no_output_____"
],
[
"my_tstr = tstr(\"ab\\tcd\")\nmy_str = str(\"ab\\tcd\")\nv1 = my_str.expandtabs(4)\nv2 = my_tstr.expandtabs(4)\nprint(len(v1), repr(my_tstr), repr(v2), v2._taint)",
"_____no_output_____"
],
[
"class tstr(tstr):\n def join(self, iterable):\n mystr = ''\n mytaint = []\n sep_taint = self._taint\n lst = list(iterable)\n for i, s in enumerate(lst):\n staint = s._taint if type(s) is tstr else [-1] * len(s)\n mytaint.extend(staint)\n mystr += str(s)\n if i < len(lst)-1:\n mytaint.extend(sep_taint)\n mystr += str(self)\n res = super().join(iterable)\n assert len(res) == len(mystr)\n return self.create(res, mytaint)",
"_____no_output_____"
],
[
"my_str = tstr(\"ab cd\", taint=100)\n(v1, v2), v3 = my_str.split(), 'ef'\nprint(v1._taint, v2._taint)\nv4 = tstr('').join([v2,v3,v1])\nprint(v4, v4._taint)",
"_____no_output_____"
],
[
"my_str = tstr(\"ab cd\", taint=100)\n(v1, v2), v3 = my_str.split(), 'ef'\nprint(v1._taint, v2._taint)\nv4 = tstr(',').join([v2,v3,v1])\nprint(v4, v4._taint)",
"_____no_output_____"
]
],
[
[
"### Partitions",
"_____no_output_____"
]
],
[
[
"class tstr(tstr):\n def partition(self, sep):\n partA, sep, partB = super().partition(sep)\n return (\n self.create(partA, self._taint[0:len(partA)]), self.create(sep, self._taint[len(partA): len(partA) + len(sep)]), self.create(partB, self._taint[len(partA) + len(sep):]))\n\n def rpartition(self, sep):\n partA, sep, partB = super().rpartition(sep)\n return (self.create(partA, self._taint[0:len(partA)]), self.create(sep, self._taint[len(partA): len(partA) + len(sep)]), self.create(partB, self._taint[len(partA) + len(sep):]))",
"_____no_output_____"
]
],
[
[
"### Justify",
"_____no_output_____"
]
],
[
[
"class tstr(tstr):\n def ljust(self, width, fillchar=' '):\n res = super().ljust(width, fillchar)\n initial = len(res) - len(self)\n if type(fillchar) is tstr:\n t = fillchar.x()\n else:\n t = -1\n return self.create(res, [t] * initial + self._taint)\n\n def rjust(self, width, fillchar=' '):\n res = super().rjust(width, fillchar)\n final = len(res) - len(self)\n if type(fillchar) is tstr:\n t = fillchar.x()\n else:\n t = -1\n return self.create(res, self._taint + [t] * final)",
"_____no_output_____"
]
],
[
[
"### String methods that do not change taint",
"_____no_output_____"
]
],
[
[
"def make_str_wrapper_eq_taint(fun):\n def proxy(*args, **kwargs):\n res = fun(*args, **kwargs)\n return args[0].create(res, args[0]._taint)\n return proxy\n\nfor name, fn in inspect.getmembers(str, callable):\n if name in ['swapcase', 'upper', 'lower', 'capitalize', 'title']:\n setattr(tstr, name, make_str_wrapper_eq_taint(fn))\n",
"_____no_output_____"
],
[
"a = tstr('aa', taint=100).upper()\na, a._taint",
"_____no_output_____"
]
],
[
[
"### General wrappers",
"_____no_output_____"
],
[
"These are not strictly needed for operation, but can be useful for tracing",
"_____no_output_____"
]
],
[
[
"def make_str_wrapper(fun):\n def proxy(*args, **kwargs):\n res = fun(*args, **kwargs)\n return res\n return proxy\n\nimport types\ntstr_members = [name for name, fn in inspect.getmembers(tstr,callable)\nif type(fn) == types.FunctionType and fn.__qualname__.startswith('tstr')]\n\nfor name, fn in inspect.getmembers(str, callable):\n if name not in set(['__class__', '__new__', '__str__', '__init__',\n '__repr__','__getattribute__']) | set(tstr_members):\n setattr(tstr, name, make_str_wrapper(fn))",
"_____no_output_____"
]
],
[
[
"### Methods yet to be translated",
"_____no_output_____"
],
[
"These methods generate strings from other strings. However, we do not have the right implementations for any of these. Hence these are marked as dangerous until we can generate the right translations.",
"_____no_output_____"
]
],
[
[
"def make_str_abort_wrapper(fun):\n def proxy(*args, **kwargs):\n raise TaintException('%s Not implemented in TSTR' % fun.__name__)\n return proxy\n\nfor name, fn in inspect.getmembers(str, callable):\n if name in ['__format__', '__rmod__', '__mod__', 'format_map', 'format',\n '__mul__','__rmul__','center','zfill', 'decode', 'encode', 'splitlines']:\n setattr(tstr, name, make_str_abort_wrapper(fn))",
"_____no_output_____"
]
],
[
[
"## EOF Tracker",
"_____no_output_____"
],
[
"Sometimes we want to know where an empty string came from. That is, if an empty string is the result of operations on a tainted string, we want to know the best guess as to what the taint index of the preceding character is.",
"_____no_output_____"
],
[
"### Slice",
"_____no_output_____"
],
[
"\nFor detecting EOF, we need to carry the cursor. The main idea is the cursor indicates the taint of the character in front of it.",
"_____no_output_____"
]
],
[
[
"class eoftstr(tstr):\n def create(self, res, taint):\n return eoftstr(res, taint, self)\n \n def __getitem__(self, key):\n def get_interval(key):\n return ((0 if key.start is None else key.start),\n (len(res) if key.stop is None else key.stop))\n\n res = super().__getitem__(key)\n if type(key) == int:\n key = len(self) + key if key < 0 else key\n return self.create(res, [self._taint[key]])\n elif type(key) == slice:\n if res:\n return self.create(res, self._taint[key])\n # Result is an empty string\n t = self.create(res, self._taint[key])\n key_start, key_stop = get_interval(key)\n cursor = 0\n if key_start < len(self):\n assert key_stop < len(self)\n cursor = self._taint[key_stop]\n else:\n if len(self) == 0:\n # if the original string was empty, we assume that any\n # empty string produced from it should carry the same taint.\n cursor = self.x()\n else:\n # Key start was not in the string. We can reply only\n # if the key start was just outside the string, in\n # which case, we guess.\n if key_start != len(self):\n raise taint.TaintException('Can\\'t guess the taint')\n cursor = self._taint[len(self) - 1] + 1\n # _tcursor gets created only for empty strings.\n t._tcursor = cursor\n return t\n\n else:\n assert False",
"_____no_output_____"
],
[
"class eoftstr(eoftstr):\n def get_mapped_char_idx(self, i):\n if self._taint:\n return self._taint[i]\n else:\n if i != 0:\n raise taint.TaintException('Invalid request idx')\n # self._tcursor gets created only for empty strings.\n # use the exception to determine which ones need it.\n return self._tcursor",
"_____no_output_____"
],
[
"t = eoftstr('hello world')\nprint(repr(t[11:]))\nprint(t[11:].x(), t[11:]._taint)",
"_____no_output_____"
]
],
[
[
"## A Comparison Tracker",
"_____no_output_____"
],
[
"Sometimes, we also want to know what each character in an input was compared to.",
"_____no_output_____"
],
[
"### Operators",
"_____no_output_____"
]
],
[
[
"class Op(enum.Enum):\n LT = 0\n LE = enum.auto()\n EQ = enum.auto()\n NE = enum.auto()\n GT = enum.auto()\n GE = enum.auto()\n IN = enum.auto()\n NOT_IN = enum.auto()\n IS = enum.auto()\n IS_NOT = enum.auto()\n FIND_STR = enum.auto()\n\n\nCOMPARE_OPERATORS = {\n Op.EQ: lambda x, y: x == y,\n Op.NE: lambda x, y: x != y,\n Op.IN: lambda x, y: x in y,\n Op.NOT_IN: lambda x, y: x not in y,\n Op.FIND_STR: lambda x, y: x.find(y)\n}\n\nComparisons = []",
"_____no_output_____"
]
],
[
[
"### Instructions",
"_____no_output_____"
]
],
[
[
"class Instr:\n def __init__(self, o, a, b):\n self.opA = a\n self.opB = b\n self.op = o\n\n def o(self):\n if self.op == Op.EQ:\n return 'eq'\n elif self.op == Op.NE:\n return 'ne'\n else:\n return '?'\n\n def opS(self):\n if not self.opA.has_taint() and type(self.opB) is tstr:\n return (self.opB, self.opA)\n else:\n return (self.opA, self.opB)\n\n @property\n def op_A(self):\n return self.opS()[0]\n\n @property\n def op_B(self):\n return self.opS()[1]\n\n def __repr__(self):\n return \"%s,%s,%s\" % (self.o(), repr(self.opA), repr(self.opB))\n\n def __str__(self):\n if self.op == Op.EQ:\n if str(self.opA) == str(self.opB):\n return \"%s = %s\" % (repr(self.opA), repr(self.opB))\n else:\n return \"%s != %s\" % (repr(self.opA), repr(self.opB))\n elif self.op == Op.NE:\n if str(self.opA) == str(self.opB):\n return \"%s = %s\" % (repr(self.opA), repr(self.opB))\n else:\n return \"%s != %s\" % (repr(self.opA), repr(self.opB))\n elif self.op == Op.IN:\n if str(self.opA) in str(self.opB):\n return \"%s in %s\" % (repr(self.opA), repr(self.opB))\n else:\n return \"%s not in %s\" % (repr(self.opA), repr(self.opB))\n elif self.op == Op.NOT_IN:\n if str(self.opA) in str(self.opB):\n return \"%s in %s\" % (repr(self.opA), repr(self.opB))\n else:\n return \"%s not in %s\" % (repr(self.opA), repr(self.opB))\n else:\n assert False",
"_____no_output_____"
]
],
[
[
"### Equivalance",
"_____no_output_____"
]
],
[
[
"class ctstr(eoftstr):\n def create(self, res, taint):\n o = ctstr(res, taint, self)\n o.comparisons = self.comparisons\n return o\n \n def with_comparisons(self, comparisons):\n self.comparisons = comparisons\n return self",
"_____no_output_____"
],
[
"class ctstr(ctstr):\n def __eq__(self, other):\n if len(self) == 0 and len(other) == 0:\n self.comparisons.append(Instr(Op.EQ, self, other))\n return True\n elif len(self) == 0:\n self.comparisons.append(Instr(Op.EQ, self, other[0]))\n return False\n elif len(other) == 0:\n self.comparisons.append(Instr(Op.EQ, self[0], other))\n return False\n elif len(self) == 1 and len(other) == 1:\n self.comparisons.append(Instr(Op.EQ, self, other))\n return super().__eq__(other)\n else:\n if not self[0] == other[0]:\n return False\n return self[1:] == other[1:]",
"_____no_output_____"
],
[
"t = ctstr('hello world', taint=100).with_comparisons([])\nprint(t.comparisons)\nt == 'hello'\nfor c in t.comparisons:\n print(repr(c))",
"_____no_output_____"
],
[
"class ctstr(ctstr):\n def __ne__(self, other):\n return not self.__eq__(other)",
"_____no_output_____"
],
[
"t = ctstr('hello', taint=100).with_comparisons([])\nprint(t.comparisons)\nt != 'bye'\nfor c in t.comparisons:\n print(repr(c))",
"_____no_output_____"
],
[
"class ctstr(ctstr):\n def __contains__(self, other):\n self.comparisons.append(Instr(Op.IN, self, other))\n return super().__contains__(other)",
"_____no_output_____"
],
[
"class ctstr(ctstr):\n def find(self, sub, start=None, end=None):\n if start == None:\n start_val = 0\n if end == None:\n end_val = len(self)\n self.comparisons.append(Instr(Op.IN, self[start_val:end_val], sub))\n return super().find(sub, start, end)",
"_____no_output_____"
]
],
[
[
"## Lessons Learned\n\n* One can track the information flow form input to the internals of a system.",
"_____no_output_____"
],
[
"## Next Steps\n\n_Link to subsequent chapters (notebooks) here:_",
"_____no_output_____"
],
[
"## Background\n\n\\cite{Lin2008}",
"_____no_output_____"
],
[
"## Exercises\n\n_Close the chapter with a few exercises such that people have things to do. To make the solutions hidden (to be revealed by the user), have them start with_\n\n```markdown\n**Solution.**\n```\n\n_Your solution can then extend up to the next title (i.e., any markdown cell starting with `#`)._\n\n_Running `make metadata` will automatically add metadata to the cells such that the cells will be hidden by default, and can be uncovered by the user. The button will be introduced above the solution._",
"_____no_output_____"
],
[
"### Exercise 1: _Title_\n\n_Text of the exercise_",
"_____no_output_____"
]
],
[
[
"# Some code that is part of the exercise\npass",
"_____no_output_____"
]
],
[
[
"_Some more text for the exercise_",
"_____no_output_____"
],
[
"**Solution.** _Some text for the solution_",
"_____no_output_____"
]
],
[
[
"# Some code for the solution\n2 + 2",
"_____no_output_____"
]
],
[
[
"_Some more text for the solution_",
"_____no_output_____"
],
[
"### Exercise 2: _Title_\n\n_Text of the exercise_",
"_____no_output_____"
],
[
"**Solution.** _Solution for the exercise_",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
d0ef26648cc664c93a65ff4d5230b9b66b1adcf0
| 414,501 |
ipynb
|
Jupyter Notebook
|
.ipynb_checkpoints/3. Facial Keypoint Detection, Complete Pipeline-checkpoint.ipynb
|
adjanni/Facial_Keypoints_Detection_w_CNN--Pytorch
|
f9db16197c776872fb3a3de4b4e32a40619a7535
|
[
"MIT"
] | null | null | null |
.ipynb_checkpoints/3. Facial Keypoint Detection, Complete Pipeline-checkpoint.ipynb
|
adjanni/Facial_Keypoints_Detection_w_CNN--Pytorch
|
f9db16197c776872fb3a3de4b4e32a40619a7535
|
[
"MIT"
] | null | null | null |
.ipynb_checkpoints/3. Facial Keypoint Detection, Complete Pipeline-checkpoint.ipynb
|
adjanni/Facial_Keypoints_Detection_w_CNN--Pytorch
|
f9db16197c776872fb3a3de4b4e32a40619a7535
|
[
"MIT"
] | null | null | null | 1,148.202216 | 189,292 | 0.951284 |
[
[
[
"## Face and Facial Keypoint detection\n\nAfter you've trained a neural network to detect facial keypoints, you can then apply this network to *any* image that includes faces. The neural network expects a Tensor of a certain size as input and, so, to detect any face, you'll first have to do some pre-processing.\n\n1. Detect all the faces in an image using a face detector (we'll be using a Haar Cascade detector in this notebook).\n2. Pre-process those face images so that they are grayscale, and transformed to a Tensor of the input size that your net expects. This step will be similar to the `data_transform` you created and applied in Notebook 2, whose job was tp rescale, normalize, and turn any iimage into a Tensor to be accepted as input to your CNN.\n3. Use your trained model to detect facial keypoints on the image.\n\n---",
"_____no_output_____"
],
[
"In the next python cell we load in required libraries for this section of the project.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"#### Select an image \n\nSelect an image to perform facial keypoint detection on; you can select any image of faces in the `images/` directory.",
"_____no_output_____"
]
],
[
[
"import cv2\n# load in color image for face detection\nimage = cv2.imread('images/michelle_detected.png')\n\n# switch red and blue color channels \n# --> by default OpenCV assumes BLUE comes first, not RED as in many images\nimage = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)\n\n# plot the image\nfig = plt.figure(figsize=(9,9))\nplt.imshow(image)",
"_____no_output_____"
]
],
[
[
"## Detect all faces in an image\n\nNext, you'll use one of OpenCV's pre-trained Haar Cascade classifiers, all of which can be found in the `detector_architectures/` directory, to find any faces in your selected image.\n\nIn the code below, we loop over each face in the original image and draw a red square on each face (in a copy of the original image, so as not to modify the original). You can even [add eye detections](https://docs.opencv.org/3.4.1/d7/d8b/tutorial_py_face_detection.html) as an *optional* exercise in using Haar detectors.\n\nAn example of face detection on a variety of images is shown below.\n\n<img src='images/haar_cascade_ex.png' width=80% height=80%/>\n",
"_____no_output_____"
]
],
[
[
"# load in a haar cascade classifier for detecting frontal faces\nface_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')\neye_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_eye.xml')\n\n# run the detector\n# the output here is an array of detections; the corners of each detection box\n# if necessary, modify these parameters until you successfully identify every face in a given image\nfaces = face_cascade.detectMultiScale(image, 1.2, 3)\n\n# make a copy of the original image to plot detections on\nimage_with_detections = image.copy()\n\n# from color to gray\ngray = cv2.cvtColor(image_with_detections, cv2.COLOR_BGR2GRAY)\n\n# loop over the detected faces, mark the image where each face is found\nfor (x,y,w,h) in faces:\n # draw a rectangle around each detected face\n # you may also need to change the width of the rectangle drawn depending on image resolution\n cv2.rectangle(image_with_detections,(x,y),(x+w,y+h),(255,0,0),3)\n roi_gray = gray[y:y+h, x:x+w]\n roi_color = image_with_detections[y:y+h, x:x+w]\n eyes = eye_cascade.detectMultiScale(roi_gray, 1.9, 3)\n for (ex,ey,ew,eh) in eyes:\n cv2.rectangle(roi_color,(ex,ey),(ex+ew,ey+eh),(0,255,0),2)\n \n\nfig = plt.figure(figsize=(9,9))\n\nplt.imshow(image_with_detections)",
"_____no_output_____"
]
],
[
[
"## Loading in a trained model\n\nOnce you have an image to work with (and, again, you can select any image of faces in the `images/` directory), the next step is to pre-process that image and feed it into your CNN facial keypoint detector.\n\nFirst, load your best model by its filename.",
"_____no_output_____"
]
],
[
[
"import torch\nfrom models import Net\n\nnet = Net()\n\n#keypoints_model_BachSize32_Relu_NaimishNet.pt\n## TODO: load the best saved model parameters (by your path name)\n## You'll need to un-comment the line below and add the correct name for *your* saved model\nnet.load_state_dict(torch.load('saved_models/keypoints_model_BachSize64_Relu_NaimishNet.pt'))\n\n## print out your net and prepare it for testing (uncomment the line below)\nnet.eval()",
"_____no_output_____"
]
],
[
[
"## Keypoint detection\n\nNow, we'll loop over each detected face in an image (again!) only this time, you'll transform those faces in Tensors that your CNN can accept as input images.\n\n### TODO: Transform each detected face into an input Tensor\n\nYou'll need to perform the following steps for each detected face:\n1. Convert the face from RGB to grayscale\n2. Normalize the grayscale image so that its color range falls in [0,1] instead of [0,255]\n3. Rescale the detected face to be the expected square size for your CNN (224x224, suggested)\n4. Reshape the numpy image into a torch image.\n\n**Hint**: The sizes of faces detected by a Haar detector and the faces your network has been trained on are of different sizes. If you find that your model is generating keypoints that are too small for a given face, try adding some padding to the detected `roi` before giving it as input to your model.\n\nYou may find it useful to consult to transformation code in `data_load.py` to help you perform these processing steps.\n\n\n### TODO: Detect and display the predicted keypoints\n\nAfter each face has been appropriately converted into an input Tensor for your network to see as input, you can apply your `net` to each face. The ouput should be the predicted the facial keypoints. These keypoints will need to be \"un-normalized\" for display, and you may find it helpful to write a helper function like `show_keypoints`. You should end up with an image like the following with facial keypoints that closely match the facial features on each individual face:\n\n<img src='images/michelle_detected.png' width=30% height=30%/>\n\n\n",
"_____no_output_____"
]
],
[
[
"image_copy = np.copy(image)\n\n# loop over the detected faces from your haar cascade\nfor (x,y,w,h) in faces:\n \n # Select the region of interest that is the face in the image \n #NOTE: The faces in the datasets in the training set are not as zoomed in as the ones Haar Cascade detects.\n # That s why I HAVE to grab more area around the detected faces to make sure the entire head is present in the\n # input image.\n \n margin = int(w*100)\n roi = image_copy[max(y-margin,0):min(y+h+margin,image.shape[0]), \n max(x-margin,0):min(x+w+margin,image.shape[1])]\n \n ## TODO: Convert the face region from RGB to grayscale\n roi = cv2.cvtColor(roi, cv2.COLOR_RGB2GRAY)\n\n ## TODO: Normalize the grayscale image so that its color range falls in [0,1] instead of [0,255]\n roi = roi/255.0\n \n ## TODO: Rescale the detected face to be the expected square size for your CNN (224x224, suggested)\n roi_resize = cv2.resize(roi, (224, 224))\n print('Original size ', roi.shape, 'Resize image ', roi_resize.shape)\n \n ## TODO: Reshape the numpy image shape (H x W x C) into a torch image shape (C x H x W)\n roi = torch.from_numpy(roi_resize.reshape(1, 1, 224, 224))\n roi = roi.type(torch.FloatTensor)\n \n ## TODO: Make facial keypoint predictions using your loaded, trained network \n pred = net(roi)\n pred = pred.view(68, -1)\n \n pred_key = pred.data.numpy()\n pred_key = pred_key*50 + 0\n \n ## TODO: Display each detected face and the corresponding keypoints \n plt.figure(figsize = (6,6))\n plt.imshow(roi_resize, cmap = 'gray')\n plt.scatter(pred_key[:, 0], pred_key[:, 1])\n plt.show()",
"Original size (908, 916) Resize image (224, 224)\n"
]
],
[
[
"**Observation** The predicted keypoints seems to be doing an okay job! The misfitting of some keypoint is due to the fact the training of the model is not yet optimal at 7 iterations and has a loss of 0.03 in average. I could improve it by running a bact size of 64 and increasing the epoch to 10 or 12.\n\n**Questions** How do we tune the model to identify keypoints on the around the nose and the mouth?\n",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0ef57ca94919714f3fa7e45a83e563cd3d5ae34
| 17,312 |
ipynb
|
Jupyter Notebook
|
Sensitivity_Analysis/3_Methods_Bar_Plot.ipynb
|
tomtuamnuq/covasim-dds
|
1e3ce8f9dda6908ca20040a3b532495de3bdc4c1
|
[
"Apache-2.0"
] | 2 |
2022-03-11T09:48:19.000Z
|
2022-03-20T09:06:31.000Z
|
Sensitivity_Analysis/3_Methods_Bar_Plot.ipynb
|
tomtuamnuq/covasim-dds
|
1e3ce8f9dda6908ca20040a3b532495de3bdc4c1
|
[
"Apache-2.0"
] | null | null | null |
Sensitivity_Analysis/3_Methods_Bar_Plot.ipynb
|
tomtuamnuq/covasim-dds
|
1e3ce8f9dda6908ca20040a3b532495de3bdc4c1
|
[
"Apache-2.0"
] | null | null | null | 102.43787 | 13,076 | 0.838205 |
[
[
[
"import matplotlib.pyplot as plt\nimport numpy as np\nimport seaborn as sn\nimport pandas as pd",
"_____no_output_____"
],
[
"par_list = ['Beta', 'Asymp Factor', 'Severe Prob', 'Death Prob']\nmorris = np.array([590.433333, 310.450000, 443.050000, 187.233333])\nrbd = np.array([0.415621,0.066794, 0.226657, 0.045249])\ndelta = np.array([0.278506, 0.072211, 0.241161, 0.061120])\n\nmethods = {\"Morris Index\": morris, \"RBD-Fast\":rbd, \"Delta\": delta}\nfor k in methods.keys():\n m = max(methods[k])\n methods[k] = methods[k] / m\n \ndf = pd.DataFrame(methods, index=par_list)\ndf",
"_____no_output_____"
],
[
"df.plot.bar(figsize=(10,6))",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code"
]
] |
d0ef667616b7c400b4b4ddadba0459c2a3d98538
| 15,842 |
ipynb
|
Jupyter Notebook
|
.ipynb_checkpoints/RL_Tutorial_1-checkpoint.ipynb
|
sezan92/RLTutorialKeras
|
eda3048e8515e13aefbfae49a545ae3672af3f9d
|
[
"MIT"
] | 11 |
2019-04-08T10:06:07.000Z
|
2020-06-11T02:15:56.000Z
|
.ipynb_checkpoints/RL_Tutorial_1-checkpoint.ipynb
|
sezan92/RLTutorialKeras
|
eda3048e8515e13aefbfae49a545ae3672af3f9d
|
[
"MIT"
] | 1 |
2019-02-26T16:21:37.000Z
|
2019-02-26T16:21:37.000Z
|
.ipynb_checkpoints/RL_Tutorial_1-checkpoint.ipynb
|
sezan92/RLTutorialKeras
|
eda3048e8515e13aefbfae49a545ae3672af3f9d
|
[
"MIT"
] | 6 |
2019-04-07T16:00:44.000Z
|
2021-05-10T00:52:28.000Z
| 27.503472 | 424 | 0.527522 |
[
[
[
"## Reinforcement Learning Tutorial -1: Q Learning",
"_____no_output_____"
],
[
"#### MD Muhaimin Rahman\nsezan92[at]gmail[dot]com",
"_____no_output_____"
],
[
"Q learning , can be said one of the most famous -and kind of intuitive- of all Reinforcement learning algorithms. In fact ,the recent all algorithms using Deep learning , are based on the Q learning algorithms. So, to work on recent algorithms, one must have a good idea on Q learning.",
"_____no_output_____"
],
[
"### Intuition",
"_____no_output_____"
],
[
"First , start with an Intuition. Lets assume , you are in a maze",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"Okay okay! I admit, it is not a maze. just a house with 5 rooms. And I got it from, this [link](http://mnemstudio.org/path-finding-q-learning-tutorial.htm) . Your goal is to get out of this place, no matter where you are. But you dont know - atleast pretend to - how to get there! After wondering about the map, you stumbled upon a mysterious letter with a lot of numbers in the room.",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"The matrix has 6 columns and 6 rows. What you will have to do , is to go to the room with highest value. Suppose, you are in room number 2. Then , you will have to move to room number 3 . Then you get out! Look at the picture again! You can try with every state, you are guaranteed to get out of the house, using this matrix! .\n",
"_____no_output_____"
],
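[
"To make this concrete - picking the room with the highest value from the matrix - here is a tiny sketch that is not part of the original tutorial; the matrix entries below are made up, only the lookup matters:\n\n```python\nimport numpy as np\n\nQ = np.zeros((6, 6)) # hypothetical 6x6 value matrix like the one on the letter\nQ[2, 3] = 80 # pretend that moving from room 2 to room 3 has the highest value\n\ncurrent_room = 2\nnext_room = int(np.argmax(Q[current_room])) # greedy choice: go to the highest-valued room\nprint(next_room) # -> 3\n```",
"_____no_output_____"
],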
[
"In the world of RL, every room is called a ```state```, movement from one state to another is called ```action```. Our game has a very ***JARGONISH*** name, ```Markov Decision Process``` . Maybe they invented this name to freak everybody out. But in short, this process means, your action from current state never depends on previous state. Practically such processes are impossible, but it helps to simplify problems ",
"_____no_output_____"
],
[
" Now the question is , how can we get this ? ",
"_____no_output_____"
],
[
"- First , initialize the matrix as Zeros",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"- Then we will apply the Q learning update equation",
"_____no_output_____"
],
[
"\\begin{equation}\nQ(s_t,a) = Q(s_t,a) + \\alpha (Q'(s_{t+1},a)-Q(s_t,a))\n\\end{equation}",
"_____no_output_____"
],
[
"Here, $s_t$ is state at time $t$ , $s_{t+1}$ means the next state, $a$ is action , $r$ is reward we get-if we can get - from one state to another state. Q(s_t,a_t) means Q matrix value for state $s_t$ and action $a_t$ , $Q'(s_{t+1},a)$ means target Q value with state $s_{t+1}$ and the ***BEST ACTION*** for next state. Here $\\alpha $ is learning rate}",
"_____no_output_____"
],
[
"Before we proceed, let me ask you, does this equation ring a bell ? I mean, haven't you seen a similar equation ? ",
"_____no_output_____"
],
[
"Yeah, you got it , it is similar to Gradient descent Equation. If you dont know Gradient descent equation, I am sorry, you wont be able to get the future tutorials. So I suggest you get the basic and working Idea of Neural Networks and Gradient descent algorithms",
"_____no_output_____"
],
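[
"As a small aside (not part of the original tutorial), the update can be written in a few lines of Python. Here `target` stands for $Q'(s_{t+1},a)$, which we still need to compute - that is exactly what comes next:\n\n```python\nalpha = 0.5 # learning rate, illustrative value only\nQ = {} # Q table keyed by (state, action) pairs\n\ndef update(Q, s, a, target, alpha):\n    old = Q.get((s, a), 0.0)\n    Q[(s, a)] = old + alpha * (target - old) # same shape as a gradient descent step\n```",
"_____no_output_____"
],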
[
"Now ,How can we get $Q'(s_{t+1},a)$ ? ",
"_____no_output_____"
],
[
"Using Bellman Equation",
"_____no_output_____"
],
[
"\\begin{equation}\nQ'(s_{t+1},a) = r+ \\gamma max(Q(s_{t+1},a_t))\n\\end{equation}",
"_____no_output_____"
],
[
"It means the target $Q$ value for every state and action is the sum of reward with that state and action, and the maximum $Q$ value of next state multiplied with discount factor $\\gamma$",
"_____no_output_____"
],
[
"***Where did this equation came from ? ***",
"_____no_output_____"
],
[
"Okay chill! let's start from the game again ! So suppose , every room has reward, $R_t,R_{t+1},R_{t+2},R_{t+3},R_{t+4},R_{t+5}$.. So obviously , the value of a state will be the expected cumulative reward",
"_____no_output_____"
],
[
"\\begin{equation}\nQ(s,a) = R_t + R_{t+1} + R_{t+2}+ R_{t+3}+ R_{t+4}+ R_{t+5}\n\\end{equation}",
"_____no_output_____"
],
[
"Suppose, someone comes here, and says, He wants give more weight to sooner rewards than later rewards. What should we do ? We will introduce, discount factor, $\\gamma$ , which is $0<\\gamma<1$ ..",
"_____no_output_____"
],
[
"\\begin{equation}\nQ(s,a) = R_t + \\gamma R_{t+1} + \\gamma^2 R_{t+2}+ \\gamma^3 R_{t+3}+ \\gamma^4 R_{t+4}+ \\gamma^5 R_{t+5}\n\\end{equation}",
"_____no_output_____"
],
[
"\\begin{equation}\nQ(s,a) = R_t + \\gamma [R_{t+1} + \\gamma R_{t+2}+ \\gamma^2 R_{t+3}+ \\gamma^3 R_{t+4}+ \\gamma^4 R_{t+5}]\n\\end{equation}",
"_____no_output_____"
],
[
"This equation can be rewritten as",
"_____no_output_____"
],
[
"\\begin{equation}\nQ(s_t,a) = R_t+\\gamma Q(s_{t+1},a_{t+1})\n\\end{equation}",
"_____no_output_____"
],
[
"Suppose, we have some finite discrete actions in our hand, and each resulting $Q$ values of its own, what we will do ? We will try to take the action of maximum $Q$ value!",
"_____no_output_____"
],
[
"\\begin{equation}\nQ(s_t,a) = R_t+\\gamma max(Q(s_{t+1},a))\n\\end{equation}",
"_____no_output_____"
],
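[
"Before moving on to the code, a quick numeric illustration of the discounting used above (this snippet is an added sketch and the numbers are made up):\n\n```python\ngamma = 0.9\nrewards = [1, 1, 1, 1, 1] # R_t, R_{t+1}, ...\ndiscounted = sum(gamma**t * r for t, r in enumerate(rewards))\n# 1 + 0.9 + 0.81 + 0.729 + 0.6561 = 4.0951, so earlier rewards carry more weight\n```",
"_____no_output_____"
],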
[
"### Coding!",
"_____no_output_____"
],
[
"Let's start coding!",
"_____no_output_____"
],
[
"I will be using ***Open Ai*** gym environment. The Introduction and Installtion of environments are given [here](https://github.com/openai/gym)",
"_____no_output_____"
]
],
[
[
"import gym\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"Initialization of Environments",
"_____no_output_____"
],
[
"I will use the Mountaincar environment by Open AI gym. It is a classic problem invented from 90s. I intend to use this environment for all algorithms .",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"In this game, your task is to get the car reach that green flag. For every step you will get -1 .So , your job is to reach the goal position with minimum steps. Maximum steps limit is 200. ",
"_____no_output_____"
]
],
[
[
"env = gym.make('MountainCar-v0')\ns = env.reset() #Reset the car",
"_____no_output_____"
]
],
[
[
"```env.reset()``` gives the initial state. State is the position and velocity of the car in a given time",
"_____no_output_____"
],
[
"This game's actions can be 0,1,2 . 0 for left, 1 for doing nothing, 2 for right\n```env.step(action)``` returns four arguments",
"_____no_output_____"
],
[
"- next state\n- reward\n- terminal , it means if game is over or not\n- info , for now , it is unnecessary",
"_____no_output_____"
],
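[
"A minimal usage sketch (added for illustration; the exact numbers depend on the random initial state):\n\n```python\ns = env.reset() # initial state: [position, velocity]\nnext_state, reward, done, info = env.step(2) # action 2 pushes the car to the right\nprint(next_state, reward, done) # reward is -1 for every step until the goal\n```",
"_____no_output_____"
],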
[
"Hyper Parameters",
"_____no_output_____"
],
[
"- ```legal_actions``` number of actions\n- ```actions``` the actions list\n- ```gamma``` discount factor $\\gamma$\n- ```lr``` learning rate $\\alpha$\n- ```num_episodes``` number of episodes\n- ```epsilon``` epsilon , to choose random actions \n- ```epsilon_decay``` epsilon decay rate",
"_____no_output_____"
]
],
[
[
"legal_actions=env.action_space.n\nactions = [0,1,2]\ngamma =0.99\nlr =0.5\nnum_episodes =30000\nepsilon =0.5\nepsilon_decay =0.99",
"_____no_output_____"
]
],
[
[
"Codeblock to discretize the state. Because ***Q learning*** doesnt work on continuous state space, we have to convert states into 10 discrete states",
"_____no_output_____"
]
],
[
[
"N_BINS = [10,10]\n\nMIN_VALUES = [0.6,0.07]\nMAX_VALUES = [-1.2,-.07]\nBINS = [np.linspace(MIN_VALUES[i], MAX_VALUES[i], N_BINS[i]) for i in range(len(N_BINS))]\nrList =[]\ndef discretize(obs):\n return tuple([int(np.digitize(obs[i], BINS[i])) for i in range(len(N_BINS))])",
"_____no_output_____"
]
],
[
[
"Q Learning CLass",
"_____no_output_____"
]
],
[
[
"class QL:\n def __init__(self,Q,policy,\n legal_actions,\n actions,\n gamma,\n lr):\n self.Q = Q #Q matrix\n self.policy =policy\n self.legal_actions=legal_actions\n self.actions = actions\n self.gamma =gamma\n self.lr =lr\n def q_value(self,s,a):\n \"\"\"Gets the Q value for a certain state and action\"\"\"\n if (s,a) in self.Q:\n self.Q[(s,a)]\n else:\n self.Q[s,a]=0\n return self.Q[s,a]\n def action(self,s):\n \"\"\"Gets the action for cetain state\"\"\"\n if s in self.policy:\n return self.policy[s]\n else:\n self.policy[s] = np.random.randint(0,self.legal_actions)\n return self.policy[s]\n def learn(self,s,a,s1,r,done):\n \"\"\"Updates the Q matrix\"\"\"\n if done== False:\n self.Q[(s,a)] =self.q_value(s,a)+ self.lr*(r+self.gamma*max([self.q_value(s1,a1) for a1 in self.actions]) - self.q_value(s,a))\n else:\n self.Q[(s,a)] =self.q_value(s,a)+ self.lr*(r - self.q_value(s,a))\n self.q_values = [self.q_value(s,a1) for a1 in self.actions]\n self.policy[s] = self.actions[self.q_values.index(max(self.q_values))]\n",
"_____no_output_____"
]
],
[
[
"Q Matrix Parameters",
"_____no_output_____"
],
[
"- ```Q``` - Q table. We will use dictionary data structure.\n- ```policy``` - policy table , it will give us the action for given state\n",
"_____no_output_____"
]
],
[
[
"Q = {}\npolicy ={}\nlegal_actions =3\nQL = QL(Q,policy,legal_actions,actions,gamma,lr)",
"_____no_output_____"
]
],
[
[
"Training",
"_____no_output_____"
],
[
"### Psuedocode",
"_____no_output_____"
],
[
"- get initial state $s_{raw}$\n- discretize initial state , $s \\gets discretize(s_{raw})$\n- set total reward to zero , $r_{total} \\gets 0$\n- set terminal $d$ to false , $d \\gets False$\n- for each step\n- - choose action based on epsilon greedy policy\n- - get next state $s1_{raw} $, reward , $r$, terminal $d$ doing the action\n- - $s1 \\gets discretize(s1_{raw}) $\n- - $r_{total} \\gets r_{total}+r$\n- - if $d == True $ \n- - - if $r_{total}<-199$ \n- - - - then give $r \\gets -100$\n- - - - Update $Q$ table\n- - - - break \n- - else \n- - - Update $Q$ table\n- - - break\n- - $s \\gets s1$",
"_____no_output_____"
]
],
[
[
"for i in range(num_episodes):\n s_raw= env.reset() #initialize\n s = discretize(s_raw) #discretize the state\n rAll =0 #total reward\n d = False\n j = 0\n for j in range(200):\n \n #epsilon greedy. to choose random actions initially when Q is all zeros\n if np.random.random()< epsilon:\n a = np.random.randint(0,legal_actions)\n epsilon = epsilon*epsilon_decay\n else:\n a =QL.action(s)\n s1_raw,r,d,_ = env.step(a)\n rAll=rAll+r\n s1 = discretize(s1_raw)\n env.render()\n if d:\n if rAll<-199:\n r =-100 #punishment, if the game finishes before reaching the goal , we can give punishment\n QL.learn(s,a,s1,r,d)\n print(\"Failed! Reward %d\"%rAll)\n elif rAll>-199:\n print(\"Passed! Reward %d\"%rAll)\n break\n QL.learn(s,a,s1,r,d)\n if j==199:\n print(\"Reward %d after full episode\"%(rAll))\n \n s = s1\nenv.close() ",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
]
] |
d0ef68c45cd5bc5ec44515c179d1655a3a13d79a
| 149,618 |
ipynb
|
Jupyter Notebook
|
notebooks/RPLIB_Card.ipynb
|
IGARDS/structured_artificial
|
96dac147c56508b43fe0a3296ca24160310ed83b
|
[
"MIT"
] | null | null | null |
notebooks/RPLIB_Card.ipynb
|
IGARDS/structured_artificial
|
96dac147c56508b43fe0a3296ca24160310ed83b
|
[
"MIT"
] | null | null | null |
notebooks/RPLIB_Card.ipynb
|
IGARDS/structured_artificial
|
96dac147c56508b43fe0a3296ca24160310ed83b
|
[
"MIT"
] | null | null | null | 84.339346 | 34,511 | 0.321913 |
[
[
[
"<a href=\"https://colab.research.google.com/github/IGARDS/structured_artificial/blob/main/notebooks/RPLIB_Card.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# RPLIB Card\n\nThis notebook displays the results of a RPLIB analysis (i.e., an RPLIB Card).\n\nFind your dataset ID by visiting https://birg.dev/rplib/, and then enter it below.",
"_____no_output_____"
]
],
[
[
"#@title Dataset ID and Card Type\ndataset_id = 636#@param {type:\"number\"}\ncard_type = 'lop' #@param [\"lop\", \"hillside\", \"massey\", \"colley\"]",
"_____no_output_____"
],
[
"RPLIB_DATA_PREFIX='https://raw.githubusercontent.com/IGARDS/RPLib/main/data'",
"_____no_output_____"
],
[
"#@title Includes and Dependencies\nfrom IPython.display import display, Markdown, Latex\ntry:\n import pyrankability as pyrankability\n import pyrplib as pyrplib\n print('Imported preinstalled pyrankability and pyrplib')\nexcept:\n try:\n import sys\n sys.path.insert(0,f\"{RPLIB_DATA_PREFIX}/../../ranking_toolbox\") \n sys.path.insert(0,f\"{RPLIB_DATA_PREFIX}/..\") \n import pyrankability\n import pyrplib\n print('Imported pyrankability and pyrplib relative to RPLIB_DATA_PREFIX')\n except:\n try:\n sys.path.insert(0,f\"../ranking_toolbox\") \n sys.path.insert(0,f\"../RPLib\") \n import pyrankability\n import pyrplib\n print('Imported pyrankability and pyrplib relative to current directory')\n except:\n print('Assuming colab')\n !apt install libgraphviz-dev\n !pip install git+https://github.com/IGARDS/ranking_toolbox.git --upgrade\n !pip install git+https://github.com/IGARDS/RPLib.git --upgrade\n import pyrankability\n import pyrplib\n print('Installed and imported pyrankability and pyrplib')\n\nfrom pyrplib.artificial import *",
"Imported preinstalled pyrankability and pyrplib\n"
],
[
"#@title Load the card\nrplib_data = pyrplib.data.Data(RPLIB_DATA_PREFIX)\ncard = rplib_data.load_card(dataset_id,card_type)\nprocessed = rplib_data.load_processed(card.source_dataset_id)\nprocessed_dataset = rplib_data.processed_datasets_df.set_index('Dataset ID').loc[card.source_dataset_id]\nunprocessed_dataset = rplib_data.datasets_df.set_index('Dataset ID').loc[processed_dataset['Source Dataset ID']]",
"_____no_output_____"
]
],
[
[
"## Dataset",
"_____no_output_____"
]
],
[
[
"unprocessed_dataset",
"_____no_output_____"
]
],
[
[
"## Processed Dataset",
"_____no_output_____"
]
],
[
[
"processed_dataset",
"_____no_output_____"
]
],
[
[
"## Card",
"_____no_output_____"
]
],
[
[
"visuals = card.get_visuals()['notebook']\nfor key in visuals:\n display(Markdown(f\"### {key}\"))\n display(visuals[key])",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0ef6fda269089e6bc1348f9ef32a756885357aa
| 111,920 |
ipynb
|
Jupyter Notebook
|
examples/product.ipynb
|
MaxDauber/FiMDP
|
95c61d6adb1a856fefdf1c50b9932815bdad9dde
|
[
"MIT"
] | null | null | null |
examples/product.ipynb
|
MaxDauber/FiMDP
|
95c61d6adb1a856fefdf1c50b9932815bdad9dde
|
[
"MIT"
] | null | null | null |
examples/product.ipynb
|
MaxDauber/FiMDP
|
95c61d6adb1a856fefdf1c50b9932815bdad9dde
|
[
"MIT"
] | null | null | null | 66.068477 | 618 | 0.518281 |
[
[
[
"import sys; sys.path.insert(0, '..')\nimport spot\nspot.setup()\nimport buddy\nfrom spot.jupyter import display_inline\n\nfrom decimal import Decimal\nimport decimal\n\nfrom fimdp.core import ConsMDP\nfrom fimdp.energy_solvers import BasicES\nfrom fimdp.labeled import LabeledConsMDP\nfrom fimdp.objectives import BUCHI",
"_____no_output_____"
]
],
[
[
"# Product of lCMDP and DBA with the link to LTL",
"_____no_output_____"
],
[
"The goal of this notebook is, given an LTL formula over the set $AP$ of atomic proposition and a consumption MDP with states labeled by subsets of $AP$, decide if there is a strategy for the MDP such that the LTL formula is satisfied with probability 1. We ilustrate the whole concept of a running example in which we want to enforce visiting 2 states infinitely often.",
"_____no_output_____"
],
[
"Let's first create a CMDP, we will use the following function for easier definitions of actions using uniform distributions.",
"_____no_output_____"
]
],
[
[
"def uniform(dests):\n \"\"\"Create a uniform distribution for given destinations.\n \n dests: iterable of states\n \"\"\"\n count = len(dests)\n mod = 100 % count\n decimal.getcontext().prec = 2\n prob = Decimal(1)/Decimal(count)\n dist = {i: prob for i in dests}\n last = dests[-1]\n dist[last] = dist[last] + Decimal(\"0.01\")*mod\n return dist",
"_____no_output_____"
]
],
[
[
"In the following code, we verify that we can achieve the Büchi objective with targets set `{1,2}` with capacity `5` and that is not enough to visit the state `1`. What we actualy want is to visit **both** of these states infinitely often which we solve later.",
"_____no_output_____"
]
],
[
[
"mdp = ConsMDP()\nmdp.new_states(4)\nmdp.set_reload(3)\nmdp.add_action(0, uniform([1,2]), \"α\", 3)\nmdp.add_action(0, uniform([2,3]), \"β\", 1)\nmdp.add_action(1, uniform([3]), \"r\", 3)\nmdp.add_action(2, uniform([3]), \"r\", 1)\nmdp.add_action(3, uniform([0]), \"s\", 3)\nsolver = BasicES(mdp, 5, [1,2])\nsolver.get_min_levels(BUCHI)\nsolver",
"_____no_output_____"
]
],
[
[
"The corresponding strategy confirms that the state 1 won't be visited by the strategy as there is no occurence of the action `α`.",
"_____no_output_____"
]
],
[
[
"solver.get_selector(4, True)",
"_____no_output_____"
]
],
[
[
"## LTL and Büchi automata\nOur goal of visiting both states `1` \\& `2` infinitely often can be expressed by the LTL formula $\\mathsf{G}\\mathsf{F} s_1 \\land \\mathsf{G}\\mathsf{F}s_2$ (or in the box-diamond notation: $\\Box \\diamond s_1 \\land \\Box \\diamond s_2$) where the atomic proposition $s_1$ corresponds to visiting state `1` and the tomic proposition $s_2$ corresponds to visiting state`2`.\n\nThis formula can be expressed by a **deterministic** üchi automaton (DBA). We use Spot to make the translation for us. The option `BA` forces Spot to deliver a state-based Büchi automaton (default is transition-based generalized Büchi automaton), the option `deterministic` indicates that we prefer deterministic automata, and `complete` asks for an automaton with complete transition function. If you are not sure that your formula can be translated to a DBA, consult [hierarchy of LTL](https://spot.lrde.epita.fr/hierarchy.html). It is also a good practice to make yourself sure by running \n```python\naut.is_deterministic()\n```",
"_____no_output_____"
]
],
[
[
"f = spot.formula(\"GF s1 & GF s2\")\naut = spot.translate(f, \"BA\", \"deterministic\", \"complete\")",
"_____no_output_____"
],
[
"display(aut, aut.is_deterministic())",
"_____no_output_____"
]
],
[
[
"The produced automaton can be used in parallel with our input MDP; this is achieved by a _product_ (alternatively _parallel synchonous composition_) of this automaton an the input MDP. But we need to label states of the MDP with the atomic propositions `s₁` and `s₂`.",
"_____no_output_____"
],
[
"## Labeled CMDP\nWe create a copy of our CMDP and label the states `1` and `2` with the corresponding atomic propositions using the function\n```python\nLabeledConsMDP.state_labels(labels)\n```\nwhere `labels` is a list (of length equal to number of states) of sets of ints; the ints are indices to the list `AP` given in the constructor of `LabeledConsMDP`.",
"_____no_output_____"
]
],
[
[
"lmdp = LabeledConsMDP(AP=[\"s1\",\"s2\"], mdp=mdp)\nlmdp.state_labels = [set(), {0}, {1}, set()]\ndisplay(lmdp, lmdp.state_labels)",
"_____no_output_____"
]
],
[
[
"## Product of labeled CMDP and DBA",
"_____no_output_____"
],
[
"In the following, we explain and show the (simplified) implementation of `LabeledConsMDP.product_with_dba(self, dba)`.\n\nThe states of the product are tuples `(ms,as)` where `ms` stands for a state of the MDP and `as` stands for a states of the automaton. Let's call the set of states of the MDP $S$ and the set of states of the DBA $Q$; further, the labeling function of the labeled MDP is $\\lambda \\colon S \\to 2^{AP}$ and the transition function of the DBA as $\\delta \\colon Q \\times 2^{AP} \\to Q$. For each action `α` and each successor `ms'` for this action from state `ms`, the action `α` of `(ms,as)` has an `α` successor (with the same probability) `(ms', as')` where `as'` is equal to $\\delta(as, \\lambda(ms'))$.\n\nAll tuples that contain a reload state of the mdp, are again reloading. All tuples with an accepting state of the automaton will become targets. The following function `product(lmdp, aut)` returns a CMDP that is the product of `lmdp` and `aut` and a list of target states.\n\n#### Treatment of atomic propositions\nLabels (sets of atomic propositions) are represented by sets of integers in LabeledConsMDP, while they are represented by _binary decission diagrams (BDD)_ in Spot. One BDD can actually represent a set of labels as it in fact represents a boolean function over AP. In our algorithm, we need to evaluate $\\delta(as, \\lambda(ms'))$, which is, we need to find an edge in the automaton whose label (guard) is satisfied by the label of `ms'`. We do this in 2 steps:\n 1. Create a BDD representing exactly the desired label $\\lambda(md')$. This is implemented in \n ```python\n def get_bdd_for_label(label)\n ```\n 2. Perform logical and on this BDD and BDD of all outgoing edges of the current state of the automaton. For all but one edge this operation returns false. We choose the one that is not false (is, in fact, equal to $\\lambda(ms')$.",
"_____no_output_____"
]
],
[
[
"def product(lmdp, aut):\n #TODO check for correct type of mdp\n \n result = ConsMDP()\n num_ap = len(lmdp.AP)\n \n # Check the type of automaton and convert it into\n # complete DBA if needed\n if not aut.is_sba() or not spot.is_complete(aut):\n aut = aut.postprocess(\"BA\", \"complete\")\n \n \n \n # This will store the list of Büchi states\n targets = []\n # This will be our state dictionary\n sdict = {}\n # The list of output states for which we have not yet\n # computed the successors. Items on this list are triplets\n # of the form `(mdps, auts, p)` where `mdps` is the state\n # number in the mdp, `auts` is the state number in the \n # automaton, and p is the state number in the output mdp.\n todo = []\n \n # Mapping of AP representation in MDP to repr. in automaton\n ap2bdd_var = {}\n aut_ap = aut.ap()\n for ap_i, ap in enumerate(lmdp.AP):\n if ap in aut_ap:\n ap2bdd_var[ap_i] = aut_ap.index(ap)\n \n # Given label in mdp, return corresponding BDD\n def get_bdd_for_label(mdp_label):\n cond = buddy.bddtrue\n for ap_i in ap2bdd_var.keys():\n if ap_i in mdp_label:\n cond &= buddy.bdd_ithvar(ap2bdd_var[ap_i])\n else:\n cond -= buddy.bdd_ithvar(ap2bdd_var[ap_i])\n return cond\n \n # Transform a pair of state numbers (mdps, auts) into a state\n # number in the output mdp, creating a new state if needed. \n # Whenever a new state is created, we can add it to todo.\n def dst(mdps, auts):\n pair = (mdps, auts)\n p = sdict.get(pair)\n if p is None:\n p = result.new_state(name=f\"{mdps},{auts}\",\n reload=lmdp.is_reload(mdps))\n sdict[pair] = p\n todo.append((mdps, auts, p))\n if aut.state_is_accepting(auts):\n targets.append(p)\n return p\n \n # Get a successor state in automaton based on label\n def get_successor(aut_state, mdp_label):\n for e in aut.out(aut_state):\n mdp_bdd = get_bdd_for_label(mdp_label)\n if mdp_bdd & e.cond != buddy.bddfalse:\n return e.dst\n \n # Initialization\n # For each state of mdp add a new initial state\n aut_i = aut.get_init_state_number()\n for mdp_s in range(lmdp.num_states):\n label = lmdp.state_labels[mdp_s]\n aut_s = get_successor(aut_i, label)\n dst(mdp_s, aut_s)\n\n # Build all states and edges in the product\n while todo:\n msrc, asrc, osrc = todo.pop()\n for a in lmdp.actions_for_state(msrc):\n # build new distribution\n odist = {}\n for mdst, prob in a.distr.items():\n adst = get_successor(asrc, lmdp.state_labels[mdst])\n odst = dst(mdst, adst)\n odist[odst] = prob\n result.add_action(osrc, odist, a.label, a.cons)\n \n return result, targets",
"_____no_output_____"
],
[
"p, T = product(lmdp, aut)\npsolver = BasicES(p, 5, T)\npsolver.get_min_levels(BUCHI, True)\ndisplay_inline(psolver)",
"_____no_output_____"
]
],
[
[
"We can now see the result of the product on the labeled MDP and the automaton for $\\mathsf{G}\\mathsf{F}s_1 \\land \\mathsf{G}\\mathsf{F} s_2$. We can also see that capacity 5 is no longer sufficient for the Büchi objective (the green ∞ indicate that no initial load is sufficient from given state to satisfy the Büchi objectives with targets `T`). In fact, we need at least 9 units of energy to pass the path through mdp-state `1`.",
"_____no_output_____"
]
],
[
[
"psolver.cap = 9\npsolver.get_min_levels(BUCHI, recompute=True)\ndisplay_inline(psolver)",
"_____no_output_____"
]
],
[
[
"In fact, the function `product` is implemented as a method of `LabeledConsMDP` class.",
"_____no_output_____"
]
],
[
[
"p, T = lmdp.product_with_dba(aut)\npsolver = BasicES(p, 9, T)\npsolver.get_min_levels(BUCHI, True)\ndisplay_inline(psolver)",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0ef7612720f78616e860e6122bc59861e799db8
| 74,878 |
ipynb
|
Jupyter Notebook
|
Synthethic Test data.ipynb
|
prakass1/InteractiveSimilarityExplorer
|
2fa5fb91c7df6424b9ed777ef4373ed7094c2348
|
[
"MIT"
] | null | null | null |
Synthethic Test data.ipynb
|
prakass1/InteractiveSimilarityExplorer
|
2fa5fb91c7df6424b9ed777ef4373ed7094c2348
|
[
"MIT"
] | null | null | null |
Synthethic Test data.ipynb
|
prakass1/InteractiveSimilarityExplorer
|
2fa5fb91c7df6424b9ed777ef4373ed7094c2348
|
[
"MIT"
] | null | null | null | 33.000441 | 1,292 | 0.391103 |
[
[
[
"\"\"\"\nThe concept of the creation of the test data in order to evaluate the similarities, a synthethic data creation is introduced.\nThe idea is as follows:\n1. Outlier Instances.\nThese are the users who deviate from the other in the data. \nIdentifying them would of interest to understand how they behave against most of the dataset\nThe outlier/dissimilar users are identified through the outlier_detection concept we have introduced in this work.\nIdentified outliers are [8,20,27,149] -- Of them [8,27,149] are more suspectful\n4 - Outlier Instances.\n2. Twin Instances.\nCreating an instance with similar properties as in the data. This is a virtual data having same properties.\nFor the time series following thing needs to be done. A sample from a start_date to end_date to create such a data in range from 0 to 1.\n1. A random set of user are first extracted. We extract 3 for example.\n2. Then we create exact same features for a virtual user and change their identity such as user_id... (only static properties)\n3. Virtual user would also require to have their time series recordings. \nFor this following steps are done:\n1. For a randomly choosen start date between (first timestamp) to (end time)\n\n3. Normal Instances.\nThe usual test instances.\n\"\"\"",
"_____no_output_____"
],
[
"import pandas as pd\nimport numpy as np\nimport seaborn as sns",
"_____no_output_____"
],
[
"static_data_tchq = pd.read_csv(\"data/input_csv/3_q.csv\")\nstatic_data_hq = pd.read_csv(\"data/input_csv/4_q.csv\")\ntyt_data = pd.read_csv(\"data/input_csv/1_q.csv\")",
"_____no_output_____"
],
[
"drop_user_ids = [54, 60, 140, 170, 4, 6, 7, 9, 12, 19, 25, 39, 53, 59, 128, 130, 144, 145, 148, 156, 166, 167]",
"_____no_output_____"
],
[
"valid_static_data_tchq = static_data_tchq[~static_data_tchq[\"user_id\"].isin(drop_user_ids)]\nvalid_static_data_hq = static_data_hq[~static_data_hq[\"user_id\"].isin(drop_user_ids)]\nvalid_tyt_data = tyt_data[~tyt_data[\"user_id\"].isin(drop_user_ids)]",
"_____no_output_____"
],
[
"valid_static_data_tchq.head() ",
"_____no_output_____"
],
[
"valid_static_data_tchq.info()",
"<class 'pandas.core.frame.DataFrame'>\nInt64Index: 50 entries, 3 to 70\nData columns (total 39 columns):\nUnnamed: 0 50 non-null int64\nuser_id 50 non-null int64\nquestionnaire_id 50 non-null int64\ncreated_at 50 non-null object\ntschq01 50 non-null object\ntschq02 50 non-null object\ntschq03 50 non-null object\ntschq04-1 50 non-null object\ntschq05 50 non-null object\ntschq06 50 non-null object\ntschq07-1 50 non-null object\ntschq08 50 non-null object\ntschq09 50 non-null object\ntschq10 50 non-null object\ntschq11 50 non-null object\ntschq12 50 non-null int64\ntschq13 49 non-null object\ntschq14 50 non-null object\ntschq15 50 non-null object\ntschq16 50 non-null int64\ntschq17 50 non-null int64\ntschq18 50 non-null object\ntschq19 50 non-null object\ntschq20 50 non-null object\ntschq21 50 non-null object\ntschq22 50 non-null object\ntschq23 50 non-null object\ntschq24 50 non-null object\ntschq25 33 non-null object\ntschq28 50 non-null object\ntschq29 50 non-null object\ntschq30 50 non-null object\ntschq31 50 non-null object\ntschq32 50 non-null object\ntschq33 50 non-null object\ntschq34 50 non-null object\ntschq35 50 non-null object\ntschq04-2 11 non-null object\ntschq07-2 21 non-null object\ndtypes: int64(6), object(33)\nmemory usage: 15.6+ KB\n"
],
[
"user_ids=valid_static_data_tchq[\"user_id\"].to_numpy()\nnp.random.choice(user_ids, 3, replace=False)",
"_____no_output_____"
],
[
"valid_static_data_tchq[valid_static_data_tchq[\"user_id\"].isin([8,20,27,149])]",
"_____no_output_____"
],
[
"valid_static_data_hq[valid_static_data_hq[\"user_id\"].isin([8,20,27,149])]",
"_____no_output_____"
],
[
"valid_static_data_hq[valid_static_data_hq[\"user_id\"].isin([8,20,27,149,])]",
"_____no_output_____"
],
[
"d1 = [str(date) for date in pd.date_range(start='10-01-2018', end='05-09-2019', periods=100)]",
"_____no_output_____"
],
[
"def generate_synthetic_tyt_data(user_id, start_date, end_date, sampling_data, sample_length=80):\n created_at = [str(date) for date in pd.date_range(start=start_date, end=end_date, periods=sample_length)]\n u_id_list = [user_id for _ in range(sample_length)]\n q_id = [3 for _ in range(sample_length)]\n columns=[\"s01\",\"s02\",\"s03\",\"s04\",\"s05\",\"s06\",\"s07\", \"s08\"]\n synthetic_data = sampling_data[columns].sample(n=sample_length)\n synthetic_data[\"user_id\"] = u_id_list\n synthetic_data[\"questionnaire_id\"] = q_id\n synthetic_data[\"created_at\"] = created_at\n return synthetic_data[[\"user_id\",\"questionnaire_id\",\"created_at\",\"s01\",\"s02\",\"s03\",\"s04\",\"s05\",\"s06\",\"s07\",\"s08\"]]",
"_____no_output_____"
],
[
"m1 = generate_synthetic_tyt_data(\"44428\", \"10-10-2018\",\"05-09-2019\",valid_tyt_data)\nm2 = generate_synthetic_tyt_data(\"444154\", \"11-11-2018\",\"05-07-2019\",valid_tyt_data, sample_length=60)\nm3 = generate_synthetic_tyt_data(\"444133\", \"11-11-2018\",\"05-07-2019\",valid_tyt_data, sample_length=60)",
"_____no_output_____"
],
[
"DT_tyt_data = m1.append([m2,m3])",
"_____no_output_____"
],
[
"DT_tyt_data.head()",
"_____no_output_____"
],
[
"valid_static_data_tchq.sample(n=3,random_state=42)",
"_____no_output_____"
],
[
"DT_static_data = valid_static_data_tchq.sample(n=3,random_state=42)\nDT_static_data[\"user_id\"] = DT_static_data[\"user_id\"].apply(lambda x: int(\"\".join(\"444\" + str(x)))) \nDT_static_data[\"Unnamed: 0\"] = DT_static_data[\"Unnamed: 0\"].apply(lambda x: int(\"\".join(\"444\" + str(x)))) ",
"_____no_output_____"
],
[
"DT_static_data",
"_____no_output_____"
],
[
"DT_static_data",
"_____no_output_____"
],
[
"DT_static_data_hq = valid_static_data_hq.sample(n=3,random_state=42)",
"_____no_output_____"
],
[
"DT_static_data_hq[\"user_id\"] = DT_static_data_hq[\"user_id\"].apply(lambda x: int(\"\".join(\"444\" + str(x)))) \nDT_static_data_hq[\"Unnamed: 0\"] = DT_static_data_hq[\"Unnamed: 0\"].apply(lambda x: int(\"\".join(\"444\" + str(x))))",
"_____no_output_____"
],
[
"type(DT_static_data_hq[\"user_id\"].iloc[0])",
"_____no_output_____"
],
[
"valid_static_data_tchq[\"user_id\"].sample(n=3,random_state=0).to_list()",
"_____no_output_____"
],
[
"# Final Simulated Data for 3 users .\n#1. Static Data - DT_tyt_data\n#2. Static Data Hq - DT_static_data\n#3. Time Series Data - DT_static_data_hq",
"_____no_output_____"
],
[
"len(DT_tyt_data)",
"_____no_output_____"
],
[
"len(DT_static_data)",
"_____no_output_____"
],
[
"len(DT_static_data_hq)",
"_____no_output_____"
],
[
"DT_static_data.head()",
"_____no_output_____"
],
[
"static_data_tchq[\"tschq04-2\"].iloc[31]",
"_____no_output_____"
],
[
"DT_static_data_hq.head()",
"_____no_output_____"
],
[
"DT_tyt_data.head()",
"_____no_output_____"
],
[
"type(DT_static_data[\"tschq04-2\"].iloc[0])",
"_____no_output_____"
],
[
"valid_static_data_tchq[\"tschq04-2\"][11]",
"_____no_output_____"
],
[
"import ast\n# Converting string to list \nres = ast.literal_eval(valid_static_data_tchq[\"tschq04-2\"][11]) ",
"_____no_output_____"
],
[
"res\nlist_to_str = \"_\".join([val for val in res + [\"CHILDREN\"]])\nlist_to_str",
"_____no_output_____"
],
[
"res = \"[1,2,3]\"",
"_____no_output_____"
],
[
"isinstance(res, str)",
"_____no_output_____"
],
[
"ast.literal_eval(res)",
"_____no_output_____"
],
[
"## Save all the simulated files as csv and pickle\nDT_static_data.iloc[:,1:].to_csv(\"data/simulate/3_q_sim.csv\")\nDT_static_data_hq.iloc[:,1:].to_csv(\"data/simulate/4_q_sim.csv\")\nDT_tyt_data.to_csv(\"data/simulate/1_q_sim.csv\")\nDT_static_data.iloc[:,1:].to_pickle(\"data/simulate/3_q_sim.pckl\")\nDT_static_data_hq.iloc[:,1:].to_pickle(\"data/simulate/4_q_sim.pckl\")\nDT_tyt_data.to_pickle(\"data/simulate/1_q_sim.pckl\")",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0ef78857f7a7627f72e66810effa7a1817cc8e7
| 5,494 |
ipynb
|
Jupyter Notebook
|
20enero.ipynb
|
michelmunoz99/daa_2021_1
|
4661dbfd9b0684ee3cbe75dfe7eb5d19241f5527
|
[
"MIT"
] | null | null | null |
20enero.ipynb
|
michelmunoz99/daa_2021_1
|
4661dbfd9b0684ee3cbe75dfe7eb5d19241f5527
|
[
"MIT"
] | null | null | null |
20enero.ipynb
|
michelmunoz99/daa_2021_1
|
4661dbfd9b0684ee3cbe75dfe7eb5d19241f5527
|
[
"MIT"
] | null | null | null | 32.898204 | 230 | 0.465417 |
[
[
[
"<a href=\"https://colab.research.google.com/github/michelmunoz99/daa_2021_1/blob/master/20enero.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"class NodoArbol:\n def __init__(self, value, left=None, right=None):\n self.data=value\n self.left=left\n self.right=right\n ",
"_____no_output_____"
],
[
"class BinarySearchTree:\n\n def __init__(self):\n self.__root=None\n\n def insert(self, value):\n if self.__root==None:\n self.__root= NodoArbol(value,None,None)\n else:\n # preguntar si value es menor que root, de ser el caso\n # insertar a la izq, pero puede ser el caso de que el \n # sub arbol izq ya tenga muchos elementos\n self.insert_nodo(self.__root,value)\n \n def insert_nodo(self,nodo,value):\n if nodo.data==value:\n pass\n elif value<nodo.data: # True va a la izq\n if nodo.left==None: # si hay espacio en la izq, ahi va\n nodo.left=NodoArbol(value,None,None)#insertamos el nodo\n else: \n self.insert_nodo(nodo.left,value)# Buscar el sub arbol izq\n else:\n if nodo.right==None:\n nodo.right=NodoArbol(value,None,None)\n else:\n self.insert_nodo(nodo.right,value)# Buscar en sub arbol der\n \n def buscar(self, value):\n if self.__root==None:\n return None\n else:\n # Haremos busqueda recursiva\n return self.__busca_nodo(self.__root,value)\n \n def __busca_nodo(self,nodo,value):\n if nodo ==None:\n return None\n elif nodo.data==value:\n return nodo.data\n elif value< nodo.data:\n return self.__busca_nodo(nodo.left,value)\n else:\n return self.__busca_nodo(nodo.right,value) \n \n def transversal(self,format=\"inorden\"):\n if format ==\"inorden\":\n self.__recorrido_in(self.__root)\n elif format==\"preorden\":\n self.__recorrido_pre(self.__root)\n elif format ==\"posorden\":\n self.__recorrido_pos(self.__root)\n else:\n print(\"Formato de recorrido no válido\")\n \n def __recorrido_pre(self, nodo):\n if nodo != None:\n print(nodo.data, end=\",\")\n self.__recorrido_pre(nodo.left)\n self.__recorrido_pre(nodo.right)\n \n def __recorrido_in(self, nodo):\n if nodo != None:\n self.__recorrido_in(nodo.left)\n print(nodo.data, end=\",\")\n self.__recorrido_in(nodo.right)\n \n def __recorrido_pos(self, nodo):\n if nodo!= None:\n self.__recorrido_pos(nodo.left)\n self.__recorrido_pos(nodo.right)\n print(nodo.data, end=\",\")\n ",
"_____no_output_____"
],
[
"bst=BinarySearchTree()\nbst.insert(50)\nbst.insert(30)\nbst.insert(20)\nres=bst.buscar(30)#true o false?\nprint(\"Dato:\", str(res))\nprint(bst.buscar(40))\nprint(\"Recorrido:\")\nbst.transversal(format=\"preorden\")\nprint(\"recorrido in orden:\")\nbst.transversal()\nprint(\"recorrido pos:\")\nbst.transversal(format=\"pos\")\n",
"Dato: 30\nNone\nRecorrido:\n50,30,20,recorrido in orden:\n20,30,50,recorrido pos:\nFormato de recorrido no válido\n"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0ef79c1a970e89a17b33c1063655f1ff314daf3
| 1,821 |
ipynb
|
Jupyter Notebook
|
cv.ipynb
|
HaiyiMei/i3d-keras
|
386af843692372f0d7d5378e64d6913a5130aba6
|
[
"MIT"
] | 1 |
2021-08-06T05:44:49.000Z
|
2021-08-06T05:44:49.000Z
|
cv.ipynb
|
HaiyiMei/i3d-keras
|
386af843692372f0d7d5378e64d6913a5130aba6
|
[
"MIT"
] | null | null | null |
cv.ipynb
|
HaiyiMei/i3d-keras
|
386af843692372f0d7d5378e64d6913a5130aba6
|
[
"MIT"
] | null | null | null | 20.233333 | 65 | 0.445909 |
[
[
[
"import multiprocessing\ndef worker(d):\n d.reverse()\n print('worker')\n print(d)\n\n\nmgr = multiprocessing.Manager()\nd = mgr.list(range(10))\njobs = multiprocessing.Process(target=worker, args=(d,))\njobs.start()\njobs.join()\n",
"_____no_output_____"
],
[
"from multiprocessing import Process, Manager\n\ndef f(d, l):\n d[1] = '1'\n d['2'] = 2\n d[0.25] = None\n l.reverse()\n print(l)\n\nif __name__ == '__main__':\n with Manager() as manager:\n d = manager.dict()\n l = manager.list(range(10))\n\n p = Process(target=f, args=(d, l))\n p.start()\n p.join()\n\n print(d)\n print(l)",
"{}\n[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code"
]
] |
d0ef7bea6ae270aa52f1f18783725cd54e144b87
| 1,409 |
ipynb
|
Jupyter Notebook
|
predischarge.ipynb
|
xiahualiu/notebook
|
1709c7117948e24c37ca68779684840255239e0e
|
[
"MIT"
] | null | null | null |
predischarge.ipynb
|
xiahualiu/notebook
|
1709c7117948e24c37ca68779684840255239e0e
|
[
"MIT"
] | null | null | null |
predischarge.ipynb
|
xiahualiu/notebook
|
1709c7117948e24c37ca68779684840255239e0e
|
[
"MIT"
] | null | null | null | 24.293103 | 97 | 0.569198 |
[
[
[
"'''\nGlobal Parameters:\n - Battery output voltage: battery_voltage [Unit: Volt]\n - Motor controller coil capacitance: coil_cap [Unit: uF/Micro Farads]\n - Target pre-charge voltage percentage: stop_voltage_percent [Default: 90% EV.6.6.2]\n''' \nbattery_voltage=84*3.6\ncoil_cap=1880\nstop_voltage_percent=0.9",
"_____no_output_____"
],
[
"'''\nTarget Parameters:\n - Target precharge time: target_precharge_time [Unit: Seconds]\n - Motor controller coil capacitance: coil_cap [Unit: uF/Micro Farads]\n - Target pre-charge voltage percentage: stop_voltage_percent [Default: 90% EV.6.6.2]\n''' \ntarget_precharge_time=3",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code"
]
] |
d0ef7ccf6e1ef57e046af3612381f8efd153727a
| 168,905 |
ipynb
|
Jupyter Notebook
|
docs/source/notebooks/marginalized_gaussian_mixture_model.ipynb
|
JvParidon/pymc3
|
f60dc1db74df3c54d707761e9dc054a7fdf0435c
|
[
"Apache-2.0"
] | 2 |
2020-05-29T07:10:45.000Z
|
2021-04-07T06:43:52.000Z
|
docs/source/notebooks/marginalized_gaussian_mixture_model.ipynb
|
nd1511/pymc3
|
4e33b323c35ccd0311b37686c12b56d0b4e8a957
|
[
"Apache-2.0"
] | 2 |
2017-03-02T05:56:13.000Z
|
2019-12-06T19:15:42.000Z
|
docs/source/notebooks/marginalized_gaussian_mixture_model.ipynb
|
nd1511/pymc3
|
4e33b323c35ccd0311b37686c12b56d0b4e8a957
|
[
"Apache-2.0"
] | 1 |
2019-01-02T09:02:18.000Z
|
2019-01-02T09:02:18.000Z
| 473.123249 | 68,228 | 0.929641 |
[
[
[
"# Marginalized Gaussian Mixture Model\n\nAuthor: [Austin Rochford](http://austinrochford.com)",
"_____no_output_____"
]
],
[
[
"%matplotlib inline",
"_____no_output_____"
],
[
"from matplotlib import pyplot as plt\nimport numpy as np\nimport pymc3 as pm\nimport seaborn as sns",
"_____no_output_____"
],
[
"SEED = 383561\n\nnp.random.seed(SEED) # from random.org, for reproducibility",
"_____no_output_____"
]
],
[
[
"Gaussian mixtures are a flexible class of models for data that exhibits subpopulation heterogeneity. A toy example of such a data set is shown below.",
"_____no_output_____"
]
],
[
[
"N = 1000\n\nW = np.array([0.35, 0.4, 0.25])\n\nMU = np.array([0., 2., 5.])\nSIGMA = np.array([0.5, 0.5, 1.])",
"_____no_output_____"
],
[
"component = np.random.choice(MU.size, size=N, p=W)\nx = np.random.normal(MU[component], SIGMA[component], size=N)",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(figsize=(8, 6))\n\nax.hist(x, bins=30, normed=True, lw=0);",
"/opt/conda/lib/python3.5/site-packages/matplotlib/font_manager.py:1297: UserWarning: findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans\n (prop.get_family(), self.defaultFamily[fontext]))\n"
]
],
[
[
"A natural parameterization of the Gaussian mixture model is as the [latent variable model](https://en.wikipedia.org/wiki/Latent_variable_model)\n\n$$\n\\begin{align*}\n\\mu_1, \\ldots, \\mu_K\n & \\sim N(0, \\sigma^2) \\\\\n\\tau_1, \\ldots, \\tau_K\n & \\sim \\textrm{Gamma}(a, b) \\\\\n\\boldsymbol{w}\n & \\sim \\textrm{Dir}(\\boldsymbol{\\alpha}) \\\\\nz\\ |\\ \\boldsymbol{w}\n & \\sim \\textrm{Cat}(\\boldsymbol{w}) \\\\\nx\\ |\\ z\n & \\sim N(\\mu_z, \\tau^{-1}_i).\n\\end{align*}\n$$\n\nAn implementation of this parameterization in PyMC3 is available [here](gaussian_mixture_model.ipynb). A drawback of this parameterization is that is posterior relies on sampling the discrete latent variable $z$. This reliance can cause slow mixing and ineffective exploration of the tails of the distribution.\n\nAn alternative, equivalent parameterization that addresses these problems is to marginalize over $z$. The marginalized model is\n\n$$\n\\begin{align*}\n\\mu_1, \\ldots, \\mu_K\n & \\sim N(0, \\sigma^2) \\\\\n\\tau_1, \\ldots, \\tau_K\n & \\sim \\textrm{Gamma}(a, b) \\\\\n\\boldsymbol{w}\n & \\sim \\textrm{Dir}(\\boldsymbol{\\alpha}) \\\\\nf(x\\ |\\ \\boldsymbol{w})\n & = \\sum_{i = 1}^K w_i\\ N(x\\ |\\ \\mu_i, \\tau^{-1}_i),\n\\end{align*}\n$$\n\nwhere\n\n$$N(x\\ |\\ \\mu, \\sigma^2) = \\frac{1}{\\sqrt{2 \\pi} \\sigma} \\exp\\left(-\\frac{1}{2 \\sigma^2} (x - \\mu)^2\\right)$$\n\nis the probability density function of the normal distribution.\n\nMarginalizing $z$ out of the model generally leads to faster mixing and better exploration of the tails of the posterior distribution. Marginalization over discrete parameters is a common trick in the [Stan](http://mc-stan.org/) community, since Stan does not support sampling from discrete distributions. For further details on marginalization and several worked examples, see the [_Stan User's Guide and Reference Manual_](http://www.uvm.edu/~bbeckage/Teaching/DataAnalysis/Manuals/stan-reference-2.8.0.pdf).\n\nPyMC3 supports marginalized Gaussian mixture models through its `NormalMixture` class. (It also supports marginalized general mixture models through its `Mixture` class.) Below we specify and fit a marginalized Gaussian mixture model to this data in PyMC3.",
"_____no_output_____"
]
],
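[
[
"As a quick, optional illustration (not part of the original notebook), the marginalized density $f(x\\ |\\ \\boldsymbol{w})$ can be evaluated directly with SciPy using the true `W`, `MU`, and `SIGMA` defined above:\n\n```python\nfrom scipy import stats\n\nxs = np.linspace(-2.0, 8.0, 5)\n# sum of weighted component densities, matching the formula above\nmixture_pdf = sum(w * stats.norm.pdf(xs, mu, sigma) for w, mu, sigma in zip(W, MU, SIGMA))\n```",
"_____no_output_____"
]
],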
[
[
"with pm.Model() as model:\n w = pm.Dirichlet('w', np.ones_like(W))\n \n mu = pm.Normal('mu', 0., 10., shape=W.size)\n tau = pm.Gamma('tau', 1., 1., shape=W.size)\n \n x_obs = pm.NormalMixture('x_obs', w, mu, tau=tau, observed=x)",
"_____no_output_____"
],
[
"with model:\n trace = pm.sample(5000, n_init=10000, tune=1000, random_seed=SEED)[1000:]",
"Auto-assigning NUTS sampler...\nInitializing NUTS using advi...\nAverage ELBO = -6,663.8: 100%|██████████| 10000/10000 [00:06<00:00, 1582.50it/s]\nFinished [100%]: Average ELBO = -6,582.7\n100%|██████████| 5000/5000 [-1:54:12<00:00, -0.07s/it]\n"
]
],
[
[
"We see in the following plot that the posterior distribution on the weights and the component means has captured the true value quite well.",
"_____no_output_____"
]
],
[
[
"pm.traceplot(trace, varnames=['w', 'mu']);",
"/opt/conda/lib/python3.5/site-packages/matplotlib/font_manager.py:1297: UserWarning: findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans\n (prop.get_family(), self.defaultFamily[fontext]))\n"
],
[
"pm.plot_posterior(trace, varnames=['w', 'mu']);",
"/opt/conda/lib/python3.5/site-packages/matplotlib/font_manager.py:1297: UserWarning: findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans\n (prop.get_family(), self.defaultFamily[fontext]))\n"
]
],
[
[
"We can also sample from the model's posterior predictive distribution, as follows.",
"_____no_output_____"
]
],
[
[
"with model:\n ppc_trace = pm.sample_posterior_predictive(trace, 5000, random_seed=SEED)",
"100%|██████████| 5000/5000 [03:28<00:00, 23.93it/s]\n"
]
],
[
[
"We see that the posterior predictive samples have a distribution quite close to that of the observed data.",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(figsize=(8, 6))\n\nax.hist(x, bins=30, normed=True,\n histtype='step', lw=2,\n label='Observed data');\nax.hist(ppc_trace['x_obs'], bins=30, normed=True,\n histtype='step', lw=2,\n label='Posterior predictive distribution');\n\nax.legend(loc=1);",
"/opt/conda/lib/python3.5/site-packages/matplotlib/font_manager.py:1297: UserWarning: findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans\n (prop.get_family(), self.defaultFamily[fontext]))\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0ef826897cf1981445bbe09de324cab42702054
| 74,162 |
ipynb
|
Jupyter Notebook
|
13-04-closing-opening.ipynb
|
lmcanavals/dip
|
ef8e8aacac9578e71ce301efc952a68d25efafb4
|
[
"CC0-1.0"
] | 1 |
2021-08-23T20:57:26.000Z
|
2021-08-23T20:57:26.000Z
|
13-04-closing-opening.ipynb
|
lmcanavals/dip
|
ef8e8aacac9578e71ce301efc952a68d25efafb4
|
[
"CC0-1.0"
] | null | null | null |
13-04-closing-opening.ipynb
|
lmcanavals/dip
|
ef8e8aacac9578e71ce301efc952a68d25efafb4
|
[
"CC0-1.0"
] | null | null | null | 537.405797 | 70,502 | 0.949813 |
[
[
[
"from skimage.io import imread\nfrom skimage.color import rgb2gray\nfrom skimage.filters import threshold_otsu\nfrom scipy.ndimage.morphology import binary_erosion, binary_dilation\nimport numpy as np\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"im = rgb2gray(imread('imagenes/huella_1.jpg'))\nthres = threshold_otsu(im)\nim = (im > thres).astype(np.uint8)\n\nplt.imshow(im, cmap='gray'), plt.axis('off')",
"/tmp/ipykernel_38233/2028505557.py:1: FutureWarning: The behavior of rgb2gray will change in scikit-image 0.19. Currently, rgb2gray allows 2D grayscale image to be passed as inputs and leaves them unmodified as outputs. Starting from version 0.19, 2D arrays will be treated as 1D images with 3 channels.\n im = rgb2gray(imread('imagenes/huella_1.jpg'))\n"
],
[
"im1 = 1 - im\nim2 = binary_erosion(im1, structure=np.ones((3, 3)))\nim3 = binary_dilation(im2, structure=np.ones((3, 3)))\nim4 = binary_dilation(im3, structure=np.ones((2, 2)))\nim5 = binary_erosion(im4, structure=np.ones((2, 2)))\nim6 = 1 - im5",
"_____no_output_____"
],
[
"plt.figure(figsize=(30, 20))\nplt.gray()\nplt.subplots_adjust(left=0, right=1, bottom=0, top=0.95, hspace=0.05, wspace=0.05)\n\nplt.subplot(231), plt.imshow(im1), plt.axis('off')\nplt.title('Otsu', size=20)\n\nplt.subplot(232), plt.imshow(im2), plt.axis('off')\nplt.title('erosion', size=20)\n\nplt.subplot(233), plt.imshow(im3), plt.axis('off')\nplt.title('apertura', size=20)\n\nplt.subplot(234), plt.imshow(im4), plt.axis('off')\nplt.title('apertura + dilasion', size=20)\n\nplt.subplot(235), plt.imshow(im5), plt.axis('off')\nplt.title('apertura + cierre', size=20)\n\nplt.subplot(236), plt.imshow(im6), plt.axis('off')\nplt.title('inv', size=20)",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code"
]
] |
d0ef8ab33c771c4b82442dc1765fed0e9f904e1f
| 759,022 |
ipynb
|
Jupyter Notebook
|
sprint-challenge/LS_DS_Uni_4_Sprint_3_Challenge.ipynb
|
DNason1999/DS-Unit-4-Sprint-3-Deep-Learning
|
68e34aa336b074d08440745b1bd09a6dee620752
|
[
"MIT"
] | null | null | null |
sprint-challenge/LS_DS_Uni_4_Sprint_3_Challenge.ipynb
|
DNason1999/DS-Unit-4-Sprint-3-Deep-Learning
|
68e34aa336b074d08440745b1bd09a6dee620752
|
[
"MIT"
] | null | null | null |
sprint-challenge/LS_DS_Uni_4_Sprint_3_Challenge.ipynb
|
DNason1999/DS-Unit-4-Sprint-3-Deep-Learning
|
68e34aa336b074d08440745b1bd09a6dee620752
|
[
"MIT"
] | null | null | null | 1,122.813609 | 172,776 | 0.955432 |
[
[
[
"<img align=\"left\" src=\"https://lever-client-logos.s3.amazonaws.com/864372b1-534c-480e-acd5-9711f850815c-1524247202159.png\" width=200>\n<br></br>\n<br></br>\n\n# Major Neural Network Architectures Challenge\n## *Data Science Unit 4 Sprint 3 Challenge*\n\nIn this sprint challenge, you'll explore some of the cutting edge of Data Science. This week we studied several famous neural network architectures: \nrecurrent neural networks (RNNs), long short-term memory (LSTMs), convolutional neural networks (CNNs), and Autoencoders. In this sprint challenge, you will revisit these models. Remember, we are testing your knowledge of these architectures not your ability to fit a model with high accuracy. \n\n__*Caution:*__ these approaches can be pretty heavy computationally. All problems were designed so that you should be able to achieve results within at most 5-10 minutes of runtime on SageMaker, Colab or a comparable environment. If something is running longer, doublecheck your approach!\n\n## Challenge Objectives\n*You should be able to:*\n* <a href=\"#p1\">Part 1</a>: Train a LSTM classification model\n* <a href=\"#p2\">Part 2</a>: Utilize a pre-trained CNN for objective detection\n* <a href=\"#p3\">Part 3</a>: Describe the components of an autoencoder\n* <a href=\"#p4\">Part 4</a>: Describe yourself as a Data Science and elucidate your vision of AI",
"_____no_output_____"
],
[
"<a id=\"p1\"></a>\n## Part 1 - RNNs\n\nUse an RNN/LSTM to fit a multi-class classification model on reuters news articles to distinguish topics of articles. The data is already encoded properly for use in an RNN model. \n\nYour Tasks: \n- Use Keras to fit a predictive model, classifying news articles into topics. \n- Report your overall score and accuracy\n\nFor reference, the [Keras IMDB sentiment classification example](https://github.com/keras-team/keras/blob/master/examples/imdb_lstm.py) will be useful, as well the RNN code we used in class.\n\n__*Note:*__ Focus on getting a running model, not on maxing accuracy with extreme data size or epoch numbers. Only revisit and push accuracy if you get everything else done!",
"_____no_output_____"
]
],
[
[
"from tensorflow.keras.datasets import reuters\nimport numpy as np\n# save np.load\nnp_load_old = np.load\n# modify the default parameters of np.load\nnp.load = lambda *a,**k: np_load_old(*a, allow_pickle=True, **k)\n# call load_data with allow_pickle implicitly set to true\n(X_train, y_train), (X_test, y_test) = reuters.load_data(num_words=None,\n skip_top=0,\n maxlen=None,\n test_split=0.2,\n seed=723812,\n start_char=1,\n oov_char=2,\n index_from=3)\n\n# restore np.load for future normal usage\nnp.load = np_load_old",
"_____no_output_____"
],
[
"# Demo of encoding\n\nword_index = reuters.get_word_index(path=\"reuters_word_index.json\")\n\nprint(f\"Iran is encoded as {word_index['iran']} in the data\")\nprint(f\"London is encoded as {word_index['london']} in the data\")\nprint(\"Words are encoded as numbers in our dataset.\")",
"Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/reuters_word_index.json\n557056/550378 [==============================] - 0s 0us/step\nIran is encoded as 779 in the data\nLondon is encoded as 544 in the data\nWords are encoded as numbers in our dataset.\n"
],
[
"from tensorflow.keras.preprocessing import sequence\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense, Embedding, LSTM\n\nbatch_size = 46\nmax_features = len(word_index.values())\nmaxlen = 200\n\nprint(len(X_train), 'train sequences')\nprint(len(X_test), 'test sequences')\n\nprint('Pad sequences (samples x time)')\nX_train = sequence.pad_sequences(X_train, maxlen=maxlen)\nX_test = sequence.pad_sequences(X_test, maxlen=maxlen)\nprint('X_train shape:', X_train.shape)\nprint('X_test shape:', X_test.shape)\n\n\nprint('Build model...')",
"8982 train sequences\n2246 test sequences\nPad sequences (samples x time)\nX_train shape: (8982, 200)\nX_test shape: (2246, 200)\nBuild model...\n"
],
[
"model = Sequential()\nmodel.add(Embedding(max_features+1, 64))\nmodel.add(LSTM(64))\nmodel.add(Dense(max_features, activation='softmax'))",
"_____no_output_____"
],
[
"# You should only run this cell once your model has been properly configured\n\nmodel.compile(loss='sparse_categorical_crossentropy',\n optimizer='adam',\n metrics=['accuracy'])\n\nprint('Train...')\nmodel.fit(X_train, y_train,\n batch_size=batch_size,\n epochs=1,\n validation_data=(X_test, y_test))\n\nscore, acc = model.evaluate(X_test, y_test,\n batch_size=batch_size)\n\nprint('Test score:', score)\nprint('Test accuracy:', acc)",
"Train...\nTrain on 8982 samples, validate on 2246 samples\n8982/8982 [==============================] - 132s 15ms/sample - loss: 4.2270 - acc: 0.3479 - val_loss: 2.4033 - val_acc: 0.3664\n2246/2246 [==============================] - 7s 3ms/sample - loss: 2.4033 - acc: 0.3664\nTest score: 2.403268739780144\nTest accuracy: 0.3664292\n"
]
],
[
[
"## Sequence Data Question\n#### *Describe the `pad_sequences` method used on the training dataset. What does it do? Why do you need it?*\n\npad_sequences truncates sequences longer than the maxlen variable, and will add padding to sequences shorter than maxlen.\npad_sequences makes all of the sequences the same length. This is necessary to ensure the batches will all work properly when processed by the RNN\n\n## RNNs versus LSTMs\n#### *What are the primary motivations behind using Long-ShortTerm Memory Cell unit over traditional Recurrent Neural Networks?*\n\nLong-ShortTerm Memory Cell units are used to allow the RNN to learn patterns by referencing data that was presented to it in the past.\nThis allows it to detect inputs it has seen before and to give a similar activation to when it saw that data before.\n\n## RNN / LSTM Use Cases\n#### *Name and Describe 3 Use Cases of LSTMs or RNNs and why they are suited to that use case*\n\n1. Language Generation: They are suited for this as they can remember patterns presented in the training data set and can generate useful sequences that actually make sense.\n\n2. Speech Recognition: Can be used to detect patterns in input sound wave sequences to detremine the most likely word spoken.\n\n3. Speech Sythesis: In a similar way to being able to recognize speech, an RNN could be used to generate speech based on what it was trained on.",
"_____no_output_____"
],
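[
"# Tiny standalone demo (added for illustration; not part of the original submission) of what\n# pad_sequences does, assuming the Keras defaults of pre-padding with 0 and pre-truncating to maxlen.\nfrom tensorflow.keras.preprocessing import sequence\n\ntoy_sequences = [[1, 2, 3], [4, 5, 6, 7, 8, 9]]\npadded = sequence.pad_sequences(toy_sequences, maxlen=4)\n# Expected: [[0 1 2 3], [6 7 8 9]] - the short sequence is left-padded with zeros and the\n# long one keeps only its last 4 elements, so every row ends up with length maxlen.\nprint(padded)",
"_____no_output_____"
],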
[
"<a id=\"p2\"></a>\n## Part 2- CNNs\n\n### Find the Frog\n\nTime to play \"find the frog!\" Use Keras and ResNet50 (pre-trained) to detect which of the following images contain frogs:\n\n<img align=\"left\" src=\"https://d3i6fh83elv35t.cloudfront.net/newshour/app/uploads/2017/03/GettyImages-654745934-1024x687.jpg\" width=400>\n",
"_____no_output_____"
]
],
[
[
"!pip install google_images_download",
"Requirement already satisfied: google_images_download in c:\\users\\dylan nason\\anaconda3\\envs\\u4-s3-dnn\\lib\\site-packages (2.8.0)\nRequirement already satisfied: selenium in c:\\users\\dylan nason\\anaconda3\\envs\\u4-s3-dnn\\lib\\site-packages (from google_images_download) (3.141.0)\nRequirement already satisfied: urllib3 in c:\\users\\dylan nason\\anaconda3\\envs\\u4-s3-dnn\\lib\\site-packages (from selenium->google_images_download) (1.25.3)\n"
],
[
"from google_images_download import google_images_download\n\nresponse = google_images_download.googleimagesdownload()\narguments = {\"keywords\": \"lilly frog pond\", \"limit\": 5, \"print_urls\": True}\nabsolute_image_paths = response.download(arguments)",
"\nItem no.: 1 --> Item name = lilly frog pond\nEvaluating...\nStarting Download...\nImage URL: https://i.pinimg.com/originals/9a/49/08/9a49083d4d7458a194a451eea757a444.jpg\nCompleted Image ====> 1.9a49083d4d7458a194a451eea757a444.jpg\nImage URL: http://www.slrobertson.com/images/usa/georgia/atlanta/atl-botanical-gardens/frog-lily-pond-2-b.jpg\nCompleted Image ====> 2.frog-lily-pond-2-b.jpg\nImage URL: https://cdn.pixabay.com/photo/2017/07/14/17/44/frog-2504507_960_720.jpg\nCompleted Image ====> 3.frog-2504507_960_720.jpg\nImage URL: https://c1.wallpaperflare.com/preview/866/536/996/bull-frog-green-pond-lily-pad.jpg\nCompleted Image ====> 4.bull-frog-green-pond-lily-pad.jpg\nImage URL: https://c8.alamy.com/comp/C63A50/green-frog-floating-on-a-water-lily-pad-in-a-pond-with-pink-flowers-C63A50.jpg\nCompleted Image ====> 5.green-frog-floating-on-a-water-lily-pad-in-a-pond-with-pink-flowers-C63A50.jpg\n\nErrors: 0\n\n"
]
],
[
[
"At time of writing at least a few do, but since the Internet changes - it is possible your 5 won't. You can easily verify yourself, and (once you have working code) increase the number of images you pull to be more sure of getting a frog. Your goal is to validly run ResNet50 on the input images - don't worry about tuning or improving the model.\n\n*Hint* - ResNet 50 doesn't just return \"frog\". The three labels it has for frogs are: `bullfrog, tree frog, tailed frog`\n\n*Stretch goals* \n- Check for fish or other labels\n- Create a matplotlib visualizations of the images and your prediction as the visualization label",
"_____no_output_____"
]
],
[
[
"# You've got something to do in this cell. ;)\n\nimport numpy as np\n\nfrom tensorflow.keras.applications.resnet50 import ResNet50\nfrom tensorflow.keras.preprocessing import image\nfrom tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions\n\nfrogs = ['bullfrog', 'tree_frog', 'tailed_frog']\n\ndef process_img_path(img_path):\n return image.load_img(img_path, target_size=(224, 224))\n\ndef img_contains_frog(img):\n \"\"\" Scans image for Frogs\n \n Should return a boolean (True/False) if a frog is in the image.\n \n Inputs:\n ---------\n img: Precrossed image ready for prediction. The `process_img_path` function should already be applied to the image. \n \n Returns: \n ---------\n frogs (boolean): TRUE or FALSE - There are frogs in the image.\n \n \"\"\"\n x = image.img_to_array(img)\n x = np.expand_dims(x, axis=0)\n x = preprocess_input(x)\n model = ResNet50(weights='imagenet')\n features = model.predict(x)\n results = decode_predictions(features, top=3)[0]\n print(results)\n for result in results:\n if result[1] in frogs:\n return 'frog'\n \n return 'not frog'\n \n \n return None",
"_____no_output_____"
]
],
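[
[
"# Stretch-goal sketch (an added illustration under assumptions, not the graded solution):\n# the same idea as img_contains_frog, but checking for a few fish labels. The label list is\n# only a small, hypothetical selection of ImageNet class names.\nimport numpy as np\nfrom tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input, decode_predictions\nfrom tensorflow.keras.preprocessing import image\n\nfish_labels = ['tench', 'goldfish', 'great_white_shark']\n\ndef img_contains_fish(img):\n    # Preprocess the already-loaded image, run ResNet50, and check the top-3 labels.\n    x = image.img_to_array(img)\n    x = np.expand_dims(x, axis=0)\n    x = preprocess_input(x)\n    results = decode_predictions(ResNet50(weights='imagenet').predict(x), top=3)[0]\n    return any(label in fish_labels for (_, label, _) in results)",
"_____no_output_____"
]
],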
[
[
"#### Stretch Goal: Displaying Predictions",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport os",
"_____no_output_____"
],
[
"path = './downloads/lilly frog pond/'\n\npaths = os.listdir(path)\n\nfor file in paths:\n img = process_img_path(path+file)\n plt.imshow(img)\n plt.show()\n prediction = img_contains_frog(img)\n print(prediction)\n print()",
"_____no_output_____"
]
],
[
[
"<a id=\"p3\"></a>\n## Part 3 - Autoencoders\n\nDescribe a use case for an autoencoder given that an autoencoder tries to predict its own input. \n\n__*Your Answer:*__ \nAn autoencoder can be used for creating a format to use in the compression of files to the smallest possible size with minimal information loss. This is even more effective if all the files to be encoded are very similar in starting format.",
"_____no_output_____"
],
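[
"# Toy illustration (added for clarity, not part of the original answer): a minimal dense\n# autoencoder in Keras that squeezes 784-dimensional inputs down to a 32-value code and\n# reconstructs them - the compression idea described above. All layer sizes are arbitrary.\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense\n\nautoencoder = Sequential([\n    Dense(32, activation='relu', input_shape=(784,)),  # encoder: 784 -> 32\n    Dense(784, activation='sigmoid')                   # decoder: 32 -> 784\n])\nautoencoder.compile(optimizer='adam', loss='binary_crossentropy')\nautoencoder.summary()",
"_____no_output_____"
],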
[
"<a id=\"p4\"></a>\n## Part 4 - More...",
"_____no_output_____"
],
[
"Answer the following questions, with a target audience of a fellow Data Scientist:\n\n- What do you consider your strongest area, as a Data Scientist?\n- What area of Data Science would you most like to learn more about, and why?\n- Where do you think Data Science will be in 5 years?\n- What are the threats posed by AI to our society?\n- How do you think we can counteract those threats? \n- Do you think achieving General Artifical Intelligence is ever possible?\n\nA few sentences per answer is fine - only elaborate if time allows.",
"_____no_output_____"
],
[
"1. My strongest area as a Data Scientist would be ability to experiment and think conceptually about issues to come to a conclusion more efficiently. I am also fairly competent at deploying Flask APIs to Heroku. \n2. I would like to learn more about how I would implement more APIs on different platforms such as AWS instead of Heroku.\n3. I think data science will have evolved immensly. Honestly, it might be impossible to determine the state of data science since it could advance to such a degree in the areas video processing and modeling, including realistic water and physics models with reduced computational load. I also believe a lot of progress will be made in the areas of computer vision and AI robotics. \n4. Threats posed to our society by AI technology is the idea of an advanced AI that can perform tasks we cannot counteract or access information that it is not intended to. This can lead to breaches of saftey or privacy which is not good.\n5. We can counteract the threats posed to our society by devloping in limited environments such as a sort of sandbox environment limited in its scope to the outside world. Another common suggestion is to pass legislation to prevent development of advanced machine intelligence.\n6. I do believe general artificial intelligence is possible eventually. ",
"_____no_output_____"
],
[
"## Congratulations! \n\nThank you for your hard work, and congratulations! You've learned a lot, and you should proudly call yourself a Data Scientist.\n",
"_____no_output_____"
]
],
[
[
"from IPython.display import HTML\n\nHTML(\"\"\"<iframe src=\"https://giphy.com/embed/26xivLqkv86uJzqWk\" width=\"480\" height=\"270\" frameBorder=\"0\" class=\"giphy-embed\" allowFullScreen></iframe><p><a href=\"https://giphy.com/gifs/mumm-champagne-saber-26xivLqkv86uJzqWk\">via GIPHY</a></p>\"\"\")",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
]
] |
d0ef90f6ddd7fbf321aad8c2727d9a34340173a4
| 170,056 |
ipynb
|
Jupyter Notebook
|
code/Old Stuff With Data.ipynb
|
jxfischer/buoy-data-analysis
|
1a41431eb15f9547fb9040db537313f73212fd6e
|
[
"Apache-2.0"
] | 1 |
2019-05-12T18:26:48.000Z
|
2019-05-12T18:26:48.000Z
|
code/Old Stuff With Data.ipynb
|
data-workspaces/buoy-data-analysis
|
126bb4976264adbefb4e13a67ceba113f333c2c9
|
[
"Apache-2.0"
] | null | null | null |
code/Old Stuff With Data.ipynb
|
data-workspaces/buoy-data-analysis
|
126bb4976264adbefb4e13a67ceba113f333c2c9
|
[
"Apache-2.0"
] | 2 |
2019-09-17T08:14:45.000Z
|
2020-02-27T18:42:03.000Z
| 32.496847 | 1,527 | 0.319013 |
[
[
[
"# The Data\nto see where we got the data go here: https://www.ndbc.noaa.gov/station_history.php?station=42040",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport datetime",
"_____no_output_____"
]
],
[
[
"This is the first set of data from 1995",
"_____no_output_____"
]
],
[
[
"from utils import read_file, build_median_df\ndf1995 = read_file('data/42040/buoy_data_1995.txt') #allows you to a table for each year\ndf1995.head(6)#allows you to print certain sections of the data0\n \n ",
"data/42040/buoy_data_1995.txt has 648 entries\n"
],
[
"df1995d= df1995.set_index(\"timestamp\").resample(\"D\").mean()\ndf1995d.head(5)",
"_____no_output_____"
],
[
"df1996 = read_file('data/42040/buoy_data_1996.txt') #allows you to a table for each year\ndf1996.head(6)#allows you to print certain sections of the data0",
"data/42040/buoy_data_1996.txt has 8784 entries\n"
],
[
"df1997 = read_file('data/42040/buoy_data_1997.txt') #allows you to a table for each year\ndf1997.head(6)#allows you to print certain sections of the data0",
"data/42040/buoy_data_1997.txt has 8760 entries\n"
],
[
"df1998 = read_file('data/42040/buoy_data_1998.txt') #allows you to a table for each year\ndf1998.head(6)#allows you to print certain sections of the data0",
"data/42040/buoy_data_1998.txt has 8784 entries\n"
],
[
"df1999 = read_file('data/42040/buoy_data_1999.txt') #allows you to a table for each year\ndf1999.head(6)#allows you to print certain sections of the data0",
"data/42040/buoy_data_1999.txt has 6984 entries\n"
],
[
"df2000 = read_file('data/42040/buoy_data_2000.txt') #allows you to a table for each year\ndf2000.head(6)#allows you to print certain sections of the data0",
"data/42040/buoy_data_2000.txt has 8172 entries\n"
],
[
"df2001 = read_file('data/42040/buoy_data_2001.txt') #allows you to a table for each year\ndf2001.head(6)#allows you to print certain sections of the data0",
"data/42040/buoy_data_2001.txt has 8760 entries\n"
],
[
"df2002 = read_file('data/42040/buoy_data_2002.txt') #allows you to a table for each year\ndf2002.head(6)#allows you to print certain sections of the data0",
"data/42040/buoy_data_2002.txt has 8760 entries\n"
],
[
"df2003 = read_file('data/42040/buoy_data_2003.txt') #allows you to a table for each year\ndf2003.head(6)#allows you to print certain sections of the data0",
"data/42040/buoy_data_2003.txt has 8760 entries\n"
],
[
"df2004 = read_file('data/42040/buoy_data_2004.txt') #allows you to a table for each year\ndf2004.head(6)#allows you to print certain sections of the data0",
"data/42040/buoy_data_2004.txt has 7553 entries\n"
],
[
"df2005 = read_file('data/42040/buoy_data_2005.txt') #allows you to a table for each year\ndf2005.head(6)#allows you to print certain sections of the data0",
"data/42040/buoy_data_2005.txt has 8251 entries\n"
],
[
"df2006 = read_file('data/42040/buoy_data_2006.txt') #allows you to a table for each year\ndf2006.head(6)#allows you to print certain sections of the data0",
"data/42040/buoy_data_2006.txt has 7944 entries\n"
],
[
"#has incomplete data. 999 points are NaN\n\ndf2007 = read_file('data/42040/buoy_data_2007.txt') #allows you to a table for each year\ndf2007.head(6)#allows you to print certain sections of the data0",
"data/42040/buoy_data_2007.txt has 8677 entries\n"
],
[
"df2008 = read_file('data/42040/buoy_data_2008.txt') #allows you to a table for each year\ndf2008.head(6)#allows you to print certain sections of the data0",
"data/42040/buoy_data_2008.txt has 6520 entries\n"
],
[
"df2009 = read_file('data/42040/buoy_data_2009.txt') #allows you to a table for each year\ndf2009.head(6)#allows you to print certain sections of the data0",
"data/42040/buoy_data_2009.txt has 6649 entries\n"
],
[
"df2010 = read_file('data/42040/buoy_data_2010.txt') #allows you to a table for each year\ndf2010.head(6)#allows you to print certain sections of the data0",
"data/42040/buoy_data_2010.txt has 6182 entries\n"
],
[
"df2011 = read_file('data/42040/buoy_data_2011.txt') #allows you to a table for each year\ndf2011.head(6)#allows you to print certain sections of the data0",
"data/42040/buoy_data_2011.txt has 8736 entries\n"
],
[
"df2012 = read_file('data/42040/buoy_data_2010.txt') #allows you to a table for each year\ndf2012.head(6)#allows you to print certain sections of the data0",
"data/42040/buoy_data_2010.txt has 6182 entries\n"
],
[
"df2013 = read_file('data/42040/buoy_data_2013.txt') #allows you to a table for each year\ndf2013.head(6)#allows you to print certain sections of the data0",
"data/42040/buoy_data_2013.txt has 8126 entries\n"
],
[
"df2014 = read_file('data/42040/buoy_data_2014.txt') #allows you to a table for each year\ndf2014.head(6)#allows you to print certain sections of the data0",
"data/42040/buoy_data_2014.txt has 8223 entries\n"
],
[
"df2015 = read_file('data/42040/buoy_data_2015.txt') #allows you to a table for each year\ndf2015.head(6)#allows you to print certain sections of the data0",
"data/42040/buoy_data_2015.txt has 8758 entries\n"
],
[
"df2016 = read_file('data/42040/buoy_data_2016.txt') #allows you to a table for each year\ndf2016.head(6)#allows you to print certain sections of the data0",
"data/42040/buoy_data_2016.txt has 39535 entries\n"
],
[
"df2017 = read_file('data/42040/buoy_data_2017.txt') #allows you to a table for each year\ndf2017.head(6)#allows you to print certain sections of the data0",
"data/42040/buoy_data_2017.txt has 41943 entries\n"
],
[
"grouped2016=build_median_df(df2016, 'ATMP', 2016)\ngrouped1996=build_median_df(df1996, 'ATMP', 1996)\ngrouped2000=build_median_df(df2000, 'ATMP', 2000)\ngrouped2005=build_median_df(df2005, 'ATMP', 2005)\ngrouped2010=build_median_df(df2010, 'ATMP', 2010,\n index=['03-Mar', '04-Apr', '05-May', '06-Jun', '07-Jul', '08-Aug', '09-Sep', '10-Oct', '11-Nov', '12-Dec'])\ngrouped=pd.concat([grouped1996, grouped2000, grouped2005, grouped2010, grouped2016], axis=1, sort=True)\ngrouped.plot(figsize=(15,10), kind='bar');\nimport matplotlib.pyplot as plt\nimport calendar\nplt.title(\"Monthly median air temperature for buoy: LUKE OFFSHORE TEST PLATFORM - 63 NM South of Dauphin Island, AL\");\nplt.ylabel(\"Temperature, degrees Celsius\");\nplt.xticks(np.arange(12), calendar.month_name[1:13], rotation=20);\nplt.savefig('42040-airtemp.pdf')",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0efa73892e78c50a8e4ffd3fa144f647e99973d
| 217,963 |
ipynb
|
Jupyter Notebook
|
module1-join-and-reshape-data/1.Lecture_JoinandReshapeData.ipynb
|
CVanchieri/DS-Unit1-Sprint2-DataWranglingandStorytelling
|
5d5892af20f11a6e6bcb7536f93b90f256788fe5
|
[
"MIT"
] | null | null | null |
module1-join-and-reshape-data/1.Lecture_JoinandReshapeData.ipynb
|
CVanchieri/DS-Unit1-Sprint2-DataWranglingandStorytelling
|
5d5892af20f11a6e6bcb7536f93b90f256788fe5
|
[
"MIT"
] | null | null | null |
module1-join-and-reshape-data/1.Lecture_JoinandReshapeData.ipynb
|
CVanchieri/DS-Unit1-Sprint2-DataWranglingandStorytelling
|
5d5892af20f11a6e6bcb7536f93b90f256788fe5
|
[
"MIT"
] | null | null | null | 217,963 | 217,963 | 0.774205 |
[
[
[
"Lambda School Data Science\n\n*Unit 1, Sprint 2, Module 1*\n\n---",
"_____no_output_____"
],
[
"_Lambda School Data Science_\n\n# Join and Reshape datasets\n\nObjectives\n- concatenate data with pandas\n- merge data with pandas\n- understand tidy data formatting\n- melt and pivot data with pandas\n\nLinks\n- [Pandas Cheat Sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf)\n- [Tidy Data](https://en.wikipedia.org/wiki/Tidy_data)\n - Combine Data Sets: Standard Joins\n - Tidy Data\n - Reshaping Data\n- Python Data Science Handbook\n - [Chapter 3.6](https://jakevdp.github.io/PythonDataScienceHandbook/03.06-concat-and-append.html), Combining Datasets: Concat and Append\n - [Chapter 3.7](https://jakevdp.github.io/PythonDataScienceHandbook/03.07-merge-and-join.html), Combining Datasets: Merge and Join\n - [Chapter 3.8](https://jakevdp.github.io/PythonDataScienceHandbook/03.08-aggregation-and-grouping.html), Aggregation and Grouping\n - [Chapter 3.9](https://jakevdp.github.io/PythonDataScienceHandbook/03.09-pivot-tables.html), Pivot Tables\n \nReference\n- Pandas Documentation: [Reshaping and Pivot Tables](https://pandas.pydata.org/pandas-docs/stable/reshaping.html)\n- Modern Pandas, Part 5: [Tidy Data](https://tomaugspurger.github.io/modern-5-tidy.html)",
"_____no_output_____"
],
[
"## Download data\n\nWe’ll work with a dataset of [3 Million Instacart Orders, Open Sourced](https://tech.instacart.com/3-million-instacart-orders-open-sourced-d40d29ead6f2)!",
"_____no_output_____"
]
],
[
[
"# we can use !wget to gt the file.\n!wget https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz",
"--2019-08-26 23:41:21-- https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz\nResolving s3.amazonaws.com (s3.amazonaws.com)... 52.216.32.35\nConnecting to s3.amazonaws.com (s3.amazonaws.com)|52.216.32.35|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 205548478 (196M) [application/x-gzip]\nSaving to: ‘instacart_online_grocery_shopping_2017_05_01.tar.gz’\n\ninstacart_online_gr 100%[===================>] 196.03M 16.1MB/s in 13s \n\n2019-08-26 23:41:36 (14.8 MB/s) - ‘instacart_online_grocery_shopping_2017_05_01.tar.gz’ saved [205548478/205548478]\n\n"
],
[
"# use !tar code to unzip a .tar file.\n!tar --gunzip --extract --verbose --file=instacart_online_grocery_shopping_2017_05_01.tar.gz",
"instacart_2017_05_01/\ninstacart_2017_05_01/._aisles.csv\ninstacart_2017_05_01/aisles.csv\ninstacart_2017_05_01/._departments.csv\ninstacart_2017_05_01/departments.csv\ninstacart_2017_05_01/._order_products__prior.csv\ninstacart_2017_05_01/order_products__prior.csv\ninstacart_2017_05_01/._order_products__train.csv\ninstacart_2017_05_01/order_products__train.csv\ninstacart_2017_05_01/._orders.csv\ninstacart_2017_05_01/orders.csv\ninstacart_2017_05_01/._products.csv\ninstacart_2017_05_01/products.csv\n"
],
[
"# we can see directory the files are stored in.\n%cd instacart_2017_05_01",
"/content/instacart_2017_05_01\n"
],
[
"# list out all the .csv files.\n!ls -lh *.csv",
"-rw-r--r-- 1 502 staff 2.6K May 2 2017 aisles.csv\n-rw-r--r-- 1 502 staff 270 May 2 2017 departments.csv\n-rw-r--r-- 1 502 staff 551M May 2 2017 order_products__prior.csv\n-rw-r--r-- 1 502 staff 24M May 2 2017 order_products__train.csv\n-rw-r--r-- 1 502 staff 104M May 2 2017 orders.csv\n-rw-r--r-- 1 502 staff 2.1M May 2 2017 products.csv\n"
]
],
[
[
"# Join Datasets",
"_____no_output_____"
],
[
"## Goal: Reproduce this example\n\nThe first two orders for user id 1:",
"_____no_output_____"
]
],
[
[
"# we can use this code to displqy an image in our code.\nfrom IPython.display import display, Image\nurl = 'https://cdn-images-1.medium.com/max/1600/1*vYGFQCafJtGBBX5mbl0xyw.png'\nexample = Image(url=url, width=600)\n\ndisplay(example)",
"_____no_output_____"
]
],
[
[
"## Load data\n\nHere's a list of all six CSV filenames",
"_____no_output_____"
]
],
[
[
"# list out all the .csv files.\n!ls -lh *.csv",
"-rw-r--r-- 1 502 staff 2.6K May 2 2017 aisles.csv\n-rw-r--r-- 1 502 staff 270 May 2 2017 departments.csv\n-rw-r--r-- 1 502 staff 551M May 2 2017 order_products__prior.csv\n-rw-r--r-- 1 502 staff 24M May 2 2017 order_products__train.csv\n-rw-r--r-- 1 502 staff 104M May 2 2017 orders.csv\n-rw-r--r-- 1 502 staff 2.1M May 2 2017 products.csv\n"
]
],
[
[
"For each CSV\n- Load it with pandas\n- Look at the dataframe's shape\n- Look at its head (first rows)\n- `display(example)`\n- Which columns does it have in common with the example we want to reproduce?",
"_____no_output_____"
],
[
"### aisles",
"_____no_output_____"
]
],
[
[
"# import pandas library to load the .csv files to data sets.\nimport pandas as pd\n# label the data set and load with pd.read_csv().\naisles = pd.read_csv('aisles.csv')\n# show the shape of the data frame.\nprint(aisles.shape)\n# show the data set with headers.\naisles.head()",
"(134, 2)\n"
],
[
"# we can always check the 'image' of the data set that we are trying to replicate.\ndisplay(example)",
"_____no_output_____"
]
],
[
[
"### departments",
"_____no_output_____"
]
],
[
[
"# label the data set and load with pd.read_csv().\ndepartments = pd.read_csv('departments.csv')\n# show the shape of the data frame.\nprint(departments.shape)\n# show the data set with headers.\ndepartments.head()",
"(21, 2)\n"
]
],
[
[
"### order_products__prior",
"_____no_output_____"
]
],
[
[
"# label the data set and load with pd.read_csv().\norder_products__prior = pd.read_csv('order_products__prior.csv')\n# show the shape of the data frame.\nprint(order_products__prior.shape)\n# show the data set with headers.\norder_products__prior.head()",
"(32434489, 4)\n"
]
],
[
[
"### order_products__train",
"_____no_output_____"
]
],
[
[
"# label the data set and load with pd.read_csv().\norder_products__train = pd.read_csv('order_products__train.csv')\n# show the shape of the data frame.\nprint(order_products__train.shape)\n# show the data set with headers.\norder_products__train.head()",
"(1384617, 4)\n"
],
[
"# we can always check out how much memory we are using and have left to use.\n!free -m",
" total used free shared buff/cache available\nMem: 26126 2565 20040 0 3520 25176\nSwap: 0 0 0\n"
],
[
"# we can always delete data sets if we wish.\n## del order_products__train",
"_____no_output_____"
]
],
[
[
"### orders",
"_____no_output_____"
]
],
[
[
"# label the data set and load with pd.read_csv().\norders = pd.read_csv('orders.csv')\n# show the shape of the data frame.\nprint(orders.shape)\n# show the data set with headers.\norders.head()",
"(3421083, 7)\n"
]
],
[
[
"### products",
"_____no_output_____"
]
],
[
[
"# label the data set and load with pd.read_csv().\nproducts = pd.read_csv('products.csv')\n# show the shape of the data frame.\nprint(products.shape)\n# show the data set with headers.\nproducts.head()",
"(49688, 4)\n"
]
],
[
[
"## Concatenate order_products__prior and order_products__train",
"_____no_output_____"
]
],
[
[
"# check 'order_products__prior' for NA'S.\norder_products__prior.isna().sum()",
"_____no_output_____"
],
[
"# check 'order_products__train' for NA'S.\norder_products__train.isna().sum()",
"_____no_output_____"
],
[
"# both these data sets havce the same column names so we will merge all columns of the data sets.\n# label the new data set amd use pd.concat() to merge them.\norder_products = pd.concat([order_products__prior, order_products__train])\n# show the shape of the data set.\nprint(order_products.shape)\n# show the data set with headers.\norder_products.head()",
"(33819106, 4)\n"
],
[
"# check 'order_products' for NA'S.\norder_products.isna().sum()",
"_____no_output_____"
],
[
"# we can use assert to check certain things, len(op) is = to len(opp) + len(opt) so this will run with no error.\nassert len(order_products) == len(order_products__prior) + len(order_products__train)",
"_____no_output_____"
],
[
"# we can see 1 == 0 so it will show an error.\nassert 1 == 0",
"_____no_output_____"
]
],
[
[
"## Get a subset of orders — the first two orders for user id 1",
"_____no_output_____"
],
[
"From `orders` dataframe:\n- user_id\n- order_id\n- order_number\n- order_dow\n- order_hour_of_day",
"_____no_output_____"
]
],
[
[
"# we can look at the first 2 orders with .head(2).\norders.head(2)",
"_____no_output_____"
],
[
"# we first set the columns we want to use that were listed above.\ncolumns = ['order_id','user_id','order_number', 'order_dow','order_hour_of_day']\n# we want to set a condition for the 'user_id' ==1 & only the first 2 orders so 'order_number' <=2.\ncondition = (orders.user_id == 1) & (orders.order_number <= 2)\n# create the subset.\nsubset = orders.loc[condition, columns]\n# show the subset data set.\nsubset",
"_____no_output_____"
]
],
[
[
"## Merge dataframes",
"_____no_output_____"
],
[
"Merge the subset from `orders` with columns from `order_products`",
"_____no_output_____"
]
],
[
[
"display(example)",
"_____no_output_____"
],
[
"# look at the headers data set and headers.\norder_products.head(2)",
"_____no_output_____"
],
[
"# label the merged data set and use pd.merge with the 2 data sets and the columns of set we are adding to the subset.\nmerged = pd.merge(subset, order_products[['order_id','add_to_cart_order','product_id']], how='left', on='order_id')\n# show the new merged data set.\nmerged",
"_____no_output_____"
],
[
"# we can check the shape of the 3 data sets we created with merging at once.\nsubset.shape, order_products.shape, merged.shape",
"_____no_output_____"
]
],
[
[
"Merge with columns from `products`",
"_____no_output_____"
]
],
[
[
"# show the dat set headers.\nproducts.head(1)",
"_____no_output_____"
],
[
"# label the merged data set and use pd.merge with the 2 data sets and the columns of set we are adding to the subset.\nfinal = pd.merge(merged, products[['product_id','product_name']], how='left', on='product_id')\n# show the merged data set with headers.\nfinal",
"_____no_output_____"
],
[
"display(example)",
"_____no_output_____"
]
],
[
[
"# Reshape Datasets",
"_____no_output_____"
],
[
"## Why reshape data?\n\n#### Some libraries prefer data in different formats\n\nFor example, the Seaborn data visualization library prefers data in \"Tidy\" format often (but not always).\n\n> \"[Seaborn will be most powerful when your datasets have a particular organization.](https://seaborn.pydata.org/introduction.html#organizing-datasets) This format ia alternately called “long-form” or “tidy” data and is described in detail by Hadley Wickham. The rules can be simply stated:\n\n> - Each variable is a column\n- Each observation is a row\n\n> A helpful mindset for determining whether your data are tidy is to think backwards from the plot you want to draw. From this perspective, a “variable” is something that will be assigned a role in the plot.\"\n\n#### Data science is often about putting square pegs in round holes\n\nHere's an inspiring [video clip from _Apollo 13_](https://www.youtube.com/watch?v=ry55--J4_VQ): “Invent a way to put a square peg in a round hole.” It's a good metaphor for data wrangling!",
"_____no_output_____"
],
[
"## Hadley Wickham's Examples\n\nFrom his paper, [Tidy Data](http://vita.had.co.nz/papers/tidy-data.html)",
"_____no_output_____"
]
],
[
[
"# import all the libraries we are using.\n%matplotlib inline\nimport pandas as pd\nimport numpy as np\nimport seaborn as sns\n# create our own data set.\ntable1 = pd.DataFrame(\n [[np.nan, 2],\n [16, 11], \n [3, 1]],\n index=['John Smith', 'Jane Doe', 'Mary Johnson'], \n columns=['treatmenta', 'treatmentb'])\n# table2 is the transverse of table1.\ntable2 = table1.T",
"_____no_output_____"
]
],
[
[
"\"Table 1 provides some data about an imaginary experiment in a format commonly seen in the wild. \n\nThe table has two columns and three rows, and both rows and columns are labelled.\"",
"_____no_output_____"
]
],
[
[
"# show the data set table 1.\ntable1",
"_____no_output_____"
]
],
[
[
"\"There are many ways to structure the same underlying data. \n\nTable 2 shows the same data as Table 1, but the rows and columns have been transposed. The data is the same, but the layout is different.\"",
"_____no_output_____"
]
],
[
[
"# show the data set table2.\ntable2",
"_____no_output_____"
]
],
[
[
"\"Table 3 reorganises Table 1 to make the values, variables and obserations more clear.\n\nTable 3 is the tidy version of Table 1. Each row represents an observation, the result of one treatment on one person, and each column is a variable.\"\n\n| name | trt | result |\n|--------------|-----|--------|\n| John Smith | a | - |\n| Jane Doe | a | 16 |\n| Mary Johnson | a | 3 |\n| John Smith | b | 2 |\n| Jane Doe | b | 11 |\n| Mary Johnson | b | 1 |",
"_____no_output_____"
],
[
"## Table 1 --> Tidy\n\nWe can use the pandas `melt` function to reshape Table 1 into Tidy format.",
"_____no_output_____"
]
],
[
[
"# show data set table1.\ntable1",
"_____no_output_____"
],
[
"# show the columns of table1.\ntable1.columns",
"_____no_output_____"
],
[
"# we can reset the table1 to the original with .reset_index().\ntable1 = table1.reset_index()\n# show the data set table1 with reset.\ntable1",
"_____no_output_____"
],
[
"# label the data set and use .met(id_vars='index') to met into 'tidy' format.\ntidy1 = table1.melt(id_vars='index')\n# show the tidy data set.\ntidy1",
"_____no_output_____"
],
[
"# we can rename the columns in the tidy data set.\ntidy1 = tidy1.rename(columns={'index': 'name', 'variable': 'trt', 'value': 'result'})\n# we can remove the 'treatment' text from the 'trt' column.\ntidy1['trt'] = tidy1.trt.str.replace('treatment','')\n# show the tidy data set with new names and values.\ntidy1",
"_____no_output_____"
]
],
[
[
"## Table 2 --> Tidy",
"_____no_output_____"
]
],
[
[
"table2 = table2.reset_index()\n\ntable2",
"_____no_output_____"
],
[
"# show the columns of data set table2.\ntable2.columns",
"_____no_output_____"
],
[
"# label the data set and use .met(id_vars='') to met into 'tidy' format.\ntidy2 = table2.melt(id_vars='index')\n# show the tidy data set.\ntidy2",
"_____no_output_____"
],
[
"# we can rename the columns in the tidy data set.\ntidy2 = tidy2.rename(columns={'index': 'trt', 'variable': 'name', 'value': 'result'})\n# we can remove the 'treatment' text from the 'trt' column.\ntidy2['trt'] = tidy2.trt.str.replace('treatment','')\n# show the tidy data set with new names and values.\ntidy2",
"_____no_output_____"
]
],
[
[
"## Tidy --> Table 1\n\nThe `pivot_table` function is the inverse of `melt`.",
"_____no_output_____"
]
],
[
[
"# label the data set and use.pivot_table().\nwide = tidy1.pivot_table(values='result', index='name', columns='trt')\n# show the table1 data set.\nwide",
"_____no_output_____"
],
[
"table1",
"_____no_output_____"
]
],
[
[
"## Tidy --> Table 2",
"_____no_output_____"
]
],
[
[
"# label the data set and use.pivot_table().\nwide2 = tidy2.pivot_table(values='result', index='name', columns='name')\n# show the table2 data set.\nwide2",
"_____no_output_____"
],
[
"table2",
"_____no_output_____"
]
],
[
[
"# Seaborn example\n\nThe rules can be simply stated:\n\n- Each variable is a column\n- Each observation is a row\n\nA helpful mindset for determining whether your data are tidy is to think backwards from the plot you want to draw. From this perspective, a “variable” is something that will be assigned a role in the plot.\"",
"_____no_output_____"
]
],
[
[
"# using seaborn we can create bar plots for our tidy data set.\nsns.catplot(x='trt', y='result', col='name', kind='bar', data=tidy1, height=2);",
"_____no_output_____"
]
],
[
[
"## Now with Instacart data",
"_____no_output_____"
]
],
[
[
"# import the pandas libary to load the data set.\nimport pandas as pd",
"_____no_output_____"
],
[
"# label and load the 'products' csv file.\nproducts = pd.read_csv('products.csv')\n# we can use pd.concat() and merge the 'prior' and 'train' data sets here.\norder_products = pd.concat([pd.read_csv('order_products__prior.csv'), \n pd.read_csv('order_products__train.csv')])\n# lavel and load the 'orders' csv file.\norders = pd.read_csv('orders.csv')",
"_____no_output_____"
]
],
[
[
"## Goal: Reproduce part of this example\n\nInstead of a plot with 50 products, we'll just do two — the first products from each list\n- Half And Half Ultra Pasteurized\n- Half Baked Frozen Yogurt",
"_____no_output_____"
]
],
[
[
"# we can display an image in our code.\nfrom IPython.display import display, Image\nurl = 'https://cdn-images-1.medium.com/max/1600/1*wKfV6OV-_1Ipwrl7AjjSuw.png'\nexample = Image(url=url, width=600)\n\ndisplay(example)",
"_____no_output_____"
]
],
[
[
"So, given a `product_name` we need to calculate its `order_hour_of_day` pattern.",
"_____no_output_____"
],
[
"## Subset and Merge\n\nOne challenge of performing a merge on this data is that the `products` and `orders` datasets do not have any common columns that we can merge on. Due to this we will have to use the `order_products` dataset to provide the columns that we will use to perform the merge.",
"_____no_output_____"
]
],
[
[
"# label a data set for the values we want to use.\nproduct_names = ['Half And Half Ultra Pasteurized', 'Half Baked Frozen Yogurt']",
"_____no_output_____"
],
[
"# show the columns in the 'products' data set.\nproducts.columns.to_list()",
"_____no_output_____"
],
[
"# show the columns in the 'order_products' data set.\norder_products.columns.to_list()",
"_____no_output_____"
],
[
"# show the columns in the 'orders' data set.\norders.columns.to_list()",
"_____no_output_____"
],
[
"# label the merged data and we can use .merge to merge the columns we want from the 3 data sets, we must have a common column in the data we are merging.\nmerged = (products[['product_id','product_name']]\n .merge(order_products[['product_id','order_id']])\n .merge(orders[['order_id','order_hour_of_day']]))\n# show the shape of the data set.\nprint(merged.shape)\n# show the data set with headers.\nmerged.head()",
"(33819106, 4)\n"
],
[
"# we are looking for specific products that we label as 'product_names' above.\n# we can write a condition that will show if the product name is in 'product_names' is 1 or 0, true or false.\ncondition = ((merged['product_name'] == product_names[0]) |\n (merged['product_name'] == product_names[1]))\n# Other approach that works\ncondition = merged['product_name'].isin(product_names)\n\ncondition",
"_____no_output_____"
],
[
"# label the subset and use merged[] to implement the condition into the 'merged' data set we created.\nsubset = merged[condition]\n# show the data set shape.\nprint(subset.shape)\n# show the data set and headers.\nsubset.head()",
"(5978, 4)\n"
]
],
[
[
"## 4 ways to reshape and plot",
"_____no_output_____"
],
[
"### 1. value_counts",
"_____no_output_____"
]
],
[
[
"# we can use groupby() to plot our 'product_name' column vs 'order_hour_of_day', we use unstack to seperate the values.\nsubset.groupby('order_hour_of_day').product_name.value_counts().unstack().plot();",
"_____no_output_____"
]
],
[
[
"### 2. crosstab",
"_____no_output_____"
]
],
[
[
"# we can use crosstab() to plot our 'product_name' column vs 'order_hour_of_day', we use normalize='columns * 100'.\n(pd.crosstab(subset['order_hour_of_day'], subset['product_name'], normalize='columns')*100).plot();",
"_____no_output_____"
]
],
[
[
"### 3. Pivot Table",
"_____no_output_____"
]
],
[
[
"# we can use.pivot to plot our 'product_name' column vs 'order_hour_of_day', we use set the value 'order_id'.\nsubset.pivot_table(index='order_hour_of_day', columns='product_name', values='order_id', aggfunc=len).plot();",
"_____no_output_____"
]
],
[
[
"### 4. melt",
"_____no_output_____"
]
],
[
[
"# imnport seaborn to create a plot.\nimport seaborn as sns",
"_____no_output_____"
],
[
"table = pd.crosstab(subset['order_hour_of_day'], subset['product_name'], normalize='columns')\nmelted = table.reset_index().melt(id_vars='order_hour_of_day')\nsns.relplot(x='order_hour_of_day', y='value', hue='product_name', data=melted, kind='line');",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0efaa54eddc907ebee4789cbaa68bf5e21bc2de
| 137,604 |
ipynb
|
Jupyter Notebook
|
notebooks/5_model_training.ipynb
|
erikgrip/chess_eda
|
fb39422ecf9897db99fff7bc6fdbf86cb51fd5de
|
[
"MIT"
] | null | null | null |
notebooks/5_model_training.ipynb
|
erikgrip/chess_eda
|
fb39422ecf9897db99fff7bc6fdbf86cb51fd5de
|
[
"MIT"
] | null | null | null |
notebooks/5_model_training.ipynb
|
erikgrip/chess_eda
|
fb39422ecf9897db99fff7bc6fdbf86cb51fd5de
|
[
"MIT"
] | null | null | null | 209.442922 | 48,584 | 0.90315 |
[
[
[
"## Model training\nIn this notebook we'll first define a baseline model and then train a couple of ML models to try to better that performance.\n\nThe dataset is quite small and the features are few, so I'm going to keep it simple in terms of algotrithms. We'll see how a logistic regression model and a random forest model compare against each other and the baseline model-",
"_____no_output_____"
]
],
[
[
"# Import packages\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport pickle\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.metrics import auc\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.metrics import ConfusionMatrixDisplay\nfrom sklearn.metrics import RocCurveDisplay\nfrom sklearn.metrics import roc_curve\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.preprocessing import StandardScaler\n\nsns.set()\ndefault_color='steelblue'\n\nfeatures = [\n 'is_white',\n 'rating',\n 'rating_diff',\n 'is_3min',\n 'did_win_last',\n 'avg_points_last_5'\n]\n\ndata_path = '/home/jovyan/work/data/'\ntrain_data_filename = 'train_data.csv'\ntest_data_filename = 'test_data.csv'\n\nmodel_output_path = '/home/jovyan/work/model/'\nmodel_filename = 'chess_prediction_model.sav'",
"_____no_output_____"
]
],
[
[
"### Define functions",
"_____no_output_____"
]
],
[
[
"def avg_points_prev_rows(df, num_games):\n \"\"\"Return the average of column 'won_points' over a set number of rows.\n Need data to be sorted ascending by time to return a chonological history\n \"\"\"\n avg_points = (\n df.rolling(num_games, min_periods=1, closed='both')\n ['won_points'].sum()\n .sub(df['won_points']) # Don't inlude this record's result\n .div(num_games))\n return avg_points\n\n\ndef get_X_and_y(df, features):\n \"\"\"Return a feature dataframe (X) and a target dataframe (y) from downloaded data\n \"\"\"\n # Sort data by time to get chronological features right\n df['end_ts'] = pd.to_datetime(df['end_date_local'].astype('str') + \" \" + \\\n df['end_time_local'].astype('str'))\n df = df.sort_values('end_ts', ascending=True)\n \n df['rating_diff'] = df['rating'].sub(df['opp_rating'])\n df['is_3min'] = df['time_control'].str.startswith('180').astype('int')\n df['did_win_last'] = (avg_points_prev_rows(df, 1) == 1).astype('int')\n df['avg_points_last_5'] = avg_points_prev_rows(df, 5)\n X = df[features]\n y = df['is_loss']\n return (X, y) \n\n\ndef print_prob_dist(clf, X):\n \"\"\" Plot KDE curves for predicted probabilities of actual 0's and 1's\n \"\"\"\n probs = clf.predict_proba(X_train)[:, 1]\n plot_df = pd.DataFrame({'y': y_train,'p': probs})\n\n sns.kdeplot(data=plot_df[plot_df['y'] == 0], x='p', common_norm=False, label='Is Not Loss')\n sns.kdeplot(data=plot_df[plot_df['y'] == 1], x='p', common_norm=False, label='Is Loss')\n plt.title(f\"Prediction probabilities by actual label\\n {clf.estimator.named_steps['clf']}\")\n plt.xlabel(\"Probability of losing game\")\n plt.legend()\n plt.xlim(0, 1)\n plt.show()",
"_____no_output_____"
]
],
[
[
"### Load data",
"_____no_output_____"
]
],
[
[
"# Read training data into a pandas DataFrame : df\ndf_train = pd.read_csv(data_path + train_data_filename)\ndf_test = pd.read_csv(data_path + test_data_filename)\n\nX_train, y_train = get_X_and_y(df_train, features)\nX_test, y_test = get_X_and_y(df_test, features)",
"_____no_output_____"
],
[
"# Display sample rows from feature dataframe X_train\ndisplay(X_train.head(3))",
"_____no_output_____"
]
],
[
[
"### Target balance ",
"_____no_output_____"
]
],
[
[
"print(\"Share of losses in the training data:\",\n np.round(y_train.mean(), 2))",
"Share of losses in the training data: 0.48\n"
]
],
[
[
"So, 52% accuracy is the best we could do without knowing or learning anything at all and just always predicting 0.\n\n### Baseline model \nIn this first step we'll create a simple model to benchmark later models against. Let's use the seemingly strongest feature that measures the difference of ratings (rating_diff). We\nll always predict loss when an underdog ratingwise, and never predict a loss else.",
"_____no_output_____"
]
],
[
[
"baseline_preds = np.where(X_train['rating_diff'] < 0, 1, 0)\nbaseline_accuracy = accuracy_score(baseline_preds, y_train)\n\nprint(\"Baseline accuracy:\", np.round(baseline_accuracy, 2))",
"Baseline accuracy: 0.68\n"
]
],
[
[
"### Logistic regression \n\nLet's first train a logistic regression classifier, and use sklearn's GridSearchCV to tune the parameters.",
"_____no_output_____"
]
],
[
[
"param_grid = {'clf__penalty': ['l1', 'l2'],\n 'clf__C' : np.logspace(-4, 4, 50)}\n\npipe = Pipeline([\n ('scaler', StandardScaler()),\n ('clf', LogisticRegression(solver='liblinear',\n class_weight='balanced',\n max_iter=1_000,\n random_state=1))])\n\nlr_clf = GridSearchCV(pipe,\n param_grid=param_grid,\n cv=5,\n return_train_score=True,\n verbose=True,\n n_jobs=-1,\n scoring='accuracy')\n\n# Fit on data\nlr_clf = lr_clf.fit(X_train, y_train)",
"Fitting 5 folds for each of 100 candidates, totalling 500 fits\n"
],
[
"# Print the best model parameters and best score\nprint(\"Chosen parameters:\", lr_clf.best_params_)\nprint(\"Best mean cross validation score\", \n np.round(lr_clf.best_score_, 2))",
"Chosen parameters: {'clf__C': 0.00021209508879201905, 'clf__penalty': 'l2'}\nBest mean cross validation score 0.7\n"
]
],
[
[
"The cross validated accuracy is better than the beseline model's, but only slightly so.",
"_____no_output_____"
]
],
[
[
"# Plot kernel densities for predicted probabilities by actual outcome\nprint_prob_dist(lr_clf, X_train)",
"_____no_output_____"
]
],
[
[
"The plot above graphically shows the substantial overlap between the distributions meaning that when using a cutoff value of 0.5 there will be quite a lot of misclassification for both labels.",
"_____no_output_____"
]
],
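[
[
"# Illustrative sketch (not part of the original analysis): because of the overlap shown above,\n# the 0.5 cutoff is a choice rather than a given. The threshold value here is an arbitrary\n# assumption, purely to show how moving the cutoff changes what gets flagged as a loss.\nthreshold = 0.4\n\nprobs = lr_clf.predict_proba(X_train)[:, 1]\npreds_at_threshold = (probs >= threshold).astype(int)\nprint('Accuracy at threshold', threshold, ':',\n      np.round(accuracy_score(y_train, preds_at_threshold), 2))\nprint('Share of games flagged as losses:', np.round(preds_at_threshold.mean(), 2))",
"_____no_output_____"
]
],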
[
[
"sns.barplot(x=lr_clf.best_estimator_.named_steps['clf'].coef_[0],\n y=lr_clf.best_estimator_.feature_names_in_,\n color=default_color)\nplt.title('Logistic regression classifier feature weights')\nplt.show()",
"_____no_output_____"
]
],
[
[
"The traning pipeline standardized the features so the coefficients can be interpreted as a form of feature impact score in the fitted model. The feature rating_diff stands out. The lower the rating_diff goes (and it's negative when the opponent has the higher rating) the higher the probability of losing.",
"_____no_output_____"
],
[
"## Random Forest \n\nNext we'll see if a random forest classifier can beat logistic regression for this task. The non-linearity of the algorithm could potentially pick up different patters both between features and target, and between different features.",
"_____no_output_____"
]
],
[
[
"param_grid = { \n 'clf__n_estimators': [500, 1_000, 1_500],\n 'clf__max_features': [2, 'auto', 'sqrt', 'log2'],\n 'clf__max_depth' : [2, 3, 4, 5],\n 'clf__criterion' : ['gini', 'entropy']\n}\n\npipe = Pipeline([\n ('scaler', StandardScaler()),\n ('clf', RandomForestClassifier(class_weight='balanced',\n random_state=1))])\n\nrf_clf = GridSearchCV(pipe,\n param_grid=param_grid,\n cv=5,\n verbose=True,\n n_jobs=-1,\n scoring='accuracy')\n\n# Fit on data\nrf_clf = rf_clf.fit(X_train, y_train)",
"Fitting 5 folds for each of 96 candidates, totalling 480 fits\n"
],
[
"# Print the best model parameters and best score\nprint(\"Chosen parameters:\", rf_clf.best_params_)\nprint(\"Best mean cross validation score\", \n np.round(rf_clf.best_score_, 2))",
"Chosen parameters: {'clf__criterion': 'gini', 'clf__max_depth': 4, 'clf__max_features': 2, 'clf__n_estimators': 500}\nBest mean cross validation score 0.69\n"
]
],
[
[
"The cross validatet accurracy is a little worse for the random forest model than the logistic regression model, but it's very even.",
"_____no_output_____"
]
],
[
[
"# Visualise the probability \nprint_prob_dist(rf_clf, X_train)",
"_____no_output_____"
]
],
[
[
"The random forest model can classify a good chunk of the records correctly as clear wins and losses. But there's also many records in the middle where the model doesn't do that great in telling the classes apart.\n\n### Conclusion \n\nNeither model really improves much on the baseline model which only predicts from who's got the higher rating. Whether the extra couple of percentages of correct classifications would warrant the added complexity of using a ML-model rather than a 'business rule' would be up for debate depending on the use case.\n\nThe logistic regression model performed slightly better than the random forrest model, so I'll go with that one. Although the improvment compared to the baseline is underwhelming remeber that we kep everythin as simple as possible in this first iteration. Another round of feature engineering could well improve the results. The general conclusion will likely still hold though - that there is much noise in the signal and that it's unlikely one could get near perfect accuracy only based on the game history.\n\nNow it's time to see how the baseline model and logistic regression classifier performs on the test data and take a look at the confusion matrix to get a feel for how the trained model predicting.",
"_____no_output_____"
],
[
"##### Performance on test data",
"_____no_output_____"
]
],
[
[
"baseline_test_preds = np.where(X_test['rating_diff'] < 0, 1, 0)\nbaseline_test_accuracy = accuracy_score(baseline_test_preds, y_test)\nprint(\"Baseline accuracy on test data:\",\n np.round(baseline_test_accuracy, 2))\nprint(\"Logistic regression model accuracy on test data:\",\n np.round(lr_clf.score(X_test, y_test), 2))",
"Baseline accuracy on test data: 0.66\nLogistic regression model accuracy on test data: 0.68\n"
],
[
"# Plot confusion matrix\nsns.set_style(\"whitegrid\", {'axes.grid' : False})\npreds = lr_clf.predict(X_test)\ncm = confusion_matrix(y_test, preds, labels=lr_clf.classes_)\ndisp = ConfusionMatrixDisplay(confusion_matrix=cm,\n display_labels=lr_clf.classes_)\ndisp.plot()\nplt.show()",
"_____no_output_____"
]
],
[
[
"#### Save model\n",
"_____no_output_____"
]
],
[
[
"pickle.dump(lr_clf, open(model_output_path + model_filename, 'wb'))",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0efb135dd14faef96c3b0ea9bb3b34d7bc52fa2
| 76,952 |
ipynb
|
Jupyter Notebook
|
prior/2-prior-viz.ipynb
|
mmayers12/learn
|
e6d9a2906fd852cb14c4df47381c758e93309cb8
|
[
"CC0-1.0"
] | null | null | null |
prior/2-prior-viz.ipynb
|
mmayers12/learn
|
e6d9a2906fd852cb14c4df47381c758e93309cb8
|
[
"CC0-1.0"
] | null | null | null |
prior/2-prior-viz.ipynb
|
mmayers12/learn
|
e6d9a2906fd852cb14c4df47381c758e93309cb8
|
[
"CC0-1.0"
] | null | null | null | 165.133047 | 26,036 | 0.872063 |
[
[
[
"library(magrittr)",
"_____no_output_____"
],
[
"treatment_df = readr::read_tsv('../summary/indications.tsv') %>% \n dplyr::filter(rel_type == 'TREATS_CtD') %>%\n dplyr::select(compound_id, disease_id) %>%\n dplyr::mutate(status = 1)",
"Parsed with column specification:\ncols(\n compound_id = col_character(),\n compound_name = col_character(),\n disease_id = col_character(),\n disease_name = col_character(),\n rel_type = col_character()\n)\n"
],
[
"degree_prior_df = readr::read_tsv('data/degree-prior.tsv') %>%\n dplyr::mutate(Empiric = n_treatments / n_possible, Permuted = prior_perm) %>%\n dplyr::mutate(logit_prior_perm = boot::logit(prior_perm)) %>%\n dplyr::mutate(prior_theoretic = compound_treats * disease_treats) %>%\n dplyr::mutate(prior_theoretic = prior_theoretic / sum(prior_theoretic)) %>%\n dplyr::mutate(logit_prior_theoretic = boot::logit(prior_theoretic))",
"Parsed with column specification:\ncols(\n compound_treats = col_integer(),\n disease_treats = col_integer(),\n prior_perm = col_double(),\n prior_perm_stderr = col_double(),\n n_treatments = col_integer(),\n n_possible = col_integer()\n)\n"
],
[
"degree_prior_df %>% head(2)",
"_____no_output_____"
],
[
"width = 3\nheight = 3\noptions(repr.plot.width=width, repr.plot.height=height)\ngg_scatter = degree_prior_df %>%\n ggplot2::ggplot(ggplot2::aes(logit_prior_theoretic, logit_prior_perm)) +\n ggplot2::geom_point(alpha = 0.1, shape = 16) +\n ggplot2::theme_bw() +\n ggplot2::coord_equal() +\n ggplot2::xlab('Logit of the Theoretic Prob') +\n ggplot2::ylab('Logit of the Permuted Prob')\n\ngg_scatter\nggplot2::ggsave('viz/scatter-theoretic-v-perm.png', gg_scatter, dpi = 300, width = width, height = height)",
"_____no_output_____"
],
[
"#lm(logit_prior_perm ~ logit_prior_theoretic, data = degree_prior_df) %>% summary",
"_____no_output_____"
],
[
"plot_tiles <- function(df, scale_name='') {\n gg = ggplot2::ggplot(df, ggplot2::aes(x = disease_treats, y = compound_treats, fill = prior)) +\n ggplot2::geom_tile() +\n viridis::scale_fill_viridis(scale_name) +\n ggplot2::theme_bw() +\n ggplot2::coord_equal() +\n ggplot2::theme(\n legend.position='top',\n legend.key.width=grid::unit(0.85, 'inches'),\n legend.key.size=grid::unit(0.12, 'inches')\n ) +\n ggplot2::xlab('Disease Degree') +\n ggplot2::ylab('Compound Degree') +\n ggplot2::scale_x_continuous(expand=c(0, 1)) +\n ggplot2::scale_y_continuous(expand=c(0, 1)) +\n ggplot2::theme(plot.margin = grid::unit(c(2,2,2,2), 'points'))\n return(gg)\n}",
"_____no_output_____"
],
[
"width = 6\nheight = 4.5\noptions(repr.plot.width=width, repr.plot.height=height)\n\ngg = degree_prior_df %>%\n tidyr::gather(kind, prior, Permuted, Empiric) %>% \n plot_tiles('Prob') +\n ggplot2::facet_grid(kind ~ ., 'Probability') +\n ggplot2::theme(strip.background=ggplot2::element_rect(fill='#FEF2E2'))\n\nggplot2::ggsave('viz/prob-tiled-empiric-v-perm.png', gg, dpi = 300, width = width, height = height)",
"_____no_output_____"
],
[
"gg = degree_prior_df %>%\n tidyr::gather(kind, prior, Permuted, Empiric) %>% \n dplyr::mutate(prior = log(0.01 + prior)) %>%\n plot_tiles('log(0.01 + Prob)') +\n ggplot2::facet_grid(kind ~ .) +\n ggplot2::theme(strip.background=ggplot2::element_rect(fill='#FEF2E2'))\n\nggplot2::ggsave('viz/log-prob-tiled-empiric-v-perm.png', gg, dpi = 300, width = width, height = height)",
"_____no_output_____"
],
[
"# obs_prior_df = obs_prior_df %>% \n# dplyr::left_join(treatment_df)",
"_____no_output_____"
],
[
"width = 6\nheight = 2.75\noptions(repr.plot.width=width, repr.plot.height=height)\n\ngg_tile_logit = degree_prior_df %>% \n dplyr::mutate(prior = boot::logit(prior_perm)) %>%\n plot_tiles('logit(Prob)')\n\nggplot2::ggsave('viz/logit-perm-prior-tiled.png', gg_tile_logit, dpi = 300, width = width, height = height)\nplot(gg_tile_logit)",
"_____no_output_____"
],
[
"width = 6\nheight = 2.75\noptions(repr.plot.width=width, repr.plot.height=height)\n\ngg_tile_prob = degree_prior_df %>%\n dplyr::mutate(prior = prior_perm) %>%\n plot_tiles('Prob')\n\nggplot2::ggsave('viz/perm-prior-tiled.png', gg_tile_prob, dpi = 300, width = width, height = height)\nplot(gg_tile_prob)",
"_____no_output_____"
],
[
"obs_prior_df = readr::read_tsv('data/observation-prior.tsv')",
"Parsed with column specification:\ncols(\n compound_id = col_character(),\n disease_id = col_character(),\n compound_treats = col_integer(),\n disease_treats = col_integer(),\n prior_perm = col_double(),\n prior_perm_stderr = col_double()\n)\n"
],
[
"width = 6\nheight = 2.5\ngg_hist = obs_prior_df %>%\n #dplyr::mutate(prior = prior_perm) %>%\n dplyr::mutate(logit_prior_perm = boot::logit(prior_perm)) %>%\n tidyr::gather(kind, prior, prior_perm, logit_prior_perm) %>%\n ggplot2::ggplot(ggplot2::aes(x = prior)) +\n ggplot2::geom_histogram(bins=100) +\n ggplot2::facet_wrap( ~ kind, scales='free') +\n ggplot2::theme_bw() +\n ggplot2::theme(strip.background=ggplot2::element_rect(fill='#FEF2E2')) +\n ggplot2::theme(plot.margin=grid::unit(c(2, 2, 2, 2), 'points')) +\n ggplot2::xlab('Compound–Disease Prior') +\n ggplot2::ylab('Count')\nggplot2::ggsave('viz/prob-histograms.png', gg_hist, dpi = 300, width = width, height = height)",
"_____no_output_____"
],
[
"# gg = gridExtra::arrangeGrob(gg_hist, gg_tile_prob, gg_tile_logit, ncol=1)\n# ggplot2::ggsave('viz/combined.png', gg, dpi = 300, width = width, height = 8, device = cairo_pdf)",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0efbe394a929107c76c044685a0d7d20f758f3c
| 4,293 |
ipynb
|
Jupyter Notebook
|
Script-026-SM-Design Matrices.ipynb
|
paulinelemenkova/Python-script-026-SM-Design-Matrices
|
e357fb0eebf5d02c31e5c0bc0af804d9a5e61cff
|
[
"MIT"
] | null | null | null |
Script-026-SM-Design Matrices.ipynb
|
paulinelemenkova/Python-script-026-SM-Design-Matrices
|
e357fb0eebf5d02c31e5c0bc0af804d9a5e61cff
|
[
"MIT"
] | null | null | null |
Script-026-SM-Design Matrices.ipynb
|
paulinelemenkova/Python-script-026-SM-Design-Matrices
|
e357fb0eebf5d02c31e5c0bc0af804d9a5e61cff
|
[
"MIT"
] | null | null | null | 26.5 | 79 | 0.353366 |
[
[
[
"#!/usr/bin/env python\n# coding: utf-8\nfrom __future__ import print_function\nimport os\n%matplotlib inline\nimport numpy as np\nimport pandas as pd\nimport statsmodels.api as sm\nfrom patsy import dmatrices\n\nos.chdir('/Users/pauline/Documents/Python')\ndf = pd.read_csv(\"Tab-Morph.csv\")\ndf = df.dropna()\ndf[-10:]\n\ny, X = dmatrices('profile ~ sedim_thick + igneous_volc + slope_angle',\n data=df, return_type='dataframe')\ny[:7]\nX[:7]",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code"
]
] |
d0efe30300831ce764cb87c9daabb4b40e164b0a
| 62,014 |
ipynb
|
Jupyter Notebook
|
synth/samples/mwem_sample/Visualizing MWEM.ipynb
|
shlomihod/smartnoise-sdk
|
1131fed432027c15caa5182d6da00286514efd00
|
[
"MIT"
] | 63 |
2020-03-26T15:26:10.000Z
|
2020-10-22T06:26:38.000Z
|
synth/samples/mwem_sample/Visualizing MWEM.ipynb
|
shlomihod/smartnoise-sdk
|
1131fed432027c15caa5182d6da00286514efd00
|
[
"MIT"
] | 87 |
2021-02-20T20:43:49.000Z
|
2022-03-31T16:24:46.000Z
|
synth/samples/mwem_sample/Visualizing MWEM.ipynb
|
shlomihod/smartnoise-sdk
|
1131fed432027c15caa5182d6da00286514efd00
|
[
"MIT"
] | 17 |
2021-02-18T18:47:09.000Z
|
2022-03-01T06:44:17.000Z
| 326.389474 | 18,716 | 0.930371 |
[
[
[
"## Fake data to visualize MWEM's histograms\nMWEM works by first creating a uniformly distributed histogram out of real data. It then iteratively updates this histogram with noisy samples from the real data. In other words, using the multiplicative weights mechanism, MWEM updates the histograms \"weights\" via the DP exponential mechanism (for querying the original data).\n\nHere, we create a heatmap from the histograms. We visualize the histogram made from the real data, and the differentially private histogram. Brighter values correspond to more higher probability bins in each histogram.",
"_____no_output_____"
]
],
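To make the update step described above concrete, here is a minimal sketch of one multiplicative-weights update of a histogram toward a noisy query answer. This is an illustration only, not the `snsynth` implementation; the function name, the learning scale, and the range query are all assumptions.

```python
import numpy as np

def mw_update(hist, query, noisy_answer):
    """One multiplicative-weights step: re-weight bins so the query result moves toward the noisy answer."""
    error = noisy_answer - np.dot(query, hist)                     # gap between histogram and noisy measurement
    new_hist = hist * np.exp(query * error / (2.0 * hist.sum()))  # scale only the bins the query touches
    return new_hist * (hist.sum() / new_hist.sum())                # renormalize so total mass is unchanged

hist = np.ones(16) / 16                  # uniform starting histogram, as MWEM begins with
query = np.r_[np.zeros(8), np.ones(8)]   # a range query over the last 8 bins
print(mw_update(hist, query, noisy_answer=0.7))
```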
[
[
"import os\nimport pandas as pd\nimport numpy as np\nimport random\n\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n\nfrom snsynth.mwem import MWEMSynthesizer",
"_____no_output_____"
],
[
"def plot_histo(title,histo):\n fig = plt.figure(figsize=(6, 6))\n ax = fig.add_subplot(111)\n ax.set_title(title)\n plt.imshow(histo)\n ax.set_aspect('equal')\n cax = fig.add_axes([0.1, 1.0, 1., 0.1])\n cax.get_xaxis().set_visible(False)\n cax.get_yaxis().set_visible(False)\n cax.set_frame_on(False)\n plt.colorbar(orientation='horizontal')\n plt.show()\n",
"_____no_output_____"
],
[
"# Make ourselves some fake data, with a \"hot-spot\" in the distribution\n# in the bottom right corner\ndf = pd.DataFrame({'fake_column_1': [random.randint(0,100) for i in range(3000)] + [random.randint(80,100) for i in range(1000)],\n 'fake_column_2': [random.randint(0,100) for i in range(3000)] + [random.randint(80,100) for i in range(1000)],})\n\nsynth = MWEMSynthesizer(10.0, 400, 30, 20,[[0,1]])\nsynth.fit(df)\n\nplot_histo('\"Real\" Data', synth.synthetic_histograms[0][1])\nplot_histo('\"Fake\" Data', synth.synthetic_histograms[0][0])",
"/var/folders/nx/9gd39tcx2lq7c3vmpmfk8vjc0000gn/T/ipykernel_59986/3611354744.py:11: MatplotlibDeprecationWarning: Starting from Matplotlib 3.6, colorbar() will steal space from the mappable's axes, rather than from the current axes, to place the colorbar. To silence this warning, explicitly pass the 'ax' argument to colorbar().\n plt.colorbar(orientation='horizontal')\n"
]
],
[
[
"## Effect of Bin Count\nHere we can visualize the effect of specifying a max_bin_count. In the original data, we have 100 bins. If we halve that, we see that we still do a pretty good job at capturing the overall distribution.",
"_____no_output_____"
]
],
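As a rough illustration of what capping the bin count does, the snippet below collapses the 101 distinct integer values into 50 bins with `numpy.histogram`. The exact binning inside `MWEMSynthesizer` may differ; this only shows why halving the bins still preserves the shape of the distribution.

```python
import numpy as np

values = np.arange(0, 101)                 # 101 distinct integer values, like fake_column_1
counts, edges = np.histogram(values, bins=50)
print(len(edges) - 1, counts[:5])          # 50 bins, roughly two original values per bin
```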
[
[
"synth = MWEMSynthesizer(10.0, 400, 30, 20,[[0,1]], max_bin_count=50)\nsynth.fit(df)\n\nplot_histo('\"Real\" Data', synth.synthetic_histograms[0][1])\nplot_histo('\"Fake\" Data', synth.synthetic_histograms[0][0])",
"/Users/joshuaallen/opt/anaconda3/envs/synth/lib/python3.8/site-packages/snsynth/mwem.py:307: Warning: Bin count 101 in column: 0 exceeds max_bin_count, defaulting to: 50. Is this a continuous variable?\n warnings.warn(\n/Users/joshuaallen/opt/anaconda3/envs/synth/lib/python3.8/site-packages/snsynth/mwem.py:307: Warning: Bin count 101 in column: 1 exceeds max_bin_count, defaulting to: 50. Is this a continuous variable?\n warnings.warn(\n/var/folders/nx/9gd39tcx2lq7c3vmpmfk8vjc0000gn/T/ipykernel_59986/3611354744.py:11: MatplotlibDeprecationWarning: Starting from Matplotlib 3.6, colorbar() will steal space from the mappable's axes, rather than from the current axes, to place the colorbar. To silence this warning, explicitly pass the 'ax' argument to colorbar().\n plt.colorbar(orientation='horizontal')\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0efe64d8b6cd52416c96abeff4f5a211de72c29
| 8,963 |
ipynb
|
Jupyter Notebook
|
01.蒸汽量预测/.ipynb_checkpoints/5.Feature_Optimization-checkpoint.ipynb
|
XiaoyuBi/TianChi_Case
|
648338be71fa8b53a5e18ecb145408e137633ff1
|
[
"MIT"
] | null | null | null |
01.蒸汽量预测/.ipynb_checkpoints/5.Feature_Optimization-checkpoint.ipynb
|
XiaoyuBi/TianChi_Case
|
648338be71fa8b53a5e18ecb145408e137633ff1
|
[
"MIT"
] | null | null | null |
01.蒸汽量预测/.ipynb_checkpoints/5.Feature_Optimization-checkpoint.ipynb
|
XiaoyuBi/TianChi_Case
|
648338be71fa8b53a5e18ecb145408e137633ff1
|
[
"MIT"
] | null | null | null | 33.196296 | 105 | 0.534419 |
[
[
[
"import pandas as pd \n\ntrain_data_file = 'data/zhengqi_train.txt'\ntest_data_file = 'data/zhengqi_test.txt'\n\ntrain_data = pd.read_csv(train_data_file, sep = '\\t', encoding = 'utf-8')\ntest_data = pd.read_csv(test_data_file, sep = '\\t', encoding = 'utf_8')",
"_____no_output_____"
]
],
[
[
"## 定义特征构造方法",
"_____no_output_____"
]
],
[
[
"eps = 1e-5\n\n# 交叉特征方式\nfunc_dict = {\n 'add':lambda x, y: x + y,\n 'multi':lambda x, y: x * y,\n 'div':lambda x, y: x / (y + eps),\n}",
"_____no_output_____"
],
[
"# 特征构造方法\ndef auto_features(train_data, test_data, func_dict, col_list):\n train_data, test_data = train_data.copy(), test_data.copy()\n for col_i in col_list:\n for col_j in col_list:\n for func_name, func in func_dict.items():\n for data in [train_data, test_data]:\n func_features = func(data[col_i], data[col_j])\n col_func_features = '-'.join([col_i, func_name, col_j])\n data[col_func_features] = func_features\n \n return train_data, test_data",
"_____no_output_____"
]
],
[
[
"## 构造特征并降维",
"_____no_output_____"
]
],
[
[
"# 构造特征\ntrain_data2, test_data2 = auto_features(train_data,test_data,func_dict,col_list=test_data.columns)",
"_____no_output_____"
],
[
"# PCA降维\nfrom sklearn.decomposition import PCA\n\npca = PCA(n_components = 500)\ntrain_data2_pca = pca.fit_transform(train_data2.iloc[:,0:-1])\ntest_data2_pca = pca.transform(test_data2)\ntrain_data2_pca = pd.DataFrame(train_data2_pca)\ntest_data2_pca = pd.DataFrame(test_data2_pca)\ntrain_data2_pca['target'] = train_data2['target']\n\n# 训练准备\nX_train2 = train_data2[test_data2.columns].values\ny_train = train_data2['target']",
"_____no_output_____"
]
],
[
[
"## LGB模型训练",
"_____no_output_____"
]
],
[
[
"# ls_validation i\nfrom sklearn.model_selection import KFold\nfrom sklearn.metrics import mean_squared_error\nimport lightgbm as lgb\nimport numpy as np\n\n# 5折交叉验证\nFolds = 5\nkf = KFold(n_splits = Folds, random_state = 0, shuffle = True)\n# 记录训练和预测MSE\nMSE_DICT = {\n 'train_mse':[],\n 'test_mse':[]\n}\n\n# 线下训练预测\nfor i, (train_index, test_index) in enumerate(kf.split(X_train2)):\n # lgb树模型\n lgb_reg = lgb.LGBMRegressor(\n boosting_type = 'gbdt',\n objective = 'regression',\n metric = 'mse',\n train_metric = True,\n n_estimators = 3000,\n early_stopping_rounds = 100,\n n_jobs = -1,\n learning_rate = 0.01,\n max_depth = 4,\n feature_fraction = 0.8,\n feature_fraction_seed = 0,\n bagging_fraction = 0.8, \n bagging_freq = 2,\n bagging_seed = 0,\n lambda_l1 = 1,\n lambda_l2 = 1,\n verbosity = 1\n )\n \n # 切分训练集和预测集\n X_train_KFold, X_test_KFold = X_train2[train_index], X_train2[test_index]\n y_train_KFold, y_test_KFold = y_train[train_index], y_train[test_index]\n \n # 训练模型\n lgb_reg.fit(\n X=X_train_KFold,y=y_train_KFold,\n eval_set=[(X_train_KFold, y_train_KFold),(X_test_KFold, y_test_KFold)],\n eval_names=['Train','Test'],\n early_stopping_rounds=100,\n eval_metric='MSE',\n verbose=600\n )\n\n\n # 训练集预测 测试集预测\n y_train_KFold_predict = lgb_reg.predict(X_train_KFold,num_iteration=lgb_reg.best_iteration_)\n y_test_KFold_predict = lgb_reg.predict(X_test_KFold,num_iteration=lgb_reg.best_iteration_) \n \n print('第{}折 训练和预测 训练MSE 预测MSE'.format(i))\n train_mse = mean_squared_error(y_train_KFold_predict, y_train_KFold)\n print('------\\t', '训练MSE\\t', train_mse, '\\t------')\n test_mse = mean_squared_error(y_test_KFold_predict, y_test_KFold)\n print('------\\t', '预测MSE\\t', test_mse, '\\t------\\n')\n \n MSE_DICT['train_mse'].append(train_mse)\n MSE_DICT['test_mse'].append(test_mse)\nprint('------\\t', '训练平均MSE\\t', np.mean(MSE_DICT['train_mse']), '\\t------')\nprint('------\\t', '预测平均MSE\\t', np.mean(MSE_DICT['test_mse']), '\\t------')",
"Training until validation scores don't improve for 100 rounds.\n[600]\tTrain's l2: 0.0429543\tTest's l2: 0.117409\n[1200]\tTrain's l2: 0.0206657\tTest's l2: 0.11418\nEarly stopping, best iteration is:\n[1698]\tTrain's l2: 0.0119091\tTest's l2: 0.112934\n第0折 训练和预测 训练MSE 预测MSE\n------\t 训练MSE\t 0.011909097708741524 \t------\n------\t 预测MSE\t 0.11293385783352594 \t------\n\nTraining until validation scores don't improve for 100 rounds.\n[600]\tTrain's l2: 0.0446672\tTest's l2: 0.104044\n[1200]\tTrain's l2: 0.0207863\tTest's l2: 0.100736\n[1800]\tTrain's l2: 0.010513\tTest's l2: 0.0996182\nEarly stopping, best iteration is:\n[1808]\tTrain's l2: 0.0104171\tTest's l2: 0.0995806\n第1折 训练和预测 训练MSE 预测MSE\n------\t 训练MSE\t 0.010417092420258707 \t------\n------\t 预测MSE\t 0.09958063481516348 \t------\n\nTraining until validation scores don't improve for 100 rounds.\n[600]\tTrain's l2: 0.044053\tTest's l2: 0.10463\n[1200]\tTrain's l2: 0.0209939\tTest's l2: 0.101379\n[1800]\tTrain's l2: 0.0109506\tTest's l2: 0.0996974\n[2400]\tTrain's l2: 0.00607394\tTest's l2: 0.0988323\n[3000]\tTrain's l2: 0.00365503\tTest's l2: 0.098321\nDid not meet early stopping. Best iteration is:\n[3000]\tTrain's l2: 0.00365503\tTest's l2: 0.098321\n第2折 训练和预测 训练MSE 预测MSE\n------\t 训练MSE\t 0.0036550315928538407 \t------\n------\t 预测MSE\t 0.09832095115712135 \t------\n\nTraining until validation scores don't improve for 100 rounds.\n[600]\tTrain's l2: 0.043226\tTest's l2: 0.122334\n[1200]\tTrain's l2: 0.0203464\tTest's l2: 0.117855\n[1800]\tTrain's l2: 0.0105192\tTest's l2: 0.116615\nEarly stopping, best iteration is:\n[2013]\tTrain's l2: 0.00843265\tTest's l2: 0.116259\n第3折 训练和预测 训练MSE 预测MSE\n------\t 训练MSE\t 0.008432647602258895 \t------\n------\t 预测MSE\t 0.11625884614608846 \t------\n\nTraining until validation scores don't improve for 100 rounds.\n[600]\tTrain's l2: 0.0446795\tTest's l2: 0.108496\n[1200]\tTrain's l2: 0.0211013\tTest's l2: 0.105102\nEarly stopping, best iteration is:\n[1383]\tTrain's l2: 0.0171819\tTest's l2: 0.104447\n第4折 训练和预测 训练MSE 预测MSE\n------\t 训练MSE\t 0.017181862883028358 \t------\n------\t 预测MSE\t 0.1044469676545597 \t------\n\n------\t 训练平均MSE\t 0.010319146441428265 \t------\n------\t 预测平均MSE\t 0.1063082515212918 \t------\n"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0efe97e05cabf79ce28eb2ac06f9d8b0639fdfc
| 2,504 |
ipynb
|
Jupyter Notebook
|
00_core.ipynb
|
ericct/hello_nbdev
|
eca44724bfaa0892e5547e46ef9500daf9d94fba
|
[
"Apache-2.0"
] | null | null | null |
00_core.ipynb
|
ericct/hello_nbdev
|
eca44724bfaa0892e5547e46ef9500daf9d94fba
|
[
"Apache-2.0"
] | 2 |
2021-09-28T05:43:25.000Z
|
2022-02-26T10:12:34.000Z
|
00_core.ipynb
|
ericct/hello_nbdev
|
eca44724bfaa0892e5547e46ef9500daf9d94fba
|
[
"Apache-2.0"
] | null | null | null | 18.827068 | 174 | 0.474441 |
[
[
[
"# default_exp core",
"_____no_output_____"
]
],
[
[
"# module name here\n\n> API details.",
"_____no_output_____"
]
],
[
[
"#hide\nfrom nbdev.showdoc import *\nfrom fastcore.test import *",
"_____no_output_____"
],
[
"#export\ndef say_hello(to):\n \"Say hello to somebody\"\n return f'Hello {to}!'",
"_____no_output_____"
],
[
"say_hello(\"toto\")",
"_____no_output_____"
],
[
"test_eq(say_hello(\"Jeremy\"), \"Hello Jeremy!\")",
"_____no_output_____"
],
[
"#export\nclass HelloSayer:\n \"Say hello to `to` using `say_hello`\"\n def __init__(self, to): self.to = to\n\n def say(self):\n \"Do the saying\"\n return say_hello(self.to)",
"_____no_output_____"
],
[
"show_doc(HelloSayer.say)",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code"
] |
[
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0efef0ba4537d4b9956451a4cf8af4041625d88
| 26,435 |
ipynb
|
Jupyter Notebook
|
lab10.ipynb
|
MorganEstep/IA241
|
ef31672d24a1745a18169610c85f59187c911818
|
[
"MIT"
] | null | null | null |
lab10.ipynb
|
MorganEstep/IA241
|
ef31672d24a1745a18169610c85f59187c911818
|
[
"MIT"
] | null | null | null |
lab10.ipynb
|
MorganEstep/IA241
|
ef31672d24a1745a18169610c85f59187c911818
|
[
"MIT"
] | null | null | null | 128.95122 | 22,932 | 0.903045 |
[
[
[
"# first jupyter notebook",
"_____no_output_____"
]
],
[
[
"print('hello world')",
"hello world\n"
],
[
"a=3\nb=4\na+b",
"_____no_output_____"
]
],
[
[
"## test print",
"_____no_output_____"
]
],
[
[
"print(c)",
"5\n"
]
],
[
[
"## define new variable",
"_____no_output_____"
]
],
[
[
"c = 5",
"_____no_output_____"
]
],
[
[
"1. this is line1\n2. this is line2\n3. this is line3",
"_____no_output_____"
],
[
"* item1\n* item2\n* item3",
"_____no_output_____"
],
[
"[JMU](https://www.jmu.edu)",
"_____no_output_____"
],
[
"![jmu image] (https://www.jmu.edu/_images/_story-rotator/full-width-images/choices4-2000x666.jpg)",
"_____no_output_____"
]
],
[
[
"%matplotlib inline",
"_____no_output_____"
],
[
"import numpy as np\nimport matplotlib.pyplot as plt\n\nX = np.linspace(-np.pi, np.pi, 256, endpoint=True)\nC, S = np.cos(X), np.sin(X)\n\nplt.plot(X, C)\nplt.plot(X, S)\n\nplt.show()",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
d0f0135b70cee9b774575cf8845dc29322e23b3e
| 25,584 |
ipynb
|
Jupyter Notebook
|
nmt/Persian-English_pes-eng.ipynb
|
WangXingqiu/machine-translation
|
ba6e9556645c777d8a15dbb3bec11521a75744a9
|
[
"MIT"
] | 3 |
2020-12-16T03:58:09.000Z
|
2021-06-06T07:25:35.000Z
|
nmt/Persian-English_pes-eng.ipynb
|
WangXingqiu/machine-translation
|
ba6e9556645c777d8a15dbb3bec11521a75744a9
|
[
"MIT"
] | null | null | null |
nmt/Persian-English_pes-eng.ipynb
|
WangXingqiu/machine-translation
|
ba6e9556645c777d8a15dbb3bec11521a75744a9
|
[
"MIT"
] | 2 |
2020-12-20T03:18:06.000Z
|
2021-06-06T07:25:55.000Z
| 29.138952 | 342 | 0.542839 |
[
[
[
"# 基于注意力的神经机器翻译",
"_____no_output_____"
],
[
"此笔记本训练一个将波斯语翻译为英语的序列到序列(sequence to sequence,简写为 seq2seq)模型。此例子难度较高,需要对序列到序列模型的知识有一定了解。\n\n训练完此笔记本中的模型后,你将能够输入一个波斯语句子,例如 *\"من می دانم.\"*,并返回其英语翻译 *\"I know.\"*\n\n对于一个简单的例子来说,翻译质量令人满意。但是更有趣的可能是生成的注意力图:它显示在翻译过程中,输入句子的哪些部分受到了模型的注意。\n\n<img src=\"https://tensorflow.google.cn/images/spanish-english.png\" alt=\"spanish-english attention plot\">\n\n请注意:运行这个例子用一个 P100 GPU 需要花大约 10 分钟。",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\n\nimport matplotlib.pyplot as plt\nimport matplotlib.ticker as ticker\nfrom sklearn.model_selection import train_test_split\n\nimport unicodedata\nimport re\nimport numpy as np\nimport os\nimport io\nimport time",
"_____no_output_____"
]
],
[
[
"## 下载和准备数据集\n\n我们将使用 http://www.manythings.org/anki/ 提供的一个语言数据集。这个数据集包含如下格式的语言翻译对:\n\n```\nMay I borrow this book?\t¿Puedo tomar prestado este libro?\n```\n\n这个数据集中有很多种语言可供选择。我们将使用英语 - 波斯语数据集。为方便使用,我们在谷歌云上提供了此数据集的一份副本。但是你也可以自己下载副本。下载完数据集后,我们将采取下列步骤准备数据:\n\n1. 给每个句子添加一个 *开始* 和一个 *结束* 标记(token)。\n2. 删除特殊字符以清理句子。\n3. 创建一个单词索引和一个反向单词索引(即一个从单词映射至 id 的词典和一个从 id 映射至单词的词典)。\n4. 将每个句子填充(pad)到最大长度。",
"_____no_output_____"
]
],
[
[
"'''\n# 下载文件\npath_to_zip = tf.keras.utils.get_file(\n 'spa-eng.zip', origin='http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip',\n extract=True)\n\npath_to_file = os.path.dirname(path_to_zip)+\"/spa-eng/spa.txt\"\n'''\npath_to_file = \"./lan/pes.txt\"",
"_____no_output_____"
],
[
"# 将 unicode 文件转换为 ascii\ndef unicode_to_ascii(s):\n return ''.join(c for c in unicodedata.normalize('NFD', s)\n if unicodedata.category(c) != 'Mn')\n\n\ndef preprocess_sentence(w):\n w = unicode_to_ascii(w.lower().strip())\n\n # 在单词与跟在其后的标点符号之间插入一个空格\n # 例如: \"he is a boy.\" => \"he is a boy .\"\n # 参考:https://stackoverflow.com/questions/3645931/python-padding-punctuation-with-white-spaces-keeping-punctuation\n w = re.sub(r\"([?.!,¿])\", r\" \\1 \", w)\n w = re.sub(r'[\" \"]+', \" \", w)\n\n # 除了 (a-z, A-Z, \".\", \"?\", \"!\", \",\"),将所有字符替换为空格\n w = re.sub(r\"[^a-zA-Z?.!,¿]+\", \" \", w)\n\n w = w.rstrip().strip()\n\n # 给句子加上开始和结束标记\n # 以便模型知道何时开始和结束预测\n w = '<start> ' + w + ' <end>'\n return w",
"_____no_output_____"
],
[
"en_sentence = u\"May I borrow this book?\"\nsp_sentence = u\"¿Puedo tomar prestado este libro?\"\nprint(preprocess_sentence(en_sentence))\nprint(preprocess_sentence(sp_sentence).encode('utf-8'))",
"_____no_output_____"
],
[
"# 1. 去除重音符号\n# 2. 清理句子\n# 3. 返回这样格式的单词对:[ENGLISH, SPANISH]\ndef create_dataset(path, num_examples):\n lines = io.open(path, encoding='UTF-8').read().strip().split('\\n')\n\n word_pairs = [[preprocess_sentence(w) for w in l.split('\\t')] for l in lines[:num_examples]]\n\n return zip(*word_pairs)",
"_____no_output_____"
],
[
"en, sp = create_dataset(path_to_file, None)\nprint(en[-1])\nprint(sp[-1])",
"_____no_output_____"
],
[
"def max_length(tensor):\n return max(len(t) for t in tensor)",
"_____no_output_____"
],
[
"def tokenize(lang):\n lang_tokenizer = tf.keras.preprocessing.text.Tokenizer(\n filters='')\n lang_tokenizer.fit_on_texts(lang)\n\n tensor = lang_tokenizer.texts_to_sequences(lang)\n\n tensor = tf.keras.preprocessing.sequence.pad_sequences(tensor,\n padding='post')\n\n return tensor, lang_tokenizer",
"_____no_output_____"
],
[
"def load_dataset(path, num_examples=None):\n # 创建清理过的输入输出对\n targ_lang, inp_lang = create_dataset(path, num_examples)\n\n input_tensor, inp_lang_tokenizer = tokenize(inp_lang)\n target_tensor, targ_lang_tokenizer = tokenize(targ_lang)\n\n return input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer",
"_____no_output_____"
]
],
[
[
"### 限制数据集的大小以加快实验速度(可选)\n\n在超过 10 万个句子的完整数据集上训练需要很长时间。为了更快地训练,我们可以将数据集的大小限制为 3 万个句子(当然,翻译质量也会随着数据的减少而降低):",
"_____no_output_____"
]
],
[
[
"# 尝试实验不同大小的数据集\nnum_examples = 30000\ninput_tensor, target_tensor, inp_lang, targ_lang = load_dataset(path_to_file, num_examples)\n\n# 计算目标张量的最大长度 (max_length)\nmax_length_targ, max_length_inp = max_length(target_tensor), max_length(input_tensor)",
"_____no_output_____"
],
[
"# 采用 80 - 20 的比例切分训练集和验证集\ninput_tensor_train, input_tensor_val, target_tensor_train, target_tensor_val = train_test_split(input_tensor, target_tensor, test_size=0.2)\n\n# 显示长度\nprint(len(input_tensor_train), len(target_tensor_train), len(input_tensor_val), len(target_tensor_val))",
"_____no_output_____"
],
[
"def convert(lang, tensor):\n for t in tensor:\n if t!=0:\n print (\"%d ----> %s\" % (t, lang.index_word[t]))",
"_____no_output_____"
],
[
"print (\"Input Language; index to word mapping\")\nconvert(inp_lang, input_tensor_train[0])\nprint ()\nprint (\"Target Language; index to word mapping\")\nconvert(targ_lang, target_tensor_train[0])",
"_____no_output_____"
]
],
[
[
"### 创建一个 tf.data 数据集",
"_____no_output_____"
]
],
[
[
"BUFFER_SIZE = len(input_tensor_train)\nBATCH_SIZE = 64\nsteps_per_epoch = len(input_tensor_train)//BATCH_SIZE\nembedding_dim = 256\nunits = 1024\nvocab_inp_size = len(inp_lang.word_index)+1\nvocab_tar_size = len(targ_lang.word_index)+1\n\ndataset = tf.data.Dataset.from_tensor_slices((input_tensor_train, target_tensor_train)).shuffle(BUFFER_SIZE)\ndataset = dataset.batch(BATCH_SIZE, drop_remainder=True)",
"_____no_output_____"
],
[
"example_input_batch, example_target_batch = next(iter(dataset))\nexample_input_batch.shape, example_target_batch.shape",
"_____no_output_____"
]
],
[
[
"## 编写编码器 (encoder) 和解码器 (decoder) 模型\n\n实现一个基于注意力的编码器 - 解码器模型。关于这种模型,你可以阅读 TensorFlow 的 [神经机器翻译 (序列到序列) 教程](https://github.com/tensorflow/nmt)。本示例采用一组更新的 API。此笔记本实现了上述序列到序列教程中的 [注意力方程式](https://github.com/tensorflow/nmt#background-on-the-attention-mechanism)。下图显示了注意力机制为每个输入单词分配一个权重,然后解码器将这个权重用于预测句子中的下一个单词。下图和公式是 [Luong 的论文](https://arxiv.org/abs/1508.04025v5)中注意力机制的一个例子。\n\n<img src=\"https://tensorflow.google.cn/images/seq2seq/attention_mechanism.jpg\" width=\"500\" alt=\"attention mechanism\">\n\n输入经过编码器模型,编码器模型为我们提供形状为 *(批大小,最大长度,隐藏层大小)* 的编码器输出和形状为 *(批大小,隐藏层大小)* 的编码器隐藏层状态。\n\n下面是所实现的方程式:\n\n<img src=\"https://tensorflow.google.cn/images/seq2seq/attention_equation_0.jpg\" alt=\"attention equation 0\" width=\"800\">\n<img src=\"https://tensorflow.google.cn/images/seq2seq/attention_equation_1.jpg\" alt=\"attention equation 1\" width=\"800\">\n\n本教程的编码器采用 [Bahdanau 注意力](https://arxiv.org/pdf/1409.0473.pdf)。在用简化形式编写之前,让我们先决定符号:\n\n* FC = 完全连接(密集)层\n* EO = 编码器输出\n* H = 隐藏层状态\n* X = 解码器输入\n\n以及伪代码:\n\n* `score = FC(tanh(FC(EO) + FC(H)))`\n* `attention weights = softmax(score, axis = 1)`。 Softmax 默认被应用于最后一个轴,但是这里我们想将它应用于 *第一个轴*, 因为分数 (score) 的形状是 *(批大小,最大长度,隐藏层大小)*。最大长度 (`max_length`) 是我们的输入的长度。因为我们想为每个输入分配一个权重,所以 softmax 应该用在这个轴上。\n* `context vector = sum(attention weights * EO, axis = 1)`。选择第一个轴的原因同上。\n* `embedding output` = 解码器输入 X 通过一个嵌入层。\n* `merged vector = concat(embedding output, context vector)`\n* 此合并后的向量随后被传送到 GRU\n\n每个步骤中所有向量的形状已在代码的注释中阐明:",
"_____no_output_____"
]
],
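Before the implementation below, here is a quick, self-contained check of the axis choice called out in the pseudo-code above. The shapes are made-up illustration values, not the notebook's actual batch size or sequence length.

```python
import tensorflow as tf

score = tf.random.normal((4, 7, 1))        # (batch_size, max_length, 1)
weights = tf.nn.softmax(score, axis=1)     # normalize across input positions, not across the last axis
print(tf.reduce_sum(weights, axis=1)[:2])  # each example's weights sum to ~1.0
```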
[
[
"class Encoder(tf.keras.Model):\n def __init__(self, vocab_size, embedding_dim, enc_units, batch_sz):\n super(Encoder, self).__init__()\n self.batch_sz = batch_sz\n self.enc_units = enc_units\n self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)\n self.gru = tf.keras.layers.GRU(self.enc_units,\n return_sequences=True,\n return_state=True,\n recurrent_initializer='glorot_uniform')\n\n def call(self, x, hidden):\n x = self.embedding(x)\n output, state = self.gru(x, initial_state = hidden)\n return output, state\n\n def initialize_hidden_state(self):\n return tf.zeros((self.batch_sz, self.enc_units))",
"_____no_output_____"
],
[
"encoder = Encoder(vocab_inp_size, embedding_dim, units, BATCH_SIZE)\n\n# 样本输入\nsample_hidden = encoder.initialize_hidden_state()\nsample_output, sample_hidden = encoder(example_input_batch, sample_hidden)\nprint ('Encoder output shape: (batch size, sequence length, units) {}'.format(sample_output.shape))\nprint ('Encoder Hidden state shape: (batch size, units) {}'.format(sample_hidden.shape))",
"_____no_output_____"
],
[
"class BahdanauAttention(tf.keras.layers.Layer):\n def __init__(self, units):\n super(BahdanauAttention, self).__init__()\n self.W1 = tf.keras.layers.Dense(units)\n self.W2 = tf.keras.layers.Dense(units)\n self.V = tf.keras.layers.Dense(1)\n\n def call(self, query, values):\n # 隐藏层的形状 == (批大小,隐藏层大小)\n # hidden_with_time_axis 的形状 == (批大小,1,隐藏层大小)\n # 这样做是为了执行加法以计算分数 \n hidden_with_time_axis = tf.expand_dims(query, 1)\n\n # 分数的形状 == (批大小,最大长度,1)\n # 我们在最后一个轴上得到 1, 因为我们把分数应用于 self.V\n # 在应用 self.V 之前,张量的形状是(批大小,最大长度,单位)\n score = self.V(tf.nn.tanh(\n self.W1(values) + self.W2(hidden_with_time_axis)))\n\n # 注意力权重 (attention_weights) 的形状 == (批大小,最大长度,1)\n attention_weights = tf.nn.softmax(score, axis=1)\n\n # 上下文向量 (context_vector) 求和之后的形状 == (批大小,隐藏层大小)\n context_vector = attention_weights * values\n context_vector = tf.reduce_sum(context_vector, axis=1)\n\n return context_vector, attention_weights",
"_____no_output_____"
],
[
"attention_layer = BahdanauAttention(10)\nattention_result, attention_weights = attention_layer(sample_hidden, sample_output)\n\nprint(\"Attention result shape: (batch size, units) {}\".format(attention_result.shape))\nprint(\"Attention weights shape: (batch_size, sequence_length, 1) {}\".format(attention_weights.shape))",
"_____no_output_____"
],
[
"class Decoder(tf.keras.Model):\n def __init__(self, vocab_size, embedding_dim, dec_units, batch_sz):\n super(Decoder, self).__init__()\n self.batch_sz = batch_sz\n self.dec_units = dec_units\n self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)\n self.gru = tf.keras.layers.GRU(self.dec_units,\n return_sequences=True,\n return_state=True,\n recurrent_initializer='glorot_uniform')\n self.fc = tf.keras.layers.Dense(vocab_size)\n\n # 用于注意力\n self.attention = BahdanauAttention(self.dec_units)\n\n def call(self, x, hidden, enc_output):\n # 编码器输出 (enc_output) 的形状 == (批大小,最大长度,隐藏层大小)\n context_vector, attention_weights = self.attention(hidden, enc_output)\n\n # x 在通过嵌入层后的形状 == (批大小,1,嵌入维度)\n x = self.embedding(x)\n\n # x 在拼接 (concatenation) 后的形状 == (批大小,1,嵌入维度 + 隐藏层大小)\n x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)\n\n # 将合并后的向量传送到 GRU\n output, state = self.gru(x)\n\n # 输出的形状 == (批大小 * 1,隐藏层大小)\n output = tf.reshape(output, (-1, output.shape[2]))\n\n # 输出的形状 == (批大小,vocab)\n x = self.fc(output)\n\n return x, state, attention_weights",
"_____no_output_____"
],
[
"decoder = Decoder(vocab_tar_size, embedding_dim, units, BATCH_SIZE)\n\nsample_decoder_output, _, _ = decoder(tf.random.uniform((64, 1)),\n sample_hidden, sample_output)\n\nprint ('Decoder output shape: (batch_size, vocab size) {}'.format(sample_decoder_output.shape))",
"_____no_output_____"
]
],
[
[
"## 定义优化器和损失函数",
"_____no_output_____"
]
],
[
[
"optimizer = tf.keras.optimizers.Adam()\nloss_object = tf.keras.losses.SparseCategoricalCrossentropy(\n from_logits=True, reduction='none')\n\ndef loss_function(real, pred):\n mask = tf.math.logical_not(tf.math.equal(real, 0))\n loss_ = loss_object(real, pred)\n\n mask = tf.cast(mask, dtype=loss_.dtype)\n loss_ *= mask\n\n return tf.reduce_mean(loss_)",
"_____no_output_____"
]
],
[
[
"## 检查点(基于对象保存)",
"_____no_output_____"
]
],
[
[
"checkpoint_dir = './training_checkpoints'\ncheckpoint_prefix = os.path.join(checkpoint_dir, \"ckpt\")\ncheckpoint = tf.train.Checkpoint(optimizer=optimizer,\n encoder=encoder,\n decoder=decoder)",
"_____no_output_____"
]
],
[
[
"## 训练\n\n1. 将 *输入* 传送至 *编码器*,编码器返回 *编码器输出* 和 *编码器隐藏层状态*。\n2. 将编码器输出、编码器隐藏层状态和解码器输入(即 *开始标记*)传送至解码器。\n3. 解码器返回 *预测* 和 *解码器隐藏层状态*。\n4. 解码器隐藏层状态被传送回模型,预测被用于计算损失。\n5. 使用 *教师强制 (teacher forcing)* 决定解码器的下一个输入。\n6. *教师强制* 是将 *目标词* 作为 *下一个输入* 传送至解码器的技术。\n7. 最后一步是计算梯度,并将其应用于优化器和反向传播。",
"_____no_output_____"
]
],
[
[
"@tf.function\ndef train_step(inp, targ, enc_hidden):\n loss = 0\n\n with tf.GradientTape() as tape:\n enc_output, enc_hidden = encoder(inp, enc_hidden)\n\n dec_hidden = enc_hidden\n\n dec_input = tf.expand_dims([targ_lang.word_index['<start>']] * BATCH_SIZE, 1)\n\n # 教师强制 - 将目标词作为下一个输入\n for t in range(1, targ.shape[1]):\n # 将编码器输出 (enc_output) 传送至解码器\n predictions, dec_hidden, _ = decoder(dec_input, dec_hidden, enc_output)\n\n loss += loss_function(targ[:, t], predictions)\n\n # 使用教师强制\n dec_input = tf.expand_dims(targ[:, t], 1)\n\n batch_loss = (loss / int(targ.shape[1]))\n\n variables = encoder.trainable_variables + decoder.trainable_variables\n\n gradients = tape.gradient(loss, variables)\n\n optimizer.apply_gradients(zip(gradients, variables))\n\n return batch_loss",
"_____no_output_____"
],
[
"EPOCHS = 10\n\nfor epoch in range(EPOCHS):\n start = time.time()\n\n enc_hidden = encoder.initialize_hidden_state()\n total_loss = 0\n\n for (batch, (inp, targ)) in enumerate(dataset.take(steps_per_epoch)):\n batch_loss = train_step(inp, targ, enc_hidden)\n total_loss += batch_loss\n\n if batch % 100 == 0:\n print('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1,\n batch,\n batch_loss.numpy()))\n # 每 2 个周期(epoch),保存(检查点)一次模型\n if (epoch + 1) % 2 == 0:\n checkpoint.save(file_prefix = checkpoint_prefix)\n\n print('Epoch {} Loss {:.4f}'.format(epoch + 1,\n total_loss / steps_per_epoch))\n print('Time taken for 1 epoch {} sec\\n'.format(time.time() - start))",
"_____no_output_____"
]
],
[
[
"## 翻译\n\n* 评估函数类似于训练循环,不同之处在于在这里我们不使用 *教师强制*。每个时间步的解码器输入是其先前的预测、隐藏层状态和编码器输出。\n* 当模型预测 *结束标记* 时停止预测。\n* 存储 *每个时间步的注意力权重*。\n\n请注意:对于一个输入,编码器输出仅计算一次。",
"_____no_output_____"
]
],
[
[
"def evaluate(sentence):\n attention_plot = np.zeros((max_length_targ, max_length_inp))\n\n sentence = preprocess_sentence(sentence)\n\n inputs = [inp_lang.word_index[i] for i in sentence.split(' ')]\n inputs = tf.keras.preprocessing.sequence.pad_sequences([inputs],\n maxlen=max_length_inp,\n padding='post')\n inputs = tf.convert_to_tensor(inputs)\n\n result = ''\n\n hidden = [tf.zeros((1, units))]\n enc_out, enc_hidden = encoder(inputs, hidden)\n\n dec_hidden = enc_hidden\n dec_input = tf.expand_dims([targ_lang.word_index['<start>']], 0)\n\n for t in range(max_length_targ):\n predictions, dec_hidden, attention_weights = decoder(dec_input,\n dec_hidden,\n enc_out)\n\n # 存储注意力权重以便后面制图\n attention_weights = tf.reshape(attention_weights, (-1, ))\n attention_plot[t] = attention_weights.numpy()\n\n predicted_id = tf.argmax(predictions[0]).numpy()\n\n result += targ_lang.index_word[predicted_id] + ' '\n\n if targ_lang.index_word[predicted_id] == '<end>':\n return result, sentence, attention_plot\n\n # 预测的 ID 被输送回模型\n dec_input = tf.expand_dims([predicted_id], 0)\n\n return result, sentence, attention_plot",
"_____no_output_____"
],
[
"# 注意力权重制图函数\ndef plot_attention(attention, sentence, predicted_sentence):\n fig = plt.figure(figsize=(10,10))\n ax = fig.add_subplot(1, 1, 1)\n ax.matshow(attention, cmap='viridis')\n\n fontdict = {'fontsize': 14}\n\n ax.set_xticklabels([''] + sentence, fontdict=fontdict, rotation=90)\n ax.set_yticklabels([''] + predicted_sentence, fontdict=fontdict)\n\n ax.xaxis.set_major_locator(ticker.MultipleLocator(1))\n ax.yaxis.set_major_locator(ticker.MultipleLocator(1))\n\n plt.show()",
"_____no_output_____"
],
[
"def translate(sentence):\n result, sentence, attention_plot = evaluate(sentence)\n\n print('Input: %s' % (sentence))\n print('Predicted translation: {}'.format(result))\n\n attention_plot = attention_plot[:len(result.split(' ')), :len(sentence.split(' '))]\n plot_attention(attention_plot, sentence.split(' '), result.split(' '))",
"_____no_output_____"
]
],
[
[
"## 恢复最新的检查点并验证",
"_____no_output_____"
]
],
[
[
"# 恢复检查点目录 (checkpoint_dir) 中最新的检查点\ncheckpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))",
"_____no_output_____"
],
[
"translate(u'hace mucho frio aqui.')",
"_____no_output_____"
],
[
"translate(u'esta es mi vida.')",
"_____no_output_____"
],
[
"translate(u'¿todavia estan en casa?')",
"_____no_output_____"
],
[
"# 错误的翻译\ntranslate(u'trata de averiguarlo.')",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
d0f015c8b19b8c1e722616ec31b7e2993ed435f6
| 32,528 |
ipynb
|
Jupyter Notebook
|
kaggle-titanic/cli/.ipynb_checkpoints/experimentation-checkpoint.ipynb
|
datmo/datmo-tutorials
|
9ff1a1a546f6ed278fe9d86430950ee27ee1416a
|
[
"MIT"
] | 5 |
2018-07-02T10:53:26.000Z
|
2019-02-21T18:43:50.000Z
|
kaggle-titanic/cli/.ipynb_checkpoints/experimentation-checkpoint.ipynb
|
asuprem/datmo-tutorials
|
8aa7d34aec72689fb80943ae71fd842b6ba8f6da
|
[
"MIT"
] | 3 |
2018-07-03T23:22:44.000Z
|
2019-09-04T14:47:06.000Z
|
kaggle-titanic/cli/.ipynb_checkpoints/experimentation-checkpoint.ipynb
|
asuprem/datmo-tutorials
|
8aa7d34aec72689fb80943ae71fd842b6ba8f6da
|
[
"MIT"
] | 8 |
2018-07-02T10:53:28.000Z
|
2019-11-26T04:02:18.000Z
| 33.090539 | 399 | 0.45862 |
[
[
[
"This is from a \"Getting Started\" competition from Kaggle [Titanic competition](https://www.kaggle.com/c/titanic) to showcase how we can use Auto-ML along with datmo and docker, in order to track our work and make machine learning workflow reprocible and usable. Some part of data analysis is inspired from this [kernel](https://www.kaggle.com/sinakhorami/titanic-best-working-classifier)\n\nThis approach can be categorized into following methods,\n\n1. Exploratory Data Analysis (EDA) \n2. Data Cleaning\n3. Using Auto-ML to figure out the best algorithm and hyperparameter\n\nDuring the process of EDA and feature engineering, we would be using datmo to create versions of work by creating snapshot. ",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport numpy as np\nimport pandas as pd\nimport re as re\n\ntrain = pd.read_csv('./input/train.csv', header = 0, dtype={'Age': np.float64})\ntest = pd.read_csv('./input/test.csv' , header = 0, dtype={'Age': np.float64})\nfull_data = [train, test]\n\nprint (train.info())",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 891 entries, 0 to 890\nData columns (total 12 columns):\nPassengerId 891 non-null int64\nSurvived 891 non-null int64\nPclass 891 non-null int64\nName 891 non-null object\nSex 891 non-null object\nAge 714 non-null float64\nSibSp 891 non-null int64\nParch 891 non-null int64\nTicket 891 non-null object\nFare 891 non-null float64\nCabin 204 non-null object\nEmbarked 889 non-null object\ndtypes: float64(2), int64(5), object(5)\nmemory usage: 83.6+ KB\nNone\n"
]
],
[
[
"#### 1. Exploratory Data Analysis \n###### To understand how each feature has the contribution to Survive",
"_____no_output_____"
],
[
"###### a. `Sex`",
"_____no_output_____"
]
],
[
[
"print (train[[\"Sex\", \"Survived\"]].groupby(['Sex'], as_index=False).mean())",
" Sex Survived\n0 female 0.742038\n1 male 0.188908\n"
]
],
[
[
"###### b. `Pclass`",
"_____no_output_____"
]
],
[
[
"print (train[['Pclass', 'Survived']].groupby(['Pclass'], as_index=False).mean())",
" Pclass Survived\n0 1 0.629630\n1 2 0.472826\n2 3 0.242363\n"
]
],
[
[
"c. `SibSp and Parch`\n\nWith the number of siblings/spouse and the number of children/parents we can create new feature called Family Size. ",
"_____no_output_____"
]
],
[
[
"for dataset in full_data:\n dataset['FamilySize'] = dataset['SibSp'] + dataset['Parch'] + 1\nprint (train[['FamilySize', 'Survived']].groupby(['FamilySize'], as_index=False).mean())",
" FamilySize Survived\n0 1 0.303538\n1 2 0.552795\n2 3 0.578431\n3 4 0.724138\n4 5 0.200000\n5 6 0.136364\n6 7 0.333333\n7 8 0.000000\n8 11 0.000000\n"
]
],
[
[
"`FamilySize` seems to have a significant effect on our prediction. `Survived` has increased until a `FamilySize` of 4 and has decreased after that. Let's categorize people to check they are alone or not.",
"_____no_output_____"
]
],
[
[
"for dataset in full_data:\n dataset['IsAlone'] = 0\n dataset.loc[dataset['FamilySize'] == 1, 'IsAlone'] = 1\nprint (train[['IsAlone', 'Survived']].groupby(['IsAlone'], as_index=False).mean())",
" IsAlone Survived\n0 0 0.505650\n1 1 0.303538\n"
]
],
[
[
"d. `Embarked` \n\nwe fill the missing values with most occured value `S`",
"_____no_output_____"
]
],
[
[
"for dataset in full_data:\n dataset['Embarked'] = dataset['Embarked'].fillna('S')\nprint (train[['Embarked', 'Survived']].groupby(['Embarked'], as_index=False).mean())",
" Embarked Survived\n0 C 0.553571\n1 Q 0.389610\n2 S 0.339009\n"
]
],
[
[
"e. `Fare`\n\nFare also has some missing values which will be filled with the median",
"_____no_output_____"
]
],
[
[
"for dataset in full_data:\n dataset['Fare'] = dataset['Fare'].fillna(train['Fare'].median())\ntrain['CategoricalFare'] = pd.qcut(train['Fare'], 4)\nprint (train[['CategoricalFare', 'Survived']].groupby(['CategoricalFare'], as_index=False).mean())",
" CategoricalFare Survived\n0 (-0.001, 7.91] 0.197309\n1 (7.91, 14.454] 0.303571\n2 (14.454, 31.0] 0.454955\n3 (31.0, 512.329] 0.581081\n"
]
],
[
[
"It shows the `Fare` has a significant affect on survival, showcasing that people haivng paid higher fares had higher chances of survival",
"_____no_output_____"
],
[
"f. `Age`\n\nThere are plenty of missing values in this feature. # generate random numbers between (mean - std) and (mean + std). then we categorize age into 5 range.",
"_____no_output_____"
]
],
[
[
"for dataset in full_data:\n age_avg = dataset['Age'].mean()\n age_std = dataset['Age'].std()\n age_null_count = dataset['Age'].isnull().sum()\n \n age_null_random_list = np.random.randint(age_avg - age_std, age_avg + age_std, size=age_null_count)\n dataset['Age'][np.isnan(dataset['Age'])] = age_null_random_list\n dataset['Age'] = dataset['Age'].astype(int)\n \ntrain['CategoricalAge'] = pd.cut(train['Age'], 5)\n\nprint (train[['CategoricalAge', 'Survived']].groupby(['CategoricalAge'], as_index=False).mean())\n",
" CategoricalAge Survived\n0 (-0.08, 16.0] 0.523810\n1 (16.0, 32.0] 0.347345\n2 (32.0, 48.0] 0.389764\n3 (48.0, 64.0] 0.434783\n4 (64.0, 80.0] 0.090909\n"
]
],
[
[
"g. `Name`\n\nLet's the title of people ",
"_____no_output_____"
]
],
[
[
"def get_title(name):\n title_search = re.search(' ([A-Za-z]+)\\.', name)\n # If the title exists, extract and return it.\n if title_search:\n return title_search.group(1)\n return \"\"\n\nfor dataset in full_data:\n dataset['Title'] = dataset['Name'].apply(get_title)\n\nprint(\"=====Title vs Sex=====\")\nprint(pd.crosstab(train['Title'], train['Sex']))\nprint(\"\")\nprint(\"=====Title vs Survived=====\")\nprint (train[['Title', 'Survived']].groupby(['Title'], as_index=False).mean())",
"=====Title vs Sex=====\nSex female male\nTitle \nCapt 0 1\nCol 0 2\nCountess 1 0\nDon 0 1\nDr 1 6\nJonkheer 0 1\nLady 1 0\nMajor 0 2\nMaster 0 40\nMiss 182 0\nMlle 2 0\nMme 1 0\nMr 0 517\nMrs 125 0\nMs 1 0\nRev 0 6\nSir 0 1\n\n=====Title vs Survived=====\n Title Survived\n0 Capt 0.000000\n1 Col 0.500000\n2 Countess 1.000000\n3 Don 0.000000\n4 Dr 0.428571\n5 Jonkheer 0.000000\n6 Lady 1.000000\n7 Major 0.500000\n8 Master 0.575000\n9 Miss 0.697802\n10 Mlle 1.000000\n11 Mme 1.000000\n12 Mr 0.156673\n13 Mrs 0.792000\n14 Ms 1.000000\n15 Rev 0.000000\n16 Sir 1.000000\n"
]
],
[
[
"Let's categorize it and check the title impact on survival rate convert the rare titles to `Rare`",
"_____no_output_____"
]
],
[
[
"for dataset in full_data:\n dataset['Title'] = dataset['Title'].replace(['Lady', 'Countess','Capt', 'Col',\\\n 'Don', 'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Rare')\n\n dataset['Title'] = dataset['Title'].replace('Mlle', 'Miss')\n dataset['Title'] = dataset['Title'].replace('Ms', 'Miss')\n dataset['Title'] = dataset['Title'].replace('Mme', 'Mrs')\n\nprint (train[['Title', 'Survived']].groupby(['Title'], as_index=False).mean())\n",
" Title Survived\n0 Master 0.575000\n1 Miss 0.702703\n2 Mr 0.156673\n3 Mrs 0.793651\n4 Rare 0.347826\n"
],
[
"import json\nconfig = {\"features analyzed\": [\"Sex\", \"Pclass\", \"FamilySize\", \"IsAlone\", \"Embarked\", \"Fare\", \"Age\", \"Title\"]}\n\nwith open('config.json', 'w') as outfile:\n json.dump(config, outfile)",
"_____no_output_____"
]
],
[
[
"#### Creating a datmo snapshot to save my work, this helps me save my current work before proceeding onto data cleaning \n```bash\nhome:~/datmo-tutorials/auto-ml$ datmo snapshot create -m \"EDA\"\nCreating a new snapshot\nCreated snapshot with id: 30803662ab49bb1ef67a5d0861eecf91cff1642f\nhome:~/datmo-tutorials/auto-ml$ datmo snapshot ls\n+---------+-------------+-------------------------------------------+-------+---------+-------+\n| id | created at | config | stats | message | label |\n+---------+-------------+-------------------------------------------+-------+---------+-------+\n| 30803662| 2018-05-15 | {u'features analyzed': [u'Sex', | {} | EDA | None |\n| | 23:15:44 | u'Pclass', u'FamilySize', u'IsAlone', | | | |\n| | | u'Embarked', u'Fare', u'Age', u'Title']} | | | |\n+---------+-------------+-------------------------------------------+-------+---------+-------+\n```",
"_____no_output_____"
],
[
"#### 2. Data Cleaning\nNow let's clean our data and map our features into numerical values.",
"_____no_output_____"
]
],
[
[
"train_copy = train.copy()\ntest_copy = test.copy()\nfull_data_copy = [train_copy, test_copy]\n\nfor dataset in full_data_copy:\n # Mapping Sex\n dataset['Sex'] = dataset['Sex'].map( {'female': 0, 'male': 1} ).astype(int)\n \n # Mapping titles\n title_mapping = {\"Mr\": 1, \"Miss\": 2, \"Mrs\": 3, \"Master\": 4, \"Rare\": 5}\n dataset['Title'] = dataset['Title'].map(title_mapping)\n dataset['Title'] = dataset['Title'].fillna(0)\n \n # Mapping Embarked\n dataset['Embarked'] = dataset['Embarked'].map( {'S': 0, 'C': 1, 'Q': 2} ).astype(int)\n \n # Mapping Fare\n dataset.loc[ dataset['Fare'] <= 7.91, 'Fare'] = 0\n dataset.loc[(dataset['Fare'] > 7.91) & (dataset['Fare'] <= 14.454), 'Fare'] = 1\n dataset.loc[(dataset['Fare'] > 14.454) & (dataset['Fare'] <= 31), 'Fare'] = 2\n dataset.loc[ dataset['Fare'] > 31, 'Fare'] = 3\n dataset['Fare'] = dataset['Fare'].astype(int)\n \n # Mapping Age\n dataset.loc[ dataset['Age'] <= 16, 'Age'] = 0\n dataset.loc[(dataset['Age'] > 16) & (dataset['Age'] <= 32), 'Age'] = 1\n dataset.loc[(dataset['Age'] > 32) & (dataset['Age'] <= 48), 'Age'] = 2\n dataset.loc[(dataset['Age'] > 48) & (dataset['Age'] <= 64), 'Age'] = 3\n dataset.loc[ dataset['Age'] > 64, 'Age'] = 4\n",
"_____no_output_____"
],
[
"# Feature Selection\ndrop_elements = ['PassengerId', 'Name', 'Ticket', 'Cabin', 'SibSp',\\\n 'Parch', 'FamilySize']\n\ntrain_copy = train_copy.drop(drop_elements, axis = 1)\ntrain_copy = train_copy.drop(['CategoricalAge', 'CategoricalFare'], axis = 1)\n\ntest_copy = test_copy.drop(drop_elements, axis = 1)\n\nprint (train_copy.head(10))\n\ntrain_copy = train_copy.values\ntest_copy = test_copy.values",
" Survived Pclass Sex Age Fare Embarked IsAlone Title\n0 0 3 1 1 0 0 0 1\n1 1 1 0 2 3 1 0 3\n2 1 3 0 1 1 0 1 2\n3 1 1 0 2 3 0 0 3\n4 0 3 1 2 1 0 1 1\n5 0 3 1 1 1 2 1 1\n6 0 1 1 3 3 0 1 1\n7 0 3 1 0 2 0 0 4\n8 1 3 0 1 1 0 0 3\n9 1 2 0 0 2 1 0 3\n"
],
[
"config = {\"selected features\": [\"Sex\", \"Pclass\", \"Age\", \"Fare\", \"Embarked\", \"Fare\", \"IsAlone\", \"Title\"]}\n\nwith open('config.json', 'w') as outfile:\n json.dump(config, outfile)",
"_____no_output_____"
]
],
[
[
"#### 3. Using Auto-ML to figure out the best algorithm and hyperparameter\n##### Now we have cleaned our data it's time to use auto-ml in order to get the best algorithm for this data\n",
"_____no_output_____"
]
],
[
[
"from tpot import TPOTClassifier\nfrom sklearn.datasets import load_digits\nfrom sklearn.model_selection import train_test_split\n\nX = train_copy[0::, 1::]\ny = train_copy[0::, 0]\n\nX_train, X_test, y_train, y_test = train_test_split(X, y,\n train_size=0.75, test_size=0.25)\n\ntpot = TPOTClassifier(generations=5, population_size=50, verbosity=2)\ntpot.fit(X_train, y_train)\nprint(tpot.score(X_test, y_test))\ntpot.export('tpot_titanic_pipeline.py')",
"Optimization Progress: 33%|███▎ | 100/300 [00:55<01:36, 2.07pipeline/s]"
],
[
"stats = {\"accuracy\": (tpot.score(X_test, y_test))} \n\nwith open('stats.json', 'w') as outfile:\n json.dump(stats, outfile)",
"_____no_output_____"
]
],
[
[
"### Let's again create a datmo snapshot to save my work, this helps me save my current work before changing my feature selection\n\n```bash\nhome:~/datmo-tutorials/auto-ml$ datmo snapshot create -m \"auto-ml-1\"\nCreating a new snapshot\nCreated snapshot with id: adf76fa7d0800cc6eec033d4b00f97536bcb0c20\nhome:~/datmo-tutorials/auto-ml$ datmo snapshot ls\n+---------+-------------+-------------------------------------------+-----------------+---------------+-------+\n| id | created at | config | stats | message | label |\n+---------+-------------+-------------------------------------------+-----------------+---------------+-------+\n| adf76fa7| 2018-05-16 | {u'selected features': [u'Sex', u'Pclass',|{u'accuracy': | auto-ml-1 | None |\n| | 01:24:53 | u'Age', u'Fare', u'Embarked', | 0.8206278} | | |\n| | | u'Fare', u'IsAlone', u'Title']} | | | |\n| 30803662| 2018-05-15 | {u'features analyzed': [u'Sex', | {} | EDA | None |\n| | 23:15:44 | u'Pclass', u'FamilySize', u'IsAlone', | | | |\n| | | u'Embarked', u'Fare', u'Age', u'Title']} | | | |\n+---------+-------------+-------------------------------------------+-----------------+---------------+-------+\n```",
"_____no_output_____"
],
[
"#### Another feature selection\n1. Let's leave `FamilySize` rather than just unsing `IsAlone` \n2. Let's use `Fare_Per_Person` insted of binning `Fare`",
"_____no_output_____"
]
],
[
[
"train_copy = train.copy()\ntest_copy = test.copy()\nfull_data_copy = [train_copy, test_copy]\n\nfor dataset in full_data_copy:\n # Mapping Sex\n dataset['Sex'] = dataset['Sex'].map( {'female': 0, 'male': 1} ).astype(int)\n \n # Mapping titles\n title_mapping = {\"Mr\": 1, \"Miss\": 2, \"Mrs\": 3, \"Master\": 4, \"Rare\": 5}\n dataset['Title'] = dataset['Title'].map(title_mapping)\n dataset['Title'] = dataset['Title'].fillna(0)\n \n # Mapping Embarked\n dataset['Embarked'] = dataset['Embarked'].map( {'S': 0, 'C': 1, 'Q': 2} ).astype(int)\n \n # Mapping Fare\n dataset['FarePerPerson']=dataset['Fare']/(dataset['FamilySize']+1)\n \n # Mapping Age\n dataset.loc[ dataset['Age'] <= 16, 'Age'] = 0\n dataset.loc[(dataset['Age'] > 16) & (dataset['Age'] <= 32), 'Age'] = 1\n dataset.loc[(dataset['Age'] > 32) & (dataset['Age'] <= 48), 'Age'] = 2\n dataset.loc[(dataset['Age'] > 48) & (dataset['Age'] <= 64), 'Age'] = 3\n dataset.loc[ dataset['Age'] > 64, 'Age'] = 4",
"_____no_output_____"
],
[
"# Feature Selection\ndrop_elements = ['PassengerId', 'Name', 'Ticket', 'Cabin', 'SibSp',\\\n 'Parch', 'IsAlone', 'Fare']\n\ntrain_copy = train_copy.drop(drop_elements, axis = 1)\ntrain_copy = train_copy.drop(['CategoricalAge', 'CategoricalFare'], axis = 1)\n\ntest_copy = test_copy.drop(drop_elements, axis = 1)\n\nprint (train_copy.head(10))\n\ntrain_copy = train_copy.values\ntest_copy = test_copy.values",
" Survived Pclass Sex Age Embarked FamilySize Title FarePerPerson\n0 0 3 1 1 0 2 1 2.416667\n1 1 1 0 2 1 2 3 23.761100\n2 1 3 0 1 0 1 2 3.962500\n3 1 1 0 2 0 2 3 17.700000\n4 0 3 1 2 0 1 1 4.025000\n5 0 3 1 1 2 1 1 4.229150\n6 0 1 1 3 0 1 1 25.931250\n7 0 3 1 0 0 5 4 3.512500\n8 1 3 0 1 0 3 3 2.783325\n9 1 2 0 0 1 2 3 10.023600\n"
],
[
"from tpot import TPOTClassifier\nfrom sklearn.datasets import load_digits\nfrom sklearn.model_selection import train_test_split\n\nX = train_copy[0::, 1::]\ny = train_copy[0::, 0]\n\nX_train, X_test, y_train, y_test = train_test_split(X, y,\n train_size=0.75, test_size=0.25)\n\ntpot = TPOTClassifier(generations=5, population_size=50, verbosity=2)\ntpot.fit(X_train, y_train)\nprint(tpot.score(X_test, y_test))\ntpot.export('tpot_titanic_pipeline.py')",
"Optimization Progress: 33%|███▎ | 100/300 [00:36<01:33, 2.14pipeline/s]"
],
[
"config = {\"selected features\": [\"Sex\", \"Pclass\", \"Age\", \"Fare\", \"Embarked\", \"FarePerPerson\", \"FamilySize\", \"Title\"]}\n\nwith open('config.json', 'w') as outfile:\n json.dump(config, outfile)\n\nstats = {\"accuracy\": (tpot.score(X_test, y_test))} \n\nwith open('stats.json', 'w') as outfile:\n json.dump(stats, outfile)",
"_____no_output_____"
]
],
[
[
"### Let's again create a datmo snapshot to save my final work\n\n```bash\nhome:~/datmo-tutorials/auto-ml$ datmo snapshot create -m \"auto-ml-2\"\nCreating a new snapshot\nCreated snapshot with id: 30f8366b7de96d58a7ef8cda266216b01cab4940\nhome:~/datmo-tutorials/auto-ml$ datmo snapshot ls\n+---------+-------------+-------------------------------------------+-----------------+---------------+-------+\n| id | created at | config | stats | message | label |\n+---------+-------------+-------------------------------------------+-----------------+---------------+-------+\n| 30f8366b| 2018-05-16 | {u'selected features': [u'Sex', u'Pclass',|{u'accuracy': | auto-ml-2 | None |\n| | 03:04:06 | u'Age', u'Fare', u'Embarked', u'Title', | 0.8206278} | | |\n| | | u'FarePerPerson', u'FamilySize']} | | | |\n| adf76fa7| 2018-05-16 | {u'selected features': [u'Sex', u'Pclass',|{u'accuracy': | auto-ml-1 | None |\n| | 01:24:53 | u'Age', u'Fare', u'Embarked', | 0.8206278} | | |\n| | | u'Fare', u'IsAlone', u'Title']} | | | |\n| 30803662| 2018-05-15 | {u'features analyzed': [u'Sex', | {} | EDA | None |\n| | 23:15:44 | u'Pclass', u'FamilySize', u'IsAlone', | | | |\n| | | u'Embarked', u'Fare', u'Age', u'Title']} | | | |\n+---------+-------------+-------------------------------------------+-----------------+---------------+-------+\n```",
"_____no_output_____"
],
[
"#### Let's now move to a different snapshot in order to either get the `experimentation.ipynb`, `submission.csv` or `tpot_titanice_pipeline.py` or any other files in that version\n\nWe perform `checkout` command in order to achieve it\n\n```bash\nhome:~/datmo-tutorials/auto-ml$ # Run this command: datmo snapshot checkout --id <snapshot-id>\nhome:~/datmo-tutorials/auto-ml$ datmo snapshot checkout --id 30803662\n```\n",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
d0f01693b510575651438fcdb86fa80fbd1208b4
| 18,731 |
ipynb
|
Jupyter Notebook
|
python/Examples/BandwidthTest2.ipynb
|
JonathanCamargo/Eris
|
34c389f0808c8b47933605ed19d98e62280e56dd
|
[
"MIT"
] | null | null | null |
python/Examples/BandwidthTest2.ipynb
|
JonathanCamargo/Eris
|
34c389f0808c8b47933605ed19d98e62280e56dd
|
[
"MIT"
] | null | null | null |
python/Examples/BandwidthTest2.ipynb
|
JonathanCamargo/Eris
|
34c389f0808c8b47933605ed19d98e62280e56dd
|
[
"MIT"
] | null | null | null | 117.805031 | 15,416 | 0.873045 |
[
[
[
"import rosbag\nimport numpy as np\nimport matplotlib.pyplot as plt\nbag=rosbag.Bag('/home/ossip/test.bag')\n\nN=7\ntopicNames=[]\nfor i in range(0,N):\n topicNames.append('/float'+str(i))\n\nfor topic in topicNames:\n msgs=bag.read_messages(topics=[topic])\n seq=[]\n t=[]\n x=[]\n for msg in msgs:\n seq.append(msg.message.header.seq)\n t.append(msg.message.header.stamp.to_sec())\n x.append(msg.message.data)\n seq=np.array(seq) \n t=np.array(t)\n x=np.array(x)\n t=t[t.argsort()]\n x=x[t.argsort()]\n ideal=np.arange(x.min(),x.max()+1)\n d=np.setdiff1d(x,ideal)\n print('Topic: '+str(topic))\n print('\\tsamples='+str(len(seq)))\n print('\\tmissing='+str(len(d)))\n #print('\\tmissing_indices='+str(d)) \n np.setdiff1d(ideal,seq) \n \n ",
"Topic: /float0\n\tsamples=2360\n\tmissing=0\nTopic: /float1\n\tsamples=2330\n\tmissing=0\nTopic: /float2\n\tsamples=2360\n\tmissing=0\nTopic: /float3\n\tsamples=2360\n\tmissing=0\nTopic: /float4\n\tsamples=2305\n\tmissing=0\nTopic: /float5\n\tsamples=2325\n\tmissing=0\nTopic: /float6\n\tsamples=2330\n\tmissing=0\n"
],
[
"plt.plot(t,x)",
"_____no_output_____"
],
[
"msg",
"_____no_output_____"
],
[
"publishers",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code"
]
] |
d0f0255967573c042e8239259d19b1b2820f8a19
| 15,342 |
ipynb
|
Jupyter Notebook
|
apps/information-retrieval/tf-idf.ipynb
|
guillermozbta/data-science-sup2
|
386c9ec2508245a750174cdf8b759726b92b912f
|
[
"Apache-2.0"
] | 625 |
2015-08-19T10:48:35.000Z
|
2022-03-22T15:59:18.000Z
|
apps/information-retrieval/tf-idf.ipynb
|
guillermozbta/data-science-sup2
|
386c9ec2508245a750174cdf8b759726b92b912f
|
[
"Apache-2.0"
] | 2 |
2015-09-05T15:09:01.000Z
|
2017-12-16T22:52:05.000Z
|
apps/information-retrieval/tf-idf.ipynb
|
guillermozbta/data-science-sup2
|
386c9ec2508245a750174cdf8b759726b92b912f
|
[
"Apache-2.0"
] | 297 |
2015-08-20T23:51:10.000Z
|
2022-03-29T04:15:35.000Z
| 30.500994 | 354 | 0.513623 |
[
[
[
"empty"
]
]
] |
[
"empty"
] |
[
[
"empty"
]
] |
d0f026fb0bbb5df38b7a673d52bfea9e2da15e08
| 59,907 |
ipynb
|
Jupyter Notebook
|
CA-fitting/CA-fitting-06-14.ipynb
|
JGageWright/CHEM274
|
a038e460ed0ce46df12499bc4156a9b0840e5199
|
[
"MIT"
] | null | null | null |
CA-fitting/CA-fitting-06-14.ipynb
|
JGageWright/CHEM274
|
a038e460ed0ce46df12499bc4156a9b0840e5199
|
[
"MIT"
] | 6 |
2021-10-31T23:03:17.000Z
|
2021-12-07T05:56:11.000Z
|
CA-fitting/CA-fitting-06-14.ipynb
|
JGageWright/CHEM274
|
a038e460ed0ce46df12499bc4156a9b0840e5199
|
[
"MIT"
] | 1 |
2021-11-12T22:07:10.000Z
|
2021-11-12T22:07:10.000Z
| 122.259184 | 33,644 | 0.849266 |
[
[
[
"import pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport numpy as np\nimport os\nfrom scipy.optimize import curve_fit\nos.getcwd()",
"_____no_output_____"
],
[
"data = pd.read_csv('data/CA_Fc_GC_MeCN_0V-1.2V_P-06-14/data.csv', sep=',')\ndata",
"_____no_output_____"
],
[
"data.plot('t', 'iw', xlim=(0,2))",
"_____no_output_____"
],
[
"index_max = data['iw'].idxmax()\ntime_max = data.loc[index_max,'t']\nprint(time_max)",
"0.7841666666666667\n"
],
[
"# In E4, near optimal values of Rm and Cm were:\nRm = 10000 #10 kOhm\nCm = 100e-9 #100 nF\npstat_time_constant = Rm*Cm\n\n# From an EIS spectrum of Fc in dry MeCN, Ru and Cdl are approximately:\nRu = 4.04e+02\nCdl = 3.93e-06\ncell_time_constant = Ru*Cdl\n\n#Value of the combined time constant tau\n\nprint(cell_time_constant + pstat_time_constant)",
"0.00258772\n"
],
[
"pot_step_time = time_max # step time start in s\npot_rest_time = data.iloc[-1,-1] # rest time start in s\n\n# For both of these capacitors to charge, we should ignore data before at least 5τ of each:\nfit_start_time = pot_step_time + (5 * (cell_time_constant + pstat_time_constant))\n\n# Fit until 5 ms before the rest step\nfit_times = data[data['t'].between(fit_start_time, pot_rest_time - 0.005)]['t'].to_numpy()\nfit_currents = data[data['t'].between(fit_start_time, pot_rest_time - 0.005)]['iw'].to_numpy()\n\nfit_times_no_offset = fit_times - pot_step_time\n#print(fit_times_no_offset)",
"_____no_output_____"
],
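For reference, the 5τ rule used above is the standard RC-charging argument (background, not something computed elsewhere in this notebook): the capacitive charging current decays exponentially with time constant τ = RC, so after five time constants only about 0.7% of it remains.

```latex
\[
i_c(t) = \frac{\Delta E}{R}\, e^{-t/\tau}, \qquad \tau = RC, \qquad e^{-5} \approx 0.0067
\]
```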
[
"#Defines a function for curve_fit to fit to\n\ndef Empirical_Cottrell(t, a):\n return a / np.sqrt(t)\n\n#Implementing curve_fit to solve for the empirical Cottrell prefactor a\n\nguess_prefactor = 1e-10\nfit_prefactor, cov = curve_fit(Empirical_Cottrell, fit_times_no_offset, fit_currents, guess_prefactor)\nprint('a = {0:.3E}'.format(fit_prefactor[0]))\n\n#Calculating the diffusion constant D based on the fitted prefactor a, and the Cottrell Equation\n\na = fit_prefactor[0]\nn = 1\nF = 96485 #C/mol\nA = np.pi*2.5**2/1000**2 #m^2\nC_bulk = 0.8 #mol*m^-2\nD = (a**2 * np.pi) / (n*F*A*C_bulk)**2 * 100**2 #cm^2/s\nprint('D = {0:.3E}'.format(D) + ' cm^2 s^-1')\n\n#Plotting the chronoamperometry curve with the Cottrell Equation fit\n\nfig, (ax1, ax2) = plt.subplots(1,2, figsize = (15,10))\nax1.scatter(data['t'], data['iw'], label = 'Data', color = 'greenyellow')\nax1.set_ylabel('$i_w$ / A', fontsize = 15)\nax1.set_xlabel('t / s', fontsize = 15)\n#ax.set_xlim(.99, 2.01)\nax1.plot(fit_times, Empirical_Cottrell(fit_times_no_offset,a), color='red', label = 'Cottrell Equation Fit - Forward Step', linewidth=3)\nax1.legend(fontsize = 15)\n\nax2.scatter(data['t'], data['iw'], label = 'Data', color = 'greenyellow')\nax2.set_title('Zoomed-In')\nax2.set_ylabel('$i_w$ / A', fontsize = 15)\nax2.set_xlabel('t / s', fontsize = 15)\nax2.set_xlim(0, 3)\nax2.plot(fit_times, Empirical_Cottrell(fit_times_no_offset,a), color='red', label = 'Cottrell Equation Fit - Forward Step', linewidth=3)\n#ax2.legend(fontsize = 15)",
"a = 2.529E-05\nD = 8.749E-06 cm^2 s^-1\n"
],
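For context, the relation the prefactor fit relies on is the Cottrell equation for a diffusion-limited potential step at a planar electrode; here C* denotes the bulk concentration. Rearranging it gives both the D calculation above and the C* back-calculation used later in this notebook.

```latex
\[
i(t) = \frac{n F A C^{*} \sqrt{D}}{\sqrt{\pi t}} = \frac{a}{\sqrt{t}},
\qquad
D = \frac{a^{2}\,\pi}{\left(n F A C^{*}\right)^{2}},
\qquad
C^{*} = \frac{a}{n F A}\sqrt{\frac{\pi}{D}}
\]
```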
[
"#Integrating under the current vs. time curve to obtain the charge passed.\n\n\nfrom scipy.integrate import trapz\n\ntotal_charge = trapz(fit_currents,fit_times)\nprint('Charge passed is '+'{0:.3E}'.format(total_current) + ' C')\n\n#Using this charge, calcs the moles of Fc oxidized assuming 100% Faradaic efficiency\n\nF = 96485 #C / mol\nmoles_Fc_oxidized = total_charge/F\n\nprint('Moles of Fc Oxidized is '+'{0:.3E}'.format(moles_Fc_oxidized) + ' mol')\n\n#If you assume that the Cottrell equation fit for a bulk [Fc+] concentration of 0.8 mM is good, \n#then you can backcalculate an electrolyzed volume of 0.0129 mL. That assumes that the concentration \n#is constant throughout that volume which we know is not true\n\nvolume = total_charge / (F*C_bulk) * 1000**2\nprint('Vol. of [Fc+] electrolyzed is '+'{0:.3}'.format(volume) + ' mL')",
"Charge passed is 9.993E-04 C\nMoles of Fc Oxidized is 1.036E-08 mol\nVol. of [Fc+] electrolyzed is 0.0129 mL\n"
],
[
"#Assume volume electrolyzed of 20 mL. This is a bad assumption.\n\nvolume = 0.020/1000 #L\nconc = moles_Fc_oxidized / volume *1000\n\nprint('Conc. of [Fc+] is '+'{0:.3E}'.format(conc) + ' mM')",
"Conc. of [Fc+] is 5.179E-01 mM\n"
],
[
"#This is just recalculating the Cottrell equation.. and confirming that the bulk conc. we assumed\n#to be 0.8 mM is indeed 0.8 mM\n\na = -1.706E-05 #From #CA-fitting-06-16\nn = 1\nF = 96485 #C/mol\nA = np.pi*2.5**2/1000**2 #m^2\n\nD = 3.979E-06 #cm^2 s^-1\n\nC_bulk = (-a*100)/(n*F*A) * np.sqrt(np.pi/D)\nprint('Conc. of [Fc+] is '+'{0:.3E}'.format(C_bulk) + ' mM')",
"Conc. of [Fc+] is 8.002E-01 mM\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0f02ebfe9f0fc5e467a14f7aa0ab108f82b4c7c
| 10,298 |
ipynb
|
Jupyter Notebook
|
lessons/ETLPipelines/2_extract_exercise/.ipynb_checkpoints/2_extract_exercise-checkpoint.ipynb
|
rabadzhiyski/Data_Science_Udacity
|
8285515a765b7b5737b55e02714c9b27da4201e7
|
[
"MIT"
] | null | null | null |
lessons/ETLPipelines/2_extract_exercise/.ipynb_checkpoints/2_extract_exercise-checkpoint.ipynb
|
rabadzhiyski/Data_Science_Udacity
|
8285515a765b7b5737b55e02714c9b27da4201e7
|
[
"MIT"
] | null | null | null |
lessons/ETLPipelines/2_extract_exercise/.ipynb_checkpoints/2_extract_exercise-checkpoint.ipynb
|
rabadzhiyski/Data_Science_Udacity
|
8285515a765b7b5737b55e02714c9b27da4201e7
|
[
"MIT"
] | null | null | null | 36.133333 | 330 | 0.611963 |
[
[
[
"# Extract from JSON and XML\n\nYou'll now get practice extracting data from JSON and XML. You'll extract the same population data from the previous exercise, except the data will be in a different format.\n\nBoth JSON and XML are common formats for storing data. XML was established before JSON, and JSON has become more popular over time. They both tend to be used for sending data via web APIs, which you'll learn about later in the lesson.\n\nSometimes, you can obtain the same data in either JSON or XML format. That is the case for this exercise. You'll use the same data except one file is formatted as JSON and the other as XML.\n\nThere is a solution file for these exercises. Go to File->Open and click on 2_extract_exercise_solution.ipynb.",
"_____no_output_____"
],
[
"# Extract JSON and JSON Exercise\n\nFirst, you'll practice extracting data from a JSON file. Run the cell below to print out the first line of the JSON file.",
"_____no_output_____"
]
],
[
[
"### \n# Run the following cell.\n# This cell loads a function that prints the first n lines of\n# a file.\n#\n# Then this function is called on the JSON file to print out\n# the first line of the population_data.json file\n###\n\ndef print_lines(n, file_name):\n f = open(file_name)\n for i in range(n):\n print(f.readline())\n f.close()\n \nprint_lines(1, 'population_data.json')",
"_____no_output_____"
]
],
[
[
"The first \"line\" in the file is actually the entire file. JSON is a compact way of representing data in a dictionary-like format. Luckily, pandas has a method to [read in a json file](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_json.html). \n\nIf you open the link with the documentation, you'll see there is an *orient* option that can handle JSON formatted in different ways:\n```\n'split' : dict like {index -> [index], columns -> [columns], data -> [values]}\n'records' : list like [{column -> value}, ... , {column -> value}]\n'index' : dict like {index -> {column -> value}}\n'columns' : dict like {column -> {index -> value}}\n'values' : just the values array\n```\n\nIn this case, the JSON is formatted with a 'records' orientation, so you'll need to use that value in the read_json() method. You can tell that the format is 'records' by comparing the pattern in the documentation with the pattern in the JSON file.\n\nNext, read in the population_data.json file using pandas.",
"_____no_output_____"
]
],
[
[
"# TODO: Read in the population_data.json file using pandas's \n# read_json method. Don't forget to specific the orient option\n# store the results in df_json\n\nimport pandas as pd\ndf_json = None",
"_____no_output_____"
],
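[
"# One possible completion of the TODO above, added for illustration (it is not the official\n# solution file): it uses the 'records' orientation described in the markdown cell.\ndf_json = pd.read_json('population_data.json', orient='records')\ndf_json.head()",
"_____no_output_____"
],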
[
"# TODO: Use the head method to see the first few rows of the resulting\n# dataframe",
"_____no_output_____"
]
],
[
[
"Notice that this population data is the same as the data from the previous exercise. The column order might have changed, but the data is otherwise the same.",
"_____no_output_____"
],
[
"# Other Ways to Read in JSON\n\nBesides using pandas to read JSON files, you can use the json library. Run the code cell below to see an example of reading in JSON with the json library. Python treats JSON data like a dictionary.",
"_____no_output_____"
]
],
[
[
"import json\n\n# read in the JSON file\nwith open('population_data.json') as f:\n json_data = json.load(f)\n\n# print the first record in the JSON file\nprint(json_data[0])\nprint('\\n')\n\n# show that JSON data is essentially a dictionary\nprint(json_data[0]['Country Name'])\nprint(json_data[0]['Country Code'])",
"_____no_output_____"
]
],
[
[
"# Extract XML\n\nNext, you'll work with the same data except now the data is in xml format. Run the next code cell to see what the first fifteen lines of the data file look like.",
"_____no_output_____"
]
],
[
[
"# Run the code cell to print out the first 15 lines of the xml file\nprint_lines(15, 'population_data.xml')",
"_____no_output_____"
]
],
[
[
"XML looks very similar to HTML. XML is formatted with tags with values inside the tags. XML is not as easy to navigate as JSON. Pandas cannot read in XML directly. One reason is that tag names are user defined. Every XML file might have different formatting. You can imagine why XML has fallen out of favor relative to JSON.",
"_____no_output_____"
],
[
"### How to read and navigate XML\n\nThere is a Python library called BeautifulSoup, which makes reading in and parsing XML data easier. Here is the link to the documentation: [Beautiful Soup Documentation](https://www.crummy.com/software/BeautifulSoup/)\n\nThe find() method will find the first place where an xml element occurs. For example using find('record') will return the first record in the xml file:\n\n```xml\n<record>\n <field name=\"Country or Area\" key=\"ABW\">Aruba</field>\n <field name=\"Item\" key=\"SP.POP.TOTL\">Population, total</field>\n <field name=\"Year\">1960</field>\n <field name=\"Value\">54211</field>\n</record>\n```\n\nThe find_all() method returns all of the matching tags. So find_all('record') would return all of the elements with the `<record>` tag.\n\nRun the code cells below to get a basic idea of how to navigate XML with BeautifulSoup. To navigate through the xml file, you search for a specific tag using the find() method or find_all() method. \n\nBelow these code cells, there is an exercise for wrangling the XML data.",
"_____no_output_____"
]
],
[
[
"# import the BeautifulSoup library\nfrom bs4 import BeautifulSoup\n\n# open the population_data.xml file and load into Beautiful Soup\nwith open(\"population_data.xml\") as fp:\n soup = BeautifulSoup(fp, \"lxml\") # lxml is the Parser type",
"_____no_output_____"
],
[
"# output the first 5 records in the xml file\n# this is an example of how to navigate the XML document with BeautifulSoup\n\ni = 0\n# use the find_all method to get all record tags in the document\nfor record in soup.find_all('record'):\n # use the find_all method to get all fields in each record\n i += 1\n for record in record.find_all('field'):\n print(record['name'], ': ' , record.text)\n print()\n if i == 5:\n break",
"_____no_output_____"
]
],
[
[
"# XML Exercise (Challenge)\n\nCreate a data frame from the xml file. This exercise is somewhat tricky. One solution would be to convert the xml data into dictionaries and then use the dictionaries to create a data frame. \n\nThe dataframe should have the following layout:\n\n| Country or Area | Year | Item | Value |\n|----|----|----|----|\n| Aruba | 1960 | Population, total | 54211 |\n| Aruba | 1961 | Population, total | 55348 |\netc...\n\nTechnically, extracting XML, transforming the results, and putting it into a data frame is a full ETL pipeline. This exercise is jumping ahead in terms of what's to come later in the lesson. But it's a good chance to familiarize yourself with XML. ",
"_____no_output_____"
]
],
[
[
"# TODO: Create a pandas data frame from the XML data.\n# HINT: You can use dictionaries to create pandas data frames.\n# HINT: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.from_dict.html#pandas.DataFrame.from_dict\n# HINT: You can make a dictionary for each column or for each row (See the link above for more information)\n# HINT: Modify the code from the previous code cell",
"_____no_output_____"
]
],
[
[
"# Conclusion\n\nLike CSV, JSON and XML are ways to format data. If everything is formatted correctly, JSON is especially easy to work with. XML is an older standard and a bit trickier to handle.\n\nAs a reminder, there is a solution file for these exercises. You can go to File->Open and then click on 2_extract_exercise.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0f02f90ba04ba3a870e85792a727a426e4e46e7
| 429,162 |
ipynb
|
Jupyter Notebook
|
face_detection/Debug.ipynb
|
SohaibAnwaar/Skin-care
|
2f8837f465855a8c4f32b984f588eb2553958214
|
[
"Apache-2.0"
] | 1 |
2022-03-28T20:36:07.000Z
|
2022-03-28T20:36:07.000Z
|
face_detection/Debug.ipynb
|
SohaibAnwaar/Skin-care
|
2f8837f465855a8c4f32b984f588eb2553958214
|
[
"Apache-2.0"
] | null | null | null |
face_detection/Debug.ipynb
|
SohaibAnwaar/Skin-care
|
2f8837f465855a8c4f32b984f588eb2553958214
|
[
"Apache-2.0"
] | null | null | null | 1,114.706494 | 144,328 | 0.944578 |
[
[
[
"# Main Code",
"_____no_output_____"
]
],
[
[
"import os\nimport time\nimport numpy as np\nimport redis\nfrom IPython.display import clear_output\nfrom PIL import Image\nfrom io import BytesIO\nimport base64\nimport json\nimport matplotlib.pyplot as plt\nfrom face_detection import get_face\nfrom utils import img_to_txt, decode_img, log_error\n\n\n##########################\n#\n# Global Variables\n#\n#\n##########################\n\n# Get Request \nserver = os.environ['face_input_redis_server'] if 'os.environ' in os.environ and len(os.environ['redis_server']) > 1 else 'localhost'\n# connect with redis server as Bob\nr = redis.Redis(host=server, port=6379)\n# Publish and suscribe redis\nreq_p = r.pubsub()\n# subscribe to request Channel\nreq_p.subscribe('new_request')\n\n\n# Forward Request\nout_server = os.environ['face_ouput_redis_server'] if 'os.environ' in os.environ and len(os.environ['redis_server']) > 1 else 'localhost'\nprint(f\"User Server {out_server}\")\n# connect with redis server as Bob\nout_r = redis.Redis(host=out_server, port=6379)\n\n\n\n\n\n\ndef process_request(request ):\n '''\n Do you request processing here\n '''\n im = decode_img(request['image'])\n face = get_face(im)\n plt.imshow(face)\n plt.show()\n return face\n\n\ndef forward_request(id_, face):\n global out_r\n with out_r.pipeline() as pipe:\n\n \n\n image= {\n 'id' : id_\n 'request_time' : str(datetime.today()),\n 'image' : img_to_txt(\"test_images/test.jpeg\"),\n 'status' : 'pending'\n }\n\n # Publishing to the stream for testing\n pipe.publish('new_request', json.dumps(image))\n count+=1\n\n\n pipe.execute()\n print(f\"Request Forward to {ip}\")\n\ndef listen_stream():\n '''\n Listening to the stream. \n \n IF got any request from the stream then process it at the same time.\n '''\n count = 0\n requests =[] \n while 1:\n\n try:\n try:\n # Listening To the stream\n request = str(req_p.get_message()['data'].decode())\n if request is not None :requests.append(request)\n except TypeError as e: log_error(e)\n \n # If got any request from stream then process the function\n if len(requests) > 0:\n req_id = requests.pop(0)\n process_request(json.loads(request) )\n \n count += 1\n \n print(count)\n\n except Exception as e: log_error(e)\n\n \nlisten_stream() ",
"2022-02-12 18:01:50.650467: I tensorflow/core/platform/cpu_feature_guard.cc:145] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance critical operations: SSE4.1 SSE4.2 AVX AVX2 FMA\nTo enable them in non-MKL-DNN operations, rebuild TensorFlow with the appropriate compiler flags.\n2022-02-12 18:01:50.650757: I tensorflow/core/common_runtime/process_util.cc:115] Creating new thread pool with default inter op setting: 8. Tune using inter_op_parallelism_threads for best performance.\n"
],
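[
"#A minimal publisher sketch added for illustration (not from the original notebook) to exercise\n#listen_stream() above. It pushes one request onto the 'new_request' channel in the JSON shape\n#the listener expects, assuming a local redis server and the img_to_txt helper imported in the\n#cell above.\n\nimport json\nfrom datetime import datetime\n\ntest_request = {\n    'id': 1,\n    'request_time': str(datetime.today()),\n    'image': img_to_txt('test_images/test.jpeg'),\n    'status': 'pending'\n}\n\nr.publish('new_request', json.dumps(test_request))",
"_____no_output_____"
],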
[
"from PIL import Image\nimport base64\nimport numpy as np\nfrom io import BytesIO\nimage = np.asarray(Image.open(\"test_images/test.jpeg\").convert(\"RGB\"))\nprint(image.shape)\n\n\nimport base64\nimport numpy as np\n\n\n",
"(400, 400, 3)\n"
],
[
"import matplotlib.pyplot as plt\nplt.imshow(image)\nplt.show()",
"_____no_output_____"
],
[
"\nimport cv2\ndef npImage_to_txt(image):\n '''\n Convert numpy image to base64\n '''\n _, im_arr = cv2.imencode('.jpg', image) # im_arr: image in Numpy one-dim array format.\n im_bytes = im_arr.tobytes()\n im_b64 = base64.b64encode(im_bytes)\n return im_b64.decode()\nnpImage_to_txt(image)",
"_____no_output_____"
],
[
"im_bytes = base64.b64decode(im_b64)\nim_arr = np.frombuffer(im_bytes, dtype=np.uint8) # im_arr is one-dim Numpy array\nimg = cv2.imdecode(im_arr, flags=cv2.IMREAD_COLOR)",
"_____no_output_____"
],
[
"from utils import decode_img\nimage = \"/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCABsAFUDASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD8MrWK3mhVkd1VfuB5Dgn6dOtWbFdcu5/stvAWLMAAqH+dZF1DcTXENrFIVyhZfr1r7d/Yg/4J36r8cPC1v4ruPEUlnLkEZi3KRjuMZHPvXHjMbDCR5paH0GV4Crj6rjHY+UbbwF4vumVV06VwGwQytgCnnwncW+oCzurF1kU4OOFNfopf/wDBJL9pfXboaT4F8faJDatJ81xcqwwPYd69E+DX/BBjxpo122teN/H8OtXwO6FlsCIAfoTk46dR0rz1m7s7anuvJKMJLmlZH5mW/wANfEem2Y1L+zZY0l4CSqWU9vw/AVveGvh74vGqwWmi6fd2N3LgxmFC45/iIIIx9f06V+2Pw2/4JC+A7GSK68bz/wBp3CgFvMgCBT2CgD5QBgdeetewaT/wT3+D/h6NJ7DwjbJKsZQyLCAcZ6cCud43FS1SOtYPLKS1kfhV4g+CXxfTRynxI8G2+u6Yg3Ot7pLxlx1ys8JWQY+uOOleb614T8JS2k1houvapaxxyBv7Du9P+1xbwOsLxBWVwOMOmf8AbPU/0aT/ALIPgZ9LaxbQYtjrtJfnjpjmvDPj9/wSa+EnxB0e6ufD2gQ2molCIpojjBx7joaI4/FRfvrQwlg8urfw5WZ+SvwI/bzv/BGiL8JviN4atfHnhK9g8jUfC/jWwNykYSRsNaO2J4pFB7O4VgcVwHxZ8S2fhPVZtc+B/iDUIPD1zO0thZ3urq91p4b70RyQzpnIweoxuyRmvX/2s/8AgnV+0d8EfE8uoXGjSXuleaga5tIdyrjpkFTjgAdD07dvPtJ/Y4+MfxHW01qC1e3jnV1ulWB5ktmHIb5OcbMErnOT26V2wnhqvvM5JRxVD3VqcJ4V8f8AxL8QT3eoaTqOrSXB8sXN1p0aP5igEIrMQ2Svz/ng5IzRX098WP2Mr/4FazH8PPhh4k0+6axTZrGpPcG3NzcFEcjy2OUC7yBnr170UP2Ld0QpY6x8ffsv/C/UPjJ8ZdG8JW1jJMs0oWUquQg75447c9K/dT9lD4a2PgDwVYeDdE0xHks4VRzGmFV+4JGN3XNfDf7FPgjSP2av2TdN+MVl4Bm13xV4xuvJ0u3s4AZGhfPRiPkXbGTk5+8K9T8S67/wVZ8WRpq/ww8MweH9MEIksdL0y3RpIxjpJI7nc+c5OAM9q8DHVYZjjmpStGJ9Hl2E+oZdeCvKR+mfw98MWkCie6nTeFGVCgD347YNen6Ra6MYkgaaIk/Nhmxz/n3r8OvEvxZ/4LU+Crdrubwt4qmA/wBddQR2z475GK7P9nD/AIKgft06B4lt9E+N3hrUhiVVlnvbXyyy/hkE4wOK76Tw1GN+ZM8ythMTinazR+11tZwuMxuhQfd2HirH9nwSn5gAOwzXivwW/aEn8deE7TWdQ0+aKaSINMWTAAIyOPpiuv8AEHxw8GeGdEl17WtYWCCOIMGYfePQr9a6YV6U48y2PJqYLEU6nJrc7kaRBcgxIFbHG3FaOn+ALi5jWCK1XbnOWWvjrxl/wWP/AGWfg9qkdr4z8ROv2i58qE2ymQjnG5sdAOn4V9KfA79vX9nz4r6VFqfhX4i6bcxTIPLZbgde6nPCn6kV0UvY1bXOfE0MXQV1e50viX9mTwR4ssLmy8V6dHPDPCUdCvQEHoeo69sV8p/Ej9hb4c/s6eCZrT4VeH7aFJLyaVfNBdg8isSSTznI7+tfemi+IPDfii187Q9VguhIpYlZRgD8M5/CvBf24re+0j4Sa3r1hII1s7Pzml8s8bSDgc8dP1Nc+Ow1GlDnRWWY7Fyq+yn+J+cnxFtfB954s/tbW2umN5p0Eu1oYmzId++Q/IDub5QeTwi9KK6Xwlonwy/aG0G28Wa1cz6dcwReSEdlKPFuYqRkjBzvyOeNtFZQtyI9x0rszf2c/hTe6J8CvBehlYYLbQfD1ukrSR8pmNWcficj6GvNvE3/AAUXlX4ot8FfhdHc3WrXFwtpYQ2trFaRPITgGa8uQYolz2CsSO46V9v/AAl8F6ZHoUenWmmRqixgbJPmwMcbuxOPYVwfj79k3w3aazca7c/D/T9XjuXLSW4tEG1ucHAABIPTIPavm4UPfdSSvc+hp4qnyexUrNaHwR8QP+CsnxC8EeI7v4ffFP4YazpN9BgNLpWv2V+77n8tD5Qt41kyedodSVwR1r0X4JftSaV8SPEl14J8Y6VFLqltJiOJrHyZQpwfngY5hbBBxkg/e3c17frX7Od1YMx8J/s7WeS+YxfmBYU7g4MZzg8jGPxrjdS/Y3v9S8ZwfETxH8OfDtv4jtn3Wt5okDwSHJyQxjZUceoZcE/nXTVo0K0LwTi0deDnPDwanNSTPrv9m+y0/wAS6Mtvps7MWwPvbuny4J9sY/Dv1roP2sPhVpEPw4Om3w2LLHvZUXrj+tdF+xb8I7vw/wCH1vNbKtcy7flxjHPPAxiu+/ax+HK6h4bhvra0e5jiUiSMPg/d6D19a7KWDqfUW29T5SvjorOoxT0ufjL+0P8Asd/sq6pfHWfH3iC8sZZnHkxi82hieu1QCzc5PC9c1sfsqfsG6T4E1Sfxz8DP2mdYttOuFK3Wk6zoFw+mtGy4+fcq7zjkHK4qt8bvgfqut/Gy51Xxd8WbzR7QTvHZaNHor2YJIIG+7Db3POcKy9h2rnfg9/wTj/bM8D+M7TxT8Kf22N
Z0meO6VrSS1lvYJki67S4ZkkO3H3tykdR2rmwcJQj70z6rMY05RThC59yfCfwL+1p+zdaxeMvhP8Z7TXYoFLvpMpMttKg5KjD5TjtuJUY4NfW3w++LWn/tU/DHUdJ8U+EbjTri5svs2r6bOmVDyJt3oT95c4IPcEcdq+ZPgF4V/auvLW2tvjh4A06+kS4aO48V+ErlIfttuWO2S8tNiqs23GZYwm4jJBya+vfhz4Tk0PT4JQxIQlSWTazJkFQ2OuBjmuuFStKTi9YnyWOeEpuMlG07n5d/CHSPC3h2fxP8FvHNrbXNx4M8VXlrFLdo+4RyPvVRhhwFC8e/vRXTftOeGdQ8H/tZ/EQaNYpN/aOqw3k+5d3zvCORjGBgY7/doq4zmlY74RjViprqfVnwIngXw7aQSFpZXjUyyMOWbbnnPua9UTRtMucNcRqCyjKgcdK8U+Dut6Vb+FbbVILqQxJEjCQ9WG0Y/DFenWPi2zu4YzZ3RK4yWftU4OvTjGzOHGUZupzxL2q6Fo1tCSIR8o+6Op/OuOu9L0bUNXFlBFHtRh5oCFsseR8x9q0PF+r6lqMTrp8hzHESfc+lcpo37S3wK0fXbL4b6x4+0ew16aRPs2lXN/HHcXDFjuwrkHGemNx9q6ZSoznawUYVp02o3bPpD4P6ELR4QqKxWPnI7ZzXe+ItGs9f0xtNuYwQTuHHQ+vNeb+B/Gem6XOkk1xvXaCzxMpUZ529ev0/IdB6JaeM/DN22xdUj81hkQuGUqvr90g16tCdFUeRnymMp4p4j2kE9DxH4kfsfeA/iBBIupeHbS4jZyXjuIw4J9RuBIP0rlPC37BHhDw9cqdPgv7eIEny4tQk2r9AScV7H458bjw94pe2imZ4NiMCvTlRWn4b+IcGpFB5nyZIJbr1rjWHws6jTR6kcxzSOH0lo0N+GXwh8OeDLAWtpYs0RUA/aH3liONxz1/l7V0mo6fY2EJFvbJgqcj3q0dUs5IvME42AAgj0zzWJreoi583yZSqF9quDkHt/SuutToUqXunjxqYrEV7zPkv4g/Cjw749+PXjB7y/s4ZbVrLPnzohIeDfgZGSASfzoqD40/BLwr8U/jXr/iKHxKtjcwx2trdxSM0ZcpGSr5zhgQ2M+qkUV4MqzUrWPs6CpqjHXofL37E/wC1h4Z+J/wP0XUor8Syxabai8MLbx5m0R7M8YOV3cjoR9a+nNF8Ss1k2p30K20RcqS0g2sF4z+n61/Pr+xt+1Nq/wAJdc03T59ZlttLt7pJbpFkO13HTIz8xIxX6Z+Nf24/EniL9kDVfEXg/T31bWLmPEIhcJFYqwPzOfTYARjvXHWws8PiXd6Nno4evRx2FU4dtT7Yu/2i/hFoVjMb7xfpxMCFpGN2vXA4+mDX5Vftj/t7/BbTP2sojpngfSvEmkC6H9ptMo8yCVMbJI3z98DGCOgwK+b/AIcaD8Wfjjd3mo6r4wurDTzn7Zqd1dMsXzHP446D2FS2/wCwYdY19rqP4r6NKJZCys0xaYknqqtjP4GvWhHBygk5ak4ejmMfeoQvqffWi/8ABWcah8NLu+0L4lrZQ6eqwRz3DebcxoV4CxuTuYDjd7Zre/4JsQ61+2F+0+fFmiftKePdN0/SttxrAuL6RkvQRxGoZtiZ2nIC18I6r/wTV+JU+hx3Fl43t47drhWcHTpW34/2kPGcepr6h/YTl8RfsgQaoNFnifU9QnhEU2oTtHb24RGXJBwWzuLdfT0qIU8NTlfmZ2V6eZexcfZJX6n7W694S0iDThZz+ZLIYtsMk0u5yoPUt3P/AOqvKvEev3/w5vZFmc+RIcxzH7i+xrwbXf8AgpXd6R4BuR4rggv7ywEX2a6s72NmnIjUsEVWHHOPrgfxVrfs3/tUeEv2tvBN1qVrqKTx+cyS2jkebZ7SQyycnJPBVhgYI4p4isuW9NnzdDCV6Mn7ZH0N4P8AiVHqNt9tt7wSNJtRdsh2Yz2B967HVvGlhpFkZNQ1FIiqGVzxtAVdxz74IxXgmm30Pw50Ke41h0tYYAzIoYNtwxwPfPX8a+Yv+CjH7feleA/gxP4QsNXktNZ12wlaO5t5gDBCp5JP94hePY4xXNRxVSq+VhUwUJvngtD6r/4Jq6tb/H/R/iR8WviN4RW9F54+ns9IW/jYmK1to1jAHPRm3t/wKit//gmp8Wfh3b/sVeA9V1TxNpVjqer6ONR1SKC4RN0srMdxDEkEqBweeM9xRXZHH5ZFWlUin6ni1cWoVHFS0XmfyZeE0u7HX7XT41X7UswKJIOFYZ65OK++f2QPi5q1/wCB9U+GX2CPVob+NIWDOsKJgkNsBPz85B/SvlD9qj4X+Hfh74xHiHwhrwuIbyUmaEx7DEwwOP51g/DT4qa3oOqW7S6u8EKTqWkUgbRnsfet8RQ/tDCe0jpodHC2b0KMuWa3P3N+Bv7LnwU1P4UReHY/DGni2uoRHPbpDnY/OSTkjOaqat+yT4Y8FWsmgfEHwFY6/wCG3JWC4gsx5lsM8DcuGX8DXmv/AAT7/a28M+JfCkPhyK5kjRVwbmR9xll5IUAY/Ova/jp8QdZX4eXN1oGtXFhexIz27KD8zYJ28kg84r5aDqYeryuNz9HoYqq5c1OXus53TP2OP2TxbCTSp/EK6eAA2nWni25WIAnJyhbK85716t4O/Yw/Y3u9MFp4T+BFnqd4sOFuL29uLtQT3YySHcefw6dq/KvUf2+f2nrT4l3Xh/StQjvgZ/KMJs13Kf7x24z0r9Hv2DPjF8ffEPhuD/haF7BBE9sksTWsYjL57bec+nFey5qEE5x3FjcfXq037OWx0uqf8EpP2XPDdhqPiKXwdCurXdqwkmE0iQ24YE4jUEBeOO/r1wa+ef8Agl18KdL/AGZviP8AEbxTf6nJEhunsfs7SHyXiErOHUHOT5ZUEj0r7m+Mvj+Lwx8Ori/1ENEbiMx/KxYruGQxB+tflh8Xf2lfG3wt8d3+mtcwxpfTNKtyDsjIxgCRWJxk8DGPWudqU5+51PKw1VV6T9ueg/8ABQr/AIKW3llND8O/CWq3duGmWWC+t0AV4yWzDyT85GCCeOelfnv8T/jne/FTW7ZvF2u3N4lvbS20dnJJ5kjSykkKcYBwXwcY6VU/ah+MUPjG7mmj02IXc7KiqrrIUYYB5zgEY4OOBxz1r3D/AIJZf8E0dZ/ayluPif4u8QnRPD1obq2ivry3JVpyAUeFQCCQ2QWJwSD0rsrww+EwjqVXZtHyOb5msPTnTpvQ3rP44+PvAvhLStN+D2o3uvaWYisfnRSb7RVVAsTeWpHHzfgBRX318H/+CLH7Ovwy8KjSNR+NHjXX5ppDJLNaXyWEUbdwsaA5B45JOcZ4zgFfm86VGU29Hd318z8+eKcne5+C/iHxn4x+J0n27xhrk1480qgbolQBsqCflAzx9Kr+OfAHiH4X+K5/D2s2c6hJv3PmJsMowCCAc+tX9I060f4bDVzH+9DyD7xx94DPrnmvtv8Abo/Zw+Gc+i3OuyWt0bkadaTQOZwfIZ7VGbYSucFsnknrX6vUx
Kw2IVNLRn1OT5bHFUZ1E7OJ80/AD9pPxD8PWkkt9WmimSMx2oUBQinqMDHOc8179oX7afja/wDBS+GL7xLd3bAbYI7ibcsW5yzM3cnHTnjPevhLSLiVLR7hXO9W4Ofeu58P+IdW0rWIr21uTvEIIDcjpivReBoTfNY9LD5niaS5b7H0l8CY9L0/4t2fifVJ/Mjguo7l5iPNW4EjMQkgGDj6Yr9V/wBnX9q34b6N4W0mLXfDptVjLixFrCsixrtMh+bHQnceem5RnqR+Gfgjxx4j8K+KoL3R74o4XeQxJDY34BGenzH8hX0f4W+OHxJ0b4XG60nX2t5FsIAzIM79kkkfIJI+ZcBsYz7ZOeXE4SOh6WGzKdSEoy6n6B/tm/tw6LoPgvU9X1fUIEtoZbW2mSKcN5M8gdiHXghV2hsZywyAQa/Jr4q/tba146g1vSdVuUuJb/UW+y+TbpJGHU7VUMcs0Z2lhggg9T2rlPip8c/iV4gmuxrevG6h15rk6ja3CB43ZpThwDzuUqCpJODnqCRXMeDgfF3jm3utX2+bqt60c728SRiMMefLVQFU4wOnYd8kqhh6VOPM0clXGVak/Zw0O4+HP7Nviv4teFfFPxMtZWufD3h3QZdQ1u++y+Rg/aIbaKL5gRvklnVhg/dDd+R+mD/tvfstfsv/APBOnwTDpHiqNr3xH4LjEHhTS5Fku0umj2T/AC8iBVl3nccNnnnNemeLf2d/hZ8M/wDghP4rsvDOg4Gr+HLue+lmYF3ltxJJG+QBz5iBznjcTgAYA/Ca4vri7vlilb5JrUM6DpkgHp+PTpXl5hl9POEoVHaMWfO5zhVKSjfc++P2Zv8AgqT+0l8KfBUuk+D9YGtadcT77dPEdw8klqAWGxJCwLj1z0IOOMUV8XxeIdU8P6ba6ZpVx5UaRZJGSWJOeaK+frZBhlVaR81PLoqbXMf/2Q==\",\nimage = decode_img(bytes(image[0], encoding=\"utf-8\"))\nplt.imshow(image)\nplt.show()",
"_____no_output_____"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0f036afbccc65a29b0f74527d672a23ee5f11a0
| 1,314 |
ipynb
|
Jupyter Notebook
|
Linkedin/NLP/CH03/.ipynb_checkpoints/CountVectorizer-checkpoint.ipynb
|
Dhaneshgupta1027/Python
|
12193d689cc49d3198ea6fee3f7f7d37b8e59175
|
[
"MIT"
] | 37 |
2019-04-03T07:19:57.000Z
|
2022-01-09T06:18:41.000Z
|
Linkedin/NLP/CH03/.ipynb_checkpoints/CountVectorizer-checkpoint.ipynb
|
Dhaneshgupta1027/Python
|
12193d689cc49d3198ea6fee3f7f7d37b8e59175
|
[
"MIT"
] | 16 |
2020-08-11T08:09:42.000Z
|
2021-10-30T17:40:48.000Z
|
Linkedin/NLP/CH03/.ipynb_checkpoints/CountVectorizer-checkpoint.ipynb
|
Dhaneshgupta1027/Python
|
12193d689cc49d3198ea6fee3f7f7d37b8e59175
|
[
"MIT"
] | 130 |
2019-10-02T14:40:20.000Z
|
2022-01-26T17:38:26.000Z
| 22.655172 | 144 | 0.545662 |
[
[
[
"# Vectorizing\n<ul>\n <li>\n The process that we use to convert text to a form that Python and a machine learning model can understand is called vectorizing\n </li>\n <li>\n Vectorization is used to speed up the Python code without using loop\n </li>\n</ul>\n___\n\n- ### Count Vectorization\nCreates a document-term matrix where the entry of each cell will be a count of the number of times that word occurred in that document.",
"_____no_output_____"
]
],
[
[
"import nltk\nimport re\nimport string\nimport pandas as pd\npd.set_options('.max_')",
"_____no_output_____"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
]
] |
d0f03d9adb1f1a982ad46d6b838cb0a2d2af752b
| 10,500 |
ipynb
|
Jupyter Notebook
|
word2vec_glove_fasttext/glove-action.ipynb
|
ZhouJianyao/NLP
|
9a8c6826f992f3897774b5fcab0902e0f8d828d6
|
[
"Apache-2.0"
] | 1 |
2019-10-03T02:41:12.000Z
|
2019-10-03T02:41:12.000Z
|
word2vec_glove_fasttext/glove-action.ipynb
|
ZhouJianyao/NLP
|
9a8c6826f992f3897774b5fcab0902e0f8d828d6
|
[
"Apache-2.0"
] | null | null | null |
word2vec_glove_fasttext/glove-action.ipynb
|
ZhouJianyao/NLP
|
9a8c6826f992f3897774b5fcab0902e0f8d828d6
|
[
"Apache-2.0"
] | null | null | null | 22.058824 | 256 | 0.505333 |
[
[
[
"from mxnet import nd\nfrom mxnet.contrib import text",
"_____no_output_____"
],
[
"glove_vec = text.embedding.get_pretrained_file_names(\"glove\")",
"_____no_output_____"
],
[
"print(glove_vec)",
"['glove.42B.300d.txt', 'glove.6B.50d.txt', 'glove.6B.100d.txt', 'glove.6B.200d.txt', 'glove.6B.300d.txt', 'glove.840B.300d.txt', 'glove.twitter.27B.25d.txt', 'glove.twitter.27B.50d.txt', 'glove.twitter.27B.100d.txt', 'glove.twitter.27B.200d.txt']\n"
],
[
"glove_6b50d = text.embedding.create('glove', pretrained_file_name=\"glove.6B.50d.txt\")",
"Downloading /Users/zhoujianyao/.mxnet/embeddings/glove/glove.6B.zip from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/embeddings/glove/glove.6B.zip...\n"
],
[
"word_size = len(glove_6b50d)\nprint(word_size)",
"400001\n"
],
[
"#词的索引\nindex = glove_6b50d.token_to_idx['happy']\nprint(index)",
"1752\n"
],
[
"#索引到词\nword = glove_6b50d.idx_to_token[1752]\nprint(word)",
"happy\n"
],
[
"#词向量\nprint(glove_6b50d.idx_to_vec[1752])",
"\n[ 0.092086 0.25709999 -0.58692998 -0.37029001 1.08280003 -0.55466002\n -0.78141999 0.58696002 -0.58714002 0.46318001 -0.11267 0.2606\n -0.26927999 -0.072466 1.24699998 0.30570999 0.56730998 0.30509001\n -0.050312 -0.64442998 -0.54513001 0.86429 0.20914 0.56334001\n 1.12279999 -1.05159998 -0.78105003 0.29655999 0.72610003 -0.61391997\n 2.4224999 1.01419997 -0.17753001 0.4147 -0.12966 -0.47064\n 0.38069999 0.16309001 -0.32300001 -0.77898997 -0.42473 -0.30825999\n -0.42242 0.055069 0.38266999 0.037415 -0.43020001 -0.39442\n 0.10511 0.87286001]\n<NDArray 50 @cpu(0)>\n"
]
],
[
[
"# Glove应用",
"_____no_output_____"
]
],
[
[
"#余玄相似度\ndef cos_sim(x, y):\n return nd.dot(x,y)/(x.norm() * y.norm())",
"_____no_output_____"
],
[
"a = nd.array([4,5])\nb = nd.array([400,500])\nprint(cos_sim(a,b))",
"\n[ 1.]\n<NDArray 1 @cpu(0)>\n"
],
[
"#求近义词\ndef norm_vecs_by_row(x):\n # 分母中添加的 1e-10 是为了数值稳定性。\n return x / (nd.sum(x * x, axis=1) + 1e-10).sqrt().reshape((-1, 1))\n\ndef get_knn(token_embedding, k, word):\n word_vec = token_embedding.get_vecs_by_tokens([word]).reshape((-1, 1))\n vocab_vecs = norm_vecs_by_row(token_embedding.idx_to_vec)\n dot_prod = nd.dot(vocab_vecs, word_vec)\n indices = nd.topk(dot_prod.reshape((len(token_embedding), )), k=k+1,\n ret_typ='indices')\n indices = [int(i.asscalar()) for i in indices]\n # 除去输入词。\n return token_embedding.to_tokens(indices[1:])",
"_____no_output_____"
],
[
"sim_list = get_knn(glove_6b50d, 10, 'baby')\nprint(sim_list)",
"['babies', 'boy', 'girl', 'newborn', 'pregnant', 'mom', 'child', 'toddler', 'mother', 'cat']\n"
],
[
"sim_val = cos_sim(glove_6b50d.get_vecs_by_tokens('baby'), glove_6b50d.get_vecs_by_tokens('babies'))\nprint(sim_val)",
"\n[ 0.83871293]\n<NDArray 1 @cpu(0)>\n"
],
[
"print(get_knn(glove_6b50d, 10, 'computer'))",
"['computers', 'software', 'technology', 'electronic', 'internet', 'computing', 'devices', 'digital', 'applications', 'pc']\n"
],
[
"print(get_knn(glove_6b50d, 10, 'run'))",
"['running', 'runs', 'went', 'start', 'ran', 'out', 'third', 'home', 'off', 'got']\n"
],
[
"print(get_knn(glove_6b50d, 10, 'love'))",
"['dream', 'life', 'dreams', 'loves', 'me', 'my', 'mind', 'loving', 'wonder', 'soul']\n"
],
[
"#求类比词\n#vec(c)+vec(b)−vec(a) \ndef get_top_k_by_analogy(token_embedding, k, word1, word2, word3):\n word_vecs = token_embedding.get_vecs_by_tokens([word1, word2, word3])\n word_diff = (word_vecs[1] - word_vecs[0] + word_vecs[2]).reshape((-1, 1))\n vocab_vecs = norm_vecs_by_row(token_embedding.idx_to_vec)\n dot_prod = nd.dot(vocab_vecs, word_diff)\n indices = nd.topk(dot_prod.reshape((len(token_embedding), )), k=k,\n ret_typ='indices')\n indices = [int(i.asscalar()) for i in indices]\n return token_embedding.to_tokens(indices)",
"_____no_output_____"
],
[
"#验证vec(son)+vec(woman)-vec(man) 与 vec(daughter) 两个向量之间的余弦相似度\ndef cos_sim_word_analogy(token_embedding, word1, word2, word3, word4):\n words = [word1, word2, word3, word4]\n vecs = token_embedding.get_vecs_by_tokens(words)\n return cos_sim(vecs[1] - vecs[0] + vecs[2], vecs[3])",
"_____no_output_____"
],
[
"word_list = get_top_k_by_analogy(glove_6b50d,1, 'man', 'woman', 'son')",
"_____no_output_____"
],
[
"print(word_list)",
"['daughter']\n"
],
[
"word_list = get_top_k_by_analogy(glove_6b50d, 1, 'man', 'son', 'woman')\nprint(word_list)",
"['daughter']\n"
],
[
"sim_val = cos_sim_word_analogy(glove_6b50d, 'man', 'woman', 'son', 'daughter')\nprint(sim_val)",
"\n[ 0.96583432]\n<NDArray 1 @cpu(0)>\n"
],
[
"word_list = get_top_k_by_analogy(glove_6b50d, 1, 'beijing', 'china', 'tokyo')",
"_____no_output_____"
],
[
"print(word_list)",
"['japan']\n"
],
[
"word_list = get_top_k_by_analogy(glove_6b50d, 1, 'bad', 'worst', 'big')",
"_____no_output_____"
],
[
"print(word_list)",
"['biggest']\n"
]
]
] |
[
"code",
"markdown",
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0f04daaaaf20841d593ad8351e74f4a2f78b2d1
| 11,531 |
ipynb
|
Jupyter Notebook
|
3_Dictionary.ipynb
|
dheerajjoshim/machinelearningcourse
|
61c798567ea5754c9e15153b60ce58beca21cac7
|
[
"Unlicense"
] | null | null | null |
3_Dictionary.ipynb
|
dheerajjoshim/machinelearningcourse
|
61c798567ea5754c9e15153b60ce58beca21cac7
|
[
"Unlicense"
] | null | null | null |
3_Dictionary.ipynb
|
dheerajjoshim/machinelearningcourse
|
61c798567ea5754c9e15153b60ce58beca21cac7
|
[
"Unlicense"
] | null | null | null | 22.698819 | 942 | 0.490764 |
[
[
[
"# Dictionaries\n \n### # Dictionary in Python is an unordered collection of data values. \n### # Dictionary holds key:value pair. \n### # Each key-value pair in a Dictionary is separated by a colon whereas each key is separated by a ‘comma’.\n### # Keys of a Dictionary must be unique.",
"_____no_output_____"
]
],
[
[
"Dictionary = {1:'Nokia',2:'EDP',3:'Nokia',4:'Nokia', 'Nokia':1}\nprint(Dictionary)\n\nDictionary = {1:'Nokia', 2:'EDP',3:'Nokia', 1:'ML', 2:'Pritish'}\nprint(Dictionary)",
"{1: 'Nokia', 2: 'EDP', 3: 'Nokia', 4: 'Nokia', 'Nokia': 1}\n{1: 'ML', 2: 'Pritish', 3: 'Nokia'}\n"
]
],
[
[
"### Dictionary keys can be of different data types:",
"_____no_output_____"
]
],
[
[
"Dictionary = {'name':'Pritish',1:[2,3,4]} \nprint(Dictionary)",
"{'name': 'Pritish', 1: [2, 3, 4]}\n"
]
],
[
[
"### Accessing a particular element in a dictionary via the key:",
"_____no_output_____"
]
],
[
[
"Dictionary = {1:'Nokia',2:'EDP',3:'Machine Learning'} \nprint(Dictionary[1]) ",
"Nokia\n"
]
],
[
[
"### Length of a dictionary:\n### # len(dictionary_name)",
"_____no_output_____"
]
],
[
[
"Dictionary = {1:'Nokia',2:'EDP',3:'Machine Learning'} \nprint(len(Dictionary)) ",
"3\n"
]
],
[
[
"### Keys of the dictionary:\n### # dictionary_name.keys()",
"_____no_output_____"
]
],
[
[
"Dictionary = {1:'Nokia',2:'EDP',3:'Machine Learning'} \nprint(Dictionary.keys())",
"dict_keys([1, 2, 3])\n"
]
],
[
[
"### Values corresponding to the keys of a dictionary:\n### # dictionary_name.values()",
"_____no_output_____"
]
],
[
[
"Dictionary = {1:'Nokia',2:'EDP',3:'Machine Learning'} \nprint(Dictionary.values())",
"dict_values(['Nokia', 'EDP', 'Machine Learning'])\n"
]
],
[
[
"### Check for all the items in the dictionary:\n### # dictionary_name.items()",
"_____no_output_____"
]
],
[
[
"Dictionary = {1:'Nokia',2:'ML'} # item is the combination of the key, value pair\nprint(Dictionary.items())",
"dict_items([(1, 'Nokia'), (2, 'ML')])\n"
]
],
[
[
"### Getting the value of a particular key element in dictionary\n### # dictionary_name.get(element_index_key)",
"_____no_output_____"
]
],
[
[
"Dictionary = {1:'Nokia',2:'ML'}\nprint(Dictionary.get(1)) \nprint(Dictionary.get(2))",
"Nokia\nML\n"
]
],
[
[
"### Updating a value in the dictionary:",
"_____no_output_____"
]
],
[
[
"Dictionary = {1:'Nokia',2:'EDP',3:'Project Management'}\nDictionary.update( {3:'ML',1:'NOKIA'} ) # Can update multiple keys at a single time\nprint(Dictionary)",
"{1: 'NOKIA', 2: 'EDP', 3: 'ML'}\n"
]
],
[
[
"### Updating a value in the dictionary without using built-in update:",
"_____no_output_____"
]
],
[
[
"edp_subjects = {\"ML\":93, \"Project Management\":86, \"Cloud Ambessador\":90}\nedp_subjects[\"ML\"]=95 \nprint(edp_subjects[\"ML\"])\nprint(edp_subjects)",
"95\n{'ML': 95, 'Project Management': 86, 'Cloud Ambessador': 90}\n"
]
],
[
[
"### Pop an element from the dictionary:\n### # dictionary_name.pop(key)",
"_____no_output_____"
]
],
[
[
"Dictionary = {1:'Nokia',2:'EDP',3:'ML'}\nprint(Dictionary.pop(2)) # pop mandatorily needs a value as an argument to be passed \nprint(Dictionary)",
"EDP\n{1: 'Nokia', 3: 'ML'}\n"
]
],
[
[
"### Clearing the dictionary elements, deleting the dictionary:\n### # dictionary_name.clear()\n### # del dictionary_name",
"_____no_output_____"
]
],
[
[
"students = {\"Eric\":14, \"Bob\":12, \"Alice\":26}\nstudents.clear()\nprint(students)\ndel students",
"{}\n"
]
],
[
[
"### Delete an element in the dictionary:\n### # del dictionary_name[element_index]",
"_____no_output_____"
]
],
[
[
"students = {\"Eric\":14, \"Bob\":12, \"Alice\":26}\ndel students[\"Bob\"]\nprint(students)\ndel students",
"{'Eric': 14, 'Alice': 26}\n"
]
],
[
[
"### Updating a dictionary by adding one more dictionary to an existing dictionary:\n### # dictionary_name1.update(dictionary_name2)",
"_____no_output_____"
]
],
[
[
"students_1 = {\"Eric\":14, \"Bob\":12, \"Alice\":26}\nstudents_2 = {1:'John',2:'Bob',3:'James'}\nstudents_1.update(students_2)\nprint(students_1)",
"{'Eric': 14, 'Bob': 12, 'Alice': 26, 1: 'John', 2: 'Bob', 3: 'James'}\n"
]
],
[
[
"### Ordered Dictionary\n### # usage of OrderedDict\n \n#### OrderedDict preserves the order in which the keys are inserted. A regular dict doesn’t track the \n#### insertion order, and iterating it gives the values in an arbitrary order. By contrast, the order the \n#### items are inserted is remembered by OrderedDict.",
"_____no_output_____"
]
],
[
[
"from collections import OrderedDict\n \nprint(\"This is a normal Dictionary: \")\nd = {}\nd['a'] = 1\nd['b'] = 2\nd['c'] = 3\nd['d'] = 4\nd['e'] = 5\nd['f'] = 6\n \nfor key, value in d.items():\n print(key, value)\n\nprint(\"\\nThis is an Ordered Dictionary: \")\nod = OrderedDict()\nod['a'] = 1\nod['b'] = 2\nod['c'] = 3\nod['d'] = 4\nod['e'] = 5\nod['f'] = 6\n\nfor key, value in od.items():\n print(key, value)\n",
"This is a normal Dictionary: \na 1\nb 2\nc 3\nd 4\ne 5\nf 6\n\nThis is an Ordered Dictionary: \na 1\nb 2\nc 3\nd 4\ne 5\nf 6\n"
]
],
[
[
"### Dictionary built-in function",
"_____no_output_____"
]
],
[
[
"Dictionary = {1:'Nokia',2:'EDP',3:'Machine Learning'} \nprint(dir(Dictionary[1]))",
"['__add__', '__class__', '__contains__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__getnewargs__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__iter__', '__le__', '__len__', '__lt__', '__mod__', '__mul__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__rmod__', '__rmul__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', 'capitalize', 'casefold', 'center', 'count', 'encode', 'endswith', 'expandtabs', 'find', 'format', 'format_map', 'index', 'isalnum', 'isalpha', 'isascii', 'isdecimal', 'isdigit', 'isidentifier', 'islower', 'isnumeric', 'isprintable', 'isspace', 'istitle', 'isupper', 'join', 'ljust', 'lower', 'lstrip', 'maketrans', 'partition', 'replace', 'rfind', 'rindex', 'rjust', 'rpartition', 'rsplit', 'rstrip', 'split', 'splitlines', 'startswith', 'strip', 'swapcase', 'title', 'translate', 'upper', 'zfill']\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0f06369ba53457835f8899da0d7254bccbe1dcb
| 30,945 |
ipynb
|
Jupyter Notebook
|
python1.ipynb
|
mat-esp-2016/python-1-carlosjessicathais2
|
d54063bf03d54733038e9b3abc8bef557acba520
|
[
"CC-BY-4.0"
] | null | null | null |
python1.ipynb
|
mat-esp-2016/python-1-carlosjessicathais2
|
d54063bf03d54733038e9b3abc8bef557acba520
|
[
"CC-BY-4.0"
] | null | null | null |
python1.ipynb
|
mat-esp-2016/python-1-carlosjessicathais2
|
d54063bf03d54733038e9b3abc8bef557acba520
|
[
"CC-BY-4.0"
] | null | null | null | 131.122881 | 26,284 | 0.880918 |
[
[
[
"# Importamos o numpy",
"_____no_output_____"
]
],
[
[
"import numpy as np",
"_____no_output_____"
]
],
[
[
"# Usamos o numpy para abrir os dados do .txt",
"_____no_output_____"
]
],
[
[
"dados= np.loadtxt('dados/brazil-TAVG-Trend.txt', comments='%')",
"_____no_output_____"
],
[
"print(dados)",
"[[ 1.83200000e+03 1.00000000e+00 -5.75000000e-01 ..., nan\n nan nan]\n [ 1.83200000e+03 2.00000000e+00 -1.00500000e+00 ..., nan\n nan nan]\n [ 1.83200000e+03 3.00000000e+00 -7.93000000e-01 ..., nan\n nan nan]\n ..., \n [ 2.01300000e+03 7.00000000e+00 7.72000000e-01 ..., nan\n nan nan]\n [ 2.01300000e+03 8.00000000e+00 1.86000000e-01 ..., nan\n nan nan]\n [ 2.01300000e+03 9.00000000e+00 nan ..., nan\n nan nan]]\n"
]
],
[
[
"# Faremos a média das anomalias anuais",
"_____no_output_____"
]
],
[
[
"ano = dados[:,0]\nmes = dados[:,1]\nanomalia_anual = dados[:,4]",
"_____no_output_____"
],
[
"media=np.nanmean(anomalia_anual)\nprint(media)",
"-0.229509107894\n"
]
],
[
[
"# Faremos o desvio padrão das anomalias anuais",
"_____no_output_____"
]
],
[
[
"desvpad=np.nanstd(anomalia_anual)\nprint(desvpad)",
"0.496111048439\n"
]
],
[
[
"# Plotaremos um gráfico a partir dos dados de Anomalia de Temperatura Média Anual X Tempo",
"_____no_output_____"
]
],
[
[
"ano_dec= ano+(mes -1)/12\nmedia_tem= np.nanmean(dados[:,8])",
"_____no_output_____"
],
[
"%matplotlib inline\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"plt.plot(ano_dec, anomalia_anual)\nplt.xlabel('Tempo (em anos)')\nplt.ylabel('Anomalia de Temperatura')\nplt.title('Brasil')\nplt.savefig('Brasil.pdf')",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0f0765df6c4b675cbda06db51ddaf0a91989da3
| 273,592 |
ipynb
|
Jupyter Notebook
|
Open Parking Stockholm.ipynb
|
salgo60/open-data-examples
|
05a16a92c53117ff23a330a3fa5914a33b19ff6a
|
[
"MIT"
] | 5 |
2019-05-30T13:10:32.000Z
|
2021-06-30T06:04:29.000Z
|
Open Parking Stockholm.ipynb
|
salgo60/open-data-examples
|
05a16a92c53117ff23a330a3fa5914a33b19ff6a
|
[
"MIT"
] | null | null | null |
Open Parking Stockholm.ipynb
|
salgo60/open-data-examples
|
05a16a92c53117ff23a330a3fa5914a33b19ff6a
|
[
"MIT"
] | null | null | null | 34.631899 | 137 | 0.521565 |
[
[
[
"## Open Parking Stockholm \nhttps://openstreetgs.stockholm.se/Home/Parking\n",
"_____no_output_____"
]
],
[
[
"import urllib3 \nimport geopandas as gpd\nhttp = urllib3.PoolManager()\nurlbase = \"https://openparking.stockholm.se/LTF-Tolken/v1/servicedagar/\"\nurl = urlbase + \"weekday/m%C3%A5ndag?outputFormat=json&apiKey=cb5d9a39-f208-459a-b8d5-ccd6bf712fe2\"\ndf = gpd.read_file(url)\ndf.head()\n",
"_____no_output_____"
],
[
"df.info()",
"<class 'geopandas.geodataframe.GeoDataFrame'>\nRangeIndex: 2509 entries, 0 to 2508\nData columns (total 25 columns):\nid 2509 non-null object\nFID 2509 non-null int64\nFEATURE_OBJECT_ID 2509 non-null int64\nFEATURE_VERSION_ID 2509 non-null int64\nEXTENT_NO 2509 non-null int64\nVALID_FROM 2509 non-null object\nSTART_TIME 2509 non-null int64\nEND_TIME 2509 non-null int64\nSTART_WEEKDAY 2509 non-null object\nSTART_MONTH 1480 non-null float64\nEND_MONTH 1480 non-null float64\nSTART_DAY 1480 non-null float64\nEND_DAY 1480 non-null float64\nCITATION 2509 non-null object\nSTREET_NAME 2509 non-null object\nCITY_DISTRICT 2509 non-null object\nPARKING_DISTRICT 2509 non-null object\nVF_PLATSER 128 non-null float64\nVF_PLATS_TYP 2438 non-null object\nADDRESS 2509 non-null object\nRDT_URL 2509 non-null object\nVF_METER 306 non-null float64\nVALID_TO 15 non-null object\nODD_EVEN 19 non-null object\ngeometry 2509 non-null geometry\ndtypes: float64(6), geometry(1), int64(6), object(12)\nmemory usage: 490.2+ KB\n"
],
[
"import folium",
"_____no_output_____"
],
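[
"# A sketch added for illustration (not from the original notebook): one plausible use of the\n# folium import above, drawing the service-day geometries on a web map. It assumes the WFS\n# response sets a CRS on the GeoDataFrame so it can be reprojected to WGS84; the map centre is\n# an approximate coordinate for Stockholm.\ndf_wgs84 = df.to_crs(epsg=4326)\nm = folium.Map(location=[59.33, 18.06], zoom_start=12)\nfolium.GeoJson(df_wgs84.to_json(), name='servicedagar').add_to(m)\nm",
"_____no_output_____"
],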
[
"for index, row in df.iterrows():\n print ( \"\\nID: \", row[\"id\"],\"\\n\\t\",row[\"STREET_NAME\"],\", \", row[\"CITY_DISTRICT\"],\" - \",row[\"VF_PLATSER\"])\n ",
"\nID: LTFR_SERVICEDAG.31030102 \n\t Fållnäsgatan , Tallkrogen - 10.0\n\nID: LTFR_SERVICEDAG.31030270 \n\t Lummerstigen , Herrängen - nan\n\nID: LTFR_SERVICEDAG.31030271 \n\t Tanneforsvägen , Stureby - nan\n\nID: LTFR_SERVICEDAG.31030272 \n\t Östrandsvägen , Stureby - nan\n\nID: LTFR_SERVICEDAG.31030273 \n\t Östrandsvägen , Stureby - nan\n\nID: LTFR_SERVICEDAG.31029771 \n\t Ejdervägen , Fagersjö - nan\n\nID: LTFR_SERVICEDAG.31029944 \n\t Jordanesvägen , Norra Ängby - nan\n\nID: LTFR_SERVICEDAG.31029945 \n\t Skoghallsvägen , Stureby - nan\n\nID: LTFR_SERVICEDAG.31029946 \n\t Skoghallsvägen , Stureby - nan\n\nID: LTFR_SERVICEDAG.31029947 \n\t Skutskärsvägen , Stureby - nan\n\nID: LTFR_SERVICEDAG.31029948 \n\t Skutskärsvägen , Stureby - nan\n\nID: LTFR_SERVICEDAG.31029951 \n\t Folkparksvägen , Solberga - nan\n\nID: LTFR_SERVICEDAG.31030290 \n\t Börjesonsvägen , Södra Ängby - nan\n\nID: LTFR_SERVICEDAG.31030291 \n\t Börjesonsvägen , Södra Ängby - nan\n\nID: LTFR_SERVICEDAG.31030292 \n\t Börjesonsvägen , Södra Ängby - nan\n\nID: LTFR_SERVICEDAG.31030293 \n\t Börjesonsvägen , Södra Ängby - nan\n\nID: LTFR_SERVICEDAG.31030294 \n\t Börjesonsvägen , Södra Ängby - nan\n\nID: LTFR_SERVICEDAG.31030295 \n\t Beckombergavägen , Norra Ängby - nan\n\nID: LTFR_SERVICEDAG.31029779 \n\t Norregöksvägen , Solhem - nan\n\nID: LTFR_SERVICEDAG.31029780 \n\t Norregöksvägen , Solhem - nan\n\nID: LTFR_SERVICEDAG.31029955 \n\t Sjösavägen , Bandhagen - 12.0\n\nID: LTFR_SERVICEDAG.31029961 \n\t Vråkstigen , Herrängen - nan\n\nID: LTFR_SERVICEDAG.31029964 \n\t Torvsticksvägen , Bromma Kyrka - nan\n\nID: LTFR_SERVICEDAG.31030133 \n\t Knäckepilsgränd , Hässelby Villastad - nan\n\nID: LTFR_SERVICEDAG.31030137 \n\t Vildrosstigen , Herrängen - nan\n\nID: LTFR_SERVICEDAG.31030138 \n\t Idögränd , Bandhagen - 7.0\n\nID: LTFR_SERVICEDAG.31029967 \n\t Vendelstigen , Norra Ängby - nan\n\nID: LTFR_SERVICEDAG.31029970 \n\t Ivarskärrsvägen , Vinsta - nan\n\nID: LTFR_SERVICEDAG.31029971 \n\t Ivarskärrsvägen , Vinsta - nan\n\nID: LTFR_SERVICEDAG.31029972 \n\t Ivarskärrsvägen , Vinsta - nan\n\nID: LTFR_SERVICEDAG.31029973 \n\t Ivarskärrsvägen , Vinsta - nan\n\nID: LTFR_SERVICEDAG.31029974 \n\t Skebokvarnsvägen , Högdalen - 48.0\n\nID: LTFR_SERVICEDAG.31030145 \n\t Blodboksgränd , Hässelby Villastad - nan\n\nID: LTFR_SERVICEDAG.31030150 \n\t Brunkullegränd , Kälvesta - nan\n\nID: LTFR_SERVICEDAG.31030151 \n\t Brunkullegränd , Kälvesta - nan\n\nID: LTFR_SERVICEDAG.31030324 \n\t Fagerstagatan , Lunda - nan\n\nID: LTFR_SERVICEDAG.31029796 \n\t Örhängevägen , Solberga - nan\n\nID: LTFR_SERVICEDAG.31029797 \n\t Ottarsvägen , Norra Ängby - nan\n\nID: LTFR_SERVICEDAG.31029800 \n\t Torparmors Väg , Vinsta - nan\n\nID: LTFR_SERVICEDAG.31029801 \n\t Torparmors Väg , Vinsta - nan\n\nID: LTFR_SERVICEDAG.31029802 \n\t Torparmors Väg , Vinsta - nan\n\nID: LTFR_SERVICEDAG.31029803 \n\t Torparmors Väg , Vinsta - nan\n\nID: LTFR_SERVICEDAG.31029804 \n\t Torparmors Väg , Vinsta - nan\n\nID: LTFR_SERVICEDAG.31029807 \n\t Storkvägen , Långsjö - nan\n\nID: LTFR_SERVICEDAG.31029980 \n\t Börjesonsvägen , Södra Ängby - nan\n\nID: LTFR_SERVICEDAG.31029981 \n\t Börjesonsvägen , Södra Ängby - nan\n\nID: LTFR_SERVICEDAG.31029982 \n\t Börjesonsvägen , Södra Ängby - nan\n\nID: LTFR_SERVICEDAG.31030161 \n\t Hasselagränd , Vällingby - nan\n\nID: LTFR_SERVICEDAG.31029808 \n\t Lingonrisgränd , Hässelby Villastad - nan\n\nID: LTFR_SERVICEDAG.31029815 \n\t Lingonrisgränd , Hässelby Villastad - nan\n\nID: LTFR_SERVICEDAG.31030169 \n\t Tallskogsgränd , Hässelby Villastad - 
nan\n\nID: LTFR_SERVICEDAG.31030000 \n\t Spindelvägen , Långbro - nan\n\nID: LTFR_SERVICEDAG.31030001 \n\t Spindelvägen , Långbro - nan\n\nID: LTFR_SERVICEDAG.31030176 \n\t Brennervägen , Norra Ängby - nan\n\nID: LTFR_SERVICEDAG.31030361 \n\t Kronvägen , Bromsten - nan\n\nID: LTFR_SERVICEDAG.31030016 \n\t Doktor Abrahams Väg , Eneby - nan\n\nID: LTFR_SERVICEDAG.31030017 \n\t Doktor Abrahams Väg , Eneby - nan\n\nID: LTFR_SERVICEDAG.31030192 \n\t Folkparksvägen , Solberga - nan\n\nID: LTFR_SERVICEDAG.31030362 \n\t Kronvägen , Bromsten - nan\n\nID: LTFR_SERVICEDAG.31030202 \n\t Oviksgatan , Råcksta - nan\n\nID: LTFR_SERVICEDAG.31030203 \n\t Oviksgatan , Råcksta - nan\n\nID: LTFR_SERVICEDAG.31030204 \n\t Oviksgatan , Råcksta - nan\n\nID: LTFR_SERVICEDAG.31030377 \n\t Brennervägen , Norra Ängby - nan\n\nID: LTFR_SERVICEDAG.31030214 \n\t Kallforsvägen , Bandhagen - 6.0\n\nID: LTFR_SERVICEDAG.31030215 \n\t Ångermannagatan , Råcksta - nan\n\nID: LTFR_SERVICEDAG.31030221 \n\t Stålbogavägen , Högdalen - 38.0\n\nID: LTFR_SERVICEDAG.31030386 \n\t Knäpparvägen , Långsjö - nan\n\nID: LTFR_SERVICEDAG.31030387 \n\t Knäpparvägen , Långsjö - nan\n\nID: LTFR_SERVICEDAG.31030389 \n\t Galtabäcksvägen , Norra Ängby - nan\n\nID: LTFR_SERVICEDAG.31030390 \n\t Galtabäcksvägen , Norra Ängby - nan\n\nID: LTFR_SERVICEDAG.31030392 \n\t Ringarstigen , Bromma Kyrka - nan\n\nID: LTFR_SERVICEDAG.31030041 \n\t Lessebovägen , Stureby - nan\n\nID: LTFR_SERVICEDAG.31030042 \n\t Lessebovägen , Stureby - nan\n\nID: LTFR_SERVICEDAG.31030045 \n\t Maskrosstigen , Herrängen - nan\n\nID: LTFR_SERVICEDAG.31030051 \n\t Östrandsvägen , Stureby - nan\n\nID: LTFR_SERVICEDAG.31030400 \n\t Almgränd , Hässelby Villastad - nan\n\nID: LTFR_SERVICEDAG.31030402 \n\t Oviksgatan , Råcksta - nan\n\nID: LTFR_SERVICEDAG.31030403 \n\t Oviksgatan , Råcksta - nan\n\nID: LTFR_SERVICEDAG.31030070 \n\t Konditorsvägen , Sköndal - nan\n\nID: LTFR_SERVICEDAG.31030245 \n\t Vaktelstigen , Herrängen - nan\n\nID: LTFR_SERVICEDAG.31030252 \n\t Brushaneslingan , Herrängen - nan\n\nID: LTFR_SERVICEDAG.31029736 \n\t Krattvägen , Långbro - nan\n\nID: LTFR_SERVICEDAG.31029737 \n\t Krattvägen , Långbro - nan\n\nID: LTFR_SERVICEDAG.31029738 \n\t Krattvägen , Långbro - nan\n\nID: LTFR_SERVICEDAG.31029739 \n\t Krattvägen , Långbro - nan\n\nID: LTFR_SERVICEDAG.31029911 \n\t Oviksgatan , Råcksta - nan\n\nID: LTFR_SERVICEDAG.31029912 \n\t Oviksgatan , Råcksta - nan\n\nID: LTFR_SERVICEDAG.31029913 \n\t Oviksgatan , Råcksta - nan\n\nID: LTFR_SERVICEDAG.31029914 \n\t Oviksgatan , Råcksta - nan\n\nID: LTFR_SERVICEDAG.31029915 \n\t Oviksgatan , Råcksta - nan\n\nID: LTFR_SERVICEDAG.31029916 \n\t Oviksgatan , Råcksta - nan\n\nID: LTFR_SERVICEDAG.31030075 \n\t Rågångsvägen , Bromsten - nan\n\nID: LTFR_SERVICEDAG.31030076 \n\t Rågångsvägen , Bromsten - nan\n\nID: LTFR_SERVICEDAG.31030079 \n\t Spritsvägen , Sköndal - nan\n\nID: LTFR_SERVICEDAG.31030082 \n\t Danavägen , Norra Ängby - nan\n\nID: LTFR_SERVICEDAG.31030254 \n\t Toppklockegränd , Hässelby Villastad - nan\n\nID: LTFR_SERVICEDAG.31030262 \n\t Klenätvägen , Sköndal - nan\n\nID: LTFR_SERVICEDAG.31029927 \n\t Oviksgatan , Råcksta - nan\n\nID: LTFR_SERVICEDAG.31030092 \n\t Kiviksvägen , Norra Ängby - nan\n\nID: LTFR_SERVICEDAG.31031050 \n\t Sätterstavägen , Örby - nan\n\nID: LTFR_SERVICEDAG.31031051 \n\t Sätterstavägen , Örby - nan\n\nID: LTFR_SERVICEDAG.31031052 \n\t Jönåkersvägen , Svedmyra - nan\n\nID: LTFR_SERVICEDAG.31031053 \n\t Helgestavägen , Örby - nan\n\nID: LTFR_SERVICEDAG.31031054 \n\t Helgestavägen , Örby - 
nan\n\nID: LTFR_SERVICEDAG.31031055 \n\t Helgestavägen , Örby - nan\n\nID: LTFR_SERVICEDAG.31031056 \n\t Helgestavägen , Örby - nan\n\nID: LTFR_SERVICEDAG.31031057 \n\t Helgestavägen , Örby - nan\n\nID: LTFR_SERVICEDAG.31030467 \n\t Annexvägen , Bromma Kyrka - nan\n\nID: LTFR_SERVICEDAG.31030659 \n\t Vadsbrovägen , Örby - nan\n\nID: LTFR_SERVICEDAG.31031063 \n\t Folkparksvägen , Solberga - nan\n\nID: LTFR_SERVICEDAG.31031072 \n\t Kirunagatan , Vällingby - nan\n\nID: LTFR_SERVICEDAG.31030473 \n\t Vrenavägen , Högdalen - 18.0\n\nID: LTFR_SERVICEDAG.31030478 \n\t Tystbergavägen , Stureby - nan\n\nID: LTFR_SERVICEDAG.31030479 \n\t Tystbergavägen , Stureby - nan\n\nID: LTFR_SERVICEDAG.31030480 \n\t Tystbergavägen , Stureby - nan\n\nID: LTFR_SERVICEDAG.31031062 \n\t Folkparksvägen , Solberga - nan\n\nID: LTFR_SERVICEDAG.31030660 \n\t Uddeholmsvägen , Stureby - nan\n\nID: LTFR_SERVICEDAG.31030668 \n\t Tanneforsvägen , Stureby - nan\n\nID: LTFR_SERVICEDAG.31030669 \n\t Tanneforsvägen , Stureby - nan\n\nID: LTFR_SERVICEDAG.31030670 \n\t Albert Landbergs Gränd , Hässelby Villastad - nan\n\nID: LTFR_SERVICEDAG.31030671 \n\t Albert Landbergs Gränd , Hässelby Villastad - nan\n\nID: LTFR_SERVICEDAG.31030878 \n\t Persikogatan , Hässeby Strand - nan\n\nID: LTFR_SERVICEDAG.31030879 \n\t Persikogatan , Hässeby Strand - nan\n\nID: LTFR_SERVICEDAG.31031083 \n\t Krister Siöblads Väg , Nälsta - nan\n\nID: LTFR_SERVICEDAG.31031087 \n\t Funäsgatan , Råcksta - nan\n\nID: LTFR_SERVICEDAG.31030885 \n\t Röllekebacken , Hässelby Villastad - nan\n\nID: LTFR_SERVICEDAG.31030888 \n\t Näshultavägen , Örby - nan\n\nID: LTFR_SERVICEDAG.31030890 \n\t Måndagsvägen , Hökarängen - nan\n\nID: LTFR_SERVICEDAG.31030891 \n\t Måndagsvägen , Hökarängen - nan\n\nID: LTFR_SERVICEDAG.31031097 \n\t Funäsgatan , Råcksta - nan\n\nID: LTFR_SERVICEDAG.31030496 \n\t Bjursätragatan , Rågsved - nan\n\nID: LTFR_SERVICEDAG.31030497 \n\t Bjursätragatan , Rågsved - nan\n\nID: LTFR_SERVICEDAG.31030498 \n\t Rangstaplan , Högdalen - nan\n\nID: LTFR_SERVICEDAG.31031102 \n\t Funäsgatan , Råcksta - nan\n\nID: LTFR_SERVICEDAG.31031111 \n\t Maltesholmsvägen , Hässeby Strand - 120.0\n\nID: LTFR_SERVICEDAG.31031112 \n\t Formansgränd , Hässelby Gård - nan\n\nID: LTFR_SERVICEDAG.31030510 \n\t Markvägen , Nälsta - nan\n\nID: LTFR_SERVICEDAG.31030516 \n\t Askersgatan , Rågsved - 18.0\n\nID: LTFR_SERVICEDAG.31030911 \n\t Uddeholmsvägen , Stureby - nan\n\nID: LTFR_SERVICEDAG.31030912 \n\t Uddeholmsvägen , Stureby - nan\n\nID: LTFR_SERVICEDAG.31031121 \n\t Hagmarksvägen , Nälsta - nan\n\nID: LTFR_SERVICEDAG.31030928 \n\t Götalandsvägen , Örby Slott - nan\n\nID: LTFR_SERVICEDAG.31030929 \n\t Götalandsvägen , Liseberg - nan\n\nID: LTFR_SERVICEDAG.31030930 \n\t Ormängsgatan , Hässelby Gård - nan\n\nID: LTFR_SERVICEDAG.31031131 \n\t Funäsgatan , Råcksta - nan\n\nID: LTFR_SERVICEDAG.31030531 \n\t Skebokvarnsvägen , Högdalen - 23.0\n\nID: LTFR_SERVICEDAG.31030534 \n\t Franklandsvägen , Norra Ängby - nan\n\nID: LTFR_SERVICEDAG.31030535 \n\t Franklandsvägen , Norra Ängby - nan\n\nID: LTFR_SERVICEDAG.31030536 \n\t Franklandsvägen , Norra Ängby - nan\n\nID: LTFR_SERVICEDAG.31030931 \n\t Mällstensgränd , Högdalen - nan\n\nID: LTFR_SERVICEDAG.31031145 \n\t Lundagårdsvägen , Solhem - nan\n\nID: LTFR_SERVICEDAG.31031146 \n\t Lundagårdsvägen , Solhem - nan\n\nID: LTFR_SERVICEDAG.31031150 \n\t Västerängsvägen , Långbro - nan\n\nID: LTFR_SERVICEDAG.31031151 \n\t Västerängsvägen , Långbro - nan\n\nID: LTFR_SERVICEDAG.31030739 \n\t Blackebergsbacken , Blackeberg - nan\n\nID: 
LTFR_SERVICEDAG.31030741 \n\t Hyltingevägen , Örby - nan\n\nID: LTFR_SERVICEDAG.31030742 \n\t Ripsavägen , Örby - nan\n\nID: LTFR_SERVICEDAG.31030743 \n\t Ripsavägen , Örby - nan\n\nID: LTFR_SERVICEDAG.31030948 \n\t Melongatan , Hässeby Strand - nan\n\nID: LTFR_SERVICEDAG.31030951 \n\t Jultomtestigen , Liseberg - nan\n\nID: LTFR_SERVICEDAG.31030952 \n\t Jultomtestigen , Liseberg - nan\n\nID: LTFR_SERVICEDAG.31030563 \n\t Östmarksgatan , Farsta - nan\n\nID: LTFR_SERVICEDAG.31030564 \n\t Östmarksgatan , Farsta - nan\n\nID: LTFR_SERVICEDAG.31030750 \n\t Tallstubbsbacken , Solhem - nan\n\nID: LTFR_SERVICEDAG.31030756 \n\t Götalandsvägen , Örby Slott - nan\n\nID: LTFR_SERVICEDAG.31030956 \n\t Olshammarsgatan , Hagsätra - nan\n\nID: LTFR_SERVICEDAG.31030957 \n\t Olshammarsgatan , Hagsätra - nan\n\nID: LTFR_SERVICEDAG.31030958 \n\t Olshammarsgatan , Hagsätra - nan\n\nID: LTFR_SERVICEDAG.31030959 \n\t Olshammarsgatan , Hagsätra - nan\n\nID: LTFR_SERVICEDAG.31031179 \n\t Plommonvägen , Eneby - nan\n\nID: LTFR_SERVICEDAG.31031180 \n\t Plommonvägen , Eneby - nan\n\nID: LTFR_SERVICEDAG.31030570 \n\t Sågverksgatan , Stureby - nan\n\nID: LTFR_SERVICEDAG.31030968 \n\t Aspövägen , Örby - nan\n\nID: LTFR_SERVICEDAG.31030969 \n\t Aspövägen , Örby - nan\n\nID: LTFR_SERVICEDAG.31030972 \n\t Skarpnäcks Alle , Skarpnäcks Gård - nan\n\nID: LTFR_SERVICEDAG.31031184 \n\t Nippervägen , Solberga - nan\n\nID: LTFR_SERVICEDAG.31031185 \n\t Nippervägen , Solberga - nan\n\nID: LTFR_SERVICEDAG.31031194 \n\t Kirunagatan , Vällingby - nan\n\nID: LTFR_SERVICEDAG.31030589 \n\t Torsburgsvägen , Norra Ängby - nan\n\nID: LTFR_SERVICEDAG.31030774 \n\t Råbyvägen , Örby Slott - nan\n\nID: LTFR_SERVICEDAG.31030775 \n\t Råbyvägen , Örby Slott - nan\n\nID: LTFR_SERVICEDAG.31030776 \n\t Råbyvägen , Örby Slott - nan\n\nID: LTFR_SERVICEDAG.31030777 \n\t Råbyvägen , Örby Slott - nan\n\nID: LTFR_SERVICEDAG.31030778 \n\t Råbyvägen , Örby Slott - nan\n\nID: LTFR_SERVICEDAG.31030591 \n\t Stiklastadsvägen , Norra Ängby - nan\n\nID: LTFR_SERVICEDAG.31030785 \n\t Vårdingebacken , Örby - nan\n\nID: LTFR_SERVICEDAG.31031059 \n\t Sten Stures Gränd , Solberga - nan\n\nID: LTFR_SERVICEDAG.31030609 \n\t Pukslagargatan , Älvsjö - 9.0\n\nID: LTFR_SERVICEDAG.31030613 \n\t Ramviksvägen , Stureby - nan\n\nID: LTFR_SERVICEDAG.31030798 \n\t Skarpnäcks Alle , Skarpnäcks Gård - nan\n\nID: LTFR_SERVICEDAG.31031007 \n\t Bedaröbacken , Högdalen - nan\n\nID: LTFR_SERVICEDAG.31031008 \n\t Bedaröbacken , Högdalen - nan\n\nID: LTFR_SERVICEDAG.31031009 \n\t Grytvägen , Örby - nan\n\nID: LTFR_SERVICEDAG.31031015 \n\t Helsingforsgatan , Akalla - nan\n\nID: LTFR_SERVICEDAG.31031016 \n\t Helsingforsgatan , Akalla - nan\n\nID: LTFR_SERVICEDAG.31030431 \n\t Franklandsvägen , Norra Ängby - nan\n\nID: LTFR_SERVICEDAG.31030435 \n\t Skebokvarnsvägen , Högdalen - 41.0\n\nID: LTFR_SERVICEDAG.31030615 \n\t Brennervägen , Norra Ängby - nan\n\nID: LTFR_SERVICEDAG.31030617 \n\t Vargövägen , Stureby - nan\n\nID: LTFR_SERVICEDAG.31030618 \n\t Vargövägen , Stureby - nan\n\nID: LTFR_SERVICEDAG.31031021 \n\t Folkparksvägen , Solberga - nan\n\nID: LTFR_SERVICEDAG.31031022 \n\t Folkparksvägen , Solberga - nan\n\nID: LTFR_SERVICEDAG.31031024 \n\t Lotta Svärds Gränd , Fruängen - nan\n\nID: LTFR_SERVICEDAG.31031028 \n\t Västerängsvägen , Långbro - nan\n\nID: LTFR_SERVICEDAG.31031029 \n\t Västerängsvägen , Långbro - nan\n\nID: LTFR_SERVICEDAG.31031060 \n\t Beckombergavägen , Norra Ängby - nan\n\nID: LTFR_SERVICEDAG.31031061 \n\t Beckombergavägen , Norra Ängby - nan\n\nID: 
LTFR_SERVICEDAG.31030630 \n\t Västerängsvägen , Långbro - nan\n\nID: LTFR_SERVICEDAG.31030631 \n\t Västerängsvägen , Långbro - nan\n\nID: LTFR_SERVICEDAG.31030828 \n\t Ripsavägen , Örby - nan\n\nID: LTFR_SERVICEDAG.31030829 \n\t Ripsavägen , Örby - nan\n\nID: LTFR_SERVICEDAG.31031036 \n\t Sävstaholmsvägen , Örby Slott - nan\n\nID: LTFR_SERVICEDAG.31031037 \n\t Sävstaholmsvägen , Örby Slott - nan\n\nID: LTFR_SERVICEDAG.31030451 \n\t Skogsklöverbacken , Hässelby Villastad - nan\n\nID: LTFR_SERVICEDAG.31030636 \n\t Östbergabackarna , Östberga - nan\n\nID: LTFR_SERVICEDAG.31030637 \n\t Östbergabackarna , Östberga - nan\n\nID: LTFR_SERVICEDAG.31030638 \n\t Östbergabackarna , Östberga - nan\n\nID: LTFR_SERVICEDAG.31030642 \n\t Kronvägen , Bromsten - nan\n\nID: LTFR_SERVICEDAG.31031251 \n\t Östbergabackarna , Östberga - nan\n\nID: LTFR_SERVICEDAG.31031468 \n\t Flottbrovägen , Stora Essingen - nan\n\nID: LTFR_SERVICEDAG.31031469 \n\t Flottbrovägen , Stora Essingen - nan\n\nID: LTFR_SERVICEDAG.31031669 \n\t Olaus Magnus Väg , Hammarbyhöjden - nan\n\nID: LTFR_SERVICEDAG.31031671 \n\t Vidängsvägen , Traneberg - nan\n\nID: LTFR_SERVICEDAG.31031676 \n\t Arkövägen , Kärrtorp - nan\n\nID: LTFR_SERVICEDAG.31031876 \n\t Essingestråket , Stora Essingen - nan\n\nID: LTFR_SERVICEDAG.31031260 \n\t Persikogatan , Hässeby Strand - 28.0\n\nID: LTFR_SERVICEDAG.31031491 \n\t Klyftvägen , Ulvsunda - nan\n\nID: LTFR_SERVICEDAG.31031492 \n\t Klyftvägen , Ulvsunda - nan\n\nID: LTFR_SERVICEDAG.31031687 \n\t Vivelvägen , Långbro - nan\n\nID: LTFR_SERVICEDAG.31031688 \n\t Vivelvägen , Långbro - nan\n\nID: LTFR_SERVICEDAG.31031689 \n\t Vivelvägen , Långbro - nan\n\nID: LTFR_SERVICEDAG.31031690 \n\t Vivelvägen , Långbro - nan\n\nID: LTFR_SERVICEDAG.31031691 \n\t Vivelvägen , Långbro - nan\n\nID: LTFR_SERVICEDAG.31031694 \n\t Föreningsvägen , Enskede Gård - nan\n\nID: LTFR_SERVICEDAG.31031695 \n\t Föreningsvägen , Enskede Gård - nan\n\nID: LTFR_SERVICEDAG.31031886 \n\t Möckelvägen , Årsta - 8.0\n\nID: LTFR_SERVICEDAG.31031892 \n\t Gullmarsvägen , Årsta - nan\n\nID: LTFR_SERVICEDAG.31031893 \n\t Gullmarsvägen , Årsta - nan\n\nID: LTFR_SERVICEDAG.31031894 \n\t Gullmarsvägen , Årsta - nan\n\nID: LTFR_SERVICEDAG.31031281 \n\t Funäsgatan , Råcksta - nan\n\nID: LTFR_SERVICEDAG.31031496 \n\t Bägersta Byväg , Enskedefältet - nan\n\nID: LTFR_SERVICEDAG.31031497 \n\t Folkparksvägen , Solberga - nan\n\nID: LTFR_SERVICEDAG.31031498 \n\t Folkparksvägen , Solberga - nan\n\nID: LTFR_SERVICEDAG.31031499 \n\t Åkerbladsgatan , Hammarbyhöjden - nan\n\nID: LTFR_SERVICEDAG.31031500 \n\t Åkerbladsgatan , Hammarbyhöjden - nan\n\nID: LTFR_SERVICEDAG.31031501 \n\t Ulricehamnsvägen , Hammarbyhöjden - nan\n\nID: LTFR_SERVICEDAG.31031502 \n\t Ulricehamnsvägen , Hammarbyhöjden - nan\n\nID: LTFR_SERVICEDAG.31031503 \n\t Ulricehamnsvägen , Hammarbyhöjden - nan\n\nID: LTFR_SERVICEDAG.31031701 \n\t Hjälmarsvägen , Årsta - nan\n\nID: LTFR_SERVICEDAG.31031702 \n\t Ottsjövägen , Årsta - nan\n\nID: LTFR_SERVICEDAG.31031703 \n\t Ottsjövägen , Årsta - nan\n\nID: LTFR_SERVICEDAG.31031706 \n\t Olshammarsgatan , Hagsätra - nan\n\nID: LTFR_SERVICEDAG.31031707 \n\t Olshammarsgatan , Hagsätra - nan\n\nID: LTFR_SERVICEDAG.31031708 \n\t Ödmårdsvägen , Traneberg - nan\n\nID: LTFR_SERVICEDAG.31031896 \n\t Arkövägen , Kärrtorp - nan\n\nID: LTFR_SERVICEDAG.31031900 \n\t Möckelvägen , Årsta - nan\n\nID: LTFR_SERVICEDAG.31031901 \n\t Möckelvägen , Årsta - nan\n\nID: LTFR_SERVICEDAG.31031902 \n\t Möckelvägen , Årsta - nan\n\nID: LTFR_SERVICEDAG.31031903 \n\t Möckelvägen , Årsta 
- nan\n\nID: LTFR_SERVICEDAG.31031904 \n\t Möckelvägen , Årsta - nan\n\nID: LTFR_SERVICEDAG.31031905 \n\t Siljansvägen , Årsta - nan\n\nID: LTFR_SERVICEDAG.31031906 \n\t Siljansvägen , Årsta - nan\n\nID: LTFR_SERVICEDAG.31031907 \n\t Siljansvägen , Årsta - nan\n\nID: LTFR_SERVICEDAG.31031908 \n\t Anna Sandströms Gata , Fruängen - 52.0\n\nID: LTFR_SERVICEDAG.31031290 \n\t Trolle-Bondes Gata , Nälsta - nan\n\nID: LTFR_SERVICEDAG.31031296 \n\t Funäsgatan , Råcksta - nan\n\nID: LTFR_SERVICEDAG.31031297 \n\t Funäsgatan , Råcksta - nan\n\nID: LTFR_SERVICEDAG.31031512 \n\t Hertigvägen , Aspudden - nan\n\nID: LTFR_SERVICEDAG.31031514 \n\t Olaus Magnus Väg , Hammarbyhöjden - nan\n\nID: LTFR_SERVICEDAG.31031515 \n\t Olaus Magnus Väg , Hammarbyhöjden - nan\n\nID: LTFR_SERVICEDAG.31031516 \n\t Ulvsunda Slottsväg , Ulvsunda - nan\n\nID: LTFR_SERVICEDAG.31031517 \n\t Ulvsunda Slottsväg , Ulvsunda - nan\n\nID: LTFR_SERVICEDAG.31031518 \n\t Ulvsunda Slottsväg , Ulvsunda - nan\n\nID: LTFR_SERVICEDAG.31031520 \n\t Nybohovsbacken , Liljeholmen - nan\n\nID: LTFR_SERVICEDAG.31031521 \n\t Nybohovsbacken , Liljeholmen - nan\n\nID: LTFR_SERVICEDAG.31031712 \n\t Skagersvägen , Årsta - nan\n\nID: LTFR_SERVICEDAG.31031713 \n\t Skagersvägen , Årsta - nan\n\nID: LTFR_SERVICEDAG.31031714 \n\t Liljeholmsvägen , Liljeholmen - nan\n\nID: LTFR_SERVICEDAG.31031721 \n\t Nybohovsbacken , Liljeholmen - nan\n\nID: LTFR_SERVICEDAG.31031722 \n\t Nybohovsbacken , Liljeholmen - nan\n\nID: LTFR_SERVICEDAG.31031304 \n\t Funäsgatan , Råcksta - nan\n\nID: LTFR_SERVICEDAG.31031315 \n\t Gryningsvägen , Solhem - nan\n\nID: LTFR_SERVICEDAG.31031526 \n\t Yrkesvägen , Gamla Enskede - nan\n\nID: LTFR_SERVICEDAG.31031527 \n\t Yrkesvägen , Gamla Enskede - nan\n\nID: LTFR_SERVICEDAG.31031723 \n\t Nybohovsbacken , Liljeholmen - nan\n\nID: LTFR_SERVICEDAG.31031730 \n\t Olaus Magnus Väg , Hammarbyhöjden - nan\n\nID: LTFR_SERVICEDAG.31031731 \n\t Olaus Magnus Väg , Hammarbyhöjden - nan\n\nID: LTFR_SERVICEDAG.31031732 \n\t Wormsövägen , Enskedefältet - nan\n\nID: LTFR_SERVICEDAG.31031733 \n\t Wormsövägen , Enskedefältet - nan\n\nID: LTFR_SERVICEDAG.31031925 \n\t Ödmårdsvägen , Traneberg - nan\n\n"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d0f077dd9760da96d0689929f7c034066095f9d1
| 16,489 |
ipynb
|
Jupyter Notebook
|
tutorials/tutorial01_sumo.ipynb
|
syuntoku14/flow
|
3a1157cde31d0b7d6a3cc2f91eef0ec9ea53575e
|
[
"MIT"
] | null | null | null |
tutorials/tutorial01_sumo.ipynb
|
syuntoku14/flow
|
3a1157cde31d0b7d6a3cc2f91eef0ec9ea53575e
|
[
"MIT"
] | null | null | null |
tutorials/tutorial01_sumo.ipynb
|
syuntoku14/flow
|
3a1157cde31d0b7d6a3cc2f91eef0ec9ea53575e
|
[
"MIT"
] | null | null | null | 39.353222 | 653 | 0.659349 |
[
[
[
"# Tutorial 01: Running Sumo Simulations\n\nThis tutorial walks through the process of running non-RL traffic simulations in Flow. Simulations of this form act as non-autonomous baselines and depict the behavior of human dynamics on a network. Similar simulations may also be used to evaluate the performance of hand-designed controllers on a network. This tutorial focuses primarily on the former use case, while an example of the latter may be found in `exercise07_controllers.ipynb`.\n\nIn this exercise, we simulate a initially perturbed single lane ring road. We witness in simulation that as time advances the initially perturbations do not dissipate, but instead propagates and expands until vehicles are forced to periodically stop and accelerate. For more information on this behavior, we refer the reader to the following article [1].\n\n## 1. Components of a Simulation\nAll simulations, both in the presence and absence of RL, require two components: a *scenario*, and an *environment*. Scenarios describe the features of the transportation network used in simulation. This includes the positions and properties of nodes and edges constituting the lanes and junctions, as well as properties of the vehicles, traffic lights, inflows, etc. in the network. Environments, on the other hand, initialize, reset, and advance simulations, and act the primary interface between the reinforcement learning algorithm and the scenario. Moreover, custom environments may be used to modify the dynamical features of an scenario.\n\n## 2. Setting up a Scenario\nFlow contains a plethora of pre-designed scenarios used to replicate highways, intersections, and merges in both closed and open settings. All these scenarios are located in flow/scenarios. In order to recreate a ring road network, we begin by importing the scenario `LoopScenario`.",
"_____no_output_____"
]
],
[
[
"from flow.scenarios.loop import LoopScenario",
"_____no_output_____"
]
],
[
[
"This scenario, as well as all other scenarios in Flow, is parametrized by the following arguments: \n* name\n* vehicles\n* net_params\n* initial_config\n* traffic_lights\n\nThese parameters allow a single scenario to be recycled for a multitude of different network settings. For example, `LoopScenario` may be used to create ring roads of variable length with a variable number of lanes and vehicles.\n\n### 2.1 Name\nThe `name` argument is a string variable depicting the name of the scenario. This has no effect on the type of network created.",
"_____no_output_____"
]
],
[
[
"name = \"ring_example\"",
"_____no_output_____"
]
],
[
[
"### 2.2 VehicleParams\nThe `VehicleParams` class stores state information on all vehicles in the network. This class is used to identify the dynamical behavior of a vehicle and whether it is controlled by a reinforcement learning agent. Morover, information pertaining to the observations and reward function can be collected from various get methods within this class.\n\nThe initial configuration of this class describes the number of vehicles in the network at the start of every simulation, as well as the properties of these vehicles. We begin by creating an empty `VehicleParams` object.",
"_____no_output_____"
]
],
[
[
"from flow.core.params import VehicleParams\n\nvehicles = VehicleParams()",
"_____no_output_____"
]
],
[
[
"Once this object is created, vehicles may be introduced using the `add` method. This method specifies the types and quantities of vehicles at the start of a simulation rollout. For a description of the various arguements associated with the `add` method, we refer the reader to the following documentation (reference readthedocs).\n\nWhen adding vehicles, their dynamical behaviors may be specified either by the simulator (default), or by user-generated models. For longitudinal (acceleration) dynamics, several prominent car-following models are implemented in Flow. For this example, the acceleration behavior of all vehicles will be defined by the Intelligent Driver Model (IDM) [2].",
"_____no_output_____"
]
],
[
[
"from flow.controllers.car_following_models import IDMController",
"_____no_output_____"
]
],
[
[
"Another controller we define is for the vehicle's routing behavior. For closed network where the route for any vehicle is repeated, the `ContinuousRouter` controller is used to perpetually reroute all vehicles to the initial set route.",
"_____no_output_____"
]
],
[
[
"from flow.controllers.routing_controllers import ContinuousRouter",
"_____no_output_____"
]
],
[
[
"Finally, we add 22 vehicles of type \"human\" with the above acceleration and routing behavior into the `Vehicles` class.",
"_____no_output_____"
]
],
[
[
"vehicles.add(\"human\",\n acceleration_controller=(IDMController, {}),\n routing_controller=(ContinuousRouter, {}),\n num_vehicles=22)",
"_____no_output_____"
]
],
[
[
"### 2.3 NetParams\n\n`NetParams` are network-specific parameters used to define the shape and properties of a network. Unlike most other parameters, `NetParams` may vary drastically depending on the specific network configuration, and accordingly most of its parameters are stored in `additional_params`. In order to determine which `additional_params` variables may be needed for a specific scenario, we refer to the `ADDITIONAL_NET_PARAMS` variable located in the scenario file.",
"_____no_output_____"
]
],
[
[
"from flow.scenarios.loop import ADDITIONAL_NET_PARAMS\n\nprint(ADDITIONAL_NET_PARAMS)",
"_____no_output_____"
]
],
[
[
"Importing the `ADDITIONAL_NET_PARAMS` dict from the ring road scenario, we see that the required parameters are:\n\n* **length**: length of the ring road\n* **lanes**: number of lanes\n* **speed**: speed limit for all edges\n* **resolution**: resolution of the curves on the ring. Setting this value to 1 converts the ring to a diamond.\n\n\nAt times, other inputs (for example `no_internal_links`) may be needed from `NetParams` to recreate proper network features/behavior. These requirements can be founded in the scenario's documentation. For the ring road, no attributes are needed aside from the `additional_params` terms. Furthermore, for this exercise, we use the scenario's default parameters when creating the `NetParams` object.",
"_____no_output_____"
]
],
[
[
"from flow.core.params import NetParams\n\nnet_params = NetParams(additional_params=ADDITIONAL_NET_PARAMS)",
"_____no_output_____"
]
],
[
[
"### 2.4 InitialConfig\n\n`InitialConfig` specifies parameters that affect the positioning of vehicle in the network at the start of a simulation. These parameters can be used to limit the edges and number of lanes vehicles originally occupy, and provide a means of adding randomness to the starting positions of vehicles. In order to introduce a small initial disturbance to the system of vehicles in the network, we set the `perturbation` term in `InitialConfig` to 1m.",
"_____no_output_____"
]
],
[
[
"from flow.core.params import InitialConfig\n\ninitial_config = InitialConfig(spacing=\"uniform\", perturbation=1)",
"_____no_output_____"
]
],
[
[
"### 2.5 TrafficLightParams\n\n`TrafficLightParams` are used to desribe the positions and types of traffic lights in the network. These inputs are outside the scope of this tutorial, and instead are covered in `exercise06_traffic_lights.ipynb`. For our example, we create an empty `TrafficLightParams` object, thereby ensuring that none are placed on any nodes.",
"_____no_output_____"
]
],
[
[
"from flow.core.params import TrafficLightParams\n\ntraffic_lights = TrafficLightParams()",
"_____no_output_____"
]
],
[
[
"## 3. Setting up an Environment\n\nSeveral envionrments in Flow exist to train autonomous agents of different forms (e.g. autonomous vehicles, traffic lights) to perform a variety of different tasks. These environments are often scenario or task specific; however, some can be deployed on an ambiguous set of scenarios as well. One such environment, `AccelEnv`, may be used to train a variable number of vehicles in a fully observable network with a *static* number of vehicles.",
"_____no_output_____"
]
],
[
[
"from flow.envs.loop.loop_accel import AccelEnv",
"_____no_output_____"
]
],
[
[
"Although we will not be training any autonomous agents in this exercise, the use of an environment allows us to view the cumulative reward simulation rollouts receive in the absence of autonomy.\n\nEnvrionments in Flow are parametrized by three components:\n* `EnvParams`\n* `SumoParams`\n* `Scenario`\n\n### 3.1 SumoParams\n`SumoParams` specifies simulation-specific variables. These variables include the length a simulation step (in seconds) and whether to render the GUI when running the experiment. For this example, we consider a simulation step length of 0.1s and activate the GUI.",
"_____no_output_____"
]
],
[
[
"from flow.core.params import SumoParams\n\nsumo_params = SumoParams(sim_step=0.1, render=True)",
"_____no_output_____"
]
],
[
[
"### 3.2 EnvParams\n\n`EnvParams` specify environment and experiment-specific parameters that either affect the training process or the dynamics of various components within the scenario. Much like `NetParams`, the attributes associated with this parameter are mostly environment specific, and can be found in the environment's `ADDITIONAL_ENV_PARAMS` dictionary.",
"_____no_output_____"
]
],
[
[
"from flow.envs.loop.loop_accel import ADDITIONAL_ENV_PARAMS\n\nprint(ADDITIONAL_ENV_PARAMS)",
"_____no_output_____"
]
],
[
[
"Importing the `ADDITIONAL_ENV_PARAMS` variable, we see that it consists of only one entry, \"target_velocity\", which is used when computing the reward function associated with the environment. We use this default value when generating the `EnvParams` object.",
"_____no_output_____"
]
],
[
[
"from flow.core.params import EnvParams\n\nenv_params = EnvParams(additional_params=ADDITIONAL_ENV_PARAMS)",
"_____no_output_____"
]
],
[
[
"## 4. Setting up and Running the Experiment\nOnce the inputs to the scenario and environment classes are ready, we are ready to set up a `Experiment` object.",
"_____no_output_____"
]
],
[
[
"from flow.core.experiment import Experiment",
"_____no_output_____"
]
],
[
[
"These objects may be used to simulate rollouts in the absence of reinforcement learning agents, as well as acquire behaviors and rewards that may be used as a baseline with which to compare the performance of the learning agent. In this case, we choose to run our experiment for one rollout consisting of 3000 steps (300 s).\n\n**Note**: When executing the below code, remeber to click on the <img style=\"display:inline;\" src=\"img/play_button.png\"> Play button after the GUI is rendered.",
"_____no_output_____"
]
],
[
[
"# create the scenario object\nscenario = LoopScenario(name=\"ring_example\",\n vehicles=vehicles,\n net_params=net_params,\n initial_config=initial_config,\n traffic_lights=traffic_lights)\n\n# create the environment object\nenv = AccelEnv(env_params, sumo_params, scenario)\n\n# create the experiment object\nexp = Experiment(env)\n\n# run the experiment for a set number of rollouts / time steps\n_ = exp.run(1, 3000)",
"_____no_output_____"
]
],
[
[
"As we can see from the above simulation, the initial perturbations in the network instabilities propogate and intensify, eventually leading to the formation of stop-and-go waves after approximately 180s.",
"_____no_output_____"
],
[
"## 5. Modifying the Simulation\nThis tutorial has walked you through running a single lane ring road experiment in Flow. As we have mentioned before, these simulations are highly parametrizable. This allows us to try different representations of the task. For example, what happens if no initial perturbations are introduced to the system of homogenous human-driven vehicles?\n\n```\ninitial_config = InitialConfig()\n```\n\nIn addition, how does the task change in the presence of multiple lanes where vehicles can overtake one another?\n\n```\nnet_params = NetParams(\n additional_params={\n 'length': 230, \n 'lanes': 2, \n 'speed_limit': 30, \n 'resolution': 40\n }\n)\n```\n\nFeel free to experiment with all these problems and more!\n\n## Bibliography\n[1] Sugiyama, Yuki, et al. \"Traffic jams without bottlenecks—experimental evidence for the physical mechanism of the formation of a jam.\" New journal of physics 10.3 (2008): 033001.\n\n[2] Treiber, Martin, Ansgar Hennecke, and Dirk Helbing. \"Congested traffic states in empirical observations and microscopic simulations.\" Physical review E 62.2 (2000): 1805.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d0f08fd20f640ca2e2465ee6ab2d7cc0cd8440cd
| 43,102 |
ipynb
|
Jupyter Notebook
|
docs/Guide.ipynb
|
timothycrosley/fri
|
94c6b74bfb4ceee405df41834e2e3f8b0a617b38
|
[
"MIT"
] | null | null | null |
docs/Guide.ipynb
|
timothycrosley/fri
|
94c6b74bfb4ceee405df41834e2e3f8b0a617b38
|
[
"MIT"
] | null | null | null |
docs/Guide.ipynb
|
timothycrosley/fri
|
94c6b74bfb4ceee405df41834e2e3f8b0a617b38
|
[
"MIT"
] | null | null | null | 41.444231 | 11,396 | 0.713842 |
[
[
[
"# Quick start guide\n",
"_____no_output_____"
],
[
"## Installation\n### Stable\nFri can be installed via the Python Package Index (PyPI).\n\nIf you have `pip` installed just execute the command\n\n pip install fri\n \nto get the newest stable version.\n\nThe dependencies should be installed and checked automatically.\nIf you have problems installing please open issue at our [tracker](https://github.com/lpfann/fri/issues/new).\n\n### Development\nTo install a bleeding edge dev version of `FRI` you can clone the GitHub repository using\n\n git clone [email protected]:lpfann/fri.git\n\nand then check out the `dev` branch: `git checkout dev`.\n\nWe use [poetry](https://poetry.eustace.io/) for dependency management.\n\nRun\n\n poetry install\n\nin the cloned repository to install `fri` in a virtualenv.\n\n\n\nTo check if everything works as intented you can use `pytest` to run the unit tests.\nJust run the command\n\n poetry run pytest\n\nin the main project folder",
"_____no_output_____"
],
[
"## Using FRI\nNow we showcase the workflow of using FRI on a simple classification problem.\n\n### Data\nTo have something to work with, we need some data first.\n`fri` includes a generation method for binary classification and regression data.\n\nIn our case we need some classification data.",
"_____no_output_____"
]
],
[
[
"from fri import genClassificationData",
"_____no_output_____"
]
],
[
[
"We want to create a small set with a few features.\n\nBecause we want to showcase the all-relevant feature selection, we generate multiple strongly and weakly relevant features.",
"_____no_output_____"
]
],
[
[
"n = 100\nfeatures = 6\nstrongly_relevant = 2\nweakly_relevant = 2",
"_____no_output_____"
],
[
"X,y = genClassificationData(n_samples=n,\n n_features=features,\n n_strel=strongly_relevant,\n n_redundant=weakly_relevant,\n random_state=123)",
"_____no_output_____"
]
],
[
[
"The method also prints out the parameters again.",
"_____no_output_____"
]
],
[
[
"X.shape",
"_____no_output_____"
]
],
[
[
"We created a binary classification set with 6 features of which 2 are strongly relevant and 2 weakly relevant.",
"_____no_output_____"
],
[
"#### Preprocess\nBecause our method expects mean centered data we need to standardize it first.\nThis centers the values around 0 and deviation to the standard deviation",
"_____no_output_____"
]
],
[
[
"from sklearn.preprocessing import StandardScaler\nX_scaled = StandardScaler().fit_transform(X)",
"_____no_output_____"
]
],
[
[
"### Model\nNow we need to creata a Model. \n\nWe use the `FRI` module.\n",
"_____no_output_____"
]
],
[
[
"import fri",
"_____no_output_____"
]
],
[
[
"`fri` provides a convenience class `fri.FRI` to create a model.\n\n`fri.FRI` needs the type of problem as a first argument of type `ProblemName`.\n\nDepending on the Problem you want to analyze pick from one of the available models in `ProblemName`.",
"_____no_output_____"
]
],
[
[
"list(fri.ProblemName)",
"_____no_output_____"
]
],
[
[
"Because we have Classification data we use the `ProblemName.CLASSIFICATION` to instantiate our model.",
"_____no_output_____"
]
],
[
[
"fri_model = fri.FRI(fri.ProblemName.CLASSIFICATION,slack_loss=0.2,slack_regularization=0.2)",
"_____no_output_____"
],
[
"fri_model",
"_____no_output_____"
]
],
[
[
"We used no parameters for creation so the defaults are active.",
"_____no_output_____"
],
[
"#### Fitting to data\nNow we can just fit the model to the data using `scikit-learn` like commands.",
"_____no_output_____"
]
],
[
[
"fri_model.fit(X_scaled,y)",
"_____no_output_____"
]
],
[
[
"The resulting feature relevance bounds are saved in the `interval_` variable.",
"_____no_output_____"
]
],
[
[
"fri_model.interval_",
"_____no_output_____"
]
],
[
[
"If you want to print out the relevance class use the `print_interval_with_class()` function.",
"_____no_output_____"
]
],
[
[
"print(fri_model.print_interval_with_class())",
"############## Relevance bounds ##############\nfeature: [LB -- UB], relevance class\n 0: [0.3 -- 0.4], Strong relevant\n 1: [0.3 -- 0.4], Strong relevant\n 2: [0.0 -- 0.5], Weak relevant\n 3: [0.0 -- 0.4], Weak relevant\n 4: [0.0 -- 0.0], Irrelevant\n 5: [0.0 -- 0.1], Irrelevant\n\n"
]
],
[
[
"The bounds are grouped in 2d sublists for each feature.\n\n\nTo acess the relevance bounds for feature 2 we would use",
"_____no_output_____"
]
],
[
[
"fri_model.interval_[2]",
"_____no_output_____"
]
],
[
[
"The relevance classes are saved in the corresponding variable `relevance_classes_`:",
"_____no_output_____"
]
],
[
[
"fri_model.relevance_classes_",
"_____no_output_____"
]
],
[
[
"`2` denotes strongly relevant features, `1` weakly relevant and `0` irrelevant.",
"_____no_output_____"
],
[
"#### Plot results\n\nThe bounds in numerical form are useful for postprocesing.\nIf we want a human to look at it, we recommend the plot function `plot_relevance_bars`.\n\nWe can also color the bars according to `relevance_classes_`",
"_____no_output_____"
]
],
[
[
"# Import plot function\nfrom fri.plot import plot_relevance_bars\nimport matplotlib.pyplot as plt\n%matplotlib inline\n# Create new figure, where we can put an axis on\nfig, ax = plt.subplots(1, 1,figsize=(6,3))\n# plot the bars on the axis, colored according to fri\nout = plot_relevance_bars(ax,fri_model.interval_,classes=fri_model.relevance_classes_)",
"_____no_output_____"
]
],
[
[
"### Setting constraints manually\nOur model also allows to compute relevance bounds when the user sets a given range for the features.\n\nWe use a dictionary to encode our constraints.\n",
"_____no_output_____"
]
],
[
[
"preset = {}",
"_____no_output_____"
]
],
[
[
"#### Example\nAs an example, let us constrain the third from our example to the minimum relevance bound.\n\n",
"_____no_output_____"
]
],
[
[
"preset[2] = fri_model.interval_[2, 0]",
"_____no_output_____"
]
],
[
[
"We use the function `constrained_intervals`.\n\nNote: we need to fit the model before we can use this function.\nWe already did that, so we are fine.",
"_____no_output_____"
]
],
[
[
"const_ints = fri_model.constrained_intervals(preset=preset)",
"_____no_output_____"
],
[
"const_ints",
"_____no_output_____"
]
],
[
[
"Feature 3 is set to its minimum (at 0).\n\nHow does it look visually?",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(1, 1,figsize=(6,3))\nout = plot_relevance_bars(ax, const_ints)",
"_____no_output_____"
]
],
[
[
"Feature 3 is reduced to its minimum (no contribution).\n\nIn turn, its correlated partner feature 4 had to take its maximum contribution.",
"_____no_output_____"
],
[
"### Print internal Parameters\n\nIf we want to take at internal parameters, we can use the `verbose` flag in the model creation.",
"_____no_output_____"
]
],
[
[
"fri_model = fri.FRI(fri.ProblemName.CLASSIFICATION, verbose=True)",
"_____no_output_____"
],
[
"fri_model.fit(X_scaled,y)",
"Fitting 3 folds for each of 10 candidates, totalling 30 fits\n"
]
],
[
[
"This prints out the parameters of the baseline model\n\nOne can also see the best selected hyperparameter according to gridsearch and the training score of the model in `score`.\n",
"_____no_output_____"
],
[
"### Multiprocessing\nTo enable multiprocessing simply use the `n_jobs` parameter when init. the model.\n\nIt expects an integer parameter which defines the amount of processes used.\n`n_jobs=-1` uses all available on the CPU.",
"_____no_output_____"
]
],
[
[
"fri_model = fri.FRI(fri.ProblemName.CLASSIFICATION, n_jobs=-1, verbose=1)",
"_____no_output_____"
],
[
"fri_model.fit(X_scaled,y)",
"Fitting 3 folds for each of 10 candidates, totalling 30 fits\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
d0f091cbb0e4ede7347d4d4411684582af91d102
| 1,204 |
ipynb
|
Jupyter Notebook
|
artifact_evaluation/Jupyter/src/testcase_reducer/simplify.ipynb
|
weucode/COMFORT
|
0cfce5d70a58503f8ba3c3ff825abc24b79a1d2b
|
[
"Apache-2.0"
] | 55 |
2021-03-05T06:42:22.000Z
|
2022-02-22T05:33:47.000Z
|
artifact_evaluation/Jupyter/src/testcase_reducer/simplify.ipynb
|
ZhanyongTang/COMFORT
|
deb335f65342cef4a8b8b3a063132465f5e0143d
|
[
"Apache-2.0"
] | null | null | null |
artifact_evaluation/Jupyter/src/testcase_reducer/simplify.ipynb
|
ZhanyongTang/COMFORT
|
deb335f65342cef4a8b8b3a063132465f5e0143d
|
[
"Apache-2.0"
] | 20 |
2021-04-14T15:17:05.000Z
|
2022-02-15T10:48:00.000Z
| 24.571429 | 118 | 0.591362 |
[
[
[
"import Ipynb_importer\nfrom detection.harness import Harness\nfrom testcase_reducer import reduce_by_block\n\n\ndef simplify(testcase, with_output_info=False):\n harness = Harness()\n harness_result = harness.run_testcase(testcase=testcase)\n simplified_testcase = reduce_by_block.simple_by_block(harness_result, with_output_info=with_output_info)\n # 测试用例已经最简,无法被继续精简\n if simplified_testcase is None:\n return harness_result.testcase\n else:\n return simplified_testcase\n",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code"
]
] |
d0f0a57d1aed569f29f316e9aa07e4d48f283231
| 36,017 |
ipynb
|
Jupyter Notebook
|
Notebooks/Part_7/Vartional Neural Inference Localization/Neural Variational Inference Localization Problem.ipynb
|
olmosUC3M/Introduction-to-Tensor-Flow-and-Deep-Learning
|
3d173606f273f6b3e2bf3cbdccea1c4fe59af71f
|
[
"MIT"
] | 4 |
2018-03-05T14:19:15.000Z
|
2020-09-13T23:53:08.000Z
|
Notebooks/Part_7/Vartional Neural Inference Localization/Neural Variational Inference Localization Problem.ipynb
|
olmosUC3M/Introduction-to-Tensor-Flow-and-Deep-Learning
|
3d173606f273f6b3e2bf3cbdccea1c4fe59af71f
|
[
"MIT"
] | null | null | null |
Notebooks/Part_7/Vartional Neural Inference Localization/Neural Variational Inference Localization Problem.ipynb
|
olmosUC3M/Introduction-to-Tensor-Flow-and-Deep-Learning
|
3d173606f273f6b3e2bf3cbdccea1c4fe59af71f
|
[
"MIT"
] | 1 |
2022-03-31T20:26:47.000Z
|
2022-03-31T20:26:47.000Z
| 91.413706 | 13,296 | 0.806925 |
[
[
[
"# Amortized Neural Variational Inference for a toy probabilistic model\n\n\nConsider a certain number of sensors placed at known locations, $\\mathbf{s}_1,\\mathbf{s}_2,\\ldots,\\mathbf{s}_L$. There is a target at an unknown position $\\mathbf{z}\\in\\mathbb{R}^2$ that is emitting a certain signal that is received at the $i$-th sensor with a signal strength distributed as follows:\n\n\\begin{align}\nx_i \\sim \\mathcal{N}\\Big(- A \\log\\left(||\\mathbf{s}_i-\\mathbf{z} ||^2\\right), \\sigma^2\\Big),\n\\end{align}\n\nwhere $A$ is a constant related to how fast signal strength degrades with distance. We assume a Gaussian prior for the unknown position $\\mathcal{N}(\\mathbf{0},\\mathbf{I})$. Given a set of $N$ i.i.d. samples for each sensor, $\\mathbf{X}\\in\\mathbb{R}^{L\\times N}$, we will use a Amortized Neural Variational Inference to find a Gaussian approximation to \n\n\\begin{align}\np(\\mathbf{z}|\\mathbf{X}) \\propto p(\\mathbf{X}|\\mathbf{z}) p(\\mathbf{z})\n\\end{align}\n\nOur approximation to $p(\\mathbf{z}|\\mathbf{X})$ is of the form\n\\begin{align}\np(\\mathbf{z}|\\mathbf{X}) \\approx q(\\mathbf{z}|\\mathbf{X})=\\mathcal{N}\\Big(\\mu(\\mathbf{X}),\\Sigma(\\mathbf{X})\\Big),\n\\end{align}\nwhere\n\n- $\\mu(\\mathbf{X})$ --> Given by a Neural Network with parameter vector $\\theta$ and input $\\mathbf{X}$\n\n- $\\Sigma(\\mathbf{X})$ --> Diagonal covariance matrix, where the log of the main diagonal is constructed by a Neural Network with parameter vector $\\gamma$ and input $\\mathbf{X}$",
"_____no_output_____"
],
[
"## ELBO lower-bound to $p(\\mathbf{X})$\n\nWe will optimize $q(\\mathbf{z}|\\mathbf{X})$ w.r.t. $\\theta,\\gamma$ by optimizing the Evidence-Lower-Bound (ELBO):\n\n\\begin{align}\np(\\mathbf{X}) &= \\int p(\\mathbf{X}|\\mathbf{z}) p(\\mathbf{z}) d\\mathbf{z}\\\\\n&\\geq \\int q(\\mathbf{X}|\\mathbf{z}) \\log \\left(\\frac{p(\\mathbf{X},\\mathbf{z})}{q(\\mathbf{X}|\\mathbf{z})}\\right)d\\mathbf{z}\\\\\n& = \\mathbb{E}_{q}\\left[\\log p(\\mathbf{X}|\\mathbf{z})\\right] - D_{KL}(q(\\mathbf{z}|\\mathbf{X})||p(\\mathbf{z})\\triangleq \\mathcal{L}(\\mathbf{X},\\theta,\\gamma),\n\\end{align}\nwhere $D_{KL}(q(\\mathbf{z}|\\mathbf{X})||p(\\mathbf{z})$ is known in closed form since it is the KL divergence between two Gaussian pdfs:\n\n\\begin{align}\nD_{KL}(q(\\mathbf{z}|\\mathbf{X})||p(\\mathbf{z})) = \\frac{1}{2} \\left[\\text{tr}\\left(\\Sigma(\\mathbf{X})\\right)+\\left(\\mu(\\mathbf{X})^T\\mu(\\mathbf{X})\\right)-2-\\log\\det \\left(\\Sigma(\\mathbf{X})\\right) \\right]\n\\end{align}\n\n## SGD optimization\n\n- Sample $\\mathbf{\\epsilon}\\sim \\mathcal{N}(\\mathbf{0},\\mathbf{I})$\n- Sample from $q(\\mathbf{z}|\\mathbf{X})$:\n\\begin{align}\n\\mathbf{z}^0 = \\mu(\\mathbf{X}) + \\sqrt{\\text{diag}(\\Sigma(\\mathbf{X}))} \\circ \\mathbf{\\epsilon}\n\\end{align}\n- Compute gradients of \n\\begin{align}\n\\hat{\\mathcal{L}}(\\mathbf{X},\\theta,\\gamma) =\\log p(\\mathbf{X}|\\mathbf{z}^0) - D_{KL}(q(\\mathbf{z}|\\mathbf{X})||p(\\mathbf{z})\n\\end{align}\nw.r.t. $\\theta,\\gamma$\n\n\n",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\n%matplotlib inline\n\n# use seaborn plotting defaults\nimport seaborn as sns; sns.set()",
"/Users/olmos/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n from ._conv import register_converters as _register_converters\n"
]
],
[
[
"### Probabilistic model definition and generating samples",
"_____no_output_____"
]
],
[
[
"############## Elements of the true probabilistic model ####################\n\nloc_info = {} \n \nloc_info['S'] = 3 # Number o sensors\n\nloc_info['pos_s'] = np.array([[0.5,1], [3.5,1], [2,3]]) #Position of sensors\n\n#loc_info['target'] = np.random.uniform(-3,3,[2,]) #(Unknown target position)\n\nloc_info['target'] = np.array([-1,2]) #(Unknown target position)\n\nloc_info['var_s'] = 5.*np.ones(loc_info['S']).reshape([loc_info['S'],1]) #Variance of sensors\n\nloc_info['A'] = np.ones(loc_info['S'],dtype=np.float32) * 10.0 #Attenuation mean factor per sensor\n\nloc_info['N'] = 5 # Number of measurements per sensor\n\ndef sample_X(S,M,z,pos_s,A,var_s):\n \n means = -1*A*np.log(np.sum((pos_s-z)**2,1))\n \n X = means.reshape([S,1]) + np.random.randn(S,M) * np.sqrt(var_s)\n \n return X\n",
"_____no_output_____"
],
[
"# Sampling from model for the right target\n \nX = sample_X(loc_info['S'],loc_info['N'], loc_info['target'],loc_info['pos_s'],loc_info['A'],loc_info['var_s'])",
"_____no_output_____"
],
[
"plt.plot(loc_info['pos_s'][:,0],loc_info['pos_s'][:,1],'b>',label='Sensors',ms=15)\nplt.plot(loc_info['target'][0],loc_info['target'][1],'ro',label='Target',ms=15)\nplt.legend()",
"_____no_output_____"
]
],
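[
[
"### Sanity-checking the estimator in NumPy\n\nBefore building the TensorFlow graph, it can help to evaluate a one-sample ELBO estimate with plain NumPy. The sketch below is an illustration only (the helper name and its variables are made up for this note and are not part of the notebook code): it draws a reparameterized sample $\mathbf{z}^0$, evaluates the Gaussian log-likelihood of the measurements under the sensor model defined above, and subtracts the closed-form KL term.\n\n```\nimport numpy as np\n\ndef elbo_single_sample(X, mu, log_var, pos_s, A, var_s):\n    # Reparameterization: z0 = mu + sqrt(diag(Sigma)) * eps, with eps ~ N(0, I)\n    eps = np.random.randn(2)\n    z0 = mu + np.exp(0.5 * log_var) * eps\n\n    # log p(X | z0): sensor i has mean -A_i * log(||s_i - z0||^2) and variance var_s_i\n    means = -A * np.log(np.sum((pos_s - z0) ** 2, axis=1))\n    loglik = -0.5 * np.sum((X - means[:, None]) ** 2 / var_s + np.log(2 * np.pi * var_s))\n\n    # Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) )\n    KL = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)\n\n    return loglik - KL\n```\n\nFor instance, `elbo_single_sample(X, np.zeros(2), np.zeros(2), loc_info['pos_s'], loc_info['A'], loc_info['var_s'])` evaluates the estimator with the variational posterior fixed at the prior.",
"_____no_output_____"
]
],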
[
[
"### TensorFlow Computation Graph and Loss Function",
"_____no_output_____"
]
],
[
[
"z_dim = 2 #Latent Space\n\nmodel_name = 'model1' #In 'model1.py' we define the variational family\n\nlearning_rate = 1e-2\nnum_samples_avg = 1 #Number of samples to approximate the expectation in the ELBO\nnum_samples = 10 #Number of samples from the posterior (for testing)\nnum_it = int(1e4) #SGD iterations\nperiod_plot = int(1000) #Show resuts every period_plot iterations\ndims = X.shape #X.shape\n",
"_____no_output_____"
],
[
"sess_VAE = tf.Graph()\n\nwith sess_VAE.as_default():\n \n print('[*] Importing model: ' + model_name)\n model = __import__(model_name)\n \n print('[*] Defining placeholders')\n\n inputX = tf.placeholder(tf.float32, shape=dims, name='x-input')\n \n print('[*] Defining the encoder')\n log_var, mean, samples_z, KL = model.encoder(inputX,dims,z_dim,num_samples_avg)\n \n print('[*] Defining the log_likelyhood')\n \n loglik = model.decoder(loc_info,inputX,samples_z,num_samples_avg) \n \n loss = -(loglik-KL)\n \n optim = tf.train.AdamOptimizer(learning_rate).minimize(loss)\n \n # Output dictionary -> Useful if computation graph is defined in a separate .py file\n \n tf_nodes = {}\n \n tf_nodes['X'] = inputX\n \n tf_nodes['mean'] = mean\n \n tf_nodes['logvar'] = log_var\n\n tf_nodes['KL'] = KL\n \n tf_nodes['loglik'] = loglik\n \n tf_nodes['optim'] = optim\n \n tf_nodes['samples'] = samples_z\n ",
"[*] Importing model: model1\n[*] Defining placeholders\n[*] Defining the encoder\n[*] Defining the log_likelyhood\n"
]
],
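[
[
"### What `model1.py` roughly contains\n\nThe file `model1.py` is not reproduced in this notebook. As a rough, illustrative sketch only (layer widths, names and details here are assumptions, not the actual file), an encoder/decoder pair with the interface used by the graph above could look like this in TensorFlow 1.x:\n\n```\nimport numpy as np\nimport tensorflow as tf\n\ndef encoder(X, dims, z_dim, n_samples):\n    # Map the S x N measurement matrix to the parameters of q(z|X)\n    x_flat = tf.reshape(X, [1, dims[0] * dims[1]])\n    h = tf.layers.dense(x_flat, 50, activation=tf.nn.relu)\n    mean = tf.layers.dense(h, z_dim, activation=None)     # mu(X)\n    log_var = tf.layers.dense(h, z_dim, activation=None)  # log of diag(Sigma(X))\n\n    # Reparameterized samples from q(z|X)\n    eps = tf.random_normal([n_samples, z_dim])\n    samples_z = mean + tf.exp(0.5 * log_var) * eps\n\n    # Closed-form KL(q(z|X) || N(0, I))\n    KL = 0.5 * tf.reduce_sum(tf.exp(log_var) + tf.square(mean) - 1.0 - log_var)\n    return log_var, mean, samples_z, KL\n\ndef decoder(loc_info, X, samples_z, n_samples):\n    # Monte Carlo estimate of E_q[log p(X|z)] under the sensor model\n    pos_s = tf.constant(loc_info['pos_s'], dtype=tf.float32)\n    var_s = tf.constant(loc_info['var_s'], dtype=tf.float32)\n    A = tf.constant(loc_info['A'], dtype=tf.float32)\n    log2pi = float(np.log(2.0 * np.pi))\n\n    loglik = 0.0\n    for s in range(n_samples):\n        d2 = tf.reduce_sum(tf.square(pos_s - samples_z[s]), axis=1)\n        means = tf.expand_dims(-A * tf.log(d2), 1)\n        loglik += -0.5 * tf.reduce_sum(tf.square(X - means) / var_s + tf.log(var_s) + log2pi)\n    return loglik / n_samples\n```\n\nThe actual file may differ (e.g. other layer sizes, or additive constants dropped from the log-likelihood), but it returns the tuple `(log_var, mean, samples_z, KL)` and a scalar `loglik`, which is what the graph definition above expects.",
"_____no_output_____"
]
],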
[
[
"## SGD optimization",
"_____no_output_____"
]
],
[
[
" \n############ SGD Inference #####################################\n\nmean_list = []\n \nwith tf.Session(graph=sess_VAE) as session:\n \n # Add ops to save and restore all the variables.\n saver = tf.train.Saver()\n\n tf.global_variables_initializer().run()\n \n print('Training the VAE ...') \n \n for it in range(num_it):\n\n feedDict = {tf_nodes['X'] : X} \n\n _= session.run(tf_nodes['optim'],feedDict)\n\n \n if(it % period_plot ==0):\n \n mean, logvar,loglik,KL = session.run([tf_nodes['mean'],tf_nodes['logvar'],tf_nodes['loglik'],tf_nodes['KL']],feedDict)\n\n print(\"It = %d, loglik = %.5f, KL = %.5f\" %(it,loglik,KL))\n\n mean_list.append(mean)\n\n \n samples = session.run(tf_nodes['samples'],feedDict)\n",
"Training the VAE ...\nIt = 0, loglik = -50926.40625, KL = 1.63167\nIt = 1000, loglik = -113.08937, KL = 7.00147\nIt = 2000, loglik = -115.41471, KL = 7.60987\nIt = 3000, loglik = -115.16046, KL = 7.83914\nIt = 4000, loglik = -112.18390, KL = 8.10726\nIt = 5000, loglik = -113.60258, KL = 8.09174\nIt = 6000, loglik = -134.30858, KL = 8.19208\nIt = 7000, loglik = -113.80139, KL = 8.34016\nIt = 8000, loglik = -323.55902, KL = 7.21795\nIt = 9000, loglik = -118.62814, KL = 8.40690\n"
],
[
"#Samples from q(z|x)\nm_evol = np.vstack(mean_list)\nnsamples = 50\n\nsamples = mean + np.sqrt(np.exp(logvar)) * np.random.randn(nsamples,2)\n\nplt.plot(loc_info['pos_s'][:,0],loc_info['pos_s'][:,1],'b>',label='Sensors',ms=15)\nplt.plot(loc_info['target'][0],loc_info['target'][1],'ro',label='Target',ms=15)\nplt.plot(m_evol[:,0],m_evol[:,1],'g>',label='Post Mean')\nplt.scatter(samples[:,0],samples[:,1],label='Post Samples')\nplt.rcParams[\"figure.figsize\"] = [8,8]\nplt.legend()",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0f0aa66c1c0a43b88f6d11a9bde8dc03065101e
| 7,269 |
ipynb
|
Jupyter Notebook
|
locale/examples/02-plot/cmap.ipynb
|
tkoyama010/pyvista-doc-translations
|
23bb813387b7f8bfe17e86c2244d5dd2243990db
|
[
"MIT"
] | 4 |
2020-08-07T08:19:19.000Z
|
2020-12-04T09:51:11.000Z
|
locale/examples/02-plot/cmap.ipynb
|
tkoyama010/pyvista-doc-translations
|
23bb813387b7f8bfe17e86c2244d5dd2243990db
|
[
"MIT"
] | 19 |
2020-08-06T00:24:30.000Z
|
2022-03-30T19:22:24.000Z
|
locale/examples/02-plot/cmap.ipynb
|
tkoyama010/pyvista-doc-translations
|
23bb813387b7f8bfe17e86c2244d5dd2243990db
|
[
"MIT"
] | 1 |
2021-03-09T07:50:40.000Z
|
2021-03-09T07:50:40.000Z
| 38.871658 | 676 | 0.569679 |
[
[
[
"%matplotlib inline\nfrom pyvista import set_plot_theme\nset_plot_theme('document')",
"_____no_output_____"
]
],
[
[
"Colormap Choices {#colormap_example}\n================\n\nUse a Matplotlib, Colorcet, cmocean, or custom colormap when plotting\nscalar values.\n",
"_____no_output_____"
]
],
[
[
"from pyvista import examples\nimport pyvista as pv\nimport matplotlib.pyplot as plt\nfrom matplotlib.colors import ListedColormap\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"Any colormap built for `matplotlib`, `colorcet`, or `cmocean` is fully\ncompatible with PyVista. Colormaps are typically specified by passing\nthe string name of the colormap to the plotting routine via the `cmap`\nargument.\n\nSee [Matplotlib\\'s complete list of available\ncolormaps](https://matplotlib.org/tutorials/colors/colormaps.html),\n[Colorcet\\'s complete\nlist](https://colorcet.holoviz.org/user_guide/index.html), and\n[cmocean\\'s complete list](https://matplotlib.org/cmocean/).\n",
"_____no_output_____"
],
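[
"For example, for any mesh that has an active scalar array, a colormap can be chosen simply by\nname. The snippet below is an illustrative aside (not part of the original example); the\nColorcet and cmocean names should only resolve if those packages are installed:\n\n```\nmesh.plot(cmap='viridis')   # a Matplotlib colormap\nmesh.plot(cmap='fire')      # a Colorcet colormap (requires colorcet)\nmesh.plot(cmap='thermal')   # a cmocean colormap (requires cmocean)\n```\n",
"_____no_output_____"
],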
[
"Custom Made Colormaps\n=====================\n\nTo get started using a custom colormap, download some data with scalar\nvalues to plot.\n",
"_____no_output_____"
]
],
[
[
"mesh = examples.download_st_helens().warp_by_scalar()\n# Add scalar array with range (0, 100) that correlates with elevation\nmesh['values'] = pv.plotting.normalize(mesh['Elevation']) * 100",
"_____no_output_____"
]
],
[
[
"Build a custom colormap - here we make a colormap with 5 discrete colors\nand we specify the ranges where those colors fall:\n",
"_____no_output_____"
]
],
[
[
"# Define the colors we want to use\nblue = np.array([12/256, 238/256, 246/256, 1])\nblack = np.array([11/256, 11/256, 11/256, 1])\ngrey = np.array([189/256, 189/256, 189/256, 1])\nyellow = np.array([255/256, 247/256, 0/256, 1])\nred = np.array([1, 0, 0, 1])\n\nmapping = np.linspace(mesh['values'].min(), mesh['values'].max(), 256)\nnewcolors = np.empty((256, 4))\nnewcolors[mapping >= 80] = red\nnewcolors[mapping < 80] = grey\nnewcolors[mapping < 55] = yellow\nnewcolors[mapping < 30] = blue\nnewcolors[mapping < 1] = black\n\n# Make the colormap from the listed colors\nmy_colormap = ListedColormap(newcolors)",
"_____no_output_____"
]
],
[
[
"Simply pass the colormap to the plotting routine!\n",
"_____no_output_____"
]
],
[
[
"mesh.plot(scalars='values', cmap=my_colormap)",
"_____no_output_____"
]
],
[
[
"Or you could make a simple colormap\\... any Matplotlib colormap can be\npassed to PyVista!\n",
"_____no_output_____"
]
],
[
[
"boring_cmap = plt.cm.get_cmap(\"viridis\", 5)\nmesh.plot(scalars='values', cmap=boring_cmap)",
"_____no_output_____"
]
],
[
[
"You can also pass a list of color strings to the color map. This\napproach divides up the colormap into 5 equal parts.\n",
"_____no_output_____"
]
],
[
[
"mesh.plot(scalars=mesh['values'], cmap=['black', 'blue', 'yellow', 'grey', 'red'])",
"_____no_output_____"
]
],
[
[
"If you still wish to have control of the separation of values, you can\ndo this by creating a scalar array and passing that to the plotter along\nwith the the colormap\n",
"_____no_output_____"
]
],
[
[
"scalars = np.empty(mesh.n_points)\nscalars[mesh['values'] >= 80] = 4 # red\nscalars[mesh['values'] < 80] = 3 # grey\nscalars[mesh['values'] < 55] = 2 # yellow\nscalars[mesh['values'] < 30] = 1 # blue\nscalars[mesh['values'] < 1] = 0 # black\n\nmesh.plot(scalars=scalars, cmap=['black', 'blue', 'yellow', 'grey', 'red'])",
"_____no_output_____"
]
],
[
[
"Matplotlib vs. Colorcet\n=======================\n\nLet\\'s compare Colorcet\\'s perceptually uniform \\\"fire\\\" colormap to\nMatplotlib\\'s \\\"hot\\\" colormap much like the example on the [first page\nof Colorcet\\'s docs](https://colorcet.holoviz.org/index.html).\n\nThe \\\"hot\\\" version washes out detail at the high end, as if the image\nis overexposed, while \\\"fire\\\" makes detail visible throughout the data\nrange.\n\nPlease note that in order to use Colorcet\\'s colormaps including\n\\\"fire\\\", you must have Colorcet installed in your Python environment:\n`pip install colorcet`\n",
"_____no_output_____"
]
],
[
[
"p = pv.Plotter(shape=(2, 2), border=False)\np.subplot(0, 0)\np.add_mesh(mesh, scalars='Elevation', cmap=\"fire\",\n lighting=True, scalar_bar_args={'title': \"Colorcet Fire\"})\n\np.subplot(0, 1)\np.add_mesh(mesh, scalars='Elevation', cmap=\"fire\",\n lighting=False, scalar_bar_args={'title': \"Colorcet Fire (No Lighting)\"})\n\np.subplot(1, 0)\np.add_mesh(mesh, scalars='Elevation', cmap=\"hot\",\n lighting=True, scalar_bar_args={'title': \"Matplotlib Hot\"})\n\np.subplot(1, 1)\np.add_mesh(mesh, scalars='Elevation', cmap=\"hot\",\n lighting=False, scalar_bar_args={'title': \"Matplotlib Hot (No Lighting)\"})\n\np.show()",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0f0ac066d17019991c6b2eb0171f08a2f3dcdbb
| 3,091 |
ipynb
|
Jupyter Notebook
|
Web Scraping (Beautiful Soup, Scrapy, Selenium)/webScraping_Day39/Scrape-OldWebsite/solution.ipynb
|
pooja-gera/TheWireUsChallenge
|
18abb5ff3fd31b7dbfef41b8008f91d3fac029d3
|
[
"MIT"
] | null | null | null |
Web Scraping (Beautiful Soup, Scrapy, Selenium)/webScraping_Day39/Scrape-OldWebsite/solution.ipynb
|
pooja-gera/TheWireUsChallenge
|
18abb5ff3fd31b7dbfef41b8008f91d3fac029d3
|
[
"MIT"
] | null | null | null |
Web Scraping (Beautiful Soup, Scrapy, Selenium)/webScraping_Day39/Scrape-OldWebsite/solution.ipynb
|
pooja-gera/TheWireUsChallenge
|
18abb5ff3fd31b7dbfef41b8008f91d3fac029d3
|
[
"MIT"
] | 1 |
2021-05-21T09:30:41.000Z
|
2021-05-21T09:30:41.000Z
| 27.353982 | 160 | 0.567777 |
[
[
[
"# import necessary libraries\nfrom selenium import webdriver\nimport pandas as pd\nimport time",
"_____no_output_____"
],
[
"# make a web driver object\ndriver=webdriver.Chrome(\"/Users/jappanjeetsingh/Downloads/Drivers/chromedriver\")",
"_____no_output_____"
],
[
"# make call to the url \ndriver.get(\"http://www2.scc.rutgers.edu/memdb/search_form_metzpr.php\")",
"_____no_output_____"
],
[
"data=[]\nfor page in range(1,6):\n table=driver.find_element_by_xpath(\"/html/body/form/main/table/tbody/tr/td[2]/p[3]/table\")\n for row in table.find_elements_by_xpath(\".//tr\"):\n data.append([td.text for td in row.find_elements_by_xpath(\".//td\")])\n \n try:\n # More commonly found button to navigate to the next page.\n button = '/html/body/form/main/table/tbody/tr/td[2]/center[3]/input[2]'\n driver.find_element_by_xpath(button).click()\n except:\n # Less commonly found button to navigate to the next page. Using this for every page will navigate you to the previous page which we dont want.\n button = '/html/body/form/main/table/tbody/tr/td[2]/center[3]/input'\n driver.find_element_by_xpath(button).click()\n #optional: waiting to avoid any errors that could occur due to network \n time.sleep(2)",
"_____no_output_____"
],
[
"# create the dataframe using pandas having the required columns and adding the list of data to it.\ndf=pd.DataFrame(data[1:],columns=[\"Num\",\"Year\",\"Month\",\"Week\",\"Product\",\"Malters sold\",\"Currency\",\"Price/Malter\"])",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0f0b4876d19680a451589c4d945d008b268c580
| 80,670 |
ipynb
|
Jupyter Notebook
|
Machine_learning/ARIMA/5_year_pred/emp_pred_2019_2024.ipynb
|
jseverin1984/Emerging_Cities_2024
|
f6cf406b315c697227add716ce7a2226fd2b7cf4
|
[
"MIT"
] | null | null | null |
Machine_learning/ARIMA/5_year_pred/emp_pred_2019_2024.ipynb
|
jseverin1984/Emerging_Cities_2024
|
f6cf406b315c697227add716ce7a2226fd2b7cf4
|
[
"MIT"
] | null | null | null |
Machine_learning/ARIMA/5_year_pred/emp_pred_2019_2024.ipynb
|
jseverin1984/Emerging_Cities_2024
|
f6cf406b315c697227add716ce7a2226fd2b7cf4
|
[
"MIT"
] | null | null | null | 47.677305 | 225 | 0.534734 |
[
[
[
"import pandas as pd\nfrom statsmodels.tsa.arima.model import ARIMA\nimport pymongo\nfrom pymongo import MongoClient",
"_____no_output_____"
],
[
"# Connection to mongo\nclient = MongoClient('mongodb+srv://<username>:<password>@cluster0.l3pqt.mongodb.net/MSA?retryWrites=true&w=majority')\n# Select database\ndb = client['MSA']\n# see list of collections\nclient.MSA.list_collection_names()",
"_____no_output_____"
],
[
"# Select the collection with needed data\nemp = db.Employment_clean\ndf = pd.DataFrame(list(emp.find()))\nprint(df.dtypes)\ndf",
"_id object\nCBSA int64\n2010 int64\n2011 int64\n2012 int64\n2013 int64\n2014 int64\n2015 int64\n2016 int64\n2017 int64\n2018 int64\n2019 int64\ndtype: object\n"
],
[
"df.drop(columns='_id', inplace=True)\ndf",
"_____no_output_____"
],
[
"# for loop to predict 2024 values\nprediction2020 = []\nprediction2021 = []\nprediction2022 = []\nprediction2023 = []\nprediction2024 = []\nfor i in range(0,384):\n y = df.iloc[i, 1:].values\n series = pd.Series(y, dtype='int')\n model = ARIMA(series, order=(2, 1, 1))\n model_fit = model.fit()\n pred = model_fit.forecast(5)\n forecast = pred.values.tolist()\n prediction2020.append(forecast[0])\n prediction2021.append(forecast[1])\n prediction2022.append(forecast[2])\n prediction2023.append(forecast[3])\n prediction2024.append(forecast[4])\ndf['2020'] = prediction2020\ndf['2021'] = prediction2021\ndf['2022'] = prediction2022\ndf['2023'] = prediction2023\ndf['2024'] = prediction2024",
"C:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\tsa\\statespace\\sarimax.py:966: UserWarning: Non-stationary starting autoregressive parameters found. Using zeros as starting parameters.\n warn('Non-stationary starting autoregressive parameters'\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\tsa\\statespace\\sarimax.py:978: UserWarning: Non-invertible starting MA parameters found. Using zeros as starting parameters.\n warn('Non-invertible starting MA parameters found.'\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. 
Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. 
Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\nC:\\Users\\Joshua\\anaconda3\\envs\\mlenv\\lib\\site-packages\\statsmodels\\base\\model.py:568: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n ConvergenceWarning)\n"
],
[
"df",
"_____no_output_____"
],
[
"df['2020'] = df['2020'].astype('int')\ndf['2021'] = df['2021'].astype('int')\ndf['2022'] = df['2022'].astype('int')\ndf['2023'] = df['2023'].astype('int')\ndf['2024'] = df['2024'].astype('int')\ndf.head()",
"_____no_output_____"
],
[
"df.sort_values(by='2024').head()",
"_____no_output_____"
],
[
"# create new collection in mongo\narima_emp_2019_2024 = db.arima_emp_2019_2024",
"_____no_output_____"
],
[
"df_dict = df.to_dict(orient='records')",
"_____no_output_____"
],
[
"arima_emp_2019_2024.insert_many(df_dict)",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0f0b6f24e0fa5df59e31c049ebe6585e04d70ec
| 163,548 |
ipynb
|
Jupyter Notebook
|
faster_generate.ipynb
|
hellovivian/text2image
|
9a2f7c140c4eca093c70f72bbae37bf91f7093d0
|
[
"MIT"
] | null | null | null |
faster_generate.ipynb
|
hellovivian/text2image
|
9a2f7c140c4eca093c70f72bbae37bf91f7093d0
|
[
"MIT"
] | null | null | null |
faster_generate.ipynb
|
hellovivian/text2image
|
9a2f7c140c4eca093c70f72bbae37bf91f7093d0
|
[
"MIT"
] | null | null | null | 21.471445 | 162 | 0.450204 |
[
[
[
"# Originally made by Katherine Crowson (https://github.com/crowsonkb, https://twitter.com/RiversHaveWings)\n# The original BigGAN+CLIP method was by https://twitter.com/advadnoun\n\nimport math\nimport random\n# from email.policy import default\nfrom urllib.request import urlopen\nfrom tqdm import tqdm\nimport sys\nimport os\nimport flask\n# pip install taming-transformers doesn't work with Gumbel, but does not yet work with coco etc\n# appending the path does work with Gumbel, but gives ModuleNotFoundError: No module named 'transformers' for coco etc\nsys.path.append('taming-transformers')\nfrom itertools import product\n\n\nfrom omegaconf import OmegaConf\nfrom taming.models import cond_transformer, vqgan\n\nimport torch\nfrom torch import nn, optim\nfrom torch.nn import functional as F\nfrom torchvision import transforms\nfrom torchvision.transforms import functional as TF\nfrom torch.cuda import get_device_properties\ntorch.backends.cudnn.benchmark = False\t\t# NR: True is a bit faster, but can lead to OOM. False is more deterministic.\n#torch.use_deterministic_algorithms(True)\t# NR: grid_sampler_2d_backward_cuda does not have a deterministic implementation\n\nfrom torch_optimizer import DiffGrad, AdamP, RAdam\n\nfrom CLIP import clip\nimport kornia.augmentation as K\nimport numpy as np\nimport imageio\n\nfrom PIL import ImageFile, Image, PngImagePlugin, ImageChops\nImageFile.LOAD_TRUNCATED_IMAGES = True\n\nfrom subprocess import Popen, PIPE\nimport re\n# Supress warnings\nimport warnings\nwarnings.filterwarnings('ignore')\n\n\n# Various functions and classes\ndef sinc(x):\n\treturn torch.where(x != 0, torch.sin(math.pi * x) / (math.pi * x), x.new_ones([]))\n\n\ndef lanczos(x, a):\n\tcond = torch.logical_and(-a < x, x < a)\n\tout = torch.where(cond, sinc(x) * sinc(x/a), x.new_zeros([]))\n\treturn out / out.sum()\n\n\ndef ramp(ratio, width):\n\tn = math.ceil(width / ratio + 1)\n\tout = torch.empty([n])\n\tcur = 0\n\tfor i in range(out.shape[0]):\n\t\tout[i] = cur\n\t\tcur += ratio\n\treturn torch.cat([-out[1:].flip([0]), out])[1:-1]\n\n\nclass ReplaceGrad(torch.autograd.Function):\n\t@staticmethod\n\tdef forward(ctx, x_forward, x_backward):\n\t\tctx.shape = x_backward.shape\n\t\treturn x_forward\n\n\t@staticmethod\n\tdef backward(ctx, grad_in):\n\t\treturn None, grad_in.sum_to_size(ctx.shape)\n\n\nclass ClampWithGrad(torch.autograd.Function):\n\t@staticmethod\n\tdef forward(ctx, input, min, max):\n\t\tctx.min = min\n\t\tctx.max = max\n\t\tctx.save_for_backward(input)\n\t\treturn input.clamp(min, max)\n\n\t@staticmethod\n\tdef backward(ctx, grad_in):\n\t\tinput, = ctx.saved_tensors\n\t\treturn grad_in * (grad_in * (input - input.clamp(ctx.min, ctx.max)) >= 0), None, None\n\n\ndef vector_quantize(x, codebook):\n\td = x.pow(2).sum(dim=-1, keepdim=True) + codebook.pow(2).sum(dim=1) - 2 * x @ codebook.T\n\tindices = d.argmin(-1)\n\tx_q = F.one_hot(indices, codebook.shape[0]).to(d.dtype) @ codebook\n\treturn replace_grad(x_q, x)\n\n\nclass Prompt(nn.Module):\n\tdef __init__(self, embed, weight=1., stop=float('-inf')):\n\t\tsuper().__init__()\n\t\tself.register_buffer('embed', embed)\n\t\tself.register_buffer('weight', torch.as_tensor(weight))\n\t\tself.register_buffer('stop', torch.as_tensor(stop))\n\n\tdef forward(self, input):\n\t\tinput_normed = F.normalize(input.unsqueeze(1), dim=2)\n\t\tembed_normed = F.normalize(self.embed.unsqueeze(0), dim=2)\n\t\tdists = input_normed.sub(embed_normed).norm(dim=2).div(2).arcsin().pow(2).mul(2)\n\t\tdists = dists * self.weight.sign()\n\t\treturn 
self.weight.abs() * replace_grad(dists, torch.maximum(dists, self.stop)).mean()\n\n#NR: Split prompts and weights\ndef split_prompt(prompt):\n\tvals = prompt.rsplit(':', 2)\n\tvals = vals + ['', '1', '-inf'][len(vals):]\n\treturn vals[0], float(vals[1]), float(vals[2])\n\n\nclass MakeCutouts(nn.Module):\n\tdef __init__(self, cut_size, cutn, cut_pow=1.):\n\t\tsuper().__init__()\n\t\tself.cut_size = cut_size\n\t\tself.cutn = cutn\n\t\tself.cut_pow = cut_pow # not used with pooling\n\t\t\n\t\t# Pick your own augments & their order\n\t\taugment_list = []\n\t\tfor item in augments[0]:\n\t\t\tif item == 'Ji':\n\t\t\t\taugment_list.append(K.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1, hue=0.1, p=0.7))\n\t\t\telif item == 'Sh':\n\t\t\t\taugment_list.append(K.RandomSharpness(sharpness=0.3, p=0.5))\n\t\t\telif item == 'Gn':\n\t\t\t\taugment_list.append(K.RandomGaussianNoise(mean=0.0, std=1., p=0.5))\n\t\t\telif item == 'Pe':\n\t\t\t\taugment_list.append(K.RandomPerspective(distortion_scale=0.7, p=0.7))\n\t\t\telif item == 'Ro':\n\t\t\t\taugment_list.append(K.RandomRotation(degrees=15, p=0.7))\n\t\t\telif item == 'Af':\n\t\t\t\taugment_list.append(K.RandomAffine(degrees=15, translate=0.1, shear=5, p=0.7, padding_mode='zeros', keepdim=True)) # border, reflection, zeros\n\t\t\telif item == 'Et':\n\t\t\t\taugment_list.append(K.RandomElasticTransform(p=0.7))\n\t\t\telif item == 'Ts':\n\t\t\t\taugment_list.append(K.RandomThinPlateSpline(scale=0.8, same_on_batch=True, p=0.7))\n\t\t\telif item == 'Cr':\n\t\t\t\taugment_list.append(K.RandomCrop(size=(self.cut_size,self.cut_size), pad_if_needed=True, padding_mode='reflect', p=0.5))\n\t\t\telif item == 'Er':\n\t\t\t\taugment_list.append(K.RandomErasing(scale=(.1, .4), ratio=(.3, 1/.3), same_on_batch=True, p=0.7))\n\t\t\telif item == 'Re':\n\t\t\t\taugment_list.append(K.RandomResizedCrop(size=(self.cut_size,self.cut_size), scale=(0.1,1), ratio=(0.75,1.333), cropping_mode='resample', p=0.5))\n\t\t\t\t\n\t\tself.augs = nn.Sequential(*augment_list)\n\t\tself.noise_fac = 0.1\n\n\t\t# Pooling\n\t\tself.av_pool = nn.AdaptiveAvgPool2d((self.cut_size, self.cut_size))\n\t\tself.max_pool = nn.AdaptiveMaxPool2d((self.cut_size, self.cut_size))\n\n\tdef forward(self, input):\n\t\tcutouts = []\n\t\t\n\t\tfor _ in range(self.cutn): \n\t\t\t# Use Pooling\n\t\t\tcutout = (self.av_pool(input) + self.max_pool(input))/2\n\t\t\tcutouts.append(cutout)\n\t\t\t\n\t\tbatch = self.augs(torch.cat(cutouts, dim=0))\n\t\t\n\t\tif self.noise_fac:\n\t\t\tfacs = batch.new_empty([self.cutn, 1, 1, 1]).uniform_(0, self.noise_fac)\n\t\t\tbatch = batch + facs * torch.randn_like(batch)\n\t\treturn batch\n\n\ndef load_vqgan_model(config_path, checkpoint_path):\n\tglobal gumbel\n\tgumbel = False\n\tconfig = OmegaConf.load(config_path)\n\tif config.model.target == 'taming.models.vqgan.VQModel':\n\t\tmodel = vqgan.VQModel(**config.model.params)\n\t\tmodel.eval().requires_grad_(False)\n\t\tmodel.init_from_ckpt(checkpoint_path)\n\telif config.model.target == 'taming.models.vqgan.GumbelVQ':\n\t\tmodel = vqgan.GumbelVQ(**config.model.params)\n\t\tmodel.eval().requires_grad_(False)\n\t\tmodel.init_from_ckpt(checkpoint_path)\n\t\tgumbel = True\n\telif config.model.target == 'taming.models.cond_transformer.Net2NetTransformer':\n\t\tparent_model = cond_transformer.Net2NetTransformer(**config.model.params)\n\t\tparent_model.eval().requires_grad_(False)\n\t\tparent_model.init_from_ckpt(checkpoint_path)\n\t\tmodel = parent_model.first_stage_model\n\telse:\n\t\traise ValueError(f'unknown model type: 
{config.model.target}')\n\tdel model.loss\n\treturn model\n\n\ndef resize_image(image, out_size):\n\tratio = image.size[0] / image.size[1]\n\tarea = min(image.size[0] * image.size[1], out_size[0] * out_size[1])\n\tsize = round((area * ratio)**0.5), round((area / ratio)**0.5)\n\treturn image.resize(size, Image.LANCZOS)\n\n# Set the optimiser\ndef get_opt(opt_name, opt_lr ,z):\n\tif opt_name == \"Adam\":\n\t\topt = optim.Adam([z], lr=opt_lr)\t# LR=0.1 (Default)\n\telif opt_name == \"AdamW\":\n\t\topt = optim.AdamW([z], lr=opt_lr)\t\n\telif opt_name == \"Adagrad\":\n\t\topt = optim.Adagrad([z], lr=opt_lr)\t\n\telif opt_name == \"Adamax\":\n\t\topt = optim.Adamax([z], lr=opt_lr)\t\n\telif opt_name == \"DiffGrad\":\n\t\topt = DiffGrad([z], lr=opt_lr, eps=1e-9, weight_decay=1e-9) # NR: Playing for reasons\n\telif opt_name == \"AdamP\":\n\t\topt = AdamP([z], lr=opt_lr)\t\t \n\telif opt_name == \"RAdam\":\n\t\topt = RAdam([z], lr=opt_lr)\t\t \n\telif opt_name == \"RMSprop\":\n\t\topt = optim.RMSprop([z], lr=opt_lr)\n\telse:\n\t\tprint(\"Unknown optimiser. Are choices broken?\")\n\t\topt = optim.Adam([z], lr=opt_lr)\n\treturn opt\n\n\"\"\"\nTakes in a latent \n\"\"\"\n# Vector quantize\ndef synth(z):\n\tz_q = vector_quantize(z.movedim(1, 3), model.quantize.embedding.weight).movedim(3, 1)\n\treturn clamp_with_grad(model.decode(z_q).add(1).div(2), 0, 1)\n\n\"\"\"\nWrites the loss\nsynthesizes \nSaves the output\n\"\"\"\[email protected]_grad()\ndef checkin(i, losses, z, output):\n\tlosses_str = ', '.join(f'{loss.item():g}' for loss in losses)\n\ttqdm.write(f'i: {i}, loss: {sum(losses).item():g}, losses: {losses_str}')\n\tout = synth(z)\n\tinfo = PngImagePlugin.PngInfo()\n# \tinfo.add_text('comment', f'{prompts}')\n\tTF.to_pil_image(out[0].cpu()).save(output, pnginfo=info) \t\n\n\"\"\"\niii is the image\n\"\"\"\ndef ascend_txt(z, pMs):\n\tout = synth(z)\n\tiii = perceptor.encode_image(normalize(make_cutouts(out))).float()\n\tresult = []\n\tfor prompt in pMs:\n\t\tresult.append(prompt(iii))\n\treturn result # return loss\n\n\ndef train(i,z, opt, pMs, output, z_min, z_max):\n\topt.zero_grad(set_to_none=True)\n\tlossAll = ascend_txt(z, pMs)\n\t\n\tif i % display_freq == 0:\n\t\tcheckin(i, lossAll,z, output)\n\t \n\tloss = sum(lossAll)\n\tloss.backward()\n\topt.step()\n\t\n\twith torch.no_grad():\n\t\tz.copy_(z.maximum(z_min).minimum(z_max))\n\ncutn = 32\ncut_pow = 1\noptimizer = 'Adam'\ntorch.backends.cudnn.deterministic = True\naugments = [['Af', 'Pe', 'Ji', 'Er']]\nreplace_grad = ReplaceGrad.apply\nclamp_with_grad = ClampWithGrad.apply\n\ncuda_device = 0\ndevice = torch.device(cuda_device)\nclip_model='ViT-B/32'\nvqgan_config=f'checkpoints/vqgan_imagenet_f16_16384.yaml'\nvqgan_checkpoint=f'checkpoints/vqgan_imagenet_f16_16384.ckpt'\n\n# Do it\ndevice = torch.device(cuda_device)\nmodel = load_vqgan_model(vqgan_config, vqgan_checkpoint).to(device)\njit = True if float(torch.__version__[:3]) < 1.8 else False\nperceptor = clip.load(clip_model, jit=jit)[0].eval().requires_grad_(False).to(device)\n\ncut_size = perceptor.visual.input_resolution\n\nreplace_grad = ReplaceGrad.apply\nclamp_with_grad = ClampWithGrad.apply\nmake_cutouts = MakeCutouts(cut_size, cutn, cut_pow=cut_pow)\ntorch.backends.cudnn.deterministic = True\naugments = [['Af', 'Pe', 'Ji', 'Er']]\noptimizer = 'Adam'\n\nstep_size=0.1\ncutn = 32\ncut_pow = 1\nseed = 64\ndisplay_freq=50\n\n\n\n\n\n",
"Working with z of shape (1, 256, 16, 16) = 65536 dimensions.\nloaded pretrained LPIPS loss from taming/modules/autoencoder/lpips/vgg.pth\nVQLPIPSWithDiscriminator running with hinge loss.\nRestored from checkpoints/vqgan_imagenet_f16_16384.ckpt\n"
],
[
"subjects = [\"blue\", \"cornucopia\", \"pumpkin pie\", \"turkey\", \"family feud\" ]\nstyles = [\"abstract art\", \"collage\", \"computer art\", \"drawing\", \"chalk drawing\", \"charcoal drawing\", \"conte crayon drawing\", \\\n\"pastel drawing\", \"pen and ink drawing\", \"pencil drawing\", \"graffiti art\", \"mosaic art\", \"painting\", \\\n\"acrylic painting\", \"encaustic painting\", \"fresco painting\", \"gouache painting\", \"ink and wash painting\" \\\n\"oil painting\", \"watercolor painting\", \"printmaking\", \"engraving\", \"etching\", \"giclee print\", \\\n\"lithography\", \"screenprinting\", \"woodcut printing\", \"sand art\", \"stained glass art\", \"tapestry art\", \"vector art\", \\\n\"flat illustration\"]\n",
"_____no_output_____"
],
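Before running the long render loop further down, it can be useful to see how many subject/style prompt combinations `itertools.product` will produce, since every combination triggers a full optimisation run. A minimal sketch (assuming the `subjects` and `styles` lists defined above):

```python
from itertools import product

# Every (subject, style) pair becomes one text prompt and one output image.
combos = list(product(subjects, styles))
print(len(combos), "prompts will be rendered")
print(combos[0])  # e.g. ('blue', 'abstract art')
```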
[
"\ndef generate(prompt_string, output_name,iterations = 100, size=(256, 256), seed=16, width=256, height=256):\n\tpMs=[]\n\tprompts = [prompt_string]\n\toutput = output_name\n \n\tfor prompt in prompts:\n\t\ttxt, weight, stop = split_prompt(prompt)\n\t\tembed = perceptor.encode_text(clip.tokenize(txt).to(device)).float()\n\t\tpMs.append(Prompt(embed, weight, stop).to(device))\n\n\tlearning_rate = 0.1\n\n\t# Output for the user\n\tprint('Using device:', device)\n\tprint('Optimising using:', optimizer)\n\tprint('Using text prompts:', prompts) \n\tprint('Using seed:', seed)\n \n\ti = 0\n\n\tf = 2**(model.decoder.num_resolutions - 1)\n\ttoksX, toksY = width // f, height // f\n\tsideX, sideY = toksX * f, toksY * f\n\n\te_dim = model.quantize.e_dim\n\tn_toks = model.quantize.n_e\n\tz_min = model.quantize.embedding.weight.min(dim=0).values[None, :, None, None]\n\tz_max = model.quantize.embedding.weight.max(dim=0).values[None, :, None, None]\n\n\n\tone_hot = F.one_hot(torch.randint(n_toks, [toksY * toksX], device=device), n_toks).float()\n\tz = one_hot @ model.quantize.embedding.weight\n\n\tz = z.view([-1, toksY, toksX, e_dim]).permute(0, 3, 1, 2)\n\tz_orig = z.clone()\n\tz.requires_grad_(True)\n\n\topt = get_opt(optimizer, learning_rate,z)\n\n\tnormalize = transforms.Normalize(mean=[0.48145466, 0.4578275, 0.40821073],\n\t\t\t\t\t\t\t\t\t std=[0.26862954, 0.26130258, 0.27577711])\n\n\twith tqdm() as pbar:\n\t\twhile True: \n\n\t\t\t# Training time\n\t\t\ttrain(i,z, opt, pMs, output_name, z_min, z_max)\n\n\t\t\t# Ready to stop yet?\n\t\t\tif i == iterations:\n\n\t\t\t\tbreak\n\n\t\t\ti += 1\n\t\t\tpbar.update()\n",
"_____no_output_____"
],
[
"normalize = transforms.Normalize(mean=[0.48145466, 0.4578275, 0.40821073],\n std=[0.26862954, 0.26130258, 0.27577711])\n",
"_____no_output_____"
],
[
"for subject, style in product(subjects, styles):\n generate(f\"{subject} in the style of {style}\", output_name=f\"data/{subject}_{style}_100.png\")",
"Using device: cuda:0\nOptimising using: Adam\nUsing text prompts: ['blue in the style of abstract art']\nUsing seed: 16\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code"
]
] |
d0f0b71ea1ba696b69a2c4d684e16bb1d81cc5a3
| 33,607 |
ipynb
|
Jupyter Notebook
|
week01_embeddings/homework.ipynb
|
movb/nlp_course
|
05f58f2eac051883794aac046d4d9a328f29405e
|
[
"MIT"
] | null | null | null |
week01_embeddings/homework.ipynb
|
movb/nlp_course
|
05f58f2eac051883794aac046d4d9a328f29405e
|
[
"MIT"
] | null | null | null |
week01_embeddings/homework.ipynb
|
movb/nlp_course
|
05f58f2eac051883794aac046d4d9a328f29405e
|
[
"MIT"
] | null | null | null | 39.81872 | 382 | 0.534323 |
[
[
[
"## Homework: Multilingual Embedding-based Machine Translation (7 points)",
"_____no_output_____"
],
[
"**In this homework** **<font color='red'>YOU</font>** will make machine translation system without using parallel corpora, alignment, attention, 100500 depth super-cool recurrent neural network and all that kind superstuff.\n\nBut even without parallel corpora this system can be good enough (hopefully). \n\nFor our system we choose two kindred Slavic languages: Ukrainian and Russian. ",
"_____no_output_____"
],
[
"### Feel the difference!\n\n(_синій кіт_ vs. _синій кит_)",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"### Frament of the Swadesh list for some slavic languages\n\nThe Swadesh list is a lexicostatistical stuff. It's named after American linguist Morris Swadesh and contains basic lexis. This list are used to define subgroupings of languages, its relatedness.\n\nSo we can see some kind of word invariance for different Slavic languages.\n\n\n| Russian | Belorussian | Ukrainian | Polish | Czech | Bulgarian |\n|-----------------|--------------------------|-------------------------|--------------------|-------------------------------|-----------------------|\n| женщина | жанчына, кабета, баба | жінка | kobieta | žena | жена |\n| мужчина | мужчына | чоловік, мужчина | mężczyzna | muž | мъж |\n| человек | чалавек | людина, чоловік | człowiek | člověk | човек |\n| ребёнок, дитя | дзіця, дзіцёнак, немаўля | дитина, дитя | dziecko | dítě | дете |\n| жена | жонка | дружина, жінка | żona | žena, manželka, choť | съпруга, жена |\n| муж | муж, гаспадар | чоловiк, муж | mąż | muž, manžel, choť | съпруг, мъж |\n| мать, мама | маці, матка | мати, матір, неня, мама | matka | matka, máma, 'стар.' mateř | майка |\n| отец, тятя | бацька, тата | батько, тато, татусь | ojciec | otec | баща, татко |\n| много | шмат, багата | багато | wiele | mnoho, hodně | много |\n| несколько | некалькі, колькі | декілька, кілька | kilka | několik, pár, trocha | няколко |\n| другой, иной | іншы | інший | inny | druhý, jiný | друг |\n| зверь, животное | жывёла, звер, істота | тварина, звір | zwierzę | zvíře | животно |\n| рыба | рыба | риба | ryba | ryba | риба |\n| птица | птушка | птах, птиця | ptak | pták | птица |\n| собака, пёс | сабака | собака, пес | pies | pes | куче, пес |\n| вошь | вош | воша | wesz | veš | въшка |\n| змея, гад | змяя | змія, гад | wąż | had | змия |\n| червь, червяк | чарвяк | хробак, черв'як | robak | červ | червей |\n| дерево | дрэва | дерево | drzewo | strom, dřevo | дърво |\n| лес | лес | ліс | las | les | гора, лес |\n| палка | кій, палка | палиця | patyk, pręt, pałka | hůl, klacek, prut, kůl, pálka | палка, пръчка, бастун |",
"_____no_output_____"
],
[
"But the context distribution of these languages demonstrates even more invariance. And we can use this fact for our for our purposes.",
"_____no_output_____"
],
[
"## Data",
"_____no_output_____"
]
],
[
[
"import gensim\nimport numpy as np\nfrom gensim.models import KeyedVectors",
"_____no_output_____"
]
],
[
[
"Download embeddings here:\n* [cc.uk.300.vec.zip](https://yadi.sk/d/9CAeNsJiInoyUA)\n* [cc.ru.300.vec.zip](https://yadi.sk/d/3yG0-M4M8fypeQ)",
"_____no_output_____"
],
[
"Load embeddings for ukrainian and russian.",
"_____no_output_____"
]
],
[
[
"uk_emb = KeyedVectors.load_word2vec_format(\"cc.uk.300.vec\")",
"_____no_output_____"
],
[
"ru_emb = KeyedVectors.load_word2vec_format(\"cc.ru.300.vec\")",
"_____no_output_____"
],
[
"ru_emb.most_similar([ru_emb[\"август\"]], topn=10)",
"_____no_output_____"
],
[
"uk_emb.most_similar([uk_emb[\"серпень\"]])",
"_____no_output_____"
],
[
"ru_emb.most_similar([uk_emb[\"серпень\"]])",
"_____no_output_____"
]
],
[
[
"Load small dictionaries for correspoinding words pairs as trainset and testset.",
"_____no_output_____"
]
],
[
[
"def load_word_pairs(filename):\n uk_ru_pairs = []\n uk_vectors = []\n ru_vectors = []\n with open(filename, \"r\") as inpf:\n for line in inpf:\n uk, ru = line.rstrip().split(\"\\t\")\n if uk not in uk_emb or ru not in ru_emb:\n continue\n uk_ru_pairs.append((uk, ru))\n uk_vectors.append(uk_emb[uk])\n ru_vectors.append(ru_emb[ru])\n return uk_ru_pairs, np.array(uk_vectors), np.array(ru_vectors)",
"_____no_output_____"
],
[
"uk_ru_train, X_train, Y_train = load_word_pairs(\"ukr_rus.train.txt\")",
"_____no_output_____"
],
[
"uk_ru_test, X_test, Y_test = load_word_pairs(\"ukr_rus.test.txt\")",
"_____no_output_____"
]
],
[
[
"## Embedding space mapping",
"_____no_output_____"
],
[
"Let $x_i \\in \\mathrm{R}^d$ be the distributed representation of word $i$ in the source language, and $y_i \\in \\mathrm{R}^d$ is the vector representation of its translation. Our purpose is to learn such linear transform $W$ that minimizes euclidian distance between $Wx_i$ and $y_i$ for some subset of word embeddings. Thus we can formulate so-called Procrustes problem:\n\n$$W^*= \\arg\\min_W \\sum_{i=1}^n||Wx_i - y_i||_2$$\nor\n$$W^*= \\arg\\min_W ||WX - Y||_F$$\n\nwhere $||*||_F$ - Frobenius norm.\n\nIn Greek mythology, Procrustes or \"the stretcher\" was a rogue smith and bandit from Attica who attacked people by stretching them or cutting off their legs, so as to force them to fit the size of an iron bed. We make same bad things with source embedding space. Our Procrustean bed is target embedding space.",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"But wait...$W^*= \\arg\\min_W \\sum_{i=1}^n||Wx_i - y_i||_2$ looks like simple multiple linear regression (without intercept fit). So let's code.",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import LinearRegression\n\nmapping = LinearRegression(fit_intercept=False).fit(X_train, Y_train)",
"_____no_output_____"
]
],
[
[
"Let's take a look at neigbours of the vector of word _\"серпень\"_ (_\"август\"_ in Russian) after linear transform.",
"_____no_output_____"
]
],
[
[
"august = mapping.predict(uk_emb[\"серпень\"].reshape(1, -1))\nru_emb.most_similar(august)",
"_____no_output_____"
]
],
[
[
"We can see that neighbourhood of this embedding cosists of different months, but right variant is on the ninth place.",
"_____no_output_____"
],
[
"As quality measure we will use precision top-1, top-5 and top-10 (for each transformed Ukrainian embedding we count how many right target pairs are found in top N nearest neighbours in Russian embedding space).",
"_____no_output_____"
]
],
[
[
"def precision(pairs, mapped_vectors, topn=1):\n \"\"\"\n :args:\n pairs = list of right word pairs [(uk_word_0, ru_word_0), ...]\n mapped_vectors = list of embeddings after mapping from source embedding space to destination embedding space\n topn = the number of nearest neighbours in destination embedding space to choose from\n :returns:\n precision_val, float number, total number of words for those we can find right translation at top K.\n \"\"\"\n assert len(pairs) == len(mapped_vectors)\n num_matches = 0\n for i, (_, ru) in enumerate(pairs):\n if ru in {x[0] for x in ru_emb.most_similar([mapped_vectors[i]])[:topn]}:\n num_matches+=1\n precision_val = num_matches / len(pairs)\n return precision_val",
"_____no_output_____"
],
[
"assert precision([(\"серпень\", \"август\")], august, topn=5) == 0.0\nassert precision([(\"серпень\", \"август\")], august, topn=9) == 1.0\nassert precision([(\"серпень\", \"август\")], august, topn=10) == 1.0",
"_____no_output_____"
],
[
"assert precision(uk_ru_test, X_test) == 0.0\nassert precision(uk_ru_test, Y_test) == 1.0",
"_____no_output_____"
],
[
"precision_top1 = precision(uk_ru_test, mapping.predict(X_test), 1)\nprecision_top5 = precision(uk_ru_test, mapping.predict(X_test), 5)\n\nassert precision_top1 >= 0.635\nassert precision_top5 >= 0.813",
"_____no_output_____"
]
],
[
[
"## Making it better (orthogonal Procrustean problem)",
"_____no_output_____"
],
[
"It can be shown (see original paper) that a self-consistent linear mapping between semantic spaces should be orthogonal. \nWe can restrict transform $W$ to be orthogonal. Then we will solve next problem:\n\n$$W^*= \\arg\\min_W ||WX - Y||_F \\text{, where: } W^TW = I$$\n\n$$I \\text{- identity matrix}$$\n\nInstead of making yet another regression problem we can find optimal orthogonal transformation using singular value decomposition. It turns out that optimal transformation $W^*$ can be expressed via SVD components:\n$$X^TY=U\\Sigma V^T\\text{, singular value decompostion}$$\n$$W^*=UV^T$$",
"_____no_output_____"
]
],
[
[
"def learn_transform(X_train, Y_train):\n \"\"\" \n :returns: W* : float matrix[emb_dim x emb_dim] as defined in formulae above\n \"\"\"\n u, s, vh = np.linalg.svd(X_train.T.dot(Y_train))\n return u.dot(vh) ",
"_____no_output_____"
],
[
"W = learn_transform(X_train, Y_train)",
"_____no_output_____"
],
[
"ru_emb.most_similar([np.matmul(uk_emb[\"серпень\"], W)])",
"_____no_output_____"
],
[
"assert precision(uk_ru_test, np.matmul(X_test, W)) >= 0.653\nassert precision(uk_ru_test, np.matmul(X_test, W), 5) >= 0.824",
"_____no_output_____"
]
],
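As a quick sanity check (not part of the original assignment), the matrix returned by `learn_transform` should be approximately orthogonal, i.e. $W^T W \approx I$. A minimal sketch, assuming `W` and `np` from the cells above:

```python
# For an orthogonal mapping, W^T W should be close to the identity matrix.
identity_error = np.abs(W.T.dot(W) - np.eye(W.shape[0])).max()
print("max |W^T W - I| =", identity_error)  # expected to be tiny (~1e-6 or smaller)
```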
[
[
"## UK-RU Translator",
"_____no_output_____"
],
[
"Now we are ready to make simple word-based translator: for earch word in source language in shared embedding space we find the nearest in target language.\n",
"_____no_output_____"
]
],
[
[
"with open(\"fairy_tale.txt\", \"r\") as inpf:\n uk_sentences = [line.rstrip().lower() for line in inpf]",
"_____no_output_____"
],
[
"def translate(sentence):\n \"\"\"\n :args:\n sentence - sentence in Ukrainian (str)\n :returns:\n translation - sentence in Russian (str)\n\n * find ukrainian embedding for each word in sentence\n * transform ukrainian embedding vector\n * find nearest russian word and replace\n \"\"\"\n words = sentence.split()\n translation = [ru_emb.most_similar([np.matmul(uk_emb[word], W)])[0][0] if word in uk_emb else word for word in words ]\n \n return \" \".join(translation)",
"_____no_output_____"
],
[
"assert translate(\".\") == \".\"\nassert translate(\"1 , 3\") == \"1 , 3\"\nassert translate(\"кіт зловив мишу\") == \"кот поймал мышку\"",
"_____no_output_____"
],
[
"for sentence in uk_sentences:\n print(\"src: {}\\ndst: {}\\n\".format(sentence, translate(sentence)))",
"src: лисичка - сестричка і вовк - панібрат\ndst: лисичка – сестричка и волк – панібрат\n\nsrc: як була собі лисичка , да й пішла раз до однії баби добувать огню ; ввійшла у хату да й каже : \" добрий день тобі , бабусю !\ndst: как была себе лисичка , че и пошла раз к однії бабы добувать огня ; вошла во избу че и говорит : \" хороший день тебе , бабушку !\n\nsrc: дай мені огня \" .\ndst: дай мне огня \" .\n\nsrc: а баба тільки що вийняла із печі пирожок із маком , солодкий , да й положила , щоб він прохолов ; а лисичка се і підгледала , да тілько що баба нахилилась у піч , щоб достать огня , то лисичка зараз ухватила пирожок да і драла з хати , да , біжучи , весь мак із його виїла , а туда сміття наклала .\ndst: а бабка только что вынула со печи пирожок со маком , сладкий , че и согнула , чтобы он прохолов ; а лисичка ой и підгледала , че токмо что бабка качнулась во печь , чтобы достать огня , то лисичка сейчас ухватила пирожок че и деру со хаты , че , пробежать , весь мак со его виїла , а туда мусора наложила .\n\nsrc: прибігла на поле , аж там пасуть хлопці бичків .\ndst: прибежала по поле , аж там пасут парни бычков .\n\nsrc: вона і каже їм : \" ей , хлопці !\ndst: она и говорит им : \" ой , парни !\n\nsrc: проміняйте мені бичка - третячка за маковий пирожок \" .\ndst: проміняйте мне бычка – третячка за маковый пирожок \" .\n\nsrc: тії согласились ; так вона їм говорить : \" смотріть же , ви не їжте зараз сього пирожка , а тоді уже розломите , як я заведу бичка за могилку ; а то ви його ні за що не розломите \" .\ndst: ишо поглумиться ; так она им говорит : \" смотріть то , мы не ешьте сейчас сего пирожка , а тогда уже розломите , как мной заведу бычка за могилу ; а то мы его ни за что не розломите \" .\n\nsrc: бачите вже - лисичка таки собі була розумна , що хоть кого да обманить .\ndst: вижу уже – лисичка таки себе была умная , что хоть кого че обманить .\n\nsrc: тії хлопці так і зробили , а лисичка як зайшла за могилу , да зараз у ліс і повернула , щоб на дорозі не догнали ; прийшла у ліс да і зробила собі санки да й їде .\ndst: ишо парни так и сделали , а лисичка как зашла за могилу , че сейчас во лес и вернула , чтобы по дороге не погнали ; пришла во лес че и сделала себе санки че и едет .\n\nsrc: коли йде вовчик : \" здорова була , лисичко - сестричко ! \"\ndst: когда идет вовчик : \" здоровая была , лисичко – сестричка ! \"\n\nsrc: - \" здоров , вовчику - братику ! \"\ndst: – \" здоровье , вовчику – братику ! \"\n\nsrc: - \" де се ти узяла собі і бичка і санки ? \"\ndst: – \" куда ой ты взяла себе и бычка и санки ? \"\n\nsrc: - \" е !\ndst: – \" ьн !\n\nsrc: зробила \" .\ndst: сделала \" .\n\nsrc: - \" підвези ж і мене \" .\ndst: – \" підвези же и меня \" .\n\nsrc: - \" е , вовчику !\ndst: – \" ьн , вовчику !\n\nsrc: не можна \" .\ndst: не можно \" .\n\nsrc: - \" мені хоть одну ніжку \" .\ndst: – \" мне хоть одну ножку \" .\n\nsrc: - \" одну можна \" .\ndst: – \" одну можно \" .\n\nsrc: він і положив , да од'їхавши немного і просить , щоби іще одну положить .\ndst: он и положил , че од'їхавши конешно и просит , чтобы еще одну возмет .\n\nsrc: \" не можна , братику !\ndst: \" не можно , братику !\n\nsrc: боюсь , щоб ти саней не зламав \" .\ndst: боюсь , чтобы ты саней не сломал \" .\n\nsrc: - \" ні , сестричко , не бійся ! \"\ndst: – \" ни , сестричка , не бойся ! 
\"\n\nsrc: - да і положив другую ніжку .\ndst: – че и положил одну ножку .\n\nsrc: тілько що од'їхали , як щось і тріснуло .\ndst: токмо что од'їхали , как что-то и треснуло .\n\nsrc: \" бачиш , вовчику , уже і ламаєш санки \" .\ndst: \" видишь , вовчику , уже и ламаєш санки \" .\n\nsrc: - \" ні , лисичко !\ndst: – \" ни , лисичко !\n\nsrc: се у мене був орішок , так я розкусив \" .\ndst: ой во меня был орішок , так мной розкусив \" .\n\nsrc: да просить оп'ять , щоб і третю ногу положить ; лисичка і ту пустила , да тілько що оп'ять од'їхали , аж щось уже дужче тріснуло .\ndst: че просит оп'ять , чтобы и третью ногу возмет ; лисичка и ту пустила , че токмо что оп'ять од'їхали , аж что-то уже сильней треснуло .\n\nsrc: лисичка закричала : \" ох , лишечко !\ndst: лисичка закричала : \" ой , лишечко !\n\nsrc: ти ж мені , братику , зовсім зламаєш санки \" .\ndst: ты же мне , братику , совсем зламаєш санки \" .\n\nsrc: - \" ні , лисичко , се я орішок розкусив \" .\ndst: – \" ни , лисичко , ой мной орішок розкусив \" .\n\nsrc: - \" дай же і мені , бачиш який , що сам їж , а мені і не даєш \" .\ndst: – \" дай то и мне , видишь который , что сам ел , а мне и не даешь \" .\n\nsrc: - \" нема уже більше , а я б дав \" .\ndst: – \" нету уже больше , а мной бы дал \" .\n\nsrc: да і просить оп'ять , щоб пустила положить і послідню ногу .\ndst: че и просит оп'ять , чтобы пустила возмет и послідню ногу .\n\nsrc: лисичка і согласилась .\ndst: лисичка и согласилась .\n\nsrc: так він тілько що положив ногу , як санки зовсім розламались .\ndst: так он токмо что положил ногу , как санки совсем розламались .\n\nsrc: тоді вже лисичка так на його розсердилась , що і сама не знала щоб робила !\ndst: тогда уже лисичка так по его розсердилась , что и сама не знала чтобы делала !\n\nsrc: а як отошло серце , вона і каже : \" іди ж , ледащо !\ndst: а как отошло сердце , она и говорит : \" иди же , лодырь !\n\nsrc: да нарубай дерева , щоб нам оп'ять ізробить санки ; тільки рубавши кажи так : \" рубайся ж , дерево , і криве і пряме \" .\ndst: че нарубай деревья , чтобы нам оп'ять ізробить санки ; только рубавши говори так : \" рубайся же , дерево , и кривое и прямое \" .\n\nsrc: він і пішов да й каже усе : \" рубайся ж , дерево , усе пряме да пряме ! \"\ndst: он и пошел че и говорит всё : \" рубайся же , дерево , всё прямое че прямое ! \"\n\nsrc: нарубавши і приносить ; лисичка увидала , що дерево не таке , як їй нужно , оп'ять розсердилась .\ndst: нарубавши и приносит ; лисичка увидала , что дерево не такое , как им надо , оп'ять розсердилась .\n\nsrc: \" ти , - говорить , - не казав , видно , так , як я тобі веліла ! \"\ndst: \" ты , – говорит , – не говорил , видно , так , как мной тебе велела ! \"\n\nsrc: - \" ні , я усе теє казав , що ти мені казала \" .\ndst: – \" ни , мной всё Эх говорил , что ты мне говорила \" .\n\nsrc: - \" да чомусь не таке рубалось ?\ndst: – \" че почему-то не такое рубалось ?\n\nsrc: ну , сиди ж ти тут , а я сама піду нарубаю \" , - да і пішла у ліс .\ndst: ну , сиди же ты здесь , а мной сама пойду нарубаю \" , – че и пошла во лес .\n\nsrc: а вовк дивиться , що він сам остався ; узяв да проїв у бичка дірку да виїв усе в середині , а напускав туда горобців да ще соломою заткнув , поставив бичка , а сам і втік .\ndst: а волк смотрит , что он сам остался ; взял че проїв во бычка дыру че виїв всё во середине , а напускав туда воробьёв че ещe соломой заткнул , поставил бычка , а сам и сбежал .\n\nsrc: аж лисичка приходить , зробила санки да й сіла і стала поганять : \" гей , бичок - третячок ! 
\"\ndst: аж лисичка приходит , сделала санки че и присела и стала поганять : \" гей , бычок – третячок ! \"\n\nsrc: тілько він не везе .\ndst: токмо он не увозит .\n\nsrc: от вона встала , щоб поправить : може , що не так запряжено ; да , не хотячи , одоткнула солому , а оттуда так і сипнули горобці летіти .\ndst: из она встала , чтобы поправит : может , что не так запряжено ; че , не вздумал , одоткнула солому , а туды так и сипнули воробьи лететь .\n\nsrc: вона уже тоді побачила , що бичок неживий ; покинула його да й пішла .\ndst: она уже тогда увидела , что бычок неживой ; покинула его че и пошла .\n\nsrc: легла на дорозі , аж дивиться - їде мужик з рибою ; вона і притворилась , що здохла .\ndst: легла по дороге , аж смотрит – едет мужик со рыбой ; она и притворилась , что сдохла .\n\nsrc: от мужик і говорить : \" возьму я оцю лисицю , обдеру да хоть шапку собі зошью \" .\ndst: из мужик и говорит : \" возьму мной ихнюю лисицу , обдеру че хоть шапку себе зошью \" .\n\nsrc: узяв да і положив ззаді у воза .\ndst: взял че и положил взади во телега .\n\nsrc: вона замітила , що мужик не смотрить , стала ногами викидувать рибу з воза , а когда побачила , що навикидала уже багато , тоди потихесеньку і сама злізла ; сіла біля риби да і їсть собі , - коли біжить оп'ять той самий вовчик .\ndst: она заметила , что мужик не смотрить , стала ногами викидувать рыбу со телега , а .когда увидела , что навикидала уже много , тоды потихесеньку и сама слезла ; присела возле рыбы че и ест себе , – когда бежит оп'ять тот самый вовчик .\n\nsrc: побачивши , що вона їсть рибу , прибіг до їй да й каже : \" здорово була , лисичко - сестричко !\ndst: увидев , что она ест рыбу , прибежал к им че и говорит : \" здорово была , лисичко – сестричка !\n\n"
]
],
[
[
"Not so bad, right? We can easily improve translation using language model and not one but several nearest neighbours in shared embedding space. But next time.",
"_____no_output_____"
],
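For illustration only (not part of the original solution), a first step in that direction is to keep several candidate translations per word instead of committing to the single nearest neighbour; a language model could then re-rank whole sentences. A minimal sketch of the candidate-retrieval part, assuming `uk_emb`, `ru_emb`, `W` and `np` from above:

```python
def translation_candidates(word, k=5):
    """Return up to k candidate Russian translations for a Ukrainian word."""
    if word not in uk_emb:
        return [word]                       # pass unknown tokens through unchanged
    mapped = np.matmul(uk_emb[word], W)     # map into the Russian embedding space
    return [w for w, _ in ru_emb.most_similar([mapped], topn=k)]

print(translation_candidates("кіт"))        # several cat-related Russian candidates
```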
[
"## Would you like to learn more?\n\n### Articles:\n* [Exploiting Similarities among Languages for Machine Translation](https://arxiv.org/pdf/1309.4168) - entry point for multilingual embedding studies by Tomas Mikolov (the author of W2V)\n* [Offline bilingual word vectors, orthogonal transformations and the inverted softmax](https://arxiv.org/pdf/1702.03859) - orthogonal transform for unsupervised MT\n* [Word Translation Without Parallel Data](https://arxiv.org/pdf/1710.04087)\n* [Loss in Translation: Learning Bilingual Word Mapping with a Retrieval Criterion](https://arxiv.org/pdf/1804.07745)\n* [Unsupervised Alignment of Embeddings with Wasserstein Procrustes](https://arxiv.org/pdf/1805.11222)\n\n### Repos (with ready-to-use multilingual embeddings):\n* https://github.com/facebookresearch/MUSE\n\n* https://github.com/Babylonpartners/fastText_multilingual -",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
d0f0e6d9b715d3d61dbfdba125ca7d3d3c97dacb
| 26,222 |
ipynb
|
Jupyter Notebook
|
06_Stats/US_Baby_Names/Exercises_with_solutions.ipynb
|
deep0892/pandas_practice
|
f3c1581d2aa167416c057b6bb140c3c29677ab20
|
[
"BSD-3-Clause"
] | null | null | null |
06_Stats/US_Baby_Names/Exercises_with_solutions.ipynb
|
deep0892/pandas_practice
|
f3c1581d2aa167416c057b6bb140c3c29677ab20
|
[
"BSD-3-Clause"
] | null | null | null |
06_Stats/US_Baby_Names/Exercises_with_solutions.ipynb
|
deep0892/pandas_practice
|
f3c1581d2aa167416c057b6bb140c3c29677ab20
|
[
"BSD-3-Clause"
] | null | null | null | 25.310811 | 175 | 0.339143 |
[
[
[
"# US - Baby Names\n\nCheck out [Baby Names Exercises Video Tutorial](https://youtu.be/Daf2QNAy-qA) to watch a data scientist go through the exercises",
"_____no_output_____"
],
[
"### Introduction:\n\nWe are going to use a subset of [US Baby Names](https://www.kaggle.com/kaggle/us-baby-names) from Kaggle. \nIn the file it will be names from 2004 until 2014\n\n\n### Step 1. Import the necessary libraries",
"_____no_output_____"
]
],
[
[
"import pandas as pd",
"_____no_output_____"
]
],
[
[
"### Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/06_Stats/US_Baby_Names/US_Baby_Names_right.csv). ",
"_____no_output_____"
],
[
"### Step 3. Assign it to a variable called baby_names.",
"_____no_output_____"
]
],
[
[
"baby_names = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/06_Stats/US_Baby_Names/US_Baby_Names_right.csv')\nbaby_names.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 1016395 entries, 0 to 1016394\nData columns (total 7 columns):\nUnnamed: 0 1016395 non-null int64\nId 1016395 non-null int64\nName 1016395 non-null object\nYear 1016395 non-null int64\nGender 1016395 non-null object\nState 1016395 non-null object\nCount 1016395 non-null int64\ndtypes: int64(4), object(3)\nmemory usage: 54.3+ MB\n"
]
],
[
[
"### Step 4. See the first 10 entries",
"_____no_output_____"
]
],
[
[
"baby_names.head(10)",
"_____no_output_____"
]
],
[
[
"### Step 5. Delete the column 'Unnamed: 0' and 'Id'",
"_____no_output_____"
]
],
[
[
"# deletes Unnamed: 0\ndel baby_names['Unnamed: 0']\n\n# deletes Unnamed: 0\ndel baby_names['Id']\n\nbaby_names.head()",
"_____no_output_____"
]
],
[
[
"### Step 6. Are there more male or female names in the dataset?",
"_____no_output_____"
]
],
[
[
"baby_names['Gender'].value_counts()",
"_____no_output_____"
]
],
[
[
"### Step 7. Group the dataset by name and assign to names",
"_____no_output_____"
]
],
[
[
"# you don't want to sum the Year column, so you delete it\ndel baby_names[\"Year\"]\n\n# group the data\nnames = baby_names.groupby(\"Name\").sum()\n\n# print the first 5 observations\nnames.head()\n\n# print the size of the dataset\nprint(names.shape)\n\n# sort it from the biggest value to the smallest one\nnames.sort_values(\"Count\", ascending = 0).head()",
"(17632, 1)\n"
]
],
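As a side note (not part of the original solution), the same aggregation can be written without deleting columns first by selecting only `Count` before summing — a minimal equivalent sketch:

```python
# Equivalent aggregation that leaves the rest of baby_names untouched.
names_alt = baby_names.groupby("Name")["Count"].sum().to_frame()
names_alt.sort_values("Count", ascending=False).head()
```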
[
[
"### Step 8. How many different names exist in the dataset?",
"_____no_output_____"
]
],
[
[
"# as we have already grouped by the name, all the names are unique already. \n# get the length of names\nlen(names)",
"_____no_output_____"
]
],
[
[
"### Step 9. What is the name with most occurrences?",
"_____no_output_____"
]
],
[
[
"names.Count.idxmax()\n\n# OR\n\n# names[names.Count == names.Count.max()]",
"_____no_output_____"
]
],
[
[
"### Step 10. How many different names have the least occurrences?",
"_____no_output_____"
]
],
[
[
"len(names[names.Count == names.Count.min()])",
"_____no_output_____"
]
],
[
[
"### Step 11. What is the median name occurrence?",
"_____no_output_____"
]
],
[
[
"names[names.Count == names.Count.median()]",
"_____no_output_____"
]
],
[
[
"### Step 12. What is the standard deviation of names?",
"_____no_output_____"
]
],
[
[
"names.Count.std()",
"_____no_output_____"
]
],
[
[
"### Step 13. Get a summary with the mean, min, max, std and quartiles.",
"_____no_output_____"
]
],
[
[
"names.describe()",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0f0f71f37f8a837cb47734f1223f44c6bb1203f
| 391,986 |
ipynb
|
Jupyter Notebook
|
lecture1/NumPy_crash_course.ipynb
|
HSE-LaMBDA/modern-technologies-for-ml-and-big-data
|
488bef843d9e5d2803dd8bef48448d15f57aaf1c
|
[
"MIT"
] | 8 |
2015-11-11T12:26:29.000Z
|
2018-05-04T07:37:30.000Z
|
lecture1/NumPy_crash_course.ipynb
|
HSE-LaMBDA/modern-technologies-for-ml-and-big-data
|
488bef843d9e5d2803dd8bef48448d15f57aaf1c
|
[
"MIT"
] | null | null | null |
lecture1/NumPy_crash_course.ipynb
|
HSE-LaMBDA/modern-technologies-for-ml-and-big-data
|
488bef843d9e5d2803dd8bef48448d15f57aaf1c
|
[
"MIT"
] | 9 |
2015-12-08T12:38:57.000Z
|
2017-12-07T01:43:31.000Z
| 259.937666 | 100,142 | 0.916464 |
[
[
[
"# Matplotlib and NumPy crash course",
"_____no_output_____"
],
[
"You may install numpy, matplotlib, sklearn and many other usefull package e.g. via Anaconda distribution.",
"_____no_output_____"
]
],
[
[
"import numpy as np",
"_____no_output_____"
]
],
[
[
"## NumPy basics",
"_____no_output_____"
],
[
"### Array creation",
"_____no_output_____"
]
],
[
[
"np.array(range(10))",
"_____no_output_____"
],
[
"np.ndarray(shape=(5, 4))",
"_____no_output_____"
],
[
"np.linspace(0, 1, num=20)",
"_____no_output_____"
],
[
"np.arange(0, 20)",
"_____no_output_____"
],
[
"np.zeros(shape=(5, 4))",
"_____no_output_____"
],
[
"np.ones(shape=(5,4))",
"_____no_output_____"
]
],
[
[
"Possible types of array:\n- bool\n- various ints\n- float, double\n- string",
"_____no_output_____"
]
],
[
[
"np.ones(shape=(2, 3), dtype=\"string\")",
"_____no_output_____"
],
[
"np.zeros(shape=(2, 3), dtype=bool)",
"_____no_output_____"
],
[
"np.savetxt(\"eye.txt\", np.eye(5, 6))\nnp.loadtxt(\"eye.txt\")",
"_____no_output_____"
],
[
"%rm eye.txt",
"_____no_output_____"
]
],
[
[
"## Array operations",
"_____no_output_____"
]
],
[
[
"a = np.linspace(0, 9, num=10)\na + 1",
"_____no_output_____"
],
[
"a * a",
"_____no_output_____"
],
[
"a - a",
"_____no_output_____"
],
[
"print a.max()\nprint a.min()",
"9.0\n0.0\n"
],
[
"np.sum(a)",
"_____no_output_____"
],
[
"a = np.random.standard_normal(size=(25, ))\na",
"_____no_output_____"
],
[
"b = a.reshape((5, 5))\nb",
"_____no_output_____"
],
[
"b.T",
"_____no_output_____"
],
[
"np.sum(b)",
"_____no_output_____"
],
[
"print np.sum(b, axis=1)\nprint np.sum(b, axis=0)",
"[ 1.93242335 0.32850539 3.43353115 -0.56730104 -0.05693269]\n[ 1.81075943 0.7692373 -3.1728252 1.18330814 4.47974649]\n"
],
[
"### Matrix multiplication\nnp.dot(b, b)",
"_____no_output_____"
],
[
"np.vstack([b, b])",
"_____no_output_____"
]
],
[
[
"### Custom functions",
"_____no_output_____"
]
],
[
[
"def plus(x, y):\n return x + y",
"_____no_output_____"
],
[
"plus_v = np.vectorize(plus)",
"_____no_output_____"
],
[
"plus_v(np.arange(10), np.arange(10, 20))",
"_____no_output_____"
],
[
"plus_v(np.arange(10), 10)",
"_____no_output_____"
],
[
"@np.vectorize\ndef plus(x, y):\n return x + y",
"_____no_output_____"
],
[
"plus(np.arange(10), 10)",
"_____no_output_____"
]
],
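Note that `np.vectorize` is mainly a convenience wrapper: it still calls the Python function once per element, so it is usually far slower than native array arithmetic. A small sketch to see the difference (assuming the `plus` function defined above):

```python
big_a = np.arange(100000)
big_b = np.arange(100000)

%timeit plus(big_a, big_b)   # vectorize wrapper: Python-level loop under the hood
%timeit big_a + big_b        # native NumPy ufunc: runs in C
```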
[
[
"### Performance",
"_____no_output_____"
]
],
[
[
"N = 10000000\na = np.random.standard_normal(size=N)\nb = np.random.standard_normal(size=N)",
"_____no_output_____"
],
[
"%%time\n\na + b",
"CPU times: user 26.4 ms, sys: 4.39 ms, total: 30.8 ms\nWall time: 30.3 ms\n"
],
[
"ab = zip(range(N), range(N))",
"_____no_output_____"
],
[
"%%time\n\n_ = [ a + b for a, b in ab ]",
"CPU times: user 1.84 s, sys: 75.9 ms, total: 1.92 s\nWall time: 1.92 s\n"
]
],
[
[
"### Slices",
"_____no_output_____"
]
],
[
[
"a = np.arange(15)\na = a.reshape((3,5))\na",
"_____no_output_____"
],
[
"# Just a copy of the array\na[:]",
"_____no_output_____"
],
[
"a[:, 0]",
"_____no_output_____"
],
[
"a[1, :]",
"_____no_output_____"
],
[
"a[2, :] = (np.arange(5) + 1) * 10\na",
"_____no_output_____"
],
[
"a < 10",
"_____no_output_____"
],
[
"a[a < 12]",
"_____no_output_____"
],
[
"np.where(a < 12)",
"_____no_output_____"
],
[
"xs, ys = np.where(a < 20)\na[xs, ys]",
"_____no_output_____"
]
],
[
[
"## Matplotlib",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\n\n# Don't forget this magic expression if want to show plots in notebook\n%matplotlib inline",
"_____no_output_____"
],
[
"xs = np.arange(100)\nys = np.cumsum(np.random.standard_normal(size=100))",
"_____no_output_____"
]
],
[
[
"### Line plot",
"_____no_output_____"
]
],
[
[
"plt.figure()\nplt.plot(xs, ys)\nplt.show()",
"_____no_output_____"
],
[
"# A little bit of options\n\nplt.figure()\nplt.plot(xs, ys, label=\"1st series\", color=\"green\")\nplt.plot(xs, ys.max() - ys, label=\"2nd series\", color=\"red\")\nplt.legend(loc=\"upper right\")\nplt.xlabel(\"Time, sec\")\nplt.ylabel(\"Something\")\nplt.title(\"Just two random series\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Bar plot",
"_____no_output_____"
]
],
[
[
"plt.figure()\nplt.bar(xs, ys)\nplt.show()",
"_____no_output_____"
],
[
"plt.figure()\nh, bins, patches = plt.hist(ys)\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Scatter plot",
"_____no_output_____"
]
],
[
[
"xs1 = np.random.standard_normal(size=100)\nys1 = np.random.standard_normal(size=100)\n\nxs2 = np.random.standard_normal(size=100) + 3\nys2 = np.random.standard_normal(size=100)",
"_____no_output_____"
],
[
"plt.scatter(xs1, ys1, label=\"class1\", color=\"green\")\nplt.scatter(xs2, ys2, label=\"class2\", color=\"red\")\nplt.plot([1.5, 1.5], [-4, 4], linewidth=3)\nplt.legend()",
"_____no_output_____"
]
],
[
[
"### Images",
"_____no_output_____"
]
],
[
[
"means=np.array([[-1, 1], [-1, 1]])\nstds = np.array([1, 1.1])",
"_____no_output_____"
],
[
"@np.vectorize\ndef normal_density(mx, my, std, x, y):\n return np.exp(\n -((x - mx) ** 2 + (y - my) ** 2) / 2.0 / std / std\n ) / std / std\n\[email protected]\ndef f(x, y):\n return np.sum(\n normal_density(means[0, :], means[1, :], stds, x, y)\n )",
"_____no_output_____"
],
[
"mx, my = np.meshgrid(np.linspace(-2, 2, 100), np.linspace(-2, 2, 100))",
"_____no_output_____"
],
[
"fs = f(mx, my)",
"_____no_output_____"
],
[
"plt.contourf(mx, my, fs, 20, cmap=plt.cm.coolwarm)\nplt.colorbar()",
"_____no_output_____"
],
[
"plt.contour(mx, my, fs, 20, cmap=plt.cm.coolwarm)\nplt.colorbar()",
"/home/mborisyak/opt/anaconda/lib/python2.7/site-packages/matplotlib/collections.py:650: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison\n if self._edgecolors_original != str('face'):\n"
],
[
"plt.matshow(fs)\nplt.colorbar()",
"_____no_output_____"
],
[
"plt.imshow(fs)\nplt.colorbar()",
"_____no_output_____"
],
[
"plt.imshow(np.rot90(fs), extent=[-2, 2, -2, 2])\nplt.colorbar()\nplt.contour(mx, my, fs, 15, colors=\"black\")",
"_____no_output_____"
]
],
[
[
"# Exercises",
"_____no_output_____"
],
[
"- load MNIST dataset\n- create arrays of features and labels\n- write a procedure to plot digits\n- calculate mean, std of images for each class, plot the results\n- plot distribution of pixel values: general, for different classes\n- *find out which pixel has the most information about label (advanced)*\n- *make 3D plots using mplot3d or plotly (advanced)*",
"_____no_output_____"
]
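A possible starting point for the first exercise items (a sketch, not a full solution): it uses the small 8x8 digits bundled with scikit-learn instead of the full MNIST set, assuming scikit-learn is installed.

```python
from sklearn.datasets import load_digits
import matplotlib.pyplot as plt
import numpy as np

digits = load_digits()                 # 8x8 digit images, labels 0-9
X, y = digits.data, digits.target      # X: (n_samples, 64), y: (n_samples,)

def plot_digit(vec, title=None):
    plt.imshow(vec.reshape(8, 8), cmap=plt.cm.gray_r)
    if title is not None:
        plt.title(title)
    plt.show()

plot_digit(X[0], "label: {}".format(y[0]))

# Mean image of one class (see the "mean, std of images" exercise item)
plot_digit(X[y == 3].mean(axis=0), "mean of class 3")
```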
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
d0f0fd8881c011f9e8c81186582eda94d7c84ec4
| 5,501 |
ipynb
|
Jupyter Notebook
|
Hawaii vacation/data_engineering.ipynb
|
tardis123/Hawaii-vacation
|
2aa6ca028c24d924cbcb786a73af6b89ce999645
|
[
"ADSL"
] | null | null | null |
Hawaii vacation/data_engineering.ipynb
|
tardis123/Hawaii-vacation
|
2aa6ca028c24d924cbcb786a73af6b89ce999645
|
[
"ADSL"
] | null | null | null |
Hawaii vacation/data_engineering.ipynb
|
tardis123/Hawaii-vacation
|
2aa6ca028c24d924cbcb786a73af6b89ce999645
|
[
"ADSL"
] | null | null | null | 25.004545 | 171 | 0.544992 |
[
[
[
"# Data engineering\n\nIn this section we'll explore the climate data for Hawaii by:\n\n+ checking for missing data (null values)\n+ checking for duplicates\n\n## Results\n\nNo duplicate data was found in either the station data nor in the measurements data.\nThere was about 7.4 % of null values found in the measurements data (1447 rows on a total of 19550 rows).\n\nThe rows with null values were deleted from the original measurements data file.\nA percentage of 7.4% seems on a total of 19550 rows seems acceptable in this case.\nBe aware however that that might not always be the case.\nAnd that statistical analysis might be needed to determine whether deleting data might have a negative impact on data reliability hence on data analysis reliability.",
"_____no_output_____"
]
],
[
[
"# Dependencies\nimport pandas as pd\nimport os",
"_____no_output_____"
],
[
"# Load files into dataframe\n## Measurement file\ninput_file_m = input(\"Enter the name of the measurement file you want to analyze (without extension): \") + \".csv\"\nfilepath_m = os.path.join('Resources', input_file_m)\nmeasurement_df = pd.read_csv(filepath_m)\n## Station file\ninput_file_s = input(\"Enter the name of the station file you want to analyze (without extension): \") + \".csv\"\nfilepath_s = os.path.join('Resources', input_file_s)\nstation_df = pd.read_csv(filepath_s)",
"Enter the name of the measurement file you want to analyze (without extension): hawaii_measurements\nEnter the name of the station file you want to analyze (without extension): hawaii_stations\n"
],
[
"# Check for NaN values in measurements data:\nmeasurement_df.isnull().sum()",
"_____no_output_____"
],
[
"# Check for NaN values in stations data:\nstation_df.isnull().sum()",
"_____no_output_____"
],
[
"# Check for duplicates in measurements data:\nmeasurement_df[measurement_df.duplicated(keep=False)].sum()",
"_____no_output_____"
],
[
"# Check for duplicates in stations data:\n## Stations:\nstation_df[station_df.duplicated(keep=False)].sum()",
"_____no_output_____"
],
[
"# Precipitation is missing for 1447 rows in measurement_df\n# There's no clues as for what the missing data might be so we'll delete all rows with null values:\nmeasurement_df = measurement_df.dropna()",
"_____no_output_____"
],
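One quick way (hypothetical, not in the original notebook) to support the claim that dropping these rows is harmless is to compare summary statistics before and after `dropna`; the sketch below assumes the measurements file has `prcp` and `tobs` columns.

```python
raw = pd.read_csv(filepath_m)      # original measurements, nulls included
clean = raw.dropna()

# If the distributions barely move, deleting the null rows is unlikely to bias later analysis.
print(raw[["prcp", "tobs"]].describe())
print(clean[["prcp", "tobs"]].describe())
```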
[
"# Check again:\nmeasurement_df.isnull().sum()",
"_____no_output_____"
],
[
"# Save cleaned csv files with prefix clean_\nprefix = \"clean_\"\nmeasurement_df.to_csv(prefix+input_file_m, encoding = \"utf-8-sig\", index = False)\nstation_df.to_csv(prefix+input_file_s, encoding = \"utf-8-sig\", index = False)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0f101b472c5ba0222dc0430e3eb05d63dbbac8a
| 3,136 |
ipynb
|
Jupyter Notebook
|
l5-syntax/syntax exploration.ipynb
|
biplav-s/course-nl
|
d012d29cd2265cf5d9f449a00bbccd4836790bce
|
[
"MIT"
] | 2 |
2021-07-13T14:46:37.000Z
|
2021-11-18T23:50:04.000Z
|
l5-syntax/syntax exploration.ipynb
|
biplav-s/course-nl
|
d012d29cd2265cf5d9f449a00bbccd4836790bce
|
[
"MIT"
] | null | null | null |
l5-syntax/syntax exploration.ipynb
|
biplav-s/course-nl
|
d012d29cd2265cf5d9f449a00bbccd4836790bce
|
[
"MIT"
] | 5 |
2020-09-13T21:08:27.000Z
|
2020-10-24T11:05:44.000Z
| 19.974522 | 178 | 0.470344 |
[
[
[
"# Shallow Parsing - Chunking",
"_____no_output_____"
]
],
[
[
"# Do imports\nimport nltk",
"_____no_output_____"
],
[
"\ndata = \"The authority did not permit giving of fishing permit.\"\ntokens = nltk.word_tokenize(data)\nprint(tokens)\n",
"['The', 'authority', 'did', 'not', 'permit', 'giving', 'of', 'fishing', 'permit', '.']\n"
],
[
"tag = nltk.pos_tag(tokens)\nprint(tag)\n",
"[('The', 'DT'), ('authority', 'NN'), ('did', 'VBD'), ('not', 'RB'), ('permit', 'VB'), ('giving', 'VBG'), ('of', 'IN'), ('fishing', 'VBG'), ('permit', 'NN'), ('.', '.')]\n"
],
[
"grammar = \"NP: {<DT>?<JJ>*<NN>}\"\ncp =nltk.RegexpParser(grammar)\nresult = cp.parse(tag)\nprint(result)\n",
"(S\n (NP The/DT authority/NN)\n did/VBD\n not/RB\n permit/VB\n giving/VBG\n of/IN\n fishing/VBG\n (NP permit/NN)\n ./.)\n"
],
[
"result.draw() # It will draw the pattern graphically which can be seen in Noun Phrase chunking ",
"_____no_output_____"
],
[
"# Do for a running example\ndata = \"I prefer a morning flight.\"\ntokens = nltk.word_tokenize(data)\ntag = nltk.pos_tag(tokens)\nresult = cp.parse(tag)\nprint(result)",
"(S I/PRP prefer/VBP (NP a/DT morning/NN) (NP flight/NN) ./.)\n"
],
[
"result.draw()",
"_____no_output_____"
]
]
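Besides drawing the tree, the NP chunks can be pulled out programmatically by walking the subtrees of the parse result — a small sketch using `result` from the cell above:

```python
# Print the text of every NP chunk found by the regexp chunker.
for subtree in result.subtrees(filter=lambda t: t.label() == 'NP'):
    print(" ".join(word for word, tag in subtree.leaves()))
```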
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0f10473f360a91a0d0dc15cd841810b55ec323b
| 101,576 |
ipynb
|
Jupyter Notebook
|
NER_LSTM.ipynb
|
piyushsharma1812/recurrent-neural-networks
|
6c2e852f8722400688007a3a6ce6d99bd8acb55a
|
[
"MIT"
] | null | null | null |
NER_LSTM.ipynb
|
piyushsharma1812/recurrent-neural-networks
|
6c2e852f8722400688007a3a6ce6d99bd8acb55a
|
[
"MIT"
] | null | null | null |
NER_LSTM.ipynb
|
piyushsharma1812/recurrent-neural-networks
|
6c2e852f8722400688007a3a6ce6d99bd8acb55a
|
[
"MIT"
] | null | null | null | 63.169154 | 18,236 | 0.608589 |
[
[
[
"<a href=\"https://colab.research.google.com/github/piyushsharma1812/recurrent-neural-networks/blob/master/NER_LSTM.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"# Required Libraries \nimport numpy as np\nimport pandas as pd\nfrom keras.preprocessing.sequence import pad_sequences\nfrom keras.utils import to_categorical\nfrom keras.layers import LSTM, Dense, TimeDistributed, Embedding, Bidirectional\nfrom keras.models import Model, Input\nfrom keras_contrib.layers import CRF\nfrom keras.callbacks import ModelCheckpoint\n\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\nfrom sklearn.model_selection import train_test_split\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfrom sklearn_crfsuite.metrics import flat_classification_report\nfrom sklearn.metrics import f1_score\nfrom seqeval.metrics import precision_score, recall_score, f1_score, classification_report\nfrom keras.preprocessing.text import text_to_word_sequence\nimport pickle",
"Using TensorFlow backend.\n"
],
[
"!pip install sklearn_crfsuite\n!pip install git+https://www.github.com/keras-team/keras-contrib.git\n!pip install seqeval \n",
"Collecting sklearn_crfsuite\n Downloading https://files.pythonhosted.org/packages/25/74/5b7befa513482e6dee1f3dd68171a6c9dfc14c0eaa00f885ffeba54fe9b0/sklearn_crfsuite-0.3.6-py2.py3-none-any.whl\nRequirement already satisfied: tabulate in /usr/local/lib/python3.6/dist-packages (from sklearn_crfsuite) (0.8.6)\nRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from sklearn_crfsuite) (1.12.0)\nRequirement already satisfied: tqdm>=2.0 in /usr/local/lib/python3.6/dist-packages (from sklearn_crfsuite) (4.28.1)\nCollecting python-crfsuite>=0.8.3\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/2f/86/cfcd71edca9d25d3d331209a20f6314b6f3f134c29478f90559cee9ce091/python_crfsuite-0.9.6-cp36-cp36m-manylinux1_x86_64.whl (754kB)\n\r\u001b[K |▍ | 10kB 31.4MB/s eta 0:00:01\r\u001b[K |▉ | 20kB 6.2MB/s eta 0:00:01\r\u001b[K |█▎ | 30kB 8.9MB/s eta 0:00:01\r\u001b[K |█▊ | 40kB 11.2MB/s eta 0:00:01\r\u001b[K |██▏ | 51kB 13.3MB/s eta 0:00:01\r\u001b[K |██▋ | 61kB 15.4MB/s eta 0:00:01\r\u001b[K |███ | 71kB 17.3MB/s eta 0:00:01\r\u001b[K |███▌ | 81kB 11.4MB/s eta 0:00:01\r\u001b[K |████ | 92kB 12.5MB/s eta 0:00:01\r\u001b[K |████▍ | 102kB 13.6MB/s eta 0:00:01\r\u001b[K |████▉ | 112kB 13.6MB/s eta 0:00:01\r\u001b[K |█████▏ | 122kB 13.6MB/s eta 0:00:01\r\u001b[K |█████▋ | 133kB 13.6MB/s eta 0:00:01\r\u001b[K |██████ | 143kB 13.6MB/s eta 0:00:01\r\u001b[K |██████▌ | 153kB 13.6MB/s eta 0:00:01\r\u001b[K |███████ | 163kB 13.6MB/s eta 0:00:01\r\u001b[K |███████▍ | 174kB 13.6MB/s eta 0:00:01\r\u001b[K |███████▉ | 184kB 13.6MB/s eta 0:00:01\r\u001b[K |████████▎ | 194kB 13.6MB/s eta 0:00:01\r\u001b[K |████████▊ | 204kB 13.6MB/s eta 0:00:01\r\u001b[K |█████████▏ | 215kB 13.6MB/s eta 0:00:01\r\u001b[K |█████████▋ | 225kB 13.6MB/s eta 0:00:01\r\u001b[K |██████████ | 235kB 13.6MB/s eta 0:00:01\r\u001b[K |██████████▍ | 245kB 13.6MB/s eta 0:00:01\r\u001b[K |██████████▉ | 256kB 13.6MB/s eta 0:00:01\r\u001b[K |███████████▎ | 266kB 13.6MB/s eta 0:00:01\r\u001b[K |███████████▊ | 276kB 13.6MB/s eta 0:00:01\r\u001b[K |████████████▏ | 286kB 13.6MB/s eta 0:00:01\r\u001b[K |████████████▋ | 296kB 13.6MB/s eta 0:00:01\r\u001b[K |█████████████ | 307kB 13.6MB/s eta 0:00:01\r\u001b[K |█████████████▌ | 317kB 13.6MB/s eta 0:00:01\r\u001b[K |██████████████ | 327kB 13.6MB/s eta 0:00:01\r\u001b[K |██████████████▍ | 337kB 13.6MB/s eta 0:00:01\r\u001b[K |██████████████▊ | 348kB 13.6MB/s eta 0:00:01\r\u001b[K |███████████████▏ | 358kB 13.6MB/s eta 0:00:01\r\u001b[K |███████████████▋ | 368kB 13.6MB/s eta 0:00:01\r\u001b[K |████████████████ | 378kB 13.6MB/s eta 0:00:01\r\u001b[K |████████████████▌ | 389kB 13.6MB/s eta 0:00:01\r\u001b[K |█████████████████ | 399kB 13.6MB/s eta 0:00:01\r\u001b[K |█████████████████▍ | 409kB 13.6MB/s eta 0:00:01\r\u001b[K |█████████████████▉ | 419kB 13.6MB/s eta 0:00:01\r\u001b[K |██████████████████▎ | 430kB 13.6MB/s eta 0:00:01\r\u001b[K |██████████████████▊ | 440kB 13.6MB/s eta 0:00:01\r\u001b[K |███████████████████▏ | 450kB 13.6MB/s eta 0:00:01\r\u001b[K |███████████████████▌ | 460kB 13.6MB/s eta 0:00:01\r\u001b[K |████████████████████ | 471kB 13.6MB/s eta 0:00:01\r\u001b[K |████████████████████▍ | 481kB 13.6MB/s eta 0:00:01\r\u001b[K |████████████████████▉ | 491kB 13.6MB/s eta 0:00:01\r\u001b[K |█████████████████████▎ | 501kB 13.6MB/s eta 0:00:01\r\u001b[K |█████████████████████▊ | 512kB 13.6MB/s eta 0:00:01\r\u001b[K |██████████████████████▏ | 522kB 13.6MB/s eta 0:00:01\r\u001b[K |██████████████████████▋ | 532kB 13.6MB/s eta 0:00:01\r\u001b[K |███████████████████████ | 542kB 
13.6MB/s eta 0:00:01\r\u001b[K |███████████████████████▌ | 552kB 13.6MB/s eta 0:00:01\r\u001b[K |████████████████████████ | 563kB 13.6MB/s eta 0:00:01\r\u001b[K |████████████████████████▎ | 573kB 13.6MB/s eta 0:00:01\r\u001b[K |████████████████████████▊ | 583kB 13.6MB/s eta 0:00:01\r\u001b[K |█████████████████████████▏ | 593kB 13.6MB/s eta 0:00:01\r\u001b[K |█████████████████████████▋ | 604kB 13.6MB/s eta 0:00:01\r\u001b[K |██████████████████████████ | 614kB 13.6MB/s eta 0:00:01\r\u001b[K |██████████████████████████▌ | 624kB 13.6MB/s eta 0:00:01\r\u001b[K |███████████████████████████ | 634kB 13.6MB/s eta 0:00:01\r\u001b[K |███████████████████████████▍ | 645kB 13.6MB/s eta 0:00:01\r\u001b[K |███████████████████████████▉ | 655kB 13.6MB/s eta 0:00:01\r\u001b[K |████████████████████████████▎ | 665kB 13.6MB/s eta 0:00:01\r\u001b[K |████████████████████████████▊ | 675kB 13.6MB/s eta 0:00:01\r\u001b[K |█████████████████████████████▏ | 686kB 13.6MB/s eta 0:00:01\r\u001b[K |█████████████████████████████▌ | 696kB 13.6MB/s eta 0:00:01\r\u001b[K |██████████████████████████████ | 706kB 13.6MB/s eta 0:00:01\r\u001b[K |██████████████████████████████▍ | 716kB 13.6MB/s eta 0:00:01\r\u001b[K |██████████████████████████████▉ | 727kB 13.6MB/s eta 0:00:01\r\u001b[K |███████████████████████████████▎| 737kB 13.6MB/s eta 0:00:01\r\u001b[K |███████████████████████████████▊| 747kB 13.6MB/s eta 0:00:01\r\u001b[K |████████████████████████████████| 757kB 13.6MB/s \n\u001b[?25hInstalling collected packages: python-crfsuite, sklearn-crfsuite\nSuccessfully installed python-crfsuite-0.9.6 sklearn-crfsuite-0.3.6\nCollecting git+https://www.github.com/keras-team/keras-contrib.git\n Cloning https://www.github.com/keras-team/keras-contrib.git to /tmp/pip-req-build-mi5xcnd6\n Running command git clone -q https://www.github.com/keras-team/keras-contrib.git /tmp/pip-req-build-mi5xcnd6\nRequirement already satisfied: keras in /usr/local/lib/python3.6/dist-packages (from keras-contrib==2.0.8) (2.2.5)\nRequirement already satisfied: numpy>=1.9.1 in /usr/local/lib/python3.6/dist-packages (from keras->keras-contrib==2.0.8) (1.17.4)\nRequirement already satisfied: keras-preprocessing>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from keras->keras-contrib==2.0.8) (1.1.0)\nRequirement already satisfied: pyyaml in /usr/local/lib/python3.6/dist-packages (from keras->keras-contrib==2.0.8) (3.13)\nRequirement already satisfied: h5py in /usr/local/lib/python3.6/dist-packages (from keras->keras-contrib==2.0.8) (2.8.0)\nRequirement already satisfied: six>=1.9.0 in /usr/local/lib/python3.6/dist-packages (from keras->keras-contrib==2.0.8) (1.12.0)\nRequirement already satisfied: keras-applications>=1.0.8 in /usr/local/lib/python3.6/dist-packages (from keras->keras-contrib==2.0.8) (1.0.8)\nRequirement already satisfied: scipy>=0.14 in /usr/local/lib/python3.6/dist-packages (from keras->keras-contrib==2.0.8) (1.3.3)\nBuilding wheels for collected packages: keras-contrib\n Building wheel for keras-contrib (setup.py) ... 
\u001b[?25l\u001b[?25hdone\n Created wheel for keras-contrib: filename=keras_contrib-2.0.8-cp36-none-any.whl size=101065 sha256=90fdfa9b1fd8972be383b86960625f08b2bba9a1913857247f61ed20755f5281\n Stored in directory: /tmp/pip-ephem-wheel-cache-irz3z024/wheels/11/27/c8/4ed56de7b55f4f61244e2dc6ef3cdbaff2692527a2ce6502ba\nSuccessfully built keras-contrib\nInstalling collected packages: keras-contrib\nSuccessfully installed keras-contrib-2.0.8\nCollecting seqeval\n Downloading https://files.pythonhosted.org/packages/34/91/068aca8d60ce56dd9ba4506850e876aba5e66a6f2f29aa223224b50df0de/seqeval-0.0.12.tar.gz\nRequirement already satisfied: numpy>=1.14.0 in /usr/local/lib/python3.6/dist-packages (from seqeval) (1.17.4)\nRequirement already satisfied: Keras>=2.2.4 in /usr/local/lib/python3.6/dist-packages (from seqeval) (2.2.5)\nRequirement already satisfied: keras-applications>=1.0.8 in /usr/local/lib/python3.6/dist-packages (from Keras>=2.2.4->seqeval) (1.0.8)\nRequirement already satisfied: six>=1.9.0 in /usr/local/lib/python3.6/dist-packages (from Keras>=2.2.4->seqeval) (1.12.0)\nRequirement already satisfied: pyyaml in /usr/local/lib/python3.6/dist-packages (from Keras>=2.2.4->seqeval) (3.13)\nRequirement already satisfied: scipy>=0.14 in /usr/local/lib/python3.6/dist-packages (from Keras>=2.2.4->seqeval) (1.3.3)\nRequirement already satisfied: keras-preprocessing>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from Keras>=2.2.4->seqeval) (1.1.0)\nRequirement already satisfied: h5py in /usr/local/lib/python3.6/dist-packages (from Keras>=2.2.4->seqeval) (2.8.0)\nBuilding wheels for collected packages: seqeval\n Building wheel for seqeval (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for seqeval: filename=seqeval-0.0.12-cp36-none-any.whl size=7424 sha256=6253fe759fbe66f2147a61e5b5e090f2d29875419560cfb4f94043ba4f9d980e\n Stored in directory: /root/.cache/pip/wheels/4f/32/0a/df3b340a82583566975377d65e724895b3fad101a3fb729f68\nSuccessfully built seqeval\nInstalling collected packages: seqeval\nSuccessfully installed seqeval-0.0.12\n"
],
[
"from google.colab import drive\ndrive.mount('/content/drive')",
"Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount(\"/content/drive\", force_remount=True).\n"
],
[
"path = \"/content/drive/My Drive/Colab Notebooks/DataSet/ner_dataset.csv\"\ndf= pd.read_csv(path,encoding = \"ISO-8859-1\")",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"df[\"Tag\"].unique()",
"_____no_output_____"
],
[
"df.describe()",
"_____no_output_____"
],
[
"#Checking null values, if any.\ndf.isnull().sum()",
"_____no_output_____"
],
[
"df = df.fillna(method = 'ffill')",
"_____no_output_____"
],
[
"# This is a class te get sentence. The each sentence will be list of tuples with its tag and pos.\nclass sentence(object):\n def __init__(self, df):\n self.n_sent = 1\n self.df = df\n self.empty = False\n agg = lambda s : [(w, p, t) for w, p, t in zip(s['Word'].values.tolist(),\n s['POS'].values.tolist(),\n s['Tag'].values.tolist())]\n self.grouped = self.df.groupby(\"Sentence #\").apply(agg)\n self.sentences = [s for s in self.grouped]\n \n def get_text(self):\n try:\n s = self.grouped['Sentence: {}'.format(self.n_sent)]\n self.n_sent +=1\n return s\n except:\n return None",
"_____no_output_____"
],
[
"#Displaying one full sentence\ngetter = sentence(df)\nsentences = [\" \".join([s[0] for s in sent]) for sent in getter.sentences]\nsentences[0]",
"_____no_output_____"
],
[
"sentences[2]",
"_____no_output_____"
],
[
"#sentence with its pos and tag.\nsent = getter.get_text()\nprint(sent)",
"[('Thousands', 'NNS', 'O'), ('of', 'IN', 'O'), ('demonstrators', 'NNS', 'O'), ('have', 'VBP', 'O'), ('marched', 'VBN', 'O'), ('through', 'IN', 'O'), ('London', 'NNP', 'B-geo'), ('to', 'TO', 'O'), ('protest', 'VB', 'O'), ('the', 'DT', 'O'), ('war', 'NN', 'O'), ('in', 'IN', 'O'), ('Iraq', 'NNP', 'B-geo'), ('and', 'CC', 'O'), ('demand', 'VB', 'O'), ('the', 'DT', 'O'), ('withdrawal', 'NN', 'O'), ('of', 'IN', 'O'), ('British', 'JJ', 'B-gpe'), ('troops', 'NNS', 'O'), ('from', 'IN', 'O'), ('that', 'DT', 'O'), ('country', 'NN', 'O'), ('.', '.', 'O')]\n"
],
[
"# Taking all the sentences\nsentences = getter.sentences",
"_____no_output_____"
],
[
"sentences[:2]",
"_____no_output_____"
],
[
"#Defining the parameter of LSTM\n# Number of data points passed in each iteration\nbatch_size = 64 \n# Passes through entire dataset\nepochs = 8\n# Maximum length of review\nmax_len = 75 \n# Dimension of embedding vector\nembedding = 40",
"_____no_output_____"
],
[
"#Getting unique words and labels from data\nwords = list(df['Word'].unique())\ntags = list(df['Tag'].unique())\n# Dictionary word:index pair\n# word is key and its value is corresponding index\nword_to_index = {w : i + 2 for i, w in enumerate(words)}\nword_to_index[\"UNK\"] = 1\nword_to_index[\"PAD\"] = 0\n\n# Dictionary lable:index pair\n# label is key and value is index.\ntag_to_index = {t : i + 1 for i, t in enumerate(tags)}\ntag_to_index[\"PAD\"] = 0\n\nidx2word = {i: w for w, i in word_to_index.items()}\nidx2tag = {i: w for w, i in tag_to_index.items()}",
"_____no_output_____"
],
[
"print(\"The word India is identified by the index: {}\".format(word_to_index[\"India\"]))\nprint(\"The label B-org for the organization is identified by the index: {}\".format(tag_to_index[\"B-org\"]))",
"The word India is identified by the index: 2570\nThe label B-org for the organization is identified by the index: 6\n"
],
[
"# Converting each sentence into list of index from list of tokens\nX = [[word_to_index[w[0]] for w in s] for s in sentences]\n\n# Padding each sequence to have same length of each word\nX = pad_sequences(maxlen = max_len, sequences = X, padding = \"post\", value = word_to_index[\"PAD\"])",
"_____no_output_____"
],
[
"# Convert label to index\ny = [[tag_to_index[w[2]] for w in s] for s in sentences]\n\n# padding\ny = pad_sequences(maxlen = max_len, sequences = y, padding = \"post\", value = tag_to_index[\"PAD\"])",
"_____no_output_____"
],
[
"num_tag = df['Tag'].nunique()\n# One hot encoded labels\ny = [to_categorical(i, num_classes = num_tag + 1) for i in y]",
"_____no_output_____"
],
[
"y[0]",
"_____no_output_____"
],
[
"\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.15)",
"_____no_output_____"
],
[
"print(\"Size of training input data : \", X_train.shape)\nprint(\"Size of training output data : \", np.array(y_train).shape)\nprint(\"Size of testing input data : \", X_test.shape)\nprint(\"Size of testing output data : \", np.array(y_test).shape)",
"Size of training input data : (40765, 75)\nSize of training output data : (40765, 75, 18)\nSize of testing input data : (7194, 75)\nSize of testing output data : (7194, 75, 18)\n"
],
[
"# Let's check the first sentence before and after processing.\nprint('*****Before Processing first sentence : *****\\n', ' '.join([w[0] for w in sentences[0]]))\nprint('*****After Processing first sentence : *****\\n ', X[0])",
"*****Before Processing first sentence : *****\n Thousands of demonstrators have marched through London to protest the war in Iraq and demand the withdrawal of British troops from that country .\n*****After Processing first sentence : *****\n [ 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 11 17 3 18 19 20 21 22 23\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0]\n"
],
[
"# First label before and after processing.\nprint('*****Before Processing first sentence : *****\\n', ' '.join([w[2] for w in sentences[0]]))\nprint('*****After Processing first sentence : *****\\n ', y[0])",
"*****Before Processing first sentence : *****\n O O O O O O B-geo O O O O O B-geo O O O O O B-gpe O O O O O\n*****After Processing first sentence : *****\n [[0. 1. 0. ... 0. 0. 0.]\n [0. 1. 0. ... 0. 0. 0.]\n [0. 1. 0. ... 0. 0. 0.]\n ...\n [1. 0. 0. ... 0. 0. 0.]\n [1. 0. 0. ... 0. 0. 0.]\n [1. 0. 0. ... 0. 0. 0.]]\n"
],
[
"!pip install keras==2.2.4",
"Collecting keras==2.2.4\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/5e/10/aa32dad071ce52b5502266b5c659451cfd6ffcbf14e6c8c4f16c0ff5aaab/Keras-2.2.4-py2.py3-none-any.whl (312kB)\n\r\u001b[K |█ | 10kB 29.7MB/s eta 0:00:01\r\u001b[K |██ | 20kB 6.7MB/s eta 0:00:01\r\u001b[K |███▏ | 30kB 9.6MB/s eta 0:00:01\r\u001b[K |████▏ | 40kB 6.2MB/s eta 0:00:01\r\u001b[K |█████▎ | 51kB 7.5MB/s eta 0:00:01\r\u001b[K |██████▎ | 61kB 8.9MB/s eta 0:00:01\r\u001b[K |███████▍ | 71kB 10.1MB/s eta 0:00:01\r\u001b[K |████████▍ | 81kB 11.3MB/s eta 0:00:01\r\u001b[K |█████████▍ | 92kB 12.4MB/s eta 0:00:01\r\u001b[K |██████████▌ | 102kB 10.2MB/s eta 0:00:01\r\u001b[K |███████████▌ | 112kB 10.2MB/s eta 0:00:01\r\u001b[K |████████████▋ | 122kB 10.2MB/s eta 0:00:01\r\u001b[K |█████████████▋ | 133kB 10.2MB/s eta 0:00:01\r\u001b[K |██████████████▊ | 143kB 10.2MB/s eta 0:00:01\r\u001b[K |███████████████▊ | 153kB 10.2MB/s eta 0:00:01\r\u001b[K |████████████████▊ | 163kB 10.2MB/s eta 0:00:01\r\u001b[K |█████████████████▉ | 174kB 10.2MB/s eta 0:00:01\r\u001b[K |██████████████████▉ | 184kB 10.2MB/s eta 0:00:01\r\u001b[K |████████████████████ | 194kB 10.2MB/s eta 0:00:01\r\u001b[K |█████████████████████ | 204kB 10.2MB/s eta 0:00:01\r\u001b[K |██████████████████████ | 215kB 10.2MB/s eta 0:00:01\r\u001b[K |███████████████████████ | 225kB 10.2MB/s eta 0:00:01\r\u001b[K |████████████████████████▏ | 235kB 10.2MB/s eta 0:00:01\r\u001b[K |█████████████████████████▏ | 245kB 10.2MB/s eta 0:00:01\r\u001b[K |██████████████████████████▏ | 256kB 10.2MB/s eta 0:00:01\r\u001b[K |███████████████████████████▎ | 266kB 10.2MB/s eta 0:00:01\r\u001b[K |████████████████████████████▎ | 276kB 10.2MB/s eta 0:00:01\r\u001b[K |█████████████████████████████▍ | 286kB 10.2MB/s eta 0:00:01\r\u001b[K |██████████████████████████████▍ | 296kB 10.2MB/s eta 0:00:01\r\u001b[K |███████████████████████████████▌| 307kB 10.2MB/s eta 0:00:01\r\u001b[K |████████████████████████████████| 317kB 10.2MB/s \n\u001b[?25hRequirement already satisfied: pyyaml in /usr/local/lib/python3.6/dist-packages (from keras==2.2.4) (3.13)\nRequirement already satisfied: six>=1.9.0 in /usr/local/lib/python3.6/dist-packages (from keras==2.2.4) (1.12.0)\nRequirement already satisfied: scipy>=0.14 in /usr/local/lib/python3.6/dist-packages (from keras==2.2.4) (1.3.3)\nRequirement already satisfied: keras-applications>=1.0.6 in /usr/local/lib/python3.6/dist-packages (from keras==2.2.4) (1.0.8)\nRequirement already satisfied: h5py in /usr/local/lib/python3.6/dist-packages (from keras==2.2.4) (2.8.0)\nRequirement already satisfied: keras-preprocessing>=1.0.5 in /usr/local/lib/python3.6/dist-packages (from keras==2.2.4) (1.1.0)\nRequirement already satisfied: numpy>=1.9.1 in /usr/local/lib/python3.6/dist-packages (from keras==2.2.4) (1.17.4)\nInstalling collected packages: keras\n Found existing installation: Keras 2.2.5\n Uninstalling Keras-2.2.5:\n Successfully uninstalled Keras-2.2.5\nSuccessfully installed keras-2.2.4\n"
],
[
"df.head()\nnum_tags = df['Tag'].nunique()",
"_____no_output_____"
],
[
"num_tags = df['Tag'].nunique()\n# Model architecture\ninput = Input(shape = (max_len,))\nmodel = Embedding(input_dim = len(words) + 2, output_dim = embedding, input_length = max_len, mask_zero = True)(input)\nmodel = Bidirectional(LSTM(units = 50, return_sequences=True, recurrent_dropout=0.1))(model)\nmodel = TimeDistributed(Dense(50, activation=\"relu\"))(model)\ncrf = CRF(num_tags+1) # CRF layer\nout = crf(model) # output\n\nmodel = Model(input, out)\nmodel.compile(optimizer=\"rmsprop\", loss=crf.loss_function, metrics=[crf.accuracy])\n\nmodel.summary()",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:74: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4138: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:133: The name tf.placeholder_with_default is deprecated. Please use tf.compat.v1.placeholder_with_default instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:3445: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.\nInstructions for updating:\nPlease use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:2974: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse tf.where in 2.0, which has the same broadcast rule as np.where\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/optimizers.py:790: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.\n\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_1 (InputLayer) (None, 75) 0 \n_________________________________________________________________\nembedding_1 (Embedding) (None, 75, 40) 1407200 \n_________________________________________________________________\nbidirectional_1 (Bidirection (None, 75, 100) 36400 \n_________________________________________________________________\ntime_distributed_1 (TimeDist (None, 75, 50) 5050 \n_________________________________________________________________\ncrf_1 (CRF) (None, 75, 18) 1278 \n=================================================================\nTotal params: 1,449,928\nTrainable params: 1,449,928\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"checkpointer = ModelCheckpoint(filepath = 'model.h5',\n verbose = 0,\n mode = 'auto',\n save_best_only = True,\n monitor='val_loss')\n",
"_____no_output_____"
],
[
"history = model.fit(X_train, np.array(y_train), batch_size=batch_size, epochs=epochs,\n validation_split=0.1, callbacks=[checkpointer])",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:986: The name tf.assign_add is deprecated. Please use tf.compat.v1.assign_add instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:973: The name tf.assign is deprecated. Please use tf.compat.v1.assign instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:2741: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.\n\nTrain on 36688 samples, validate on 4077 samples\nEpoch 1/8\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:174: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:181: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:190: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:199: The name tf.is_variable_initialized is deprecated. Please use tf.compat.v1.is_variable_initialized instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:206: The name tf.variables_initializer is deprecated. Please use tf.compat.v1.variables_initializer instead.\n\n36688/36688 [==============================] - 240s 7ms/step - loss: 9.1598 - crf_viterbi_accuracy: 0.8922 - val_loss: 8.8530 - val_crf_viterbi_accuracy: 0.9503\nEpoch 2/8\n36688/36688 [==============================] - 233s 6ms/step - loss: 8.8847 - crf_viterbi_accuracy: 0.9600 - val_loss: 8.7934 - val_crf_viterbi_accuracy: 0.9647\nEpoch 3/8\n36688/36688 [==============================] - 230s 6ms/step - loss: 8.8488 - crf_viterbi_accuracy: 0.9688 - val_loss: 8.7828 - val_crf_viterbi_accuracy: 0.9646\nEpoch 4/8\n36688/36688 [==============================] - 229s 6ms/step - loss: 8.8348 - crf_viterbi_accuracy: 0.9721 - val_loss: 8.7725 - val_crf_viterbi_accuracy: 0.9686\nEpoch 5/8\n36688/36688 [==============================] - 227s 6ms/step - loss: 8.8265 - crf_viterbi_accuracy: 0.9744 - val_loss: 8.7697 - val_crf_viterbi_accuracy: 0.9688\nEpoch 6/8\n36688/36688 [==============================] - 226s 6ms/step - loss: 8.8210 - crf_viterbi_accuracy: 0.9756 - val_loss: 8.7671 - val_crf_viterbi_accuracy: 0.9696\nEpoch 7/8\n36688/36688 [==============================] - 226s 6ms/step - loss: 8.8168 - crf_viterbi_accuracy: 0.9771 - val_loss: 8.7670 - val_crf_viterbi_accuracy: 0.9686\nEpoch 8/8\n36688/36688 [==============================] - 226s 6ms/step - loss: 8.8135 - crf_viterbi_accuracy: 0.9784 - val_loss: 8.7662 - val_crf_viterbi_accuracy: 0.9685\n"
],
[
"history.history.keys()",
"_____no_output_____"
],
[
"acc = history.history['crf_viterbi_accuracy']\nval_acc = history.history['val_crf_viterbi_accuracy']\nloss = history.history['loss']\nval_loss = history.history['val_loss']\nplt.figure(figsize = (8, 8))\nepochs = range(1, len(acc) + 1)\nplt.plot(epochs, acc, 'bo', label='Training acc')\nplt.plot(epochs, val_acc, 'b', label='Validation acc')\nplt.title('Training and validation accuracy')\nplt.legend()",
"_____no_output_____"
],
[
"plt.figure(figsize = (8, 8))\nplt.plot(epochs, loss, 'bo', label='Training loss')\nplt.plot(epochs, val_loss, 'b', label='Validation loss')\nplt.title('Training and validation loss')\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"# Evaluation\ny_pred = model.predict(X_test)\ny_pred = np.argmax(y_pred, axis=-1)\ny_test_true = np.argmax(y_test, -1)",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
],
[
[
"please use \nkeras==2.2.4 ",
"_____no_output_____"
]
],
[
[
"# Evaluation\ny_pred = model.predict(X_test)\ny_pred = np.argmax(y_pred, axis=-1)\ny_test_true = np.argmax(y_test, -1)",
"_____no_output_____"
],
[
"# Convert the index to tag\ny_pred = [[idx2tag[i] for i in row] for row in y_pred]\ny_test_true = [[idx2tag[i] for i in row] for row in y_test_true]",
"_____no_output_____"
],
[
"print(\"F1-score is : {:.1%}\".format(f1_score(y_test_true, y_pred)))",
"F1-score is : 88.0%\n"
],
[
"report = flat_classification_report(y_pred=y_pred, y_true=y_test_true)\nprint(report)",
" precision recall f1-score support\n\n B-art 0.00 0.00 0.00 56\n B-eve 0.50 0.24 0.32 38\n B-geo 0.88 0.88 0.88 5700\n B-gpe 0.96 0.94 0.95 2352\n B-nat 0.91 0.31 0.47 32\n B-org 0.73 0.74 0.74 3038\n B-per 0.83 0.82 0.82 2488\n B-tim 0.90 0.89 0.89 3017\n I-art 0.00 0.00 0.00 35\n I-eve 0.00 0.00 0.00 33\n I-geo 0.81 0.79 0.80 1127\n I-gpe 0.90 0.60 0.72 30\n I-nat 1.00 0.29 0.44 7\n I-org 0.71 0.82 0.76 2479\n I-per 0.86 0.83 0.84 2553\n I-tim 0.81 0.76 0.79 1011\n O 0.99 0.99 0.99 131646\n PAD 1.00 1.00 1.00 383908\n\n accuracy 0.99 539550\n macro avg 0.71 0.61 0.63 539550\nweighted avg 0.99 0.99 0.99 539550\n\n"
],
[
"# At every execution model picks some random test sample from test set.\ni = np.random.randint(0,X_test.shape[0]) # choose a random number between 0 and len(X_te)b\np = model.predict(np.array([X_test[i]]))\np = np.argmax(p, axis=-1)\ntrue = np.argmax(y_test[i], -1)\n\nprint(\"Sample number {} of {} (Test Set)\".format(i, X_test.shape[0]))\n# Visualization\nprint(\"{:15}||{:5}||{}\".format(\"Word\", \"True\", \"Pred\"))\nprint(30 * \"=\")\nfor w, t, pred in zip(X_test[i], true, p[0]):\n if w != 0:\n print(\"{:15}: {:5} {}\".format(words[w-2], idx2tag[t], idx2tag[pred]))",
"Sample number 5331 of 7194 (Test Set)\nWord ||True ||Pred\n==============================\nIn : O O\na : O O\nseparate : O O\nstatement : O O\n, : O O\nthe : O O\nMuslim : B-org B-org\nPublic : I-org I-org\nAffairs : I-org I-org\nCouncil : I-org I-org\nextended : O O\ncondolences : O O\nto : O O\nthe : O O\nfamilies : O O\nof : O O\nthe : O O\nvictims : O O\nand : O O\nto : O O\nthe : O O\nBritish : B-gpe B-gpe\npeople : O O\n. : O O\n"
],
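[
"# A minimal inference sketch: tag a new, unseen sentence with the trained model. It reuses\n# `model`, `word_to_index`, `idx2tag`, `max_len` and `pad_sequences` from the cells above;\n# the example sentence is arbitrary, and out-of-vocabulary words fall back to the 'UNK' index.\ntest_sentence = 'Thousands marched through London to protest the war'.split()\nx_new = [[word_to_index.get(w, word_to_index['UNK']) for w in test_sentence]]\nx_new = pad_sequences(maxlen=max_len, sequences=x_new, padding='post', value=word_to_index['PAD'])\np_new = np.argmax(model.predict(x_new), axis=-1)\nfor w, tag_idx in zip(test_sentence, p_new[0]):\n    print('{:15}: {}'.format(w, idx2tag[tag_idx]))",
"_____no_output_____"
],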
[
"with open('word_to_index.pickle', 'wb') as f:\n pickle.dump(word_to_index, f)\n\nwith open('tag_to_index.pickle', 'wb') as f:\n pickle.dump(tag_to_index, f)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0f1071aa70f2765f34643b073ce1d2b51c1cdee
| 5,900 |
ipynb
|
Jupyter Notebook
|
课程汇集/虚谷号内置课程目录/1.虚谷号GPIO/WebGPIO/虚谷号WebGPIO应用(客户端Python版) .ipynb
|
xiezuoru/vvBoard-docs
|
4e13b4a6699cc937f2c61b60c8a6523b1f3557c6
|
[
"MIT"
] | 67 |
2019-05-04T05:04:32.000Z
|
2022-03-18T14:34:56.000Z
|
课程汇集/虚谷号内置课程目录/1.虚谷号GPIO/WebGPIO/虚谷号WebGPIO应用(客户端Python版) .ipynb
|
xiezuoru/vvBoard-docs
|
4e13b4a6699cc937f2c61b60c8a6523b1f3557c6
|
[
"MIT"
] | 7 |
2019-05-22T06:26:10.000Z
|
2021-07-12T03:02:24.000Z
|
课程汇集/虚谷号内置课程目录/1.虚谷号GPIO/WebGPIO/虚谷号WebGPIO应用(客户端Python版) .ipynb
|
xiezuoru/vvBoard-docs
|
4e13b4a6699cc937f2c61b60c8a6523b1f3557c6
|
[
"MIT"
] | 32 |
2019-06-03T06:23:30.000Z
|
2021-08-29T22:43:20.000Z
| 22.779923 | 141 | 0.485932 |
[
[
[
"# 虚谷号WebGPIO应用(客户端Python版)\n\n虚谷号和手机(App inventor)如何互动控制?\n\n虚谷号和掌控板如何互动控制?\n\n为了让虚谷号和其他开源硬件、编程语言快速互动,虚谷号的WebGPIO应运而生。简单的说,只要在虚谷号上运行一个python文件,就可以用WebAPI的形式来与虚谷号互动,可以获取虚谷号板载Arduino的所有引脚的电平,也可以控制所有引脚。",
"_____no_output_____"
],
[
"## 1.接口介绍\n\n要在虚谷号上运行“webgpio.py”。也可以将“webgpio.py”文件更名为“main.py”,复制到vvBoard的Python目录,只要一开机,虚谷号就会执行。\n\n下载地址:https://github.com/vvlink/vvBoard-docs/tree/master/webgpio\n\nWebAPI地址:\n\nhttp://[虚谷号ip]:1024/\n\n注:下面假设虚谷号的IP地址为:192.168.1.101\n\n### 1.1 获取引脚状态\n\nmethod方式:GET\n\n参数示例: { pin:\"D2\" }\n\nurl范例:http://192.168.1.101:1024/?pin=D2\n\n信息返回:\n\n当pin为D0--D13时,读取数字引脚的数字值,0为低电平,1为高电平。\n\n{ \"pin\":\"D1\", \"error_code\":0, \"msg\":1 }\n\n当pin为A0--A5时,读取模拟引脚的模拟值,0-255之间。\n\n{ \"pin\":\"A0\", \"error_code\":0, \"msg\":255 }\n\n### 1.2. 控制引脚电平\n\nmethod方式: POST\n\n参数示例:\n\n{ pin:\"D1\" value:255 type:\"digital\" }\n\n注:Digital、Analog、Servo等词语不分大小写,也可以用“1、2、3”等数字来代替。\n\n - 当type为digital时,设置引脚的电平值为value的值,0表示LOW,非0表示HIGH;\n - 当type为analog时,设置引脚的PWM值为value的值,即0-255之间;\n - 当type为servo时,设置引脚上舵机的转动角度为value的值,即0-180之间。\n\n\n返回参数:\n\n{ \"pin\":\"D2\", \"error_code\":0, \"msg\":\"success,set [pin] to [value] with [types] mode\" }\n\n当pin不在D0--D13,A0--A5之间时:\n\n{ \"pin\":\"D2\", \"error_code\":1 \"msg\":\"error,invalid Pin\" }\n\n当value不能转换整数时:\n\n{ \"pin\":\"D2\", \"error_code\":1, \"msg\":\"error,Value is wrong\" }\n\n当type不正确时:\n\n{ \"pin\":\"D2\", \"error_code\":1, \"msg\":\"error,Type is wrong\" }",
"_____no_output_____"
],
[
"## 2. 客户端代码范例(Python)\n\n虽然通过任何一个能够发送Http请求的工具,包括浏览器、Word、掌控板、手机等,都可以和虚谷号互动。接下来选择Python语言写一个Demo代码。Python借助Requests库来发送Http请求,是非常方便的。参数传递方面,同时支持params和data两种模式。",
"_____no_output_____"
],
[
"### 2.1.调用POST方法,对虚谷号的引脚进行控制\n\n在该案例中可以修改的参数有:\n\n - url:设置成虚谷号的IP地址\n - pin:对应的引脚 A0-A5,D0-D13 \n - value:对应的数值\n - type:控制的类型可以是1,2,3,分别代表“digital”、“analog”、“servo”\n \n当设置D13号引脚的电平为1,该引脚对应的LED就会亮起。",
"_____no_output_____"
]
],
[
[
"import requests\n\nvvboardip='192.168.3.42'\npin='D13'\nvalue=1\nt=1\npayload = {\"pin\":pin,'value':value,'type':t}\nre = requests.post(url='http://'+ vvboardip +':1024/',params=payload) \nif (re.status_code==200):\n r=re.json()\n print('成功发送控制命令:'+ r[\"msg\"]) \n print('返回的信息为:') \n print(re.text) ",
"成功发送控制命令:success,set D13 to 1 with 1 mode\n返回的信息为:\n{\n \"error_code\": 0,\n \"msg\": \"success,set D13 to 1 with 1 mode\",\n \"pin\": \"D13\"\n}\n"
]
],
[
[
"### 2.2. 调用GET方法,读取A0号引脚的电平。\n\n在该案例中可以修改的参数有:\n\n - url:设置成虚谷号的IP地址\n - pin:对应的引脚 A0-A5,D0-D13 \n \n注意:该方法需要外接传感器,否则数字口默认返回为低电平,模拟口返回随机数。",
"_____no_output_____"
]
],
[
[
"import requests\n\nvvboardip='192.168.3.42'\npin='A0'\npayload = {\"pin\":pin}\nre = requests.get(url='http://'+ vvboardip +':1024/',params=payload) \nif (re.status_code==200):\n r=re.json()\n print('成功获取引脚'+ r[\"pin\"] + '的状态:'+ r[\"msg\"]) \n print('返回的原始信息为:') \n print(re.text) ",
"成功获取引脚A0的状态:393\n返回的原始信息为:\n{\n \"error_code\": 0,\n \"msg\": \"393\",\n \"pin\": \"A0\"\n}\n"
]
],
[
[
"## 3. 其他说明",
"_____no_output_____"
],
[
"1.手机上快速控制,如何实现?\n\n访问:http://192.168.3.42:1024/help/\n\n可以直接在网页上测试接口。",
"_____no_output_____"
],
[
"2.App invntor如何借助这一接口与虚谷号互动?\n\n请参考github,提供了范例。\n\nhttps://github.com/vvlink/vvBoard-docs/tree/master/webgpio,",
"_____no_output_____"
],
[
"3.掌控板如何利用这一接口与虚谷号互动?\n\n掌控板中提供了urequests库,在mPython软件中可以编写发送Http请求的应用。\n\n另外,掌控板中提供了WebtinyIO,使用方式和虚谷号的WebGPIO基本一致。",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d0f107793f38b047584f178f11dda99a08d4694f
| 4,654 |
ipynb
|
Jupyter Notebook
|
Chapter01/Exercise01_01/Exercise01_01.ipynb
|
adityashah95/The-Reinforcement-Learning-Workshop
|
6efe78c68379dd27df6ff56846df49eb60ac81f1
|
[
"MIT"
] | null | null | null |
Chapter01/Exercise01_01/Exercise01_01.ipynb
|
adityashah95/The-Reinforcement-Learning-Workshop
|
6efe78c68379dd27df6ff56846df49eb60ac81f1
|
[
"MIT"
] | null | null | null |
Chapter01/Exercise01_01/Exercise01_01.ipynb
|
adityashah95/The-Reinforcement-Learning-Workshop
|
6efe78c68379dd27df6ff56846df49eb60ac81f1
|
[
"MIT"
] | null | null | null | 32.319444 | 300 | 0.509669 |
[
[
[
"# Toy Environment\n\n\n\nIn this exercise we will learn how to implement a simple Toy environment using Python. The environment is illustrated in figure. It is composed of 3 states and 2 actions. The initial state is state 1.\nThe goal of this exercise is to implement a class Environment with a method step() taking as input the agent’s action and returning the pair (next state, reward). The environment can be implemented using pure python. In addition, write also a reset() method that restarts the environment state.",
"_____no_output_____"
]
],
[
[
"from typing import Tuple",
"_____no_output_____"
],
[
"class Environment:\n def __init__(self):\n \"\"\"\n Constructor of the Environment class.\n \"\"\"\n self._initial_state = 1\n self._allowed_actions = [0, 1] # 0: A, 1: B\n self._states = [1, 2, 3]\n self._current_state = self._initial_state\n\n def step(self, action: int) -> Tuple[int, int]:\n \"\"\"\n Step function: compute the one-step dynamic from the given action.\n \n Args:\n action (int): the action taken by the agent.\n \n Returns:\n The tuple current_state, reward.\n \"\"\"\n\n # check if the action is allowed\n if action not in self._allowed_actions:\n raise ValueError(\"Action is not allowed\")\n\n reward = 0\n if action == 0 and self._current_state == 1:\n self._current_state = 2\n reward = 1\n elif action == 1 and self._current_state == 1:\n self._current_state = 3\n reward = 10\n elif action == 0 and self._current_state == 2:\n self._current_state = 1\n reward = 0\n elif action == 1 and self._current_state == 2:\n self._current_state = 3\n reward = 1\n elif action == 0 and self._current_state == 3:\n self._current_state = 2\n reward = 0\n elif action == 1 and self._current_state == 3:\n self._current_state = 3\n reward = 10\n\n return self._current_state, reward\n\n def reset(self) -> int:\n \"\"\"\n Reset the environment starting from the initial state.\n \n Returns:\n The environment state after reset (initial state).\n \"\"\"\n self._current_state = self._initial_state\n return self._current_state",
"_____no_output_____"
],
[
"env = Environment()\nstate = env.reset()\n\nactions = [0, 0, 1, 1, 0, 1]\n\nprint(f\"Initial state is {state}\")\n\nfor action in actions:\n next_state, reward = env.step(action)\n print(f\"From state {state} to state {next_state} with action {action}, reward: {reward}\")\n state = next_state",
"Initial state is 1\nFrom state 1 to state 2 with action 0, reward: 1\nFrom state 2 to state 1 with action 0, reward: 0\nFrom state 1 to state 3 with action 1, reward: 10\nFrom state 3 to state 3 with action 1, reward: 10\nFrom state 3 to state 2 with action 0, reward: 0\nFrom state 2 to state 3 with action 1, reward: 1\n"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0f10b6ed71807170f803d21ebee9884e7cd2858
| 9,141 |
ipynb
|
Jupyter Notebook
|
doc/source/cookbook/Power_Law.ipynb
|
NegriAndrea/pyxsim
|
7d3e2924a3d629ea7e91d526475c26572a3a704a
|
[
"BSD-3-Clause"
] | null | null | null |
doc/source/cookbook/Power_Law.ipynb
|
NegriAndrea/pyxsim
|
7d3e2924a3d629ea7e91d526475c26572a3a704a
|
[
"BSD-3-Clause"
] | null | null | null |
doc/source/cookbook/Power_Law.ipynb
|
NegriAndrea/pyxsim
|
7d3e2924a3d629ea7e91d526475c26572a3a704a
|
[
"BSD-3-Clause"
] | null | null | null | 29.872549 | 382 | 0.583525 |
[
[
[
"The second example shows how to set up a power-law spectral source, as well as how to add two sets of photons together.\n\nThis example will also briefly show how to set up a mock dataset \"in memory\" using yt. For more details on how to do this, check out [the yt docs on in-memory datasets](http://yt-project.org/doc/examining/generic_array_data.html).\n\nLoad up the necessary modules:",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport matplotlib\nmatplotlib.rc(\"font\", size=18, family=\"serif\")\nimport yt\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom yt.utilities.physical_ratios import cm_per_kpc, K_per_keV\nfrom yt.units import mp\nimport pyxsim",
"_____no_output_____"
]
],
[
[
"The cluster we set up will be a simple isothermal $\\beta$-model system, with a temperature of 4 keV. We'll set it up on a uniform grid of 2 Mpc and 256 cells on a side. The parameters of the model are:",
"_____no_output_____"
]
],
[
[
"R = 1000. # radius of cluster in kpc\nr_c = 100. # scale radius of cluster in kpc\nrho_c = 1.673e-26 # scale density in g/cm^3\nbeta = 1. # beta parameter\nkT = 4. # cluster temperature in keV\nnx = 256\nddims = (nx,nx,nx)",
"_____no_output_____"
]
],
[
[
"and we set up the density and temperature arrays:",
"_____no_output_____"
]
],
[
[
"x, y, z = np.mgrid[-R:R:nx*1j,\n -R:R:nx*1j,\n -R:R:nx*1j]\nr = np.sqrt(x**2+y**2+z**2)",
"_____no_output_____"
],
[
"dens = np.zeros(ddims)\ndens[r <= R] = rho_c*(1.+(r[r <= R]/r_c)**2)**(-1.5*beta)\ndens[r > R] = 0.0\ntemp = kT*K_per_keV*np.ones(ddims)",
"_____no_output_____"
]
],
[
[
"Next, we will take the density and temperature arrays and put them into a dictionary with their units, where we will also set up velocity fields set to zero. Then, we'll call the yt function `load_uniform_grid` to set this up as a full-fledged yt dataset. ",
"_____no_output_____"
]
],
[
[
"data = {}\ndata[\"density\"] = (dens, \"g/cm**3\")\ndata[\"temperature\"] = (temp, \"K\")\ndata[\"velocity_x\"] = (np.zeros(ddims), \"cm/s\")\ndata[\"velocity_y\"] = (np.zeros(ddims), \"cm/s\")\ndata[\"velocity_z\"] = (np.zeros(ddims), \"cm/s\")\n\nbbox = np.array([[-0.5, 0.5], [-0.5, 0.5], [-0.5, 0.5]]) # The bounding box of the domain in code units \n\nds = yt.load_uniform_grid(data, ddims, 2*R*cm_per_kpc, bbox=bbox)",
"_____no_output_____"
]
],
[
[
"The next thing we have to do is specify a derived field for the normalization of the power-law emission. This could come from a variety of sources, for example, relativistic cosmic-ray electrons. For simplicity, we're not going to assume a specific model, except that we will only specify that the source of the power law emission is proportional to the gas mass in each cell:",
"_____no_output_____"
]
],
[
[
"norm = yt.YTQuantity(1.0e-19, \"photons/s/keV\")\ndef _power_law_emission(field, data):\n return norm*data[\"cell_mass\"]/mp\nds.add_field((\"gas\",\"power_law_emission\"), function=_power_law_emission, units=\"photons/s/keV\")",
"_____no_output_____"
]
],
[
[
"where we have normalized the field arbitrarily. Note that the emission field for a power-law model is a bit odd in that it is technically a specific *luminosity* for the cell. This is done primarily for simplicity in designing the underlying algorithm. \n\nNow, let's set up a sphere to collect photons from:",
"_____no_output_____"
]
],
[
[
"sp = ds.sphere(\"c\", (0.5, \"Mpc\"))",
"_____no_output_____"
]
],
[
[
"And set the parameters for the initial photon sample:",
"_____no_output_____"
]
],
[
[
"A = yt.YTQuantity(500., \"cm**2\")\nexp_time = yt.YTQuantity(1.0e5, \"s\")\nredshift = 0.03",
"_____no_output_____"
]
],
[
[
"Set up two source models, a thermal model and a power-law model:",
"_____no_output_____"
]
],
[
[
"thermal_model = pyxsim.ThermalSourceModel(\"apec\", 0.01, 80.0, 80000, Zmet=0.3)\nplaw_model = pyxsim.PowerLawSourceModel(1.0, 0.01, 80.0, \"power_law_emission\", 1.0)",
"_____no_output_____"
]
],
[
[
"Now, generate the photons for each source model. After we've generated the photons for both, we'll add them together. ",
"_____no_output_____"
]
],
[
[
"thermal_photons = pyxsim.PhotonList.from_data_source(sp, redshift, A, exp_time, thermal_model)\nplaw_photons = pyxsim.PhotonList.from_data_source(sp, redshift, A, exp_time, plaw_model)\n\nphotons = thermal_photons + plaw_photons",
"_____no_output_____"
]
],
[
[
"Now, we want to project the photons along a line of sight. We'll specify the `\"wabs\"` model for foreground galactic absorption. We'll create events from the total set of photons as well as the power-law only set, to see the difference between the two.",
"_____no_output_____"
]
],
[
[
"events = photons.project_photons(\"x\", (30.0, 45.0), absorb_model=\"wabs\", nH=0.02)\nplaw_events = plaw_photons.project_photons(\"x\", (30.0, 45.0), absorb_model=\"wabs\", nH=0.02)",
"_____no_output_____"
]
],
[
[
"Finally, create energy spectra for both sets of events. We won't bother convolving with instrument responses here, because we just want to see what the spectra look like.",
"_____no_output_____"
]
],
[
[
"events.write_spectrum(\"all_spec.fits\", 0.1, 80.0, 8000, overwrite=True)\nplaw_events.write_spectrum(\"plaw_spec.fits\", 0.1, 80.0, 8000, overwrite=True)",
"_____no_output_____"
]
],
[
[
"To visualize the spectra, we'll load them up using [AstroPy's FITS I/O](http://docs.astropy.org/en/stable/io/fits/) and use [Matplotlib](http://matplotlib.org) to plot the spectra: ",
"_____no_output_____"
]
],
[
[
"import astropy.io.fits as pyfits\nf1 = pyfits.open(\"all_spec.fits\")\nf2 = pyfits.open(\"plaw_spec.fits\")\nplt.figure(figsize=(9,7))\nplt.loglog(f2[\"SPECTRUM\"].data[\"ENERGY\"], f2[\"SPECTRUM\"].data[\"COUNTS\"])\nplt.loglog(f1[\"SPECTRUM\"].data[\"ENERGY\"], f1[\"SPECTRUM\"].data[\"COUNTS\"])\nplt.xlim(0.1, 50)\nplt.ylim(1, 2.0e4)\nplt.xlabel(\"E (keV)\")\nplt.ylabel(\"counts/bin\")",
"_____no_output_____"
]
],
[
[
"As you can see, the green line shows a sum of a thermal and power-law spectrum, the latter most prominent at high energies. The blue line shows the power-law spectrum. Both spectra are absorbed at low energies, thanks to the Galactic foreground absorption. ",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0f112d9c7d4d268f736aaf909848eb7dc6ee743
| 1,535 |
ipynb
|
Jupyter Notebook
|
Session-1/Assignments/Assignment-1A/first_notebook.ipynb
|
Madhuparna04/ML-Series
|
190c613d6d56566fe7899df8b7ad46ab709e952b
|
[
"MIT"
] | 12 |
2018-03-11T12:39:57.000Z
|
2021-06-06T18:43:39.000Z
|
Session-1/Assignments/Assignment-1A/first_notebook.ipynb
|
Madhuparna04/ML-Series
|
190c613d6d56566fe7899df8b7ad46ab709e952b
|
[
"MIT"
] | null | null | null |
Session-1/Assignments/Assignment-1A/first_notebook.ipynb
|
Madhuparna04/ML-Series
|
190c613d6d56566fe7899df8b7ad46ab709e952b
|
[
"MIT"
] | 26 |
2018-03-12T16:32:09.000Z
|
2020-10-01T06:14:29.000Z
| 21.319444 | 119 | 0.550489 |
[
[
[
"# This is a markdown cell\n\nYou can format text here in markdown.\n\n## You can have subheadings\n\n- You can have bullet points\n1. numbered lists\n\n$and \\ more$\n\nThere are lots of markdown resources on the web you can use to learn markdown. <br>\nHere's a starting point: https://guides.github.com/features/mastering-markdown/ <br>\nMore on Anaconda and Jupyter notebooks, check out this Udacity link: https://classroom.udacity.com/courses/ud1111",
"_____no_output_____"
]
],
[
[
"#This is a code cell You can run code here. Try running the code cells\nprint('Hello World')\na = 2+3",
"_____no_output_____"
],
[
"print(a)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code"
]
] |
d0f11f430c84826aa2f387c0e1bcfe67bae3d802
| 250,504 |
ipynb
|
Jupyter Notebook
|
extract_features_final.ipynb
|
cjwcommuny/Extract-Bounding-Box-Features-by-Detectron2
|
3c71a8bb3e9cbd7c3863ccf9355bdb95c4b31858
|
[
"MIT"
] | null | null | null |
extract_features_final.ipynb
|
cjwcommuny/Extract-Bounding-Box-Features-by-Detectron2
|
3c71a8bb3e9cbd7c3863ccf9355bdb95c4b31858
|
[
"MIT"
] | null | null | null |
extract_features_final.ipynb
|
cjwcommuny/Extract-Bounding-Box-Features-by-Detectron2
|
3c71a8bb3e9cbd7c3863ccf9355bdb95c4b31858
|
[
"MIT"
] | null | null | null | 1,154.396313 | 136,397 | 0.96153 |
[
[
[
"# extract features of region of an image from mask-rcnn by Detectron2\n\n\nimport detectron2\nfrom detectron2.utils.logger import setup_logger\nsetup_logger()\n\n# import some common libraries\nimport numpy as np\nimport cv2\nimport random\nimport io\nimport torch\n\n# import some common detectron2 utilities\nfrom detectron2 import model_zoo\nfrom detectron2.engine import DefaultPredictor\nfrom detectron2.config import get_cfg\nfrom detectron2.utils.visualizer import Visualizer\nfrom detectron2.data import MetadataCatalog\nfrom detectron2.structures import Instances\nfrom pprint import pprint\n\n# Show the image in ipynb\nfrom IPython.display import clear_output, Image, display\nimport PIL.Image",
"_____no_output_____"
],
[
"def build_cfg(score_thresh=0.5):\n cfg = get_cfg()\n cfg.merge_from_file(model_zoo.get_config_file(\"COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml\"))\n cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = score_thresh\n cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(\"COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml\")\n return cfg",
"_____no_output_____"
],
[
"def extract_features(im, predictor):\n \"\"\"\n @param im: directly imread from OpenCV, e.g. cv2.imread(\"data/input.jpg\")\n @return \n - box_features: shape=(num_rois, feature_dim=1024)\n - outputs: outputs of predictor\n \"\"\"\n outputs = predictor(im)\n instances = outputs[\"instances\"]\n \n # components of model\n model = predictor.model\n backbone = model.backbone\n proposal_generator = model.proposal_generator\n roi_heads = model.roi_heads\n \n # preprocess image\n height, width = im.shape[:2]\n x = predictor.transform_gen.get_transform(im).apply_image(im)\n x = torch.as_tensor(x.astype(\"float32\").transpose(2, 0, 1))\n batched_inputs = [{\"image\": x, \"height\": height, \"width\": width}]\n \n # main procedure\n images = model.preprocess_image(batched_inputs)\n features = backbone(images.tensor)\n proposals = [Instances(image_size=x.shape[1:], proposal_boxes=instances.pred_boxes)]\n features = [features[f] for f in roi_heads.in_features]\n box_features = roi_heads.box_pooler(features, [x.proposal_boxes for x in proposals])\n box_features = roi_heads.box_head(box_features)\n return box_features, outputs",
"_____no_output_____"
],
[
"def showarray(a, fmt='jpeg'):\n a = np.uint8(np.clip(a, 0, 255))\n f = io.BytesIO()\n PIL.Image.fromarray(a).save(f, fmt)\n display(Image(data=f.getvalue()))",
"_____no_output_____"
],
[
"# load image\nim = cv2.imread(\"data/input.jpg\")\nim_rgb = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)\nshowarray(im_rgb)",
"_____no_output_____"
],
[
"cfg = build_cfg()\npredictor = DefaultPredictor(cfg)\nbox_features, outputs = extract_features(im, predictor)\n\nprint(\"box_features.shape: {}\".format(box_features.shape))\nprint(\"box_location.shape: {}\".format(outputs[\"instances\"].pred_boxes.tensor.shape))",
"box_features.shape: torch.Size([18, 1024])\nbox_location.shape: torch.Size([18, 4])\n"
],
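[
"# A minimal sketch: pair each extracted ROI feature vector with the class and score predicted\n# for that box (e.g. before saving them for a downstream task). It assumes `box_features` and\n# `outputs` come from the extract_features() call above.\ninstances = outputs['instances'].to('cpu')\nfor i, (cls, score) in enumerate(zip(instances.pred_classes.tolist(), instances.scores.tolist())):\n    print('roi {}: class={}, score={:.2f}, feature_dim={}'.format(i, cls, score, box_features[i].shape[0]))",
"_____no_output_____"
],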
[
"# visualization\nv = Visualizer(im[:, :, ::-1], MetadataCatalog.get(cfg.DATASETS.TRAIN[0]), scale=1.2)\nv = v.draw_instance_predictions(outputs[\"instances\"].to(\"cpu\"))\nshowarray(v.get_image())",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0f131baaa00107986feeb5b07ea5f5bdbd8988d
| 12,830 |
ipynb
|
Jupyter Notebook
|
analyses/notes/Limma_expects_log_transformed_data.ipynb
|
krassowski/meningitis-integration
|
1c78711d09a48f23b30b1e8d1cd246177055ff2c
|
[
"MIT"
] | null | null | null |
analyses/notes/Limma_expects_log_transformed_data.ipynb
|
krassowski/meningitis-integration
|
1c78711d09a48f23b30b1e8d1cd246177055ff2c
|
[
"MIT"
] | 2 |
2020-05-03T18:02:05.000Z
|
2020-05-03T18:02:29.000Z
|
analyses/notes/Limma_expects_log_transformed_data.ipynb
|
krassowski/meningitis-integration
|
1c78711d09a48f23b30b1e8d1cd246177055ff2c
|
[
"MIT"
] | 1 |
2020-12-27T19:35:10.000Z
|
2020-12-27T19:35:10.000Z
| 21.969178 | 237 | 0.505768 |
[
[
[
"from helpers.utilities import *\n%run helpers/notebook_setup.ipynb",
"_____no_output_____"
]
],
[
[
"While attempting to compare limma's results for log-transformed an non-transformed data, it was noticed (and brought up by Dr Tim) That the values of logFC produced by limma for non-transformed data are of wrong order of magnitude.",
"_____no_output_____"
],
[
"I have investigated the issue, following the limma calculations for non-transformed data step by step:",
"_____no_output_____"
]
],
[
[
"indexed_by_target_path = 'data/clean/protein/indexed_by_target.csv'\nclinical_path = 'data/clean/protein/clinical_data_ordered_to_match_proteins_matrix.csv'",
"_____no_output_____"
],
[
"clinical = read_csv(clinical_path, index_col=0)\nraw_protein_matrix = read_csv(indexed_by_target_path, index_col=0)",
"_____no_output_____"
],
[
"by_condition = clinical.Meningitis",
"_____no_output_____"
],
[
"tb_lysozyme = raw_protein_matrix[\n raw_protein_matrix.columns[by_condition == 'Tuberculosis']\n].loc['Lysozyme'].mean()",
"_____no_output_____"
],
[
"hc_lysozyme = raw_protein_matrix[\n raw_protein_matrix.columns[by_condition == 'Healthy control']\n].loc['Lysozyme'].mean()",
"_____no_output_____"
],
[
"tb_lysozyme / hc_lysozyme",
"_____no_output_____"
],
[
"tb_lysozyme",
"_____no_output_____"
],
[
"hc_lysozyme",
"_____no_output_____"
]
],
[
[
"While for the transformed data:",
"_____no_output_____"
]
],
[
[
"from numpy import log10",
"_____no_output_____"
],
[
"log10(tb_lysozyme)",
"_____no_output_____"
],
[
"log10(hc_lysozyme)",
"_____no_output_____"
],
[
"log10(tb_lysozyme) / log10(hc_lysozyme)",
"_____no_output_____"
],
[
"protein_matrix = raw_protein_matrix.apply(log10)",
"_____no_output_____"
],
[
"%%R -i protein_matrix -i by_condition\nimport::here(space_to_dot, dot_to_space, .from='helpers/utilities.R')\nimport::here(\n limma_fit, limma_diff_ebayes, full_table,\n design_from_conditions, calculate_means,\n .from='helpers/differential_expression.R'\n)\n\ndiff_ebayes = function(a, b, data=protein_matrix, conditions_vector=by_condition, ...) {\n limma_diff_ebayes(a, b, data=data, conditions_vector=conditions_vector, ...)\n}",
"_____no_output_____"
],
[
"%%R -o tb_all_proteins_raw -i raw_protein_matrix\nresult = diff_ebayes('Tuberculosis', 'Healthy control', data=raw_protein_matrix)\ntb_all_proteins_raw = full_table(result)",
"_____no_output_____"
],
[
"%%R\nhead(full_table(result, coef=1))",
" logFC AveExpr t P.Value\nLysozyme 61798.20 67997.26 12.34222 3.414236e-20\nTIMP-1 65320.26 89128.78 11.82749 3.121111e-19\nIGFBP-4 104840.02 186800.74 11.56193 9.882769e-19\nC3d 124850.49 99248.92 11.43494 1.719287e-18\nCyclophilin A 130136.76 117191.29 11.15072 5.970601e-18\n14-3-3 protein zeta/delta 141689.40 105857.89 10.58352 7.404860e-17\n adj.P.Val B protein\nLysozyme 4.455578e-17 -4.254329 Lysozyme\nTIMP-1 2.036525e-16 -4.264678 TIMP-1\nIGFBP-4 4.299004e-16 -4.270296 IGFBP-4\nC3d 5.609172e-16 -4.273051 C3d\nCyclophilin A 1.558327e-15 -4.279385 Cyclophilin A\n14-3-3 protein zeta/delta 1.509659e-14 -4.292907 14-3-3 protein zeta/delta\n"
],
[
"%%R\n# logFC is taken from the coefficient of fit (result):\n# it seems that the coefficients do not represent the FC as would expected...\nresult$coefficients['Lysozyme', ]",
"[1] 61798.2\n"
]
],
[
[
"We can trace it back to:",
"_____no_output_____"
]
],
[
[
"%%R\nfit = limma_fit(\n data=raw_protein_matrix, conditions_vector=by_condition,\n a='Tuberculosis', b='Healthy control'\n)",
"_____no_output_____"
],
[
"%%R\nfit$coefficients['Lysozyme', ]",
"[1] 61798.2\n"
]
],
[
[
"It changes when using using only the data from TB and HC, though it continues to produce large values:",
"_____no_output_____"
]
],
[
[
"%%R\nfit = limma_fit(\n data=raw_protein_matrix, conditions_vector=by_condition,\n a='Tuberculosis', b='Healthy control', use_all=F\n)",
"_____no_output_____"
],
[
"%%R\nfit$coefficients['Lysozyme', ]",
"Intercept Group \n 59749.21 30899.10 \n"
]
],
[
[
"Getting back to the previous version, we can see that the meansare correctly calculated:",
"_____no_output_____"
]
],
[
[
"%%R\ndesign <- design_from_conditions(by_condition)\nfit <- calculate_means(raw_protein_matrix, design)",
"_____no_output_____"
],
[
"%%R\nfit$coefficients['Lysozyme', ]",
" (Intercept) Healthy.control Tuberculosis Viral \n 84617.54 -55767.43 6030.77 -17925.30 \n"
],
[
"tb_lysozyme, hc_lysozyme",
"_____no_output_____"
],
[
"%%R\ncontrast_specification <- paste(\n space_to_dot('Tuberculosis'),\n space_to_dot('Healthy control'),\n sep='-'\n)\ncontrast.matrix <- limma::makeContrasts(contrasts=contrast_specification, levels=design)\ncontrast.matrix",
" Contrasts\nLevels Tuberculosis-Healthy.control\n Intercept 0\n Healthy.control -1\n Tuberculosis 1\n Viral 0\n"
]
],
[
[
"There is only one step more:\n\n> fit <- limma::contrasts.fit(fit, contrast.matrix)\n\nso the problem must be here",
"_____no_output_____"
]
],
[
[
"%%R\nfit_contrasted <- limma::contrasts.fit(fit, contrast.matrix)\nfit_contrasted$coefficients['Lysozyme', ]",
"[1] 61798.2\n"
]
],
[
[
"Note the result we got: 61798.20 is:",
"_____no_output_____"
]
],
[
[
"tb_lysozyme - hc_lysozyme",
"_____no_output_____"
],
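[
"# Quick illustration: had limma been given log10-transformed intensities, the same subtraction\n# of coefficients would have yielded the log fold change, since log10(a) - log10(b) == log10(a / b).\nprint(log10(tb_lysozyme) - log10(hc_lysozyme))\nprint(log10(tb_lysozyme / hc_lysozyme))",
"_____no_output_____"
],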
[
"%%R\nfinal_fit = limma::eBayes(fit_contrasted, trend=T, robust=T)\nfinal_fit$coefficients['Lysozyme', ]",
"[1] 61798.2\n"
]
],
[
[
"This shows that limma does not produce the fold change at all.",
"_____no_output_____"
],
[
"This is because it assumes that the data are log-transformed upfront. **If we gave it log-transformed data, the difference of logs would be equivalent to division.**",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
d0f135788bd4f355fab5d8d2561b0f3615700ad6
| 41,623 |
ipynb
|
Jupyter Notebook
|
tutorial/0.0-Start-Here.ipynb
|
xingjeffrey/avgn_paper
|
412e95dabc7b7b13a434b85cc54a21c06efe4e2b
|
[
"MIT"
] | null | null | null |
tutorial/0.0-Start-Here.ipynb
|
xingjeffrey/avgn_paper
|
412e95dabc7b7b13a434b85cc54a21c06efe4e2b
|
[
"MIT"
] | null | null | null |
tutorial/0.0-Start-Here.ipynb
|
xingjeffrey/avgn_paper
|
412e95dabc7b7b13a434b85cc54a21c06efe4e2b
|
[
"MIT"
] | null | null | null | 30.271273 | 581 | 0.487038 |
[
[
[
"### AVGN Tutorial\nThis tutorial walks you through getting started with AVGN on a sample dataset, so you can figure out how to use it on your own data. \n\nIf you're not too familiar with Python, make sure you've first familiarized yourself with Jupyter notebooks and installing pyhton packages. Then come back and try the tutorial. \n\nThere may be some packages here that you need that aren't installed by default. If you find one of these, just try `pip install`-ing it locally. If you're still having trouble add an issue on the GitHub repository and I'll help you figure it out.",
"_____no_output_____"
],
[
"### Installing AVGN on your computer",
"_____no_output_____"
],
[
"First, download the repository locally and install it:\n1. Navigate to the folder in your local environment where you want to install the repository. \n2. Type `git clone https://github.com/timsainb/avgn_paper.git`\n3. Open the `avgn_paper` folder\n4. Install the package by typing `python setup.py develop`",
"_____no_output_____"
],
[
"Now in python you should be able to `import avgn`",
"_____no_output_____"
],
[
"### Downloading a sample dataset\nIn this example, we'll download [a dataset of .WAV files of acoustically isolated Bengalese finch song](https://figshare.com/articles/BirdsongRecognition/3470165?file=5463221). Each .WAV is accompanied by a set of hand annotations, giving us the boundaries for each syllable.",
"_____no_output_____"
]
],
[
[
"from avgn.downloading.download import download_tqdm\nfrom avgn.utils.paths import DATA_DIR\nfrom avgn.utils.general import unzip_file\nfrom tqdm.autonotebook import tqdm",
"_____no_output_____"
],
[
"# where the files are located online (url, filename)\ndata_urls = [\n ('https://ndownloader.figshare.com/articles/3470165/versions/1', 'all_files.zip'),\n]\n# where to save the files\noutput_loc = DATA_DIR/\"raw/koumura/\"",
"_____no_output_____"
],
[
"# download the files locally\nfor url, filename in data_urls:\n download_tqdm(url, output_location=output_loc/filename)",
"/mnt/cube/tsainbur/Projects/github_repos/avgn_paper/avgn/downloading/download.py:21: UserWarning: File /mnt/cube/tsainbur/Projects/github_repos/avgn_paper/data/raw/koumura/all_files.zip already exists\n warnings.warn(\"File {} already exists\".format(output_location))\n"
],
[
"# list the downloaded files\nzip_files = list((output_loc/\"zip_contents\").glob('*.zip'))\nzip_files[:2]",
"_____no_output_____"
],
[
"# unzip the files\nfor zf in tqdm(zip_files):\n unzip_file(zf, output_loc/\"zip_contents\")",
"_____no_output_____"
]
],
[
[
"### Getting the data into a usable format\nNow that the data is saved, we want to get the annotations into the same format as all of the other datasets. \n\nThe format we use is JSON, which just holds a dictionary of information about the dataset. \n\n\nFor each .WAV file, we will create a JSON that looks something like this:\n\n```\n{\n \"length_s\": 15,\n \"samplerate_hz\": 30000,\n \"wav_location\": \"/location/of/my/dataset/myfile.wav\",\n \"indvs\": {\n \"Bird1\": {\n \"species\": \"Bengalese finch\",\n \"units\": {\n \"syllables\": {\n \"start_times\": [1.5, 2.5, 6],\n \"end_times\": [2.3, 4.5, 8],\n \"labels\": [\"a\", \"b\", \"c\"],\n },\n }\n },\n}\n```\n\nTo get data into this format, you're generally going to have two write a custom parser to convert your data from your format into AVGN format. We're going to create a custom parser here for this dataset, as an example. You could also create these JSONs by hand. \n\n**Note:** If your dataset is more annotated than that, take a look at the readme.md in the github repository for more examples of JSONs. If your dataset is not already segmented for syllables, don't add \"units\", and you can add them after automatic segmentation.",
"_____no_output_____"
]
],
[
[
"from datetime import datetime\nimport avgn.utils\nimport numpy as np",
"_____no_output_____"
],
[
"RAW_DATASET_LOC = output_loc/\"zip_contents\"\nRAW_DATASET_LOC",
"_____no_output_____"
],
[
"# first we create a name for our dataset\nDATASET_ID = 'koumura_bengalese_finch'\n\n# create a unique datetime identifier for the files output by this notebook\nDT_ID = datetime.now().strftime(\"%Y-%m-%d_%H-%M-%S\")",
"_____no_output_____"
],
[
"# grab a list of all the raw waveforms\nwav_list = list(RAW_DATASET_LOC.glob('Bird*/Wave/*.wav'))\nlen(wav_list), np.sort(wav_list)[-2:]",
"_____no_output_____"
],
[
"# grab a list of all of the raw annotation files for each bird\nannotation_files = list(RAW_DATASET_LOC.glob('Bird*/Annotation.xml'))\nlen(annotation_files), np.sort(annotation_files)[-2:]",
"_____no_output_____"
]
],
[
[
"#### Now, for each wav file, we want to generate a JSON, using information from the XML.\n\nLets take a look inside an XML first, to see what's in there. It might be useful to take a look at this XML file in your web browser to get a better idea of what's in there as well.",
"_____no_output_____"
]
],
[
[
"import xml.etree.ElementTree\nimport xml.dom.minidom",
"_____no_output_____"
],
[
"# print a sample of the XML\nparssed = xml.dom.minidom.parse(annotation_files[0].as_posix()) \npretty_xml_as_string = dom.toprettyxml()\nprint(pretty_xml_as_string[:400] + '...')",
"<?xml version=\"1.0\" ?>\n<Sequences>\n\t<NumSequence>1350</NumSequence>\n\t<Sequence>\n\t\t<WaveFileName>0.wav</WaveFileName>\n\t\t<Position>32000</Position>\n\t\t<Length>64832</Length>\n\t\t<NumNote>15</NumNote>\n\t\t<Note>\n\t\t\t<Position>5056</Position>\n\t\t\t<Length>1440</Length>\n\t\t\t<Label>0</Label>\n\t\t</Note>\n\t\t<Note>\n\t\t\t<Position>8512</Position>\n\t\t\t<Length>2016</Length>\n\t\t\t<Label>0</Label>\n\t\t</Note>\n\t\t<Note>\n\t\t\t<Positi...\n"
]
],
[
[
"### Parse XML",
"_____no_output_____"
],
[
"Before we create a JSON, we can create a pandas dataframe with all the relevant info from the XML. This is all very specific to this dataset, but hopefully it gives you an idea of what you need to do for your dataset. ",
"_____no_output_____"
]
],
[
[
"import pandas as pd",
"_____no_output_____"
],
[
"song_df = pd.DataFrame(\n columns=[\n \"bird\",\n \"WaveFileName\",\n \"Position\",\n \"Length\",\n \"NumNote\",\n \"NotePositions\",\n \"NoteLengths\",\n \"NoteLabels\",\n ]\n )\nsong_df",
"_____no_output_____"
],
[
"# loop through XML annotation files\nfor bird_loc in tqdm(annotation_files):\n # grab the\n bird_xml = xml.etree.ElementTree.parse(bird_loc).getroot()\n bird = bird_loc.parent.stem\n # loop through each \"sequence\" in the datset (corresponding to a bout)\n for element in tqdm(bird_xml.getchildren(), leave=False):\n if element.tag == \"Sequence\":\n notePositions = []\n noteLengths = []\n noteLabels = []\n # get the metadata for that sequence \n for seq_element in element.getchildren():\n if seq_element.tag == \"Position\":\n position = seq_element.text\n elif seq_element.tag == \"Length\":\n length = seq_element.text\n elif seq_element.tag == \"WaveFileName\":\n WaveFileName = seq_element.text\n elif seq_element.tag == \"NumNote\":\n NumNote = seq_element.text\n # get the metadata for the note\n elif seq_element.tag == \"Note\":\n for note_element in seq_element.getchildren():\n if note_element.tag == \"Label\":\n noteLabels.append(note_element.text)\n elif note_element.tag == \"Position\":\n notePositions.append(note_element.text)\n elif note_element.tag == \"Length\":\n noteLengths.append(note_element.text)\n # add to the pandas dataframe\n song_df.loc[len(song_df)] = [\n bird,\n WaveFileName,\n position,\n length,\n NumNote,\n notePositions,\n noteLengths,\n noteLabels,\n ]",
"_____no_output_____"
],
[
"song_df[:3]",
"_____no_output_____"
]
],
[
[
"### Now we can generate a JSON from that pandas dataframe",
"_____no_output_____"
]
],
[
[
"from avgn.utils.audio import get_samplerate\nimport librosa\nfrom avgn.utils.json import NoIndent, NoIndentEncoder",
"_____no_output_____"
],
[
"# for each bird\nfor bird in tqdm(np.unique(song_df.bird)):\n # grab that bird's annotations\n bird_df = song_df[song_df.bird == bird]\n \n # for each wav file produced by that bird\n for wfn in tqdm(bird_df.WaveFileName.unique(), leave=False):\n \n wfn_df = bird_df[bird_df.WaveFileName == wfn]\n \n # get the location of the wav\n wav_loc = RAW_DATASET_LOC / bird / \"Wave\" / wfn\n \n # get the wav samplerate and duration\n sr = get_samplerate(wav_loc.as_posix())\n wav_duration = librosa.get_duration(filename=wav_loc)\n \n # make json dictionary\n json_dict = {}\n # add species\n json_dict[\"species\"] = \"Lonchura striata domestica\"\n json_dict[\"common_name\"] = \"Bengalese finch\"\n json_dict[\"wav_loc\"] = wav_loc.as_posix()\n # rate and length\n json_dict[\"samplerate_hz\"] = sr\n json_dict[\"length_s\"] = wav_duration\n \n # make a dataframe of wav info\n seq_df = pd.DataFrame(\n (\n [\n [\n list(np.repeat(sequence_num, len(row.NotePositions))),\n list(row.NoteLabels),\n np.array(\n (np.array(row.NotePositions).astype(\"int\") + int(row.Position))\n / sr\n ).astype(\"float64\"),\n np.array(\n (\n np.array(row.NotePositions).astype(\"int\")\n + np.array(row.NoteLengths).astype(\"int\")\n + int(row.Position)\n )\n / sr\n ).astype(\"float64\"),\n ]\n for sequence_num, (idx, row) in enumerate(wfn_df.iterrows())\n ]\n ),\n columns=[\"sequence_num\", \"labels\", \"start_times\", \"end_times\"],\n )\n \n # add syllable information\n json_dict[\"indvs\"] = {\n bird: {\n \"notes\": {\n \"start_times\": NoIndent(\n list(np.concatenate(seq_df.start_times.values))\n ),\n \"end_times\": NoIndent(list(np.concatenate(seq_df.end_times.values))),\n \"labels\": NoIndent(list(np.concatenate(seq_df.labels.values))),\n \"sequence_num\": NoIndent(\n [int(i) for i in np.concatenate(seq_df.sequence_num.values)]\n ),\n }\n }\n }\n \n \n # dump dict into json format\n json_txt = json.dumps(json_dict, cls=NoIndentEncoder, indent=2)\n\n wav_stem = bird + \"_\" + wfn.split(\".\")[0]\n json_out = (\n DATA_DIR / \"processed\" / DATASET_ID / DT_ID / \"JSON\" / (wav_stem + \".JSON\")\n )\n\n # save json\n avgn.utils.paths.ensure_dir(json_out.as_posix())\n print(json_txt, file=open(json_out.as_posix(), \"w\"))\n ",
"_____no_output_____"
],
[
"# print an example JSON corresponding to the dataset we just made\nprint(json_txt)",
"{\n \"species\": \"Lonchura striata domestica\",\n \"common_name\": \"Bengalese finch\",\n \"wav_loc\": \"/mnt/cube/tsainbur/Projects/github_repos/avgn_paper/data/raw/koumura/zip_contents/Bird9/Wave/216.wav\",\n \"samplerate_hz\": 32000,\n \"length_s\": 11.124,\n \"indvs\": {\n \"Bird9\": {\n \"notes\": {\n \"start_times\": [1.158, 1.302, 1.451, 1.605, 1.761, 1.92, 2.094, 2.243, 2.404, 2.563, 2.713, 2.846, 2.971, 3.082, 3.157, 3.262, 3.372, 3.487, 3.6, 3.715, 3.833, 3.949, 4.068, 4.142, 4.249, 4.364, 4.474, 4.587, 4.698, 4.808, 4.916, 5.031, 5.146, 5.275, 5.4, 5.547, 5.657, 5.735, 5.844, 5.959, 6.078, 6.195, 6.313, 6.43, 6.5105, 6.6185, 6.7325, 6.8495, 6.9635, 7.0785, 7.1925, 7.3035, 7.4345, 7.5855, 7.7345, 7.8515, 7.9285, 8.0385, 8.1555, 8.2775, 8.3945, 8.5135, 8.6335, 8.7545, 8.8755, 8.9955, 9.0765, 9.1865, 9.3025, 9.4185, 9.5335, 9.6505, 9.7665, 9.8785],\n \"end_times\": [1.204, 1.372, 1.509, 1.673, 1.837, 1.995, 2.173, 2.327, 2.491, 2.647, 2.791, 2.916, 3.063, 3.138, 3.242, 3.35, 3.461, 3.578, 3.691, 3.809, 3.924, 4.041, 4.125, 4.229, 4.339, 4.452, 4.563, 4.674, 4.785, 4.893, 5.005, 5.117, 5.236, 5.347, 5.474, 5.643, 5.714, 5.821, 5.933, 6.05, 6.168, 6.281, 6.399, 6.491, 6.5965, 6.7105, 6.8245, 6.9395, 7.0535, 7.1675, 7.2755, 7.3905, 7.5055, 7.6585, 7.8355, 7.9065, 8.0165, 8.1305, 8.2495, 8.3655, 8.4855, 8.6045, 8.7235, 8.8475, 8.9615, 9.0545, 9.1625, 9.2775, 9.3935, 9.5075, 9.6235, 9.7365, 9.8465, 9.9665],\n \"labels\": [\"0\", \"0\", \"0\", \"0\", \"0\", \"0\", \"0\", \"0\", \"0\", \"0\", \"1\", \"1\", \"2\", \"3\", \"4\", \"4\", \"4\", \"5\", \"5\", \"5\", \"5\", \"5\", \"3\", \"4\", \"4\", \"4\", \"4\", \"4\", \"4\", \"4\", \"4\", \"4\", \"5\", \"1\", \"1\", \"2\", \"3\", \"4\", \"4\", \"4\", \"5\", \"5\", \"5\", \"3\", \"4\", \"4\", \"4\", \"4\", \"4\", \"4\", \"4\", \"5\", \"1\", \"1\", \"2\", \"3\", \"4\", \"4\", \"4\", \"5\", \"5\", \"5\", \"5\", \"5\", \"5\", \"3\", \"4\", \"4\", \"4\", \"4\", \"4\", \"4\", \"4\", \"5\"],\n \"sequence_num\": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4]\n }\n }\n }\n}\n"
]
],
[
[
"### Now this dataset is in the right format for further analysis.\nIn the next notebook, we'll segment out the notes/syllables and compute spectrograms that can be projected.",
"_____no_output_____"
]
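As a quick check that the JSON files just written are usable downstream, here is a minimal read-back sketch. It is not part of the original pipeline; it simply reuses the `json_out` and `bird` variables left over from the last loop iteration above and slices the first annotated note out of its wav file with librosa.

```python
# Minimal sketch: load one generated JSON and cut out the audio of its first note.
import json
import librosa

annot = json.load(open(json_out.as_posix()))          # last JSON written above
notes = annot["indvs"][bird]["notes"]

# load the wav at its native samplerate and slice the first note by its start/end times
audio, sr = librosa.load(annot["wav_loc"], sr=None)
start, end = notes["start_times"][0], notes["end_times"][0]
note_audio = audio[int(start * sr): int(end * sr)]
print(f"note label {notes['labels'][0]}, duration {len(note_audio) / sr:.3f} s")
```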
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
]
] |
d0f140cbdc26507dafee6927bdda5dc71f57d65e
| 27,166 |
ipynb
|
Jupyter Notebook
|
courses/machine_learning/deepdive2/launching_into_ml/solutions/first_model.ipynb
|
jonesevan007/training-data-analyst
|
774446719316599cf221bdc5a67b00ec4c0b3ad0
|
[
"Apache-2.0"
] | 2 |
2019-11-10T04:09:25.000Z
|
2019-11-16T14:55:13.000Z
|
courses/machine_learning/deepdive2/launching_into_ml/solutions/first_model.ipynb
|
jonesevan007/training-data-analyst
|
774446719316599cf221bdc5a67b00ec4c0b3ad0
|
[
"Apache-2.0"
] | 10 |
2019-11-20T07:24:52.000Z
|
2022-03-12T00:06:02.000Z
|
courses/machine_learning/deepdive2/launching_into_ml/solutions/first_model.ipynb
|
jonesevan007/training-data-analyst
|
774446719316599cf221bdc5a67b00ec4c0b3ad0
|
[
"Apache-2.0"
] | 4 |
2020-05-15T06:23:05.000Z
|
2021-12-20T06:00:15.000Z
| 27.919836 | 313 | 0.477325 |
[
[
[
"# First BigQuery ML models for Taxifare Prediction\n\nIn this notebook, we will use BigQuery ML to build our first models for taxifare prediction.BigQuery ML provides a fast way to build ML models on large structured and semi-structured datasets.\n\n## Learning Objectives\n1. Choose the correct BigQuery ML model type and specify options\n2. Evaluate the performance of your ML model\n3. Improve model performance through data quality cleanup\n4. Create a Deep Neural Network (DNN) using SQL\n\nEach learning objective will correspond to a __#TODO__ in the [student lab notebook](../labs/first_model.ipynb) -- try to complete that notebook first before reviewing this solution notebook. \n\n\nWe'll start by creating a dataset to hold all the models we create in BigQuery",
"_____no_output_____"
],
[
"### Import libraries",
"_____no_output_____"
]
],
[
[
"import os",
"_____no_output_____"
]
],
[
[
"### Set environment variables",
"_____no_output_____"
]
],
[
[
"%%bash\nexport PROJECT=$(gcloud config list project --format \"value(core.project)\")\necho \"Your current GCP Project Name is: \"$PROJECT",
"_____no_output_____"
],
[
"PROJECT = \"your-gcp-project-here\" # REPLACE WITH YOUR PROJECT NAME\nREGION = \"us-central1\" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1\n\n# Do not change these\nos.environ[\"BUCKET\"] = PROJECT # DEFAULT BUCKET WILL BE PROJECT ID\nos.environ[\"REGION\"] = REGION\n\nif PROJECT == \"your-gcp-project-here\":\n print(\"Don't forget to update your PROJECT name! Currently:\", PROJECT)",
"_____no_output_____"
]
],
[
[
"## Create a BigQuery Dataset and Google Cloud Storage Bucket\n\nA BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called __serverlessml__ if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too.",
"_____no_output_____"
]
],
[
[
"%%bash\n\n## Create a BigQuery dataset for serverlessml if it doesn't exist\ndatasetexists=$(bq ls -d | grep -w serverlessml)\n\nif [ -n \"$datasetexists\" ]; then\n echo -e \"BigQuery dataset already exists, let's not recreate it.\"\nelse\n echo \"Creating BigQuery dataset titled: serverlessml\"\n\n bq --location=US mk --dataset \\\n --description 'Taxi Fare' \\\n $PROJECT:serverlessml\n echo \"\\nHere are your current datasets:\"\n bq ls\nfi \n\n## Create GCS bucket if it doesn't exist already...\nexists=$(gsutil ls -d | grep -w gs://${PROJECT}/)\n\nif [ -n \"$exists\" ]; then\n echo -e \"Bucket exists, let's not recreate it.\"\nelse\n echo \"Creating a new GCS bucket.\"\n gsutil mb -l ${REGION} gs://${PROJECT}\n echo \"\\nHere are your current buckets:\"\n gsutil ls\nfi",
"BigQuery dataset already exists, let's not recreate it.\nBucket exists, let's not recreate it.\n"
]
],
[
[
"## Model 1: Raw data\n\nLet's build a model using just the raw data. It's not going to be very good, but sometimes it is good to actually experience this.\n\nThe model will take a minute or so to train. When it comes to ML, this is blazing fast.",
"_____no_output_____"
]
],
[
[
"%%bigquery\nCREATE OR REPLACE MODEL\n serverlessml.model1_rawdata\n\nOPTIONS(input_label_cols=['fare_amount'],\n model_type='linear_reg') AS\n\nSELECT\n (tolls_amount + fare_amount) AS fare_amount,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat,\n passenger_count * 1.0 AS passengers\nFROM\n `nyc-tlc.yellow.trips`\nWHERE\n MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))), 100000) = 1",
"_____no_output_____"
]
],
[
[
"Once the training is done, visit the [BigQuery Cloud Console](https://console.cloud.google.com/bigquery) and look at the model that has been trained. Then, come back to this notebook.",
"_____no_output_____"
],
[
"Note that BigQuery automatically split the data we gave it, and trained on only a part of the data and used the rest for evaluation. We can look at eval statistics on that held-out data:",
"_____no_output_____"
]
],
[
[
"%%bigquery\nSELECT * FROM ML.EVALUATE(MODEL serverlessml.model1_rawdata)",
"_____no_output_____"
]
],
[
[
"Let's report just the error we care about, the Root Mean Squared Error (RMSE)",
"_____no_output_____"
]
],
[
[
"%%bigquery\nSELECT\n SQRT(mean_squared_error) AS rmse\nFROM\n ML.EVALUATE(MODEL serverlessml.model1_rawdata)",
"_____no_output_____"
]
],
[
[
"We told you it was not going to be good! Recall that our heuristic got 8.13, and our target is $6.",
"_____no_output_____"
],
[
"Note that the error is going to depend on the dataset that we evaluate it on.\nWe can also evaluate the model on our own held-out benchmark/test dataset, but we shouldn't make a habit of this (we want to keep our benchmark dataset as the final evaluation, not make decisions using it all along the way. If we do that, our test dataset won't be truly independent).",
"_____no_output_____"
]
],
[
[
"%%bigquery\nSELECT\n SQRT(mean_squared_error) AS rmse\nFROM\n ML.EVALUATE(MODEL serverlessml.model1_rawdata, (\n SELECT\n (tolls_amount + fare_amount) AS fare_amount,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat,\n passenger_count * 1.0 AS passengers\n FROM\n `nyc-tlc.yellow.trips`\n WHERE\n MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))), 100000) = 2\n AND trip_distance > 0\n AND fare_amount >= 2.5\n AND pickup_longitude > -78\n AND pickup_longitude < -70\n AND dropoff_longitude > -78\n AND dropoff_longitude < -70\n AND pickup_latitude > 37\n AND pickup_latitude < 45\n AND dropoff_latitude > 37\n AND dropoff_latitude < 45\n AND passenger_count > 0\n ))",
"_____no_output_____"
]
],
[
[
"## Model 2: Apply data cleanup\n\nRecall that we did some data cleanup in the previous lab. Let's do those before training.\n\nThis is a dataset that we will need quite frequently in this notebook, so let's extract it first.",
"_____no_output_____"
]
],
[
[
"%%bigquery\nCREATE OR REPLACE TABLE\n serverlessml.cleaned_training_data AS\n\nSELECT\n (tolls_amount + fare_amount) AS fare_amount,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat,\n passenger_count*1.0 AS passengers\nFROM\n `nyc-tlc.yellow.trips`\nWHERE\n MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))), 100000) = 1\n AND trip_distance > 0\n AND fare_amount >= 2.5\n AND pickup_longitude > -78\n AND pickup_longitude < -70\n AND dropoff_longitude > -78\n AND dropoff_longitude < -70\n AND pickup_latitude > 37\n AND pickup_latitude < 45\n AND dropoff_latitude > 37\n AND dropoff_latitude < 45\n AND passenger_count > 0",
"_____no_output_____"
],
[
"%%bigquery\n-- LIMIT 0 is a free query; this allows us to check that the table exists.\nSELECT * FROM serverlessml.cleaned_training_data\nLIMIT 0",
"_____no_output_____"
],
[
"%%bigquery\nCREATE OR REPLACE MODEL\n serverlessml.model2_cleanup\n\nOPTIONS(input_label_cols=['fare_amount'],\n model_type='linear_reg') AS\n\nSELECT\n *\nFROM\n serverlessml.cleaned_training_data",
"_____no_output_____"
],
[
"%%bigquery\nSELECT\n SQRT(mean_squared_error) AS rmse\nFROM\n ML.EVALUATE(MODEL serverlessml.model2_cleanup)",
"_____no_output_____"
]
],
[
[
"## Model 3: More sophisticated models\n\nWhat if we try a more sophisticated model? Let's try Deep Neural Networks (DNNs) in BigQuery:",
"_____no_output_____"
],
[
"### DNN\nTo create a DNN, simply specify __dnn_regressor__ for the model_type and add your hidden layers.",
"_____no_output_____"
]
],
[
[
"%%bigquery\n-- This model type is in alpha, so it may not work for you yet.\n-- This training takes on the order of 15 minutes.\nCREATE OR REPLACE MODEL\n serverlessml.model3b_dnn\n\nOPTIONS(input_label_cols=['fare_amount'],\n model_type='dnn_regressor', hidden_units=[32, 8]) AS\n\nSELECT\n *\nFROM\n serverlessml.cleaned_training_data",
"_____no_output_____"
],
[
"%%bigquery\nSELECT\n SQRT(mean_squared_error) AS rmse\nFROM\n ML.EVALUATE(MODEL serverlessml.model3b_dnn)",
"_____no_output_____"
]
],
[
[
"Nice!",
"_____no_output_____"
],
[
"## Evaluate DNN on benchmark dataset\n\nLet's use the same validation dataset to evaluate -- remember that evaluation metrics depend on the dataset. You can not compare two models unless you have run them on the same withheld data.",
"_____no_output_____"
]
],
[
[
"%%bigquery\nSELECT\n SQRT(mean_squared_error) AS rmse \nFROM\n ML.EVALUATE(MODEL serverlessml.model3b_dnn, (\n SELECT\n (tolls_amount + fare_amount) AS fare_amount,\n pickup_datetime,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat,\n passenger_count * 1.0 AS passengers,\n 'unused' AS key\n FROM\n `nyc-tlc.yellow.trips`\n WHERE\n MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))), 10000) = 2\n AND trip_distance > 0\n AND fare_amount >= 2.5\n AND pickup_longitude > -78\n AND pickup_longitude < -70\n AND dropoff_longitude > -78\n AND dropoff_longitude < -70\n AND pickup_latitude > 37\n AND pickup_latitude < 45\n AND dropoff_latitude > 37\n AND dropoff_latitude < 45\n AND passenger_count > 0\n ))",
"_____no_output_____"
]
],
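The notebook stops at evaluation, but a natural next step is to call `ML.PREDICT` on the trained DNN. The sketch below is an illustration only: it uses the google-cloud-bigquery Python client rather than the `%%bigquery` magic used above, and the feature values are made-up Manhattan coordinates.

```python
# Illustrative only: predict a fare with the trained DNN via ML.PREDICT.
from google.cloud import bigquery

client = bigquery.Client()
sql = """
SELECT predicted_fare_amount
FROM ML.PREDICT(MODEL serverlessml.model3b_dnn, (
  SELECT
    -73.98 AS pickuplon,  40.75 AS pickuplat,
    -73.87 AS dropofflon, 40.77 AS dropofflat,
    2.0 AS passengers))
"""
print(client.query(sql).to_dataframe())
```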
[
[
"Wow! Later in this sequence of notebooks, we will get to below $4, but this is quite good, for very little work.",
"_____no_output_____"
],
[
"In this notebook, we showed you how to use BigQuery ML to quickly build ML models. We will come back to BigQuery ML when we want to experiment with different types of feature engineering. The speed of BigQuery ML is very attractive for development.",
"_____no_output_____"
],
[
"Copyright 2019 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
d0f143edd5c6f3a925c0cd89beea26511940dce2
| 2,457 |
ipynb
|
Jupyter Notebook
|
playbook/tactics/defense-evasion/T1578.002.ipynb
|
haresudhan/The-AtomicPlaybook
|
447b1d6bca7c3750c5a58112634f6bac31aff436
|
[
"MIT"
] | 8 |
2021-05-25T15:25:31.000Z
|
2021-11-08T07:14:45.000Z
|
playbook/tactics/defense-evasion/T1578.002.ipynb
|
haresudhan/The-AtomicPlaybook
|
447b1d6bca7c3750c5a58112634f6bac31aff436
|
[
"MIT"
] | 1 |
2021-08-23T17:38:02.000Z
|
2021-10-12T06:58:19.000Z
|
playbook/tactics/defense-evasion/T1578.002.ipynb
|
haresudhan/The-AtomicPlaybook
|
447b1d6bca7c3750c5a58112634f6bac31aff436
|
[
"MIT"
] | 2 |
2021-05-29T20:24:24.000Z
|
2021-08-05T23:44:12.000Z
| 54.6 | 858 | 0.715507 |
[
[
[
"# T1578.002 - Create Cloud Instance\nAn adversary may create a new instance or virtual machine (VM) within the compute service of a cloud account to evade defenses. Creating a new instance may allow an adversary to bypass firewall rules and permissions that exist on instances currently residing within an account. An adversary may [Create Snapshot](https://attack.mitre.org/techniques/T1578/001) of one or more volumes in an account, create a new instance, mount the snapshots, and then apply a less restrictive security policy to collect [Data from Local System](https://attack.mitre.org/techniques/T1005) or for [Remote Data Staging](https://attack.mitre.org/techniques/T1074/002).(Citation: Mandiant M-Trends 2020)\n\nCreating a new instance may also allow an adversary to carry out malicious activity within an environment without affecting the execution of current running instances.",
"_____no_output_____"
],
[
"## Atomic Tests:\nCurrently, no tests are available for this technique.",
"_____no_output_____"
],
[
"## Detection\nThe creation of a new instance or VM is a common part of operations within many cloud environments. Events should then not be viewed in isolation, but as part of a chain of behavior that could lead to other activities. For example, the creation of an instance by a new user account or the unexpected creation of one or more snapshots followed by the creation of an instance may indicate suspicious activity.\n\nIn AWS, CloudTrail logs capture the creation of an instance in the <code>RunInstances</code> event, and in Azure the creation of a VM may be captured in Azure activity logs.(Citation: AWS CloudTrail Search)(Citation: Azure Activity Logs) Google's Admin Activity audit logs within their Cloud Audit logs can be used to detect the usage of <code>gcloud compute instances create</code> to create a VM.(Citation: Cloud Audit Logs)",
"_____no_output_____"
]
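As an illustration of the detection guidance above (not part of the Atomic Playbook itself), the following hedged sketch uses boto3 to pull recent `RunInstances` events from AWS CloudTrail; the region and result limit are arbitrary choices.

```python
# Sketch: list recent RunInstances events from CloudTrail with boto3.
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")
response = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "RunInstances"}],
    MaxResults=50,
)
for event in response["Events"]:
    # review who created instances and when; correlate with snapshot activity
    print(event["EventTime"], event.get("Username"), event["EventName"])
```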
]
] |
[
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown"
]
] |
d0f146328ffb46e41b08c06967dbb03a376bf396
| 105,740 |
ipynb
|
Jupyter Notebook
|
docs/notebooks/plotting.ipynb
|
mramospe/hepspt
|
11f74978a582ebc20e0a7765dafc78f0d1f1d5d5
|
[
"MIT"
] | null | null | null |
docs/notebooks/plotting.ipynb
|
mramospe/hepspt
|
11f74978a582ebc20e0a7765dafc78f0d1f1d5d5
|
[
"MIT"
] | null | null | null |
docs/notebooks/plotting.ipynb
|
mramospe/hepspt
|
11f74978a582ebc20e0a7765dafc78f0d1f1d5d5
|
[
"MIT"
] | 1 |
2021-11-03T03:36:15.000Z
|
2021-11-03T03:36:15.000Z
| 328.385093 | 35,424 | 0.928627 |
[
[
[
"# Plotting\nHere you can explore the different possibilities that the hep_spt package offers for plotting.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport hep_spt\nhep_spt.set_style()\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\nfrom scipy.stats import norm",
"_____no_output_____"
]
],
[
[
"## Plotting a (non)weighted sample\nUse the function \"errorbar_hist\" to plot the same sample without and with weights. In the non-weighted case, we will ask for frequentist poissonian errors, so we will get asymmetric error bars for low values of the number of entries.",
"_____no_output_____"
]
],
[
[
"# Create a random sample\nsize = 200\nsmp = np.random.normal(0, 3, size)\nwgts = np.random.uniform(0, 1, size)\n\nfig, (ax0, ax1) = plt.subplots(1, 2, figsize=(10, 5))\n\n# Make the non-weighted plot\nvalues, edges, ex, ey = hep_spt.errorbar_hist(smp, bins=10, range=(-7, 7), uncert='freq')\ncenters = (edges[1:] + edges[:-1])/2.\n\nax0.errorbar(centers, values, ey, ex, ls='none')\nax0.set_title('Non-weighted sample')\n\n# Make the weighted plot\nvalues, edges, ex, ey = hep_spt.errorbar_hist(smp, bins=10, range=(-7, 7), weights=wgts)\ncenters = (edges[1:] + edges[:-1])/2.\n\nax1.errorbar(centers, values, ey, ex, ls='none')\nax1.set_title('Weighted sample');",
"_____no_output_____"
]
],
[
[
"## Calculating the pull of a distribution\nSometimes we want to calculate the distance in terms of standard deviations from a curve to our measurements. This example creates a random sample of events following a normal distribution and overlies it with the original curve. The pull plot is shown below.",
"_____no_output_____"
]
],
[
[
"# Create the samples\nsize=5e3\n\nsample = norm.rvs(size=int(size))\n\nvalues, edges, ex, ey = hep_spt.errorbar_hist(sample, 40, range=(-4, 4), uncert='freq')\ncenters = (edges[1:] + edges[:-1])/2.\n\n# Extract the PDF values in each center, and make the pull\nref = norm.pdf(centers)\nref *= size/ref.sum()\n\npull, perr = hep_spt.pull(values, ey, ref)\n\n# Make the reference to plot (with more points than just the centers of the bins)\nrct, step = np.linspace(-4., 4., 1000, retstep=True)\npref = norm.pdf(rct)\npref = size*pref/pref.sum()*(edges[1] - edges[0])/step\n\nfig, (ax0, ax1) = plt.subplots(2, 1, sharex=True, gridspec_kw = {'height_ratios':[3, 1]}, figsize=(10, 8))\n\n# Draw the histogram and the reference\nax0.errorbar(centers, values, ey, ex, color='k', ls='none', label='data')\nax0.plot(rct, pref, color='blue', marker='', label='reference')\nax0.set_xlim(-4., 4.)\nax0.set_ylabel('Entries')\nax0.legend(prop={'size': 15})\n\n# Draw the pull and lines for -3, 0 and +3 standard deviations\nadd_pull_line = lambda v, c: ax1.plot([-4., 4.], [v, v], color=c, marker='')\n\nadd_pull_line(0, 'blue')\nadd_pull_line(-3, 'red')\nadd_pull_line(+3, 'red')\n\nax1.errorbar(centers, pull, perr, ex, color='k', ls='none')\nax1.set_ylim(-4, 4)\nax1.set_yticks([-3, 0, 3])\nax1.set_ylabel('Pull');",
"_____no_output_____"
]
],
[
[
"## Plotting efficiencies\nLet's suppose we build two histograms from the same sample, one of them after having applied some requirements. The first histogram will follow a gaussian distribution with center at 0 and standard deviation equal to 2, with 1000 entries. The second, with only 100 entries, will have the same center but the standard deviation will be 0.5. The efficiency plot would be calculated as follows:",
"_____no_output_____"
]
],
[
[
"# Create a random sample\nraw = np.random.normal(0, 2, 1000)\ncut = np.random.normal(0, 0.5, 100)\n\n# Create the histograms (we do not care about the errors for the moment). Note that the two\n# histograms have the same number of bins and range.\nh_raw, edges = np.histogram(raw, bins=10, range=(-2, 2))\nh_cut, _ = np.histogram(cut, bins=10, range=(-2, 2))\ncenters = (edges[1:] + edges[:-1])/2.\n\nex = (edges[1:] - edges[:-1])/2.\n\n# Calculate the efficiency and the errors\neff = h_cut.astype(float)/h_raw\ney = hep_spt.clopper_pearson_unc(h_cut, h_raw)\n\nplt.errorbar(centers, eff, ey, ex, ls='none');",
"_____no_output_____"
]
],
[
[
"## Displaying the correlation between variables on a sample\nThe hep_spt package also provides a way to easily plot the correlation among the variables on a given sample. Let's create a sample composed by 5 variables, two being independent and three correlated with them, and plot the results. Note that we must specify the minimum and maximum values for the histogram in order to correctly assign the colors, making them universal across our plots.",
"_____no_output_____"
]
],
[
[
"# Create a random sample\na = np.random.uniform(0, 1, 1000)\nb = np.random.normal(0, 1, 1000)\nc = a + np.random.uniform(0, 1, 1000)\nab = a*b\nabc = ab + c\nsmp = np.array([a, b, c, ab, abc])\n\n# Calculate the correlation\ncorr = np.corrcoef(smp)\n\n# Plot the results\nfig = plt.figure()\n\nhep_spt.corr_hist2d(corr, ['a', 'b', 'c', 'a$\\\\times$b', 'a$\\\\times$b + c'], vmin=-1, vmax=+1)",
"_____no_output_____"
]
],
[
[
"## Plotting a 2D profile\nWhen making 2D histograms, it is often useful to plot the profile in X or Y of the given distribution. This can be done as follows:",
"_____no_output_____"
]
],
[
[
"# Create a random sample\ns = 10000\nx = np.random.normal(0, 1, s)\ny = np.random.normal(0, 1, s)\n\n# Make the figure\nfig = plt.figure()\nax = fig.gca()\n\ndivider = make_axes_locatable(ax)\ncax = divider.append_axes('right', size='5%', pad=0.05)\n\nh, xe, ye, im = ax.hist2d(x, y, (40, 40), range=[(-2.5, +2.5), (-2.5, +2.5)])\n\n# Calculate the profile together with the standard deviation of the sample\nprof, _, std = hep_spt.profile(x, y, xe, std_type='sample')\n\neb = ax.errorbar(hep_spt.cfe(xe), prof, xerr=(xe[1] - xe[0])/2., yerr=std, color='teal', ls='none')\neb[-1][1].set_linestyle(':')\n\n# Calculate the profile together with the default standard deviation (that of the mean)\nprof, _, std = hep_spt.profile(x, y, xe)\n\nax.errorbar(hep_spt.cfe(xe), prof, xerr=(xe[1] - xe[0])/2., yerr=std, color='r', ls='none')\n\nfig.colorbar(im, cax=cax, orientation='vertical');",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0f14e6121b1a04fade47135e6f3de3fbdfcaf1e
| 2,594 |
ipynb
|
Jupyter Notebook
|
parallel/Index.ipynb
|
minrk/ipython-cse17
|
16a9059c7054a8bd4977a3cb8b09c100ea779069
|
[
"BSD-3-Clause"
] | 3 |
2017-03-02T07:11:37.000Z
|
2017-03-03T06:13:32.000Z
|
parallel/Index.ipynb
|
minrk/ipython-cse17
|
16a9059c7054a8bd4977a3cb8b09c100ea779069
|
[
"BSD-3-Clause"
] | null | null | null |
parallel/Index.ipynb
|
minrk/ipython-cse17
|
16a9059c7054a8bd4977a3cb8b09c100ea779069
|
[
"BSD-3-Clause"
] | null | null | null | 30.162791 | 283 | 0.605628 |
[
[
[
"# Interactive (parallel) Python",
"_____no_output_____"
],
[
"# Installation and dependencies\n\nYou will need ipyparallel >= 5.x, and pyzmq ≥ 13. To use the demo notebooks, you will also need tornado ≥ 4. I will also make use of numpy and matplotlib. If you have Canopy or Anaconda, you already have all of these.\n\nQuick one-line install for IPython and its dependencies:\n \n pip install ipyparallel\n \nOr get everything for the tutorial with conda:\n\n conda install anaconda mpi4py\n\nFor those who prefer pip or otherwise manual package installation, the following packages will be used:\n\nipython\nipyparallel\nnumpy\nmatplotlib\nnetworkx\nscikit-image\nrequests\nbeautifulsoup\nmpi4py\n\n\nOptional dependencies: I will use [NetworkX](http://networkx.lanl.gov/)\nfor one demo, and `scikit-image` for another, but they are not critical. Both packages are in in Anaconda.\n\nFor the image-related demos, all you need are some images on your computer. The notebooks will try to fetch images from Wikimedia Commons, but since the networks can be untrustworty, we have [bundled some images here](http://s3.amazonaws.com/ipython-parallel-data/images.zip).",
"_____no_output_____"
],
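A quick sanity check of the installation, assuming you have started a local cluster (for example with `ipcluster start -n 4`); this is a minimal sketch and not part of the tutorial notebooks themselves.

```python
# Connect to the running engines and run a trivial task on each of them.
import ipyparallel as ipp

rc = ipp.Client()                        # connects using the default cluster profile
print(rc.ids)                            # engine ids, e.g. [0, 1, 2, 3]
dview = rc[:]                            # DirectView on all engines
print(dview.apply_sync(lambda: 2 + 2))   # one result per engine
```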
[
"## Outline\n\n- [Motivating Example](examples/Parallel%20image%20processing.ipynb)\n- [Overview](Overview.ipynb)\n- [Tutorial](tutorial)\n - [Remote Execution](tutorial/Remote%20Execution.ipynb)\n - [Multiplexing](tutorial/Multiplexing.ipynb)\n - [Load-Balancing](tutorial/Load-Balancing.ipynb)\n - [Both!](tutorial/All%20Together.ipynb)\n - [Parallel Magics](tutorial/Parallel%20Magics.ipynb)\n- [Examples](examples)\n- [Exercises](exercises)\n",
"_____no_output_____"
]
]
] |
[
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown"
]
] |
d0f155cc5e002897408f02b94745a9da42180440
| 2,721 |
ipynb
|
Jupyter Notebook
|
01_Motivating_Examples.ipynb
|
rdhyee/diversity-census-calc
|
e823392c50bc15d3abc395649b7c05b1e441af02
|
[
"Apache-2.0"
] | 2 |
2016-08-13T19:49:54.000Z
|
2021-06-21T23:23:52.000Z
|
01_Motivating_Examples.ipynb
|
rdhyee/diversity-census-calc
|
e823392c50bc15d3abc395649b7c05b1e441af02
|
[
"Apache-2.0"
] | null | null | null |
01_Motivating_Examples.ipynb
|
rdhyee/diversity-census-calc
|
e823392c50bc15d3abc395649b7c05b1e441af02
|
[
"Apache-2.0"
] | 1 |
2015-08-21T20:31:09.000Z
|
2015-08-21T20:31:09.000Z
| 37.273973 | 204 | 0.636531 |
[
[
[
"\n\n* [Is Houston the most diverse big city in the USA?](http://bit.ly/diversehouston)\n* [Racial Dot Map](http://bit.ly/rdotmap) and [explanation](http://bit.ly/rdotmapintro)\n* [bookmarklets for using the Racial Dot Map](http://bit.ly/rdotlets)\n\ndot maps are spreading: http://www.robertmanduca.com/projects/jobs.html\n\n# more context\n\nThe US Census is complex....so it's good, even essential, to have a framing question to guide your explorations so that you don't get distracted or lost.\n\nI got into thinking of the census in 2002 when I saw a woman I knew in the following SF Chronicle article: \n\n[Claremont-Elmwood / Homogeneity in Berkeley? Well, yeah - SFGate](http://www.sfgate.com/bayarea/article/Claremont-Elmwood-Homogeneity-in-Berkeley-3306778.php)\n\nI thought at that point it should be easy for regular people to do census calculations....\n\nIn the summer of 2013, I wrote the following note to Greg Wilson about diversity calculations:\n\n[notes for Greg Wilson about an example Data Science Workflow](https://www.evernote.com/shard/s1/sh/b3f79cbc-c0c3-48a3-87b6-91da1b939783/1857ddee32d7baa04c55e629da05e0a7)\n\nThere's a whole cottage industry in musing on \"diversity\" in the USA:\n\n* [The Most Diverse Cities In The US - Business Insider](http://www.businessinsider.com/the-most-diverse-cities-in-the-us-2013-7) -- using 4 categories: Vallejo.\n\n* [Most And Least Diverse Cities: Brown University Study Evaluates Diversity In The U.S.](http://www.huffingtonpost.com/2012/09/07/most-least-diverse-cities-brown-university-study_n_1865715.html)\n\n* [The Top 10 Most Diverse Cities in America](http://www.cnbc.com/id/43066296) -- LA?\n\nand let's not forget the [Racial Dot Map](http://bit.ly/rdotmap) and [some background](http://bit.ly/rdotmapintro).",
"_____no_output_____"
]
]
] |
[
"markdown"
] |
[
[
"markdown"
]
] |
d0f15ee091f4f82310cb1c8e763d411fb8288839
| 12,443 |
ipynb
|
Jupyter Notebook
|
examples/training/decision_tree/example_credit_card_fraud.ipynb
|
LaudateCorpus1/snapml-examples
|
6f465722e6ec5260a7a61cca0074256c558068d5
|
[
"Apache-2.0"
] | null | null | null |
examples/training/decision_tree/example_credit_card_fraud.ipynb
|
LaudateCorpus1/snapml-examples
|
6f465722e6ec5260a7a61cca0074256c558068d5
|
[
"Apache-2.0"
] | 1 |
2021-10-05T16:57:20.000Z
|
2021-10-05T16:57:20.000Z
|
examples/training/decision_tree/example_credit_card_fraud.ipynb
|
LaudateCorpus1/snapml-examples
|
6f465722e6ec5260a7a61cca0074256c558068d5
|
[
"Apache-2.0"
] | null | null | null | 29.20892 | 169 | 0.57446 |
[
[
[
"```\nCopyright 2021 IBM Corporation\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n```",
"_____no_output_____"
],
[
"# Decision Tree on Credit Card Fraud Dataset\n\n## Background \n\nThe goal of this learning task is to predict if a credit card transaction is fraudulent or genuine based on a set of anonymized features.\n\n## Source\n\nThe raw dataset can be obtained directly from [Kaggle](https://www.kaggle.com/mlg-ulb/creditcardfraud). \n\nIn this example, we download the dataset directly from Kaggle using their API. \n\nIn order for this to work, you must login into Kaggle and folow [these instructions](https://www.kaggle.com/docs/api) to install your API token on your machine.\n\n## Goal\n\nThe goal of this notebook is to illustrate how Snap ML can accelerate training of a decision tree model on this dataset.\n\n## Code",
"_____no_output_____"
]
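As a hedged sketch of the Kaggle download step (the notebook itself relies on the `datasets.CreditCardFraud` helper below), the official `kaggle` package can fetch the data programmatically once the API token is installed; the target path here is just the cache directory used later.

```python
# Sketch: download the credit card fraud dataset with the Kaggle API (token required).
from kaggle.api.kaggle_api_extended import KaggleApi

api = KaggleApi()
api.authenticate()                                      # reads ~/.kaggle/kaggle.json
api.dataset_download_files("mlg-ulb/creditcardfraud",   # dataset slug from the URL above
                           path="cache-dir", unzip=True)
```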
],
[
[
"cd ../../",
"/home/aan/snapml-examples/examples\n"
],
[
"CACHE_DIR='cache-dir'",
"_____no_output_____"
],
[
"import numpy as np\nimport time\nfrom datasets import CreditCardFraud\nfrom sklearn.tree import DecisionTreeClassifier\nfrom snapml import DecisionTreeClassifier as SnapDecisionTreeClassifier\nfrom sklearn.metrics import roc_auc_score as score",
"_____no_output_____"
],
[
"dataset = CreditCardFraud(cache_dir=CACHE_DIR)\nX_train, X_test, y_train, y_test = dataset.get_train_test_split()",
"Reading binary CreditCardFraud dataset (cache) from disk.\n"
],
[
"print(\"Number of examples: %d\" % (X_train.shape[0]))\nprint(\"Number of features: %d\" % (X_train.shape[1]))\nprint(\"Number of classes: %d\" % (len(np.unique(y_train))))",
"Number of examples: 213605\nNumber of features: 28\nNumber of classes: 2\n"
],
[
"# the dataset is highly imbalanced\nlabels, sizes = np.unique(y_train, return_counts=True)\nprint(\"%6.2f %% of the training transactions belong to class 0\" % (sizes[0]*100.0/(sizes[0]+sizes[1])))\nprint(\"%6.2f %% of the training transactions belong to class 1\" % (sizes[1]*100.0/(sizes[0]+sizes[1])))\n\nfrom sklearn.utils.class_weight import compute_sample_weight\nw_train = compute_sample_weight('balanced', y_train)\nw_test = compute_sample_weight('balanced', y_test)",
" 99.83 % of the training transactions belong to class 0\n 0.17 % of the training transactions belong to class 1\n"
],
[
"model = DecisionTreeClassifier(max_depth=16, random_state=42)\nt0 = time.time()\nmodel.fit(X_train, y_train, sample_weight=w_train)\nt_fit_sklearn = time.time()-t0\nscore_sklearn = score(y_test, model.predict_proba(X_test)[:,1], sample_weight=w_test)\nprint(\"Training time (sklearn): %6.2f seconds\" % (t_fit_sklearn))\nprint(\"ROC AUC score (sklearn): %.4f\" % (score_sklearn))",
"Training time (sklearn): 8.18 seconds\nROC AUC score (sklearn): 0.8976\n"
],
[
"model = SnapDecisionTreeClassifier(max_depth=16, n_jobs=4, random_state=42)\nt0 = time.time()\nmodel.fit(X_train, y_train, sample_weight=w_train)\nt_fit_snapml = time.time()-t0\nscore_snapml = score(y_test, model.predict_proba(X_test)[:,1], sample_weight=w_test)\nprint(\"Training time (snapml): %6.2f seconds\" % (t_fit_snapml))\nprint(\"ROC AUC score (snapml): %.4f\" % (score_snapml))",
"Training time (snapml): 0.24 seconds\nROC AUC score (snapml): 0.9093\n"
],
[
"speed_up = t_fit_sklearn/t_fit_snapml\nscore_diff = (score_snapml-score_sklearn)/score_sklearn\nprint(\"Speed-up: %.1f x\" % (speed_up))\nprint(\"Relative diff. in score: %.4f\" % (score_diff))",
"Speed-up: 34.8 x\nRelative diff. in score: 0.0130\n"
]
],
[
[
"## Disclaimer\n\nPerformance results always depend on the hardware and software environment. \n\nInformation regarding the environment that was used to run this notebook are provided below:",
"_____no_output_____"
]
],
[
[
"import utils\nenvironment = utils.get_environment()\nfor k,v in environment.items():\n print(\"%15s: %s\" % (k, v))",
" platform: Linux-4.15.0-136-generic-x86_64-with-glibc2.10\n cpu_count: 16\n cpu_freq_min: 1200.0\n cpu_freq_max: 3200.0\n total_memory: 62.825439453125\n snapml_version: 1.7.0\nsklearn_version: 0.24.1\n"
]
],
[
[
"## Record Statistics\n\nFinally, we record the enviroment and performance statistics for analysis outside of this standalone notebook.",
"_____no_output_____"
]
],
[
[
"import scrapbook as sb\nsb.glue(\"result\", {\n 'dataset': dataset.name,\n 'n_examples_train': X_train.shape[0],\n 'n_examples_test': X_test.shape[0],\n 'n_features': X_train.shape[1],\n 'n_classes': len(np.unique(y_train)),\n 'model': type(model).__name__,\n 'score': score.__name__,\n 't_fit_sklearn': t_fit_sklearn,\n 'score_sklearn': score_sklearn,\n 't_fit_snapml': t_fit_snapml,\n 'score_snapml': score_snapml,\n 'score_diff': score_diff,\n 'speed_up': speed_up,\n **environment,\n})",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0f16a961a8224ccb91fdd8b54c53ba31c743f3d
| 253,708 |
ipynb
|
Jupyter Notebook
|
docs/notebooks/eqtc_workshop/EQTC_workshop_QAOA_tutorial_solution.ipynb
|
QuTech-Delft/quantum-inspire-examples
|
04e9bb879bf0b8a12cede0d67f7384028c40788f
|
[
"Apache-2.0"
] | 2 |
2021-09-13T11:05:46.000Z
|
2022-01-27T12:28:21.000Z
|
docs/notebooks/eqtc_workshop/EQTC_workshop_QAOA_tutorial_solution.ipynb
|
QuTech-Delft/quantum-inspire-examples
|
04e9bb879bf0b8a12cede0d67f7384028c40788f
|
[
"Apache-2.0"
] | 4 |
2021-06-21T12:29:39.000Z
|
2021-11-28T10:12:09.000Z
|
docs/notebooks/eqtc_workshop/EQTC_workshop_QAOA_tutorial_solution.ipynb
|
QuTech-Delft/quantum-inspire-examples
|
04e9bb879bf0b8a12cede0d67f7384028c40788f
|
[
"Apache-2.0"
] | 2 |
2021-06-21T12:09:52.000Z
|
2021-07-26T10:54:56.000Z
| 163.577047 | 90,600 | 0.884777 |
[
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\n\nfrom collections import defaultdict\nfrom scipy.optimize import minimize\n\nimport networkx as nx\nfrom networkx.generators.random_graphs import erdos_renyi_graph\n\nfrom IPython.display import Image",
"_____no_output_____"
],
[
"from qiskit import QuantumCircuit, execute, Aer\nfrom qiskit.tools.visualization import circuit_drawer, plot_histogram",
"_____no_output_____"
],
[
"from quantuminspire.credentials import get_authentication\nfrom quantuminspire.api import QuantumInspireAPI\nfrom quantuminspire.qiskit import QI\n\nQI_URL = 'https://api.quantum-inspire.com/'",
"_____no_output_____"
]
],
[
[
"In this notebook you will apply what you have just learned about cqasm and Quantum Inspire. We will consider a simple quantum algorithm, the quantum approximate optimization algorithm (QAOA), for which you will code the circuit in cqasm and send some jobs to real quantum hardware on the Quantum Inspire platform.",
"_____no_output_____"
],
[
"## 1. Recap: QAOA and MAXCUT",
"_____no_output_____"
],
[
"### Introduction to the Quantum Approximate Optimization Algorithm\n\n$$\\newcommand{\\ket}[1]{\\left|{#1}\\right\\rangle}$$\n$$\\newcommand{\\bra}[1]{\\left\\langle{#1}\\right|}$$\n$$\\newcommand{\\braket}[2]{\\left\\langle{#1}\\middle|{#2}\\right\\rangle}$$\n\nConsider some combinatorial optimization problem with objective function $C:x\\rightarrow \\mathbb{R}$ acting on $n$-bit strings $x\\in \\{0,1\\}^n$, domain $\\mathcal{D} \\subseteq \\{0,1\\}^n$, and objective\n\n\\begin{align}\n \\max_{x \\in \\mathcal{D}} C(x).\n\\end{align}\n\nIn maximization, an approximate optimization algorithm aims to find a string $x'$ that achieves a desired approximation ratio $\\alpha$, i.e.\n\n\\begin{equation}\n \\frac{C(x')}{C^*}\\geq \\alpha,\n\\end{equation}\n\nwhere $C^* = \\max_{x \\in \\mathcal{D}} C(x)$.\nIn QAOA, such combinatorial optimization problems are encoded into a cost Hamiltonian $H_C$, a mixing Hamiltonian $H_M$ and some initial quantum state $\\ket{\\psi_0}$. The cost Hamiltonian is diagonal in the computational basis by design, and represents $C$ if its eigenvalues satisfy\n\n\\begin{align}\n H_C \\ket{x} = C(x) \\ket{x} \\text{ for all } x \\in \\{0,1\\}^n.\n\\end{align}\n\nThe mixing Hamiltonian $H_M$ depends on $\\mathcal{D}$ and its structure, and is in the unconstrained case (i.e. when $\\mathcal{D}=\\{0,1\\}^n$) usually taken to be the transverse field Hamiltonian $H_M = \\sum_{j} X_j$. Constraints (i.e. when $\\mathcal{D}\\subset \\{0,1\\}^n$) can be incorporated directly into the mixing Hamiltonian or are added as a penalty function in the cost Hamiltonian. The initial quantum state $\\ket{\\psi_0}$ is usually taken as the uniform superposition over all possible states in the domain. $\\text{QAOA}_p$, parametrized in $\\gamma=(\\gamma_0,\\gamma_1,\\dots,\\gamma_{p-1}),\\beta=(\\beta_0,\\beta_1,\\dots,\\beta_{p-1})$, refers to a level-$p$ QAOA circuit that applies $p$ steps of alternating time evolutions of the cost and mixing Hamiltonians on the initial state. At step $k$, the unitaries of the time evolutions are given by\n\n\\begin{align}\n U_C(\\gamma_k) = e^{-i \\gamma_k H_C }, \\label{eq:UC} \\\\\n U_M(\\beta_k) = e^{-i \\beta_k H_M }. \\label{eq:UM}\n\\end{align}\n\nSo the final state $\\ket{\\gamma,\\beta}$ of $\\text{QAOA}_p$ is given by \n\n\\begin{align}\n \\ket{\\gamma,\\beta} = \\prod_{k=0}^{p-1} U_M(\\beta_k) U_C(\\gamma_k) \\ket{\\psi_0}.\n\\end{align}\n\nThe expectation value $ F_p(\\gamma,\\beta)$ of the cost Hamiltonian for state $\\ket{\\gamma,\\beta}$ is given by\n\n\\begin{align}\n F_p(\\gamma,\\beta) = \n \\bra{\\gamma,\\beta}H_C\\ket{\\gamma,\\beta},\n \\label{eq:Fp}\n\\end{align}\n\nand can be statistically estimated by taking samples of $\\ket{\\gamma,\\beta}$. The achieved approximation ratio (in expectation) of $\\text{QAOA}_p$ is then\n\n\\begin{equation}\n \\alpha = \\frac{F_p(\\gamma,\\beta)}{C^*}.\n\\end{equation}\n\nThe parameter combinations of $\\gamma,\\beta$ are usually found through a classical optimization procedure that uses $F_p(\\gamma,\\beta)$ as a black-box function to be maximized.",
"_____no_output_____"
],
[
"### Example application: MAXCUT\n\nMaxCut is an NP-hard optimisation problem that looks for an optimal 'cut' for a graph $G(V,E)$, in the sense that the cut generates a subset of nodes $S \\subset V$ that shares the largest amount of edges with its complement $ V\\setminus S$. In slightly modified form (omitting the constant), it has the following objective function\n\n\\begin{align}\n\\max_{s} \\frac{1}{2} \\sum_{\n\\langle i,j \\rangle \\in E} 1-s_i s_j,\n\\end{align}\n\nwhere the $s_i\\in\\{-1,1\\}$ are the variables and $i,j$ are the edge indices. This function can be easily converted into an Ising cost Hamiltonian, which takes the form\n\n\\begin{align}\nH_C = \\frac{1}{2}\\sum_{\\langle i,j\\rangle \\in E} I-Z_i Z_j.\n\\end{align}\n\nWe use the standard mixing Hamiltonian that sums over all nodes:\n\n\\begin{align}\nH_M = \\sum_{v \\in V} X_v.\n\\end{align}\n\nAs the initial state $\\ket{\\Psi_0}$ we take the uniform superposition, given by\n\n\\begin{align}\n\\ket{\\psi_0} = \\frac{1}{\\sqrt{2^{|V|}}}\\sum_{x=0}^{2^{|V|}-1} \\ket{x} \n\\end{align}\n",
"_____no_output_____"
],
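To make the encoding H_C|x> = C(x)|x> concrete, here is a small numerical check (an illustration, not part of the workshop code): build the diagonal cost Hamiltonian for the triangle graph with Kronecker products and compare its diagonal with the cut values.

```python
# Numerical check: H_C = 1/2 * sum_{(i,j) in E} (I - Z_i Z_j) is diagonal with the cut values.
import numpy as np
import networkx as nx

Z, I2 = np.diag([1.0, -1.0]), np.eye(2)

def z_on(qubit, n):
    """Z acting on `qubit`, identity elsewhere (qubit 0 = leftmost tensor factor)."""
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, Z if k == qubit else I2)
    return out

G = nx.complete_graph(3)                      # the triangle graph
n = G.number_of_nodes()
H_C = sum(0.5 * (np.eye(2**n) - z_on(i, n) @ z_on(j, n)) for i, j in G.edges())

for x in range(2**n):
    bits = format(x, f"0{n}b")                # bits[i] is the value of qubit i here
    cut = sum(bits[i] != bits[j] for i, j in G.edges())
    assert np.isclose(H_C[x, x], cut)
print("H_C|x> = C(x)|x> holds for every bit string of the triangle graph")
```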
[
"The goal of this workshop is to guide you through an implemented code that simulates a small quantum computer running the QAOA algorithm applied to the MAXCUT problem. We will use qiskit as well as cqasm as SDK's. For the sake of run time, you will always run the classical optimization part using the qiskit simulator: it would take too long for our purposes to do the actual function evualtions in the classical optimization step on the hardware.",
"_____no_output_____"
],
[
"## 2. Some useful functions and intializations",
"_____no_output_____"
],
[
"We first define some useful functions to be used later throughout the code.",
"_____no_output_____"
]
],
[
[
"# Just some function to draw graphs\ndef draw_cc_graph(G,node_color='b',fig_size=4):\n plt.figure(figsize=(fig_size,fig_size))\n nx.draw(G, G.pos, \n node_color= node_color,\n with_labels=True,\n node_size=1000,font_size=14)\n plt.show()",
"_____no_output_____"
],
[
"# Define the objective function\ndef maxcut_obj(x,G):\n cut = 0\n for i, j in G.edges():\n if x[i] != x[j]:\n # the edge is cut, negative value in agreement with the optimizer (which is a minimizer)\n cut -= 1\n return cut\n\n# Brute force method\ndef brute_force(G):\n n = len(G.nodes)\n costs = np.zeros(0)\n costs=[]\n for i in range(2**n):\n calc_costs = -1*maxcut_obj(bin(i)[2:].zfill(n),G)\n costs.append(calc_costs)\n max_costs_bf = max(costs)\n index_max = costs.index(max(costs))\n max_sol_bf = bin(index_max)[2:].zfill(n)\n return max_costs_bf, max_sol_bf,costs\n",
"_____no_output_____"
],
[
"# Generating the distribution resulting from random guessing the solution\ndef random_guessing_dist(G):\n dictio= dict()\n n = len(G.nodes())\n for i in range(2**n):\n key = bin(i)[2:].zfill(n)\n dictio[key] = maxcut_obj(bin(i)[2:].zfill(n),G)\n RG_energies_dist = defaultdict(int)\n for x in dictio:\n RG_energies_dist[maxcut_obj(x,G)] += 1\n return RG_energies_dist\n\n# Visualize multiple distributions\ndef plot_E_distributions(E_dists,p,labels):\n plt.figure()\n x_min = 1000\n x_max = - 1000\n width = 0.25/len(E_dists)\n for index,E_dist in enumerate(E_dists):\n pos = width*index-width*len(E_dists)/4 \n label = labels[index]\n X_list,Y_list = zip(*E_dist.items())\n X = -np.asarray(X_list)\n Y = np.asarray(Y_list)\n plt.bar(X + pos, Y/np.sum(Y), color = 'C'+str(index), width = width,label= label+', $p=$'+str(p))\n if np.min(X)<x_min:\n x_min = np.min(X)\n if np.max(X)>x_max:\n x_max = np.max(X)\n plt.xticks(np.arange(x_min,x_max+1))\n plt.legend()\n plt.xlabel('Objective function value')\n plt.ylabel('Probability')\n plt.show()\n\n\n# Determinet the expected objective function value from the random guessing distribution\ndef energy_random_guessing(RG_energies_dist):\n energy_random_guessing = 0\n total_count = 0\n for energy in RG_energies_dist.keys():\n count = RG_energies_dist[energy]\n energy_random_guessing += energy*count\n total_count += count\n energy_random_guessing = energy_random_guessing/total_count\n return energy_random_guessing",
"_____no_output_____"
]
],
[
[
"### Test instances",
"_____no_output_____"
]
],
[
[
"w2 = np.matrix([\n [0, 1],\n [1, 0]])\nG2 = nx.from_numpy_matrix(w2)\npositions = nx.circular_layout(G2)\nG2.pos=positions\nprint('G2:')\ndraw_cc_graph(G2)\n\n\nw3 = np.matrix([\n [0, 1, 1],\n [1, 0, 1],\n [1, 1, 0]])\nG3 = nx.from_numpy_matrix(w3)\npositions = nx.circular_layout(G3)\nG3.pos=positions\nprint('G3:')\ndraw_cc_graph(G3)",
"G2:\n"
]
],
[
[
"## 3. Circuit generators",
"_____no_output_____"
],
[
"We provide you with an example written in qiskit. You have to write the one for cqasm yourself.",
"_____no_output_____"
],
[
"### Qiskit generators",
"_____no_output_____"
]
],
[
[
"class Qiskit(object): \n # Cost operator:\n def get_cost_operator_circuit(G, gamma):\n N = G.number_of_nodes()\n qc = QuantumCircuit(N,N)\n for i, j in G.edges():\n qc.cx(i,j)\n qc.rz(2*gamma, j)\n qc.cx(i,j)\n return qc\n\n # Mixing operator\n def get_mixer_operator_circuit(G, beta):\n N = G.number_of_nodes()\n qc = QuantumCircuit(N,N)\n for n in G.nodes():\n qc.rx(2*beta, n)\n return qc\n \n # Build the circuit:\n def get_qaoa_circuit(G, beta, gamma):\n assert(len(beta) == len(gamma))\n p = len(beta) # number of unitary operations\n N = G.number_of_nodes()\n qc = QuantumCircuit(N,N)\n # first step: apply Hadamards to obtain uniform superposition\n qc.h(range(N))\n # second step: apply p alternating operators\n for i in range(p):\n qc.compose(Qiskit.get_cost_operator_circuit(G,gamma[i]),inplace=True)\n qc.compose(Qiskit.get_mixer_operator_circuit(G,beta[i]),inplace=True)\n # final step: measure the result\n qc.barrier(range(N))\n qc.measure(range(N), range(N))\n return qc\n\n",
"_____no_output_____"
],
[
"# Show the circuit for the G3 (triangle) graph\np = 1\nbeta = np.random.rand(p)*2*np.pi\ngamma = np.random.rand(p)*2*np.pi\nqc = Qiskit.get_qaoa_circuit(G3,beta, gamma)\nqc.draw(output='mpl')\n",
"_____no_output_____"
]
],
[
[
"### cqasm generators",
"_____no_output_____"
],
[
"Now it is up to you to apply what we have learned about cqasm to write the script for the cost and mixing operators:",
"_____no_output_____"
]
],
[
[
"class Cqasm(object):\n \n ### We give them this part\n def get_qasm_header(N_qubits):\n \"\"\"\n Create cQASM header for `N_qubits` qubits and prepare all in |0>-state.\n \"\"\"\n header = f\"\"\"\nversion 1.0\nqubits {N_qubits}\nprep_z q[0:{N_qubits-1}]\n\"\"\"\n return header\n \n def get_cost_operator(graph, gamma, p=1):\n \"\"\"\n Create cost operator for given angle `gamma`.\n \"\"\"\n layer_list = graph.number_of_edges()*[None]\n for n, (i,j) in enumerate(graph.edges()):\n layer_list[n] = '\\n'.join([f\"CNOT q[{i}], q[{j}]\", \n f\"Rz q[{j}], {2*gamma}\", \n f\"CNOT q[{i}], q[{j}]\"])\n\n return f\".U_gamma_{p}\\n\" + '\\n'.join(layer_list) + '\\n'\n\n def get_mixing_operator(graph, beta, p=1):\n \"\"\"\n Create mixing operator for given angle `beta`. \n Use parallel application of single qubit gates.\n \"\"\"\n U_beta = \"{\" + ' | '.join([f\"Rx q[{i}], {2*beta}\" for i in graph.nodes()]) + \"}\"\n return f\".U_beta_{p}\\n\" + U_beta + '\\n'\n\n def get_qaoa_circuit(graph, beta, gamma):\n \"\"\"\n Create full QAOA circuit for given `graph` and angles `beta` and `gamma`.\n \"\"\"\n assert len(beta) == len(gamma)\n p = len(beta) # number of layers\n N_qubits = graph.number_of_nodes()\n circuit_str = Cqasm.get_qasm_header(5) #N_qubits)\n\n # first step: apply Hadamards to obtain uniform superposition\n circuit_str += \"{\" + ' | '.join([f\"H q[{i}]\" for i in graph.nodes()]) + \"}\\n\\n\"\n # second step: apply p alternating operators\n circuit_str += '\\n'.join([Cqasm.get_cost_operator(graph, gamma[i], i+1) \n + Cqasm.get_mixing_operator(graph, beta[i], i+1) for i in range(p)])\n # final step: measure the result\n circuit_str += \"\\n\"\n circuit_str += \"measure_all\"\n\n return circuit_str",
"_____no_output_____"
]
],
[
[
"## 4. Hybrid-quantum classical optimization",
"_____no_output_____"
],
[
"Since QAOA is usually adopted as a hybrid quantum-classical algorithm, we need to construct an outer loop which optimizes the estimated $\\bra{\\gamma,\\beta}H\\ket{\\gamma,\\beta}$.",
"_____no_output_____"
]
],
[
[
"# Black-box function that describes the energy output of the QAOA quantum circuit\ndef get_black_box_objective(G, p, SDK = 'qiskit', backend = None, shots=2**10):\n if SDK == 'cqasm':\n if not backend:\n backend = 'QX single-node simulator'\n backend_type = qi.get_backend_type_by_name(backend)\n def f(theta):\n # first half is betas, second half is gammas\n beta = theta[:p]\n gamma = theta[p:]\n qc = Cqasm.get_qaoa_circuit(G, beta, gamma)\n result = qi.execute_qasm(qc, backend_type=backend_type, number_of_shots=shots)\n counts = result['histogram'] \n # return the energy\n return compute_maxcut_energy(counts, G)\n\n if SDK == 'qiskit':\n if not backend:\n backend = 'qasm_simulator'\n backend = Aer.get_backend(backend)\n def f(theta):\n # first half is betas, second half is gammas\n beta = theta[:p]\n gamma = theta[p:]\n qc = Qiskit.get_qaoa_circuit(G,beta, gamma)\n counts = execute(qc, backend,shots=shots).result().get_counts()\n # return the energy\n return compute_maxcut_energy(counts, G)\n else:\n return 'error: SDK not found'\n return f\n\n# Estimate the expectation value based on the circuit output\ndef compute_maxcut_energy(counts, G):\n energy = 0\n total_counts = 0\n for meas, meas_count in counts.items():\n obj_for_meas = maxcut_obj(meas, G)\n energy += obj_for_meas * meas_count\n total_counts += meas_count\n return energy / total_counts",
"_____no_output_____"
]
],
[
[
"## 5. A simple instance on the quantum inspire platform: 2-qubit case",
"_____no_output_____"
],
[
"Let us first consider the most simple MAXCUT instance. We have just two nodes, and an optimal cut with objective value 1 would be to place both nodes in its own set.",
"_____no_output_____"
]
],
[
[
"G=G2\nmax_costs_bf, max_sol_bf,costs = brute_force(G)\nprint(\"brute force method best cut: \",max_costs_bf)\nprint(\"best string brute force method:\",max_sol_bf)\n\ncolors = ['red' if x == '0' else 'b' for x in max_sol_bf]\ndraw_cc_graph(G,node_color = colors)",
"brute force method best cut: 1\nbest string brute force method: 01\n"
]
],
[
[
"Using qiskit, the circuit would look the following:",
"_____no_output_____"
]
],
[
[
"# Test and show circuit for some beta,gamma\np = 1\nbeta = np.random.rand(p)*np.pi\ngamma = np.random.rand(p)*2*np.pi\nqc = Qiskit.get_qaoa_circuit(G,beta, gamma)\nqc.draw(output='mpl')",
"_____no_output_____"
]
],
[
[
"Now let's run our hybrid-quantum algorithm simulation using qiskit:",
"_____no_output_____"
]
],
[
[
"# Parameters that can be changed:\np = 1\nlb = np.zeros(2*p)\nub = np.hstack([np.full(p, np.pi), np.full(p, 2*np.pi)])\ninit_point = np.random.uniform(lb, ub, 2*p)\nshots = 2**10\noptimiser = 'COBYLA'\nmax_iter = 100\n\n# Training of the parameters beta and gamma\nobj = get_black_box_objective(G,p,SDK='qiskit',shots=shots)\n# Lower and upper bounds: beta \\in {0, pi}, gamma \\in {0, 2*pi}\nbounds = [lb,ub]\n\n# Maximum number of iterations: 100\nres = minimize(obj, init_point, method=optimiser, bounds = bounds, options={'maxiter':max_iter, 'disp': True})\nprint(res)",
"/home/redwombat/miniconda3/envs/qi-py38/lib/python3.8/site-packages/scipy/optimize/_minimize.py:544: RuntimeWarning: Method COBYLA cannot handle bounds.\n warn('Method %s cannot handle bounds.' % method,\n"
],
[
"#Determine the approximation ratio:\nprint('Approximation ratio is',-res['fun']/max_costs_bf)",
"Approximation ratio is 1.0\n"
],
[
"# Extract the optimal values for beta and gamma and run a new circuit with these parameters\noptimal_theta = res['x']\nqc = Qiskit.get_qaoa_circuit(G, optimal_theta[:p], optimal_theta[p:])\ncounts = execute(qc,backend = Aer.get_backend('qasm_simulator'),shots=shots).result().get_counts()",
"_____no_output_____"
],
[
"plt.bar(counts.keys(), counts.values())\nplt.xlabel('String')\nplt.ylabel('Count')\nplt.show()",
"_____no_output_____"
],
[
"RG_dist = random_guessing_dist(G)",
"_____no_output_____"
],
[
"# Measurement distribution \nE_dist = defaultdict(int)\nfor k, v in counts.items():\n E_dist[maxcut_obj(k,G)] += v\n\nplot_E_distributions([E_dist,RG_dist],p,['Qiskit','random guessing'])\n\nE_random_guessing = energy_random_guessing(RG_dist)\nprint('Energy from random guessing is', E_random_guessing)",
"_____no_output_____"
],
[
"X_list,Y_list = zip(*E_dist.items())\nX = -np.asarray(X_list)\nY = np.asarray(Y_list)\nprint('Probability of measuring the optimal solution is',Y[np.argmax(X)]/shots)",
"Probability of measuring the optimal solution is 1.0\n"
]
],
[
[
"Now that we have obtained some good values for $\\beta$ and $\\gamma$ through classical simulation, let's see what Starmon-5 would give us.",
"_____no_output_____"
],
[
"The figure below shows the topology of Starmon-5. Since q0 is not connected to q1, we have to relabel the nodes. Networkx as such an option, by using 'nx.relabel_nodes(G,{1:2}' we can relabel node 1 as node 2. Since q0 is connected to q2, this does allow us to run our cqasm code on Starmon-5. For qiskit, this step is irrelevant as we have all-to-all connectivity in the simulation.",
"_____no_output_____"
]
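A short sketch of the relabelling mentioned above (illustration only), showing that the relabelled graph only uses an edge that exists on the hardware:

```python
# Map node 1 of the two-node graph onto qubit 2, so the only edge sits on a physical
# q0-q2 connection of Starmon-5.
G_hw = nx.relabel_nodes(G2, {1: 2})
print(list(G_hw.nodes()))   # [0, 2]
print(list(G_hw.edges()))   # [(0, 2)] -> directly executable on the hardware
```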
],
[
[
"Image(filename='Starmon5.png')",
"_____no_output_____"
],
[
"qc_Cqasm = Cqasm.get_qaoa_circuit(nx.relabel_nodes(G, {1: 2}), optimal_theta[:p], optimal_theta[p:])\nprint(qc_Cqasm)",
"\nversion 1.0\nqubits 5\nprep_z q[0:4]\n{H q[0] | H q[2]}\n\n.U_gamma_1\nCNOT q[0], q[2]\nRz q[2], 1.5828114789648722\nCNOT q[0], q[2]\n.U_beta_1\n{Rx q[0], -0.798661078152829 | Rx q[2], -0.798661078152829}\n\nmeasure_all\n"
]
],
[
[
"Now we run the Cqasm-circuit on the Starmon-5 Hardware.",
"_____no_output_____"
]
],
[
[
"authentication = get_authentication()\nQI.set_authentication(authentication, QI_URL)",
"_____no_output_____"
],
[
"qiapi = QuantumInspireAPI(QI_URL, authentication)\nresult = qiapi.execute_qasm(qc_Cqasm, backend_type=qiapi.get_backend_type('Starmon-5'), number_of_shots=2**10)\ncounts_QI = result['histogram']",
"_____no_output_____"
]
],
[
[
"Inspecting 'counts_QI', we see that it returns the integer corresponding to the bit string result of the measurement ",
"_____no_output_____"
]
],
[
[
"counts_QI",
"_____no_output_____"
]
],
[
[
"Note that we measure more than just the two relevant qubits, since we had the 'measure all' command in the the cqasm code. The distribution over the strings looks the following:",
"_____no_output_____"
]
],
[
[
"counts_bin = {}\nfor k,v in counts_QI.items():\n counts_bin[f'{int(k):05b}'] = v\nprint(counts_bin)\nplt.bar(counts_bin.keys(), counts_bin.values())\nplt.xlabel('State')\nplt.ylabel('Measurement probability')\nplt.xticks(rotation='vertical')\nplt.show()",
"{'00000': 0.056640625, '00001': 0.4208984375, '00010': 0.005859375, '00011': 0.021484375, '00100': 0.3994140625, '00101': 0.0283203125, '00110': 0.0234375, '00111': 0.001953125, '01000': 0.001953125, '01001': 0.0068359375, '01100': 0.015625, '01101': 0.001953125, '10001': 0.0009765625, '10100': 0.0078125, '10110': 0.001953125, '11100': 0.0048828125}\n"
]
],
[
[
"Let's create another counts dictionary with only the relevant qubits, which are q0 and q2:",
"_____no_output_____"
]
],
[
[
"counts_bin_red = defaultdict(float)\nfor string in counts_bin:\n q0 = string[-1]\n q1 = string[-3]\n counts_bin_red[(q0+q1)]+=counts_bin[string]",
"_____no_output_____"
],
[
"counts_bin_red",
"_____no_output_____"
]
],
[
[
"We now plot all distributions (qiskit, Starmon-5, and random guessing) in a single plot.",
"_____no_output_____"
]
],
[
[
"#Determine the approximation ratio:\nprint('Approximation ratio on the hardware is',-compute_maxcut_energy(counts_bin_red,G)/max_costs_bf)\n\n# Random guessing distribution\nRG_dist = random_guessing_dist(G)\n\n# Measurement distribution \nE_dist_S5 = defaultdict(int)\nfor k, v in counts_bin_red.items():\n E_dist_S5[maxcut_obj(k,G)] += v\n \nplot_E_distributions([E_dist,E_dist_S5,RG_dist],p,['Qiskit','Starmon-5','random guessing'])\n\n\nX_list,Y_list = zip(*E_dist_S5.items())\nX = -np.asarray(X_list)\nY = np.asarray(Y_list)\nprint('Probability of measuring the optimal solution is',Y[np.argmax(X)])\n\n\nE_random_guessing = energy_random_guessing(RG_dist)\nprint('Expected approximation ratio random guessing is', -E_random_guessing/max_costs_bf)",
"Approximation ratio on the hardware is 0.9033203125\n"
]
],
[
[
"## 6. Compilation issues: the triangle graph\n",
"_____no_output_____"
],
[
"For the graph with just two nodes we already had some minor compilation issues, but this was easily fixed by relabeling the nodes. We will now consider an example for which relabeling is simply not good enough to get it mapped to the Starmon-5 toplogy.",
"_____no_output_____"
]
],
[
[
"G=G3\nmax_costs_bf, max_sol_bf,costs = brute_force(G)\nprint(\"brute force method best cut: \",max_costs_bf)\nprint(\"best string brute force method:\",max_sol_bf)\n\ncolors = ['red' if x == '0' else 'b' for x in max_sol_bf]\ndraw_cc_graph(G,node_color = colors)",
"brute force method best cut: 2\nbest string brute force method: 001\n"
]
],
[
[
"Due to the topology of Starmon-5 this graph cannot be executed without any SWAPS. Therefore, we ask you to write a new circuit generator that uses SWAPS in order to make the algorithm work with the Starmon-5 topology. Let's also swap back to the original graph configuration, so that we can in the end measure only the qubits that correspond to a node in the graph (this is already written for you)",
"_____no_output_____"
]
],
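[
[
"# A small sketch of why a SWAP is needed (the coupling map below is our reading of the\n# Starmon-5 figure shown earlier: q2 in the centre, connected to q0, q1, q3 and q4):\nimport networkx as nx\n\nstarmon5 = nx.Graph([(0, 2), (1, 2), (2, 3), (2, 4)])\n\nfor edge in [(0, 1), (0, 2), (1, 2)]:\n    print(edge, 'native' if starmon5.has_edge(*edge) else 'not native -> route via q2 with a SWAP')",
"_____no_output_____"
]
],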
[
[
"def QAOA_triangle_circuit_cqasm(graph, beta, gamma):\n circuit_str = Cqasm.get_qasm_header(5)\n circuit_str += \"{\" + ' | '.join([f\"H q[{i}]\" for i in graph.nodes()]) + \"}\\n\\n\"\n \n def get_triangle_cost_operator(graph, gamma, p):\n layer_list = graph.number_of_edges() * [None]\n for n, edge in enumerate(graph.edges()):\n if 0 in edge and 1 in edge:\n layer_list[n] = '\\n'.join([f\"SWAP q[{edge[0]}], q[2]\",\n f\"CNOT q[2], q[{edge[1]}]\", \n f\"Rz q[{edge[1]}], {2*gamma}\", \n f\"CNOT q[2], q[{edge[1]}]\",\n f\"SWAP q[{edge[0]}], q[2]\" ])\n else:\n layer_list[n] = '\\n'.join([f\"CNOT q[{edge[0]}], q[{edge[1]}]\", \n f\"Rz q[{edge[1]}], {2*gamma}\", \n f\"CNOT q[{edge[0]}], q[{edge[1]}]\"])\n\n return f\".U_gamma_{p}\\n\" + '\\n'.join(layer_list) + '\\n'\n \n circuit_str += '\\n'.join([get_triangle_cost_operator(graph, gamma[i], i+1) \n + Cqasm.get_mixing_operator(graph, beta[i], i+1) for i in range(p)]) \n circuit_str += \"\\n\"\n circuit_str += \"{\" + ' | '.join([f\"measure q[{i}]\" for i in graph.nodes()]) + \"}\\n\" \n return circuit_str ",
"_____no_output_____"
]
],
[
[
"We now run the same procedure as before to obtain good parameter values",
"_____no_output_____"
]
],
[
[
"# Parameters that can be changed:\np = 1\nlb = np.zeros(2*p)\nub = np.hstack([np.full(p, np.pi), np.full(p, 2*np.pi)])\ninit_point = np.random.uniform(lb, ub, 2*p)\nshots = 2**10\noptimiser = 'COBYLA'\nmax_iter = 100\n\n# Training of the parameters beta and gamma\nobj = get_black_box_objective(G,p,SDK='qiskit',shots=shots)\n# Lower and upper bounds: beta \\in {0, pi}, gamma \\in {0, 2*pi}\nbounds = [lb,ub]\n\n# Maximum number of iterations: 100\nres = minimize(obj, init_point, method=optimiser, bounds = bounds,options={'maxiter':max_iter, 'disp': True})\nprint(res)",
"/home/redwombat/miniconda3/envs/qi-py38/lib/python3.8/site-packages/scipy/optimize/_minimize.py:544: RuntimeWarning: Method COBYLA cannot handle bounds.\n warn('Method %s cannot handle bounds.' % method,\n"
],
[
"#Determine the approximation ratio:\nprint('Approximation ratio is',-res['fun']/max_costs_bf)\n\n# Extract the optimal values for beta and gamma and run a new circuit with these parameters\noptimal_theta = res['x']\nqc = Qiskit.get_qaoa_circuit(G, optimal_theta[:p], optimal_theta[p:])\ncounts = execute(qc,backend = Aer.get_backend('qasm_simulator'),shots=shots).result().get_counts()\n\n# Random guessing distribution\nRG_dist = random_guessing_dist(G)\n\n# Measurement distribution \nE_dist = defaultdict(int)\nfor k, v in counts.items():\n E_dist[maxcut_obj(k,G)] += v\n\nX_list,Y_list = zip(*E_dist.items())\nX = -np.asarray(X_list)\nY = np.asarray(Y_list)\nprint('Probability of measuring the optimal solution is',Y[np.argmax(X)]/shots)\n\nE_random_guessing = energy_random_guessing(RG_dist)\nprint('Expected approximation ratio random guessing is', -E_random_guessing/max_costs_bf)",
"Approximation ratio is 1.0\nProbability of measuring the optimal solution is 1.0\nExpected approximation ratio random guessing is 0.75\n"
],
[
"plt.bar(counts.keys(), counts.values())\nplt.xlabel('String')\nplt.ylabel('Count')\nplt.show()",
"_____no_output_____"
]
],
[
[
"Let's run it on Starmon-5 again!",
"_____no_output_____"
]
],
[
[
"# Extract the optimal values for beta and gamma and run a new circuit with these parameters\noptimal_theta = res['x']\nqasm_circuit = QAOA_triangle_circuit_cqasm(G, optimal_theta[:p], optimal_theta[p:])\nqiapi = QuantumInspireAPI(QI_URL, authentication)\nresult = qiapi.execute_qasm(qasm_circuit, backend_type=qiapi.get_backend_type('Starmon-5'), number_of_shots=shots)\ncounts = result['histogram']\n\nprint(qasm_circuit)\nprint(result)",
"\nversion 1.0\nqubits 5\nprep_z q[0:4]\n{H q[0] | H q[1] | H q[2]}\n\n.U_gamma_1\nSWAP q[0], q[2]\nCNOT q[2], q[1]\nRz q[1], 8.788986167232244\nCNOT q[2], q[1]\nSWAP q[0], q[2]\nCNOT q[0], q[2]\nRz q[2], 8.788986167232244\nCNOT q[0], q[2]\nCNOT q[1], q[2]\nRz q[2], 8.788986167232244\nCNOT q[1], q[2]\n.U_beta_1\n{Rx q[0], 3.781049654543756 | Rx q[1], 3.781049654543756 | Rx q[2], 3.781049654543756}\n\n{measure q[0] | measure q[1] | measure q[2]}\n\n{'id': 7069985, 'url': 'https://api.quantum-inspire.com/results/7069985/', 'job': 'https://api.quantum-inspire.com/jobs/7078006/', 'created_at': '2021-11-26T12:22:32.339692Z', 'number_of_qubits': 5, 'execution_time_in_seconds': 0.155648, 'raw_text': '', 'raw_data_url': 'https://api.quantum-inspire.com/results/7069985/raw-data/c34440ffc174351f859a320acde97bf1f5596d01f3bbd9acacd93a0771764fe5/', 'histogram': OrderedDict([('0', 0.0390625), ('1', 0.1611328125), ('2', 0.08203125), ('3', 0.240234375), ('4', 0.1025390625), ('5', 0.15625), ('6', 0.15625), ('7', 0.0625)]), 'histogram_url': 'https://api.quantum-inspire.com/results/7069985/histogram/c34440ffc174351f859a320acde97bf1f5596d01f3bbd9acacd93a0771764fe5/', 'measurement_mask': 7, 'quantum_states_url': 'https://api.quantum-inspire.com/results/7069985/quantum-states/c34440ffc174351f859a320acde97bf1f5596d01f3bbd9acacd93a0771764fe5/', 'measurement_register_url': 'https://api.quantum-inspire.com/results/7069985/measurement-register/c34440ffc174351f859a320acde97bf1f5596d01f3bbd9acacd93a0771764fe5/', 'calibration': 'https://api.quantum-inspire.com/calibration/109027/'}\n"
],
[
"counts",
"_____no_output_____"
],
[
"counts_bin = {}\nfor k,v in counts.items():\n counts_bin[f'{int(k):03b}'] = v\nprint(counts_bin)\nplt.bar(counts_bin.keys(), counts_bin.values())\nplt.xlabel('String')\nplt.ylabel('Probability')\nplt.show()",
"{'000': 0.0390625, '001': 0.1611328125, '010': 0.08203125, '011': 0.240234375, '100': 0.1025390625, '101': 0.15625, '110': 0.15625, '111': 0.0625}\n"
],
[
"#Determine the approximation ratio:\nprint('Approximation ratio on the hardware is',-compute_maxcut_energy(counts_bin,G)/max_costs_bf)\n\n# Random guessing distribution\nRG_dist = random_guessing_dist(G)\n\n# Measurement distribution \nE_dist_S5 = defaultdict(int)\nfor k, v in counts_bin.items():\n E_dist_S5[maxcut_obj(k,G)] += v\n \nplot_E_distributions([E_dist,E_dist_S5,RG_dist],p,['Qiskit','Starmon-5','random guessing'])\n\n\nX_list,Y_list = zip(*E_dist_S5.items())\nX = -np.asarray(X_list)\nY = np.asarray(Y_list)\nprint('Probability of measuring the optimal solution is',Y[np.argmax(X)])\n\n\nE_random_guessing = energy_random_guessing(RG_dist)\nprint('Expected approximation ratio random guessing is', -E_random_guessing/max_costs_bf)",
"Approximation ratio on the hardware is 0.8984375\n"
]
],
[
[
"## 7. More advanced questions\n\nSome questions you could look at:\n\n- What is the performance on other graph instances?\n- How scalable is this hardware for larger problem sizes?\n- How much can the circuit be optimized for certain graph instances?\n- Are the errors perfectly random or is there some correlation?\n- Are there tricks to find good parameters? ",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d0f16cf06af438e22715bd8bb6e174237cc4415b
| 19,205 |
ipynb
|
Jupyter Notebook
|
teams/team_trimble/Trimble.ipynb
|
AbinMM/PYNQ_Hackathon_2017
|
711c75e8590b02f313295cef712188691690c948
|
[
"BSD-3-Clause"
] | 19 |
2017-10-08T03:18:38.000Z
|
2020-07-07T02:34:18.000Z
|
teams/team_trimble/Trimble.ipynb
|
AbinMM/PYNQ_Hackathon_2017
|
711c75e8590b02f313295cef712188691690c948
|
[
"BSD-3-Clause"
] | 2 |
2017-10-08T03:15:10.000Z
|
2017-10-10T16:10:32.000Z
|
teams/team_trimble/Trimble.ipynb
|
AbinMM/PYNQ_Hackathon_2017
|
711c75e8590b02f313295cef712188691690c948
|
[
"BSD-3-Clause"
] | 28 |
2017-10-07T23:24:36.000Z
|
2022-03-29T08:03:40.000Z
| 35.897196 | 1,253 | 0.6063 |
[
[
[
"##### Detection and Location Chain\n\n",
"_____no_output_____"
],
[
"**Abstract**: This hackathon project represents our effort to combine our existing machine learning and photogrametry efforts and further combine those efforts with both Cloud and Edge based solutions based upon Xilinx FPGA acceleration. \n\nThe Trimble team decided that the Xilinx hackathon would provide an excellent oppertunity to take the first steps in combining these technologies and learning how to use the varied Xilinx techologies.\n\nOur initial hope was to use a TensorFlow system to provide the machine learning component of our test based on an AWS Ultrascale instance. That technology was unavailable for the hackathon, so during the event we trained a system based on a more stardard AWS Tensorflow instance and accessed that instance via Pynq networking.\n\nThe Team Trimble is composed of\n\n* Roy Godzdanker – Trimble Product Architect for ICT\n* Robert Banefield – Trimble Data Machine Learning Specialist\n* Vinod Khare – Trimble ICT Photogrammetry\n* Ashish Khare – Trimble Geomatics Photogrammetry\n* Young-Jin Lee – Trimble ICT Photogrammetry\n* Matt Compton - Trimble ICT Design Engineer\n\n_NOTES_:\n\n1. The TensorFlow system is sitting at an AWS instance. This is the slow and simple one for my debug effort. In the spirit of the hackathon, we started in training at the beginning of the night. This implies that it's capabilities were not exceptional at the beginning of the night and it will be better as the newly trained net is swapped in in the morning. Further tests back at the ranch will include testing this chain against some of the other theoretical models. The current net underperforms some previous efforts, further exploration is needed here\n\n2. We also need to explore the TensorFlow element as an edge device. Advances in Xilinx FPGA tools may make that cost competative with a GPU box.\n\n3. Xilinx HLS looks to be able to add needed acceleration functions but this needs further exploration going forward. We explored the idea of overly with python controled DMA, this is very promising\n\nThe following are globals used within this project To Change this to different image set, simply change the images indicated and run through the notebook again.\n",
"_____no_output_____"
],
[
"1. Camera data is sent to the system from a remote repository. \n2. The Camera Data is sent to the Pynq to being processing.\n3. The TensorFlow cloud delivers metadata for the images that were transferred to it back to the Pynq via net transfer\n4. The Pynq software uses the photogrammetric OpenCV software chain that we wrote to estimate and calculate geometric position. In addition, images are displayed on the HDMI monitor and LCD display so we can see what is going on and to serve as a debug aid\n5. The calculated position of the object is returned.",
"_____no_output_____"
]
],
[
[
"## Imports\n\nimport cv2\nimport json\nimport matplotlib.pyplot as pyplot\nimport numpy\nimport matplotlib.patches as patches\nimport pynq.overlays.base\nimport pynq.lib.arduino as arduino\nimport pynq.lib.video as video\nimport requests\nimport scipy\nimport sys\nimport PIL",
"_____no_output_____"
],
[
"## Config\ngAWS_TENSORFLOW_INSTANCE = 'http://34.202.159.80'\ngCAMERA0_IMAGE = \"/home/xilinx/jupyter_notebooks/trimble-mp/CAM2_image_0032.jpg\"\ngCAMERA1_IMAGE = \"/home/xilinx/jupyter_notebooks/trimble-mp/CAM3_image_0032.jpg\"",
"_____no_output_____"
]
],
[
[
"Turn on the HDMI coming off the pink board. This is used in a fashion that is different than their primary test notes and may be difficult to complete during the time period. Specifically, the hdmi out is used without the input",
"_____no_output_____"
]
],
[
[
"base = pynq.overlays.base.BaseOverlay(\"base.bit\")\nhdmi_in = base.video.hdmi_in\nhdmi_out = base.video.hdmi_out\nv = video.VideoMode(1920,1080,24)\nhdmi_out.configure(v, video.PIXEL_BGR)\nhdmi_out.start()\noutframe = hdmi_out.newframe()",
"_____no_output_____"
]
],
[
[
"Using Pillow, pull in the chosen image for Camera 0",
"_____no_output_____"
]
],
[
[
"# Read images\nimage0BGR = cv2.imread(gCAMERA0_IMAGE)\nimage1BGR = cv2.imread(gCAMERA1_IMAGE)\n\nimage0 = image0BGR[...,::-1]\nimage1 = image1BGR[...,::-1]",
"_____no_output_____"
]
],
[
[
"Do exactly the same for the second image of the overlapping pair from camera 1",
"_____no_output_____"
],
[
"To send one of these to the HDMI, we are going to have to reformat it to fit the provided HDMI display",
"_____no_output_____"
]
],
[
[
"# Show image 0 on HDMI\n\n# Need to resize it first\noutframe[:] = cv2.resize(image0BGR, (1920, 1080));\nhdmi_out.writeframe(outframe)",
"_____no_output_____"
]
],
[
[
"We will also display Young-Jin to the LCD screen. Why ? Because Young Jin does awesome work and deserves to be famous and also because I can",
"_____no_output_____"
]
],
[
[
"## Show image on LCD\n\n# Open LCD object and clear\nlcd = arduino.Arduino_LCD18(base.ARDUINO)\nlcd.clear()\n\n# Write image to disk\nnw = 160\nnl = 128\ncv2.imwrite(\"/home/xilinx/small.jpg\", cv2.resize(image0BGR, (nw,nl)))\n\n# Display!\nlcd.display(\"/home/xilinx/small.jpg\",x_pos=0,y_pos=127,orientation=3,background=[255,255,255])",
"_____no_output_____"
]
],
[
[
"We now need to classify the images. This runs the remote version of TensorFlow on the image to get the bounding box. The following routine wraps this for simplicity. The spun up AWS TensorFlow instance is expecting to get be\nsent a JPEG and will classify and send back the results as JSON.\n\nThe IP address of the spun up AWS instance is given by the global gAWS_TENSORFLOW_INSTANCE which is specified at the\nbeginning of this note book.",
"_____no_output_____"
]
],
[
[
"def RemoteTensorFlowClassify(image_name_string):\n f = open(image_name_string,'rb')\n r = requests.put(gAWS_TENSORFLOW_INSTANCE, data=f)\n return json.loads(r.content.decode())",
"_____no_output_____"
]
],
[
[
"Actually call the defined function on images from camera 1 and camera 2. ",
"_____no_output_____"
]
],
[
[
"#Return the object that camera zero sees with the maximum score\ncam0_json_return = RemoteTensorFlowClassify(gCAMERA0_IMAGE)\njson0 = cam0_json_return[\"image_detection\"]\nmax = 0.0\nout = []\nfor var in json0['object']:\n if (var['score'] > max):\n out = var\njson0 = out\njson0",
"_____no_output_____"
],
[
"#Return the object that camera one sees with the maximum score\ncam1_json_return = RemoteTensorFlowClassify(gCAMERA1_IMAGE)\njson1 = cam1_json_return[\"image_detection\"]\nmax = 0.0\nout = []\nfor var in json1['object']:\n if (var['score'] > max):\n out = var\njson1 = out\njson1",
"_____no_output_____"
]
],
[
[
"The AWS tensorflow reports the bounding boxes for the required object.",
"_____no_output_____"
]
],
[
[
"def DrawRect(the_json,the_image, x1, x2, y1, y2 ): \n # Currently offline until the TesnorFlow net is fixed\n #x1 = int(the_json[\"xmin\"]) \n #y1 = int(the_json[\"ymin\"]) \n #x2 = int(the_json[\"xmax\"]) \n #y2 = int(the_json[\"ymax\"])\n \n \n fig, ax = pyplot.subplots(1)\n ax.imshow(the_image)\n rect = patches.Rectangle((x1,y1), (x2-x1), (y2-y1), linewidth = 1 , edgecolor = 'r', facecolor='none') \n ax.add_patch(rect)\n pyplot.show()",
"_____no_output_____"
],
[
"## Convert to grayscale\ngrayImage0 = cv2.cvtColor(image0, cv2.COLOR_RGB2GRAY)\ngrayImage1 = cv2.cvtColor(image1, cv2.COLOR_RGB2GRAY)",
"_____no_output_____"
],
[
"def IsInsideROI(pt, the_json, x1, x2, y1, y2):\n# x_min = int(the_json[\"object\"][\"xmin\"])\n# y_min = int(the_json[\"object\"][\"ymin\"])\n# x_max = int(the_json[\"object\"][\"xmax\"])\n# y_max = int(the_json[\"object\"][\"ymax\"])\n \n x_min = x1\n y_min = y1\n x_max = x2\n y_max = y2\n if(pt[0]>=x_min and pt[0] <=x_max and pt[1]>=y_min and pt[1]<=y_max):\n return True\n else: \n return False",
"_____no_output_____"
],
[
"## Detect keypoints\nBrisk = cv2.BRISK_create()\n\nkeyPoints0 = Brisk.detect(grayImage0)\nkeyPoints1 = Brisk.detect(grayImage1)",
"_____no_output_____"
],
[
"## Find keypoints inside ROI\nroiKeyPoints0 = numpy.asarray([k for k in keyPoints0 if IsInsideROI(k.pt,json0, 955, 1045, 740, 1275 )])\nroiKeyPoints1 = numpy.asarray([k for k in keyPoints1 if IsInsideROI(k.pt,json1, 1335, 1465, 910, 1455 )])",
"_____no_output_____"
],
[
"## Compute descriptors for keypoitns inside ROI\n[keyPoints0, desc0] = Brisk.compute(grayImage0, roiKeyPoints0);\n[keyPoints1, desc1] = Brisk.compute(grayImage1, roiKeyPoints1);",
"_____no_output_____"
],
[
"## Find matches of ROI keypoints\nBF = cv2.BFMatcher()\nmatches = BF.match(desc0, desc1)",
"_____no_output_____"
],
[
"## Extract pixel coordinates from matched keypoints\n\nx_C0 = numpy.asarray([keyPoints0[match.queryIdx].pt for match in matches])\nx_C1 = numpy.asarray([keyPoints1[match.trainIdx].pt for match in matches])",
"_____no_output_____"
]
],
[
[
"Full mesh triangularization is off line until we reconsile the camera calibration. There was an issue discovered during the hackathon that needs to be examined in teh lab setup s the code below this will not function until we reconsile the camera calibration config.",
"_____no_output_____"
]
],
[
[
"# Triangulate points\n\n# We need projection matrices for camera 0 and camera 1\nf = 8.350589e+000 / 3.45E-3\ncx = -3.922872e-002 / 3.45E-3\ncy = -1.396717e-004 / 3.45E-3\nK_C0 = numpy.transpose(numpy.asarray([[f, 0, 0], [0, f, 0], [cx, cy, 1]]))\nk_C0 = numpy.asarray([1.761471e-003, -2.920431e-005, -8.341438e-005, -9.470247e-006, -1.140118e-007])\n\n[R_C0, J] = cv2.Rodrigues(numpy.asarray([1.5315866633, 2.6655790203, -0.0270418317]))\nT_C0 = numpy.transpose(numpy.asarray([[152.9307390952, 260.3066944976, 351.7405264829]])) * 1000\n\nf = 8.259861e+000 / 3.45E-3\ncx = 8.397453e-002 / 3.45E-3\ncy = -2.382030e-002 / 3.45E-3\nK_C0 = numpy.transpose(numpy.asarray([[f, 0, 0], [0, f, 0], [cx, cy, 1]]))\nK_C1 = numpy.asarray([1.660053e-003, -2.986269e-005, -7.461966e-008, -2.247960e-004, -2.290483e-006])\n\n[R_C1, J] = cv2.Rodrigues(numpy.asarray([1.4200199799, -2.6113619450, -0.1371719827]))\nT_C1 = numpy.transpose(numpy.asarray([[146.8718203137, 259.9661037150, 351.5832136366]])) * 1000\n\nP_C0 = numpy.dot(K_C0,numpy.concatenate((R_C0, T_C0), 1))\nP_C1 = numpy.dot(K_C1,numpy.concatenate((R_C1, T_C1), 1))",
"_____no_output_____"
],
[
"# Compute 3D coordinates of detected points\nX_C0 = cv2.convertPointsFromHomogeneous(numpy.transpose(cv2.triangulatePoints(P_C0, P_C1, numpy.transpose(x_C0), numpy.transpose(x_C1))))",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0f1811cbd5ecdae1565b42b8fc783a5b965f599
| 26,222 |
ipynb
|
Jupyter Notebook
|
_build/jupyter_execute/KDP-2_operands_real_single_ray.ipynb
|
Jonas231/OpticalDesignDocu_o
|
c99cad0621666aa708e0f4d8b1c3e9d8460d46f6
|
[
"MIT"
] | null | null | null |
_build/jupyter_execute/KDP-2_operands_real_single_ray.ipynb
|
Jonas231/OpticalDesignDocu_o
|
c99cad0621666aa708e0f4d8b1c3e9d8460d46f6
|
[
"MIT"
] | null | null | null |
_build/jupyter_execute/KDP-2_operands_real_single_ray.ipynb
|
Jonas231/OpticalDesignDocu_o
|
c99cad0621666aa708e0f4d8b1c3e9d8460d46f6
|
[
"MIT"
] | null | null | null | 110.64135 | 17,948 | 0.563534 |
[
[
[
"# Real single ray \n\nThese are the real single ray operands. Real raytracing is exact raytracing without paraxial approximation.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport itables\nfrom itables import init_notebook_mode, show\nimport itables.options as opt\n\ninit_notebook_mode(all_interactive=True)\n\nopt.lengthMenu = [50, 100, 200, 500]\n#opt.classes = [\"display\", \"cell-border\"]\n#opt.classes = [\"display\", \"nowrap\"]\n\nopt.columnDefs = [{\"className\": \"dt-left\", \"targets\": \"_all\"}, {\"width\": \"500px\", \"targets\": 4}]\n#opt.maxBytes = 0\n#pd.get_option('display.max_columns')\n#pd.get_option('display.max_rows')\n\n\n",
"_____no_output_____"
],
[
"import os\ncwd = os.getcwd()\nfilename = os.path.join(cwd, os.path.join('Excel', 'KDP-2_optimization_operands.xlsx'))\n\ndf_RSR = pd.read_excel(filename, sheet_name = \"real single ray\", header = 1, index_col = 0)\ndf_RSR = df_RSR.dropna() # drop nan values\n\n",
"_____no_output_____"
],
[
"\ndf_RSR\n",
"_____no_output_____"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0f1929852b2d8539231f89b84a68d52f32301a9
| 43,915 |
ipynb
|
Jupyter Notebook
|
notebooks/regression is my profession/Linear_vs_Ridge.ipynb
|
nkapchenko/Neuron
|
1b450b4785ea9e1862a78b7a89c1786478dca726
|
[
"MIT"
] | null | null | null |
notebooks/regression is my profession/Linear_vs_Ridge.ipynb
|
nkapchenko/Neuron
|
1b450b4785ea9e1862a78b7a89c1786478dca726
|
[
"MIT"
] | null | null | null |
notebooks/regression is my profession/Linear_vs_Ridge.ipynb
|
nkapchenko/Neuron
|
1b450b4785ea9e1862a78b7a89c1786478dca726
|
[
"MIT"
] | null | null | null | 256.812865 | 27,708 | 0.929568 |
[
[
[
"import numpy as np\nfrom numpy import array, exp, sqrt\nimport pandas as pd\nfrom copy import deepcopy\n\nnp.set_printoptions(precision=5)\n\n% load_ext autoreload\n% autoreload 2\n\nfrom sklearn import datasets\nfrom sklearn.linear_model import LinearRegression, Ridge\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import accuracy_score\n\nimport matplotlib.pyplot as plt\nimport matplotlib.cm as cm\nfrom neuron import vizualisation\n\nimport seaborn as sns\nsns.set_context(\"poster\")\nsns.set(rc={'figure.figsize': (16, 9.)})\nsns.set_style(\"whitegrid\")\n\nfrom neuron import network\nfrom neuron.activation_functions import linear, linear_prime",
"The autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n"
],
[
"X, y = datasets.load_boston(True)\nX_train, X_test, y_train, y_test = train_test_split(X, y.reshape(-1, 1), random_state=0)",
"_____no_output_____"
],
[
"linear = linear_model.LinearRegression()\nlinear.fit(X_train, y_train)\n\nridge = linear_model.Ridge()\nridge.fit(X_train, y_train)",
"_____no_output_____"
],
[
"for reg in [LinearRegression(), Ridge()]:\n reg.fit(X_train, y_train)\n print(reg.__class__.__name__, reg.score(X_test, y_test))",
"LinearRegression 0.6354638433202114\nRidge 0.626618220461385\n"
],
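[
"# A quick sketch (not part of the original comparison): Ridge's main knob is the\n# regularisation strength alpha, so it is worth seeing how sensitive the score is to it.\nfor alpha in (0.01, 1.0, 100.0):\n    r = Ridge(alpha=alpha).fit(X_train, y_train)\n    print(f'alpha={alpha}: test R^2 = {r.score(X_test, y_test):.4f}')",
"_____no_output_____"
],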
[
"netw = network.Network(sizes = [X_train.shape[1], 1], activation_function=linear)\nnetw.fit(X_train, y_train, epochs=20, learning_rate=0.0005)\nprint(netw)\nnetw.vizualise",
"Epoch done: \nNetwork(biases[array([[3.33566]])] \n weights[array([[9.20478]])]\n\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code"
]
] |
d0f19edcf03ff06b4fd1fb2294d0d883d4a564aa
| 2,671 |
ipynb
|
Jupyter Notebook
|
Prelim_Exam.ipynb
|
jeremayaaa/OOP-58001
|
513ba3477433d5f05c70199ed47941ad2ce2614a
|
[
"Apache-2.0"
] | null | null | null |
Prelim_Exam.ipynb
|
jeremayaaa/OOP-58001
|
513ba3477433d5f05c70199ed47941ad2ce2614a
|
[
"Apache-2.0"
] | null | null | null |
Prelim_Exam.ipynb
|
jeremayaaa/OOP-58001
|
513ba3477433d5f05c70199ed47941ad2ce2614a
|
[
"Apache-2.0"
] | null | null | null | 28.72043 | 228 | 0.47997 |
[
[
[
"<a href=\"https://colab.research.google.com/github/jeremayaaa/OOP-58001/blob/main/Prelim_Exam.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"Write a Python program that displays your full name, student number, age, and course",
"_____no_output_____"
]
],
[
[
"class Student():\n def __init__(self, fullname, studentnumber, age, course):\n self.fullname = fullname\n self.studentnumber = studentnumber\n self.age = age\n self.course = course\n\n def name(self):\n return self.fullname\n\n def number(self):\n return self.studentnumber\n\n def ageko(self):\n return self.age\n\n def courseko(self):\n return self.course \n\n def display(self):\n print(\"My Full Name is\", self.name())\n print(\"My Student Number is\", self.number())\n print(\"My Age is\", self.ageko())\n print(\"My Course is\", self.courseko())\n\nmyself = Student(\"Jeremiah Manalang\", \"202010993\", \"20\", \"Computer Engineering\")\nmyself.display()",
"My Full Name is Jeremiah Manalang\nMy Student Number is 202010993\nMy Age is 20\nMy Course is Computer Engineering\n"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
]
] |
d0f1aef2ad00e0467c0ea460584612604b4cf82f
| 56,579 |
ipynb
|
Jupyter Notebook
|
6 - Pivot Table.ipynb
|
NickScherer/example-NickScherer.github.io
|
ce619fca7905ed61d56e24203611b2619f87fd70
|
[
"MIT",
"BSD-3-Clause"
] | 1 |
2020-04-06T14:16:55.000Z
|
2020-04-06T14:16:55.000Z
|
6 - Pivot Table.ipynb
|
NickScherer/DataWrangling.github.io
|
ce619fca7905ed61d56e24203611b2619f87fd70
|
[
"BSD-3-Clause",
"MIT"
] | null | null | null |
6 - Pivot Table.ipynb
|
NickScherer/DataWrangling.github.io
|
ce619fca7905ed61d56e24203611b2619f87fd70
|
[
"BSD-3-Clause",
"MIT"
] | null | null | null | 37.419974 | 394 | 0.358596 |
[
[
[
"# 6 - Pivot Table\nIn this sixth step I'll show you how to reshape your data using a pivot table.\n\nThis will provide a nice condensed version. \n\nWe'll reshape the data so that we can see how much each customer spent in each category.",
"_____no_output_____"
]
],
[
[
"import pandas as pd \nimport numpy as np\n\ndf = pd.read_json(\"customer_data.json\", convert_dates=False)\ndf.head()",
"_____no_output_____"
]
],
[
[
"Taking a quick look using the <code>.head()</code> function, we can see all of the columns, and the first few rows of the data.\n\nFor this example, let's just use the first 50 rows of the data. ",
"_____no_output_____"
]
],
[
[
"df_subset = df[0:50]\ndf_subset",
"_____no_output_____"
]
],
[
[
"Let's take a look at the types for each column using the <code>.dtypes</code> method.",
"_____no_output_____"
]
],
[
[
"df_subset.dtypes",
"_____no_output_____"
]
],
[
[
"The amount column should be a numeric type, but Pandas thinks it's an <code>object</code>. Let's go ahead and change that column to a numeric <code>float</code> type using the <code>.astype()</code> method.",
"_____no_output_____"
]
],
[
[
"df_subset[\"amount\"] = df_subset[\"amount\"].astype(float)\ndf_subset.dtypes",
"C:\\Users\\NickScherer\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:1: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n \"\"\"Entry point for launching an IPython kernel.\n"
]
],
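[
[
"The <code>SettingWithCopyWarning</code> above appears because <code>df_subset</code> was created as a slice of <code>df</code>. A minimal sketch of one way to avoid it is to take an explicit copy when slicing (the rest of the notebook is unaffected either way):\n```python\ndf_subset = df[0:50].copy()\ndf_subset['amount'] = df_subset['amount'].astype(float)\n```",
"_____no_output_____"
]
],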
[
[
"Now we can see that the <code>amount</code> column is a numeric <code>float</code> type. \n\nWe don't need all of the columns, just the <code>customer_id</code>, <code>category</code>, and <code>amount</code> columns.\n\nHere's what that smaller dataframe would look like.",
"_____no_output_____"
]
],
[
[
"df_subset[[\"customer_id\", \"category\", \"amount\"]]",
"_____no_output_____"
]
],
[
[
"Let's finish up by creating our <code>pivot_table</code>.\n\nWe'll set the index to <code>customer_id</code>, the columns to <code>category</code>, and the values to <code>amount</code>. This will reshape the data so that we can see how much each customer spent in each category. Let's create this using a new dataframe called <code>df_pivot</code>.\n\nThe final important point before we reshape the data is the <code>aggfunc</code> parameter. Since customers probably spent multiple purchase in the same categories, we'll want to collect all of the purchase. We'll do that using Numpy's <code>sum</code> method. I've shorted the Numpy library name to <code>np</code>, so that's why I've set the <code>aggfunc</code> to <code>np.sum</code>.",
"_____no_output_____"
]
],
[
[
"# pivot table; aggregation function \"sum\"\n\ndf_pivot = df_subset.pivot_table(index=\"customer_id\", columns=\"category\", values=\"amount\", aggfunc=np.sum)\nprint(df_pivot)",
"category appliances clothing electronics house household outdoor\ncustomer_id \n100102 NaN NaN NaN 70.66 NaN NaN\n100103 NaN NaN 78.61 NaN NaN NaN\n100105 NaN NaN NaN NaN NaN 50.71\n100106 NaN NaN 183.88 NaN NaN NaN\n100109 NaN NaN NaN NaN NaN 31.79\n100111 NaN NaN NaN NaN NaN 77.28\n100116 NaN NaN NaN NaN NaN 71.07\n100118 30.55 NaN NaN NaN NaN NaN\n100120 NaN 86.29 NaN NaN NaN NaN\n100123 34.57 NaN NaN NaN NaN NaN\n100124 NaN NaN 89.93 NaN NaN NaN\n100133 NaN 23.69 NaN NaN NaN NaN\n100136 NaN 79.43 NaN NaN NaN NaN\n100140 85.23 NaN NaN 85.92 NaN NaN\n100148 NaN NaN NaN NaN 35.03 NaN\n100150 NaN 36.21 NaN NaN NaN NaN\n100151 61.99 NaN NaN NaN 41.34 NaN\n100153 NaN 96.87 87.08 NaN NaN NaN\n100158 NaN NaN NaN NaN NaN 171.74\n100159 NaN 91.98 NaN NaN NaN NaN\n100160 NaN NaN 89.36 NaN NaN NaN\n100162 70.18 76.08 NaN NaN NaN NaN\n100167 32.89 NaN NaN 76.69 NaN NaN\n100170 NaN NaN NaN NaN NaN 89.72\n100173 NaN NaN NaN NaN NaN 81.75\n100182 64.86 NaN NaN NaN NaN NaN\n100183 NaN 66.56 NaN NaN NaN NaN\n100185 75.00 78.81 54.64 NaN NaN NaN\n100186 63.54 NaN NaN NaN NaN NaN\n100188 NaN NaN NaN NaN 53.80 95.85\n100191 NaN 93.44 NaN NaN 24.64 NaN\n100192 85.00 NaN NaN NaN NaN NaN\n100193 97.59 NaN NaN NaN NaN NaN\n100196 NaN 27.37 NaN NaN NaN NaN\n100198 NaN 76.43 NaN NaN NaN NaN\n100199 NaN 35.00 NaN NaN NaN NaN\n"
]
],
[
[
"Now we have a new dataframe showing how much each customer spent in each category. \n\nThere's a lot of <code>NaN</code> values because a lot of customers didn't spend any money in certain categories.\n\nYou should also note that there's a <code>house</code> and <code>household</code> column. We need to clean the data so that we have consistent strings before we reshape it. Look back at <strong>Step 3 - Consistent Strings</strong> to help you with that.",
"_____no_output_____"
]
],
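[
[
"A minimal cleanup sketch (assuming <code>household</code> is the label we want to keep) would normalise the category strings and fill the missing values before pivoting:\n```python\ndf_subset['category'] = df_subset['category'].replace('house', 'household')\n\ndf_pivot = df_subset.pivot_table(index='customer_id', columns='category',\n                                 values='amount', aggfunc=np.sum, fill_value=0)\n```",
"_____no_output_____"
]
]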
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0f1b1613f64531903c0cbe438b6e5c1cedaf6de
| 23,838 |
ipynb
|
Jupyter Notebook
|
nbs/70_callback.wandb.ipynb
|
tyoc213-contrib/fastai2
|
d1b93db7abc83b4f96d31d3740fca1712878e750
|
[
"Apache-2.0"
] | null | null | null |
nbs/70_callback.wandb.ipynb
|
tyoc213-contrib/fastai2
|
d1b93db7abc83b4f96d31d3740fca1712878e750
|
[
"Apache-2.0"
] | null | null | null |
nbs/70_callback.wandb.ipynb
|
tyoc213-contrib/fastai2
|
d1b93db7abc83b4f96d31d3740fca1712878e750
|
[
"Apache-2.0"
] | null | null | null | 37.898251 | 296 | 0.557387 |
[
[
[
"#export\nfrom fastai.basics import *\nfrom fastai.callback.progress import *\nfrom fastai.text.data import TensorText\nfrom fastai.tabular.all import TabularDataLoaders, Tabular\nfrom fastai.callback.hook import total_params",
"_____no_output_____"
],
[
"#hide\nfrom nbdev.showdoc import *",
"_____no_output_____"
],
[
"#default_exp callback.wandb",
"_____no_output_____"
]
],
[
[
"# Wandb\n\n> Integration with [Weights & Biases](https://docs.wandb.com/library/integrations/fastai) ",
"_____no_output_____"
],
[
"First thing first, you need to install wandb with\n```\npip install wandb\n```\nCreate a free account then run \n``` \nwandb login\n```\nin your terminal. Follow the link to get an API token that you will need to paste, then you're all set!",
"_____no_output_____"
]
],
[
[
"#export\nimport wandb\nfrom wandb.wandb_config import ConfigError",
"_____no_output_____"
],
[
"#export\nclass WandbCallback(Callback):\n \"Saves model topology, losses & metrics\"\n toward_end,remove_on_fetch,run_after = True,True,FetchPredsCallback\n # Record if watch has been called previously (even in another instance)\n _wandb_watch_called = False\n\n def __init__(self, log=\"gradients\", log_preds=True, log_model=True, log_dataset=False, dataset_name=None, valid_dl=None, n_preds=36, seed=12345):\n # Check if wandb.init has been called\n if wandb.run is None:\n raise ValueError('You must call wandb.init() before WandbCallback()')\n # W&B log step\n self._wandb_step = wandb.run.step - 1 # -1 except if the run has previously logged data (incremented at each batch)\n self._wandb_epoch = 0 if not(wandb.run.step) else math.ceil(wandb.run.summary['epoch']) # continue to next epoch\n store_attr(self, 'log,log_preds,log_model,log_dataset,dataset_name,valid_dl,n_preds,seed')\n\n def before_fit(self):\n \"Call watch method to log model topology, gradients & weights\"\n self.run = not hasattr(self.learn, 'lr_finder') and not hasattr(self, \"gather_preds\") and rank_distrib()==0\n if not self.run: return\n\n # Log config parameters\n log_config = self.learn.gather_args()\n _format_config(log_config)\n try:\n wandb.config.update(log_config, allow_val_change=True)\n except Exception as e:\n print(f'WandbCallback could not log config parameters -> {e}')\n\n if not WandbCallback._wandb_watch_called:\n WandbCallback._wandb_watch_called = True\n # Logs model topology and optionally gradients and weights\n wandb.watch(self.learn.model, log=self.log)\n\n # log dataset\n assert isinstance(self.log_dataset, (str, Path, bool)), 'log_dataset must be a path or a boolean'\n if self.log_dataset is True:\n if Path(self.dls.path) == Path('.'):\n print('WandbCallback could not retrieve the dataset path, please provide it explicitly to \"log_dataset\"')\n self.log_dataset = False\n else:\n self.log_dataset = self.dls.path\n if self.log_dataset:\n self.log_dataset = Path(self.log_dataset)\n assert self.log_dataset.is_dir(), f'log_dataset must be a valid directory: {self.log_dataset}'\n metadata = {'path relative to learner': os.path.relpath(self.log_dataset, self.learn.path)}\n log_dataset(path=self.log_dataset, name=self.dataset_name, metadata=metadata)\n\n # log model\n if self.log_model and not hasattr(self, 'save_model'):\n print('WandbCallback requires use of \"SaveModelCallback\" to log best model')\n self.log_model = False\n\n if self.log_preds:\n try:\n if not self.valid_dl:\n #Initializes the batch watched\n wandbRandom = random.Random(self.seed) # For repeatability\n self.n_preds = min(self.n_preds, len(self.dls.valid_ds))\n idxs = wandbRandom.sample(range(len(self.dls.valid_ds)), self.n_preds)\n if isinstance(self.dls, TabularDataLoaders):\n test_items = getattr(self.dls.valid_ds.items, 'iloc', self.dls.valid_ds.items)[idxs]\n self.valid_dl = self.dls.test_dl(test_items, with_labels=True, process=False)\n else:\n test_items = [getattr(self.dls.valid_ds.items, 'iloc', self.dls.valid_ds.items)[i] for i in idxs]\n self.valid_dl = self.dls.test_dl(test_items, with_labels=True)\n self.learn.add_cb(FetchPredsCallback(dl=self.valid_dl, with_input=True, with_decoded=True))\n except Exception as e:\n self.log_preds = False\n print(f'WandbCallback was not able to prepare a DataLoader for logging prediction samples -> {e}')\n\n def after_batch(self):\n \"Log hyper-parameters and training loss\"\n if self.training:\n self._wandb_step += 1\n self._wandb_epoch += 1/self.n_iter\n hypers = {f'{k}_{i}':v for i,h in 
enumerate(self.opt.hypers) for k,v in h.items()}\n wandb.log({'epoch': self._wandb_epoch, 'train_loss': self.smooth_loss, 'raw_loss': self.loss, **hypers}, step=self._wandb_step)\n\n def after_epoch(self):\n \"Log validation loss and custom metrics & log prediction samples\"\n # Correct any epoch rounding error and overwrite value\n self._wandb_epoch = round(self._wandb_epoch)\n wandb.log({'epoch': self._wandb_epoch}, step=self._wandb_step)\n # Log sample predictions\n if self.log_preds:\n try:\n inp,preds,targs,out = self.learn.fetch_preds.preds\n b = tuplify(inp) + tuplify(targs)\n x,y,its,outs = self.valid_dl.show_results(b, out, show=False, max_n=self.n_preds)\n wandb.log(wandb_process(x, y, its, outs), step=self._wandb_step)\n except Exception as e:\n self.log_preds = False\n print(f'WandbCallback was not able to get prediction samples -> {e}')\n wandb.log({n:s for n,s in zip(self.recorder.metric_names, self.recorder.log) if n not in ['train_loss', 'epoch', 'time']}, step=self._wandb_step)\n\n def after_fit(self):\n if self.log_model:\n if self.save_model.last_saved_path is None:\n print('WandbCallback could not retrieve a model to upload')\n else:\n metadata = {n:s for n,s in zip(self.recorder.metric_names, self.recorder.log) if n not in ['train_loss', 'epoch', 'time']}\n log_model(self.save_model.last_saved_path, metadata=metadata) \n self.run = True\n if self.log_preds: self.remove_cb(FetchPredsCallback)\n wandb.log({}) # ensure sync of last step",
"_____no_output_____"
]
],
[
[
"Optionally logs weights and or gradients depending on `log` (can be \"gradients\", \"parameters\", \"all\" or None), sample predictions if ` log_preds=True` that will come from `valid_dl` or a random sample pf the validation set (determined by `seed`). `n_preds` are logged in this case.\n\nIf used in combination with `SaveModelCallback`, the best model is saved as well (can be deactivated with `log_model=False`).\n\nDatasets can also be tracked:\n* if `log_dataset` is `True`, tracked folder is retrieved from `learn.dls.path`\n* `log_dataset` can explicitly be set to the folder to track\n* the name of the dataset can explicitly be given through `dataset_name`, otherwise it is set to the folder name\n* *Note: the subfolder \"models\" is always ignored*\n\nFor custom scenarios, you can also manually use functions `log_dataset` and `log_model` to respectively log your own datasets and models.",
"_____no_output_____"
]
],
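[
[
"For instance (a sketch only -- `path`, `dls` and the architecture are placeholders), the options above might be combined like this:\n```\nfrom fastai.callback.wandb import *\nfrom fastai.callback.tracker import SaveModelCallback\n\nlearn = cnn_learner(dls, resnet34,\n                    cbs=[SaveModelCallback(),\n                         WandbCallback(log='all', log_preds=True, n_preds=16,\n                                       log_model=True, log_dataset=path, dataset_name='pets')])\n```",
"_____no_output_____"
]
],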
[
[
"#export\ndef _make_plt(img):\n \"Make plot to image resolution\"\n # from https://stackoverflow.com/a/13714915\n my_dpi = 100\n fig = plt.figure(frameon=False, dpi=my_dpi)\n h, w = img.shape[:2]\n fig.set_size_inches(w / my_dpi, h / my_dpi)\n ax = plt.Axes(fig, [0., 0., 1., 1.])\n ax.set_axis_off()\n fig.add_axes(ax)\n return fig, ax",
"_____no_output_____"
],
[
"#export\ndef _format_config(log_config):\n \"Format config parameters before logging them\"\n for k,v in log_config.items():\n if callable(v):\n if hasattr(v,'__qualname__') and hasattr(v,'__module__'): log_config[k] = f'{v.__module__}.{v.__qualname__}'\n else: log_config[k] = str(v)\n if isinstance(v, slice): log_config[k] = dict(slice_start=v.start, slice_step=v.step, slice_stop=v.stop)",
"_____no_output_____"
],
[
"#export\ndef _format_metadata(metadata):\n \"Format metadata associated to artifacts\"\n for k,v in metadata.items(): metadata[k] = str(v)",
"_____no_output_____"
],
[
"#export\ndef log_dataset(path, name=None, metadata={}):\n \"Log dataset folder\"\n # Check if wandb.init has been called in case datasets are logged manually\n if wandb.run is None:\n raise ValueError('You must call wandb.init() before log_dataset()')\n path = Path(path)\n if not path.is_dir():\n raise f'path must be a valid directory: {path}'\n name = ifnone(name, path.name)\n _format_metadata(metadata)\n artifact_dataset = wandb.Artifact(name=name, type='dataset', description='raw dataset', metadata=metadata)\n # log everything except \"models\" folder\n for p in path.ls():\n if p.is_dir():\n if p.name != 'models': artifact_dataset.add_dir(str(p.resolve()), name=p.name)\n else: artifact_dataset.add_file(str(p.resolve()))\n wandb.run.use_artifact(artifact_dataset)",
"_____no_output_____"
],
[
"#export\ndef log_model(path, name=None, metadata={}):\n \"Log model file\"\n if wandb.run is None:\n raise ValueError('You must call wandb.init() before log_model()')\n path = Path(path)\n if not path.is_file():\n raise f'path must be a valid file: {path}'\n name = ifnone(name, f'run-{wandb.run.id}-model')\n _format_metadata(metadata) \n artifact_model = wandb.Artifact(name=name, type='model', description='trained model', metadata=metadata)\n artifact_model.add_file(str(path.resolve()))\n wandb.run.log_artifact(artifact_model)",
"_____no_output_____"
],
[
"#export\n@typedispatch\ndef wandb_process(x:TensorImage, y, samples, outs):\n \"Process `sample` and `out` depending on the type of `x/y`\"\n res_input, res_pred, res_label = [],[],[]\n for s,o in zip(samples, outs):\n img = s[0].permute(1,2,0)\n res_input.append(wandb.Image(img, caption='Input data'))\n for t, capt, res in ((o[0], \"Prediction\", res_pred), (s[1], \"Ground Truth\", res_label)):\n fig, ax = _make_plt(img)\n # Superimpose label or prediction to input image\n ax = img.show(ctx=ax)\n ax = t.show(ctx=ax)\n res.append(wandb.Image(fig, caption=capt))\n plt.close(fig)\n return {\"Inputs\":res_input, \"Predictions\":res_pred, \"Ground Truth\":res_label}",
"_____no_output_____"
],
[
"#export\n@typedispatch\ndef wandb_process(x:TensorImage, y:(TensorCategory,TensorMultiCategory), samples, outs):\n return {\"Prediction Samples\": [wandb.Image(s[0].permute(1,2,0), caption=f'Ground Truth: {s[1]}\\nPrediction: {o[0]}')\n for s,o in zip(samples,outs)]}",
"_____no_output_____"
],
[
"#export\n@typedispatch\ndef wandb_process(x:TensorImage, y:TensorMask, samples, outs):\n res = []\n class_labels = {i:f'{c}' for i,c in enumerate(y.get_meta('codes'))} if y.get_meta('codes') is not None else None\n for s,o in zip(samples, outs):\n img = s[0].permute(1,2,0)\n masks = {}\n for t, capt in ((o[0], \"Prediction\"), (s[1], \"Ground Truth\")):\n masks[capt] = {'mask_data':t.numpy().astype(np.uint8)}\n if class_labels: masks[capt]['class_labels'] = class_labels\n res.append(wandb.Image(img, masks=masks))\n return {\"Prediction Samples\":res}",
"_____no_output_____"
],
[
"#export\n@typedispatch\ndef wandb_process(x:TensorText, y:(TensorCategory,TensorMultiCategory), samples, outs):\n data = [[s[0], s[1], o[0]] for s,o in zip(samples,outs)]\n return {\"Prediction Samples\": wandb.Table(data=data, columns=[\"Text\", \"Target\", \"Prediction\"])}",
"_____no_output_____"
],
[
"#export\n@typedispatch\ndef wandb_process(x:Tabular, y:Tabular, samples, outs):\n df = x.all_cols\n for n in x.y_names: df[n+'_pred'] = y[n].values\n return {\"Prediction Samples\": wandb.Table(dataframe=df)}",
"_____no_output_____"
]
],
[
[
"## Example of use:\n\nOnce your have defined your `Learner`, before you call to `fit` or `fit_one_cycle`, you need to initialize wandb:\n```\nimport wandb\nwandb.init()\n```\nTo use Weights & Biases without an account, you can call `wandb.init(anonymous='allow')`.\n\nThen you add the callback to your `learner` or call to `fit` methods, potentially with `SaveModelCallback` if you want to save the best model:\n```\nfrom fastai.callback.wandb import *\n\n# To log only during one training phase\nlearn.fit(..., cbs=WandbCallback())\n\n# To log continuously for all training phases\nlearn = learner(..., cbs=WandbCallback())\n```\nDatasets and models can be tracked through the callback or directly through `log_model` and `log_dataset` functions.\n\nFor more details, refer to [W&B documentation](https://docs.wandb.com/library/integrations/fastai).",
"_____no_output_____"
]
],
[
[
"#hide\n#slow\nfrom fastai.vision.all import *\n\npath = untar_data(URLs.MNIST_TINY)\nitems = get_image_files(path)\ntds = Datasets(items, [PILImageBW.create, [parent_label, Categorize()]], splits=GrandparentSplitter()(items))\ndls = tds.dataloaders(after_item=[ToTensor(), IntToFloatTensor()])\n\nos.environ['WANDB_MODE'] = 'dryrun' # run offline\nwandb.init(anonymous='allow')\nlearn = cnn_learner(dls, resnet18, loss_func=CrossEntropyLossFlat(), cbs=WandbCallback(log_model=False))\nlearn.fit(1)\n\n# add more data from a new learner on same run\nlearn = cnn_learner(dls, resnet18, loss_func=CrossEntropyLossFlat(), cbs=WandbCallback(log_model=False))\nlearn.fit(1, lr=slice(0.05))",
"_____no_output_____"
],
[
"#export\n_all_ = ['wandb_process']",
"_____no_output_____"
]
],
[
[
"## Export -",
"_____no_output_____"
]
],
[
[
"#hide\nfrom nbdev.export import *\nnotebook2script()",
"Converted 00_torch_core.ipynb.\nConverted 01_layers.ipynb.\nConverted 02_data.load.ipynb.\nConverted 03_data.core.ipynb.\nConverted 04_data.external.ipynb.\nConverted 05_data.transforms.ipynb.\nConverted 06_data.block.ipynb.\nConverted 07_vision.core.ipynb.\nConverted 08_vision.data.ipynb.\nConverted 09_vision.augment.ipynb.\nConverted 09b_vision.utils.ipynb.\nConverted 09c_vision.widgets.ipynb.\nConverted 10_tutorial.pets.ipynb.\nConverted 11_vision.models.xresnet.ipynb.\nConverted 12_optimizer.ipynb.\nConverted 13_callback.core.ipynb.\nConverted 13a_learner.ipynb.\nConverted 13b_metrics.ipynb.\nConverted 14_callback.schedule.ipynb.\nConverted 14a_callback.data.ipynb.\nConverted 15_callback.hook.ipynb.\nConverted 15a_vision.models.unet.ipynb.\nConverted 16_callback.progress.ipynb.\nConverted 17_callback.tracker.ipynb.\nConverted 18_callback.fp16.ipynb.\nConverted 18a_callback.training.ipynb.\nConverted 19_callback.mixup.ipynb.\nConverted 20_interpret.ipynb.\nConverted 20a_distributed.ipynb.\nConverted 21_vision.learner.ipynb.\nConverted 22_tutorial.imagenette.ipynb.\nConverted 23_tutorial.vision.ipynb.\nConverted 24_tutorial.siamese.ipynb.\nConverted 24_vision.gan.ipynb.\nConverted 30_text.core.ipynb.\nConverted 31_text.data.ipynb.\nConverted 32_text.models.awdlstm.ipynb.\nConverted 33_text.models.core.ipynb.\nConverted 34_callback.rnn.ipynb.\nConverted 35_tutorial.wikitext.ipynb.\nConverted 36_text.models.qrnn.ipynb.\nConverted 37_text.learner.ipynb.\nConverted 38_tutorial.text.ipynb.\nConverted 39_tutorial.transformers.ipynb.\nConverted 40_tabular.core.ipynb.\nConverted 41_tabular.data.ipynb.\nConverted 42_tabular.model.ipynb.\nConverted 43_tabular.learner.ipynb.\nConverted 44_tutorial.tabular.ipynb.\nConverted 45_collab.ipynb.\nConverted 46_tutorial.collab.ipynb.\nConverted 50_tutorial.datablock.ipynb.\nConverted 60_medical.imaging.ipynb.\nConverted 61_tutorial.medical_imaging.ipynb.\nConverted 65_medical.text.ipynb.\nConverted 70_callback.wandb.ipynb.\nConverted 71_callback.tensorboard.ipynb.\nConverted 72_callback.neptune.ipynb.\nConverted 73_callback.captum.ipynb.\nConverted 74_callback.cutmix.ipynb.\nConverted 97_test_utils.ipynb.\nConverted 99_pytorch_doc.ipynb.\nConverted index.ipynb.\nConverted tutorial.ipynb.\n"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0f1b366b90835ec040d25a56daaf99932960d59
| 813,294 |
ipynb
|
Jupyter Notebook
|
Payment_Fraud_Detection/Fraud_Detection_Exercise.ipynb
|
simonmijares/ML-case-studies
|
eac679c0bfe74891692dc01675d2d14b357e0747
|
[
"MIT"
] | null | null | null |
Payment_Fraud_Detection/Fraud_Detection_Exercise.ipynb
|
simonmijares/ML-case-studies
|
eac679c0bfe74891692dc01675d2d14b357e0747
|
[
"MIT"
] | null | null | null |
Payment_Fraud_Detection/Fraud_Detection_Exercise.ipynb
|
simonmijares/ML-case-studies
|
eac679c0bfe74891692dc01675d2d14b357e0747
|
[
"MIT"
] | null | null | null | 157.219022 | 1,528 | 0.631366 |
[
[
[
"# Detecting Payment Card Fraud\n\nIn this section, we'll look at a credit card fraud detection dataset, and build a binary classification model that can identify transactions as either fraudulent or valid, based on provided, *historical* data. In a [2016 study](https://nilsonreport.com/upload/content_promo/The_Nilson_Report_10-17-2016.pdf), it was estimated that credit card fraud was responsible for over 20 billion dollars in loss, worldwide. Accurately detecting cases of fraud is an ongoing area of research.\n\n<img src=notebook_ims/fraud_detection.png width=50% />\n\n### Labeled Data\n\nThe payment fraud data set (Dal Pozzolo et al. 2015) was downloaded from [Kaggle](https://www.kaggle.com/mlg-ulb/creditcardfraud/data). This has features and labels for thousands of credit card transactions, each of which is labeled as fraudulent or valid. In this notebook, we'd like to train a model based on the features of these transactions so that we can predict risky or fraudulent transactions in the future.\n\n### Binary Classification\n\nSince we have true labels to aim for, we'll take a **supervised learning** approach and train a binary classifier to sort data into one of our two transaction classes: fraudulent or valid. We'll train a model on training data and see how well it generalizes on some test data.\n\nThe notebook will be broken down into a few steps:\n* Loading and exploring the data\n* Splitting the data into train/test sets\n* Defining and training a LinearLearner, binary classifier\n* Making improvements on the model\n* Evaluating and comparing model test performance\n\n### Making Improvements\n\nA lot of this notebook will focus on making improvements, as discussed in [this SageMaker blog post](https://aws.amazon.com/blogs/machine-learning/train-faster-more-flexible-models-with-amazon-sagemaker-linear-learner/). Specifically, we'll address techniques for:\n\n1. **Tuning a model's hyperparameters** and aiming for a specific metric, such as high recall or precision.\n2. **Managing class imbalance**, which is when we have many more training examples in one class than another (in this case, many more valid transactions than fraudulent).\n\n---",
"_____no_output_____"
],
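[
"As a preview of those improvements (a sketch only -- the specific values are placeholders, and `role` is only defined a few cells below), LinearLearner exposes hyperparameters for both ideas:\n```python\nlinear_tuned = LinearLearner(role=role,\n                             instance_count=1,\n                             instance_type='ml.c4.xlarge',\n                             predictor_type='binary_classifier',\n                             # 1. aim for a specific metric, e.g. high recall\n                             binary_classifier_model_selection_criteria='precision_at_target_recall',\n                             target_recall=0.9,\n                             # 2. manage class imbalance by re-weighting the rare (fraudulent) class\n                             positive_example_weight_mult='balanced')\n```",
"_____no_output_____"
],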
[
"First, import the usual resources.",
"_____no_output_____"
]
],
[
[
"import io\nimport os\nimport matplotlib.pyplot as plt\nimport numpy as np \nimport pandas as pd \n\nimport boto3\nimport sagemaker\nfrom sagemaker import get_execution_role\n\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"I'm storing my **SageMaker variables** in the next cell:\n* sagemaker_session: The SageMaker session we'll use for training models.\n* bucket: The name of the default S3 bucket that we'll use for data storage.\n* role: The IAM role that defines our data and model permissions.",
"_____no_output_____"
]
],
[
[
"# sagemaker session, role\nsagemaker_session = sagemaker.Session()\nrole = sagemaker.get_execution_role()\n\n# S3 bucket name\nbucket = sagemaker_session.default_bucket()\n",
"_____no_output_____"
]
],
[
[
"## Loading and Exploring the Data\n\nNext, I am loading the data and unzipping the data in the file `creditcardfraud.zip`. This directory will hold one csv file of all the transaction data, `creditcard.csv`.\n\nAs in previous notebooks, it's important to look at the distribution of data since this will inform how we develop a fraud detection model. We'll want to know: How many data points we have to work with, the number and type of features, and finally, the distribution of data over the classes (valid or fraudulent).",
"_____no_output_____"
]
],
[
[
"# only have to run once\n!wget https://s3.amazonaws.com/video.udacity-data.com/topher/2019/January/5c534768_creditcardfraud/creditcardfraud.zip\n!unzip creditcardfraud\n",
"--2021-03-15 22:11:08-- https://s3.amazonaws.com/video.udacity-data.com/topher/2019/January/5c534768_creditcardfraud/creditcardfraud.zip\nResolving s3.amazonaws.com (s3.amazonaws.com)... 52.216.107.142\nConnecting to s3.amazonaws.com (s3.amazonaws.com)|52.216.107.142|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 69155632 (66M) [application/zip]\nSaving to: ‘creditcardfraud.zip’\n\ncreditcardfraud.zip 100%[===================>] 65.95M 40.8MB/s in 1.6s \n\n2021-03-15 22:11:10 (40.8 MB/s) - ‘creditcardfraud.zip’ saved [69155632/69155632]\n\nArchive: creditcardfraud.zip\n inflating: creditcard.csv \n"
],
[
"# read in the csv file\nlocal_data = 'creditcard.csv'\n\n# print out some data\ntransaction_df = pd.read_csv(local_data)\nprint('Data shape (rows, cols): ', transaction_df.shape)\nprint()\ntransaction_df.head()",
"Data shape (rows, cols): (284807, 31)\n\n"
]
],
[
[
"### EXERCISE: Calculate the percentage of fraudulent data\n\nTake a look at the distribution of this transaction data over the classes, valid and fraudulent. \n\nComplete the function `fraudulent_percentage`, below. Count up the number of data points in each class and calculate the *percentage* of the data points that are fraudulent.",
"_____no_output_____"
]
],
[
[
"transaction_df[transaction_df['Class']==1].count()[0]",
"_____no_output_____"
],
[
"# Calculate the fraction of data points that are fraudulent\ndef fraudulent_percentage(transaction_df):\n '''Calculate the fraction of all data points that have a 'Class' label of 1; fraudulent.\n :param transaction_df: Dataframe of all transaction data points; has a column 'Class'\n :return: A fractional percentage of fraudulent data points/all points\n '''\n \n # your code here\n \n # pass\n \n # Solution:\n frauds = transaction_df[transaction_df['Class']==1].count()[0]\n total = transaction_df.count()[0]\n return frauds/total\n",
"_____no_output_____"
]
],
[
[
"Test out your code by calling your function and printing the result.",
"_____no_output_____"
]
],
[
[
"# call the function to calculate the fraud percentage\nfraud_percentage = fraudulent_percentage(transaction_df)\n\nprint('Fraudulent percentage = ', fraud_percentage)\nprint('Total # of fraudulent pts: ', fraud_percentage*transaction_df.shape[0])\nprint('Out of (total) pts: ', transaction_df.shape[0])\n",
"Fraudulent percentage = 0.001727485630620034\nTotal # of fraudulent pts: 492.0\nOut of (total) pts: 284807\n"
]
],
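[
[
"# A quick sketch of how imbalanced the classes are: roughly how many valid\n# transactions there are for every fraudulent one (about 578 to 1).\nprint('valid : fraudulent ratio ~ {:.0f} : 1'.format((1 - fraud_percentage) / fraud_percentage))",
"_____no_output_____"
]
],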
[
[
"### EXERCISE: Split into train/test datasets\n\nIn this example, we'll want to evaluate the performance of a fraud classifier; training it on some training data and testing it on *test data* that it did not see during the training process. So, we'll need to split the data into separate training and test sets.\n\nComplete the `train_test_split` function, below. This function should:\n* Shuffle the transaction data, randomly\n* Split it into two sets according to the parameter `train_frac`\n* Get train/test features and labels\n* Return the tuples: (train_features, train_labels), (test_features, test_labels)",
"_____no_output_____"
]
],
[
[
"from sklearn import model_selection\n# split into train/test\ndef train_test_split(transaction_df, train_frac= 0.7, seed=1):\n '''Shuffle the data and randomly split into train and test sets;\n separate the class labels (the column in transaction_df) from the features.\n :param df: Dataframe of all credit card transaction data\n :param train_frac: The decimal fraction of data that should be training data\n :param seed: Random seed for shuffling and reproducibility, default = 1\n :return: Two tuples (in order): (train_features, train_labels), (test_features, test_labels)\n '''\n \n # shuffle and split the data\n [train_features, test_features, train_labels, test_labels] = model_selection.train_test_split(transaction_df.drop('Class',axis=1).astype('float32').to_numpy(), transaction_df['Class'].astype('float32').to_numpy(), test_size=1-train_frac, random_state=seed)\n \n return (train_features, train_labels), (test_features, test_labels)\n",
"_____no_output_____"
]
],
[
[
"### Test Cell\n\nIn the cells below, I'm creating the train/test data and checking to see that result makes sense. The tests below test that the above function splits the data into the expected number of points and that the labels are indeed, class labels (0, 1).",
"_____no_output_____"
]
],
[
[
"# get train/test data\n(train_features, train_labels), (test_features, test_labels) = train_test_split(transaction_df, train_frac=0.7)",
"_____no_output_____"
],
[
"train_labels",
"_____no_output_____"
],
[
"# manual test\n\n# for a split of 0.7:0.3 there should be ~2.33x as many training as test pts\nprint('Training data pts: ', len(train_features))\nprint('Test data pts: ', len(test_features))\nprint()\n\n# take a look at first item and see that it aligns with first row of data\nprint('First item: \\n', train_features[0])\nprint('Label: ', train_labels[0])\nprint()\n\n# test split\nassert len(train_features) > 2.333*len(test_features), \\\n 'Unexpected number of train/test points for a train_frac=0.7'\n# test labels\nassert np.all(train_labels)== 0 or np.all(train_labels)== 1, \\\n 'Train labels should be 0s or 1s.'\nassert np.all(test_labels)== 0 or np.all(test_labels)== 1, \\\n 'Test labels should be 0s or 1s.'\nprint('Tests passed!')",
"Training data pts: 199364\nTest data pts: 85443\n\nFirst item: \n [ 1.2912400e+05 -1.9007544e-01 2.0332274e-01 -9.9623245e-01\n -1.5969852e+00 3.1925786e+00 3.3569350e+00 2.8829932e-01\n 8.9500278e-01 -3.3002400e-01 -6.4690042e-01 -2.8278485e-01\n 2.3240909e-02 -4.2109159e-01 3.6824697e-01 -1.5846483e-01\n -3.3475593e-01 -3.3717993e-01 -8.4203237e-01 -3.3012636e-02\n -7.4960589e-03 -1.7059891e-01 -6.1972356e-01 3.9651293e-02\n 7.0680517e-01 -1.6086972e-01 2.7482465e-01 -1.0541413e-02\n 2.2199158e-02 1.4370000e+01]\nLabel: 0.0\n\nTests passed!\n"
]
],
[
[
"---\n# Modeling\n\nNow that you've uploaded your training data, it's time to define and train a model!\n\nIn this notebook, you'll define and train the SageMaker, built-in algorithm, [LinearLearner](https://sagemaker.readthedocs.io/en/stable/linear_learner.html). \n\nA LinearLearner has two main applications:\n1. For regression tasks in which a linear line is fit to some data points, and you want to produce a predicted output value given some data point (example: predicting house prices given square area).\n2. For binary classification, in which a line is separating two classes of data and effectively outputs labels; either 1 for data that falls above the line or 0 for points that fall on or below the line.\n\n<img src='notebook_ims/linear_separator.png' width=50% />\n\nIn this case, we'll be using it for case 2, and we'll train it to separate data into our two classes: valid or fraudulent. ",
"_____no_output_____"
],
[
"### EXERCISE: Create a LinearLearner Estimator\n\nYou've had some practice instantiating built-in models in SageMaker. All estimators require some constructor arguments to be passed in. See if you can complete this task, instantiating a LinearLearner estimator, using only the [LinearLearner documentation](https://sagemaker.readthedocs.io/en/stable/linear_learner.html) as a resource. This takes in a lot of arguments, but not all are required. My suggestion is to start with a simple model, utilizing default values where applicable. Later, we will discuss some specific hyperparameters and their use cases.\n\n#### Instance Types\n\nIt is suggested that you use instances that are available in the free tier of usage: `'ml.c4.xlarge'` for training and `'ml.t2.medium'` for deployment.",
"_____no_output_____"
]
],
[
[
"# import LinearLearner\nfrom sagemaker import LinearLearner\n\n# instantiate LinearLearner\npredictor = LinearLearner(role=role,\n instance_count=1,\n instance_type='ml.c4.xlarge',\n predictor_type = 'binary_classifier',\n )",
"_____no_output_____"
]
],
[
[
"### EXERCISE: Convert data into a RecordSet format\n\nNext, prepare the data for a built-in model by converting the train features and labels into numpy array's of float values. Then you can use the [record_set function](https://sagemaker.readthedocs.io/en/stable/linear_learner.html#sagemaker.LinearLearner.record_set) to format the data as a RecordSet and prepare it for training!",
"_____no_output_____"
]
],
[
[
"# create RecordSet of training data\nformatted_train_data = predictor.record_set(train=train_features, labels=train_labels, channel='train')",
"_____no_output_____"
]
],
[
[
"### EXERCISE: Train the Estimator\n\nAfter instantiating your estimator, train it with a call to `.fit()`, passing in the formatted training data.",
"_____no_output_____"
]
],
[
[
"%%time \n# train the estimator on formatted training data\npredictor.fit(formatted_train_data)",
"Defaulting to the only supported framework/algorithm version: 1. Ignoring framework/algorithm version: 1.\nDefaulting to the only supported framework/algorithm version: 1. Ignoring framework/algorithm version: 1.\n"
]
],
[
[
"### EXERCISE: Deploy the trained model\n\nDeploy your model to create a predictor. We'll use this to make predictions on our test data and evaluate the model.",
"_____no_output_____"
]
],
[
[
"%%time \n# deploy and create a predictor\nlinear_predictor = predictor.deploy(initial_instance_count=1, instance_type='ml.t2.medium')",
"Defaulting to the only supported framework/algorithm version: 1. Ignoring framework/algorithm version: 1.\n"
]
],
[
[
"---\n# Evaluating Your Model\n\nOnce your model is deployed, you can see how it performs when applied to the test data.\n\nAccording to the deployed [predictor documentation](https://sagemaker.readthedocs.io/en/stable/linear_learner.html#sagemaker.LinearLearnerPredictor), this predictor expects an `ndarray` of input features and returns a list of Records.\n> \"The prediction is stored in the \"predicted_label\" key of the `Record.label` field.\"\n\nLet's first test our model on just one test point, to see the resulting list.",
"_____no_output_____"
]
],
[
[
"# test one prediction\ntest_x_np = test_features.astype('float32')\nresult = linear_predictor.predict(test_x_np[0])\n\nprint(result)",
"[label {\n key: \"predicted_label\"\n value {\n float32_tensor {\n values: 0.0\n }\n }\n}\nlabel {\n key: \"score\"\n value {\n float32_tensor {\n values: 0.0002751190622802824\n }\n }\n}\n]\n"
]
],
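[
[
"# A minimal sketch of reading plain values out of the returned Record, assuming `result`\n# still holds the single-point prediction from the cell above. The 'predicted_label' and\n# 'score' keys match the printed output and the way the evaluation helper below reads\n# predictions.\nrecord = result[0]\npredicted_label = record.label['predicted_label'].float32_tensor.values[0]\nscore = record.label['score'].float32_tensor.values[0]\nprint('Predicted label: ', predicted_label)\nprint('Score: ', score)",
"_____no_output_____"
]
],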
[
[
"### Helper function for evaluation\n\n\nThe provided function below, takes in a deployed predictor, some test features and labels, and returns a dictionary of metrics; calculating false negatives and positives as well as recall, precision, and accuracy.",
"_____no_output_____"
]
],
[
[
"# code to evaluate the endpoint on test data\n# returns a variety of model metrics\ndef evaluate(predictor, test_features, test_labels, verbose=True):\n \"\"\"\n Evaluate a model on a test set given the prediction endpoint. \n Return binary classification metrics.\n :param predictor: A prediction endpoint\n :param test_features: Test features\n :param test_labels: Class labels for test data\n :param verbose: If True, prints a table of all performance metrics\n :return: A dictionary of performance metrics.\n \"\"\"\n \n # We have a lot of test data, so we'll split it into batches of 100\n # split the test data set into batches and evaluate using prediction endpoint \n prediction_batches = [predictor.predict(batch) for batch in np.array_split(test_features, 100)]\n \n # LinearLearner produces a `predicted_label` for each data point in a batch\n # get the 'predicted_label' for every point in a batch\n test_preds = np.concatenate([np.array([x.label['predicted_label'].float32_tensor.values[0] for x in batch]) \n for batch in prediction_batches])\n \n # calculate true positives, false positives, true negatives, false negatives\n tp = np.logical_and(test_labels, test_preds).sum()\n fp = np.logical_and(1-test_labels, test_preds).sum()\n tn = np.logical_and(1-test_labels, 1-test_preds).sum()\n fn = np.logical_and(test_labels, 1-test_preds).sum()\n \n # calculate binary classification metrics\n recall = tp / (tp + fn)\n precision = tp / (tp + fp)\n accuracy = (tp + tn) / (tp + fp + tn + fn)\n \n # printing a table of metrics\n if verbose:\n print(pd.crosstab(test_labels, test_preds, rownames=['actual (row)'], colnames=['prediction (col)']))\n print(\"\\n{:<11} {:.3f}\".format('Recall:', recall))\n print(\"{:<11} {:.3f}\".format('Precision:', precision))\n print(\"{:<11} {:.3f}\".format('Accuracy:', accuracy))\n print()\n \n return {'TP': tp, 'FP': fp, 'FN': fn, 'TN': tn, \n 'Precision': precision, 'Recall': recall, 'Accuracy': accuracy}\n",
"_____no_output_____"
]
],
[
[
"### Test Results\n\nThe cell below runs the `evaluate` function. \n\nThe code assumes that you have a defined `predictor` and `test_features` and `test_labels` from previously-run cells.",
"_____no_output_____"
]
],
[
[
"print('Metrics for simple, LinearLearner.\\n')\n\n# get metrics for linear predictor\nmetrics = evaluate(linear_predictor, \n test_features.astype('float32'), \n test_labels, \n verbose=True) # verbose means we'll print out the metrics\n",
"Metrics for simple, LinearLearner.\n\nprediction (col) 0.0 1.0\nactual (row) \n0.0 85282 26\n1.0 36 99\n\nRecall: 0.733\nPrecision: 0.792\nAccuracy: 0.999\n\n"
]
],
[
[
"## Delete the Endpoint\n\nI've added a convenience function to delete prediction endpoints after we're done with them. And if you're done evaluating the model, you should delete your model endpoint!",
"_____no_output_____"
]
],
[
[
"# Deletes a precictor.endpoint\ndef delete_endpoint(predictor):\n try:\n boto3.client('sagemaker').delete_endpoint(EndpointName=predictor.endpoint)\n print('Deleted {}'.format(predictor.endpoint))\n except:\n print('Already deleted: {}'.format(predictor.endpoint))",
"_____no_output_____"
],
[
"# delete the predictor endpoint \ndelete_endpoint(linear_predictor)",
"The endpoint attribute has been renamed in sagemaker>=2.\nSee: https://sagemaker.readthedocs.io/en/stable/v2.html for details.\nThe endpoint attribute has been renamed in sagemaker>=2.\nSee: https://sagemaker.readthedocs.io/en/stable/v2.html for details.\n"
]
],
[
[
"---\n\n# Model Improvements\n\nThe default LinearLearner got a high accuracy, but still classified fraudulent and valid data points incorrectly. Specifically classifying more than 30 points as false negatives (incorrectly labeled, fraudulent transactions), and a little over 30 points as false positives (incorrectly labeled, valid transactions). Let's think about what, during training, could cause this behavior and what we could improve.\n\n**1. Model optimization**\n* If we imagine that we are designing this model for use in a bank application, we know that users do *not* want any valid transactions to be categorized as fraudulent. That is, we want to have as few **false positives** (0s classified as 1s) as possible. \n* On the other hand, if our bank manager asks for an application that will catch almost *all* cases of fraud, even if it means a higher number of false positives, then we'd want as few **false negatives** as possible.\n* To train according to specific product demands and goals, we do not want to optimize for accuracy only. Instead, we want to optimize for a metric that can help us decrease the number of false positives or negatives. \n\n<img src='notebook_ims/precision_recall.png' width=40% />\n \nIn this notebook, we'll look at different cases for tuning a model and make an optimization decision, accordingly.\n\n**2. Imbalanced training data**\n* At the start of this notebook, we saw that only about 0.17% of the training data was labeled as fraudulent. So, even if a model labels **all** of our data as valid, it will still have a high accuracy. \n* This may result in some overfitting towards valid data, which accounts for some **false negatives**; cases in which fraudulent data (1) is incorrectly characterized as valid (0).\n\nSo, let's address these issues in order; first, tuning our model and optimizing for a specific metric during training, and second, accounting for class imbalance in the training set. \n",
"_____no_output_____"
],
[
"## Improvement: Model Tuning\n\nOptimizing according to a specific metric is called **model tuning**, and SageMaker provides a number of ways to automatically tune a model.\n\n\n### Create a LinearLearner and tune for higher precision \n\n**Scenario:**\n* A bank has asked you to build a model that detects cases of fraud with an accuracy of about 85%. \n\nIn this case, we want to build a model that has as many true positives and as few false negatives, as possible. This corresponds to a model with a high **recall**: true positives / (true positives + false negatives). \n\nTo aim for a specific metric, LinearLearner offers the hyperparameter `binary_classifier_model_selection_criteria`, which is the model evaluation criteria for the training dataset. A reference to this parameter is in [LinearLearner's documentation](https://sagemaker.readthedocs.io/en/stable/linear_learner.html#sagemaker.LinearLearner). We'll also have to further specify the exact value we want to aim for; read more about the details of the parameters, [here](https://docs.aws.amazon.com/sagemaker/latest/dg/ll_hyperparameters.html).\n\nI will assume that performance on a training set will be within about 5% of the performance on a test set. So, for a recall of about 85%, I'll aim for a bit higher, 90%.",
"_____no_output_____"
]
],
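[
[
"# A quick, worked check of the recall and precision definitions discussed above, using\n# the confusion-matrix counts printed for the simple LinearLearner (TP=99, FP=26, FN=36).\ntp, fp, fn = 99, 26, 36\nprint('Recall    = TP / (TP + FN) = {:.3f}'.format(tp / (tp + fn)))   # ~0.733\nprint('Precision = TP / (TP + FP) = {:.3f}'.format(tp / (tp + fp)))   # ~0.792",
"_____no_output_____"
]
],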
[
[
"# specify an output path\nprefix = 'creditcard'\noutput_path = 's3://{}/{}'.format(bucket, prefix)\n\n# instantiate a LinearLearner\n# tune the model for a higher recall\nlinear_recall = LinearLearner(role=role,\n train_instance_count=1, \n train_instance_type='ml.c4.xlarge',\n predictor_type='binary_classifier',\n output_path=output_path,\n sagemaker_session=sagemaker_session,\n epochs=15,\n binary_classifier_model_selection_criteria='precision_at_target_recall', # target recall\n target_recall=0.9) # 90% recall\n",
"train_instance_count has been renamed in sagemaker>=2.\nSee: https://sagemaker.readthedocs.io/en/stable/v2.html for details.\ntrain_instance_type has been renamed in sagemaker>=2.\nSee: https://sagemaker.readthedocs.io/en/stable/v2.html for details.\n"
]
],
[
[
"### Train the tuned estimator\n\nFit the new, tuned estimator on the formatted training data.",
"_____no_output_____"
]
],
[
[
"%%time \n# train the estimator on formatted training data\nlinear_recall.fit(formatted_train_data)",
"Defaulting to the only supported framework/algorithm version: 1. Ignoring framework/algorithm version: 1.\nDefaulting to the only supported framework/algorithm version: 1. Ignoring framework/algorithm version: 1.\n"
]
],
[
[
"### Deploy and evaluate the tuned estimator\n\nDeploy the tuned predictor and evaluate it.\n\nWe hypothesized that a tuned model, optimized for a higher recall, would have fewer false negatives (fraudulent transactions incorrectly labeled as valid); did the number of false negatives get reduced after tuning the model?",
"_____no_output_____"
]
],
[
[
"%%time \n# deploy and create a predictor\nrecall_predictor = linear_recall.deploy(initial_instance_count=1, instance_type='ml.t2.medium')",
"Defaulting to the only supported framework/algorithm version: 1. Ignoring framework/algorithm version: 1.\n"
],
[
"print('Metrics for tuned (recall), LinearLearner.\\n')\n\n# get metrics for tuned predictor\nmetrics = evaluate(recall_predictor, \n test_features.astype('float32'), \n test_labels, \n verbose=True)",
"Metrics for tuned (recall), LinearLearner.\n\nprediction (col) 0.0 1.0\nactual (row) \n0.0 84317 991\n1.0 20 115\n\nRecall: 0.852\nPrecision: 0.104\nAccuracy: 0.988\n\n"
]
],
[
[
"## Delete the endpoint \n\nAs always, when you're done evaluating a model, you should delete the endpoint. Below, I'm using the `delete_endpoint` helper function I defined earlier.",
"_____no_output_____"
]
],
[
[
"# delete the predictor endpoint \ndelete_endpoint(recall_predictor)",
"The endpoint attribute has been renamed in sagemaker>=2.\nSee: https://sagemaker.readthedocs.io/en/stable/v2.html for details.\nThe endpoint attribute has been renamed in sagemaker>=2.\nSee: https://sagemaker.readthedocs.io/en/stable/v2.html for details.\n"
]
],
[
[
"---\n## Improvement: Managing Class Imbalance\n\nWe have a model that is tuned to get a higher recall, which aims to reduce the number of false negatives. Earlier, we discussed how class imbalance may actually bias our model towards predicting that all transactions are valid, resulting in higher false negatives and true negatives. It stands to reason that this model could be further improved if we account for this imbalance.\n\nTo account for class imbalance during training of a binary classifier, LinearLearner offers the hyperparameter, `positive_example_weight_mult`, which is the weight assigned to positive (1, fraudulent) examples when training a binary classifier. The weight of negative examples (0, valid) is fixed at 1. \n\n### EXERCISE: Create a LinearLearner with a `positive_example_weight_mult` parameter\n\nIn **addition** to tuning a model for higher recall (you may use `linear_recall` as a starting point), you should *add* a parameter that helps account for class imbalance. From the [hyperparameter documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/ll_hyperparameters.html) on `positive_example_weight_mult`, it reads:\n> \"If you want the algorithm to choose a weight so that errors in classifying negative vs. positive examples have equal impact on training loss, specify `balanced`.\"\n\nYou could also put in a specific float value, in which case you'd want to weight positive examples more heavily than negative examples, since there are fewer of them.",
"_____no_output_____"
]
],
[
[
"# instantiate a LinearLearner\n\n# include params for tuning for higher recall\n# *and* account for class imbalance in training data\nlinear_balanced = LinearLearner(role=role,\n train_instance_count=1, \n train_instance_type='ml.c4.xlarge',\n predictor_type='binary_classifier',\n output_path=output_path,\n sagemaker_session=sagemaker_session,\n epochs=15,\n binary_classifier_model_selection_criteria='precision_at_target_recall', # target recall\n target_recall=0.9,\n positive_example_weight_mult = 'balanced') # 90% recall",
"train_instance_count has been renamed in sagemaker>=2.\nSee: https://sagemaker.readthedocs.io/en/stable/v2.html for details.\ntrain_instance_type has been renamed in sagemaker>=2.\nSee: https://sagemaker.readthedocs.io/en/stable/v2.html for details.\n"
]
],
[
[
"### EXERCISE: Train the balanced estimator\n\nFit the new, balanced estimator on the formatted training data.",
"_____no_output_____"
]
],
[
[
"%%time \n# train the estimator on formatted training data\nlinear_balanced.fit(formatted_train_data)",
"Defaulting to the only supported framework/algorithm version: 1. Ignoring framework/algorithm version: 1.\nDefaulting to the only supported framework/algorithm version: 1. Ignoring framework/algorithm version: 1.\n"
]
],
[
[
"### EXERCISE: Deploy and evaluate the balanced estimator\n\nDeploy the balanced predictor and evaluate it. Do the results match with your expectations?",
"_____no_output_____"
]
],
[
[
"%%time \n# deploy and create a predictor\nbalanced_predictor = linear_balanced.deploy(initial_instance_count=1, instance_type='ml.t2.medium')",
"Defaulting to the only supported framework/algorithm version: 1. Ignoring framework/algorithm version: 1.\n"
],
[
"print('Metrics for balanced, LinearLearner.\\n')\n\n# get metrics for balanced predictor\nmetrics = evaluate(balanced_predictor, \n test_features.astype('float32'), \n test_labels, \n verbose=True)",
"Metrics for balanced, LinearLearner.\n\nprediction (col) 0.0 1.0\nactual (row) \n0.0 84629 679\n1.0 18 117\n\nRecall: 0.867\nPrecision: 0.147\nAccuracy: 0.992\n\n"
]
],
[
[
"## Delete the endpoint \n\nWhen you're done evaluating a model, you should delete the endpoint.",
"_____no_output_____"
]
],
[
[
"# delete the predictor endpoint \ndelete_endpoint(balanced_predictor)",
"The endpoint attribute has been renamed in sagemaker>=2.\nSee: https://sagemaker.readthedocs.io/en/stable/v2.html for details.\nThe endpoint attribute has been renamed in sagemaker>=2.\nSee: https://sagemaker.readthedocs.io/en/stable/v2.html for details.\n"
]
],
[
[
"A note on metric variability: \n\nThe above model is tuned for the best possible precision with recall fixed at about 90%. The recall is fixed at 90% during training, but may vary when we apply our trained model to a test set of data.",
"_____no_output_____"
],
[
"---\n## Model Design\n\nNow that you've seen how to tune and balance a LinearLearner. Create, train and deploy your own model. This exercise is meant to be more open-ended, so that you get practice with the steps involved in designing a model and deploying it.\n\n### EXERCISE: Train and deploy a LinearLearner with appropriate hyperparameters, according to the given scenario\n\n**Scenario:**\n* A bank has asked you to build a model that optimizes for a good user experience; users should only ever have up to about 15% of their valid transactions flagged as fraudulent.\n\nThis requires that you make a design decision: Given the above scenario, what metric (and value) should you aim for during training?\n\nYou may assume that performance on a training set will be within about 5-10% of the performance on a test set. For example, if you get 80% on a training set, you can assume that you'll get between about 70-90% accuracy on a test set.\n\nYour final model should account for class imbalance and be appropriately tuned. ",
"_____no_output_____"
]
],
[
[
"%%time\n# instantiate and train a LinearLearner\n\n# include params for tuning for higher precision\n# *and* account for class imbalance in training data\n",
"_____no_output_____"
],
[
"%%time \n# deploy and evaluate a predictor\n",
"_____no_output_____"
],
[
"## IMPORTANT\n# delete the predictor endpoint after evaluation \n",
"_____no_output_____"
]
],
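[
[
"# One possible sketch for the exercise above, not a definitive solution. One common\n# reading of the scenario is to keep false positives low relative to flagged\n# transactions, i.e. a precision of roughly 85%; aiming a bit higher (90%) during\n# training leaves room for the usual train/test gap. Instance types, epochs, output\n# path and the 'balanced' weighting mirror the earlier cells; treat the exact values\n# as assumptions.\nlinear_precision = LinearLearner(role=role,\n                                 train_instance_count=1,\n                                 train_instance_type='ml.c4.xlarge',\n                                 predictor_type='binary_classifier',\n                                 output_path=output_path,\n                                 sagemaker_session=sagemaker_session,\n                                 epochs=15,\n                                 binary_classifier_model_selection_criteria='recall_at_target_precision',\n                                 target_precision=0.9,\n                                 positive_example_weight_mult='balanced')\n\n# linear_precision.fit(formatted_train_data)\n# precision_predictor = linear_precision.deploy(initial_instance_count=1, instance_type='ml.t2.medium')\n# evaluate(precision_predictor, test_features.astype('float32'), test_labels, verbose=True)\n# delete_endpoint(precision_predictor)",
"_____no_output_____"
]
],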
[
[
"## Final Cleanup!\n\n* Double check that you have deleted all your endpoints.\n* I'd also suggest manually deleting your S3 bucket, models, and endpoint configurations directly from your AWS console.\n\nYou can find thorough cleanup instructions, [in the documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/ex1-cleanup.html).",
"_____no_output_____"
],
[
"---\n# Conclusion\n\nIn this notebook, you saw how to train and deploy a LinearLearner in SageMaker. This model is well-suited for a binary classification task that involves specific design decisions and managing class imbalance in the training set.\n\nFollowing the steps of a machine learning workflow, you loaded in some credit card transaction data, explored that data and prepared it for model training. Then trained, deployed, and evaluated several models, according to different design considerations!",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
d0f1c32dc155c48bd250c71851889534818fcc87
| 31,287 |
ipynb
|
Jupyter Notebook
|
misc/pottan_ocr.ipynb
|
harish2704/pottan-ocr
|
e6c4db329be54a5dcdfcb60b0c651faf946cbd6b
|
[
"MIT"
] | 51 |
2017-12-16T07:04:37.000Z
|
2021-10-16T11:59:26.000Z
|
misc/pottan_ocr.ipynb
|
harish2704/pottan-ocr
|
e6c4db329be54a5dcdfcb60b0c651faf946cbd6b
|
[
"MIT"
] | 16 |
2018-01-05T15:13:33.000Z
|
2021-05-08T16:26:01.000Z
|
misc/pottan_ocr.ipynb
|
harish2704/pottan-ocr
|
e6c4db329be54a5dcdfcb60b0c651faf946cbd6b
|
[
"MIT"
] | 13 |
2018-01-01T07:14:40.000Z
|
2020-11-27T07:56:23.000Z
| 122.694118 | 3,688 | 0.645604 |
[
[
[
"!git clone https://github.com/harish2704/pottan-ocr.git ./pottan-ocr\n%cd pottan-ocr/\n!git checkout keras-training\n!cp config.yaml.sample config.yaml\n!wget -c https://github.com/harish2704/pottan-ocr-data/raw/master/train.txt.gz\n!wget -c https://github.com/harish2704/pottan-ocr-data/raw/master/validate.txt.gz\n!apt-get install python3-gi gir1.2-pango python3-gi-cairo fonts-mlym",
"Cloning into './pottan-ocr'...\nremote: Enumerating objects: 48, done.\u001b[K\nremote: Counting objects: 2% (1/48) \u001b[K\rremote: Counting objects: 4% (2/48) \u001b[K\rremote: Counting objects: 6% (3/48) \u001b[K\rremote: Counting objects: 8% (4/48) \u001b[K\rremote: Counting objects: 10% (5/48) \u001b[K\rremote: Counting objects: 12% (6/48) \u001b[K\rremote: Counting objects: 14% (7/48) \u001b[K\rremote: Counting objects: 16% (8/48) \u001b[K\rremote: Counting objects: 18% (9/48) \u001b[K\rremote: Counting objects: 20% (10/48) \u001b[K\rremote: Counting objects: 22% (11/48) \u001b[K\rremote: Counting objects: 25% (12/48) \u001b[K\rremote: Counting objects: 27% (13/48) \u001b[K\rremote: Counting objects: 29% (14/48) \u001b[K\rremote: Counting objects: 31% (15/48) \u001b[K\rremote: Counting objects: 33% (16/48) \u001b[K\rremote: Counting objects: 35% (17/48) \u001b[K\rremote: Counting objects: 37% (18/48) \u001b[K\rremote: Counting objects: 39% (19/48) \u001b[K\rremote: Counting objects: 41% (20/48) \u001b[K\rremote: Counting objects: 43% (21/48) \u001b[K\rremote: Counting objects: 45% (22/48) \u001b[K\rremote: Counting objects: 47% (23/48) \u001b[K\rremote: Counting objects: 50% (24/48) \u001b[K\rremote: Counting objects: 52% (25/48) \u001b[K\rremote: Counting objects: 54% (26/48) \u001b[K\rremote: Counting objects: 56% (27/48) \u001b[K\rremote: Counting objects: 58% (28/48) \u001b[K\rremote: Counting objects: 60% (29/48) \u001b[K\rremote: Counting objects: 62% (30/48) \u001b[K\rremote: Counting objects: 64% (31/48) \u001b[K\rremote: Counting objects: 66% (32/48) \u001b[K\rremote: Counting objects: 68% (33/48) \u001b[K\rremote: Counting objects: 70% (34/48) \u001b[K\rremote: Counting objects: 72% (35/48) \u001b[K\rremote: Counting objects: 75% (36/48) \u001b[K\rremote: Counting objects: 77% (37/48) \u001b[K\rremote: Counting objects: 79% (38/48) \u001b[K\rremote: Counting objects: 81% (39/48) \u001b[K\rremote: Counting objects: 83% (40/48) \u001b[K\rremote: Counting objects: 85% (41/48) \u001b[K\rremote: Counting objects: 87% (42/48) \u001b[K\rremote: Counting objects: 89% (43/48) \u001b[K\rremote: Counting objects: 91% (44/48) \u001b[K\rremote: Counting objects: 93% (45/48) \u001b[K\rremote: Counting objects: 95% (46/48) \u001b[K\rremote: Counting objects: 97% (47/48) \u001b[K\rremote: Counting objects: 100% (48/48) \u001b[K\rremote: Counting objects: 100% (48/48), done.\u001b[K\nremote: Compressing objects: 2% (1/37) \u001b[K\rremote: Compressing objects: 5% (2/37) \u001b[K\rremote: Compressing objects: 8% (3/37) \u001b[K\rremote: Compressing objects: 10% (4/37) \u001b[K\rremote: Compressing objects: 13% (5/37) \u001b[K\rremote: Compressing objects: 16% (6/37) \u001b[K\rremote: Compressing objects: 18% (7/37) \u001b[K\rremote: Compressing objects: 21% (8/37) \u001b[K\rremote: Compressing objects: 24% (9/37) \u001b[K\rremote: Compressing objects: 27% (10/37) \u001b[K\rremote: Compressing objects: 29% (11/37) \u001b[K\rremote: Compressing objects: 32% (12/37) \u001b[K\rremote: Compressing objects: 35% (13/37) \u001b[K\rremote: Compressing objects: 37% (14/37) \u001b[K\rremote: Compressing objects: 40% (15/37) \u001b[K\rremote: Compressing objects: 43% (16/37) \u001b[K\rremote: Compressing objects: 45% (17/37) \u001b[K\rremote: Compressing objects: 48% (18/37) \u001b[K\rremote: Compressing objects: 51% (19/37) \u001b[K\rremote: Compressing objects: 54% (20/37) \u001b[K\rremote: Compressing objects: 56% (21/37) \u001b[K\rremote: Compressing objects: 59% (22/37) \u001b[K\rremote: 
Compressing objects: 62% (23/37) \u001b[K\rremote: Compressing objects: 64% (24/37) \u001b[K\rremote: Compressing objects: 67% (25/37) \u001b[K\rremote: Compressing objects: 70% (26/37) \u001b[K\rremote: Compressing objects: 72% (27/37) \u001b[K\rremote: Compressing objects: 75% (28/37) \u001b[K\rremote: Compressing objects: 78% (29/37) \u001b[K\rremote: Compressing objects: 81% (30/37) \u001b[K\rremote: Compressing objects: 83% (31/37) \u001b[K\rremote: Compressing objects: 86% (32/37) \u001b[K\rremote: Compressing objects: 89% (33/37) \u001b[K\rremote: Compressing objects: 91% (34/37) \u001b[K\rremote: Compressing objects: 94% (35/37) \u001b[K\rremote: Compressing objects: 97% (36/37) \u001b[K\rremote: Compressing objects: 100% (37/37) \u001b[K\rremote: Compressing objects: 100% (37/37), done.\u001b[K\nReceiving objects: 0% (1/868) \rReceiving objects: 1% (9/868) \rReceiving objects: 2% (18/868) \rReceiving objects: 3% (27/868) \rReceiving objects: 4% (35/868) \rReceiving objects: 5% (44/868) \rReceiving objects: 6% (53/868) \rReceiving objects: 7% (61/868) \rReceiving objects: 8% (70/868) \rReceiving objects: 9% (79/868) \rReceiving objects: 10% (87/868) \rReceiving objects: 11% (96/868) \rReceiving objects: 12% (105/868) \rReceiving objects: 13% (113/868) \rReceiving objects: 14% (122/868) \rReceiving objects: 15% (131/868) \rReceiving objects: 16% (139/868) \rReceiving objects: 17% (148/868) \rReceiving objects: 18% (157/868) \rReceiving objects: 19% (165/868) \rReceiving objects: 20% (174/868) \rReceiving objects: 21% (183/868) \rReceiving objects: 22% (191/868) \rReceiving objects: 23% (200/868) \rReceiving objects: 24% (209/868) \rReceiving objects: 25% (217/868) \rReceiving objects: 26% (226/868) \rReceiving objects: 27% (235/868) \rReceiving objects: 28% (244/868) \rReceiving objects: 29% (252/868) \rReceiving objects: 30% (261/868) \rReceiving objects: 31% (270/868) \rReceiving objects: 32% (278/868) \rReceiving objects: 33% (287/868) \rReceiving objects: 34% (296/868) \rReceiving objects: 35% (304/868) \rReceiving objects: 36% (313/868) \rReceiving objects: 37% (322/868) \rReceiving objects: 38% (330/868) \rReceiving objects: 39% (339/868) \rReceiving objects: 40% (348/868) \rReceiving objects: 41% (356/868) \rReceiving objects: 42% (365/868) \rReceiving objects: 43% (374/868) \rReceiving objects: 44% (382/868) \rReceiving objects: 45% (391/868) \rReceiving objects: 46% (400/868) \rReceiving objects: 47% (408/868) \rReceiving objects: 48% (417/868) \rReceiving objects: 49% (426/868) \rReceiving objects: 50% (434/868) \rReceiving objects: 51% (443/868) \rReceiving objects: 52% (452/868) \rReceiving objects: 53% (461/868) \rReceiving objects: 54% (469/868) \rReceiving objects: 55% (478/868) \rReceiving objects: 56% (487/868) \rReceiving objects: 57% (495/868) \rReceiving objects: 58% (504/868) \rReceiving objects: 59% (513/868) \rReceiving objects: 60% (521/868) \rReceiving objects: 61% (530/868) \rReceiving objects: 62% (539/868) \rReceiving objects: 63% (547/868) \rReceiving objects: 64% (556/868) \rReceiving objects: 65% (565/868) \rReceiving objects: 66% (573/868) \rReceiving objects: 67% (582/868) \rReceiving objects: 68% (591/868) \rReceiving objects: 69% (599/868) \rReceiving objects: 70% (608/868) \rReceiving objects: 71% (617/868) \rReceiving objects: 72% (625/868) \rReceiving objects: 73% (634/868) \rReceiving objects: 74% (643/868) \rReceiving objects: 75% (651/868) \rReceiving objects: 76% (660/868) \rReceiving objects: 77% (669/868) \rReceiving objects: 78% (678/868) 
\rReceiving objects: 79% (686/868) \rReceiving objects: 80% (695/868) \rReceiving objects: 81% (704/868) \rReceiving objects: 82% (712/868) \rReceiving objects: 83% (721/868) \rReceiving objects: 84% (730/868) \rReceiving objects: 85% (738/868) \rReceiving objects: 86% (747/868) \rReceiving objects: 87% (756/868) \rReceiving objects: 88% (764/868) \rReceiving objects: 89% (773/868) \rReceiving objects: 90% (782/868) \rReceiving objects: 91% (790/868) \rReceiving objects: 92% (799/868) \rReceiving objects: 93% (808/868) \rReceiving objects: 94% (816/868) \rremote: Total 868 (delta 19), reused 27 (delta 10), pack-reused 820\u001b[K\nReceiving objects: 95% (825/868) \rReceiving objects: 96% (834/868) \rReceiving objects: 97% (842/868) \rReceiving objects: 98% (851/868) \rReceiving objects: 99% (860/868) \rReceiving objects: 100% (868/868) \rReceiving objects: 100% (868/868), 5.58 MiB | 23.60 MiB/s, done.\nResolving deltas: 0% (0/497) \rResolving deltas: 1% (5/497) \rResolving deltas: 2% (12/497) \rResolving deltas: 6% (30/497) \rResolving deltas: 7% (36/497) \rResolving deltas: 8% (42/497) \rResolving deltas: 9% (49/497) \rResolving deltas: 10% (50/497) \rResolving deltas: 14% (70/497) \rResolving deltas: 17% (85/497) \rResolving deltas: 19% (96/497) \rResolving deltas: 22% (110/497) \rResolving deltas: 26% (132/497) \rResolving deltas: 27% (137/497) \rResolving deltas: 28% (144/497) \rResolving deltas: 30% (151/497) \rResolving deltas: 32% (162/497) \rResolving deltas: 33% (166/497) \rResolving deltas: 34% (169/497) \rResolving deltas: 35% (176/497) \rResolving deltas: 36% (183/497) \rResolving deltas: 37% (184/497) \rResolving deltas: 38% (191/497) \rResolving deltas: 39% (195/497) \rResolving deltas: 43% (214/497) \rResolving deltas: 47% (238/497) \rResolving deltas: 49% (245/497) \rResolving deltas: 52% (262/497) \rResolving deltas: 53% (264/497) \rResolving deltas: 54% (272/497) \rResolving deltas: 55% (278/497) \rResolving deltas: 56% (283/497) \rResolving deltas: 66% (330/497) \rResolving deltas: 67% (335/497) \rResolving deltas: 68% (338/497) \rResolving deltas: 69% (343/497) \rResolving deltas: 73% (366/497) \rResolving deltas: 75% (374/497) \rResolving deltas: 76% (381/497) \rResolving deltas: 77% (385/497) \rResolving deltas: 78% (389/497) \rResolving deltas: 79% (394/497) \rResolving deltas: 84% (421/497) \rResolving deltas: 85% (427/497) \rResolving deltas: 86% (428/497) \rResolving deltas: 90% (450/497) \rResolving deltas: 91% (455/497) \rResolving deltas: 92% (458/497) \rResolving deltas: 94% (469/497) \rResolving deltas: 95% (473/497) \rResolving deltas: 96% (478/497) \rResolving deltas: 97% (484/497) \rResolving deltas: 98% (488/497) \rResolving deltas: 100% (497/497) \rResolving deltas: 100% (497/497), done.\n/content/pottan-ocr\nBranch 'keras-training' set up to track remote branch 'keras-training' from 'origin'.\nSwitched to a new branch 'keras-training'\n--2019-06-28 06:02:48-- https://github.com/harish2704/pottan-ocr-data/raw/master/train.txt.gz\nResolving github.com (github.com)... 192.30.253.112\nConnecting to github.com (github.com)|192.30.253.112|:443... connected.\nHTTP request sent, awaiting response... 302 Found\nLocation: https://raw.githubusercontent.com/harish2704/pottan-ocr-data/master/train.txt.gz [following]\n--2019-06-28 06:02:48-- https://raw.githubusercontent.com/harish2704/pottan-ocr-data/master/train.txt.gz\nResolving raw.githubusercontent.com (raw.githubusercontent.com)... 
151.101.0.133, 151.101.64.133, 151.101.128.133, ...\nConnecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 4878737 (4.7M) [application/octet-stream]\nSaving to: ‘train.txt.gz’\n\ntrain.txt.gz 100%[===================>] 4.65M --.-KB/s in 0.09s \n\n2019-06-28 06:02:49 (49.2 MB/s) - ‘train.txt.gz’ saved [4878737/4878737]\n\n--2019-06-28 06:02:50-- https://github.com/harish2704/pottan-ocr-data/raw/master/validate.txt.gz\nResolving github.com (github.com)... 192.30.253.112\nConnecting to github.com (github.com)|192.30.253.112|:443... connected.\nHTTP request sent, awaiting response... 302 Found\nLocation: https://raw.githubusercontent.com/harish2704/pottan-ocr-data/master/validate.txt.gz [following]\n--2019-06-28 06:02:51-- https://raw.githubusercontent.com/harish2704/pottan-ocr-data/master/validate.txt.gz\nResolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...\nConnecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 585640 (572K) [application/octet-stream]\nSaving to: ‘validate.txt.gz’\n\nvalidate.txt.gz 100%[===================>] 571.91K --.-KB/s in 0.05s \n\n2019-06-28 06:02:51 (10.8 MB/s) - ‘validate.txt.gz’ saved [585640/585640]\n\nReading package lists... Done\nBuilding dependency tree \nReading state information... Done\nNote, selecting 'gir1.2-pangoxft-1.0' for regex 'gir1.2-pango'\nNote, selecting 'gir1.2-pangoft2-1.0' for regex 'gir1.2-pango'\nNote, selecting 'gir1.2-pangocairo-1.0' for regex 'gir1.2-pango'\nNote, selecting 'gir1.2-pango-1.0' for regex 'gir1.2-pango'\nNote, selecting 'gir1.2-pango-1.0' instead of 'gir1.2-pangocairo-1.0'\nNote, selecting 'gir1.2-pango-1.0' instead of 'gir1.2-pangoft2-1.0'\nNote, selecting 'gir1.2-pango-1.0' instead of 'gir1.2-pangoxft-1.0'\nfonts-mlym is already the newest version (2:1.2).\ngir1.2-pango-1.0 is already the newest version (1.40.14-1ubuntu0.1).\npython3-gi is already the newest version (3.26.1-2ubuntu1).\npython3-gi-cairo is already the newest version (3.26.1-2ubuntu1).\nThe following package was automatically installed and is no longer required:\n libnvidia-common-410\nUse 'apt autoremove' to remove it.\n0 upgraded, 0 newly installed, 0 to remove and 16 not upgraded.\n"
],
[
"!python3 misc/run_keras.py --traindata ./train.txt.gz --valdata ./validate.txt.gz --batchSize 64 --traindata_limit 12800 --valdata_limit 64 --niter 1 --nh 32",
"Namespace(adadelta=False, batchSize=64, crnn=None, lr=0.01, nh=32, niter=1, outfile='./crnn.h5', traindata='./train.txt.gz', traindata_cache=None, traindata_limit=12800, valdata='./validate.txt.gz', valdata_cache=None, valdata_limit=64)\nUsing TensorFlow backend.\nWARNING: Logging before flag parsing goes to stderr.\nW0628 06:03:04.362039 140419082147712 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:74: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.\n\nW0628 06:03:04.372155 140419082147712 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.\n\nW0628 06:03:04.373465 140419082147712 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4138: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.\n\nW0628 06:03:04.384960 140419082147712 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:3976: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.\n\nW0628 06:03:04.417529 140419082147712 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:174: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead.\n\nW0628 06:03:04.417702 140419082147712 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:181: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.\n\n2019-06-28 06:03:04.422135: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2300000000 Hz\n2019-06-28 06:03:04.422328: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x29a8840 executing computations on platform Host. Devices:\n2019-06-28 06:03:04.422362: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): <undefined>, <undefined>\n2019-06-28 06:03:04.424254: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcuda.so.1\n2019-06-28 06:03:04.585667: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2019-06-28 06:03:04.586206: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x29a8bc0 executing computations on platform CUDA. 
Devices:\n2019-06-28 06:03:04.586239: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Tesla T4, Compute Capability 7.5\n2019-06-28 06:03:04.586487: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2019-06-28 06:03:04.586979: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties: \nname: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59\npciBusID: 0000:00:04.0\n2019-06-28 06:03:04.587313: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0\n2019-06-28 06:03:04.588525: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.0\n2019-06-28 06:03:04.589634: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcufft.so.10.0\n2019-06-28 06:03:04.589964: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcurand.so.10.0\n2019-06-28 06:03:04.591260: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusolver.so.10.0\n2019-06-28 06:03:04.592713: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusparse.so.10.0\n2019-06-28 06:03:04.598317: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7\n2019-06-28 06:03:04.599091: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2019-06-28 06:03:04.600064: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2019-06-28 06:03:04.602854: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0\n2019-06-28 06:03:04.602950: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0\n2019-06-28 06:03:04.603935: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:\n2019-06-28 06:03:04.603974: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187] 0 \n2019-06-28 06:03:04.603991: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0: N \n2019-06-28 06:03:04.604476: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2019-06-28 06:03:04.605027: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2019-06-28 06:03:04.605630: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:40] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. 
Original config value was 0.\n2019-06-28 06:03:04.605683: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14202 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5)\nW0628 06:03:05.188534 140419082147712 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:1834: The name tf.nn.fused_batch_norm is deprecated. Please use tf.compat.v1.nn.fused_batch_norm instead.\n\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv0 (Conv2D) (None, 20, None, 16) 160 \n_________________________________________________________________\nrelu0 (Activation) (None, 20, None, 16) 0 \n_________________________________________________________________\npooling0 (MaxPooling2D) (None, 10, None, 16) 0 \n_________________________________________________________________\nconv1 (Conv2D) (None, 10, None, 32) 4640 \n_________________________________________________________________\nrelu1 (Activation) (None, 10, None, 32) 0 \n_________________________________________________________________\npooling1 (MaxPooling2D) (None, 5, None, 32) 0 \n_________________________________________________________________\nconv2 (Conv2D) (None, 5, None, 64) 18496 \n_________________________________________________________________\nbatchnorm2 (BatchNormalizati (None, 5, None, 64) 256 \n_________________________________________________________________\nrelu2 (Activation) (None, 5, None, 64) 0 \n_________________________________________________________________\nconv3 (Conv2D) (None, 5, None, 128) 73856 \n_________________________________________________________________\nrelu3 (Activation) (None, 5, None, 128) 0 \n_________________________________________________________________\nzero_padding2d_1 (ZeroPaddin (None, 5, None, 128) 0 \n_________________________________________________________________\npooling2 (MaxPooling2D) (None, 2, None, 128) 0 \n_________________________________________________________________\nconv4 (Conv2D) (None, 2, None, 256) 295168 \n_________________________________________________________________\nbatchnorm4 (BatchNormalizati (None, 2, None, 256) 1024 \n_________________________________________________________________\nrelu4 (Activation) (None, 2, None, 256) 0 \n_________________________________________________________________\nconv5 (Conv2D) (None, 2, None, 256) 590080 \n_________________________________________________________________\nrelu5 (Activation) (None, 2, None, 256) 0 \n_________________________________________________________________\nzero_padding2d_2 (ZeroPaddin (None, 2, None, 256) 0 \n_________________________________________________________________\npooling3 (MaxPooling2D) (None, 1, None, 256) 0 \n_________________________________________________________________\nreshape_1 (Reshape) (None, None, 256) 0 \n_________________________________________________________________\nbidirectional_1 (Bidirection (None, None, 64) 73984 \n_________________________________________________________________\ntime_distributed_1 (TimeDist (None, None, 32) 2080 \n_________________________________________________________________\nbidirectional_2 (Bidirection (None, None, 64) 16640 \n_________________________________________________________________\ntime_distributed_2 (TimeDist (None, None, 136) 8840 
\n=================================================================\nTotal params: 1,085,224\nTrainable params: 1,084,584\nNon-trainable params: 640\n_________________________________________________________________\nW0628 06:03:06.512282 140419082147712 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py:2618: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.\nW0628 06:03:06.520996 140419082147712 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4249: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse `tf.cast` instead.\nW0628 06:03:06.600426 140419082147712 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/array_ops.py:1354: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse tf.where in 2.0, which has the same broadcast rule as np.where\nW0628 06:03:06.628765 140419082147712 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4229: to_int64 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse `tf.cast` instead.\nW0628 06:03:06.634807 140419082147712 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/keras/optimizers.py:790: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.\n\nEpoch 1/1\n2019-06-28 06:03:11.980841: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.\n2019-06-28 06:03:12.126053: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.0\n2019-06-28 06:03:12.448364: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7\n 26/200 [==>...........................] - ETA: 4:37 - loss: 334.5784 - acc: 0.0000e+00"
]
]
] |
[
"code"
] |
[
[
"code",
"code"
]
] |
d0f1c44dc12ae81677bbfc0bd7c978f402d53fc8
| 29,929 |
ipynb
|
Jupyter Notebook
|
python3.6/.ipynb_checkpoints/python11-checkpoint.ipynb
|
gjbr5/python-link-e-learning
|
ed18f9537af29d7edd75686c9203d31b0a46ae5e
|
[
"MIT"
] | null | null | null |
python3.6/.ipynb_checkpoints/python11-checkpoint.ipynb
|
gjbr5/python-link-e-learning
|
ed18f9537af29d7edd75686c9203d31b0a46ae5e
|
[
"MIT"
] | null | null | null |
python3.6/.ipynb_checkpoints/python11-checkpoint.ipynb
|
gjbr5/python-link-e-learning
|
ed18f9537af29d7edd75686c9203d31b0a46ae5e
|
[
"MIT"
] | 6 |
2020-09-04T10:16:59.000Z
|
2020-12-03T01:47:03.000Z
| 24.116841 | 1,644 | 0.443182 |
[
[
[
"***\n***\n# 11. 튜플과 집합\n***\n***",
"_____no_output_____"
],
[
"***\n## 1 튜플 활용법\n***\n- 튜플(Tuples): 순서있는 임의의 객체 모음 (시퀀스형)\n- 튜플은 변경 불가능(Immutable)\n- 시퀀스형이 가지는 다음 연산 모두 지원\n - 인덱싱, 슬라이싱, 연결, 반복, 멤버쉽 테스트",
"_____no_output_____"
],
[
"### 1-1 튜플 연산",
"_____no_output_____"
]
],
[
[
"t1 = () # 비어있는 튜플\nt2 = (1,2,3) # 괄호 사용\n\nt3 = 1,2,3 # 괄호가 없어도 튜플이 됨\nprint(type(t1), type(t2), type(t3))\n\n\n# <type 'tuple'> <type 'tuple'> <type 'tuple'>",
"<class 'tuple'> <class 'tuple'> <class 'tuple'>\n"
],
[
"r1 = (1,) # 자료가 한 개일 때는 반드시 콤마가 있어야 한다.\nr2 = 1, # 괄호는 없어도 콤마는 있어야 한다.\nprint(type(r1))\nprint(type(r2))\n\n# <type 'tuple'>\n# <type 'tuple'>",
"<class 'tuple'>\n<class 'tuple'>\n"
],
[
"t = (1, 2, 3)\nprint(t * 2) # 반복\nprint(t + ('PyKUG', 'users')) # 연결\nprint(t)\nprint()\n\nprint(t[0], t[1:3]) # 인덱싱, 슬라이싱\nprint(len(t)) # 길이\nprint(1 in t) # 멤버십 테스트",
"(1, 2, 3, 1, 2, 3)\n(1, 2, 3, 'PyKUG', 'users')\n(1, 2, 3)\n\n1 (2, 3)\n3\nTrue\n"
],
[
"t[0] = 100 # 튜플은 변경 불가능, 에러발생",
"_____no_output_____"
],
[
"t = (12345, 54321, 'hello!') \nu = t, (1, 2, 3, 4, 5) # 튜플 내부 원소로 다른 튜플을 가질 수 있음\nprint(u)\n\nt2 = [1, 2, 3] # 튜플 내부 원소로 리스트 가질 수 있음 \nu2 = t2, (1, 2, 4)\nprint(u2)\n\nt3 = {1:\"abc\", 2:\"def\"} # 튜플 내부 원소로 사전 가질 수 있음 \nu3 = t3, (1, 2, 3)\nprint(u3)",
"((12345, 54321, 'hello!'), (1, 2, 3, 4, 5))\n([1, 2, 3], (1, 2, 4))\n({1: 'abc', 2: 'def'}, (1, 2, 3))\n"
],
[
"x, y, z = 1, 2, 3 # 튜플을 이용한 복수 개의 자료 할당\nprint(type(x), type(y), type(z))\nprint(x)\nprint(y)\nprint(z)\n\n\n# <type 'int'> <type 'int'> <type 'int'>\n# 1\n# 2\n# 3",
"<class 'int'> <class 'int'> <class 'int'>\n1\n2\n3\n"
],
[
"x = 1\ny = 2\nx, y = y, x # 튜플을 이용한 두 자료의 값 변경\nprint(x, y)",
"2 1\n"
]
],
[
[
"### 1-2 패킹과 언패킹",
"_____no_output_____"
],
[
"- 패킹 (Packing): 하나의 튜플 안에 여러 개의 데이터를 넣는 작업",
"_____no_output_____"
]
],
[
[
"t = 1, 2, 'hello'\nprint(t)\nprint(type(t))",
"(1, 2, 'hello')\n<class 'tuple'>\n"
]
],
[
[
"- 언패킹 (Unpacking): 하나의 튜플에서 여러 개의 데이터를 한꺼번에 꺼내와 각각 변수에 할당하는 작업",
"_____no_output_____"
]
],
[
[
"x, y, z = t",
"_____no_output_____"
]
],
[
[
"- 리스트로도 비슷한 작업이 가능하지만, 단순 패킹/언패킹 작업만을 목적으로 한다면 튜플 사용 추천",
"_____no_output_____"
]
],
[
[
"a = ['foo', 'bar', 4, 5]\n[x, y, z, w] = a\nprint(x)\nprint(y)\nprint(z)\nprint(w)\nprint()\n\nx, y, z, w = a\nprint(x)\nprint(y)\nprint(z)\nprint(w)",
"foo\nbar\n4\n5\n\nfoo\nbar\n4\n5\n"
]
],
[
[
"- 튜플과 리스트와의 공통점\n - 원소에 임의의 객체를 저장\n - 시퀀스 자료형\n - 인덱싱, 슬라이싱, 연결, 반복, 멤버쉽 테스트 연산 지원\n \n- 리스트와 다른 튜플만의 특징\n - 변경 불가능 (Immutable)\n - 튜플은 count와 index 외에 다른 메소드를 가지지 않는다.",
"_____no_output_____"
]
],
[
[
"T = (1, 2, 2, 3, 3, 4, 4, 4, 4, 5)\nprint(T.count(4))\nprint(T.index(1))",
"4\n0\n"
]
],
[
[
"- list() 와 tuple() 내장 함수를 사용하여 리스트와 튜플을 상호 변환할 수 있음",
"_____no_output_____"
]
],
[
[
"T = (1, 2, 3, 4, 5)\nL = list(T)\nL[0] = 100\nprint(L)\n\nT = tuple(L)\nprint(T)",
"[100, 2, 3, 4, 5]\n(100, 2, 3, 4, 5)\n"
]
],
[
[
"### 1-3 튜플의 사용 용도",
"_____no_output_____"
],
[
"- 튜플을 사용하는 경우 1: 함수가 하나 이상의 값을 리턴하는 경우",
"_____no_output_____"
]
],
[
[
"def calc(a, b):\n return a+b, a*b\n\nx, y = calc(5, 4)",
"_____no_output_____"
]
],
[
[
"- 튜플을 사용하는 경우 2: 문자열 포멧팅",
"_____no_output_____"
]
],
[
[
"print('id : %s, name : %s' % ('gslee', 'GangSeong'))",
"id : gslee, name : GangSeong\n"
]
],
[
[
"- 튜플을 사용하는 경우 3: 고정된 값을 쌍으로 표현하는 경우",
"_____no_output_____"
]
],
[
[
"d = {'one':1, 'two':2}\nprint(d.items())\n\n\n# [('two', 2), ('one', 1)]",
"dict_items([('one', 1), ('two', 2)])\n"
]
],
[
[
"***\n## 2 집합 자료형\n***\n- set 내장 함수를 사용한 집합 자료 생성 \n - 변경 가능(Mutable)한 객체이다.\n - 각 원소간에 순서는 없다.\n - 각 원소는 중복될 수 없다.\n - [note] set은 컨네이너 자료형이지만 시퀀스 자료형은 아니다.",
"_____no_output_____"
],
[
"### 2-1 집합 자료형 생성",
"_____no_output_____"
]
],
[
[
"a = set([1, 2, 3])\nprint(type(a))\nprint(a)",
"<class 'set'>\n{1, 2, 3}\n"
],
[
"b = set((1, 2, 3))\nprint(type(b))\nprint(b)",
"<class 'set'>\n{1, 2, 3}\n"
],
[
"c = set({'a':1, 'b':2, 'c':3})\nprint(type(c))\nprint(c)",
"<class 'set'>\n{'c', 'a', 'b'}\n"
],
[
"d = set({'a':1, 'b':2, 'c':3}.values())\nprint(type(d))\nprint(d)",
"<class 'set'>\n{1, 2, 3}\n"
]
],
[
[
"- E-learning에서 언급하지 않았던 집합 만드는 방법 --> 꼭 기억하세요~",
"_____no_output_____"
]
],
[
[
"e = {1, 2, 3, 4, 5}\nprint(type(e))\nprint(e)",
"<class 'set'>\n{1, 2, 3, 4, 5}\n"
]
],
[
[
"- set에는 동일한 자료가 중복해서 저장되지 않는다. 즉 중복이 자동으로 제거됨",
"_____no_output_____"
]
],
[
[
"f = {1, 1, 2, 2, 3, 3}\ng = set([1, 1, 2, 2, 3, 3])\nprint(f)\nprint(g)",
"{1, 2, 3}\n{1, 2, 3}\n"
]
],
[
[
"- set의 원소로는 변경 불가능(Immutable)한 것만 할당 가능하다.",
"_____no_output_____"
]
],
[
[
"print(set()) # 빈 set 객체 생성\nprint(set([1, 2, 3, 4, 5])) # 초기 값은 일반적으로 시퀀스 자료형인 리스트를 넣어준다.\nprint(set([1, 2, 3, 2, 3, 4])) # 중복된 원소는 한 나만 저장됨\nprint(set('abc')) # 문자열은 각 문자를 집합 원소로 지닌다. \nprint(set([(1, 2, 3), (4, 5, 6)])) # 각 튜플은 원소로 가질 수 있음 \nprint(set([[1, 2, 3], [4, 5, 6]])) # 변경 가능 자료인 리스트는 집합의 원소가 될 수 없다.",
"set()\n{1, 2, 3, 4, 5}\n{1, 2, 3, 4}\n{'a', 'b', 'c'}\n{(4, 5, 6), (1, 2, 3)}\n"
],
[
"print(set([{1:\"aaa\"}, {2:\"bbb\"}])) # 변경 가능 자료인 사전도 집합의 원소가 될 수 없다.",
"_____no_output_____"
]
],
[
[
"- set의 기본 연산\n\n| set 연산 | 동일 연산자 | 내용 |\n|---------------------------|-------------|-----------------------------|\n| len(s) | | 원소의 개수 |\n| x in s | | x가 집합 s의 원소인가? |\n| x not in s | | x가 집합 s의 원소가 아닌가? |",
"_____no_output_____"
]
],
[
[
"A = set([1, 2, 3, 4, 5, 6, 7, 8, 9])\n\nprint(len(A)) # 집합의 원소의 수\nprint(5 in A) # 멤버십 테스트\nprint(10 not in A) # 멤버십 테스트",
"9\nTrue\nTrue\n"
]
],
[
[
"### 2-2 집합 자료형 메소드",
"_____no_output_____"
],
[
"- set의 주요 메소드\n - 다음 연산은 원래 집합은 변경하지 않고 새로운 집합을 반환한다. \n\n| set 연산 | 동일 연산자 | 내용 |\n|---------------------------|-------------|-----------------------------|\n| s.issubset(t) | s <= t | s가 t의 부분집합인가? |\n| s.issuperset(t) | s >= t | s가 t의 슈퍼집합인가? |\n| s.union(t) | s | t | 새로운 s와 t의 합집합 |\n| s.intersection(t) | s & t | 새로운 s와 t의 교집합 |\n| s.difference(t) | s - t | 새로운 s와 t의 차집합 |\n| s.symmetric_difference(t) | s ^ t | 새로운 s와 t의 배타집합 |\n| s.copy() | | 집합 s의 shallow 복사 |",
"_____no_output_____"
]
],
[
[
"B = set([4, 5, 6, 10, 20, 30])\nC = set([10, 20, 30])\n\nprint(C.issubset(B)) # C가 B의 부분집합?\nprint(C <= B)\nprint(B.issuperset(C)) # B가 C를 포함하는 집합?\nprint(B >= C)\nprint()",
"True\nTrue\nTrue\nTrue\n\n"
],
[
"A = set([1, 2, 3, 4, 5, 6, 7, 8, 9])\nB = set([4, 5, 6, 10, 20, 30])\n\nprint(A.union(B)) # A와 B의 합집합\nprint(A | B)\nprint(A)",
"{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 20, 30}\n{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 20, 30}\n{1, 2, 3, 4, 5, 6, 7, 8, 9}\n"
],
[
"print(A.intersection(B)) # A와 B의 교집합\nprint(A & B)\nprint(A)",
"{4, 5, 6}\n{4, 5, 6}\n{1, 2, 3, 4, 5, 6, 7, 8, 9}\n"
],
[
"print(A.difference(B)) # A - B (차집합)\nprint(A - B)\nprint(A)",
"{1, 2, 3, 7, 8, 9}\n{1, 2, 3, 7, 8, 9}\n{1, 2, 3, 4, 5, 6, 7, 8, 9}\n"
],
[
"print(A.symmetric_difference(B)) # 베타집합. A와 B의 합집합에서 교집합의 원소를 제외한 집합\nprint(A ^ B)\nprint(A)",
"{1, 2, 3, 7, 8, 9, 10, 20, 30}\n{1, 2, 3, 7, 8, 9, 10, 20, 30}\n{1, 2, 3, 4, 5, 6, 7, 8, 9}\n"
],
[
"A = set([1, 2, 3, 4, 5, 6, 7, 8, 9])\nD = A.copy()\nprint(D)\nprint()\n\nprint(A == D) #자료값 비교\nprint(A is D) #객체 동등성 비교",
"{1, 2, 3, 4, 5, 6, 7, 8, 9}\n\nTrue\nFalse\n"
]
],
[
[
"- set은 시퀀스 자료형이 아니므로 인덱싱, 슬라이싱, 정렬 등을 지원하지 않는다. ",
"_____no_output_____"
]
],
[
[
"A = set([1, 2, 3, 4, 5, 6, 7, 8, 9])\nprint(A[0])",
"_____no_output_____"
],
[
"print(A[1:4])",
"_____no_output_____"
],
[
"print(A.sort())",
"_____no_output_____"
]
],
[
[
"- 집합을 리스트나 튜플로 변경가능\n - 집합에 인덱싱, 슬라이싱, 정렬 등을 적용하기 위해서는 리스트나 튜플로 변경한다.",
"_____no_output_____"
]
],
[
[
"print(list(A))\nprint(tuple(A))",
"[1, 2, 3, 4, 5, 6, 7, 8, 9]\n(1, 2, 3, 4, 5, 6, 7, 8, 9)\n"
]
],
[
[
"- 하지만 집합에 for ~ in 연산은 적용 가능하다.",
"_____no_output_____"
]
],
[
[
"A = set([1, 2, 3, 4, 5, 6, 7, 8, 9])\nfor ele in A:\n print(ele,end=\" \")",
"1 2 3 4 5 6 7 8 9 "
]
],
[
[
"- set은 변경 가능(Mutable)한 자료 구조 객체 \n- 다음 메소드들은 set을 변경하는 집합 자료 구조 메소드들임\n\n| set 연산 | 동일 연산자 | 내용 |\n|---------------------------|-------------|-----------------------------|\n| s.update(t) | s |= t | s와 t의 합집합을 s에 저장 |\n| s.intersection_update(t) | s &= t | s와 t의 교집합을 s에 저장 |\n| s.difference_update(t) | s -= t | s와 t의 차집합을 s에 저장 |\n| s.symmetric_difference_update(t)| s ^= t | s와 t의 배타집합을 s에 저장 |\n| s.add(x) | | 원소 x를 집합 s에 추가 |\n| s.remove(x) | | 원소 x를 집합 s에서 제거, 원소 x가 집합 s에 없으면 예외 발생 |\n| s.discard(x) | | 원소 x를 집합 s에서 제거 |\n| s.pop() | | 임의의 원소를 집합 s에서 제거, 집합 s가 공집합이면 예외 발생 |\n| s.clear() | | 집합 s의 모든 원소 제거 |",
"_____no_output_____"
]
],
[
[
"A = set([1, 2, 3, 4])\nB = set([3, 4, 5, 6])\n\nA.update(B) # A에 B 집합의 원소를 추가 시킴\nprint(A)",
"{1, 2, 3, 4, 5, 6}\n"
],
[
"A.intersection_update([4,5,6,7,8]) # &=\nprint(A)",
"{4, 5, 6}\n"
],
[
"A.difference_update([6,7,8]) # -=\nprint(A)",
"{4, 5}\n"
],
[
"A.symmetric_difference_update([5,6,7]) # ^=\nprint(A)",
"{4, 6, 7}\n"
],
[
"A.add(8) # 원소 추가\nprint(A)",
"{4, 6, 7, 8}\n"
],
[
"A.remove(8) # 원소 제거\nprint(A)",
"{4, 6, 7}\n"
],
[
"A.remove(10) # 없는 원소를 제거하면 KeyError 발생",
"_____no_output_____"
],
[
"A.discard(10) # remove와 같으나 예외가 발생하지 않음\nA.discard(6) # 원소 6제거\nprint(A)",
"{4, 7}\n"
],
[
"A.pop() # 임의의 원소 하나 꺼내기\nprint(A)",
"{7}\n"
],
[
"A = set([1,2,3,4])\nA.clear() # 모든 원소 없애기\nprint(A)",
"set()\n"
]
],
[
[
"<p style='text-align: right;'>참고 문헌: 파이썬(열혈강의)(개정판 VER.2), 이강성, FreeLec, 2005년 8월 29일</p>",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d0f1c8c860702dbb565d730ad99fbd2cc045aa37
| 4,896 |
ipynb
|
Jupyter Notebook
|
Kinetics/plot_kinetics.ipynb
|
hwpang/misc_scripts
|
73c1af452d55a3a602f382a17802697a14d69379
|
[
"MIT"
] | null | null | null |
Kinetics/plot_kinetics.ipynb
|
hwpang/misc_scripts
|
73c1af452d55a3a602f382a17802697a14d69379
|
[
"MIT"
] | null | null | null |
Kinetics/plot_kinetics.ipynb
|
hwpang/misc_scripts
|
73c1af452d55a3a602f382a17802697a14d69379
|
[
"MIT"
] | 2 |
2019-09-19T19:25:12.000Z
|
2021-01-13T20:20:17.000Z
| 29.317365 | 123 | 0.47406 |
[
[
[
"%matplotlib inline\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfrom rmgpy.kinetics import *",
"_____no_output_____"
],
[
"# Set global plot styles\nplt.style.use('seaborn-paper')\nplt.rcParams['axes.labelsize'] = 16\nplt.rcParams['xtick.labelsize'] = 12\nplt.rcParams['ytick.labelsize'] = 12",
"_____no_output_____"
],
[
"# Set temperature range and pressure\npressure = 1e5 # Pa\ntemperature = np.linspace(298, 2000, 50)",
"_____no_output_____"
],
[
"def plot_kinetics(kinetics, kunits, labels=None, styles=None, colors=None, filename=None):\n # Set colormap here if desired\n colormap = mpl.cm.Set1\n if colors is None:\n colors = range(len(kinetics))\n if styles is None:\n styles = ['-'] * len(kinetics)\n\n fig = plt.figure()\n\n for i, rate in enumerate(kinetics):\n # Evaluate kinetics\n k = []\n for t in temperature:\n # Rates are returned in SI units by default\n # This hardcodes a conversion to cm\n if kunits == 'cm^3/(mol*s)':\n k.append(1e6 * rate.getRateCoefficient(t, pressure))\n else:\n k.append(rate.getRateCoefficient(t, pressure))\n\n x = 1000 / temperature\n\n plt.semilogy(x, k, styles[i], c=colormap(colors[i]))\n\n plt.xlabel('1000/T (K)')\n plt.ylabel('k [{0}]'.format(kunits))\n if labels:\n plt.legend(labels, fontsize=12, loc=8, bbox_to_anchor=(0.5, 1.02))\n \n if filename is not None:\n plt.savefig(filename, bbox_inches=\"tight\", dpi=300)",
"_____no_output_____"
],
[
"kunits = 'cm^3/(mol*s)'\n\n# List of RMG kinetics objects\n# Entries from RMG-database can be copied as is\n# Can be any RMG kinetics type, not just Arrhenius\nkinetics = [\n Arrhenius(\n A = (261.959, 'cm^3/(mol*s)'),\n n = 2.67861,\n Ea = (148.685, 'kJ/mol'),\n T0 = (1, 'K'),\n Tmin = (303.03, 'K'),\n Tmax = (2500, 'K'),\n comment = 'Fitted to 59 data points; dA = *|/ 1.00756, dn = +|- 0.000987877, dEa = +|- 0.00543432 kJ/mol',\n ),\n Arrhenius(\n A = (286.364, 'cm^3/(mol*s)'),\n n = 2.61958,\n Ea = (116.666, 'kJ/mol'),\n T0 = (1, 'K'),\n Tmin = (303.03, 'K'),\n Tmax = (2500, 'K'),\n comment = 'Fitted to 59 data points; dA = *|/ 1.01712, dn = +|- 0.00222816, dEa = +|- 0.0122571 kJ/mol',\n ),\n Arrhenius(\n A = (232.129, 'cm^3/(mol*s)'),\n n = 2.57899,\n Ea = (86.4148, 'kJ/mol'),\n T0 = (1, 'K'),\n Tmin = (303.03, 'K'),\n Tmax = (2500, 'K'),\n comment = 'Fitted to 59 data points; dA = *|/ 1.02472, dn = +|- 0.00320486, dEa = +|- 0.0176299 kJ/mol',\n ),\n]\n\n# Labels corresponding to each rate, can be empty list for no legend\nlabels = [\n 'Rate A',\n 'Rate B',\n 'Rate C',\n]\n# Matplotlib style descriptors corresponding to each rate\nstyles = ['-', '--', '-.']\n# Colormap indices corresponding to each rate\ncolors = [0, 0, 1]\n\nplot_kinetics(kinetics, kunits, labels=labels, styles=styles, colors=colors)",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code"
]
] |
d0f1d0011cd2292fdfb47eef82f6776bd53e6d09
| 62,670 |
ipynb
|
Jupyter Notebook
|
silva_et_al_2005_FEBS_J/parameter_estimation.ipynb
|
aeferreira/papers_repr_glyoxalases
|
c4fda95f98f8843b8f8bcca21cd7ffaf9a45d876
|
[
"MIT"
] | null | null | null |
silva_et_al_2005_FEBS_J/parameter_estimation.ipynb
|
aeferreira/papers_repr_glyoxalases
|
c4fda95f98f8843b8f8bcca21cd7ffaf9a45d876
|
[
"MIT"
] | null | null | null |
silva_et_al_2005_FEBS_J/parameter_estimation.ipynb
|
aeferreira/papers_repr_glyoxalases
|
c4fda95f98f8843b8f8bcca21cd7ffaf9a45d876
|
[
"MIT"
] | null | null | null | 252.701613 | 34,336 | 0.920041 |
[
[
[
"## Parameter estimation in the glyoxalase system of _Leishmania infantum_.",
"_____no_output_____"
],
[
"### Part of the publication\n\nSousa Silva, M. , Ferreira, A.E.N., Tomás, A.M., Cordeiro, C., Ponces Freire, A. (2005) Quantitative assessment of the glyoxalase pathway in Leishmania infantum as a therapeutic target by modelling and computer simulation. *FEBS Journal* **272(10)**: 2388-2398.\n\n[doi:10.1111/j.1742-4658.2005.04632.x](https://febs.onlinelibrary.wiley.com/doi/abs/10.1111/j.1742-4658.2005.04632.x)",
"_____no_output_____"
],
[
"The main objective of this module is to study the **estimation of parameters** of kinetic models of biochemical systems from experimental **time-course data**.\n\nWe will see in a moment what exactly a time course is.\n\nThis module uses the python library *S-timator*.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline",
"_____no_output_____"
],
[
"import stimator as st",
"_____no_output_____"
]
],
[
[
"### Glyoxalase pathway in *L. infantum*",
"_____no_output_____"
],
[
"This example uses **real data for a real biochemical pathway**.\n\nThe main difference here is that we can actually use **two** time courses of experimental measures",
"_____no_output_____"
]
],
[
[
"glos = \"\"\"\n# Example file for S-timator\ntitle Glyoxalase system in L. Infantum\nvariables SDLTSH HTA # variables (the order matches the timecourse files)\n\n#reactions (with stoichiometry and rate)\nglx1 : HTA -> SDLTSH, rate = V1*HTA/(Km1 + HTA)\nglx2 : SDLTSH ->, V2*SDLTSH/(Km2 + SDLTSH)\n\nfind V1 in [0.00001, 0.0001]\nfind Km1 in [0.01, 1]\nfind V2 in [0.00001, 0.0001]\nfind Km2 in [0.01, 1]\n\ninit : (SDLTSH = 7.69231E-05, HTA = 0.1357)\n\ntimecourse TSH2a.txt\ntimecourse TSH2b.txt\n\ngenerations = 200 # maximum generations for GA\npopsize = 80 # population size in GA\"\"\"\n\nmglos = st.read_model(glos)",
"_____no_output_____"
]
],
[
[
"### Data\n\nThe data are two time courses with spectrophotometric measurements of SDL-TSH.\n",
"_____no_output_____"
]
],
[
[
"st.readTCs(['TSH2a.txt', 'TSH2b.txt']).plot(fig_size=(12,5))",
"_____no_output_____"
]
],
[
[
"### Parameter estimation\n",
"_____no_output_____"
]
],
[
[
"best = mglos.estimate()\nprint (best)\nbest.plot(fig_size=(12,5))",
"-- reading time courses -------------------------------\nfile C:\\Users\\tonho\\Desktop\\other_github_repos\\papers_repr_glyoxalases\\silva_et_al_2005_FEBS_J\\TSH2a.txt:\n244 time points, 2 variables\nfile C:\\Users\\tonho\\Desktop\\other_github_repos\\papers_repr_glyoxalases\\silva_et_al_2005_FEBS_J\\TSH2b.txt:\n347 time points, 2 variables\n\nSolving Glyoxalase system in L. Infantum...\n0 : 0.000558\n1 : 0.000558\n2 : 0.000558\n3 : 0.000361\n4 : 0.000361\n5 : 0.000361\n6 : 0.000361\n7 : 0.000361\n8 : 0.000361\n9 : 0.000361\n10 : 0.000361\n11 : 0.000361\n12 : 0.000361\n13 : 0.000361\n14 : 0.000361\n15 : 0.000361\n16 : 0.000361\n17 : 0.000361\n18 : 0.000361\n19 : 0.000361\n20 : 0.000361\n21 : 0.000361\n22 : 0.000361\n23 : 0.000361\nrefining last solution ...\n\nDone!\nToo many generations with no improvement in 24 generations.\nbest score = 0.000011\nbest solution: [2.57590880e-05 2.52521085e-01 2.23384396e-05 9.80748343e-02]\nOptimization took 3.140 s (00m 03.140s)\n\n--- PARAMETERS -----------------------------\nV1\t 2.57591e-05 +- 2.76721e-07\nKm1\t 0.252521 +- 0.00707563\nV2\t 2.23384e-05 +- 3.10728e-06\nKm2\t 0.0980748 +- 0.019734\n\n--- OPTIMIZATION -----------------------------\nFinal Score\t1.06654e-05\ngenerations\t24\nmax generations\t200\npopulation size\t80\nExit by\tToo many generations with no improvement\n\n\n--- TIME COURSES -----------------------------\nName\t\tPoints\t\tScore\nTSH2a.txt\t244\t4.17859e-06\nTSH2b.txt\t347\t6.48686e-06\n\n\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0f1e7c179eb097e1048c47d2798a3f9d309bd9b
| 45,729 |
ipynb
|
Jupyter Notebook
|
Riksdagens dokument SFS.ipynb
|
salgo60/open-data-examples
|
05a16a92c53117ff23a330a3fa5914a33b19ff6a
|
[
"MIT"
] | 5 |
2019-05-30T13:10:32.000Z
|
2021-06-30T06:04:29.000Z
|
Riksdagens dokument SFS.ipynb
|
salgo60/open-data-examples
|
05a16a92c53117ff23a330a3fa5914a33b19ff6a
|
[
"MIT"
] | null | null | null |
Riksdagens dokument SFS.ipynb
|
salgo60/open-data-examples
|
05a16a92c53117ff23a330a3fa5914a33b19ff6a
|
[
"MIT"
] | null | null | null | 34.722096 | 628 | 0.439437 |
[
[
[
"## Test Riksdagen SFS dokument \n\n* Denna [Jupyter Notebook](https://github.com/salgo60/open-data-examples/blob/master/Riksdagens%20dokument%20SFS.ipynb) \n * [KU anmälningar](https://github.com/salgo60/open-data-examples/blob/master/Riksdagens%20dokument%20KU-anm%C3%A4lningar.ipynb) \n * [Motioner](https://github.com/salgo60/open-data-examples/blob/master/Riksdagens%20dokument%20Motioner.ipynb)\n * [Ledamöter](https://github.com/salgo60/open-data-examples/blob/master/Riksdagens%20ledam%C3%B6ter.ipynb)\n * [Dokumenttyper](https://github.com/salgo60/open-data-examples/blob/master/Riksdagens%20dokumenttyper.ipynb)\n* [Skapa sökfråga](http://data.riksdagen.se/dokumentlista/) \n\n* 13980 hämtade verkar som diff med [Dokument & lagar (10 504 träffar)](https://www.riksdagen.se/sv/dokument-lagar/?doktyp=sfs) \n\n### Test SFS nr 2020-577\n* [Fulltext](https://www.riksdagen.se/sv/dokument-lagar/dokument/svensk-forfattningssamling/forordning-2020577-om-statligt-stod-for_sfs-2020-577) [text](http://data.riksdagen.se/dokument/sfs-2020-577.text) / [html](http://data.riksdagen.se/dokument/sfs-2020-577.html) / [json](http://data.riksdagen.se/dokument/sfs-2020-577.json) \n",
"_____no_output_____"
]
],
[
[
"from datetime import datetime\nnow = datetime.now()\nprint(\"Last run: \", datetime.now())",
"Last run: 2020-10-04 15:00:07.695637\n"
],
[
"import urllib3, json\nimport pandas as pd \nfrom tqdm.notebook import trange \nhttp = urllib3.PoolManager() \npd.set_option(\"display.max.columns\", None) \nurlbase =\"http://data.riksdagen.se/dokumentlista/?sok=&doktyp=SFS&utformat=json&start=\"\n\ndftot = pd.DataFrame()\nfor i in trange(1,700): # looks we today have 10504 SFS --> 10503/20\n url = urlbase + str(i)\n r = http.request('GET', url)\n data = json.loads(r.data)\n r = http.request('GET', url)\n dftot = dftot.append(pd.DataFrame(data[\"dokumentlista\"][\"dokument\"]),sort=False)\ndftot.head()\n",
"_____no_output_____"
],
[
"print(\"Min och Max publicerad: \", dftot.publicerad.min(), dftot.publicerad.max())",
"Min och Max publicerad: 2015-01-26 18:18:04 2020-10-04 04:37:57\n"
],
[
"print(\"Min och Max datum: \", dftot.datum.min(), dftot.datum.max())",
"Min och Max datum: 1942-12-04 2020-10-01\n"
],
[
"print(\"Min och Max systemdatum: \", dftot.systemdatum.min(), dftot.systemdatum.max())",
"Min och Max systemdatum: 2015-01-26 18:18:04 2020-10-04 04:37:57\n"
],
[
"dftot.info()",
"<class 'pandas.core.frame.DataFrame'>\nInt64Index: 13980 entries, 0 to 19\nData columns (total 59 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 traff 13980 non-null object\n 1 domain 13980 non-null object\n 2 database 13980 non-null object\n 3 datum 13980 non-null object\n 4 id 13980 non-null object\n 5 rdrest 0 non-null object\n 6 slutdatum 0 non-null object\n 7 rddata 0 non-null object\n 8 plats 0 non-null object\n 9 klockslag 0 non-null object\n 10 publicerad 13980 non-null object\n 11 systemdatum 13980 non-null object\n 12 undertitel 6138 non-null object\n 13 kalla 204 non-null object\n 14 kall_id 204 non-null object\n 15 dok_id 13980 non-null object\n 16 dokumentformat 0 non-null object\n 17 dokument_url_text 13980 non-null object\n 18 dokument_url_html 13980 non-null object\n 19 inlamnad 0 non-null object\n 20 motionstid 0 non-null object\n 21 tilldelat 0 non-null object\n 22 lang 0 non-null object\n 23 url 0 non-null object\n 24 relurl 0 non-null object\n 25 titel 13980 non-null object\n 26 rm 13980 non-null object\n 27 organ 13964 non-null object\n 28 relaterat_id 0 non-null object\n 29 doktyp 13980 non-null object\n 30 typ 13980 non-null object\n 31 subtyp 13980 non-null object\n 32 beteckning 13776 non-null object\n 33 tempbeteckning 204 non-null object\n 34 nummer 13980 non-null object\n 35 status 204 non-null object\n 36 score 13980 non-null object\n 37 sokdata 13980 non-null object\n 38 summary 204 non-null object\n 39 notisrubrik 13980 non-null object\n 40 notis 204 non-null object\n 41 dokintressent 0 non-null object\n 42 filbilaga 204 non-null object\n 43 avdelning 13980 non-null object\n 44 struktur 0 non-null object\n 45 audio 0 non-null object\n 46 video 0 non-null object\n 47 debattgrupp 0 non-null object\n 48 debattdag 0 non-null object\n 49 beslutsdag 0 non-null object\n 50 beredningsdag 0 non-null object\n 51 justeringsdag 0 non-null object\n 52 beslutad 0 non-null object\n 53 debattsekunder 0 non-null object\n 54 ardometyp 0 non-null object\n 55 reservationer 0 non-null object\n 56 debatt 0 non-null object\n 57 debattnamn 13980 non-null object\n 58 dokumentnamn 13980 non-null object\ndtypes: object(59)\nmemory usage: 6.4+ MB\n"
],
[
"dftot[['nummer','titel','publicerad','beslutad','datum','summary']] \n",
"_____no_output_____"
],
[
"dftot.publicerad.unique()",
"_____no_output_____"
],
[
"dftot.publicerad.value_counts()",
"_____no_output_____"
],
[
"dftot.publicerad.value_counts().sort_index(ascending=False)",
"_____no_output_____"
],
[
"ftot.publicerad.value_counts().sort_index(ascending=False)[:50]",
"_____no_output_____"
],
[
"%matplotlib inline \nimport matplotlib.pyplot as plt \nplot = dftot.publicerad.value_counts()[1:30].plot.bar(y='counts', figsize=(25, 5)) \nplt.show()",
"_____no_output_____"
],
[
"%matplotlib inline \nimport matplotlib.pyplot as plt \nplot = dftot.datum.value_counts()[1:30].plot.bar(y='counts', figsize=(25, 5)) \nplt.show()",
"_____no_output_____"
],
[
"plotPublishedSFSperMonth = dftot['publicerad'].groupby(dftot.publicerad.dt.to_period(\"M\")).agg('count')\nplotPublishedSFSperMonth.plot( kind = 'bar') \nplt.title(\"SFS per month\")\nplt.show()",
"_____no_output_____"
],
[
"plotDatumSFSperMonth = dftot['datum'].groupby(dftot.datum.dt.to_period(\"M\")).agg('count')\nplotDatumSFSperMonth.plot( kind = 'bar') \nplt.title(\"SFS Datum per month\")\nplt.show()",
"_____no_output_____"
],
[
"plotDatumSFSperMonth = dftot['datum'].groupby(dftot.datum.dt.to_period(\"M\")).agg('count')[10:]\nplotDatumSFSperMonth.plot( kind = 'bar') \nplt.title(\"SFS Datum per month\")\nplt.figsize=(5, 35) \n\nplt.show()",
"_____no_output_____"
],
[
"plotDatumSFSperMonth",
"_____no_output_____"
],
[
"#Last year \nPublishedSFS2016perMonth = dftot[dftot[\"publicerad\"].dt.year > 2016 ]\nplotPublishedSFS2016perMonth = PublishedSFS2016perMonth['publicerad'].groupby(PublishedSFS2016perMonth.publicerad.dt.to_period(\"M\")).agg('count')\nplotPublishedSFS2016perMonth.plot( kind = 'bar',) \nplt.title(\"SFS > 2016 per month\")\nplt.figsize=(5, 35) \nfigure(figsize=(1,1)) \nplt.show()\n",
"_____no_output_____"
],
[
"plotDatumSFSperMonth[100:]",
"_____no_output_____"
],
[
" dftot.debattnamn.value_counts()",
"_____no_output_____"
],
[
"dftot.info()",
"_____no_output_____"
],
[
"organCount = dftot.organ.value_counts() \norganCount",
"_____no_output_____"
],
[
"dftot.organ.value_counts().plot.pie(y='counts', figsize=(15, 15)) \nplt.show()",
"_____no_output_____"
],
[
"dftot.organ.value_counts()[1:50]",
"_____no_output_____"
],
[
"dftot.organ.value_counts()[50:100]",
"_____no_output_____"
],
[
"dftot.organ.value_counts()[100:150]",
"_____no_output_____"
],
[
"dftot.domain.value_counts()",
"_____no_output_____"
],
[
"dftot.rm.value_counts() \nplotRM = dftot.rm.value_counts().plot.bar(y='counts', figsize=(25, 5)) \nplt.show()",
"_____no_output_____"
],
[
"dftot['datum'] =pd.to_datetime(dftot.datum) \ndftot['publicerad'] =pd.to_datetime(dftot.publicerad) \ndftot['systemdatum'] =pd.to_datetime(dftot.systemdatum, format='%Y-%m-%d')\n# 2016-02-11 15:26:06",
"_____no_output_____"
],
[
"dftot.info()",
"_____no_output_____"
],
[
"dftot = dftot.sort_values('datum') \ndftot.head()",
"_____no_output_____"
],
[
"dftot.tail()",
"_____no_output_____"
],
[
"dftot.subtyp.value_counts()",
"_____no_output_____"
]
],
[
[
"Gissning \n* regl-riksg verkar vara Reglemente för Riksgäldskontoret \n* regl-riksb är nog Riksbanken",
"_____no_output_____"
]
],
[
[
"dftot.debattnamn.value_counts()",
"_____no_output_____"
],
[
"ftot = dftot.sort_values(by='id', ascending=False) ",
"_____no_output_____"
],
[
"dftot.info()",
"_____no_output_____"
],
[
"dftot.head(1000) ",
"_____no_output_____"
],
[
"print(\"End run: \", datetime.now())",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
d0f1f330f2f1b278272338d1704855e827e98202
| 2,560 |
ipynb
|
Jupyter Notebook
|
docs/multi.ipynb
|
zkbt/chromatic
|
e7d66c5060d6ec02db7d5d512bd066accf8229b6
|
[
"MIT"
] | null | null | null |
docs/multi.ipynb
|
zkbt/chromatic
|
e7d66c5060d6ec02db7d5d512bd066accf8229b6
|
[
"MIT"
] | 39 |
2021-06-22T01:41:52.000Z
|
2022-03-24T23:22:40.000Z
|
docs/multi.ipynb
|
zkbt/chromatic
|
e7d66c5060d6ec02db7d5d512bd066accf8229b6
|
[
"MIT"
] | 1 |
2022-03-11T22:53:46.000Z
|
2022-03-11T22:53:46.000Z
| 20.645161 | 241 | 0.542969 |
[
[
[
"# Comparing 🌈 to 🌈\n\nOften, we'll want to directly compare two different Rainbows. A wrapper called `MultiRainbow` tries to make doing so a little simpler, by providing an interface to apply many of the familiar Rainbow methods to multiple objects at once.",
"_____no_output_____"
]
],
[
[
"from chromatic import *",
"_____no_output_____"
],
[
"a = SimulatedRainbow(signal_to_noise=1000).inject_transit()\nb = SimulatedRainbow(signal_to_noise=np.sqrt(100*1000)).inject_transit()\nc = SimulatedRainbow(signal_to_noise=100).inject_transit()",
"_____no_output_____"
],
[
"m = MultiRainbow([a, b, c], names=['noisy', 'noisier', 'noisiest'])",
"_____no_output_____"
],
[
"m.bin(R=4).plot(spacing=0.01)",
"_____no_output_____"
],
[
"m.imshow(cmap='gray')",
"_____no_output_____"
],
[
"m.animate_lightcurves()",
"_____no_output_____"
]
],
[
[
"<img src='multi-animated-lightcurves.gif' align='left'>",
"_____no_output_____"
]
],
[
[
"m.animate_spectra()",
"_____no_output_____"
]
],
[
[
"<img src='multi-animated-spectra.gif' align='spectra'>",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0f1f7701991cfddb73021721e0753afbd408f36
| 2,417 |
ipynb
|
Jupyter Notebook
|
datamining/notebooks/importKittiDatasetUI.ipynb
|
zillyf/datapipeline
|
9dc5384234218df7ad77d2f26ffa73d83c9272f2
|
[
"MIT"
] | null | null | null |
datamining/notebooks/importKittiDatasetUI.ipynb
|
zillyf/datapipeline
|
9dc5384234218df7ad77d2f26ffa73d83c9272f2
|
[
"MIT"
] | null | null | null |
datamining/notebooks/importKittiDatasetUI.ipynb
|
zillyf/datapipeline
|
9dc5384234218df7ad77d2f26ffa73d83c9272f2
|
[
"MIT"
] | null | null | null | 26.56044 | 76 | 0.534133 |
[
[
[
"from json import dumps\nfrom kafka import KafkaProducer\nimport pandas as pd\nimport ipywidgets as widgets\n\ndf_temp=pd.read_json('kitti_datasets.json', orient='index')\ndf=df_temp.rename(columns={0: 'DatasetURL'})\n\ntopic = \"send_kitti_dataset_request\"\n\nproducer = KafkaProducer(\n bootstrap_servers=[\"kafka:9093\"],\n value_serializer=lambda x: dumps(x).encode(\"utf-8\"),\n)\n\n# Sample Usage:\n# newDatasetID=0;\n# newEntry = { \"KittiDatasetURL\": df.DatasetURL[newDatasetID] }\n# producer.send(topic, value=newEntry)\n",
"_____no_output_____"
],
[
"dropdownOptions=list(tuple(zip(df.index ,df.values[:,0])))\ndd=widgets.Dropdown(\n options=dropdownOptions,\n description='Kitti Dataset:',\n disabled=False,\n)\ndef importDataset(b):\n # choose among the datas from list:\n newEntry = {\n \"KittiDatasetURL\": dd.value,\n }\n producer.send(topic, value=newEntry)\n print(\"Import Dataset \"+dd.label)\n \nbuttonImport = widgets.Button(\n description='Import',\n disabled=False,\n button_style='', # 'success', 'info', 'warning', 'danger' or ''\n tooltip='Import Dataset',\n icon='check', # (FontAwesome names without the `fa-` prefix)\n)\nbuttonImport.on_click(importDataset)\n",
"_____no_output_____"
],
[
"widgets.Box([dd, buttonImport])",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code"
]
] |
d0f206c31e3294da89b327ac27138940b08511c8
| 4,023 |
ipynb
|
Jupyter Notebook
|
code/algorithms/course_udemy_1/Stacks, Queues and Deques/Interview/Questions - PRACTICE/Implement a Stack .ipynb
|
vicb1/miscellaneous
|
2c9762579abf75ef6cba75d1d1536a693d69e82a
|
[
"MIT"
] | null | null | null |
code/algorithms/course_udemy_1/Stacks, Queues and Deques/Interview/Questions - PRACTICE/Implement a Stack .ipynb
|
vicb1/miscellaneous
|
2c9762579abf75ef6cba75d1d1536a693d69e82a
|
[
"MIT"
] | null | null | null |
code/algorithms/course_udemy_1/Stacks, Queues and Deques/Interview/Questions - PRACTICE/Implement a Stack .ipynb
|
vicb1/miscellaneous
|
2c9762579abf75ef6cba75d1d1536a693d69e82a
|
[
"MIT"
] | null | null | null | 17.567686 | 126 | 0.456873 |
[
[
[
"# Implement a Stack \n\nA very common interview question is to begin by just implementing a Stack! Try your best to implement your own stack!\n\nIt should have the methods:\n\n* Check if its empty\n* Push a new item\n* Pop an item\n* Peek at the top item\n* Return the size",
"_____no_output_____"
]
],
[
[
"class Stack(object):\n # Fill out the Stack Methods here\n def __init__(self):\n self.items = []\n def isEmpty(self):\n return len(self.items) == 0\n def push(self, e):\n self.items.append(e)\n def pop(self):\n return self.items.pop()\n def size(self):\n return len(self.items)\n def peek(self):\n return self.items[self.size()-1] # reuse the size funtion\n pass",
"_____no_output_____"
],
[
"stack = Stack()",
"_____no_output_____"
],
[
"stack.size()",
"_____no_output_____"
],
[
"stack.push(1)",
"_____no_output_____"
],
[
"stack.push(2)",
"_____no_output_____"
],
[
"stack.push('Three')",
"_____no_output_____"
],
[
"stack.pop()",
"_____no_output_____"
],
[
"stack.pop()",
"_____no_output_____"
],
[
"stack.size()",
"_____no_output_____"
],
[
"stack.isEmpty()",
"_____no_output_____"
],
[
"stack.peek()",
"_____no_output_____"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |